Researcher’s Bleg: Looking for a technical solution to enable project & document management, collaboration and revising

UCSF researcher Ralph Gonzales writes to get our advice regarding his Wiki-Whiteboard (or Wiki-Noteboard). Here is his description of what he is looking for:

Version 1.0. Lives on my iPad. A handwriting recognition program that allows one to organize documents into different notebooks (i.e., projects), and to attach different types of documents (Word, PowerPoint, PDF, scanned documents, etc.) to different locations on different pages and notebooks. Think about the “Insert Comment” function in Word… for this we would have an “Insert Document” function. The mock-up/layout could actually resemble the Word document, except instead it’s my handwritten notes with documents inserted. It would be nice to be able to insert documents directly from different sources such as email folders, as well as the hard drive.

Version 2.0. Lives on a server, with all the same functions as above. Selected individuals could also access specific Noteboards and comment on the notes or attached documents, using something similar to Word’s “Track Changes” and “Insert Comment” functions. You would have different colors for different individuals’ comments.

Great question. Team, can we offer some ideas/recommendations?

How Can We Make Biomedical Studies More Inclusive?

Even in 2011, persons with disabilities (more than 47 million Americans) are still “profoundly underrepresented in mainstream health research”. In their recent article, the researchers Ann Williams and Shirley Moore propose a “Universal Design of Research” (UDR), which allows “routine inclusion of persons with disabilities in studies, without the need for adaptation or specialized design.”

They offer a few guidelines and ideas to support researchers in designing materials in accessible formats. Some good food for thought as the new UCSF Participant Recruitment Service (PRS) takes shape. Here is what they propose:

… provide multisensory, flexible options for recruitment, research instruments (such as questionnaires), measurements, and responses from participants, with reasonable accommodations that invite and facilitate participation by persons with disabilities; and when you do not know how to include someone with a disability, consult someone who does (the potential research participant, another person with that disability who is knowledgeable about the range of methods people use for living fully with it, or a professional who works with persons who have that disability). 

Practical guidelines for implementing the Universal Design of Research include:

… (i) plan multiple options for people to learn about, respond to, and arrive at opportunities to participate in research; (ii) provide multiple means to communicate the information in research instruments and instructions for participants; and (iii) provide multiple means of responding to research instruments and self-management interventions.

I wonder what our PRS team thinks about these ideas. And, do we know of other successful approaches, web-based technologies or great examples we could share?  Ann Williams and Shirley Moore are looking for ideas to develop comprehensive guidelines.

Compelling Video Describes New Visualization Tool “Many Eyes”

It can be challenging to create animated video that conveys a complex message. Here is a great example that shows it’s doable – mind you, without a single spoken word.

A 60 second social story about developing and refining ideas, gaining insight and sharing through community; all based on the premise that many sets of eyes are better than one!

Take a look and let me know what you think. By the way, the visualization tool “Many Eyes”, developed by IBM, is worth a look as well.

“No Health Without Research”

For the first time in its history, the World Health Report 2012 will focus on the theme of research for better health. To “complement and substantiate the key messages” in the report, the WHO and PLoS launched a new initiative to invite the submission of research papers.

Decisions on healthcare are still made without a solid grounding in research evidence, and an impetus is required for this state of affairs to change. Aimed at ministers of health, the report will provide new ideas, innovative thinking, and pragmatic advice on how to strengthen health research systems.

Let your colleagues and researcher friends know…

Image Credit: Kees Straver at flickr.com

Mining ClinicalTrials.gov Data

The ClinicalTrials.gov results database now offers summary trial data that were not previously available publicly. A new article, published in The New England Journal of Medicine, summarizes the updates, key issues, and limitations of the database. However, according to the authors, ClinicalTrials.gov is continually adding features and linkages to facilitate the use and repackaging of the data by different audiences. The article provides some good food for thought as we’re looking for additional public data sources to expand our research networking tool UCSF Profiles.

Turning Science Communication into a Dialogue

The Stanford School of Medicine managed to promote science stories broadly without issuing any press releases. At the national CTSA Communications Meeting, John Stafford, New Media Strategist at Stanford, shared some insights into how this worked.

Depending on the science story, they posted what was newsworthy on their blog Scope, Twitter, Facebook, and Flickr, and – very important – they successfully leveraged informal relationships with their “blogger friends”. As a result, some of their stories made it into leading science magazines and newspapers.

But the story doesn’t end here: John also demoed a few online monitoring tools to measure media reach and brand leadership. These tools provide dashboards for monitoring how many and what types of media outlets pick up science stories, and even what attitudes readers have towards those stories. Here is a list of tools that might become useful to some of our organizational initiatives:

  • Radian6: Provides a platform to listen, measure and engage with customers across the entire social web.
  • ScoutLabs: A self-serve, web-based tool that includes natural language processing techniques for sentiment and tone scoring (a toy example of lexicon-based scoring follows this list).
  • Sysomos Heartbeat: Provides constantly updated snapshots of online conversations.
  • General Sentiment: Media Measurement Dashboard, Reporting Service, and Data API.
  • Jive: Social media monitoring, engagement, and measurement.
  • Klout: Helps you identify people you might want to start a conversation with.
  • CoTweet
  • TweetReach
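
To make “sentiment and tone scoring” a bit more concrete: at its simplest, it can be a lexicon lookup, as in this toy Python sketch (real tools like ScoutLabs use far more sophisticated NLP; the word lists below are invented for illustration):

```python
# Toy lexicon-based sentiment scorer; the word lists are invented examples.
POSITIVE = {"great", "breakthrough", "promising", "love", "effective"}
NEGATIVE = {"flawed", "risky", "disappointing", "hype", "useless"}

def sentiment_score(text):
    """Return a score in [-1, 1]: +1 if all matched words are positive."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [w for w in words if w in POSITIVE or w in NEGATIVE]
    if not hits:
        return 0.0  # no opinion words found: neutral
    positives = sum(w in POSITIVE for w in hits)
    return (2 * positives - len(hits)) / len(hits)

print(sentiment_score("A promising breakthrough, but the trial looks risky."))
```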

For those who still seek more, Stanford will be hosting a social media conference “Medicine 2.0” in September this year.

Online video might be effective, whether people watch it or not

A study based on multivariate testing, conducted by Treepodia, seems to show that not only do shoppers who view a product video buy at a higher rate, but, surprisingly, so do those who choose not to watch it. The article suggests that online video serves as a trust factor: users might read it as a sign of the seller’s belief and investment in a product.

The results on the best way to display video are also interesting: adding a simple link to the video from a given page led to a 5%–15% video view rate, while a video player embedded on the same page delivered 10%–35%.
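
If you want to check whether a difference like that is more than noise, a quick chi-square test does the job. The counts below are invented for illustration, not taken from the Treepodia study:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: purchases among visitors who watched the
# product video vs. those who did not (made-up numbers).
watched = [90, 910]   # bought, did not buy
no_video = [60, 940]

chi2, p_value, dof, expected = chi2_contingency([watched, no_video])
print(f"watched: {90 / 1000:.1%} bought; no video: {60 / 1000:.1%} bought")
print(f"p-value: {p_value:.4f}")  # a small p-value suggests a real effect
```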

Neil McBean from RivalSchools, whom we’re working with on our video project, pointed me to the study and to Zappos’ successful use of video demos online.

Currently one of my favorite online videos is Google’s piece on Gmail Priority Inbox. They truly have found a way to turn even a basic feature like this into an enjoyable thing to learn about. Watch it.

Notes from the Science of Team Science Conference at Northwestern University

The event was packed with theories about the drivers and challenges of team science, plus some interesting initiatives and tools. One of the highlights for us was certainly the introduction to UCINET, a social network analysis tool for team science, which might be useful for further impact analysis of ShareCenter and crowdsourcing tools like our Open Forum. John Skvoretz from the University of South Florida walked us through the basic methods of social network analysis for team science. Using such a program, we could get better insight into whether these tools help researchers from different disciplines connect, and whether most users make new connections or simply connect with people they already know offline, which revives the question of whether distance is dead. The program can handle a maximum of 32,767 nodes and includes centrality measures, subgroup identification, role analysis, and more. Now the challenge is to get the data!
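
UCINET itself is a standalone package, but for a feel of what these measures do, here is a minimal sketch in Python with networkx, using a made-up co-authorship network:

```python
import networkx as nx

# Toy co-authorship network; the researcher pairs are hypothetical.
edges = [
    ("Ada", "Ben"), ("Ada", "Cal"), ("Ben", "Cal"),
    ("Cal", "Dee"), ("Dee", "Eli"), ("Eli", "Fay"),
]
G = nx.Graph(edges)

# Centrality: who sits at the crossroads of collaborations?
degree = nx.degree_centrality(G)            # share of direct ties
betweenness = nx.betweenness_centrality(G)  # brokerage between groups

# Subgroup identification: connected components as a crude first cut.
subgroups = list(nx.connected_components(G))

for node in G:
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
print("Subgroups:", subgroups)
```

A new-versus-existing-ties question like the one above would then come down to comparing the edge list before and after a tool’s introduction.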

Bonnie Spring from Northwestern University presented COALESCE, CTSA Online Assistance for Leveraging the Science of Collaborative Effort, which will “create, evaluate, and disseminate new, durable, readily accessible on-line learning resources to enhance essential skills needed to perform transdisciplinary, team-based, basic and clinical translational research”. Four learning modules will be developed over the next two years: “Science of Team Science,” “Team Science Research Process in Basic Science and in Clinical Science,” and “Team Science in Behavioral Medicine.” The Team Science module, for example, will introduce the key concepts of team science by showcasing successful national transdisciplinary NIH research programs and interviews with prominent team science experts.

In the spirit of a web portal for collaboration, we learned about a couple of tools that help researchers manage and evaluate collaborations. 1) Gary Olson from UC Irvine talked about a “collaboration success wizard”, a web-based tool to help researchers assess the prospective success of a collaborative project before it starts. The tool is expected to be available in July this year. 2) Howard Gadlin, who runs – as he puts it – an emergency room for team science at the NIH Center for Cooperative Resolution, gave a fabulous presentation from the other end of the telescope, on the “dark side” of collaboration. He introduced us to the collaborative agreement, a “pre-nuptial agreement” for scientists, which helps collaborators commence their project by anticipating, discussing, and resolving possible areas of disagreement. Using the pre-nup, the parties can jointly define a process for constructively handling disputes should they arise. And 3) the National Cancer Institute will shortly launch a “team science toolkit” intended to provide an online hub for team science-related resources and communication.

William Trochim from Cornell University introduced us to concept mapping, a mixed-methods participatory approach that combines group processes (brainstorming, sorting, group interpretation) with a sequence of multivariate statistical analyses (multidimensional scaling, hierarchical cluster analysis) – maybe something to explore in light of our upcoming survey and research projects. See his paper “Concept Mapping as an Alternative Approach for the Analysis of Open-Ended Survey Responses”.
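
As a rough illustration of the statistical core only (not Trochim’s full participatory process), here is a minimal Python sketch; the dissimilarity matrix is invented and would normally be derived from participants’ card-sorting data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.manifold import MDS

# Hypothetical dissimilarities for five brainstormed statements, e.g.
# 1 - (share of participants who sorted a pair into the same pile).
dissim = np.array([
    [0.0, 0.2, 0.8, 0.9, 0.7],
    [0.2, 0.0, 0.7, 0.8, 0.9],
    [0.8, 0.7, 0.0, 0.3, 0.4],
    [0.9, 0.8, 0.3, 0.0, 0.2],
    [0.7, 0.9, 0.4, 0.2, 0.0],
])

# Multidimensional scaling places the statements on a 2-D "concept map".
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

# Hierarchical cluster analysis (Ward's method) then groups nearby
# statements into candidate concept clusters.
clusters = fcluster(linkage(coords, method="ward"), t=2, criterion="maxclust")
print(coords)
print(clusters)  # e.g. statements 1-2 in one cluster, 3-5 in the other
```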

Katy Boerner from Indiana University spoke about “Cyberinfrastructures for Network Science”. She presented a couple of tools, such as 1) the Network Workbench, a large-scale network analysis, modeling, and visualization toolkit that purportedly supports network science research across scientific boundaries, and 2) the Scholarly Database (SDB), which focuses on supporting large studies of changes in science over time and on communicating findings via knowledge-domain visualizations. The database currently provides access to around 18 million publications, patents, and grants. In the future, Boerner said, she wants to leverage the power of network analysis to better understand what delays and inhibits science. The tools are available at http://sci.slis.indiana.edu/registration/user/.

More resources from the SciTS Conference are available at http://scienceofteamscience.northwestern.edu/team-science-resources


Written by: Rachael Sak, Leslie Yuan and Katja Reuter

Tangential Thoughts: Robot Scientists for a Better Use of Existing Data – And Why Translational Science May Still Need a Slightly Different Approach

Is serendipity necessary for innovation? Or in other words: would an autonomous scientific discovery process that utilizes all data available at the time be incapable of innovation? Some think so. But not researcher Andrew Sparkes and colleagues, who created Adam and Eve, two robot scientists designed to carry out biomedical research. The researchers claim that robot scientists will “make scientific information more accurate, reproducible and reusable”.

Adam and Eve are capable of generating hypotheses about a problem based on information obtained from publicly available databases, designing experiments to test these hypotheses, running the physical experiments, and analyzing and interpreting the resulting data – and they even collaborate. Eve, for example, is a prototype system that demonstrates the automation of closed-loop learning in drug screening and design.
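
Just to make “closed-loop learning” concrete (this is not how Adam and Eve are actually implemented; every name below is a placeholder), a hypothetical cycle might look like this in Python:

```python
import random

class ToyModel:
    """Stands in for hypothesis ranking: keeps a running score per compound."""
    def __init__(self):
        self.scores = {}

    def predict(self, compound):
        return self.scores.get(compound, 0.5)  # prior: unknowns score 0.5

    def update(self, compound, activity):
        self.scores[compound] = activity       # "refine the hypothesis"

def run_assay(compound):
    """Stand-in for the physical experiment a robot scientist would run."""
    return random.random()  # fake measured activity

def closed_loop(candidates, budget, model):
    tested = {}
    for _ in range(budget):
        untested = [c for c in candidates if c not in tested]
        best = max(untested, key=model.predict)  # pick most promising compound
        activity = run_assay(best)               # design & run the experiment
        tested[best] = activity
        model.update(best, activity)             # interpret, update beliefs
    return tested

print(closed_loop([f"compound_{i}" for i in range(10)], budget=3, model=ToyModel()))
```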

So why not stretch this idea a bit? Could such a robot help support the clinical and translational research process? The authors of the recent paper “Translational Medicine – doing it backwards” may disagree. They argue that the general approach to hypothesis-driven research poorly suits the needs of translational biomedical research “unless efforts are spent in identifying clinically relevant hypotheses”. As Steinman pointed out, animal models, for example, can lead to results that are the opposite of what is ultimately seen in human disease. So the authors propose “that hypothesis tested research should follow ‘facts-driven research’ and only when the collection of facts relevant to human disease has been extensive, should hypotheses be constructed to expand beyond what can be directly observed. What is needed is an approach that begins at the Bedside and then goes to the ‘Clinical Bench’.”

I guess once there are public databases filled with “clinical realities” provided by clinically active physicians and non-physicians, robots like Adam and Eve could frame their research questions accordingly and reverse the discovery process, starting with the “human reality”.