Crowdsourcing the Analysis and Impact of Scholarly Tweets

“Twitter is one of the fastest tools to discover newly published scholarly papers”, Martin Fenner wrote in one of his earlier posts. Now Fenner and Euan Adie have finished the first phase of an interesting new experiment: the CrowdoMeter project.

Over the last two months, they used crowdsourcing to analyze the semantic content of almost 500 tweets linking to scholarly papers (953 classifications by 105 users for 467 tweets). Their preliminary results show:

  • Three subject areas predominated: Medicine and Health, Life Sciences, and Social Sciences and Economics.
  • Most tweets (88%) discussed the papers; 10% expressed agreement and 3% disagreement.
  • Most papers were not tweeted by their authors or publishers.
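
Since 953 classifications cover only 467 tweets, many tweets were rated by more than one person, and those redundant judgments have to be reconciled somehow. Here is a minimal majority-vote sketch in Python; the data layout and labels are hypothetical, not CrowdoMeter’s actual schema:

```python
from collections import Counter

# Hypothetical sample of CrowdoMeter-style classifications:
# each entry is (tweet_id, user, label). Several users may
# classify the same tweet, so we aggregate by majority vote.
classifications = [
    ("tweet1", "alice", "discussion"),
    ("tweet1", "bob", "discussion"),
    ("tweet1", "carol", "agreement"),
    ("tweet2", "alice", "agreement"),
    ("tweet2", "dave", "agreement"),
]

def majority_labels(classifications):
    """Collapse redundant crowd judgments into one label per tweet."""
    votes = {}
    for tweet_id, _user, label in classifications:
        votes.setdefault(tweet_id, Counter())[label] += 1
    return {tweet_id: counts.most_common(1)[0][0]
            for tweet_id, counts in votes.items()}

print(majority_labels(classifications))
# {'tweet1': 'discussion', 'tweet2': 'agreement'}
```

Real crowdsourcing pipelines usually go further, for example weighting raters by their agreement history, but simple majority vote is the customary baseline.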

In his recent guest post on Impact of Social Sciences, Fenner argues that “social media, and Twitter in particular, provide almost instant, relevant recommendations as opposed to traditional citations.”

He predicts that a few years from now the ‘personalized journal’ will have replaced the traditional journal as the primary means of discovering new scholarly papers with impact on our work.

What is still missing are better tools that integrate social media with scholarly content, in particular personalized recommendations based on the content you are interested in (your Mendeley or CiteULike library is a good approximation) and the people you follow on Twitter and other social media.
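
As a rough illustration of the kind of integration Fenner has in mind, the sketch below ranks papers by how many of the accounts you follow have tweeted them, skipping papers already in your reference library. Everything here is hypothetical toy data; it does not use Twitter’s or Mendeley’s actual APIs:

```python
from collections import Counter

# Hypothetical inputs: DOIs in your Mendeley/CiteULike library,
# and a mapping from followed Twitter accounts to DOIs they tweeted.
my_library = {"10.1371/journal.pone.0001", "10.1038/nbt.0002"}
tweets_by_followed = {
    "@colleague1": ["10.1371/journal.pone.0003", "10.1038/nbt.0002"],
    "@colleague2": ["10.1371/journal.pone.0003"],
    "@journal_feed": ["10.1126/science.0004"],
}

def recommend(my_library, tweets_by_followed, top_n=5):
    """Rank papers by how many followed accounts tweeted them,
    skipping anything already in the user's library."""
    counts = Counter()
    for _account, dois in tweets_by_followed.items():
        for doi in set(dois):  # count each account at most once per paper
            if doi not in my_library:
                counts[doi] += 1
    return counts.most_common(top_n)

print(recommend(my_library, tweets_by_followed))
# [('10.1371/journal.pone.0003', 2), ('10.1126/science.0004', 1)]
```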

Fenner’s view is also based on Gunther Eysenbach’s 2011 study, which showed that “highly tweeted papers were more likely to become highly cited (but the numbers were too small for any firm conclusions; 12 out of 286 papers were highly tweeted)”.

Fenner and Adie are using altmetric.com to track the scholarly impact of research. Other tools – some of which we have written about – include ReaderMeter, Total Impact, PLoS Article-Level Metrics, and ScienceCard.

Have you heard of the Crowdsortium?

It’s a group of currently more than 75 crowdsourcing industry practitioners who share “best practices, education, data collection and public dialog.” This is interesting for us: as problems become more complex, the Crowdsortium could help define the right crowdsourcing model for future crowdsourcing needs at UCSF and CTSI.

The Crowdsortium recognizes that the crowdsourcing ecosystem comprises five participants: funders, practitioners, customers, the crowd, and researchers. It aims to provide each of these constituents with the knowledge, data, and best practices to get the most out of participating in crowdsourcing.

As the problems crowdsourcing addresses become more complex, so do the challenges the crowdsourcing ecosystem faces. For example:


• What crowdsourcing model should we use?
• How do we handle intellectual property rights?
• How do we logistically manage international participants?
• How do we fairly reward participants for their work?
• What are the benefits of monetary versus non-monetary rewards?
• How can game mechanics influence crowd dynamics?
• Should the crowd be anonymous or individually identified with their real names?

Anyone may participate in the public forums, feeds, and email lists related to the site; members, however, enjoy additional benefits. Membership is currently available to funders, practitioners, customers, and researchers at no cost. More at http://www.crowdsortium.org/

Using Research Networking Effectively in Academia: UCSF-CTSI Team Presents On National AMIA Panel

Three of us from the Virtual Home team at CTSI went to this year’s AMIA (American Medical Informatics Association) meeting in Washington, DC, and presented on a panel with Griffin Weber of Harvard University. The panel was called “Four Steps to Using Research Networking Effectively at Your Institution.”

Griffin spoke on cutting-edge features of research networking tools, such as linked open data and social network analysis.

Eric Meeks of UCSF spoke about using standard APIs, such as OpenSocial, to leverage a community of developers; I spoke about incentivizing usage and understanding your audience; and, to round it out, Brian Turner spoke about using data, tools, and strangers to improve user interfaces.

The panel presentation was a 90-minute breakout session, and we were happy to have a good turnout and an engaged audience. I think the work that UCSF has put into the ‘social engineering’ of the tool has really paid off. Our usage and engagement numbers are on the rise, and, comparatively speaking, Griffin mentioned that our traffic is about five times what Harvard Profiles is currently getting.

In addition, Eric presented a poster at the meeting!

The UCSF presentations will be up on Slideshare, available on the CTSI channel and via our individual UCSF profiles:

http://profiles.ucsf.edu/ProfileDetails.aspx?From=SE&Person=5333232
http://profiles.ucsf.edu/ProfileDetails.aspx?From=SE&Person=4621800

“Using Prizes to Spur Open Innovation”: The National Institutes of Health (NIH) Explores Potential Approaches

This week’s NIH conference “Crowdsourcing: The Art and Science of Open Innovation” could be a hint that the research agency is seriously considering new ways to take advantage of the “processing power of lots of willing brains”.

ScienceInsider published a summary report that states:

NIH Director Francis Collins would soon sign papers that would ensure NIH is compliant with the America COMPETES Act, which gives federal agencies the authority to offer cash incentives for researchers to tackle high-risk, high-reward research questions that have eluded more traditional funding platforms, such as grants and sponsored research.

The America COMPETES Act was first passed in 2007 and reauthorized in December 2010. Under its authority, federal agencies outline a problem they’d like solved on Challenge.gov, open the competition to individuals or teams, evaluate the results, and award a cash prize to whoever turns in the best solution.


Crowdsourcing for idea generation

Can big institutions get better ideas by including more people? High-profile crowdsourced ideation projects in our communities include:

  • Harvard Catalyst ran a contest last year, inviting Harvard and external community members to come up with their own answers to the question “What do we not know about Type 1 Diabetes?” They got over 190 responses. A panel of experts culled the list down to a dozen top award-winning ideas, and seven new projects have been funded to investigate these questions. Top ideas came from faculty, students, staff, and a patient.
  • The White House developed the SAVE Award, a national contest for federal government employees to suggest ways to make government processes cheaper and easier. The 2009 contest drew 38,000 entries, while the 2010 contest drew 18,000 entries, plus 160,000 votes. Winning ideas include not throwing away bulk medication at the VA, online scheduling at Social Security offices, and ending the mailing of print copies of the Federal Register. Check out the huge range of submitted ideas.
  • UCSF is running Bright Ideas, a campus-wide suggestion box. The campus community can share and vote on ideas. Previously implemented Bright Ideas included a system to share unused office furniture and supplies, and the installation of audio signals for the visually impaired at Parnassus crosswalks.

(Image by Faith Grober)

Compelling Video Describes New Visualization Tool “Many Eyes”

It can be challenging to create animated video that conveys a complex message. Here is a great example that shows it’s doable – mind you, without a single spoken word.

A 60 second social story about developing and refining ideas, gaining insight and sharing through community; all based on the premise that many sets of eyes are better than one!

Take a look and let me know what you think. By the way, the visualization tool “Many Eyes“, developed by IBM, is worth checking out as well.

World’s first crowdsourced clinical trial?

PatientsLikeMe, an online community where individuals can track their conditions and compare symptoms with algorithmically-similar patients, just published in Nature Biotechnology what it calls “a patient-initiated observational study refuting a 2008 published study that claimed lithium carbonate could slow the progression of the neurodegenerative disease, amyotrophic lateral sclerosis (ALS).”

The story in the Wall Street Journal adds:

“A new clinical trial found that lithium didn’t slow the progression of Lou Gehrig’s disease, but the findings released Sunday also showed that the use of a social network to enroll patients and report and collect data may deliver dividends for future studies. The study was based on data contributed by 596 patients with the disease, formally called amyotrophic lateral sclerosis or ALS. By showing that the drug didn’t have any effect on progression of the condition, it contradicted a small study three years ago that suggested such a benefit was possible. The new study, published online in the journal Nature Biotechnology, represents an early example of how social networking could play a role in clinical trials, an area of medical science with strict procedures that many would consider especially difficult to apply in the online world.” [via]
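
The methodological novelty is the matching step: with no randomized control arm, patients who took lithium were compared against other community members with similar disease trajectories. The sketch below shows one deliberately naive version of such matching, pairing each treated patient with the untreated patient whose pre-treatment progression rate is closest; it is an illustration only, not the algorithm PatientsLikeMe actually published:

```python
# Each patient: (id, pre-treatment decline rate in ALSFRS-R points/month).
# The rates and IDs below are invented for illustration.
treated = [("t1", -0.8), ("t2", -1.5)]
untreated = [("u1", -0.7), ("u2", -1.6), ("u3", -0.2)]

def match_controls(treated, untreated):
    """Greedily pair each treated patient with the unused control
    whose progression rate is most similar."""
    available = dict(untreated)
    matches = {}
    for pid, rate in treated:
        best = min(available, key=lambda u: abs(available[u] - rate))
        matches[pid] = best
        del available[best]  # each control is used at most once
    return matches

print(match_controls(treated, untreated))
# {'t1': 'u1', 't2': 'u2'}
```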


Unlocking the hospitalization algorithm

Heritage Provider Network (HPN) is launching the $3 million Heritage Health Prize, with help from data prediction contest operator Kaggle.

HPN is releasing anonymized patient health records, hospitalization records, and claims data. The team that can come up with the best algorithm to predict which patients have the greatest need for hospitalization wins the big bucks.

As they put it:

“More than 71 Million individuals in the United States are admitted to hospitals each year, according to the latest survey from the American Hospital Association. Studies have concluded that in 2006 well over $30 billion was spent on unnecessary hospital admissions. Each of these unnecessary admissions took away one hospital bed from someone else who needed it more…Can we identify earlier those most at risk and ensure they get the treatment they need? The Heritage Provider Network (HPN) believes that the answer may be “yes” – but to do it will require harnessing the world’s top experts from many fields. Heritage launched the $3 million Heritage Health Prize with one goal in mind: to develop a breakthrough algorithm that uses available patient data, including health records and claims data, to predict and prevent unnecessary hospitalizations.”
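
To make the contest’s task concrete, a naive baseline might reduce each patient’s prior-year claims to a few count features and fit a classifier to predict admission. The sketch below uses scikit-learn with invented toy data; actual entries were scored on a far richer dataset and task definition:

```python
from sklearn.linear_model import LogisticRegression

# Toy feature rows per patient (all values invented):
# [num_claims, num_ER_visits, num_chronic_diagnoses]
# Label: 1 if the patient was hospitalized the following year.
X = [
    [2, 0, 0],
    [14, 3, 2],
    [5, 1, 1],
    [20, 4, 3],
    [1, 0, 0],
    [9, 2, 2],
]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Predicted probability of hospitalization for a new patient.
new_patient = [[12, 2, 1]]
print(model.predict_proba(new_patient)[0][1])
```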
