ImpactStory: Telling Data-Driven Stories About Research Impact

ImpactStory is the relaunched version of total-impact. Don’t miss the post by @jasonpriem and Heather Piwowar, published on the Impact of Social Science blog. They describe some of the highlights:

To use ImpactStory, start by pointing it to the scholarly products you’ve made: articles from Google Scholar Profiles, software on GitHub, presentations on SlideShare, and datasets on Dryad (and we’ve got more importers on the way).

Then we search over a dozen Web APIs to learn where your stuff is making an impact. Instead of the Wall Of Numbers, we categorize your impacts along two dimensions: audience (scholars or the public) and type of engagement with research (view, discuss, save, cite, and recommend).
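To make that two-dimensional rollup concrete, here's a minimal sketch in Python. The metric names and the mapping into (audience, engagement) buckets are my own illustrative assumptions, not ImpactStory's actual code:

```python
# Hypothetical sketch: collapsing a "Wall Of Numbers" of per-service
# counts into the two dimensions described above. All metric names and
# category assignments here are invented for illustration.

from typing import NamedTuple

class ImpactCategory(NamedTuple):
    audience: str     # "scholars" or "public"
    engagement: str   # "view", "discuss", "save", "cite", or "recommend"

# Invented mapping from a metric source to its impact category.
METRIC_CATEGORIES = {
    "mendeley:readers":    ImpactCategory("scholars", "save"),
    "wos:citations":       ImpactCategory("scholars", "cite"),
    "f1000:score":         ImpactCategory("scholars", "recommend"),
    "twitter:tweets":      ImpactCategory("public",   "discuss"),
    "delicious:bookmarks": ImpactCategory("public",   "save"),
    "plos:html_views":     ImpactCategory("public",   "view"),
}

def categorize(raw_metrics: dict[str, int]) -> dict[ImpactCategory, int]:
    """Sum raw per-service counts into (audience, engagement) buckets."""
    buckets: dict[ImpactCategory, int] = {}
    for metric, count in raw_metrics.items():
        category = METRIC_CATEGORIES.get(metric)
        if category is not None:
            buckets[category] = buckets.get(category, 0) + count
    return buckets

print(categorize({"mendeley:readers": 17, "twitter:tweets": 4}))
# -> {(scholars, save): 17, (public, discuss): 4}
```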

In each dimension, we figure your percentile score compared to a baseline; in the case of articles, the baseline is “articles indexed in Web of Science that year.” If your 2009 paper has 17 Mendeley readers, for example, that puts you in the 87th-98th percentile of all WoS-indexed articles published in 2009 (we report percentiles as a range expressing the 95% confidence interval). Since it’s above the 75th percentile, we also give it a “highly saved by scholars” badge. Scanning the badges helps you get a sense of your collection’s overall strengths, while also letting you easily spot success stories.
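The percentile-and-badge logic might look something like the sketch below. The baseline is a made-up stand-in for “articles indexed in Web of Science that year,” and the bootstrap is just one plausible way to turn a sample into a percentile range with a 95% confidence interval; the post doesn't spell out ImpactStory's exact method:

```python
# Illustrative sketch, not ImpactStory's code: estimate a percentile
# range for one paper's metric against a baseline sample, then award a
# badge when the paper sits above the 75th percentile.

import random
from bisect import bisect_left, bisect_right

def percentile_range(value, baseline, n_boot=1000, seed=0):
    """Bootstrap a 95% confidence interval for the percentile of
    `value` within the `baseline` sample."""
    rng = random.Random(seed)
    n = len(baseline)
    estimates = []
    for _ in range(n_boot):
        resample = sorted(rng.choices(baseline, k=n))
        # Rank of `value`, using the midpoint of any ties.
        rank = (bisect_left(resample, value) + bisect_right(resample, value)) / 2
        estimates.append(100 * rank / n)
    estimates.sort()
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

# Made-up baseline: Mendeley reader counts for 2009 articles.
gen = random.Random(1)
baseline_2009 = [0] * 500 + [gen.randint(1, 30) for _ in range(500)]

lo, hi = percentile_range(17, baseline_2009)
# Conservative design choice: badge only if the whole range is above 75.
badge = "highly saved by scholars" if lo > 75 else None
print(f"percentile range: {lo:.0f}-{hi:.0f}, badge: {badge}")
```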


Describing the Difference Research Has Made to the World

Here is an interesting new blog post by Heather Piwowar about the different ways research can impact the world and the importance of telling them apart. Good food for thought as we think about ways to help researchers analyze how people are reading, bookmarking, sharing, discussing, and citing research online.

I think Anirvan made a great point about thinking through ways we can integrate “altmetrics” data with UCSF Profiles. Some of the metrics mentioned below may be a great starting point.

Here’s what Heather writes:

Figure: Figuring out the flavors of science impact. (Image CC-BY-NC by maniacyak on Flickr.)

We have clustered all PLoS ONE papers published before 2010 using five metrics that are fairly distinct from one another: HTML article page views, number of Mendeley reader bookmarks, Faculty of 1000 score, Web of Science citation counts as of 2011, and a combo count of Twitter, Facebook, Delicious, and blog discussion.

We normalized the metrics to account for differences due to publication date and service popularity, transformed them, and standardized to a common scale. We tried lots of cluster possibilities; it seems that five clusters fit this particular sample the best.
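Here's a rough sketch of that pipeline using scikit-learn on toy data. Heather's post doesn't name the exact transforms or clustering algorithm, so the log transform, k-means, and silhouette-based choice of k below are assumptions:

```python
# Hedged sketch of a normalize -> transform -> standardize -> cluster
# pipeline, with the number of clusters chosen by silhouette score.
# Toy random data stands in for the real per-paper metrics.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy matrix: papers x 5 metrics (views, Mendeley, F1000, citations, sharing).
metrics = rng.lognormal(mean=2.0, sigma=1.0, size=(300, 5))

# Log transform tames the heavy tails; standardizing puts every metric
# on a common scale. (Normalizing for publication date and service
# popularity, as the post describes, is omitted here for brevity.)
X = StandardScaler().fit_transform(np.log1p(metrics))

# Try a range of cluster counts and keep the best-scoring fit.
best_k, best_score, best_model = None, -1.0, None
for k in range(2, 9):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    score = silhouette_score(X, model.labels_)
    if score > best_score:
        best_k, best_score, best_model = k, score, model

print(f"best k = {best_k} (silhouette = {best_score:.2f})")
```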

Here is a taste of the clusters we found. Bright blue in the figure below means that the metric has high values in that cluster; dark grey means the metric doesn’t have much activity. For example, papers in “flavour E” in the first column have fairly low scores on all five metrics, whereas papers in “flavour C” on the far right have a lot of HTML page view and sharing (blog posts, tweeting, Facebook clicking, etc.) activity. (View the image and read the rest at the original post.)
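To show how a figure like that reads, here's a standalone sketch with entirely invented cluster centers, arranged only to loosely mirror the description (“flavour E” low across the board in the first column, “flavour C” high on views and sharing at the far right):

```python
# Purely illustrative cluster-center heatmap; every number below is
# made up and does not reproduce the actual PLoS ONE results.

import numpy as np
import matplotlib.pyplot as plt

metric_names = ["HTML views", "Mendeley", "F1000", "WoS citations", "Sharing"]
flavours = ["E", "D", "B", "A", "C"]  # invented column order
centers = np.array([
    [-0.8,  0.2, -0.3,  0.4,  1.8],  # HTML views
    [-0.7,  1.1,  0.3, -0.2, -0.1],  # Mendeley
    [-0.6,  0.3,  1.4, -0.4, -0.3],  # F1000
    [-0.7,  0.9,  0.7, -0.3, -0.4],  # WoS citations
    [-0.6, -0.4,  0.1, -0.2,  1.9],  # Sharing
])

fig, ax = plt.subplots()
# Note: with the Blues colormap, darker means higher; the original
# figure instead uses bright blue for high and dark grey for low.
im = ax.imshow(centers, cmap="Blues", aspect="auto")
ax.set_xticks(range(len(flavours)))
ax.set_xticklabels([f"flavour {f}" for f in flavours])
ax.set_yticks(range(len(metric_names)))
ax.set_yticklabels(metric_names)
fig.colorbar(im, label="standardized metric level (toy values)")
ax.set_title("Illustrative cluster-center heatmap")
plt.show()
```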

Further reading: