Researcher Uses Mobile Technology to Enhance Teen Migraine Study

A new clinical trial for adolescent migraine is underway, and it’s harnessing the power of consumer technology to collect better data and make study participation easier. The BRAiN-M Study, which is examining whether melatonin (a natural supplement) is effective in preventing teenage migraine, uses Fitbit devices and an online “headache diary” to collect data from study participants remotely.

Besides trying to figure out how to prevent teenage migraine, the study’s lead investigator, Dr. Amy Gelfand of UCSF, is looking to make pediatric migraine clinical trials more inclusive and accessible.

UCSF dentistry collaborations, visualized

Looking at cross-institutional co-authorship networks is a useful way of seeing not only who we work with, but also where there may be interesting gaps.

I first looked at dentistry-related publications by UCSF researchers published in 2013, breaking out the institutions we co-authored with. And there we are, sitting pretty in the center of our universe, collaborating with major institutions in the US, Korea, Australia, Italy, Denmark, and more.

(Details: Institution node sizes indicate the total volume of dentistry-related articles published. Connecting line widths indicate the number of articles co-authored between two institutions. Distance between nodes indicates the tightness of co-authorship networks, and different sets of node colors help distinguish groups of institutions whose researchers frequently co-author together. Of 462 institutions that collaborated with UCSF researchers, we’re showing only 91 that had 10 or more cross-institutional articles in that time.)

View full-size visualization (PDF)

UCSF dentistry research co-authorships, Jan 1 - Dec 5 2013

Then I looked at the total universe of dentistry-related publications published in 2013 (see below). Notice a difference? I have to admit that it took me a while to find UCSF in the mess of dots. (If you look at the full-size view, we’re in the medium blue section, next to the pinks.) Of course this says more about the sheer volume of research being published by universities all over the world than about any lack of cross-institutionally collaborative spirit on our part; in fact I hid over 80% of the institutions in the first image to keep it readable, which accounts for a good chunk of the difference. But the sheer weight of institutions from Europe, East Asia, and Latin America in this second image that aren’t there in the first is intriguing, and something I’m going to try digging into.

(Details: Institution node sizes indicate the total volume of dentistry-related articles published. Connecting line widths indicate the number of articles co-authored between two institutions. Distance between nodes indicates the tightness of co-authorship networks, and different sets of node colors help distinguish groups of institutions whose researchers frequently co-author together. Of 2,575 institutions that we found, we’re showing only 374 that had 10 or more cross-institutional articles in that time.)
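The sizing and filtering rules described in these captions can be sketched in a few lines of Python. This is a toy illustration, not the pipeline behind the actual visualizations; the institution names and counts below are invented.

```python
from collections import defaultdict

# Hypothetical co-authorship records:
# (institution A, institution B, number of co-authored articles).
# Edge width in the visualization ~ this count.
coauthorships = [
    ("UCSF", "Seoul National University", 14),
    ("UCSF", "University of Copenhagen", 11),
    ("UCSF", "University of Melbourne", 6),
]

# Node size ~ total article volume per institution; here approximated
# by summing each institution's co-authored counts.
totals = defaultdict(int)
for a, b, n in coauthorships:
    totals[a] += n
    totals[b] += n

# Keep only institutions with 10+ cross-institutional articles,
# mirroring the filter described in the caption.
visible = {inst for inst, n in totals.items() if n >= 10}
print(sorted(visible))
```

With this toy data, Melbourne falls below the 10-article threshold and drops out, just as most of the 462 institutions were hidden in the first image.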

View full-size visualization (PDF)

Dentistry research co-authorships, Jan 1 - Dec 5 2013

(And yes, I realize full well that I’m probably looking at the wrong things here: privileging the count of cross-institutional collaborations as an end in itself, avoiding any consideration of research quality, and giving greater visual weight to institutions that publish more, regardless of the size of the institution or the quality of the work. Pretty pictures can hide lots of flaws. I hope you’ll bear with me as I publicly iterate through these topics, step by step, hopefully getting just a little bit less dumb every time.)

Additional uninteresting details: I searched Web of Science for dentistry-related articles published in 2013 (i.e. from January 1-December 5, 2013). I began by running a search for any articles published in 2013 matching a number of dentistry-related keywords (dental, dentistry, electrogalvanism, endodontics, jaw relation record, mouth rehabilitation, odontometry, oral, orthodontics, periodontics, prosthodontics, teeth, tooth), then filtered only those that matched the “DENTISTRY ORAL SURGERY MEDICINE” Web of Science category.
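The two-step filter described above — match any dentistry keyword, then keep only records in the target Web of Science category — can be sketched like this. The record dicts and the shortened keyword list are illustrative only; a real Web of Science export has a different structure.

```python
# Subset of the dentistry keywords from the post, for illustration.
KEYWORDS = {"dental", "dentistry", "endodontics", "orthodontics",
            "periodontics", "prosthodontics", "oral", "teeth", "tooth"}
CATEGORY = "DENTISTRY ORAL SURGERY MEDICINE"

# Hypothetical search results (real WoS records carry many more fields).
records = [
    {"title": "Oral biofilm dynamics", "category": "DENTISTRY ORAL SURGERY MEDICINE"},
    {"title": "Oral history methods", "category": "HISTORY"},
]

def matches(record):
    """Keyword hit in the title AND the right WoS category."""
    words = set(record["title"].lower().split())
    return bool(words & KEYWORDS) and record["category"] == CATEGORY

hits = [r for r in records if matches(r)]
print(len(hits))  # the "oral history" paper matches a keyword but fails the category filter
```

The category filter is what keeps keyword false positives like “oral history” out of the dentistry set.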

CTSA 2013 Annual Face to Face: The Power of Storytelling

Hosted by: University of New Mexico’s Health Sciences Center (HSC) in cooperation with UNM’s Clinical and Translational Science Center (CTSC)

This year’s Clinical and Translational Science Awards (CTSA) communications key function committee (CKFC) Annual Face to Face focused on the critical role of storytelling in lifting research out of its silos and bringing it to a wider audience.

Richard Larson, MD, PhD, UNM HSC Vice Chancellor for Research, compared communicators to ambassadors of information; after all, “research ignored is research wasted.”

Purpose/Objectives of the Annual F2F:

  • Increase understanding and support of NCATS and NIH priorities
  • Improve awareness of CTSA value, dissemination of key information, and collaboration among key stakeholders across the consortium
  • Inspire CKFC members through new connections, skill building, clear direction, and storytelling

Here’s a selection of tweets by CTSA communicators during the two-day conference:


ImpactStory: Telling Data-Driven Stories About Research Impact

ImpactStory is the relaunched version of total-impact. Don’t miss the post by @jasonpriem and Heather Piwowar, published on the Impact of Social Science blog. They describe some of the highlights:

To use ImpactStory, start by pointing it to the scholarly products you’ve made: articles from Google Scholar Profiles, software on GitHub, presentations on SlideShare, and datasets on Dryad (and we’ve got more importers on the way).

Then we search over a dozen Web APIs to learn where your stuff is making an impact. Instead of the Wall Of Numbers, we categorize your impacts along two dimensions: audience (scholars or the public) and type of engagement with research (view, discuss, save, cite, and recommend).

In each dimension, we figure your percentile score compared to a baseline; in the case of articles, the baseline is “articles indexed in Web of Science that year.” If your 2009 paper has 17 Mendeley readers, for example, that puts you in the 87th-98th percentile of all WoS-indexed articles published in 2009 (we report percentiles as a range expressing the 95% confidence interval). Since it’s above the 75th percentile, we also give it a “highly saved by scholars” badge. Scanning the badges helps you get a sense of your collection’s overall strengths, while also letting you easily spot success stories.
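The percentile-and-badge logic described in that excerpt can be sketched as follows. This is a point-estimate simplification (ImpactStory reports a percentile *range* from a 95% confidence interval), and the baseline sample and threshold here are invented for illustration.

```python
import bisect

# Hypothetical baseline: Mendeley reader counts for same-year WoS articles.
baseline = sorted([0, 0, 1, 1, 2, 3, 5, 8, 13, 21])

def percentile(value, sample):
    """Percent of the baseline sample strictly below `value`."""
    return 100.0 * bisect.bisect_left(sample, value) / len(sample)

readers = 17  # e.g. the 2009 paper from the excerpt
pct = percentile(readers, baseline)

# Badge awarded above the 75th percentile, as the excerpt describes.
badge = "highly saved by scholars" if pct > 75 else None
print(pct, badge)
```

With this toy baseline, 17 readers beats 9 of 10 baseline articles, landing at the 90th percentile and earning the badge.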


Describing the Difference Research Has Made to the World

Here is an interesting new blog post by Heather Piwowar about the different ways research can impact the world and the importance of telling them apart. Good food for thought as we think about ways to help researchers analyze how people are reading, bookmarking, sharing, discussing, and citing research online.

I think Anirvan made a great point about finding ways we can integrate “altmetrics” data with UCSF Profiles. Some of the metrics mentioned below may be a great starting point.

Here’s what Heather writes:

Figuring out the flavors of science impact. CC-BY-NC by maniacyak on flickr

We have clustered all PLoS ONE papers published before 2010 using five metrics that are fairly distinct from one another: HTML article page views, number of Mendeley reader bookmarks, Faculty of 1000 score, Web of Science citation counts as of 2011, and a combo count of twitter, Facebook, delicious, and blog discussion.

We normalized the metrics to account for differences due to publication date and service popularity, transformed them, and standardized to a common scale. We tried lots of cluster possibilities; it seems that five clusters fit this particular sample the best.

Here is a taste of the clusters we found. Bright blue in the figure below means that the metric has high values in that cluster, dark grey means the metric doesn’t have much activity. For example, papers in “flavour E” in the first column have fairly low scores on all five metrics, whereas papers in “flavour C” on the far right have a lot of HTML page views and Sharing (blog posts, tweeting, facebook clicking, etc) activity.
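The normalize / transform / standardize preprocessing Heather describes can be sketched roughly as below. The toy metric matrix is invented, and the exact transformation used in the study may differ; this just shows the common pattern of log-transforming skewed count metrics and putting them on a common scale before clustering (e.g. k-means with k=5).

```python
import math

# rows = papers; columns = (page views, Mendeley saves, F1000, citations, shares)
papers = [
    [1200, 17, 0, 4, 9],
    [300, 2, 0, 1, 0],
    [9000, 60, 6, 25, 140],
]

def log_transform(x):
    return math.log1p(x)  # log(1 + x) handles zero counts gracefully

def standardize(column):
    """Rescale a column to zero mean and unit variance."""
    mean = sum(column) / len(column)
    var = sum((v - mean) ** 2 for v in column) / len(column)
    sd = math.sqrt(var) or 1.0  # guard against constant columns
    return [(v - mean) / sd for v in column]

transformed = [[log_transform(v) for v in row] for row in papers]
columns = list(zip(*transformed))  # transpose to per-metric columns
standardized = list(zip(*[standardize(list(c)) for c in columns]))
# `standardized` would then be fed to a clustering step (e.g. k-means, k=5).
```

After this step each metric contributes comparably to the distance computation, so a popular metric like page views can’t dominate the clustering.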

Further reading: