“Am I having a stroke?” UCSF Researchers Test New Way of Connecting Physicians With Information Seekers Online

Five questions with UCSF neurologist and stroke researcher Anthony Kim about his new study on how the Internet can help connect physicians with people who are searching for health information online and potentially reduce the incidence of preventable disease.

Anthony S. Kim, MD, MAS, is an assistant clinical professor of neurology at the University of California, San Francisco (UCSF) and Medical Director of the UCSF Stroke Center. His research focuses on improving the diagnosis and cost-effective management of stroke and transient ischemic attack (TIA, also called “mini-stroke”). An estimated 800,000 new strokes occur each year in the U.S., making stroke the fourth leading cause of death in America. Kim believes that the Internet opens up new opportunities that will change the way we develop interventions and conduct research to improve health.

Q: Millions of Americans search online for health information each year. Scientists are using this type of data to better understand flu outbreaks, the seasonal variance of kidney stones, the demographic prevalence of stroke, and even to demonstrate the online effectiveness of health awareness campaigns. What did you learn in your latest study?

We were surprised to see that tens of thousands of people were regularly ‘asking’ a search engine about stroke-related symptoms, in many cases shortly after the onset of those symptoms. In fact, every month, about 100 people were finding our study website by entering the query “Am I having a stroke?” directly into their Google search box.

One of the challenges with mini-stroke is that most people do not seek urgent medical attention because the symptoms are transitory by definition. So people don’t realize that it is a medical emergency. Even though the symptoms may have resolved, the risk of a subsequent stroke is very high—upwards of 11% within the next 90 days—with most of this risk concentrated in the first hours and days after the mini-stroke. So getting the message out there about urgent medical attention is key.

We started this study because we thought that if people who have had a mini-stroke are looking online for information about their symptoms, then rather than just listing static health information about the disease on a website, maybe we could engage them by making the website more interactive and asking them to enter some of their symptoms online. And we wondered whether we could use this information to assess whether or not it was a true TIA or stroke and then encourage them to seek urgent medical attention as appropriate.

One third of the people we identified hadn’t had a medical evaluation for their mini-stroke yet, which is concerning, because a mini-stroke is a medical emergency. Instead of calling a doctor or going to the emergency room, many people were turning to the Internet as their first source of health information.

Q: How did your approach work exactly?

When a person searched on Google for stroke-related keywords, a paid text advertisement, “Possible Mini-Stroke/TIA?”, appeared with a link to the study website. The ad appeared on the search results page and on related websites with existing content about the topic.

When users clicked on the text ad link, they were directed to the study website. Those visitors who met all of the study’s entry criteria were asked to provide informed consent online. They then reported their demographic information and symptoms based on a risk score developed for use by clinicians.

We were notified in real-time as soon as someone enrolled, and then we arranged for two vascular neurologists to follow up with the patient by telephone.
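For readers curious about the mechanics, here is a minimal Python sketch of this kind of screen-consent-notify flow. The eligibility rules, symptom fields, point values, and notification hook are hypothetical placeholders, not the study’s actual code or the clinicians’ risk score.

```python
# Hypothetical sketch of an online screening and real-time notification flow.
# Eligibility rules, symptom fields, scoring, and notify() are illustrative only.
from dataclasses import dataclass

@dataclass
class Submission:
    age: int
    consented: bool
    symptoms: dict  # e.g. {"unilateral_weakness": True, "speech_disturbance": False}

def is_eligible(s: Submission) -> bool:
    # Placeholder entry criteria: adult and consent given online.
    return s.age >= 18 and s.consented

def symptom_score(s: Submission) -> int:
    # Toy point system standing in for a clinician-developed risk score.
    points = 2 if s.symptoms.get("unilateral_weakness") else 0
    points += 1 if s.symptoms.get("speech_disturbance") else 0
    return points

def notify_study_team(s: Submission, score: int) -> None:
    # In the real study, enrollment triggered a real-time alert so a
    # vascular neurologist could follow up by telephone.
    print(f"New enrollee (score={score}); schedule follow-up call")

def handle_submission(s: Submission) -> None:
    if not is_eligible(s):
        return  # visitor screened out
    notify_study_team(s, symptom_score(s))

handle_submission(Submission(age=62, consented=True,
                             symptoms={"unilateral_weakness": True}))
```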

Q: You tested the approach for about four months. What’s your verdict?

We definitely think that there is a lot of potential here. About 60% of U.S. adults say that their real-life medical decisions have been influenced by information they read online. This changes the way we think about providing medical care and conducting research.

With a modest advertising budget, we were able to attract more than 200 people to our study website each day from all 50 states. About one percent of them (251 out of 25,000) completed the online questionnaire, which allowed us to contact them for follow up. Although this seems low at first, it is comparable to conversion rates in other domains of online advertising.

Also, even though the people who joined the study were a highly selected group, the incremental cost of reaching an additional person was low, and it would still be very interesting to evaluate in the future whether a targeted, cost-effective public health intervention could be applied to this group.

Before we started, we thought that we might lose people throughout the enrollment process since we confirmed eligibility and asked for consent online, but we didn’t. For the most part, if people were interested in participating, they completed the entire online enrollment process.

During follow up calls, we learned that 38% of enrollees actually had a mini-stroke or stroke. But fully a third of them had not seen a doctor yet. Our approach made it possible to connect with these people fairly efficiently and early on in order to influence their behavior acutely.

Despite these potential advantages, Internet-based public health interventions that target people who are looking for health information online are still underdeveloped and understudied. There’s a lot for us to learn in this space.

Q: What online tools did you use to carry out your project?

We used Google AdWords and Google’s Display Network to target English-speaking adults in the U.S. During the four-month enrollment period, the tool automatically displayed our ads more than 4.5 million times based on criteria such as location, demographics, and search terms.

Ideally, to minimize ongoing costs you would want to build and optimize a website so that it ranks highly among the non-paid (organic) search results. Non-profits can also take advantage of Google Grants, a program that supports in-kind donations of advertising resources to help selected organizations promote their websites on Google.

Q: Do you have any tips for others who want to develop similar projects?

We quickly realized that it helped to work closely with our Institutional Review Board (IRB) given that this is a new and evolving area of research, and to ensure data security and safety mechanisms are in place to protect participants. I definitely recommend that.

It’s also important to be realistic about the goals and metrics of success, and not to over-interpret numbers that seem to reflect low engagement. We saw that most visitors (86%) exited the website within a few seconds of arriving at the home page. This probably reflected people who were looking for something else and clicked away immediately. But the beauty of the Internet is that it is very efficient at reaching people across a wide geographic area very quickly. So it is not unexpected that we would also attract visitors who are not eligible for the study or not interested in enrolling.

Groups interested in using this approach should think about selection bias, authentication, validation, and the “digital divide.” Even though there is some evidence that disparities in access to and adoption of Internet technologies are narrowing in the U.S., the reach of the Internet is not uniform, and that matters depending on the goals and target population of your study or intervention.

But selection bias issues aside, for a public health intervention you may be most interested in other metrics such as the number of people reached per dollar spent, or the burden of disease averted per dollar spent, which the Internet is particularly suited to help optimize.

And, it’s definitely beneficial to bring different subject matter and methods experts to the table. Knowledge of search engine optimization, online accessibility, website and user interface design is not necessarily part of the core expertise of a traditional clinical researcher, but developing these skills and interacting with experts in these areas could become very important for the new cadre of clinical researchers and public health professionals coming down the pipeline.

The original article was published on CTSI at UCSF

ImpactStory: Telling Data-Driven Stories About Research Impact

ImpactStory is the relaunched version of total-impact. Don’t miss the post by @jasonpriem and Heather Piwowar, published on the Impact of Social Sciences blog. They describe some of the highlights:

To use ImpactStory, start by pointing it to the scholarly products you’ve made: articles from Google Scholar Profiles, software on GitHub, presentations on SlideShare, and datasets on Dryad (and we’ve got more importers on the way).

Then we search over a dozen Web APIs to learn where your stuff is making an impact. Instead of the Wall Of Numbers, we categorize your impacts along two dimensions: audience (scholars or the public) and type of engagement with research (view, discuss, save, cite, and recommend).

In each dimension, we figure your percentile score compared to a baseline; in the case of articles, the baseline is “articles indexed in Web of Science that year.” If your 2009 paper has 17 Mendeley readers, for example, that puts you in the 87th-98th percentile of all WoS-indexed articles published in 2009 (we report percentiles as a range expressing the 95% confidence interval). Since it’s above the 75th percentile, we also give it a “highly saved by scholars” badge. Scanning the badges helps you get a sense of your collection’s overall strengths, while also letting  you easily spot success stories.
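As a rough illustration of the percentile-and-badge idea (not ImpactStory’s actual code), the Python sketch below places one paper’s Mendeley reader count within a made-up baseline sample and awards a badge above the 75th percentile. The real service reports the percentile as a 95% confidence-interval range, which this sketch omits.

```python
# Illustrative only: place one paper's Mendeley reader count within a
# made-up baseline of same-year articles and apply a simple badge rule.
from scipy import stats

baseline_readers = [0, 0, 1, 2, 2, 3, 5, 8, 13, 21, 40]   # fabricated example baseline
paper_readers = 17

percentile = stats.percentileofscore(baseline_readers, paper_readers)
print(f"Saved by scholars: roughly the {percentile:.0f}th percentile of the baseline")

if percentile > 75:
    print("Badge: highly saved by scholars")
```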


Leveraging the Social Web for Research Networking

Five Questions with CTSI Technical Architect Eric Meeks About the Benefits of OpenSocial 

This article highlights…

  • …what OpenSocial is and how it can help advance research networking, 
  • …what institutions interested in using OpenSocial should keep in mind,
  • …what the Clinical and Translational Science Institute (CTSI) at UCSF has done to promote OpenSocial in academia and to build a community of developers and supporters,
  • …and how the business sector can tap into this emerging market.

Eric Meeks has worked for numerous Silicon Valley startups, including Ning, one of the first social network systems to support OpenSocial. Since 2009, he has been the lead technical architect for the Clinical & Translational Science Institute (CTSI) at the University of California, San Francisco (UCSF), the first academic biomedical institution using OpenSocial in an open source research networking product. He is also a founder of Open Research Networking Gadgets (ORNG).

Q: OpenSocial is built upon the idea that ‘the web is better when it’s social’. It encourages developers to build standardized web applications that can be shared across different social networking platforms. How can academia and biomedical research benefit from OpenSocial?

If you look at the topography of all the different research institutions, many of them run different back-end systems, from Windows to Linux, Oracle to MySQL to SQL Server, and all with custom data models. Despite differences in our underlying systems, however, we increasingly have one thing in common: we deploy research networking systems to help investigators find and connect with one another. Some systems are based on Profiles, some on VIVO, and some are custom, like LOKICAP or Digital Vita. As much as we want to share new features and applications to extend these systems, it’s really difficult to do that because the applications are hard-coded and tailored to our specific institutional databases. As a consequence, many applications are rebuilt for the various systems. OpenSocial allows us to change that.

By adopting a solution that is standardized by a large community, we agree on a common way of sharing these things. We can build one version, share it amongst everybody, and be more cost-effective. In other words, implementing OpenSocial makes applications interoperable with any social network system that supports them. Academic research institutions can use OpenSocial to open up their websites so that multiple people can add new features and applications, and they can all do it at the same time and independently of one another. ‘A platform beats an application every time,’ as O’Reilly Media put it. I see that as extremely powerful. It’s the only way I see to solve the problem that we have with different institutions deploying different research networking tools.

Q: What are some of the ways that the social web and OpenSocial are relevant to academia?

In research, as in other areas of life, communication and collaboration are supported by relationships. At their core, research networking systems are similar to social networking systems like Facebook, LinkedIn, or Ning. They show a researcher’s expertise and how researchers are connected to each other.

The difference is that the social ‘friend’ in an academic network is embellished with different attributes such as co-authorship, mentorship, and shared areas of research. Research networking systems were created to leverage and enhance these academic relationships.

Q: You lead the OpenSocial efforts at UCSF’s Clinical and Translational Science Institute. How would you describe your work to improve research networking?

We’re working with researchers and administrators to identify needs for applications that are appropriate for research networking tools. The gadgets that we have developed are intended to fill some of the gaps that exist. They allow individuals to update their research profile by adding presentations hosted at SlideShare, data that indicates an interest in mentoring, and relevant web links. Gadgets also make it easy to export publications in different formats from any profile, and help users build lists of people based on common attributes like research interests. And finally, a more generic Google Search gadget broadens the existing Profiles search to include free-text fields like the profile narrative and awards. All of these applications are available to any institution that wants to utilize them. (See the full gadget library.) It is our hope that our library of free gadgets will grow as more institutions join the OpenSocial community.

Right now, we’re in the process of building the community, which also means that the first members of this community don’t really get to benefit from it yet, but that’s changing. Wake Forest University, for example, has adopted OpenSocial and is using one of our applications. But they have also built their own applications and made them available for free. Andy Bowline, a programmer at Wake Forest University, created a gadget that matches NIH RePORTER grant data with researchers’ profiles to identify grants that may be appropriate for a researcher to look into. Andy reached out to the NIH OER Grant Search team and got permission to scrape their website on a daily basis. The gadget grabs the data and loads it into a search engine, and it talks to a web service that Andy built on top of that to find matching grants. Andy not only shared the gadget that does this; Wake Forest also lets us use their web services. That’s exactly what we want to see. We’re not competing. On the contrary, we’re trying to do the same thing. By using OpenSocial we can do it together.

Most excitingly, our OpenSocial code is becoming an official part of both the Profiles and VIVO products. We have been working with the developer teams of both products to support OpenSocial in our development environment. It’s great to see the same gadgets running in these two different systems, especially when you consider that the technology stacks of the two products couldn’t be more different. Profiles runs on Microsoft technologies while VIVO runs on Java. However, since they both support OpenSocial, those differences don’t matter; the gadgets run in either environment without alteration.

Q: What tips do you have for academic institutions interested in adopting OpenSocial? 

There are a few things. I think it is helpful to have internal discussions with your development team and web strategy leaders to discuss how existing applications could be repurposed in a research networking site. Not all applications are suited for this type of deployment of course, but for some it may be the best way to make sure that these applications are seen and used.

OpenSocial is a huge API. I recommend integrating with Apache Shindig, which is the reference implementation of OpenSocial. It shows you in ‘living code’ what an OpenSocial website should do, and it also serves as a library to make your website OpenSocial compliant. LinkedIn, Google, Nature Network, and others are using it.
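As a taste of what OpenSocial compliance buys you, here is a hedged Python sketch that fetches a person record over the OpenSocial REST protocol that Shindig implements. The host name, base path, and user id are hypothetical placeholders; consult your own deployment’s configuration.

```python
# Hypothetical example: querying an OpenSocial REST endpoint (as served by
# Apache Shindig) for a person's profile. The host, base path, and user id
# are placeholders; check your own deployment's configuration.
import json
import urllib.request

BASE = "https://shindig.example.edu/social/rest"    # hypothetical Shindig host
url = f"{BASE}/people/jane.doe/@self"                # OpenSocial "person" resource

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Single-person responses are typically wrapped in an "entry" object.
person = data.get("entry", data)
print(person.get("displayName"), person.get("profileUrl"))
```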

I also recommend taking advantage of the applications we already developed to help save time and money. We’re giving out both the applications and the code we use to convert our website to be OpenSocial-enabled, which lowers the technical bar quite a bit. It’s easy to apply the code in a few days.

Another important lesson that we learned is the need to prepare for managing expectations and overcoming political hurdles. OpenSocial is extremely powerful, but as with any technology, it doesn’t do everything and it does require some amount of technical investment.

It is also helpful for interested groups to know that we have combined OpenSocial with the Resource Description Framework (RDF) standard that is core to VIVO and can now also be found in Profiles and LOKI. RDF is a component of the semantic web and Linked Open Data. When applications support RDF, it is much easier for them to share data. In fact, the CTSA network recommends that institutions use VIVO-compatible RDF within their research networking tools so that all of our data can be accessed more easily. With OpenSocial, we are able to use VIVO RDF to expose much richer data to our gadgets than the OpenSocial specification originally allowed. This is a great win and allows us to build gadgets that are very specific to the needs of biomedical researchers without having to sacrifice interoperability.
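To make the RDF point concrete, here is a minimal sketch, assuming a hypothetical VIVO-style profile URL, that loads a researcher’s RDF with the rdflib library and lists the rdfs:label values it finds. It illustrates how linked data becomes easy to consume once it is exposed; it is not the ORNG code.

```python
# Minimal illustration of consuming VIVO-style RDF with rdflib.
# The profile URL and serialization format are hypothetical placeholders.
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("https://profiles.example.edu/profile/12345", format="xml")  # RDF/XML assumed

# Print every rdfs:label found in the researcher's profile graph.
for subject, label in g.subject_objects(RDFS.label):
    print(subject, "->", label)
```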

If your institution uses a product like VIVO or Profiles that you think would benefit from being OpenSocial, we definitely want to hear from you, because we want to make sure that we have the same flavor of OpenSocial across our products that are truly interoperable. And, consider joining our new initiative called Open Research Networking Gadgets (ORNG), pronounced “orange.”

Q: Originally, OpenSocial was designed by corporations such as Google and MySpace for social network applications. While OpenSocial is seeing wider adoption among enterprise companies, adoption has been slower in the academic biomedical arena. What would you like to see from the business sector?

I would like for industry to recognize that there is an emerging market here that they can tap into. It’s a market that has a lot of value, a lot of social benefit, and a lot of wonderful brands behind it such as UCSF, Harvard, and Cornell. This is the kind of work that industry should be proud to be a part of, and they can convert that into a marketing message. I also want industry to know that we would like to work with them.

What we don’t want is for OpenSocial to drift off into some area where it’s dominated by the entertainment or finance industry and is no longer viable for science and academia. The OpenSocial Foundation is a main driver in this respect, and they are eager for adoption by people and institutions working in the health sciences. The OpenSocial Foundation is much more targeted at collaboration and productivity as opposed to entertainment.

Q: What’s your vision for OpenSocial at UCSF and for academia in general? 

A standard only has value when it has adoption across multiple platforms, so we want to promote it and build a community. We also want to be a part of that community to be able to share the benefits of the networking effect. Right now research networking systems only give us a hint of what they’re capable of doing. People today are using these platforms to find out about one another, and even this is happening in a limited sense. People should be using these platforms not just to find out about one another, but to interact and get things done. That’s what people are doing with LinkedIn, Facebook, etc. With a strong OpenSocial community we can advance and extend current research networking systems much faster and cheaper to give researchers and administrators the opportunity to be hyper-connected and hopefully more productive.

This Q&A is part of “Digital Media & Science: A Perspectives Series from CTSI at UCSF” moderated by Katja Reuter, PhD, associate director of communications for CTSI. This series explores how digital media and communications can be used to advance science and support academia. 

Original post on CTSI at UCSF

How Social Media Is Changing the Way We Talk About Science

Five Questions With UCSF Neuroscientist Bradley Voytek

Brad Voytek, PhD, a postdoctoral fellow at the University of California, San Francisco, makes use of big data, mapping, and mathematics to discover how brain regions work together and give rise to cognition. In his work as a researcher, science teacher, and outreach advocate, he regularly uses social media such as his blog Oscillatory Thoughts, Twitter, and Quora. In 2006, he split the Time magazine Person of the Year award.


Q: You’re interested in leveraging data to modernize science. Do you see a role for social media in changing research? 

Social media already is changing research. In many ways! First, there’s a direct effect wherein scientists are beginning to use data collected by social media organizations to analyze behavioral patterns, such as a study from 2011 that looked at millions of tweets to analyze fluctuations in mood.

The way we conduct and communicate research is also changing. Since the 1600s, scientists have communicated findings through peer review. But these results are static, and conversations about specific study details, methods, and so on have been private. Now, scientific publishing organizations like Nature Network or SciVerse and professional sites such as Mendeley and ResearchGate are providing platforms for open communication and ongoing conversations about research projects.

Q: You’re using social media to promote your work. What motivates you to do that, and what do you see as the benefit?

While in a sense the first statement is correct, “promotion” is a loaded term. Science is an opaque process, and scientific publications are jargon-laden, dense documents that are inaccessible to all but the field-specific experts. These publications give an idealized view of the scientific process–from clear hypothesis to statistically significant result. The reality is a world messier and less certain than that, and I use my blog to communicate that.

Having students just jump in and read these artificially refined and specialized manuscripts and expecting them to learn from them is like trying to teach English by having someone read Shakespeare: it’s technically correct but the end result will be a mess.

I get a sense that many of my blog’s readers are undergraduate and graduate students, and I aim to communicate the real difficulties and uncertainties of science to them. I remember being there, feeling confused and really dumb because I didn’t “get” scientific papers and could never imagine myself coming up with a novel idea, running an experiment to test it, and writing a paper. I remember looking at the CVs of really smart post-docs and professors and seeing page after page of amazing accomplishments and thinking I was inadequate.

My goal is to demystify the scientific process, to make it more real, to show how hard everything is, but also that it’s doable. I’ve got a whole section of my CV titled “Rejections & Failures” outlining every grant or award I was not given, every paper not published. I believe that listing those failures shows fledgling scientists that the process is hard, but not because it requires super-intelligence, but rather super-diligence.

Q: You recently made an offer on Twitter inviting people to ask you questions about neuroscience. Can you tell us about that?

This exemplifies another reason I blog, use Twitter, etc. When teaching, I took to heart the idea that if you can’t explain something clearly, then you truly have not internalized it and don’t really understand it.

Social media is a way for me to continue sharpening my understanding of difficult concepts. The time investment isn’t important to me–my job is to learn and discover, and this is another aspect of that. And if in the process I make something more clear and accessible to a possible future scientist, all the better. No scientist achieved their breakthroughs because they communicated less.

As for the offer on Twitter, I got quite a number of excellent questions, but a few stood out that really made me think. Specifically, there were three that are directly relevant to my research and that got me digging around the literature some more to figure out the answer. The questions essentially boiled down to two ideas: First, how plastic is a mature brain? And second, how many neurons can you lose before you (or someone else) notices?

Ultimately I wrote a blog post on what I’d found and rolled some of that writing and those ideas into peer-reviewed papers I’m still working on. This kind of challenge, discussion, and ideation exchange is extremely valuable for me, and it’s part of the reason that I make offers such as the one on Twitter or use Q&A sites such as Quora.

Quora is a particularly interesting example. It’s a site populated by very intelligent people, but given the kinds of neuroscience-related questions that appear there, it’s clear that there are still some pervasive misconceptions about how the brain works. On a site such as that, the feedback and discussions seem to flow a bit more easily than they do on my own personal blog, but they’re not limited in scope as on Twitter. It’s a nicer platform for the level of discussion I’m seeking.

It also doesn’t hurt my motivation when I get comments from people such as, “I’m a grown man with a family and a career and [Brad] made me want to become a neuroscientist!” or “I accidentally started liking science stuff thanks to you!”

Q: Lots of scientists are not using social media. When asked why, many say they don’t think people will care about their scientific work. What do you think about that perspective?

People who say such things underestimate the interest level and intelligence of the non-scientist public. When I hear this, in my head it translates to either, “I don’t care about what I’m doing,” or, “I’m not confident enough in what I’m doing to explain it to anyone who may ask really simple questions that undermine what I do.” The former is fine; not everyone needs to “love” their job or work to be excellent at it. The latter is emblematic of unclear thinking.

Q: What tips can you give researchers who are thinking about using social media but don’t know where to start?

Generally the tips I’ve seen from a lot of bloggers are “write consistently” and “be engaging”, but that’s like saying to be a good scientist you need to “work harder and be smarter”: technically true but not very useful. I wish I had some magic formula for how to be successful at using social media for science, but I don’t have such a thing.

Broadly speaking, knowing how to communicate complex ideas effectively is critical, but just as important is knowing how to network, how to spread your ideas, and how to write something other people want to read. You’ve got maybe a few seconds to capture people’s attention online, and getting them to read a 1,000-2,000-word article is hard. Time and attention are premium commodities in people’s lives, and what you’re asking them to do is sacrifice that commodity to you. You have to keep that in mind. When you write, don’t think, “this will only be read by a half dozen of my friends who read my blog.” Instead, think, “this might get picked up and read by tens of thousands of people. Is this worth the time of thousands of people?”

I find social media helpful to clarify my thinking, but other people may have other methods of accomplishing the same result. The only remaining advice I have is to seriously consider the reasons for not using social media: are you not blogging/tweeting/whatever because you honestly think it’s a waste of time and can see no return-on-investment for you? Or, are you not doing it because simplifying your ideas is too challenging?

This Q&A is part of “Digital Media & Science: A Perspectives Series from CTSI at UCSF” and was originally published on the UCSF CTSI website. This series explores how digital media and communications can be used to advance science and support academia.

Related posts by Bradley Voytek

Brad is also interested in leveraging data to modernize research. He’s one of the creators of brainSCANr, an online resource that uses existing publication data to show the probability of relationships between neuroscience topics and ultimately support the discovery of novel research ideas. He is also a fan of zombies, and has devoted some of his time to mapping brain damage that would be caused by zombification.


Describing the Difference Research Has Made to the World

Here is an interesting new blog post by Heather Piwowar about the different ways research can impact the world and the importance of telling them apart. Good food for thought as we think about ways to help researchers analyze how people are reading, bookmarking, sharing, discussing, and citing research online.

I think Anirvan made a great point about thinking of ways we can integrate “altmetrics” data with UCSF Profiles. Some of the metrics mentioned below may be a great starting point.

Here’s what Heather writes:

Figuring out the flavors of science impact. CC-BY-NC by maniacyak on flickr

We have clustered all PLoS ONE papers published before 2010 using five metrics that are fairly distinct from one another: HTML article page views, number of Mendeley reader bookmarks, Faculty of 1000 score, Web of Science citation counts as of 2011, and a combo count of twitter, Facebook, delicious, and blog discussion.

We normalized the metrics to account for differences due to publication date and service popularity, transformed them, and standardized to a common scale. We tried lots of cluster possibilities; it seems that five clusters fit this particular sample the best.
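For readers who want to try something similar on their own metrics, here is a generic Python sketch of the transform-standardize-cluster recipe described above. The file name and column names are placeholders, and the original analysis normalized for publication date and service popularity in ways this sketch does not reproduce.

```python
# A generic sketch of the transform -> standardize -> cluster recipe.
# The CSV file and column names are placeholders, not the original PLoS dataset.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

metrics = ["html_views", "mendeley_readers", "f1000_score", "wos_citations", "shares"]
df = pd.read_csv("paper_metrics.csv")             # one row per paper

X = np.log1p(df[metrics].to_numpy(dtype=float))   # tame the heavy right skew
X = StandardScaler().fit_transform(X)             # put metrics on a common scale

df["flavour"] = KMeans(n_clusters=5, random_state=0).fit_predict(X)
print(df.groupby("flavour")[metrics].median())    # profile each cluster
```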

Here is a taste of the clusters we found.  Bright blue in the figure below means that the metric has high values in that cluster, dark grey means the metric doesn’t have much activity.  For example, papers in “flavour E” in the first column have fairly low scores on all five metrics, whereas papers in “flavour C” on the far right have a lot of HTML page views and Sharing (blog posts, tweeting, facebook clicking, etc) activity. View image and read on


The “Sociometric Badge”: Measuring the Flow of Information and Ideas in Teams

Imagine wearing an electronic sensor that recognizes your communication patterns: your body language, your tone of voice, who you talk to, and where you talk to that person. What may sound like science fiction has already been tested in real-world settings, including innovation teams, post-op wards in hospitals, customer-facing teams in banks, and call centers.

What do you think? Would you wear it?

Dashboard summarizes data and helps people adjust their behavior.

Here is an interesting video with Alex “Sandy” Pentland, director of MIT’s Human Dynamics Laboratory. He explains how it works. According to Pentland, the “dynamics are observable, quantifiable, and measurable. And, perhaps most important, teams can be taught how to strengthen them.”

Key findings:

  • Patterns of communication are the most important predictor of a team’s success.
  • The best predictors of productivity were a team’s energy and engagement outside formal meetings.

The summary of the article is available here.


“Visible Tweets”: A Free Animation Tool To Display Twitter Messages In Public Spaces

CTSI’s visibility on Twitter is growing – thanks to the tweets from CTSI programs and people like Anirvan (see his latest post AMIA 2012 Joint Summit: a report back in tweets).

But how can we leverage and highlight this activity, for example at upcoming events (retreats, conferences, symposia, etc.)? “Visible Tweets” is a great tool to do just that. Type in a search term, for example @CTSIatUCSF, and go…

Try it!

Tweets fly in and out… Here is an example.

How Do You Cite a Tweet in an Academic Paper?

Twitter is getting its own citation format to fit the requirements of “publish or perish.” The Modern Language Association has developed a standard format, and in his post, Alexis Madrigal takes a closer look at the shortcomings of the instructions. He writes:

It’s simple. Also, I just love the “Tweet” at the end. However, it’s curious that no URL is required, especially given the difficulty of Twitter search for anything not said in the past day or two.

Here are the instructions developed by the Modern Language Association:

Begin the entry in the works-cited list with the author’s real name and, in parentheses, user name, if both are known and they differ. If only the user name is known, give it alone.

Next provide the entire text of the tweet in quotation marks, without changing the capitalization. Conclude the entry with the date and time of the message and the medium of publication (Tweet). For example:

Athar, Sohaib (ReallyVirtual). “Helicopter hovering above Abbottabad at 1AM (is a rare event).” 1 May 2011, 3:58 p.m. Tweet.

The date and time of a message on Twitter reflect the reader’s time zone. Readers in different time zones see different times and, possibly, dates on the same tweet. The date and time that were in effect for the writer of the tweet when it was transmitted are normally not known. Thus, the date and time displayed on Twitter are only approximate guides to the timing of a tweet. However, they allow a researcher to precisely compare the timing of tweets as long as the tweets are all read in a single time zone.

In the main text of the paper, a tweet is cited in its entirety (6.4.1):

Sohaib Athar noted that the presence of a helicopter at that hour was “a rare event.”

or

The presence of a helicopter at that hour was “a rare event” (Athar).
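If you need to produce many such entries, a small helper like the following sketch (not an official MLA tool; the field names are our own) can assemble the works-cited form shown above from a tweet’s parts.

```python
# Sketch of a formatter for the MLA works-cited form shown above.
# Field names are our own; the date and time should already be in the
# reader's time zone, as the MLA instructions note.
def mla_tweet_citation(real_name, user_name, text, date, time):
    author = f"{real_name} ({user_name})" if real_name else user_name
    return f'{author}. "{text}" {date}, {time}. Tweet.'

print(mla_tweet_citation(
    "Athar, Sohaib", "ReallyVirtual",
    "Helicopter hovering above Abbottabad at 1AM (is a rare event).",
    "1 May 2011", "3:58 p.m."))
```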

Crowdsourcing the Analysis and Impact of Scholarly Tweets

“Twitter is one of the fastest tools to discover newly published scholarly papers,” Martin Fenner wrote in one of his earlier posts. Now Fenner and Euan Adie have finished the first phase of an interesting new experiment, the CrowdoMeter project.

In the last 2 months, they used crowdsourcing to analyze the semantic content of almost 500 tweets linking to scholarly papers (953 classifications by 105 users for 467 tweets). Their preliminary results show: 

  • 3 predominant subject areas: Medicine and Health, Life Sciences, and Social Sciences and Economics.
  • Most tweets (88%) discussed the papers, 10% were in agreement, 3% disagreed.
  • Most papers were not tweeted by their authors or publishers.

In his recent guest post on Impact of Social Sciences, Fenner makes the argument that “social media, and Twitter in particular, provide almost instant, relevant recommendations as opposed to traditional citations.” He goes on:

A few years from now the ‘personalized journal’ will have replaced the traditional journal as the primary means to discover new scholarly papers with impact to our work.

What is still missing are better tools that integrate social media with scholarly content, in particular personalized recommendations based on the content you are interested in (your Mendeley or CiteULike library are a good approximation) and the people you follow on Twitter and other social media.

Fenner’s view is also based on Gunther Eysenbach’s study from 2011, which showed that “highly tweeted papers were more likely to become highly cited (but the numbers were too small for any firm conclusions; 12 out of 286 papers were highly tweeted)”.

Fenner and Adie are using altmetric.com to track the scholarly impact of research. Other tools – some of which we wrote about – include ReaderMeter, Total Impact, PLoS Article-Level Metrics, and ScienceCard.