UCSF’s top 20 most diverse internally collaborative departments

When UCSF researchers collaborate between departments, how diverse are the collaborations? Here are the top 20 UCSF departments, ranked by the average number of other UCSF departments represented among the co-authors of their multi-departmental papers (from among the 39 departments whose researchers had a total of 25+ multi-departmental publications published between January 2012 and November 2013).

Details: Data is drawn from UCSF Profiles, and is based on a list of all publications listed on PubMed published between Jan 2012–Nov 2013 whose authors include groups of researchers with primary affiliations to more than one UCSF department. We counted only those publications from researchers with a listed department, and only those departments whose current associated researchers published 25+ publications in conjunction with current members of other UCSF departments between Jan 2012–Nov 2013. No attempt was made to account for the widely varying sizes and scopes of different departments, the fact that researchers may have multiple departmental affiliations, or the fact that some publications may have been authored before the researchers were affiliated with their current primary departments at UCSF. These are the top 20 departments, out of a total of 39 that match our criteria.

  1. Physiological Nursing: co-authors from avg. 2.57 other UCSF departments, among 116 multi-department papers
  2. School of Nursing Dean’s Office: co-authors from avg. 2.44 other UCSF departments, among 52 multi-department papers
  3. Anesthesia/Perioperative Care: co-authors from avg. 1.84 other UCSF departments, among 69 multi-department papers
  4. Physiology: co-authors from avg. 1.83 other UCSF departments, among 29 multi-department papers
  5. Family Health Care Nursing: co-authors from avg. 1.64 other UCSF departments, among 47 multi-department papers
  6. Laboratory Medicine: co-authors from avg. 1.63 other UCSF departments, among 104 multi-department papers
  7. Pharmaceutical Chemistry: co-authors from avg. 1.63 other UCSF departments, among 120 multi-department papers
  8. Pathology: co-authors from avg. 1.62 other UCSF departments, among 234 multi-department papers
  9. Radiation Oncology: co-authors from avg. 1.60 other UCSF departments, among 53 multi-department papers
  10. Microbiology and Immunology: co-authors from avg. 1.57 other UCSF departments, among 49 multi-department papers
  11. Cellular & Molecular Pharmacology: co-authors from avg. 1.57 other UCSF departments, among 74 multi-department papers
  12. Orofacial Sciences: co-authors from avg. 1.57 other UCSF departments, among 53 multi-department papers
  13. HDF Comprehensive Cancer Center: co-authors from avg. 1.55 other UCSF departments, among 31 multi-department papers
  14. Anatomy: co-authors from avg. 1.55 other UCSF departments, among 55 multi-department papers
  15. Pediatrics: co-authors from avg. 1.53 other UCSF departments, among 321 multi-department papers
  16. School of Nursing Community Health Systems: co-authors from avg. 1.52 other UCSF departments, among 31 multi-department papers
  17. Surgery: co-authors from avg. 1.50 other UCSF departments, among 227 multi-department papers
  18. Biochemistry & Biophysics: co-authors from avg. 1.49 other UCSF departments, among 75 multi-department papers
  19. Neurological Surgery: co-authors from avg. 1.47 other UCSF departments, among 393 multi-department papers
  20. Cardiovascular Research Institute: co-authors from avg. 1.45 other UCSF departments, among 53 multi-department papers
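As a rough illustration of how a ranking like this can be computed, here is a minimal Python sketch over toy data. The paper set, department names, and the lowered paper cutoff are placeholders for illustration only, not the actual UCSF Profiles data or code.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical input: each paper mapped to the set of UCSF departments
# whose members appear as co-authors (illustrative data only).
papers = {
    "pmid1": {"Pathology", "Pediatrics", "Surgery"},
    "pmid2": {"Pathology", "Anatomy"},
    "pmid3": {"Pediatrics", "Neurology"},
}

MIN_PAPERS = 2  # the post uses 25+; lowered here for the toy data

def diversity_ranking(papers, min_papers):
    # Keep only multi-departmental papers.
    multi = [depts for depts in papers.values() if len(depts) > 1]
    per_dept = defaultdict(list)
    for depts in multi:
        for d in depts:
            # Number of *other* departments on this paper.
            per_dept[d].append(len(depts) - 1)
    rows = [
        (d, mean(counts), len(counts))
        for d, counts in per_dept.items()
        if len(counts) >= min_papers
    ]
    # Rank by average number of other departments, descending.
    return sorted(rows, key=lambda r: r[1], reverse=True)

for dept, avg_other, n in diversity_ranking(papers, MIN_PAPERS):
    print(f"{dept}: avg. {avg_other:.2f} other departments over {n} papers")
```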

UCSF’s top 20 internally collaborative departments

Some UCSF departments consistently reach out to collaborate with other members of the UCSF community. Here are the top 20 UCSF departments whose researchers have the highest proportion of publications co-authored with members of other UCSF departments, from among departments whose researchers had a total of 100+ publications published between January 2012 and November 2013.

Details: Data is drawn from UCSF Profiles, and is based on a list of all publications listed on PubMed published between Jan 2012–Nov 2013 whose authors include groups of researchers with primary affiliations to more than one UCSF department. We counted only publications from researchers with a listed department, and departments with 100+ publications by current associated researchers between Jan 2012–Nov 2013. No attempt was made to account for the widely varying sizes and scopes of different departments, the fact that researchers may have multiple departmental affiliations, or the fact that some publications may have been authored before the researchers were affiliated with their current primary departments at UCSF. These are the top 20 departments, out of a total of 42 that match our criteria.

  1. Epidemiology & Biostatistics: 51.1%
    424 of 829 publications co-authored with other UCSF departments
  2. Proctor Foundation: 50.3%
    82 of 163 publications co-authored with other UCSF departments
  3. Pathology: 49.2%
    234 of 476 publications co-authored with other UCSF departments
  4. Physiological Nursing: 45.5%
    116 of 255 publications co-authored with other UCSF departments
  5. Neurological Surgery: 43.9%
    393 of 896 publications co-authored with other UCSF departments
  6. Orofacial Sciences: 42.7%
    53 of 124 publications co-authored with other UCSF departments
  7. Family Health Care Nursing: 37.0%
    47 of 127 publications co-authored with other UCSF departments
  8. Clinical Pharmacy: 36.9%
    58 of 157 publications co-authored with other UCSF departments
  9. Family & Community Medicine: 36.2%
    54 of 149 publications co-authored with other UCSF departments
  10. Radiology and Biomedical Imaging: 35.0%
    449 of 1284 publications co-authored with other UCSF departments
  11. Psychiatry: 33.5%
    252 of 753 publications co-authored with other UCSF departments
  12. Pharmaceutical Chemistry: 33.3%
    120 of 360 publications co-authored with other UCSF departments
  13. Pediatrics: 32.5%
    321 of 989 publications co-authored with other UCSF departments
  14. Anatomy: 31.8%
    55 of 173 publications co-authored with other UCSF departments
  15. Ob/Gyn & Reproductive Sciences: 30.7%
    185 of 602 publications co-authored with other UCSF departments
  16. Cell & Tissue Biology: 30.5%
    32 of 105 publications co-authored with other UCSF departments
  17. Dermatology: 30.1%
    129 of 429 publications co-authored with other UCSF departments
  18. Medicine: 27.7%
    1257 of 4545 publications co-authored with other UCSF departments
  19. Biochemistry & Biophysics: 26.8%
    75 of 280 publications co-authored with other UCSF departments
  20. Neurology: 26.8%
    400 of 1495 publications co-authored with other UCSF departments
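The collaboration rate behind this second ranking is simple to compute. Below is a small sketch that reproduces the Epidemiology & Biostatistics figure from the list above; the function name and cutoff handling are illustrative, not taken from the original analysis.

```python
def collaboration_rate(total_pubs, multi_dept_pubs, min_total=100):
    """Share of a department's publications co-authored with other
    UCSF departments; None when below the 100-publication cutoff."""
    if total_pubs < min_total:
        return None
    return 100.0 * multi_dept_pubs / total_pubs

# Figures from the list above (Epidemiology & Biostatistics).
rate = collaboration_rate(829, 424)
print(f"{rate:.1f}%")  # 51.1%
```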

CTSA 2013 Annual Face to Face: The Power of Storytelling

Hosted by: University of New Mexico’s Health Sciences Center (HSC) in cooperation with UNM’s Clinical and Translational Science Center (CTSC)

This year’s Clinical and Translational Science Awards (CTSA) communications key function committee (CKFC) Annual Face to Face focused on the critical role of storytelling in lifting research out of its silos to a wider audience.

Richard Larson, MD, PhD, UNM HSC Vice Chancellor for Research, compared communicators to ambassadors of information – after all, “research ignored is research wasted.”

Purpose/Objectives of the Annual F2F:

  • Increase understanding and support of NCATS and NIH priorities
  • Improve awareness of CTSA value, dissemination of key information, and collaboration among key stakeholders across the consortium
  • Inspire CKFC members through new connections, skill building, clear direction, and storytelling

Here’s a selection of tweets by CTSA communicators during the two-day conference:

Continue reading

Interesting & cool! Sign Language Researchers Broaden Science Lexicon – NYTimes.com



Imagine trying to learn biology without ever using the word “organism.” Or studying to become a botanist when the only way of referring to photosynthesis is to spell the word out, letter by painstaking letter.

For deaf students, this game of scientific Password has long been the daily classroom and laboratory experience. Words like “organism” and “photosynthesis” — to say nothing of more obscure and harder-to-spell terms — have no single widely accepted equivalent in sign language. This means that deaf students and their teachers and interpreters must improvise, making it that much harder for the students to excel in science and pursue careers in it.

“Am I having a stroke?” UCSF Researchers Test New Way of Connecting Physicians With Information Seekers Online

Five questions with UCSF neurologist and stroke researcher Anthony Kim about his new study on how the Internet can help connect physicians with people who are searching for health information online and potentially reduce the incidence of preventable disease.

Anthony S. Kim, MD, MAS, is assistant clinical professor of neurology at the University of California, San Francisco (UCSF) and Medical Director of the UCSF Stroke Center. His research focuses on improving the diagnosis and cost-effective management of stroke and transient ischemic attack (TIA, also called “mini-stroke”). An estimated 800,000 new strokes occur each year in the U.S., making stroke the fourth leading cause of death in America. Anthony Kim believes that the Internet opens up new opportunities that will change the way we develop interventions and conduct research to improve health.

Q: Millions of Americans search online for health information each year. Scientists are using this type of data to better understand flu outbreaks, the seasonal variance of kidney stones, the demographic prevalence of stroke, and even to demonstrate the online effectiveness of health awareness campaigns. What did you learn in your latest study?

We were surprised to see that tens of thousands of people were regularly ‘asking’ a search engine about stroke-related symptoms, in many cases shortly after the onset of those symptoms. In fact, every month, about 100 people were finding our study website by entering the query “Am I having a stroke?” directly into their Google search box.

One of the challenges with mini-stroke is that most people do not seek urgent medical attention because the symptoms are transitory by definition. So people don’t realize that it is a medical emergency. Even though the symptoms may have resolved, the risk of a subsequent stroke is very high—upwards of 11% within the next 90 days—with most of this risk concentrated in the first hours and days after the mini-stroke. So getting the message out there about urgent medical attention is key.

We started this study because we thought that if people who have had a mini-stroke are looking online for information on their symptoms, then rather than just listing static health information about the disease on a website, maybe we could engage them by making the website more interactive and asking them to enter some of their symptoms online. And we wondered whether we could use this information to assess whether or not it was a true TIA or stroke and then encourage them to seek urgent medical attention as appropriate.

One third of the people we identified hadn’t had a medical evaluation for mini-stroke yet, which is critical, because it is a medical emergency. Instead of calling a doctor or going to the emergency room, many people were turning to the Internet as the first source for health information.

Q: How did your approach work exactly?

When a person searched on Google for stroke-related keywords, a paid text advertisement “Possible Mini-Stroke/TIA?” appeared with a link to the study website (Image). The ad appeared on the search results page and on related websites with existing content about the topic.

When users clicked on the text ad link, they were directed to the study website. Those visitors who met all of the study’s entry criteria were asked to provide informed consent online. They then reported their demographic information and symptoms based on a risk score developed for use by clinicians.

We were notified in real-time as soon as someone enrolled, and then we arranged for two vascular neurologists to follow up with the patient by telephone.

Q: You tested the approach for about four months. What’s your verdict?

We definitely think that there is a lot of potential here. About 60% of U.S. adults say that their real-life medical decisions have been influenced by information they read online. This changes the way we think about providing medical care and conducting research.

With a modest advertising budget, we were able to attract more than 200 people to our study website each day from all 50 states. About one percent of them (251 out of 25,000) completed the online questionnaire, which allowed us to contact them for follow up. Although this seems low at first, it is comparable to conversion rates in other domains of online advertising.

Also, even though the people who joined the study were a highly selected group, the incremental costs for reaching an additional person were low and the potential for applying a targeted and cost-effective public health intervention in this group would still be very interesting to evaluate in the future.

Before we started, we thought that we might lose people throughout the enrollment process since we confirmed eligibility and asked for consent online, but we didn’t. For the most part, if people were interested in participating, they completed the entire online enrollment process.

During follow up calls, we learned that 38% of enrollees actually had a mini-stroke or stroke. But fully a third of them had not seen a doctor yet. Our approach made it possible to connect with these people fairly efficiently and early on in order to influence their behavior acutely.

Despite these potential advantages, Internet-based public health interventions that target people who are looking for health information online are still underdeveloped and understudied. There’s a lot for us to learn in this space.

Q: What online tools did you use to carry out your project?

We used Google AdWords and Google’s Display Network to target English-speaking adults in the U.S. During the four-month enrollment period, the tool automatically displayed our ads more than 4.5 million times based on criteria such as location, demographics, and search terms.

Ideally, to minimize ongoing costs you would want to build and optimize a website so that it ranks highly among the non-paid (organic) search results. Non-profits can also take advantage of Google Grants, a program that supports in-kind donations of advertising resources to help selected organizations promote their websites on Google.

Q: Do you have any tips for others who want to develop similar projects?

We quickly realized that it helped to work closely with our Institutional Review Board (IRB) given that this is a new and evolving area of research, and to ensure data security and safety mechanisms are in place to protect participants. I definitely recommend that.

It’s also important to be realistic about the goals and metrics of success, and not to over-interpret numbers that seem to reflect low engagement. We saw that most visitors (86%) immediately exited the website within a few seconds of arriving at the home page. This probably reflected people who were looking for something else and clicked away immediately. But the beauty of the Internet is that it is very efficient to reach people across a wide geographic area very quickly. So it is not unexpected that we would also screen visitors who may not be qualified for the study or are not interested in enrolling.

Groups interested in using this approach should think about selection bias, authentication, validation, and the “digital divide”. Even though there is some evidence that disparities in access and adoption of Internet technologies are narrowing in the U.S., depending on the goals and target for your study or intervention the reach of the Internet is not uniform.

But selection bias issues aside, for a public health intervention you may be most interested in other metrics such as the number of people reached per dollar spent, or the burden of disease averted per dollar spent, which the Internet is particularly suited to help optimize.

And, it’s definitely beneficial to bring different subject matter and methods experts to the table. Knowledge of search engine optimization, online accessibility, website and user interface design is not necessarily part of the core expertise of a traditional clinical researcher, but developing these skills and interacting with experts in these areas could become very important for the new cadre of clinical researchers and public health professionals coming down the pipeline.

The original article was published on CTSI at UCSF

How Social Media Is Changing the Way We Talk About Science

Five Questions With UCSF Neuroscientist Bradley Voytek

Brad Voytek, PhD, a post-doctoral fellow at the University of California, San Francisco, makes use of big data, mapping, and mathematics to discover how brain regions work together and give rise to cognition. In his work as a researcher, science teacher, and outreach advocate, he regularly uses social media such as his blog Oscillatory Thoughts, Twitter, and Quora. In 2006, he split the Time magazine Person of the Year award.


Q: You’re interested in leveraging data to modernize science. Do you see a role for social media in changing research? 

Social media already is changing research. In many ways! First, there’s a direct effect wherein scientists are beginning to use data collected by social media organizations to analyze behavioral patterns, such as a study from 2011 that looked at millions of tweets to analyze fluctuations in mood.

The way we conduct and communicate research is also changing. Since the 1600s, scientists have communicated findings through peer review. But these results are static, and conversations regarding specific study details, methods, etc. were private. Now, scientific publishing organizations like Nature Network or SciVerse and professional sites such as Mendeley and ResearchGate are providing platforms for open communication and ongoing conversations about research projects.

Q: You’re using social media to promote your work. What motivates you to do that, and what do you see as the benefit?

While in a sense the first statement is correct, “promotion” is a loaded term. Science is an opaque process, and scientific publications are jargon-laden, dense documents that are inaccessible to all but the field-specific experts. These publications give an idealized view of the scientific process–from clear hypothesis to statistically significant result. The reality is a world messier and less certain than that, and I use my blog to communicate that.

Having students just jump in and read these artificially refined and specialized manuscripts and expecting them to learn from it is like trying to teach English by having someone read Shakespeare: it’s technically correct but the end result will be a mess.

I get a sense that many of my blog’s readers are undergraduate and graduate students, and I aim to communicate the real difficulties and uncertainties of science with them. I remember being there feeling confused, and feeling really dumb because I didn’t “get” scientific papers and could never imagine myself coming up with a novel idea, running an experiment to test it, and writing a paper. I remember looking at the CVs of really smart post-docs and professors and seeing page after page of amazing compliments and thinking I was inadequate.

My goal is to demystify the scientific process, to make it more real, to show how hard everything is, but also that it’s doable. I’ve got a whole section of my CV titled “Rejections & Failures” outlining every grant or award I was not given, every paper not published. I believe that listing those failures shows fledgling scientists that the process is hard, but not because it requires super-intelligence, but rather super-diligence.

Q: You recently made an offer on Twitter inviting people to ask you questions about neuroscience. Can you tell us about that?

This exemplifies another reason I blog, use Twitter, etc. When teaching, I took to heart the idea that if you can’t explain something clearly, then you truly have not internalized it and don’t really understand it.

Social media is a way for me to continue sharpening my understanding of difficult concepts. The time investment isn’t important to me–my job is to learn and discover, and this is another aspect of that. And if in the process I make something more clear and accessible to a possible future scientist, all the better. No scientist achieved their breakthroughs because they communicated less.

As for the offer on Twitter, I got quite a number of excellent questions, but a few stood out that really made me think. Specifically, there were three that are directly relevant to my research and that got me digging around the literature some more to figure out the answer. The questions essentially boiled down to two ideas: First, how plastic is a mature brain? And second, how many neurons can you lose before you (or someone else) notices?

Ultimately I wrote a blog post on what I’d found and rolled some of that writing and those ideas into peer-reviewed papers I’m still working on. This kind of challenge, discussion, and ideation exchange is extremely valuable for me, and it’s part of the reason that I make offers such as the one on Twitter or use Q&A sites such as Quora.

Quora is a particularly interesting example. It’s a site populated by very intelligent people, but given the kinds of neuroscience-related questions that appear there, it’s clear that there are still some pervasive misconceptions about how the brain works. On a site such as that, the feedback and discussions seem to flow a bit more easily than they do on my own personal blog, but they’re not limited in scope as on Twitter. It’s a nicer platform for the level of discussion I’m seeking.

It also doesn’t hurt my motivation when I get comments from people such as, “I’m a grown man with a family and a career and [Brad] made me want to become a neuroscientist!” or “I accidentally started liking science stuff thanks to you!”

Q: Lots of scientists are not using social media. When asked why, many say they don’t think people will care about their scientific work. What do you think about that perspective?

People who say such things underestimate the interest level and intelligence of the non-scientist public. When I hear this, in my head it translates to either, “I don’t care about what I’m doing,” or, “I’m not confident enough in what I’m doing to explain it to anyone who may ask really simple questions that undermine what I do.” The former is fine; not everyone needs to “love” their job or work to be excellent at it. The latter is emblematic of unclear thinking.

Q: What tips can you give researchers who are thinking about using social media but don’t know where to start?

Generally the tips I’ve seen from a lot of bloggers are “write consistently” and “be engaging”, but that’s like saying to be a good scientist you need to “work harder and be smarter”: technically true but not very useful. I wish I had some magic formula for how to be successful at using social media for science, but I don’t have such a thing.

Broadly speaking, knowing how to communicate complex ideas effectively is critical, but just as important is knowing how to network, how to spread your ideas, and how to write something other people want to read. You’ve got maybe a few seconds to capture people’s attention online, and getting them to read a 1,000–2,000-word article is hard. Time and attention are premium commodities in people’s lives, and what you’re asking them to do is sacrifice that commodity to you. You have to keep that in mind. When you write, don’t think “this will only be read by a half dozen of my friends who read my blog.” Instead, think, “this might get picked up and read by tens of thousands of people. Is this worth the time of thousands of people?”

I find social media helpful to clarify my thinking, but other people may have other methods of accomplishing the same result. The only remaining advice I have is to seriously consider the reasons for not using social media: are you not blogging/tweeting/whatever because you honestly think it’s a waste of time and can see no return-on-investment for you? Or, are you not doing it because simplifying your ideas is too challenging?

This Q&A is part of “Digital Media & Science: A Perspectives Series from CTSI at UCSF” and was originally published on the UCSF CTSI website. This series explores how digital media and communications can be used to advance science and support academia.

Related posts by Bradley Voytek

Brad is also interested in leveraging data to modernize research. He’s one of the creators of brainSCANr, an online resource that uses existing publication data to show the probability of relationships between neuroscience topics and ultimately support the discovery of novel research ideas. He is also a fan of zombies, and has devoted some of his time to mapping brain damage that would be caused by zombification.

Describing the Difference Research Has Made to the World

Here is an interesting new blog post by Heather Piwowar about the different ways research can impact the world and the importance of telling them apart. Good food for thought as we think about ways to help researchers analyze how people are reading, bookmarking, sharing, discussing, and citing research online.

I think Anirvan made a great point about thinking of ways we can integrate “altmetrics” data with UCSF Profiles. Some of the metrics mentioned below may be a great starting point.

Here’s what Heather writes:

Figuring out the flavors of science impact. CC-BY-NC by maniacyak on flickr

We have clustered all PLoS ONE papers published before 2010 using five metrics that are fairly distinct from one another: HTML article page views, number of Mendeley reader bookmarks, Faculty of 1000 score, Web of Science citation counts as of 2011, and a combo count of twitter, Facebook, delicious, and blog discussion.

We normalized the metrics to account for differences due to publication date and service popularity, transformed them, and standardized to a common scale. We tried lots of cluster possibilities; it seems that five clusters fit this particular sample the best.

Here is a taste of the clusters we found. Bright blue in the figure below means that the metric has high values in that cluster, dark grey means the metric doesn’t have much activity. For example, papers in “flavour E” in the first column have fairly low scores on all five metrics, whereas papers in “flavour C” on the far right have a lot of HTML page views and Sharing (blog posts, tweeting, facebook clicking, etc.) activity.
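The pipeline Piwowar describes (normalize, transform, standardize, then cluster) can be sketched roughly as follows. The toy data, the log transform, and the use of k-means are all assumptions for illustration; the post does not specify the exact transforms or clustering algorithm used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-paper metrics standing in for the post's five:
# HTML views, Mendeley bookmarks, F1000 score, WoS citations, sharing.
X = rng.poisson(lam=[200, 10, 1, 5, 3], size=(300, 5)).astype(float)

# One plausible reading of "transformed and standardized":
# log-transform the skewed counts, then z-score each metric.
Z = np.log1p(X)
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)

def kmeans(Z, k, iters=50, seed=0):
    """Minimal k-means; returns a cluster label per paper."""
    rng = np.random.default_rng(seed)
    centroids = Z[rng.choice(len(Z), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every paper to every centroid.
        d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = Z[labels == j].mean(axis=0)
    return labels

labels = kmeans(Z, k=5)  # five "flavours", as in the post
print(np.bincount(labels, minlength=5))  # papers per cluster
```

In practice one would compare several values of k (the post says "we tried lots of cluster possibilities") rather than fixing k=5 up front.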


The “Sociometric Badge”: Measuring the Flow of Information and Ideas in Teams

Imagine wearing an electronic sensor that recognizes your communication patterns, e.g., your body language, tone of voice, who you talk to, and where you talk to that person. What may sound like science fiction has already been tested in the real world, including on innovation teams, post-op wards in hospitals, customer-facing teams in banks, and call centers.

What do you think? Would you wear it?

Dashboard summarizes data and helps people adjust their behavior.

Here is an interesting video with Alex “Sandy” Pentland, director of MIT’s Human Dynamics Laboratory. He explains how it works. According to Pentland, the “dynamics are observable, quantifiable, and measurable. And, perhaps most important, teams can be taught how to strengthen them.”

Key findings:

  • Patterns of communication are the most important predictor of a team’s success.
  • The best predictors of productivity were a team’s energy and engagement outside formal meetings.

The summary of the article is available here.

How Do You Cite a Tweet in an Academic Paper?

Twitter is getting its own citation format to fit the requirements of “publish or perish”: the Modern Language Association has developed a standard format for citing tweets. In his post, Alexis Madrigal takes a closer look at the shortcomings of the instructions. He writes:

It’s simple. Also, I just love the “Tweet” at the end. However, it’s curious that no URL is required, especially given the difficulty of Twitter search for anything not said in the past day or two.

Here are the instructions developed by the Modern Language Association:

Begin the entry in the works-cited list with the author’s real name and, in parentheses, user name, if both are known and they differ. If only the user name is known, give it alone.

Next provide the entire text of the tweet in quotation marks, without changing the capitalization. Conclude the entry with the date and time of the message and the medium of publication (Tweet). For example:

Athar, Sohaib (ReallyVirtual). “Helicopter hovering above Abbottabad at 1AM (is a rare event).” 1 May 2011, 3:58 p.m. Tweet.

The date and time of a message on Twitter reflect the reader’s time zone. Readers in different time zones see different times and, possibly, dates on the same tweet. The date and time that were in effect for the writer of the tweet when it was transmitted are normally not known. Thus, the date and time displayed on Twitter are only approximate guides to the timing of a tweet. However, they allow a researcher to precisely compare the timing of tweets as long as the tweets are all read in a single time zone.

In the main text of the paper, a tweet is cited in its entirety (6.4.1):

Sohaib Athar noted that the presence of a helicopter at that hour was “a rare event.”


The presence of a helicopter at that hour was “a rare event” (Athar).