A decent attempt to stifle altmetrics at birth

Heather Piwowar has a note in Nature about a minor change in NSF grant review policy – allowing applicants to be more expansive in the research outputs they cite as evidence of their eminence and productivity. I’ve been skeptical about ‘altmetrics’, of which Piwowar is a proponent, since I first heard of it. The idea is that ‘likes’ on Facebook, and so on, measure the impact of a scientific work. Piwowar’s piece is densely packed with peerless arguments against altmetrics. For example:

Even when applicants are allowed to include alternative products in grant applications, how will reviewers know if they should be impressed? They might have a little bit of time to watch a short video on YouTube demonstrating a wet-lab technique, or to read a Google Plus post describing a computational algorithm.

I know the answer. They should not be impressed. You plan to mention a Google Plus post in your grant application? Good luck with that. I would close the application at that point and move on to the next one.

What are the problems with using alternative measures of impact? They are shallow, they are subject to gaming, and they may well refer to material that has not been peer reviewed. If I “like” a paper, there is no indication that I read it (still less that I understood it), or whether I just liked the title. I doubt, of course, that I will ever be able to rate a paper as ‘insubstantial and inconsequential’, so we will drown in relentless positivity. I could force the people in my lab to get a bunch of fake email addresses and like our stuff to death. Many of the annotations in public databases are of very poor quality – they are useful for bioinformatics only in as much as they can provide ideas for real experiments. Counting how often people have linked to a database is a very tenuous guide to the quality of that database. And many of the links, or likes, may never lead to a real research output anyway.

There is a way to assess the quality of an application. Read the application, and read the papers that support the application. Give some bonus points for service. And when you’ve downloaded the material you need, step away from the computer.

4 thoughts on “A decent attempt to stifle altmetrics at birth”

  1. What can I say? I will just post the first two quick thoughts that came to my mind when I read this article:

    1. that fake video about the new iPhone 5: http://www.kungfugrippe.com/post/29903123779/replacing-human-connection
    “Am I good?”… “Well, if everyone tells me I am on Facebook… then I must be GREAT!” (but I lose the ability to evaluate my real qualities for myself; I only look at my reflection in the social network).

    2. the fable of the bear, the monkey and the pig, which ends like this: “Authors, of those who judge of you / A wise man’s blame may make one sad, / But a fool’s praise is twice as bad”.

    How do we distinguish in this ‘altmetrics’ system the pig from the monkey, the wise from the fool?

    Here’s the fable:
    http://books.google.es/books?id=h3wBAAAAQAAJ&pg=PA26&lpg=PA26&dq=fable+the+bear+the+monkey+and+the+pig&source=bl&ots=Kvq7C80dA6&sig=jnCmjs9IdogXZGC848nJhof4_KQ&hl=en&sa=X&ei=CMvuUNjPJIi7hAeojYH4BQ&redir_esc=y#v=onepage&q=fable%20the%20bear%20the%20monkey%20and%20the%20pig&f=false

  2. Pingback: Why you should ignore altmetrics and other bibliometric nightmares

  3. Pingback: More on Altmetrics | Serious Piffle

  4. Pingback: The Leiden Manifesto | Serious Piffle
