NISO ALM standardization workshop
Wed Oct 16, 2013
This was a one-day workshop to discuss the creation of standards around Article Level Metrics. NISO has a grant from the Sloan Foundation to look into this, and this event was the first of three planned information-gathering events. My own opinion on this topic has changed over the past year, evolving from being strongly opposed to broadly supporting the idea of building some kind of standard or best practice for the field. There is currently a tight-knit community of people working on ALMs, and within that community we all know that we are
doing the right thing (TM); however, as interest in the space grows, this close community is not guaranteed to remain fully aware of everything that is happening. In addition, there is a desire in some sectors - librarians, institutions and funders - to get some clarity on what it all means. These two things make me feel that, if not a standard, then at least an agreed set of best practices will help with adoption. I could imagine the following conversation:
Funder one - I’m using the PLOS ALM explorer tool.
Funder two - That looks interesting, but our administration would like some guarantee that it’s not totally unreliable.
Funder one - Well, the way they gather sources is transparently described, and they follow industry best practice, for what it’s worth.
Funder two - Oh, OK. Well, I’ll have a look then and see if it’s helpful.
With that preamble in mind here are my simplified notes from the meeting.
Generally the introduction was fairly standard. By far the most interesting point of discussion was whether this is the right time to do this. The underlying data is still shifting, and the tools of maximal utility have yet to emerge, but if this effort can reach the mindset that needs standards, then I think it can be a good one, as long as the cautionary aspects I worry about can be allayed.
Todd says that he wants the community (scholars) to be able to trust the metrics that the ALM community provides in the same way that they currently trust the impact factor.
Euan Adie - Altmetric.com
Working with 24 publishers. They get about 2M requests to the API per day.
Publishers tend not to care too much about the “metrics”, but authors really like them, especially being able to see specific tweets.
Generally the reaction is very positive, even when Altmetric misses stuff.
The best CTR is 1%, for a badge listed under the article title. Other links to the ALM info for an article have a much lower CTR.
The biggest blocker to adoption for publishers is that it can be scary: what if there is no conversation happening around your article?
Michael Habib - Elsevier
Looking back to May 2008 - Michael shows some data from a survey; back then about 50% of respondents said web 2.0 would play a key role within 5 years (1,800 respondents, mostly researchers and librarians).
One year ago they surveyed a random sample of 54k people from Scopus. They got 3k responses.
- 82% knew IF
- 43% knew H-index
- 10% knew the journal usage factor
- 1% knew about altmetrics
This is really, really interesting. Also of interest is that the metrics with the highest awareness are also considered to be the most useful.
People under 35 liked more metrics. Older people (over 65) don’t like altmetrics.
Africa and developing nations are more open to altmetrics. Europe and the USA like them less.
Tweets don’t correlate well with citations.
Greg Gordon - SSRN - talking about Trust
Thinks that gaming is the bogeyman in the story that could erode trust in ALMs. (I think data quality is the current biggest issue.)
How do you solve the problem before you get stuck with the idea that everything is bad?
He cites research from someone. One of the main concerns with altmetrics is that people just don’t know what they are; they are unfamiliar with them. There is also an information-sharing issue. But a lot of the actual metrics are easily understood - a download, a like.
Heather Piwowar - ImpactStory
“we bleed for each data point”. If people don’t get credit for the data that they create, then they won’t share that data.
Mendeley has many different kinds and variants of a work uploaded to it. The challenge of de-duplicating and keeping track of these different documents is where a majority of their work has gone in the last year. FWIW, I can attest that we had these issues when I was with Mendeley, back in 2010 - 2012.
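To make the de-duplication challenge concrete, here is a minimal, hypothetical sketch of how variants of the same work might be grouped by a normalized key (DOI when present, otherwise a cleaned-up title). The field names and matching rules are my own illustrative assumptions, not Mendeley’s actual logic.

```python
# Hypothetical sketch: collapse variants of one work under a normalized key.
# Field names and matching rules are illustrative assumptions only.
import re
from collections import defaultdict

def dedupe_key(record):
    """Return a normalized grouping key: DOI if present, else cleaned title."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    # Crude fallback: lowercase the title and strip punctuation/extra spaces.
    title = re.sub(r"[^a-z0-9 ]", "", (record.get("title") or "").lower())
    return ("title", " ".join(title.split()))

def dedupe(records):
    """Group records that appear to describe the same work."""
    groups = defaultdict(list)
    for rec in records:
        groups[dedupe_key(rec)].append(rec)
    return list(groups.values())

records = [
    {"doi": "10.1371/journal.pone.0000001", "title": "A Study"},
    {"doi": "10.1371/JOURNAL.PONE.0000001", "title": "A Study (preprint)"},
    {"doi": "",   "title": "Another Paper!"},
    {"doi": None, "title": "another paper"},
]
print([len(g) for g in dedupe(records)])  # each group holds apparent duplicates
```

Real systems need fuzzier matching than this (author lists, publication years, near-duplicate titles), which is exactly why the work is hard.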
Peter Brantley - annotations
Annotations are clearly web-addressable, and as such should work really well with web-addressable documents.
Talks about the discussion around community standards. Todd mentions that some work took place with the MESUR project to look at the business model and business case for creating such a data clearinghouse, and that case study might prove a useful resource now.
Breakout - Business and use cases
The most interesting thing about this session was how strongly both Heather from ImpactStory and Mike from Plum Analytics opposed standardization. They felt that standardization would cause calcification, and that none of their customers have asked for it. In the discussion we were unable to identify a single clearly identified customer for a standardized version of ALMs.
I felt that this session was an important one, as we need to clearly identify who our customer is and what value this thing brings to that customer. I feel that industry-wide adoption of at least a set of best practices could help with wider adoption, but this is only a feeling, and again, the two ALM vendors in the room disagreed with me, so perhaps I’m wrong. I tried to introduce the product management tool “Desired Outcomes” to these questions, but it didn’t work: we didn’t have a clearly enough defined thing to talk about, so we abandoned that framework. The one thing it did help with was focusing our discussion on who the customer is, so we discussed publishers, authors, the ALM community, and to a small extent funders.
I will be interested to see the outcomes of the other planned workshops.