Synthesis of breakout sessions from day 1 - institutions and metrics
Fri Sep 26, 2014
Thanks to Kevin Dolby, Martijn Roelandse, Mike Taylor and Andrea Michalek for taking the notes from each of the breakout sessions; I have synthesised them here.
Altmetrics could be used as a way to indicate the pathway of impact
Institutions should define their game plan: what do we want to achieve, and what metrics can help get us there? They could give guidance to researchers on which platforms to adopt (the landscape is cluttered, though institutions themselves probably don't know it well either). That said, funders are behind the principle that universities should drive which metrics they want to collect, and the set of standards, instead of prescribing metrics (don't be led by what gets measured; define the change you want to effect first).
Researchers still don't know about these metrics. There are some routes to education, in particular via the library (a course exists in Sheffield), but there is a general sense of "it's not worth the time". A key is going to be getting the younger researchers to adopt. We need to find incentives (I'm not going to mention what form those incentives could take).
On the topic of adoption, it's clearly discipline-dependent (there is a first-mover problem within a field: if others are doing it, it can be seen as more acceptable).
One university required faculty to spend "two points" on community outreach; managing the department Twitter account allowed them to tick this requirement.
In contrast to what one group reported, another reported that younger researchers were more engaged, while those at the top showed no interest.
Making the metrics personal was considered appealing.
It’s clear we can’t mandate participation.
Can there be a standard? No, for the reasons above (mainly around participation). So if there isn't one, anyone who provides altmetric data will inevitably curate it favourably. When does this become cheating? How do we, as funders, learn to read altmetric data? What other evidence might be provided?
Gaming will happen; perhaps we could embrace the fact that altmetrics can incentivise researchers to make their research available in open access. In addition, don't forget the Humanities!
Altmetrics could be used to raise the institutional profile, and even help with tracking public engagement, however …
Metrics (both new metrics and traditional citations) count "hits", so negative and positive hits are counted equally. This leaves the "quality" problem unresolved.
Looking at which metrics are valuable at a discipline level vs an institution level could also be interesting.
For late adopters of social media, showing them social media metrics about their own work could help demonstrate the value of social media to them.
Do please use standards: DOIs, ORCIDs, etc.
Where artefacts live in different places (when content is promiscuous), find a way to have usage data flow between the different object silos.
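The point about standards and silos can be sketched in code: if every platform keys its usage records on a standard identifier such as a DOI, counts for the same artefact can be merged no matter where they were collected. This is a minimal illustration with made-up silo names and counts, not any real platform's API.

```python
# Sketch: aggregate usage counts from multiple hypothetical "silos"
# by keying every record on its DOI, so metrics for the same
# artefact can flow together across platforms.

from collections import Counter

def merge_usage(*silos):
    """Sum per-DOI usage counts reported by any number of silos.

    Each silo is a dict mapping a DOI string to a usage count.
    DOIs are case-insensitive, so we normalise to lowercase
    before merging.
    """
    total = Counter()
    for silo in silos:
        for doi, count in silo.items():
            total[doi.lower()] += count
    return dict(total)

# Hypothetical counts from a repository, a reader app and a blog tracker
repository = {"10.1234/abc": 120, "10.1234/def": 15}
reader_app = {"10.1234/ABC": 30}
blog_links = {"10.1234/def": 5}

print(merge_usage(repository, reader_app, blog_links))
# {'10.1234/abc': 150, '10.1234/def': 20}
```

Without a shared identifier (or with identifiers compared case-sensitively), the repository's and reader app's counts for the same paper would never be combined; that is the practical payoff of "do please use standards".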
How do we stop altmetrics being misused, à la the JIF? Since there is no safety catch on altmetrics, people are free to misuse them. Openness, communication, and conferences like this one will help.