
PLOS ALM 13 day 2

Thu Oct 17, 2013


Connecting ALM and Literature

As I took part in the first session I don’t have many notes from it. I’ve posted the slides from my talk, and I’ll write up more on those in due course. For me the standout talk of the session, if not the entire meeting, was from Jevin West, who talked about using network-ranked data to provide recommendations. The algorithms his group is working on are being tested on SSRN, and will be rolled out to PLOS. The first set of tests indicated that recommendations driven by this algorithm are outperforming a random recommendation, and outperforming a collaborative-filtering-based approach. From discussion the previous day it looks like the emergence of reader-driven metrics might also provide opportunities to explore not only how to make recommendations on the existing literature, but also how to predict future success. In particular, the finding that post-docs and PhD students read more highly cited literature than other demographics on Mendeley begs the question: could we identify a cohort amongst these groups that reads papers before they become highly cited? Aside from those speculations, the work that Jevin presented was both visually appealing and possibly useful. The platform he is working on is open, and is ready to be tried by you, dear interested publisher.
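Jevin’s slides have the details, but the shape of the idea is easy to sketch. Below is a minimal illustration, assuming a personalized PageRank walk over a citation graph - my assumption for illustration, not necessarily his group’s actual algorithm - with hypothetical paper names:

```python
# A minimal sketch of recommendation from network-ranked data: a random
# walk over the citation graph, biased toward papers the reader already
# knows. Illustrative only - not Jevin West's actual algorithm.
import networkx as nx

def recommend(citations: nx.DiGraph, seed_papers: set, top_n: int = 5):
    """Rank unseen papers by personalized PageRank from the seed set."""
    # Restart the walk at the reader's papers; all other nodes get weight 0.
    personalization = {n: (1.0 if n in seed_papers else 0.0) for n in citations}
    scores = nx.pagerank(citations, alpha=0.85, personalization=personalization)
    unseen = [(paper, score) for paper, score in scores.items()
              if paper not in seed_papers]
    return sorted(unseen, key=lambda pair: pair[1], reverse=True)[:top_n]

# Toy citation graph: an edge A -> B means paper A cites paper B.
G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"), ("D", "E")])
print(recommend(G, seed_papers={"A"}))
```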

Lisa Schiff and Carly Strasser talked about the intricacies of introducing ALMs into the context of an organization that has a high bureaucratic overhead. It was nice to be reminded that the sacred DOI only covers a subset of the scholarly record. ARKs are in widespread use, and as a community we should get on top of that. I was very impressed by Carly’s passion to bring change to this system, and I share her hopes that we can make things better.

Putting ALM in Context

Amy Brand Faculty appointments and the record of scholarship

Amy wrote on this topic in eLife, in “Point of view: Faculty appointments and the record of scholarship”.

Harvard only introduced tenure track about 7 years ago. They now have a real commitment to tenuring junior faculty. They do a lot of faculty coaching and mentoring.

She only heard about the San Francisco Declaration on Research Assessment a few days ago - but it mirrors a lot of what Harvard does in terms of faculty appointments.

Harvard only gives tenure to people who have a Harvard degree. If you come from outside Harvard and they want to give you tenure, then they give you an honorary degree.

The main thing that flows through the system is the “Case statement” - a dossier.

The works of scholarship that get featured in the CV are getting more diverse - including software and public communications, which also covers artifacts such as blog posts.

The most important part is the peer review and the peer evaluations - these come from 10-15 independent experts, plus confidential letters from every member of the faculty to the Dean.

They do this analysis with a comparison set, where they look at a group of scholars together. One of the key things is field definition. They look at citations to papers - the journal name is not listed. This citation report is not extremely important; they just want to know if the person is in range, and it’s only interesting if it indicates any outliers. I’ll say that again. For the citation record they don’t look at the journal names. They only use citations to help understand whether the candidate is operating within the normal expected range within their discipline, in comparison to their peers. It’s used as a sanity check, and not insanely used as the key piece of evidence.
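To make the “in range” idea concrete, here is a hypothetical sketch of that kind of sanity check - a simple z-score against the comparison set. The function, numbers, and threshold are my own assumptions, not Harvard’s procedure:

```python
# A hypothetical sketch of the citation "sanity check": is the candidate
# within the normal range of their comparison set? Only outliers matter;
# the absolute count carries no weight by itself.
from statistics import mean, stdev

def in_expected_range(candidate_citations, peer_citations, threshold=2.0):
    """Return True if the candidate is within `threshold` standard
    deviations of the comparison set's mean citation count."""
    mu, sigma = mean(peer_citations), stdev(peer_citations)
    z = (candidate_citations - mu) / sigma
    return abs(z) <= threshold

# Example: a candidate compared against scholars in the same field.
print(in_expected_range(120, [90, 150, 110, 130, 95]))  # True: in range
```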

ORCID is really important. They want to extend the “on-grid” record. They want service info to be included in the ORCID record.

Change will happen in an institution like Harvard if it is driven from the faculty.

Peer review of people and papers tends to happen in closed systems. There is high trust and known expertise, but they are closed. In open systems there is less trust, but the actors can be identified, and that’s an opportunity.

Mike Taylor The many faces of altmetrics: mapping the social reach of research

Mike is with labs.elsevier.com. He has been working on Altmetrics for the last two years. He is looking at the social reach of research.

He finds it helpful to explain ALMs simply by putting them into low-judgment buckets of data classes. This helps to communicate internally. His buckets are (see the sketch after the list):

  • Social activity
  • Component re-use
  • Scholarly commentary
  • Scholarly activity
  • Mass Media mentions
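A rough sketch of what that bucketing might look like in code - the source names here are my illustrative assumptions, not Mike’s actual mapping:

```python
# A minimal sketch of low-judgment bucketing: roll raw ALM sources up
# into data classes. The source-to-bucket mapping is illustrative.
BUCKETS = {
    "twitter": "social activity",
    "facebook": "social activity",
    "github": "component re-use",
    "figshare": "component re-use",
    "blogs": "scholarly commentary",
    "f1000": "scholarly commentary",
    "mendeley": "scholarly activity",
    "citeulike": "scholarly activity",
    "news": "mass media mentions",
}

def bucket_counts(events):
    """Count per-source events by bucket, keeping judgment out of it."""
    counts = {}
    for source in events:
        bucket = BUCKETS.get(source, "unclassified")
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts

print(bucket_counts(["twitter", "mendeley", "github", "twitter"]))
```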

He is trying to see if these buckets can be tied to the different areas in which ALMs can be applied. It turns out that there are lots of areas where research is needed. Mike shows a great graph looking at different types of activity.

A few years ago we were in a black hole. A paper would be published and we would wait a few years before citations came in. ALM data is now starting to shed some light into that darkness, but we need a lot more data; the volume of papers with ALM data remains low.

We need more data points, more buckets.

He had a look at PR notices going out. Many don’t have links to the primary research. Government documents are worse.

Mike mentions Geo Location to research - that’s the first mention of that idea across the three days so far.

Not all buckets are equal.

If research gets picked up in social networks it often drops its formal name and gets known by a nickname, and this happens in real time. As it progresses through these networks, the more the name changes and evolves, the less likely it is that it will continue to be linked back to the original paper.

Juan Alperin Are ALMs/altmetrics propagating global inequality?

Juan is talking about the potential dangers of ALMs. This is a barnstorming talk, one of the other standout highlights of the conference. I’m embarrassed that I didn’t take notes; I must have been too engaged. TL;DR: we have to be careful not to fuck up how we structure these systems, so as not to leave the global south behind. We are building these tools through the lens and web of the affluent North. Just go look at his slide deck now.

Expanding the breadth of ALMs

Eva Amsen The metrics of public referee reports

She is going to talk about a product that doesn’t exist yet - ALMs for referee reports. F1000Research uses CrossMark to link versions of an article together. Great slide on the history of open peer review in the literature. If you are accused of being a predatory journal, having open reviews really helps.

They are providing a 50% reduction in article processing fees on any article submitted in the next year by researchers who do a review for them - smart!

When they asked their community for feedback, they were asked for badges, metrics, and making the reports “citable”.

Is there anything that we can standardize about how we display referee reports? That would allow for aggregation of report data across different journals.
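If such a standard existed, the minimum useful record might look something like this - every field name below is a hypothetical assumption, since no such format exists yet (that was the open question):

```python
# A hypothetical standardized referee report record that journals could
# expose for cross-journal aggregation. All field names are assumptions.
import json

report = {
    "article_doi": "10.7554/eLife.00000",    # illustrative, not a real DOI
    "report_doi": None,                       # set once the report is citable
    "reviewer_orcid": "0000-0002-1825-0097",  # the well-known example ORCID
    "decision": "approved",
    "published_at": "2013-10-17",
    "open": True,
}
print(json.dumps(report, indent=2))
```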

Would a referee report metric affect the ALM metric?

If you are a referee of an article that is very popular, does that credit get to rub off on you? - we could totally do this in eLife.

Martin Fenner & Jennifer Song Building for the future: PLOS ALM application

They use Vagrant to get the PLOS ALM app running. They have a feature that tracks the health of calls out to external APIs.
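I didn’t capture how the health tracking is implemented (the ALM app itself is a Rails application), so here is a minimal illustration of the idea, with hypothetical names: record success, failure, and latency per external source.

```python
# A minimal sketch of tracking the health of calls out to external ALM
# source APIs - not the PLOS ALM app's actual implementation.
import time
import urllib.request
from collections import defaultdict
from typing import Optional

health = defaultdict(lambda: {"ok": 0, "errors": 0, "total_ms": 0.0})

def tracked_get(source: str, url: str) -> Optional[bytes]:
    """Fetch a source API, recording success, failure, and latency."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        health[source]["ok"] += 1
        return body
    except Exception:
        health[source]["errors"] += 1
        return None
    finally:
        health[source]["total_ms"] += (time.monotonic() - start) * 1000
```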

They have integrated ALM scores into the search results page of PLOS. I got to play with the ALM app over the weekend, and was able - with Martin’s help - to set up an instance with eLife DOIs. We will probably roll that out permanently over the next few weeks.

Gregg Gordon The real impact of ALMs

They have about 240k authors in the library. They have about 500k papers, and are getting about 70k papers per year. They are getting to the point where more revisions happen per day than new submissions - which shows that this is a corpus of living documents: they go into SSRN and then they evolve there.

There are a lot of different metrics out there; their first metric was downloads. They spend between 50-100k per year verifying downloads.

There is trust in the system, and authors are now using these downloads as a proxy for other factors of impact.

You see gaming everywhere, you just have to deal with it. They have an internal fraud index report.

They created CiteReader within SSRN. They have introduced eigenmetrics into their system.

Get the book Picture Cook: See. Make. Eat. - a different interface for recipes.

This work is licensed under a Creative Commons Attribution 4.0 International License