Soft peer review


The following paper, "Soft peer review: Social software and distributed scientific evaluation", was passed along to me by alf today. I think another copy has been haunting my file system for a few days, but this seemed like a good reason to sit down with it again. It's by Dario and the abstract is as follows:

Abstract: The debate on the prospects of peer-review in the Internet age and the increasing criticism leveled against the dominant role of impact factor indicators are calling for new measurable criteria to assess scientific quality. Usage-based metrics offer a new avenue to scientific quality assessment but face the same risks as first generation search engines that used unreliable metrics (such as raw traffic data) to estimate content quality. In this article I analyze the contribution that social bookmarking systems can provide to the problem of usage-based metrics for scientific evaluation. I suggest that collaboratively aggregated metadata may help fill the gap between traditional citation-based criteria and raw usage factors. I submit that bottom-up, distributed evaluation models such as those afforded by social bookmarking will challenge more traditional quality assessment models in terms of coverage, efficiency and scalability. Services aggregating user-related quality indicators for online scientific content will come to occupy a key function in the scholarly communication system.

And I get a mention in the acknowledgments, which is cool.

It is a very nice essay on the potential of social bookmarking as a tool for ranking academic articles, in addition to adding metadata to scientific articles. Dario discusses the issue of ranking the expertise of the people doing the bookmarking, and proposes a really nice method to get over the scaling problem that is inherent when we try to introduce manual methods to rank people. He suggests that a user's notes and annotations about a bookmark could be made available on an anonymous basis. Others would have the option to copy these annotations, or rate them. This would be a form of soft peer review on the annotations, which would in turn affect the standing of the person creating them. There would be ways to cheat this system, but with enough signal, one hopes that such noise could be drowned out.
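
Just to make the mechanics concrete for myself, here is a tiny sketch (in Python, entirely my own invention rather than anything from the paper) of how copies and ratings of anonymously presented annotations might be rolled up into a standing score for the hidden annotator:

    # Hypothetical sketch of the "soft peer review" loop described above.
    # All class and field names are my own invention, not from the paper.
    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class Annotation:
        author_id: str        # hidden from raters; only used for scoring
        bookmark_id: str
        text: str
        copies: int = 0       # how many users copied this annotation
        ratings: list = field(default_factory=list)  # e.g. scores in [0, 1]

    def annotator_standing(annotations, copy_weight=1.0, rating_weight=2.0):
        """Aggregate copy and rating signals per (hidden) author into a standing score."""
        standing = defaultdict(float)
        for ann in annotations:
            avg_rating = sum(ann.ratings) / len(ann.ratings) if ann.ratings else 0.0
            standing[ann.author_id] += copy_weight * ann.copies + rating_weight * avg_rating
        return dict(standing)

    if __name__ == "__main__":
        anns = [
            Annotation("user-a", "paper-1", "Key result is in section 4.", copies=3, ratings=[1.0, 0.8]),
            Annotation("user-b", "paper-1", "Nice dataset description.", copies=0, ratings=[0.5]),
        ]
        print(annotator_standing(anns))  # {'user-a': 4.8, 'user-b': 1.0}

The weights are arbitrary; the point is only that the signals others generate (copying, rating) accrue to the annotator without their identity ever being exposed to the raters.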

The paper also pointed out something I'd not seen before, which is pretty amazing.

I really like this paper. Thanks Dario!
