Category Archives: PaperCritic

Want to write a review? Pick your format of choice!

Last week, I summarized some of the main issues with the recent approaches to post-publication peer review (PPPR), such as comments and different types of altmetrics. The conclusion was that, while these approaches have a certain amount of potential, they usually fail to reach a high quality level, predominantly due to their very short and unstructured nature.

As mentioned in the previous post, most researchers regularly write summaries and reviews of research articles, yet these usually remain private. And even if one found an incentive that drove researchers to publish these reviews, they would in most cases be of little value to others. The reason is that every researcher has their own particular focus of interest and writes their reviews accordingly, meaning that reviews will diverge considerably between researchers, particularly ones coming from different disciplines. The solution to this problem, in my opinion, lies in making reviews more systematic even at the personal level.

The need for systematic reviews has so far been relatively well addressed only in the health care sector, with the establishment of the Cochrane Collaboration and Cochrane Reviews in the 1990s. Unfortunately, other disciplines have so far failed to develop a similarly structured format for reviewing their advances. However, I would not necessarily see this as a failure of those disciplines, but as an indication that it can simply be infeasible to agree on any one particular review format for a given research area.

In fact, there is no reason why smaller fields of study should not be given the opportunity to develop their own review formats. Such formats would have a higher chance of being adopted by the field's scholars, because they would provide more value to both review writers and readers than any generalized review format ever could (from this perspective, a free-text review is about as generalized a format as it gets).

To facilitate the above, PaperCritic will be rolling out a review template system this summer that will enable its users to collaboratively define the review formats that are most meaningful for their particular field of study. Furthermore, we will also be launching both public and private interest groups, aimed at bringing together scholars from a single discipline to share their views on the latest research in the field.

Ultimately, the hope is that more structured reviews will make it significantly easier for scientists to both produce and consume them, making the whole process much more enjoyable. Additionally, having a set of established review templates available in one's field might provide just the encouragement researchers need to publish their reviews, by alleviating the fear of not having composed a review well enough.

As ever, comments and suggestions are more than welcome as this is really a proposal and work in progress. Other than that, we’re looking forward to bringing the review templating system to you as soon as possible, so keep following us here and on Twitter.



Filed under PaperCritic

Where is all the quality post-publication peer review?

Post-publication peer review (PPPR) is something that we would all like to see flourish, yet at the same time we seem to struggle to find a proper format for it. Arguably the most accessible and widely accepted form of PPPR is writing comments, either in the form of letters or directly on a publisher's website. However, Kent Anderson has recently discussed the problems associated with comments being used for PPPR, the main issue being a lack of substance and quality in such 'reviews'.

In search of new forms of PPPR, a movement called 'altmetrics' has recently established itself as a potential front-runner, essentially switching the focus from traditional full-text reviews to widespread usage-based metrics, ranging from traditional citation counts to more recent inventions such as tweets, Facebook likes, etc. Unfortunately, these metrics suffer from several drawbacks of their own, as discussed by David Crotty in one of his recent posts. Not surprisingly, the main issue identified was that these metrics are extremely hard to interpret, especially since one needn't be an expert in the field (or even know what an article is about) to, for example, retweet its link to others.

The big question that remains, then, is: does a form of PPPR exist that can provide a sufficient amount of substance, not be too demanding for the reviewer, and in fact provide an incentive for the reviewer in itself? To answer this question, I suggest that we stop looking at public metrics for just a second and focus on the reviewer. This person will in most cases be a (semi-)professional scientist, meaning that they will spend a lot of time reading and evaluating the work of others, looking for gaps that could be addressed in future research.

In effect, every researcher spends half their time writing personal comments and reviews on published books and articles, some of which end up in the "Introduction", "State of the art" and "Literature review" sections of the researcher's own articles, books, dissertations, etc. There is already an abundance of quality PPPR out there; our task is to encourage researchers to make (some of) this material public, by providing the right tools and formats.

How can we do this? The short answer: by providing even more value to the researcher. I'm still mulling over the details, so I'll save the full proposal for a follow-up post, which should appear later this week. In the meantime, I'd be happy to hear your thoughts so far, even though I realize the proposed solution is the most interesting part of all this.



The pyramid of Open Science – Which way is up?

Having just read Dan Gezelter's (now relatively old) post on the definition of Open Science, it struck me that the two main streams of the movement have been around for quite a long time, namely: the openness of data, documents and code; and the openness of the communication and collaboration that leads to the creation of the above. However, if one looks through the blog posts on the Web and the talks given at different Open Science conferences, it becomes instantly apparent that the former far outweighs the latter. But is that really justified?

Is it possible that we're trying to tear down the 'pyramid' of science by approaching it from the wrong end entirely, i.e. by trying to chip away at its highly resistant foundation instead of moving the looser bricks at the top first? In other words, could it be that the use of public review tools like PaperCritic, altmetric measures, and open scientific collaboration tools (for which, sadly, I don't know a really good example) will actually drive science to become more open by setting a standard for open communication? In a way, these tools could (and should) satisfy the motivations Dan outlines more and more as they become more widely accepted, leading to more openness in the actual sharing of data and thus ultimately putting the pyramid on its head!

What I'm trying to say is certainly not that we should stop trying to make scientific papers and the corresponding code and data as open as possible. On the contrary, those remain very important aspects of the whole movement. However, in order to truly revolutionize the way science is conducted, we might just want to put a bit more effort into raising the value of scientific communication and collaboration as an accepted metric in the community.


A proactive approach to making peer review more transparent

It seems that every second blog post written, talk given, or tweet tweeted on the topics of science and research nowadays is in one way or another concerned with the shortcomings of the established peer review process and the ways in which papers could be evaluated more openly and transparently. For now, the discussions usually get stuck at the crossroads between the need to publish in high-impact publications and the fact that these are more often than not rather "old-fashioned" when it comes to their peer review practices. But isn't it in our hands, as scientists, to change this?

Admittedly, the reviews we receive are anonymous and, to put it mildly, often not full of praise – yet that shouldn't stop a researcher from wanting to share a review with others, provided there is an easy technical solution for doing so. If you think about it, whenever a review is particularly unhelpful, it is in everyone's interest to share it, if only to expose how poor the reviewers of a particular publication are. If the review reflects really well on the submitted paper, then there is a natural impetus for the scientist to share it. And even if the review is very critical of the paper (in which case scientists tend to switch into full-on denial mode), it is still in your interest to share it and collect more opinions on the paper, in order to understand whether there really is a flaw in it, or whether the reviewer was just being unprofessionally picky for some reason.

Going somewhat in this direction, RePEc has recently issued a call for editors to submit the reviews written by the referees of their journals for open display. Even if the idea is very much biased towards whatever the editors deem appropriate for showcasing, I applaud the effort. However, scientists themselves should also be more proactive in this area and simply publish the reviews that they receive!

PS. There might be a slight legal caveat to this idea, so I asked the organizer of an event whether he would see a problem with me publishing the reviews I received on my submission for presentation. The answer was that while it's something they probably can't do anything about legally, the reviewers might get "annoyed if what they wrote appeared on the Internet unexpectedly". Well, especially considering that this would still be done anonymously, I don't see a reason why they should be, unless they're ashamed of what they wrote.



Looking forward to the Open Science Summit 2011

As you might already be aware, the Open Science Summit 2011 is coming up this weekend, with a host of very exciting speakers and surely an abundance of heated discussions on the theme of making science more open. While the range of topics to be addressed at this year's event (see the program) is quite broad, there is one in particular that I'm especially looking forward to: the introduction of the 'social' aspect to science.

What I'm referring to is that, even if one sidesteps the issue of open access, there remains the trend of science being conducted by 'closed' entities, be they individuals, research groups or large-scale collaborations. Regardless of how large any such entity might be (some collaborative projects span tens of universities and hundreds of scholars), research is usually still performed from start to finish within the entity itself, without much feedback from the rest of the community. It is only after the corresponding publication appears that any kind of 'social' involvement can begin. But even that is not the biggest issue!

As can be seen from this comparison between blog posts and letters to the editor, there are easy ways of voicing one's opinion in the science domain. Yet, as noted by the author, where blogging (or any other social approach, for that matter) falls short is its value in terms of career building and padding one's CV. Now, it would be unreasonable to ask scientists to ignore their careers and devote their lives to science alone; what we need instead is a way of introducing a structure of rewards for social engagement in science. The idea has been quietly gathering momentum in recent years, as exemplified by Kathleen Fitzpatrick's recently published book Planned Obsolescence (see this blog post on Inside Higher Ed for a very interesting interview) and a session on microattribution at this year's Science Online London Conference. If this trend continues at the current rate, we can be optimistic about a scientist's level of social engagement becoming a valid professional metric in academic institutions sooner rather than later.

All that being said, if you're attending the event, make sure you enjoy all the talks, but also take a look at the different poster/app presentations that will be running during the breaks throughout the weekend. If not, you can still follow all the talks and discussions via the live stream that will be made available. Hopefully, the organizers will be able to stream some of the off-stage presentations as well, so you might catch a glimpse of PaperCritic there. I will post an update on my Twitter account with more specific details if I find out when our presentation will be streamed.


PaperCritic basics in place – Where to next?

Since the public launch of PaperCritic was announced a month and a half ago, we have been working hard on making the basics of the app as solid as possible. In particular, with the help of some very valuable comments from our first users, we were able to identify and fix the majority of the low-level usability issues on our site, concerning things such as accessing one's Mendeley library and easily accessing, editing and deleting one's own reviews.

As ever, we are not going to sit and wait for something to happen on its own, and so we will continue improving our app as best we can. Given the number of very encouraging and positive comments about PaperCritic that we have received over the better part of the last two months, we would like to take this opportunity to ask our potential users:
Where to next? What features should we add to make the site more attractive and usable for everyone?

The following is a list of the features that have already been requested (in no particular order). Which of these do you think should be addressed as soon as possible (and which could perhaps be put on hold for now)? Can you think of something essential that is seemingly missing from the list? Do let us know!

  • Email subscriptions: Users should be able to subscribe to reviews of papers that interest them (this is already possible via RSS on a per-paper basis, though not cumulatively). In addition, users could auto-subscribe to the documents in their Mendeley library.
  • Pre-publication review: Note that this is actually possible as Mendeley has an “Unpublished work” document type, but it might be useful if unpublished works and their reviews were more explicitly separated from post-publication reviews on the site.
  • Trackback system: While we hope that users will take the time to come to PaperCritic to submit their reviews, we also want to be the hub for all mentions of a paper. Allowing bloggers and tweeters to send trackbacks to PaperCritic will let us aggregate all related content about a paper and become a central repository of public opinion in scientific publishing.
  • API: This is probably the most common request for any app nowadays – there should be an API that allows publishing houses, bloggers and others to fetch every review of a given paper via its Mendeley UUID, DOI or a similar identifier.

The list can clearly be extended quite a bit, but these seem to be the most relevant points for now. As mentioned above, if you agree, disagree or have alternative suggestions – do let us know! However, I would like to stress that this post intentionally touches only on the technical features of the site – attracting users and increasing the impact of post-publication reviews will be discussed shortly.
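To make the API request from the list above a bit more concrete: the sketch below shows one way review-lookup URLs could be built from a paper identifier. Note that this is purely illustrative – PaperCritic has no public API yet, and the base URL, path layout and supported identifier types here are all assumptions, not a published interface:

```python
from urllib.parse import quote

# Hypothetical endpoint -- PaperCritic has not published an API;
# this merely sketches the identifier-based lookup described above.
BASE_URL = "https://papercritic.example/api/v1/reviews"

def reviews_url(identifier: str, id_type: str = "doi") -> str:
    """Build the URL for fetching all reviews of a given paper.

    `id_type` mirrors the identifiers mentioned in the feature list:
    a DOI or a Mendeley UUID.
    """
    if id_type not in ("doi", "mendeley_uuid"):
        raise ValueError(f"unsupported identifier type: {id_type}")
    # DOIs contain '/' characters, so percent-encode the identifier
    # fully before embedding it in the path.
    return f"{BASE_URL}/{id_type}/{quote(identifier, safe='')}"

print(reviews_url("10.1371/journal.pone.0022594"))
# -> https://papercritic.example/api/v1/reviews/doi/10.1371%2Fjournal.pone.0022594
```

A consumer such as a publisher's site or a blog widget could then fetch that URL and render the returned reviews alongside the paper.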



Introducing PaperCritic – an open publication review tool powered by Mendeley API

We live in a world where our lives are broadcast via Facebook and Twitter, our news consumption is dominated by blogs and our knowledge is defined by Wikipedia articles. Yet somehow science, which should really be at the forefront of such advances, remains 20 years adrift in terms of the amount of collaboration in its echelons, not to mention when it comes to opening up to the broader public.

In fact, right now, the only acceptable way of presenting one’s academic work and obtaining critical reviews for it is through the tedious and obscure submit-and-wait process. We at PaperCritic find this way of promoting science severely outdated and simply unacceptable.

Thankfully, ever since Mendeley, the biggest player in the publication management market, opened the door to its resources with an Open API, it has become possible to build simple yet powerful apps that leverage the Mendeley database to promote science and make it more collaborative.

Having become aware of this, as well as of the Mendeley/PLoS API Binary Battle, we quickly put our hands to work and came up with PaperCritic – an app that offers researchers a way of obtaining feedback on their scientific work, and everybody interested a way of providing it, in a fully open and transparent environment.

Apart from helping the scientific community, PaperCritic also helps you, as a researcher or a science enthusiast, organize your publication library even better. You've surely already embraced all the neat things Mendeley offers, such as tags, summaries and in-text notes. But what if you want to rate the different aspects of a publication, or to write a critical review of your own? Well, using PaperCritic you can now rate and review any publication, just like the big guns at IEEE!

Disclaimer: While PaperCritic is powered by the Mendeley API, the two services are in no way affiliated with one another.

