
Sunday, October 16, 2011

Should we review for any old journal?

It's no secret that academic publishers are able to cut expenses by getting free content, free review, and often free editorial expertise from the scientific community. Web hosting, copy editing, and printing costs remain, of course, so publishers cover these expenses by charging for the content - often through subscription and article access fees levied (directly or indirectly) on the very same researchers who provide their expertise for free. When some commercial publishers are generating impressive profits in spite of the bad economy, many researchers are rightfully perturbed. How should we, as a research community, respond?

Mike Taylor, writing at SV-POW! and Times Higher Education, argues that (among other strategies) scientists should refuse to review manuscripts submitted to non-open publications. To his credit, he has put his money where his mouth is (no surprise to those who know how solid Mike's character is). If done by enough people, this will surely have the desired effect of slowing down the cogs of the big non-open access journals (and making open access [OA] a more appealing alternative). But, what is the collateral damage? Is it worth it? Who would even receive the message?

I argue that, unless carefully constructed, such reviewing boycotts may never be noticed by some of the concerned parties. A typical journal editor will think "oh, Reviewer 1 refused to review. . .on to Reviewer 2." Even if the refusal to review is accompanied by a note explaining the reasoning behind the refusal, only the editor will ever see it (and potentially the publishing admins - who have little vested interest in changing the status quo).

Second, when the pool of qualified reviewers is small to begin with, this could have the consequence of letting some really bad stuff slip into publication. I've reviewed enough papers and read enough literature to know that unless I flag some manuscripts, nobody else will. (Richard expresses a similar sentiment in his comments at SV-POW!.) There may be some schadenfreude in watching non-OA journals become associated with increasingly substandard work, but it would also mean that we're left with a mess to clean up (particularly in the case of "new" species). Profits are reported quarterly, but we have to deal with crummy taxonomy forever.

Third, the journals are not the ones hurt most directly by review boycotts; it is the authors. The journal will almost always find someone else to review the paper (with a delay as these reviewers are recruited); and if not, the manuscript will be returned for lack of qualified reviewers (with a delay as the paper is prepared for submission elsewhere). Rightly or wrongly, publications are a primary currency of academia. If getting that publication delayed means my friend or colleague doesn't get a job, or a grant, or tenure, I have hurt them, not just the profits of the journal.

There are some constructive alternatives, fortunately - given a choice, I would say #2 and #3 have the most utility and best balance intended and unintended consequences.

1) Refuse to review the paper, but fully explain why in a letter submitted directly and separately to the editor, journal, and authors. This way everyone gets the message - not just a select few.

2) Review the paper, but include a message with the review (perhaps both in the review text and in a direct letter to the authors) on the shame of the work being locked behind a paywall. Make the authors think twice about whether or not the intended audience will ever see the paper.

3) Submit your own work to open access journals, cite work in open access journals, and encourage your colleagues to do the same.

I sympathize with the sentiment that we academics shouldn't be propping up the questionable practices of some publishers, but we also need to avoid shooting ourselves (and our colleagues) in the foot as a result.

Update: Mike Taylor has posted a response to this post at SV-POW!

Tuesday, January 6, 2009

Responding to Peer Review

So far in this series, I've discussed my approaches to reviewing a paper. Aside from writing a quality paper and recommending potential reviewers, the process is out of the author's hands. Reviewers do their thing, and then editors do their thing. After a few weeks or months, you (the author) get an email in your inbox with the dreaded(?) results.

Inevitably, this email never has the verdict in the subject line. So, you'll have to click and open the email. . .and see one of several possible overall responses:
  1. Reject without review. This is the most common sort of response you'll see if you're uppity enough to aim for Science or Nature. You might also get this if the editor deems the paper unacceptable on grounds of style (you didn't follow the instructions to authors), scientific quality (often reserved for the wackiest of paleo conspiracy theories), or scope (your focus is too narrow for the intended audience--this is what you'll see most frequently). Usually, the editor won't provide any detailed comments beyond a "thanks, but no thanks."
  2. Rejection after review. This may be done for reasons of scientific quality or scope. Perhaps your arguments or methodology are flawed. Sometimes, the reviewers will think it's a good paper but just not of sufficiently broad scope or importance for the readership of the journal.
  3. Accept with major revision. The paper has some good stuff, but needs a lot of work in one area or another. Maybe additional data are needed, or expansion of a description.
  4. Accept with minor revision. I love seeing this one! The paper is generally good, except for a few minor points. Change these, and you're as good as published (barring any last-minute antics from the editors).
  5. Acceptable as is. This has never happened to me, and I don't know that it even happens that often in the most lax of journals (and almost never in the most rigorous!).
One other thing. . .
Don't be afraid to bug the editors if you think the review is taking an abnormally long time to get back to you. More than once, I've discovered that a reviewer had forgotten about the paper they promised to review, and my little push was enough to get things moving again. Use your judgment, and ask friends or colleagues who have published in the journal about their waiting periods. In these days of electronic manuscript submission, three months is not an unusual turn-around time, and six months is usually an acceptable time to start rattling cages.

What if you're rejected?

This happens to everyone--more than once. When you get that rejection letter, take a good, hard look at it. Usually, there are some good reasons for rejection (and editors and reviewers who are doing their jobs will give you a good summary of these reasons), and if you fix up the paper you might have better luck in another journal. If it's a question of scope, resubmit to a more specialized journal (more on journal selection in an upcoming post). If it's a question of scientific content, try to improve your research and resubmit elsewhere. If you think you were unfairly scorned, you might consider a response to the editors - but only if your case is very, very good. There have been one or two times when I've been close to doing so, but decided against it after "cooling off" for a day or two. It stinks, but the editors and reviewers are often correct in their decision to reject your treasured manuscript (at least in my own personal experience)!

What if you need to do revisions?
If I get a request to revise and resubmit, I'll usually set the reviews and manuscript aside for a day or two, and then return. This often prevents silly mistakes or misinterpretations made in the heat of having one's pet project marked up by an anonymous reviewer. More than once, I've returned to a review only to find out that the reviewer didn't actually say what I thought they said. This has saved me a ton of work in the end!

Then, do the revisions! I'll often do the easy stuff (grammar, references, and stylistic tweaks), and then move on to the more grinding aspects of the revision. This often takes time, but it's time well spent. As I make the revisions, I'll also craft my "response to the reviews" letter. As Dave Hone mentioned elsewhere, it's not necessary to note every little change--my letter usually has a line to the effect of, "All stylistic and grammatical suggestions were incorporated into the manuscript." This letter is important--it's your chance to really highlight how you've incorporated the editors' and reviewers' comments. It never hurts to thank those involved, either.

What if they're full of it?
Sometimes, you'll get a comment or request that's completely unreasonable, or just flat-out wrong. In this case, first make sure you're absolutely certain that you're in the right--did you mis-state a point, or perhaps your phrasing was confusing to the reviewer? If this is the case, fix it and move on. If the reviewer is full of it, be proactive in a positive way. In your response to the reviews, state why you disagree with the reviewer and provide evidence to back this up. In most cases, editors will accept this. Be polite and thorough, and you're in the clear.

The last step
Let your manuscript sit for a day or two. Review all of the comments, and make sure you addressed them. Now, you're ready to resubmit. The manuscript is revised, figures are finalized, and your resubmission letter is complete. Good luck!

One final thing
Remember--the editors and reviewers are not your adversaries (well, 99 percent of the time). They are colleagues and scientists who want to help you publish the best research possible. I've had a variety of experiences with peer review--some agonizing, some ridiculously drawn-out, some finished in a breathtakingly short amount of time. In every single case, my work was improved by the process.

I'll close with an anecdote. A while back, I poured my efforts into what I thought was a great paper. I submitted to a high-profile paleo journal, with high hopes. It was rejected without review on grounds of being too narrow in scope. Despite this disappointment, I was absolutely ecstatic that the editor wrote two pages of comments on how I could improve the manuscript and resubmit elsewhere. Even though he didn't send it out for review, he read the paper, understood it, and took the time to provide constructive feedback. My research is better because of this one editor. I find inspiration in his efforts, and hope that you will too.

For more on responding to reviews, see Dave Hone's post on "How to write a paper." Jeff Martz also offers some excellent perspectives on the importance of peer review over at his blog. I agree 100 percent.

Saturday, January 3, 2009

Hone-ing Peer Review

With apologies for the exceptionally bad pun, Dave Hone of Archosaur Musings fame has just posted a nice entry on how to review a paper. He fills in some gaps in my coverage, emphasizes some different areas, and provides an all-around good introduction to the topic. Definitely worth reading!

Tuesday, December 30, 2008

Nuts and Bolts of Edited Volumes

What fantastic timing! Dave Hone has just posted his perspectives on "How to edit a volume of papers." The post provides a behind-the-scenes look at the editorial process, from someone who has just finished up a major project (Flugsaurier: Pterosaur Papers in Honour of Peter Wellnhofer -- a must-have for anyone with pterosaur or general archosaur inclinations). Go check it out!

Saturday, December 27, 2008

Nuts and Bolts of Peer Review II

In the previous posting, I talked about the lead-up to the peer review process. Here, I'll discuss what I look for when I'm reviewing a paper.

The Text
This is the part of the review that usually takes the most time. I read through the entire paper from start to finish, usually in about two sittings. I find that if I try to plow through the whole thing at once, I'll get a little lazy towards the end (especially if it's a really long paper). As I read, I look carefully at every aspect of the text. Is each section logically and carefully written? Are all of the necessary references cited? Could the authors cite a few more papers? Are any portions of the paper redundant or superfluous? Is more detail needed in some sections?

In the introduction, I look to make sure that the authors really introduce their topic. I want to see a good case made for the necessity of the research, with a clear summary of previous work. It doesn't look good if the authors ignore previous publications on the topic--it gives the perception that they didn't do sufficient background research, or that they're downplaying others' contributions and overplaying their own.

In the methods section, I want to see a full explanation of what the authors did. If there's a novel method, it sure as heck had better be written up in detail. In cases of well-established methods, it's usually ok to just refer to a previous paper.

In the results section, I read through and make sure that the results make sense, and that the various graphs and tables match up with the data. For that matter, are there graphs and tables? If there's a description of specimens, I want to see good, clear text that fully describes the specimens under investigation. Figures, photos, and diagrams are important in this regard. Also, it's great to see lots and lots of measurements. If a structure is "relatively large," how large is it, and in relation to what?

The discussion and conclusions are usually where I pay very close attention. This is the "meat" of the paper, and the part that will often have the most impact on future researchers. Do the interpretations flow from the results? Are there alternative interpretations? Do the authors lay out any further questions that arise from the data? How do the interpretations of this paper fit into the broader literature? Once again, I want to see why this paper is important (without overselling the research). Many times, the discussion and conclusions "make or break" the paper for me as a reviewer.

Illustrations
Do the illustrations show what the authors want them to show? Are there enough illustrations? Too many? Is color necessary or helpful? When I'm reviewing illustrations, I try to look at the problem from two different angles. As a paleontologist, I will probably rely on the figures as a reference in the future. So, I want to make sure that the figures are as useful as possible! For photographs, is the resolution appropriate? Is there a scale bar? Are contrast and lighting such that the relevant features are clearly visible? Are the views of the specimens sufficient, or are there other views that might be useful additions to the paper?

The second angle is more of an aesthetic one, although this frequently ties into the previous angle. Are the figures attractive? Is there too much white space, or not enough between panels of a multi-part figure? For graphs, are the symbols legible? If color is used in a chart, is the color necessary, or could it be turned into grayscale without loss of information or clarity (this could save a lot of money for the authors and/or the publisher, because color pages ain't cheap!)? If color is used, are red and green featured in a way that will make life difficult for the color-impaired (there are quite a few color-blind folks out there)?

Size also matters. Is the full-page illustration suggested by the author really necessary, or could it be a single column figure without loss of information? Is that single-column figure just too small to show this critical feature?

Finally, I'll read the caption with the figure. Does the caption make sense? Is everything labeled correctly?

By and large, paleontologists do a good job with figures. We're usually pretty visual folks, and there are some great graphical eyes out there. So, this is often one of the "fun" parts of the review.

Don't Forget the Basics
There are always some basic tasks that I try to undertake in reviews.

If there is a phylogenetic analysis, I'll run the dataset myself and see if I can duplicate the author's (authors') results. Sometimes an old version of the tree will accidentally "piggyback" into the final version of the manuscript or figures, and it's important to be able to catch this. I'll also "spot check" any matrices to make sure that things are coded correctly (within reason, of course).

For statistics, I want to make sure that A) the methods are appropriate to the question; B) the assumptions of the statistical methods are met; and C) the discussion of the results logically follows from the results themselves. As an author, I try (sometimes less successfully than more successfully) to follow these three criteria when designing and writing up my research.
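
To make those three criteria concrete, here is a toy sketch in Python using SciPy. The femur-length measurements are invented purely for illustration, and the 0.05 cutoff is just convention: the idea is simply to check a test's assumption (here, normality, via Shapiro-Wilk) before trusting the test itself, and to fall back to a nonparametric alternative when the assumption looks shaky.

```python
from scipy import stats

# Hypothetical femur-length measurements (mm) for two samples;
# the numbers are invented purely for illustration.
sample_a = [152.0, 148.5, 155.2, 150.1, 149.8, 153.3, 151.0, 147.9]
sample_b = [161.2, 158.9, 163.5, 160.0, 159.4, 162.1, 158.0, 164.3]

# A) Is a two-sample t-test appropriate for "do the means differ"?
# B) Check its normality assumption before relying on it.
_, p_a = stats.shapiro(sample_a)
_, p_b = stats.shapiro(sample_b)

if p_a > 0.05 and p_b > 0.05:
    # Normality not rejected: proceed with the parametric test
    stat, p_value = stats.ttest_ind(sample_a, sample_b)
else:
    # Otherwise fall back to a nonparametric alternative
    stat, p_value = stats.mannwhitneyu(sample_a, sample_b)

# C) Only now interpret p_value in the discussion
```

Criterion C is the part no script can check: whatever p_value comes out, the written discussion still has to follow logically from it.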

Finally, I'll check the reference list against the text citations. It's amazing how frequently citations slip through the cracks (even in this age of Zotero and EndNote), even in my own writing. This is always the last thing I do with a paper before writing up my review--I read through the manuscript from start to finish, and mark each citation in the bibliography. If anything's missing (or extra), this is noted for the authors. Missing or extra citations are never a big deal in terms of manuscript acceptability, but catching them is important for ensuring the overall quality of presentation.
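
This cross-check is mechanical enough to sketch in code. The snippet below is a hypothetical first pass only: it assumes simple "Author (year)" or "(Author, year)" citations, and real manuscripts (multiple authors, "et al.", in-press dates) would defeat it--so it supplements, rather than replaces, the read-through.

```python
import re

def check_citations(manuscript_text, bibliography_entries):
    """Flag mismatches between in-text citations and the bibliography.

    Assumes simple "Author (year)" or "(Author, year)" citation styles;
    treat the results as candidates for a manual check, not a verdict.
    """
    # Match patterns like "Smith (2008)" or "(Smith, 2008)"
    pattern = r"([A-Z][a-z]+),? \(?(\d{4})\)?"
    cited = {(m.group(1), m.group(2)) for m in re.finditer(pattern, manuscript_text)}
    listed = set(bibliography_entries)

    missing_from_bibliography = cited - listed   # cited but not listed
    never_cited = listed - cited                 # listed but never cited
    return missing_from_bibliography, never_cited

text = "As Smith (2008) noted, hadrosaur limbs vary (Jones, 2005)."
bib = [("Smith", "2008"), ("Jones", "2005"), ("Brown", "1999")]
missing, extra = check_citations(text, bib)
# Here "extra" flags Brown (1999): listed in the bibliography, never cited.
```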


The Final Write-Up
The last step for a reviewer is to write up the review in an intelligible manner, both for the editors and the authors. Of course, they'll all be getting the marked-up copy of the manuscript. But, I also think it's important to have a more narrative summary of the review. This is usually a page or so in length, and I'll cover a handful of topics.

First, I'll write a little summary paragraph--stating authors, title, and the main point of the article. I'll try to summarize my opinion on the clarity of the writing and figures, as well as the general quality of the science and novelty of the research. I also like to provide a brief statement on the likely audience for the paper--specialists on a particular taxon, workers in a particular subfield, a general audience, etc. Finally, I provide my opinion on the overall publishability of the paper--major or minor revision, or (very, very rarely) unacceptable in its current format. These are just opinions, of course--the editors always have the final say.

Next, there's a section on "general comments." If the authors did something really great, I'll put that here. It's depressing to get a review back where the reviewers don't acknowledge the good points of the paper, and as an author I've always appreciated advice on what I'm doing well. The most important information, though, is the broad comments on the paper. Is the overall analysis well-conceived? Etc.

Finally, I write up a section of "specific comments." These are organized by page, paragraph, and line number, and address individual sections of the text. Is there a misprint here? Should they have cited someone else there? Is this or that sentence not completely clear?

Anonymity or No?
In many scientific fields and for some journals, anonymity of reviewers is a given. Ostensibly, this is to protect the reviewers from potential future retribution as well as to ensure that previous personality clashes between the reviewers and the authors don't lead the authors to unreasonably reject comments. Some journals, however, allow one to "sign" a review. When given this option, I'll take it. For one, I try to avoid writing anything that I wouldn't be willing to say to the authors in person (although perhaps exceptional circumstances might someday lead me to reconsider). This doesn't mean I won't dispute crummy research--just that I won't hide behind anonymity in order to pursue a personal agenda. Secondly, I think it's useful to the authors to be able to contact the reviewers if there are any specific questions. This doesn't happen often, but I have occasionally had to pursue this route as an author myself. Finally, paleo is a small field, and how easy is it to remain truly anonymous? There might only be three experts on taxon X, and two of them are authors on the paper. It doesn't take a tenured professor to figure out who at least one of the anonymous reviewers might be (especially when the anonymous reviewer recommends the citation of 12 different papers, all by the same author).

So, that's how I usually go about reviewing papers. Everyone has a slightly different method and emphasis, but this seems to work for me. Any thoughts? Comments? For you authors, what is most useful for you when you receive reviews?

In the final post of this series, responding to reviews. . .

Tuesday, December 23, 2008

Nuts and Bolts of Peer Review

Peer review is one of the important cornerstones of academic paleontology--this process attempts to ensure that manuscripts considered for publication contain good science from start to finish. It certainly ain't perfect, but until someone proposes a practical, consistently applicable alternative, peer review (when properly implemented) is a pretty effective "gatekeeper" of the literature.

Of course, peer review relies on us--the scientists. If you're at all interested in academic paleontology, you have been, are, or will be involved in this process at some level. Personally, I've been very fortunate to take part in peer review at both the "giving" and "receiving" end. In this series of posts, I'll talk about my approach to reviewing manuscripts (which I've now done for a number of journals and other publishers) and receiving reviews of my own papers. The intent is not to say that my philosophies are perfect (or that I follow them perfectly myself)--they'll certainly change as I gain experience as a researcher and a reviewer. Instead, the purpose is to provide some insight into the process for those who are relatively new to the field.

Why Participate in Peer Review?
From a pragmatic perspective, you have to play the game if you want to get into the literature (in most cases). From a scientific perspective, peer review ostensibly separates the wheat from the chaff (or at least lets the chaff blow over to another journal). Like it or not, peer review is something every scientist has to face.

Personally, I have found the process to be quite rewarding. As an author, the feedback I receive from reviews is invaluable--and no matter how much it hurts sometimes, the reviewers are usually right. They catch my awkward sentences, inappropriate analogies, and convoluted analyses. Sometimes they'll even suggest new angles that greatly increase the scientific value of the paper. Often, it involves more work--but my papers are always better for it. Sure, it's never fun to have mistakes pointed out, but I'd much rather this happen at the manuscript stage than in a public rebuttal on the pages of a journal or blog.

As a reviewer, I just have fun with it! First, I won't deny that it's a bit of an ego boost to be asked to review a paper. Beyond this, it's very gratifying to be able to use some of my (obscure) research skills to contribute to the scientific process. Science involves community--and peer review is an important civic duty. Also, it's kinda fun to learn about the latest breaking research months before it appears officially in print.

Why Me (or Her, or Him) as a Reviewer?
Journals usually find their reviewers through two sources--from the authors or from an informal "reviewer pool." As an author, you always have the option to suggest a list of potential reviewers (whether in the cover letter or in a specific part of the online submission form). This also means you can suggest the exclusion of reviewers. If you think that Professor X will give an unfair review no matter how good your paper is, it's perfectly within your rights to request he or she be excluded as a reviewer (and there's no need to say why). But, recognize that it's also perfectly within the journal's rights to ignore your suggestion. However, my general sense is that journal editors will try to respect authors' wishes whenever possible. So, this is probably the most control you (as an author) have over the process (aside from writing good quality papers). It goes without saying that one shouldn't abuse this privilege--don't try to pack the review panel with "easy" reviewers (good editors will see right through this), or exclude someone just because you think they might be "tough." An editor once told me that authors don't realize that the referees suggested by the authors are often the toughest critics of manuscripts!

The second source of reviewers for journals is from a "reviewer pool"--the informal list of experts whom editors think would know something about the manuscript in question. The easiest way to get into this pool is to publish your own quality work. Once you're known as an expert in hadrosaur hindlimb biomechanics, the odds are pretty good that you'll get papers to review on hadrosaurs, hindlimbs, biomechanics, or any combination of the three.

Edited volumes present a special case. Typically, the editor(s) for such a volume will draw from the contributing authors as reviewers for each other's papers. If you're the sort of person who likes reviewing papers (or whom the editors learn does a good job at reviewing), this can be a lot of fun, and/or a lot of work.

What Happens
The first step in the process, for a reviewer, is an email from the journal editor. Often, this comes as a form letter with the paper title, list of authors, and perhaps the abstract. The reviewer is given a choice--will you accept the responsibility of reviewing, or decline?

A reviewer may choose to decline for one of several reasons. Perhaps he or she doesn't have time at the moment to review the manuscript properly. Perhaps the manuscript is so completely out of the expertise of the reviewer that it's not worth the effort. Or, maybe the requested reviewer had a non-negligible role in the research, and it thus wouldn't be appropriate to review the manuscript.

If the reviewer has seen the manuscript before, this requires some care. Perhaps it was as a reviewer for another journal from which the paper was rejected. Perhaps the reviewer looked over the paper for the authors before they submitted the final version. Either way, it's usually a good idea to let the editors know. These aren't deal-breakers (manuscripts can change substantially between drafts, after all), but editors usually appreciate knowing this sort of information.

Accepting the invitation to review a paper is a grave responsibility, with lots of unwritten obligations. As a reviewer, you promise to provide a fair, thorough, and timely report on the manuscript. The Golden Rule applies here--would I, as an author, want to receive a sloppy evaluation delivered a year after initial submission, in which the referee clearly hadn't bothered to read the paper? Hopefully the answer is obvious.

Coming soon. . .reviewing text, figures, analyses, and much more!