Tim Gowers recently suggested an answer to “How might we get to a new model of mathematical publishing?” which I highly recommend. While there has been much talk for years now on how to replace the journal system, I think that his proposal is explicit enough and simple enough to actually be implementable if seriously attempted.
One of the many comments to his post questioned the basic premise:
[…] I have no idea why people constantly claim that the journal system is broken. It seems to work just fine to me. The only real issues I’ve heard people bring up are 1. the open access issue, and 2. the cost issue. [Here comes a discussion of how these issues are on their way to being solved — a point with which I mostly agree] Aside from the above two issues, what exactly is this suggestion supposed to accomplish?
I’d like to answer this explicitly and talk about the problem with the journal publication system, even assuming that we’ve completely solved the open-access and cost issues: Journals are simply not fulfilling their main three functions: dissemination, verification, and allocation of attention.
- Dissemination: While the original main point of a print journal was to let Prof. A see the results of Prof. B relatively quickly, it is clear that, in the age of the Internet, journals only slow dissemination compared to, say, posting papers on the arXiv.
- Verification: Despite pretenses, refereeing is not really trustworthy. Results of some importance become believed not when they are refereed but only after the community has studied them for a while.
- Allocation of attention: an important goal of leading journals is to filter the “important” papers out of all the submitted ones, so that readers need not read everything but rather only the important stuff. I am afraid that today so much is published that most of what one reads in most journals should have been filtered out. Partially this is a problem of the publish-or-perish culture and partially due to the coarseness of the refereeing model as a filtering tool.
All three of these main goals can be improved upon considerably using the right tools (which still need to be figured out) on the Internet. At the same time that the journal system has lost its usefulness, it has created a lot of harmful side effects: the writing of countless worthless papers; lack of recognition for surveys, books, or other non-“paper” contributions; and blind, silly use of metrics like impact factors for hiring, grants, and promotion, which leads to wasteful optimization of these metrics rather than of real research. All these harmful side effects could be tolerated had the system served its main purpose, but now we are just paying the price without getting the goods.
Succinctly and elegantly put. Solving this problem may not be easy, but I believe the solution starts by recognizing the extent of the problem.
Let me respond to your three points. Everything I say here is from the perspective of a pure mathematician.
1. I agree that journals no longer serve a role in disseminating new papers. However, they do serve an important role in preserving old ones, especially combined with Math Reviews. The arXiv does this very poorly — so much of it is junk/wrong or poorly organized, and searching through it to see if something is known is a serious exercise in frustration. This will only get worse as time passes and the arXiv gets larger.
This is probably less of an issue in computer science, but I routinely need results from papers that are 30-50 years old.
2. I think you underestimate the importance of refereeing. It’s kind of redundant for big-name results — they got a lot of attention, so the community will find problems with them sooner or later. However, most results do not receive nearly that level of scrutiny. My experience as an author and as a referee is that the process works fairly well, not just for finding mistakes but also for helping polish the exposition. Of course, it is not perfect (nothing is), but it does provide a very valuable filter.
This is related to my first point. 30 years from now, someone might need a result that was not all that important today. Of course, you should make a habit of understanding everything you use, but at least if it appeared in a decent journal you have some confidence that it is not complete nonsense.
3. The “allocation of attention” thing misses an important function of journals, namely as a crude measure of how important someone’s work is. I frequently have to compare mathematicians who work in very different fields from mine (for instance, when serving on hiring committees or grant review panels). How else am I supposed to do this? Of course, one reads letters and what not, but if you have 700+ applicants you need a quick way to weed out people who have no chance whatsoever.
Hey, I just went to a talk by Michael Nielsen 😉
I agree with the problems you mention, but I’m worried that the suggested alternatives — such as just putting papers up on a website and opening them up for comments — would cause other problems. For one, it might make it very difficult for a beginning researcher working on a novel or otherwise unpopular topic to get anyone to evaluate his or her work; it would seem to institutionalize the idea that to break into the community you need to work on a popular topic first. Even for established researchers it would seem to create particularly strong incentives to work on popular topics. Journals (and seriously refereed conferences) address this issue to some extent, by (ideally) giving every paper a fair hearing and at least a chance at some stamp of approval.
Btw., as you know, with ACM TEAC we’ve been working hard to address 1, 3, and arguably 2 as much as we can 🙂
Andy and Vincent:
While certainly it is possible that a system that would replace journals would still have some attributes in which it is inferior to the journal system, I actually believe that for all the points that both of you raised, a web-based system similar to the one proposed by Gowers could be superior to the journal system:
(1) Archival value: It would seem to me that papers on the arXiv have a higher certainty of being maintained for dozens or hundreds of years than would journals. The arXiv can be mirrored and copied at will, and there will certainly be many bodies who will do that (and index it, etc.), while if a publisher goes bankrupt (or its servers are destroyed by an earthquake), all its publications may disappear. I think that even now the arXiv is a safer bet, and certainly that will be made the case if it becomes a primary repository of scientific work.
(2) Quality of refereeing: We could certainly debate the general level of refereeing today, as many have done. Maybe the situation in Math is still OK, but in CS it is not, and I think that there are reasons to believe that it is deteriorating in all fields and will likely continue to do so. In any case one need not claim that current refereeing is worthless but rather just be convinced that it is not very good and can be improved upon. In fact the key challenge addressed by Gowers is to try to figure out an alternative that will give higher value than refereeing. I think that this should not be too difficult using web-based recommendation systems, say from the Stack/Math Overflow family.
(3) Metrics of excellence: while many would prefer that any alternative system not provide crude numerical measures for ranking researchers, I personally agree that providing them would be a feature rather than a bug. Web-based reputation systems (again, e.g., of the Stack/MathOverflow family) can very naturally offer “better”, more informative, less noisy, and less manipulable metrics. No question that the exact tuning of such systems is a challenge (not addressed by Gowers’ post), but it would be hard to do worse than impact factors.
(4) Novel or Unpopular topics: I actually think that the journal system handles these very poorly: if 2 or 3 random referees don’t like a paper in a new/unpopular area then it is rejected. In contrast, an open recommendation system would give it a much better chance: as long as even a few reputable people are interested and comment on it, the paper starts getting attention.
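To make points (2)–(4) concrete, here is a purely illustrative toy sketch of the kind of reputation-weighted aggregation that Stack/MathOverflow-style systems use. All names, reputation values, and the weighting function are hypothetical, not part of Gowers’ proposal:

```python
import math

def paper_score(votes):
    """Aggregate (voter_reputation, vote) pairs into one score.

    vote is +1 or -1. Weighting each vote by log(1 + reputation)
    damps the influence of any single high-reputation voter and
    gives zero weight to brand-new accounts, which makes the
    metric harder to manipulate than a raw vote count.
    """
    return sum(v * math.log1p(rep) for rep, v in votes)

# A paper endorsed by a few reputable people outscores one
# upvoted by fifty freshly created zero-reputation accounts.
endorsed = paper_score([(5000, +1), (3000, +1), (1200, +1)])
astroturfed = paper_score([(0, +1)] * 50)
```

This also illustrates point (4): a paper in an unpopular area needs only a few reputable interested readers to start accumulating a meaningful score, rather than the approval of 2 or 3 randomly chosen referees.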
1. This seems pretty debatable. My university library has complete runs of both well-known and obscure journals going back to the 19th century. When I need something they don’t have, our librarians have always managed to find it via interlibrary loan, even if the publishing company disappeared long ago. Talking to my friends in the humanities, I know they have easy access to obscure things published many hundreds of years ago.
Contrast that with trying to access computer data that is even 20 years old. Standards and data formats change, etc.
Paper books have a track record for surviving pretty much everything (collapses of civilizations, etc.). Data doesn’t.
2. Basically all active mathematicians referee papers for journals. But we can’t even get many top people to post papers to the arXiv, much less participate in social networking type stuff. Switching to a system like Gowers suggests would drastically narrow the range of people who are involved in the refereeing process. Also, while many top people are “active on the internet”, many of the most active people on the internet are, shall we say, not top people. It would give them a voice that they do not deserve. This would be damaging to mathematics.
3. God forbid we ever use “online reputation” for hiring! I suspect that we will have to simply disagree on this one…
4. I always hear people complain that papers on novel or unpopular topics have trouble getting published, but I personally have never seen evidence of this. Certainly it is hard for certain fields to publish in certain journals, but there are enough very good journals that pretty much any mainstream topic has a good journal to publish in. I mean, without trying I can list >20 journals that are good enough that I could imagine tenure cases at good places built around papers in them…
And back…
1) I was actually comparing digital copies of journals to the digital arXiv. Most libraries cannot carry physical copies of more than a fraction of journals anymore, so I doubt that paper is a viable candidate for archiving research given the modern size of science. If you believe that it is, then certainly it is at least as easy to print arXiv papers on paper.
2&3) You are describing how things work today. The whole point is that if we can change the system, then the people who now put their efforts into writing, reviewing, and editing journal papers will do the same for the new system. E.g., in CS a conference culture emerged and all the leading people put their efforts into conferences. Similarly, while “online reputation” today is not a serious candidate as a factor in hiring decisions, much of the point of Gowers’ post was how to design one that can be. I believe that the bar set by journals is pretty low and can be bettered by a well-designed system. How to get from here to there is exactly the problem.
Dear AndyP, Regarding point 1, I can say with confidence that the cost of maintaining a first-rate library is prohibitive in many parts of the world. I am extremely grateful to those mathematicians who keep up-to-date copies of their papers on the arXiv.
An open access journal system would be a real boon to large parts of the world. Your reasoning seems to me to be overly US-centric. Am I missing something, or do you really mean to ignore the rest of the world?
Best,
Stephen
PS- as for the issue of data surviving apocalypses, well, redundancy is never bad, right? No one is suggesting destroying print copies. If the zombies come 100 years from now and shut down our computer systems, I’m sure that marked up copies of important mathematical papers will be found in the offices of graduate students.
Overall, I agree with Andy, and I found some of the alternative suggestions and arguments naive, sometimes absurd or self-contradictory, and often simply wrong. Of course, the quality of the journal system depends on the work of the referees (and on the ability of editors to find good referees). I don’t see alternatives to the refereeing process. The fact that papers are usually posted on the arXiv before being submitted, and sometimes get some early attention and evaluation, is an advantage for the refereeing process.
[…] Tim Gowers wrote an interesting post where he proposed, in surprisingly many details, an Internet mechanism (mixing ingredients from the arXiv, blogs, MathOverflow, and polymath projects) to replace journals. Noam Nisan (who has advocated similar changes over the years) wrote an interesting related post entitled “The problems with Journals.” […]
A couple of months ago I wrote a bit about our experience with a new journal called “Semantic Web,” which uses open review to make reviewing more accountable [1]. The chief editors recently penned a paper about the journal, which you all might find of interest [2].
[1] http://micheldumontier.blogspot.com/2011/09/scientific-publishing-were-not-quite.html
[2] http://knoesis.wright.edu/faculty/pascal/resources/publications/LP2012.pdf
[…] my previous post I tried to spell out the problems with the current academic journal publishing system, and pointed […]
[…] system ain’t broke so we shouldn’t fix it. Rather than comment on this, I refer you to Noam Nisan’s elegantly written response (to which Andy P in turn responds). […]
I can’t speak for others, but I approach journal refereeing quite differently from the way I approach other reviewing of papers. I make sure that I have a concentrated block of time and work through a paper line by line. I can’t do many papers this way, and I can surely get fooled, but it is radically different from the way I would be likely to approach rating a paper posted to the arXiv — even one I am interested in.
Of course this is all individual reviewing and maybe the “crowd-sourcing” of reviews is going to be better in overall quality. When I review a paper I do it not only as my “good deed” for the field, but as something of a personal favor to the editor. I am not sure I would participate at as deep a level in a situation in which I was just doing the general community service – and I shouldn’t be doing it as a personal service to the authors.
To me, we could use both the original detailed scrutiny followed by the long term ability to comment on a paper.
Dear Noam, I beg to disagree with two of your three points.
2. Verification: Despite pretenses, refereeing is not really trustworthy. Results of some importance become believed not when they are refereed but only after the community has studied them for a while.
Referees’ main and most difficult task is to read the paper carefully and verify it (or at least become confident in its correctness). Usually, this takes quite a lot of time. Having referees who carefully read the paper before publication gives the authors additional incentives to double-check the arguments before submitting. The alternative ideas of public reviews (and “shared refereeing”) are overall much weaker for this purpose. Indeed, for rather important papers there is a stage where the community studies and digests them (here, in rare cases, Internet cooperation can be useful), but for this process too the earlier (or parallel) refereeing — for verification and for improving the presentation — is important.
3. Allocation of attention: an important goal of leading journals is to filter the “important” papers out of all the submitted ones, so that readers need not read everything but rather only the important stuff. I am afraid that today so much is published that most of what one reads in most journals should have been filtered out. Partially this is a problem of the publish-or-perish culture and partially due to the coarseness of the refereeing model as a filtering tool.
The claim that most papers are bad and should be filtered out is a long-standing sentiment which I think is bogus. The real problem we have is that there are too many papers of good quality that we would like to understand. (It is quite easy to filter out papers of no interest.) Also, the complaints about the “publish-or-perish culture” are strange. Scientists are supposed to publish! (You do not hear similar complaints about drive-or-perish from bus drivers or cure-patients-or-perish from medical doctors.)
In addition to complaining about “too many papers,” both your post and Tim’s call for substantially widening the scope of what papers should be. This is a bit of a self-contradiction.
Continuing Gil’s driver metaphor:
Bus drivers are supposed to take people from point A to point B; driving is just the means to do that. They may be paid according to the distance they traveled rather than to the distance between A and B simply because that’s what the old “speedometer” technology could measure. Luckily for passengers on buses, they can tell if there is an unjustified gap between the distance they pay for and the real distance between A and B.
Passengers in taxis are not so lucky: as they are often tourists, they pay according to the number of miles driven even if the actual distance between A and B is much smaller. No wonder that in the taxi business we do have a severe drive-in-loops-or-perish problem.
Unfortunately, crazy-eyed reformers who suggest that modern GPS+Google-maps technology would allow us to move to a system where one pays taxis according to the shortest route from A to B are always shot down by the taxi drivers’ guild.
Are you suggesting researchers be compensated based on the (relatively short term) success of their papers?
Is the goal of the proposed new system to reduce the number of researchers and their welfare, in addition to reducing the scope of research? I honestly didn’t realize that the goal of the exercise is to cut researchers down to size so that the lazy bums don’t get paid when the research doesn’t pan out.
If the above doesn’t reflect the goal of the proposal, it’s unclear what the cost you bring up in the taxi example is. The argument that researchers are paid to do research (and put it in writing) is sound. As for the fact that most papers are not important, one is reminded of Sturgeon’s Law.
“Are you suggesting researchers be compensated based on the (relatively short term) success of their papers?”
Heaven forbid… I am merely pointing out that the problem of too many “publications” is real and is created by the way that we currently pay researchers by the “journal publication”.
Paying them “by journal publication” isn’t actually a very common occurrence. We pay them to do research, and use publications as an imperfect proxy that they’re not sitting on their bottoms, merely claiming to be thinking big thoughts. Wouldn’t your replacement system end up playing a similar role?
Also, it’s unclear what you mean by too many publications. Are there too many novels being written? It’s a creative endeavor, after all. If we could really home in on the masterpieces, we could forgo the more mundane stuff, but we can’t. Plus you sometimes need the mundane stuff to set things up…
An excellent analysis of what is needed from a good system for publishing and reviewing which demonstrates some of the problems with various suggested Internet based systems can be found in this post http://ilaba.wordpress.com/2011/11/14/random-thoughts-on-publishing-and-the-internet/ .
[…] So why do researchers continue to publish in these journals? Well, it is well known that academia is instilled with a publish-or-perish mentality, and moreover the specific venue in which you publish influences how your peers regard your work. Journals are scored by impact factor, and publishing in journals with a high impact factor indicates that I am a good researcher. The quality of journal in which I publish plays a significant role in hiring decisions and other career opportunities, and this, at least to me, is the primary reason why researchers continue to submit to these closed journals. There are some other factors that motivate researchers to publish in journals, such as the peer-review system and the fact that publication is a sanity check that the work is correct and reasonable. However, I think the main motivation is to demonstrate one’s research ability. Noam Nisan talks about some other reasons and more details about this problem here. […]
[…] “A more modest approach”) and also some entries on nuit blanche or the post “The problem with journals”. Recently, I found that the International Mathematical Union (IMU) has started a blog on […]
[…] have no intention to revive the discussion on the pros and cons of journals, but conferences proceedings in computer science, and in AGT in particular, are […]