Here is a follow-up to my series of posts on “rejection,” concerning the paper that an editor bounced from PLOS Comp. Bio.

I’ve gotten pretty good at reading between the lines.  My “intuition” for why they bounced it in the first place was that they simply disliked our new modeling approach because it was too new and different.

After posting here, the managing editor contacted me and said that she would look into it.  For that I am grateful.

After about a week, I got a response from the associate editor who had made the rejection, apologizing for not giving more information in the initial decision.  That was good: at least I now knew why our paper had been bounced without being sent out for review.

The reasons for rejection were as I had suspected.  In his explanation, the editor told us that we had taken an inferior modeling approach to existing methods.  I found that offensive.

How can one judge that an approach is “inferior” when it hasn’t been tried to any great extent?  Only a small handful of people have attempted to use this method, because it is so new.  To compare something that is brand new (i.e. in “beta”) to something that people have been working on for 20+ years, and to call it “inferior” because it doesn’t yet have all the bells and whistles, seems silly.

That would be like comparing the sophisticated horse-and-buggies of the late 1890s to the automobiles of the day.  People said that those automobiles were clunky, slow, and would never amount to much.  Where are the horse-and-buggies now?

On this subject,  I wrote to him:

I’ll leave you with an example provided by a good friend and colleague of mine:
“Sue and Judy each have a methodology.
1. The predictive power of Sue’s methodology A increases at a constant rate. At time 0, it is at 10.
2. At time 10, Judy comes up with methodology B. The explanatory power of methodology B is really good, in fact better than methodology A’s. Ordinary folks would look at Judy’s model and say “hey, that looks an awful lot like the actual system I’m interested in!” Its predictive power starts at 7, then grows at a geometric rate of 1.5×. It’s a really good methodology!
(this is where we are now)
3. At time 11, Sue’s methodology is at 12. Judy’s is at 10.5.  But Judy can’t get her papers published because “the predictive power is less.”
4. At time 12, Sue’s methodology is at 14. Judy’s is at … well, Judy isn’t really sharing her methodology with anyone anymore. She lost all her funding because no one was interested in a methodology that predicted less than the existing solutions. Instead, Judy has a nice quiet well-funded lab at company X, and no one really knows what she’s been up to lately.”
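The toy numbers above can be checked with a quick sketch.  This is purely illustrative: Judy’s geometric 1.5× growth from 7 at time 10 is as quoted, while Sue’s constant gain of 2 units per step is my assumption, inferred from her quoted values of 12 at time 11 and 14 at time 12.

```python
# Toy trajectories for the two hypothetical methodologies.
# Assumption (not stated in the original): Sue gains a constant 2 units
# per step, inferred from her values of 12 at t=11 and 14 at t=12.

def sue(t):
    """Sue's established methodology A: linear growth."""
    return 12 + 2 * (t - 11)

def judy(t):
    """Judy's new methodology B: starts at 7 at t=10, grows 1.5x per step."""
    return 7 * 1.5 ** (t - 10)

for t in range(11, 15):
    print(f"t={t}: Sue={sue(t):.2f}, Judy={judy(t):.2f}")
```

Under these assumptions, Judy’s method overtakes Sue’s at time 12 (15.75 vs. 14), one step after the story’s funding decision, which is exactly the point.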
Science is conservative. This is not just about our own little modeling issue – this kind of story gets replayed over and over again.  People within an established paradigm almost never welcome a new one.  I’ve never understood why, since I’ve always loved new approaches and new methods.
I used an example from traffic to explain why I think our modeling approach (agent based modeling) adds something useful:
A good example is traffic flow.  Many people have attempted (and some, unfortunately, still attempt) to model traffic with equations.  They can come up with a nice self-consistent system of stochastic equations.  But what does that accomplish?  Does it “predict” when, or even where, an accident will occur? No.
In fact, no bulk equation can “predict” the effects that a single weaving, cell-phone-jabbering, or drunk driver might have on the traffic around him, because that is a strictly spatial process: one incident can have far-reaching effects on the rest of the system, say if the weaving driver hits an oil tanker truck and causes a spill.  That is not so far-fetched; it happens all the time in the real world. I know someone who had to clean up the mess when a cell-phone driver wasn’t paying attention and ran into a semi-truck at freeway speeds.
Yet a nice, self-consistent mathematical model will never show this case.  It will never explain “how” or “why” something happens.
To introduce such effects into a mathematical model, one has to add arbitrary noise terms.  To do so, one has to make assumptions about the sources of the noise, e.g. “I think that the drunk driver will have an effect at this point in my system.”  What if it isn’t the drunk driver after all? What if the speeder is actually the one more likely to cause the accident, because of the particular road configuration where it narrows?  If you’ve added arbitrary noise terms to represent “stochasticity,” you will never know the difference.
With an agent-based model, you can lay out the road structure in the model, and actually watch traffic moving in the model.  You can simulate the effects of an occasional drunk driver.  You can observe how the local context, such as a narrowing of the road across a bridge, interacts with objects like that drunk or speeding driver.  You can test whether small changes in road design reduce the accident rates.  You can help the traffic engineer solve real problems.  Isn’t that what modeling is supposed to be about?
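To make the idea concrete, here is a minimal agent-based traffic sketch in the spirit of the classic Nagel–Schreckenberg cellular automaton.  This is a generic illustration, not the model from our paper; the road length, speed limit, and braking probabilities are all assumed values.  One driver is given a much higher random-braking probability to play the “erratic driver” whose local behavior ripples through the system.

```python
import random

ROAD_LEN = 100   # circular road, measured in cells
V_MAX = 5        # speed limit, in cells per time step
P_BRAKE = 0.1    # normal driver's random-braking probability (assumed)
P_DRUNK = 0.5    # erratic driver's braking probability (assumed)

def step(cars):
    """One parallel update; cars maps position -> (speed, brake_probability)."""
    positions = sorted(cars)
    updated = {}
    for i, pos in enumerate(positions):
        speed, p = cars[pos]
        ahead = positions[(i + 1) % len(positions)]
        gap = (ahead - pos - 1) % ROAD_LEN        # empty cells to next car
        speed = min(speed + 1, V_MAX, gap)        # accelerate, but stay safe
        if speed > 0 and random.random() < p:     # random slowdown
            speed -= 1
        updated[(pos + speed) % ROAD_LEN] = (speed, p)
    return updated

random.seed(0)
# 20 evenly spaced cars; the one starting at position 0 is the erratic driver
cars = {i * 5: (0, P_DRUNK if i == 0 else P_BRAKE) for i in range(20)}
for _ in range(200):
    cars = step(cars)
print(len(cars), "cars after 200 steps")  # the gap rule prevents collisions
```

Even this toy version exposes spatial structure: lower V_MAX over a stretch of cells to mimic a narrowing bridge, and you can compare how often jams form there with and without the erratic driver — the kind of question a bulk equation cannot pose.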
The traffic analogy applies just as well to the inside of a cell.  The cell is not some spatially arbitrary “network” like an electrical circuit.  Everywhere in the cell there is fine-grained structure that influences the reactions taking place at that particular location.  Idealized circuit design doesn’t take spatial properties into account.

I got no response to that email – none.  Not even a short message saying “we agree to disagree.”

It is too bad that people feel such a strong need to defend their turf and their ideas, to the exclusion of other ideas.

The ostensible purpose of scientific publishing is to get the work out there for the audience to read, discuss, and debate. Only time will tell whether this modeling approach is better or worse than existing approaches.  No single person can judge that – there is not enough information yet.  But if censorship occurs, there is no discussion and no debate, and without discussion there is no way to resolve which method is better.

Another senior scientist I know told me he’s seen a lot more of this type of censorship as grant funding has gotten tighter.  I don’t know whether that’s true, but if it is, it makes a sad statement.

However, the internet is too powerful a tool to let scientific censorship sit undisputed.  I don’t think that many scientists have yet exploited it to the full extent that they could.  As an experiment, I’m going to start applying some of the very same principles that I talk about in the book that I’m writing (Marketing Your Science) to make an end run around the censorship.

I’ll talk a bit more about some of the things that I’ll do in future posts.

Science should not be about censorship or rigorous adherence to a given ideology.  Instead, science is at its best when it consists of open exploration and testing of new ideas.


    3 replies to "Rejection part III: conservatism in science"

    • Rick

      You think new stuff is hard to get published, try getting important negative results published.

      • morgan

        I agree with you on the difficulty of publishing negative results. I had a student who did two years of work for a negative result. It was an important result, yet I know that we won’t be able to get more than a paragraph or two about it into a paper. I know someone who wants to start “The Journal of Negative Results” for exactly that reason…

    • Sherwood Helf

      I must say, I enjoy reading your article. Maybe you could let me know how I can bookmark it ? I feel I should let you know I found this site through google.
