Downright stupidity: papers and grants

by morgan

Wow. Not in a long while have I been as infuriated as I am after reading the recent piece titled “Could the NIH payline be too high?” by Nathan S. Blow of Biotechniques (the “From the Editor” column for Vol. 55, No. 1, July 2013).

His argument goes like this.

Big lab with lots of funding equals LOTS of articles.

Little lab with not so much funding equals only a FEW articles.

FURTHERMORE, the large labs are more highly “productive,” which means the study sections are “doing the right thing” in giving them more grants.

THEREFORE, be it resolved, we should reconsider caps on the amount that any one lab may receive. We should not “meddle” and we should leave paylines low to select only “the best science” (as equated to number of publications).

Damn. Where do I start to tear this nonsense apart?

The Nobel Prize

Let’s consider the Nobel winners. Or the Lasker prize winners.  Are these all people from big labs publishing tons of papers?

NO.  Of course not. In many instances, these prizes have gone to an individual who was slaving away in obscurity until he (or she) came upon something really BIG.

When that happens, the community is often VERY SLOW to let this person publish the new results, because nobody believes the new discovery. Barry Marshall discovering that H. pylori causes ulcers. Dan Shechtman discovering quasicrystals. Many, many more.

Their discoveries had nothing to do with cranking out papers. Often, their discoveries came because they were thinking about and pursuing big questions instead of cranking out papers.

The correlation between paper counts and “productivity” is bogus

Some people like to write many different versions of the same article, each one containing a small incremental advance. This is sometimes termed, affectionately, the Least Publishable Unit (LPU).

Others like to publish only the truly BIG and impactful stories in their labs, which may mean only 1-2 per year.

Now, it so happens that the paper-counting types (i.e., the bean counters) may mistakenly regard the labs cranking out LPUs as “more productive.” That’s nonsense. It’s often no more than a matter of how much one is willing to write about increments of the work (i.e., writing about essentially the same thing over and over).

Yet, many people, even non-bean-counters, become unfortunate victims of this ruse. They get swayed by their bean-counting colleagues into thinking that #papers = productivity.

So, study sections DO often reward the people with more papers with more grants.

The paper assembly lines

That means we get a bizarre kind of “survival of the fittest” where the fitness function becomes the number of papers spewed forth.

It has created a system where a few of its denizens will stop at virtually nothing to keep their paper assembly lines running. In this survival-of-the-fittest game, where the fitness function has nothing necessarily to do with REAL advances that benefit the world (i.e., the people who ultimately PAY for the science), these people can simply outcompete everyone else. I was going to use a metaphor involving a certain kind of bug that breeds readily and can survive in almost any environment, but I won’t be quite that crass.

(I think many of these people are well-intentioned. They’ve just realized what it takes to survive in the current system, and they’ve taken steps to maximize their survival probability.)

Has it “cured cancer” or put people on Mars or… ???

No. The large number of papers being cranked out has not meant more of the fundamental advances that benefit humanity. It simply means we have more papers to tick off the list. Who has time to read them all? Almost nobody.

About the only ones this is good for, in the end, are the journal publishers. See a correlation there?

The new animal model

The other day I talked to a client who had spent several years developing a new animal model of a very important disease. She’d had no publications on it yet, because, well, it is new and it took a few years to develop! Now that the model is done, she will be getting publications, but she was (rightly) worried about how grant reviewers will see her in the interim with no publications.

Basically what the bean counters are saying is this: don’t bother to stop and think of a “better way.” You don’t have time for that. Instead, think about the publication you can get out RIGHT NOW that shows you are being (quote) “Productive.”

Sadly, I often have to tell my grant writing clients that very same advice. What I’d LOVE to tell them is DO SOME REALLY IMPORTANT SCIENCE. But what I HAVE to tell them is: “Get that little publication out the door asap so you’ll have a chance of getting your next grant funded.”

It’s stupid. The system is going to collapse if we keep this up.

Software for solving big problems

You may think to yourself, my oh my, Morgan’s really going off on this one.

Yes I am! It’s because I have experienced this myself.

See, my lab’s Magic Power was to develop sophisticated software to tease out new types of information from complex data sets. One of the software packages I developed in 1991 STILL gets requests to this day. After I moved institutions, the download link went missing, and people have been emailing me to complain that the software isn’t available (I now have ZERO means of supporting it except as a charity effort).

That software, which people are still using today, took ~6 years to develop and lots of staff time in addition to my own. I got only two publications out of its development during that period, then one more much later when we adapted it for a different use.

In other words, for a very large investment that helps multiple other labs, we “seemed” to be making very little progress because we weren’t just cranking out papers. We were actually focused on the most important problem at hand: how do we develop the very best software and algorithms that we can??

Now, even though my lab spiked above the so-called magic threshold of $1M/year for a few years, we never got to ~30 publications per year, or anywhere close. That’s because software cannot be hurried. More money doesn’t mean faster progress.

In fact, the main reason I left UNC was because they pulled the plug on a project that was originally proposed as a 5-year effort to develop software integrating proteomics, genomics, and microbiome studies across campus. After one year and a lot of silly politics, they pulled the plug without telling me they had done so (they never even asked for a progress report), then charged a few months of employee time back to my grants. Talk about shortsighted.

In other words, they got impatient. (It didn’t help that there was a brand new dean for research, who knew nothing of our project.) They were bean-counting rather than keeping the big picture in mind. I’m sure that now, three years later, they still have no good way of integrating these complex data sets across the different facilities. They probably never will. It’s a difficult problem that will take significant time, smarts, and resources to solve.

Nathan Blow is simply wrong. So are all the bean counters.

We are scientists, not accountants. Science is about figuring out how our world works. It’s about coming up with new cures. It’s about developing new technologies.

Those things take time. They take patience. And they certainly don’t progress any faster just because someone is publishing more. (Usually quite the opposite).

This is why I am no longer running a lab. I’d had enough of the BS after that experience with UNC.

Now I do what I can to help others who are still in the system fighting to survive.

The one thing I recommend to all my “non-assembly-line” and “non-bean-counting” colleagues is to push back. DO NOT LET THE BEAN COUNTERS WIN. Yours is a Noble cause: the cause of fighting for meaningful, quality science rather than simply the quantity of science.

I wish you luck, and I’ll be there behind the scenes, doing what I can to support you!
