Wednesday, January 1, 2020

Unpopular grant review opinions

Unpopular grant review opinion 1. Sections on ethics, equality, open publishing, budgets, etc. make grants almost unreadable, and should not be sent to external reviewers.

I am not saying that these things are unimportant - far from it - just that a data dump of 100-page applications containing 10 pages of actual science is not a useful way to do things. Issues such as open publishing and equality would be better dealt with at the institute level: the institute should be required to show that it has appropriate policies in place before anyone from that institute can apply. These are not individual-researcher issues. Issues such as budgets are best dealt with by financial administrators. Do I know the appropriate budget for a post-doc in Sweden? No. So don't send me 20 pages of financial material. This could, and should, be checked internally and not sent to external review. Guess what, I also don't read Greek. So why are there 15 pages of internal Greek administrative material in the 68-page document sent to me to review? It just makes my life difficult, and makes it more likely that I will miss the important bits.

I'm also not a fan of letters of collaboration. If you say that you work with someone, I'm going to believe you. It is a weird thing to make up. If I can't trust you on that, why trust you on anything else you've written?

Too many funding agencies want boxes X, Y and Z ticked, which is fine in itself. Unfortunately, rather than actually checking those boxes internally, they just pass the data dump on to reviewers - reviewers who were selected for their familiarity with the science, not with the administrative sections. This approach makes it look like the boxes are ticked, but it is not actually a good way of effecting change. It sometimes feels more like protection for the funder, so that they can say everything was checked by external reviewers.

What do I want as a reviewer? First, a simple log-in that doesn't require me to fill in all my details. Then a small application with just the science. I want an easy-to-navigate website, with just two open text boxes (project and applicant) to fill in. I want practical guidelines on what the scores given mean (e.g., funding chance at each score, solid examples of each score). And that's it. Anything more just makes my life harder. 

Unpopular grant review opinion 2. Reviewing grants is an inherently wasteful way to distribute resources.

Yes, grant review filters out some bad ideas and in theory saves money. But science has to fund ideas that won't work. There is no other way to push back the frontiers.

The main alternative is bulk funding. Block funding every researcher equally is not ideal either: with no penalties for failure and no rewards for success, the system can become stagnant. This is why block funding systems were gradually phased out and replaced with grant review. But are systems of 100% grant review the most efficient way to allocate resources? An enormous amount of work goes into writing and reviewing good ideas that are never funded. Would it not be preferable to have some of that time spent on science?

I would prefer it if institutes were required to provide minimum core funding of 2 junior staff or students to each group leader, with appropriate consumables. Yes, this would take up perhaps 50% of research funding. Yes, limits on group-leader hiring would be needed. But under this system, the cycle of insecurity and short-termism would be broken. Small labs could work on hard problems over the long term. Effort would be spent on research, not on writing unsuccessful grants.

The pot of funding for research grants would be halved in size, but the number of applications would go way down. I suspect that the actual success rate for grants might even rise under this system. A lot of scientists would be okay with a small team, and might even prefer it. At the moment, a lot of applications are made from a place of desperation, for the survival of the lab. Group leaders are constantly trying to grow, because often growth and death are the only options. Those "survival" grants would no longer be needed. Grant applications would be reserved for either a) those who have proven their ability to efficiently lead a larger team, or b) small labs with a special idea that needs an extra boost in resources.
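
As a back-of-envelope illustration (the numbers here are invented, purely to show the shape of the argument): suppose a funder currently receives 1,000 applications for 150 grants, a 15% success rate. Halve the pot to 75 grants, and if core funding removes the "survival" applications so that only 250 proposals are submitted, the success rate rises to 30% - with far fewer hours sunk into unfunded applications on both sides.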

I suspect that this hybrid system would be more efficient than either 100% block funding or 100% grant review funding. Any funders willing to rise to the challenge?

Unpopular grant review opinion 3. Aspirations to remove the use of metrics, such as DORA, are well-meaning, but ultimately cause more problems than they solve.

DORA seeks to remove the influence of journal impact factors, and for good reason: impact factors are problematic, and an imperfect measure of the quality of the articles in those journals. But do you know what else is imperfect? Every other system.

I am reviewing 12 grants for the same funder. The applicants have an average of 70 papers each. Let's say that a proper deep review of a research paper's quality takes 3 hours. Just the CV assessment would require 2,520 hours of deep review - well over a year of full-time work. No one actually does that. Even if we had the time, it would be a repeat of effort already done by the peer reviewers at the journal.
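
The arithmetic, spelled out (a rough estimate, assuming a 40-hour working week):

    12 applicants × 70 papers each × 3 hours per paper = 2,520 hours
    2,520 hours ÷ 40 hours per week ≈ 63 weeks of full-time reading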

We also need to acknowledge that metrics have strengths. First, they are less amenable to bias than just having one person say a paper is good or bad. Second, they are better at comparing large numbers of applicants - which is the entire point of grant panels.

DORA principles have their place - in particular, in the faculty selection process. But trying to apply these principles to grant review panels ignores the reality of the job that panel members are being asked to do. I would suggest that grant agencies embrace metrics, but do so wisely and cautiously: develop a useful set of metrics that are provided for each applicant. Some off-the-cuff ideas:

  • average number of last-author papers per lab member per year
  • average impact factor of last-author papers over the last five years
  • average citation number of last-author papers from more than five years ago
  • average amount of grant funding per impact-factor point of last-author papers
  • number of collaborative papers relative to lab size

I'm not devoted to any of these metrics, but having them would make CV comparison easier and, arguably, fairer. An enormous amount of research should go into the correct selection of metrics, so that we select for the qualities we actually want - what you measure is what you get. But the advantages of using metrics are real. We could identify the strengths of an applicant: "This applicant doesn't publish much, but look at the output compared to their funding!" or "Every post-doc who joins this lab ends up with a good paper." Different grant formats could emphasize different metrics; for example, applications for an infrastructure grant could be given a bonus if the applicant has a record of multiple collaborative papers. It just makes sense - they've proven they can work with multiple groups. Likewise, post-doc fellowships could be influenced by a metric on the host supervisor's success rate with post-docs - I'd rather send a fellow into a lab where most post-docs succeed than into one where 90% disappear into the ether.

There would also need to be a free-text entry that allows an applicant to argue that the metrics are not appropriate in their particular case. I am happy to look beyond metrics if the applicant can convince me there is a reason to. But that should be a case for the applicant to make, rather than throwing out all of the quantifiable metadata. Blindly using one metric is bad, but intelligently using multiple metrics, tailored to the purpose of the grant, just makes sense.
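
To make this concrete, here is a minimal sketch of how a funder might compute a few of the metrics suggested above from a simple publication record. The field names, numbers, and thresholds are all invented for illustration; any real implementation would need carefully defined and validated inputs.

    from statistics import mean

    # Each paper: (year, is_last_author, journal_impact_factor, citations).
    # All values are made up, purely to illustrate the calculation.
    papers = [
        (2019, True, 12.4, 18),
        (2016, True, 8.1, 45),
        (2013, True, 6.3, 210),
        (2018, False, 4.2, 12),
    ]

    current_year = 2020   # year of the review
    lab_size = 4          # assumed average number of lab members
    years_running = 10    # assumed years as a group leader

    last_author = [p for p in papers if p[1]]
    recent = [p for p in last_author if current_year - p[0] <= 5]
    older = [p for p in last_author if current_year - p[0] > 5]

    # Metric: last-author papers per lab member per year
    papers_per_member_year = len(last_author) / (lab_size * years_running)

    # Metric: average impact factor of last-author papers from the last five years
    recent_impact = mean(p[2] for p in recent) if recent else 0.0

    # Metric: average citations of last-author papers older than five years
    older_citations = mean(p[3] for p in older) if older else 0.0

    print(f"Last-author papers per lab member per year: {papers_per_member_year:.2f}")
    print(f"Mean impact factor, last five years:        {recent_impact:.1f}")
    print(f"Mean citations, papers older than 5 years:  {older_citations:.0f}")

The point is not these particular numbers, but that a small, transparent set of figures like this is cheap to generate and easy to compare across a dozen applicants - provided it is normalized sensibly for field and career stage.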

Conclusion. We could be doing grant review much better. Right now, I am not even sure that we are moving in the right direction. I'd like to see more involvement from grant agencies, and a more thoughtful assessment of the burden of peer review on both applicant and reviewer. Scientists should just be reviewing the science, and we should be given useful tools to do so. Administrative issues should be audited independently, and often at the level of the institute rather than the grant. These are complex issues, and on another day I might even argue the opposite case for each opinion above, but the important thing is that we should be having a fearless and data-led discussion on the topic. 

 
