How to become a better scientific evaluator

In: EqualBITE – Gender equality in higher education

Data published recently on the UK driving test show that the pass rate for women is significantly lower than that for men (DVSA, 2015). Yet data from insurance companies show that women are by far the safer drivers: they are involved in fewer accidents, and in accidents with less serious outcomes (Hartley, 2015). This suggests that the “success criteria” used in the driving test may be biased against women and, more importantly, are not accurately measuring who is a safe driver, which is, after all, the main purpose of the test.

We wondered whether similar problems could occur in science when we evaluate a scientific proposal or article. There is widespread evidence of “implicit bias” in science (McNutt, 2016), and in at least some disciplines the proportion of women who are principal investigators on grants is considerably smaller than the proportion of men (Kaatz et al., 2014).

RCUK (Research Councils UK) is the umbrella body for all UK government-funded research councils. The proportion of women among applicants is much smaller than that of men for research grants from the Biotechnology and Biological Sciences Research Council (BBSRC), the Engineering and Physical Sciences Research Council (EPSRC), the Medical Research Council (MRC), the Natural Environment Research Council (NERC), and the Science and Technology Facilities Council (STFC). Of course, fewer women work in these areas to start with, but even so, the proportion of female applicants is lower than estimates of the proportion of women researchers in eligible institutions for almost all the research councils, apart from the STFC. So women do not apply for funding as often as men.

Are women successful when they do apply? In the most recently published data set (2014-2015), women had lower success rates as principal investigators for BBSRC, EPSRC, MRC and NERC, and higher success rates for the Arts and Humanities Research Council (AHRC) and the Economic and Social Research Council (ESRC). The gaps in success rates are mostly in the region of 3 – 5%, and in NERC they have narrowed to 1% over the years. An exception is STFC, where in 2014-15 the success rate for women was 52% compared with 72% for men (RCUK, 2016). This is consistent with the gender success gap found in a study of funding in the Netherlands (van der Lee & Ellemers, 2015), and with European Research Council data (ERC WGGB).
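To be explicit about the arithmetic behind these figures: a success rate is the number of funded applications divided by the number submitted, and the gap is simply the difference between the two rates. Here is a minimal sketch in Python, using entirely hypothetical counts rather than the RCUK figures:

```python
# Minimal sketch of how a success-rate gap is computed.
# The counts below are hypothetical, for illustration only;
# they are not the RCUK figures discussed in the text.

applications = {
    "women": {"submitted": 120, "funded": 30},
    "men": {"submitted": 480, "funded": 135},
}

rates = {}
for gender, counts in applications.items():
    rates[gender] = counts["funded"] / counts["submitted"]
    print(f"{gender}: success rate = {rates[gender]:.1%}")

print(f"gap = {rates['men'] - rates['women']:.1%}")  # ~3.1%, within the 3-5% range reported
```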

It is entirely possible that this is because the women’s grant applications are not as good as the men’s, but one wonders what exactly we mean by “good” here. Is the women’s science not as good? Are the applications themselves not as good, because they do not conform to the set criteria? Or is it simply our perception of the applications? Is there bias (implicit or otherwise) in the reviewer pool? RCUK aim to have at least 30% of the minority gender represented on all peer review panels during 2017 – a tall order, given that the most recent data set (from 2013) shows that five of the major research councils had not yet achieved this; EPSRC was farthest from the target, at 17% female panel membership. There is a large body of literature around this problem, and we are not sure these questions can really be answered definitively. However, based on substantial experience of regularly evaluating grant proposals and articles, we feel it is useful to think about how to become a less biased evaluator.

Ingredients

Evaluation of grant applications is often performed in two steps: a remote pre-evaluation, followed by discussion and ranking of the proposals at an evaluation panel.

For the pre-evaluation:

  • A quiet/peaceful room to perform the pre-evaluation.
  • A reasonable amount of time to perform the evaluation (usually twice as much as you would expect).
  • A healthy dose of self-awareness and humility: taking time to reflect briefly on your own gender, age or subject biases can help overcome them. You may also form an initial opinion that is not the most informed one; it’s OK to change your mind.

At the panel:

  • A chair of the evaluation panel committed to gender equality and ready to remind all the panel members of this commitment.
  • A well-organised workflow with sufficient time to discuss proposals.

Method

  1. Remind yourself of the potential biases (gender, age and others – gender is not the only cause of bias) that may affect you and/or the external referees, if you have external reviews. These include: anchoring bias, where the reviewer over-relies on one piece of information and it colours their subsequent judgements; the halo effect, where a candidate’s competence in one area causes the reviewer to assume they are competent in other areas; and shifting standards of reference, in which cultural stereotypes about particular groups cause different standards to be used for judging individuals – e.g. women in maths might have to be “twice as good to get ahead” because of cultural stereotypes about women’s poorer maths aptitude. You can find a full list of how cognitive biases might influence peer review decisions in Table 1 of Kaatz et al. (2014).
  2. When you are reviewing applications before the meeting, make sure you set aside enough time for each evaluation. Perform a first-pass evaluation, then let a few days pass so that you can come back to the grant with a fresh mind. Be ready to change your initial opinion.
  3. Prepare a file with a “template” of all the criteria you are asked to evaluate, and make sure you can back up each criterion with actual facts (e.g. the proposal is groundbreaking because of such and such; there is enough preliminary data, as shown in Figure 1; the value for money is poor because they are asking for two postdocs but it is not clear that they need both). A sketch of such a template follows this list.
  4. At the evaluation panel, don’t be afraid to speak up if you recognise a bias at work. This is often more apparent during discussions of track record and career breaks. If you are chairing a panel, you could use the “consider the opposite” approach to challenging bias in which you systematically encourage panel members to play devil’s advocate and come up with reasons why their thinking might be wrong (Bohnet, 2016).
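To make step 3 concrete, here is a minimal sketch of a criteria template, assuming notes can be kept in any format; the criterion names and scoring scale are hypothetical and should be replaced with the funder’s actual assessment criteria:

```python
# Minimal sketch of a criteria "template" (step 3 above).
# Criterion names and the scoring scale are hypothetical; substitute
# the funder's actual assessment criteria and scale.

from dataclasses import dataclass, field

@dataclass
class CriterionAssessment:
    criterion: str  # e.g. "novelty", "value for money"
    score: int      # on whatever scale the funder uses, e.g. 1-5
    evidence: str   # the concrete fact that backs up the score

@dataclass
class ProposalEvaluation:
    proposal_id: str
    assessments: list[CriterionAssessment] = field(default_factory=list)

    def unsupported(self) -> list[str]:
        """List criteria that were scored without supporting evidence."""
        return [a.criterion for a in self.assessments if not a.evidence.strip()]

evaluation = ProposalEvaluation("PROP-001")
evaluation.assessments.append(
    CriterionAssessment("novelty", 5, "First method to achieve X; see Figure 1."))
evaluation.assessments.append(
    CriterionAssessment("value for money", 2, ""))  # no evidence recorded yet
print(evaluation.unsupported())  # ['value for money'] - go back and justify it
```

The point of the unsupported() check is simply to force you back to the evidence before an unexamined impression hardens into a score.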

For funders: make an effort to reduce the conditions under which implicit bias might fester in your peer reviewers.

  • Be careful about the language you use in the assessment criteria. Certain words are associated with masculine stereotypes and are likely to increase the chances of male applicants – such as “risk-taking” or “technological breakthrough” (Kaatz et al., 2014; van der Lee & Ellemers, 2015). Make sure a commitment to fair assessment, devoid of bias, is clearly communicated to the external referees.
  • It is still unclear whether anonymising the applicant pool would improve women’s success rates (Ledin et al., 2007), and many funders – such as RCUK – do not do this. Such funders would no doubt argue that the track record of the applicant needs to be assessed, and that it is not possible to do so anonymously. It is also important to be aware that the same credential can be valued differently depending on who has it (Uhlmann & Cohen, 2005). It would, however, be possible to review the proposed research itself anonymously and combine these scores with the track-record assessment afterwards (a sketch of such a combination follows this list).
  • Try to reduce time pressure at panel meetings or review periods. Cognitive biases flourish when decisions have to be made quickly (Kaatz et al., 2014).
  • When people believe in their own objectivity, they are more vulnerable to acting on their biases. Remind your panel members that everybody has implicit biases, even so-called objective reviewers (Kaatz et al., 2014). Emphasise the equality policy of your funding body, and make it clear that the “gender problem” is not yet solved, so reviewers must remain vigilant (van der Lee & Ellemers, 2015). Brief discussions of how to deal practically with bias may also be very useful; consider setting aside a little time at the beginning of the meeting for them.
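As a sketch of the two-stage scoring suggested above – anonymous review of the science combined afterwards with a separate track-record assessment – the weighting below is a hypothetical policy choice, not a recommendation from any funder:

```python
# Minimal sketch of combining an anonymously reviewed science score
# with a separately assessed track-record score. The 70/30 weighting
# is a hypothetical policy choice, not any funder's actual rule.

SCIENCE_WEIGHT = 0.7
TRACK_RECORD_WEIGHT = 0.3

def combined_score(science_score: float, track_record_score: float) -> float:
    """Weighted combination of the two scores, both assumed on a 0-10 scale.
    The science score is produced before reviewers see any identifying
    information; the track-record score is assessed separately."""
    return SCIENCE_WEIGHT * science_score + TRACK_RECORD_WEIGHT * track_record_score

print(combined_score(science_score=8.0, track_record_score=6.0))  # 7.4
```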
