Gender balancing staff recruitment: interviewing

In: EqualBITE
Author: Judy Robertson

I was so nervous before the interview for my present job that I was tempted to run away. I was to be interviewed by a panel of nine including the Principal, the Head of College, two heads of school and a motley collection of professors, one of whom had examined my PhD fifteen years previously. It was like a scene from The Lord of the Rings! I didn’t run away, of course, and managed to forget the experience until I was researching this book. While such intimidating interview panels are unusual even at Edinburgh, a panel-based interview following a presentation is fairly standard for academic posts. Put it this way: all the academic jobs I have ever applied for or recruited for have involved panel-based interviews. This is very expensive in terms of staff time so it is important that it pays off in terms of hiring people who will perform well in their jobs.

As I read the research on the topic, it became clear to me that the current interviewing practices at universities in the UK could do with a rethink.

Evidence from other industries suggests that current university procedures are likely not effective in selecting the best people for the job, irrespective of how intimidating they are to candidates. Here’s what Bohnet’s review of the research evidence recommends we should be doing instead (Bohnet, 2016).

Ingredients

  • Knowledge of the literature on bias in recruitment.

  • Willingness to challenge tradition.

Method

  1. Beware of a false sense of security: equality and diversity training. I have read a lot of Athena SWAN applications, and it is common to mention that interview panel members have all undertaken equality and diversity training as if this might be a magic bullet. Unfortunately, there is no evidence that equality and diversity training in general actually works (Wilson, 2011; Bohnet, 2016). We know from lab-based studies that there is a small positive effect for training interventions to reduce implicit prejudice overall (Lenton et al., 2009), but that it is by no means straightforward to get the training right (Lai et al., 2013). For example, someone who is aware they could be biased may try to suppress automatic stereotypes, which does not always help; in some cases it can make the stereotypes more salient and lead to an increase in bias (Bohnet, 2016). Receiving feedback that you are progressing towards the goal of being more egalitarian can increase your implicit bias and make you act in a more discriminatory way (Kim, 2003). Although there is a large literature on implicit bias, there is still a lack of evidence that reducing implicit bias reduces discriminatory behaviour in the short term, never mind in the longer term (Lai et al., 2013). Given that a lot of the evidence to date is from lab-based studies, we still don’t know how these effects will play out in the real world. In short, don’t make equality and diversity training for interview panel members your only way of ensuring equality when making hiring decisions.

  2. Beware of a false sense of security: a woman on the panel. Another common feature of Athena SWAN applications is a statement of a departmental policy in which at least one member of each gender is represented on an interview panel (usually to guarantee one woman on a panel of men). Presumably this is on the reasonable assumption that the presence of a woman on the panel will reduce bias. In the slippery world of unconscious bias, though, it pays to question common sense. There is some evidence that having women on a panel can prevent male panel members from making biased decisions (Zinovyeva & Bagues, 2011), but the female panel members themselves may act on their own biases. In a study of a large professional services firm, being interviewed by a woman hurt the success rates for the more competent women (who might turn into competitors) (Bohnet, 2016). A study of gender quotas on academic hiring committees in Spain indicated that junior women panel members operated as if they were in competition with applicants and were less likely than men to hire women at the same career stage (Zinovyeva & Bagues, 2011). Female professors did not do this, perhaps because they no longer feared same-sex competition and were looking for allies (Bohnet, 2016).

  3. Define “good fit”. The idea of recruiting someone who is a “good fit” for the department is seductive, but can be problematic if this shorthand phrase is ill-defined. For sure, hire someone who is collegiate or possesses other explicit qualities which are necessary for the job, but make sure that “good fit” doesn’t mean “someone like me” or “someone like everyone else who already works here”. Where there is ambiguity, biases thrive.

  4. Interview candidates in batches. If possible, recruit for multiple positions at the same time. Use comparative evaluation between candidates against explicit planned criteria because this focuses attention on the individual’s performance rather than on stereotypes about their group. Making multiple hiring decisions at once has the additional benefit of encouraging recruiters to embrace variety (Bohnet, 2016).

  5. Use structured interviews based on a checklist so that every candidate is asked the same questions, in order to reduce potentially biased subjective criteria. Create a scoring system for the questions in advance and decide how each question should be weighted. Put this into a structured interview form to assist with note-taking. Resist the temptation to deviate from the interview schedule. As Bohnet notes, “the data showing that unstructured interviews do not work is overwhelming” (Bohnet, 2016). Meta-analysis indicates that combinations of tests of general mental ability, work sample tests and structured interviews are the best predictors of future job performance (Schmidt & Hunter, 1998).
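The advance scoring scheme described above amounts to a simple weighted sum. As a minimal sketch, the questions, weights and scores below are purely illustrative, not taken from the chapter; the point is that the weights are fixed before any interviews take place:

```python
# Illustrative weighted scoring for a structured interview.
# Question names, weights and scores are hypothetical examples.

# Weights agreed by the panel in advance of the interviews.
weights = {
    "research_plan": 0.4,
    "teaching": 0.3,
    "collegiality": 0.3,
}

def weighted_score(scores):
    """Combine per-question scores (0-5) into one weighted total."""
    return sum(weights[q] * s for q, s in scores.items())

candidate_a = {"research_plan": 4, "teaching": 3, "collegiality": 5}
print(round(weighted_score(candidate_a), 2))  # 4.0
```

Recording the per-question scores on the structured interview form, rather than only the total, preserves the detail needed for the horizontal review in step 9.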

  6. Choose skilled interviewers to be on the panel. Check that the panel members have experience in interviewing, or have had some training. Subject expertise alone will not be sufficient.

  7. Don’t interview as a panel. Panel members should interview each candidate separately, with each interviewer focusing on the same set of competences each time. Compared to the standard procedure, this requires a similar time commitment from each interviewer, but ensures that they get a longer period of quality time with each candidate to ask relevant questions and form an opinion. This procedure results in more independent data points about the candidate which helps to make an informed choice later. It requires a longer time commitment from the candidate to attend a series of interviews on different topics, but people are usually willing to do a lot for that dream job!

    Each panel member should assign scores for every candidate straight away, to avoid bias creeping in when trying to recall the interviewee's characteristics later. Notes and scores should not be shared with other panel members until just before the review meeting, once everyone has written up their own notes.

  8. Consider other forms of assessment. You could ask candidates to perform a job-related task to demonstrate their competence more authentically. For example, a software developer might solve a programming problem, or a lecturer might teach a short “class” as if to first-year students. Ask another colleague, not on the panel, to anonymise the test results. The panel reviews them, sorts them and then compares those results with the interview ratings. This procedure has been used within the Institute for Academic Development at the University of Edinburgh, where it made a big and positive difference. On at least three occasions, the task results meant that the person who performed best at interview didn’t get the job - another candidate did. This has helped the Institute for Academic Development to make much better recruitment decisions.

  9. Review candidates together. Once the interviews are complete, the panel convenes as a group to review scores and make hiring decisions. It is good practice to compare responses horizontally across the structured questions as you would when marking exam scripts so that the halo of one good answer doesn’t influence the evaluation of an individual’s performance on other questions.
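The horizontal review described above can be sketched as iterating over questions rather than over candidates; the candidate names and scores here are hypothetical:

```python
# Illustrative "horizontal" review: compare all candidates on one
# question at a time, as when marking exam scripts, rather than
# reviewing each candidate end-to-end. Names and scores are made up.

scores = {
    "Candidate A": {"Q1": 4, "Q2": 2, "Q3": 5},
    "Candidate B": {"Q1": 3, "Q2": 4, "Q3": 3},
}

for q in ["Q1", "Q2", "Q3"]:
    # Rank every candidate on this question side by side, so one
    # strong answer can't halo the rest of a candidate's scores.
    ranked = sorted(scores, key=lambda c: scores[c][q], reverse=True)
    print(q, ranked)
```

Reviewing question by question makes it harder for a single memorable answer to colour the panel's view of a candidate's other responses.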

  10. Don’t settle for an “OK” candidate. It can be helpful to have a group member with the role of “bar raiser” to ensure that the group doesn’t settle for a comfortable consensus and hire someone unsuitable just to avoid reopening the candidate search. Academics on open-ended contracts will be around for a long time - don’t spend the next twenty years managing someone who was never right for the job!
