
I was very pleased to be invited by Andrew Dunn (editor) to write this article after my presentation at the British Psychological Society Cognitive Section Annual Conference at Liverpool Hope University in autumn 2018. I am very keen to publish full details of our testing procedures and follow-up outcomes in peer-reviewed journal articles, but this was at least second best. I have regularly been asked why I have not already published in peer-reviewed journals. Well:


  • There are not enough hours in the working week (or in the supposedly non-working week). Any of my students or staff who have worked with me will confirm that I rarely take time off, and I am slowly working through a backlog of publications. In short, I need an assistant to write this up.

  • In that vein, we have applied several times to different funding bodies to test large numbers of officers at different police forces and to follow up with field studies assessing the outcomes, but none of these applications has so far been successful.

  • None of the police forces or businesses that have funded testing of their staff have provided any funds for writing up the research outcomes.

  • Many participants tested in this manner do not provide permission for their anonymised data to be published. In one project, involving nearly 1,000 participants from a specific workforce, 90% refused consent.

  • It would be extremely difficult to assess police successes and link them to testing outcomes in a properly designed study. As an analogy, my PhD reviewed the literature examining whether CCTV systems deter crime. The data were very convoluted and often uninformative, finding a minor positive impact at best – possibly primarily because changes to the entire local environment were implemented at the same time. The same is true of businesses and police forces deploying super-recognisers. The workplace, systems, and procedures all change at once, so there can be no proper control groups and no comparison with previous working practices. The data are therefore, by their very nature, anecdotal or correlational rather than experimental.

  • I was involved in one large field study, funded by the Nuffield Foundation for £100,000 in 2006-2008, that could provide a model for such research. It examined police use of street identifications. I was a postdoctoral researcher on the project for two years, and the data analyses were highly problematic because it was hard to ‘group’ case outcomes systematically – criminal cases differ vastly. In many ways, the data collected in that study were probably more amenable to quantitative analyses than, for instance, attempts to assess the ability of a super-recogniser to identify real suspects in real crowds.

  • On the other hand, one paper that was published was linked to police successes at the Metropolitan Police Service (MET). Data on suspect identification rates had been supplied by the MET for a subset of the police officers in the project. Fifty-three officers had identified 3,739 suspects – and although not reported in the paper, more than 50% of these suspects will eventually have been charged. There were no significant correlations between test outcomes and suspect identification rates, probably because all of these officers were super-recognisers or had expressed interest in joining the super-recogniser team; there were no police controls.


However, I would be very keen to be involved in such a project in the future, and these ideas were discussed at the Unfamiliar Face Identification Group Conference in Sydney in February 2019. The aim for interested attendees was to find a way to present super-recogniser evidence in court on a level with forensic evidence. Although in my research I have consistently stated that super-recognisers should be treated legally like any other potentially flawed eyewitness (see Davis et al., 2016; 2018), some of these novel ideas need exploration.

Davis, J. P. (2019). The worldwide impact of identifying super-recognisers in police and business. The Cognitive Psychology Bulletin, 4, 17-22. ISSN: 2397-2653.
