Face and Voice Recognition Lab
School of Human Sciences
Institute of Lifecourse Development
University of Greenwich
London
Version 5: 30 April 2024
University of Greenwich Face and Voice Recognition Lab
Volunteer Research Participant Pool
Three-part blog: An explanation
This blog is separated into three parts because the large number of figures we have created to illustrate the distribution of volunteer scores on each test exceeded the maximum space allowed by our website provider. We also wanted to embed answers to the many regularly emailed questions about super-recognisers and our research test procedures, and these answers also took up substantial space.
Part 1 of the blog, "Membership of the Greenwich Volunteer Pool", provides a brief history of the volunteer pool, the processes by which people can join, and what to expect if you do join. We also include reasons for the criteria we use for super-recognition in our research. Some demographic and other information describing the volunteer pool members is also included.
Part 2 of the blog, "How do your scores on our tests compare with volunteer pool members and the population in general?" is where you need to go if you want to compare your scores on tests with those achieved by other members of the volunteer pool and representative samples of the general population.
Part 3 of the blog, "Super-recognition in the lab, the workplace, and the world", was still in preparation at the time of writing (29.04.2024). It will contain some in-depth information about our research consultancy work with policing and businesses. There will also be information describing the super-recognition criteria we use in workplace projects, why these criteria differ between workplaces, and why they will, of necessity, always differ from those used in research.
Blog 1: Membership of the Greenwich Volunteer Pool
Table of Contents
1. Introduction and important information
2. A brief history of the lab and the development of the volunteer pool
3. Who joins the Greenwich Volunteer Pool?
4. The demographics of the Greenwich Volunteer Pool
5. Request for help: Please help by uploading photos for inclusion in new tests
Appendix A: Information for people who have not accessed our website before
1. Introduction and important information
Thank you to members of our volunteer pool who continue to contribute to our research. We are regularly asked, “what is a good score on these tests?” and “am I a super-recogniser?” We hope this blog will help to answer these questions and provide some interesting information about our research investigating face and voice recognition ability.
New visitor? If you are a new visitor to our website and you wish to take our introductory face and voice recognition tests and/or contribute to our research, please read the information in Appendix A. You will need to take the Three Tests of Face Recognition in order to be issued with an Anonymised Code. You will need an Anonymised Code to access the Greenwich Test and Research Distribution Platform.
Available languages: Currently, most of our face identity processing tests are available in 10 languages (English, Dutch, French, German, Italian, Norwegian, Portuguese, Romanian, Russian, Spanish) and we will send e-mails in those languages too. However, the voice identity processing tests have been translated into English and German only, while a substantial proportion of our research is available in English only (students do not have funds to pay for translation services). At the time of writing, we have additional test documentation prepared in Arabic, Hindi, and Polish. A new language normally takes 7-14 days to enter into the system.
On the Greenwich Test and Research Distribution Platform you may edit your preferred language option for emailed research invites and project instructions. You may also select a preferred language, while additionally providing consent for us to send emails in English, when your preferred language is not available.
For everyone: We would like to introduce everybody to the new Greenwich Test and Research Distribution Platform. Almost all the tests listed in this blog are available on this platform, which you can access at any time via the URL link below. The Platform will be loaded with an ever-expanding battery of face and voice identity processing tests, links to our current research projects, methods to edit the information we hold about you, and downloadable information sheets.
Anonymised Codes: Once you have completed our introductory tests, you will be issued with a unique Anonymised Code. To access the Greenwich Test and Research Distribution Platform, please click on the link above and enter your Anonymised Code into the authenticator system. You will find your code in red font, on the top left of many of our e-mails. Please do not share this code with anyone else or use more than one code. These codes protect the integrity of our system by allowing us to match your scores on our tests and in our research, while protecting your privacy and anonymity. The use of more than one code may mean the information we retain about you will be incorrect.
Friends and family: If you would like a friend or family member to take the tests, please complete the Friends and Family questionnaire loaded on Page 3 of the Test and Research Distribution Platform. We will send you a new Anonymised Code to forward to them.
Anonymised Code not accepted? All anonymised codes have been loaded into the authenticator system of the Greenwich Test and Research Distribution Platform. If you cannot access the system and you have been a member of the volunteer pool for more than a few weeks, the most likely explanation is an interconnectivity failure between your laptop, browser, and the test system. Using a different browser may resolve the problem.
If you are a brand-new member of the volunteer pool (i.e., you joined in the past week), your code may not yet be loaded into the authenticator system, which controls access. We download test data and update the authenticator system once a week, so please wait a few days. If access is denied for more than 2-3 days, please contact us at super-recognisers@greenwich.ac.uk.
Voice identity processing test failures: Please note, the voice identity processing tests only work via a limited number of browsers (e.g., Google Chrome) and delivery systems (i.e., they are unlikely to work on most mobile phones due to restrictions built into the software).
The reliability of first attempts on a test and calculating z-scores for workplace projects: Participants may retake the tests as many times as they like, although we normally use the first attempt score for our face recognition ability estimates. First, it is usually the most reliable score, as everyone takes the first attempt in similar conditions: no one will have seen the faces before, and all our key tests measure unfamiliar face recognition ability. Second, no one will be familiar with the structure or design of the test, even if it is similar to other tests. Familiarity with the faces or the test design will make a test seem easier, and, not surprisingly, second attempt scores are normally higher than first attempt scores. However, an improved score does not mean that underlying face recognition ability has improved. We will occasionally use second or later attempt scores in our calculations if the evidence strongly suggests that the first attempt score was anomalously low, and we ask participants to let us know at the time of taking the tests if they had a problem.
No mobile phones please: Most tests in our battery require participants to confirm that they are not using a mobile phone, as we want to discourage their use. Compared to laptops or PCs, mobile phone use is reliably associated with lower scores, as features on faces are harder to spot. If you did use a mobile phone when taking our tests, you are welcome to take all of them again. All we ask is that you respond when prompted that this will be your second (or later) attempt because you used a mobile phone the first time(s). Even so, you will still be at an advantage on later attempts, for the same reasons described above.
Limitations of tests of face recognition: It is important to note that when face recognition tests are used in research investigating individuals who occupy the low end of the face recognition ability spectrum, it is surprisingly common for some prosopagnosics, who suffer extreme face recognition deficits in real life, to score substantially higher than prosopagnosic cutoffs on face identity recognition tests. It is probable that to achieve these scores they use strategies, such as focussing on a single facial feature to familiarise themselves with a face, that are effective in a test, but would not be effective in the real world. As such, their scores do not reflect their true ability.
There may be other explanations, but readers should be aware that all tests have limitations, even when taken in highly controlled laboratory and/or clinical environments. They provide an estimate of ability only, although a consistent pattern of results across multiple tests is likely to be far more accurate than an estimate based on a single test.
The same limitations will operate on those at the top end of the face recognition spectrum. We occasionally receive emails from individuals who claim super-recognition skills in the real world and, although we cannot check veracity, they often provide anecdotal descriptions of unusual feats of recognition that are consistent with accounts from super-recognisers. However, when they have taken our tests, their scores are substantially lower than they hoped for, and they do not meet our super-recognition standard. One explanation could be that these individuals do not possess super-recognition skills and simply overestimate their ability. However, our view is that it would be extremely unlikely for the limitations of tests to be restricted to the low end of the ability spectrum only. Therefore, we always respond by describing the potential limitations of tests, and by noting that their superior skill sets may be restricted to the identity recognition of humans in real life and may not transfer effectively to photos and videos. A second explanation, assuming these people are genuine super-recognisers, is that the time allowed to view a face in tests is simply not sufficient for their skillsets. It is feasible that they require substantially more time than is typical to generate a representation of a face in memory. However, once the required level of exposure has been achieved, they may reliably recognise that face months, years, or even decades later.
Ethics associated with the volunteer research participant pool: The research associated with creating the Greenwich Face and Voice Recognition Lab research participant pool database was approved by the University of Greenwich Research Ethics Committee in 2015 and has been updated several times since. All participants give their consent for their data to be stored. Participants' email addresses are stored separately in a second repository so that they can be invited to participate in future research. When the EU General Data Protection Regulation (GDPR) came into effect in 2018, all participants were reminded by email that we had saved their data; if there was no response, the corresponding data were deleted. We have been conducting a similar exercise in 2023 and 2024, and we expect the database of e-mail addresses to be reduced. Participants who have contributed to the tests since 2018 occasionally receive information about GDPR and about how to withdraw their consent to receive e-mail invites at any time.
More information about our ethical and data protection procedures can be found here.
New research collaborations: We are always keen to discuss appropriate collaborative research projects with any reputable university or other organisation, particularly, but not exclusively, if the research helps us to better understand human face recognition. However, we will not collaborate if there is a risk that the research might be exploitative. Most importantly, our ethical protocols restrict the type of information that researchers external to the Greenwich Face and Voice Recognition Lab may collect. For instance, it would not be possible to request contact information such as email addresses; all communications with participants must be via the University of Greenwich. We monitor the conduct of all collaborative research in case of accidental inappropriate practices. Note – all research must be approved by an institutional ethics committee and noted by the University of Greenwich Research Ethics Panel, or approved directly by the University of Greenwich Research Ethics Panel alone.
If you would like to discuss a potential research collaboration with the Greenwich Face and Voice Recognition Lab, please email super-recognisers@greenwich.ac.uk
2. A brief research programme history and the development of the volunteer pool
University of Greenwich super-recognition research and the media
The super-recogniser research programme of the University of Greenwich officially started in April 2011. A small team of about 20 Metropolitan Police Service (MPS) officers, each of whom had made a disproportionately large number of suspect identifications from CCTV images (i.e., at least 15 identifications in 12 months), was asked to take a series of four face recognition tests and one object recognition test. This was the first research in the world to test super-recogniser police officers. Over the next few months, additional police were tested, as were a group of age-, gender-, and ethnicity-matched controls. The results showed that many (but definitely not all) of this unique pool of police officers, who by the end of the project were described as super-recognisers by their employer, generated significantly higher scores on the face recognition tests than the control group (Davis, 2013; Davis et al., 2013; Davis et al., 2016).
Over the next few years, the MPS super-recognisers were deployed to a variety of roles aiming to best capitalise on their skills, and suspect identification rates steadily increased. As part of a research consortium from eight European Union countries, the MPS and the University of Greenwich team were funded by the European Commission. One aim of the €8,000,000 LASIE project (2014-2017) was for the University of Greenwich to develop new tests of super-recogniser ability, so as to identify more police officers with the skill. Two test batteries were developed. Informed by discussions with super-recogniser police and by their operational work, one strand was used to test police to assist with deployment decisions for roles best filled by super-recognisers. A parallel aim was to conduct research with non-police participants to investigate the capabilities of super-recognisers. Not all tests are used in both contexts.
An in-depth history of the Face and Voice Recognition Lab and its impact can be found in Davis (2020). The chapter also describes some of the research conducted by Professor Josh P Davis before his first research on super-recognisers.
Current online procedures
Could you be a Super-Recogniser Test
One of the first tests created was the Could you be a Super-Recogniser Test. This 14-trial, five-minute fun test was first loaded onto the Qualtrics platform in April 2015, in advance of the publication of a link in a BBC Future (11 June 2015) article on MPS super-recognisers. The test went viral and was picked up by worldwide media including the Daily Mirror (15 June 2015), the Daily Mail (15 June 2015), and the New York Times (16 June 2015). By the end of June 2015, more than 1,000,000 participants had taken the test.
The media continued to follow super-recogniser stories. Professor Davis has been interviewed on TV more than 100 times. On average, an article mentioning super-recognisers is published somewhere in the world at least once a day. Approximately one-in-four mentions the University of Greenwich. About once a week, a direct link to the Could you be a Super-Recogniser Test will be published.
On a typical day in 2024, 200-400 participants take the Could You be a Super-Recogniser Test. Once a week, daily contributions exceed 1,000. Occasionally, when a linked media article is published, numbers increase to a few thousand in a day (the highest was more than 100,000 in one day in June 2015), and then they tend to gradually decline over the course of about a week. However, this test is anonymous, and information about participants is limited, meaning a full set of results has never been published. Despite this, 7,000,000 participants had taken the test by 2023 (see Appendix B).
Three Tests of Face Recognition
When they finish the Could You be a Super-Recogniser Test, all participants are invited to take the Cambridge Face Memory Test: Extended (Russell et al., 2009), the Glasgow Face Matching Test (Burton et al., 2010), and the Short-Term Face Memory Test 30-60 (Robertson et al., 2019). Participants are also asked to consent for us to save their data for research purposes and we also request some demographic information.
Participants receive their test scores upon completion, and at this point we invite everyone who is at least 18 years of age to join the volunteer pool. Those who do volunteer are asked for permission to save their test scores for use in future research projects. Perhaps not surprisingly, those who score the highest are the most likely to be among the approximately 20% who agree to volunteer. New volunteers receive invites to take some of the tests loaded on the Test and Research Distribution Platform, which are listed below, and they are referred to this blog for more information. Some participants receive information about one of our clients, Super-Recognisers International.
Emails and delays in communications
Please note – the irregular flow of participants described above is associated with unexpected, seemingly random surges in the number of emails we receive, as we are never told in advance when the media will publish an article. As a consequence, we are sometimes very slow in answering your emails, and we apologise if you have to wait. If you take our tests after reading an article on super-recognisers, your email will be one of many. However, please do not think we are asking you to stop sending emails. Far from it. Some emails describe very interesting insights and anecdotes about your experiences, neurodiversity, and skillsets, and these help to generate ideas for future research. We also often receive repeat emails from volunteers about procedural issues, and we do our best to keep up with emails reporting problems with the tests. However, we sometimes have to prioritise other work, as we receive no funds to support this service.
3. Who joins the Greenwich Volunteer Pool (n = 56,160)?
At the time of writing, there are 56,160 members of the Greenwich Volunteer Pool. At one time there were over 100,000, but when GDPR was introduced in 2018, all were e-mailed to check they wished to remain on the database; if no reply was received, their contact details were deleted. The pool was reduced to about 38,000 and has been growing ever since, even though we have conducted additional GDPR-associated deletion exercises. More than 25,000 members have taken part in at least one project in the past two years, although the exact number will be higher, as about 50% of our research is anonymised and we cannot tell who did or did not contribute.
Face recognition superiority of the Greenwich Volunteer Pool
Not surprisingly, the key feature of the volunteer pool is superior face recognition ability, even though a few members describe themselves as prosopagnosics and their scores support this opinion. As an example, Table 1 displays the Greenwich Volunteer Pool’s first attempt scores (n = 55,893) on the Cambridge Face Memory Test: Extended (CFMT+) (Russell et al., 2009). The information in this table describes the distribution of scores on the CFMT+ that would be expected if the Volunteer Pool sample was drawn at random from the population, alongside the often far higher numbers who actually achieved these standards.
The information in Table 1 is mostly described using percentiles, as readers may be more familiar with data expressed in this manner. We also describe the data in terms of z-scores, both in this table and throughout this blog. As a reminder, to standardise any set of data, z-scores are calculated using the following equation:
z = (participant score – sample mean)/sample standard deviation
Assuming data on a test are normally distributed and standardisation has been conducted in this manner, it is possible to predict the proportion of a sample likely to achieve a given cut-off point. For instance, 68% would be expected to achieve a score within 1 standard deviation of the mean (34% above the mean, 34% below the mean). Similarly, 95% and 99.7% would be expected to achieve a score within 2 and 3 standard deviations of the mean respectively.
A score 2 standard deviations above the mean (z = 2.00) on a face recognition test has commonly been used to denote the cut-off point for inclusion in super-recogniser groups, while 2 standard deviations below the mean (z = -2.00) is the common standard for prosopagnosia (face blindness). From the information above, readers will be aware that z-scores of +/- 2 are expected to be achieved by 2.5% of a sample. In media reports, this value has commonly been rounded down to 2% when describing the proportion of the population expected to achieve super-recognition standards.
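As a rough illustration (not part of the lab's own tooling), the standardisation and tail proportions above can be sketched in a few lines of Python, using the CFMT+ norms (M = 70.7, SD = 12.3) reported in Bobak et al. (2016); the function names are ours:

```python
from math import erf, sqrt

# CFMT+ population norms reported in Bobak et al. (2016)
MEAN, SD = 70.7, 12.3

def z_score(score, mean=MEAN, sd=SD):
    """Standardise a raw score: z = (score - sample mean) / sample SD."""
    return (score - mean) / sd

def proportion_above(z):
    """Proportion of a normally distributed sample expected to score above z."""
    return 0.5 * (1 - erf(z / sqrt(2)))

# A CFMT+ score of 95.3 is exactly 2 SDs above the mean
z = z_score(95.3)        # -> 2.0
p = proportion_above(z)  # -> ~0.023
```

Note that the exact proportion above z = 2 is about 2.3%; the 2.5% figure used here follows from the common approximation that 95% of scores fall within 2 (rather than 1.96) standard deviations of the mean.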
Table 1 lists the participant numbers expected to achieve a series of different percentile standards if the distribution of volunteer scores was roughly equivalent to that expected from a randomly selected sample of the population. These expectancies are based on the results of 254 participants (M = 70.7, SD = 12.3) described in Bobak et al. (2016). The closest equivalent CFMT+ scores and z-score values are also reported.
To demonstrate the extent to which the volunteer pool outperforms these expectancies, we describe the actual number and percentage of participants who achieved each percentile value. For instance, 559 volunteers would be expected to achieve a score that puts them in the top 1% of the sample (99 out of 102, associated with a z-score of 2.33), while 5,589 would be expected to achieve a score in the top 10% (86 out of 102, associated with a z-score of 1.29). However, substantially more than this, 1,279 (2.29% of the total volunteer sample) and 28,782 (51.5%), actually achieved the 1% and 10% standards respectively.
Table 1: Expected and actual participant numbers achieving a range of different CFMT+ (out of 102) based percentiles.
Target (%) | z-score | CFMT+ (Score) | Expected (n) | Actual (n) | Actual (%) | Superiority (OR) | Superiority (%)
1 | 2.33 | 99 | 559 | 1279 | 2.29 | 2.29 | 1.29
2 | 2.05 | 96 | 1118 | 5493 | 9.83 | 4.91 | 7.83
3 | 1.88 | 94 | 1677 | 9901 | 17.71 | 5.90 | 14.71
4 | 1.75 | 92 | 2236 | 14762 | 26.41 | 6.60 | 22.41
5 | 1.65 | 91 | 2795 | 17284 | 30.92 | 6.18 | 25.92
10 | 1.29 | 86 | 5589 | 28782 | 51.49 | 5.15 | 41.49
20 | 0.84 | 81 | 11179 | 37782 | 67.60 | 3.38 | 47.60
30 | 0.52 | 77 | 16768 | 42766 | 76.51 | 2.55 | 46.51
40 | 0.26 | 74 | 22357 | 45829 | 81.99 | 2.05 | 41.99
50 | 0.00 | 71 | 27947 | 48296 | 86.41 | 1.73 | 36.41
60 | -0.26 | 67 | 33536 | 50799 | 90.89 | 1.51 | 30.89
70 | -0.52 | 64 | 39125 | 52198 | 93.39 | 1.33 | 23.39
80 | -0.84 | 60 | 44714 | 53697 | 96.07 | 1.20 | 16.07
90 | -1.29 | 55 | 50304 | 54832 | 98.10 | 1.09 | 8.10
100 | NA | NA | 55893 | 55893 | 100.00 | 1.00 | 0.00
The final columns on the right of Table 1 display the odds ratios associated with the numbers achieving the various standards and the percentage difference between expected and actual numbers. The odds ratios in the 1% and 10% rows indicate that 2.29 and 5.15 times as many volunteers achieved these standards as would be expected from a randomly selected sample of the population. The most extreme odds ratio (OR) is associated with the top 4% standard, which more than 6 times as many volunteers achieved than would be expected if test scores were equally distributed.
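For readers who wish to check the arithmetic, the superiority columns of Table 1 can be reproduced from the counts alone. This is a minimal Python sketch; the pool size and counts come from Table 1, and the function names are ours:

```python
# Reproducing Table 1's superiority columns: the expected count assumes
# scores are distributed as in a random population sample, and the ratio
# compares the actual count with that expectation.

POOL_SIZE = 55893  # volunteers with first-attempt CFMT+ scores (Table 1)

def expected_n(percentile, total=POOL_SIZE):
    """Number expected to reach the top `percentile`% in a random sample."""
    return total * percentile / 100

def superiority(actual, percentile, total=POOL_SIZE):
    """How many times more volunteers reached the standard than expected."""
    return actual / expected_n(percentile, total)

# Top 1%: 1,279 volunteers scored 99+, against an expected ~559
print(round(superiority(1279, 1), 2))    # 2.29
# Top 10%: 28,782 scored 86+, against an expected ~5,589
print(round(superiority(28782, 10), 2))  # 5.15
```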
Privacy and anonymity
One of the most important features of the volunteer pool is that, to protect privacy and anonymity, we retain very little information in our volunteer database about those who contribute to our research. Apart from e-mail addresses, age, gender, ethnicity, country, language, and scores on cognitive tests, we know virtually nothing about any of the volunteers.
That being said, we may conduct research that asks for a substantial amount of extra information about you. However, information collected in individual projects is safely siloed in separate password-protected databases and remains in that silo unless we explicitly ask in advance for your consent to use it in a second, separate project (which rarely happens, as we do not normally have ethical approval to do so). We fully anonymise all data in a project silo as soon as is feasible at the end of a study, so it is not possible to cross-reference back to the volunteer database.
As such, no project-specific information is ever stored in our main volunteer database, which contains scores on face and voice recognition tests. If participants consent for us to use face recognition test scores taken in the past in a new project, we extract the scores from our volunteer database using their Anonymised Codes and maintain all data in the project-specific siloed database. We would never add project data to the volunteer database, even temporarily for convenience.
Therefore, when we describe features of the volunteer pool below, we are normally describing research that has been conducted on samples of fewer than 1,000 volunteers, or trends in the content of emails we have received from volunteers in the past.
The neurodiversity of the volunteer pool
A substantial proportion of emails from volunteers contain information about their neurodiversity, negative or positive lifetime experiences, the possession of trained or innate skills, and whether family members share similar characteristics. Over the past 10 years, we have received a few thousand emails of this type; most have a word count of about 100-200, although some run to a few thousand words. As noted above, we do not have ethical consent to save emails against the information retained in the volunteer database, so it is not possible for us to analyse patterns in these emails.
However, Professor Josh Davis reads all these emails and responds to the vast majority. Many ask whether a particular neurodiverse characteristic that is sometimes used to describe that volunteer is shared with other super-recognisers or the volunteer pool in general. The most common feature of these emails is that the characteristics described are unique, or have been described only two or three times. Even when a more common characteristic is mentioned (e.g., dyslexia, ADHD, autism, anxiety, depression, PTSD, high IQ), it is almost always something found in a substantial minority of the population and would be expected to be similarly distributed in a volunteer pool of more than 50,000.
However, these are anecdotes, and even though at least 3,000-4,000 emails of this type have been received, we know nothing about the vast majority of volunteers, and there may be a large number of common traits and characteristics in people with superior face and/or voice recognition of which researchers have no knowledge.
Medium-term project aim: One medium-term aim of our research programme will be to send invites requesting volunteers provide details of the co-occurrence of neurodiverse characteristics and/or other relevant information that they believe may be linked to their ability to recognise other human beings. Much of this information may be highly sensitive, and volunteers who are risk-averse (or simply sensible) may be hesitant when deciding whether to contribute.
We will need additional ethical approval and to devise a protocol to ensure that volunteers can be confident we will maintain our data protection standards, so that this research is as risk-free as feasible. There would be no point starting such a project if a substantial proportion of those we hope will participate view the research as too risky. A project may be most feasible by working with an external agency that collects all data anonymously; volunteer-participants would provide additional consent for the University of Greenwich to anonymously share test score data with that agency.
Representative in terms of personality, attributions, and attitudes
More than 25 published and unpublished research projects have recruited substantial numbers of Greenwich Volunteer Pool members. Some of this research has investigated topics that have nothing to do with face and voice recognition. For instance, volunteers have been asked to complete questionnaires measuring personality, psychological traits, and attributions towards various individuals or groups. Outcomes have been fairly typical and similar to those expected from participant samples that are representative of the population (e.g., Foster et al., 2022; Macintosh & Davis, 2021; Satchell et al., 2019). None of this research has suggested that, apart from superior skills at face and voice recognition (e.g., Jenkins et al., 2021), membership of the volunteer pool is linked to any unusual characteristics. However, there is clearly a self-selection bias at work: super-recognisers (even though rare in the population) who are members of the volunteer pool tend to contribute to our research more often than non-super-recognisers, even when that research investigates topics not associated with face and voice recognition.
For this reason, please do not be deterred from contributing to our research if your test results appear disappointing in comparison to others. If we invite you to contribute, you will be eligible, and we will always be very grateful for your contribution. Indeed, we always need contributions from volunteers who do not score in the super-recogniser range. We will never invite someone who is ineligible. We do not want to waste anyone’s time.
Representative range of political views, although occasional subject experts volunteer
We try to ensure that our data collection methods are as politically neutral as possible and that any information we provide does not bias the responses of participants. From volunteer emails, we believe that the volunteer pool contains people with a wide range of political views and opinions, roughly representative of society. However, participants may sometimes be assigned to different conditions in research, and it is possible that the experience of contributing in one condition may be interpreted as biased in one direction. If so, we hope that a second condition introduces an equal bias in the opposite direction, thus maintaining a neutral perspective overall.
Given that most of our research has links to reducing or investigating crime and criminal law, it is not surprising that a disproportionately high number of emails, and sometimes responses to research questions asking for more information about a topic, are from volunteers who work in policing or the law, or who are students intending to move into those professions. Given that a large proportion of volunteers are over 30 years old, many students are studying for higher degrees and have a wealth of knowledge and experience to draw on. We sometimes receive comments from subject experts, while some police officers have provided redacted descriptions of investigations to which they have contributed.
As described above, we never retain information provided by email in the same data silo as information collected in a project. All comments provided by volunteers contributing to our research are anonymous. We are always extremely grateful to anyone taking time out to provide sometimes highly detailed helpful descriptions.
3. The demographics of the Greenwich Volunteer Pool
Please check that the demographic information embedded in your invite email is correct. You may edit this information if you enter the Greenwich Test and Research Distribution Platform and locate the appropriate files.
Most volunteers have provided demographic data. Where data are missing, it is because we did not always ask whether we could indefinitely store these data when we created the volunteer pool. The reliability of these data is important, as demographic factors can impact scores on different face recognition tests. Most people are aware of the cross-ethnicity effect (e.g., see Meissner & Brigham, 2001 for a review): people tend to be better at recognising faces of their own ethnicity than of other ethnicities. Some research has found similar cross-age (e.g., Anastasi & Rhodes, 2005) and cross-gender effects (e.g., Herlitz & Lovén, 2013). These effects are partly driven by levels of out-group contact (Meissner & Brigham, 2001).
With collaborators we have investigated the cross-age (baby faces) (Belanova et al., 2018) and cross-ethnicity effects (Carroll et al., 2021; Robertson et al., 2020) in super-recognisers. In general, super-recognisers score significantly higher on both own-group and other-group tests than those of typical ability from the same demographic group. However, super-recognisers are impacted by the cross-ethnicity face recognition deficit, just like everyone else.
Country
Volunteers live in 186 countries. Table 1 lists the top 30 countries represented in the volunteer pool. Not surprisingly, as the University of Greenwich is in London, the largest proportion are from the UK. However, numbers from Germany will probably overtake those from the UK in 2025.
Table 1: The 30 countries most commonly represented in the volunteer pool.

| Rank | Country | % | Rank | Country | % |
| --- | --- | --- | --- | --- | --- |
| 1 | UK | 21.2 | 16 | Finland | 0.8 |
| 2 | Germany | 19.7 | 17 | Romania | 0.7 |
| 3 | USA | 9.6 | 18 | Portugal | 0.6 |
| 4 | Brazil | 4.8 | 19 | Poland | 0.6 |
| 5 | France | 4.4 | 20 | Ireland | 0.6 |
| 6 | Australia | 2.8 | 21 | Jamaica | 0.6 |
| 7 | Russia | 2.8 | 22 | Denmark | 0.6 |
| 8 | Spain | 2.3 | 23 | India | 0.5 |
| 9 | Canada | 2.0 | 24 | Belgium | 0.5 |
| 10 | Switzerland | 1.5 | 25 | Ukraine | 0.5 |
| 11 | Sweden | 1.5 | 26 | New Zealand | 0.5 |
| 12 | Norway | 1.1 | 27 | Taiwan | 0.4 |
| 13 | Austria | 1.1 | 28 | Argentina | 0.4 |
| 14 | Mexico | 0.9 | 29 | Greece | 0.4 |
| 15 | Netherlands | 0.8 | 30 | Philippines | 0.4 |
Before 2018, China (including Hong Kong) ranked fourth. Virtually no members from China remain, possibly a consequence of being unable to access western websites. At the opposite extreme, when the pool was first established, 17 volunteers gave Antarctica as their temporary home. One emailed to say they were working at a scientific base at the South Pole, while another participant once emailed from the Arctic ice cap. They took the tests while sheltering from a storm and commented that they could hear a polar bear outside.
Gender
Of those volunteers who completed our gender question (n = 49,517), 38.2% responded male and 61.3% female.
Age
The mean age of the pool, at about 40 years, is far higher than that of the typical student samples recruited to a high proportion of psychological research projects (Figure 1). As a group, the pool is also ageing, which is not surprising as new members constitute only a small proportion of the total.
Figure 1: Age distribution of volunteer pool
Ethnicity
The vast majority of members of the pool describe their ethnicity as White (81.8%). Mixed ethnicity is the second most common (4.2%), followed by Hispanic (2.0%). Although the proportion of White participants is close to the proportion of White people in the population of England and Wales (86%; Gov.uk, 2020), many ethnicities are underrepresented given the international nature of the pool.
One reason may be that the tests described in this blog contain only White faces. We may be inadvertently deterring people of other ethnicities from contributing. We do employ tests containing faces of other ethnicities in our work with police forces and businesses for job deployment purposes. However, we restrict access to these tests to ensure that no one taking them gains an advantage from having had the opportunity to practise.
4. Request for Help
Limitations of the Greenwich Volunteer Pool
One of the main limitations of using the Greenwich Volunteer Pool for research purposes is its lack of ethnic diversity. To correct this, over the next six months we would like to introduce new tests containing representative samples of faces from different regions of the world. We hope this will encourage more participation. Three tests are ready and will be loaded in May 2024. However, we need to provide a similar range of tests containing faces of different ethnic groups to the range we already have for White faces.
To achieve this we need more facial images. We simply do not have enough images from most ethnicities to create new tests. If you are a member of one of our underrepresented ethnicities, can you please help by donating photos of yourself, each taken in a different year? We can offer a £5 Amazon voucher as compensation. We have an easy-to-use system for uploading your photos.
When volunteers consent to the use of their photos in our research, we believe we are following best ethical practice, as they will be the most informed people possible as to the type of research their photos might be used in. Unfortunately, when developing new tests, we need many more photos than we are ever likely to use. This is because to challenge super-recognisers, we often need photos of ‘doppelgangers’ who are most likely to be mistaken for one another. However, doppelgangers are rare in the population in general, and the same is likely true of our volunteer pool. We can never predict in advance which photographed faces already stored in one of our databases will later be matched with a doppelganger. This process is seemingly very random.
We are currently creating Multi-Ethnic Age-Distanced Face Tests. The task for participants taking these tests will be to view pairs or small groups of facial images on a screen, and then to decide whether the images are of the same person or of two or more different people. To make the task harder, face age will vary, as we are interested in whether there are individual differences in the ability to identify age-distanced faces. We hope to create different tests for different ethnic groups.
Therefore, please donate 8-12 photos of yourself, each taken in a different year. Social media posts are often helpful when trying to work out when photos were taken. More information and access to the Photo Upload System can be found here:
Appendix A: Information for people who have not accessed our website before.
The information in Appendix A is for new visitors to our tests or website who wish to contribute to our research programme to find out how good they are at face recognition.
Could you be a Super-Recogniser Test (CYBSRT)
We recommend everyone starts by taking the "Could You Be a Super-Recogniser Test". It has been taken by more than 7 million people. It is designed to be fun, and no one could be diagnosed as a super-recogniser based on a 14-trial, 5-minute test. It is a good warm-up, though, as participants become familiar with the requirements of our test systems. Nonetheless, with surprisingly large effect sizes, performance on this test positively correlates with performance on other, more reliable tests.
Please note that there are no age restrictions on this test, as we do not retain any details about those who take it, nor use its scores for research purposes. Many children take the test, and because everyone is anonymous to us, we will never know. We do know that children as young as 5 years of age have taken this test, but we normally suggest it is suitable for children of 8 years and above.
Avoid mobile phones: Please avoid using a mobile phone, to optimise performance; some images will be too small for accurate decision-making. We always ask if we can save first-attempt scores on the three tests, as these are used to attribute ability for research. Subsequent attempts on a test will improve scores due to practice, but your ability will remain the same.
Three tests of face recognition
We always ask people to take the three much more reliable tests in our battery, available at the next link. They take 25-45 minutes in total, although you can take a break between tests (1-2 days), as long as you use the same laptop and click on the same URL link.
Children will need the consent of a parent or guardian to take these tests, and they will themselves be asked to provide assent. We provide two routes through the three tests, either of which anyone can take. Children as young as 5 years have taken the CFMTYP, which is an easier version of the CFMT+. Again, however, we would suggest that 8 years is a more sensible cut-off.
Test 1
Cambridge Face Memory Test: Extended (CFMT+) (Russell et al., 2009) (102 trials)
Cambridge Face Memory Test for Young People (CFMTYP) (Russell et al., 2009) (85 trials)
Participants can select which of the two tests they would prefer to take.
Test 2
Glasgow Face Matching Test (GFMT) (Burton et al., 2010)
Test 3
Short-Term Face Memory Test (30/60) (STFMT3060) AKA Adult Face Recognition Test (Robertson et al., 2019).
When you finish the three tests you will receive your scores and be issued with an Anonymised Code, which you can use to access our other tests and information. It is not possible to access the systems without the code. We request email addresses from adults interested in taking more tests or contributing to our research. For ethical reasons, children cannot leave an email address. It is, however, possible to arrange for parents to receive information on behalf of their children if they wish to consent each time their child takes more face tests. Provided they use the correct parent-approved route through the test system, children cannot access information about our wider research programme.
Tests for police deployment and business recruitment
These three tests are reasonably reliable indicators of ability. However, we expect people applying for super-recogniser jobs or working in law enforcement to take a far larger battery of tests, with faces from different ethnicities and with longer delays between phases. We have now worked with more than 30 international police forces.
We also work with one UK company, Super-Recognisers International, to administer these tests. It was set up by a retired Metropolitan Police Detective Chief Inspector who created the world's first Police Super-Recogniser Unit at the old New Scotland Yard. You could contact them if you are interested in taking this further; they will charge for the service. https://superrecognisersinternational.com/
Volunteering for research: On completion of the three tests, you will find more information about how to volunteer for further research (mostly online) and, if interested, how to have your details stored in our volunteer research pool database so that we can send you information about other opportunities that might arise. If you do, you are committing yourself to nothing more than about six invitations to research projects per year. You can ignore them all until one arrives that interests you. We are always very grateful to our volunteers.
Comments