PLOS ONE: Identifiable Images of Bystanders Extracted from Corneal Reflections

Criminal investigations often use photographic evidence to identify suspects. Here we combined robust face perception with high-resolution photography to mine face photographs for hidden information. By zooming in on high-resolution face photographs, we were able to recover images of unseen bystanders from reflections in the subjects' eyes. To establish whether these bystanders could be identified from the reflection images, we presented them as stimuli in a face matching task (Experiment 1). Accuracy in the face matching task was well above chance (50%), despite the unpromising source of the stimuli. Participants who were unfamiliar with the bystanders' faces (n = 16) performed at 71% accuracy [t(15) = 7.64, p < […]]. […] 39-megapixel cameras routinely. However, as the current study emphasizes, the extracted face images need not be of high quality in order to be identifiable. For this reason, obtaining optimal viewers (those who are familiar with the faces concerned) may be more important than obtaining optimal images.
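The reported comparison against chance is a standard one-sample t-test of mean accuracy against 50%. The following is a minimal sketch of that computation; the per-participant accuracy scores below are hypothetical values invented purely to illustrate the arithmetic, not the study's data:

```python
import math
import statistics

def one_sample_t(scores, chance=0.5):
    """One-sample t-test of mean accuracy against a chance level:
    t = (mean - chance) / (sd / sqrt(n)), with df = n - 1."""
    n = len(scores)
    m = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample SD (n - 1 denominator)
    return (m - chance) / (sd / math.sqrt(n)), n - 1

# Hypothetical accuracies for n = 16 participants (illustration only):
scores = [0.65, 0.70, 0.72, 0.68, 0.74, 0.71, 0.69, 0.73,
          0.70, 0.75, 0.66, 0.72, 0.71, 0.69, 0.74, 0.77]

t, df = one_sample_t(scores)
print(round(t, 2), df)
```

With 16 participants the test has 15 degrees of freedom, matching the t(15) reported in the abstract.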

Supporting Information

Movie S1.

Animated zoom on the cornea of a high-resolution photographic subject. The zoom begins with a passport photo-style framing of the subject and ends with a full-face close-up of a bystander captured in the subject's corneal reflection. Each successive movie frame represents a 6% linear magnification relative to the previous frame. Each frame was resized to 720 × 540 pixels (width × height) using bicubic interpolation to reduce high-spatial-frequency noise. Contrast was enhanced separately for each frame using the Auto Contrast function in Adobe Photoshop to improve definition. The image sequence was then converted to movie format for viewing.
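The frame-generation steps described above can be sketched in Python with Pillow. This is an illustrative reconstruction, not the authors' pipeline: the 6% magnification step, 720 × 540 output size, and bicubic resampling come from the description, while the centre-crop logic, `ImageOps.autocontrast` (a rough stand-in for Photoshop's Auto Contrast), and all function names are assumptions:

```python
from PIL import Image, ImageOps

def zoom_frames(img, n_frames=5, step=1.06, out_size=(720, 540)):
    """Generate a zoom sequence: each frame crops 6% tighter around the
    image centre, resizes to out_size with bicubic interpolation, and
    applies per-frame auto-contrast."""
    w, h = img.size
    cx, cy = w / 2, h / 2
    frames = []
    scale = 1.0
    for _ in range(n_frames):
        cw, ch = w / scale, h / scale  # crop window shrinks as scale grows
        box = (cx - cw / 2, cy - ch / 2, cx + cw / 2, cy + ch / 2)
        frame = img.resize(out_size, Image.BICUBIC, box=box)
        frames.append(ImageOps.autocontrast(frame))
        scale *= step  # 6% linear magnification per successive frame
    return frames

# Example on a synthetic grayscale gradient standing in for a photograph:
src = Image.new("L", (800, 600))
src.putdata([(x + y) % 256 for y in range(600) for x in range(800)])
frames = zoom_frames(src)
print(len(frames), frames[0].size)
```

Each returned frame could then be written out (e.g. with `frame.save(...)`) and assembled into a movie with any standard encoder.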

Acknowledgments

We thank Stuart Campbell at the Photographic Unit at the University of Glasgow for high resolution photography, Llian Alys at the National Policing Improvement Agency (NPIA UK) for pointing out forensic applications, and an anonymous reviewer for inspiring Experiment 2. Original high-resolution photographs and performance data are available from the corresponding author.

Author Contributions

Conceived and designed the experiments: RJ. Performed the experiments: CK RJ. Analyzed the data: CK RJ. Contributed reagents/materials/analysis tools: RJ. Wrote the paper: RJ.

