School of Electronic Engineering and Computer Science

Facing Up To Robots

Vision and Cognitive Science Research Groups

Lifelike robots have yet to find a place in the average home or workplace, but Professor Peter McOwan's research on face perception, and the 'facial interpretation based technology' developed from it, will make possible a new generation of socially aware robots capable of empathy.

This work has captured the public imagination and helped to raise awareness of computer science research among the general public and school students. It has also sparked public debate over the uses of artificial intelligence, raising awareness of the issues involved through widespread media coverage and targeted smartphone apps.

Our Impact

Impact on Public Engagement

This research led directly to a series of thought-provoking and informative public engagement activities designed to inspire the next generation of science researchers as well as to inform the general public. A range of approaches has been deployed, from traditional and innovative face-to-face events to apps and social media.

Multimedia and Apps

McOwan's research into face recognition informed the production of a short film, 'Why faces are special' (Black and McOwan, 2012). Designed to be accessible to a wide audience, the film was selected as one of the 55 finalists from the 1,450 films submitted to the CERN CineGlobe film festival 2012. Available on YouTube, the film had been watched 2,227 times as of January 2013.

A special edition of cs4fn (www.cs4fn.org), McOwan's project to promote computer science research in schools, was developed to support the Royal Society Summer Exhibition, describing both the facial perception elements and the affect recognition system (http://www.cs4fn.org/faces/). The cs4fn website receives 14 million hits and 750,000 visitors per year. Industry has supported this engagement work, including the largest grants given by Google's CS4HS programme, totalling £108K over the period 2008 to 2012.

CS4FN is certainly one of our 'champion' UK projects, and the longest running recipient of our grants.
[Google EMEA University Programme Director. ]

In July 2012 an iPhone game app, Robot Run, based on the expression/affect recognition results of QMUL's research within the LIREC project, was released (https://itunes.apple.com/us/app/robot-road-run/id535094701?mt=8). It has been downloaded around 50 times a week and has received a number of 5-star ratings.

TuneTrace

Elements of computer vision used in robot vision systems were also incorporated into TuneTrace (http://www.qappsonline.com/apps/tunetrace/), a smartphone app that uses image processing to extract a graph from the user's drawing and traverses it with simple software agents to produce a musical tune. This engages users actively in the principles of programming and exposes them to simple steps in computer vision. The app was featured on the Discovery Channel and in Wired magazine (UK and Japan), was selected for the Guardian's '20 best iPhone and iPad apps this week' and Shaw Connect's '10 best new apps' in May 2013, and has had over 30,000 downloads.
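
The pipeline can be sketched in a few lines. The code below is a hypothetical illustration, not the app's actual implementation: it assumes a binary (ink/paper) input image, thins it to a one-pixel skeleton with scikit-image, and lets a single software agent walk the skeleton, emitting one note per step via an invented pixel-height-to-pitch mapping.

```python
import numpy as np
from skimage.morphology import skeletonize

def drawing_to_tune(binary_drawing, n_steps=32):
    """Toy TuneTrace-style pipeline: thin a drawing to a skeleton, then
    walk it with one agent, emitting a MIDI pitch per step.
    binary_drawing: 2D boolean array, True where there is ink."""
    skel = skeletonize(binary_drawing)
    ys, xs = np.nonzero(skel)
    if len(ys) == 0:
        return []                      # nothing drawn, nothing to play
    pos, visited, notes = (ys[0], xs[0]), set(), []
    height = binary_drawing.shape[0]
    for _ in range(n_steps):
        visited.add(pos)
        y, x = pos
        # invented mapping: higher on the page -> higher MIDI pitch
        notes.append(48 + int((1 - y / height) * 24))
        # step to any unvisited 8-connected skeleton neighbour
        candidates = [(y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)
                      and 0 <= y + dy < skel.shape[0]
                      and 0 <= x + dx < skel.shape[1]
                      and skel[y + dy, x + dx]
                      and (y + dy, x + dx) not in visited]
        if not candidates:
            break                      # dead end: the tune stops early
        pos = candidates[0]
    return notes
```

The real app layers graph branching rules and audio synthesis on top of ideas like these; the sketch only shows why the structure of a drawing determines the tune.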

The impact of his work in promoting computer science to diverse audiences was recognised in 2011, when McOwan was awarded the IET Mountbatten Medal, a national competitive award sponsored by the IET and the BCS.

Peter is an inspirational example of how an active researcher can also be an effective communicator, stressing the importance of ‘research stories’ to provide engaging and widespread societal impact to enthuse the next generation to follow careers in computing and technology
(Mountbatten Medal Ceremony Citation Speech)

Face-to-face events: embedding engagement in real and constructed social environments

Pogoing robots at the ICA (Institute of Contemporary Arts) in London, July 2006. This unique event, in partnership with the digital artists soda.co.uk, involved three large pogo-dancing robots as an integral part of the audience at a live music gig. The robots were trained to respond by dancing, depending on their musical tastes. McOwan was present to discuss issues around computational modelling of neuroscience, robot embodiment and emotion with the diverse audience of concert-goers and the media. He was responsible for the implementation of the live stage visuals, showing the audience how the robots were processing the music and deciding to dance.

Guerrilla Science Blade Runner film recreation event (July 2010). This interactive dialogue with the public on computer image perception and social robots took place over six days on a reconstructed set from the film Blade Runner in London's Canary Wharf, engaging with 7,000 people. The novel juxtaposition of current research with popular science fiction, mediated by a team of appropriately costumed researchers, proved provocative and led to over 1,000 people taking our test and engaging in open discussions.

The marvelous Peter McOwan … joined us with a custom-made test to help us hunt for replicants (the name given to the lifelike robots in the film).
[http://guerillascience.co.uk/archives/tag/secret-cinema]
Image: Peter McOwan at the Blade Runner event

The test was designed to challenge people's perceptions of what constitutes human versus artificial intelligence, providing:

a unique experience of presenting and discussing their research with a public audience, while also being embedded within the fictional narrative of a theatrical event.
[Guerrilla Science event director]

Image taken from http://guerillascience.co.uk/archives/tag/blade-runner

Traditional face-to-face engagement events

McOwan and his team took part in Robotville, a four-day event at the London Science Museum in December 2011, where, under his leadership, robots from the LIREC consortium were presented and discussed. Specifically, QMUL gave live demonstrations and discussions of the uses of the LIREC face interpretation software to over 4,000 members of the public. McOwan also spoke at and coordinated an industry-facing event, Robot Futures: Beyond the Valley, as part of Robotville, and the research was also demonstrated at the CeBIT industry trade fair in Hanover, Germany, in 2012 to around 6,000 industrialists and members of the public.

The research was also included in a talk by McOwan at the House of Commons as part of the Walking with Robots project, and in the Robots and Avatars project. It was also presented at the Big Bang Science Fair in 2010 and at the UK Space Conference from 2008 to 2011.

The feedback from the teachers and Schools who attended the School days was universally positive not to say ecstatic!
[Space Conference Director]


McOwan was also invited to participate in the Royal Society Summer Exhibition in July 2011 with the 'Facing up to faces: perception from brains to robots' event. The face space manipulation, avatar generation and affect recognition systems were demonstrated live:

demonstrating his cutting-edge research into robotic face perception to over 14,000 people who attended the Summer Science Exhibition over the 6 days. These included school groups, members of the public, journalists and policy makers.
[Royal Society Summer Exhibition Coordinator]

The evening soirées provided access to around 100 Fellows of the Royal Society (FRS).

Underpinning Research

This work combines contributions at QMUL from the Cognitive Science group and the Computer Vision group. Computational modelling of human facial perception, in particular the examination of the face space hypothesis, was undertaken as a continuation of a long-standing collaboration with Psychology at UCL, predominantly through the EPSRC Dynamic Faces project. This research developed novel methods for the creation and manipulation of photorealistic avatars, with a focus on developing and using new tools for the extraction of facial motion and the mapping of expressions between faces. One aim was dynamic 3D motion capture without using existing noisy time-of-flight technologies or restrictive structured-light approaches. Faces vary in colour as well as in image brightness, but the natural colour signal is not used effectively in image motion or stereo algorithms. We developed a new approach to image motion analysis that characterised the bright-dark, yellow-blue and red-green opponent channels of the human colour system as chromatic derivatives. We incorporated chromatic derivatives into our existing spatio-temporal brightness derivative method for motion and binocular disparity calculation and demonstrated improved performance [R1].
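
As a rough sketch of the idea behind [R1] (not the published algorithm), a standard gradient-based motion estimator can pool spatio-temporal derivative constraints from opponent-colour channels as well as brightness. The sketch below assumes a single global translation between two RGB frames, and the opponent-channel formulas are simplified stand-ins:

```python
import numpy as np

def opponent_channels(rgb):
    """Approximate bright-dark, red-green and yellow-blue channels from an
    (H, W, 3) float RGB image; simplified stand-ins, not the calibrated
    opponent channels used in the research."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([(r + g + b) / 3.0,   # bright-dark (luminance)
                     r - g,               # red-green
                     (r + g) / 2.0 - b],  # yellow-blue
                    axis=-1)

def estimate_global_velocity(frame0, frame1):
    """Least-squares fit of Ix*vx + Iy*vy + It = 0, pooled over every pixel
    of every opponent channel, returning one (vx, vy) for the whole image."""
    c0, c1 = opponent_channels(frame0), opponent_channels(frame1)
    Ix = np.gradient(c0, axis=1)          # horizontal spatial derivatives
    Iy = np.gradient(c0, axis=0)          # vertical spatial derivatives
    It = c1 - c0                          # temporal derivatives
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                              # (vx, vy) in pixels per frame
```

Pooling the chromatic constraints alongside brightness gives the solver extra, partly independent equations at each pixel, which is the intuition behind the reported performance gain.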

The prime motivation of the computer vision work was to build tools that could be used to develop new methods for studying the perception of facial motion. A major aim was to generate a photorealistic average avatar with which to separate the motion of a face from its form. This was achieved using 2D image-based, performance-driven animation. We constructed a photorealistic avatar using Principal Components Analysis (PCA) over vectors encoding the differences between single frames of a movie sequence and a reference frame. This delivers an expression space for a given person.
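
A minimal sketch of such an expression space, assuming greyscale frames and off-the-shelf PCA (the published system involves considerably more, such as image alignment and warping):

```python
import numpy as np
from sklearn.decomposition import PCA

def build_expression_space(frames, reference, n_components=20):
    """Fit an 'expression space' by PCA over difference vectors between
    each frame and a reference (e.g. neutral) frame.
    frames: (n_frames, H, W) greyscale movie of one person's face;
    n_components must not exceed min(n_frames, H * W)."""
    diffs = (frames - reference).reshape(len(frames), -1)
    pca = PCA(n_components=n_components)
    codes = pca.fit_transform(diffs)  # each frame as a point in the space
    return pca, codes

def render(pca, code, reference):
    """Map a point in expression space back into a face image."""
    diff = pca.inverse_transform(code.reshape(1, -1))[0]
    return reference + diff.reshape(reference.shape)
```

Interpolating or extrapolating codes before re-rendering is what allows an expression to be manipulated, or replayed on an average avatar, independently of any one face's form.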

We examined the psychological validity of a PCA-based expression space. Adapting to facial images at the ends of a particular dimension of facial variation (e.g. the first principal component) shifted the appearance of expressions away from the adapting expression, but did not shift perception of faces arrayed along a second, orthogonal direction. This demonstrated adaptation within expression space, and showed that images which were statistically orthogonal were also perceptually orthogonal. The idea that faces are represented relative to a mean face, which has become the standard view in face perception, raises the question of which set of faces the mean is constructed over. We built PCA spaces across individuals, rather than across expressions, to investigate 'family resemblance' between different classes.

We used a novel technique of mapping a vector representing the deviation of a male face from the male mean into a female face space, resulting in a female 'sibling'. We showed that these 'sibling pairs' looked more alike than random pairings, indicating that family resemblance may be encoded by similar vectors referenced to the averages of classes of faces [R3]. The same technology can be used to visualise our prejudices. We found that the average Conservative and Labour MPs' faces were indistinguishable; however, the average faces rated as strongly Labour or strongly Conservative did look distinctively different, and were correctly matched to their stereotypical category by participants in a follow-up experiment [R4].
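
At its core the mapping is simple vector re-referencing. A sketch, assuming faces are represented as commensurate vectors (for example aligned shape/texture or PCA coefficients); the function name and toy data are ours, not from [R3]:

```python
import numpy as np

def cross_gender_sibling(male_face, male_mean, female_mean):
    """Re-reference a male face's deviation from the male mean to the
    female mean, producing a female 'sibling' vector."""
    deviation = male_face - male_mean  # what makes this face distinctive
    return female_mean + deviation     # same distinctiveness, new reference

# hypothetical 3-dimensional face codes, purely for illustration
male_face = np.array([0.2, -1.0, 0.5])
print(cross_gender_sibling(male_face, np.zeros(3), np.ones(3)))
```

Whether siblings produced this way actually look alike is the empirical question [R3] answered by comparing sibling pairs against random pairings.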

Insights from this work, for example the identification of dynamic facial regions of particular importance in processing [R5], fed into the development of robust facial expression and affective intent prediction technologies. These formed QMUL's contribution to the EU-funded IP LIREC (Living with Robots and Interactive Companions), which examines the requirements for socially meaningful long-term interactions between humans and robots in real-world social scenarios [R2, R6]. The research explores, for example, 'affect sensitivity': identifying the affective states of humans and the linked non-verbal behaviours. It also identifies the limitations and challenges arising from the design of an affect recognition framework in a real-world scenario in which an iCat robot plays chess with children [R2].
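
By way of illustration only (the actual LIREC features, labels and models are those described in [R2]; the feature names and data below are invented), affect sensitivity of this kind amounts to classifying affective state from non-verbal behaviour features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented non-verbal features per interaction window, for illustration.
FEATURES = ["smile_intensity", "gaze_at_robot", "head_nod_rate", "lean_forward"]

rng = np.random.default_rng(0)
X = rng.random((200, len(FEATURES)))       # stand-in feature measurements
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # stand-in 'positive affect' labels

# Generic classifier standing in for the framework's affect recogniser.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:150], y[:150])                # train on the first 150 windows
print("held-out accuracy:", model.score(X[150:], y[150:]))
```

The real-world challenges reported in [R2], such as noisy sensing and children's spontaneous behaviour, enter precisely at the feature extraction and labelling stages that this toy example glosses over.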

Both of these projects had, by design, specific public engagement strategies embedded from the start, which were further and successfully amplified through the EPSRC PPE project Computer Science for Fun (based at QMUL) and McOwan's QApps project (www.qappsonline.com). Curzon and McOwan's cs4fn project provides a successful strategic framework for high-quality public engagement, writing accessible articles about research to create engaging stories, while QApps provides a portal for promoting research-based smartphone apps.

References

  • [R1] Liang, X., McOwan, P. W. and Johnston, A., "Biologically inspired framework for spatial and spectral velocity estimations", JOSA A 28(4), 713-723 (2011). doi:10.1364/JOSAA.28.000713. Google Scholar citations: 2.
  • [R2] Castellano, G., Leite, I., Pereira, A., Martinho, C., Paiva, A. and McOwan, P. W., "Affect recognition for interactive companions: challenges and design in real world scenarios", Journal on Multimodal User Interfaces 3(1), 89-98 (2010). doi:10.1007/s12193-009-0033-5. Google Scholar citations: 27.
  • [R3] Griffin, H. J., McOwan, P. W. and Johnston, A., "Relative faces: Encoding of family resemblance relative to gender means in face space", Journal of Vision 11(12):8 (2011). doi:10.1167/11.12.8. No current citations.
  • [R4] Roberts, T., Griffin, H., McOwan, P. W. and Johnston, A., "Judging political affiliation from faces of UK MPs", Perception 40(8), 949-952 (2011). No current citations.
  • [R5] Berisha, F., Johnston, A. and McOwan, P. W., "Identifying regions that carry the best information about global facial configurations", Journal of Vision 10(11):27 (2010). doi:10.1167/10.11.27. Google Scholar citations: 6.
  • [R6] Castellano, G., Mancini, M., Peters, C. and McOwan, P. W., "Expressive Copying Behaviour for Social Agents: A Perceptual Analysis", IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans (2011). Google Scholar citations: 4.

Grants

  • [G1] 2008-12 EU FP7 LIREC (McOwan) £707,509
  • [G2] 2008-11 EPSRC EP/F037384/1 Analysing Dynamic Change in Faces (McOwan) £287,392
  • [G3] 2008-11 Google CS4HS Programme Support for cs4fn, four awards (Curzon, McOwan, Black) £100,435 total: £37,983 (2008), £21,576 (2009), £21,914 (2010), £18,962 (2011)
  • [G4] 2008-13 EPSRC EP/F032641/1 PPE Award, cs4fn (Curzon, McOwan) £661,645
  • [G5] 2008 Wellcome Trust Arts Award, Neurotic, pogoing robots (co-applicants McOwan and Warman) £37,000

Impact Corroboration

  • [I1] IET Mountbatten Medal 2011 to McOwan for public engagement in computer science: http://www.theiet.org/resources/library/archives/institution-history/mountbatten-medal.cfm
  • [I2] The cs4fn faces/robots special: http://www.cs4fn.org/magazine/magazine13.php
  • [I3] The LIREC website: http://lirec.eu/
  • [I4] Pogoing robots at the ICA: http://news.bbc.co.uk/1/hi/7487645.stm
  • [I5] Facing up to robots at the Royal Society 2011: http://royalsociety.org/summer-science/2011/facial-perception/
  • [I6] Facing up to robots at the Blade Runner Secret Cinema event: http://guerillascience.co.uk/archives/tag/peter-mcowan
  • [I7] UK politicians' faces media coverage (example): http://www.sciencedaily.com/releases/2011/10/111006094827.htm
  • [I8] Why Faces Are Special, film selected for the CERN CineGlobe film festival 2012: http://www.cineglobe.ch/official-selection
  • [I9] Why Faces Are Special film: http://www.youtube.com/watch?v=Row6GSzg_m4
  • [I10] Robotville example: http://www.newscientist.com/blogs/culturelab/2011/12/welcome-to-robotville-population-20.html
  • [I11] Robot Futures: Beyond the Valley industry engagement 2011: http://www.youtube.com/watch?v=MCwiT3Vfgr0
  • [I12] Need to inspire the next generation of computer scientists and impact of the cs4fn project: EPSRC (2006), International Perceptions of the UK Research Base in Information and Communications Technologies.