Supervisor: Dr Miles Hansard
The commercialization of depth-camera technologies (e.g. time-of-flight imaging) has made 3D data much more widely available. This includes close-range models of faces and objects, as well as large-scale scene models. There is great potential for this data to be used in cinema effects, virtual reality, game design, and HCI applications. However, it is not clear how best to process and render the raw 3D data so that it can be seamlessly merged with traditional video footage. This project will explore the perceptual aspects of this problem, using computational and psychophysical methods. In particular, 3D and head-mounted displays will be used, as well as eye-tracking and other sensor technologies. Strong programming skills and a background in computer science, computer graphics, psychophysics, or computational neuroscience are required. The project offers scope for collaboration with the QMUL School of Experimental Psychology, and with visual effects companies.