Supervisor: Dr Dan Stowell
Research group(s): Centre for Digital Music
Acoustic monitoring has been shown to be extremely valuable, for example for monitoring activity in streets and parks, or in the home for security and home assistance. However, always-on microphones pose a risk to people's privacy. This problem can be overcome if the audio data is transformed in a "privacy preserving" manner: in other words, if we can find a transformation that provably removes sensitive information from the data (such as spoken words) while still enabling automatic analysis.

This project will work within the growing domain of privacy-preserving machine learning. It will also take as a starting point recently described audio transformations which appear to offer some privacy protection, but for which privacy has not yet been proven. These include long-duration index spectrograms (Towsey 2014), low-bitrate spectral energies (Gontier et al 2017), hashing (Jimenez et al 2018), and zero-crossings (Colonna et al 2015). The goal is to develop an audio transformation that can be proven, or trained, to minimise the leakage of sensitive information while maximising the usefulness of the data for various purposes. We aim to create a key component of audio processing that can be used directly in thousands of audio monitoring projects, helping to safeguard the privacy of citizens worldwide.
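To make the idea of a lossy, potentially privacy-preserving transformation concrete, the sketch below computes per-frame zero-crossing counts in the spirit of the zero-crossings representation cited above. This is a minimal illustrative example, not the method of any of the cited papers: the function name, frame length, and hop size are hypothetical choices, and whether such a representation actually prevents recovery of speech content is exactly the kind of question this project would investigate.

```python
import numpy as np

def zero_crossing_counts(signal, frame_len=512, hop=256):
    """Per-frame zero-crossing counts: a lossy summary of a waveform.

    The sample values themselves are discarded; only the number of sign
    changes per frame is kept, so the original audio cannot be
    reconstructed from the output (though leakage is not proven absent).
    """
    signs = np.signbit(signal)
    crossings = signs[1:] != signs[:-1]       # True at each sign change
    n_frames = 1 + (len(crossings) - frame_len) // hop
    return np.array([crossings[i * hop : i * hop + frame_len].sum()
                     for i in range(n_frames)])

# Example: one second of a 440 Hz tone sampled at 16 kHz.
# A 440 Hz sine crosses zero 880 times per second, so each 512-sample
# frame (32 ms) should contain roughly 28 crossings.
sr = 16000
t = np.arange(sr) / sr
zcr = zero_crossing_counts(np.sin(2 * np.pi * 440 * t))
```

Even this simple feature retains enough information for some downstream tasks (e.g. coarse activity detection) while discarding the waveform, which illustrates the leakage-versus-utility trade-off the project aims to optimise formally.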