In broad terms, I apply Artificial Intelligence and Data Science techniques to audio and music research, aiming to understand the content of individual recordings as well as large collections. This includes Semantic Audio, a field at the confluence of Signal Processing, Machine Learning and Knowledge Representation using Semantic Web technologies.
I lead QMUL's team on the EU-funded AudioCommons project. Among the most novel things we're building is an ontology framework for the description of audio content and services. We're also developing confidence measures for audio analysis algorithms, so that users can trade off precision against recall and retrieve the content most appropriate for their use cases. Finally, we're assessing how the use of open sound content improves the creativity of professionals in game audio, music and video production.
Besides AudioCommons, I'm conducting research on the Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption (FAST-IMPACt) project, leading the Production work thread. I also supervise PhD students working on Intelligent Music Production, Deep Neural Networks for music labelling, musical gesture recognition in expressive music performance, casual exploration of digital archives, and the role of the user interface and 'nostalgia' in music production.
New Centre for Doctoral Training in AI and Music
I'm an investigator of the UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM CDT) with Simon Dixon (PI) and others, starting Sept. 2019. The core area of the CDT is Music Information Research, or Music Informatics, a research area of importance to the UK's Creative Industries that focuses on developing new approaches to understanding and modelling music, and on creating products and services for the creation, interaction and experience of music and music-related information. Research at the CDT will focus on music understanding, intelligent instruments and interfaces, and computational creativity, guided by real application needs from partners across the digital music world. The AIM CDT will fund over 72 PhD students, with additional studentships provided by over 25 industry partners. I offer several PhD topics within this centre; these are listed here.
AI for Music in the Creative Industries of China and the UK
I'm an investigator of the AHRC-funded AI for Music in the Creative Industries of China and the UK project with Nick Bryan-Kinns (PI) and others. This project examines the increasing role and potential of AI for music in the Music Industry and the Creative Industries in China and the UK, and builds partnerships leading directly to future substantial funded collaborations.
Semantic Applications for Audio and Music (SAAM2018) Workshop
I'm programme chair of the International Workshop on Semantic Applications for Audio and Music (SAAM2018) to be held in conjunction with the International Semantic Web Conference (ISWC 2018) on 9th October 2018 in Monterey, California.
SAAM is a venue for dissemination and discussion, identifying intersections in the challenges and solutions which cut across musical areas. In finding common approaches and coordination, SAAM will set the research agenda for advancing the development of semantic applications for audio and music.
JAES Special Issue on Participatory Sound And Music Interaction Using Semantic Audio
I'm guest editor of the AES journal Special Issue on Participatory Sound And Music Interaction Using Semantic Audio. We received nearly 30 strong submissions; the first two volumes (Vol. 64 and 66), with 9 and 8 accepted papers respectively, have been published.
Audio Mostly 2017 at QMUL
I was general chair of the Audio Mostly conference, held in cooperation with ACM. The conference, themed "Augmented and Participatory Sound and Music Experiences", took place at Queen Mary on 23–26 Aug. 2017 with over 120 attendees and a rich programme of papers, posters, demos, installations and workshops.