The impact has been primarily on the economy, but also on practitioners and on creativity and culture. This impact has been achieved through extensive commercialization and dissemination activities, which led to two patents, wide coverage in the popular press, two successful follow-up grants, consultancy, invited talks to industry, licensing deals, and the formation of a start-up company with substantial investment.
The research has had an economic impact and an impact on society, culture and creativity. The Centre for Digital Music is the first research group to apply intelligent systems design to sound engineering. This research led to the development of automatic mixing tools for audio and music production.
A start-up company was formed in 2012. This company, since named MixGenius, received substantial investment ($300,000 Canadian, or £180,000) from Tandem Launch Technologies. In June 2013, MixGenius received an additional £960,000 in funding from several venture capital firms, based on a valuation of £1,920,000 [I1].
Commercialisation activity began in 2008 with a trial license of acoustic feedback prevention tools to Roland Hemming Audio consultants. As research continued, patents were filed and granted [I6], the commercial applications expanded, and industry interest grew. In 2010, our rhythm transformation software development kit was licensed to Lickworx for £5,000, and Fraunhofer signed a licensing agreement for source separation of live sound (£7,250). In 2011, Yamaha licensed a proof-of-concept method for intelligent mixing of backing tracks (£3,600).
Research on intelligent dynamic range compression included optimal compressor design, which resulted in Dr. Reiss being employed as a consultant [I3] at Ableton AG (2011-12). His contributions have resulted in a complete redesign of the compressors used in their flagship product, Ableton Live. Similarly, research on intelligent equalisation led to Dr. Reiss being an equaliser design consultant for Antelope Audio (2013, €800).
This research has had further impact on UK business through the transfer of highly skilled people to industry. Stuart Mansbridge (MSc, 2011) is Head of Music Technology at MixGenius, where PhD student Brecht De Man is doing an internship. Henry Bourne (MSc 2013) is Product Manager at Calrec Audio Ltd. The contributing PhD students Martin Morrell and Alice Clifford did work placements at the BBC (2011) and FXpansion Ltd. (2012). Enrique Perez (PhD 2011) and Jacob Maddams (MSc 2012) are now Director of Engineering and Software Engineer, respectively, at Solid State Logic in Oxford, and were hired in part due to their specialist audio signal processing skills and research [I4].
Invited talks that resulted in commercial interest have been given at a large number of industry events, including Audio Engineering Society Conventions and European Business Network Conventions. Further invited talks were made to Fraunhofer, Ableton, British Library, Yamaha, BBC, SSL, Harman Soundcraft, Focusrite and at a large number of academic conferences with significant industrial presence (DSP, AES, ISMIR, DAFx) and academic institutions (Surrey, UWL, BCU, DIT, York, Northwestern).
Impact on creativity and culture has been achieved both through MixGenius, which brings automatic mixing to a wider public, and through public engagement. Coupled with the commercialisation work, there have been extensive public engagement activities [I6]. Active web pages are maintained promoting the research, and the YouTube channel IntelligentSoundEng (viewed over 4,500 times) is further used for promotion.
The research was featured twice in Audio!, the schools outreach magazine published by EECS, and talks have been given to visiting students and school teachers. As a form of Turing Test, automatically mixed content was entered into a sound recording competition, and the judges, all with extensive professional sound engineering experience, were unable to distinguish it from a human mix.
The research was featured in New Scientist (twice) [I8], The Engineer [I9], Guardian (twice) [I10], AV Magazine, ProSoundNews, La Presse, and on BBC Radio 4 (twice), BBC World Service, Radio Deutsche Welle, LBC and ITN, Telegraph podcast and AES podcast, among others.
Dr. Reiss was a panellist at the launch of the Royal Academy of Engineering (RAE) Enterprise Hub, and was invited to present his technology to HRH The Duke of Edinburgh at the inauguration of the RAE’s Prince Philip House [I5].
The work has also frequently been discussed on various online discussion forums, blogs and mailing lists, including SoundOnSound, Gearslutz, Music Producer’s Guild and KVR, provoking lively debate.
Audio production is currently a very time-consuming and labour-intensive task. The underpinning research within Electronic Engineering & Computer Science’s (EECS) Centre for Digital Music [R1-R6] provides intelligent signal processing tools that automate much of the audio and music production process. It exploits best practices in audio engineering and advanced knowledge of human sound perception.
Novel, adaptive systems were devised that analyse the relationships between all the incoming sounds in order to manipulate and edit multi-track audio in much the same way as a professional mixing engineer would operate the controls at a mixing desk. They were tested in live sound [R1,R5] and post-production environments [R4], and user evaluation consistently showed that the intelligent systems outperform an amateur mixing engineer. The research included a series of feasibility studies concerned with automatic mixing of live music [R2]. Contributions included the creation of novel tools that position and enhance sources [R3], adjust gains and faders [R6], and correct time offsets and polarity issues in multichannel audio [R2]. All of these techniques operate in real time while ensuring system stability and preventing acoustic feedback.
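One of the corrections mentioned above, fixing time offsets and polarity between tracks, can be illustrated with a short sketch. The function below is a hypothetical helper, not the published algorithm [R2]: it estimates the lag and polarity of one track relative to a reference by searching for the strongest (anti-)correlation, then undoes both.

```python
import numpy as np

def align_track(reference, track, max_lag=64):
    """Estimate the time offset and polarity of `track` relative to
    `reference` via cross-correlation, then correct both.

    Illustrative sketch only: uses circular shifts (np.roll) for brevity,
    whereas a real system would handle block boundaries explicitly.
    """
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.array([np.dot(reference, np.roll(track, lag)) for lag in lags])
    best = np.argmax(np.abs(corr))            # strongest correlation, either sign
    lag = int(lags[best])
    polarity = int(np.sign(corr[best]))       # -1 means the track is inverted
    corrected = polarity * np.roll(track, lag)
    return corrected, lag, polarity
```

For example, a copy of the reference that has been advanced by five samples and polarity-inverted is restored exactly, with `lag == 5` and `polarity == -1`.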
There were several scientific breakthroughs in the research. The first was the development of multitrack signal processing. Unlike multichannel signal processing, where one usually attempts to extract information regarding the signals that were mixed together, multitrack signal processing is concerned with how best to mix a collection of individual sources in order to achieve a combined signal with preferred characteristics. The second concerned a major advance in psychoacoustics, with an emphasis on understanding human perception of complex sound mixtures, based on advanced auditory models and extensive listening tests. The third involved the application of Knowledge Engineering and Grounded Theory to sound engineering, and the translation of the understanding gained from such studies into algorithms that apply best practices in audio production. These breakthroughs were implemented within a common framework, allowing the practical realization of intelligent, real-time systems that perform the complex tasks of a professional sound engineer. Perceptual audio evaluation was performed which showed that mixes devised by intelligent systems were often preferred over manual mixes.
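The cross-adaptive character of multitrack signal processing, where each track's processing is driven by an analysis of all tracks, can be sketched as follows. This is a minimal illustration under stated assumptions: it balances tracks toward a common level using RMS as a crude stand-in for the perceptual loudness models used in the actual research, and the function names are invented for this example.

```python
import numpy as np

def equal_loudness_gains(tracks, eps=1e-12):
    """Cross-adaptive gain computation: every track's fader setting depends
    on an analysis of *all* tracks, here driving each one toward the
    ensemble's geometric-mean RMS level (a crude loudness proxy).
    """
    rms = np.array([np.sqrt(np.mean(t ** 2)) + eps for t in tracks])
    target = np.exp(np.mean(np.log(rms)))     # geometric mean of track levels
    return target / rms                        # per-track linear gains

def automix(tracks):
    """Sum the tracks after applying the cross-adaptive gains."""
    gains = equal_loudness_gains(tracks)
    return sum(g * t for g, t in zip(gains, tracks))
```

After applying the gains, a loud track and a quiet track contribute at equal RMS level, which is the behaviour an engineer's initial fader balance approximates.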
In 2011, Dr. Reiss was awarded a European grant, DigiBIC (EC FP7 CSA ICT-2009.4.1, €1.2 million total, €78k for QM, 2011-13), to help bring the outcomes of this research to SMEs in the Creative Industries. The commercial potential of this research also led to Dr. Reiss being awarded a prestigious Royal Academy of Engineering Enterprise Fellowship (£85k, 2012-2013). The DigiBIC grant was used to disseminate the research to the European community, and led to the consulting work, whereas the Fellowship provided entrepreneurship training and mentoring essential to the start-up and other commercialisation activities, as well as funding (patent costs, legal advice on international contract negotiation, etc.).
Dr. Reiss, in collaboration with Prof. Cavallaro (Director: Centre for Intelligent Sensing, EECS), was recently awarded an EPSRC grant, Multisource audio-visual production from user-generated content (EP/K007491/1, £317,814). This will build on the successful work so far, and expand into intelligent production of user-generated video content.