School of Electronic Engineering and Computer Science

Symbolic Interaction and Artificial Creativity

5 March 2014

Time: 3:00 - 4:00pm
Venue: BR 3.02, Bancroft Road Teaching Rooms, Peter Landin Building, London E1 4NS

Distinguished Seminar Series: Gérard Assayag. Preceded at 2.45pm by tea and followed by a reception, both in the Informatics Hub.

About the Speaker


Gérard Assayag
Research Lab Director, IRCAM (CNRS, UPMC)

Gérard Assayag is head of the IRCAM research lab “Sciences and Technologies of Music and Sound” and of its Music Representation Team, which he founded in 1992. His research interests are centered on music representation issues, including programming languages, machine learning, constraint and visual programming, computational musicology, music modeling, and computer-assisted composition. With his collaborators he has designed OpenMusic and OMax, two music research environments that have gained an international reputation and are used around the world for computer-assisted composition, analysis and improvisation.
Gérard Assayag is a founding member of AFIM (Association Française d’Informatique Musicale) and of the SMCM (Society for Mathematics and Computation in Music). He serves on the editorial boards of the SMCM’s Journal of Mathematics and Music and of the Journal of New Music Research (JNMR). He has organised or co-organised the “Forum Diderot, Mathématique et Musique” for the European Mathematical Society in 1999, as well as several international computer music conferences, including the Sound and Music Computing 2004 conference, the 3rd Mathematics and Computation in Music conference in 2011, and the Improtech Paris - New York 2012 conference on improvisation and new technologies.
Gérard Assayag has been co-editor of several journal special issues on music science, in JNMR and Soft Computing among others, and of books including “Mathematics and Music” (Springer-Verlag 1999), “The OM Composer’s Book” (Delatour 2008), “New Computational Paradigms for Computer Music” (Delatour 2009), and “Constraint Programming in Music” (Wiley 2012).

Abstract
We propose a conceptual shift in musical interaction with machines: up to now, engineers and researchers have been obsessed with fast computer response, a logical concern considering the available machine speeds. However, instantaneous response is not the way a musician reacts in a real performance situation. Although decisions are carried out at a precise time, the decision process relies on evaluation of past history, analysis of incoming events and anticipation strategies. Therefore, not only can it take some time to come to a decision, but part of that decision can also be to act later on. This process involves time and memory at different scales, just as music composition does, and cannot be fully apprehended by the usual signal and event processing.
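
To make the point concrete, here is a minimal sketch, entirely ours and not from the talk, of “deciding now, acting later”: incoming events update a memory, and each decision may schedule its action at a future time rather than firing immediately. The class name and the fixed one-beat delay are purely illustrative.

```python
import heapq

class DeferredImproviser:
    def __init__(self):
        self.memory = []    # past history, available to every later decision
        self.pending = []   # min-heap of (act_time, action) pairs

    def on_event(self, now, event):
        self.memory.append(event)
        # The decision draws on the whole history; here the "anticipation"
        # is just a fixed one-beat delay before acting.
        heapq.heappush(self.pending, (now + 1.0, "respond-to-" + event))

    def tick(self, now):
        # Fire every action whose scheduled time has arrived.
        fired = []
        while self.pending and self.pending[0][0] <= now:
            fired.append(heapq.heappop(self.pending)[1])
        return fired

imp = DeferredImproviser()
imp.on_event(0.0, "C4")
print(imp.tick(0.5))  # [] -- the decision was made, the action is deferred
print(imp.tick(1.0))  # ['respond-to-C4']
```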

In order to foster realistic and artistically interesting behaviors in digital interactive systems, and to communicate with them in a humanized way, we bring a combination of means into synergy: machine listening, which extracts high-level features from the signal and turns them into significant symbolic units; machine learning, which discovers and assimilates intelligent schemes on the fly by listening to actual performers; stylistic simulation, which elaborates a consistent model of style; and symbolic music representation, formalized representations connecting to organized musical thinking, analysis and composition. In effect, these means cooperate to demarcate a multi-level memory model backing both a discovery-and-learning process and a generative one, thus contributing to the emergence of a creative musical agent.
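
As a toy illustration of the first of these means, the sketch below (our own simplification, not the actual OMax front end) quantises continuous pitch estimates into note-name symbols, the kind of symbolic units the learning and generative components could then operate on:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def hz_to_symbol(f0_hz):
    """Quantise a fundamental-frequency estimate to the nearest MIDI note name."""
    midi = round(69 + 12 * math.log2(f0_hz / 440.0))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

frame_pitches = [261.6, 262.1, 329.6, 392.0]   # toy analysis frames, in Hz
symbols = [hz_to_symbol(f) for f in frame_pitches]
print(symbols)                                 # ['C4', 'C4', 'E4', 'G4']
```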

After OpenMusic, a standard for computer-assisted composition, we have designed OMax, an interactive machine improvisation environment that explores these new interaction schemes. It sets up a cooperation between heterogeneous components specialized in real-time audio signal processing, high-level music representations and formal knowledge structures. The environment learns and plays on the fly in live setups and is used in many artistic and musical performances. Starting from OMax, we show recent trends in our research on interactive creative agents that achieve adequacy and relevance by connecting instant contextual listening to corpus-based knowledge, with longer-term investigation and decision processes that allow reference to large-scale structures and scenarios.
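
The published OMax literature describes its style model as a factor oracle automaton learned online over such a symbol stream. The following Python sketch of the standard online construction (after Allauzen, Crochemore and Raffinot) is our illustration, not OMax code:

```python
def build_factor_oracle(symbols):
    """Online factor-oracle construction over a symbol sequence."""
    n = len(symbols)
    trans = [{} for _ in range(n + 1)]   # forward transitions per state
    sfx = [-1] + [0] * n                 # suffix links (state 0 has none)
    for i in range(1, n + 1):
        c = symbols[i - 1]
        trans[i - 1][c] = i              # the new "spine" transition
        k = sfx[i - 1]
        while k > -1 and c not in trans[k]:
            trans[k][c] = i              # shortcut transitions from suffix states
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][c]
    return trans, sfx

trans, sfx = build_factor_oracle(list("abbab"))
```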

“Symbolic Interaction” brings together the advantages of the worlds of interactive real-time computing and of intelligent, content-level analysis and processing, in order to enhance and humanize man-machine communication. Performers improvising along with Symbolic Interaction systems experience a unique artistic situation in which they interact with a musical (and possibly visual) agent that develops in its own way while keeping in style with the user. Symbolic Interaction tends to define a new artificial-creativity paradigm in computer music, and it extends to other fields as well: the idea of bringing together composition and improvisation by modeling cognitive structures and processes makes sense in many artistic and non-artistic domains. It is a decision-making paradigm in which a strategy has to be found for weaving decisions step after step, either by deciding to follow an overall structural determinism or by jumping in an improvised way and generating a surprise. This works well in music creation and improvisation because one of the factors we relate to liveliness is precisely this mixture of deterministic and unexpected (improvised) behaviour that we find in living organisms and are often frustrated not to find in digital intelligence.
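
This weave of determinism and surprise can be caricatured as a random walk over a model like the factor oracle sketched earlier: with some continuity probability the walk follows the learned sequence; otherwise it jumps through a suffix link to a past state that shares context. Again our own simplification, reusing build_factor_oracle from the sketch above, with the continuity parameter chosen arbitrarily:

```python
import random

def improvise(symbols, sfx, length, continuity=0.8, seed=None):
    """Walk the oracle: mostly follow the original sequence (determinism),
    occasionally jump via a suffix link (improvised surprise)."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(length):
        if state < len(symbols) and rng.random() < continuity:
            out.append(symbols[state])   # continue deterministically
            state += 1
        else:
            state = max(sfx[state], 0)   # jump to a context-sharing past state
            if state < len(symbols):
                out.append(symbols[state])
                state += 1
    return out

seq = list("abbab")
_, sfx = build_factor_oracle(seq)        # from the earlier sketch
print("".join(improvise(seq, sfx, 12, seed=1)))
```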

Video Link: http://ess.q-review.qmul.ac.uk:8080/ess/echo/presentation/dd57beb6-6d63-40b1-b4d4-ea0376d71977

Photos: http://mupae.blogspot.com/2014/02/gerard-assayag-of-ircam-to-give-eecs.html
