After studying Renaissance music in Italy for a year, Cort Lippe studied composition and computer music with Larry Austin in the USA. He also attended composition and analysis seminars with various composers, including Boulez, Donatoni, K. Huber, Messiaen, Penderecki, Stockhausen, and Xenakis. From 1980 to 1983 he studied and worked in The Netherlands at the Instituut voor Sonologie with G.M. Koenig and Paul Berg in the fields of computer and formalized music. From 1983 to 1994 he lived in France, where he worked for three years at the Centre d'Etudes de Mathematique et Automatique Musicales (CEMAMu), founded by Iannis Xenakis, while following Xenakis' courses on acoustics and formalized music for two years at the University of Paris. Subsequently, he worked for nine years at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), founded by Pierre Boulez, where he taught courses on new technology in composition and developed real-time computer music applications. He has published almost 40 peer-reviewed papers on interactive music, granular sampling, score following, spectral processing, FFT-based spatial distribution/delay, acoustic instrument parameter mapping, and instrument design.

He has written for most major ensemble formations. He has received commissions from the International Computer Music Association, the Sonic Arts Research Centre (UK), the Festival El Callejon del Ruido (Mexico), the Dutch Ministry of Culture, and the Zentrum für Kunst und Medientechnologie (Germany), and he has written for many internationally acclaimed soloists, including bassist Robert Black, percussionist Pedro Carneiro, tubist Mel Culbertson, saxophonist Steven Duke, clarinetist Esther Lamneck, sho player Mayumi Miyata, harpist Masumi Nagasawa, tubist Melvyn Poore, pianist Yoshiko Shibuya, and bass clarinetist Harry Sparnaay. His compositions have received numerous international prizes, including first prizes from the Irino Competition (Japan), the Bourges Electroacoustic Music Competition (France), the El Callejon del Ruido Competition (Mexico), the USA League-ISCM Competition (USA), and the Leonie Rothschild Competition (USA); second prize from the Music Today Competition (Japan); third prize from the Newcomp Competition (USA); and honorable mentions from the Prix Ars Electronica in 1993 and 1995 (Austria), the Kennedy Center Friedheim Awards (USA), the Sonavera International Competition (USA), the Bourges Electroacoustic Music Competition, and the Luigi Russolo Competition (Italy).

His music has been performed at over 100 major festivals worldwide, including the International Computer Music Conference, the ISCM World Music Days, Gaudeamus (The Netherlands), the Music Today Festival (Tokyo), the Bourges Synthese Festival (France), the Huddersfield Festival (UK), and SARC's Sonorities Festival (UK). In addition, since 1993 Lippe has collaborated with the composers/researchers Miller Puckette and Zack Settel, performing as the Convolution Brothers at festivals worldwide. His works are recorded on more than thirty CDs on labels including ADDA, ALM, Apollon, Big Orbit, CBS-Sony, CDCM, CDE Music, Centaur, Classico, CMJ Recordings, EMF, Hungaroton Classic, Harmonia Mundi, ICMC2000, ICMC2003, IKG Editions, Innova, MIT Press, Neuma, Salabert, SEAMUS, Sirr, SMC07, and Wergo.

As a teacher, Lippe has given over 100 presentations and guest lectures around the world, and has been a visiting professor at the Sonology Department of Kunitachi College of Music, Tokyo (1992, 1999-2007, and 2010), the Carl Nielsen Conservatory of Music, Odense, Denmark (1999-2001), and New York University (2007). As a recipient of a Fulbright Award in 2009, he spent six months teaching and doing research at the National and Kapodistrian University of Athens, Greece. Since 1994 he has taught in the Department of Music of the University at Buffalo, New York, where he is an associate professor of composition and director of the Lejaren Hiller Computer Music Studios.

Musical Interaction or Interactive Music?
The genre of real-time, interactive music involving acoustic instruments and electronic sounds has become an accepted ensemble formation. This paper explores compositional relationships between acoustic instruments and electronic sounds in this genre, as well as the sound design of the electronic part.

Interest in new and often unusual interfaces for expressive electronic music production has become widespread in the past decade, allowing purely electronic music to be performed in concert as flexibly as acoustic music. And while we are a long way from standardization, a large and healthy amount of experimentation is taking place in this field. On the other side of the coin, real-time robotic performance of acoustic sounds/instruments is rapidly reaching a very high level of sophistication, having progressed far beyond MIDI control of the piano. On the one hand we have physical performance of the virtual, and on the other, virtual performance of the physical. Both of these performance models imply direct and immediate interaction, usually with a relatively clear one-to-one correspondence between physical activity and sound. This one-to-one correspondence is a direct outcome of the models themselves: we want to perform electronic music much as we perform with acoustic instruments, or we want to control acoustic instruments automatically so that they behave much as they do when humans play them. In these two paradigms there is an effort to mimic human performance activity (new interfaces for controlling electronic sounds) or an effort to mimic the result of human physical activity (robotic control of acoustic instruments). But what do we want to do with the combination of acoustic instruments, performed by humans, with electronic sounds?
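To make the notion of one-to-one correspondence concrete, here is a minimal sketch (not from the paper; the function name, control source, and frequency range are illustrative assumptions) in which a single normalized control value drives a single synthesis parameter directly, the kind of immediate gesture-to-sound coupling both performance models imply.

```python
# A minimal sketch of a one-to-one mapping: one physical control value
# drives one sound parameter directly. Names and ranges are assumptions.

def map_linear(value, out_lo, out_hi):
    """Map a normalized control value in [0.0, 1.0] linearly onto a target range."""
    return out_lo + value * (out_hi - out_lo)

# e.g., a fader position mapped one-to-one to an oscillator frequency
fader = 0.75
freq_hz = map_linear(fader, 110.0, 880.0)  # -> 687.5 Hz
print(f"fader {fader} -> oscillator frequency {freq_hz} Hz")
```

Anything beyond this, such as one gesture shaping several parameters at once, or a parameter responding to the history of gestures rather than the current one, already moves past the direct correspondence these two models assume.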

The combination of acoustic instruments with electronic sounds is a genre of music that has existed for more than 60 years. With the possibilities offered by real-time digital signal processing, the genre has become a relatively accepted ensemble formation over the past two decades. Whatever sophistication the genre has achieved has been made possible by the significant research of the past 25 years into real-time pitch, amplitude, and timbre detection. One of the motivations for this research has been a desire to offer connections between the physical and the virtual. Virtual sound/instrument design and control based on musically relevant acoustic instrument parameter mapping offers pointers and guidelines for composers, performers, and listeners interested in creating strategies for linking the acoustic and electronic worlds of sound. And while researchers have offered composers a rich variety of techniques for tracking musical parameters, two criticisms are often made of this genre of music. The first is that the electronic part lacks the richness of electronic sounds found in pure electronic music (traditionally a non-real-time art form). The second is that, too often, the musical interactions offer mainly one-to-one correspondences in a kind of master/slave relationship in which the acoustic sound triggers an electronic response. Both criticisms are valid, especially if one compares this genre with the complex musical relationship between two acoustic instruments in a typical chamber music context. While a dialogic relationship is certainly evident in chamber music, and can arguably be identified as a primary relationship, the give-and-take between instruments often goes far beyond the master/slave relationship used to describe the typical interactive composition.
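As a concrete illustration of the trigger-based relationship criticized above, the following sketch (mine, not the paper's; the block size, threshold, and toy input signal are all assumptions) runs a simple RMS envelope follower over a mono buffer and treats each upward threshold crossing as the moment the acoustic sound would "trigger an electronic response."

```python
# A hedged sketch of the master/slave trigger pattern: an RMS envelope
# follower on an (assumed) mono input fires a response whenever the
# level crosses a threshold upward. All parameters are illustrative.
import numpy as np

def rms_envelope(signal, block=512):
    """Per-block RMS amplitude of a mono signal."""
    n = len(signal) // block
    blocks = signal[:n * block].reshape(n, block)
    return np.sqrt(np.mean(blocks ** 2, axis=1))

def detect_triggers(env, threshold=0.1):
    """Indices of blocks where the envelope crosses the threshold upward."""
    above = env > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Toy input: half a second of silence, then a decaying 440 Hz tone
# standing in for a played note.
sr = 44100
t = np.arange(sr) / sr
sig = np.concatenate([np.zeros(sr // 2),
                      np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)])
env = rms_envelope(sig)
for i in detect_triggers(env):
    print(f"onset at block {i}: trigger the electronic response here")
```

The limitation the criticism points at is visible in the structure itself: the electronics can only react, note by note, to what the instrument has just done, whereas chamber-music interaction also involves anticipation, negotiation, and mutual influence over longer spans of time.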