Emotional Data in Music Performance: Two Audio Environments for the Emotional Imaging Composer

Reference

Winters, R. Michael, Ian Hattwick, and Marcelo M. Wanderley (2013). “Emotional Data in Music Performance: Two Audio Environments for the Emotional Imaging Composer”. In: Proceedings of the 3rd International Conference on Music & Emotion, Jyväskylä, Finland.


Abstract

Figure: A screenshot of the mapping tool, which allows emotional parameters to interpolate between a variety of virtual acoustic spaces.

Technologies capable of automatically sensing and recognizing emotion are becoming increasingly prevalent in performance and compositional practice. Though these technologies are complex and diverse, we present a typology that draws on similarities with computational systems for expressive music performance. This typology provides a framework to present results from the development of two audio environments for the Emotional Imaging Composer, a commercial product for real-time arousal/valence recognition that uses signals from the autonomic nervous system. In the first environment, a spectral delay processor for live vocal performance uses the performer’s emotional state to interpolate between subspaces of the arousal/valence plane. For the second, a sonification mapping communicates continuous arousal and valence measurements using tempo, loudness, decay, mode, and roughness. Both were informed by empirical research on musical emotion, though differences in the desired output schemas led to different mapping strategies.
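To make the second environment's idea concrete, the sketch below shows one hypothetical way continuous arousal and valence values could drive the five sound parameters named in the abstract. The parameter ranges, the linear scaling, and the function and class names are all assumptions for illustration; this is not the published mapping of the Emotional Imaging Composer.

```python
# Hypothetical sketch: mapping arousal/valence (each in [-1, 1]) to the five
# sonification parameters mentioned in the abstract. Ranges and scalings are
# illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass


@dataclass
class SonificationParams:
    tempo_bpm: float      # faster tempo for higher arousal
    loudness_db: float    # louder output for higher arousal
    decay_s: float        # shorter decay for higher arousal
    mode: str             # "major" for positive valence, "minor" for negative
    roughness: float      # more roughness for negative valence (0..1)


def map_affect(arousal: float, valence: float) -> SonificationParams:
    """Map clipped arousal and valence values to sound parameters."""
    a = max(-1.0, min(1.0, arousal))
    v = max(-1.0, min(1.0, valence))
    return SonificationParams(
        tempo_bpm=90.0 + 50.0 * a,             # 40..140 BPM
        loudness_db=-20.0 + 10.0 * a,          # -30..-10 dBFS
        decay_s=1.5 - (a + 1.0) / 2.0,         # 1.5 s down to 0.5 s
        mode="major" if v >= 0.0 else "minor",
        roughness=(1.0 - v) / 2.0,             # 0 (positive) .. 1 (negative)
    )


if __name__ == "__main__":
    # e.g. moderately aroused, slightly negative state
    print(map_affect(arousal=0.6, valence=-0.3))
```

In this sketch, arousal drives the energetic parameters (tempo, loudness, decay) and valence drives the qualitative ones (mode, roughness), in line with common findings in musical-emotion research; the actual mapping in the paper may weight or combine the dimensions differently.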


Download

PDF (731 KB)