AudioScapes

Exploring Surface Interfaces for Music Exploration

Steven R. Ness and George Tzanetakis

There is growing interest in touch-based and gestural interfaces as alternatives to the dominant mouse, keyboard and monitor interaction. Content- and context-aware visualizations of audio collections have been proposed as a more effective way to interact with the increasing amounts of audio data available digitally. AudioScapes is a framework for prototyping and exploring how touch-based and gestural controllers can be used with state-of-the-art content- and context-aware visualizations. By providing well-defined interfaces and conventions, a variety of audio collections, controllers and visualization methods can be combined to create innovative ways of interacting with large audio collections. We describe the overall system architecture, the currently available components and specific case studies.
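As a rough illustration of how such a modular design could be organized, the sketch below shows one way the pieces might fit together. It is hypothetical: the class and method names are invented for illustration and are not the actual AudioScapes API.

    # Hypothetical sketch of the modular interfaces described above;
    # the names are invented for illustration, not the AudioScapes API.
    from abc import ABC, abstractmethod

    class Controller(ABC):
        """An input device (touch surface, Radio Drum, eye tracker, ...)."""
        @abstractmethod
        def poll(self) -> tuple[float, float]:
            """Return the current cursor position in normalized [0, 1] coordinates."""

    class Visualization(ABC):
        """A content- and context-aware layout of an audio collection."""
        @abstractmethod
        def song_at(self, x: float, y: float) -> str:
            """Return the identifier of the song nearest to the given position."""

    class AudioCollection(ABC):
        """A set of audio files plus a way to play excerpts from them."""
        @abstractmethod
        def play(self, song_id: str) -> None: ...

    def browse(controller: Controller, view: Visualization, collection: AudioCollection) -> None:
        # Any controller can drive any visualization over any collection,
        # which is the combinatorial flexibility described above.
        x, y = controller.poll()
        collection.play(view.song_at(x, y))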

The songs are not placed on the grid by hand; their positions are an emergent property of the music itself.

AudioScape web view of the 1000-song, 10-genre collection described in "Musical Genre Classification of Audio Signals" (Tzanetakis and Cook, IEEE Transactions on Speech and Audio Processing, 2002). Changing the number of iterations demonstrates how the map self-organizes.
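The self-organization visible in this view can be sketched with a minimal self-organizing map (SOM), assuming each song is represented by a precomputed feature vector. This is an illustrative sketch of the general technique, not the specific implementation used in AudioScapes.

    # Minimal self-organizing map sketch: songs with similar feature vectors
    # end up in nearby grid cells, so the layout "emerges" from the audio itself.
    import numpy as np

    def train_som(features, grid=(20, 20), iterations=1000, lr0=0.5, sigma0=5.0, seed=0):
        rng = np.random.default_rng(seed)
        rows, cols = grid
        dim = features.shape[1]
        weights = rng.random((rows, cols, dim))           # one prototype vector per cell
        coords = np.dstack(np.mgrid[0:rows, 0:cols])      # (row, col) of every cell

        for t in range(iterations):
            lr = lr0 * np.exp(-t / iterations)            # learning rate decays over time
            sigma = sigma0 * np.exp(-t / iterations)      # neighbourhood shrinks over time
            x = features[rng.integers(len(features))]     # pick a random song

            # Best-matching unit: the cell whose prototype is closest to this song.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)

            # Pull the BMU and its neighbours toward the song's feature vector.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)

        return weights

    def place_songs(features, weights):
        # Each song lands on the grid cell whose prototype it matches best.
        flat = weights.reshape(-1, weights.shape[-1])
        cells = np.argmin(np.linalg.norm(flat[None] - features[:, None], axis=2), axis=1)
        return np.column_stack(np.unravel_index(cells, weights.shape[:2]))

Running train_som for more iterations lets the neighbourhood structure settle, which is what the changing-iterations control in the web view makes visible.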

AudioScape web view of a personal collection of 5,800 30-second MP3 snippets.

AudioScape web view of the RWC collection.

AudioScape web view of a subset of the FreeSound database of sound effects and sounds.

Video Demonstration of controlling the desktop AudioScape view using the Radio Drum.

Video Demonstration of the iPhone AudioScape view.

Video Demonstration of a user with cerebral palsy controlling the desktop AudioScape view using the keyboard.

Video Demonstration of controlling the desktop AudioScape view using an eye tracker.

Marsyas was used to analyze the audio data in all of the above demonstrations. In the videos linked above, Marsyas was also used in conjunction with Qt to display the grids and provide interaction with them.
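For readers who want to reproduce the general idea without Marsyas, a per-track feature vector can be computed along the following lines. This sketch uses librosa purely as a stand-in; it is not the Marsyas pipeline used in the demonstrations, and the exact features and parameters are illustrative assumptions.

    # Illustrative feature extraction (librosa stand-in, not the Marsyas pipeline):
    # summarize each 30-second snippet as the means and standard deviations of
    # a few timbral features, giving one fixed-length vector per song for the map.
    import numpy as np
    import librosa

    def song_features(path, sr=22050, duration=30.0):
        y, sr = librosa.load(path, sr=sr, duration=duration)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
        rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
        frames = np.vstack([mfcc, centroid, rolloff])
        # Mean and standard deviation over time collapse the frame-level
        # features into a single vector describing the whole snippet.
        return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

Vectors computed this way are the kind of per-song input that a grid layout such as the one sketched earlier operates on.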