Navigating the Path Between Computer Science and Music

Dana Cook Grossman

In 1959, the British novelist and physicist Sir C.P. Snow gave a famous lecture ruing what he saw as a rift between society’s “two cultures”—the humanities and the sciences. Snow would surely be heartened, half a century later, by Dartmouth doctoral student Andy Sarroff. “I have one foot in the music department and one foot in the computer science department,” says Sarroff.

“I would describe myself as being in the field of music-information retrieval,” he continues. “It’s not such an old field—probably just about 15 years old. Its focus is taking music in whatever format it’s in and extracting meaningful information out of it. Usually, I’m working with digital audio—looking at the zeros and ones in digital audio and mapping the perception to the signal.”
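For readers curious what “looking at the zeros and ones” means in practice, here is a minimal sketch of the usual first step: turning a file’s raw samples into a time-frequency picture from which perceptual features can be computed. The file name is hypothetical and the code assumes a mono 16-bit WAV; it illustrates the general approach, not Sarroff’s own pipeline.

```python
# Minimal sketch: from raw audio samples ("zeros and ones") to a
# log-magnitude spectrogram, a common starting point in
# music-information retrieval.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

# Hypothetical file; assumed to be mono, 16-bit PCM.
rate, samples = wavfile.read("song.wav")
samples = samples.astype(np.float64) / 32768.0  # normalize to [-1, 1]

# Short-time Fourier transform: energy at each frequency over each
# analysis window (~46 ms at a 44.1 kHz sample rate).
freqs, times, Z = stft(samples, fs=rate, nperseg=2048)
spectrogram = np.abs(Z)

# A log scale roughly matches how loudness is perceived.
log_spec = 20.0 * np.log10(spectrogram + 1e-10)
print(log_spec.shape)  # (frequency bins, time frames)
```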

Sarroff came to his interest in “zeros and ones” through his interest in notes and meter.

“Music was a gateway to computing for me. I’ve always played music,” he says.

He was a music major as an undergraduate at Wesleyan and then a recording engineer for eight years or so. He went on to earn a master’s degree in music technology at New York University.

“As someone who’s worked in the practical area of making music—sitting in a recording room for hours and hours, mixing music—I’m really interested in the characteristics of sounds, in measuring those characteristics quantitatively,” he says.

Among the attributes that interest him are music’s spatial characteristics. Sarroff explains what that means: “Say you’re sitting in a room with very nice acoustics and very nice speakers; you’ll be thinking about the virtual three-dimensional image that you’re hearing. You have a sense of all these musical ‘objects’ that are placed virtually in a stereo field. They don’t actually exist, but your brain’s interpretation is that you have drums in one place and voice in another.”

Analyzing what creates that sense, he says, involves complex computations to map the perceived sound to the audio signal.
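One illustrative way to make that mapping concrete is to compare the left and right channels of a stereo recording bin by bin: time-frequency bins whose energy leans the same way tend to belong to the same virtual “object.” The sketch below computes such a panning index. It is a simplified stand-in for the complex computations Sarroff describes, not his method, and the file name is again hypothetical.

```python
# Sketch: estimate where sound "objects" sit in the stereo field by
# comparing left- and right-channel energy per time-frequency bin.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

rate, samples = wavfile.read("song.wav")  # hypothetical stereo file
left = samples[:, 0].astype(np.float64)
right = samples[:, 1].astype(np.float64)

_, _, L = stft(left, fs=rate, nperseg=2048)
_, _, R = stft(right, fs=rate, nperseg=2048)

# Panning index per bin: -1 = hard left, 0 = center, +1 = hard right.
# Bins sharing an index tend to belong to the same virtual "object"
# (drums in one place, voice in another).
pan = (np.abs(L) - np.abs(R)) / (np.abs(L) + np.abs(R) + 1e-10)
```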

Another topic that intrigues Sarroff is known as “separation”—the ability to distinguish the various aspects of a piece of music from each other. “Our brains can easily separate out all the components,” he says, “but it’s very difficult to separate them out from the signal” computationally.

Separation, he says, is “a pretty deep problem in music, actually in the field of audio in general.”
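One textbook technique for attacking separation is non-negative matrix factorization (NMF), which explains a magnitude spectrogram as a handful of spectral templates switched on and off over time. The sketch below is a generic illustration of that idea, not a description of Sarroff’s own method.

```python
# Sketch: separate a magnitude spectrogram V into components via NMF,
# V ~ W @ H, where columns of W are spectral templates and rows of H
# are their activations over time.
import numpy as np

def nmf(V, n_components=4, n_iter=200, eps=1e-10):
    """Factor non-negative V (freq x time) into W @ H using
    multiplicative updates that minimize squared error."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, n_components))
    H = rng.random((n_components, T))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# With V = np.abs(Z) from the spectrogram sketch above:
#   W, H = nmf(V)
#   component_k = np.outer(W[:, k], H[k])
# Ideally each component captures one "object" (a drum hit, a voice),
# though real mixtures rarely split that cleanly -- which is why
# separation remains a deep problem.
```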

The question occupying him currently is Search by Groove, a project supported by a Google Faculty Research Award. “The idea of groove is an incredibly complex thing,” he says. “It basically means the way you hear music and how that affects the way your body moves to music. We’re looking at about 9,000 songs and trying to extract rhythmic motifs that describe that particular song. I don’t mean just the rhythmic patterns—groove is a combination of instrumentation and rhythmic patterns. Then we use those motifs in a look-up database to find similar songs.”
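In spirit, the look-up works like a nearest-neighbor search over fixed-length rhythmic fingerprints. The sketch below uses a deliberately simple fingerprint, the autocorrelation of an onset-strength envelope, as a stand-in for the richer motifs the project extracts; the helper names are hypothetical, and the onset envelope is assumed to be computed elsewhere (for example, from spectral flux).

```python
# Sketch: summarize each song as a fixed-length rhythmic fingerprint,
# then rank a database of songs by similarity to a query.
import numpy as np

def rhythm_fingerprint(onset_envelope, n_lags=128):
    """Autocorrelate an onset-strength curve: peaks appear at lags
    matching the song's repeating rhythmic periods."""
    env = onset_envelope - onset_envelope.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    ac = ac[:n_lags]
    return ac / (np.linalg.norm(ac) + 1e-10)  # unit-normalize

def most_similar(query, database):
    """Cosine similarity between a query fingerprint and each row of a
    (n_songs x n_lags) database of unit-normalized fingerprints;
    returns song indices, best match first."""
    scores = database @ query
    return np.argsort(scores)[::-1]
```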

While Sarroff is engrossed in the academic aspects of these questions—and envisions remaining in a university setting after he completes his doctorate—he notes that there are many practical applications for such work. For example, online radio stations can use computational analysis to generate playlists; music retailers can develop algorithms that extract information from the music customers buy and identify similar pieces they might like; and composers can use such tools to help create new music.

The nature of his interests means that although Sarroff’s PhD will be in computer science, he works closely with Dartmouth’s Digital Musics Program.

He finds the disciplinary boundaries at Dartmouth so permeable that “it’s pretty easy to engage in the interactive, interdisciplinary work that music and computation require. I came from NYU, where we had a really nice research group, but reaching across to another department there was more difficult than it is here.”