Life Beyond Reed (continued)

Photo by Matt D'Annunzio

Gina Collecchia ’09

Audio engineer

Audio engineering seems like an ideal career for Gina. Passionate about music and gifted at math, she found novel ways to combine these fields at Reed, writing a math thesis on music information retrieval that explored the mathematical similarities between songs, artists, and genres, and developing algorithms that allow machines to make sense of audio data. Since graduation, she has published a book on musical signal processing, earned a master's in music, science, and technology from Stanford, and worked for Bay Area companies. Gina recently became a senior audio software engineer at Jaunt VR, where she leads the development of audio algorithms for virtual reality.

Thesis: The Entropy of Musical Classification. Advisor: Prof. Joe Roberts [math 1952–2014]

What does an audio engineer do? At SoundHound, I was working on machine learning for speech recognition, text to speech, and building our knowledge graph for voice search. For a company of about 100 people, we accomplished some pretty huge things—our personal assistant, Hound, is faster than Apple's and Google's assistants and uses our own speech recognition. But I wanted to do more with music and audio. At Jaunt, I'll be responsible for everything audio. Jaunt has a unique, cinematic approach to virtual reality, and it's all about capturing 3-D content.

What are you working on now? I’ll be building tools for content creation, binaural (headphone) and Ambisonic (speaker array) audio rendering, and much more. VR is a new frontier for media, so I'm really excited about this new opportunity!

What did you learn at Reed that you’ve carried with you into your career? The high level of work ethic, autonomy, and coursework were a really good setup for research positions, networking, and grad school. 

Who should look at audio engineering? There are a lot of different disciplines that fall into audio engineering: electrical engineering, computer science, linguistics, math, physics, art, psychology—even archaeology! There's a ton of research to be done on the perception of audio and music, and the parts of the brain that it affects. With the developments in artificial intelligence and machine learning, I think we're going to need a lot more of that research.

What are the next big problems for sound engineers to solve? There are still source separation and speech-related problems to solve, but machine learning has opened the door for a lot of creative research. Google is trying to create the next music sensation like the Beatles with neural networks and deep learning.

Like robots writing and producing music? Yes. And I’m a little skeptical of that kind of thing. I think part of our attachment to music is the desire to be or be with the rock star—to have a connection to the performer, which I don’t think we could ever have with a robot. But there are Spotify- or Pandora-generated playlists that people love. That is one direction [this field is headed] that is rooted in user experience.