You ain’t heard nothin’ like this before.
Literally.
Thanks to Google and its AI capabilities, we’re expanding our aural horizons, taking our ears to places they’ve never been before. You see, Google is using machine learning to create brand-new sounds, combining the characteristics of existing instruments into something entirely novel. It’s the work of Jesse Engel, Cinjon Resnick, and other members of Google Brain, the tech company’s core AI lab. The project is called NSynth, short for Neural Synthesizer, and it’s described as “a novel approach to music synthesis, designed to aid the creative process.”
While it may sound as though Google’s scientists are playing two instruments at the same time with NSynth, or perhaps layering instruments atop one another, that’s not actually what’s happening. Rather, as Wired notes, the software produces entirely new sounds by leveraging “the mathematical characteristics of the notes that emerge” from various instruments. And those instruments are indeed varied: NSynth works with roughly 1,000 different sound sources, from violins to didgeridoos, and the combinations among them open up a vast space of timbres that no single instrument could produce on its own.
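To make that distinction concrete, here’s a minimal, runnable sketch in Python. The toy_encode and toy_decode functions below are crude stand-ins of our own invention (a log-magnitude spectrum, nothing like NSynth’s actual deep networks); the point is only that interpolating two notes in a nonlinear representation yields a single new timbre, where naive layering just plays both sounds at once.

```python
import numpy as np

SAMPLE_RATE = 16000  # samples per second for this toy example

def toy_encode(audio):
    # Crude stand-in for a learned encoder (NOT the real NSynth model):
    # a log-magnitude spectrum plus phase. The nonlinearity matters; in a
    # purely linear space, interpolation would collapse back into layering.
    spectrum = np.fft.rfft(audio)
    return np.log1p(np.abs(spectrum)), np.angle(spectrum)

def toy_decode(log_mag, phase):
    # Inverse of toy_encode: rebuild audio from log-magnitude and phase.
    return np.fft.irfft(np.expm1(log_mag) * np.exp(1j * phase))

def blend(note_a, note_b, alpha=0.5):
    # Interpolate the two notes in the (toy) embedding space.
    mag_a, phase_a = toy_encode(note_a)
    mag_b, _ = toy_encode(note_b)
    return toy_decode((1 - alpha) * mag_a + alpha * mag_b, phase_a)

def layer(note_a, note_b):
    # Naive layering for contrast: both instruments simply sound at once.
    return 0.5 * (note_a + note_b)

# Two toy "instruments" playing the same A440 note:
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
pure_tone = np.sin(2 * np.pi * 440 * t)                        # flute-like
rich_tone = sum(np.sin(2 * np.pi * 440 * k * t) / k for k in range(1, 6))

hybrid = blend(pure_tone, rich_tone)   # one new in-between timbre
stacked = layer(pure_tone, rich_tone)  # two recognizable timbres at once
```

Interpolating log-magnitudes rather than raw waveforms is what keeps the hybrid from sounding like a duet; NSynth’s learned embedding plays an analogous, though far richer, role.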
“Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples,” the team explained in a blog post last month. “Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”
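The contrast the team draws is worth unpacking. A traditional synthesizer reads precomputed waveforms out of an oscillator or wavetable, while a sample-level model predicts each successive audio sample from the ones that came before it. The sketch below illustrates both; the pluck_predict function is a hypothetical stand-in (a classic Karplus-Strong string update), not the trained WaveNet-style network NSynth actually uses.

```python
import numpy as np

def wavetable_oscillator(table, freq, seconds, sample_rate=16000):
    # Traditional synthesis: loop over a hand-designed wavetable at a pitch.
    n = int(seconds * sample_rate)
    idx = (np.arange(n) * freq * len(table) / sample_rate) % len(table)
    return table[idx.astype(int)]

def generate_sample_by_sample(predict_next, seed, n_samples):
    # Sample-level generation loop: every new audio sample is predicted
    # from the history of samples already produced. In NSynth the predictor
    # is a deep neural network; here it can be any function of the history.
    audio = list(seed)
    for _ in range(n_samples):
        audio.append(predict_next(audio))
    return np.array(audio)

# Hypothetical predictor: a Karplus-Strong plucked-string update, standing
# in for a trained network. Each new sample depends on two earlier ones.
DELAY = 100  # delay length sets the pitch (~ sample_rate / DELAY Hz)

def pluck_predict(history):
    return 0.996 * 0.5 * (history[-DELAY] + history[-DELAY - 1])

seed = np.random.uniform(-1, 1, DELAY + 1)  # a noise burst "plucks" the string
note = generate_sample_by_sample(pluck_predict, seed, 16000)
```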
Indeed, as music critic Marc Weidenbaum tells Wired, the underlying concept is an old one, even if the execution is newly sophisticated. “The blending of instruments is nothing new,” Weidenbaum said. “Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead.”
Ultimately, the team behind NSynth notes, “We wanted to develop a creative tool for musicians and also provide a new challenge for the machine learning community to galvanize research in generative models for music.” And later this week, the public will be able to see this new tool in action as Google’s team presents at the annual art, music, and tech festival known as Moogfest. So if you’re near Durham, North Carolina, this certainly seems like something worth checking out.