A synthesizer interface that enables musicians to make music with sounds generated by machine learning.
NSynth Super is part of an ongoing experiment by Magenta: a research project within Google that explores how machine learning tools can help artists create art and music in new ways.
Technology has always played a role in creating new types of sounds that inspire musicians—from the sounds of distortion to the electronic sounds of synths. Today, advances in machine learning and neural networks have opened up new possibilities for sound generation.
Building upon past research in this field, Magenta created NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds, and then create a completely new sound based on these characteristics.
Generating Sounds with NSynth
NSynth uses deep neural networks to generate sounds at the level of individual samples. Because it learns directly from data, NSynth gives artists intuitive control over timbre and dynamics, and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer. The algorithm generates new sounds by combining the features of existing sounds, which it takes as input.
Using an autoencoder, it extracts 16 defining temporal features from each input. These features are then interpolated linearly to create new embeddings (mathematical representations of each sound). These new embeddings are then decoded into new sounds, which have the acoustic qualities of both inputs.
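The interpolation step described above can be sketched in a few lines. This is a minimal illustration, not the NSynth implementation: the array shapes and the `interpolate` helper are assumptions for demonstration, standing in for the per-frame 16-dimensional embeddings the encoder produces.

```python
import numpy as np

# Stand-ins for encoder output: NSynth represents each sound as a
# sequence of temporal embedding frames, 16 features per frame.
# The frame count (125) here is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
embedding_a = rng.normal(size=(125, 16))  # embedding of input sound A
embedding_b = rng.normal(size=(125, 16))  # embedding of input sound B

def interpolate(emb_a, emb_b, weight):
    """Linearly blend two embeddings; weight=0 yields A, weight=1 yields B."""
    return (1.0 - weight) * emb_a + weight * emb_b

# A new embedding halfway between the two inputs; decoding it would
# produce a sound with acoustic qualities of both.
midpoint = interpolate(embedding_a, embedding_b, 0.5)
print(midpoint.shape)  # (125, 16)
```

In the real system, the decoder then turns each interpolated embedding back into audio samples; the linear blend happens entirely in this learned embedding space, not on the raw waveforms.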
A full description can be found on the Magenta blog. The dataset and algorithm can be found in the research paper Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders on the Google Research page.
Instrument Digital Interface
Using NSynth Super, musicians can explore more than 100,000 sounds generated with the NSynth algorithm. The design challenge for the interface was to visualize the generated sounds and provide an intuitive way for musicians to navigate them.
Instrument Physical Interface
To design the physical interface, the team went through several design and prototype iterations, arriving at an interface with four quadrants for navigating the space of sounds.
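One plausible way a four-quadrant surface can map a 2D position to a blend of four corner sounds is bilinear weighting. This is a hedged sketch of that general technique, not the NSynth Super source; the `corner_weights` function and corner naming are assumptions for illustration.

```python
import numpy as np

def corner_weights(x, y):
    """Bilinear mixing weights for four corners (NW, NE, SW, SE).

    x and y are a touch position normalized to [0, 1]; the returned
    weights always sum to 1, so the blend stays a convex combination.
    """
    return np.array([
        (1.0 - x) * (1.0 - y),  # north-west corner
        x * (1.0 - y),          # north-east corner
        (1.0 - x) * y,          # south-west corner
        x * y,                  # south-east corner
    ])

# At the centre of the surface, every corner contributes equally.
w = corner_weights(0.5, 0.5)
print(w)        # [0.25 0.25 0.25 0.25]
print(w.sum())  # 1.0
```

Applying these weights to four corner embeddings (rather than to raw audio) would match the embedding-space interpolation described earlier, with the touch position selecting the mix.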
Open NSynth Super
NSynth Super is built using open source libraries, including TensorFlow and openFrameworks, to enable a wider community of artists, coders, and researchers to experiment with machine learning in their creative process. The open source version of the NSynth Super prototype, including all of the source code, schematics, and design templates, is available for download on GitHub.