An artificial ear that solves the cocktail party problem
Unlike current approaches to machine learning that rely on big data, brute force, and expensive computation, our technology is suited to an analog implementation that mirrors the efficiency, elegance, and simplicity of biological systems.
We have spent the last two years building an artificial ear that performs blind source separation, learning the complex sounds in its environment and separating them into distinct audio streams without training data, test data, or audio preprocessing.
Our biologically plausible learning algorithms incorporate observations and architectures from neuroscience, enabling few-shot, unsupervised learning of complex patterns directly from sensory input.
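For context on the problem itself, here is a minimal sketch of conventional blind source separation using independent component analysis (scikit-learn's FastICA) on two synthetic mixed signals. This is the classical baseline, shown only to illustrate the cocktail party problem; it is not our Density Network, and the signals, noise level, and mixing matrix are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "voices" at a cocktail party: a sinusoid and a square wave.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 4000)
s1 = np.sin(2 * t)                # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))       # source 2: square wave
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((4000, 2))  # slight noise

# Each "microphone" hears a different linear mixture of both sources.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])        # hypothetical mixing matrix
X = S @ A.T                       # observed mixed recordings

# FastICA recovers the sources without labels or training data,
# up to permutation and scaling.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)      # columns approximate s1 and s2
```

Classical ICA needs the full mixture up front and at least as many microphones as sources; it illustrates the goal of the cocktail party problem, not how our ear achieves it.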
Watch it in action
Witness the autonomous actors within a Density Network, organized in multiple regions with distinct learning behaviors, extract the sound of a single instrument from a musical duet overlaid with speech.
What’s next:
Artificial eyes and fingertips
We are harnessing the power of emergent behavior in algorithmic form. Our artificial ear demonstrates that the approach works, and we know how to build on these foundations to give our platform new senses, skills, and modes of thinking.
In nature, birds and mammals mix and match learning capabilities to traverse the earth or build a dam. Every new capability we build adds to the library of behaviors that can be combined within a single evolving system to solve real-world problems.
Eventually, we hope our algorithmic analog of the brain can unlock mysteries of the mind and expand our capacity to treat its disorders.
Once fully realized, our approach will hear, see, and feel the world around it with unprecedented efficiency. A single system, optimized for next-generation analog computing chips, will learn locally, preserve context, open new doors, and change what we believe is possible.