Somehow, even in a room full of loud conversations, our brains can focus on a single voice, a phenomenon known as the cocktail party effect. But as the room gets louder, or as we get older, that feat becomes harder. Now, researchers may have figured out how to help, using a machine-learning technique they call the "cone of silence."
Computer scientists trained a neural network, a program that roughly mimics the wiring of the brain, to detect and separate the voices of several people speaking in a room. The network does so by measuring tiny differences in how long each voice takes to reach the microphones of an array placed in the center of the room.
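The underlying idea, estimating a sound's direction from the delay between microphones, can be sketched in a few lines. This is a minimal illustration of the classical time-difference-of-arrival approach, not the researchers' neural network; the function names, the two-microphone setup, and the parameter values are assumptions chosen for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, roughly, in air at room temperature


def estimate_delay(sig_a, sig_b, sample_rate):
    """Estimate the delay (in seconds) of sig_b relative to sig_a
    by finding the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_a) - 1)
    return lag_samples / sample_rate


def direction_from_delay(mic_spacing_m, delay_s):
    """Convert a time difference of arrival between two microphones
    spaced mic_spacing_m apart into a bearing, in degrees, measured
    from the axis perpendicular to the microphone pair."""
    # Far-field geometry: delay = spacing * sin(angle) / c
    ratio = np.clip(SPEED_OF_SOUND * delay_s / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```

With more than two microphones, the same delays can be intersected to pinpoint a source; the published system instead learns to suppress everything outside a shrinking angular region, hence the "cone of silence" name.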
When the researchers tested their system against loud background noise, they found the cone of silence could pinpoint two voices to within 3.7° of their true sources, they reported this month at the Conference on Neural Information Processing Systems, held online. That compares with a resolution of only 11.5° for the previous state-of-the-art technology. When the researchers trained their system on additional voices, it managed the same trick with eight voices, locating them to within 6.3°, even though it had never heard more than four at once.
Such a system could one day be built into hearing aids, surveillance systems, speakerphones, or laptops. The technology could also track moving speakers, and make your Zoom calls easier by separating out and muting background noise, from vacuum cleaners to rambunctious children.