Next gen slaves are gonna be so advanced. Sure hope no one figures out a cheap way to use naturally grown brains for computing and data mining (under a capitalist system).
Maybe I misunderstood, but it seems they just used the brain cells as a microphone, and the voice recognition was done by a machine learning algorithm?
This needs more tests. It looks like the current results are a combination of how brain cells naturally filter the experience of sound and AI on top of that.
It looks like the brain actually does recognize voices as different, but we need an AI to read this from the brain. I am curious how much better this performs than just pure AI.
I'd also like to know how the brain got exposed to sound, because IRL an organic microphone is an ear. Is it a brain with ears?
Even if it's not better than just AI voice recognition, sending experiences through neural matter and using AI to analyze the way it responds will teach us a lot about how the brain actually works.
Your last paragraph is likely the important one. Rather than this being some idea to make things more efficient, it was likely done purely to see how human brain cells function. This may, long term, lead to more effective solutions by mimicking what was learned. But at the moment we still do not know how much we don't know about the human brain.
The description in the article sounds like they hooked the neurons to electrodes and fed the “voice” in as electrical impulses, and then used software to look at the neuron responses and determined they could tell which voice it was.
That doesn’t really seem very significant in itself. Of course if you give different inputs you’ll get different responses, and you can map those responses back to those inputs when there’s no other complexity going on. It feels like a very, very preliminary study being reported in a bit of a sensational way.
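To make that “map responses back to inputs” point concrete, here’s a rough sketch of that kind of pipeline. Everything in it is made up for illustration: a fake “organoid” where each speaker just produces a characteristic electrode pattern plus noise, and a plain scikit-learn classifier as the “AI on top.” It’s not the authors’ code or their actual setup, just the shape of the readout step.

```python
# Illustrative stand-in for the pipeline described above -- NOT the paper's code.
# Each "speaker" produces a characteristic electrode response pattern plus noise,
# and an ordinary linear classifier is trained to map responses back to speakers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_speakers, clips_per_speaker, n_electrodes = 8, 30, 32  # made-up sizes

# Fake "recorded electrode activity": one feature vector per audio clip
# (think spike counts or mean firing rates per electrode).
speaker_patterns = rng.normal(size=(n_speakers, n_electrodes))
speakers = np.repeat(np.arange(n_speakers), clips_per_speaker)
responses = speaker_patterns[speakers] + 0.5 * rng.normal(
    size=(len(speakers), n_electrodes)
)

X_train, X_test, y_train, y_test = train_test_split(
    responses, speakers, test_size=0.25, random_state=0, stratify=speakers
)

# The "AI on top": a plain linear readout from response vector -> speaker ID.
readout = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out speaker-ID accuracy: {readout.score(X_test, y_test):.2f}")
```

The readout scores well here only because the fake responses are speaker-separable by construction, which is exactly the open question with the real thing: how much of the separation comes from the tissue versus the classifier sitting on top of it.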
They made a documentary about this. It doesn’t end well. https://en.wikipedia.org/wiki/Saturn_3
Saturn 3! Such a terrible movie, but great fun if you’re a sci-fi schlock fan.
Oh, I know it’s bad, but I love it.
Can anyone with Nature access help with the full-text? It’s not on Sci-Hub yet.
🤖 I’m a bot that provides automatic summaries for articles:
Researchers have created a “biocomputer” made up of lab-grown human brain tissue and electronic circuits that they say can perform tasks including voice recognition.
The idea is to build a “bridge between AI and organoids,” as coauthor and Indiana University bioengineer Feng Guo told Nature, and leverage the efficiency and speed at which the human brain can process information.
Ultimately, the hope is to have brain-inspired biological computers perform tasks on behalf of conventional AI — while also providing scientists with an exciting new way to study the human brain.
Once the AI was trained on the organoid’s responses, the researchers found that Brainoware was able to pinpoint the original speaker 78 percent of the time.
According to Nature, researchers are also excited to discover new ways to study the human brain, as well as neurological disorders like Alzheimer’s, by replicating its architecture in a lab setting.
However, scaling up their mini-brains to ones large enough to complete more complex tasks will likely prove difficult, as cultivating the cells is an expensive and laborious process.
Saved 51% of original text.