07/11/2018 / By Edsel Cook
California-based researchers are showing off the latest version of their mind-reading artificial intelligence (AI) algorithm. According to a Digital Trends article, the deep-learning AI can read a person's brain activity to identify a song that is playing on a device – or only in that person's head.
Apps like Shazam employ similar machine learning that lets them identify a song by listening to it. But this works on a wholly different level: it identifies the music from brain activity alone.
Researchers from the University of California, Berkeley (UC Berkeley) started working on their AI in 2014. Study author Brian Pasley and his teammates attached electrodes to the heads of volunteers and measured brain activity while the participants were speaking.
After mapping the connection between brain activity and speech, they combined the accumulated brain data with a deep-learning algorithm. The AI was then able to turn a person's thoughts into digitally synthesized speech with some accuracy.
In 2018, the UC Berkeley team brought their AI to the next level of mind-reading. The improved deep-learning AI demonstrated 50 percent greater accuracy than its predecessor: it was better able to read the brain activity of a pianist and predict which sounds the musician was thinking of. (Related: Creepy: New AI can READ YOUR MIND by decoding your brain signals … kiss your personal privacy goodbye.)
Study author Pasley explained that auditory perception is the act of listening to music, speech, and other sounds. Earlier studies have shown that certain parts of the brain's auditory cortex are responsible for breaking sounds down into their acoustic frequencies, such as high or low tones.
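The decomposition Pasley describes is loosely what a spectrogram does to an audio signal. The short Python sketch below is purely illustrative (it is not the study's code): it breaks a synthetic 440 Hz tone into its acoustic frequencies using SciPy, with the sample rate, tone, and window length chosen arbitrarily for the example.

```python
# Illustrative sketch only: decomposing a sound into acoustic frequencies,
# roughly the operation attributed to the auditory cortex in the article.
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                                 # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)          # a 440 Hz tone standing in for real audio

# Each column of Sxx says how much energy sits in each frequency band
# during that short slice of time.
freqs, times, Sxx = spectrogram(tone, fs=fs, nperseg=512)
peak_band = freqs[Sxx.mean(axis=1).argmax()]
print(f"Dominant frequency: about {peak_band:.0f} Hz")  # lands near 440 Hz
```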
He and his team observed those brain areas to see whether they broke down imagined sounds in much the same way they processed actual sounds. Examples of imagined sounds include silently hearing one's own voice or imagining music filling a quiet room.
They reported finding a large overlap between the parts of the brain that handle real sounds and the parts that translate imagined ones, along with some significant differences.
“By building a machine learning model of the neural representation of imagined sound, we used the model to guess with reasonable accuracy what sound was imagined at each instant in time,” Pasley said.
In the first phase of their experiment, the UC Berkeley researchers attached electrodes to a pianist's head and recorded his brain activity while he performed several musical pieces on an electric keyboard. They could then match the volunteer's brain patterns to the musical notes he played.
During the second phase, they repeated the process with the keyboard muted, asking the musician instead to imagine the notes he was playing.
In this way, they were able to train their music-predicting AI algorithm to guess the imagined note playing in the participant's head.
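One rough way to picture that training step is as a decoder fitted on brain-activity windows labeled with the note being played, then applied to recordings made while the keyboard was muted. The sketch below is hypothetical: the study used intracranial recordings and deep-learning models, and none of the electrode counts, shapes, or variable names here come from the researchers' work; random arrays simply stand in for real recordings to show the structure of the two phases.

```python
# Hypothetical, minimal sketch of the two-phase setup described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_channels, n_notes = 64, 12                    # assumed electrode count and note vocabulary

# Phase 1: brain activity recorded while the pianist actually plays.
X_played = rng.normal(size=(500, n_channels))   # one feature vector per time window
y_played = rng.integers(0, n_notes, size=500)   # the note sounding in that window

# Phase 2: brain activity recorded while the pianist only imagines playing.
X_imagined = rng.normal(size=(100, n_channels))

# Fit a decoder on the played-note data, then predict the imagined notes.
decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_played, y_played)
predicted_notes = decoder.predict(X_imagined)
print(predicted_notes[:10])
```

With real neural recordings, the accuracy of those predictions on the muted-keyboard phase is what tells the researchers how well imagined sounds can be decoded.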
Pasley said that the ultimate objective of the research is to create deep-learning AI algorithms for a speech prosthetic: a device that would let patients whose paralysis has deprived them of speech communicate again.
“We are quite far from realizing that goal, but this study represents an important step forward. It demonstrates that the neural signal during auditory imagery is sufficiently robust and precise for use in machine learning algorithms that can predict acoustic signals from measured brain activity,” Pasley claimed.
Find out more about your mind at Mind.news.