Facebook stops funding its brain-reading computer interface

Now the answer is in, and it’s not close at all. Four years after announcing an “incredibly crazy” project to build a “silent speech” interface that would use optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading is still a long way off.

In a blog post, Facebook said it would stop working on the project and instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said.

Facebook’s brain-reading project had led it into unfamiliar territory, including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull, as well as into heated debates over whether tech companies should have access to private brain information. Ultimately, though, the company appears to have decided that the research simply won’t lead to a product soon enough.

“We have a lot of hands-on experience with these technologies,” says Mark Chevillet, the physicist and neuroscientist who led the silent-speech project until last year, when he switched roles to study how Facebook handles elections. “That’s why we can say with confidence that, as a consumer interface, a head-mounted optical silent-speech device is still a long way out. Possibly longer than we anticipated.”

Mind reading

The reason brain-computer interfaces are in vogue is that companies see mind-controlled software as a potential breakthrough as important as the computer mouse, the graphical user interface, or the swipe screen. What’s more, researchers have already shown that if they place electrodes directly on the brain to tap individual neurons, the results are remarkable. Paralyzed patients with such “implants” can deftly move robotic arms and play video games or type via mind control.

Facebook’s goal was to turn those findings into a consumer technology anyone could use, which meant a helmet or headset you could put on and take off. “We never intended to make a brain-surgery product,” Chevillet says. Given the social giant’s many regulatory troubles, CEO Mark Zuckerberg had once said that the last thing the company should do is open up skulls. “I don’t want to see the congressional hearings on that one,” he had joked.

In fact, as brain-computer interfaces advance, they raise serious new concerns. What if big tech companies could know people’s thoughts? In Chile, legislators are even considering a human-rights bill to protect brain data, free will, and mental privacy from tech companies. Given Facebook’s poor record on privacy, the decision to halt this research may have the side benefit of putting some distance between the company and growing worries about “neurorights.”

The Facebook project aimed specifically at a brain controller that could mesh with its ambitions in virtual reality; it bought Oculus VR in 2014 for $2 billion. To get there, the company took a two-pronged approach, Chevillet says. First, it needed to determine whether a silent-speech interface was even possible. To find out, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang has been placing electrode pads on the surface of people’s brains.

While implanted electrodes read data from individual neurons, this technique, called electrocorticography, or ECoG, measures large groups of neurons at once. Chevillet says Facebook hoped it would also be possible to detect equivalent signals from outside the head.

The UCSF team made some breakthroughs, and today it published results in the New England Journal of Medicine showing that it had used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers refer to as “Bravo-1,” who, after a severe stroke, lost his ability to form intelligible words and can only grunt or moan. In their report, Chang’s group says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technology works by measuring neural signals in the part of the motor cortex associated with Bravo-1’s efforts to move his tongue and vocal tract as he imagines speaking.

To achieve this result, Chang’s team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient’s neural signals to a deep-learning model. After training the model to match words with neural signals, the team could correctly determine the word Bravo-1 was thinking of saying 40% of the time (chance would have been around 2%). Even so, his sentences were full of errors. “Hello, how are you?” might come out as “Hungry how are you?”
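
At heart, this step is a classification problem: map a window of neural activity to one of 50 candidate words. Here is a minimal sketch of that idea, with synthetic data and a plain softmax classifier standing in for the deep-learning models the UCSF team actually used; none of the numbers below come from the study.

```python
# Hypothetical, illustrative sketch: classify "neural signals" into 50 words.
# All data is synthetic; the real system decoded ECoG recordings.
import numpy as np

rng = np.random.default_rng(0)
VOCAB_SIZE = 50        # the 50-word vocabulary Bravo-1 trained on
N_FEATURES = 128       # stand-in for features extracted from electrode channels
N_TRIALS = 10_000      # roughly the number of imagined-speech attempts

# Synthetic signals: each word gets a noisy characteristic activity pattern.
prototypes = rng.normal(size=(VOCAB_SIZE, N_FEATURES))
labels = rng.integers(0, VOCAB_SIZE, size=N_TRIALS)
signals = prototypes[labels] + rng.normal(scale=2.0, size=(N_TRIALS, N_FEATURES))

# Train a softmax (multinomial logistic) classifier by gradient descent.
W = np.zeros((N_FEATURES, VOCAB_SIZE))
for _ in range(200):
    logits = signals @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(N_TRIALS), labels] -= 1.0    # gradient of cross-entropy
    W -= 0.01 * signals.T @ probs / N_TRIALS

accuracy = (np.argmax(signals @ W, axis=1) == labels).mean()
print(f"training accuracy: {accuracy:.0%} (chance would be {1 / VOCAB_SIZE:.0%})")
```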

But the scientists improved performance by adding a language model: a program that judges which word sequences are most likely in English. That boosted accuracy to 75%. With this hybrid approach, the system could figure out that Bravo-1’s garbled “I hit my nurse” actually meant “I like my nurse.”
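
The rescoring idea fits in a few lines: the decoder proposes per-word probabilities, and the language model’s prior over word sequences tips the balance toward plausible sentences. Everything in the sketch below is invented for illustration; the real system used far richer models over the full 50-word vocabulary.

```python
# Toy illustration of language-model rescoring. DECODER holds made-up
# P(word | neural signal) values per position; BIGRAM holds made-up
# P(word | previous word) values. Pairs not listed get a small floor.
DECODER = [
    {"I": 0.6, "hi": 0.4},
    {"hit": 0.6, "like": 0.4},       # decoder alone slightly prefers "hit"
    {"my": 0.9, "me": 0.1},
    {"nurse": 0.8, "purse": 0.2},
]
BIGRAM = {
    ("<s>", "I"): 0.5, ("<s>", "hi"): 0.1,
    ("I", "hit"): 0.02, ("I", "like"): 0.30,
    ("hit", "my"): 0.20, ("like", "my"): 0.40,
    ("my", "nurse"): 0.20, ("my", "purse"): 0.05,
    ("me", "nurse"): 0.01, ("me", "purse"): 0.01,
}

def viterbi(decoder, bigram, floor=1e-4):
    """Return the word sequence maximizing decoder * language-model scores."""
    paths = {"<s>": (1.0, [])}       # last word -> (probability, sentence so far)
    for step in decoder:
        new_paths = {}
        for word, p_word in step.items():
            new_paths[word] = max(
                (p * bigram.get((prev, word), floor) * p_word, hist + [word])
                for prev, (p, hist) in paths.items()
            )
        paths = new_paths
    return max(paths.values())[1]

greedy = " ".join(max(step, key=step.get) for step in DECODER)
print("decoder alone:", greedy)                             # I hit my nurse
print("with LM:      ", " ".join(viterbi(DECODER, BIGRAM))) # I like my nurse
```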

As remarkable as the result is, there are more than 170,000 words in English, so performance would plummet outside Bravo-1’s restricted 50-word vocabulary. That means the technique, while potentially useful as a medical aid, isn’t close to what Facebook had in mind. “We see applications in the foreseeable future in clinical assistive technology, but that’s not where our business is,” Chevillet says. “We’re focused on consumer applications, and there’s a very long way to go.”

Equipment developed by Facebook for diffuse optical tomography, which uses light to measure changes in blood oxygenation in the brain. (Image: Facebook)

Optical failure

Facebook’s decision to abandon brain reading is no surprise to researchers who study these techniques. “I can’t say I’m surprised, because they had hinted that they were looking at a short time frame and that they were going to evaluate things,” says Marc Slutzky, a professor at Northwestern whose former student Emily Mugler was a key hire Facebook made for its project. “Speaking just from experience, the goal of decoding speech is a big challenge. We’re still a long way from a practical, all-encompassing solution.”

Still, Slutzky says the UCSF project is an “impressive next step” that demonstrates both the remarkable possibilities and some of the limits of brain-reading science. “It remains to be seen whether speech can be decoded freely,” he says. “A patient saying ‘I want a drink of water’ versus ‘I want my medicine’: those are different.” He says that if artificial-intelligence models could be trained for longer, and on more than one person’s brain, they might improve quickly.

Alongside the UCSF research, Facebook also paid other centers, such as the Johns Hopkins Applied Physics Laboratory, to figure out how to pump light through the skull in order to read neurons non-invasively. Like MRI, these techniques rely on sensing reflected light to measure the amount of blood flow to brain regions.

It is these optical techniques that remain the biggest stumbling block. Even with recent improvements, including some by Facebook, they cannot pick up neural signals with enough resolution. Another problem, Chevillet says, is that the blood flow these methods detect peaks about five seconds after a group of neurons fires, making it too slow to control a computer.
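
That lag can be seen with a back-of-the-envelope simulation: convolving a brief burst of neural activity with a gamma-shaped hemodynamic response function, a common textbook approximation and an assumption of this sketch rather than Facebook’s actual model, shows the measurable signal peaking several seconds after the neurons fire.

```python
# Rough sketch of why blood-flow signals are slow for real-time control.
import numpy as np

dt = 0.1                                   # seconds per sample
t = np.arange(0, 20, dt)

# Gamma-function approximation of the hemodynamic response (textbook-style
# assumption), with its mode at (shape - 1) * scale = 4.5 seconds.
shape, scale = 6.0, 0.9
hrf = t ** (shape - 1) * np.exp(-t / scale)
hrf /= hrf.max()

neural = np.zeros_like(t)
neural[np.abs(t - 1.0) < 0.05] = 1.0       # neurons fire briefly at t = 1 s

bold = np.convolve(neural, hrf)[: len(t)]  # the blood-flow signal optics see
print(f"neural burst at 1.0 s; measured signal peaks at {t[np.argmax(bold)]:.1f} s")
```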
