The era of mental conversations has begun

Maria Isabel Garcia


What is the latest on the ground from ordinary mortals doing science to try to read people's minds?

Lois Lane once asked Superman if he could read her mind – at least she did in the song that she “sang” to him in the first Superman film, which came out when I was a freshman in high school. It was a hit song at the time, and many remember it to this day. I think it really struck a major chord with many, not just because it was in the movie but because reading a woman’s mind is probably the ultimate challenge that could make or break any man, let alone Superman.

But what is the latest on the ground from ordinary mortals doing science to try to read people’s minds? Lately, they have had a breakthrough.

Many of us are familiar with another superhuman, the late Stephen Hawking, and how he communicated, but probably only a few know how it worked. His speech was enabled by a computer and software that Intel had built and updated for him since 1997. It displayed an on-screen keyboard with a cursor that scanned across the characters. A sensor attached to Dr Hawking’s cheek detected his twitches, which stopped the cursor on the character he wanted; the software also predicted whole words after he had “typed” a few letters. After completing a sentence, he used the cheek twitch again to send it to his speech synthesizer, which verbalized it for him. But the software did not decode his thoughts.
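For readers curious about the mechanics, here is a minimal sketch in Python of how a single-switch scanning keyboard works. The key layout, the step numbers, and the word list here are illustrative assumptions, not Intel's actual software.

# A minimal sketch of a single-switch scanning keyboard of the kind
# described above. Layout, steps, and word list are made up for
# illustration; this is not Intel's actual implementation.

KEYS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ ")
WORDS = ["HELLO", "HELP", "HELMET"]  # toy word-prediction dictionary

def select_key(twitch_at_step):
    """The cursor steps across KEYS one key at a time; a cheek twitch
    at step n selects KEYS[n]."""
    return KEYS[twitch_at_step % len(KEYS)]

def predictions(prefix):
    """After a few letters are 'typed', suggest whole words."""
    return [w for w in WORDS if w.startswith(prefix)]

# Spell "HE" with two twitches, then one more twitch accepts a prediction.
typed = select_key(7) + select_key(4)     # 'H' then 'E'
print(predictions(typed))                 # ['HELLO', 'HELP', 'HELMET']
sentence = predictions(typed)[0]          # a twitch confirms the first option
print("Sent to synthesizer:", sentence)   # the synthesizer then speaks it

One twitch per letter is slow, which is exactly why word prediction mattered so much: a few letters plus one confirming twitch could produce a whole word.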

To decode thoughts and synthesize them as speech, you need BMIs (brain-machine interfaces). These have been in development for years in various research groups around the world. Major breakthroughs have been few, but recently demonstrated software that can translate brain signals into questions and answers in real time is considered one by many in the field.

On one hand, you don’t have to worry, yet. This does not mean that any of your silent thoughts can spill out as speech the moment a machine scans your brain. The intention of this research and applied technology is to enable people whose speech is paralyzed to hear a question and have their responses verbalized. It is not to have all our unfiltered thoughts freely roaming out there for everyone to see. Come to think of it, we already have that – it is called the Internet. On the other hand, I think we have to worry and start having serious ethical discussions about this kind of technology, because the study was funded by Facebook.

When we hear something, scientists know that networks in the brain are activated. When we prepare to respond in the form of speech, we need the cooperation of our larynx, jaw, lips, and tongue, and scientists have also observed which brain networks become active for this. For people whose conditions prevent them from audibly responding to questions they hear, the challenge is to connect this knowledge with technology that lets them hear and respond audibly, in real time.

This challenge was met with the help of 3 volunteer participants who were already scheduled for brain surgery for their epilepsy. They agreed to have electrode patches implanted on the surface of their brains, specifically over the areas known to be activated when we hear a question, and also over the areas known to be activated when we verbalize responses to such questions. The researchers then asked the participants 9 questions and had them verbalize answers drawn from a given list of 24 possible answers to those specific questions.

They did this repeatedly to map the brain signals in those regions that corresponded to specific questions and answers as they were perceived and uttered. The software could then identify, at rates well above what pure chance would allow, the brain signals produced when a question was heard, as well as the signals produced when the participant was about to respond to it in a specific way. For the answers, since each question had more than one possible answer, the software learned to predict the likely answers to each question at remarkable rates. Most remarkably, this happened in real time! This means that, were it not limited to only 9 questions and 24 possible answers, this could be a spontaneous conversation. For those who can perceive a question and can respond to it but are prevented from doing so by a brain or spine injury, the hope this gives is priceless.
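To make the decoding idea concrete, here is a minimal sketch in Python of its simplest possible form: average repeated trials into a neural “template” for each question, then classify new brain activity by its nearest template. The simulated data, the feature count, and the nearest-template method are assumptions for illustration; the actual study used cortical recordings and far more sophisticated models.

# A toy version of the decoding idea: learn an average neural
# "template" per question from repeated trials, then classify a new
# trial by its nearest template. Data here are simulated, not real
# brain recordings.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 64  # stand-in for activity measured across electrodes

def make_trials(n_classes, n_trials):
    """Simulate noisy repetitions of each class's neural pattern."""
    templates = rng.normal(size=(n_classes, N_FEATURES))
    X = np.repeat(templates, n_trials, axis=0)
    X = X + rng.normal(scale=0.8, size=X.shape)  # trial-to-trial noise
    y = np.repeat(np.arange(n_classes), n_trials)
    return X, y, templates

def fit_templates(X, y):
    """Average each class's trials into one learned template."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def decode(x, templates):
    """Classify one trial as the nearest learned template."""
    return int(np.argmin(np.linalg.norm(templates - x, axis=1)))

# 9 "questions", 20 noisy repetitions each; then decode fresh trials.
X, y, true_templates = make_trials(n_classes=9, n_trials=20)
learned = fit_templates(X, y)
fresh = true_templates + rng.normal(scale=0.8, size=true_templates.shape)
accuracy = np.mean([decode(fresh[c], learned) == c for c in range(9)])
print(f"decoding accuracy: {accuracy:.0%} (chance is about 11%)")

Even this toy version shows why repetition matters: the templates only become reliable after many trials, which is why the participants went through the same questions and answers over and over.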

So while an app that lets the genders decode each other’s brain signals accurately, so that we can respond to each other productively, is still a long way from being downloadable, this work, I think, should be pursued, developed further, and made available to those whose words are trapped in their brains.

But readers, again, please know that Facebook funded this project. We already know what happens to the patterns they mine from the written, voluntary posts we make. Imagine what Facebook could do if it could literally get inside your head and read your thoughts. No superhero can rescue you from that tragedy. – Rappler.com

Maria Isabel Garcia is a science writer. She has written two books, “Science Solitaire” and “Twenty One Grams of Spirit and Seven Ounces of Desire.” You can reach her at sciencesolitaire@gmail.com.
