How the technology behind ChatGPT is making mind reading a reality

CNN

On a recent Sunday morning, I found myself lying flat in the claustrophobic confines of an fMRI machine at a research facility in Austin, Texas, wearing a pair of ill-fitting scrubs. “Things I do for TV,” I thought.

Anyone who’s ever had an MRI or fMRI scan will tell you how noisy it can be: the swirling electrical currents that generate the powerful magnetic fields needed for detailed scans of your brain produce a loud mechanical din. This time, though, I could barely hear the whirring. I had been handed a pair of specialized headphones, which began playing snippets from The Wizard of Oz audiobook.

Why?

Neuroscientists at the University of Texas at Austin have found a way to turn scans of brain activity into words using the same artificial intelligence technology that powers the pioneering chatbot ChatGPT.

The breakthrough could revolutionize communication for people who have lost the ability to speak. It is just one groundbreaking application of AI to emerge in recent months, as the technology continues to advance and looks set to touch every aspect of our lives and society.

“So, we don’t like to use the term mind reading,” Alexander Huth, an assistant professor of neuroscience and computer science at the University of Texas at Austin, told me. “We thought it would evoke things that we couldn’t actually do.”

Huth volunteered to be a subject of the study, spending more than 20 hours inside the fMRI machine listening to audio clips while the scanner took detailed pictures of his brain.

An AI model analyzed his brain activity alongside the audio he was listening to and, over time, learned to predict the words he was hearing just by looking at his brain scans.

The researchers used GPT-1, the first language model from San Francisco-based startup OpenAI, which was developed using a vast database of books and websites. By analyzing all this data, the model learned how sentences are constructed and, in essence, how humans speak and think.
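
To see what “learning how sentences are constructed” means in miniature, here is a deliberately tiny sketch in Python: it counts which word tends to follow which in a scrap of text and predicts the most common successor. This is a toy stand-in for illustration only, not OpenAI’s code; GPT-1 learns far richer patterns from far more data.

```python
# Toy next-word prediction: count which word follows which in a tiny
# corpus, then predict the most frequent successor. An illustrative
# stand-in for what GPT-style language models learn at vastly larger scale.
from collections import Counter, defaultdict

corpus = ("dorothy walked down the yellow brick road "
          "the yellow brick road was long").split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("yellow"))  # prints: brick
```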

The researchers trained the AI to analyze the brain activity of Huth and other volunteers as they listened to specific words. Eventually, it learned enough to predict what they were listening to, or even looking at, from their brain activity alone.
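
And here, equally simplified, is the decoding idea: each candidate word is scored by how well the brain response it is predicted to evoke matches the scan actually observed. Everything below (the vocabulary, the random “embeddings,” the linear encoding model) is an assumption for illustration, not the UT Austin team’s actual code, which pairs the language model’s features with encoding models fit over many hours of scans.

```python
# Sketch of decoding-by-matching: predict the voxel pattern each candidate
# word should evoke, then keep the candidate whose prediction best
# correlates with the observed scan. All components here are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["dorothy", "walked", "down", "the", "yellow", "brick", "road"]
EMBED_DIM, VOXELS = 8, 16

# Toy word features standing in for a language model's embeddings.
embeddings = {w: rng.normal(size=EMBED_DIM) for w in VOCAB}

# Toy linear "encoding model" mapping features to predicted voxel activity;
# in the real study this mapping is learned from hours of training scans.
W = rng.normal(size=(VOXELS, EMBED_DIM))

def predict_brain_response(word: str) -> np.ndarray:
    """Predict the voxel pattern a word should evoke."""
    return W @ embeddings[word]

def decode_word(observed_scan: np.ndarray) -> str:
    """Return the candidate whose predicted response best matches the scan."""
    return max(VOCAB, key=lambda w: np.corrcoef(
        predict_brain_response(w), observed_scan)[0, 1])

# Simulate a noisy scan evoked by "yellow" and try to recover the word.
scan = predict_brain_response("yellow") + 0.1 * rng.normal(size=VOXELS)
print(decode_word(scan))  # expected output: yellow
```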

I spent less than half an hour in the machine and, as expected, the AI couldn’t decode the part of the Wizard of Oz audiobook I’d been listening to, in which Dorothy makes her way down the yellow brick road.

Before entering the fMRI machine, CNN reporter Donie O'Sullivan was given specialized headphones to listen to audiobooks during brain scans.

Huth listened to the same audio, but because the AI model had been trained on his brain, it was able to accurately predict which part of the audio he was listening to.

While the technology shows great potential, it is still in its infancy, and its limitations may come as a relief to some: artificial intelligence cannot yet easily read our minds.

“Its real potential application is to help people who can’t communicate,” Huth explained.

He and other UT Austin researchers believe this innovative technology could one day be used by people with “locked-in” syndrome, stroke patients and others whose brains function normally but who cannot speak.

“We’ve shown for the first time that this level of accuracy can be achieved without brain surgery. So we think this is a first step on the road to actually helping people who can’t speak without neurosurgery,” he said.

While such groundbreaking medical advances are undoubtedly good news and could change the lives of patients battling debilitating diseases, the breakthrough also raises questions about how the technology might be applied in more controversial settings.

Can it be used to extract confessions from criminals? Or uncover our deepest, darkest secrets?

The short answer, Huth and his colleagues say, is no — not yet.

For starters, the scans must take place in an fMRI machine, the AI requires hours of training on an individual’s brain and, according to the Texas researchers, subjects must give their consent. If a person actively resists listening to the audio, or thinks about something else, the brain scan will not be successful.

“We think everyone’s brain data should be kept private,” said Jerry Tang, lead author of the paper published earlier this month detailing the team’s findings. “Our brains are one of the last frontiers of our privacy.”

“There are clearly concerns that brain decoding technology could be used in dangerous ways,” Tang explained. Brain decoding is the term the researchers prefer over “mind reading.”

“I think mind reading conjures up this idea of knowing the little thoughts you don’t want to reveal, like your reactions to things. And I don’t think there’s any indication that we can actually do that with this approach,” Huth explained. “What we can get at is the big ideas you’re thinking about: the story someone is telling you, or a story you’re trying to tell in your head. We can get at that as well.”

Last week, leaders of companies building generative artificial intelligence systems, including OpenAI CEO Sam Altman, came to Capitol Hill to testify before a Senate committee about lawmakers’ concerns over the risks posed by the powerful technology. Altman warned that AI development without guardrails could “cause significant harm to the world” and urged lawmakers to implement regulations to address the problem.

Tang echoed those warnings, telling CNN that lawmakers need to get serious about “psychological privacy” in order to protect our “brain data” (that is, our thoughts), two of the more dystopian terms I’ve heard in the age of AI.

While the technique currently works in very limited circumstances, this may not always be the case.

“It’s important not to have a false sense of security that things will always be like this,” Tang warned. “Technology can improve, and that could change what we’re able to decode and whether decoders require a person’s cooperation.”


