The ability to read people’s thoughts has long been a popular theme in science fiction stories and movies. In the real world, such broad mind-reading powers are not possible. Yet advances in brain imaging and computer technology are enabling scientists to decode brain activity patterns in ways that offer limited but captivating glimpses into what people perceive at a given moment. Using these new techniques to peer into the working brain, scientists can predict what images — or even short video clips — people are looking at, or where people are as they navigate a virtual environment.
As a result of this research, neuroscientists are not only learning about normal brain function, but are also developing better technologies to compensate for injury or illness. These studies are leading to:
- A deeper understanding of how the brain processes information, as well as how it forms and recalls memories.
- Improved prosthetic design and new methods of communication for people with paralyzing injuries or illness.
Our brains are complex machines, and the coordinated function of brain circuits produces much of what we feel, think, and do. To decode this process, many studies use functional magnetic resonance imaging (fMRI), which tracks brain activity by measuring blood flow through the brain. This technique requires volunteers to lie still in a large, immobile scanner. Volunteers are then asked to look at images or to perform simple tasks. At the same time, their brain activity patterns are recorded, analyzed, and stored by a computer. Later, neuroscientists use these patterns to predict what the volunteers are doing or seeing in a separate scanning session.
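The train-then-predict workflow described above is, at its core, pattern classification. The following is a minimal, hypothetical sketch of that idea — it uses simulated voxel activity and a simple nearest-centroid rule, not real fMRI data or any specific study’s method: labeled activity patterns from a “training session” define an average pattern per stimulus category, and a new scan is decoded by finding the closest average.

```python
import math
import random

random.seed(0)
N_VOXELS = 50  # number of simulated voxel measurements per scan

# Hypothetical "signature" activity patterns for two stimulus categories.
# In a real experiment these would be unknown and only observed through scans.
faces_mean = [random.gauss(0, 1) for _ in range(N_VOXELS)]
houses_mean = [random.gauss(0, 1) for _ in range(N_VOXELS)]

def simulate_scan(mean_pattern, noise=0.5):
    """One noisy voxel-activity pattern for a single viewing trial."""
    return [m + random.gauss(0, noise) for m in mean_pattern]

# Training session: record many labeled trials per category.
train = [("face", simulate_scan(faces_mean)) for _ in range(20)] + \
        [("house", simulate_scan(houses_mean)) for _ in range(20)]

def centroid(label):
    """Average activity pattern across all training scans with this label."""
    scans = [scan for lbl, scan in train if lbl == label]
    return [sum(voxel) / len(scans) for voxel in zip(*scans)]

centroids = {label: centroid(label) for label in ("face", "house")}

def decode(scan):
    """Later session: predict what the volunteer is viewing from a new pattern."""
    return min(centroids, key=lambda label: math.dist(scan, centroids[label]))

# With noise this low, a fresh "face" scan is virtually always decoded as "face".
print(decode(simulate_scan(faces_mean)))
```

Real decoders are far more sophisticated (regularized regression, Bayesian models, thousands of voxels), but the structure — learn from labeled scans, then classify new ones — is the same.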
After scanning volunteers’ brains as they looked at thousands of random images, scientists built computer models that could identify the specific photos the volunteers viewed during later scanning sessions. More recently, scientists have used these techniques to reconstruct the geometric patterns or natural images that volunteers view as they lie in the scanner, even when a volunteer is seeing an image for the very first time. One preliminary study even demonstrated that these techniques can produce rough reproductions of short video clips watched by volunteers. These re-created images are blurry and lack detail but are impressive nevertheless (for example, a reproduction might show that a person is speaking but not reveal who the speaker is).
Pictures and videos are not the only things that can be decoded from brain activity, however. Researchers have also predicted simple choices volunteers made while their brains were being scanned. Specifically, from patterns of activity in the medial and lateral prefrontal cortex, the researchers determined whether a volunteer had decided to add or subtract two numbers in a math exercise.
The hippocampus is a key brain region in circuits involved in spatial navigation and memory, and thus has been the focus of research to decode these functions. By studying neural activity patterns in the hippocampus, neuroscientists predicted the location of video game players within a virtual environment. Brain researchers also identified the short video clip volunteers were remembering during a scanning session based on the patterns of hippocampal activity recorded during earlier sessions.
Scientists are a long way, of course, from “mind reading” — deciphering complex, abstract thoughts in real time. With current brain decoding technology, a neuroscientist may be able to tell that you are looking at an image of, say, an apple rather than an orange, but the technology cannot unscramble all your complex thoughts about that image, such as your memories of apple picking as a child.
Even with the limitations of today’s technology, this research already has many promising applications. It could, for example, help paralyzed patients perform everyday tasks. Research has shown that an implant recording electrical activity directly from the brain can control a prosthetic device. In a recent study, neuroscientists reconstructed 3-D hand movements from brain signals recorded with portable, noninvasive technology. Such findings may help engineers develop better prosthetics, capable of translating brain signals into natural movements of artificial limbs.
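Reconstructing continuous hand movements from recorded signals is a regression problem rather than a classification one. The sketch below is a toy illustration of that general idea — linear least-squares decoding on simulated multichannel recordings — and is not the actual method or data of any study described here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 16, 500

# Simulated recordings: each time sample is a vector of channel amplitudes.
# The "true" channels-to-position mapping below is the unknown that the
# decoder must recover from training data; real EEG is far messier.
true_W = rng.normal(0, 1, (n_channels, 3))            # channels -> (x, y, z)
signals = rng.normal(0, 1, (n_samples, n_channels))   # recorded activity
hand_xyz = signals @ true_W + rng.normal(0, 0.1, (n_samples, 3))  # observed hand path

# Fit a linear decoder by least squares:
# find W_hat minimizing ||signals @ W_hat - hand_xyz||.
W_hat, *_ = np.linalg.lstsq(signals, hand_xyz, rcond=None)

# Decode a new moment of brain activity into a predicted 3-D hand position,
# e.g. to drive a prosthetic limb.
new_signal = rng.normal(0, 1, n_channels)
predicted_xyz = new_signal @ W_hat
```

With enough training samples and modest noise, the estimated decoder closely matches the true mapping; the same fit-then-predict structure underlies many continuous brain–machine-interface decoders.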
Brain decoding may also help engineers develop accurate and portable communication devices for people who have lost the ability to speak due to debilitating neurodegenerative diseases, such as amyotrophic lateral sclerosis (ALS). So, while this research may still sound like science fiction, its aim is to better understand how the brain works and to help develop new technologies to improve people’s lives.
Kay KN, Naselaris T, Prenger RJ, Gallant JL (2008) Identifying natural images from human brain activity. Nature 452: 352–355.
Miyawaki Y, Uchida H, Yamashita O, Sato M, Morito Y, Tanabe HC, Sadato N, Kamitani Y (2008) Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60: 915–929.
Thirion B, Duchesnay E, Hubbard E, Dubois J, Poline JB, Lebihan D, Dehaene S (2006) Inverse retinotopy: inferring the visual content of images from brain activation patterns. NeuroImage 33: 1104–1116.
Naselaris T, Prenger RJ, Kay KN, Oliver MD, Gallant JL (2009) Bayesian reconstruction of natural images from human brain activity. Neuron 63: 902–915.
Haynes JD, Sakai K, Rees G, Gilbert S, Frith C, Passingham RE (2007) Reading hidden intentions in the human brain. Current Biology 17: 323–328.
Hassabis DM, Chu C, Rees G, Weiskopf N, Molyneux PD, Maguire EA (2009) Decoding neuronal ensembles in the human hippocampus. Current Biology 19: 546–554.
Chadwick MJ, Hassabis D, Weiskopf N, Maguire EA (2010) Decoding individual episodic memory traces in the human hippocampus. Current Biology 20: 1–4.
Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB (2008) Cortical control of a prosthetic arm for self-feeding. Nature 453: 1098–1101.
Bradberry TJ, Gentili RJ, Contreras-Vidal JL (2010) Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. The Journal of Neuroscience 30: 3432–3437.