What Are Brain-Computer Interfaces? Linking Mind and Machine
- Published 9 Oct 2023
- Source: BrainFacts/SfN
Brain-computer interfaces (BCIs) can help people do things like control robotic arms, drive vehicles, control computers, and communicate. But how do they work? Four steps broadly apply to the operation of BCIs: measure, interpret, encode, and deploy (MIND). These steps allow machines to record brain data, decipher the user’s intended action, translate that information into something a computer can use, and do something with the processed data. While BCIs are new and exciting, there are ethical considerations to weigh as these technologies shape our collective future.
This is a video from the 2023 Brain Awareness Video Contest.
Created by Harrison Canning.
In a world where technology seems to advance by leaps and bounds every day, one frontier stands out among the rest: the realm of brain-computer interfaces, or BCIs.
In this video, we're going to talk about technologies that provide a link between mind and machine, and a future where our brain's electrical signals become the language that enables us to interact with computers, overcome disability, and tap into the limitless potential of the human mind.
Commanding machines by thought or getting an implant to restore and possibly even enhance brain function might seem like the stuff of science fiction, but over the last few decades, scientists and engineers have made breakthroughs that are bringing us closer to this remarkable reality.
BCIs have been used to control robotic arms, drive vehicles, control computers, play video games, and communicate. With an influx of talent and money flowing into the field, it's clear that BCIs will be a part of our collective future. As such, it's essential that we engage in informed conversations about BCIs to ensure that they benefit everyone.
Okay, so how do BCIs actually work? Well, in general, there are four steps that are broadly applicable to most BCIs. These steps are measure, interpret, encode, and deploy, and can be remembered with the acronym “MIND.”
Step one, measure. This is all about recording brain signals. Now, there are several technologies and techniques for doing this that vary in cost, risk, ease of use, and capability, which makes each suited to different applications.
Let's start with devices that record electrical signals generated by brain activity, as this is the most common method. These devices include brain implants that use razor-thin electrodes to precisely collect high-resolution data from specific brain regions. This high degree of specificity and signal quality is needed for applications like controlling finger movements on a prosthetic hand and using a computer.
Another technology that records electrical activity is EEG. EEG devices are worn on the head and use electrodes on the scalp to record brain activity. Because the electrodes sit outside the skull, the data are quite noisy, and the device can't home in on specific processes. However, EEG is ideal for assessing overall brain state and can reliably sense focus, engagement, and sensory information. With this, clever engineers have used EEG to levitate drones, improve users’ focus, and type.
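As a toy illustration of the kind of coarse “brain state” signal EEG can pick up, the sketch below estimates alpha-band (8–12 Hz) power in a simulated noisy scalp trace. The sampling rate, signal, and noise levels are all invented for the example; real systems use calibrated hardware and more careful spectral estimation.

```python
import numpy as np

# Hypothetical sketch: spotting a rhythmic "brain state" signal in noisy
# scalp EEG. All numbers here are made up for illustration.
fs = 250                        # sampling rate in Hz (typical for consumer EEG)
t = np.arange(0, 2, 1 / fs)     # two seconds of samples
rng = np.random.default_rng(0)

# Synthetic trace: a 10 Hz alpha rhythm buried in broadband noise
eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)

# A Fourier transform gives the power at each frequency
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2

alpha = power[(freqs >= 8) & (freqs <= 12)].mean()   # alpha band
other = power[(freqs > 12) & (freqs <= 40)].mean()   # surrounding bands

print(alpha > other)  # prints True: the rhythm stands out above the noise
```

Despite the noisy trace, averaging power over a frequency band makes the underlying rhythm easy to detect, which is why band power is a common EEG feature for focus and engagement.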
Now, electricity is not the only way we can record brain activity. Many neurotechnologies like fMRI, fNIRS, and fUS detect changes in blood flow with the logic being that active brain regions, compared to less active ones, need more energy and oxygen, which of course is provided by blood. These technologies operate on a slower time scale, but they can reach greater depths compared to electrical interfaces, giving them a more complete window into overall brain function.
In a recent study, scientists trained an AI algorithm to analyze fMRI data and reconstruct videos of what a person was seeing at the time of recording. Let's go back to that MIND acronym. Now that we've recorded our brain data, we've got to interpret what it means and encode that data into information a computer can use.
To do this, we have to find patterns in brain activity. Most of the time, this is done with machine learning. As an example, let's say we want to decode words from thought to help someone communicate. First, we'd ask the subject to think about a specific word while we record brain activity from their language areas. This process would be repeated many, many times for each word, and the data would then be searched for patterns. By overlaying the recorded data from each trial, we could uncover unique patterns of activity associated with each word. The brain pattern for the word “dog,” for example, would look very different to the computer from the pattern for the word “cat.” Once the training is complete, the user is free to think of any of the trained words in whatever sequence they desire. The computer then analyzes their current brain activity and matches it against the learned patterns to identify the intended word and display it on the screen.
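The train-then-match idea above can be sketched as a simple template-matching toy: average many noisy trials per word into a template, then label new activity by its nearest template. Real BCIs use far richer machine-learning models, and the words, channel count, and signals below are all invented for illustration.

```python
import numpy as np

# Hypothetical sketch of decoding words from brain activity by template
# matching. Everything here is simulated for illustration.
rng = np.random.default_rng(42)
n_channels = 16

# Pretend each word evokes a fixed (unknown to us) pattern across channels
true_patterns = {"dog": rng.normal(size=n_channels),
                 "cat": rng.normal(size=n_channels)}

def record_trial(word):
    """Simulate one noisy recording of the subject thinking `word`."""
    return true_patterns[word] + rng.normal(0, 0.8, n_channels)

# Interpret: average 50 training trials per word so the noise cancels out,
# leaving that word's characteristic template
templates = {w: np.mean([record_trial(w) for _ in range(50)], axis=0)
             for w in true_patterns}

def decode(activity):
    """Match new activity against the learned templates."""
    return min(templates, key=lambda w: np.linalg.norm(activity - templates[w]))

print(decode(record_trial("dog")))
```

Averaging across trials is what lets the word-specific pattern emerge from single-trial noise, exactly the overlay step described above.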
The last part of our MIND acronym is deploy. This is the step where we actually do something with the data we've processed in the previous steps. In our thought-decoding example, the deploy step is where we'd actually display the word the user wants to say on the screen for everyone to see. If we were trying to control a prosthetic hand, the deploy stage is where we tell the robotic hand where to go and which fingers to move.
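The four MIND stages can be sketched as a minimal loop. Every function here is a hypothetical stand-in: in a real system, each stage would be replaced by recording hardware, trained models, and device drivers.

```python
# Hypothetical sketch of the MIND loop: measure, interpret, encode, deploy.

def measure():
    """Record a window of (simulated) brain activity across two channels."""
    return [0.9, 0.1]  # pretend channel 0 lit up

def interpret(activity):
    """Find the pattern: which channel was most active?"""
    return activity.index(max(activity))

def encode(pattern_id):
    """Translate the pattern into a machine-readable command."""
    return {0: "close_hand", 1: "open_hand"}[pattern_id]

def deploy(command):
    """Act on the command, e.g. drive a prosthetic hand."""
    return f"prosthetic -> {command}"

# One pass through the loop
print(deploy(encode(interpret(measure()))))  # prints "prosthetic -> close_hand"
```

Each stage hands a progressively more machine-friendly representation to the next, which is why the four steps apply so broadly across different BCIs.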
Now that we've covered the basics of BCIs, let's discuss ethics. As we've seen, BCIs are an exciting technology with the potential to improve lives. At the same time, there are potential dangers. Here are some ethical questions I want you to consider. I encourage you to pause the video and share your thoughts in the comments. It is so important for us to learn and have conversations about BCIs to make sure that the role they play in our collective future benefits everyone. Thank you for watching, and may we continue to explore the depths of the mind together.