Nagesh Singh Chauhan
Brain-Computer Interface(BCI) and Artificial Intelligence
What would it be like to control everything around you just by your thoughts?
Computers and brains already talk to each other daily in high-tech labs, and they do it better and better. For example, disabled people can now learn to control robotic limbs by the very power of their minds. The expectation is that we may one day be able to operate spaceships with our thoughts, upload our brains to computers and, ultimately, create cyborgs.
Even Elon Musk has joined this race. The CEO of Tesla and SpaceX co-founded Neuralink, a company aiming to establish a direct link between the brain and the computer. Musk has already shown how expensive space technology can be run as a private enterprise.
With the enormous burst of technology over the last few decades, the border between humans and machines has begun to narrow. The "mind control" of science fiction is slowly coming true with the help of machines. At the frontier of these new methods are brain-computer interfaces (BCIs) and artificial intelligence (AI). Experimental paradigms for BCIs and AI were usually developed and applied independently of each other. However, scientists now prefer to combine BCIs and AI, which makes it possible to use the brain's electrical signals to maneuver external devices efficiently.
Brain-computer interface (BCI)
Brain-computer interface (BCI), sometimes called a direct neural interface or a brain-machine interface (BMI), is a technology that allows a human brain and an external device to talk to one another, that is, to exchange signals. It gives humans the ability to control machines directly, without the physical constraints of the body. In other words, a BCI is a device that translates neuronal information into commands capable of controlling external software or hardware, such as a computer or robotic arm. BCIs are often used as assistive devices for people with motor or sensory impairments.
The connection between the brain and a BCI is a two-way link (a bidirectional interface). In one direction, the BCI records brain activity and the computer translates it into motor commands; this is called a passive BCI. Communication can also happen in the other direction, where the computer sends information directly to the brain of the BCI user. This is called an active BCI, which involves a direct brain connection, in contrast to a passive BCI, which is non-invasive.
How does the Brain-computer interface (BCI) work?
The human brain is filled with cells called neurons. Every time we think, move, feel, or remember something, these neurons are at work, transferring information from one part of the body to another in the form of biochemical and electrical signals. These electrical signals travel at speeds of up to roughly 120 m/s. The signal path is mostly insulated, but some signals do escape, and these escaped electrical signals are what BCI devices try to detect and interpret, for example by using electroencephalography (EEG). EEG electrodes pick up signals from the human brain and send them to amplifiers. The amplified signals are then interpreted by a BCI computer program, which uses them to control a device.
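This acquire-amplify-interpret pipeline can be sketched with synthetic data. The sampling rate, amplifier gain, and power threshold below are illustrative assumptions, not parameters of any real BCI system:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                      # sampling rate in Hz (typical for consumer EEG)
t = np.arange(0, 2.0, 1 / fs)

# 1. Acquire: a microvolt-scale 10 Hz "alpha" rhythm buried in sensor noise.
brain_signal = 20e-6 * np.sin(2 * np.pi * 10 * t)
raw = brain_signal + 5e-6 * rng.standard_normal(t.size)

# 2. Amplify: scale the tiny voltages into a workable range.
gain = 1e6                    # illustrative amplifier gain
amplified = raw * gain

# 3. Interpret: a toy rule that issues a command when signal power is high.
power = np.mean(amplified ** 2)
command = "move_cursor" if power > 100.0 else "idle"
print(command)
```

A real BCI replaces step 3 with a trained classifier, but the flow of data through the stages is the same.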
Why is BCI even important?
According to Davide Valeriani, Post-doctoral Researcher in Brain-Computer Interfaces at the University of Essex, “The combination of humans and technology could be more powerful than artificial intelligence. For example, when we make decisions based on a combination of perception and reasoning, neurotechnologies could be used to improve our perception. This could help us in situations such as seeing a very blurry image from a security camera and having to decide whether to intervene or not.”
Types of Brain-computer interface (BCI)
There are three types of BCI based on the electrodes used for measuring brain activity. They are Invasive, Partially-Invasive, and Non-Invasive.
1. Invasive BCI
Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Using chips implanted against the brain that have hundreds of pins less than the width of a human hair projecting from them and piercing the cerebral cortex, scientists are able to read the firings of hundreds of neurons in the brain. The language of the neural firings is then sent to a computer translator that uses special algorithms to decode the neural language into computer language. This is then sent to another computer that receives the translated information and tells the machine what to do. As they rest in the grey matter, invasive devices produce the highest quality signals of BCI devices but are prone to scar-tissue build-up, causing the signal to become more fragile or even lost as the body responds to a foreign object in the brain.
2. Partially-Invasive BCI
Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better resolution signals than non-invasive BCIs where the bone tissue of the cranium deflects and deforms signals and has a lower risk of forming scar tissue in the brain than fully invasive BCIs. Electrocorticography (ECoG) measures the electrical activity of the brain taken from underneath the skull in an identical way to non-invasive electroencephalography, but the electrodes are embedded in a thin plastic pad that is placed above the cortex.
3. Non-Invasive BCI
A non-invasive BCI using an electroencephalograph (EEG).
The easiest and least invasive method uses a set of electrodes placed on the scalp, a device known as an electroencephalograph (EEG, explained below); alternatives include MEG (magnetoencephalography) and MRT (magnetic resonance tomography). Regardless of the location of the electrodes, the basic mechanism is the same: the electrodes measure minute differences in voltage between neurons. The signal is then amplified and filtered, and in current BCI systems it is interpreted by a computer program (in the earliest EEG machines, the signals were traced out by pens on a continuous sheet of paper). Even though the skull blocks much of the electrical signal and distorts what does get through, this approach is more widely accepted than the other types because of their respective disadvantages.
An electroencephalograph (EEG) detects electrical activity in your brain using small, metal discs (electrodes) attached to your scalp. Your brain cells communicate via electrical impulses and are active all the time, even when you're asleep. This activity shows up as wavy lines on an EEG recording.
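As a rough illustration of the amplify-and-filter step described above, the sketch below band-passes a synthetic scalp signal between 1 and 40 Hz with SciPy. The cutoff frequencies, filter order, and simulated artifacts are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)
fs = 250                                    # sampling rate in Hz
t = np.arange(0, 4.0, 1 / fs)

# Synthetic scalp signal: 10 Hz rhythm plus slow drift and mains hum.
eeg = (np.sin(2 * np.pi * 10 * t)           # brain rhythm of interest
       + 2.0 * np.sin(2 * np.pi * 0.2 * t)  # slow electrode drift
       + 0.5 * np.sin(2 * np.pi * 50 * t)   # 50 Hz power-line noise
       + 0.3 * rng.standard_normal(t.size))

# Band-pass 1-40 Hz: removes drift below 1 Hz and hum above 40 Hz.
b, a = butter(4, [1, 40], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg)              # zero-phase filtering
```

After filtering, the 10 Hz component survives while the drift and hum are strongly attenuated, which is exactly what an EEG front-end aims for.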
Components of Brain-computer interface (BCI)
Basic Components of a Brain-Computer Interface. Brain activity is translated into a control signal for an external device using a sequence of processing stages. The user receives feedback from the device, thereby closing the loop. Credits
There are two ways of producing these brain signals:
Actively generating these signals by presenting stimuli to the subject (e.g., pictures, sounds, videos) or having the subject imagine movements.
Simply reading the brain waves already generated by the subject.
According to Sjoerd Lagarde, Software Engineer at Quintiq, “Actively generating signals has the advantage that signal detection is easier since you have control over the stimuli; you know for example when they are presented. This is harder in the case where you are just reading brain waves from the subject.”
There are different ways to detect brain signals. The best known are EEG and fMRI (functional magnetic resonance imaging), but there are others as well. EEG measures the electrical activity of the brain; fMRI measures brain activity by detecting the changes in blood flow associated with it.
Each of these methods has its own pros and cons. Some have a better temporal resolution (they can detect brain activity as it happens), while others have a better spatial resolution (they can pinpoint the location of activity).
The idea remains largely the same for other types of measuring techniques.
One of the issues we will find when dealing with brain data is that the data tends to contain a lot of noise. When using EEG, for example, things like the grinding of the teeth will show in the data, as well as eye movements. This noise needs to be filtered out.
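A minimal sketch of this kind of artifact removal, assuming a simple amplitude-threshold rule (real pipelines often use ICA or regression instead): epochs whose peak amplitude exceeds a cutoff, as blink-contaminated epochs typically do, are discarded.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 250
n_epochs, epoch_len = 20, fs            # 20 one-second epochs

# Synthetic epochs: low-amplitude EEG, with large "blink" spikes in a few.
epochs = 10e-6 * rng.standard_normal((n_epochs, epoch_len))
blink_epochs = [3, 7, 15]               # epochs contaminated by eye blinks
for i in blink_epochs:
    epochs[i, 100:130] += 150e-6        # blinks dwarf the underlying EEG

# Artifact rejection: discard any epoch whose peak exceeds a threshold.
threshold = 100e-6                      # illustrative 100 microvolt cutoff
clean_mask = np.max(np.abs(epochs), axis=1) < threshold
clean_epochs = epochs[clean_mask]
print(clean_epochs.shape[0])            # number of epochs that survive
```

Only the three contaminated epochs are removed; the ordinary-amplitude epochs pass through untouched.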
The data can now be used for detecting actual signals. When the subject is actively generating signals, we are usually aware of the kind of signals we want to detect. One example is the P300 wave, which is a so-called event-related potential that will show up when an infrequent, task-relevant stimulus is presented. This wave will show up as a large peak in your data and you might try different techniques from machine learning to detect such peaks.
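A toy version of P300 detection along these lines, with synthetic epochs and scikit-learn's LDA standing in for "different techniques from machine learning"; the wave shape, amplitude, and noise level are invented for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
fs = 100
t = np.arange(0, 0.8, 1 / fs)            # 800 ms epochs after each stimulus

# P300 template: a positive deflection peaking around 300 ms post-stimulus.
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def make_epochs(n, target):
    """Simulate n epochs; targets contain the P300 wave, non-targets do not."""
    noise = rng.standard_normal((n, t.size))
    return noise + (p300 if target else 0.0)

X = np.vstack([make_epochs(100, True), make_epochs(100, False)])
y = np.array([1] * 100 + [0] * 100)

# LDA is a classic choice for single-trial P300 classification.
clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # train on even trials
accuracy = clf.score(X[1::2], y[1::2])                   # test on odd trials
```

With a signal this clean the classifier separates target from non-target epochs almost perfectly; real single-trial EEG is far noisier.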
When you have detected the interesting signals in your data, you want to use them in some way that is helpful to someone. The subject could, for example, use the BCI to control a mouse by means of imagined movement. One problem you will encounter here is that you need to use the data you receive from the subject as efficiently as possible, while keeping in mind that BCIs can make mistakes. Current BCIs are relatively slow and err once in a while (for instance, the computer thinks you imagined left-hand movement when in fact you imagined right-hand movement).
How is AI helping the brain-computer interface (BCI)?
There are many difficulties in working with EEG, and since the main task of BCI is brain signal recognition, discriminative deep learning models have become the most popular and powerful algorithms for it.
First, brain signals are easily corrupted by various biological artifacts (e.g., eye blinks, muscle activity, fatigue, and concentration level) and environmental artifacts (e.g., ambient noise). It is difficult to make sense of brain activity that propagates from neurons speaking to each other, through the skull, through one's scalp, and just barely into the EEG sensor. EEG data is therefore very noisy, in the sense that it is very hard to get a clean signal for something specific, so it is crucial to distill informative data from corrupted brain signals and build a robust BCI system that works in different situations.
Second, BCI suffers from a low signal-to-noise ratio (SNR) due to the non-stationary nature of electrophysiological brain signals. The accuracy of classifying electroencephalographic (EEG) data in BCI depends on the number of measurement channels, the amount of data used to train the classifier, and the SNR. Of all these factors, the SNR is the hardest to improve in real-life applications. Although several preprocessing and feature engineering methods have been developed to decrease the noise level, such methods (e.g., feature selection and extraction in both the time domain and the frequency domain) are time-consuming and may cause information loss in the extracted features.
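As an example of the frequency-domain feature extraction mentioned above, band power is a classic hand-engineered EEG feature. The sketch below estimates it with Welch's method on a synthetic signal, using the conventional alpha and beta band edges:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
fs = 250
t = np.arange(0, 4.0, 1 / fs)

# Synthetic EEG dominated by a 10 Hz (alpha-band) rhythm plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Welch's method: average periodograms over overlapping segments.
freqs, psd = welch(eeg, fs=fs, nperseg=fs)

def band_power(freqs, psd, lo, hi):
    """Sum the power spectral density over a frequency band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

alpha = band_power(freqs, psd, 8, 13)    # classic alpha band
beta = band_power(freqs, psd, 13, 30)    # classic beta band
print(alpha > beta)
```

Features like these (one number per band per channel) are exactly what hand-crafted BCI pipelines feed to a classifier, and computing them for every band, channel, and trial is part of what makes feature engineering slow.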
Third, feature engineering depends heavily on human expertise in a specific domain. Human experience may help capture features in some particular aspects, but it proves insufficient in more general conditions, so an algorithm is required to extract representative features automatically. Artificial intelligence, particularly deep learning, provides a better option for extracting distinguishable features automatically.
Moreover, a majority of current AI research focuses on static data and therefore cannot classify rapidly changing brain signals accurately. Novel learning methods are generally required to deal with the dynamic data streams in BCI systems.
To date, deep learning has been applied extensively in BCI applications and has shown success in addressing the above challenges.
Deep learning has three advantages. First, it avoids the time-consuming preprocessing and feature engineering steps by working directly on raw brain signals, learning distinguishable information through back-propagation. Second, deep neural networks can capture both representative high-level features and latent dependencies through their deep structures. Finally, deep learning algorithms have been shown to be more powerful than traditional classifiers such as Support Vector Machines (SVM) and Linear Discriminant Analysis (LDA). This makes sense because almost all BCI problems can be regarded as classification problems.
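For context on the traditional baselines mentioned here, below is a minimal comparison of LDA and an SVM on synthetic two-dimensional "band power" features with scikit-learn. The feature distributions are invented, so the scores only illustrate the workflow, not any real benchmark:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic two-class features: class 1 has higher alpha-band power.
n = 200
X0 = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(n, 2))   # "rest" trials
X1 = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(n, 2))   # "eyes closed" trials
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

accs = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf"))]:
    accs[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {accs[name]:.2f}")
```

Both classifiers do well here because the hand-made features are already discriminative; the argument for deep learning is that it can reach such features from raw signals without this manual step.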
Deep Learning Algorithms used in BCI
CNN is the most popular deep learning model in BCI research; it can exploit the latent spatial dependencies among input brain signals such as fMRI images and spontaneous EEG.
CNN's great success in other research areas also makes it highly scalable and practical (through publicly available code), so BCI researchers have more opportunities to understand and apply CNNs in their work.
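The core operation a CNN applies to a raw EEG channel can be sketched in plain NumPy: a 1-D kernel slides over the signal, followed by a ReLU and max-pooling. Here the kernel is hand-set to one cycle of a 10 Hz wave rather than learned, purely to show how convolution responds to a temporal pattern:

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 250
t = np.arange(0, 2.0, 1 / fs)            # 2 s single-channel recording

# Raw EEG: noise everywhere, plus a 10 Hz burst in the second half.
eeg = 0.3 * rng.standard_normal(t.size)
eeg[250:] += np.sin(2 * np.pi * 10 * t[250:])

# Convolution layer: one kernel shaped like a single 10 Hz cycle (100 ms).
kernel = np.sin(2 * np.pi * 10 * t[:25])
feature_map = np.convolve(eeg, kernel[::-1], mode="valid")

# Nonlinearity and pooling, as in a typical CNN block.
relu = np.maximum(feature_map, 0.0)
pooled = relu[:450].reshape(9, 50).max(axis=1)   # max-pool over 200 ms windows

# The strongest pooled response lines up with the 10 Hz burst.
print(int(pooled.argmax()))
```

In a real CNN the kernels are learned by back-propagation and stacked over many channels and layers, but each one still works by this same slide-multiply-pool mechanism.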
Implementation of the cascade CNN-GRU/LSTM model on EEG data. Meshing is the first step in converting multi-channel EEG signals into sequences of 2D images. The 2D mesh time series is passed through the cascade of CNN and recurrent layers for training, validation, and testing.
Generative deep learning models are mostly used to generate training samples or data augmentation. In other words, generative deep learning models play a supporting role in the BCI area to enhance the training data quality and quantity. In the BCI scope, generative algorithms are mostly used in reconstruction or generating a batch of brain signals samples to enhance the training set. Generative models commonly used in BCI include Variational Autoencoder (VAE), Generative Adversarial Networks (GANs), etc.
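A full VAE or GAN is beyond a short sketch, but the role these models play, enlarging the training set with plausible synthetic trials, can be illustrated with two simple augmentations (noise injection and amplitude scaling); the parameter values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(7)

# A small training set of EEG epochs: (trials, channels, samples).
X_train = rng.standard_normal((8, 4, 250))

def augment(epochs, n_copies=3, noise_std=0.1, scale_range=(0.9, 1.1)):
    """Create jittered copies of each epoch to enlarge the training set."""
    copies = []
    for _ in range(n_copies):
        noise = noise_std * rng.standard_normal(epochs.shape)
        scale = rng.uniform(*scale_range, size=(epochs.shape[0], 1, 1))
        copies.append(epochs * scale + noise)
    return np.concatenate([epochs] + copies, axis=0)

X_augmented = augment(X_train)
print(X_augmented.shape)   # (32, 4, 250): 8 originals + 24 synthetic epochs
```

Generative models pursue the same goal but learn the perturbations from data, producing synthetic trials that follow the real signal distribution rather than a fixed noise recipe.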
Deep Belief Networks (DBNs) are also used in BCI for feature extraction, even though a growing number of publications focus on adopting CNNs or hybrid models for both feature learning and classification.
Since RNNs and CNNs have demonstrated excellent temporal and spatial feature extraction abilities respectively, it is natural to combine them for joint temporal and spatial feature learning.
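The cascade idea, convolution for local feature extraction followed by a recurrent layer for temporal dependencies, can be sketched in plain NumPy. The weights below are random, so this shows only the data flow of a CNN-RNN hybrid, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(8)

# One EEG trial: 4 channels x 250 samples.
x = rng.standard_normal((4, 250))

# --- CNN stage: 8 kernels of width 25, convolved along time and summed
# over channels, producing 8 feature time series.
kernels = 0.1 * rng.standard_normal((8, 4, 25))
feat = np.stack([
    sum(np.convolve(x[c], kernels[k, c], mode="valid") for c in range(4))
    for k in range(8)
])                                   # shape (8, 226)
feat = np.maximum(feat, 0.0)         # ReLU

# --- RNN stage: a simple recurrent cell reads the feature sequence in time.
W_in = 0.1 * rng.standard_normal((16, 8))
W_rec = 0.1 * rng.standard_normal((16, 16))
h = np.zeros(16)
for step in range(feat.shape[1]):    # iterate over time steps
    h = np.tanh(W_in @ feat[:, step] + W_rec @ h)

print(h.shape)                       # final hidden state summarizes the trial
```

A GRU or LSTM replaces the plain `tanh` cell with gated updates, and a final dense layer maps the hidden state to class probabilities, but the cascade structure is the same.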
Applications of BCIs based on AI
Schematic description of BCIs based on AI. With the help of AI to process signals, the applications of BCIs have been extended greatly, including cursor control, auditory sensation, limb control, spelling devices, somatic sensation, and visual prosthesis. The circuit can be described as follows. First, micro-electrodes detect signals from the human cerebral cortex and send them to the AI. Second, the AI takes charge of signal processing, which includes feature extraction and classification. Third, the processed signals are output to achieve the abovementioned functions. Finally, feedback is sent to the human cortex to adjust the function. BCIs, brain-computer interfaces; AI, artificial intelligence.
Some other applications of BCI include:
Sleep pattern analysis.
Fatigue and mental workload analysis.
Mood detection. For instance, "a system that monitors the user's brain to adapt the spaces accordingly in terms of temperature, humidity, lighting, and other factors." (16) Recently, Nissan, in cooperation with Bitbrain, presented the first prototype of a Brain-to-Vehicle interface.
Controlling devices (robotic arms, etc.)
Personal identification system using brain waves.
Boosting physical movements and reaction time using transcranial direct current stimulation.
Workplace analysis/Maximize productivity. For instance, there are projects to develop an application to analyze an operator’s cognitive state, mental fatigue, and stress level.
Marketing field: In this field, “Studies have pointed out that EEG could be used to evaluate the attention levels generated by commercial and political ads across different media. BCIs could also provide insights on the memorization of those ads.” (17) In general, “BCIs could be used to optimize internet ads or TV spots” (18).
Educational field: In this field, “BCIs could help identify the clearness of the studied information for each student, allowing teachers to personalize their interaction with each student depending on the results” (19).
Entertainment field: In this field “BCIs could be used in video games. For instance, players could control their avatar using only a BCI. When it comes to movies, BCIs can help to create interactive films with the use of the brain-activity of the spectators.” (20) In the future, “Audiences in the future will be empowered to immerse themselves and collectively control a film through their combined brain-activity” (21).
Military field: In this field, "BCIs have been used by soldiers to pilot a swarm of drones at the Defense Advanced Research Projects Agency (DARPA)" (22).
Limitations of Brain-computer interface (BCI)
1. The brain is incredibly complex. To say that all thoughts or actions are the result of simple electric signals in the brain is a gross understatement. There are about 100 billion neurons in the human brain, and each neuron is constantly sending and receiving signals through a complex web of connections. There are chemical processes involved as well, which EEGs can't pick up.
2. The signal is weak and prone to interference. EEGs measure tiny voltage potentials; something as simple as the blinking of the subject's eyelids can generate much stronger signals. Refinements in EEGs and implants will probably overcome this problem to some extent in the future, but for now, reading brain signals is like listening to a bad phone connection: there's a lot of static.
3. The equipment is less than portable. It's far better than it used to be (early systems were hardwired to massive mainframe computers), but some BCIs still require a wired connection to the equipment, and those that are wireless require the subject to carry a computer that can weigh around 10 pounds. Like all technology, this will surely become lighter and more wireless in the future.
Recommended readings :
BCI for searching Google without saying or typing a single word: Merging with AI: How to Make a Brain-Computer Interface to Communicate with Google using Keras and OpenBCI
Latest research on brain-computer interfaces