How artificial neural networks and electroencephalography (EEG) are helping scientists to identify the extent of stroke-related brain damage.

Image Source: Neurobotics

Earlier this year, researchers from Russia’s Neurobotics Corporation and a team at the Moscow Institute of Physics and Technology (MIPT) worked out how to visualize human brain activity by reconstructing, in real time, the images a person is observing.

This breakthrough in the use of artificial neural networks could eventually enable post-stroke rehabilitation devices controlled by signals from the brain. The team uploaded their research as a preprint to the bioRxiv website and also shared a video showcasing their ‘mind-reading’ device at work.

Decoding the Human Brain

To develop brain-controlled devices and treatments for cognitive disorders or post-stroke rehabilitation, neurobiologists must understand how the brain encodes information. A critical step in creating these technologies is the ability to study brain activity using visual perception as a marker, for instance by recording brain activity while somebody watches a video.

Existing Methods for Observing Neural Image Signals

The current solutions for extracting and analyzing observed images from human brain signals rely either on signals picked up by neural implants or on functional MRI. Although both are powerful technologies, each is of limited practicality in clinical environments and everyday life.

Computer-Brain Interface Developments

The computer-brain interface developed by Neurobotics and MIPT relies on a combination of electroencephalography (EEG) and artificial neural networks. EEG is a noninvasive technique for recording brain waves via electrodes placed on the scalp. As Forbes contributor Bernard Marr and others have explained, artificial neural networks are computing systems loosely inspired by the biological neural networks of the brain.

By combining machine learning with sophisticated computing, the system reconstructs, in real time, the images that a person undergoing EEG is viewing.

Vladimir Konyshev, who leads the Neurorobotics Lab at MIPT, also commented on the work.

The Experiment: Phase One

Image Source: Neurobotics

In phase one of the experiment, the team of neurobiologists asked healthy participants to watch 20 minutes of ten-second YouTube video clips. Five categories were represented: waterfalls, abstract shapes, human faces, motorsports, and moving mechanisms.

When the EEG data were analyzed, the researchers identified distinct brainwave patterns for each category of video. This allowed the team to analyze responses to the videos directly from the brain in real time.
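The article does not detail how those category-specific patterns were separated, but a common approach to this kind of EEG analysis is to compute spectral band-power features and classify them. The sketch below is a minimal illustration under those assumptions, using synthetic stand-in signals (real EEG and the study’s actual features would differ) and a simple nearest-centroid classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def band_power(eeg, fs, band):
    """Mean spectral power of a 1-D EEG trace within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return power[mask].mean()

def features(eeg, fs=250):
    """Stack power in the classic delta/theta/alpha/beta bands."""
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
    return np.array([band_power(eeg, fs, b) for b in bands])

def fake_trial(alpha_amp):
    """Synthetic 2-second 'trial' whose 10 Hz (alpha) content varies by category."""
    t = np.arange(0, 2, 1 / 250)
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

# Train: average the feature vectors for each (made-up) category.
train = {"faces": [features(fake_trial(2.0)) for _ in range(20)],
         "waterfalls": [features(fake_trial(0.3)) for _ in range(20)]}
centroids = {cat: np.mean(v, axis=0) for cat, v in train.items()}

def classify(eeg):
    """Assign a trial to the category with the nearest feature centroid."""
    f = features(eeg)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

print(classify(fake_trial(2.0)))   # prints "faces"
```

The categories, amplitudes, and classifier here are all hypothetical; the point is only that distinct spectral signatures per video category make real-time classification feasible.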

The Experiment: Phase Two

Image Source: Neurobotics

In phase two of the experiment, three categories were randomly selected from the original five. The researchers then developed two separate neural networks: one for generating ‘noise’ from the EEG signal, and another for generating category-specific images from that neural ‘noise.’

The two networks were then trained to work together, turning EEG signals into images resembling those the test subjects were actually observing.

Visualizing Brain Activity

To test the new system’s ability to visualize brain activity, subjects were shown previously unseen videos from the original categories. EEGs were recorded as they watched and fed to the neural networks. The test was a success: the system generated convincing images that could easily be categorized in over 90% of cases.
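That 90%-plus figure is a straightforward classification-accuracy check: for each reconstructed image, was it filed under the category of the video the subject was actually watching? A minimal sketch with made-up labels (the categories and error positions below are hypothetical):

```python
# Hypothetical evaluation of reconstructed images against the true categories.
true_cats = ["faces", "waterfalls", "motorsports"] * 10
pred_cats = true_cats.copy()
pred_cats[3] = "waterfalls"            # pretend two reconstructions missed
pred_cats[17] = "faces"

accuracy = sum(t == p for t, p in zip(true_cats, pred_cats)) / len(true_cats)
print(f"categorization accuracy: {accuracy:.0%}")   # prints "categorization accuracy: 93%"
```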

Image Source: YouTube

Grigory Rashkov, a co-author of the study’s research paper, said:

“The electroencephalogram is a collection of brain signals recorded from the scalp. Researchers used to think that studying brain processes via EEG is like figuring out the internal structure of a steam engine by analyzing the smoke left behind by a steam train,”

“We did not expect that it contains sufficient information to even partially reconstruct an image observed by a person. Yet it turned out to be quite possible.”

“What’s more, we can use this as the basis for a brain-computer interface operating in real-time. It’s fairly reassuring. With present-day technology, the invasive neural interfaces envisioned by Elon Musk face the challenges of complex surgery and rapid deterioration due to natural processes — they oxidize and fail within several months. We hope we can eventually design more affordable neural interfaces that do not require implantation.”

Neural Networks: For Better or Worse

More recently, on November 20th, a panelist at the ‘Artificial Intelligence at Work’ event hosted by Workday and Politico suggested that some neural networks need to be reworked to prevent unintended bias, such as racial discrimination.

Dawn Tilbury, head of the NSF’s Engineering Directorate, also weighed in on the issue.

Either way, it looks like the advancement of AI and machine-learning technologies is doing wonders for the medical field, with wearable devices that could give stroke survivors a better quality of life now on the horizon.


