NeuroConnect: Giving people the power of communication back

Seher Khan
5 min read · Jun 10, 2024


I think as a society, we fully take for granted how easy it is for us to articulate our thoughts. The process of converting a thought into a word, and a word into a sentence, is almost mindless.

But this isn’t the case for everyone: over 360,000 people struggle with a condition called dysarthria.

What is Dysarthria?

Dysarthria is caused by damage to the nervous system, impacting the muscles involved in speech. This damage can occur in the Central Nervous System, including brain areas responsible for motor control like the motor cortex, basal ganglia, cerebellum, and brainstem. It can also affect the Peripheral Nervous System, which includes the cranial nerves that control the muscles of the face, lips, tongue, and throat.

Now many neurological conditions can lead to dysarthria, including Amyotrophic Lateral Sclerosis, Cerebral Palsy, Huntington’s Disease, stroke, Multiple Sclerosis, Myasthenia Gravis, Guillain-Barré Syndrome, Bulbar Palsy, and Traumatic Brain Injury. These conditions cause muscle weakness or paralysis, disrupt muscle coordination, and lead to abnormal muscle tone (either spasticity or flaccidity).

People with dysarthria may experience slurred, slow, or rapid speech, abnormal rhythm and pitch, and difficulty moving their lips, tongue, and jaw. This reduces the clarity and efficiency of their speech, making communication challenging and often leading to social withdrawal.

The issues lie with the current solutions

  1. Cost: Generally, existing solutions cost upwards of $25,000, making them unaffordable for most people.
  2. Effectiveness: The most effective solutions are invasive BCIs, which significantly increase costs and require a lengthy surgical implantation and recovery process. These factors make them impractical for the general population.
  3. Human Connection: Most non-invasive solutions require a TV-mounted screen, forcing users to focus on the interface instead of the person they are conversing with.

Stephen Hawking (Current Solutions)

The best way to show this is through Stephen Hawking. His Amyotrophic Lateral Sclerosis (ALS) left his speech muscles unable to function, so in order to speak he used an infrared cheek-twitch system.

This system was connected to a virtual keyboard: as a cursor scanned across the keys, each cheek twitch selected the highlighted letter. In essence, one twitch typed one key.

This solution is currently the most “revolutionary” speech-generation device in the sense that it still lets the user look at the person they’re speaking to; however, it costs $25,000.

NeuroConnect

This is where our product, NeuroConnect, steps in: a mix of augmented reality and brain-computer interfaces that revives the ability to speak for dysarthria patients and makes them feel properly integrated into speaking society.

Physical Structure

In essence, NeuroConnect has two main features:

  1. Neural earbuds
  2. Augmented Reality Glasses

The neural earbud is an EEG (electroencephalography) device that reads the electrical activity in the wearer’s brain. We use this by having users make minor neck movements to control a cursor on a virtual keyboard.
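As a rough illustration of what processing the earbud’s raw stream might look like, here is a minimal sketch that band-pass filters a simulated single-channel EEG signal into the alpha/beta range used later for decoding. The 250 Hz sampling rate and the simulate_eeg placeholder are assumptions made purely for illustration, not specifications of the actual hardware.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed earbud sampling rate in Hz (an illustration, not a hardware spec)

def simulate_eeg(seconds: float, fs: int = FS) -> np.ndarray:
    """Stand-in for the earbud's raw stream: a noisy single-channel EEG trace in microvolts."""
    rng = np.random.default_rng(0)
    return rng.normal(0.0, 10.0, int(seconds * fs))

def bandpass(signal: np.ndarray, low: float, high: float, fs: int = FS) -> np.ndarray:
    """Zero-phase band-pass filter, e.g. 8-30 Hz to isolate motor-related alpha/beta activity."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

raw = simulate_eeg(2.0)
motor_band = bandpass(raw, 8.0, 30.0)  # the band used later for decoding neck movements
```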

User Display

The augmented reality glasses allow the user to see the person they are having a conversation with, as well as a small keyboard displayed toward the right side of their view.

To control this keyboard, users would make a few key movements (a short sketch of this control loop follows the list):

  1. To move the cursor around the keyboard and browse, the user would slightly move their neck towards the right
  2. Then, to select the highlighted key, the user would make a slight neck movement towards the left
  3. To have the typed sentence spoken aloud, the user would select the period key
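To make that control scheme concrete, here is a minimal sketch of the keyboard logic under the assumptions above: a decoded “right” event advances the cursor, a “left” event selects the highlighted key, and selecting the period key sends the typed buffer to speech output. The event names and the speak callback are hypothetical placeholders, not the actual NeuroConnect API.

```python
from typing import Callable, List

KEYS = list("abcdefghijklmnopqrstuvwxyz") + [" ", "."]

class ScanningKeyboard:
    """Cursor-scanning keyboard driven by two decoded neck-movement events."""

    def __init__(self, speak: Callable[[str], None]):
        self.cursor = 0              # index of the currently highlighted key
        self.buffer: List[str] = []  # letters typed so far
        self.speak = speak           # callback that sends finished text to audio output

    def on_event(self, event: str) -> None:
        if event == "right":     # slight neck movement to the right: browse the keys
            self.cursor = (self.cursor + 1) % len(KEYS)
        elif event == "left":    # slight neck movement to the left: select the highlighted key
            key = KEYS[self.cursor]
            if key == ".":       # the period key selects and triggers the audio output
                self.speak("".join(self.buffer))
                self.buffer.clear()
            else:
                self.buffer.append(key)

# Example: feed a decoded event stream into the keyboard.
keyboard = ScanningKeyboard(speak=print)
for event in ["right", "right", "left"]:  # move to "c", then select it
    keyboard.on_event(event)
```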

Other features include a suggested-word column; these suggestions would become personalized over time, similar to the iPhone autocomplete feature.
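One simple way the personalization could work, sketched below, is to keep per-user counts of finished words and rank completions of the current prefix by frequency. This is only an illustrative frequency model; the post does not specify NeuroConnect’s actual suggestion engine.

```python
from collections import Counter

class WordSuggester:
    """Ranks completions of the current prefix by how often this user has typed them."""

    def __init__(self):
        self.counts = Counter()

    def record(self, word: str) -> None:
        """Call whenever the user finishes a word, so suggestions adapt over time."""
        self.counts[word.lower()] += 1

    def suggest(self, prefix: str, k: int = 3) -> list:
        prefix = prefix.lower()
        matches = [(w, c) for w, c in self.counts.items() if w.startswith(prefix)]
        matches.sort(key=lambda wc: -wc[1])
        return [w for w, _ in matches[:k]]

suggester = WordSuggester()
for w in ["hello", "hello", "help", "water"]:
    suggester.record(w)
print(suggester.suggest("he"))  # ['hello', 'help']
```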

It’s key to note that this technology isn’t exclusive to neck movements. Many people who have dysarthria may not have adequate ability to control their necks, so the device can also be calibrated to work through hand, arm, and various other movements. However, we selected neck movements as the default because the neck muscles are among those that dysarthria patients most commonly retain control of.

The App

The NeuroConnect App

The output audio of your NeuroConnect would come out of a Bluetooth-connected device, most likely your phone.
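On the phone side, this step amounts to text-to-speech routed to whatever audio output the phone is currently using. As a hedged sketch, here is how that could look with the off-the-shelf pyttsx3 library (a desktop TTS package used here purely for illustration); the actual NeuroConnect app’s audio pipeline is not detailed in this post.

```python
import pyttsx3  # off-the-shelf offline text-to-speech library, used here for illustration

def speak(text: str) -> None:
    """Speak the typed sentence through the device's current audio output (e.g. a Bluetooth speaker)."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 160)  # slightly slower speaking rate for clarity
    engine.say(text)
    engine.runAndWait()

speak("Hello, it is nice to meet you.")
```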

On the display there would be a visual of brain signals and the progress of the user’s typing.

Deciphering Movements

Basic brain wave reading graph

Now, when the user makes these movements, our system needs to be able to decipher the direction of the neck movement.

When a person voluntarily moves their neck, the neural processes begin in several motor-related brain areas, such as the supplementary motor area (SMA), anterior cingulate cortex, dorsolateral prefrontal cortex, and primary motor cortex. Even before the movement is initiated, a slow buildup of electrical potentials, called readiness potentials, can be detected over these motor regions, reflecting preparatory processes.
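Readiness potentials are usually made visible by epoching the EEG around known movement onsets and averaging, so the slow pre-movement buildup stands out from background noise. The sketch below shows that averaging step; the sampling rate, array shapes, and onset times are illustrative assumptions rather than details of the NeuroConnect pipeline.

```python
import numpy as np

FS = 250  # assumed earbud sampling rate

def average_epochs(eeg: np.ndarray, onsets_s: list, pre_s: float = 1.5, post_s: float = 0.5) -> np.ndarray:
    """Average EEG segments time-locked to movement onsets.

    eeg: 1-D array of samples from an electrode over a motor region.
    onsets_s: movement onset times in seconds (e.g. taken from a motion sensor during calibration).
    Returns the averaged epoch; the slow negative drift before onset is the readiness potential.
    """
    pre, post = int(pre_s * FS), int(post_s * FS)
    epochs = []
    for t in onsets_s:
        center = int(t * FS)
        if center - pre >= 0 and center + post <= len(eeg):
            epochs.append(eeg[center - pre:center + post])
    return np.mean(epochs, axis=0)
```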

Sensorimotor Cortex

As the voluntary neck movement is executed, patterns of synchronized neural oscillations, particularly in the alpha (8–12 Hz) and beta (13–30 Hz) frequency bands, are observed over the sensorimotor cortex contralateral to the moving body part.
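Those band-limited changes are typically quantified as power in the alpha and beta ranges. A minimal sketch, assuming the same 250 Hz stream as in the earlier example, estimates the power spectrum with Welch’s method and integrates it over each band:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed earbud sampling rate

def band_power(signal: np.ndarray, low: float, high: float, fs: int = FS) -> float:
    """Integrate the Welch power spectral density between `low` and `high` Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), fs * 2))
    mask = (freqs >= low) & (freqs <= high)
    return float(np.trapz(psd[mask], freqs[mask]))

def motor_band_features(signal: np.ndarray) -> np.ndarray:
    """Alpha (8-12 Hz) and beta (13-30 Hz) power, the bands described above."""
    return np.array([band_power(signal, 8, 12), band_power(signal, 13, 30)])
```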

The primary somatosensory cortex (S1) does not just passively receive sensory feedback during movement, but actively integrates information about the impending motor command from motor areas like the primary motor cortex even before sensory signals arrive. This anticipatory motor information in S1 can help predict the consequences of the movement.

What decoding neck movements would likely look like

To decipher the direction of neck movements, the system would need to decode the spatial patterns and temporal dynamics of neural activity across the motor, somatosensory and associated cortical regions using advanced signal processing and machine learning techniques. Factors like the distribution of oscillatory power changes, their onset times, and phase relationships across multiple areas would all contribute to inferring movement direction.
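In practice, inferring movement direction usually means turning each short EEG window into a feature vector (for example, per-channel alpha and beta power as above) and training a classifier on labeled left/right calibration trials. The sketch below does this with scikit-learn’s linear discriminant analysis on placeholder data; it illustrates the general approach, not NeuroConnect’s actual decoding pipeline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250  # assumed earbud sampling rate, matching the earlier sketches

def band_power(x, low, high, fs=FS):
    """Power between `low` and `high` Hz from a Welch power spectral density estimate."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs * 2))
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])

def window_features(window):
    """Alpha + beta band power for every channel of one (n_channels, n_samples) EEG window."""
    return np.array([[band_power(ch, 8, 12), band_power(ch, 13, 30)] for ch in window]).ravel()

# Placeholder calibration data: 40 two-second windows of 4-channel EEG,
# labelled 0 = left neck movement, 1 = right neck movement.
rng = np.random.default_rng(0)
windows = rng.normal(size=(40, 4, 2 * FS))
labels = np.array([0, 1] * 20)

X = np.array([window_features(w) for w in windows])
clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, labels, cv=5).mean())  # rough decoding-accuracy estimate
```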

Impacts

Building NeuroConnect will have several significant impacts. By reducing the cost of speech-generation devices, we enable dysarthria patients to exercise their basic right to speak freely and to have genuine human connection, since they can face and make eye contact with the person in front of them. We also enable users to hold these conversations with an everyday device, their phone, rather than big, bulky hardware.

In a quantitative sense, by costing roughly one-tenth of current solutions and avoiding the lengthy processes of invasive BCIs, our device becomes accessible to most people who may need it.

Choose NeuroConnect: giving back the power of communication.
