Neurotech may heal mental illness, but it also raises risks

Tue 2 December 2025

Dr. Virginia Mahieu, Neurotechnology Director at the Center for Future Generations, leads work at the intersection of neuroscience, technology, and policy. In an interview for ICT&Health Global, she explains how emerging neurotools could transform our future and expand the boundaries of human potential. “The brain is the last frontier of innovation, and we’re just beginning to map it,” she says.

What important advances have helped neurotechnology grow quickly recently?

Neurotechnology is advancing rapidly across several fronts, but a few developments stand out. One major driver is sensor miniaturization, which makes wearables more portable and implants less invasive. Another is artificial intelligence: brain data is noisy and complex, and AI has greatly improved signal denoising and interpretation. It also enables closed-loop devices that can sense the nervous system and stimulate it in response, a major step forward for treating debilitating conditions.

Can you give an overview of the market? What kinds of neurotechnology devices are already available, and which are expected soon?

We tend to view the neurotech market in two main categories: the medical and consumer markets. They differ in use cases, business models, and regulatory paths, even though they rely on similar underlying technologies, including EEG, fNIRS, and microelectrode arrays, as well as electrical, magnetic, or infrared stimulation. The medical market is broad, covering devices that detect neurological conditions for diagnosis or treatment, such as epilepsy, stroke, or depression. The consumer market focuses on entertainment, computer interaction, and cognitive wellness (sleep, focus, productivity), and these last two increasingly blur the line with the medical sector.

We can also distinguish between invasive devices, such as deep brain stimulation and implantable BCIs like Neuralink, and non-invasive devices that sense or stimulate through the skull or the skin. These non-invasive, wearable systems dominate the consumer space.

A key trend is the rapid development of non-invasive wearables, as major tech companies enter the field. We expect neurotechnology to have a “mainstream moment” in the coming years, with brain-sensing capabilities built into headphones, glasses, watches, and even jewelry.

Neuralink aspires to develop an implantable brain-computer interface for everybody. How do you assess their progress, the promise, and potential threats?

Neuralink is a classic Elon Musk company: grand ambitions, substantial funding, and considerable controversy. Their technical progress is impressive; they have implanted around a dozen people with severe paralysis, restoring basic functions such as computer interaction and communication, and they have developed a robot to automate parts of the surgical procedure. They have also accelerated public expectations of what BCIs might eventually achieve, for better or for worse.

But the idea of turning an invasive implant into a mainstream consumer product, intended to “merge human consciousness with AI”, raises serious ethical questions. Their 2030 timeline is ambitious, and brain implants remain highly risky and invasive, with little long-term safety data. For most people, they are not medically justifiable, and some experts are even calling for a moratorium on consumer implants until the risks are better understood.

Implants, Neuralink’s or otherwise, can restore or enhance function, but they also place people in vulnerable positions by potentially exposing intimate aspects of cognition through the data they generate. They therefore need to be pursued and justified with extreme care.

How effective are non-invasive therapies like transcranial magnetic stimulation compared to invasive approaches, and what limitations do they have?

Invasive technologies provide better signal resolution and more direct, faster effects on neural targets than non-invasive ones. Some of them, such as DBS, also provide better access to deep brain areas involved in critical functions, including emotions, memory, and motor disorders such as essential tremor. However, they also come with much higher risk, higher costs, and far greater difficulty in implantation or deployment, especially for everyday use.

Non-invasive TMS, for instance, has shown clinically significant effects and is now approved for the treatment of depression, obsessive-compulsive disorder, migraine, and addiction, and is being investigated for many more indications, such as anxiety. More broadly, transcranial electrical stimulation is also approved for the treatment of depression, and is being investigated for working memory enhancement and alertness, among others (interestingly, also in military contexts).

Yet, despite their clinical potential, non-invasive therapies like TMS are often recommended only when other therapeutic approaches have failed. As the evidence base expands, we may see a greater impact of non-invasive therapies as they move from secondary to first-line treatments. And when balancing risks and benefits, non-invasive therapies offer a level of convenience and scalability that invasive devices won’t match for a long time.

How do you see consumer-focused neurotech devices influencing everyday life in the near future?

Consumer neurotech is moving into workplaces, schools, and entertainment, and we already see headphones and wristwatches incorporating cognitive tracking and brain-health metrics. This will likely become far more mainstream as major tech companies, like Meta, Apple, Samsung, Amazon, and Google, invest in or already integrate neurotech into their products.

These devices could open a new chapter of personal health and well-being tracking. Even simple brain signals can reveal habits, mental-health patterns, and broader indicators of physical health. Over time, continuous monitoring could help detect early signs of disease and support preventative care, particularly as mental and neurological disorders rise. The normalization of cognitive tracking could help address these challenges, but it also raises important questions about our values, vulnerabilities, and how we understand ourselves as humans.

What safeguards are essential to prevent misuse of sensitive neural data, especially in consumer applications?

Neural data is among the most sensitive data humans can generate, not only the raw signals but also the inferences drawn from them. These can reveal intimate characteristics, creating risks of discrimination or manipulation, and they pose unique consent challenges, since they may expose things that are subconscious or unknown to the person.

For example, consider a wellness device that detects early signs of Alzheimer’s disease: that information could be highly attractive to employers or insurers, and the user themselves may not be fully able to act on it – a symptom of the condition the device is detecting.

Robust safeguards must therefore ensure neural data, especially inferred data, remains under the user’s control. This requires privacy-by-default system design, strict limits on data sharing, and full transparency and user control over any access by third parties, including researchers or companies.

So how can companies ensure that employee participation in neuro-device monitoring is genuinely voluntary and ethical?

In workplace settings, consent is a major concern: can you truly give meaningful consent to cognitive monitoring if declining it could jeopardize your job? Monitoring should therefore be minimal and strictly limited to roles where it is genuinely necessary, for example, high-risk jobs where fatigue could have severe consequences.

In office environments, cognitive monitoring should remain fully under employee control if used at all. Any data provided to employers should be aggregated, and we should still question whether such information is truly needed for productivity. True voluntariness also means no penalties or incentives linked to participation. The same principle applies to schools and universities.

In all contexts, information about the device’s scientific validation and efficacy should be clearly communicated, a level of transparency not currently required in the consumer market.

What are the most urgent regulatory reforms Europe should consider for consumer neurotech?

A major emerging issue – which we at CFG are deeply concerned about – is the blurring of the line between consumer wellness products and medical devices. Many devices marketed as wellness products fall outside medical device regulation, despite claims that verge on clinical. The regulatory landscape for these wellness devices, including more broadly software-based wellness apps and biometric trackers, is fragmented and unclear. We urgently need a new framework for borderline devices, which we think should include five components:

  1. A dedicated EU-level market oversight body, with responsibility for monitoring and certifying digital wellness products and services.
  2. A transparency framework, similar to energy labels, in which an independent body would assess privacy, safety, and scientific validation.
  3. Stricter marketing rules, as current laws allow wellness companies to make medical-sounding claims so long as they avoid certain words.
  4. Specific guidance on how existing data regulations (e.g., GDPR, the AI Act) apply to non-medical brain data.
  5. A clear pathway for interoperability with the European Health Data Space. Wellness data could contribute meaningfully to preventive healthcare, but that opportunity could be lost without such a pathway.

How can consumers differentiate between scientifically validated neurotech products and hype-driven wellness devices?

Right now, this can be tricky. Many wellness devices use language that sounds medical, for instance, referencing conditions like ADHD or claiming to “help with symptoms” without explicitly stating they treat them. They often feature testimonials claiming their devices helped with a medical condition, or cite clinical guidelines or medical statistics, implying that their devices are grounded in that evidence. All of this is allowed under current marketing rules, even though it can be misleading.

The quality of scientific evidence also varies widely. Some companies have hundreds of peer-reviewed studies validating their specific device, while others provide only technology-level proof-of-principle, and some offer little more than a white paper.

Without scientific training, consumers cannot easily distinguish these tiers of evidence. This is why an independent transparency and validation framework is so urgently needed, as well as clearer and tighter rules on marketing practices.

What would a successful European neurotechnology ecosystem look like in 5–10 years, balancing innovation, safety, and ethics?

At CFG, our vision is that Europe maintains, and ideally grows, its already strong share of the global neurotech market (30%!), with a particular focus on areas of unmet medical needs where neurotechnology can be truly transformative for those who need it. We also see potential for consumer wearables to meaningfully improve mental health outcomes and help prevent neurodegenerative disease, but this must be achieved with deep respect for individual choices and privacy.

Europe has a uniquely strong foundation to achieve this, based on the values of privacy and respect for rights and freedoms, but only if we proceed intentionally, strategically, and quickly.

What practical steps can industry, researchers, and policymakers take to involve the public meaningfully in shaping the future of neurotechnology?

Public feedback, including user and patient feedback, is essential for a healthy neurotech ecosystem. These technologies touch the most intimate aspects of human life, and without early, meaningful engagement, we risk backlash and conspiracy theories similar to those seen with vaccines. Major, sometimes controversial actors, like social media and AI moguls, are entering the field, which could fuel misinformation. And because every brain is unique, relying on narrow demographic samples can make technologies ineffective or even harmful for others.

Industry should therefore seek and integrate user feedback across the entire lifecycle of neurotech: from concept, to design, to post-market use. Researchers and policymakers need to understand public awareness levels and concerns, particularly regarding privacy and autonomy. Above all, engagement must involve a diverse range of citizens, as cultural perspectives and personal experiences with neurotechnology vary widely. Our report Towards Inclusive EU Governance of Neurotechnologies offers further practical guidance.

Let's imagine an implant that allows communication with computers or smartphones by simply thinking. Would you go for it?

Personally, after studying this sector for years, I wouldn’t be first in line. As exciting as it sounds, I would need strong guarantees about long-term safety (both physical and cognitive) and privacy. I also strongly believe that this kind of invasive technology should be prioritised for those who genuinely need it to restore their quality of life.

I also think there's a real risk that this could fuel our already sky-high technological addictions, as we’ve seen with social media and now AI chatbots. We are constantly bombarded by information from screens of all sizes. Getting outside, talking to one another, touching grass, and just breathing a little more often would serve us well, both individually and collectively. And honestly, I value my quiet time to just be in my head, and I think we all should. Neuroscience supports this: look up the default mode network, a system that activates when we’re “bored” and plays a key role in creativity, problem-solving, and finding meaning in life.
