- Feb 20, 2026
Toxic practices within our own ranks
- Brendan Parsons, Ph.D., BCN
- Neurofeedback, Neuroscience, Practical guide
A field that attracts hope (and opportunists)
Neurofeedback is one of those rare methods that hits three cultural pressure points at once: it’s neuroscience-adjacent, it’s experiential, and it promises something almost everyone wants—better regulation. Better sleep. Better focus. Less anxiety. More resilience.
And to be fair, the underlying idea is compelling. Give the nervous system real-time information about itself, and it can learn—gradually, measurably—to shift patterns of arousal and attention. When it’s done with clinical discipline, neurofeedback can sit inside a broader, evidence-informed care plan as a training tool: not a miracle, but a complement to routine medicine and psychotherapy; a way to practice self-regulation with feedback you can’t get from willpower alone.
But here’s the tragedy: the same qualities that make neurofeedback interesting to serious clinicians make it irresistible to imposters.
Because “neuro” is a credibility shortcut.
It lets people borrow authority from brain science without doing the hard work of brain science. It lets marketing masquerade as methodology. It lets a proprietary product be sold as a clinical discipline. And because regulation is inconsistent and the public is understandably hopeful, the market rewards confidence faster than it rewards competence.
In other words, neurofeedback is emerging—and emergence is a vulnerable phase. It’s when standards are still being negotiated, language is still being defined, and the difference between training and theatre is still hard for non-experts to spot.
So this is an opinion piece, and it’s not polite. It’s a red line. It targets those who knowingly do wrong, and I hope it serves as a wake-up call for those who drift along with these toxic practices.
If you want to build this field, you accept scrutiny. You accept limits. You accept the boring responsibilities: operational definitions, transparent claims, adequate training, and real ethical safeguards.
If you want to use this field to build your ego—by selling fog, franchising certainty, and targeting vulnerable people—then you’re not advancing neurofeedback.
You’re destroying it. And I'm done staying quiet about it.
I'm going to call out two things in this post:
Those who steal the name to sell their product
Those who steal credibility to sell their name
Part 1 — When opportunists borrow neurofeedback to promote pseudoscience
Let’s talk about the so-called “new generation” of neurofeedback.
When a product’s pitch is built on mystique, proprietary authority, and glossy certainty, it is not “the future.” It’s the opposite of science: it’s a closed system asking you to suspend judgment.
And that is exactly what this kind of marketing does.
Sometimes the pitch is for the world’s leading “dynamical” neurofeedback system—fully automated, delivering real-time information about brain activity without requiring diagnoses, protocols, or medication. Its sellers position it as a non-invasive brain-training technology that helps people feel more balanced in focus, rest, mood, resilience, and performance.
Then the scope quietly balloons: they frame it as a deep and lasting learning process, and in the same breath suggest that one or two sessions may help the brain regulate in the face of major life events like severe illness, job loss, trauma, even anesthesia.
That combination should make any serious clinician sit up straight.
Because it’s the classic recipe: maximal scope, minimal accountability.
If the expert is the software, it’s not clinical neurofeedback
One of the most revealing lines in this type of marketing is:
Although this method is entirely based on expertise intrinsic to the software…
Read that again.
They’re admitting the core expertise doesn’t live in the clinician’s assessment, training design, clinical reasoning, or protocol selection. It lives in the software.
Neurofeedback, done responsibly, is an applied learning intervention. The mechanism is training: feedback + learning + adaptation, guided by assessment and clinical judgement. If you replace that with “trust the software,” what you’re selling is not neurofeedback as a scientific method. You’re selling a branded consumer experience with a neuro-aesthetic.
And once “the expertise” is the proprietary software hidden behind a thick veil of secrecy, the method becomes non-falsifiable in practice. When outcomes are good, the software “worked.” When outcomes are poor, the person “needs more sessions,” or life stress “interfered,” or the process is “unpredictable.” Either way, the black box cannot be meaningfully challenged.
That’s not science. That’s a self-sealing marketing system.
The claims are sweeping, and the safety language is performative
The list of benefits reads like a catalogue of human suffering: ADHD, autism, stress, insomnia, anxiety/panic, depression, eating disorders, learning disorders, OCD/phobias, self-confidence, leadership, performance...
The text repeatedly suggests improvement is expected, often fast (“rapid improvements,” “from session to session... until problems disappear”), with broad psychosocial outcomes (“clear mind,” “higher performance,” “better day to day life for you and your loved ones”).
Then, buried in the same material, they insert a disclaimer:
This system is a training tool and does not diagnose, treat, mitigate, prevent or cure any disease, disorder or abnormal physical state…
That’s not ethical clarity. That’s legal duct tape.
You cannot spend paragraphs implying benefit across serious clinical categories and then pretend the disclaimer erases the psychological impact of what you just sold. People don’t buy disclaimers. They buy the story you told before the disclaimer.
“Two days to autonomy” is not training. It’s franchising.
When anyone promotes a 2-day certification course (“a low-cost, weekend-style fee”) so trainees can “independently and autonomously start their practice”... please think it through and look deeper. When the program includes “running a session,” “the trainer’s role,” “follow-ups,” and then, with zero embarrassment, “starting your business”... yeah. You’re not getting clinical education; you’re getting a “get rich quick” pitch. (My advice: run away!)
And the "bonuses":
– a website
– a flyer
– help setting up Google Business
– help setting up a Facebook Pro page
That’s not a curriculum. That’s a sales funnel packaged as professional training.
It’s the commercialization of legitimacy: train the operator just enough to run sessions, give them marketing assets, and send them into the world as “certified professional.”
And they explicitly say it’s accessible to everyone: “consumers” as well as “professionals.”
This is the ethical nightmare scenario in neurofeedback: turning a brain-based intervention into a retail credential that can be acquired by anyone in a weekend. Real clinical education does not include posing for a selfie for social media marketing...
The “certificate” is a marketing prop, not a professional safeguard
Their FAQ concedes the certificate is not a diploma, then pivots: it can reassure clients and “strengthen your CV,” and it’s “recognized” by the product’s own parent organization.
That tells you what the credential is for: posturing and persuasion.
The evaluation method is also revealing: a take-home multiple-choice test, 90% pass mark, retakes allowed.
That’s not competency-based assessment. That’s administrative theatre.
“No side effects” is a red flag, not reassurance
Whenever someone leans hard on “gentle,” “non-invasive,” “effortless,” and “safe for everyone,” they’re not educating you—they’re sedating your skepticism.
Yes, neurofeedback may be non-invasive in the sense that it doesn’t inject, stimulate, or medicate. But anything that repeatedly shifts arousal, mood, sleep, and attention is not a neutral wellness toy. People vary. Context matters. Some respond quickly, some slowly, some not at all, and some get destabilized before they improve. Side effects happen. They are real. They have to be taken seriously.
Responsible practitioners say that out loud, monitor it, and have a plan.
“Without side effects” is how you market a scented candle. When it’s used to sell brain-based intervention, it’s a sales accelerant wearing a lab coat.
“Millions of hours” of use does not translate to an evidence base
This is a favourite move: replace outcomes with volume. “Millions of hours,” “thousands of sessions per day,” “in dozens of countries around the world.”
That’s not science. That’s market penetration.
If you want credibility, show what was measured, in whom, compared to what, with what effect sizes, how durable the changes were, how adverse events were tracked, and what the drop-out and non-response rates looked like.
Usage statistics can’t answer any of that. They’re a popularity contest dressed up as validation.
Bottom line: stop calling this neurofeedback
Strip the pitch to its skeleton and it’s this:
Trust the software. Expect broad benefits. Don’t worry about risk. Get certified fast. Start a business.
That is not neurofeedback as a clinical discipline. That is productized pseudoscience: a black-box experience wrapped in the language of neuroscience and sold as if it carries the same legitimacy as assessment-driven, competence-governed practice.
And here’s the collateral damage: when these systems are marketed as “the newest, most advanced neurofeedback,” they inflate public expectations to fantasy levels, recruit weekend-certified operators, and then—when results don’t match the promise—poison trust in the entire field.
That isn’t innovation.
It’s dilution with a body count.
Part 2 — When frauds pretend to be experts and authorities
Now for the second pattern: the institute that positions itself as the gateway, the owner, and the certifier of a proprietary, trademarked neurofeedback “method.”
What follows is not mind-reading. It’s structural critique. You can see the incentives on the page.
The “guardians of best practice” posture
This is the oldest credibility trick in the book: cite the major professional bodies to borrow their authority, then quietly redirect the reader toward your own house-brand credential.
You’ll see the familiar name-dropping—BCIA, ISNR, AAPB—followed by a shiny promise like:
“Join our experts and obtain your certification… to transform your practice!”
Here’s the problem. Referencing those institutions signals a world where legitimacy comes from independent standards and independent certification—BCIA certification. That’s what those names mean to informed professionals: governance that exists outside any one vendor’s sales funnel.
So if you cite the AAPB, ISNR, and BCIA to imply you stand for standards, but you aren’t actually recognized by any of these institutions, you’re laundering credibility.
Being a member of AAPB and ISNR is great. It's really important that we support the associations that support and represent us. But paying for a membership is not the same as being properly accredited... and pretending otherwise is fraudulent.
It’s hypocritical, it’s misleading, and it invites exactly the inference you want the public to make: “this must be the recognized standard.” It isn’t.
Serious clinical standards are independent of the brand selling the training. When the same organization sells the education, sells the identity, and sells the certification, you don’t have governance.
You have a monopoly. You have a cult-like mentality. You will almost certainly be taken advantage of.
Trademarking the “method” and selling a private credential
Branding is not the sin. A closed ecosystem masquerading as open and validated science is.
The move is simple: take a broad umbrella and label it: “Buzzword neurotherapy.” Trademark the identity, build a ladder, and then charge people for access to belonging. Once the label becomes a protected brand asset, the seller becomes the gatekeeper—not just of training, but of legitimacy.
That’s how you get practitioners who don’t use an approach—they blindly adhere to it. Because their reputation, referrals, and self-concept are now tied to the brand.
This is the opposite of scientific culture. Science wants methods you can replicate and hypotheses you can test. These systems want disciples and brand ambassadors, not critical thinkers, scientists, or clinicians.
“Knowing just enough to be dangerous” is the diagnosis
In this field, ignorance is obvious. Overconfidence isn’t.
The most hazardous practitioner isn’t the person who says, “I don’t know.” It’s the person who knows just enough to sound scientific, just enough to sell certainty, and not enough to understand where the landmines are.
You can see it in the language: big, comforting adjectives—holistic, embodied, integrative—used as a substitute for operational clarity. Those words aren’t automatically wrong. They’re just incredibly convenient when you don’t want to explain what you actually do.
But complexity is not a license to be vague. Complexity is a demand for precision—and the first precision question is always about the people claiming authority.
Who is teaching this, and what are their credentials outside their own ecosystem?
What independent certifications do they actually hold, and are those certifications current and verifiable?
What is their standing in the field—peer-reviewed publications, invited talks at credible conferences, contributions to standards, supervision track record, or other evidence they’ve advanced the science rather than just sold a story?
And if they can’t answer those, then the rest of the “complexity talk” is just camouflage.
Because competent education can articulate the basics in plain language:
What is being assessed, with what tools, and with what decision rules?
What is being trained, what signals are targeted, and how are parameters adjusted?
What are the contraindications, risk flags, and referral pathways?
What does non-response look like, and what do you do when it happens?
Instead, the pitch is often “business-ready”: a fixed number of hours, a promise of “significant results,” and language about rapidly growing a client base and meeting profitability goals.
That’s how you manufacture practitioners who feel authorized before they’re competent.
And in a brain-based discipline, that’s not just embarrassing.
It’s how people get hurt.
The marketing targets the vulnerable, not the informed
Watch who the message is for.
Serious training markets to professionals who can evaluate claims, demand operational clarity, and recognize scope issues.
This kind of ecosystem markets primarily to vulnerability: parents terrified for their children, adults desperate for relief, families looking for anything that feels like hope. It’s all calls-to-action, quizzes, “discoveries,” VIP language, and emotional storytelling.
The manipulation playbook (and how to inoculate yourself)
This is the style of marketing that doesn’t try to inform you—it tries to move you.
It usually follows a predictable sequence:
Emotional hook. A personal story, a “message that moved me,” a moment of inspiration. The goal is to warm you up before any claims are made.
Moral framing. The pitch draws a line between “those who take action” and “those who hesitate,” between “abundance” and “fear,” between “pioneers” and “the stuck.” It turns skepticism into a character flaw.
Insecurity activation. You’re reminded of your fatigue, your limits, your frustration—“the same tools, the same routines, the same results.” This isn’t clinical honesty; it’s engineered dissatisfaction.
Vague superiority promises. You’re offered “objective transformation,” “precise adjustments,” “systemic understanding,” “real change”—language that feels scientific but is hard to pin down, replicate, or falsify.
Urgency + intimacy call-to-action. A short call this week. Limited slots. A warm invitation. It feels personal. It’s a conversion funnel.
Inoculation is simple: every time you feel inspired, flattered, shamed, or rushed—pause and ask the questions ethical training welcomes.
Who is teaching—and can you verify their qualifications outside their own brand? (Licensure, degrees, scope, supervision history.)
Are they BCIA-certified in the discipline they’re selling? If not, why are they leaning on BCIA/AAPB/ISNR language while marketing a private house credential as “the standard”?
What have they contributed that survives contact with peer review: publications, conference presentations, guideline work, or recognized clinical teaching in the wider field?
What competencies are actually assessed in vivo (case formulation, signal quality/artefact handling, protocol selection, monitoring, documentation)—not just a take-home quiz or attendance badge?
What is the risk governance: contraindications, adverse-effect monitoring, stopping rules, and referral pathways when a client destabilizes?
What outcomes are tracked, with what measures, over what timeframe—and what do they do with non-response and dropouts?
Finally: if you removed the storytelling, urgency, and belonging cues, would there still be a clear, testable method on the table?
Responsible clinical education increases your ability to think critically and tolerate uncertainty. This style of marketing does the opposite. It recruits through emotion, status, and belonging—then sells the credential as the cure for doubt.
The “buzzword neurotherapy” umbrella is the perfect shape for bullshit
Here’s why these umbrellas sell so well: they’re designed to be unfalsifiable.
Define your method as organized around universal human pillars—posture, breathing, sleep, emotions, cognition—and you can’t really be wrong. You can always say you’re being “holistic.” When one approach fails, you can always retreat into another, cyclically, again and again and again...
But clinical credibility doesn’t come from being broad. It comes from being specific: operational definitions, measurable claims, clear boundaries, and the ability to say, “Here’s where this does not apply.”
Grand evolutionary storytelling and sweeping systems language can be inspiring. It can even be clinically useful as a narrative frame. But it is not evidence, and it doesn’t replace risk governance.
Take home message: don’t fall for people who bundle vague concepts into a polished package and sell it as peak innovation. A polished turd is still a turd—but you only need to say it once.
Wrap it in “ethics” while building an ecosystem that benefits from ambiguity
They say the right things: technology doesn’t replace judgment; data isn’t a verdict; the person should be active.
But sentiment is cheap.
The ethical test is structural: does the ecosystem push people toward independent standards, transparent methods, conservative scope, and referral discipline?
Or does it push people toward a branded identity, an internal credential, a recommended equipment pathway, and a business model that depends on rapid adoption?
From what’s on the page, the incentives point in one direction: train, buy, start practicing fast, grow fast, and then purchase the next rung of the ladder.
That’s how a field gets contaminated from the inside.
Quietly. With pretty words. And zero substance.
What good looks like
If you want a clean compass in a noisy marketplace, this is it.
Ethical neurofeedback education and practice is boring in the right ways. It is specific, bounded, and accountable. It can tell you what is being measured, what is being trained, why those choices were made, and what happens when things don’t go to plan. It tracks outcomes. It acknowledges non-response. It discusses risks. It has referral pathways. It encourages skepticism, not loyalty.
And it never needs to borrow legitimacy from professional organizations while selling a private credential as if it’s the same thing.
Words don’t count. Actions do.