- Apr 10
A Microphone Is Not a Credential
- Brendan Parsons, Ph.D., BCN
- Neurofeedback, Neuroscience
If You're Here, Here's What's Happening
I shared this post – or linked to it – because there's likely something you just read about neurofeedback that isn't true, isn't well-supported, or isn't honest about what the evidence actually says.
I'm going to try to explain why.
But first, two things I need to say clearly.
This is not an attack on neurofeedback.
And if you happen to be the author of the "offending" piece, this is not an attack on you, personally.
I work in this field. I have done so for two decades now, and I am still learning. I still say "I don't know" more than I say anything else with full confidence. Neuroscience is genuinely hard. Making sense of it, communicating it accurately, and applying it responsibly takes time, training, and a tolerance for uncertainty that doesn't come naturally to most of us. I make mistakes. I have made them in print, in supervision, in front of rooms full of people. Being wrong is not the problem. Being wrong is how we learn.
The problem ā the specific problem I am pointing at ā is something different.
It is what happens when someone picks up a microphone, or opens a social media account, and presents claims they have not adequately examined, with a confidence the evidence does not support, to an audience that has no easy way to know the difference. Most of the people who do this are not malicious. Many are genuinely enthusiastic about neurofeedback, genuinely believe they are helping, and are often right about the potential of the field (at least in some ways).
But good intentions do not neutralize bad information. And a person who communicates publicly about clinical interventions bears responsibility not just for what they believe, but for whether what they say can actually be defended, and for knowing the difference between the two.
I believe neurofeedback is a real and promising area of clinical practice and neuroscience research. I believe it deserves rigorous investigation, careful practitioners, and serious public discourse.
What I do not believe is that the field is served by claims that are too confident, too vague, too inflated, or too careless with the science, especially when those claims reach audiences that include clients, families, and people trying to make consequential decisions about their care.
That is the problem this piece is about.
Not neurofeedback.
Not honest mistakes.
Not thoughtful disagreement about evidence or protocol.
The problem is the gap between what the evidence supports and what gets said ā and the unwillingness, in some quarters, to close it.
We Need To Educate About Neurofeedback... Responsibly
A field is not invalidated by the existence of poor communicators or overconfident marketers. If anything, the opposite is true: when a field has genuine potential, it becomes more important ā not less ā to defend it from exaggerated claims and sloppy reasoning.
Misinformation does not stay contained. It migrates. It becomes the thing clients quote back to you in intake sessions. It becomes the rationale families use to delay or abandon treatments that do have evidence behind them. It becomes the ammunition that skeptics ā some of them justified, some of them not ā use to dismiss the entire domain.
So when I correct a bad claim, I am not launching a frontal assault against the person who made it ā and I am certainly not attacking neurofeedback. I am pushing back on a pattern that, regardless of intent, does the field real damage.
I am pro-neurofeedback. I am anti-bullshit.
Those are not contradictory positions. In this field, they are the same.
What Keeps Going Wrong
There is a recurring pattern in bad neurofeedback discourse. It usually sounds scientific from a distance. Up close, it falls apart.
Decorative neuroscience: the neuromyths
Sometimes it is a basic neuroscience fact repeated because it sounds impressive. "The brain has 100 billion neurons." Perhaps ā though that figure has been challenged and refined in the literature, and more careful estimates have revised it downward (toward 86, for those counting). But the more important question is: why is this being said? Is it relevant, accurate enough for the context, and connected to a real clinical or scientific argument ā or is it simply decorative? Neuroscience vocabulary deployed to impress rather than advance understanding is not education. It is performance.
Pseudotechnical inflation: fancy jargon
Sometimes the problem is pseudo-technical language that collapses under the most basic scrutiny. I've been told, for instance, that with "therapeutic neurofeedback" ā a specific proprietary approach ā we are training "below 0 Hz." Well... no. There is nothing below 0Hz. And even ILF (heading down into the 0.001Hz range) is highly controversial, although backed up by a small amount of peer-reviewed literature. Either way, that's not a precise definition of what signal is being measured, or of its relevance to neuroscience, neurofeedback, or the brain. What is being filtered, and how? The hardware and software needed to cleanly measure a signal at that speed... the length of the recording (artifact free)... If you want to impress a neuroscientist, talk about operational elements rather than rhetoric. How that claim connects to a physiological mechanism rather than a metaphor.
If the explanation dissolves into vibes, sales copy, or appeals to proprietary software, that is not sophistication. That is evasion in a lab coat.
Regulatory inflation: borrowing credibility
Sometimes the confusion is about authorization versus evidence. A device is CE-marked, cleared, homologated, or legally marketed – and somehow that gets rhetorically stretched into "effective," "validated," or "scientifically proven" across an impressive list of disorders.
No. That is not how evidence works.
Regulatory authorization and clinical efficacy are categorically different things. A device can be legally cleared for use as a biofeedback or relaxation tool without there being strong controlled evidence that it effectively treats depression, autism, ADHD, PTSD, migraines, Parkinson's disease, Alzheimer's disease, allergies, asthma, addiction, and burnout – all at once, all confidently, all without meaningful qualification.
Confusing those categories is either careless or convenient. Sometimes both.
The black box
And then there is the black box.
The algorithm is proprietary. The method is special. The mechanism is unclear but somehow also revolutionary. The testimonials are powerful. The science is invoked when helpful and dismissed when inconvenient.
Clinical experience is treated as sacred when someone asks for data – but scientific-sounding language is deployed aggressively when the product needs to be marketed. The double standard is consistent, and exhausting.
Science and lived experience are compatible. Deeply compatible. But not when "experience" becomes a shield against scrutiny, and "proprietary" becomes a reason to avoid transparency rather than a description of legitimate intellectual property.
What the Evidence Actually Looks Like
Let me be concrete, because vagueness is part of the problem.
Neurofeedback research in some of its best-studied applications – ADHD, headache, epilepsy, depression, PTSD, and lots of promising and emerging work with ASD, insomnia, age-related disorders, etc. – shows encouraging results. Systematic reviews and meta-analyses do exist, even if they get the perspective wrong on occasion. This is a legitimate area of inquiry with real findings, real complexity, and real reasons for optimism.
But the honest picture is also this: results are often mixed. Effect sizes are frequently modest. Heterogeneity across studies is substantial. Non-specific therapeutic factors are not always well controlled. Expectancy effects, spontaneous improvement, and regression to the mean remain live concerns in much of the literature. Simply put: neurofeedback is operator dependent, there is no "one size fits all" approach, and there is no single explanation of why neurofeedback works (or doesn't).
In neurofeedback, research is not always as straightforward as it appears. Sham conditions are often treated as if they were inert placebos borrowed cleanly from pharmacology. They are not. In many designs, sham neurofeedback remains behaviorally and psychologically active: it preserves the training context, the feedback display, expectancy, effort, attention, and sometimes even partial contingency. Likewise, strict double-blind procedures can constrain or dilute core active ingredients of genuine neurofeedback, including voluntary self-regulation and adaptive clinician-guided calibration. So a small or null difference between active and sham conditions does not automatically mean neurofeedback lacks specific effects. Sometimes it means the control model does not map cleanly onto an interactive, learning-based intervention.
My objection is not that neurofeedback should be held to a lower standard. It is that it should be held to the right standard. A bad control condition is not rigor. A conceptual mismatch is not neutrality. And importing pharmacological trial logic into an adaptive, participatory, learning-based intervention does not magically make the resulting inference clean.
This means a responsible communicator can say: neurofeedback is an active and promising area of neuroscience, with meaningful signals in specific applications and important open questions about mechanism, generalizability, durability, and trial design.
What a responsible communicator cannot say – or should not say without serious qualification – is that a specific proprietary modality is broadly effective across dozens of heterogeneous neurological, psychiatric, developmental, and medical conditions, with durable results and without meaningful side effects, based on device authorization, selective citation, and clinical anecdote.
The gap between those two statements is where most of the bad communication lives.
The Difference Between Experience and Claim
I want to be careful here, because there is a real and important point hiding inside a lot of bad epistemics.
Lived experience matters. Clinician observation matters. Practice-based knowledge matters. Individual response matters.
Any serious clinician knows that the literature often lags behind practice. That clients present in ways studies do not capture. That rigorous trials exclude the very complexity that defines real clinical work. That you can see something repeatedly, in well-documented cases, long before anyone has designed a controlled study to confirm it.
I am not dismissing any of that.
What I am saying is that none of it – none of it – provides license to convert observations into universal claims, or to present a pattern of clinical experience as if it were equivalent to a well-controlled trial.
Experience can generate hypotheses. It can guide individualized care. It can sharpen our intuitions and refine our protocols. What it cannot do, all by itself, is eliminate expectancy effects, selection bias, placebo response, regression to the mean, non-specific therapeutic factors, or the human tendency to remember the wins and rationalize the losses.
That is not a criticism of experience. It is a description of why the scientific method exists.
The real standard is not science versus experience. It is science and experience in honest dialogue – each informing the other, neither getting a free pass.
Science without contact with clinical practice becomes sterile and often clinically irrelevant. Practice without accountability to evidence becomes folklore. And folklore with a headset and a proprietary algorithm is still folklore.
Why I Correct This Publicly
Because the misinformation is public.
Because the marketing is public.
Because the overclaiming is public.
Because clients, parents, and desperate adults are reading these posts, watching these videos, and visiting these websites right now.
Because if a confident but inaccurate claim goes unchallenged often enough, it starts to sound like consensus – especially in a field where many practitioners and most of the public don't yet have the background to evaluate it critically.
And because the neurofeedback world has a particular and exhausting habit of treating any request for definitions, mechanisms, or data as if it were a personal attack rather than a professional standard.
It is not a personal attack.
If you make a public claim, you can tolerate a public question.
If you use scientific language, you can tolerate scientific standards.
If you invoke evidence, you can tolerate being asked for it.
If you market expertise, you can tolerate scrutiny.
That is not persecution. That is the price of credibility.
On Being Wrong – and What Comes After
I want to say this carefully, because I mean it.
Being wrong is fine. Being wrong is normal. Being wrong is, for anyone who takes science seriously, a recurring and often productive experience. I have been wrong about protocols, wrong about mechanisms, wrong about what a study actually showed. I have corrected myself publicly, adjusted my thinking, and moved on. That is not a weakness. That is how the process works.
The distinction I am drawing is not between people who make mistakes and people who don't. That distinction doesn't exist. The distinction I am drawing is between people who make mistakes and update, and people who make the same mistakes repeatedly, confidently, in public, and treat correction as persecution.
That is not learning. That is willful ignorance with a platform.
And here is the uncomfortable part: the person with the platform bears a specific responsibility. Not just for what they believe, but for what they broadcast – and for being honest about the limits of both. Having a large following, a polished website, a podcast, or a compelling clinical story does not expand the evidence base. It does expand the reach of whatever you say about neurofeedback... so please be careful about what you say.
I also know the script that follows when someone raises these concerns. The critic is "too rigid." The clinician asking for evidence is "closed-minded." The one distinguishing between promising and proven is "attached to old paradigms." The request for transparent methods is somehow "anti-innovation."
No.
I am not angry at scrutiny. I am angry at sloppy scrutiny and sloppy advocacy alike.
The healthiest thing a field can do is stop rewarding certainty theater: the confident gestures, the persuasive affect, the selective use of citations, the unanswerable appeals to proprietary knowledge. None of that is expertise. Expertise is calibrated. It knows what it knows, knows what it doesn't, and says so clearly. The words "I don't know" are not a failure of expertise. They are its most reliable marker.
What a Mature Field Sounds Like
A mature clinician in a mature field can say:
This approach is promising, but the evidence is still mixed.
This specific protocol is based on clinical reasoning, not a definitive biomarker.
This device is authorized for use, but that is not equivalent to having controlled efficacy data for every condition on the brochure.
This client improved, and I don't know with certainty how much of that improvement was the protocol, the relationship, the expectancy, the passage of time, or some combination of all of them. That uncertainty doesn't make the work less meaningful. It makes me more careful.
This sham-controlled trial is interesting, but the control is not necessarily inert, and the inference is only as clean as the design logic behind it.
This study is useful, but it is small, undercontrolled, difficult to generalize, or methodologically mismatched to the intervention it is trying to evaluate.
This method may help some people, and I am not going to present it as an established answer for two dozen disorders across the lifespan because the evidence doesn't support that framing.
That is not negativity. That is not closed-mindedness. That is not hostility to innovation.
That is maturity. And more of our field needs to grow into it.
The Standard I Am Defending
I want a field with room for innovation, but not immunity from scrutiny.
I want practitioners who are curious, but calibrated.
I want confidence anchored to competence.
I want clinic websites and social media posts that clearly separate:
supported uses from speculative ones,
device authorization from therapeutic proof,
client testimony from controlled data,
mechanistic hypotheses from established mechanisms,
enthusiasm from exaggeration.
And I want us to stop pretending that asking for these distinctions is hostile. It is not hostile. It is the minimum standard for talking responsibly about interventions that affect vulnerable people ā people who are often in pain, often having exhausted other options, and often placing a great deal of trust in the expertise of the person in front of them.
That trust deserves better than a confident performance.
Final Thought
I have been quiet about this for too long. Not because I didn't see it – I've been watching this pattern for years – but because calling it out takes time I don't always have, and because it can feel like picking fights when I would rather be doing the actual work.
But the misinformation keeps accumulating. The clients keep arriving having read it. The families keep making decisions based on it. And the people doing careful, rigorous, honest work in this field keep having their credibility eroded by proximity to claims that can't be defended.
I'm done being quiet about it.
Not because I have unlimited time – I don't, which is precisely why this essay exists. The next time I see something that needs it, I can share a link instead of explaining it from scratch. That's the point. This is not a tantrum. It is an efficiency measure.
If you're here because I flagged something you posted: I am not assuming bad faith. I am assuming you care about this field, or you wouldn't be in it. What I am asking is that you take the flag seriously – that you sit with the specific claim, look at what it actually rests on, and ask yourself whether you can defend it to a skeptical but fair audience. If you can, I'd genuinely like to know how. If you can't – that's the beginning of something useful.
Understand the gesture correctly.
It is not a personal attack.
It is not bitterness or score-settling.
It is not anti-neurofeedback.
It is a refusal to let marketing replace method. A refusal to let regulatory language masquerade as efficacy data. A refusal to let proprietary opacity pose as expertise. A refusal to let bullshit – however well-intentioned – become the public face of a field worth taking seriously.
A microphone is not a credential.
A testimonial is not a trial.
A proprietary algorithm is not an argument.
And calling that out is not cruelty.
It is professional responsibility. For all of us.