- Dec 31, 2025
Neurofeedback: Owning the Debate, Raising the Bar
- Brendan Parsons, Ph.D., BCN
- Neurofeedback, Neuroscience
Neurofeedback has reached an uncomfortable but necessary moment in its history. For more than a decade, some of its most visible critics—most notably Thibault and colleagues—have dominated the public scientific narrative, framing neurofeedback as a methodologically fragile intervention propped up by placebo, expectation, and ideological enthusiasm.
Much of this criticism landed because the field allowed it to. Neurofeedback expanded clinically faster than it matured scientifically, and when challenged, too often responded with defensiveness, appeals to anecdote, or silence. In doing so, it ceded the microphone to critics who, while methodologically astute, are not practitioners of neurofeedback and frequently misunderstand its core mechanisms.
This post argues for a shift in posture. Neurofeedback must continue to face its critics head-on—but it must also become its own most rigorous critic. The goal is neither to dismiss scepticism nor to elevate it to authority. Rather, it is to recognise that external critics have identified real weaknesses, while also acknowledging that they routinely mischaracterise what neurofeedback actually is, how it is practiced, and how learning-based interventions function.
If neurofeedback is to progress, it cannot outsource its self-critique to those outside the field. Nor should it grant disproportionate authority to critiques grounded in incomplete models of learning and self-regulation. The task now is internal: to raise standards, clarify fundamentals, and move the conversation beyond debates that no longer reflect the state of the practice.
What Thibault et al. got right
The core contributions of the Thibault papers deserve to be stated plainly.
First, they forced the field to confront a genuine methodological blind spot: behavioural improvement can occur without durable, measurable changes in resting-state EEG or objective sleep architecture. The Schabus insomnia study, frequently cited in their commentaries, made this tension explicit and uncomfortable.
However, the broader conclusion often drawn from this work—that double-blind, sham-controlled designs represent the gold standard for evaluating neurofeedback—is far less convincing. Neurofeedback, like psychotherapy and other learning-based interventions, cannot be meaningfully double-blinded without distorting the intervention itself. Participants are not passive recipients of a hidden ingredient; they are active learners engaged in self-regulation, strategy formation, and meaning-making.
In this respect, demanding double-blind designs in neurofeedback reflects a category error rather than methodological rigour. Psychotherapy research abandoned the fantasy of true double-blinding decades ago, not out of convenience, but because blinding the participant undermines the very mechanisms through which change occurs. Neurofeedback belongs in this same class of interventions.
The real lesson of the Schabus study is not that neurofeedback fails when double-blinded, but that outcome measures, learning trajectories, and transfer must be specified more carefully. Behavioural change without immediate or static neural markers is not an anomaly—it is a familiar feature of complex learning systems.
Second, they correctly identified a publication ecosystem vulnerable to false positives. Small samples, flexible analyses, ideological commitment, and commercial entanglements were not incidental problems—they were structural ones. In this respect, their critique mirrored broader concerns raised across psychology and neuroscience.
What their analysis largely ignored, however, is the complementary problem of false negatives—particularly when neurofeedback is compared against so-called sham conditions. Sham neurofeedback is not an inert placebo. It preserves many of the most potent ingredients of learning-based interventions: sustained attention, repeated practice, feedback contingencies, expectancy, therapist interaction, and motivational framing. When such an active control is treated as a neutral baseline, true effects are easily washed out, leading to systematic underestimation rather than overestimation of efficacy.
In other words, a field can be simultaneously vulnerable to false positives and false negatives. Focusing exclusively on the former, while treating sham neurofeedback as methodologically pristine, creates a distorted evidentiary landscape that penalises complex learning interventions for working through mechanisms they are explicitly designed to recruit.
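The false-negative argument above can be made concrete with a toy simulation. All effect sizes below are invented for illustration: both the real and the "sham" arm deliver the shared learning ingredients, while only the real arm adds a small specific effect of contingent feedback. The measurable contrast then depends entirely on which baseline is chosen.

```python
import math
import random
import statistics

random.seed(42)

# Illustrative toy model (all effect sizes invented): both real and
# sham neurofeedback deliver shared learning ingredients (attention,
# practice, expectancy); only the real condition adds a specific
# effect of contingent feedback on top.
N = 200                  # participants per arm
SHARED_LEARNING = 0.8    # effect delivered by real AND sham training
SPECIFIC_EFFECT = 0.3    # effect unique to contingent feedback
NOISE_SD = 1.0           # individual outcome variability

def arm(mean):
    """Simulated outcomes for one study arm."""
    return [random.gauss(mean, NOISE_SD) for _ in range(N)]

inert = arm(0.0)                                  # truly inert baseline
sham = arm(SHARED_LEARNING)                       # active "sham" control
real = arm(SHARED_LEARNING + SPECIFIC_EFFECT)     # contingent feedback

def cohens_d(a, b):
    """Standardised mean difference between two arms."""
    pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

d_vs_inert = cohens_d(real, inert)  # large total effect
d_vs_sham = cohens_d(real, sham)    # small residual contrast

print(f"real vs inert baseline: d = {d_vs_inert:.2f}")
print(f"real vs active sham:    d = {d_vs_sham:.2f}")
```

Under these assumed numbers, the real-versus-inert contrast is large while the real-versus-sham contrast is small; a study powered to detect the former will routinely miss the latter, which is precisely the false-negative risk described above.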
Third, they highlighted a neglected truth: expectation, motivation, therapeutic alliance, and engagement are powerful drivers of clinical change. Neurofeedback, wrapped in technological salience and delivered over repeated, structured sessions, is exceptionally well-positioned to mobilise these forces.
Where the critique falters is in treating this fact as a weakness rather than a strength. When ethically and responsibly leveraged, these factors are not contaminants to be eliminated but active ingredients to be understood, specified, and optimised. Entire fields—psychotherapy, rehabilitation, education, and skill learning—are built on the principled use of expectancy, alliance, and engagement to drive durable change.
The problem is not that neurofeedback recruits these mechanisms; it is that the field has too often failed to articulate how and why it does so, leaving critics free to relabel learning dynamics as placebo effects. Properly framed, the capacity of neurofeedback to harness expectation and engagement is not evidence against its legitimacy, but evidence of its alignment with how complex biological systems actually learn and adapt.
On these points, resistance from the neurofeedback community was largely defensive rather than scientific.
Where external critique reaches its limits
Before addressing conceptual limits, it is necessary to be more explicit about a recurring source of confusion regarding authority. Thibault has repeatedly defined his own position as that of “a cognitive neuroscientist who has extensively reviewed the EEG‑neurofeedback literature.” This self-description matters, because it also delineates the boundaries of his competence.
Extensive literature review is not the same as expertise in a complex clinical and technical practice. Neurofeedback is not reducible to signal extraction, frequency bands, or statistical contrasts; it is a learning-based intervention whose effectiveness depends on protocol design, state-dependence, individualisation, therapist skill, and moment-to-moment adjustments that are rarely captured—let alone understood—through published methods sections alone.
Yet Thibault’s critiques repeatedly proceed as if reading the literature were sufficient to exhaust what neurofeedback is. From this vantage point, failures of poorly specified protocols are treated as failures of principle, and limitations of early experimental designs are elevated to ontological claims about brain self-regulation.
To put it bluntly, familiarity with a field’s papers does not confer mastery of its practice. As a favourite analogy goes: watching every Formula 1 race does not make one Schumacher. Knowing the rules, lap times, and statistics is not the same as sitting in the cockpit, responding to feedback at speed, and making decisions under constraint.
This does not invalidate external critique—but it does render Thibault’s authority partial, provisional, and frequently overstated, particularly when his conclusions are treated as definitive judgments on a technique he has never practiced.
The central problem with the “neuroplacebo” argument is not that it invokes placebo mechanisms, but that it treats them as explanatory endpoints rather than learning mechanisms.
The false dichotomy of specific versus non-specific effects
Thibault et al. repeatedly frame placebo-related factors as “non-specific,” implicitly contrasting them with a hypothetical pure, mechanistic neural intervention. This distinction collapses under even modest scrutiny.
Learning is not specific in the way pharmacological binding is specific. Operant conditioning, reinforcement learning, and active inference are inherently contextual, expectation-sensitive, and meaning-dependent. In motor rehabilitation, psychotherapy, exposure therapy, and skill acquisition, we do not discard interventions because expectancy and engagement contribute to outcomes—we design around them.
By this standard, calling neurofeedback a “superplacebo” is not a disqualification. It is an admission that it functions as a high-salience learning environment.
Sham neurofeedback is not inert
Another conceptual error lies in the interpretation of sham-controlled designs. Sham neurofeedback is routinely treated as a neutral baseline, when in fact it remains a rich training context: sustained attention, repeated attempts at self-regulation, feedback contingencies, therapist interaction, and motivational framing all remain intact.
That sham and genuine neurofeedback often produce similar behavioural outcomes does not demonstrate the absence of learning. It demonstrates that learning is not uniquely tethered to narrow frequency modulation under poorly matched protocols.
An impoverished model of neurofeedback learning
Implicit in the critique is the assumption that successful neurofeedback should produce rapid, linear, consciously accessible control over predefined EEG metrics. Contemporary learning models offer no such expectation.
Neural self-regulation is:
- state-dependent,
- non-linear,
- highly individual,
- and sensitive to arousal, reward structure, and task meaning.
Failure to regulate a predefined signal under generic conditions says more about protocol design than about the brain’s capacity to learn.
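To make "protocol design" concrete, here is a minimal, hypothetical sketch of what a non-generic criterion can look like: instead of a fixed universal target, the reward threshold adapts to a moving baseline of the trainee's own recent signal, so success is defined relative to current state. The class name, window size, and percentile are illustrative assumptions, not a published protocol.

```python
from collections import deque

class AdaptiveThreshold:
    """Hypothetical state-dependent reward criterion: reward a sample
    when it exceeds a chosen percentile of the trainee's own recent
    signal history, rather than a fixed generic target."""

    def __init__(self, window=30, percentile=0.75, min_baseline=10):
        self.history = deque(maxlen=window)  # recent amplitude samples
        self.percentile = percentile
        self.min_baseline = min_baseline     # samples needed before rewarding

    def update(self, amplitude):
        """Record one sample; return True if it beats the adaptive criterion."""
        rewarded = False
        if len(self.history) >= self.min_baseline:
            ranked = sorted(self.history)
            idx = int(self.percentile * (len(ranked) - 1))
            rewarded = amplitude >= ranked[idx]
        self.history.append(amplitude)
        return rewarded
```

Because the criterion tracks the individual's recent state, the same trainee is neither starved of reward on a low-arousal day nor handed trivial success on a good one; a fixed generic threshold offers neither property.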
What has changed since the critique
Perhaps the most overlooked limitation of the Thibault corpus is its temporal anchoring. The critique is aimed squarely at a form of neurofeedback that many practitioners no longer defend.
Modern neurofeedback research increasingly emphasises:
- protocol individualisation rather than one-size-fits-all frequency targets,
- learning models grounded in reinforcement learning and active inference,
- multimodal feedback integrating arousal, attention, and engagement markers,
- and explicit acknowledgment of non-responders as a design problem rather than a nuisance variable.
In other words, many of the failures highlighted by Thibault et al. have already been metabolised by the field—often implicitly, sometimes uncomfortably, but undeniably.
Reframing neurofeedback: from signal control to skill acquisition
If neurofeedback is understood not as a tool for direct neural micromanagement, but as a structured environment for learning brain–body self-regulation, much of the controversy dissolves.
From this perspective:
- Expectation is not a confound; it is a learning prior.
- Engagement is not noise; it is task salience.
- Therapist interaction is not contamination; it is scaffolding.
The real scientific challenge is not eliminating these factors, but specifying how they interact with neural constraints to produce durable change.
This reframing does not excuse poor methodology. On the contrary, it demands better experimental designs—ones that model learning trajectories, individual differences, and transfer, rather than treating them as post hoc inconveniences.
Brendan’s perspective: on expertise, accountability, and professional boundaries
Although this essay is unapologetically opinionated, it reflects a concern shared quietly by many serious practitioners: the neurofeedback field has been too permissive about who gets to claim expertise.
Neurofeedback is technically demanding, clinically subtle, and highly sensitive to operator skill. Yet the field routinely tolerates self-appointed experts whose primary credentials consist of cursory trainings, marketing fluency, or selective citation of the literature. These voices do real damage. They inflate claims, muddy concepts, provoke legitimate scepticism, and ultimately hand critics the very examples they need to dismiss the field wholesale.
If neurofeedback is to mature, it must draw firmer professional boundaries. Expertise should be grounded in demonstrable training, supervised clinical experience, methodological literacy, and an ability to articulate learning-based mechanisms with precision. Those who repeatedly misrepresent the technique, oversell outcomes, or posture as authorities without the requisite competence should not be amplified under the banner of inclusivity.
This is not a call for ideological purity, nor for silencing dissent. It is a call for accountability. Fields earn credibility not only by welcoming critique, but by refusing to legitimise pseudo-expertise within their own ranks. In medicine, psychotherapy, and rehabilitation, professional communities do this through standards, certification, peer review, and—when necessary—clear professional distancing.
Neurofeedback must do the same. Allowing poorly grounded voices to dominate conferences, social media, or commercial platforms does not democratise the field; it erodes it. Responsible self-regulation means being willing to say, plainly and publicly, that not all opinions carry equal weight—and that some claims do not deserve a microphone.
Raising the bar is uncomfortable. But failing to do so guarantees that others will continue to define the field from the outside.
Conclusion
Neurofeedback does not advance by silencing critics—but neither does it advance by endlessly rehearsing the same external critiques as if they were final judgments.
The work of Thibault and colleagues exposed real problems, yet it also revealed the limits of outsider critique. Lacking practical grounding in neurofeedback, their analyses frequently mistake design failures for theoretical impossibilities and learning mechanisms for placebo contamination. When the field treats such critiques as definitive, it unintentionally validates misunderstandings of its own foundations.
The more urgent task is internal. Neurofeedback must become radically self-critical: intolerant of weak protocols, vague claims, poor outcome measures, and unexamined commercial influence. It must demand learning models that reflect how brains actually change, designs that respect individual variability, and evidence that extends beyond short-term subjective improvement.
At the same time, the field must stop outsourcing its identity to its critics. Neurofeedback is not a pharmacological intervention in disguise, nor is it a failed attempt at neural micromanagement. It is a learning-based method for training self-regulation in complex biological systems—messy, slow, context-dependent, and profoundly human.
Facing criticism head-on does not mean amplifying every critical voice. It means engaging seriously, correcting misunderstandings clearly, and holding ourselves to standards higher than those imposed from the outside.
If neurofeedback is to earn its future, it will not be by winning arguments with its critics—but by making many of those arguments obsolete through better science, clearer theory, and more disciplined practice.
References
Ali, S., Lifshitz, M., & Raz, A. (2014). Empirical neuroenchantment: From reading minds to thinking critically. Frontiers in Human Neuroscience, 8, 357. https://doi.org/10.3389/fnhum.2014.00357
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Schabus, M., Griessenberger, H., Gnjezda, M.-T., Heib, D. P. J., Wislowska, M., & Hoedlmoser, K. (2017). Better than sham? A double-blind placebo-controlled neurofeedback study in primary insomnia. Brain, 140(4), 1041–1052. https://doi.org/10.1093/brain/awx011
Thibault, R. T., & Raz, A. (2016). The psychology of neurofeedback: Clinical intervention even if applied placebo. American Psychologist, 71(7), 679–691. https://doi.org/10.1037/amp0000118
Thibault, R. T., Lifshitz, M., & Raz, A. (2017). Neurofeedback or neuroplacebo? Brain, 140(4), 862–864. https://doi.org/10.1093/brain/awx033
Thibault, R. T., Lifshitz, M., & Raz, A. (2018). The climate of neurofeedback: Scientific rigour and the perils of ideology. Brain, 141(2), e11. https://doi.org/10.1093/brain/awx330