Alpha Frequency Neurofeedback Sharpens the Timing of Attention
- Brendan Parsons, Ph.D., BCN
- Neurofeedback, Neuroscience, Optimizing performance
Attention is a temporal problem. The brain has to decide when to engage and when to release, often in fractions of a second, against a continuous stream of sensory input that does not pause to be polite. Alpha-band activity — the 7 to 13 Hz rhythm with its parieto-occipital signature — has been theorized for decades to participate in this temporal coordination. Alpha amplitude shapes inhibitory gating, suppressing irrelevant input. Alpha frequency — the actual peak rate of oscillation — has been proposed to shape the resolution at which attention samples the world. Faster alpha rhythms travel with shorter reaction times, finer perceptual resolution, more efficient cognitive processing. We have known the correlations for years.
What we have not had is causal evidence. Most of what we know about individual alpha frequency (IAF) and attentional dynamics has been correlational, or based on indirect manipulations such as rhythmic transcranial magnetic stimulation at IAF + 1 Hz. Whether training a person's IAF — letting them learn voluntary control over the parameter — would in turn alter how attention unfolds in time has been an open question. Correlation is one thing; the brain politely cooperating with a training protocol is another.
A new study published in NeuroImage in 2026 by Jacques and colleagues at the French Armed Forces Biomedical Research Institute and partner labs gives that question its first large, well-controlled, mechanistically framed answer. One hundred and one healthy adults completed five sessions of EEG-based neurofeedback or an active sham, performing the Attention Network Test (ANT) after each session. (Biofeedback, broadly, is a training method that gives people real-time information about a physiological signal — heart rate, respiration, muscle tension, skin conductance, brainwaves — so they can learn to regulate it. Neurofeedback is the EEG-specific case.) In this study, the target was not amplitude — the parameter most attention-related neurofeedback protocols have historically chased — but frequency: the peak rate at which the parieto-occipital alpha rhythm was oscillating in real time. The sham intervention targeted random EEG parameters across blocks, a genuinely active placebo rather than a no-treatment control. The design was double-blind, the analytic framework was mature (linear mixed-effects models, cluster-based permutation, Bayesian multilevel mediation), and the study set up its analyses to test mechanism — not just does it work, but via which steps does it work, and how much of the behavioral effect is statistically attributable to each step. This is the kind of methodological care the field has long needed and rarely received.
Why should clinicians care? Three reasons. First, the populations most likely to benefit from IAF-targeted training are precisely those with documented slower alpha rhythms — post-concussion, mental fatigue, mild cognitive impairment, depression-related cognitive slowing, late-stage chronic fatigue with a cognitive component. Second, the study's mechanistic framing tells us which attentional component is being trained — the timing of cortical engagement, not its magnitude — which has direct implications for who is and is not a candidate. Third, the methodological standard set here should be a reference point. Active sham, double-blind, large sample, mediation analysis. This is what high-confidence neurofeedback evidence looks like, and frankly we should expect to see a lot more of it.
Methods
Sample and design. One hundred and eight healthy adults were enrolled (mean age 30.5 ± 7.99 years, 55 women, 96 right-handed). Participants had normal or corrected-to-normal vision and no neurological diagnoses. Seven were excluded for technical issues, signal quality problems, or loss to follow-up, leaving 101 analyzable participants. Randomization was 3:1 to neurofeedback (n = 71) or active placebo (n = 30). Group assignment was double-blinded. The study was approved by the institutional ethics committee (Comité de Protection des Personnes Nord Ouest IV), and participants received monetary compensation.
Procedure. Five neurofeedback sessions on five consecutive weekdays. On day 1 (Monday), participants completed an initial Attention Network Test, followed by 4 minutes of resting-state EEG (eyes open and closed). The first NF session followed, with another 2 minutes of resting EEG and a self-report questionnaire (the NeuroFeedback Experience and Treatment Questionnaire, NeXT-Q) afterward. On days 2 through 5, the ANT was administered after the NF session each day.
Neurofeedback interface. Each session ran for 30 minutes — 10 blocks of 3 minutes with 30-second breaks. The interface, developed in OpenViBE and Python, displayed a yellow cursor moving along a vertical axis based on the participant's mean IAF z-score, computed in real time at parieto-occipital electrodes (P3, Pz, P4, O1, Oz, O2) using the formula proposed by Klimesch (1999). The goal was to use a self-chosen mental strategy — four generic examples were provided, but participants were encouraged to develop their own — to move the cursor toward the target (an IAF z-score above a threshold). When the cursor reached the target for at least 500 ms, the participant received a green visual signal and a reward sound.
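For readers who like to see the moving parts: the Klimesch (1999) quantity being fed back is an alpha-band "center of gravity." A minimal sketch of that computation, assuming Welch spectra and a simple baseline z-score (the authors' actual OpenViBE/Python pipeline is not reproduced here, and the windowing and baseline handling are my assumptions):

```python
import numpy as np
from scipy.signal import welch

def iaf_center_of_gravity(eeg_window, fs=250, band=(7.0, 13.0)):
    """Estimate individual alpha frequency (IAF) for one channel window
    via the alpha-band 'center of gravity' (Klimesch, 1999):
    IAF = sum(a(f) * f) / sum(a(f)), summed over the alpha band."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(psd[mask] * freqs[mask]) / np.sum(psd[mask]))

def feedback_z(channel_windows, baseline_mean, baseline_sd, fs=250):
    """Mean IAF across parieto-occipital channels, z-scored against a
    resting baseline -- the kind of quantity that drives the cursor."""
    iafs = [iaf_center_of_gravity(ch, fs=fs) for ch in channel_windows]
    return (np.mean(iafs) - baseline_mean) / baseline_sd
```

The center of gravity weights each frequency bin by its power, so a rightward shift of the alpha peak moves the estimate up even when amplitude is unchanged, which is exactly the dissociation the study trades on.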
The threshold was dynamically adjusted to maintain a reward rate between 30 and 60% — a feature of the design worth pausing on. If a participant exceeded 60% rewards in three consecutive blocks, the threshold was raised by 0.1 z; if rewards dropped below 30%, it was lowered. This is good neurofeedback engineering. Reward rates that drift too high mean the brain is no longer being shaped (everyone wins, the learning curve flattens). Reward rates that drift too low mean the participant disengages or becomes frustrated. The 30–60% window is where shaping happens.
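The adjustment rule is simple enough to sketch. This is an assumed reading of the described logic; in particular, whether the lowering rule also requires three consecutive blocks is my assumption, not something the paper spells out:

```python
def update_threshold(threshold, recent_reward_rates,
                     low=0.30, high=0.60, step=0.1, window=3):
    """Adaptive threshold sketch (assumed logic, not the authors' code):
    raise the z-threshold by `step` if the reward rate exceeded `high`
    in the last `window` consecutive blocks; lower it if the rate fell
    below `low` in those blocks; otherwise leave it unchanged."""
    if len(recent_reward_rates) < window:
        return threshold
    last = recent_reward_rates[-window:]
    if all(r > high for r in last):
        return threshold + step
    if all(r < low for r in last):
        return threshold - step
    return threshold
```

The point of the consecutive-block requirement is hysteresis: a single lucky or unlucky block should not move the goalposts.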
After each block, participants reported the strategy they had used.
Active sham. PBO participants underwent the same protocol structure, but the EEG parameter targeted was randomly altered across blocks within each session — increase beta amplitude in one block, decrease theta in the next, decrease delta in the third. This preserved engagement and the perceived sense of BCI control without producing genuine consistent IAF training. Crucially, this is not a no-treatment control — it is a credible placebo intervention, which means the contrast is much harder for the active condition to win.
Learner classification. Following Su et al. (2021), NF participants were classified post-hoc as learners or non-learners using a learning index (the average session-by-session IAF increase relative to S1, normalized). Non-learners had null or negative indices. This split yielded 40 learners, 31 non-learners, and 30 active PBO participants.
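A minimal sketch of the classification logic, with the exact normalization assumed (the paper paraphrases Su et al. rather than printing the formula, so treat this as illustrative):

```python
import numpy as np

def learning_index(session_iafs):
    """Learning-index sketch after Su et al. (2021), as paraphrased in
    the paper: the average session-by-session IAF change relative to
    session 1, normalized by the session-1 value. The original
    normalization may differ -- this is an assumed form."""
    s1 = session_iafs[0]
    gains = [(iaf - s1) / s1 for iaf in session_iafs[1:]]
    return float(np.mean(gains))

def classify(session_iafs):
    """'Learner' if the index is positive; null or negative indices
    are classified as non-learners, per the paper's rule."""
    return "learner" if learning_index(session_iafs) > 0 else "non-learner"
```

The same logic transfers to practice as a running check: a client whose per-session IAF series yields a flat or negative index is being exposed to the protocol, not acquiring it.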
Outcome measures. Behavioral outcomes from the ANT included response time (RT), efficiency score (correct response rate divided by mean RT), and percentage change S1 to S5. Three attention networks were computed: alerting, orienting, executive control. EEG-ANT analyses included α-ERD amplitude (Pfurtscheller & Lopes da Silva, 1999) and α-ERD latency (time from cue or target to peak desynchronization). Target-locked P1 latencies were computed at occipital electrodes around 100 ms post-target.
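The two summary behavioral measures are simple ratios. A minimal sketch, assuming RT in milliseconds:

```python
def efficiency_score(n_correct, n_trials, mean_rt_ms):
    """ANT efficiency as described: correct-response rate divided by
    mean RT. Higher = both faster and more accurate."""
    return (n_correct / n_trials) / mean_rt_ms

def pct_change_s1_s5(value_s1, value_s5):
    """Percentage change from session 1 to session 5; negative values
    mean the measure (e.g., RT) decreased across training."""
    return 100.0 * (value_s5 - value_s1) / value_s1
```

So a participant who goes from a 600 ms mean RT at S1 to 558 ms at S5 shows a −7% RT change, the order of magnitude reported for learners.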
EEG and statistics. A 32-channel BrainAmp system with actiCAP electrodes, sampled at 250 Hz, referenced to FCz with an Fpz ground, impedances <10 kΩ. Preprocessing in MNE-Python included a 0.1–30 Hz band-pass filter, ICA for artifact removal, and a common average reference. Linear mixed-effects models with participant as random intercept were run in JAMOVI, with Holm correction for multiple comparisons. Bayesian multilevel mediation was conducted in R using the bmlm package (Vuorre & Bolger, 2018), with default priors and 10,000 MCMC samples; 90% credible intervals were adopted following McElreath's (2018) recommendation.
Results
Did the neurofeedback work? The first result is straightforward and important: 40 of the 71 analyzable NF participants — 56% — successfully upregulated their parieto-occipital IAF across the five sessions. The active PBO group also showed a smaller, mostly frontal IAF increase from session 3 onward, which the authors interpret as a non-specific cognitive engagement effect (more on that in the Discussion — placebo conditions in neurofeedback are rarely fully inert, and pretending they are is its own kind of mistake). Non-learners actually decreased their IAF after S1. The trained increase in learners was significantly larger than the PBO group's drift across the topography, with a roughly 0.35 Hz S5-vs-S1 gain in learners compared with about 0.15 Hz in active PBO and a small decline in non-learners.
The behavioral signal followed predictably. Learners showed significantly shorter RTs at S5 versus S1, and the percentage RT change S1 to S5 was significantly larger in learners (around 7%) than in non-learners or active PBO. The repeated-measures correlation between mean RT and IAF was modest but real (r = −0.13, p = 0.012), and within-individual coefficients were significantly more negative in learners than in non-learners (p = 0.002) and active PBO (p = 0.024). Higher IAF, shorter RT, within and across participants. A roughly 50-millisecond improvement at the group level — modest in absolute terms, robust across analyses, and independent of the alerting, orienting, or executive control networks. The improvement was a generalized response-time speedup, not a network-selective effect.
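Repeated-measures correlation asks a within-person question: were a given participant's higher-IAF sessions also their faster sessions? The core of the computation is within-participant centering; this sketch omits the degrees-of-freedom correction a full rmcorr implementation applies, so it illustrates the logic rather than reproducing the study's statistic:

```python
import numpy as np

def within_participant_corr(subject_ids, x, y):
    """Center x and y on each participant's own mean, then correlate
    the pooled residuals. Stable between-person differences (one
    person being globally faster) are removed before correlating."""
    ids = np.asarray(subject_ids)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = np.empty_like(x), np.empty_like(y)
    for s in np.unique(ids):
        m = ids == s
        xc[m] = x[m] - x[m].mean()
        yc[m] = y[m] - y[m].mean()
    return float(np.corrcoef(xc, yc)[0, 1])
```

Because each participant is centered on their own mean, a negative value here means the within-person pattern itself, sessions with faster alpha tending to be sessions with faster responses, not merely that fast-alpha people respond quickly.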
The mechanistic signal was more interesting. The α-ERD amplitude — how much the cortex disengaged from idling alpha following stimuli — did not differ between groups. But α-ERD latency — when the cortex began disengaging in the cue–target window — was significantly shorter in learners across S2, S4, and S5, with the same pattern across all cue conditions except the no-cue trials. In other words, learners' cortices did not engage more strongly with task-relevant cues; they engaged earlier. The repeated-measures correlation between IAF and α-ERD latency confirmed the link (r = −0.17, p < 0.001), and within-individual coefficients were significantly more negative in learners than non-learners (p = 0.002).
Mediation analysis tied it together. A partial sequential pathway: NF training increased IAF, IAF increases were associated with shorter α-ERD latencies, and shorter α-ERD latencies were associated with faster RTs. The mediated proportion was 14.26% for IAF on RT (95% CI [0.99, 31]), and 15.65% for α-ERD latency on RT (95% CI [4.89, 52.34]). The total effect of NF on RT was β = −12.51 (90% CrI [−22.22, −2.69]), with an indirect effect through IAF of β = 1.87 (90% CrI [0.07, 4.30]) — significant, but representing only about a seventh of the overall NF-to-RT effect. The remaining ~85% of the behavioral effect was not mediated by IAF or α-ERD latency. This is an important number to sit with, and we will.
A target-locked ERP analysis added one more piece. Learners — and only learners — showed significant decreases in P1 latency across sessions, supporting the interpretation that NF-induced IAF increases facilitate faster sensory encoding at early stages of visual processing.
Demographic comparability. No significant group differences on age, education, sex, laterality, caffeine or tobacco consumption, sport hours per week, or HADS anxiety and depression scores. One baseline difference was observed, and it deserves attention: learners had significantly lower baseline IAF at S1 than both non-learners and active PBO. This contrasts with an earlier finding (Wan et al., 2014) that higher baseline alpha amplitude predicted neurofeedback learning, and the authors interpret the discrepancy as more likely a ceiling effect than a true contradiction. We will return to what this means clinically.
Discussion
This study does several things I really appreciate. It treats the neurofeedback trial as an experimental probe of a specific mechanistic hypothesis — does training IAF causally alter the temporal dynamics of attention? — rather than as a clinical-efficacy demonstration, and it accepts the methodological cost that comes with that goal. The design is double-blind, the active sham is genuinely active rather than a no-treatment control, the sample is large by neurofeedback standards, and the analytic framework is mature. It is the kind of paper the field needs more of.
The cleanest empirical contribution is the dissociation between α-ERD amplitude and α-ERD latency. The two indices answer different mechanistic questions. Amplitude asks: how much does the cortex disengage from idling alpha activity in response to a cue? Latency asks: when does that disengagement begin? In this study, training-induced IAF gains shifted the timing without changing the magnitude. The cortex did not engage more strongly when needed. It engaged earlier. That distinction is not a methodological footnote — it points to a different family of clinical questions than the ones amplitude-based protocols typically answer.
The active PBO group's frontal IAF drift deserves an honest acknowledgment. PBO participants showed a smaller but real IAF increase from session 3 onward, despite training parameters that could not produce consistent IAF gains. The authors interpret this as a non-specific cognitive engagement effect — the experience of doing neurofeedback-like training, with a cursor and a reward and a self-chosen mental strategy, may itself produce broad cortical changes in some participants. The trained increase in learners was significantly larger than the PBO drift and was localized to the parieto-occipital target rather than the frontal pattern, which means the NF effect cannot be reduced to engagement alone. But the field should keep paying attention to what active sham conditions are doing. Sham is rarely inert, and treating it as a clean baseline can obscure as much as it clarifies.
The mediated proportion is the most interesting limit. IAF accounted for about 14% of the NF-to-RT effect; α-ERD latency accounted for another 16% of the IAF-to-RT effect. The remaining ~85% of the behavioral improvement was not statistically attributable to either of the trained markers. The authors are appropriately cautious about speculating, and so should we be — but the clinical lesson is straightforward enough. Neurofeedback effects in real practice are almost never single-mechanism. Clients improve because they engage with a structured weekly intervention, because they develop better mental strategies during training, because they feel more competent in regulating their internal state, because the therapeutic alliance moves their broader self-care, and also because the trained EEG parameter shifts. The 14–16% number is the part this study could measure cleanly. It is real, it is causal, it is replicable. It is also not the whole story — and a study honest enough to say so is a study you can build on.
The non-learner question — 44% of the NF participants did not increase their IAF meaningfully — sits in the broader literature where 30–40% non-response rates are routinely reported. The candidate-selection signal here is the lower-baseline-IAF-predicts-learning finding. This is, in my view, the most actionable clinical takeaway in the study, and we will spend more time on it below.
Limitations are substantive. The protocol was short (five sessions across one week), with no longitudinal data on durability. Healthy adults are not the clinical populations most likely to benefit, so the translation step requires its own evidence base. The ANT, while well-validated, is administered in controlled conditions and does not capture the ecological complexity of attention in daily life. Mental strategies were not standardized — which is also a strength, because real-world neurofeedback is rarely strategy-rigid, but it adds variance the design did not try to control.
For clients considering neurofeedback for attentional symptoms, the news is good at the level of “the brain’s natural rhythms are not fixed traits — they are tunable, with training, and consistency.” But it is not yet a guarantee that any given protocol will work for any given person. Roughly half of the trained group in this study did not change their IAF meaningfully, and the protocol effect on RT, while real, was modest. That is real outcome data worth being honest about.
For referring clinicians — physicians, psychologists, neuropsychologists — the lower-baseline-IAF-predicts-learning finding is the clearest practical signal. If you are working with a client whose presentation includes cognitive slowing, post-injury fatigue, mild cognitive impairment, or burnout-related cognitive blunting, IAF-targeted neurofeedback is now in a position to be discussed as a candidate intervention, with caveats appropriate to the early-stage nature of the clinical evidence.
For neurofeedback practitioners, the mechanistic distinction matters most. Most attention-NF protocols target amplitude. This study points to a different lever — frequency — and a different mechanism (faster cortical disengagement timing, not greater disengagement magnitude). Whether and how to combine the two is a question worth several years of careful clinical experimentation.
A provocative clinical question to end on: should some clients with attentional dysregulation that includes slow-alpha presentations first work on IAF up-regulation as a kind of trainability prerequisite, before being asked to modulate amplitude in emotionally loaded contexts? The study cannot answer that, but it sets up the conditions under which the question becomes worth asking.
Brendan’s Perspective
Why this paper changes the conversation about amplitude
Most neurofeedback for attention has focused on amount — the amplitude of theta, the amplitude of beta, the ratio between them, the amplitude of alpha at sensorimotor sites. We say it so often it can feel automatic: train SMR up, train theta down, watch what happens. This is one of the cleanest reminders I have read in a while that frequency is a different lever, with different mechanics, and probably different clinical indications.
The amplitude framing has carried us a long way, and there is good reason for that. Amplitude is straightforward to measure, the protocols are well-established, and the literature on amplitude-based neurofeedback for ADHD, anxiety, and stress is genuinely encouraging. Amplitude is also, frankly, easier to explain to a referring physician — we are training your client to produce more of this and less of that. But amplitude is, in a real sense, a magnitude story. It tells you how much inhibitory gating is in play, how much cortical engagement is being mobilized, how much resource is being allocated. It does not tell you about the timing of any of it. And timing, as this study makes clear, is its own clinical variable.
Think about what that means in the consulting room. The highly activated client — the one with chronic anxiety, sensory defensiveness, trauma-related hyperreactivity, panic-proneness — usually presents with elevated, possibly even racing, neural and autonomic activity. Their alpha rhythms may already be running on the faster end of the spectrum. The clinical problem is more about gating (too much sensory leak-through, too much cortical engagement at rest) than about timing (the cortex engages plenty fast when it has to, sometimes faster than is helpful). For this client, amplitude-based work — training alpha up at the right sites, calming sensorimotor engagement — is sensible as a first move.
But there is another client profile entirely, and it does not fit that picture. The post-concussion client at six months out who still feels “slow.” The client in a depressive episode whose thinking feels syrupy. The client in their late sixties with mild cognitive impairment whose family describes them as “a half-step behind in conversations.” The client with chronic fatigue and a cognitive component. The client coming back from burnout, reading the same paragraph three times before it lands. Those clients often have measurably slower alpha rhythms — an IAF in the lower part of their age-expected distribution — and the clinical complaint is fundamentally about timing. The cortex is engaging, but it is engaging too late, or too slowly. Amplitude-based training does not necessarily address that. The Jacques study suggests, with appropriate caveats, that frequency-based training might.
That distinction changes how I would think about a referral and an intake interview. If a client describes their difficulty in terms of capacity (“I cannot focus, I get overwhelmed, I cannot block things out”), I am thinking about gating and amplitude. If a client describes it in terms of speed (“everything feels delayed, I am slow to register things, I cannot track conversations the way I used to”), I am thinking about timing and frequency. The two presentations look superficially similar in a hurried clinical conversation. They imply different protocol families.
The clients I have in mind when I read this
The lower-baseline-IAF-predicts-learning finding is the part of the study I keep coming back to. Across the literature, “who responds to neurofeedback” remains a stubborn question, and most of the predictors that have been proposed — demographic, cognitive, psychological — have been weak or inconsistent. This study contributes one that is mechanistically coherent and clinically actionable: clients whose alpha rhythms are already at the upper end of their age-expected distribution may have less room to move; clients in the lower end may have more. A ceiling effect, plausibly. And it lines up with intuition.
Who, concretely, is in that lower-IAF group?
Post-concussion clients are an important one. Posterior alpha slowing is one of the most consistent qEEG findings post-concussion, and the symptomatology — cognitive slowing, mental fatigue, slowed processing speed — maps directly onto what IAF training would, in principle, address. We do not have clinical-trial evidence in this population yet. We do have a well-controlled mechanistic study in healthy adults and a documented physiological correspondence with the population. Those are not the same, and I would not pretend they are. But they are reason enough to bring IAF into the qEEG-informed protocol design conversation for these clients, while being honest with them about what is established and what is not.
Mild cognitive impairment is the next one. Posterior alpha slowing is a well-known feature of early MCI and the early stages of Alzheimer’s-spectrum disease. The clinical caution here is real — these clients are often older, cognitively more vulnerable, and on medications that may interact with cortical excitability. IAF training is not a primary intervention for MCI; it is at most a candidate adjunct. But the population fit is mechanistically reasonable, and the conversation is worth having with referring physicians.
Depression with a cognitive-slowing presentation — the client whose depressive episode shows up not primarily as low mood but as the inability to think with normal speed and clarity — is another population worth thinking about. So is post-COVID cognitive impairment, where the literature on slowed alpha and reduced peak frequency is now substantial. So is the burnout client whose recovery has stalled at the cognitive-fluency level even after the autonomic and emotional work has progressed.
What unites these populations is a shared mechanistic story: the cortex is engaging, but it is engaging too slowly relative to what the client needs in their life. That is a frequency problem, not an amplitude problem. The amplitude protocols may not be wrong for these clients, but they are probably not the most direct path to the clinical goal.
A note of restraint: these are populations where IAF-targeted training is plausible as a candidate. We do not yet have the clinical-trial evidence to make confident claims. The honest framing in a referring conversation is that this is an emerging area, the mechanistic case is now stronger than it was a year ago, and the protocol should sit alongside other interventions rather than replace them. That is not a sales pitch. That is what defensible clinical communication looks like in 2026.
Running this in a real practice — six lanes worth thinking about
The study tested IAF training as an experimental probe. Translating it into a clinical protocol requires more than just “train the same parameter for more sessions.” Six lanes worth thinking through, none of them optional:
Threshold management. The 30–60% reward rate was not a methodological footnote; it is a clinical principle. A reward rate that drifts above 80% can mean the brain is no longer being shaped — the client is winning easily and the learning curve will flatten. A reward rate that drifts below 20% means the client is not solicited enough: losing without knowing how to recover, disengagement incoming. In a real practice I want thresholds reset at the start of each session based on a fresh baseline, not last week’s setting and a hope. The dynamic adjustment in the study is a model we work with clinically.
Transfer-task design. The behavioral effect appeared on a structured visual attention task with cues. If the clinical goal is real-world attentional gain, the transfer task has to live in the same domain. Visual scanning during reading, target detection in classroom-relevant or sport-relevant contexts, conversation tracking under mild distraction, sustained attention during a working task that matters to the client. Sitting in a quiet room watching a cursor and getting reward sounds is not, by itself, where attentional improvements show up in a person’s life. The cursor is the gym; the gym is not the sport.
Sequencing logic. Before IAF training, I want autonomic baseline and sensorimotor stability in place. A nervous system that is dysregulated at rest is not ready to learn a fine-grained frequency target — the precision the protocol asks for is real. HRV biofeedback first, sleep stabilization first, basic arousal regulation first, particularly for the post-concussion and burnout profiles where autonomic dysfunction is part of the picture. After IAF training, the work shifts to consolidation — taking the gains into more demanding, more ecologically valid attention tasks. The middle of a clinical course is rarely a single protocol applied for ten weeks; it is usually a sequence.
Adaptive protocol logic. If a client is not learning IAF modulation after four to six sessions of careful threshold management, the right move is not to keep going. The right move is to ask whether this is the wrong target, the wrong client profile, or the wrong moment in their treatment arc. Alternatives are sitting right there: SMR work, alpha amplitude regulation, frontal midline theta, coherence-based work, HRV biofeedback as the more foundational intervention. Loyalty to a protocol is not loyalty to a client.
Multi-modal integration. The paper trained one parameter in isolation. Real clinical work rarely does. For the cognitive-slowing client, IAF + HRV biofeedback addresses cortical timing and autonomic readiness simultaneously, which matters because the cortex is downstream of the autonomic state — a tired, dysregulated system does not produce a sharp alpha rhythm even if you train it. For the post-concussion client, I would add respiration training, since vagal efficiency is often impaired post-concussion and breath work is one of the cleanest paths into baseline regulation. For the burnout client, sleep work has to be in the protocol or the IAF gains will not consolidate. The dimensions interact; the protocol should reflect that.
Learning-tracking. Exposure is not acquisition. The Su et al. learning index used in the study (mean session-by-session IAF increase relative to S1) is a reasonable starting point for clinical practice even without statistical formalization. If a client’s IAF is not trending up after four to six sessions, the protocol is not being learned, and continuing it for ten more sessions in hope does the client a disservice. That distinction — exposure versus acquisition — matters more than total session count, more than total time-in-chair, more than the protocol’s published evidence base for a different population.
This study’s five-day protocol is an experimental probe, not a clinical prescription. Real clinical IAF training, if and when it is delivered, should look more like a typical neurofeedback course: 20 to 40 sessions, reassessments, adaptation of protocol based on response, attention to all the things this experimental design controlled for (engagement, mental strategy, autonomic state, motivation). That is the clinical reality the experimental literature is gradually catching up to.
If I had to crystallize the lesson in one sentence, it would be this: frequency is timing, amplitude is volume — they live in different dimensions of the same system, and most attention-NF protocols have been training the volume knob while leaving the tempo dial untouched. The Jacques study is a careful first step toward training the tempo dial, and the result — modest, real, mechanistically specific — is exactly what early evidence in a new direction should look like.
Protocols are not chosen because they sound elegant in a paragraph. They are chosen because the intake, the physiology, the symptoms, and the response patterns all line up. With IAF as a new addition to that toolkit, the case-by-case clinical reasoning gets more nuanced — and more nuance, in this work, is a good thing.
Conclusion
This is a careful, well-powered demonstration that individual alpha frequency is not just a stable trait of the brain. It is a trainable parameter with measurable consequences for the timing of attention. The mechanism is specific: faster alpha leads to faster cortical disengagement, which leads to faster behavioral responses. The effect is clinically modest in healthy adults — a 7% reduction in response time across a five-day protocol, in roughly half of the trained sample — and the study is honest about its scope. Only 14 to 16% of the behavioral effect could be statistically attributed to the trained pathway. The rest was left unexplained, which is closer to clinical reality than to clinical marketing.
What the work advances most is the quality of the question the field is asking. Does neurofeedback work? is increasingly tired and increasingly unhelpful. The better question — for whom, on what target, via what mechanism, with what dose, in what context — is the one this study takes seriously. Studies like this one, with large samples, active sham controls, mediation analyses, and mechanistic framing, are how the field gets to a defensible clinical evidence base.
For clinicians, the practical signal is narrower but real: clients with naturally slower alpha rhythms — post-concussion fatigue, mild cognitive impairment, depression-related cognitive slowing, mental-fatigue and burnout presentations — may be the ones who stand to benefit most from IAF-targeted training, and the trainability of IAF appears reasonably high in adults under controlled conditions. For researchers, the deeper message is that frequency and amplitude are different parameters with different mechanisms and likely different clinical indications. The field is overdue for that distinction.
It is not nothing. It is not magic either. It is a careful, well-controlled mechanistic step toward a different kind of attention protocol — one that asks not how much the cortex engages, but when. That is a worthwhile question, and increasingly a testable one.
References
Jacques, C., Verdonk, C., Cardoso, E., Ferreira, A., Vanneau, T., Beauchamps, V., Dondaine, T., Léger, D., Le Van Quyen, M., Gomez-Merino, D., Sauvet, F., Chennaoui, M., & Quiquempoix, M. (2026). Successful closed-loop neurofeedback alpha frequency modulation enhances the temporal dynamics of attention. NeuroImage, 332, 121912. https://doi.org/10.1016/j.neuroimage.2026.121912 (Open access, CC BY 4.0)