This isn’t entirely news to people in the field doing research, but it’s important information to keep in mind when anyone starts pushing fMRI (or SPECT) scans into popular media discussions about neurology or psychiatry.
There have been some high profile influencer doctors pushing brain imaging scans as diagnostic tools for years. Dr. Amen is one of the worst offenders with his clinics that charge thousands of dollars for SPECT scans (not the same as the fMRI in this paper but with similar interpretation issues) on patients. Insurance won’t cover them because there’s no scientific basis for using them in diagnosing or treating ADHD or chronic pain, but his clinics will push them on patients. Seeing an image of their brain with some colors overlayed and having someone confidently read it like tea leaves is highly convincing to people who want answers. Dr. Amen has made the rounds on Dr. Phil and other outlets, as well as amassing millions of followers on social media.
Dr. Mike, a rare YouTube doctor who is not peddling supplements and wares, and thus seems to be at the forefront of medical critical thinking on the platform, interviewed Dr. Amen recently [0]. I haven't finished the interview yet, but having watched some of his others, the general approach is to let the interviewee make their grandiose claims, agree with whatever vague generalities and truisms they use in their rhetoric (yes, it's true, doctors don't spend enough time explaining things to patients!), and then lay into them on the actual science and evidence.

[0] https://www.youtube.com/watch?v=J-SHgZ1XPXs
Dr. Mike did an incredible job in that interview. He gave Dr. Amen all the rope he needed to hang himself with his own words. When you're hawking a diagnostic method, and you're not interested in building up the foundation of evidence for it by doing a double-blinded, randomized controlled study, or in letting the results of said study change how you're treating patients, it's pretty clear who the snake oil salesman is.
Back in 2009 I remember reading about how a dead salmon showed "brain activity" in an fMRI scan (https://www.wired.com/2009/09/fmrisalmon/). fMRI studies are very frequently invoked unscientifically and out of context.
My previous job was at a startup doing BMI (brain-machine interfaces) for research. For the first time I had the chance to work with expensive neural signal measurement tools (mainly EEG for us, but some teams used fMRI), and I quickly learned how absolutely horrible the signal-to-noise ratio (SNR) is in this field.
And how almost impossible it was to reproduce many published and well-cited results. It was both exciting and jarring to talk with the neuroscientists, because they of course knew about this and knew how to read the papers, but the people on the funding/business side of course didn't spend much time putting emphasis on it.
One of the teams presented an accepted paper that basically used deep learning (attention) to predict the images a person was thinking of from fMRI signals. When I asked "but DL is proven to be able to find patterns even in random noise, so how can you be sure this is not just overfitting to artefacts?" there wasn't really any answer (or rather, the publication didn't take that into account, although it can be determined experimentally). Still, a month later I saw TechXplore or some other tech news site write an article about it, something like "AI can now read your brain", with the 1984 implications and so on.
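To make that overfitting worry concrete, here is a minimal sketch (my own toy example, not the presented paper's pipeline): with few trials, many voxels, and voxel selection done on the full dataset before cross-validation, a decoder trained on pure noise still looks like it "works".

```python
# Pure-noise "fMRI" data and random labels, yet the decoder looks accurate
# because feature selection leaks label information (circular analysis).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 20_000                    # tiny sample, huge feature count
X = rng.standard_normal((n_trials, n_voxels))      # noise standing in for voxel features
y = rng.integers(0, 2, n_trials)                   # labels with no relation to X

# The leak: pick the 50 voxels most correlated with y using ALL trials...
corr = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(n_voxels)])
selected = np.argsort(corr)[-50:]

# ...then cross-validate only the classifier. Accuracy lands far above the
# 50% chance level even though there is nothing real to decode.
acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, selected], y, cv=5)
print(f"mean decoding accuracy on pure noise: {acc.mean():.2f}")
```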
So this is indeed something most practitioners, master's and PhD students alike, probably realize relatively early.
So now when someone says "you know mindfulness is proven to change your brainwaves?", I always add my story: "yes, but the study was done with EEG, so I don't trust the scientific backing of it" (anecdotally, though, it helps me).
There is a lot of reliable science done using EEG and fMRI; I believe you learned the wrong lesson here. The important thing is to treat motion and physiological sources of noise as a first-order problem that must be taken very seriously and requires strict data-quality inclusion criteria. As for deep learning in fMRI/EEG, your response about overfitting is too sweepingly broad to apply to the entire field.
To put it succinctly, I think you have overfit your conclusions on the amount of data you have seen.
I would argue in fact almost all fMRI research is unreliable, and formally so (test-retest reliabilities are in fact quite miserable: see my post below, https://news.ycombinator.com/item?id=46289133).
EDIT: The reason being, with reliabilities as bad as these, it is obvious almost all fMRI studies are massively underpowered, and you really need to have hundreds or even up to a thousand participants to detect effects with any statistical reliability. Very few fMRI studies ever have even close to these numbers.
The difference is that EEG can be used usefully in e.g. biofeedback training and the study of sleep phases, so there is in fact enough signal here for it to be broadly useful in some simple cases. It is not clear fMRI has enough signal for anything even as simple as these things though.
But none of this (signal/noise ratio, etc) is related to the topic of the article, which claims that even with good signal, blood flow is not useful to determine brain activity.
In related news: ironically, psychedelics disrupt the normal link between the brain's neuronal activity and blood flow (https://source.washu.edu/2025/12/psychedelics-disrupt-normal...), casting some doubt on findings that under psychedelics more of the brain is connected (since fMRI showed elevated blood flow, suggesting higher brain activity).
As someone who used to work at the Cognitive Neurophysiology Lab at the Scripps Institute, doing some work on functional brain imaging, I can confirm this was not news even thirty years ago. I guess this is trying to make some point to lay people?
Are there proposed reasons for increased blood flow to brain regions other than neural activity? Are neurons flushing waste products or something when less active?
The BOLD response (the coupling between blood oxygenation and neuronal activity) is pretty much accepted in neuroscience. There have been criticisms of it (non-neuronal contributions, the mystery of negative responses/correlations), but in general it is well accepted.
The measurement of the BOLD response is well accepted, but its interpretation with respect to cognition is still largely unclear. Most papers that assume the BOLD response can uniformly be interpreted as "activation" are quite dubious.
Yes, I stupidly read the headline and said "no duh", but they are making a point about our understanding of brain activity. I was thinking about the part of the signal that is reliably filtered out; they are talking about something else. Sorry, I was wrong.
fMRI has been abused by a lot of researchers, doctors, and authors over the years even though experts in the field knew the reality. It’s worth repeating the challenges of interpreting fMRI data to a wider audience.
The way I understood it is that while individual fMRI studies can be amazing, it is borderline impossible to compare them when made using different people or even different MRI machines. So reproducibility is a big issue, even though the tech itself is extremely promising.
This isn't really true. The issue is that when you combine data across multiple MRI scanners (sites), you need to account for random effects (e.g. site-specific means and variances); see solutions like ComBat. Also, if the sites have different equipment versions/manufacturers, those scanners can have different SNR profiles. The other issue is that there are many preprocessing steps, with many ways to perform each of them. In general, researchers don't process the data in multiple ways and choose the way that gives them the result they want, or anything nefarious like that, but it does make comparisons difficult, since the effects of different preprocessing variations can be significant. To defend against this, many peer reviewers, like myself, request that researchers perform the preprocessing multiple ways to assess how robust the results are to those choices. Another way the field has combatted this issue has been software like fMRIPrep.
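For readers unfamiliar with the harmonization step, here is a rough sketch of the core idea: remove site-specific means and variances per feature. This is purely illustrative; the real ComBat method additionally applies empirical Bayes shrinkage across features, so treat this as a cartoon, not the neuroCombat implementation.

```python
# Illustrative per-site location/scale harmonization (the basic idea behind
# ComBat-style correction, minus the empirical Bayes machinery).
import numpy as np
import pandas as pd

def naive_harmonize(features: pd.DataFrame, site: pd.Series) -> pd.DataFrame:
    """Rescale each site's features to the pooled mean and standard deviation."""
    pooled_mean = features.mean()
    pooled_std = features.std()
    out = features.copy()
    for s in site.unique():
        rows = site == s
        site_mean = features[rows].mean()
        site_std = features[rows].std().replace(0, 1)   # guard against constant features
        out.loc[rows] = (features[rows] - site_mean) / site_std * pooled_std + pooled_mean
    return out

# Tiny usage example with made-up connectivity features from two scanners.
df = pd.DataFrame({"conn_a": [0.1, 0.2, 0.9, 1.1], "conn_b": [1.0, 1.2, 3.0, 3.3]})
sites = pd.Series(["siteA", "siteA", "siteB", "siteB"])
print(naive_harmonize(df, sites))
```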
Individual fMRI is not a useful diagnostic tool for general conditions. There have been some clinics trying to push it (or SPECT) as a tool for diagnosing things like ADHD or chronic pain, but there is no scientific basis for this. The operator can basically crank up the noise and get some activity to show up, then tell the patient it’s a sign they have “ring of fire type ADHD” because they set the color pattern to reds and a circular pattern showed up at some point.
fMRI is a cool, expensive tech, like so many others in genetics and other diagnostics. These technologies create good jobs ("doing well by doing good").
But as other comments point out, and practitioners know, their usefulness for patients is more dubious.
I wonder how much variation there is between a person who does certain mental activity regularly vs a person who rarely does it.
If they were to measure a person who performs mental arithmetic on a daily basis, I'd expect his brain activity and oxygen consumption to be lower than those of a person who never does it. How much difference would that make?
It involved going to the lab and practicing the thing (a puzzle / maze) I would be shown during the actual MRI. I think I went in to “practice” a couple times before showing up and doing it in the machine.
IIRC the purpose of practicing was exactly that: to avoid me trying to learn something during the scan (since that wasn't the intention of the study).
In other words, I think you can control for that variable.
(Side note: I absolutely fell asleep during half the scan. Oops! I felt bad, but I guess that’s a risk when you recruit sleep deprived college kids!)
This headline is a bit misleading on first read, since it only concerns functional MRI (fMRI), which has been controversial for a long time. A prominent example is the "activity" that was detected in a dead salmon.
It's not that fMRI itself is controversial, it's that it is prone to statistical abuse unless you're careful in how you analyse the data. That's what the dead salmon study showed - some voxels will appear "active" purely by statistical chance, so without correction you will get spurious activations.
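A quick way to see the scale of the problem (a toy simulation, not the salmon study's actual method): test a large number of "voxels" containing nothing but noise, and count how many cross an uncorrected threshold.

```python
# Toy multiple-comparisons demo: t-test ~100k noise-only "voxels". With an
# uncorrected threshold a scattering of voxels looks "active" by chance
# alone; a familywise correction (plain Bonferroni here) removes them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, n_scans = 100_000, 30
noise = rng.standard_normal((n_voxels, n_scans))        # no true signal anywhere

_, p = stats.ttest_1samp(noise, popmean=0.0, axis=1)
print("voxels 'active' at uncorrected p < 0.001:", int(np.sum(p < 0.001)))
print("voxels 'active' after Bonferroni (p < 0.05/N):", int(np.sum(p < 0.05 / n_voxels)))
```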
f is functional.
MRIs are basically huge magnets used for imaging. When you apply a strong magnetic field, different tissue types and densities react differently, and the MRI is basically measuring how those tissues respond to the magnet. It is very good for imaging soft tissue, but not so much bone. Someone figured out that you can measure blood flow using the MRI, because blood cells react to a magnetic field and then "relax" at a known rate. Blood flow is correlated with brain activity: when more neurons are firing, they require more energy, and therefore more blood. So fMRI uses blood flow as a proxy for brain activity.
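To make the "proxy" idea concrete, here is a hedged sketch of the standard mass-univariate analysis built on it (my own illustration, not any particular package's pipeline): convolve the task timing with an assumed hemodynamic response and regress each voxel's signal against it.

```python
# Sketch of the standard fMRI general linear model (GLM) idea: predict the
# BOLD signal by convolving the task timing with a hemodynamic response
# function (HRF), then estimate how strongly a voxel follows that prediction.
import numpy as np

TR, n_vols = 2.0, 120                        # one volume every 2 s, 4 minutes of data
task = np.zeros(n_vols)
task[::20] = 1.0                             # brief stimulus every 40 s

t = np.arange(0, 30, TR)
hrf = t**5 * np.exp(-t) / 120.0              # crude gamma-shaped HRF peaking near 5 s
hrf /= hrf.max()                             # normalize to unit peak
regressor = np.convolve(task, hrf)[:n_vols]  # predicted BOLD time course

rng = np.random.default_rng(3)
voxel = 0.8 * regressor + 0.3 * rng.standard_normal(n_vols)   # noisy "active" voxel

X = np.column_stack([regressor, np.ones(n_vols)])             # design matrix + intercept
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
print(f"estimated activation beta: {beta[0]:.2f} (simulated true effect 0.8)")
```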
fMRI doesn't measure blood flow; it measures the oxygen level in the blood. Hemoglobin molecules change shape when they carry oxygen, and the different shapes react differently to magnets, which is a real stroke of luck.
This study is validating a commonplace fMRI measure (the change in the blood-oxygenation-level-dependent, or BOLD, signal) by comparing it with a different MRI technique: a multiparametric quantitative BOLD model, a different model of BOLD derived from two separate MRI scans that measure two different kinds of signal (transverse relaxation rates) and then multiply/divide by a bunch of constants to get a value.
I'm a software engineer in this field, and this is my layman-learns-a-bit-of-shop-talk understanding of it. Both of these techniques involve multiple layers of statistical assumptions and multiple steps of "analysing" the data, which themselves involve implicit assumptions, rules of thumb and other steps that have never sat well with me. A very basic example of this kind of multi-step data massaging is "does this signal look a bit rough? No worries, let's Gaussian-filter it".
A lot of my skepticism is due to ignorance, no doubt, and I'd probably be braver in making general claims from the image I get in the end if I was more educated in the actual biophysics of it. But my main point is that it is not at all obvious that you can simply claim "signal B shows that signal A doesn't correspond to actual brain activity", when it is quite arguable whether signal B really does measure the ground truth, or whether it is simply prone to different modelling errors.
In the paper itself, the authors say it is limited by methodology: because they don't have the device to get an independent measure of brain activation, they use quantitative MRI. They also say it's because of radiation exposure and blah blah, but the real reason is their uni can't afford a PET scanner for them to use.
"The gold standard for CBF and CMRO2 measurements is 15O PET; but this technique requires an on-site cyclotron, a sophisticated imaging setup and substantial experience in handling three different radiotracers (CBF, 15O-water; CBV, 15O-CO; OEF, 15O-gas) of short half-lives8,35. Furthermore, this invasive method poses certain risks to participants owing to the exposure to radioactivity and arterial sampling."
If you have a PET/MR system [0], you can probably do this "gold standard" comparison, and I know that one is used for research studies. I think you can piggy-back off a different study's healthy controls to write a paper like this, if that study already uses PET/MR and if adding an oxygen metabolite scan isn't a big problem. But that's speaking as someone who does not design experiments.

[0] https://www.siemens-healthineers.com/en-us/magnetic-resonanc...
Why did TUM let this misleading headline front the news release? Don't we have enough issues with academia? The result just means BOLD is an imperfect proxy.
It is especially unforgivable that the title of the news release itself is about "40 percent of MRI signals". What, as in all MRI, not just fMRI? Hopefully an honest typo and not just resulting from ignorance.
Curious what you find to be "bs" about the results of this paper? That statistical corrections are necessary when analysing fMRI scans to prevent spurious "activations" that are only there by chance?
For task fMRI, the test-retest reliability is so poor it should probably be considered useless or bordering on pseudoscience, except for in some very limited cases like activation of the visual and/or auditory and/or motor cortex with certain kinds of clear stimuli. For resting-state fMRI (rs-fMRI), the reliabilities are a bit better, but also still generally extremely poor [1-3].
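For anyone wanting to sanity-check such numbers, test-retest reliability in these papers is usually summarized with an intraclass correlation. A bare-bones version (ICC(2,1) computed by hand from the two-way ANOVA decomposition; an illustration, not a validated implementation) looks like this:

```python
# Bare-bones ICC(2,1) for a test-retest design: rows are subjects, columns
# are the two scan sessions (Shrout & Fleiss two-way random-effects form).
import numpy as np

def icc_2_1(session1, session2):
    data = np.column_stack([session1, session2])     # shape: subjects x sessions
    n, k = data.shape
    grand = data.mean()
    ms_subj = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_sess = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0, keepdims=True) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err + k * (ms_sess - ms_err) / n)

# Made-up "region activation" values for 8 subjects scanned twice.
s1 = np.array([1.2, 0.8, 1.5, 0.3, 0.9, 1.1, 0.6, 1.4])
s2 = np.array([0.7, 1.3, 0.9, 1.0, 0.4, 1.6, 1.2, 0.5])
print(f"ICC(2,1) = {icc_2_1(s1, s2):.2f}")   # values below ~0.4 are generally called poor
```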
There are also two IMO major and devastating theoretical concerns re fMRI that make the whole thing border on nonsense. One is the assumed relation between the BOLD signal and "activation", and two is the extremely poor temporal resolution of fMRI.
It is typically assumed that the BOLD response (increased oxygen uptake) (1) corresponds to greater metabolic activity, and (2) increased metabolic activity corresponds to "activation" of those tissues. This trades dubiously on the meaning of "activation", often assuming "activation = excitatory", when we know in fact much metabolic activity is inhibitory. fMRI cannot distinguish between these things.
There are other, deeper issues: it is not even clear to what extent the BOLD signal comes from neurons at all (it could be glia), it is possible the BOLD signal must be interpreted differently in different brain regions, and the usual analyses looking for a "spike" in BOLD activity may be basically nonsense, since BOLD activity isn't really related to spiking at all, but rather to the local field potential. All this is reviewed in [4].
Re: temporal resolution, essentially, if you pay attention to what is going on in your mind, you know that a LOT of thought can happen in just 0.5 seconds (think of when you have a flash of insight that unifies a bunch of ideas). Or think of how quickly processing must be happening in order for us to process a movie or animation sequence where there are up to e.g. 10 cuts / shots within a single second. There is also just biological evidence that neurons take only milliseconds to spike, and that a sequence of spikes (well under 100ms) can convey meaningful information.
However, the shortest repetition times (i.e. the best temporal resolution) in fMRI are only around 0.7 seconds. IMO this means the ONLY way to analyze fMRI that makes sense is to see it as an emergent phenomenon that may be correlated with certain kinds of long-term activity, reflected in cyclical / low-frequency patterns of the BOLD response. I.e. rs-fMRI is the only fMRI that has ever made much sense a priori. A solution may be to combine EEG (extremely high temporal resolution, clear use in monitoring realtime brain changes like meditative states and in biofeedback training) with fMRI, as in e.g. [5]. But it may well still be the case that fMRI remains mostly useless.
[1] Elliott, M. L., Knodt, A. R., Ireland, D., Morris, M. L., Poulton, R., Ramrakha, S., Sison, M. L., Moffitt, T. E., Caspi, A., & Hariri, A. R. (2020). What Is the Test-Retest Reliability of Common Task-Functional MRI Measures? New Empirical Evidence and a Meta-Analysis. Psychological Science, 31(7), 792–806. https://doi.org/10.1177/0956797620916786
[2] Herting, M. M., Gautam, P., Chen, Z., Mezher, A., & Vetter, N. C. (2018). Test-retest reliability of longitudinal task-based fMRI: Implications for developmental studies. Developmental Cognitive Neuroscience, 33, 17–26. https://doi.org/10.1016/j.dcn.2017.07.001
[3] Termenon, M., Jaillard, A., Delon-Martin, C., & Achard, S. (2016). Reliability of graph analysis of resting state fMRI using test-retest dataset from the Human Connectome Project. NeuroImage, 142, 172–187. https://doi.org/10.1016/j.neuroimage.2016.05.062
[4] Ekstrom, A. (2010). How and when the fMRI BOLD signal relates to underlying neural activity: The danger in dissociation. Brain Research Reviews, 62(2), 233–244. https://doi.org/10.1016/j.brainresrev.2009.12.004
[5] Ahmad, R. F., Malik, A. S., Kamel, N., Reza, F., & Abdullah, J. M. (2016). Simultaneous EEG-fMRI for working memory of the human brain. Australasian Physical & Engineering Sciences in Medicine, 39(2), 363–378. https://doi.org/10.1007/s13246-016-0438-x
Even if neuronal activity is (obviously) faster, the (assumed) neurovascular coupling is slower. Typically it takes several seconds to get a BOLD response after a stimulus or task, and this has nothing to do with the fMRI sampling rate (fNIRS can have a much faster sampling rate, but the BOLD response it measures is equally slow). Think of it this way: neuronal spiking happens on a scale of up to a few hundred milliseconds, while the body's changes in blood flow happen much more slowly.
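A hedged sketch of what that delay looks like with the canonical double-gamma HRF used by standard analysis packages (SPM-style parameter values; just to illustrate the time scale, not a claim about any specific study):

```python
# Canonical double-gamma HRF: a response peaking a few seconds after the
# stimulus, followed by a slower undershoot.
import numpy as np
from scipy import stats

def canonical_hrf(t):
    peak = stats.gamma.pdf(t, a=6)          # main response, peaks near 5 s
    undershoot = stats.gamma.pdf(t, a=16)   # undershoot, peaks near 15 s
    return peak - undershoot / 6.0

t = np.arange(0, 32, 0.1)
h = canonical_hrf(t)
print(f"BOLD response peaks ~{t[np.argmax(h)]:.1f} s after a brief stimulus")
```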
The issue is that measuring the BOLD response, even in the best-case scenario, is a very, very indirect measure of neuronal activity. This is typically lost when people refer to fMRI studies as discovering "mental representations" in the brain and other nonsense, but here we are. Criticising the validity of the BOLD response itself, though, is certainly interesting.
Right, my point is sort of that both the BOLD response and fMRI sampling rates are far too "slow" (not nearly approaching the Nyquist frequency, I guess) a priori to deeply investigate something as fast as cognition.
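Rough numbers behind that intuition (my own arithmetic, using the ~0.7 s repetition time mentioned above):

```python
# Back-of-the-envelope: even a fast repetition time of 0.7 s gives a Nyquist
# limit far below the frequencies of fast neural dynamics (e.g. ~10 Hz alpha
# rhythms, or millisecond-scale spiking).
TR = 0.7                      # seconds per fMRI volume
sampling_rate = 1.0 / TR      # ~1.43 volumes per second
nyquist = sampling_rate / 2   # ~0.71 Hz: anything faster cannot be resolved
print(f"sampling rate ~ {sampling_rate:.2f} Hz, Nyquist limit ~ {nyquist:.2f} Hz")
```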
Depends on what you mean by cognition, but as you yourself said, BOLD may be correlated with certain kinds of long(er)-term activity, and that in itself is very useful if interpreted carefully. No one claims to detect single "thoughts" or anything of the sort, at least I haven't seen anything so shameless.
Well, a lot of task fMRI designs are pretty shameless and clearly haven't taken the temporal resolution issues seriously, at least when it comes to interpreting their findings in discussions (i.e. claiming that certain regions being involved must mean certain kind of cognition, e.g. "thoughts" must be involved too). And there have definitely been a few papers trying to show they can e.g. reconstruct the image ("thought") in a person's mind from the fMRI signal.
But I don't think we are really disagreeing on anything major here. I do think there is likely some useful potential locked away in carefully designed resting-state fMRI studies, probably especially for certain chronic and/or persistent systemic cognitive things like e.g. ADHD, autism, or, perhaps more fruitfully, it might just help with more basic understanding of things like sleep. But, I also won't be holding my breath for anything major coming out of fMRI anytime soon.
Hasty post. I apologize.
Structural MRI is even more abused, where people find "differences" between 2 groups with ridiculously small sample sizes.
Wondering how they created that baseline. Was it with fMRI data (which has deviance from actual data, as pointed out)? Or was it through other means?