I'm a healthcare CIO of 12 years; I've evaluated 4 of these tools and deployed 2, one of which is currently in use at my current healthcare employer. I am very measured on AI, but the results I've seen from these virtual scribes are HUGE. In every case we have IMMEDIATELY seen improvements in patient NPS scores, provider satisfaction, and note quality. Notes are more standardized as well as more verbose and detailed, which makes it easier for future providers to understand the case. These better notes reduce our claim rejection rate.
And what converted me was direct patient response. Across the board patient feedback is extremely positive, with the most common comment being along the lines of "I really felt like the doctor connected with me better and they were more present in the visit."
These AI scribes really DO improve patient care, I've seen it with my own eyes.
https://jamanetwork.com/journals/jamanetworkopen/fullarticle...

=> the error rate was 7.4% in the version generated by speech recognition software, 0.4% after transcriptionist review, and 0.3% in the final version signed by physicians. Among the errors at each stage, 15.8%, 26.9%, and 25.9% involved clinical information, and 5.7%, 8.9%, and 6.4% were clinically significant, respectively.
AI "scribes" in a perfectly replicable best-of-all-worlds scenario (2025): https://bmjdigitalhealth.bmj.com/content/1/1/e000092

=> Omissions dominated error counts (83.8%, p<<0.001), with CAISs varying widely in error frequency and severity, and a median of 1–6 omissions per consultation (depending on CAIS). Although less frequent, hallucinations and factual inaccuracies were more often clinically serious. No tested CAIS produced error-free summaries.
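Those stage-by-stage figures compose into absolute rates: the overall error rate at a stage times the fraction of those errors that were clinically significant gives the chance of a clinically significant error at that stage. A quick back-of-the-envelope (illustrative arithmetic only, using the percentages quoted above):

```python
# Back-of-the-envelope from the transcription-pipeline figures quoted above.
# Each entry: (overall error rate at the stage,
#              fraction of those errors that were clinically significant)
stages = {
    "speech recognition output":     (0.074, 0.057),
    "after transcriptionist review": (0.004, 0.089),
    "final signed version":          (0.003, 0.064),
}

# Absolute clinically significant error rate per stage.
absolute = {
    name: error_rate * significant
    for name, (error_rate, significant) in stages.items()
}

for name, rate in absolute.items():
    print(f"{name}: {rate:.4%} clinically significant errors")
```

By that arithmetic the raw speech-recognition output carries roughly a 0.42% clinically significant error rate versus about 0.02% in the signed version — a ~20x reduction attributable to the human review stages.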
On the gripping hand, people who work in the management end of the US healthcare industry can't be trusted with healthcare or information security to begin with.
My dad likes to joke around and his doctor uses some kind of transcription service. Time for fun!
His doctor asked him about using drugs and he made a joke that was something like "I only use coke" - meaning coca-cola. Of course his doctor knew he was kidding about drinking too much soda because he eats/drinks too much sugar. So they had a little laugh and moved on.
BUT now it's in his medical transcripts. My mom said it "transcribed" it as something like "the patient responded he has used cocaine recently".
I guess his doctor doesn't go in and actually fix things or even read over what the transcription says...
Also both of my parents have accents and have reported really weird transcriptions that don't match what they actually said.
So now my mom has told my dad he can't make jokes with the doctor anymore because even if the doctor knows he's joking it's going to get noted down as a "fact".
Errors can be a significant problem in manual charting as well.
I know a medical professional who applies an evaluation process much like the one outlined in your second link to human-written charts. They then use that feedback to guide the department on how to improve its charting.
So, don't presume that those error rates cited in those studies should be compared to a baseline rate of zero. If you review human-written charts, you will often also not have an error rate of zero.
It’s been a year or so since I last read The Mote in God’s Eye / The Gripping Hand, but I was randomly thinking of it this morning. Very funny that I would see a reference to it the same day.
Be careful with initial impressions of metrics. We humans have a heavy tendency to anchor to our first judgments or impressions. We see a win and assume the win is long term, has no downsides, and was caused by the new information/change.
Combine that with the Hawthorne effect, and new business or health initiatives can look great simply because participants notice the change and the increased attention. Many human patterns, however, have a tendency to regress to the mean.
Personally I have seen this a lot with developer tools and DevOps. A new SEV/incident/disaster happens and everyone rushes to create or onboard to a tool that would help. Around the office everyone raves about it and is sure that it will fix all the issues. And the number of commits goes up, or the number of SEVs in an area decreases for a while. People were paying attention. After a while the tool starts to slow down or fall out of use. It has rough edges that weren't seen, or scenarios that were supposed to be supported never get fully integrated. Eventually the patterns regress, but now with more tools and more complexity.
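That "adopt a tool right after a spike, watch the numbers improve" pattern is regression to the mean in miniature. A small self-contained simulation (purely synthetic numbers, not real incident data) shows the effect:

```python
import random

random.seed(42)

# Synthetic monthly incident counts: the same underlying process every
# month, mean 10, with noise. No tool is ever adopted in this simulation.
months = [random.gauss(10, 3) for _ in range(10_000)]

# Pick out the unusually bad months (the "SEV spike" that triggers a
# tooling push) and pair each with the month that immediately followed.
spikes = [(m, months[i + 1]) for i, m in enumerate(months[:-1]) if m > 15]

avg_spike = sum(m for m, _ in spikes) / len(spikes)
avg_next = sum(n for _, n in spikes) / len(spikes)

print(f"average spike month:     {avg_spike:.1f} incidents")
print(f"average following month: {avg_next:.1f} incidents")
# The following month sits near the true mean of 10, even though
# nothing changed between the two months.
```

Months selected because they were extreme are followed, on average, by ordinary months — so whatever got adopted right after the spike takes credit for an improvement that pure chance would have delivered anyway.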
You can do that by recording and transcribing (many methods) or your doctor has to write on the fly, or worse, has their head in their computer while you talk in their general direction.
Letting doctors talk and examine and not write is a wholly better experience.
Offsite third parties are the problem here. If this was done automatically without data leaving the room, is there a problem? Do you have the same objections to how your digital notes are stored?
For now. It always begins as voluntary. But then doctors will start to treat people who opt out the way TSA treats me when I opt out: a hostile adversary.
I already get glares and sighs when I dare to actually read every word of a multipage form I am expected to sign without reading. Was told once I would lose my appointment if I took longer than a few minutes to read more than 10 pages because I could not be checked in until I signed. Other patients are waiting, your exercise of your human rights is inefficient.
Then soon I'll have to pay a higher copay to opt out. Then I won't be able to opt out at all.
All in the name of optimizing patient NPS scores and patient throughput.
I've never had this problem. IME every doctors office recommends showing up 15-20 minutes early to a new-patient appointment for the explicit reason of filling out paperwork.
> Was told once I would lose my appointment if I took longer than a few minutes to read more than 10 pages.
I'd be finding a new doctor at that point. Ridiculous. I love how doctors can be 30 minutes late for their appointments because they're running behind and all their appointment delays are cascading, but if the patient reads a document for 5 minutes, they're the problem!
Yep I would agree as a patient. My current doctor types so slow that 6 out of the 10 short minutes in an appointment just disappear while he types. Even with other docs who can touch type, it will free them up to focus completely on the appointment and reduce the hours they spend charting afterwards.
> improvements in patient NPS scores, provider satisfaction, and note quality
How are note quality improvements measured? Vibe-notes might be more verbose and better sounding (which would explain the NPS and satisfaction metrics), but still not actually match the doctor's actual words or intent. Are the AI-generated notes actually compared with ground truth to prove they are accurate?
I’ve been in tech and medicine too. Consider that any “HUGE” effect in this context is likely exaggerated, especially for something as prosaic as a note-taking assistant.
As a patient sitting with a doctor, I don’t care how standardized the notes are. I don’t care about anyone’s NPS score. I do want the doctor to connect with me, but I also remember not too long ago when doctors did this anyway, without any assistance from robots.
> I also remember not too long ago when doctors did this anyway, without any assistance from robots.
Or with assistance from other humans.
The last time I had surgery, every time I met with the surgeon (about six times), he had an intern following him around with a Thinkpad, typing in everything said.
The intern has the ability to understand context, idiomatic expressions, emotion, and a dozen other important and useful things that an AI transcription will never capture.
That’s probably not an intern. Doctors with enough pull can get dedicated scribes like this, but they aren’t cheap, which is why most doctors don’t get them.
I got an erroneous Type II diabetes diagnosis dropped into the note by the AI scribe at my last appointment because my PCP discussed the A1C test he was ordering. Would not recommend. That isn't to say that manually typed notes or speech to text dictated notes are perfect (dot phrases have ended up "documenting" plenty of conversations that never happened), but a false diagnosis of a chronic disease seems like a really bad failure.
How do you control for quality variation between patients? In my experience, AI note taking tools display a clear bias against participants who are {quieter, ESL, women, ...}. How can you evaluate whether these biases show up in a medical setting?
Good, I'm glad. Now find a way to do it in-house. Shipping our conversation to some random-ass fly-by-night SaaS who pinky-swear-promises they're HIPAA-compliant is a non-starter for a medical professional I'd actually want to give money to.
In an article critiquing overuse of AI assistants, the author confesses at the end that the article itself was partly authored by Claude, which introduced errors in the citations, lol.
Nonetheless, I come away from this article with the sense that ambient devices automating documentation of an encounter are still a net win, with caveats about the need for the doctor to polish the note to reflect his or her own narrative voice.
"I am not saying ambient scribes are bad technology."
is this a counterpoint? he just seems to be wary of the risk, without a firm position and decided to personally stop using it. people often overestimate their own skills and think their own charting is better than that of others, that doesn't mean the tech doesn't work.
1) in the event you find yourself partially or totally disabled but the records don’t really make a good case for it and your provider has a dismissive attitude about filling out additional documentation to substantiate what they failed to in your records.
You’re not necessarily going to get approved for FMLA, STD, LTD, SS, etc. based on a diagnosis or test results alone. They will nitpick over, say, heart failure, as if that’s magically and spontaneously going to go away. If you’re telling your provider that you’re limited by things like, oh I don’t know, “I’m only awake for 2-4 hours before I need to sleep again” or “some days I just can’t do it and sleep 20 hours” but it’s not in your chart… expect denials and clarifications and a huge burden on you to prove why it’s limiting.
2) continuity of care, so you don’t end up explaining everything from the top to a specialist or having them run all these tests and procedures from square one — when there’s months long backlogs , and we already did all this and you need treatment - but - there wasn’t much to work with in your referring chart.
You might not appreciate the “intrusion” if you’re healthy and just worried about your privacy.
If/When things go south and you find yourself fighting these entities for a year or two or three while they nitpick and delay and deny and drag their feet , you’ll be glad an “AI” kept up meticulous records because this is phenomenally stressful and an endless burden on you when they don’t.
So, their AI slop can vomit out all this extra info on why insurance companies should pay them or why your condition is in fact disabling, and now their AI slop can comb through it looking for all that. Because they will try to avoid paying or approving any kind of leave or benefits if it’s not there
And god forbid you hand them a form where they’re being asked to explain themselves. 50/50 on them being eager to help out or rolling their eyes and saying something really nasty about the imposition. And then even when they do that, they almost never file a copy in your chart so your chart STILL doesn’t substantiate your claims. I’m all for an “ai” doing the progress notes in a case where the facility or provider can’t be fucked to do so.
Happily that’s not true of my current provider, who just, does that anyway (?) But I’ve been around enough to know they’re an exception. Even when providers are on your side and mean well, and want to bend over backwards to help you in any way they can — and I want to just acknowledge that’s the situation I’m in today — honestly , sometimes they just forget some of the details when they do their notes.
That’s why some places make the provider do it in real time while they’re talking to you, so they didn’t forget something relevant thirty minutes later. The other side of the coin here may be that some providers find that distracting or off putting to be typing away like a stenographer while they’re examining you…
I think it would be fair to say this can all be tedious and a burden for both patients and providers. There’s just a world of difference between a provider who wants to do this to provide excellency in care, and a provider who wants to do this because they resent it and think it’s beneath them.
You had it 20 years ago: doctors spoke into recorders, transcriptionists turned that into notes, the docs reviewed them.
The first study I cited replaces the "spoke into recorders" stage with non-AI voice recognition.
The second study replaces the "spoke into recorders" stage with LLM voice recognition, and... crucially... also replaces the educated transcriptionist step with nothing.
I imagine that the real problem is that the voice recognition can be classic or LLM and it just doesn't matter as much as having two humans in the loop instead of one. But that's not a story which gets you to replace cheap voicerec with expensive AI.
If I allow it, is the data from my meeting sent offsite at any stage, for example to an LLM service (e.g., Anthropic, OpenAI, etc.)? Or do the LLM vendors (or any others) have access to the internal data at any stage?
I understand the concerns and I am not sure I would allow myself to be recorded until I knew more.
However, I do think we are in a situation where everybody knows that healthcare costs need to come down, that doctors and medical professionals are spread too thin, forced to see ever more patients in the same number of hours, and yet for every attempt to improve efficiency there is a “no, not that way“ response.
If I paid all my doctors $1200/hr and doubled how much time they spend with or on me, that'd still pale in comparison to healthcare expenditures attributed to me between actual insurance payments and actual money leaving my bank account. Doctors being spread too thin is very much a separate issue.
I definitely agree that medical professionals are spread too thin and automation seems like it would be a boon, but, as the article points out, the introduction of automation likely won't translate to more doctor-patient time; it'll translate to doctors seeing more patients.
The solution not only introduces a problem (decreased privacy) but could reinforce the existing problem it's trying to solve.
> it'll translate to doctors seeing more patients.
This is also a good thing. Even in supposedly developed parts of the world like San Francisco it can be difficult to find a PCP that is taking new patients.
Where healthcare is concerned, America is not what anyone considers "first world". Your healthcare system is more backward than most third world nations. I would rather leave the US than receive medical treatment there. I have never even considered trusting the US healthcare system. When I lived there I would rather fly home and get treated (in a third world country) than lose all my savings getting inadequate care in the US. I know people who have been through large and expensive treatment plans in the global south, who paid less for the complete treatment than Americans pay for the ambulance getting you to the hospital.
Your healthcare system is more backward than most third world nations. I would rather leave the US than receive medical treatment there.
And yet the wealthiest people in the world, who can have the best healthcare anywhere they want on the planet, even with private doctors, routinely choose to be treated in Rochester, Minnesota; Boston, Massachusetts; Houston, Texas; Baltimore, Maryland; and Los Angeles, California.
The U.S. is by no means perfect, but there's a reason that there are entire medical facilities in the U.S. that cater exclusively to people from other countries. Just listen to local radio in Palm Springs and you'll hear commercials along the lines of "Tired of waiting, or simply can't get the medical care you need in Canada? Come to our hospital!"
Meanwhile, if I wanted to have my recent surgery in Canada, I'd have to wait almost a year for a slot to open up. Here I waited all of two weeks. And the newspaper headlines in the UK are full of horror stories of patients dying in hospital hallways while doctors are on strike because everything is so great.
> I do think we are in a situation where everybody knows that healthcare costs need to come down that doctors and medical professionals are spread too thin
The problem is over optimization AND lack of people. As soon as there's an excuse for less staff because we have "digital record keeping" we're going to have less money and even less staff.
Having in person or remote notetakers is a great entry level job to do before you become a doctor. It could be boring but at least the terms are familiar and you get to know the person you're working with.
It's not like healthcare is an impossible problem to solve that needs more tech, we just refuse to spend money on people and (inexplicably) cannot help but dump tons of money into tech.
> The problem is over optimization not lack of people or resources. As soon as there's an excuse for less staff because we have "digital record keeping" we're going to have less money and even less staff.
At least in my area, it seems like lack of people is a problem. Sometimes it's lack of people because the pay is too low, but more often it's lack of people because the pool of qualified people is too small. And increasing pay increases healthcare costs, and healthcare costs are already very high. If digital tools allow the available staff to see more patients while delivering the same level of care (and without burning out the providers), then that means more capacity and fewer times when people want to see a doctor but can't. A similar argument applies to the same number of patients and a greater level of care. If it's more patients but a worse level of care, then it becomes tricky.
The pool of people is too small because the organization tasked with accrediting new doctors has a financial incentive, on behalf of its current members, to keep the number of doctors low.
Yes, and also almost all of these issues could be ascribed to all digital medical record-keeping. The fact that AI transcribed it matters relatively little.
One massive way to reduce healthcare costs is to remove caps from becoming a doctor; as long as you pass the tests and meet the requirements, why are we turning doctors away? So that existing doctors can be paid well above the market rate. There's a reason there's so many doctors in politics - it's very important for them to protect this business model.
Profit margins are capped by percentage. That creates a perverse incentive for insurance companies to pursue ever-increasing costs in order to increase absolute profits.
How do you know it's not the other way around? Give consent to incorporate another technology that will keep wages the same but allow them to treat more patients and extract more profit for the shareholders?
2. The point of insurance is they're supposed to pay for shit
3. You figure out how to get them to pay for shit, sign an agreement that removes me of any patient responsibility of the balance bill, and assure me in writing that I will owe $0 no matter what
> for every attempt to improve efficiency there is a “no, not that way“ response
They've tried everything except "train and hire more doctors" and they're just all out of ideas aside from "erode patients rights and lower overall quality of care"
The economics of medical school cost, time, and capped residency spots (some would argue this is price-fixing via artificial scarcity) make it hard to just “make more doctors”. Combine this with a highly litigious society that always demands a full doctor (when for 90% of things an NP or PA would do fine), plus an inverted population pyramid, and the problem is only exacerbated.
We need more doctors now, it takes 12 years to make a doctor, and by then the boomer cohorts’ aging and medical needs will have peaked.
Finally, even if we could do that, the top of the funnel candidate is substantially weaker with lower test scores and higher need for remedial classes. And for the good candidates, the ROI of medical school is not as good as it once was.
There might be some real concern about the cognitive and patient-interaction impacts of speech recognition being used... but on the other hand, it's more likely that details are missed when information is captured manually.
And the privacy/informed consent concerns here are silly, they apply to any of your charted data... and if you're going to any office that doesn't use the latest technology, your patient information is probably being sent between offices over fax anyway.
I live in a country with free public healthcare. In a recent doctor's visit, the doctor was interviewing me while a nurse was typing into the computer. Presumably so that the doctor would have more time to attend patients and so that she wouldn't get distracted.
It's fascinating how this translates to the idea that in the USA this should mean "more time with patients", but in reality it also means "more patients", and is somehow bad because there is a monetary drive.
This is seriously a good example of a domain that should enforce on-premises AI. Doctors absolutely can afford to buy an NVIDIA workstation. Transcribing text is not exactly super demanding, comparatively speaking. When did we even stop considering non-cloud services? If AI boomed 10 years ago, we wouldn't even be discussing this.
I'm more concerned about a record being made in general than how it is made. If I were to be affected by a tragedy and visit a psychologist or psychiatrist to receive support, it would likely require a diagnosis of depression to get insurance coverage, and having that on my record could make it more difficult and costly to legally fly an airplane or own a gun, and who knows what else.
Get help if you need it. Having periods of depression on your medical record doesn’t make your life more difficult, unless maybe you’re trying to be a spy or an astronaut or something.
HIPAA exists and has a lot of teeth. Given this extensive liability, I trust that if anything does go wrong they will be punished. Recordings might dramatically improve patient outcomes, and so I will allow them.
HIPAA enforces nothing other than a pinky-swear-promise of compliance. There are hundreds, if not thousands, of middlemen who sell SaaS like this to medical professionals. If one suffers a breach then shuts down, your doctor will just switch to the next one in line with no consequences because "they promised they were compliant". Meanwhile all your medical details will end up in a public dataset forevermore.
Honestly, my recent experience with this was really positive: the doctor actually said the technical stuff out loud to me for the first time in my life, in a way I could easily ask polite questions about and discuss with them.
In my case it was something very not sensitive, removing a benign tumor in a finger, which I have no problem telling the whole world about (I was awake for the surgery and got to watch; it was an incredibly fascinating experience that I want to write more about some day).
But I can imagine it would feel much more invasive if the subject were more sensitive.
> "Here is a real concern about implementation" → "Therefore you should refuse entirely"
This skips the middle step of "therefore we should implement it well."
I'm not convinced that we should be allowing doctors to record patient visits at this stage yet, but I'm really not convinced by these points, which largely don't hold up under closer examination.
A few that stuck out:
"Privacy" - Labs are routinely sent to third-party companies, and we don't do informed consent for that. The third-party argument isn't unique to recording.
"False promise of efficiency" - This doesn't really have anything to do with patients at all. It's a criticism of medical office management, not of physician-patient interactions. Telling patients to refuse a tool because management might exploit the productivity gains is asking patients to fight a labor battle on the provider's behalf.
"Consent can't be revoked mid-visit" - Consent typically can't be revoked in the middle of an appendectomy, or halfway through administering a vaccine either. Practical irrevocability is a normal feature of informed consent, not a special problem unique to recording. Proper consent processes in medical offices are a broader issue than consent about voice recordings specifically. Had the authors made the point that providers are being asked to obtain consent for tools whose technical implementation and privacy risks fall outside the provider's own domain knowledge — that would be a stronger argument. But that isn't quite the point they made, and their current framing doesn't wholly convince.
I think the "therefore we should implement it well" is not forgotten, it's elided because we don't think it's likely to happen.
Tech-naïve people think that we can build super duper encryption systems.
The more jaded amongst us know that people can get sloppy or complacent, it's rare to see a regulatory system that truly incentivises good practice, data breaches will happen eventually, and no-one will be held accountable.
> Labs are routinely sent to third-party companies
Labs are real businesses that do real things, and would have actual impact for a breach. Meanwhile any idiot can vibe-code a thin shim between a microphone and ChatGPT in a weekend, promise they're HIPAA-compliant, and start selling. Medical professionals have no obligation to do any diligence, and there's no reason for them to not just buy whoever-is-cheapest. They're not even close to the same thing.
This was extremely unconvincing for me. The site is now 500ing for me, so I can't fully quote it, but the arguments about privacy just fall flat. You don't know about Epic's, or GE's, or Philips' security either. You have to trust the institution of HIPAA et al. overall to at least make things right.
I really don't care if my recording becomes training data.
I would rather be spoken to like I'm not an idiot. Use technical terms please. I want precision.
Calling the US healthcare system underfunded might be the most wild part of the whole thing. We spend 5.3 trillion dollars a year. That's 17% of the entire economy.
> You don't know about Epic's, or GE's, or Philips' security either.
The argument that a new vendor's security is probably not worse than others' misses the point that by opting in, there is one more database/vendor/server where sensitive data about you resides, and which will eventually get hacked. It's usually a question not of whether, but when.

For instance, in the UK, on this very day news reported that half a million British people's medical data has been offered for sale on Alibaba, the "Chinese eBay". Trivial security advice is to "reduce the attack surface", i.e. to reduce the chances of getting hit by reducing one's presence in places where personal data is concentrated (which makes them attractive targets for hackers).
For example, when the German healthcare system launched its central electronic patient record, I opted out. One more system that, once hacked, won't have anything on me stored in it.
I always agree if this is for academic purposes, if it helps with research etc. I can't see why I shouldn't. We are just meat that will expire one day.
1. AI-generated charting.
2. The existence of a reliable record of the visit.
I am skeptical of the first in some cases (e.g., bias), but strongly in favor of the second.
My father is 80 and has Parkinson’s. He routinely leaves appointments unsure of what the doctor said, what changed, or what he is supposed to do next. Even when I attend with him, we sometimes disagree afterward about what exactly was recommended.
This happens with pediatric appointments too. My wife and I occasionally remember instructions differently: medication timing, symptoms to watch for, when to call back, whether something was “normal” or needed follow-up.
That is a care quality problem, not just a convenience problem.
The risks are real: privacy, consent, retention, training use, liability, and automation bias. But those argue for strict controls, not for a blanket refusal. Make it opt-in, give the patient access, prohibit training without explicit consent, keep retention short, and require clear auditability.
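For what it's worth, none of those controls are technically hard to express. A minimal sketch (every name here is hypothetical, illustrative only, and not any real EHR schema or API) of what an opt-in, auditable consent record could look like:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical consent record -- an illustration of the controls listed
# above, not a real EHR data model.
@dataclass
class RecordingConsent:
    patient_id: str
    granted_at: datetime
    # Training use is prohibited unless the patient explicitly opts in.
    allow_training_use: bool = False
    # Retention is short by default and must be extended deliberately.
    retention: timedelta = timedelta(days=30)
    # Every access or change is recorded for auditability.
    audit_log: list = field(default_factory=list)

    def log(self, event: str) -> None:
        self.audit_log.append((datetime.now(timezone.utc), event))

    def is_active(self) -> bool:
        return datetime.now(timezone.utc) < self.granted_at + self.retention

consent = RecordingConsent("patient-123", granted_at=datetime.now(timezone.utc))
consent.log("ambient recording started")
print(consent.is_active(), consent.allow_training_use)
```

The point is only that opt-in defaults, short retention, and an audit trail are design decisions, not technical obstacles.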
I do not want opaque AI quietly rewriting the medical record. But I also do not think “everyone relies on memory after a stressful 12-minute appointment” is some gold standard we should preserve.
Yes. It was great for when I had a major surgery last year and had a bazillion questions for the surgeon. But I don't always remember to. My parents definitely don't even think about it.
Oof yea, I just got surprised by this at a vet appointment for my dog, and it weirded me out. I just went along with it to get the visit over with, and I can see the benefit of having an accurate record of the visit, but we'll have to come to terms with the reality of this invasive surveillance as a society at some point, I imagine.
Why do we even need to consult doctors anymore? Just let the AI decide. Docs should be freed up for doing physical tests and interventions, or otherwise for providing more training data for the AI in cases where the AI isn't producing results or when a second look is urgently needed in an emergency situation.
This situation is real. I've had the same doctor my entire adult life (~25 years). We've got a pretty informal relationship. I even saw her hammered at a bar one night and had to give her a ride home because her friends were also drunk AF. Anyway, a few years ago, during an annual checkup, she asked how my family was doing and I made a joke about my brother drinking too much. A few weeks later I started receiving pamphlets in the mail about treating alcoholism, ads for rehab centers. I just brushed it off, didn't make any connection. Then, the next year, during my annual checkup, my doctor wasn't available, so I got a different doctor, someone I'd never talked to in my life. She immediately started asking me about my drinking. I fired back, asking WTF she was talking about. She said, "Oh, well your file says alcoholism runs in your family," and then started lecturing me about how getting over the shame of alcoholism is the first step to beating it. I don't even drink. No one in my family drinks other than my brother. He was drinking a lot at that time because his favorite NFL team (the LA Rams) was doing really well and he was celebrating a lot. And it was just a joke.
The next year, during my annual checkup, I gave my doctor a load of crap, telling her to record nothing I say unless I explicitly tell her to. She tried to defend the system, but she agreed. I'm still upset that my "file" still mentions alcoholism.
> "Oh, well your file says alcoholism runs in your family."
Medics often use private notes when handing over patients, where they share information that the patients themselves are not intended to see (and in many countries, not permitted to see). In particular, such records are used to share warnings if patients have been in any way "difficult".
It's interesting how lots of service providers of all sorts will insist that you agree to their Terms of Service, Acceptable Use Policy, End User License Agreement (or whatever they want to call it) before engaging with you, but when the consumer insists on enforcing their own personal policy in the opposite direction such as refusing consent to recording or feeding your PII into some opaque AI system, suddenly it's a problem.
And what converted me was direct patient response. Across the board patient feedback is extremely positive, with the most common comment being along the lines of "I really felt like the doctor connected with me better and they were more present in the visit."
These AI scribes really DO improve patient care, I've seen it with my own eyes.
https://jamanetwork.com/journals/jamanetworkopen/fullarticle...
=> the error rate was 7.4% in the version generated by speech recognition software, 0.4% after transcriptionist review, and 0.3% in the final version signed by physicians. Among the errors at each stage, 15.8%, 26.9%, and 25.9% involved clinical information, and 5.7%, 8.9%, and 6.4% were clinically significant, respectively.
AI "scribes" in a perfectly replicable best-of-all-worlds scenario (2025): https://bmjdigitalhealth.bmj.com/content/1/1/e000092
=> Omissions dominated error counts (83.8%, p<<0.001), with CAISs varying widely in error frequency and severity, and a median of 1–6 omissions per consultation (depending on CAIS). Although less frequent, hallucinations and factual inaccuracies were more often clinically serious. No tested CAIS produced error-free summaries.
On the gripping hand, people who work in the management end of the US healthcare industry can't be trusted with healthcare or information security to begin with.
His doctor asked him about using drugs and he made a joke that was something like "I only use coke" - meaning coca-cola. Of course his doctor knew he was kidding about drinking too much soda because he eats/drinks too much sugar. So they had a little laugh and moved on.
BUT now it's in his medical transcripts. My mom said it "transcribed" it as something like "the patient responded he has used cocaine recently".
I guess his doctor doesn't go in and actually fix things or even read over what the transcription says...
Also both of my parents have accents and have reported really weird transcriptions that don't match what they actually said.
So now my mom has told my dad he can't make jokes with the doctor anymore because even if the doctor knows he's joking it's going to get noted down as a "fact".
I know a medical professional who applies an evaluation process similar to the one outlined in your second link, but to human-written charts. They then use that feedback to guide the department on how to improve its charting.
So, don't presume that those error rates cited in those studies should be compared to a baseline rate of zero. If you review human-written charts, you will often also not have an error rate of zero.
It's been a year or so since I last read The Mote in God's Eye / The Gripping Hand, but I was randomly thinking of it this morning. Very funny that I would see a reference to it the same day.
So combine that with the Hawthorne effect: new business or health initiatives can look great simply because participants notice the change and notice the increased attention. But many human patterns have a tendency to regress to the mean.
Personally I have seen this a lot with developer tools and DevOps. A new SEV/incident/disaster happens and everyone rushes to create or onboard to a tool that would help. Around the office everyone raves about it and is sure that it will fix all the issues. And the number of commits goes up, or the number of SEVs in an area decreases for a while. People were paying attention. After a while the tool starts to slow down or not be used as much. It's got rough edges that weren't seen, or scenarios that were supposed to be supported never get fully integrated. Eventually the patterns regress, but now with more tools and more complexity.
- https://pmc.ncbi.nlm.nih.gov/articles/PMC1936999/
- https://arxiv.org/abs/2102.12893
I am intentionally cursing to express my anger at this casual betrayal of medical trust.
You can do that by recording and transcribing (many methods) or your doctor has to write on the fly, or worse, has their head in their computer while you talk in their general direction.
Letting doctors talk and examine and not write is a wholly better experience.
Offsite third parties are the problem here. If this was done automatically without data leaving the room, is there a problem? Do you have the same objections to how your digital notes are stored?
I already get glares and sighs when I dare to actually read every word of a multipage form I am expected to sign without reading. Was told once I would lose my appointment if I took longer than a few minutes to read more than 10 pages because I could not be checked in until I signed. Other patients are waiting, your exercise of your human rights is inefficient.
Then soon I'll have to pay a higher copay to opt out. Then I won't be able to opt out at all.
All in the name of optimizing patient NPS scores and patient throughput.
I'd be finding a new doctor at that point. Ridiculous. I love how doctors can be 30 minutes late for their appointments because they're running behind and all their appointment delays are cascading, but if the patient reads a document for 5 minutes, they're the problem!
How are note quality improvements measured? Vibe-notes might be more verbose and better sounding (which would explain the NPS and satisfaction metrics), but still not actually match the doctor's actual words or intent. Are the AI-generated notes actually compared with ground truth to prove they are accurate?
As a patient sitting with a doctor, I don’t care how standardized the notes are. I don’t care about anyone’s NPS score. I do want the doctor to connect with me, but I also remember not too long ago when doctors did this anyway, without any assistance from robots.
Or with assistance from other humans.
The last time I had surgery, every time I met with the surgeon (about six times), he had an intern following him around with a Thinkpad, typing in everything said.
The intern has the ability to understand context, idiomatic expressions, emotion, and a dozen other important and useful things that an AI transcription will never capture.
Scribes _feel_ good in the short-term, but it's not clear if they're actually good on longer time horizons.
Nonetheless, I come away from this article with the sense that the ambient devices automating documentation of an encounter are still a net win, with the caveat that the doctor needs to polish the note to reflect his or her own narrative voice.
That article is clearly LLM-assisted if not vibe-written, which is the height of irony given the context.
Note that the CIO is talking about patient satisfaction, which is a distinct target. I agree about the long-run benefit being unclear.
Is this a counterpoint? He just seems to be wary of the risk, without a firm position, and decided to personally stop using it. People often overestimate their own skills and think their own charting is better than that of others; that doesn't mean the tech doesn't work.
1) in the event you find yourself partially or totally disabled but the records don’t really make a good case for it and your provider has a dismissive attitude about filling out additional documentation to substantiate what they failed to in your records.
You’re not necessarily going to get approved for FMLA, STD, LTD, SS etc based on a diagnosis or test results alone. They will nitpick over say, heart failure, as if that’s magically and spontaneously going to go away. If you’re telling your provider that you’re limited by things like oh I don’t know, “I’m only awake for 2-4 hours before I need to sleep again” or “some days I just can’t do it and sleep 20 hours” but it’s not in your chart… expect denials and clarifications and a huge burden on you to prove why it’s limiting.
2) continuity of care, so you don’t end up explaining everything from the top to a specialist or having them run all these tests and procedures from square one — when there’s months long backlogs , and we already did all this and you need treatment - but - there wasn’t much to work with in your referring chart.
You might not appreciate the “intrusion” if you’re healthy and just worried about your privacy.
If/When things go south and you find yourself fighting these entities for a year or two or three while they nitpick and delay and deny and drag their feet , you’ll be glad an “AI” kept up meticulous records because this is phenomenally stressful and an endless burden on you when they don’t.
So, their AI slop can vomit out all this extra info on why insurance companies should pay them or why your condition is in fact disabling, and now their AI slop can comb through it looking for all that. Because they will try to avoid paying or approving any kind of leave or benefits if it’s not there
And god forbid you hand them a form where they’re being asked to explain themselves. 50/50 on them being eager to help out or rolling their eyes and saying something really nasty about the imposition. And then even when they do that, they almost never file a copy in your chart so your chart STILL doesn’t substantiate your claims. I’m all for an “ai” doing the progress notes in a case where the facility or provider can’t be fucked to do so.
Happily that’s not true of my current provider, who just, does that anyway (?) But I’ve been around enough to know they’re an exception. Even when providers are on your side and mean well, and want to bend over backwards to help you in any way they can — and I want to just acknowledge that’s the situation I’m in today — honestly , sometimes they just forget some of the details when they do their notes.
That's why some places make the provider do it in real time while they're talking to you, so they don't forget something relevant thirty minutes later. The other side of the coin is that some providers find it distracting or off-putting to be typing away like a stenographer while they're examining you…
I think it would be fair to say this can all be tedious and a burden for both patients and providers. There’s just a world of difference between a provider who wants to do this to provide excellency in care, and a provider who wants to do this because they resent it and think it’s beneath them.
I work in healthcare, and we spend oodles of time and money making sure every technology that can possibly be on-prem is.
Maybe it's just not technically possible yet?
The first study I cited replaces the "spoke into recorders" stage with non-AI voice recognition.
The second study replaces the "spoke into recorders" stage with LLM voice recognition, and... crucially... also replaces the educated transcriptionist step with nothing.
I imagine that the real problem is that the voice recognition can be classic or LLM and it just doesn't matter as much as having two humans in the loop instead of one. But that's not a story which gets you to replace cheap voicerec with expensive AI.
However, I do think we are in a situation where everybody knows that healthcare costs need to come down and that doctors and medical professionals are spread too thin, forced to see ever more patients in the same number of hours, and yet for every attempt to improve efficiency there is a "no, not that way" response.
Is that channel available on Blippo+?
The solution not only introduces a problem (decreased privacy) but could reinforce the existing problem it's trying to solve.
This is also a good thing. Even in supposedly developed parts of the world like San Francisco it can be difficult to find a PCP that is taking new patients.
If you're the former, it works great. If you're the latter, it can be mediocre to BRUTAL. Medical debt is our #1 or 2 cause of bankruptcy iirc.
Regardless of which class you are, if you can access the care, our outcomes are the best in the world for most things.
And yet the wealthiest people in the world, who can have the best healthcare anywhere they want on the planet, even with private doctors, routinely choose to be treated in Rochester, Minnesota; Boston, Massachusetts; Houston, Texas; Baltimore, Maryland; and Los Angeles, California.
The U.S. is by no means perfect, but there's a reason that there are entire medical facilities in the U.S. that cater exclusively to people from other countries. Just listen to local radio in Palm Springs and you'll hear commercials along the lines of "Tired of waiting, or simply can't get the medical care you need in Canada? Come to our hospital!"
Meanwhile, if I wanted to have my recent surgery in Canada, I'd have to wait almost a year for a slot to open up. Here I waited all of two weeks. And the newspaper headlines in the UK are full of horror stories of patients dying in hospital hallways while doctors are on strike because everything is so great.
The problem is over optimization AND lack of people. As soon as there's an excuse for less staff because we have "digital record keeping" we're going to have less money and even less staff.
Having in person or remote notetakers is a great entry level job to do before you become a doctor. It could be boring but at least the terms are familiar and you get to know the person you're working with.
It's not like healthcare is an impossible problem to solve that needs more tech, we just refuse to spend money on people and (inexplicably) cannot help but dump tons of money into tech.
At least in my area, it seems like lack of people is a problem. Sometimes it's lack of people because the pay is too low, but more often it's lack of people because the pool of qualified people is too small. And increasing pay increases healthcare costs, which are already very high. If digital tools allow the available staff to see more patients while delivering the same level of care (and without burning out the providers), then that means more capacity and fewer cases where people want to see a doctor but can't. Similar arguments apply to the same number of patients and a greater level of care. If it's more patients but a worse level of care, then it becomes tricky.
Uh... politics is almost uniformly lawyers and business people.
Also tests are the table-stakes to being a doctor (like leet code and programming).
Insurance company profit margins are capped by law and if anything their incentives are to pay the hospitals less.
US physician salaries are astronomical compared to anywhere else in the world.
1. I have health insurance
2. The point of insurance is they're supposed to pay for shit
3. You figure out how to get them to pay for shit, sign an agreement that removes me of any patient responsibility of the balance bill, and assure me in writing that I will owe $0 no matter what
Then you can record me.
They've tried everything except "train and hire more doctors" and they're just all out of ideas aside from "erode patients rights and lower overall quality of care"
We need more doctors now, and it takes 12 years to make a doctor; by then the boomer cohorts' aging and medical needs will have peaked.
Finally, even if we could do that, the top of the funnel candidate is substantially weaker with lower test scores and higher need for remedial classes. And for the good candidates, the ROI of medical school is not as good as it once was.
And the privacy/informed consent concerns here are silly, they apply to any of your charted data... and if you're going to any office that doesn't use the latest technology, your patient information is probably being sent between offices over fax anyway.
nit: that is a real efficiency gain. seeing more patients sounds better on the face of it.
It's fascinating how this translates to the idea that in the USA, this should mean "more time with patients", but in reality also means "more patients", and is somehow bad because there is a monetary drive.
Get help if you need it. Having periods of depression on your medical record doesn’t make your life more difficult, unless maybe you’re trying to be a spy or an astronaut or something.
HIPAA has laughably vague rules. It's not protecting much, and you probably have better protection through tort law wrt your private information.
In my case it was something very not sensitive, removing a benign tumor from a finger, which I have no problem telling the whole world about (I was awake for the surgery and got to watch, an incredibly fascinating experience that I want to write more about some day).
But I can imagine it would feel much more invasive if the subject were more sensitive.
> "Here is a real concern about implementation" → "Therefore you should refuse entirely"
This skips the middle step of "therefore we should implement it well."
I'm not convinced that we should be allowing doctors to record patient visits at this stage yet, but I'm really not convinced by these points, which largely don't hold up under closer examination.
A few that stuck out:
"Privacy" - Labs are routinely sent to third-party companies, and we don't do informed consent for that. The third-party argument isn't unique to recording.
"False promise of efficiency" - This doesn't really have anything to do with patients at all. It's a criticism of medical office management, not of physician-patient interactions. Telling patients to refuse a tool because management might exploit the productivity gains is asking patients to fight a labor battle on the provider's behalf.
"Consent can't be revoked mid-visit" - Consent typically can't be revoked in the middle of an appendectomy, or halfway through administering a vaccine either. Practical irrevocability is a normal feature of informed consent, not a special problem unique to recording. Proper consent processes in medical offices are a broader issue than consent about voice recordings specifically. Had the authors made the point that providers are being asked to obtain consent for tools whose technical implementation and privacy risks fall outside the provider's own domain knowledge — that would be a stronger argument. But that isn't quite the point they made, and their current framing doesn't wholly convince.
Tech-naïve people think that we can build super duper encryption systems.
The more jaded amongst us know that people can get sloppy or complacent, it's rare to see a regulatory system that truly incentivises good practice, data breaches will happen eventually, and no-one will be held accountable.
This is a big one in recent memory: https://www.theguardian.com/uk-news/2020/jun/10/babylon-heal...
Labs are real businesses that do real things, and would have actual impact for a breach. Meanwhile any idiot can vibe-code a thin shim between a microphone and ChatGPT in a weekend, promise they're HIPAA-compliant, and start selling. Medical professionals have no obligation to do any diligence, and there's no reason for them to not just buy whoever-is-cheapest. They're not even close to the same thing.
I really don't care if my recording becomes training data.
I would rather be spoken to like I'm not an idiot. Use technical terms please. I want precision.
Calling the US healthcare system underfunded might be the most wild part of the whole thing. We spend 5.3 trillion dollars a year. That's 17% of the entire economy.
The argument that a new vendor's security is probably no worse than others' misses the point: by opting in, there is one more database/vendor/server where sensitive data about you resides, and which will eventually get hacked. It's usually a question not of whether, but when.
For instance, in the UK, on this very day news reported that half a million British people's medical data had been offered for sale on Alibaba, the "Chinese eBay". Trivial security advice is to "reduce the attack surface", i.e. to reduce the chances of getting hit by reducing one's presence in places where personal data is concentrated (and thus makes an attractive target for hackers).
For example, when the German healthcare system launched its central electronic patient record, I opted out. One more system that, once hacked, won't have anything on me stored in it.
1. AI-generated charting. 2. The existence of a reliable record of the visit.
I am skeptical of the first in some cases (i.e. bias), but strongly in favor of the second.
My father is 80 and has Parkinson’s. He routinely leaves appointments unsure of what the doctor said, what changed, or what he is supposed to do next. Even when I attend with him, we sometimes disagree afterward about what exactly was recommended.
This happens with pediatric appointments too. My wife and I occasionally remember instructions differently: medication timing, symptoms to watch for, when to call back, whether something was “normal” or needed follow-up.
That is a care quality problem, not just a convenience problem.
The risks are real: privacy, consent, retention, training use, liability, and automation bias. But those argue for strict controls, not for a blanket refusal. Make it opt-in, give the patient access, prohibit training without explicit consent, keep retention short, and require clear auditability.
I do not want opaque AI quietly rewriting the medical record. But I also do not think “everyone relies on memory after a stressful 12-minute appointment” is some gold standard we should preserve.