Why AI scribes are changing medicine and the hidden risks you must know


Introduction: the AI scribe revolution

Let’s be real: AI scribes are incredible. I’ve personally used one for well over 3,000 clinical encounters. If you’ve been using one, you already know. If you haven’t, you’re probably considering it or you’ve heard the buzz and are cautiously curious.

Notes that used to take forever are now magically generated with near-perfect grammar, complete sentences, and even structured formatting. It’s borderline addictive. I’ve been using an AI scribe for more than 6 months now in my urology practice, and I can’t imagine going back to the old way.

My pre-AI scribe self was pretty darn efficient in the EHR anyway. I had my templates down, and I could generate a note pretty quickly with a combination of copy-forward for return visits, dot phrases, and Dragon. So why would I switch?

I wondered, and you might too, who did those notes really serve? They became bloated over time and often didn’t capture the nuance of the encounter.

With my new AI-assisted workflow, I feel more like the doctor I set out to be. After all, isn’t that the whole point? I’m now mentally present in the moment with the patient, making better eye contact and actively listening more attentively.

Looking back, I’m surprised by how much mental energy I used to spend pressing F2 and navigating through template blanks. Now, without that tedious process, I feel significantly less drained by the end of the day. I’m seeing more patients in the same amount of time, by my own choice, and somehow expending less energy doing so.

I checked my system-tracked documentation metrics (Big Brother is watching, didn’t you know?) and everything checks out—no more late-night documentation, significantly reduced charting time, and fully original note content without copy-pasting. All major wins.

But here’s the thing: When you fundamentally change the way documentation happens, you introduce a whole new set of problems. Some are obvious, some unexpected. From technical hiccups and subtle misinterpretations to legal uncertainties and workflow disruptions, these challenges can catch you off guard if you’re not paying attention.

Not the old problems we all hate (click fatigue, note bloat, typing at midnight), but weird new pitfalls that didn’t exist before. Some are technical, some are legal, and some are just human nature.

If you’re using an AI scribe (or thinking about it), you need to know what these new risks are—and more importantly, how to avoid them.

1. The note that never was: technical failures

AI notes work beautifully 99 percent of the time. Maybe even 99.9 percent in my experience—technical failure is rare.

But there’s nothing quite like finishing a full patient visit, reaching for your AI-generated note, and realizing … it doesn’t exist.

Maybe you forgot to hit record. Maybe you did hit record, but the app glitched. Maybe it actually did record, but for some reason it failed on the back end, and the note never rendered. Rapidly growing cloud-based software products are bound to have occasional glitches. While no system is foolproof, a little patience is necessary as they evolve and improve.

The end result is the same: You now have to reconstruct the entire note from memory, which is the exact opposite of why you started using AI in the first place. And suddenly, you realize that you’ve been relying on the AI to do the remembering for you—you turned off that constant mental narration, the one where you used to tell yourself what to remember as you went. Now, when the AI fails, that safety net is gone, and recalling the details feels like reaching for something that was never stored in the first place.

How to prevent this:

  • Build a habit of always checking that the scribe is recording before you start.
  • Check that the note actually exists before you move on to the next patient.
  • Expect occasional failures so they don’t derail your day—take a deep breath and remind yourself that you documented manually for years. When it happens, just shift gears and handle this one patient the old way. It’s a temporary hiccup, not a catastrophe.

2. Proofreading less—trusting too much

After you get your AI scribe dialed in, you’ll notice that you have to edit less and less—until you’re editing very little, or nothing at all. AI-generated notes are so darn good that proofreading starts to feel unnecessary. And let’s be honest, after seeing hundreds of beautifully structured notes, you just assume they’re all fine.

But here’s the problem: AI isn’t perfect. It gets small things wrong, and you won’t notice unless you check.

Example: A patient with hematuria comes in, and you discuss different possible causes. The AI writes:

“Plan: Monitor hematuria.”

But what you actually said was:

“Plan: Order CT urogram, urine cytology, and return for office cystoscopy to evaluate hematuria.”

That’s a significant difference. “Monitor” implies you’re just keeping an eye on it, while your real intent was to actively investigate it. If you don’t catch that and the patient gets lost to follow-up, that’s a problem.

Skipping a day of proofreading your AI-generated notes is a bit like skipping flossing—you might get away with it for a while, but eventually, it’ll catch up to you.

What to do:

  • Always proofread the entire note, but pay extra attention to the Assessment & Plan before signing it.
  • Be skeptical of AI perfection. Assume it made a subtle mistake somewhere, and go find it.

3. AI captures what you would have filtered out

Before AI scribes, if a patient casually mentioned something unrelated—“Oh, by the way, my left big toe has been aching”—you wouldn’t put that in your note unless it was actually relevant. You were the ultimate arbiter of what complaints made it into the documentation—some of that was appropriate clinical filtering, and some was medicolegal defensiveness.

But AI doesn’t filter. It tends to include everything (except small talk—though if AI ever starts documenting every time a patient insists their urine smells weird but “not in a bad way,” we’re in for some interesting charts).

Now imagine this: A patient comes in for BPH. You have a whole conversation about urinary symptoms. Somewhere in the visit, they offhandedly mention, “I think I had a little testicular pain the other day, but it went away.”

Your AI scribe transcribes:

“Patient reports recent testicular pain.”

Except … you didn’t address it because you were quite sure it wasn’t relevant. Now, weeks later, as if fate had a particularly dark sense of humor, the patient develops actual testicular torsion, and someone pulls up your note. It looks like you knew about the pain but did nothing. See the problem?

How to prevent this:

  • If something sneaks into the note that you didn’t actually evaluate, delete it or explicitly state: “Patient mentioned testicular pain, but no current symptoms or concerns today.”
  • Proofreading is critical—AI scribes don’t discriminate between what’s relevant and what’s not, and sometimes that can lead to documentation headaches.

4. Copy-pasting into the wrong chart

This one is unique to AI scribes that aren’t integrated into the EHR: You have to manually copy and paste your notes from the AI scribe program into your EHR. I personally don’t mind the copy/paste step—it’s a necessary workaround. But I’ll admit it: it’s only a matter of time before one of those notes ends up in the wrong patient’s chart, and that’s a headache none of us need.

Example: You’re moving fast, copying notes between tabs (maybe too many open). The wrong chart is open. You hit paste, sign the note, move on.

Then a few hours later, you realize: You just documented Mr. Johnson’s kidney stone visit … in Ms. Rodriguez’s chart. Or worse yet, you don’t realize until weeks later when a confused patient calls asking why their note says they had a prostate exam—except the patient is a woman, which makes for an awkward phone call.

How to prevent this:

  • Before pasting, triple-check the patient’s name and DOB.
  • Ask for direct EHR integration if you can.

5. AI-generated notes can fail to capture your intent

AI scribes don’t think. They don’t understand nuance. They document what was said, but they don’t always capture what you meant.

Example: A patient with prostate cancer asks about herbal treatments. You say something vague like, “Yeah, there’s not a lot of evidence, but some patients like to try them.”

Your AI scribe writes:

“Patient interested in alternative medicine for prostate cancer.”

Now … does that imply you recommended it? Could this be misinterpreted later? Now imagine sitting in a deposition, trying to explain that, yes, the note says one thing, but that’s not actually what you meant. You’re stuck clarifying that the AI phrased it that way—not you—and that your actual intent was lost in translation.

Fix:

  • Always proofread documentation about diagnostic plans, treatment decisions, or shared decision-making.

6. Model drift: when AI starts changing without warning

Most AI scribes rely on powerful large language models from third-party providers like OpenAI or Anthropic. These models improve over time, but they also change over time. Occasionally, they drift in ways that impact how they generate notes.

Example: Maybe you’ve trained your AI scribe to document in a specific style, but suddenly you notice subtle shifts—certain phrases disappear, wording becomes inconsistent, or new quirks emerge. The AI isn’t broken; the model has just evolved without warning.

How to prevent this:

  • Keep an eye out for small but noticeable changes in how your notes are structured.
  • Periodically review your past notes to ensure consistency.
  • If something feels off, check with your AI scribe vendor. They may have switched models or updated parameters.
  • Don’t freak out. Model drift is normal, and most changes are subtle. If something major feels different, flag it and adjust accordingly.

7. Over-reliance on AI: losing your own documentation skills

AI scribes are so good that it’s easy to forget how to function without them. When the system works, it’s a dream. But what happens if it goes down for an entire day? Or a week? Are you still efficient without it?

The risk here is similar to what happened with GPS. People used to memorize routes, but now, if Google Maps glitches, we’re suddenly lost. Over time, reliance on AI scribes could erode our ability to quickly and accurately document a case from scratch.

How to prevent this:

  • Every so often, challenge yourself to document a full patient visit manually—just to stay sharp.
  • Keep a few personal templates or dot phrases handy for when AI is unavailable.
  • Don’t let the scribe think for you—review and refine notes so you stay actively engaged in documentation.

Final thoughts: AI scribes are here to stay, but we need to adapt

I love my AI scribe, and I can’t imagine going back. It saves me two hours per clinic day, cuts down my documentation burden, and lets me focus more on my patients. AI scribes give us the chance to reclaim focus, and that’s worth celebrating.

But it’s not magic. It introduces new problems that didn’t exist before, and we have to be aware of them. The key is intentional use—proofread consistently, check for missing nuance, and be mindful of what makes it into the note. If we get this right, AI scribes will be the best documentation tool we’ve ever had.

David Canes is a urologist.

