Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!
Janet A. Jokela discusses the profound impact of artificial intelligence in health care, exploring how AI is reshaping clinical decision-making, reducing physician burnout, and strengthening the patient-physician bond. She highlights the potential of AI to address health care inequities and ethical challenges, while also considering concerns about transparency, bias, and the future role of physicians. This conversation illuminates the balance between innovation and responsibility in the rapidly evolving landscape of AI in medicine.
Janet A. Jokela, MD, MPH, ACP’s Treasurer 2022-2025, served as the Regional Dean of the University of Illinois College of Medicine-Urbana, and currently serves as Professor and Senior Associate Dean of Engagement at the Carle Illinois College of Medicine, Urbana, IL.
She discusses the KevinMD article, “Navigating the world of artificial intelligence in health care.”
Our presenting sponsor is DAX Copilot by Microsoft.
Do you spend more time on administrative tasks like clinical documentation than you do with patients? You’re not alone. Clinicians report spending up to two hours on administrative tasks for each hour of patient care. Microsoft is committed to helping clinicians restore the balance with DAX Copilot, an AI-powered, voice-enabled solution that automates clinical documentation and workflows.
70 percent of physicians who use DAX Copilot say it improves their work-life balance while reducing feelings of burnout and fatigue. Patients love it too! 93 percent of patients say their physician is more personable and conversational, and 75 percent of physicians say it improves patient experiences.
Help restore your work-life balance with DAX Copilot, your AI assistant for automated clinical documentation and workflows.
VISIT SPONSOR → https://aka.ms/kevinmd
SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast
RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended
GET CME FOR THIS EPISODE → https://www.kevinmd.com/cme
I’m partnering with Learner+ to offer clinicians access to an AI-powered reflective portfolio that rewards CME/CE credits from meaningful reflections. Find out more: https://www.kevinmd.com/learnerplus
Transcript
Kevin Pho: Hi and welcome to the show. Subscribe at KevinMD.com/podcast. Today, we welcome back Janet A. Jokela. She is the treasurer of the American College of Physicians and an infectious disease physician. Today’s KevinMD article is “Navigating the world of artificial intelligence in health care.” Janet, welcome back to the show.
Janet A. Jokela: Thank you so much, Kevin. Delighted to be here.
Kevin Pho: All right, so tell us what your latest article is about.
Janet A. Jokela: ACP published a policy paper on artificial intelligence in health care this past year. I thought it would be a good opportunity to highlight that, review that, and chat about this topic with you and your audience.
Kevin Pho: Yeah, so I think that the evolution of that intersection between AI and health care is almost changing daily in terms of the applications that AI has and the implications for the way we practice medicine going forward. So tell me some of the highlights from the ACP paper.
Janet A. Jokela: Yeah, so in this paper, there are ten positions, and it’s all laid out very clearly. One of them is that AI may be thought of more as augmented intelligence as opposed to artificial intelligence—that it’s a tool to be used by physicians, that it is not going to replace physicians. It must not replace physicians; a patient needs a physician. Another one is that all these principles of AI development and use must be aligned with medical ethics. That will help enhance the patient-physician relationship and everything else that we hold so dear when we care for patients. So it touches on things like that.
Kevin Pho: Through your observations, through your interactions in ACP—and of course you work in an academic medical center—how are you seeing AI intersect with what you’re doing in medicine on a daily basis?
Janet A. Jokela: Yeah, well, certainly in medical education, it’s a very hot topic. How do we teach this to our medical students? What do we teach to our medical students? How do we prepare them to enter this rapidly changing world? So that’s one thing that is certainly high on people’s minds, like what do we do and how do we do this? Whether that’s with medical students or residents, that’s kind of a hot topic.
In practice, there are certainly a number of practices across the country who are now using digital scribes, which is a really interesting thing. A physician may have an interaction with a patient, have a visit with a patient, and then at the end of the meeting, the digital scribe will spit out a draft note. That’s pretty cool. And then, of course, there are other avenues where artificial intelligence is being used in clinical medicine, especially in radiology—like mammography—and also pathology, ophthalmology, dermatology, and so on. So there’s that whole diagnostic type of activity that AI is doing there, but also tasks like scheduling clinicians or scheduling patients. So there’s a wide array of places where AI is intersecting with us in medicine.
Kevin Pho: So let’s talk about the first thing, medical education. Of course, you are a medical educator yourself. How are medical students, residents, and interns utilizing AI today? And as someone who oversees that, do you see any red flags or potential dangers in the way that your students are using artificial intelligence?
Janet A. Jokela: Yeah, good question—excellent question. One of the main things that we see—and this is a hot topic in the world of publishing as well—is, for instance, maybe not so much in medical school, but more in undergraduate education. Let’s say students have an assignment to write X, Y, Z. Do they do it themselves, or do they go to ChatGPT to write something or create something to turn in? And that’s a really interesting ethical issue—who’s doing the creating, and where is this coming from?
Then you dive deeper, and you realize that the databases that ChatGPT or whatever system is being used draws upon—what are those databases? What’s the input into those databases that create the output? We don’t know, and we don’t have control over that right now. That is a concern, because there may be falsehoods or inaccuracies that are put forth as the output, and that can be quite problematic. So it’s a challenge these days—how to navigate this, and also how to guide students in the wild west of AI in this digital landscape.
Kevin Pho: In medical school, and in residency and fellowships, are you or your colleagues giving specific examples of what not to do, giving specific rules, and teaching them the boundaries of how they can ethically use AI when they’re learning to become physicians?
Janet A. Jokela: Yeah, there’s more and more of that. I think that’s growing. I think for many faculty, who may not be digital natives, this is just a completely new landscape. How do you teach on this topic to students who are digital natives? Absolutely, though, in medical school, in residencies, in fellowships, there are efforts across the country and at various institutions to specifically address these issues and concerns, and in many ways to focus on the ethics of AI. How do we use this ethically in health care and medicine? There’s a lot of work being done on that, and a lot of interest in it. As quickly as AI is changing, the educational efforts are also changing and evolving almost on a daily basis.
Kevin Pho: How about the scenario of working through a case, asking AI, for instance, to interpret lab values and come up with a differential diagnosis? Let’s say medical students and residents use AI for that aspect. What is the ACP’s position in terms of using AI to say, hypothetically, come up with a differential diagnosis and perhaps analyze a case?
Janet A. Jokela: Yeah, the paper that ACP just published does not specifically address that. It looks more at the fundamental development of AI tools, as opposed to being in the clinic or hospital today trying to sort out what’s going on with a particular patient. That said, it gets back to the database. What are the databases that are being used to generate a differential or a diagnosis? How confident are we in those databases that they’re accurate, that the information is equitable—things that are important to us in medicine? We’re at the forefront of this, and I think there are many people right now in medicine, in particular, because the stakes are so high for our patients, who are approaching this cautiously, which I think is warranted and understandable.
Kevin Pho: You mentioned earlier that instead of artificial intelligence, you might prefer to call it augmented intelligence. You mentioned tools like digital scribes—full disclosure, the presenting sponsor of my podcast is a digital scribe company—what are some other ways that AI can augment a physician’s practice?
Janet A. Jokela: Wow. Certainly with radiology, that’s been going on for quite some time, right? I think mammography is probably one of the best examples. There have been studies looking at who’s better—a radiologist or AI—and my understanding is that what’s best is them working together. So the physician, the radiologist, with AI to review mammograms or whatever the images are. Those outcomes are better than either alone, which is reassuring in many ways. But that’s an example of augmented intelligence—the artificial intelligence can augment the decision making of the physician but not replace it.
Kevin Pho: I’m happy that the ACP has come out ahead in terms of developing a policy as it relates to AI. How did the ACP come up with that?
Janet A. Jokela: Yeah, good question. The way the ACP comes up with any policy or position is that it bubbles up through the Board of Governors, from the chapters through their governors. It’s debated and discussed in the Board of Governors, and then it comes to the Board of Regents for approval, and then it goes to committees for review, implementation, and everything else. Sometimes there may be specific issues, whether it’s AI or other timely topics, that ACP leadership, including the Board of Regents, recognizes as really important to pursue and address, and then those get passed on to committees for further exploration.
This is not something that is done over a period of two weeks. It’s a very thoughtful, deliberative process. As the policy is researched and drafted, it comes back to the Board of Regents, the Board of Governors, and other leadership for review and input. They might suggest, “Say this, not that,” or “You missed the boat on this topic,” or “Think of it this way.” So again, it’s stepwise and deliberative, which I’m quite proud of. Then finally, it’s approved by the Board of Regents and submitted for publication. Being an ACP publication does not guarantee publication in Annals of Internal Medicine. The journal applies the same rigorous process to these papers as it does to anything else, and that’s important as well.
Kevin Pho: Are there any other areas of this policy paper that we haven’t touched upon? What are some of the other main points you want to bring up?
Janet A. Jokela: Yeah, one is transparency, emphasizing how important that is. Imagine a worst-case scenario, say, where you’re in the exam room with a patient saying, “You have X, Y, Z,” and the patient may ask, “Why do you say that?” and the response is, “Oh, because, you know, the model says so.” That’s a tough place to be, not necessarily understanding how the model worked or why it worked that way. If that’s not disclosed to the patient—that, “Oh, this is an AI-generated recommendation for a treatment or diagnosis”—that’s problematic. So transparency is really important.
Another one related to all of this is equity. Again, getting back to the databases that are used to build these models: if the information coming into those models does not include patients of all backgrounds, all ethnicities, or any other key demographic, that could potentially lead to biases in the output of recommendations for treatment, diagnosis, or anything else. So those are some of the other positions mentioned in the paper.
Kevin Pho: So how about a message to practicing internal medicine physicians like myself? Not all physicians may be familiar with the nuances of artificial intelligence. If someone is just starting out, wants to incorporate some of these tools—maybe it’s a digital scribe, maybe it’s asking ChatGPT a medical question, or entering personal patient information into ChatGPT—what are some red flags or guardrails they should consider before doing that?
Janet A. Jokela: Data privacy and patient privacy are paramount. There are some health systems that have established their own large databases but put in place guardrails around patient privacy and data privacy. That’s really critical. For individual physicians working outside such a system, we do not want to—and it would be a violation of our oath—enter information that is protected or identifiable in some way. We can’t do that; that’s a big thing.
One place that I think is using AI in an interesting way is DynaMedex. It’s something that’s available through ACP. They’ve created their own internal AI system within that tool, so when a question is put in, it searches internally across all the references in their database and gives an answer. That’s a clever way to put guardrails around what’s being searched, instead of searching the entire internet. It confines the search to a more rigorous base of evidence, which is reassuring.
Kevin Pho: And for those who aren’t familiar with what DynaMedex is, give us a 20-second blurb as to what that is.
Janet A. Jokela: Yeah, it’s a clinical decision support tool. It’s all online, all electronic, and available through ACP if you’re an ACP member. You can put a search term in there, say “atrial fibrillation” or “treatment for AFib,” and it will bring you to the AFib page, with references, short statements, and so on. It’s all referenced, so it’s reassuring. It inspires confidence in what you’re saying and recommending for your patients. It certainly limits the search to a database that’s more evidence-based, compared to a search engine that draws on the entire internet.
Kevin Pho: So going forward, as we said earlier, the evolution of artificial intelligence is very fast, sometimes faster than organizations can keep up with. From your standpoint or the ACP standpoint, where do you see AI going forward? What are some predictions you may have as it relates to the intersection between AI and health care?
Janet A. Jokela: Sure, good question. Eric Topol’s book, Deep Medicine, published in 2019, touches on this. His thinking suggests that AI may eventually save physicians time. He’s very forthright in saying that primary care is hard, in many ways broken, and that it’s hard to find the time to connect with patients. Internal medicine physicians, and all physicians who see patients, want to protect that time. If AI tools do save us time, we can preserve it to spend with our patients, which is where we all want to be.
Kevin Pho: Do you recall any specific ways that Dr. Topol predicted in 2019 that it would save us time? Certainly, digital scribes can help, as you mentioned. Is there anything else he predicted that may prove prescient?
Janet A. Jokela: I think the digital scribes are a big part. We know people spend a lot of time on notes, and yes, there are templates in EHRs that people use, but digital scribes can help. Another one is incorporating billing procedures into the digital scribe system. There may be other tasks, such as prior authorizations, that AI could help with—though AI for prior auth has gotten some bad press recently, which may be well warranted. We’ll see how that evolves, but there may be other insurance-related aspects of what we do that AI can assist with and ultimately save us some time.
Kevin Pho: Janet, you’ve seen the evolution of various medical technologies throughout the years. Taking a step back, do you ever wish medicine could go back to a simpler time, just a doctor and a patient in an exam room, without having to come up with policy papers about new technology? From your own personal reflections, do you ever wish that?
Janet A. Jokela: Yes and no. We’ve had tremendous advances in medicine, which have helped our patients in many ways, so I would hate to go back and lose those. That said, the whole health care infrastructure is a challenge, as we all know. If there were a way to go back in time and perhaps redirect some of that, it would be exciting. The challenge of finding the time and attention that our patients need is critical. To preserve that, and to be able to communicate effectively with our patients, would be lovely.
Kevin Pho: We’re talking to Janet A. Jokela. She’s treasurer of the American College of Physicians, and we’ve been discussing “Navigating the world of artificial intelligence in health care.” Janet, as always, let’s end with some take-home messages you want to leave with the KevinMD audience.
Janet A. Jokela: Sure. First, AI is here. It’s going to evolve in ways we can’t predict, but it will not replace or supplant physicians. Second, it’s critical that AI development and use are aligned with medical ethics. That means it’s really important for physicians, who may not be AI experts, to be involved in the creation and regulation of these systems. Lastly, there’s a lot of optimism about how AI can help all of us, but most importantly, how it will ultimately help our patients. That’s an exciting place to be.
Kevin Pho: Janet, as always, thank you so much for sharing your perspective and insight, and thanks again for coming back on the show.
Janet A. Jokela: Thank you so much, Kevin. Delighted to be here.

