The Point of Medicine
A FORUM OF CHRISTIAN MEDICAL & DENTAL ASSOCIATIONS®
AI in Medicine: A Primer
March 23, 2026
By Dr. Steven Willing
Brief introduction
Few technological innovations have garnered as much attention or spread as rapidly as AI. Since the debut of ChatGPT in 2022, the AI phenomenon has spawned new companies, new business models and a sense that AI is now a “must‑have” for savvy users. It has turned Nvidia into a multi‑trillion‑dollar corporation. According to OpenAI, more than 800 million people are regular users of ChatGPT.
Much media attention has focused on the risks and shortcomings of the large‑language models (LLMs) that underpin the technology. Popular consumer implementations include ChatGPT (OpenAI), Gemini (Google), Copilot (Microsoft), Claude (Anthropic) and Grok (X/Twitter), with the first three capturing the lion’s share of the market. All are available for free, with subscription tiers for heavier or more powerful use.
The technology has spread across a variety of industries, including healthcare. By the end of 2025, roughly 66 percent of physicians were using some form of health AI. According to the Washington Post, more than 340 FDA‑approved AI tools are currently in use.
What are they being used for? Table 1 lists several medical applications that are already widely available and immensely practical.
The Utility of AI Models
Image analysis
Clinicians have used computerized image analysis for decades, particularly in ultrasound and nuclear medicine. Earlier systems relied on explicit, rule‑based algorithms written by engineers. The transition to what is now properly called “artificial intelligence” occurred when models began learning their own features and decision logic directly from data—most notably with the adoption of deep learning in medical imaging over the last decade.
AI’s ability to detect subtle or previously unrecognized patterns has been especially influential in neuroscience, where functional imaging data (particularly MRI) is analyzed to identify patterns that would otherwise be undetectable.
Algorithmic diagnosis and risk stratification
Clinical risk tools crossed into AI territory when they stopped applying predefined clinical rules and instead learned complex, non-linear risk relationships directly from large datasets—often in ways no clinician explicitly specified. Early warning models for clinical deterioration, including sepsis, illustrate both the promise and the challenges of this approach. By analyzing complex relationships among vital signs, laboratory values, and chart patterns, some systems can flag elevated risk hours before traditional scoring tools. At the same time, several widely deployed sepsis models have produced high false-positive rates in real-world use, highlighting the need for careful validation and clinical oversight. Another example of AI’s emerging capability is deep-learning analysis of ECGs, which can detect left ventricular dysfunction or occult atrial fibrillation from tracings that appear normal to clinicians. These tools do not replace bedside judgment, but they demonstrate how AI can surface subtle patterns that clinicians may not consciously perceive.
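To make the contrast with rule-based scoring concrete, here is a minimal sketch in Python of a model that learns risk weights from data rather than applying predefined clinical cutoffs: a from-scratch logistic regression trained by gradient descent on synthetic vital-sign data. The features, coefficients and data are all invented for illustration; real deterioration models use far richer inputs and require rigorous clinical validation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "patients": heart rate, respiratory rate, lactate (invented data).
n = 500
X = rng.normal([80, 16, 1.2], [15, 4, 0.8], size=(n, 3))

# Synthetic outcome: deterioration risk rises with all three (illustrative only).
logits = 0.05 * (X[:, 0] - 80) + 0.2 * (X[:, 1] - 16) + 1.5 * (X[:, 2] - 1.2)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Standardize features, then fit logistic regression by gradient descent.
# No clinician specified these weights; they are learned from the data.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))
    w -= 0.5 * (Xs.T @ (p - y) / n)
    b -= 0.5 * (p - y).mean()

def risk(hr, rr, lactate):
    """Predicted deterioration probability for one (hypothetical) patient."""
    x = (np.array([hr, rr, lactate]) - X.mean(axis=0)) / X.std(axis=0)
    return float(1 / (1 + np.exp(-(x @ w + b))))

print(risk(70, 12, 0.8), risk(120, 28, 4.0))
```

The point of the sketch is the workflow, not the model: the relationships between inputs and risk are estimated from examples, which is what distinguishes learned risk tools from hand-written scoring rules.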
Document generation
Maintaining written medical records has long been one of the most burdensome tasks in healthcare. Time spent writing notes is time not spent with patients, families or in much‑needed rest.
Radiology was an early adopter of electronic transcription, driven by the need for near‑instantaneous reporting in ER and clinic settings. PowerScribe became widely available in the late 1990s and soon achieved broad acceptance despite its early shortcomings. Over the ensuing decades, the technology steadily improved, and voice recognition became standard across many specialties.
The new generation of AI tools goes beyond traditional dictation. Instead of simply converting speech to text, AI can turn a rough narrative into a structured clinical note—organizing the history, extracting key details, compressing redundancy, and generating a problem‑oriented assessment and plan. It can also produce different “versions” of the same information for different purposes, such as patient instructions, referral summaries or prior authorization letters. Used wisely, AI functions less like a faster keyboard and more like an efficient scribe‑editor that reduces clerical burden while helping clinicians communicate more clearly.
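For readers curious what sits under the hood of such tools, here is a hypothetical sketch in Python of one small piece: assembling a dictated narrative into instructions for a language model to return a structured note. The section headings and wording are invented for illustration, not a clinical standard, and production scribe systems are far more sophisticated.

```python
def build_note_prompt(raw_narrative: str, purpose: str = "clinical note") -> str:
    """Assemble an LLM prompt requesting a structured note.

    The section headings below are illustrative placeholders.
    """
    sections = ["History of Present Illness", "Pertinent Findings",
                "Assessment", "Plan"]
    return (
        f"Rewrite the following dictated narrative as a {purpose} "
        f"with these sections: {', '.join(sections)}. "
        "Remove redundancy, keep all clinical facts, and flag anything "
        "ambiguous rather than guessing.\n\n"
        f"Narrative:\n{raw_narrative}"
    )

# The same narrative can be repackaged for a different audience by
# changing the purpose, e.g. "patient instruction sheet".
prompt = build_note_prompt("65 yo male, chest pain 2 hrs, diaphoretic")
print(prompt)
```

The same template idea explains how these systems produce different "versions" of one encounter: the underlying facts stay fixed while the instructions change.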
General research and inquiry
Where chat‑based AI systems really shine is in their ability to interpret complex clinical questions with multiple parameters and return responses tailored to those specifics. Traditional search engines can return a list of links that may or may not be relevant. In contrast, a chat‑based AI allows far more precise queries—for example, “What is the differential diagnosis for a mixed solid/cystic, non‑enhancing 2 × 2 cm right parietal lobe mass on MRI in a 5‑year‑old Thai male with a three‑month history of headaches and seizures?” Rather than returning generic links, the AI parses the information, synthesizes it and produces a ranked differential tailored to the clinical context.
This doesn’t merely save time; it lowers the cognitive threshold for performing focused inquiry, making it more likely to be incorporated routinely into daily clinical interpretation.
Of course, the responses are not always correct. As one observer put it, AI is like a somewhat unreliable colleague: usually affable, often confident, but not always right.
Medicine‑specific AIs
Most physicians have already used one of the popular chatbots—ChatGPT, Gemini or Copilot. Beyond these general tools, several implementations are tailored specifically to clinicians.
OpenEvidence
OpenEvidence (openevidence.com) is a major venture founded by Daniel Nadler, funded by the Mayo Clinic and several major venture capital firms, and partnering with JAMA, the NEJM and several medical societies, including the American College of Cardiology, the American Academy of Family Physicians and the American College of Emergency Physicians. Its emphasis is on providing reliable, up-to-date information on diagnosis and treatment guidelines, drawn from a limited set of peer-reviewed sources.
OpenEvidence is free to registered physicians (you must supply an NPI number) and offers CME to those who want it, though the CME process requires additional steps beyond simple use. The interface resembles most others: a simple text-entry box with a sidebar listing previous chats.
It is also the most constrained in scope, as OpenEvidence will reject any questions that are not specifically medical (like “Why did this application just shut down on my Windows desktop?”).
Doximity GPT
Doximity GPT (Doximity.com) is an offering from Doximity, an established company that generates most of its revenue from advertising and marketing, with a smaller share from physician placement and telehealth tools. Like OpenEvidence, it is free to verified U.S. clinicians.
Doximity GPT runs on a specialized implementation of the OpenAI engine (ChatGPT) and behaves similarly. Its claimed data sources include more than 750 hand-vetted medical journals and guidelines from more than 200 medical organizations, supplemented with on-demand web searches.
Doximity GPT also offers CME, but the pathway is simpler than with OpenEvidence: credit accumulates passively through use. I was surprised in December to learn that, without any additional effort on my part, I had earned 10 credit hours in a few months. Another perk for those without access to a medical library is free full-text access to up to five paywalled articles per month, a real benefit for those who occasionally want to read beyond the abstract.
EMR‑embedded implementations (Epic)
There’s an excellent chance you’ve already been using AI, even if you didn’t notice. Our hospital system uses Epic, and an embedded AI generates clinical summaries and problem lists that may appear on a patient’s main page. At present, it lacks the option to summarize long radiology reports, and we don’t use it to create reports; we use PowerScribe.
AI Scribe systems (PowerScribe/Nuance, owned by Microsoft)
In the furious buildout of AI over the last few years, nearly everything that could be touched by AI has been. Most radiologists across the U.S. use PowerScribe (Microsoft) for exam reporting, and its newest rendition, PowerScribe One, implements a variety of AI features. The application processes text as it is dictated, recognizing and flagging actionable or critical findings. It also includes tools like Smart Search, which lets the radiologist quickly pull up guideline snippets or standardized terminology without leaving the report, and Smart Impression, which can draft a preliminary impression based on the dictated findings when that feature is enabled. None of this replaces interpretation; it simply adds a layer of guardrails and shortcuts that reduce omissions and keep reports more consistent.
Personal applications
I expect a very high percentage of readers are already using one of the consumer-focused AI services. For someone new to the game, it can be hard to know where to start or which one to use; this section is for you. All of these run from a website, so all you have to do is click the link, register for a free account and start testing the waters. Table 2 lists some of the many helpful tasks I have used these tools for over the last 10 months.
ChatGPT (OpenAI)
The most widely used general-purpose AI chatbot. A strong “default” option for everyday questions, summarizing, brainstorming, and drafting text.
Gemini (Google/Alphabet)
A top-tier general assistant, especially convenient for people who live in the Google ecosystem. Particularly useful for everyday search-like questions and productivity tasks.
Copilot (Microsoft)
Often appears directly inside Microsoft products and Windows workflows. A practical choice if you rely heavily on Microsoft 365 (Word, Outlook, Excel) or want help with Windows-related tasks.
Claude (Anthropic)
An excellent tool for writing, editing, and working with long documents. Also popular for coding and structured reasoning tasks.
Grok (xAI / X)
Integrated closely with the X (Twitter) platform. Best suited for users who already spend time there and want an assistant in that ecosystem.
As with any tool, treat AI responses as a starting point—especially in medicine—and verify details when accuracy matters. If you’re not sure where to start, pick one (ChatGPT or Claude), use it for a week, then try a second and compare.
Depending on your particular needs, this decision tree might serve as a starting point:
- Do I want a finished Word document, PowerPoint deck or Excel output with minimal friction? Am I having problems with my Windows PC? → Copilot
- Am I writing or editing something long and want it to read cleanly and coherently? → Claude
- Do I want an all-purpose assistant for thinking, drafting, summarizing and clinical-style inquiry? → ChatGPT
- Am I already living in Google (Gmail/Docs/Chrome/Android) and want AI that feels like “smart search plus writing”? → Gemini
- Am I mostly using this while I’m on X (Twitter) and want AI integrated into that stream? → Grok
Table 1: Professional uses of AI
- Differential diagnosis
- Medical image analysis
- Knowledge update (What’s changed?)
- Best practices/standards of care
- Pharmacology
- Clinical authoring and editing
- Clinical summarization and synthesis
- Decision framing / option comparison
Table 2: Personal applications of AI (author)
- Fitness and training scheduling and analysis
- Smart home automation and integration
- Technical troubleshooting
- Home improvements and repairs
- Travel planning
- Pet care guidance
- Reading and interpretation guide (non-medical reading)
- Enhanced and more efficient web searching
- Shopping and product comparison
- Decluttering decisions (keep, donate, recycle, discard)
- Decision support
- Writing and correspondence
Conclusion
AI is no longer a futuristic add‑on to clinical practice; it is becoming a part of the everyday fabric of medicine. From image analysis to documentation support to rapid clinical inquiry, such tools are already reshaping how clinicians work, think and manage information. Their limitations are real, and their output still requires human judgment, but the trajectory is clear: AI is becoming a routine companion in clinical decision‑making rather than a novelty. The challenge for physicians is not whether to use AI, but how to use it wisely—leveraging its strengths, recognizing its blind spots and keeping the clinician firmly in the driver’s seat.
What's The Point?
- Can AI replace the clinician? How should we respond to those who argue it can or should?
- What, if anything, encourages you about the spread of AI through medicine?
- If technology always amplifies and diminishes, what do we gain and what do we lose from AI in medicine?
We encourage you to provide your thoughts and comments in the discussion forum below. All comments are moderated and not all comments will be posted. Please see our commenting guidelines.