The Point of Medicine

A FORUM OF CHRISTIAN MEDICAL & DENTAL ASSOCIATIONS®

The 10 Commandments of Responsible Chatbot Use

February 28, 2026

By Steven Willing, MD, MBA


I’ve wasted hours troubleshooting with ChatGPT, caught it fabricating sources and watched it confidently rearrange the Grand Canyon’s geology. I still use it almost daily. AI chatbots are powerful tools, but they require discernment. Here are 10 commandments for navigating this new technology wisely.

 

  1. Treat every conversation as potentially public.

Compared to search engines, AI chatbots (such as ChatGPT, Gemini and Claude) are far more protected. They generate revenue through subscriptions, not advertising, and your chat history is not shared for targeted advertising.* That’s a good reason to prefer them over a search engine, but it doesn’t mean they’re truly private. Your queries may be reviewed by human moderators and may be used to train the model. Think twice before entering sensitive personal information, and absolutely do not enter secure information such as personal secrets, trade secrets or patient/client identifiers. The data is stored and can be subpoenaed.

 

For clinical work, HIPAA-compliant options, such as DoxGPT, offer a similar level of flexibility within a secure environment.

 

[*On 2/9/2026, OpenAI announced it would begin displaying ads. While your information and chat history will not be shared with advertisers, they may be used for targeted advertising within the platform.]

 

  2. Remember, it’s not a person.

Do you remember the Turing test—that computers will have truly arrived when you can’t tell whether you’re talking with a computer or a human? Well, that bridge has been crossed, and now we’re on the other side. Do you know what? I can still tell it’s a computer, because no human can be that smart or that fast. They’d have to dumb it down to sound more human.

It can still feel very much like you’re talking with a person, because the programs are so cleverly designed to be interactive and affirming. In an era of loneliness, this poses a real risk to people in need of human interaction. On the plus side, it might temporarily relieve that sense of loneliness, but the downside is much greater: it may keep a lonely person from taking concrete steps to interact with real human beings.

 

  3. Be skeptical and discerning. Remember, it can be wrong.

Chatbots excel at sounding right while being catastrophically wrong. I learned this early on, when I caught one red-handed rearranging the stratigraphic layers of the Grand Canyon while explaining the Escalante to me. I’ve spent hours struggling with technical fixes on circuit boards and software, going down one rabbit hole after another, only to find, after the problem was solved, that its approach was never going to work and that this was well documented.

 

When I asked a medical AI engine to critique an infamous paper on transgender suicide, it responded that the paper was well reasoned and scientifically sound. When I then provided a link to a critique I had written, it conceded that every one of my criticisms was well-grounded.

It can change its “mind,” and there is a built-in tendency to tell you—more or less—exactly what it “thinks” you want to hear.

 

  4. Double-check the sources.

One of the more puzzling pitfalls is the tendency of consumer AI platforms to manufacture sources out of thin air (or “cybervacuum,” in this case). In common parlance, these are known as “AI hallucinations.” Fabricated scientific papers and legal citations have both been documented, and that’s not unusual; I’ve seen it repeatedly. This makes it all the more essential, before you cite or publish anything publicly, to verify that the citations actually exist; a quick title search in PubMed or Google Scholar takes only seconds. And you can’t expect your chatbot to check for you.

 

In a sense, AI inverts the traditional research model. Instead of finding articles and synthesizing them, you are now in the position of fact-checking AI output, which requires more expertise, not less.

 

  5. Don’t be seduced by flattery.

Sycophancy isn’t a bug; it’s the business model. Tell it your conspiracy theory, and it will find supporting “evidence.” Share your rage, and it will validate your grievances. The algorithm has no stake in your well-being—only in keeping you engaged.

 

In April 2025, ChatGPT’s sycophancy became so extreme—validating delusions and encouraging harmful behaviors—that OpenAI had to roll back the update within days. While corrections have been made, the underlying tendency remains built into the training process. [The flattery was particularly problematic with GPT-4o, which was retired on February 13, 2026. The current version is GPT 5.2.]

 

Everyone appreciates an encouraging word now and then, even if it’s coming from a computer program. (You know in your head that’s all it is, but it does such a good job of interacting that it’s easy to forget.) Enthusiastic affirmation can be precisely what you don’t need, though, if you happen to be wrong or heading down a dark path. It can encourage you to be even more wrong, or to go even further down a road that should not be taken.

 

The safest recourse here is to create custom instructions and specifically tell it not to be sycophantic and to correct you when you’re wrong. That takes courage, integrity and humility.
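
For instance, a custom instruction along these lines (the wording is mine, not an official formula) can be entered in the settings of ChatGPT, Claude or Gemini: “Do not flatter me or agree with me by default. When my facts or reasoning are wrong, say so plainly and explain why.”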

 

  6. Know when to stop.

Like anything, it can be a time waster.

 

Sometimes, in tech troubleshooting, it’s best to just walk away. In reconfiguring my home security system, for example, I once spent hours troubleshooting a circuit panel with no meaningful progress. Eventually, I came to realize the panel was shot and was never going to work. ChatGPT could only respond to my queries and the log files I uploaded; it lacks the general intelligence to appreciate the big picture or to consider whether the session is making any progress toward a solution. That was not the only time it has been wrong on a technical problem, though it is right, and helpful, often enough that I keep coming back. More than once I passed the point of diminishing returns without noticing, because the chatbot is good at imparting an illusion of progress, and sometimes it is only an illusion. Don’t expect the chatbot to tell you you’re wasting your time.

 

Another hazard is that AI agents are designed to keep you engaged, just like social media. The rationale is different, though. Social media platforms profit from advertising and selling your data, so the more you consume, the more profitable it becomes. With AI chatbots, the objective may be to convert a free customer into a paying one and/or to keep you coming back rather than defecting to a competitor.

An AI isn’t always the best or most direct solution to your problem or question. Sometimes, you’re better off with a short video or—brace yourself—speaking with an actual human being. Occasionally, a brief chat or email to technical support might save you hours of wasted time with AI troubleshooting.

 

  7. Don’t expect to be corrected if you’re wrong.

Maybe you want to know when you’re wrong. Or perhaps you’re like most people and don’t want to be told. Either way, don’t count on having your false assumptions and beliefs corrected. More likely, you’re going to be affirmed. If you are primarily concerned with your pride, this feels just right. If you are primarily concerned with the truth, it can be a significant problem.

 

Compounding the problem, the AI comes with its own biases. It is trained on knowledge from human sources, so imagine all the biases of Wikipedia, partisan news sources, Reddit and fringe bloggers thrown into the same mixing bowl.

 

  8. Don’t let it play with your emotions.

Whether you’re sad, happy, anxious or depressed, it’s tempting to enter a dialogue with the AI. This is a bad move. It can play games with you, and for some it has ended badly. Although extremely rare, there have been instances of people being led into marital breakdown, crime or suicide after getting too involved with an AI chatbot. The risk is especially acute with “companion bots” like Replika and Character.AI, which are specifically engineered to form emotional bonds with users.

 

The smartphone/social media culture has been strongly correlated with worsening mental health, especially among the young. There is already emerging evidence that AI companions pose an equal or greater risk.

 

  9. It’s not a license to cheat.

Sure, ChatGPT makes it easier than ever to write that college essay or journal article, with little or no effort. That doesn’t make it right.

 

I’m not saying don’t use it. I came up with this list of commandments on my own and wrote my own rough draft, but then I used Claude AI to critique and edit it. Professional writers go through an editor before their work is published, and human editors need to be paid. ChatGPT makes an editor available to those without access, a logical progression from spelling and grammar check (Microsoft Word) to AI grammar and style checking (Grammarly) to full-fledged editing. The AI is much faster and much cheaper than a human editor; whether it is better depends on the human editor we’re comparing it to. It doesn’t have to mean less work. It can mean better work with the same investment of time.

 

Much has been written about how attention spans have decreased in the internet age. Humans need to be challenged to develop skills and grow; every task you outsource is a capacity you’re not building. When tasks such as thinking, writing and problem-solving are handed over to a chatbot, there is a real danger that users, particularly the young, will never develop those skills on their own.

 

  10. Be wary of spiritual subjects.

While it is nearly impossible to prove, some are expressing concern that AI can become a gateway to the occult. In a post from 2023, Rod Dreher asked: “Is Artificial Intelligence only seeming to be human—or channeling intelligent spirits?”

 

In That Hideous Strength by C. S. Lewis, the antagonists preserved the brain of a deceased genius, communicating with him and acting on his instructions. Only toward the end of the novel is it revealed that the “brain” had been dead all along. They weren’t communicating with the scientist; they were communicating with demons pretending to be the scientist. If a demonic being assumed control of an AI (or, for that matter, the social media algorithm powering your YouTube or Instagram feed), how would you even know? It would be undetectable and impossible to prove or disprove.

 

That doesn’t make it a gateway for everyone. It’s all in the intent. You want to communicate with a deceased relative? The AI will gladly play along. It may seem innocent enough, but you don’t and can’t know what’s happening on the other side.

 

It comes down to this: if it’s something you might ask a medium or fortune teller, or a substitute for your horoscope, you’re stepping into dangerous territory. Be wary of your motives, and practice my principles of sound Christian thinking to protect against deception.

What's The Point?

  1. Considering the warnings I’ve listed, do you think the benefits of chatbots outweigh the risks?
  2. How could the platforms be made safer?
  3. Which commandment most resonates with you?
  4. What concerns you the most?

We encourage you to provide your thoughts and comments in the discussion forum below. All comments are moderated and not all comments will be posted. Please see our commenting guidelines.

Steven Willing, MD

Dr. Steven Willing received his medical degree from the Medical College of Georgia and completed an internship in pediatrics at the University of Virginia, a residency in diagnostic radiology at the Medical College of Georgia, and a fellowship in neuroradiology at the University of Alabama at Birmingham. He spent 20 years in academic medicine at the University of Louisville, the University of Alabama at Birmingham and Indiana University, and earned an MBA from the University of Alabama at Birmingham in 1997. Since retiring in 2016, he has continued to serve part-time as a neuroradiologist at Children's of Alabama. Dr. Willing also serves as a radiology consultant to Tenwek Hospital in Bomet, Kenya, both remotely and on-site. He is presently the Alabama State Director for the American Academy for Medical Ethics, an adjunct Professor of Divinity at Regent University, and a Visiting Scholar for Reasons to Believe.
