AI And Your Health Care Information


I've had more and more conversations with patients lately about using AI for their health care, and I have been exploring AI myself! AI can be a powerful tool for researching health information, especially when it can be difficult to get an appointment with a provider. But trusting AI (and the parent companies!) with our personal information, including our health information, carries some very real risks to both privacy and accuracy. So I wanted to write a blog entry sharing my perspective, as well as some cautions!

This isn’t all doom-and-gloom, though! I was initially very hesitant to use AI at all, but I have found specific situations where it can be genuinely useful. Still, most people don't realize what happens to the health information they type into AI tools. Regardless of how we feel about it, AI is here, so it’s important to learn some techniques to keep our information more secure. So, let’s talk about that a bit here…

HIPAA Doesn’t Apply To AI

One very important thing to remember is that AI companies have no legal obligation to keep your health information private. HIPAA does not apply to AI - or search engines, for that matter! The only protection your information has is the company’s own privacy policy, which can change at any time.

You can read more about some of my cautions about the limits of HIPAA in this entry, but generally speaking, HIPAA offers a lot less protection than we think. HIPAA only applies to healthcare companies and providers that bill insurance - not to tech companies.

An important consideration when using AI is understanding how the information you provide is used. Without diving too far into the technical details: by default, everything you share with these AI models is used to train them. This means the information you provide about your medical condition could surface in unexpected ways, with unknown parties, in the future. And once you have given the information to these companies, you lose the ability to remove it. Put another way: once the information is out there, you can’t take it back. So, it’s important to be proactive in protecting your information.

Using your data to train the model is on by default in pretty much every available model - let’s talk about that next!

Top 3 AI Tools And How To Protect Your Info

Here is the key information to keep in mind, and it applies to all three major free tools (ChatGPT, Gemini, and Claude):

  • They use your conversations for AI training by default

  • They allow human staff to review flagged conversations

  • They have no HIPAA protections whatsoever

  • If you create a public share link for a chat, that link can be found by people you never intended to share it with - even if you never send it to anyone

  • Additionally, ChatGPT and Gemini can (and will!) use your information to target you with advertisements (more on that below).

  • Paying for a consumer subscription (ChatGPT Plus, Claude Pro) doesn't meaningfully change these points, except for the advertisements in ChatGPT (depending on the paid tier).

And, while some companies have “private” AI models for their employees, this is a trade-off: your employer has access to all the chat logs for their private AI models, and can even build live monitoring of how people are using the tools. So, if you wouldn’t be comfortable with someone running a report of all your chat logs, don’t type it into your company’s AI model.

But what does “training” an AI model mean? Without getting too into the weeds, all the information the AI model processes is used to form future answers. With search engines, typing in a question is like looking in a filing cabinet and finding individual webpages that match your search. AI is completely different. AI is like a pool of all the collected information that it has encountered, and from that information, it will provide answers.(1) All the new information you put into the model gets dissolved into the pool and can’t be removed.

So, what can you opt out of?

With all of the above in mind, how can you make what you share with AI marginally more private? While you can’t always opt out of your information being saved completely, you can opt out of your information being used for training. Let’s review the opt-out options for ChatGPT, Gemini, and Claude:

ChatGPT (Free tier)

The biggest current concern: as of February 2026, ChatGPT shows ads to free users based on their conversation history, including past chats. This means your health questions are now actively used to target you commercially. There's also a court order currently requiring OpenAI to retain even deleted conversations indefinitely - so even deleting a chat doesn’t mean it is gone.

  • Turn off training: Settings → Data Controls → toggle off "Improve the model for everyone" (do not be tricked by the guilt trip!)

  • Turn off personalized ads: Settings → Ad Controls → turn off "Personalized Ads"

  • For sensitive questions: Use "Temporary Chat" (sidebar) — not saved, not used for training

  • For more information on ways you can control your data, check out this page

  • For more information on their data retention policies, visit this page

Gemini (Google, Free tier)

The concern here is that Gemini is tied to your Google account and the broader Google ecosystem, meaning your health queries sit inside the same infrastructure as your search history and ad profile. It is a good idea to opt out of Gemini’s data collection even if you only use Google as a search engine, since Gemini is integrated into essentially every Google tool.

Here’s what you can opt out of:

  • Turn off activity saving: Visit the Gemini Apps Activity page and turn off “keep activity.”

  • Visit this page for more information! The links to change settings move frequently, so I recommend re-checking every so often.

  • Default retention: 18 months (you can reduce this in the same settings)

Claude (Anthropic, Free/Pro)

Claude was previously the most privacy-protective of the three, but it changed its policy in late 2025 to allow training on conversations by default. It is currently ad-free, which Anthropic has publicly committed to maintaining.

  • Turn off training: Settings → Privacy → toggle off "Help improve Claude" (again, ignore the guilt trip)

  • Incognito mode: Incognito chats are not used to improve Claude, even if the above training mode is turned on

  • If you opt out, data is retained for 30 days; if you stay opted in, up to 5 years

  • For more information on how your data is used to train the model, visit this page

(Thanks to Claude for help compiling this information - though, yes, I did verify it! 🙂 I enjoyed the irony of AI helping me research itself.)

I chose these three AI models because they are the most commonly used, but there are many models out there! You’ll have to find the settings in the model you use, if it is not one of the three above. I also try to keep in mind who is running the company; for example, I would never ask Grok anything, because I wouldn’t trust Elon Musk with any of my personal information. For what it is worth, of the three, I only use Claude AI!

A Few Notes on AI Accuracy

  • AI is not the same as a search engine. You will often not be shown sources or know where the information was found, so it is much more difficult to know if the source is reliable or not. Because of this, it is really important to verify the information you receive and make sure it is accurate! 

    • Tip: Ask the model to provide sources for the information, and double-check information with trusted sources outside the model. For example, if you’re asking about a health condition, double-check information with Mayo Clinic or another trusted source.

  • AI will confidently say things that may not be correct, and you may not realize it. AI will also often “fill in the blanks,” giving an answer that sounds definite even when it does not have all the information it needs. This can be especially tricky when we use AI to answer questions on a topic we don’t know much about, because we won’t be able to tell whether the answer is correct.

    • Tip: You can tell the model that you prefer it to say “I don’t know,” or to ask questions when it is not sure of an answer, rather than guess. It can feel a bit silly sometimes, but this trains the model to be more effective (and trustworthy!).

  • You may not even get the same answer if you ask the same question more than once. This is especially true if you are asking the model to reason out an answer for you - here is a simple experiment run on the most popular models to show how answers can vary (or simply be wrong!).

    • Tip: Make sure you are using reliable models (see the link for examples) and always, always do a gut check: does this answer even make sense?

  • AI is also only as useful as the information it was trained on. This becomes a big issue when you start to ask questions about topics that require a lot of nuance or detail to know the answer. For example, Claude’s abilities with Chinese medicine information are limited - and I have tested it a lot! It struggles to tell the difference between similar Chinese medicine patterns and doesn’t reliably match an herbal remedy to the correct pattern.

    • Tip: The rule I use is this: if something requires a professional degree or license to practice, I should be very careful to check the information - or, better yet, check with the professional! For example, I may ask AI a quick legal question if I am curious - but I absolutely will consult with a real attorney if the answer is important! In the same way, making health decisions (such as taking a supplement or herb) without checking with someone with appropriate training in the field can be risky!

  • I'll write a full post on this soon, because it deserves more space, but the short version: AI herbal recommendations in general are genuinely unreliable and can sometimes be unsafe, especially within a Chinese medicine framework or in making formulas. AI can't assess your pattern, your constitution, or how an herb interacts with your full clinical picture, and is notoriously bad at assessing herb-drug interactions. If AI suggests something and you're curious about it, bring it to your appointment, and we'll look at it together.

    • Tip: Don’t start taking an herb or supplement because AI suggested it. Always double-check with trusted sources, and with a provider you trust - whether that is me, or someone else!

The Practical Bottom Line

A few simple habits go a long way:

  • Never enter your full name alongside specific health information

  • Use the opt-outs above - they take about a minute, and make a real difference

  • Use AI to generate questions to ask your providers, not answers to act on

  • Treat any AI herbal suggestion as a conversation starter, not a prescription, and do not start any new supplements/herbs just because AI mentioned it

  • Train the model you are using to ask clarifying questions, provide sources, and to say that it doesn’t know the answer if it is not sure. You can do this by literally telling the model that you prefer it to ask questions when it is not sure of an answer, and that it is okay to say “I don’t know.”

These tools aren't going away! And they have a real potential to be helpful. But it is important to still be cautious and use common sense, and learn how to safely use the tools. We still don’t know the full potential of AI, or how our data may be used in the future - so it is best to err on the side of caution!


Footnotes

1. I asked Claude how it would describe how AI models are trained, and it said:

“…the text dissolves into the model and changes its shape. The model learns patterns — what words tend to follow other words, what concepts tend to cluster together, how certain kinds of questions tend to be answered. Your specific words aren't "in there" retrievable like a database record. But they contributed to the shape of the thing.

Think of an AI model like a chef who has read ten thousand cookbooks to learn how to cook. They don't have those books memorized — they can't recite page 47 of any particular one. But everything they read shaped their instincts: their flavor combinations, their techniques, their sense of what goes together.

Now imagine your private family recipe — the one that's been in your family for generations — ended up in one of those cookbooks without you knowing. The chef read it. It shaped them in some small way. You can't get it back out. You can't ask the chef to "unlearn" it. And if their cooking ever produces something that resembles your recipe, you'd have no way to know whether that came from your recipe specifically, or just from the general patterns they absorbed.

That's roughly what happens when your health information goes into a training dataset. It doesn't sit there labeled with your name. But it shaped the model. And that shaping can't be reversed.”

Current as of February 2026. AI privacy policies change frequently - settings described here may have been updated since publication! I tried to include links to the pages to reference, so double-check the privacy info and your settings!

