“When you talk to a doctor or a lawyer, there's medical privileges, legal privileges. There's no current concept of that when you talk to an AI, but maybe there should be.”
—Sam Altman, The Atlantic (2024)
Take it from the CEO of OpenAI: Confidentiality and privacy protections are among the biggest unresolved issues for those interested in using emerging artificial intelligence tools in settings such as law and healthcare. Strict professional privileges exist for a reason. We ask people to share the most sensitive moments of their lives so we can educate, support, and guide them. They open up because they believe their words will be handled with care, kept in context, and never used against them.
AI promises speed and scale, but it does not confer privilege. The tension shows up in daily work. A team wants to turn a powerful interview into a plain-language explainer. Another wants multilingual versions of an adherence story. A third needs a short script for a Patient Ambassador™ video. AI could help each project move faster.
Yet the raw ingredients (transcripts, recordings, voice messages, emails) can contain names, dates, locations, adverse events, trade secrets, and distinctive phrases. Feed those into a general model and you may create a trail you can't fully see or control. Even when policies say inputs aren't used for training, exceptions exist. Retention rules evolve. Humans sometimes review snippets. The words you were trusted to protect may travel further than you intended.
Evolved Privacy Policies
Across the world, legislators have passed laws to protect their constituents from overly intrusive data-harvesting practices. The life science industry and its agencies have established processes to comply with these mandates, often going further than the regulations require. For example, the European Union's GDPR has emerged as the industry's de facto best practice, even outside the EU's jurisdiction. The point is to be as restrictive with data, and as respectful of patients' safety and privacy, as possible.
Such good faith efforts are important when building a trust-based partnership among equals. And they're expected. Privacy, for patients, isn't abstract. It's protection from harm that could affect an already vulnerable population. Patients and caregivers are motivated to work with the life science industry for many reasons: human connection, empowerment, motivation, education, inspiration. Becoming a dataset or having sensitive information exposed isn't one of them. So, to enter the AI era respecting privacy, three rules apply.
Rule #1: Guide the Process
The lead in patient engagement needs to be held by a sentient human being like yourself. A human can be held accountable. A human also knows that they could, now or in the future, be a patient themselves. So, they understand the gravity of their responsibility and know how to address concerns and apprehensions.
This only works if that human is more than an order taker. They have to be familiar with the patient, competent at their craft, and able to guide everyone involved through a regulatory-compliant cocreation process.
Rule #2: Have a Conversation About Consent
If a patient contributes to creative content, they should know exactly how that piece will be created, where AI may help, and where humans will review.
They should understand the process for creation, publication, and future reuse. If they change their mind, the path to withdraw should be simple and honored quickly. A checkbox at the bottom of a long policy is paperwork. A clear discussion about purpose, limits, and control is respect.
Rule #3: Maximize Meaning, Not Data
Once the ground rules for cocreation are understood and the humans have sorted out how they will work together, AI assistance can come into play. Generative AI can be a detriment to authenticity and has a measurable image problem within the patient and caregiver community[i]. So if you introduce AI, do so only on a need-to-know basis, and preferably only for behind-the-scenes processes.
For example, for an LLM to assist with drafting, it won't need the full, raw record of a life. It will need meaning. That means removing identifiers and letting AI operate on a focused, safe excerpt or summary of the original notes. Always ask: “Is this information necessary to accomplish what we're looking for?” If the answer is yes, and if the information is privileged, manual work or a secure, on-site system is required.
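To make the “meaning, not data” step concrete, here is a minimal, hypothetical sketch of stripping obvious identifiers from an excerpt before it ever reaches a drafting model. The patterns, placeholder labels, and sample text are illustrative assumptions, not a vetted de-identification tool; a real workflow would layer proper de-identification software and human review on top of anything like this.

```python
# A minimal sketch of redacting obvious identifiers from an interview excerpt
# before any of it is shared with a drafting model. Patterns and names here
# are illustrative only; real projects would use vetted de-identification
# tooling plus human review.
import re

# Illustrative placeholder patterns; a real project would define these
# against its own consent-approved data categories.
IDENTIFIER_PATTERNS = {
    "[NAME]": re.compile(r"\b(Dr\.|Mr\.|Ms\.|Mrs\.)\s+[A-Z][a-z]+\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact_excerpt(text: str) -> str:
    """Replace obvious identifiers with neutral placeholders."""
    for placeholder, pattern in IDENTIFIER_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

notes = "Interview with Mrs. Alvarez on 03/14/2024, follow-up via ana@example.com."
print(redact_excerpt(notes))
# -> "Interview with [NAME] on [DATE], follow-up via [EMAIL]."
```

The point of the sketch is the order of operations: the excerpt is reduced to its meaning first, and only that reduced version is ever considered for AI assistance.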
Keep the Human Voice at the Center
The reason we always emphasize authenticity is that real human stories change behavior more than engineered, sterile content ever will. AI can help translate and organize those stories. It cannot live them.
We can safely expect AI to become better at mimicking human nuances: imperfect phrases, deliberate pauses, heart-wrenching rawness, elevating inspiration. But as generative content floods every channel, audiences will reward not only what feels true but what they know to be real and accountable. Whatever you do, never pass artificial patients off as real ones. Break that promise once, and all your work could be called into question.
Respecting these guardrails prevents rework. When teams avoid over-collection on the front end, they spend less time redacting. When consent is explicit, legal reviews move faster. When provenance is built in, Medical-Legal-Regulatory discussions focus on substance, not process.
The result is a speedy and satisfying mode of engagement with predictable cycles, fewer surprises, and assets that stand up months later when someone asks, “Where did this come from?” If you wouldn't want to read a prompt and its source materials in a public forum, don't put them in a model you don't control.
Environment and Expertise
Teams that want to use AI will need the right environment and the right expertise. The environment should match the stakes: private instances when identifiable data is unavoidable, zero-retention settings, strong access controls, short retention windows, encryption in transit and in storage, and separation between systems that store identifiers and systems that generate content.
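One of those environment controls, the separation between systems that store identifiers and systems that generate content, can be illustrated with a small, hypothetical sketch. The class names, pseudonym format, and in-memory store below are assumptions for illustration, not a specific product or architecture; the point is only that the generation side never sees a real identity.

```python
# A hypothetical sketch of separating identifier storage from content
# generation: the identity mapping lives in a restricted store, and only
# pseudonymized text reaches the drafting system. Names and storage choices
# are illustrative assumptions.
from uuid import uuid4

class IdentifierVault:
    """Stands in for a restricted, access-controlled store of real identities."""
    def __init__(self) -> None:
        self._pseudonym_to_identity: dict[str, str] = {}

    def pseudonymize(self, real_name: str) -> str:
        pseudonym = f"PATIENT_{uuid4().hex[:8]}"
        self._pseudonym_to_identity[pseudonym] = real_name
        return pseudonym

    def reidentify(self, pseudonym: str) -> str:
        # Only the vault, never the content system, can map back to a person.
        return self._pseudonym_to_identity[pseudonym]

def draft_with_model(pseudonymized_brief: str) -> str:
    """Placeholder for the content-generation side; it sees pseudonyms only."""
    return f"DRAFT based on: {pseudonymized_brief}"

vault = IdentifierVault()
alias = vault.pseudonymize("Ana Alvarez")
draft = draft_with_model(f"{alias} describes managing treatment side effects.")
print(draft)                     # contains only the pseudonym
print(vault.reidentify(alias))   # allowed only inside the restricted system
```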
The expertise is the human part: interviewing that captures meaning without excess detail, editorial judgment that keeps education separate from advice, cultural fluency in every language we publish, and regulatory rigor that anticipates hard questions before they're asked.
This is not easy to build, but the right agency can help select the right use cases for AI, set up the protected environment, convert legal principles into usable workflows, and defend the final product. After all, they know exactly how it was made.
AI can have a place in responsible patient engagement. Privacy has the first place. Until society gives AI conversations the protections we expect from medicine and law, act as if every patient word entrusted to you is a promise. Keep that promise, and you'll earn the speed AI offers without spending trust you cannot afford to lose.
[i] Data from a recent survey SNOW conducted with 297 patients and caregivers suggests there is a strong negative feeling toward generative AI in patient content: 42% are not open to AI being used at all in the creation of content involving patients or caregivers.











