Building Patient Trust Through AI Transparency
- Published Sep 19, 2025
Key Takeaways
- Clearly disclosing when machine-generated information is used helps healthcare organizations comply with evolving regulations, strengthen patient trust, support informed consent, and minimize legal and reputational risk.
- Developing a structured approach to AI transparency, covering what, when, who, how, and the level of disclosure, establishes a consistent patient experience that builds ongoing trust.
- Tools, training, scripts, FAQs, and policies can help teams confidently explain AI’s role and help patients understand and engage comfortably with AI-supported care.
As AI tools play a growing role in the healthcare industry, supporting areas from diagnostics to documentation to care coordination, one concern is on everyone's mind: Are patients aware of when and how AI is involved in their care? Healthcare providers and insurers face rising scrutiny over how they use AI.
- California’s AB 3030 mandates that health systems disclose when generative AI is used in patient-facing materials—ensuring patients know when they're interacting with machine-generated information.
- In Kisting-Leung v. Cigna Corp., plaintiffs argue that the insurer used an artificial intelligence tool to reject over 300,000 health insurance claims in mere seconds—without conducting individual reviews. The court has allowed parts of the case to move forward, signaling growing legal concerns about the use of automated decision-making in healthcare operations.
- AI-related malpractice claims rose by 14% from 2022 to 2024. Patients are suing over missed diagnoses, false positives, and overreliance on AI without human oversight. These cases often target hospitals, physicians, and AI developers.
Creating the foundation for transparency is no longer a simple regulatory box for companies to check; it is critical to building trust, reducing legal risk, and reinforcing the provider-patient relationship.
Why AI Transparency Matters in Healthcare
Patients today are more informed than ever and expect clarity around how technology influences their care. Organizations that embed transparency into their AI programs are not only meeting compliance standards but also building long-term trust.
- Compliance with evolving law: As AI continues to shape the healthcare industry, transparency expectations are rising. Across the U.S., states such as California and Colorado now require disclosure when AI is used in clinical content or patient-facing materials. Clear communication helps patients know when they are interacting with machine-generated information and helps reduce litigation risk as regulations expand.
- Ethical alignment and clinical trust: Patients want to know how their care is delivered, and informed consent requires that patients be told about information that could influence their decision-making. Recent surveys show that 60% of U.S. adults would feel uncomfortable if their physician relied on AI, and 70–80% expressed skepticism that AI would improve key aspects of their care. Only one in three said they trust healthcare systems to use AI responsibly. Most notably, 63% of patients want to be notified when AI is involved in their diagnosis or treatment. Failing to disclose AI's involvement can erode trust and weaken both physician accountability and respect for the patient. Even when a human remains in the loop and care decisions do not rely solely on AI, disclosure remains important to maintaining that trust.
- Brand reputation and patient confidence: Transparency works best when it is implemented consistently. By disclosing AI use across verbal, written, and digital touchpoints, health systems reduce confusion and create a unified patient experience built on consistent information. Done well, this consistency does more than minimize risk: it reinforces trust, builds brand credibility, and positions the organization as a leader in responsible innovation.
Healthcare organizations that seek to be at the forefront of AI adoption while prioritizing patient safety, trust, and experience must address transparency throughout the AI Management Lifecycle, particularly in areas that directly affect patient care.
Where AI Touches Patient Care
Even if patients don’t realize it, AI is already integrated into many systems that affect their care. Leaders should prioritize disclosure in high-risk or high-touch areas, such as:
- Clinical decision support for conditions such as sepsis or stroke alerts
- Imaging prioritization that flags critical scans for faster human review
- Risk stratification to identify readmission risk or care gaps
- Ambient documentation that generates visit notes or encounter summaries
- Patient communications, including chatbot triage or AI-drafted portal replies
Each of these use cases introduces an opportunity to consider transparency. For example, if a patient receives a discharge summary that includes AI-generated language, was the patient told? Could they opt out? Could the care team explain AI’s role if asked? A disclosure framework helps establish consistent answers to those questions.
Five Key Components of Patient Disclosure
At EisnerAmper, we recommend health systems use a simple but structured approach to AI transparency. This framework is built on five core dimensions:
- What to disclose: Clearly state how AI is used and emphasize that providers review all outputs before action is taken. For example: “AI helped prioritize your imaging, and the radiologist conducted the final review.”
- Who should disclose: Frontline clinicians during encounters, backed by consistent systemwide messaging.
- When to disclose: Ideally, before or at the point AI is used, especially in diagnostic or treatment settings. Include this language in consent documents or onboarding materials.
- How to disclose: Use plain, patient-friendly language such as “This tool helps your doctor detect early warning signs.”
- Level of disclosure: Tailor the disclosure depth to the risk level: more detail for high-stakes use cases, less for routine administrative support (see the sketch after this list).
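To make these five dimensions concrete, the sketch below shows one way a health system might encode them as a structured policy record, so that every AI use case carries a consistent disclosure plan. This is a minimal, hypothetical illustration in Python; the class names, risk tiers, and sample scripts are assumptions for discussion, not a specific vendor's tooling or a prescribed implementation.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers; a real program would define its own."""
    HIGH = "high"        # diagnostic or treatment-adjacent uses
    ROUTINE = "routine"  # administrative support

@dataclass
class DisclosurePolicy:
    """One record per AI use case, covering the five dimensions."""
    use_case: str        # WHAT the AI does, in plain language
    discloser: str       # WHO tells the patient
    timing: str          # WHEN disclosure happens
    script: str          # HOW it is phrased to the patient
    risk_tier: RiskTier  # LEVEL of detail scales with risk

# Hypothetical registry entries for two of the use cases named above.
POLICIES = [
    DisclosurePolicy(
        use_case="Imaging prioritization flags critical scans for faster human review",
        discloser="Ordering clinician at the point of care",
        timing="Before or at the point the AI is used",
        script="AI helped prioritize your imaging; the radiologist made the final review.",
        risk_tier=RiskTier.HIGH,
    ),
    DisclosurePolicy(
        use_case="Ambient documentation drafts visit notes",
        discloser="Consent forms and onboarding materials",
        timing="At onboarding, via consent documents",
        script="A tool helps draft your visit notes; your doctor reviews and approves them.",
        risk_tier=RiskTier.ROUTINE,
    ),
]

def disclosure_depth(policy: DisclosurePolicy) -> str:
    """Tailor depth to risk: more detail for high-stakes use cases."""
    return "detailed" if policy.risk_tier is RiskTier.HIGH else "brief"
```

A registry like this gives compliance teams a single place to audit whether every AI use case has an assigned discloser, timing, script, and depth, which is what makes the patient experience consistent across encounters.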
Supporting Infrastructure: Tools and Training
Transparency requires more than good intentions—it also requires systems and support. To strengthen AI disclosure efforts, organizations should invest in both patient-facing and staff-facing resources. This includes:
- Standardized scripting for clinician conversations, using platforms such as Epic SmartPhrases
- Written notices in consent documents, patient packets, and portal messages
- Clear policies and procedures for archiving patient opt-out requests (a minimal record-keeping example follows this list)
- Easily accessible AI FAQs on the health system’s website
- Staff education so frontline teams can explain AI’s role confidently and empathetically
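As one illustration of the opt-out archiving item above, the hedged sketch below appends an auditable, timestamped record for each request. The function name, fields, and file-based storage are illustrative assumptions; a production system would write to a secured database with access controls and integrate with the EHR rather than a local file.

```python
import json
from datetime import datetime, timezone

def record_opt_out(patient_id: str, use_case: str, received_by: str,
                   log_path: str = "opt_out_log.jsonl") -> dict:
    """Append an auditable opt-out record and return what was written.

    This sketch only shows the shape of the record; real storage,
    identifiers, and access controls would follow the organization's
    privacy and security policies.
    """
    record = {
        "patient_id": patient_id,      # internal identifier, not PHI in plain logs
        "use_case": use_case,          # which AI use the patient opted out of
        "received_by": received_by,    # staff member who took the request
        "received_at": datetime.now(timezone.utc).isoformat(),
        "status": "pending_review",    # updated once the care team confirms
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a patient declines AI-drafted portal replies.
record_opt_out("pt-0001", "AI-drafted portal replies", "front-desk-042")
```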
Strategic Considerations for Executives
For healthcare leaders, AI transparency is as much a strategic consideration as it is a compliance concern. While the legal implications of nondisclosure are still evolving, clear disclosure helps reduce legal risk, strengthen an organization's defensibility in the event of patient harm, and position the organization as a trusted partner in patient safety. As guidance from regulators such as the FDA and OCR evolves, systems with strong transparency practices will be better prepared.
Beyond compliance, clear communication about AI use across patient touchpoints can strengthen your organization’s reputation, improve patient experience, and set you apart as a leader in ethical and responsible innovation.
Building AI Transparency for Your Patients
As AI transforms healthcare, strategic adoption is more critical than ever. At EisnerAmper, we are dedicated to guiding organizations through the complexities of AI integration with confidence and clarity. Under our comprehensive approach, organizations evaluate their current state within the Five Pillars of AI Adoption (Management & Structure, Technology, Financial, Compliance & Clinical Risk, and People) across the phases of AI deployment (Readiness & Evaluation, Testing & Usage, and Monitoring & Validation), including accelerators within the People Pillar that can guide your AI transparency strategy. These assessments will shape an AI strategy that puts patients first.