Two healthcare clinicians looking at a computer monitor.

How will ChatGPT for Healthcare affect compliance operations?

Without clear guidelines on how to use generative AI tools, organizations must tread carefully to avoid compliance violations.

29 January 2026 · 5 mins read
By Ryan Thaxton
Written by humans

In brief:

  • OpenAI has released ChatGPT for Healthcare, an enterprise workspace for clinical and administrative staff
  • This launch represents a significant step in the official use of AI tools for healthcare workers, but regulators have yet to provide clear guidance for generative AI
  • Without such guidance, healthcare organizations risk compliance violations across the Health Insurance Portability and Accountability Act (HIPAA) provisions, U.S. Food and Drug Administration (FDA) regulations, and record-keeping requirements

OpenAI releases ChatGPT for Healthcare

In January, OpenAI announced the launch of ChatGPT for Healthcare, an enterprise-tailored workspace for administrators, clinicians, and researchers that aims to help users deliver care and reduce administrative burden. The platform promises to streamline workflows, support clinical decision-making, and alleviate documentation overload, all while maintaining HIPAA compliance.

What are the regulatory concerns?

There are several regulatory concerns around GenAI use, including privacy and how GenAI chats should be archived and monitored. OpenAI has included some bespoke features in ChatGPT for Healthcare to address industry needs, such as transparent citations that make the reasoning behind clinical decisions easier to trace, and specific data controls that support HIPAA compliance.

However, cybersecurity experts warn that OpenAI can still suffer security breaches that put protected health information (PHI) at risk, and that the model can hallucinate false sources that users must manually verify.

What’s the word from HHS?

The U.S. Department of Health and Human Services (HHS) published a Request for Information in December 2025 asking for public comment on how the department can accelerate the adoption and use of AI as part of clinical care.

This embrace mirrors HHS's early promotion of electronic health records (EHRs), which culminated in the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009. Many healthcare organizations implemented EHRs before HITECH was enacted.

Those early adopters were eager to capture the benefits of digitizing recordkeeping systems, but they took on greater legal and financial risk than those who waited for the regulatory landscape to solidify.  

For example, the copy-paste function in EHRs led to mistakes being repeated across various forms and records, with little clarity about who was responsible for the error at each instance. Early EHR systems lacked the governance frameworks to prevent these issues, exposing early adopters to malpractice claims.

What can healthcare compliance teams do today?

Compliance teams are at an impasse. A majority of healthcare employees already use GenAI at work on a daily basis, but best practices have yet to be established. Organizations cannot risk playing catch-up once regulators officially establish standards of use. By then, it very well could be too late to change employees’ longstanding, improper use of GenAI.

Compliance teams can develop their own best practices around the use of ChatGPT, then institute training and monitoring to ensure proper use. Specific steps to take while waiting for regulatory guidance include:

  • Establish governance policies for ChatGPT use, including acceptable use cases and prohibited applications.
  • Train staff on proper AI use and documentation requirements.
  • Implement comprehensive archiving of all AI interactions that inform clinical or administrative decisions.
  • Create monitoring protocols to detect improper use, hallucinations, or privacy breaches (a rough sketch of what this and the archiving step might look like follows this list).
  • Consult legal counsel on how AI-assisted decisions should be reflected in medical records.
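
To make the archiving and monitoring items above more concrete, here is a minimal Python sketch of a first-pass screen that flags exported chat messages containing possible PHI patterns or keywords worth compliance review before they are archived. The log format, field names, regexes, and keywords are illustrative assumptions, not any vendor's actual export schema or detection logic.

```python
import re
from datetime import datetime, timezone

# Assumed log format for illustration: a list of dicts with "user", "role",
# and "text" fields. Real ChatGPT Enterprise exports will differ.
SAMPLE_LOG = [
    {"user": "dr.smith", "role": "user",
     "text": "Summarize discharge notes for MRN 12-345678, DOB 04/02/1961."},
    {"user": "dr.smith", "role": "assistant",
     "text": "Here is a summary of the discharge notes..."},
]

# Hypothetical patterns: simple regexes for identifiers that often signal PHI,
# plus keywords a compliance team might route to manual review.
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN\s*[:#]?\s*[\d-]{6,}\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "dob": re.compile(r"\bDOB\s*[:#]?\s*\d{2}/\d{2}/\d{4}\b", re.IGNORECASE),
}
REVIEW_KEYWORDS = ["referral bonus", "self-referral", "kickback"]


def screen_message(message: dict) -> dict:
    """Return an archive record with any compliance flags attached."""
    text = message["text"]
    flags = [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]
    flags += [kw for kw in REVIEW_KEYWORDS if kw in text.lower()]
    return {
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "user": message["user"],
        "role": message["role"],
        "text": text,
        "flags": flags,            # empty list means nothing was flagged
        "needs_review": bool(flags),
    }


if __name__ == "__main__":
    for record in (screen_message(m) for m in SAMPLE_LOG):
        status = "REVIEW" if record["needs_review"] else "ok"
        print(f"[{status}] {record['user']} ({record['role']}): {record['flags']}")
```

Pattern matching like this is only a rough first pass: flagged records would still route to human review, and production monitoring would lean on an archiving platform's own detection capabilities rather than hand-rolled regexes.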

What regulations do healthcare organizations need to consider?

Monitoring for improper care recommendations

Clinicians are prohibited from making certain recommendations that violate laws such as the Anti-Kickback Statute and the Stark Law. ChatGPT for Healthcare may not be prompted to avoid recommendations that would financially benefit healthcare professionals (HCPs) and, therefore, violate these statutes.

Documentation requirements for defending clinical decisions

Just as with EHR implementation, organizations must document how clinicians reviewed and validated AI recommendations. This is crucial for defending HCPs against malpractice claims, in which HCPs must provide full documentation of the basis for their clinical decisions. Documentation includes not just logs of ChatGPT conversations, but also how clinicians verified sources and the reasoning behind ChatGPT outputs.
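
One way to capture that additional context, sketched here purely for illustration, is a structured review record that travels with the archived conversation. Every field name and sample value below is an assumption made for the example, not a prescribed schema.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    """Illustrative record tying an AI-assisted decision to its human review."""
    conversation_id: str          # reference to the archived chat log
    clinician: str                # who reviewed the AI output
    ai_recommendation: str        # what ChatGPT suggested
    sources_verified: list[str]   # citations the clinician actually checked
    clinician_rationale: str      # why the recommendation was accepted or rejected
    accepted: bool
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = AIDecisionRecord(
    conversation_id="chat-2026-01-29-0042",
    clinician="dr.smith",
    ai_recommendation="Order repeat HbA1c in 3 months",
    sources_verified=["Internal diabetes management guideline, follow-up intervals"],
    clinician_rationale="Consistent with the guideline interval for this patient",
    accepted=True,
)

# Serialized alongside the chat log, a record like this supports an audit trail.
print(json.dumps(asdict(record), indent=2))
```

Stored with the conversation itself, a record along these lines makes it straightforward to show who reviewed an AI recommendation, what they checked, and why they acted on it.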

Protecting patient health information under privacy laws

Any electronic communication of patient data requires compliance with HIPAA, HITECH, and applicable state privacy laws. Policies should address data processing locations, prohibit the use of patient data for model training without authorization, and meet state-specific requirements that may exceed federal HIPAA protections.

How to archive and monitor ChatGPT for Healthcare

The Global Relay Connector for ChatGPT Enterprise allows compliance teams to archive and monitor all communications on ChatGPT for Healthcare. With Global Relay, you can connect all ChatGPT logs with the rest of your archived communications data to maintain a complete timeline of clinical decision-making. Global Relay also gives employers the ability to monitor ChatGPT logs for improper care recommendations that violate provisions like the Stark Law or the Anti-Kickback Statute.

Find out more about Global Relay’s ChatGPT Enterprise Connector
