
How can healthcare and life sciences organizations control GenAI recordkeeping risks?

With the use of generative AI tools like ChatGPT increasing within healthcare and life sciences organizations, how can businesses balance compliance with modernization while tapping into the benefits AI can offer?

11 December 2025
By Kathryn Fallah

In brief:

  • Generative AI tools are becoming indispensable for pharmaceutical organizations, used to support clinical research, assist with analysis, and lighten administrative burdens
  • As with businesses across other verticals, pharmaceutical firms are integrating platforms like ChatGPT into workflows and strategies
  • As organizations consider how they can use generative AI applications to boost efficiencies and decision-making, they must ensure they’re staying in step with evolving regulatory requirements

The increasing prominence of generative AI (GenAI) platforms like ChatGPT has made waves across the healthcare and life sciences industry. Organizations are weighing up how AI can drive efficiencies, improve productivity, and support administrative functions by streamlining day-to-day operations.

However, as teams begin to adopt these tools, it’s critical that they have the right systems in place to manage and capture data to meet regulatory obligations and leverage the insights and opportunities this data can offer.

A dose of GenAI for pharma

In October, biopharmaceutical company Lundbeck announced a strategic collaboration with OpenAI to deploy ChatGPT to its global workforce. Lundbeck stated that this would “transform how the company innovates and operates across its entire value chain” by enhancing discovery and improving decision-making. Lundbeck joins the ranks of pharmaceutical manufacturers Eli Lilly, Sanofi, and Moderna, which all announced partnerships with OpenAI in 2024.

Other pharmaceutical organizations are implementing GenAI tools to accelerate clinical trial execution, better evaluate drug development results, and help pharmaceutical sales representatives prepare for meetings with external healthcare providers. Some, like Bristol Myers Squibb, have developed proprietary GenAI tools of their own rather than seeking partnerships.

With facilities looking to use AI to revolutionize workflows and keep pace with modernization, and regulators leveraging AI themselves, it’s likely only a matter of time before generative applications become wholly ingrained across most organizations.

However, to ensure that the adoption of AI tools doesn’t outpace good governance, life sciences organizations will need to implement the right compliance tools to stay in line with regulatory expectations, protect consumers, and safeguard market integrity.

Which current regulations apply to GenAI?

While AI-specific rules continue to develop, guidelines and statements set out by healthcare regulators outline how organizations should approach evolving technology within business workflows – especially regarding transparency, security, and data integrity.

21 CFR Part 11

The Food and Drug Administration’s (FDA) 21 CFR Part 11 sets out requirements for ensuring the integrity and reliability of electronic records and signatures through audit trails and system validation. As organizations use AI-enabled electronic systems to manage and generate data, 21 CFR Part 11 requirements extend to capturing the data outputs of generative tools.

HIPAA Compliance

Popular GenAI platforms like ChatGPT and Anthropic’s Claude are not inherently compliant with the Health Insurance Portability and Accountability Act (HIPAA), so pharmaceutical companies allowing the use of these platforms in clinical trial R&D should monitor for any noncompliant sharing of sensitive information. HIPAA’s rules on privacy, security, and breach notification also apply to any in-house GenAI platforms a pharmaceutical company might develop.

DOJ’s ECCP

The Department of Justice (DOJ) amended its Evaluation of Corporate Compliance Programs (ECCP) in September 2024 to include extended guidelines around the risk management processes organizations must put in place when using emerging technologies to conduct company business. Organizations are expected to “monitor and test new technologies” to confirm they’re functioning as intended – stressing the need for robust monitoring systems to maintain oversight of AI-driven decisions.

EU AI Act

The European Union AI Act classifies AI-based software used for medical purposes as “high risk,” meaning that generative platforms used by organizations must align with requirements around risk mitigation, data quality, and user documentation by the time the act fully takes effect in 2027.

ICH Guidelines

The International Council for Harmonization (ICH) also details principles around the deployment of AI as it relates to Good Manufacturing Practices (GMP), particularly in relation to process control and monitoring approaches. ICH’s Q9 guidance specifically encourages the use of advanced tools for better, more proactive risk management.

How to keep GenAI oversight in good health

As adoption continues to grow, healthcare and life sciences organizations should take steps to maintain comprehensive documentation and support the responsible use of AI in their workflows.

1) Maintain comprehensive records of AI data

Organizations must first ensure AI data is logged comprehensively and accessibly. This includes documenting AI’s use across operational workflows, from data in to data out, so that information is readily available should a compliance or regulatory inquiry arise. Beyond meeting retention obligations, structured records also enable organizations to draw insights about GenAI performance and support advanced eDiscovery.
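As a minimal sketch of what structured capture could look like, the Python snippet below appends each GenAI interaction to an append-only JSONL log with an ID, timestamp, and workflow context. The field names and file-based store are illustrative assumptions, not a prescribed schema or any vendor’s format.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class GenAIRecord:
    """One captured GenAI interaction (illustrative schema)."""
    record_id: str
    timestamp: str
    user_id: str
    model: str
    prompt: str
    response: str
    workflow: str  # e.g. "clinical-notes-summary"

def log_interaction(path: str, user_id: str, model: str,
                    prompt: str, response: str, workflow: str) -> str:
    """Append one interaction to the JSONL log and return its record ID."""
    record = GenAIRecord(
        record_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        user_id=user_id,
        model=model,
        prompt=prompt,
        response=response,
        workflow=workflow,
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record.record_id
```

A JSONL file keeps each record self-describing and easy to feed into eDiscovery tooling; a production deployment would more likely write to a tamper-resistant, retention-managed archive.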

2) Ensure complete traceability

With regulators requiring that organizations retain records of AI-generated outputs and decisions, comprehensive audit trails are key to maintaining transparency, explainability, and accountability. By ensuring that all AI-related information can be traced back to its source – whether an AI-assisted recommendation, a summary of clinical notes, or a patient evaluation – organizations can provide the clarity needed for effective governance.
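One common way to make an audit trail tamper-evident is to hash-chain its entries, so that altering or deleting any record breaks every hash after it. The sketch below assumes a simple in-memory list, and the entry fields are hypothetical.

```python
import hashlib
import json

def append_entry(trail: list, payload: dict) -> None:
    """Add an entry whose hash covers its payload plus the previous hash."""
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    body = json.dumps(payload, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    trail.append({"payload": payload, "prev_hash": prev_hash, "hash": entry_hash})

def verify_trail(trail: list) -> bool:
    """Recompute every hash; any edited or removed entry breaks the chain."""
    prev_hash = "genesis"
    for entry in trail:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

trail: list = []
append_entry(trail, {"source": "genai-model", "action": "summarized clinical notes"})
append_entry(trail, {"source": "reviewer", "action": "approved summary"})
assert verify_trail(trail)
```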

3) Uphold data privacy and security

Considering the sensitivity of medical information and intellectual property, it’s crucial that healthcare and life sciences companies uphold strict data privacy and security standards. Organizations must have safeguards in place to prevent unauthorized access to private data while ensuring compliance with rules set out in regulatory frameworks like HIPAA. It’s also vital to verify sound security practices across internal and external systems – from GenAI partners to third-party vendors – to maintain safety on all fronts.
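As one illustrative safeguard, prompts bound for an external GenAI service can be screened for obvious identifiers before they leave the organization. The patterns below are deliberately simple assumptions for the sketch; genuine HIPAA de-identification requires far more than regex matching.

```python
import re

# Illustrative patterns only – not a HIPAA-grade de-identification method.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Summarize notes for the patient at jdoe@example.com, SSN 123-45-6789."
print(redact(prompt))
# Summarize notes for the patient at [REDACTED-EMAIL], SSN [REDACTED-SSN].
```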

4) Perform risk management to ensure accuracy

While AI is becoming integral to healthcare and life sciences operations, its complexity means it’s important to perform risk management and monitor the outputs of AI models. Having “human-in-the-loop” review processes in place to validate accuracy can help organizations justify AI-generated conclusions and ensure models are performing as expected.
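A human-in-the-loop control can be as simple as holding AI outputs in a pending state until a named reviewer signs off. The sketch below is a hypothetical illustration of that gating pattern, not any specific product’s workflow.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewItem:
    """An AI output awaiting human validation."""
    output_id: str
    content: str
    status: str = "pending"       # pending -> approved / rejected
    reviewer: Optional[str] = None
    notes: str = ""

class ReviewQueue:
    def __init__(self) -> None:
        self._items: dict[str, ReviewItem] = {}

    def submit(self, output_id: str, content: str) -> None:
        """Hold a new AI output until a human decision is recorded."""
        self._items[output_id] = ReviewItem(output_id, content)

    def decide(self, output_id: str, reviewer: str,
               approved: bool, notes: str = "") -> None:
        """Record the reviewer's decision alongside the output."""
        item = self._items[output_id]
        item.status = "approved" if approved else "rejected"
        item.reviewer = reviewer
        item.notes = notes

    def released(self) -> list[ReviewItem]:
        """Only approved outputs are released downstream."""
        return [i for i in self._items.values() if i.status == "approved"]

queue = ReviewQueue()
queue.submit("out-42", "AI-drafted trial summary ...")
queue.decide("out-42", reviewer="dr.smith", approved=True, notes="Accurate.")
print([i.output_id for i in queue.released()])  # ['out-42']
```

Recording the reviewer and their notes alongside each output also feeds the audit trail described above, tying human accountability to every released AI conclusion.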

In conclusion

By establishing policies to govern and oversee the use of GenAI platforms like ChatGPT, healthcare and life sciences organizations can ensure they’re taking the necessary steps to safeguard patient data, maintain transparency, and validate decision-making when utilizing AI within operational workflows.

With a complete suite of Connectors to capture communications across every business channel, including ChatGPT, Global Relay enables firms to leverage the power of GenAI tools while maintaining comprehensive records and mitigating compliance gaps.