
From Innovation to Regulation: Health Care Enforcement Related to AI — EnforceMintz

Artificial intelligence (AI) is revolutionizing health care, offering new opportunities for efficiency and innovation—but it’s also creating complex regulatory challenges. As federal and state authorities race to establish guardrails, health care organizations must navigate evolving laws, enforcement risks, and compliance requirements. This article explores the latest developments in AI regulation and provides insights to help providers and companies prepare for 2026 and beyond.

KEY POINTS:
  • States are driving AI regulation amid federal uncertainty. With no comprehensive federal legislation, states have enacted laws governing AI in health care, focusing in part on claims adjudication, clinical decision support, and chatbot transparency. These laws aim to ensure appropriate human oversight and involvement, and prevent misrepresentation of AI as licensed care.
  • Enforcement risks are growing. AI use in health care introduces new False Claims Act (FCA) liability theories, including improper use in care delivery, documentation, and claims processing. The Department of Justice (DOJ) has already prosecuted schemes involving AI-generated fraudulent consent recordings, signaling increased scrutiny.
  • Government agencies are using AI to detect fraud. Federal agencies like DOJ and CMS are leveraging AI for enforcement, including initiatives like the WISeR Model and the Health Care Fraud Data Fusion Center. These tools will enhance fraud detection and likely lead to more FCA investigations.
  • Compliance requires proactive governance and transparency. Health care organizations should establish AI governance committees, monitor technology performance, prioritize transparency, and align with emerging federal and state authorities and guidance. Using similar AI tools as DOJ for internal audits can help mitigate enforcement risk.

 


This article is part of EnforceMintz: Healthcare Enforcement Trends in 2025 & 2026 Outlook, a series exploring key developments and practical strategies for health care organizations navigating enforcement risks. Read more articles from EnforceMintz to stay current on enforcement trends and compliance strategies.


 

AI is rapidly transforming the health care landscape and offering unprecedented opportunities for innovation and efficiency. Unlike traditional algorithms that are explicitly programmed to follow predefined rules and produce the same output from the same input, AI and AI-enabled tools perform complex tasks, recognize patterns, make predictions, and adapt to new information. The evolution of AI technology and its many uses in health care raise critical questions regarding oversight, privacy, and the extent to which human involvement is necessary in otherwise automated decision-making processes.

When new technologies emerge, especially in the health care industry, federal and state governments often seek to regulate them jointly. With respect to AI, we are in the midst of a federalism battle: in early December 2025, President Trump signed an Executive Order (EO) entitled “Ensuring a National Policy Framework for Artificial Intelligence” that takes a hands-off approach to the development and use of AI. Both before the EO was issued and as of the date of this publication, Congress has not enacted legislation to provide guardrails for balancing the development and use of AI in health care with protecting patient safety and privacy, maintaining appropriate oversight and involvement from humans (including licensed health care professionals), and managing enforcement risk. States and other stakeholders are left to fill the gap.

This article discusses many of the 2025 developments with respect to the use of AI in health care and considers enforcement risks in 2026 and beyond.

States Are Leading the Way on AI Regulation

Throughout 2025, states ramped up efforts to regulate AI in health care, aiming to balance innovation with appropriate safeguards. For example, legislation often focused on AI uses related to health insurance claims adjudication, clinical decision support systems, and the use of conversational AI tools like chatbots.

  • Health insurance claims adjudication and related functions: Many states addressed the use of AI by insurers and benefits managers in claims adjudication. For example, Texas passed legislation to prohibit utilization review agents from using AI to make adverse determinations (see Texas S.B. 815, effective September 1, 2025). Other states, like Maryland, enacted guardrails for AI use in the utilization review process (see Maryland H.B. 820, effective October 1, 2025).
  • Clinical decision-making tools: When it comes to AI-based tools in the context of health care decision making, many states confine the technology to a supporting role. For example, Illinois passed legislation that prohibits the use of AI to develop mental health treatment plans or to directly interface with patients, but does allow use of the technology for administrative support (e.g., managing appointments, drafting communications regarding therapy logistics that do not include therapeutic advice) and supplementary support (e.g., preparing and maintaining client records and notes, analyzing anonymized data to track trends and progress with oversight from a licensed professional) so long as a licensed professional provides oversight and the patient has given consent (see Illinois H.B. 1806, effective August 1, 2025).
  • Chatbots: Several states have passed laws requiring increased transparency for AI-powered chatbots. Chatbots may use generative AI to simulate human-like communication and develop interpersonal relationships with users by mimicking human characteristics and emotions. Many states have developed legislation focused on preventing any misrepresentation that care delivered by AI-powered chatbots is being provided by licensed human clinicians. For example, California has empowered state health professional licensing boards and enforcement agencies to take action against AI technology providers that run afoul of existing prohibitions on the use of any “terms, letters, or phrases” that falsely represent or imply possession of a license or certificate to practice a health care profession. Similarly, California has prohibited the use of a term, letter, or phrase in the advertising or functionality of an AI or generative AI system, program, device, or similar technology that indicates or implies that the care, advice, reports, or assessments offered through that technology are being provided by a natural person with the appropriate license or certificate to practice as a health care professional (see California A.B. 489, effective January 1, 2026).

While some states are developing new laws and regulations applicable to AI, others are offering guidance on how their existing authorities apply to this technology. For example, California Attorney General Rob Bonta issued two legal advisories to kick off 2025 (accessible here and here) that provide guidance to consumers and entities that develop, sell, and use AI about their rights and obligations under California law, including under the state’s consumer protection, civil rights, competition, and data privacy laws. One of these advisories provides targeted guidance to health care providers, insurers, vendors, investors, and health care entities that develop, sell, and use AI and other automated decision systems. (We covered both advisories in a previous publication, and we address the health care–related advisory in more detail in our EnforceMintz article “The Old, the New, and the Unknown: Consumer Protection Enforcement Activity in Health Care.”) 

As mentioned above, President Trump’s EO may complicate the work of states and other stakeholders, as its explicit purpose is to preempt state legislation contrary to the EO’s policy of allowing “AI companies [to] be free to innovate without cumbersome regulation.” To this end, the EO includes several directives slated to take effect within 30 to 90 days of its issuance, including (1) the establishment of an AI Litigation Task Force whose sole responsibility will be to challenge state laws inconsistent with the EO, and (2) the publication of an evaluation of existing state AI laws that conflict with the EO. If those deadlines are met, we may soon have insights into how disruptive this EO will be to states’ ongoing efforts to regulate AI in the health care space, and beyond.

Government Enforcement Is Still Nascent, But Picking Up Steam

In recent years we have seen some FCA cases involving allegations regarding improper uses of AI, and we expect to see more. AI’s many use cases will undoubtedly yield a variety of theories of FCA liability, including inappropriate use of AI in rendering care, documenting and certifying that care, preparing and submitting claims, and adjudicating claims, among many others. Use of chatbots by health care companies also may garner attention. We also expect to see AI-related criminal enforcement. For example, in June 2025, DOJ announced its 2025 National Health Care Fraud Takedown; among the schemes captured in this action was one in which defendants allegedly obtained Medicare beneficiaries’ identification numbers and confidential health information through fraud and used AI to create fake recordings of Medicare beneficiaries purportedly consenting to receive certain products. Through this scheme, defendants allegedly caused the submission of approximately $703 million in fraudulent claims to Medicare and Medicare Advantage plans.

Federal Enforcement Agencies Are Using AI as an Investigatory Tool

While federal and state agencies are working to regulate use of AI in health care through new laws and enforcement efforts, they are simultaneously embracing AI technology as an investigative and enforcement tool. For example, the Centers for Medicare & Medicaid Services (CMS) Innovation Center developed the Wasteful and Inappropriate Service Reduction (WISeR) Model, a voluntary model that will run for six performance years (starting January 1, 2026) in six states. The model will leverage AI, machine learning, and human clinical review to introduce prior authorization requirements for certain outpatient services that (1) may pose risk to patient safety if delivered inappropriately, (2) have existing publicly available coverage criteria, and (3) have been a source of fraud, waste, abuse, and inappropriate utilization (e.g., skin substitutes, knee arthroscopy for knee osteoarthritis). CMS is touting the model as benefiting patients (ensuring that ordered services are reasonable, necessary, and appropriate), providers (increasing transparency, predictability, and efficiency in reimbursement decisions), and federal health care programs (reducing fraud, waste, and abuse and ensuring beneficiaries are receiving appropriate care).

DOJ has long touted its use of data analysis to detect health care fraud, waste, and abuse, and it will undoubtedly employ AI to refine and improve its efforts in this regard. For example, in DOJ’s press release related to the 2025 National Health Care Fraud Takedown mentioned above, DOJ announced that it is working with other federal agencies to create a “Health Care Fraud Data Fusion Center ... to leverage cloud computing, artificial intelligence, and advanced analytics to identify emerging health care fraud schemes.” Such tactics may lead to an increase in DOJ-initiated FCA investigations, which already are on the rise.

Leading Medical Associations and Accrediting Bodies Offer AI-Related Guidance

As we look ahead to an era of further development in AI technology, health care providers and companies will need to balance their business needs and innovation goals with navigating a dynamic regulatory landscape that prioritizes patient and data protections. To help in achieving this balance, several medical associations have produced potentially useful resources and guidance frameworks:

  • A joint briefing from The Joint Commission and the Coalition for Health AI (CHAI) outlines guidance for health care organizations on deploying AI responsibly, from governance to monitoring and training protocols.
  • The American Medical Association (AMA) explained key considerations for ensuring the ethical and responsible use of AI in health care, highlighting the importance of a coordinated, human-centered approach that removes bias, secures data, and prioritizes transparency, and noting the need for increased educational efforts related to AI.
  • The National Institutes of Health (NIH) shared guidance on strategies to align AI guidelines with health care best practices, placing significant emphasis on the need for data privacy, bias prevention, safety and security, reliability and transparency, and human accountability.

Health care businesses and organizations that embrace the transformative potential of AI should consider these guidance documents, as well as federal and state guidance and legislation, in preparing for the future of AI. Doing so can mitigate legal and business risks and protect patients and clients. To that end, we recommend the following:

  • Lead with human oversight. Establish a dedicated AI governance committee that oversees the implementation of a formal AI strategy, adherence to compliance requirements, and development of workforce training.
  • Routinely monitor the technology. Implement ongoing processes to identify risks and regularly evaluate the technology for accuracy, reliability, and safety.
  • Prioritize transparency. Clearly communicate AI’s limitations, potential risks, and intended (and permitted) use to stakeholders.
  • Follow the government’s lead. Track developing federal and state regulation and guidance, and consider using the same AI tools that government agencies are using to detect and combat fraud and abuse. DOJ published an AI Use Case Inventory that tracks all of the AI tools DOJ has utilized since 2023, including tools that DOJ intends to use in civil FCA investigations. Health care organizations should consider using the same analytical tools as part of the auditing and monitoring function of their compliance programs, thereby minimizing the risk of DOJ enforcement scrutiny and potential FCA liability.

 



The next edition of EnforceMintz — our annual False Claims Act Statistical Year In Review — will analyze trends in FCA cases using data from DOJ’s recently released annual report on FCA settlements and judgments.


Authors

Daniel A. Cody is a Member at Mintz who represents clients across the health care and life sciences sectors, including the digital health industry, providing strategic counseling and leading civil fraud and abuse investigations. His practice encompasses a broad range of complex regulatory, compliance, privacy, and transactional matters.
Jordyn Flaherty

Jordyn Flaherty is a Mintz Project Analyst.
Samantha advises clients on regulatory and enforcement matters. She has deep experience handling federal anti-kickback statute and FCA investigations for clinical laboratories and hospitals.
Karen S. Lovitch

Member / Chair, Health Law Practice & Chair, Health Care Enforcement Defense Practice

Karen advises industry clients on regulatory, transactional, operational, and enforcement matters. She has deep experience handling FCA investigations and qui tam litigation for laboratories and diagnostics companies.