
EnforceMintz — A 2023 Legislative Push to Address AI in Health Care Will Continue in 2024

Since May 2023, federal legislators have introduced a broad array of bills designed to address the rapid proliferation of artificial intelligence (AI) technologies. In that time, over 50 bills have been introduced, the vast majority of which have been referred to various congressional committees (Mintz tracks AI-related legislation here). In addition, the Biden administration released its Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” in October 2023 (the “Executive Order”). Mintz covered the Executive Order in this post and likewise follows a wide range of AI-related developments here.

In November 2023, the House Subcommittee on Health of the Committee on Energy and Commerce held a hearing entitled “Understanding How AI is Changing Health Care” (the “Subcommittee Hearing”). Based upon statements from Congresswoman Cathy McMorris Rodgers (Chair, House Energy and Commerce Committee), Congressman Frank Pallone, Jr. (Ranking Member, House Energy and Commerce Committee), Congressman Brett Guthrie (Chair, Subcommittee on Health), and Congresswoman Anna Eshoo (Ranking Member, Subcommittee on Health), as well as witness testimony, three primary themes emerged from the Subcommittee Hearing concerning the use of AI in the health care sector: patient privacy and protection of patient data, the role of clinicians, and health equity. Given these key areas of congressional concern, we expect federal and state legislative activity to remain similarly focused, with multi-jurisdictional enforcement to follow.

Patient Privacy and Protection of Patient Data

AI-based tools often leverage voluminous patient medical data as companies continue to develop and advance these technologies. Congress is thus keenly focused on ensuring that health industry participants adopt safeguards to protect the privacy and security of patient data. During his Subcommittee Hearing testimony, Congressman Pallone noted: “As I have said at each of our AI hearings this year [2023], I strongly believe that as a bedrock of any AI regulation, we must enact strong federal data privacy protections for all consumers.” Adherence to existing federal privacy authorities, such as the Health Insurance Portability and Accountability Act of 1996, potential new federal legislation, and increasingly robust state patient privacy requirements will thus be paramount. For example, many AI machine learning tools rely on de-identified patient data, despite concerns that de-identified data sets may not be clinically sufficient. AI aside, privacy concerns about the use of patient data are not new, nor are enforcement actions in this space. Given historical enforcement activities and expressed congressional concerns, it is reasonable to expect that enforcement agencies and legislators alike will focus on the adequacy of the de-identification processes applied to data used to train AI tools.

The Role of Clinicians

In addition to improving patient diagnoses and outcomes, one of the most significant potential benefits of AI in health care is relieving administrative burdens on clinicians. For example, some AI-driven systems transcribe clinician-patient interactions, while others assist in triaging emergency room patients based on case severity. Based upon the Subcommittee Hearing, however, Congress is clearly concerned that AI tools may be improperly utilized to interfere with clinical decision-making and minimize vital clinician input. In announcing the Subcommittee Hearing, Chairs Rodgers and Guthrie stated: “This level of AI utilization is a new frontier in health care, and this committee has a vested interest in ensuring that it is improving patient care and driving innovation — not being used to supplant the clinical judgment of physicians or indiscriminately limit access to care.”

We have already seen some enforcement in this area (which we discuss in Artificial Intelligence and False Claims Act Enforcement). For example, over the past several years, administrative appeals, class actions, and enforcement actions against insurers have alleged that insurers inappropriately relied on AI predictive algorithms in making prior authorization and other determinations. In these cases, insurers were accused of denying services or terminating them early based solely upon AI predictive tools and over the objections of care providers.[1] In addition to efforts by enforcement agencies to remediate and prevent any improper use of predictive algorithms to deny medically appropriate care, we likewise expect to see regulators and legislators taking action to ensure that AI tools do not supplant the role of health care providers in delivering care and related services.

Health Equity in AI Tools

The Subcommittee Hearing also focused on existing inequities in the health care system and the potential for the design of AI tools to inadvertently exacerbate them. For example, electronic health records used to train AI tools might contain demographically biased clinical data or data that excludes underserved populations. AI tools trained on such biased data may perpetuate those biases and the related inequities. In fact, we have already seen enforcement in this area. In August 2022, the California Attorney General initiated an investigation into potential racial bias associated with hospitals’ use of algorithmic tools for patient scheduling and other activities (we previously covered the investigation in this post). Moreover, this year’s National Association of Attorneys General Symposium included a panel discussion on “Regulating Algorithms — The How and Why,” further demonstrating enforcement agencies’ focus on AI algorithms. We expect legislation in this area to follow.

Given the pervasive congressional focus on AI and algorithmic tools and continuing enforcement activity, health care sector participants leveraging AI must update and improve their compliance mechanisms regarding AI use. For AI-based tools in particular, such compliance should include periodic reviews by designated individuals or committees to ensure that AI tools are performing as intended and are not jeopardizing patient privacy or data protection, usurping the role of clinicians, or perpetuating health inequities.


Endnotes

[1] See, e.g., Estate of Gene B. Lokken and The Estate of Dale Henry Tetzloff v. UnitedHealth Group, Inc. et al., No. 23-cv-03514 (D. Minn., filed Nov. 14, 2023); Barrows and Haygood v. Humana, Inc., No. 23-cv-900654 (W.D. Ky., filed Dec. 12, 2023).


Authors

Daniel A. Cody is a Member at Mintz who represents clients across the health care and life sciences sectors, including the digital health industry, providing strategic counseling and leading civil fraud and abuse investigations. His practice encompasses a broad range of complex regulatory, compliance, privacy, and transactional matters.

Brian Dunphy

Samantha advises clients on regulatory and enforcement matters. She has deep experience handling violations of the federal anti-kickback statute and FCA investigations for clinical laboratories and hospitals.