New AI Disclosure Bill and AI Strategic Plan Update — AI: The Washington Report
Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies (MLS). The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates understandably have exponentially increased the federal government’s interest in AI and its implications. In our weekly reports, we hope to keep our clients and friends abreast of that Washington-focused set of potential legislative, executive, and regulatory activities. Other Mintz and ML Strategies subject matter experts will continue to discuss and analyze other aspects of what could be characterized as the “AI Revolution.”
Today’s report focuses on three recent developments: a bill requiring disclaimers on content created by generative AI, the Biden Administration’s update to the National Artificial Intelligence Research and Development Strategic Plan, and a recent Senate Judiciary Committee hearing on intellectual property and artificial intelligence. Our key takeaways are:
- Stakeholders should be aware of the newly introduced AI Disclosure Act of 2023. The bill, which would require generative AI to include a disclaimer on any output it produces, may be included in future omnibus AI legislation.
- The Biden administration's update to the National Artificial Intelligence Research and Development Strategic Plan includes a new AI R&D strategy on international cooperation and competition, signaling a reaffirmation of the need to situate federal AI R&D strategy within a global context.
- A June 7 Senate Judiciary Committee hearing on AI and patent law saw Chair Chris Coons (D-DE) and Ranking Member Thom Tillis (R-NC) both call for patent law reform that would encourage AI innovation and boost U.S. competitiveness globally.
The AI Disclosure Act of 2023: Mandating Generative AI Disclaimers
On June 5, 2023, Congressman Ritchie Torres (D-NY-15) introduced H.R. 3831, the AI Disclosure Act of 2023 (“AI Disclosure Act”). The bill would require generative artificial intelligence to include on any output it produces the following notice: “Disclaimer: this output has been generated by artificial intelligence.” Representative Torres’s bill would treat violations of its generative AI disclosure requirement as “unfair or deceptive acts or practices” under the terms of the Federal Trade Commission (“FTC”) Act. In other words, the bill assigns enforcement to the FTC, which would bring its current enforcement orientation to the task.
In introducing the bill, Torres stated that given the threat of “mass disinformation, dislocation, and destruction” potentially posed by AI, regulating this technology will be “one of the central challenges confronting Congress in the years and decades to come.” Although he admits that his proposed intervention “is by no means a magic bullet,” Torres nevertheless believes that his bill is “a common-sense starting point to what will surely be a long road to regulation.”
At three pages, the bill is brief and leaves many implementation details unaddressed. For instance, the bill does not address the question of how content developed in part by generative AI and in part by human beings should be handled. This lack of precision appears to be by design. A spokesperson for Representative Torres informed The Hill that the office’s hope is that the AI Disclosure Act eventually becomes part of a larger legislative package on AI regulation. The spokesperson also indicated that if the bill were to become law, the implementation details would be left to the FTC.
In the days since the bill’s announcement, the concept of mandating a disclaimer on all content produced by generative AI, if not the AI Disclosure Act itself, has received qualified bipartisan support. Representative Nancy Mace (R-SC-1), Chairwoman of the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation, spoke favorably of the approach adopted by the AI Disclosure Act. “While this particular bill may not be the best solution, by requiring a disclaimer for AI content, we empower users to make informed decisions about the information they consume,” commented Mace in a recent interview with Fox News.
Given the potential for broader bipartisan support of the AI Disclosure Act, stakeholders should closely monitor the progress of this legislation and note the possibility of its inclusion in an omnibus AI bill.
The 2023 Update to the AI Strategic Plan: An International Focus
In last week’s report, we discussed the Biden administration’s May 2023 request for information (“RFI”) on “National Priorities for Artificial Intelligence.” Alongside this RFI, the White House released the long-awaited update to the National Artificial Intelligence Research and Development Strategic Plan (“Strategic Plan”). Since we gave a detailed account of the Strategic Plan’s evolution in last week’s report, here we will provide an abridged summary.
In October 2016, the Obama White House released the first version of the Strategic Plan, a report detailing “a set of objectives for Federally-funded AI research.” The objectives outlined in this initial version of the report are:
- Make long-term investments in AI research
- Develop effective methods for human-AI collaboration
- Understand and address the ethical, legal, and societal implications of AI
- Ensure the safety and security of AI systems
- Develop shared public datasets and environments for AI training and testing
- Measure and evaluate AI technologies through standards and benchmarks
- Better understand the national AI R&D workforce needs
The Trump administration released an update to the Strategic Plan in June 2019. This update included new commentary to reflect advances in AI technology and an additional eighth strategy: “Expand public-private partnerships to accelerate advances in AI.”
Development of the Strategic Plan has continued into the Biden administration. On February 2, 2022, the Office of Science and Technology Policy (“OSTP”) launched a “Request for Information to the Update of the National Artificial Intelligence Research and Development Strategic Plan.” In this RFI, the OSTP solicited “input on potential revisions to the strategic plan to reflect updated priorities related to AI R&D.”
After more than a year of deliberation, the National Science and Technology Council released the 2023 update to the Strategic Plan in May. Noting advances in computing that have allowed AI to become “ubiquitous in modern life and touch nearly every facet of daily activities,” the authors of the report assert that “realizing AI’s potential social and economic benefits and aligning it with American values requires considerable research investments, pursued in accordance with the principles of scientific integrity.” To this end, the 2023 Strategic Plan “defines the major research challenges in AI to coordinate and focus federal R&D investments.”
The updated report largely affirms the eight AI R&D strategies outlined in the Obama and Trump Strategic Plans, updating commentary to reflect technological and regulatory changes that have occurred since 2019. The most noteworthy aspect of the 2023 Strategic Plan is the addition of a ninth AI R&D strategy: “Establish a Principled and Coordinated Approach to International Collaboration in AI Research.”
Given that AI “research production has become increasingly geographically dispersed” in the last decade, the report asserts the need to implement strategies that ensure “the United States remains a central hub within the AI R&D ecosystem.” The report outlines four such strategies:
- Cultivating a Global Culture of Developing and Using Trustworthy AI: To encourage the global use of trustworthy AI, or “AI with attributes that conform to various ethical, legal, and societal standards,” the U.S. should pursue “collaboration with likeminded nations” and “evaluate the risks of pursuing AI R&D collaboration with partners in countries that might not share democratic values or respect for human rights.”
- Supporting Development of Global AI Systems, Standards, and Frameworks: The U.S. should engage in collaborative research with international partners to develop “metrics, test methodologies, quality and security standards, development practices, and standardized tools for the design, development, and effective use of trustworthy AI systems.”
- Facilitating International Exchange of Ideas and Expertise: “Agency-to-agency collaborations and broader bilateral and multilateral cooperative arrangements” should be cultivated, as these partnerships “provide an opportunity for the United States to address gaps by leveraging AI research expertise around the world.”
- Encouraging AI Development for Global Benefit: Given the potential for AI to be deployed in a manner that is harmful to American interests and security, the U.S. should collaborate with international partners to “restrict competitors and adversarial nations from gaining access to or acquiring advanced AI tools and associated technologies critical to U.S. national security and other interests.” The U.S. should also cooperate with international partners to investigate ways in which AI can be proactively deployed to address issues of global concern, such as climate change, food insecurity, and public health emergencies.
While this updated AI R&D strategy reflects the distinctive priorities of the Biden administration to a certain extent, it is important to note that executive-level action on the development of an international framework for AI R&D stretches back to the Obama administration. The original 2016 Strategic Plan rhetorically asked whether there are “opportunities for industrial and international R&D collaborations that advance U.S. priorities.”
The Trump administration answered this question affirmatively. In his February 2019 executive order (“E.O.”), “Maintaining American Leadership in Artificial Intelligence,” Trump listed the promotion of “an international environment that supports American AI research…while protecting our…critical AI technologies from acquisition by strategic competitors and adversarial nations” as one of the five principles guiding his proposed American AI Initiative.[1]
The Trump administration advanced this commitment through the signing of two documents, the first on AI principles and the second on AI R&D collaboration. In May 2019, the United States joined its Organisation for Economic Co-operation and Development (“OECD”) allies and a handful of other nations in signing the non-binding “OECD Principles on Artificial Intelligence.” The following year, in September 2020, the U.S. and the United Kingdom released a joint statement affirming the nations’ desire to establish a “bilateral government-to-government dialogue” in the service of fostering “an AI R&D ecosystem that promotes the mutual wellbeing, prosperity, and security of present and future generations.”
Against this background, the addition of the ninth AI R&D strategy in the 2023 Strategic Plan is best understood as a reaffirmation of the need to situate federal AI R&D strategy within a global context, leveraging partnerships and pre-empting adversaries. This renewed emphasis on developing an international AI R&D strategy is reflected in the May 2023 leaders’ communiqué from the recent Hiroshima G7 Summit. In the communiqué, the signatories affirmed the need to “advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values.”
Judiciary Committee Hearing on AI and IP: Patents, Innovation, and Competition
On June 7, 2023, the Senate Judiciary Committee’s Subcommittee on Intellectual Property held the first in a series of hearings on “Artificial Intelligence and Intellectual Property.” This hearing centered on patents, innovation, and competition, while subsequent hearings in the series will address other intellectual property issues.
Patents: The Thaler Case and Suggestions of Reform
As advances in computing have allowed autonomous systems to assist in the creation of inventions in fields like biotechnology, the relationship between patent law and AI has gained salience. This issue came to a head in April 2023, when the Supreme Court declined to hear an appeal brought by Dr. Stephen Thaler in Thaler v. Vidal.[2] In that case, the Federal Circuit ruled that AI systems cannot be considered “inventors” because “the Patent Act requires an ‘inventor’ to be a natural person.” With Thaler and related developments as background, this hearing investigated the need for patent law reform in the age of artificial intelligence.
Chair Chris Coons (D-DE) opened the hearing by emphasizing the need to reform patent eligibility law in light of advances in artificial intelligence. “We should change our patent eligibility laws…so that we can protect critical AI innovations,” asserted Chair Coons. The Chair alluded to Thaler, noting that given the Supreme Court’s decision not to hear the case, “decisions we make in Congress about whether and how to protect AI related innovations will…have significant consequences for U.S. innovation and competitiveness.”
Ranking Member Thom Tillis (R-NC) concurred, reasoning that “We have to have certainty and clarity” with regard to the status of AI-generated inventions in American patent law, or the United States may “run…the risk” of losing its “competitive advantage” in the field of AI innovation.
Along with the possibility of reforming patent law to account for AI innovations, the committee members also discussed measures to address AI use cases that could disrupt the patent process. Chair Coons raised the prospect of a malicious public or private actor using generative AI to “write and file a very large number of patent applications in an attempt to lock up patenting opportunities.” To address this potential harm, the Chair raised the possibility of increasing patent fees for certain large entities.
Innovation: Ensuring Regulation Stimulates AI Innovation
With the rapid adoption of publicly available generative AI tools such as ChatGPT, lawmakers and technologists have been raising the alarm about the potential harms attendant to this technology. A range of solutions has been proposed to address these harms, including targeted regulation, the creation of a new administrative agency tasked with regulating AI, and a temporary “pause” on all AI development pending further study of how to prevent potential dangers.
During the June 7 hearing, Senator Richard Blumenthal (D-CT) strongly opposed the concept of an AI pause, calling the measure a “totally impractical” move that would serve to benefit “competitors in other jurisdictions.” Blumenthal instead called for legislative and private sector interventions that would respond to AI’s potential harms without blunting the technology’s development. As a legislative response, Blumenthal suggested the creation of “a new agency” to regulate AI in a manner that would “not in any way impinge on the current patent system.”
On the private sector side, Blumenthal called on AI developers to conduct their R&D with a greater degree of caution. During the hearing, Blumenthal referenced a recent letter he co-authored with Senator Josh Hawley (R-MO) to Meta criticizing its “unrestrained and permissive” release of an AI language model whose “open dissemination…represents a significant increase in the sophistication of the AI models available to the general public, and raises serious questions about the potential for misuse or abuse.”[3]
Chair Coons also expressed concern that AI regulation should address potential harms without “favoring well-funded and established companies” or discouraging innovation.
Competition: The Impact of IP Policy on America’s Global AI Competitiveness
For much of the hearing, the committee members assessed existing U.S. patent law against the laws of other major AI innovators, such as the European Union and China. Ranking Member Tillis argued that “if we don’t tackle these IP issues, we are [less] likely to be the jurisdiction where [AI] inventions occur.” Senator Marsha Blackburn (R-TN) echoed Tillis’s concern, noting that from “2011 to 2021 Chinese AI related patents accounted for nearly 75% of the global total. That should be a wake-up call to every one of us.”
Given the emphasis that competitors such as China have placed on recognizing “IP policy as an important tool in national strategies for AI and other emerging technologies,” Chair Coons called for the establishment of a “rights regime that encourages AI generated innovation to stay here in the United States instead of incentivizing innovators to turn to other countries with more favorable laws to protect their AI generated inventions.” Anticipating federal legislation on AI, he asserted that it is “critical that we include IP considerations in ongoing AI regulatory frameworks, and make certain that the U.S. Patent and Trademark Office has a seat at the table.”
Endnotes
[1] As discussed in last week’s report, the American AI Initiative proposed in the February 2019 E.O. became the National Artificial Intelligence Initiative Office, as established by the National Artificial Intelligence Initiative Act of 2020.
[2] Mintz AI attorney Drew DeVoog spoke to Dr. Stephen Thaler and Professor Ryan Abbott, the latter a witness in the June 7 hearing, on an October 2021 edition of the Mintz podcast “EXCLUSIVE RIGHTS: Intellectual Property.”
[3] In February 2023, Meta released LLaMA, a large language model, making the model’s code available for download by approved researchers. Shortly after, the model was leaked onto the internet.