SEC Chair Doubles Down on AI Conflict of Interest Rules, Warns Firms Not to “AI-Wash” — AI: The Washington Report
Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.
This issue covers recent statements by US Securities and Exchange Commission (“SEC”) Chair Gary Gensler regarding securities regulation and AI. Our key takeaways are:
- Rules proposed by the SEC in July 2023 seek to impose certain requirements on firms in the securities industry that utilize novel tools powered by data analytics techniques, including artificial intelligence. The comment period on these rules closed in October 2023, and the SEC is currently reviewing the comments it received. To date, the July 2023 proposals have met unusually stiff industry opposition.
- During a December 2023 panel discussion, Chair Gensler warned firms in the securities industry not to “AI-wash,” or make false or misleading claims about the use of AI technologies. Gensler’s warnings against “AI-washing” echo pronouncements made by other enforcement agencies such as the Federal Trade Commission (“FTC”) and Consumer Financial Protection Bureau (“CFPB”). To avoid being the subject of an enforcement action, firms should take care to ensure the accuracy of their AI-related representations.
- The SEC has aggressively pursued cases against registrants after issuing similar warnings against greenwashing in the environmental, social, and governance (“ESG”) space. Many in the industry may therefore view Chair Gensler’s statement as more of a threat than a caution.
SEC Proposes Rules on Conflicts of Interest with AI Tools
In July 2023, the SEC proposed rules titled Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers.
These rules were proposed, at least in part, to respond to AI-related developments in the securities industry. Professionals in a variety of fields have found productive uses for novel generative AI tools, and broker-dealers and investment advisers are no exception. Actors within the securities industry have increasingly utilized AI to enhance compliance and surveillance practices, provide customized investment advice, monitor for financial crime, and handle customer inquiries.
In its proposed rules, the SEC calls these novel tools predictive data analytics–like technologies (“PDA-like technologies”). While the SEC acknowledges that the use of PDA-like technologies by broker-dealers and investment advisers “can bring potential benefits for firms and investors,” the agency asserts that absent regulatory reform, the widespread use of these tools by securities professionals may pose a risk to investors.
For instance, an AI-powered tool created to automatically provide clients with tailored investment advice could be of great value, in part due to the potential of PDA-like technologies to “scale outcomes from analysis of data, and evolve at rapid rates.” However, if the tool’s algorithm is “tainted by a conflict of interest,” then this capacity to quickly scale could become problematic, as “the transmission of…conflicted advice and recommendations could spread rapidly to many investors.” Similar algorithmic error amplification concerns can arise in the trading context as well.
The SEC characterizes its proposed rules as seeking to address these kinds of concerns by imposing new requirements on securities firms utilizing PDA-like technologies. These requirements include:
- Addressing AI-related conflicts of interest: Firms would be “required to identify and eliminate, or neutralize the effect of, certain conflicts of interest associated with their use of PDA-like technologies because the effects of these conflicts of interest are contrary to the public interest and the protection of investors.”
- Establishing relevant policies and procedures: Firms with any investor interactions using covered technologies would be required to maintain relevant policies and procedures designed to limit conflicts of interest. The mandated policies and procedures fall into five categories:
- Descriptions of the process for evaluating any use of a covered technology in any investor interaction.
- Descriptions of “any material features of any covered technology used in any investor interaction and of any conflicts of interest associated with that use.”
- Descriptions of the process for determining whether “any conflict of interest identified pursuant to the proposed conflicts rules results in an investor interaction that places the interest of the firm or person associated with the firm ahead of the interests of the investor.”
- Descriptions of the process for eliminating or neutralizing the effect of any conflict of interest relevant to these proposed conflict rules.
- A review, conducted at least annually, of the effectiveness of the policies and procedures established pursuant to these rules.
- New record-keeping requirements: Finally, firms under the purview of these rules would be required “to make and keep books and records related to the requirements of the proposed conflicts rules” so as to facilitate SEC enforcement of the rules.
Industry criticism of the proposed rules was swift and wide-ranging. We discussed many key questions and potential criticisms of the proposal in a prior Mintz article.
Commenting on this proposed suite of rules during a December 2023 panel discussion, SEC Chair Gary Gensler asserted that the regulation of novel AI tools is definitively within the purview of securities law. “Artificial intelligence as we know it now still has humans in the loop…There [are] humans that are putting that AI model in place…and so, there are still humans that have responsibility for that AI model,” said Gensler.
The comment period on these rules closed in October 2023. During the December panel discussion, Chair Gensler stated that the SEC is in the process of reviewing these comments. We will continue to monitor the development of these rules and their potential impact on the securities industry.
“Don’t do it”: Chair Gensler Warns Firms Not to “AI-Wash”
During the same December 2023 panel discussion in which he discussed the SEC’s proposed predictive analytics rules, Chair Gensler warned firms not to misrepresent their AI capabilities. “One shouldn’t greenwash, and one shouldn’t AI-wash,” said Gensler. “If you’re raising money from the public, if you’re offering and selling securities, you come under the securities laws and give full, fair, and truthful disclosure, and then investors can decide.”
Though there is much uncertainty surrounding the future course of AI legislation in the United States, one feature of AI regulation in the current regulatory environment is clear: firms would be unwise to make false or misleading claims regarding their use of AI.
Escalating Enforcement Potential Across Multiple Agencies
The SEC is far from the only agency warning the firms it regulates against “AI-washing.”
As covered in a previous newsletter, the Federal Trade Commission (“FTC”) has been particularly vocal on this issue, warning advertisers “not to overpromise what your algorithm or AI-based tool can deliver” lest they violate consumer protection law. We have also discussed comments by the Consumer Financial Protection Bureau (“CFPB”) on this matter. According to the CFPB, financial institutions “run the risk” of noncompliance with federal consumer financial laws when “chatbots ingest customer communications and provide responses… [that] may not be accurate…”
Responding to the strong interest of lawmakers and members of the general public in the potential harms caused by advancements in AI technology, enforcement agencies across the federal bureaucracy are energetically pursuing this topic. These agencies have also been spurred on in no small part by President Biden’s October 2023 executive order.
Given the emphasis that a range of enforcement agencies have placed on pursuing false or misleading claims related to AI, firms should take care to ensure that their AI-related representations are accurate. In short, a firm's public statements regarding its use of AI tools should conform to the actual capabilities of those tools.
While enforcement actions on AI have so far been relatively few, we expect the pace of enforcement to increase in 2024. We will continue to monitor, analyze, and issue reports on these developments.