The US Securities & Exchange Commission Targets AI on Multiple Fronts — AI: The Washington Report
Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.
The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have exponentially increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of that Washington-focused set of potential legislative, executive, and regulatory activities.
This issue covers the U.S. Securities & Exchange Commission’s (“SEC”) regulatory interest in AI reflected in statements, proposed rules, and examinations of the activities of private fund advisers.
SEC Advances AI Regulatory Agenda
The U.S. Securities & Exchange Commission (“SEC”) is highly focused on regulating the use of artificial intelligence (“AI”) by financial services providers. A recent Mintz client alert discussed newly proposed rules for conflicts of interest in the use of AI, predictive data analytics, and similar technologies by broker-dealers and investment advisers (the “PDA Conflicts Rules”). If adopted as proposed, broadly construed, and strictly enforced, the PDA Conflicts Rules could fundamentally alter the regulatory landscape for advisers and broker-dealers and have a substantial impact on AI service providers.
The PDA Conflicts Rules are still in the proposal stage, but they represent just one aspect of the SEC's expanding AI regulatory efforts. SEC Chair Gary Gensler has highlighted specific risks associated with AI in recent public statements, an SEC “sweep” examination of private fund advisers focused on key risks appears underway, and AI-related SEC guidance and enforcement actions seem increasingly likely in the near term. We discuss below recent developments and what to watch for in the future.
Chair Gensler’s Speech
Part of the SEC’s recent focus on AI may reflect a view that it “will be at the center of future . . . financial crises.” Shortly before making this statement to the New York Times, Chair Gensler expressed a similar sentiment in a speech to the National Press Club, noting that AI could “play a central role in the after-action reports of a future financial crisis.” Chair Gensler used his National Press Club speech to highlight the following concerns about AI – echoes of which resurfaced about a week later in a proposing release for the PDA Conflicts Rules:
- Data Privacy and Intellectual Property. Chair Gensler noted that AI systems rely on enormous and diverse datasets, which can lead to concerns about data ownership, control, and rights. Concerns also exist about the privacy of personal details in these datasets and the misuse of intellectual property sourced, possibly without authorization, from various sources.
- Financial Stability. Chair Gensler expressed concern about AI intensifying existing financial vulnerabilities. For example, if many large financial institutions come to rely on a handful of AI models or data sources, their trading systems could act simultaneously on negative news or erroneous market indicators. If AI-driven signals prompted many large institutions to sell a publicly traded security at the same time, the resulting damage to the US financial system could be amplified by the global financial system's intrinsic interconnectedness.
- Deception. The growing use of AI can also lead to its misuse for deceptive purposes. Chair Gensler pointed to AI-generated deepfake images, stories, and other false information to illustrate his concerns. He also emphasized that regardless of the tool or method, fraudulent practices remain illegal under federal securities regulations and that there is always a human programmer or other individuals behind any AI program.
- AI Bias. Chair Gensler noted that AI models are intricate and fed by vast datasets, which can make their decision-making processes inscrutable. The inability to understand an AI’s reasoning can lead to concern about equitable results, especially if the data that predictive tools rely on reflect historical prejudices or unintentionally discriminate on the basis of protected attributes. Not knowing why AI takes certain actions may also result in inadvertent conflicts of interest, as discussed further below and in the SEC release proposing the PDA Conflicts Rules.
- Potential for Conflicts of Interest. Chair Gensler voiced concern about advisers or broker-dealers intentionally or unintentionally prioritizing their own interests over those of their clients when using AI applications. This concern about adviser and broker-dealer conflicts of interest served as a central justification for the PDA Conflicts Rules, as further discussed in a recent Mintz client alert.
While the SEC may lack authority to issue regulations addressing all of the concerns above, understanding Chair Gensler’s views may help advisers and broker-dealers contextualize future regulatory and enforcement activity by the agency.
SEC Sweep of Private Funds Utilizing AI
The SEC’s interest in the use of AI does not appear limited to rule proposals and speeches. According to various sources, the SEC staff is currently conducting sweep examinations of private fund advisers using AI (the “AI sweep”). Questions posed by SEC examiners in the AI sweep include requests for:
- A description of the AI models and techniques used by the advisers.
- A list of algorithmic trading signals and associated models.
- The sources and providers of their data.
- Internal reports of any incidents where AI use raised regulatory, ethical, or legal issues.
- Copies of any written supervisory policies and procedures addressing AI compliance.
- Contingency plans in case of AI system failures or inaccuracies.
- Client profile documents used by the AI system to understand a client's risk tolerance and investment objectives.
- AI-related security measures.
- A list and description of all data acquisition errors and/or algorithmic modifications made due to data acquisition errors.
- Samples of any reports detailing the validation process and performance of robo-advisory algorithms.
- A list of those who develop, implement, operate, manage, or supervise AI software systems.
- A list of all board, management, or staff committees with specific AI-related responsibilities, the frequency of any meetings, a list of the members of each committee, and whether minutes are kept.
- All disclosure and marketing documents to clients where the use of AI by the adviser is stated or referred to specifically in the disclosure, including audio and video marketing in which the adviser's use of AI is mentioned.
- A list of all media used to advertise, market, or promote products and services, including social media, chat forums, websites, due diligence questionnaire responses, PPMs, pitch books, presentations, newsletters, annual reports, and podcasts and/or other video or audio marketing, along with two recent examples of each type of advertisement.
These questions go far beyond the conflict-of-interest issues central to the PDA Conflicts Rules. Some sweep questions appear focused on AI-related trading practices and may tie into AI-related financial stability and interconnectedness concerns. Others extend beyond trading, including questions about marketing practices and AI-related disclosures, security and contingency planning, and specific individuals and governing bodies with AI-related responsibilities. All advisers and broker-dealers should pay attention to these broader questions, as they appear relevant to any use of AI by a broker-dealer or adviser.
Potential Future Action
The SEC could use information gathered in its AI sweep for one or more purposes. First, the SEC staff has not issued specific guidance on the use of AI by investment advisers or broker-dealers. In April of this year, the SEC’s Investor Advisory Committee issued a letter suggesting, among other things, that the SEC’s Division of Examinations draft best practices for the use of AI by advisers and broker-dealers. The letter proposed building on 2017 guidance issued for robo-advisers, which focused on three core areas of their businesses:
- The substance and presentation of disclosures to clients about the adviser and its services;
- The obligation to obtain information from clients to support the adviser’s duty to provide suitable advice; and
- The adoption and implementation of effective compliance programs reasonably designed to address particular concerns relevant to the adviser’s business.
The SEC staff provided detailed questions for robo-advisers to consider in each core area above, and many of the AI sweep questions appear to focus on similar areas – albeit in the AI context. The Division of Examinations often issues informal guidance in the form of “Risk Alerts” based on examination findings, so the SEC’s robo-adviser guidance may be informative for companies that want to start preparing now for future AI guidance.
Second, sweeps can precede SEC enforcement actions. Even without specific topical guidance, the SEC has taken action against companies in the past in a practice known as "regulation by enforcement." Thus, AI-related SEC enforcement actions may arise from the current AI sweep despite the lack of SEC guidance.
Final Thoughts
As reflected above, the SEC’s focus on the use of AI by financial service providers seems to have accelerated this year. This development underscores the need for advisers and broker-dealers to examine their existing use of AI tools, and to ensure that their internal policies and procedures address key areas of SEC concern outlined in its AI sweep letter and, where relevant, in the SEC staff’s 2017 guidance to robo-advisers. While conflicts of interest are likely to remain front and center due to the proposed PDA Conflicts Rules, recent speeches and AI sweep questions suggest a much broader SEC regulatory agenda and an increased likelihood of future regulation and SEC enforcement actions.