CFTC Report Calls on Agency to Engage in Rulemaking on AI — AI: The Washington Report

  1. A working group within the Commodity Futures Trading Commission (CFTC) released a report on May 2, 2024, concerning the risks posed by AI adoption in the derivatives market.
  2. The report warns that the adoption of AI tools by actors in the derivatives market could result in bias and discrimination, risks to customers’ personally identifiable information (PII), data quality concerns, and other harms.
  3. To anticipate and respond to these risks, the report recommends that the CFTC deepen “industry engagement” and institute “relevant guardrails,” including through a rulemaking that would codify some of the recommendations on AI risk management put forth in the National Institute of Standards and Technology’s AI Risk Management Framework.
     

   
On May 2, 2024, the Commodity Futures Trading Commission’s (CFTC) Technology Advisory Committee (TAC)[1] released a report entitled “Responsible Artificial Intelligence In Financial Markets: Opportunities, Risks & Recommendations.” Against the backdrop of rapid advances in AI, this report seeks to inform the CFTC of AI-related developments in the derivatives market.

The report begins by acknowledging that “AI represents a potentially valuable tool for CFTC internal and external stakeholders,” emphasizing that the technology might “improve automated processes governing core functions, including risk management, surveillance, fraud detection, and the identification, execution, and back-testing of trading strategies.”

However, the report cautions that stakeholders seeking to responsibly adopt AI must consider and respond to issues such as “responsible development, the quality of training data, the extent of involvement of humans in autonomous trading models, data privacy, auditing and oversight, and the breadth of internal talent” competent to oversee the implementation of AI. To encourage the responsible adoption of AI in the derivatives market, the report calls on the CFTC to deepen “industry engagement” and institute “relevant guardrails.”

The report is split into two main sections. The first outlines risks posed by AI adoption in the derivatives market, and the second recommends policies, programs, and rules for the CFTC to adopt to achieve AI readiness.

AI Risks in the Derivatives Market

Given the scale and speed of AI adoption by actors in the derivatives market, the report asserts that the CFTC should identify the “specific risks that have high salience for CFTC-regulated markets, and then measure the potential harm that could occur should these risks be insufficiently managed.” As a preliminary step in this direction, the report outlines several types of AI use cases relevant to derivatives markets and highlights the risks associated with each.

  • Trading and Investment: Leveraging AI to analyze data, identify trade execution strategy, predict asset prices, and engage in high-frequency trading.
    • Associated Risks: Data overfitting (a situation in which a model aligns too closely to its training data and cannot generalize for predictive purposes), data poisoning (a cyberattack in which a model’s training data is intentionally compromised), critical infrastructure dependence risks, and erroneous AI output.
  • Customer Communication: Utilizing AI to facilitate marketing, customer acquisition, and customer service and to provide customized investing advice.
    • Associated Risks: AI hallucinations (AI-generated responses that present false or misleading information as fact), privacy risks to customers’ PII, explainability and transparency risks, and biased and discriminatory treatment of customers.
  • Risk Management: Deploying AI to engage in margin model optimization, develop and execute hedging strategies, monitor and adjust excess funds requirements, analyze publicly available data for key developments related to depositories or counterparties, and engage in collateral and liquidity optimization.
    • Associated Risks: Data quality, AI hallucinations, critical infrastructure dependence, bias and discrimination, and explainability.
  • Regulatory Compliance: Using AI for market surveillance, recordkeeping, know-your-customer compliance, Anti-Money Laundering/Countering the Financing of Terrorism transaction monitoring, monitoring of employee communications, and monitoring of transfers and withdrawals from customer and proprietary accounts.
    • Associated Risks: Critical infrastructure dependence, AI hallucinations, bias and discrimination, explainability, and data privacy.

Recommendations for the CFTC to Become AI Ready

To prepare the CFTC for the changes AI will bring to the derivatives market, the report recommends five changes to the agency’s structure and policy.

  1. The CFTC should solicit opinions from stakeholders in the derivatives market on the “business functions and types of AI technologies most prevalent within the sector.” The suggested fora of engagement include public roundtable discussions and direct outreach to CFTC-registered entities.
  2. On the basis of the information collected through these rounds of public engagement, the CFTC should promulgate a rule that codifies some of the recommendations regarding AI risk management put forth in the National Institute of Standards and Technology’s AI Risk Management Framework.
  3. The CFTC should create an inventory of existing regulations that would impact the use of AI in the derivatives market. Utilizing this inventory, the CFTC should “develop a gap analysis of the potential risks associated with AI systems to determine compliance relative to further opportunities for dialogue, and potential clarifying staff guidance or potential rulemaking.”
  4. The CFTC should deepen interagency cooperation and alignment on AI issues with bodies such as the Securities and Exchange Commission, the Department of the Treasury, and “other agencies interested in the financial stability of markets.”
  5. The CFTC should increase its involvement in the development of AI standards by “engaging staff as both ‘observers’ and potential participants in ongoing domestic and international dialogues around AI” and increasing budgetary resources towards developing agency expertise on AI.

Conclusion

As discussed in previous newsletters, in the absence of action from Congress on comprehensive AI legislation, the executive branch has become the primary driver of AI policy. The Federal Trade Commission, the United States Patent and Trademark Office, the Consumer Financial Protection Bureau, and other agencies have consistently engaged on AI-related issues through workshops, requests for information, and rulemakings.

These agencies’ preliminary forays into AI have been encouraged by President Biden’s October 2023 executive order on AI, which, as we have covered in this newsletter series, has been marshaling agencies across the executive branch to use their existing authority to investigate and regulate AI.

Against that backdrop, we expect the CFTC to take up, in some fashion, the recommendations in this report.

With Majority Leader Schumer’s long-anticipated AI “road map” finally released, congressional action on AI may pick up pace. However, as we cautioned in our newsletter on the road map, “Whether comprehensive AI legislation will be implemented in a matter of years, if at all, is anybody’s guess.”

In the interim, the executive branch will continue to deepen engagement on AI. We encourage interested stakeholders to closely track AI-related developments coming out of relevant federal agencies. We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions as to current practices or how to proceed.

 

Endnotes

[1] The TAC is a panel that advises the CFTC “on complex issues at the intersection of technology, law, policy, and finance.”

 


Authors

Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He's an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Raj Gambhir

Raj Gambhir is a Project Analyst in the firm’s Washington, DC office.