Algorithmic Disgorgement: An Increasingly Important Part of the FTC’s Remedial Arsenal — AI: The Washington Report
Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.
This week, we discuss the Federal Trade Commission’s (FTC or Commission) use of a remedy known as “algorithmic disgorgement” in settlements with AI companies. Our key takeaways are:
- Algorithmic disgorgement is the enforced deletion of algorithms developed using illegally collected data. As stated by FTC Commissioner Rebecca Kelly Slaughter, the rationale behind this remedy is that “when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it.”
- The FTC first deployed this remedy in its 2019 settlement with Cambridge Analytica. The Commission has since included algorithmic disgorgement in multiple other settlements.
- The most recent settlement to include algorithmic disgorgement, the FTC’s December 2023 settlement with the Rite Aid Corporation, is significant as the Commission’s first use of its Section 5 unfairness authority against an allegedly discriminatory use of AI. The presence of algorithmic disgorgement in the Rite Aid settlement suggests that model deletion will feature more prominently in future AI enforcement actions.
The FTC’s Remedy of Choice for the AI Age?
As we discussed in last week’s newsletter, the FTC has for decades filed complaints against companies for data privacy violations. In certain cases, the FTC has been able to obtain settlements with these firms. These settlements often mandate that the offending firms implement changes to address and redress the data privacy violations, such as regular data privacy assessments and risk management systems.
As technology has evolved, so too has the FTC’s enforcement approach. This week, we discuss the Commission’s new enforcement paradigm for the AI age: algorithmic disgorgement, or the enforced deletion of algorithms developed using illegally collected data.
In just a few years, this remedy has gone from inchoate theory to a powerful enforcement tool leveraged against multibillion-dollar firms. As one key FTC official put it, algorithmic disgorgement is a “significant part” of the Commission’s overall AI enforcement strategy.
What is Algorithmic Disgorgement?
Most modern AI models, including the powerful large language models that power popular generative AI services, are developed through a process called “training.” During training, a model is fed massive amounts of data, and developers instruct it to “find patterns or make predictions” on the basis of that data. This process is repeated until the model reaches a sufficient level of accuracy or usefulness.
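For readers unfamiliar with this process, the following is a minimal, purely illustrative sketch of such a training loop, written here in PyTorch; the synthetic dataset, model architecture, and hyperparameters are hypothetical choices for illustration and are not drawn from any FTC matter.

```python
# A minimal sketch of the training loop described above: a model is fed data
# repeatedly, and its parameters are adjusted until its predictions reach an
# acceptable level of accuracy. All names and values here are illustrative.
import torch
from torch import nn

# Hypothetical stand-in for a dataset of collected records (features + labels).
features = torch.randn(1000, 16)
labels = (features.sum(dim=1) > 0).long()

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):  # repeat until the model is "sufficiently accurate"
    optimizer.zero_grad()
    logits = model(features)        # the model "finds patterns" in the data
    loss = loss_fn(logits, labels)  # measure prediction error
    loss.backward()                 # compute parameter adjustments
    optimizer.step()                # apply them

with torch.no_grad():
    accuracy = (model(features).argmax(dim=1) == labels).float().mean()
print(f"training accuracy: {accuracy.item():.1%}")
```

The detail that matters for disgorgement is that the trained model’s parameters are derived directly from the training data: once training is complete, the data’s value is embedded in the model itself, not just in the underlying dataset.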
As discussed in last week’s newsletter, the FTC has been concerned that AI model developers “have a continuous appetite for data to develop new or customer specific models or refine existing ones” and that this dynamic “can be at odds with a company’s obligations to protect users’ data…” In other words, the FTC believes that the imperatives of AI model training will lead certain AI firms to neglect their data privacy commitments.
For the FTC, this privacy harm called for a novel remedy: algorithmic disgorgement. FTC Commissioner Rebecca Kelly Slaughter wrote that the premise behind algorithmic disgorgement is that “when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it.” In this way, the enforced deletion of certain algorithms as part of a settlement can help prevent companies from profiting from illegally collected user data. According to Slaughter, the “authority to seek this type of remedy comes from the Commission’s power to order relief reasonably tailored to the violation of the law.”
FTC Settlements Including Algorithmic Disgorgement
The FTC first used algorithmic disgorgement as a remedy in its 2019 settlement with Cambridge Analytica, in which the Commission ordered the deletion of “any algorithms or equations, that originated, in whole or in part, from” data illegally collected from Facebook users.
The remedy evolved with its next application, in the FTC’s January 2021 settlement with Everalbum Inc. In this settlement, the FTC used the term “Affected Work Product” to denote “any models or algorithms developed in whole or in part using” illegally collected user data. The settlement required Everalbum to “delete or destroy any Affected Work Product” within ninety days.
In January 2022, Everalbum (now renamed Paravision Inc.) submitted its settlement compliance report to the FTC. In the report, Paravision confirmed that it had deleted the Affected Work Product as ordered by the FTC. This report demonstrates that FTC orders have led to the destruction of algorithms.
Following the Everalbum settlement, the FTC has used algorithmic disgorgement in three more settlements to date: with WW International Inc. and Kurbo Inc., with Edmodo LLC, and, most recently, with the Rite Aid Corporation.
FTC Case Against Rite Aid Corporation
On December 19, 2023, the FTC publicly announced its enforcement action against the Rite Aid Corporation for failing to “implement reasonable procedures and prevent harm to consumers in its use of facial recognition technology in hundreds of stores.” This complaint is significant in that it is the Commission’s first use of its Section 5 unfairness authority against an allegedly discriminatory use of AI.
In its complaint, the FTC alleges that Rite Aid failed to implement safeguards to prevent its AI-based facial recognition technology from identifying customers in an inaccurate and discriminatory manner. Due to these inaccuracies, customers falsely identified as having engaged in criminal behavior (such as shoplifting) were allegedly subjected to “increased surveillance,” store bans, police reports, and more.
The FTC claims in its complaint that Rite Aid has “used facial recognition technology in their retail stores without taking reasonable steps to address the risks that their deployment of such technology was likely to result in harm to consumers as a result of false-positive facial recognition match alerts,” thereby violating the Section 5 prohibition on “unfair acts or practices” in or affecting commerce.
In response to these alleged violations, the FTC’s proposed settlement mandates that Rite Aid destroy certain algorithms, ordering the company to “delete or destroy all photos and videos of consumers used or collected in connection with the operation of a Facial Recognition or Analysis System…and any data, models, or algorithms derived in whole or in part therefrom…”
In its April 2023 “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” the FTC signaled its intention to enforce the law against AI tools that “produce outcomes that result in unlawful discrimination.” As the Rite Aid case demonstrates, the FTC’s intention to police discriminatory uses of AI is more than just rhetoric.
Conclusion: More Algorithms in the Commission’s Crosshairs?
In July 2023, the associate director of the FTC’s Division of Privacy and Identity Protection stated that algorithmic disgorgement is a “significant part” of the FTC’s AI enforcement strategy. Given the potential consequences of losing developed algorithms and associated data to an FTC action, firms should ensure that they are not developing algorithms with illicitly collected data.
We will continue to monitor, analyze, and issue reports on these developments. Please feel free to contact us if you have questions as to current practices or how to proceed.