FTC Warns AI Companies to Honor Privacy and Confidentiality Commitments — AI: The Washington Report
Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.
On January 9, 2024, the Federal Trade Commission (FTC or Commission) published a business guidance blog post entitled “AI Companies: Uphold Your Privacy and Confidentiality Commitments.” The piece discusses the Commission’s resolve to enforce the privacy commitments of certain AI firms known as “model-as-a-service” companies.
Model-as-a-service companies train and host AI models and make these models accessible to customers for their own personal or professional use. As discussed in a previous newsletter, developing complex AI models often necessitates the deployment of vast amounts of data and computing resources. Model-as-a-service companies allow firms that do not have access to these resources to more easily integrate AI into their products and services.
Our key takeaways are:
- The training of complex AI models, the core product offered by model-as-a-service companies, relies on vast troves of data. The FTC argues that this creates an incentive for model-as-a-service firms to collect data in ways that can be “at odds with a company’s obligations to protect users’ data, undermining peoples’ privacy or resulting in the appropriation of a firm’s competitively significant data.” This incentive may also undermine the principle of data minimization encouraged by privacy laws abroad and, increasingly, by some US state laws as well.
- The Commission warns that Section 5 of the FTC Act empowers the Commission to bring enforcement actions against companies, including model-as-a-service firms, that violate privacy commitments made to customers. AI firms found violating these commitments may be required to delete certain algorithms under an enforcement paradigm known as algorithmic disgorgement.
- The FTC asserts that the misappropriation of consumer data by model-as-a-service companies can “violate antitrust laws as well as consumer protection laws,” as such companies can “appropriate the competitively significant information of business customers” and put that information to anticompetitive ends. The Commission closes the post by warning that there is “no AI exemption from the laws on the books.”
The Role of Model-as-a-Service Companies in the AI Economy
Given the considerable expense of developing and maintaining AI models, firms known as “model-as-a-service” companies have sprung up to offer access to AI models for personal or enterprise use. On January 9, 2024, the FTC published a business guidance post warning model-as-a-service firms to respect their privacy commitments or else face legal consequences.
To illustrate the role that model-as-a-service companies play in the AI economy, the FTC provides the example of a business that would like to deploy an AI model to create a customer service chatbot program. This business could leverage a model developed by a model-as-a-service firm to power its chatbot program, saving the time and cost needed to develop a model in-house.
FTC: Data Collection Imperatives Do Not Trump Commitments to Consumers
Despite the benefits offered by model-as-a-service firms, the FTC argues that the growing importance of these companies poses risks to consumer privacy. Because the AI model training process depends on vast amounts of data, the FTC warns that “model-as-a-service companies have a continuous appetite for data to develop new or customer-specific models or refine existing ones.”
The imperative to collect consumer data “can be at odds with a company’s obligations to protect users’ data, undermining peoples’ privacy or resulting in the appropriation of a firm’s competitively significant data,” according to the FTC. Since companies are increasingly relying on the AI models provided by third parties for core business activities, the FTC is concerned that “[a] model-as-a-service company may, through its [software components], infer a range of business data from the companies using its models, such as their scale and precise growth trajectories.”
Given these risks, the FTC warns that model-as-a-service companies “that fail to abide by their privacy commitments to their users and customers may be liable under the laws enforced by the FTC.” Those companies might also face liability under antitrust statutes, state consumer protection laws, or even the state consumer privacy laws now proliferating across the country. The law the FTC primarily enforces against companies that violate their data privacy commitments is Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” For decades, the FTC has filed complaints against firms accused of violating their public representations regarding their handling of consumer data.
In recent years, the Commission has sought to ensure that enforcement keeps pace with technological developments. In light of recent advances in AI technology, the FTC is deploying a novel remedy in certain data privacy cases: “algorithmic disgorgement,” the enforced deletion of certain algorithms. In its January 9 blog post, the FTC warns that model-as-a-service companies that develop algorithms with data collected in a manner contrary to consumer protection laws may be subject to algorithmic disgorgement.
Importantly, the FTC warns that model-as-a-service companies must “abide by their commitments to customers regardless of how or where the commitment was made.” Along with terms of service and privacy policies, model-as-a-service companies will also be held to commitments made in promotional materials and in online marketplaces. Deceptive practices violating privacy commitments, the FTC cautions, can include “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print…” Model-as-a-service companies should carefully craft their online policies to reflect and clearly explain their actual practices, and stakeholders within these organizations should review and refresh those policies often so that their content keeps pace with fast-moving AI technology.
Along with harming consumer privacy, the FTC warns that “misuse of data associated with the training and deployment of AI models also pose[s] potential risks to competition.” This is because model-as-a-service companies may gain access to the “competitively significant information of business customers” and leverage that information toward anticompetitive ends. This could destabilize the marketplace as a whole, and it is an area we expect regulators to continue monitoring going forward.
Conclusion: Model-as-a-Service Companies Under the Microscope
The data collection and AI model training practices of model-as-a-service companies may, according to the FTC, “run afoul of the prohibition against unfair methods of competition.” The post ends with the Commission’s much-repeated refrain that there “is no AI exemption from the laws on the books.” Companies providing individual or enterprise access to AI models should be aware of the FTC’s stance on enforcing data privacy commitments and ensure that their actual practices align with the commitments, policies, and public statements they make to customers.