FTC Staff Report Warns of Privacy Risks in Social Media Companies’ AI Practices — AI: The Washington Report

  • On September 19, 2024, the FTC released a Staff Report on the data collection and data use practices of large social media and video streaming service (SMVSS) platforms.
  • The Report describes how these platforms engage in mass user data collection to monetize users’ personal information but fail to implement sufficient guardrails to protect consumers from privacy risks and other harms.
  • The Report highlights that the platforms “heavily” rely on AI and algorithms both to collect user information and to power their social media and video streaming services. However, according to FTC staff, the platforms’ use of AI raises privacy and civil rights concerns, and they have taken inconsistent and insufficient approaches to monitoring and testing AI.
  • Concluding that the platforms’ self-regulation is “failing,” the Report recommends that Congress pass comprehensive privacy legislation and data rights protections.
  • Although all of the FTC’s Commissioners voted to issue the Report, two Commissioners released dissenting statements, highlighting concerns that the Report puts the FTC on the pro-regulation side of the AI debate.

On September 19, 2024, the FTC released a Staff Report titled “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services.” The Report finds that major SMVSS platforms have engaged in “vast surveillance” of users to monetize their personal information without adequate guardrails in place to protect consumers. It concludes that the companies’ data collection and use practices were “woefully inadequate” and urges Congress to pass comprehensive privacy legislation and data rights protections.

The Report devotes an entire section to AI (the focus of this note), on which the platforms “heavily” relied to collect user information and power their SMVSSs. The Report asserts, however, that the platforms had “differing, inconsistent, and inadequate approaches” to testing and monitoring their use of AI and other automated systems, despite the potential harms to privacy and civil rights that AI poses.

The Report’s Findings

In December 2020, the FTC issued orders under Section 6(b) of the FTC Act to nine of the largest SMVSS platforms. The orders requested information on the platforms’ data collection and use practices, including “how they collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens.”

Almost four years later, the Staff Report lays out a number of findings about these platforms’ data and advertising practices. The Report zeroes in on the platforms’ use of algorithms and AI to collect and process user data to power their SMVSSs. Specifically, it highlights five key findings about algorithms and AI:

  1. Companies use AI and algorithms to run their platforms and SMVSSs. Most of the platforms have “relied heavily” on algorithms and AI to ingest personal information to power their SMVSSs, “to carry out most basic functions and to monetize their platforms.” “Automated systems have dictated much of the user’s experiences,” the Report finds. Algorithms and AI also predict and infer a wide range of personal details about users, including their “interests, habits, demographic categories, familial status and relationships, employment and income details, and likely other details and information not provided by the Companies.”
  2. Companies collect user information from many sources. To run their algorithms and AI systems, most of the companies not only used personal information sourced from the users themselves but also collected “information passively about users’ and non-users’ activity across the Internet and in the real world (i.e., location information),” while other companies relied on information from third parties, including data brokers and harvesters.
  3. The use of personal information in algorithms and AI models presents a host of concerns. The companies’ “use of personal information by algorithms, data analytics, or AI raises privacy and other concerns,” including potential risks and harms to consumers’ civil rights. For example, AI models that infer demographic information about users “can lead to sensitive inferences or categorizations,” which can “be especially harmful to specific groups that face identity-based threats or unlawful discrimination.” The Report outlines several sources of potential bias in algorithms and AI models, including “skewed, unrepresentative, or imbalanced datasets that can lead to erroneous outputs” and “black box” models that lack transparency. Compounding these problems is the fact that consumers harmed by algorithms and AI “often have no recourse when it comes to biases or inaccurate data or decisions.” Moreover, most companies did not give consumers the option to opt in to, or out of, the use of their personal information by these systems.
  4. AI and algorithms used by the companies may especially harm children and teens. AI models and algorithms can have negative mental health consequences for children and teens when they are designed to favor engagement. “Social media platforms are often designed to maximize user engagement, which has the potential to encourage excessive use and behavioral dysregulation” that may harm children and teens in particular, according to the 2023 Surgeon General’s Advisory titled “Social Media and Youth Mental Health,” which the Staff Report cites. Meanwhile, only a few companies provided parents with controls to limit their children’s use of the platforms.
  5. The companies did not have a uniform or standard approach to monitoring AI and algorithms. Some companies had internal teams dedicated to AI oversight, while others did not. Furthermore, the frequency and manner in which they monitored and tested algorithms and AI also varied greatly from company to company. Some companies lacked “specific policies or practices to monitor and test for things such as unlawful discrimination.” The Report asserts that these “differing, inconsistent, and inadequate approaches” to monitoring and testing AI raise concerns about the companies’ ability to self-regulate.

FTC Staff Recommendations

The Staff Report makes a number of recommendations to inform companies and policymakers regarding data and advertising practices and the companies’ uses of algorithms and AI. Three recommendations relate specifically to algorithms and AI:

  1. Companies are advised to “address the lack of access, choice, control, transparency, and interpretability relating to their use of automated systems.” Users could not control whether their information was used by AI and algorithms, nor did they have recourse to correct inaccurate data or determinations.
  2. Companies should also “implement more stringent testing and monitoring standards.” The companies’ heavy reliance on AI and algorithms, coupled “with sometimes limited, inconsistent, or differing human review, oversight, or testing practices, poses risks for consumers and society.”
  3. Congressional “legislation and regulation are badly needed.” “Self-regulation is failing,” according to the Report, “when it comes to ensuring these firms’ AI systems do not result in unlawful discrimination, error, addiction, and other harms.” The Report notes that, although the FTC has authority under Section 5 of the FTC Act to regulate AI, “comprehensive federal legislation would cement baseline consumer data rights and protections” and provide regulators and enforcers with the tools to address the myriad challenges that algorithms and AI pose.

Reaction from FTC Commissioners

All five FTC Commissioners voted to issue the Report, and four of them released concurring statements. In their statements, the Commissioners applaud the Report for shedding light on the privacy concerns that exist on SMVSS platforms, as well as the specific harms that these platforms pose to children and teens. However, in addition to their concurrences, both Republican-appointed Commissioners released dissenting statements.

In her dissent, Commissioner Holyoak voices concern that the Report “may affect free speech online” because it relates to how platforms regulate and moderate online content. Furthermore, she argues that the “Report’s so-called ‘recommendations’ effectively seek to regulate private conduct through a sub-regulatory guidance document,” but the FTC “should not dictate or otherwise seek to reshape private-sector conduct in a guidance document.”

Commissioner Ferguson’s dissent focuses on how the Report treats AI. He contends that the purpose of the Report is to put the FTC “firmly on the pro-regulation side of the AI debate raging across academia, industry, and government.” This side is the “wrong one,” according to Ferguson, because “neither AI’s creators nor its would-be regulators really understand it” and “imposing comprehensive regulations at the incipiency of a potential technological revolution would be foolish.” Instead, Ferguson contends that existing laws are sufficient to regulate AI and mitigate its risks and harms, but he acknowledges that “a time may come when comprehensive federal AI legislation would be appropriate.”

The Report comes as a number of AI bills have stalled in Congress, as we’ve covered, and as federal lawmakers have yet to introduce comprehensive federal AI legislation, despite calls for such legislation from the Bipartisan Senate AI Working Group earlier this year. We will continue to monitor and report on federal activity on AI. Our Privacy and Cybersecurity Practice Group follows the full spectrum of privacy issues raised in this Report and elsewhere.

 


Authors

Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He is an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Matthew Tikhonovsky

Matthew is a Mintz Senior Project Analyst based in Washington, DC.