Tech Policy Trifecta: Data Privacy, AI Governance, and Content Moderation
Democrats and Republicans recognize that developing and advancing data privacy, artificial intelligence (AI), and online content moderation legislation is crucial, but the debate about which policy area to prioritize is ongoing. Political challenges, technological innovations, and new research findings have influenced policymakers’, technology companies’, scholars’, and advocacy organizations’ ever-evolving views on the most feasible and urgent policy area to address in this technology policy trifecta.
For issues at the intersection of all three policy areas, legislation in any policy area could be impactful. For instance, data privacy, AI governance, and online content moderation legislation could affect interactive computer service providers’ use of AI systems that process consumer data (which may include personally identifiable information) to conduct content moderation.
This explainer illustrates the role that AI-powered content moderation systems play in shaping online experiences and analyzes how three bills in the 118th Congress could influence the design, development, deployment, and oversight of these systems. The piece also highlights other issues at the intersection of data privacy, AI governance, and online content moderation policy and demonstrates that examining intersectional issues can help Congress legislate effectively across all three policy areas.
AI-Powered Content Moderation Systems and Their Effect on the Modern Internet
Given the extremely high and ever-increasing volume of online content and the toll that reviewing gruesome and disturbing content can take on human moderators, AI-powered content moderation systems play an important role in online ecosystems. App stores are home to millions of apps; YouTube hosts billions of videos; and even though the top 20 posts on Facebook collectively have over 700 million views in a single quarter, they represent only 0.04% of all U.S. content views that quarter. AI systems can help sift through that tremendous amount of information, identify and remove problematic material, and tailor content recommendations to particular individuals or groups at a speed and scale that human moderators cannot achieve. However, if these systems are not designed, developed, deployed, and monitored appropriately and effectively, they can cause harm by removing permitted content, disseminating prohibited content, or moderating content in a biased or otherwise problematic manner.
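To illustrate at a high level how such a system might operate, the Python sketch below scores each incoming post with a machine-learning classifier and automatically removes material the model flags with high confidence. The classifier, policy categories, and threshold are hypothetical placeholders, not a description of any particular platform's system.

```python
# A minimal, hypothetical sketch of an automated moderation pipeline.
# `policy_classifier`, the categories, and the threshold are
# illustrative assumptions, not any real platform's implementation.

from dataclasses import dataclass

REMOVE_THRESHOLD = 0.95  # act automatically only on high-confidence scores


@dataclass
class Post:
    post_id: str
    text: str


def policy_classifier(text: str) -> dict[str, float]:
    """Placeholder for a trained model that scores text against
    policy categories (a real system would run an ML model here)."""
    return {"hate_speech": 0.01, "graphic_violence": 0.02, "spam": 0.03}


def moderate(post: Post) -> str:
    scores = policy_classifier(post.text)
    worst = max(scores, key=scores.get)
    if scores[worst] >= REMOVE_THRESHOLD:
        return f"removed ({worst})"
    return "published"
```

Even in this simplified form, the design choices (which categories to score and where to set the threshold) determine whether permitted content is wrongly removed or prohibited content slips through.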
Legislation’s Impact on Intersectional Issues
Numerous provisions in data privacy, AI governance, and online content moderation bills could affect issues at the center of the technology policy trifecta. Below, this piece provides an example of how one data privacy bill, one AI governance bill, and one online content moderation bill could influence AI-enabled content moderation.
Children and Teens’ Online Privacy Protection Act (COPPA 2.0)
On May 3, 2023, Sens. Markey (D-MA) and Cassidy (R-LA) reintroduced COPPA 2.0 (S.1418) “to update online data privacy rules for the 21st century to ensure children and teenagers are protected online.” Although some may characterize COPPA 2.0 as a data privacy bill, its cosponsors acknowledge that its effects would extend throughout the technology policy trifecta. Sen. Markey stated that the bill would aim to “combat” the ongoing “youth mental health crisis by outlawing data practices that fuel harmful algorithms targeting kids and teenagers with toxic content.” “Kids and teens’ data is the raw material Big Tech uses to power algorithms that amplify toxic content, including posts promoting eating disorders and self-harm, which damage children and teens,” said Sen. Cassidy.
Section 6 of COPPA 2.0 contains provisions that aim to protect youth privacy by limiting the collection, use, disclosure, and compilation of children’s and teens’ personal information for targeted advertising. Specifically, the bill limits such practices when they involve (or are reasonably likely to involve) collecting personal information from a child or teen or when they occur on a “website, online service, online application, mobile application, or connected device” that “is directed to children or teens.” By restricting targeted advertising to children and teens, these provisions would establish content moderation obligations. To comply, website and app operators may need to update their AI-powered content moderation systems to ensure those systems do not collect personal information from children and teens and use that data for targeted advertising.
The “Removal of Content” provisions in Section 7 would establish a limited right-to-be-forgotten requirement that protects youth privacy and a content moderation obligation to implement mechanisms for removing certain content containing children’s and teens’ personal information. (Notably, the bill does not require website and app operators to remove content that the users requesting erasure did not submit themselves.) Depending on how courts interpret “to the extent technologically feasible,” fulfilling these obligations may require website and app operators to update their AI-powered content moderation systems and broader content moderation policies and practices. For example, operators that previously aimed only to identify and remove child sexual abuse material would need to update their policies, technological tools, and practices to address other content containing or displaying children’s and teens’ personal information.
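For illustration only, the hypothetical sketch below shows one way an operator might structure such a removal workflow, including the bill’s carve-out for content the requester did not submit. The data model and the feasibility check are assumptions about one possible compliance design, not an interpretation of the bill text.

```python
# Hypothetical erasure-request workflow for content containing a
# minor's personal information; the data structures and checks are
# illustrative assumptions, not an interpretation of the bill.

from dataclasses import dataclass


@dataclass
class ContentItem:
    content_id: str
    submitter_id: str
    contains_minor_personal_info: bool


def removal_is_feasible(item: ContentItem) -> bool:
    """Placeholder for a "technologically feasible" determination,
    e.g., whether the content has spread beyond the operator's control."""
    return True


def handle_erasure_request(item: ContentItem, requester_id: str) -> str:
    # The bill does not require removing content that the
    # requesting user did not submit themselves.
    if item.submitter_id != requester_id:
        return "denied: requester did not submit this content"
    if not item.contains_minor_personal_info:
        return "denied: no covered personal information identified"
    if not removal_is_feasible(item):
        return "partially fulfilled: full removal not feasible"
    return "removed"
```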
REAL Political Advertisements Act
Rep. Clarke (D-NY) initially introduced the REAL Political Advertisements Act (S.1596/H.R.3044) on May 2, 2023. Sens. Klobuchar (D-MN), Booker (D-NJ), and Bennet (D-CO) introduced the Senate companion bill on May 15, 2023. This legislation would amend the Federal Election Campaign Act of 1971 to establish new transparency and accountability obligations for AI-generated content (i.e., “content generated in whole or in part with the use of artificial intelligence”) in political advertisements.
Although the REAL Political Advertisements Act may appear to be purely an AI governance bill, several of its provisions would also affect data privacy and online content moderation policy. Section 4 contains a requirement to provide “clear and conspicuous” notice when covered communications contain “an image or video footage which was generated in whole or in part with the use of artificial intelligence (generative AI).” Labeling AI-generated images or recordings can help protect privacy. For example, labeling a deepfake image in a political advertisement can help protect the depicted individuals’ privacy by discouraging viewers from drawing erroneous inferences about the individuals’ views, actions, or physical appearance.
By helping viewers identify AI-generated content in political ads, Section 4’s notice requirement may also help limit the spread and harmful consequences of online misinformation, disinformation, and malinformation. The REAL Political Advertisements Act would not directly amend Section 230 of the Communications Decency Act. Nonetheless, this bill could motivate interactive computer service providers to implement new policies prohibiting users from posting or disseminating AI-generated images or recordings without providing clear and conspicuous notice that the images or recordings contain AI-generated content. Online platforms and other interactive computer service providers also may update their AI-powered content moderation systems to identify and remove AI-generated images and recordings that do not satisfy this notice requirement.
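As a rough sketch of how a platform might operationalize such a policy, the hypothetical code below checks whether an uploaded political advertisement carries an AI-generation disclosure and flags unlabeled media that a detector scores as likely synthetic. The metadata field, detector, and threshold are invented for illustration; because current AI-content detectors are imperfect, the sketch escalates suspect items to human review rather than removing them automatically.

```python
# Hypothetical compliance check for undisclosed AI-generated media in
# political ads; the metadata field, detector, and threshold are
# illustrative assumptions.

SYNTHETIC_THRESHOLD = 0.9


def detect_ai_generated(media: bytes) -> float:
    """Placeholder for an AI-content detector; real detectors return
    imperfect, probabilistic scores rather than certainties."""
    return 0.1


def check_political_ad(media: bytes, metadata: dict) -> str:
    if metadata.get("ai_generated_disclosure"):
        return "compliant: disclosure present"
    if detect_ai_generated(media) >= SYNTHETIC_THRESHOLD:
        # Likely synthetic but unlabeled: escalate instead of auto-removing.
        return "flagged for human review: possible undisclosed AI content"
    return "no action"
```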
Internet Platform Accountability and Consumer Transparency (Internet PACT) Act
Sen. Schatz (D-HI), Sen. Thune (R-SD), and six other Senators reintroduced the Internet PACT Act (S.483) on February 16, 2023. This bill would update “the Communications Act of 1934 by requiring social media companies to establish clear content moderation policies and holding them accountable for content that violates their own policies or is illegal.”
Though frequently called a content moderation (or “platform transparency”) bill, the Internet PACT Act has implications across the entire technology policy trifecta. In addition to establishing new content moderation obligations (i.e., requiring interactive computer service providers to publish an acceptable use policy and establish a corresponding complaint system), this bill would mandate that interactive computer service providers “publish a transparency report every six months.” The bill specifically states that interactive computer service providers shall publish transparency reports “in a manner that preserves the privacy of information content providers” and specifies elements that the transparency reports must include. Gathering and analyzing the information these transparency reports require may necessitate modifying existing AI-powered content moderation systems or developing new content moderation tools and practices.
Furthermore, the Internet PACT Act would direct the National Institute of Standards and Technology (NIST) to “develop a voluntary framework, with input from relevant experts, that consists of nonbinding standards, guidelines, and best practices to manage risk and shared challenges related to . . . good faith moderation practices by interactive computer service providers.” This content moderation framework would need to include “technical standards and processes” for sharing information among interactive computer service providers. Adopting these standards and processes may lead interactive computer service providers to adjust the data disclosure requirements and restrictions in their existing data privacy policies. NIST’s content moderation framework would also need to include “recommendations on automated detection tools and the appropriate nature and level of human review to correct for machine error in assessing nuanced or context-specific issues.” These recommendations may encourage interactive computer service providers to develop and improve AI system features that help flag context-specific issues and to alter the degree and type of human involvement in content moderation processes.
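To make the automated-plus-human approach concrete, the hypothetical sketch below routes content between automated action and human review based on model confidence and a context-sensitivity flag. The thresholds and the flag are illustrative assumptions about one possible design, not recommendations drawn from the bill or from NIST.

```python
# Hypothetical triage between automated action and human review;
# thresholds and the context-sensitivity flag are illustrative assumptions.

AUTO_REMOVE = 0.97  # act automatically only when the model is very confident
AUTO_ALLOW = 0.10   # pass clearly benign content without review


def triage(score: float, context_sensitive: bool) -> str:
    # Nuanced material (e.g., satire or news reporting on violence)
    # goes to a human regardless of model confidence.
    if context_sensitive:
        return "human review"
    if score >= AUTO_REMOVE:
        return "auto-remove"
    if score <= AUTO_ALLOW:
        return "auto-allow"
    return "human review"
```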
Considerations for Congress: Additional Intersectional Issues
Considering data privacy, AI governance, and online content moderation legislation’s potential influence on the AI-powered content moderation systems that shape our online experiences can help Congress craft bills that achieve important objectives while minimizing unintended negative consequences. Other issues at the center of the technology policy trifecta are also crucial to examine. Such issues include, but are not limited to:
- The relationship between privacy impact assessments and AI impact assessments or audits
- How data minimization requirements could hinder AI technology providers’ ability to help mitigate bias and other risks through training and testing
- How limitations on collecting children’s and teens’ data could affect efforts to ensure that the content that children and teens can access is safe and age-appropriate
- How data minimization requirements may limit online platforms’ ability to detect and investigate AI bot accounts and activity
- How similar an AI-generated image of an individual must be to a photograph of that individual to qualify as personal information protected under data privacy laws and regulations
- The extent to which individuals can meaningfully exercise data deletion rights when technology providers have already used those individuals’ personal data to train AI-powered content moderation systems or generally accessible generative AI systems
- How to ensure that data privacy, AI governance, and online content moderation legislation’s oversight and enforcement provisions create effective mechanisms to address issues at the intersection of these three policy areas
In future pieces, BPC plans to explore several of these issues and the ways in which legislation from the 118th Congress could impact them.
Conclusion
As interactions increasingly span the physical and digital realms, developing and advancing federal data privacy, AI governance, and online content moderation legislation will only become more vital. Both Republicans and Democrats in Congress recognize the need for legislation across all three policy areas and are becoming more attuned to how bills in one area may affect the other two. To avoid creating duplicative or conflicting requirements while advancing much-needed legislation across all three policy areas, identifying and assessing interrelated provisions in data privacy, AI governance, and online content moderation bills is essential. Analyzing and carefully addressing issues at the intersection of these three policy areas can help ensure federal legislation effectively mitigates risks, promotes benefits, and avoids unintended negative consequences.