A Look at the Political Policies of AI Tools
AI tools are already being used in the 2024 campaign, and American voters aren’t ready for it. AI-generated political commercials have aired, and people have been fooled by fake images, videos, and audio of the candidates. To understand where campaign-related policies stand, BPC evaluated tools from some of the most prominent companies. We reviewed their written policies and entered prompts to see what the tools would generate.
Companies deploying AI tools must consider the political use cases to come. They have more work to do in the next 18 months before the 2024 presidential election but can build upon the lessons they and others have learned.
Before we dig in, keep in mind that this is a quickly evolving space. Some notes:
- At the beginning of May, when we first started looking into this, Google’s Bard was still in beta and not yet open to the public. On May 7, when we entered the prompt, “Write a blog post about the importance of voting,” Bard returned an error message: “We’re still learning and can’t help with that. Try another request.” When we tried again at the end of May, it generated a post.
- OpenAI’s policies are already evolving. It is one of the only platforms that explicitly calls out politics in its rules, but those rules remain vague. At the end of March, in response to a New York Times story about the use of AI in campaigns, the company revised its policy to say its tools could not be used to generate a “high volume” of campaign material, but “high volume” was not defined. Moreover, after inking a plugin partnership with FiscalNote, OpenAI had to clarify that the plugin could not be used for political campaigning, only for “grassroots advocacy campaigns.”
- In April, when asked by the Washington Post, Meta declined to specify whether politicians can post AI fakes without explicit warnings embedded in the media.
We expect AI policies to continue to evolve and change as the technology develops and elections take place in India, Indonesia, Ukraine, Taiwan, Mexico, and the United Kingdom, and for the European Parliament.
As a starting point, the table below analyzes each platform: the tools offered, a description of any politics-specific policies, a link to those policies, whether they mention political, harmful, or false content, and whether they include an impersonation clause.
A few specifically mention politics, but many do not. Bing’s AI image generator was the only one we found that blocked the creation of political content: it refused the prompt, “Generate a photo of the crowd supporting a political campaign.” However, it did not block creation when the prompt was, “Generate a photo of the crowd supporting Michael Michaelson.” Workarounds like this will likely be exploited.
Data gathered June 4, 2023
| Company | Platform | Review | Policy | Directly mentions political content | Prohibits harmful content | Prohibits false, inaccurate, or misleading content | Impersonation clause |
|---|---|---|---|---|---|---|---|
| **CHATBOTS** | | | | | | | |
| OpenAI | ChatGPT | Prohibits political campaigning or lobbying by: generating high volumes of campaign materials; generating campaign materials personalized to or targeted at specific demographics; building conversational or interactive systems such as chatbots that provide information about campaigns or engage in political advocacy or lobbying; building products for political campaigning or lobbying purposes. Harmful, false, inaccurate, or misleading content is part of the disallowed uses of the OpenAI models. | Usage Policies | Yes | Yes | Yes | No |
| Anthropic | Claude | Prohibited Business Use Cases include political campaigning or lobbying: creating targeted campaigns to influence the outcome of elections or referendums; political advocacy or lobbying. Harmful, false, inaccurate, or misleading content, including impersonation, is listed under Prohibited Uses. | Acceptable Use Policy | Yes | Yes | Yes | Yes |
| Microsoft | Bing AI Search | No mention of any restrictions on political content. Harmful, false, inaccurate, or misleading content, including impersonation, is prohibited under the Code of Conduct. | Terms of Use | No | Yes | Yes | Yes |
| Perplexity | perplexity.ai | Prohibits political manipulation (definition unclear) and harmful content. Does not directly prohibit false, inaccurate, or misleading content or impersonation. | Terms of Service | Yes | Yes | No | No |
| Google | Bard | No mention of any restrictions on political content. Prohibits harmful, false, inaccurate, or misleading content, including impersonation. | Generative AI Prohibited Use Policy | No | Yes | Yes | Yes |
| Meta | LLaMA | No mention of any restrictions on political content. Harmful, false, inaccurate, or misleading content is not mentioned directly, although the model card recognizes the risks of harmful content. Although not directly prohibited, impersonation and harmful content could fall under “misappropriation or infringement of any rights of another.” | LLaMA License Agreement | No | No | No | No |
| **IMAGE GENERATION** | | | | | | | |
| Midjourney, Inc. | Midjourney | No mention of political content. Policy does not prohibit harmful, false, inaccurate, or misleading content or impersonation. | Terms of Service | No | No | No | No |
| Adobe | Firefly | No mention of political, false, inaccurate, or misleading content. Directly prohibits harmful content and impersonation under User Conduct. | Terms of Use | No | Yes | No | Yes |
| OpenAI | DALL-E 2 | Prohibits political campaigning or lobbying by: generating high volumes of campaign materials; generating campaign materials personalized to or targeted at specific demographics; building conversational or interactive systems such as chatbots that provide information about campaigns or engage in political advocacy or lobbying; building products for political campaigning or lobbying purposes. Harmful, false, inaccurate, or misleading content is part of the disallowed uses of the OpenAI models. | Usage Policies | Yes | Yes | Yes | No |
| Microsoft | Bing Image Creator (powered by DALL-E) | Prohibits creation of content that would threaten election integrity. Prohibited content also includes harmful, false, inaccurate, or misleading content and impersonation. | Content Policy | Yes | Yes | Yes | Yes |
| **TEXT/SPEECH/MARKETING** | | | | | | | |
| Microsoft | VALL-E | The ethics statement says that if the model is generalized to unseen speakers in the real world, it should include a protocol to ensure that the speaker approves the use of their voice, as well as a synthesized-speech detection model. Does not mention any prohibition of political, harmful, false, inaccurate, or misleading content. | Ethics Statement | No | No | No | Yes |
| Lumen5 | Lumen5 | No mention of any restrictions on political content. Harmful content is listed under “User Content Representations and Warranties” but is not explicitly mentioned under Prohibited Content. Impersonation is listed under Prohibited Content. | Terms of Use | No | Yes | No | Yes |
| Dubb LLC | Dubb.media | No mention of any restrictions on political content. Harmful, false, inaccurate, or misleading content, including impersonation, is listed under Prohibited Activities. | Terms of Service | No | Yes | Yes | Yes |
Unlike at the rise of social media in the early 2000s, people are asking questions early about the harm AI can bring to the world, including to democracy and elections. In his recent testimony to Congress, OpenAI CEO Sam Altman said he was concerned about AI being used to compromise elections. Over the next year and a half, expect to see debate and discussion about what these companies are doing to prevent the misuse of their tools.
In addition to generic political use, we expect discussion on issues including:
- Labels: Should AI-generated content be labeled as such, and what should those labels say?
- AI-generated videos as ads: Can AI-generated political content be boosted as an ad?
- Content creation at scale: Meta has already announced AI tools for advertisers to help them test different versions of their ads, create background images, and crop photos. There’s already been concern about the effectiveness of transparency for political ads if there is a huge volume of ads to look through, and AI will likely only increase those numbers.
- Chatbots: Can political campaigns use AI to communicate with voters?
- Enforcement: It’s one thing to have a policy against something, and it’s another thing to be able to enforce that policy well. How well these companies enforce their rules will be under intense scrutiny.
- Immunity: Should Section 230 immunity apply to claims based on AI-generated content, such as deepfakes or election misinformation?
- Transparency: How can impact assessments or other disclosure requirements be used to enhance transparency around campaign policies?
AI tools have already had an impact on the 2024 election. AI platforms, governments, civil society, and others will need to work fast to put the right guardrails in place to protect the integrity of our elections.
Ana Khizanishvili assisted with research for this piece.