
AI in 2024: How the latest tech will be used in political campaigns
Hope Reese (Journalist and Editor)
In the fall of 2023, people walking the streets of Buenos Aires could glance up to see Soviet-looking posters of presidential candidate Sergio Massa, appearing strong and confident. Later, they would see Massa, in posters, appearing as a Chinese communist leader. The two depictions, created by rival political parties, had one thing in common: they were generated by AI. The images were just a start – there was manipulated text, audio, and video, too, prompting the New York Times to pose the question: “Is Argentina the First A.I. Election?”
In today’s elections across the globe, AI is the latest campaign staffer to join the team.
The use of AI-driven data analytics in political elections is nothing new: Barack Obama’s 2012 campaign transformed the game, harnessing AI and big data to innovate voter targeting – but this was just the tip of the iceberg. The 2016 US presidential election was our first big wake-up call to the power of AI in elections. That’s when AI was used (by both parties) to analyze voter data, make predictions, and target specific voting demographics – this time, mining social media. Republican contender Donald Trump’s team worked with Cambridge Analytica, the London-based data firm that drew on data from 87 million Facebook profiles – a partnership many believe changed the course of the election. A similar strategy was also employed in other major votes, such as the Brexit referendum.
There are countless other examples: in Canada’s 2019 election, parties used AI to target content based on voters’ online activity. In South Korea’s 2017 election, campaigns used AI-powered chatbots on platforms like KakaoTalk to help boost voter engagement.
Today’s AI is more powerful and pervasive than ever. And in 2024, it’s set to become an integral part of campaign strategies. Emerging technologies, from chatbots to generative AI, are changing the game, helping campaign teams target voters better than ever before.
Here’s how political campaigns can stay ahead of the curve:
Two branches of AI
AI encompasses everything from chatbots to data analysis. But it’s helpful to divide it into two categories: analyzing and generating content.
Classification AI is when you take information, say, about a person, “and you decide, ‘does this person get a phone call or not?’” said Taren Stinebrickner-Kauffman, founder of the AI Impact Lab. This is the kind of AI used in Barack Obama’s 2012 US presidential campaign.
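For a concrete sense of what that kind of classification looks like, here is a minimal sketch in Python. Everything in it is hypothetical: the features, training data, and threshold are invented for illustration, and a real voter file would be far richer.

```python
# A minimal, hypothetical sketch of "does this person get a phone call?"
# classification. The features and training data are invented for
# illustration; a real voter file would be far richer.
from sklearn.linear_model import LogisticRegression

# Each row: [age, past_elections_voted_in, donated_before (0/1)]
X_train = [
    [22, 0, 0],
    [35, 2, 0],
    [48, 4, 1],
    [63, 5, 1],
    [29, 1, 0],
    [55, 3, 1],
]
# 1 = contacting this person paid off in past outreach, 0 = it did not
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

def should_call(voter_features, threshold=0.5):
    """Return True if the predicted payoff of a call clears the threshold."""
    prob = model.predict_proba([voter_features])[0][1]
    return prob >= threshold

print(should_call([51, 4, 1]))  # likely True for this made-up profile
```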
While that technology continues to improve, it’s “not undergoing a sea change in the way that generative AI is,” Stinebrickner-Kauffman explained. Generative AI includes large language models, such as ChatGPT, as well as models that generate images, audio, and videos – the difference is that the models are “trained” on different data sets. Multimodal models are a more recent development – only recently have computers had the processing power to handle them – in which these capabilities are combined, creating, say, a video based on generated text and audio.
Here’s an example: HeyGen lets you take a video and chain models together – one model translates the English speech into French, another analyzes the speaker’s voice and creates a synthetic version of it, a text-to-speech model reads the translated French aloud in that voice, and a final model lip-syncs the result, “matching the new audio to the movement of my mouth,” explained Stinebrickner-Kauffman. “It looks like I just said all those things in French.”
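Under the hood, that kind of pipeline is just chaining: each model’s output becomes the next model’s input. The sketch below mimics the flow with placeholder functions; the function names and steps are assumptions for illustration and do not reflect HeyGen’s actual API.

```python
# A hypothetical sketch of the model-chaining pattern described above.
# Each function stands in for a separate model; none of these names
# reflect HeyGen's real API.

def translate_text(english_transcript: str) -> str:
    """Stand-in for a translation model (English -> French)."""
    return f"[French translation of: {english_transcript}]"

def clone_voice(audio_sample: bytes) -> str:
    """Stand-in for a voice-cloning model; returns a voice profile ID."""
    return "synthetic-voice-001"

def synthesize_speech(text: str, voice_id: str) -> bytes:
    """Stand-in for text-to-speech in the cloned voice."""
    return f"[audio of '{text}' in voice {voice_id}]".encode()

def lip_sync(video: bytes, new_audio: bytes) -> bytes:
    """Stand-in for a model matching mouth movement to the new audio."""
    return video + b" + " + new_audio

# Chain the models: the output of each step is the input to the next.
transcript = "Vote on Tuesday."
voice = clone_voice(b"original-audio-sample")
french_audio = synthesize_speech(translate_text(transcript), voice)
final_video = lip_sync(b"original-video", french_audio)
print(final_video)
```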
Deepfakes are another branch of generative AI, in which content such as video is created in order to manipulate viewers. But keep in mind: this same technology, what Stinebrickner-Kauffman dubs “synthetic content”, can be used to simulate something like training videos.
Using ChatGPT and other LLMs on the campaign
OpenAI’s ChatGPT is a kind of generative AI called a large language model (LLM). Using massive training sets of text and information gathered on the internet, an LLM can decipher text and generate responses – anything from answering simple questions about historical events to drafting a cover letter.
The release of OpenAI’s GPT-4 was “the biggest AI event of the century,” said Stinebrickner-Kauffman. “I’ve been an organizer, a campaigner, run social media marketing, email marketing, run an organization,” continued Stinebrickner-Kauffman. “And every single job I’ve ever done, I would have done differently with ChatGPT.”
Since its inception in November 2022, ChatGPT has become increasingly advanced. And other LLMs, such as Google’s Bard or Anthropic’s Claude, followed suit, offering political organizers new ways to create text. Using these tools is like “using electricity,” she said. “The low-hanging fruit is content generation – drafting blog posts, drafting emails, creating powerful designs,” Stinebrickner-Kauffman continued. “We write a lot. And writing takes a lot of time.”
The output should be fact-checked, but it nevertheless serves as a great first draft, saving campaigns time and resources. “Knowledge workers get somewhere between 15 and 50% productivity gains from using tools like ChatGPT,” said Stinebrickner-Kauffman.
Generative AI: Real use cases
At the Cooperative Impact Lab in Washington, D.C., Oluwakemi (Kemi) Oso has been running programs to find out which “areas within the progressive movement are lacking innovation” – during the pandemic, for instance, they helped “get folks onto digital,” she said. With the dawn of generative AI, they ran a cohort to see how campaigns used the technology. They then brought these tools to a larger umbrella of progressive campaigns. As a senior program manager with a background in data engineering, Oso has trained thousands of organizers, helping them understand the latest tools.
In July 2023, Oso’s team ran a cohort “to figure out how generative AI is actually useful for campaigning organizations,” she said. The group is a mix of US-based nonprofits – some with an explicit political focus and some focusing on issues such as getting people out to vote – and a couple of European groups, as well.
Oso read a lot about “how AI was going to kill or ruin or destroy or fundamentally change elections, right away, and kind of go into like the boogeyman worst case scenarios.” While she thinks these scenarios “can and will happen,” she believes the actual use of AI tools is “a lot less nefarious than the imagination will go to.”
In the group, members had varying levels of familiarity and comfort with tools like ChatGPT. They learned how to use LLMs: What questions should they put in front of them? What are the risks involved? And what kind of ethical dimensions should be considered?
The more people learned about the tools, the better they could use them – both ethically and practically speaking, Oso said. Eventually, the group used them for six purposes: content creation (such as drafting blog posts), data management, coding, message testing, chatbots, and phone banking.
GenAI works best “in an input-in, input-out sort of way,” she said. For instance, if you want to generate text or an image, you give the AI program some sort of feed, “whether that’s raw information about what you want the outcome to look like or sample text,” Oso explained. When a political action committee (PAC) needed to generate 30-second scripts for its candidate – “something they often do by hand, and takes a long time,” Oso said – they used Claude. They fed the program information about tone, details of the issue, and background about the race – and, voilà! – a first draft was created.
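A minimal sketch of that input-in, input-out pattern might look like the following. The prompt template, field values, and the complete() stub are all assumptions for illustration; in practice the stub would be replaced by a call to an LLM API such as Claude’s or ChatGPT’s.

```python
# A hypothetical sketch of feeding an LLM structured campaign inputs
# to draft a 30-second script. The template and the complete() stub
# are illustrative; swap in a real LLM client of your choice.

SCRIPT_PROMPT = """You are drafting a 30-second campaign ad script.
Tone: {tone}
Issue: {issue}
Race background: {background}

Write a first draft. It will be fact-checked and edited by staff."""

def complete(prompt: str) -> str:
    """Stand-in for a call to an LLM API (e.g. Claude or ChatGPT)."""
    return "[first-draft script generated from the prompt above]"

draft = complete(SCRIPT_PROMPT.format(
    tone="hopeful, direct, plain language",
    issue="affordable housing in the district",
    background="open seat, high-turnout suburban race",
))
print(draft)  # a starting point for human review, not a final ad
```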
Another example is phone banking: “Currently, you’re relying on volunteers and their understanding of the candidate, what they remember, what they don’t,” Oso said. But AI lets you upload a candidate’s platform and stances on the issues, then chat with it in real time – supplying volunteers who are on the phone with constituents with the latest information about the candidate.
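One simple way to sketch such an assistant is retrieval: match the volunteer’s question to the relevant plank of the uploaded platform. The keyword lookup below is deliberately minimal and the platform text is invented; a production system would use embeddings and an LLM rather than word overlap.

```python
# A deliberately simple sketch of a phone-bank lookup assistant:
# match a volunteer's question against an uploaded platform document.
# The platform text is invented; real systems would use embeddings
# and an LLM rather than keyword overlap.

PLATFORM = {
    "housing": "Supports building 10,000 affordable units over 4 years.",
    "transit": "Backs expanded bus service and a fare cap.",
    "schools": "Proposes smaller class sizes and teacher pay raises.",
}

def answer(question: str) -> str:
    """Return the platform plank whose topic word appears in the question."""
    words = set(question.lower().split())
    for topic, stance in PLATFORM.items():
        if topic in words:
            return stance
    return "No platform match found; escalate to campaign staff."

# A volunteer on the phone types what the constituent asked:
print(answer("What does she think about housing costs?"))
```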
Increasing accessibility
AI tools help people with visual or auditory impairments – but they also “allow you to have much more nuanced conversations in languages you don’t speak,” said Stinebrickner-Kauffman. That makes it possible, for instance, to hire organizers without polished professional English. Campaign managers can broaden their scope and hire someone who is smart on strategy and has good relationships with the community, leaning on ChatGPT for writing in English – “it’s also pretty good in French and German and Spanish,” she added. This can connect campaigns with audiences they might not reach otherwise.
ChatGPT – a “thought-partner” for operations
“Campaigns are strapped for resources and time,” said Stinebrickner-Kauffman. “How do you run a great hiring process? How do you manage conflict? How do you onboard new organizers? For these kinds of strategy questions, I use ChatGPT as a thought-partner.”
Take, for example, a staffer transitioning to a new role, who wanted to learn how to manage people who used to be her peers – instead of turning to a manager for advice, she asked ChatGPT. “Fast-paced campaigns don’t have the resources to invest in those kinds of questions,” said Stinebrickner-Kauffman. In ChatGPT, “you basically have access, all the time, to a pretty good executive coach – essentially, for free.”
Beyond ChatGPT
These are just some examples of AI in political campaigns. Chatbots, such as those deployed in South Korea’s 2017 presidential election, are another powerful tool to engage with voters, answer questions, and spread information. AI-powered cybersecurity tools can also be used to safeguard voter information – of particular significance in the EU, where campaigns must comply with GDPR. Advances in sentiment analysis mean that campaign organizers can refine messages based on responses to existing ones, improving their ability to reach voters.
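As a toy illustration of sentiment analysis, the sketch below scores voter replies against a tiny hand-built word list. The lexicon is invented for illustration; real sentiment analysis relies on trained models rather than word lists.

```python
# A toy sentiment scorer for voter replies to a campaign message.
# The word lists are invented for illustration; real sentiment
# analysis uses trained models, not a hand-built lexicon.

POSITIVE = {"love", "great", "support", "yes", "agree"}
NEGATIVE = {"hate", "no", "stop", "unsubscribe", "angry"}

def sentiment(reply: str) -> str:
    """Label a reply positive, negative, or neutral by word overlap."""
    words = set(reply.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

replies = [
    "Love this, count me in",
    "Stop texting me",
    "When is the town hall?",
]
for r in replies:
    print(f"{sentiment(r):>8}: {r}")
```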
AI can also be used for predictive modeling: estimating the likelihood that a given campaign strategy will succeed, and monitoring how it performs in real time.
Rewriting the rules around AI
With the increasing power and complexity of AI tools, there’s also a greater risk of AI being used to spread misinformation, steal voter information, and create bias in decision-making. Therefore, it’s more important than ever to develop standards around transparency and ethical use in campaigning.
“We don’t have rules,” said Stinebrickner-Kauffman. “Not only legal rules, but we don’t have, like, tradition.”
While the US has been slower to regulate AI, the same does not hold in the EU, where the General Data Protection Regulation (GDPR) has implications for AI use in political campaigns. This privacy law’s scope extends to AI-powered tools used by political campaigns: under the GDPR, campaigns must be transparent about data collection, obtain informed consent, and ensure data security. Building on guidance from the Commission in May 2021, the EU also developed the 2022 Code of Practice on Disinformation, a less formal measure against disinformation that could affect the kinds of AI-enabled messaging campaigns can produce.
However, the most significant development came on 8 December 2023, when the European Parliament and EU member states reached agreement on the AI Act, set to roll out in 2025. This landmark bill, the result of record-breaking negotiations, is set to regulate tools currently used for voter profiling, microtargeting, and content generation. A “rulebook” for AI, it also establishes a dedicated AI Office within the Commission to enforce the new regulations. It draws on the Organisation for Economic Co-operation and Development (OECD) Council’s definition of AI: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
Beyond legal regulations, it’s essential that individual campaign organizers take the lead by modeling behavior like transparency and disclosing how AI is and isn’t used.
Don’t be afraid: Embrace AI
“AI is not inherently good or bad,” said Stinebrickner-Kauffman. “AI is power. If we shy away from using the power, then we’re going to lose.” Progressives are often “shying away from the tools,” she added, “which is a massive mistake.”
Oso agreed. She has learned that campaigners’ fears and their actual use cases “are like two circles that don’t actually overlap.” “The fear is that we’ll build robots that are better at persuasion and can, like, hyper-talk to each voter to spread misinformation so that they vote for a candidate,” she said. But “in reality, our campaigns use it to draft an email. They’re still going over it for the first draft of an op-ed,” and know that “they’re not just getting the thing from ChatGPT and putting it out into the world, there’s fact checking.” GenAI is seen “much more as an assistant than a thing to take over a task itself or a job,” she explained.
“I do think the right is using it for what I would call more ‘nefarious reasons’ – without the same guardrails of what is permissible and what is not,” Oso said.
But as “nefarious actors are still going to use it nefariously,” it’s even more essential for political organizations to learn as much as they can – and to train voters and staffers – on these new tools.
Oso sees education campaigns as critical to helping people “spot things that are fake.” The same tools that create misinformation can also be used to fact-check, she said. And Stinebrickner-Kauffman adds that “if you’re not using the tools you’re not going to understand what the capabilities are. And that doesn’t mean you’ll be able to detect it, necessarily, but it’ll certainly give you a fighting shot.”
“Any tool that can spread disinformation can also spread positive narratives and positive information,” said Stinebrickner-Kauffman. “Let’s win the game.”
It’s impossible to predict the next big AI development of 2024 – but what we do know is that, beyond what AI tools are capable of, it’s up to us to be creative and strategic in how we use them. Campaign managers must learn how to use generative AI tools like ChatGPT to help streamline both the hard work of reaching voters and the equally hard work of building the teams that can make it happen. Rather than taking over these jobs, AI can be an assistant that fills the gaps and supplements knowledge – which means we can think creatively about who joins the campaign team. We can let AI help with work that’s time-consuming, repetitive, and could use an extra pair of eyes. We can remove the pressure to have a perfect resume or proper grammar, and hire people based on commitment, connections, and work ethic – the values integral to building a successful political campaign.