Scott Brennen, Head of Online Expression Policy at the University of North Carolina's Center on Technology Policy, discusses the current state of AI in political ads and the potential harms and benefits of its usage. He sorts AI usage in political ads into three categories: publicity, alteration, and fabrication. While concerns about AI in political ads have been overstated in some areas, risks concerning bias and down-ballot races have been understated. Brennen offers public policy recommendations for promoting learning about generative AI and targeting electoral harms, and suggests ways campaigns can protect themselves from unintended consequences.
00:31 Current State of AI in Political Ads
01:03 Categories of AI Usage in Political Ads
03:04 Concerns and Overblown Panic
06:17 Harms and Recommendations
13:30 Actions by Tech Platforms and Regulatory Agencies
18:01 Recommendations for AI in Political Ads
26:27 Protecting Campaigns from Harms
Eric Wilson (00:02.385)
I'm Eric Wilson, managing partner of Startup Caucus, the home of campaign tech innovation on the right. Welcome to the Business of Politics show. On this podcast, you're joining in on a conversation with entrepreneurs, operatives, and experts who make professional politics happen. Our guest today is Scott Brennen, head of online expression policy at the University of North Carolina's Center on Technology Policy. He and his colleagues recently published a set of policy frameworks
for political ads in the age of artificial intelligence. You can read that report in our show notes. We discussed the current state of AI in political ads, what should be done about it, and recommendations for protecting voters from the potential harms of AI in political ads. Scott, as a part of your research for these recommendations, you looked at how AI is currently being used in
political advertising. So what did you find?
Scott Brennen (01:03.222)
Yeah, so the truth is we don't actually have that many great known examples of generative AI being used in political ads in campaigns. Now, this is, I think, very likely to change as the campaign really heats up. And part of that is campaigns might not be using it that much, but it might also be that it's just hard to identify,
especially given the ways that we've seen it begin to be used, right? Sometimes it just looks so good that we can't even identify when it is being used. But, you know, in the report, we do look back, review those examples that we do have that have been identified, and try to identify some big-picture trends in
how campaigns seem to be using these tools. And we come up with three rough categories: publicity, alteration, and fabrication. So publicity is where, we saw a couple of examples where the fact that an ad contains AI-generated imagery is really front and center. And it's part of the ad, it's part of the effort to kind of, yeah, exactly.
Eric Wilson (02:21.937)
It's like the gimmick to get people to write about the ad.
Scott Brennen (02:25.086)
Yeah, and the best example of this was that RNC ad that came out in the early part of the spring, which used these tools to produce imagery about what the country might look like under a second term of a Biden presidency. The second category, alteration, includes kind of
subtle changes to imagery in ads. And the best example of this is there's a DeSantis campaign ad where it turns out the campaign used these tools to add in jets to this image of him. Yeah, like the flyover, I guess to make him look more, I don't know, powerful or something.
Eric Wilson (03:04.541)
The flyover. Yeah.
Scott Brennen (03:14.226)
That only came out because some great journalistic work actually took the time to compare the shot in the campaign ad to the original video. But I think this is probably how generative AI may be used most moving forward: these small touch-ups to video, these small alterations.
And then finally, we have fabrication, and that's probably what gets the most sort of discussion. And that's where we see generative AI being used to completely fabricate something that did not exist. So the best example of this is, again, from the DeSantis campaign where there was an ad that included these images of Trump and Fauci hugging.
There were these still images that were kind of set within a collage that did not happen and presumably they used one of these generative AI image generators.
Eric Wilson (04:17.545)
Got it. So, you know, this is something that I talk about a lot. Everyone's bragging about how they're using AI, so we haven't really had many opportunities where someone tries to sneak it in to give us their version of fabrication or something like that, but I think you're right. We'll see that happen. And then there's a lot of discussion among practitioners of...
of where do you draw the line? So for example, you know, we airbrush candidates' photos all the time. Is that different than putting the candidate in a different setting? So there's a lot of really just kind of gray areas here, but I like those categories. Given that you can count, yeah.
Scott Brennen (05:06.294)
Yeah, no, I think, oh, sorry, okay. No, I mean, that's a great point, that most of the focus in the discussion about generative AI in political ads has been on these concerns about producing deceptive imagery, but not on what will probably be the most common uses, where, you know, it may actually be kind of helpful
for some campaigns to just have cheaper ways of making better looking ads, you know, while modifying things that really don't have anything to do with the real substance of the claims being made. And so it was important to us in the report to at least sort of grapple with not only the potential harms of these tools in political ads, but also, you know, some of the benefits as well.
Eric Wilson (05:54.921)
Yeah, and so given that we can count the examples of AI being used by campaigns on our hands, and that people are detecting these, is the moral panic about AI and political campaigning being overblown?
Scott Brennen (06:17.01)
Yeah, no, that's a great question. So in the report, you know, we tried to be really systematic in identifying the potential harms that are being discussed regarding generative AI and political ads, and then to assess those
Scott Brennen (06:45.546)
harms against the available evidence. And what we found is that some of the harms that are commonly discussed have probably been a bit overstated, while others have been understated and likely need more attention. So the harms around authenticity or scale, that generative AI will make deceptive imagery that's far more authentic-seeming,
or will produce far more content that will then lead to a much larger effect on candidate choice, likely those have been somewhat overstated. But risks to down-ballot races, or risks concerning the way that these tools might amplify or replicate some of the biases that we know are built into these generative AI tools,
I think those sorts of harms have really been understated or under-considered.
Eric Wilson (07:48.997)
Yeah, got it. So if I could summarize that for you, it's more of like we're less worried about the Terminator and we're more worried about kind of, you know, disinformation, misinformation that gets spread on social media platforms just getting plused up. We've seen the effects of that going back several campaigns now. And so that's really where we're trying to focus our attention.
Scott Brennen (08:16.894)
Well, yeah, I mean, certainly the concerns around misinformation, disinformation, I mean, that is the sort of core concern that most people have about generative AI in political ads, right? That campaigns or other advertisers will use these tools to create real-seeming imagery that will have a major effect on the election, that will convince people
to support a candidate they would not otherwise support, that would convince them to go out and vote, or to not vote, to give money, or to take, either to believe something they wouldn't believe, or really to take some action that they wouldn't otherwise do. Now, ultimately, when looking at the quite large literature on both the effectiveness of political advertisements in campaigns, and on the...
effects that misinformation, especially single one-off pieces of misinformation, can have. You know, truthfully, much of the more recent research suggests that these things don't really have as much impact as they are often assumed to have. For political ads, if anything, there's a bit clearer evidence that political ads
rarely seem to have a strong effect on who people support.
Eric Wilson (09:42.101)
from an academic measurement standpoint, right.
Scott Brennen (09:46.354)
Exactly. Yeah, there's this one great meta-analysis of, I think it was 49 empirical studies of the effect of political communication. And there's this quote that we pulled out in the report: "The best estimate of the effects of campaign contact and advertising on Americans' candidate choices in general elections is zero." Right. That's not to say that political ads don't have important
impact or important effects in elections, right? They can still influence behaviors, right, again, as I said: turnout, you know, donations, signing up for emails, things like that. But, you know, I think we have to kind of grapple with the fact that ads really don't have a huge effect on who people vote for. And that means that these
worries about generative AI changing, you know, the candidate that someone supports are probably, you know, not so likely to be borne out.
Eric Wilson (10:52.637)
Yeah. Well, but in your report, you do outline kind of four categories of how AI might be used to affect the, shall we call it, the effectiveness of political advertising. Can you run us through those?
Scott Brennen (11:11.274)
Yeah, absolutely. So the point here was to really systematically sort of assess the claims being made about the potential harms of these technologies and political ads. And so we kind of combed through not only academic studies, but just sort of like popular discussion, op-eds, editorials, and tried to just sort of synthesize
the potential harms. And we came up with these four categories of harms. They are: scale, the concern that these new generative AI-based tools will make it easier or cheaper for advertisers to create false content, and that that increase in scale will have effects. Authenticity, right, the concern that these tools will allow bad actors to create more realistic deceptive ads.
Personalization, the concern that these tools will allow advertisers to more easily create personalized content. Digital ads in particular, of course, the whole idea is that they're targeted to small segments, but here we have tools that will make it easier to create different versions of ads that are really keyed to particular segments. And then finally, bias. To be honest, I think this is the category that has received the least amount of attention overall, but we're starting to see some people express concern. And that is that, as campaigns or advertisers begin to use these models, the sort of built-in biases we know they have around race, gender, identity, sexuality, nationality are going to be kind of replicated or brought into political ads in different ways. So these are the four sorts of harms that we identified being discussed out there in the world. And then we looked to the academic literature to make sense of them: what can we say about the empirical basis for these harms?
Eric Wilson (13:30.881)
So give us a quick rundown of what, if anything, the tech platforms and regulatory agencies are doing currently about AI in political ads.
Scott Brennen (13:42.194)
Yeah, absolutely. So this is like a huge deal for tech platforms. And we're seeing a lot of discussion by the tech platforms, by the governments at different levels. So the big ad platforms, well, in particular Google and Meta, have already announced that they're going to start requiring
disclosures on ads containing generative AI content for the 2024, you know, US presidential election. And I don't quite remember what date that's going to start. I know that Google made this announcement, um, you know, some time ago, but said it wouldn't start until after the 2023 elections. Um, I think kind of also notably, a lot of the generative AI platforms have policies in place that
prohibit the use of those tools to create deceptive content, especially for elections. So if you go into OpenAI's ChatGPT or DALL-E 2 and try to, you know, ask it to create this sort of deceptive political content, you'll often get a notice that says, you know, this is against our terms of service.
I will say, though, there was a big Washington Post investigation some months ago that found that even after ChatGPT had those sorts of restrictions in place, they were still able to get around them quite easily.
Eric Wilson (15:17.533)
Yeah, I don't know anyone who has been stopped by that.
Scott Brennen (15:22.123)
Yeah, yeah, exactly. And then, of course, there are some that don't seem to have restrictions at all. And there are, of course, open-source models that, yeah. So despite the policies in place, there are still going to be ways to use these tools for political campaigns. And then at the, go ahead.
Eric Wilson (15:42.685)
Yeah, and one of the things that I found really interesting about Meta's policy is there does seem to be a little bit of a nuance there, which I appreciate, which is around, you know, they're not interested in the, you know, did you clean up the audio on this using AI? Did you color correct using AI? Was this materially changed? And so I think that's a really important...
caveat and one that we talk about as practitioners.
Scott Brennen (16:15.006)
Yeah, absolutely. And, you know, we'll talk about this in a second when we get to the government side, the policymaker side, but I think the other kind of big piece of that is to really grapple with what is really different about, you know, deceptive content created through generative AI versus deceptive content created through Photoshop.
Or through, I mean, so, you know, in my other life I was an academic researcher, and I did a lot of work on misinformation. I remember studying COVID misinformation. I did this piece on visual disinformation for COVID, and in the data set that we were analyzing, I don't think we saw any examples of sort of, you know, wholly created deceptive imagery.
Scott Brennen (17:07.09)
It was all crude use of Photoshop or basically pieces of content that were one thing, but were claimed to be something else. And so I think we really have to sort of grapple with like, what is really unique here? And if we're going to be concerned, if we're concerned about generative AI produced deceptive content, shouldn't we also be concerned about, you know...
Scott Brennen (17:34.917)
other tools or ways of creating that deceptive content.
Eric Wilson (17:39.281)
You're listening to the Business of Politics show. I'm speaking with Scott Brennen from the UNC Center on Technology Policy. Now, Scott, I want to dig into your recommendations. You've got 10, but give us the high-level summary of what should be done about AI in political ads, and who should be doing it.
Scott Brennen (18:01.202)
Yeah, so, you know, I guess one thing I should say before we get into this, just to kind of finish up on what we were talking about: we really tried to bring some rigor to our analysis of the harms, but in a lot of the areas that we looked at, there's good reason to think that we don't really have enough research here, and we found a lot of unanswered or open questions. And so we give the best read of the literature that we have now; that doesn't mean it won't change as we move forward. Now, that idea about needing additional understanding is one of the key principles that anchors our 10 recommendations. So we have 10 recommendations, and they're really split into two categories. And I guess I should say, these recommendations are just for policymakers. They're not for platforms moderating this content, and they're not for campaigns trying to navigate this evolving landscape. So, two categories: first, public policy should promote
learning about generative AI, and then second, public policy should target the electoral harms rather than the technologies themselves.
Eric Wilson (19:35.325)
Yeah, and I think, I mean, that I think is a really important distinction, right? Because anytime you get policymakers trying to write regulations around tech, by the time they agree on it, the tech has changed.
Scott Brennen (19:50.346)
Yeah, absolutely. And that seems especially important here, right? Where, yeah, it's all changing so fast, even faster, right, than other kind of technologies of the past decade or so. And I think this just goes back, oh, go ahead.
Eric Wilson (20:04.789)
So, like, one of the examples of those electoral harms that you talk about, just to give our listeners an example, is voter suppression. So if you are using AI-generated content to discourage or prevent people from voting, that is clearly wrong. It's wrong whether you use AI or not, but that is an example of the electoral harm, not the tech.
Scott Brennen (20:30.238)
Well, exactly, right? What is the thing that we should really care about? And that is the harm, right? In this case, that is voter suppression. That is something that I think many of us can agree is bad, that we should work against. But the thing about voter suppression is that we don't have federal legislation outlawing voter suppression in this country.
And this is actually something that we've written about before. And it kind of shocked me when I first learned this several years ago. Some of the states do have prohibitions, but we don't have a federal law outlawing this. And so rather than starting with
Scott Brennen (21:29.294)
questions about generative AI, one good place to start is passing a federal prohibition on voter suppression.
Eric Wilson (21:37.093)
Yeah. So Scott, we alluded to this earlier, but I'm curious about more examples of how we can mitigate the unintended consequences of these proposals that might make it harder for legitimate campaign uses of AI. One of the examples that I go to all the time when people ask me about this actually comes from India. So India has lots of different regional languages.
Eric Wilson (22:06.533)
And so candidates running there, or running nationally, obviously don't speak all of those languages, but AI deepfakes can be used to make sure that voters are hearing the candidate in their own language. And then we have more recent examples here stateside of a candidate who's not as well resourced, can't afford the volunteer operation, using an AI
caller to contact voters. So how do we save those use cases, which I think most people recognize as, like, okay, this is an improvement, without throwing the baby out with the bathwater on making sure that we have boundaries on AI?
Scott Brennen (22:55.276)
Yeah, no, that's a great point. And as I said before, I mean, I think that is the idea that really animates that category of our recommendations, right? About promoting learning. That should be one of the key goals of public policy in these areas: how can we learn more? Recognizing that this is all so new, and there are certainly things to be worried about here, but there
are also some real potential benefits of these technologies. So how can public policy help us better understand the impacts of these sorts of tools in political ads? And so some of the things that we call for include governments
funding empirical studies. I mean, we have great ways for the federal government, through agencies, to give some money to better understand this. And we actually lay out very specific holes in the existing literature to hopefully direct some future research here. And I think another big area, and this is something that, oh.
Eric Wilson (24:08.925)
Yeah, it's a novel idea that we would get information to make decisions rather than legislating from a position of panic.
Scott Brennen (24:19.43)
Yeah, exactly. I mean, and this also ties into something else that Matt Pearl, who's the director of our center, and I have written about, and we actually have a piece coming out in Brookings on this. But this idea that, you know, this is a great place for policy experiments. You know, in that other piece we kind of talk about this benefit of
policymakers approaching the idea of policymaking from a place of curiosity, right? Rather than certainty. To recognize that often we have imperfect information, right? That policy is really hard, and often there are these unintended consequences. So can we create ways of testing out new policies before we're committed to them forever, you know? And so there are these ways, like,
sandboxes or policy experiments, or even just these smaller kinds of provisions that could be attached to bills, that sunset laws or that require data sharing. So for example, we're starting to see some states pass restrictions or requirements around generative AI in political ads. So
Scott Brennen (25:46.75)
this past year instituted some new rules. Michigan, I think, is going to start requiring disclaimers. That's a great opportunity. Whatever you think about the law, there is a great opportunity there to collect, yeah, exactly, to collect some data about it. See how it goes, right? So that in future elections, we can see: did these disclaimers have any sort of material effect, good or bad, right?
Eric Wilson (25:59.387)
Let's test that.
Scott Brennen (26:13.983)
So I think there's really an opportunity for policymakers to just build in some of these tools and allow us to better understand the impact that these policies can have.
Eric Wilson (26:27.049)
Well, Scott, before we wrap up, I want to pull you out of your ivory tower of academia and have you pretend you're in our listeners' shoes, on the front lines of a campaign. You know, there are obviously concerns that we don't know how AI could be used as an attack vector against campaigns. Any ideas or tips for how someone could protect themselves or protect their campaign from
these harms while we're waiting on public policy to catch up?
Scott Brennen (26:59.246)
Yeah, that's a really hard one, I'm not gonna lie. I mean, you know, I'm certainly not a campaign practitioner. But, as I said a few minutes ago, one of the key harms that we think has been under-acknowledged is around bias, and the way that the biases we know these models have might be
brought into political ad content unintentionally, right? So I guess I would just recommend that, as campaigns or advertisers begin to integrate these tools in different ways and to experiment with how they can use them, they be very mindful that...
Scott Brennen (27:52.358)
when you ask a model to generate a boilerplate picture of a constituent, or to create kind of boilerplate language, you should really give some deep consideration to what sorts of biases are being brought through. And I think the other side is, one of our recommendations actually kind of plays on
Scott Brennen (28:19.374)
the very famous Steve Bannon quote about his general approach, just to "flood the zone with shit," forgive my language. And the idea was, well, given that these technologies might have more impact in down-ballot races, at the local or state level, where there's just less advertising, less communication, less attention, less oversight,
a single ad or campaign might have more potential impact. The idea is, well, can local election officials flood the zone with good content? And I think there's a role for candidates at the local and state level to just be very open to working with local election officials to help put out good, factual
Scott Brennen (29:17.614)
content, especially as it concerns the basics about voting dates or voting processes. Yeah, so I know those are maybe low-hanging fruit on how candidates should think about it, but they're a few things that they can do.
Eric Wilson (29:39.913)
Well, my thanks to Scott Brennen for a great conversation. You can read his report at the link in our show notes, and there's more information there about the Center on Technology Policy at the University of North Carolina. If this episode made you a little bit smarter or gave you some ideas about how to think about AI and its use in political advertising, all we ask is that you share it with a friend or colleague. You look smarter in the process, more people learn about the show; it's a win-win all around.
Remember to subscribe to the Business of Politics show wherever you listen to podcasts, so you never miss an episode. You can also get email updates and see past episodes at our website. Thanks for listening. We'll see you next time.