SLAAIT in Brief: Corrosive Artificial Intelligence

By Riley Lankes

23rd January 2024


Ahead of the state’s presidential primary in January 2024, voters across New Hampshire reported receiving phone calls in which a voice resembling that of incumbent President Joe Biden discouraged them from coming to the polls. Days later, the New Hampshire Attorney General’s office reported that it was investigating “…reports of an apparent robocall that used artificial intelligence to mimic President Joe Biden’s voice” (AP).

The New Hampshire robocalls are only the latest drop in a rapidly growing wave of AI-powered political disinformation flooding the global political landscape. With its ability to create original images and videos and to mimic voices, generative AI technology appears to be at the center of this wave of disinformation. As the capabilities of generative AI grow and the technology is rapidly deployed to end users, it’s becoming clear that the potential for misuse is very high. At the same time, we’re beginning to understand how the misuse of generative AI can damage, or even corrode, the trust we place in our political leaders and institutions (political trust).

The term corrosive AI explored in this briefing is one I originally coined in my master’s thesis, entitled “Corrosive AI: Emerging Effects of the Use of Generative AI on Political Trust.” This briefing serves as a short summary of that longer paper, which is available in full here.

What makes AI corrosive?

In the last few years, political scholars and technologists alike have predicted that the advent of AI, specifically generative AI, could erode political trust. These predictions, while close to the mark, were not entirely accurate, for two reasons. First, it has not been AI itself that has damaged our collective political trust, but rather the misuse of AI. Artificial intelligence technologies do not (yet) have agency of their own, so we should avoid ascribing agency or autonomy to the technology when discussing issues surrounding it. Rather than stating that “generative AI erodes political trust”, we would instead state that “the misuse of generative AI erodes political trust.”

The second issue with these predictions lies in the choice of the verb “erode”. Erosion denotes a slow decline over a long period of time, in the way that a river erodes rock to form a canyon over thousands of years. If recent events are any indication, the misuse of generative AI is not so much slowly eroding political trust as it is rapidly corroding it. This is where we get the term corrosive AI.

In previous research on this topic, I identified three factors that make the misuse of generative AI corrosive to political trust. Those factors are as follows:

  1. The predominant business model used by the major AI players, featuring easy user access and rapid deployment of updates.
  2. The potential for AI-generated content to damage trust in video media.
  3. Generative AI’s capability to empower disinformation and fabricate scandals.

Again, it’s important to point out that none of these factors are intrinsic to generative AI technology. Rather, they result from choices made by the technology’s developers and end users. In the next few sections, we’ll take a more in-depth look at each of these factors. Considering all three together will then give us a complete picture of how and why the misuse of generative AI is so likely to corrode political trust among the general public.

Generative AI Access, Development, & Deployment

Today, the predominant access model for most generative AI services (ChatGPT, Midjourney, NightCafe) involves users accessing a website to submit requests for content. These requests are then sent to the service hosting the generative model to be executed, with generated content sent back to the user. As a result, the ability to generate content using generative AI is available to nearly any user with an internet connection – powerful computing hardware is not required for individual users.
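
To make this pattern concrete, here is a minimal Python sketch of the request/response loop from the user’s side. The endpoint URL, request fields, and response format below are hypothetical stand-ins rather than any real provider’s API, but the shape of the exchange is common to these services: a text prompt goes out over the network, and finished content comes back.

    import requests  # widely used third-party HTTP library

    # Hypothetical endpoint; real services expose their own URLs and schemas.
    API_URL = "https://api.example-genai.com/v1/generate"

    def generate_image(prompt: str, api_key: str) -> bytes:
        """Send a text prompt to a remotely hosted model and return image bytes.

        All of the heavy computation happens on the provider's servers;
        the user's machine only needs a network connection.
        """
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            json={"prompt": prompt, "format": "png"},  # hypothetical fields
            timeout=60,
        )
        response.raise_for_status()
        return response.content  # the generated content, sent back to the user

    if __name__ == "__main__":
        image = generate_image("a photorealistic press conference", api_key="...")
        with open("output.png", "wb") as f:
            f.write(image)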

The fact that these AI content generation services are not run locally on end users’ computers lets developers change and update the underlying generative models rapidly. While rapid development of these models may sound beneficial, the rapid deployment of changes can be problematic. With limited testing, issues introduced by changes to generative AI models are difficult to catch. Deploying updates to users without extensive testing also leaves the door open for end users to misuse the technology (read more about AI-generated child abuse).
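
The deployment side of this arrangement can be sketched just as briefly. The snippet below is purely illustrative, using made-up names rather than any real provider’s code, but it shows why centrally hosted models can change under users’ feet: routing requests to a new model version is a single change on the server, and it takes effect for every user at once, with only as much testing as the provider chose to do.

    # Illustrative sketch of server-side model hosting; a stub class
    # stands in for a real generative network.
    class StubModel:
        def __init__(self, version: str):
            self.version = version

        def generate(self, prompt: str) -> str:
            return f"[content for {prompt!r} from model {self.version}]"

    # The provider keeps every version server-side; users install nothing.
    MODELS = {"v1": StubModel("v1"), "v2": StubModel("v2")}
    CURRENT_VERSION = "v1"

    def handle_request(prompt: str) -> str:
        # Every incoming request is routed to whichever version is current.
        return MODELS[CURRENT_VERSION].generate(prompt)

    print(handle_request("a political speech"))  # served by v1

    # "Deployment" is a single server-side switch: all users are on the
    # new model immediately, whether or not it was thoroughly tested.
    CURRENT_VERSION = "v2"
    print(handle_request("a political speech"))  # now served by v2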

Fixing issues only as they arise can cause significant harm when the technology in question can significantly influence how users perceive the world around them. Under the predominant model of AI development and deployment we see today, it is difficult for developers to anticipate how their generative AI models will be misused. This problem is exacerbated by the fact that anyone with an internet connection can use (or misuse) generative AI technology.

AI & Trust in Video

Today, video is one of the primary mediums through which we get information about the world we live in. Video provides us with the ability to witness events as if we were there, which in turn helps us form opinions about said events. We generally conceive of video content as being authentic, a true representation of events as they occurred. With the widespread availability of generative AI, the collective trust that our society places in the authenticity of video content is threatened.

Generative AI technology can be used in several ways to create or alter video content. While the technology is able to generate entirely original video, its most problematic use in this realm thus far has been its integration into deepfakes. Deepfakes are images or videos in which one subject’s facial features are replaced with those of another. Deepfake generators powered by generative AI can accomplish this swap more quickly and convincingly than their pre-AI counterparts. The technology has made headlines due to its misuse in creating deepfake porn, often as a form of targeted sexual harassment. In the realm of politics, AI-powered deepfakes can be used to depict political figures doing things they never actually did or saying things they never actually said.

In a nutshell, this is how the misuse of generative AI can damage political trust. Political trust is formed when people observe their representatives in government saying or doing things that are in line with the people’s expectations. With the widespread misuse of generative AI, how can people trust their representatives if everything they’re recorded saying or doing is subject to doubt?

Disinformation & Fabricated Scandals

Disinformation is false information that is intentionally designed to mislead people. Unfortunately, generative AI lends itself quite well to creating content that can be used to spread disinformation. As the last section discussed, we tend to treat videos (and to a lesser extent images) as authentic representations of the world. If we can’t experience events ourselves, images and videos allow us to experience them from the perspective of another.

Even before the rise of generative AI, the spread of disinformation was an issue, particularly on social media. Whether by altering content itself or by adding intentionally misleading context to it, disinformation has always been relatively easy to spread. The wide availability of generative AI only exacerbates this issue. Altering images and videos to show whatever you want is easier than ever, and no longer requires content editing skills. Generative AI can also be used to create entirely original content, and without a way to clearly identify what has or hasn’t been generated by AI, that content can be passed off as authentic. If such generated disinformation is political in nature, it’s easy to see how a flood of AI-empowered disinformation could corrode political trust on a massive scale.

If this political disinformation spreads to the point that it’s believed and shared by thousands of people, it becomes a scandal. In politics, scandals damage trust. The loss of trust caused by a scandal is felt both by the political figure(s) at the center of the scandal and by the political institution they represent. Watergate is the quintessential example of this phenomenon. As many are aware, the scandal destroyed the American public’s trust in Richard Nixon, leading to his resignation in 1974. What is less obvious is that Watergate was likely a major contributing factor to the rapid decline in public trust in government seen between 1972 (53%) and 1974 (36%).

Scandals are incredibly damaging to public trust in individual politicians and to trust in the government system as a whole. With widespread access to generative AI, this damage to political trust can now result from scandals that never actually occurred. To illustrate with another example: in 2020, a video of then-House Speaker Nancy Pelosi appearing to drunkenly slur her words while giving a speech went viral. The video was quickly shown to have been altered, yet it was shared over 91,000 times before being taken down. With easy access to rapidly advancing generative AI technology, scandals rooted in doctored content like this are only going to become more frequent, causing further damage to the public’s trust in politicians and political systems.

Whether the disinformation misleads a handful of people or leads to a national scandal, the same rule applies: content does not need to be grounded in truth to damage trust; it only needs to be believed.

Executive Summary

The misuse of generative artificial intelligence technology is likely to rapidly corrode trust in political figures and institutions. Three factors drive this corrosion.

The first factor is the predominant business model used by major AI players to develop and deploy generative technologies. This model makes generative AI easily accessible to millions of users, but also allows for changes to generative tech to be rolled out to users with very little testing. The potential for misuse of generative tech is high, yet developers can only fix issues once they become problematic on a large scale.

The second factor is the misuse of generative AI in ways that damage trust in the authenticity of video content. Video is one of the primary mediums through which people get information about political issues and figures. Video is generally believed to be authentic – a true representation of the world as it is. Generative AI can be used to alter video content quickly and convincingly. With generative AI, videos can be doctored to show political figures doing things they never actually did. Even if these videos are not believed, trust in the authenticity of video is likely to be damaged.

The third factor is the capability of generative AI to empower disinformation, potentially leading to scandals. Generative AI can be used to easily alter existing videos and images, but it can also generate original content. This capability can be misused to quickly create content intended to mislead people. Due to the aforementioned business model, which features easy access for users, this technology is in the hands of millions of people. If disinformation is shared and believed on a large enough scale, it has the potential to become a scandal, and scandals are known to damage political trust in several ways.

Considered together, these three factors make it clear that the misuse of generative AI is highly likely to corrode political trust. Content that people generally consider to be authentic (video and images) can be easily and convincingly altered, and original content can be created and passed off as authentic. Real-world examples demonstrate that this type of content is already being used to mislead people into distrusting individual political figures. If such content is shared and widely believed, it has the potential to become a political scandal, and scandals are known to damage trust. Even if the content is not widely believed, it can still damage trust in the authenticity of video content as a whole. Thanks to the predominant business model adopted by the major players in the AI space, generative technology capable of creating this type of misleading content is easily accessible to millions of people. That same business model also means that developers are constantly playing catch-up when trying to prevent the misuse of generative AI, as the technology is rolled out and constantly updated with very little testing.