Perspectives in Digital Literacy/New GenAI Misinformation in The U.S Political Landscape
Introduction
In recent years, politics in the United States has grown more divisive, not just between the two major political parties but among voters as well. Conspiracy theories, misinformation, and disinformation can reshape some voters' worldviews and beliefs in what feels like no time at all. Misinformation is nothing new to politics and elections, but the way such content is created and distributed has changed drastically over time. As technology grows at an incredible rate, so do the ways of spreading discord, particularly with Generative AI (GenAI). The rise of GenAI has given birth to more sophisticated and convincing forms of misinformation, surpassing the traditional tactic of mischaracterizing an opponent's comments and actions. It can take images, voices, and video clips and mold them into something completely new, and completely false. These deceptive pieces of a false reality are called "deep fakes," and they can be found all over the internet, especially on social media. While some deep fakes are little more than fun, unserious memes, many others are more sinister and troublemaking in intent. They range from something as simple as false images of Taylor Swift endorsing Donald Trump to something far more chaotic, like a robocall that used the President of the United States' voice to discourage people from voting in their party's primary.[1]
With technology capable of creating such content now widely accessible both at home and around the world, a critical consideration for our election cycle is: what are deep fakes, what risks do they pose to U.S. voters, and how can we mitigate these threats? According to an April 2024 poll conducted by Elon University, more than 3 in 4 Americans believed that deep fakes would be used to affect the 2024 election (ABC News). This concern is shared unevenly between the two major political parties: about 36% of Republicans think AI systems are biased against Republicans, a sentiment shared by only 15% of Democrats, while 23% of Republicans think AI systems are biased against Democrats, compared with only 14% of Democrats (Elon University).[2] This poll, among many others, highlights the doubt and fear the American people feel toward AI and its effects on the U.S. political system.
Examples, Statistics, and Calls for Regulation
In October 2024, CNN journalist and host of "The Lead" Jake Tapper aired a segment in which a highly convincing deep fake of himself introduced the topic of deep fake AI videos.[3] In the same segment, he presented examples of deep fakes already appearing in the 2024 U.S. election cycle. A year earlier, Tapper noted, "the RNC ran an ad depicting San Francisco getting shut down and other dystopian, completely fake images of this future where Biden is re-elected. CNN's Donie O'Sullivan showed this to voters, and they could not tell it was fake." There were also the deep fake images of Donald Trump hugging Dr. Anthony Fauci, spread online by Florida Governor Ron DeSantis during his presidential run in an attempt to smear Trump's reputation with his base. These acts alone show that some American politicians have used deep fakes as a means to mislead voters and steer them away from their opponents.
But it is not just political campaigns that have engaged in these tactics; some of their supporters have as well, creating AI images of Trump surrounded by groups among whom he is known to have little support, such as African-American voters (CNN). With situations like this becoming more common, major social media companies have repeatedly been called on to do better at detecting and removing deep fakes, but such efforts have proven difficult and the attempts often lackluster. The longer these images, audio clips, and videos remain online, the greater the chance that the general public will consume them. Unfortunately, public confidence that such action will be taken is low. While 77% of U.S. adults say big tech companies have a responsibility to prevent the misuse of their platforms to influence the 2024 elections, only 20% of Americans say they are very or somewhat confident in tech companies like Facebook, X, TikTok, and Google to prevent that misuse, according to a poll conducted by Pew Research Center.[4]
Those who are unaware of what AI deep fakes are run the risk of falling for one or more of these pieces of content. A survey of about 5,100 adults conducted by Pew Research Center in 2023 found that 50% of adults in the United States do not know what deep fakes are, while about 60% of Americans under 30 do.[5] Some, like Mina Momeni, an assistant professor of Communication Arts at the University of Waterloo in Canada, believe the American people would benefit from being educated about AI and deep fakes as these technologies take root in society: "[i]t is…necessary to increase public awareness of different types of digital manipulations and forgeries, and how they are used to influence society."
Voters' worry about deep fakes being used to influence elections is not unfounded; AI was already used to misinform voters in Slovakia's parliamentary elections last year. The pro-NATO political faction lost to its pro-Moscow opponents after deep fake audio emerged of its candidate claiming plans to "rig" the election and raise beer taxes.[6] The possibility of similar situations occurring in the United States has strengthened calls to regulate GenAI. Beyond that, concern for the safety of election workers plays a role as well. Intimidation of election workers has increased in recent years, putting many innocent people and their families at risk, even culminating in tragic situations such as the January 6th attack on the United States Capitol by people wishing to overturn the 2020 election.[7] In October of this year, ABC News reported that "[i]n a bulletin to state election officials, the Department of Homeland Security warns that AI voice and video tools could be used to create fake election records; impersonate election staff to gain access to sensitive information; generate fake voter calls to overwhelm call centers; and more convincingly spread false information online."[8] The idea of someone dying, or losing a loved one, over something that was never true is horrific. Some states, like California, have passed and continue to pass their own laws creating protections against AI misuse, but at the federal level much remains to be decided, such as which parts and types of GenAI to regulate, how to regulate them, and who should do the regulating.
Alternative Uses for AI
But perhaps there is a bright side to the madness. In the right hands, the same methods that are used to sow discord and chaos can be used in a more positive, educational, and productive manner. According to Capitol Technology University, a private STEM-focused university in Laurel, Maryland, "AI can be used to educate voters by providing relevant and reliable information about candidates, their platforms, and the voting process" ("The Good, The Bad").[9] Instead of producing videos about why a given candidate is bad, a GenAI-created video could lay out the agendas each candidate has supported or opposed, past and present, that voters may not yet know about; serve as a "how-to" guide for first-time voters; or help the public distinguish fact from fiction about the voting process.
Conclusion
As United States politics grows more divisive and the methods of spreading misinformation to the public multiply, there is much to take into account when examining how that misinformation influences people and their decisions. It is important to educate the public on matters of truth and fiction, especially when it comes to the decisions that shape this country. Education is a key part of combating deep fakes and the false information they spread; how we go about it will define success or failure.
References
1. ABC News. "AI deepfakes a top concern for election officials with voting underway". ABC News. Retrieved 2024-12-13.
2. "New survey finds most Americans expect AI abuses will affect 2024 election". Today at Elon. 2024-05-15. Retrieved 2024-12-13.
3. Tapper, Jake (2024-10-04). "Can you tell the difference? Jake Tapper uses his own deepfake to show how powerful AI is". CNN Politics. Retrieved 2024-12-13.
4. Gracia, Shanay (2024-09-19). "Americans in both parties are concerned over the impact of AI on the 2024 presidential campaign". Pew Research Center. Retrieved 2024-12-13.
5. Gracia, Shanay (2024-09-19). "Americans in both parties are concerned over the impact of AI on the 2024 presidential campaign". Pew Research Center. Retrieved 2024-12-13.
6. "Election 2024: The Deepfake Threat to the 2024 Election". Council on Foreign Relations. Retrieved 2024-12-13.
7. "Regulating AI Deepfakes and Synthetic Media in the Political Arena". Brennan Center for Justice. 2023-12-12. Retrieved 2024-12-13.
8. ABC News. "AI deepfakes a top concern for election officials with voting underway". ABC News. Retrieved 2024-12-13.
9. "The Good, the Bad, and the Unknown: AI's Impact on the 2024 Presidential Election". Capitol Technology University. Retrieved 2024-12-13.