Risks of Using Artificial Intelligence in Creating the Image of Politicians and in Electoral Campaigns

Authors

DOI:

https://doi.org/10.12797/AdAmericam.25.2024.25.10

Keywords:

Artificial intelligence (AI), X‑Risk Analysis, AI Safety Research, deepfake, user profiling, microtargeting, electoral campaigns

Abstract

In light of the rapid development of advanced technologies in recent years, many questions have been raised about the future application of available technological solutions in various spheres of life, including politics. An important issue in this field concerns the risks associated with the use of artificial intelligence algorithms in creating the public image of politicians and in electoral campaigns. This paper is based on the concept of eroded epistemics, which is a part of Existential Risk Analysis for AI research. Using the AI Safety Research perspectives of monitoring and systemic safety, it examines the potential risks of using AI in politics and ways to minimize them. The analysis is based on examples of the actions of American politicians. First, the threats of using deepfake technology to create and manipulate the image of politicians such as Nancy Pelosi, Barack Obama, and Donald Trump are presented. The second part of the paper discusses user profiling and microtargeting strategies and how they may shape opinions and influence voters’ decisions. Finally, examples of present‑day solutions that are being developed to combat these risks are described.

Author Biography

Helena Jańczuk, College of Europe

Helena Jańczuk is an MA student at the Institute of American Studies and Polish Diaspora (Faculty of International and Political Studies) at the Jagiellonian University in Kraków, Poland. In 2022, she graduated with a bachelor’s degree from the Faculty of English at the Adam Mickiewicz University in Poznań, Poland. Her BA thesis, titled “The Human-Robot Relationship in Isaac Asimov’s I, Robot and Philip K. Dick’s Do Androids Dream of Electric Sheep?,” won the dean’s award for the best paper in the field of literary studies. Her research centers on the social, moral, and ethical aspects of technological advancement, as well as the philosophy of transhumanism and its critique.

References

Anderson, Berit. “The Rise of the Weaponized AI Propaganda Machine.” Medium, 13 February 2017, https://medium.com/join‑scout/the‑rise‑of‑the‑weaponized‑ai‑propaganda‑machine‑86dac61668b (10.09.2024).

Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: Association for Computing Machinery, 2021, pp. 610‑623, https://doi.org/10.1145/3442188.3445922.

Bomboy, Scott. “What Are the Real Swing States in the 2016 Election?” National Constitution Center, 13 June 2016, https://constitutioncenter.org/blog/what‑are‑the‑really‑swing‑states‑in‑the‑2016‑election (10.09.2024).

Bostrom, Nick. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology, vol. 9, no. 1, 2002, pp. 1‑36.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.

Bucknall, Benjamin S., and Shiri Dori‑Hacohen. “Current and Near‑Term AI as a Potential Existential Risk Factor.” AIES’22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society. New York: Association for Computing Machinery, 2022, pp. 119‑129, https://doi.org/10.1145/3514094.3534146.

BuzzFeedVideo. “You Won’t Believe What Obama Says In This Video!” YouTube, 17 April 2018, https://www.youtube.com/watch?v=cQ54GDm1eL0 (10.09.2024).

Coats, Daniel R. Statement for the Record: Worldwide Threat Assessment of the US Intelligence Community. Washington, D.C.: Senate Select Committee on Intelligence, 2019, https://www.dni.gov/files/ODNI/documents/2019‑ATA‑SFR‑‑‑SSCI.pdf (25.09.2023).

Cole, Samantha. “This Deepfake of Mark Zuckerberg Tests Facebook’s Fake Video Policies.” Vice, 11 June 2019, https://www.vice.com/en/article/ywyxex/deepfake‑of‑mark‑zuckerberg‑facebook‑fake‑video‑policy (10.09.2024).

Duke Reporters’ Lab. “Fact‑Checking Sites.” Reporters Lab, https://reporterslab.org/fact‑checking/ (25.09.2023).

Easterly, Jen, et al. “Artificial Intelligence’s Threat to Democracy.” Foreign Affairs, 3 January 2024, https://www.foreignaffairs.com/united‑states/artificial‑intelligences‑threat‑democracy (10.09.2024).

European Union. “Art. 22 GDPR: Automated Individual Decision‑making, Including Profiling.” General Data Protection Regulation (GDPR), https://gdpr‑info.eu/art‑22‑gdpr/ (25.09.2023).

Gabriel, Iason. “Artificial Intelligence, Values, and Alignment.” Minds and Machines, vol. 30, no. 3, 2020, pp. 411‑437, https://doi.org/10.1007/s11023‑020‑09539‑2.

Galindo, Gabriela. “XR Belgium Posts Deepfake of Belgian Premier Linking Covid‑19 with Climate Crisis.” The Brussels Times, 14 April 2020, https://www.brusselstimes.com/all‑news/belgium‑all‑news/politics/106320/xr‑belgium‑posts‑deepfake‑of‑belgian‑premier‑linking‑covid‑19‑with‑climate‑crisis (10.09.2024).

Goldstein, Josh A., and Girish Sastry. “The Coming Age of AI‑Powered Propaganda.” Foreign Affairs, 7 April 2023, https://www.foreignaffairs.com/united‑states/coming‑age‑ai‑powered‑propaganda (10.09.2024).

Hendrycks, Dan, and Mantas Mazeika. “X‑Risk Analysis for AI Research.” arXiv, 2022, https://doi.org/10.48550/arXiv.2206.05862.

Higgins, Eliot [EliotHiggins]. “Making pictures of Trump getting arrested while waiting for Trump’s arrest.” X (Twitter), 20 March 2023, https://twitter.com/EliotHiggins/status/1637927681734987777 (10.09.2024).

Kanoje, Sumitkumar, et al. “User Profiling Trends, Techniques and Applications.” International Journal of Advance Foundation and Research in Computer, vol. 1, no. 1, 2014, https://doi.org/10.48550/arXiv.1503.07474.

Kertysova, Katarina. “Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation Is Produced, Disseminated, and Can Be Countered.” Security and Human Rights, vol. 29, 2018, pp. 55‑81, https://doi.org/10.1163/18750230‑02901005.

Library of Congress. “H.R.4955 – Banning Microtargeted Political Ads Act of 2021.” Congress, https://www.congress.gov/bill/117th‑congress/house‑bill/4955 (25.09.2023).

Lorenz‑Spreen, Philipp, et al. “Boosting People’s Ability to Detect Microtargeted Advertising.” Scientific Reports, vol. 11, 2021, https://doi.org/10.1038/s41598‑021‑94796‑z.

Merriam‑Webster. “Deepfake,” https://www.merriam‑webster.com/dictionary/deepfake (25.09.2023).

Miceli, Giacomo. The Infinite Conversation, https://infiniteconversation.com/ (25.09.2023).

National Association of Secretaries of State. “#TrustedInfo2024.” NASS, http://www.nass.org/initiatives/trustedinfo (15.01.2024).

Ngo, Richard, et al. “The Alignment Problem from a Deep Learning Perspective.” arXiv, 2022, https://doi.org/10.48550/arXiv.2209.00626.

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. New York: Hachette Books, 2020.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking Books, 2019.

Somers, Meredith. “Deepfakes, Explained.” MIT Sloan School of Management, 21 July 2020, https://mitsloan.mit.edu/ideas‑made‑to‑matter/deepfakes‑explained (10.09.2024). DOI: https://doi.org/10.1287/420a0422-8060-4775-b8d7-a8f01fa36ffe

Stanley‑Becker, Isaac, and Naomi Nix. “Fake Images of Trump Arrest Show ‘Giant Step’ for AI’s Disruptive Power.” The Washington Post, 22 March 2023, https://www.washingtonpost.com/politics/2023/03/22/trump‑arrest‑deepfakes/ (10.09.2024).

Supasorn Suwajanakorn. “Synthesizing Obama: Learning Lip Sync from Audio.” YouTube, 12 July 2017, https://www.youtube.com/watch?v=9Yq67CjDqvw (10.09.2024).

Suwajanakorn, Supasorn, et al. “Synthesizing Obama: Learning Lip Sync from Audio.” ACM Transactions on Graphics, vol. 36, no. 4, 2017, pp. 1‑13, https://doi.org/10.1145/3072959.3073640.

The New York Times. “2016 Presidential Election Results,” 9 August 2017, https://www.nytimes.com/elections/2016/results/president (10.09.2024).

The New York Times. “President Map,” 2012, https://www.nytimes.com/elections/2012/results/president.html?mtrref=www.google.com&gwh=7B66F1AD24AF6781F080C09AD73E0D3A&gwt=pay&assetType=PAYWALL (10.09.2024).

Trump, Donald J. [realDonaldTrump]. “PELOSI STAMMERS THROUGH NEWS CONFERENCE.” X (Twitter), 24 May 2019, https://twitter.com/realDonaldTrump/status/1131728912835383300 (10.09.2024).

United States House of Representatives. “Speakers of the House by Congress.” History, Art & Archives, https://history.house.gov/People/Office/Speakers‑List/ (25.09.2023).

Wasilewski, Andrzej. Kto sieje wiatr, zbiera burzę. Wrocław: Muzeum Narodowe we Wrocławiu, 26 February 2023–4 June 2023.

Watson, Kathryn. “Trump Tweets Heavily Edited Video of Pelosi Played by Fox Business.” CBS News, 24 May 2019, https://www.cbsnews.com/news/trump‑tweets‑heavily‑edited‑video‑of‑pelosi‑played‑by‑fox‑news/ (10.09.2024).

Whitson, George M. “Artificial Intelligence.” Salem Press Encyclopedia of Science, 2023, https://searchworks‑lb.stanford.edu/articles/ers__89250362 (10.09.2024).

Published

2024-12-30

How to Cite

Jańczuk, H. “Risks of Using Artificial Intelligence in Creating the Image of Politicians and in Electoral Campaigns”. Ad Americam, vol. 25, Dec. 2024, pp. 169-82, doi:10.12797/AdAmericam.25.2024.25.10.