Artificial intelligence has been one of the most pivotal innovations of the 2020s. It has significantly changed many facets of our lives, from school to entertainment: teachers now have to contend with students using ChatGPT, and swarms of large language model (LLM) bot accounts spread misinformation across social media. Now, 2024 marks the first United States presidential election of the AI era.
In recent years, advancements in AI-generated imagery have been rapid and plentiful. Early AI images were obviously fake: hands with too many fingers, blurred faces and backgrounds, objects in strange locations, and distorted text. Now, depending on the software used, AI-generated images can be near-indistinguishable from reality.
To curb defamatory content, most AI image generators, like DALL·E and Midjourney, restrict users from generating images of real people, along with anything profane or illegal. Elon Musk's AI chatbot, Grok, lacks such safeguards, however, and freely allows users to generate images of real people doing fake things, putting this dangerous technology into the hands of everyday users.
This opens the door to all sorts of misinformation. Yes, many viewers will recognize these images as fake, but among elderly or less technologically literate audiences, they could deceive millions.
For example, two specific photos have gone viral, and neither is flattering. The first is an image of Donald Trump and Kamala Harris, the two 2024 presidential candidates, robbing a cashier at gunpoint; the second depicts the pair holding guns while flying a plane over New York City, the Twin Towers looming in the foreground.
While it's obvious to most people that Kamala Harris and Donald Trump did not carry out the 9/11 attacks or rob anyone at gunpoint, this technology now exists and could absolutely be used to defame.
In fact, such images have already been used for this very purpose. Former President Donald Trump has uploaded several images to his social media platform, Truth Social, of AI-generated supporters wearing "Swifties for Trump" shirts in a bid to make it look like he had the support of Taylor Swift fans.
He has also uploaded an AI-generated image of his opponent, Vice President Kamala Harris, leading a communist rally. It was clearly fake, but it still used Harris's likeness without her consent to portray her doing something she never did.
The reverse has happened as well. During a Harris-Walz rally in Detroit, Trump falsely claimed that Harris's crowd, and the images of it, were AI-generated, sparking massive online discourse. This raises another concern: anything a candidate does can be dismissed with the phrase "it's just AI," opening a whole new level of gaslighting and propaganda.
Beyond the candidates themselves, supporters on both sides have been posting AI-generated images of the opposition and their own picks. Unsettling or inappropriate AI images of Donald Trump, Kamala Harris, and Joe Biden have spread like wildfire across the internet.
Fake protest images, fake images of candidates committing morally questionable or violent acts, offensive or revealing AI depictions, and even deeply disturbing false images of presidential "ships," showing the two candidates kissing or, for some reason, pregnant, have been created and published all over social media in efforts to defame or falsely promote them.
Many people brush off AI images as harmless and easy to spot, but this is simply untrue. A poll was conducted here at PPCHS to determine whether people could tell if an image was generated by artificial intelligence. Five images were shown to each participant: three were AI-generated, and two were real.
Seventy-five percent of participants marked at least one AI image as real, and one person marked all three as real. Interestingly, the presence of AI imagery caused participants to doubt the genuine photos too: half of them mistakenly flagged one of the real images as AI-generated.
This opens up all sorts of possibilities for politicians and political marketers to sow doubt in voters over what is real and what is fake. "It really doesn't have anything that would make me feel it's fake," said PPCHS AICE Marine teacher Mr. Kapela as he looked at an artificially created image, "so I'd have to say it's real."
The scariest part is that participants judged these images knowing in advance that some were fake. In a real-world scenario, no one gets that heads-up, making AI images even harder to detect.
When browsing images online, it's crucial to verify whether they're genuine. Pay attention to the shadows; if something seems odd, the image might be AI-generated. Similarly, check the hands and fingers, which often appear distorted or awkwardly placed. Eyes can look unnervingly artificial, and objects might be arranged too perfectly. These are all signs that an image could be AI-created. Did this last paragraph throw you off? It's AI too.