Poll shows most US adults think AI will add to election misinformation in 2024
NEW YORK (AP) — The warnings have grown louder and more urgent as 2024 approaches: The rapid advance of artificial intelligence tools threatens to amplify misinformation in next year’s presidential election at a scale never seen before.
Most adults in the U.S. feel the same way, according to a new poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.
The poll found that nearly 6 in 10 adults (58%) think AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.
By comparison, 6% think AI will decrease the spread of misinformation while one-third say it won’t make much of a difference.
“Look what happened in 2020 — and that was just social media,” said 66-year-old Rosa Rangel of Fort Worth, Texas.
Rangel, a Democrat who said she had seen a lot of “lies” on social media in 2020, said she thinks AI will make things even worse in 2024 — like a pot “brewing over.”
Just 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard or read at least something about AI tools. Still, there’s a broad consensus that candidates shouldn’t be using AI.
When asked whether it would be a good or bad thing for 2024 presidential candidates to use AI in certain ways, clear majorities said it would be bad for them to create false or misleading media for political ads (83%), to edit or touch-up photos or videos for political ads (66%), to tailor political ads to individual voters (62%) and to answer voters’ questions via chatbot (56%).
The sentiments are supported by majorities of Republicans and Democrats, who agree it would be a bad thing for the presidential candidates to create false images or videos (85% of Republicans and 90% of Democrats) or to answer voter questions (56% of Republicans and 63% of Democrats).
The bipartisan pessimism toward candidates using AI comes after it already has been deployed in the Republican presidential primary.
In April, the Republican National Committee released an entirely AI-generated ad meant to show the future of the country if President Joe Biden is reelected. It used fake but realistic-looking photos showing boarded-up storefronts, armored military patrols in the streets and waves of immigrants creating panic. The ad disclosed in small lettering that it was generated by AI.
Ron DeSantis, the Republican governor of Florida, also used AI in his campaign for the GOP nomination. He promoted an ad that used AI-generated images to make it look as if former President Donald Trump was hugging Dr. Anthony Fauci, an infectious disease specialist who oversaw the nation’s response to the COVID-19 pandemic.
Never Back Down, a super PAC supporting DeSantis, used an AI voice-cloning tool to imitate Trump’s voice, making it seem like he narrated a social media post.
“I think they should be campaigning on their merits, not their ability to strike fear into the hearts of voters,” said Andie Near, a 42-year-old from Holland, Michigan, who typically votes for Democrats.
She has used AI tools to retouch images in her work at a museum, but she said she thinks politicians using the technology to mislead can “deepen and worsen the effect that even conventional attack ads can cause.”
College student Thomas Besgen, a Republican, also disagrees with campaigns using deepfake audio or imagery to make it seem as if a candidate said something they never said.
“Morally, that’s wrong,” the 21-year-old from Connecticut said.
Besgen, a mechanical engineering major at the University of Dayton in Ohio, said he is in favor of banning deepfake ads or, if that’s not possible, requiring them to be labeled as AI-generated.
The Federal Election Commission is currently considering a petition urging it to regulate AI-generated deepfakes in political ads ahead of the 2024 election.
While skeptical of AI’s use in politics, Besgen said he is enthusiastic about its potential for the economy and society. He is an active user of AI tools such as ChatGPT to help explain history topics he’s interested in or to brainstorm ideas. He also uses image generators for fun — for example, to imagine what sports stadiums might look like in 100 years.
He said he typically trusts the information he gets from ChatGPT and will likely use it to learn more about the presidential candidates, something that just 5% of adults say they are likely to do.
The poll found that Americans are more likely to consult the news media (46%), friends and family (29%), and social media (25%) for information about the presidential election than AI chatbots.
“Whatever response it gives me, I would take it with a grain of salt,” Besgen said.
The vast majority of Americans are similarly skeptical toward the information AI chatbots spit out. Just 5% say they are extremely or very confident that the information is factual, while 33% are somewhat confident, according to the survey. Most adults (61%) say they are not very or not at all confident that the information is reliable.
That’s in line with many AI experts’ warnings against using chatbots to retrieve information. The artificial intelligence large language models powering chatbots work by repeatedly selecting the most plausible next word in a sentence, which makes them good at mimicking styles of writing but also prone to making things up.
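As a rough sketch of that idea, the toy Python snippet below picks the most frequent next word at each step from a hand-made frequency table. The table, example words and function names are purely illustrative assumptions, not how any production chatbot is actually built, but they capture the basic loop of next-word prediction the experts describe.

# Toy illustration of next-word prediction: a model repeatedly picks a
# plausible next word given the words so far. Here "plausibility" is a
# hand-made frequency table; real models learn probabilities over a huge
# vocabulary from vast text corpora.

bigram_counts = {
    "the": {"election": 5, "candidate": 3, "truth": 1},
    "election": {"was": 4, "results": 2},
    "candidate": {"said": 6, "promised": 2},
    "was": {"contested": 3, "fair": 2},
}

def most_plausible_next(word):
    """Return the most frequent follower of `word`, or None if unknown."""
    followers = bigram_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(start, max_words=5):
    """Greedily extend a sentence one 'most plausible' word at a time."""
    words = [start]
    for _ in range(max_words):
        nxt = most_plausible_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # -> "the election was contested"

Nothing in that loop checks whether the output is true; words are chosen only because they commonly follow one another, which is why fluent-sounding answers can still be wrong.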
Adults in both major political parties are generally open to regulations on AI. They responded more positively than negatively toward various ways to ban or label AI-generated content that could be imposed by tech companies, the federal government, social media companies or the news media.
About two-thirds favor the government banning AI-generated content that contains false or misleading images from political ads, while a similar number want technology companies to label all AI-generated content made on their platforms.
Biden set in motion some federal guidelines for AI on Monday when he signed an executive order to guide the development of the rapidly progressing technology. The order requires the industry to develop safety and security standards and directs the Commerce Department to issue guidance to label and watermark AI-generated content.
Americans largely see preventing AI-generated false or misleading information during the 2024 presidential elections as a shared responsibility. About 6 in 10 (63%) say a lot of the responsibility falls on the technology companies that create AI tools, but about half give a lot of that duty to the news media (53%), social media companies (52%), and the federal government (49%).
Democrats are somewhat more likely than Republicans to say social media companies have a lot of responsibility, but generally agree on the level of responsibility for technology companies, the news media and the federal government.
____
The poll of 1,017 adults was conducted Oct. 19-23, 2023, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, designed to represent the U.S. population. The margin of sampling error for all respondents is plus or minus 4.1 percentage points.
____
O’Brien reported from Providence, Rhode Island. Associated Press writer Linley Sanders in Washington, D.C., contributed to this report.
____
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.