AI Disclaimers in Political Ads Backfire on Candidates, Study Finds

Many U.S. states now require candidates to disclose when their political ads use generative AI, reports the Washington Post.

Unfortunately, researchers at New York University’s Center on Technology Policy “found that people rated candidates ‘less trustworthy and less appealing’ when their ads featured AI disclaimers…”

In the study, researchers asked more than 1,000 participants to watch political ads by fictional candidates — some containing AI disclaimers, some not — and then rate how trustworthy they found the would-be officeholders, how likely they were to vote for them, and how truthful their ads were. Ads containing AI labels largely hurt candidates across the board, with the pattern holding true for “both deceptive and more harmless uses of generative AI,” the researchers wrote. Notably, researchers also found that AI labels were more harmful for candidates running attack ads than for those being attacked, something they called the “backfire effect.”

“The candidate who was attacked was actually rated more trustworthy, more appealing than the candidate who created the ad,” said Scott Babwah Brennen, who directs the center at NYU and co-wrote the report with Shelby Lake, Allison Lazard and Amanda Reid.

One other interesting finding… The article notes that study participants in both parties “preferred when disclaimers were featured anytime AI was used in an ad, even when innocuous.”

Read more of this story at Slashdot.

