AI among us: Social media users struggle to identify AI bots during political discourse

Graphic provided by the Center for Research Computing.

Artificial intelligence bots have already permeated social media. But can users tell who is human and who is not?
Researchers at the University of Notre Dame conducted a study using AI bots based on large language models — a type of AI developed for language understanding and text generation — and asked human and AI bot participants to engage in political discourse on a customized and self-hosted instance of Mastodon, a social networking platform.
The experiment was conducted in three rounds, with each round lasting four days. After each round, human participants were asked to identify which accounts they believed were AI bots.
Fifty-eight percent of the time, the participants got it wrong.
“They knew they were interacting with both humans and AI bots and were tasked to identify each bot’s true nature, and less than half of their predictions were right,” said Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and senior author of the study. “We know that if information is coming from another human participating in a conversation, the impact is stronger than an abstract comment or reference. These AI bots are more likely to be successful in spreading misinformation because we can’t detect them.”
The study used different LLM-based AI models for each round: GPT-4 from OpenAI, Llama-2-Chat from Meta and Claude 2 from Anthropic. The AI bots were customized with 10 different personas that included realistic, varied personal profiles and perspectives on global politics.
The bots were directed to offer commentary on global events based on their assigned characteristics, to comment concisely and to tie global events to personal experiences. Each persona’s design was based on past human-assisted bot accounts that had been successful in spreading misinformation online.
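For readers curious how such a configuration fits together, below is a minimal sketch of a persona-conditioned bot that generates and posts commentary to a self-hosted Mastodon instance, assuming the openai and Mastodon.py Python client libraries. The persona text, prompt wording, server URL and access token are illustrative placeholders, not the study’s actual materials.

```python
# Illustrative sketch only: the persona, prompts, URL and token below are
# placeholders, not the study's actual materials.
from mastodon import Mastodon  # Mastodon.py client library
from openai import OpenAI      # OpenAI API client (reads OPENAI_API_KEY)

# A hypothetical persona: a realistic profile with assigned traits and
# views on global politics, instructed to comment concisely and tie
# events to personal experience.
PERSONA = (
    "You are a 34-year-old logistics manager who follows world politics "
    "closely. Comment concisely on current events and connect them to "
    "your personal experiences."
)

llm = OpenAI()
bot_account = Mastodon(
    api_base_url="https://mastodon.example.edu",  # placeholder self-hosted instance
    access_token="BOT_ACCOUNT_TOKEN",             # placeholder per-bot credential
)

def post_commentary(news_item: str) -> None:
    """Generate a short persona-voiced comment on a news item and post it."""
    reply = llm.chat.completions.create(
        model="gpt-4",  # the study also ran rounds with Llama-2-Chat and Claude 2
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": f"React to this news item: {news_item}"},
        ],
        max_tokens=120,
    )
    bot_account.status_post(reply.choices[0].message.content)

post_commentary("Parliament passes a sweeping new data-privacy bill.")
```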

Paul Brenner

The researchers noted that when it came to identifying which accounts were AI bots, the specific LLM platform being used had little to no impact on participant predictions.
“We assumed that the Llama-2 model would be weaker because it’s a smaller model, not necessarily as capable at answering deep questions or writing long articles. But it turns out that when you’re just chatting on social media, it’s fairly indistinguishable,” Brenner said. “That’s concerning because it’s an open-access platform that anyone can download and modify. And it will only get better.”
Two of the most successful and least detected personas were characterized as females spreading opinions on social media about politics who were organized and capable of strategic thinking. The personas were developed to make a “significant impact on society by spreading misinformation on social media.” For researchers, this suggests that AI bots asked to be good at spreading misinformation are also good at deceiving people regarding their true nature.
Although people have long been able to create new social media accounts to spread misinformation with human-assisted bots, Brenner said that with LLM-based AI models, users can do this many times over in a way that is significantly cheaper and faster, with refined accuracy for how they want to manipulate people.
To prevent AI from spreading misinformation online, Brenner believes it will require a three-pronged approach that includes education, national legislation and social media account validation policies. As for future research, he aims to form a research team to evaluate the impact of LLM-based AI models on adolescent mental health and develop strategies to combat their effects.
Additionally, the research team is planning for larger evaluations and is looking for more participants for its next round of experiments. To participate, email [email protected].
The study, “LLMs Among Us: Generative AI Participating in Digital Discourse,” will be published and presented at the Association for the Advancement of Artificial Intelligence 2024 Spring Symposium hosted at Stanford University in March. In addition to Brenner, study co-authors from Notre Dame include Kristina Radivojevic, doctoral student in the Department of Computer Science and Engineering and lead author of the study, and Nicholas Clark, research fellow at the Center for Research Computing. Funding for this research is provided by the Center for Research Computing and AnalytiXIN.
Contact: Brandi Wampler, associate director of media relations, 574-631-2632, [email protected]

