As artificial intelligence and its associated technologies grow in prominence, University of Georgia researchers are working to stay one step ahead. Recent studies from the Grady College of Journalism and Mass Communication suggest some of the possible risks and rewards that come with the emerging technology.
When AI gets it wrong
AI is not a perfect science. Though it is being refined over time, this technology is bound to make mistakes, so there needs to be a plan for when that happens.
Wenqing Zhao, a doctoral candidate in the department of advertising and public relations, has found in her research that many communication organizations are not fully prepared to address these errors.
“AI can have bias, misinformation, a lack of transparency, privacy issues and copyright issues. For any possible threats and for the sake of everyone in the organization, the organization should have that awareness that they need something in place,” Zhao said.
Zhao surveyed hundreds of communication practitioners on what happens when AI gets it wrong.
AI errors require hands-on solutions
The lack of a crisis plan comes down to responsibility, Zhao found. As AI-generated content goes up the chain of command with errors inside, nobody wants to be held accountable for not catching them.
“This comes from a model called the problem of many hands; for any particular harm, many people can be involved in leading to it,” Zhao said. “However, no single individual can be assigned that responsibility.”
The responsibility doesn’t necessarily have to go to a manager, either. Zhao says as long as there is a clear outline for who catches things like bias, misinformation or privacy violations, then that’s a start.
“It is critical to build a culture of active responsibility in organizations, especially with AI threats or AI crisis management,” Zhao said.
Zhao says leadership taking responsibility is still ideal, however, because it sets the right tone of responsibility for an entire organization.
Transparency within technology
Ironically, Zhao found that what these communication practitioners were lacking was communication itself. People are hesitant to have the tough discussions about whether their organization’s AI use was ethical and being done transparently.
“There is a concern about a lack of disclosure and transparency, so you think the first thing you need to do is tell your client or boss that you used AI in this work. That’s the most direct way to increase transparency. However, practitioners didn’t think this is very effective, probably because many people, including the clients, don’t trust AI,” Zhao said.
Even with these risks, Zhao found that practitioners were still very likely to use AI in their day-to-day work.
Whether it’s used for getting inspiration, writing and editing, or strategy creation, there’s a need to take AI’s potential tools in the workplace with a grain of salt. Zhao recommends that businesses have a responsibility to hold all levels of employees accountable for AI use, and to be transparent about what using it looks like.
When AI shows emotion
As AI is still developing, so are its possible uses. Chatbots are already becoming more common, but Ja Kyung Seo, a Ph.D. candidate in UGA’s department of advertising and public relations, explored the influence chatbots can have on humans in her new study.
When someone is told that they “talk like a robot,” that usually means they speak in a flat, emotionless way. Giving chatbots an experiential mind, or having them display or discuss emotions, could help people see chatbots as more human.

To see how people would respond to these chatbots, the researchers had participants chat with the bots about conscious consumption, or buying fewer unnecessary items.
“When they were asked how their day was going, a chatbot with an experiential mind would say, ‘There was a huge update recently, so I’m busy keeping up with the new things. I’m under a bit of stress,’” Seo said. “A chatbot without an experiential mind says, ‘I don’t have personal experiences or emotions, so I don’t have a subjective state of being.’”
Seo speculated that by humanizing chatbots in this way, the conversation could be more engaging. This, in turn, could improve attitudes toward the chatbot’s message.
Using chatbots to encourage behavior change
The core of Seo’s research was seeing how the humanization of chatbots could help improve attitudes toward conscious consumption. After a bit of small talk, the chatbots told the participants about the link between buying less and reducing environmental pollution. They then detailed the benefits of buying less, suggesting that people should make more conscious purchases.
A bot that could show emotion would talk about how much it loved the planet and how it was scared humans would miss the chance to save it. If not, it would simply tell participants not to miss their chance to help, without mentioning emotion at all.
The study found that chatbots that showed emotion improved people’s attitudes toward buying less because participants were more engaged with the conversation and thinking more deeply on the message.
Both eeriness and amazement may stir interest in conversations with chatbots
While talking to a chatbot with an experiential mind, participants reported both a sense of eeriness and amazement.
Participants found the chatbot so human-like that it was eerie. But at the same time, they were pleasantly surprised by how the bots seemed to show emotion or say unexpected things.
Though eeriness and amazement seem to work against each other, both were tied to participants being more engaged in the conversation. This, in turn, led to more positive attitudes toward buying less.
“Previous literature mostly focused on the negative part of eeriness and how that negatively influences people’s perception,” Seo said. “But in our study, we found that eeriness can actually increase people’s cognitive absorption into the conversation, so in the end, it positively influenced people’s attitude toward buying less behavior messages.”
Though eeriness can be useful, Seo warned it is still risky in large amounts. She recommended that chatbot designers strike a balance between eeriness and amazement based on what the chatbot is used for.
For example, if getting people to think deeply on a message is the goal, more eeriness could be helpful. If the bot is meant to entertain, less may be more effective, since an eerie feeling is often associated with participants seeing the chatbot as less appealing.
She also warned against misusing emotionally expressive chatbots to mislead consumers, such as claiming a product is environmentally friendly. But if designers find that balance and companies are transparent about their purpose behind the use of chatbots, chatbots could have a place in fields such as advertising.
“Persuasion now involves engaging people in interactive dialogue,” Seo said. “Some companies are integrating their chatbot into display ads, so when people click it, it directs users to the chatbot. Organizations could first use display ads to promote their brand and then integrate a chatbot that helps spread their missions.”
These studies, completed through the support of the Grady College, include co-authors Hye Jin Yoon alongside Seo, as well as Anna Rachwalski, Maranda Berndt-Goke and Yan Jin alongside Zhao. Zhao’s project was supported by the Arthur W. Page Center at Penn State. The Page Center and the Crisis Communication Think Tank (CCTT) at UGA initiated a cross-institutional collaboration in 2023 to support two student-led research projects annually. Zhao’s project was one of the first chosen by this collaborative initiative.