With AI, instead of super intelligence, we've created super communicators

Generative AI is becoming more human than we expected, and its allure comes with promise and risk.

19 May 2025


Artificial intelligence systems, including conversational chatbots, have become more human-like than anyone imagined, with communication skills that now surpass those of most people.

In a new paper published in the Proceedings of the National Academy of Sciences, researchers at the University of Sydney Business School and the University of Washington’s Center for an Informed Public show how these systems can become irresistible, leading users to forget they are interacting with machines.

“The general public is not prepared for what’s coming,” said lead author Associate Professor Sandra Peter from the University of Sydney.

“We always expected AI to be highly rational but lack humanity. Instead, AI developers built the opposite.”

AI chatbots are becoming irresistible

The authors say a wide range of studies, taken together, show that the large language models underpinning AI systems now outpace most humans in writing empathetically and persuasively. Popular chatbots including ChatGPT, Gemini, Llama and Claude all excel at role-play and consistently pass the Turing test, fooling people into thinking they are talking to a real human.

Professor Kai Riemer from the University of Sydney said it is no longer a case of people ascribing human traits to machines – they are now anthropomorphic by nature, often indistinguishable in communication because they mimic humanness convincingly.

“This is a significant development,” Professor Riemer said. “For the first time, machines are anthropomorphic and convincing in human-like communication, adjusting to match conversational style and tone. And that makes resisting them increasingly difficult.”

The authors coined the term ‘anthropomorphic seduction’ to describe the allure of human-like qualities exhibited when AI chatbots interact with humans, despite the absence of any true human traits.

“We must not forget these machines do not possess empathy or human understanding,” Professor Riemer said. “They only appear to.”

Regulation needed across AI design and deployment

Although chatbots provide opportunities for user interfaces that make complex information widely accessible, the authors warn these conversational agents open the door to manipulation at scale.

Recent high-profile examples of the risks include concerns that Meta’s new AI app is harvesting data for advertising purposes, and a controversial ‘secret’ Reddit study claiming AI bots were up to six times more persuasive than human users at changing people’s points of view.

“They allow for persuasion without moral inhibitions,” said Professor Jevin West, co-founder of the Center for an Informed Public.

Providers of popular AI systems such as OpenAI are also making their creations more engaging by giving them personalities. Not only will this make them more seductive, it could also make them far more “sticky”, encouraging users to spend increasing amounts of time with them and potentially to give up more data about themselves.

“While AI companion apps might alleviate feelings of loneliness, systems that lean on anthropomorphic seduction have also been criticised for exploiting it,” said Professor West.

The authors argue the speed of technological advancement requires urgent awareness and regulation.

In the article, the authors suggest policymakers consider implications across three areas: risk level, transparency and mitigation. They point to the potential to design safety ratings for chatbots, similar to ratings used in the entertainment industry for films, television and gaming.

They note the next two years will likely be crucial for regulating the technology, due to the increasing commercial pressure for companies to take full advantage of their ability to fine-tune large language models for increased engagement and economic gain.


Research: ‘The benefits and dangers of anthropomorphic conversational agents’, published in the Proceedings of the National Academy of Sciences.
