Posted May 22, 2025

Study questions whether chatbots are more judgmental than their human counterparts

Fox School of Business Associate Professor Sezgin Ayabakan explains why some people perceive text-based chatbots as more judgmental than human mental health providers.

Photography by Ryan Brandenberg: Associate Professor Sezgin Ayabakan highlights people’s perceptions of mental health chatbots.

Recognizing that some people facing mental health issues are not turning to traditional providers for assistance, a Temple University associate professor has examined how artificial intelligence could be leveraged to help improve access to healthcare resources. 

“Our starting point was that mental health has a big stigma amongst the public and people would be more open to disclosing their information to a robot instead of a human,” said Sezgin Ayabakan, a Harold Schaefer Fellow in the Management Information Systems Department at the Fox School of Business. 

“We thought that people would be more willing to reach out to an AI agent because they might think that they would not be judged by the robots, because they are not trained to judge people,” he added. “People may feel like the judgmentalness of the human professional may be high, so they may not reach out.” 

However, after conducting multiple lab experiments, his research team found an unexpected result.  

“People perceived the AI agents as being more judgmental than a human counterpart, though both agents were behaving exactly the same way,” Ayabakan said. “That was the key finding.” 

The researchers conducted four lab experiments using a vignette design, with groups ranging from 290 to 1,105 participants. During the experiments, participants watched videos of a conversation between an agent and a patient. One group of participants was told that the agent was an AI agent, while the other was informed that the agent was human.

“The only variable that was changing was the agent type that we were disclosing,” Ayabakan explained. 

“That’s the beauty of vignette studies. You can control all the other things, and you only change one variable. You get the perception of people based on that change.” 

Next, the researchers conducted a qualitative study, comprising 41 in-depth interviews, to understand why people felt they were being judged by the chatbots and perceived them as more judgmental.

 “Our findings suggest that people don’t think that chatbots have that peak emotional understanding like human counterparts can,” Ayabakan said. 

“They cannot understand deeply because they don’t have those human experiences, and they lack those social meanings and emotional understanding that leads to more increased perceived judgmentalness.” 

The interview subjects felt that chatbots lacked empathy, compassion and the ability to validate their feelings.

“People feel like these agents cannot deliver that human touch or that human connection, at least in a mental health context,” Ayabakan continued. 

“The main highlight is that people perceive such agents for those things that they cannot do, instead of the things they can do. But if they want to judge a human agent, they normally judge them for those things that they do, instead of the things they cannot do.”