Posted July 2, 2025

New research reveals superior visual perception in humans compared with AI

Assistant Professor Vlad Ayzenberg from Temple’s Psychology and Neuroscience Department found visual object recognition to be stronger in young children than in state-of-the-art AI models. 

Photography by Joseph V. Labolito
In a new study, Assistant Professor Vlad Ayzenberg and other researchers found preschoolers’ visual object recognition to be much better than AI’s capabilities.

As artificial intelligence (AI) rapidly grows—a recent UN Trade and Development report projects the global AI market soaring to $4.8 trillion by 2033—the technology seems equipped to handle any task. Driving cars. Analyzing medical images. Making music. Having a conversation.

But a new study from Vlad Ayzenberg, CLA ’12, assistant professor in Temple’s Psychology and Neuroscience Department, highlights a notable limitation of the technology and a stark contrast between AI and humans as young as 3 years old.

“Our findings suggest that the human visual system is far more data efficient than current AI and that the perceptual abilities of even young children are extremely robust,” said Ayzenberg.

Ayzenberg and researchers from Emory University compared the visual perceptual abilities of preschoolers and state-of-the-art AI models and found that the children outperformed the best computer vision models currently available. The only models that performed better were those trained on more visual experience than a human could ever accumulate. The study, “Fast and Robust Visual Object Recognition in Young Children,” was published on July 2 in Science Advances.

For the study, 3- to 5-year-olds were asked to identify objects in images shown for just 100 milliseconds while their attention was disrupted by factors such as noise.

“We thought this task would be really hard for young children because it’s designed for adults,” said Ayzenberg.

According to Ayzenberg, the study also illustrates how cognitive and neural insights from children can be used to improve current AI models and, conversely, how AI models can eventually be used to gain insights into the functioning of the human mind.

“AI models are useful, but they make mistakes that no human would,” he said. “They require more training and energy than we do. For instance, training a large language model like ChatGPT has a carbon footprint about 17 times greater than that of a person in one year. If we can understand how young children are able to visually perceive objects, we can then make the models more efficient.

“This study provides a benchmark for these AI models: Here’s what preschoolers can do. Can the AI achieve as much as a 3-year-old while using less data than previously required?” added Ayzenberg.

Ayzenberg, who became a faculty member at Temple in July and earned a bachelor’s degree in psychology from the university, opened the Vision Learning and Development Lab on Main Campus this summer. In this lab, researchers will employ a combination of behavioral, neuroimaging and computational techniques to understand how the human brain is organized at birth and in early childhood to support the rapid development of cognitive and perceptual abilities. The goal is to eventually create more human-like AI agents based on what they learn from children.

Moving forward, Ayzenberg plans to use functional MRI in awake infants to measure large-scale brain activity while they engage in specific cognitive tasks.

“Babies and young children haven’t been in the world very long, but they can do a lot very quickly,” explained Ayzenberg. “We want to understand what neural processes allow them to rapidly develop these sophisticated abilities in the absence of extensive experience.”

He looks forward to conducting this research at Temple. “Temple’s psychology and neuroscience program is strong, especially in the developmental area,” he said.