Why are some images more memorable than others?

Summary: A new study reveals that the brain prioritizes remembering images that are difficult to interpret. The researchers used a computational model and behavioral experiments to show that scenes that were more difficult for the model to reconstruct were more memorable for participants.

This finding helps explain why certain visual experiences remain in our memory. The study could also help in the development of artificial intelligence memory systems.

Key facts:

  • Memory prioritization: The brain tends to remember images that are difficult to explain or interpret.
  • Computational model: The researchers used a model that performs compression and reconstruction of visual signals.
  • AI implications: The insights could help in building more efficient memory systems for artificial intelligence.

Source: Yale

The human mind filters a stream of experiences to create specific memories. Why do some experiences in this deluge of sensory information become “memorable,” while the brain ignores most of it?

A computational model and behavioral study developed by Yale University scientists offer new evidence for this age-old question, the researchers report in the journal Nature Human Behaviour.

The Yale team found that the more difficult it was for the computational model to reconstruct the image, the more likely participants were to remember the image. Credit: Neuroscience News

“The mind prioritizes remembering things that it cannot explain well,” said İlker Yildirim, assistant professor of psychology in Yale’s Faculty of Arts and Sciences and senior author of the paper. “If a scene is expected rather than surprising, it may be ignored.”

For example, a person might be briefly confused by the presence of a fire hydrant in a remote natural setting, making the image difficult to interpret and therefore more memorable. “Our study explored the question of which visual information is memorable by integrating a computational model of scene complexity with a behavioral study,” Yildirim said.


For the study, led by Yildirim and John Lafferty, the John C. Malone Professor of Statistics and Data Science at Yale University, the researchers developed a computational model that addressed two steps in memory formation — compression and reconstruction of visual cues.
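To make the idea concrete, the compress-and-reconstruct step can be sketched in a few lines of code. The example below is a rough illustration only, not the researchers’ actual model: it learns a sparse dictionary over made-up feature embeddings (standing in for embeddings of natural images) and scores each image by the size of its reconstruction residual. All dimensions, parameters, and data here are illustrative assumptions.

```python
# Minimal illustrative sketch, not the authors' model: score images by how
# poorly a learned sparse code reconstructs their feature embeddings.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Stand-in for feature embeddings of natural images (hypothetical data).
embeddings = rng.standard_normal((500, 128))

# "Compression": learn a sparse dictionary over the embeddings.
learner = MiniBatchDictionaryLearning(
    n_components=32, alpha=1.0, transform_algorithm="lasso_lars", random_state=0
)
codes = learner.fit_transform(embeddings)

# "Reconstruction": decode from the sparse codes and measure the residual.
reconstructions = codes @ learner.components_
residuals = np.linalg.norm(embeddings - reconstructions, axis=1)

# Higher residual = harder to reconstruct = predicted to be more memorable.
print(residuals[:5])
```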

Based on this paradigm, they designed a series of experiments in which people were asked whether they remembered specific images from a series of natural images presented in rapid succession. The Yale team found that the more difficult it was for the computational model to reconstruct the image, the more likely participants were to remember the image.
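The behavioral link can be sketched in the same spirit. The snippet below uses simulated data, not the study’s experimental results, to show how one might test whether images with larger reconstruction residuals are remembered more often.

```python
# Minimal sketch with simulated behavioral data, not the study's dataset:
# relate each image's reconstruction residual to whether it was remembered.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(1)

residuals = rng.gamma(shape=2.0, scale=1.0, size=300)   # reconstruction error per image
p_remember = 1.0 / (1.0 + np.exp(-(residuals - residuals.mean())))
remembered = rng.random(300) < p_remember                # simulated yes/no recognition responses

r, p = pointbiserialr(remembered.astype(int), residuals)
print(f"point-biserial correlation r = {r:.2f} (p = {p:.3g})")
```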

“We used the AI model to try to shed light on how people perceive scenes, and this understanding could help develop more efficient AI memory systems in the future,” said Lafferty, who is also director of the Center for Neurocomputation and Machine Intelligence at Yale’s Wu Tsai Institute.

Former Yale graduate students Qi Lin (psychology) and Zifan Li (statistics and data science) are first authors of the paper.

About this visual memory research news

Author: Bill Hathaway
Source: Yale
Contact: Bill Hathaway – Yale
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Images with visual representations that are difficult to reconstruct leave stronger traces in memory” by İlker Yildirim et al. Nature Human Behaviour


Abstract

Images with visual representations that are difficult to reconstruct leave stronger traces in memory

Much of what we remember is not due to deliberate choice, but is simply a byproduct of perception.


This raises a fundamental question about the structure of the mind: How does perception interact with and influence memory?

Here, inspired by a classic proposal linking perceptual processing to memory strength, the level-of-processing theory, we introduce a sparse coding model for compressing feature embeddings of images, and show that reconstruction residuals from this model predict how well images are encoded into memory.

On an open memorability dataset of scene images, we show that reconstruction error explains not only memory accuracy but also response latency during retrieval, in the latter case accounting for all of the variance explained by robust vision-only models. We also confirm the predictions of this account through “model-based psychophysics.”

This work identifies reconstruction error as an important signal linking perception and memory, possibly through adaptive modulation of cognitive processing.
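As a rough, simulated illustration of the abstract’s claim about response latency, the sketch below (assumed data and predictors, not the paper’s analysis) compares the variance in retrieval latency explained by vision-only features alone versus with reconstruction error added as a predictor.

```python
# Simulated sketch of the latency analysis described in the abstract, not the
# paper's code: compare variance in retrieval latency explained by vision-only
# features alone versus with reconstruction error added as a predictor.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 400

vision_features = rng.standard_normal((n, 8))               # stand-in vision-only predictors
recon_error = rng.gamma(2.0, 1.0, size=n)                    # reconstruction residual per image
latency = 900.0 - 60.0 * recon_error + rng.normal(0.0, 50.0, size=n)  # simulated latency (ms)

X_vision = vision_features
X_full = np.column_stack([vision_features, recon_error])

r2_vision = LinearRegression().fit(X_vision, latency).score(X_vision, latency)
r2_full = LinearRegression().fit(X_full, latency).score(X_full, latency)

print(f"R^2, vision-only features:      {r2_vision:.3f}")
print(f"R^2, plus reconstruction error: {r2_full:.3f}")
```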
