Findings suggest dogs are more sensitive to actions than to who or what is performing them
Scientists have decoded visual images from a dog’s brain, offering a first look at how the canine mind reconstructs what it sees. The Journal of Visualized Experiments published the research, which was done at Emory University.
The results suggest that dogs are more sensitive to actions in their environment than to who or what is performing those actions.
The researchers recorded fMRI neural data for two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine learning algorithm to analyze patterns in the neural data.
“We’ve shown that we can monitor activity in a dog’s brain while it’s watching video and, at least to some extent, reconstruct what it’s watching,” says Gregory Berns, professor of psychology at Emory and corresponding author of the article. “The fact that we are able to do this is remarkable.”
The project drew on recent advances in machine learning and fMRI that have decoded visual stimuli from the human brain, providing new insights into the nature of perception. Beyond humans, the technique has been applied to only a handful of other species, including some primates.
“Although our work is based on just two dogs, it offers proof of concept that these methods work on dogs,” says Erin Phillips, the paper’s first author, who carried out the work as a research specialist in Berns’s Canine Cognitive Neuroscience Lab. “I hope this paper helps pave the way for other researchers to apply these methods to dogs, as well as to other species, so that we can get more data and a better understanding of how the minds of different animals work.”
Phillips, originally from Scotland, came to Emory as a Bobby Jones Scholar, an exchange program between Emory and the University of St Andrews. She is currently a graduate student in ecology and evolutionary biology at Princeton University.
Berns and his colleagues pioneered training techniques to get dogs to walk into an fMRI scanner and remain completely still and unrestrained while their neural activity is measured. Ten years ago, his team published the first fMRI brain images of a fully awake, unrestrained dog. This opened the door to what Berns calls The Dog Project – a series of experiments exploring the minds of the oldest domesticated species.
Over the years, his lab has published research on how the canine brain processes vision, words, smells, and rewards such as receiving praise or food.
Meanwhile, the technology behind machine-learning algorithms kept improving. That technology has allowed scientists to decode certain patterns of human brain activity. It “reads minds” by detecting, within patterns of brain data, the different objects or actions an individual sees while watching a video.
“I started thinking, ‘Can we apply similar techniques to dogs?’” Berns recalls.
The first challenge was to come up with video content that a dog might find interesting enough to watch for an extended period of time. Emory’s research team attached a video recorder to a gimbal and selfie stick, allowing them to shoot steady footage from a dog’s perspective, at about waist height to a human or a little lower.
They used the device to create a half-hour video of scenes relevant to most dogs’ lives. Activities included dogs being petted by people and receiving treats from people. Scenes with dogs also showed them sniffing, playing, eating, or walking on a leash. Scenes of activity showed cars, bicycles, or a scooter passing on a road; a cat walking through a house; a deer crossing a path; people sitting; people hugging or kissing; people offering a rubber bone or a ball to the camera; and people eating.
The video data was segmented by timestamps into various classifiers, including object-based classifiers (such as dog, car, human, cat) and action-based classifiers (such as sniffing, playing, or eating).
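The segmentation step described above can be sketched in code. The snippet below is a conceptual illustration only; the segment boundaries and label names are invented for the example and do not come from the paper's actual annotation files.

```python
# Hypothetical sketch of timestamp-based video segmentation into
# object-based and action-based classifiers. All values are made up.

segments = [
    {"start": 0.0, "end": 4.5,  "objects": ["dog", "human"], "actions": ["petting"]},
    {"start": 4.5, "end": 9.0,  "objects": ["dog"],          "actions": ["sniffing"]},
    {"start": 9.0, "end": 15.0, "objects": ["car"],          "actions": ["driving"]},
]

def labels_at(t, segments, key):
    """Return the labels active at time t (seconds) for one classifier type."""
    for seg in segments:
        if seg["start"] <= t < seg["end"]:
            return seg[key]
    return []

print(labels_at(5.0, segments, "actions"))   # ['sniffing']
print(labels_at(10.0, segments, "objects"))  # ['car']
```

Because each fMRI volume also carries a timestamp, a lookup like this is what lets brain data be paired with whatever the dog was seeing at that moment.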
Only two of the dogs that had been trained for fMRI experiments had the focus and temperament to lie perfectly still and watch the 30-minute video without interruption, across three sessions for a total of 90 minutes. These two “super star” dogs were Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.
“They didn’t even need treats,” says Phillips, who monitored the animals during the fMRI sessions and watched their eyes track the video. “It was fun because it’s serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans acting a bit silly.”
Two humans also underwent the same experiment, watching the same 30-minute video in three separate sessions while lying in an fMRI scanner.
The brain data could then be mapped onto the video classifiers using the timestamps.
A machine-learning algorithm, a neural network known as Ivis, was applied to the data. A neural network is a type of machine learning in which a computer learns by analyzing training examples. In this case, the neural network was trained to classify the content of the brain data.
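The study's actual pipeline used the Ivis neural network on real fMRI patterns; the toy stand-in below only illustrates the general train-then-predict idea behind such a decoder. It uses a minimal nearest-centroid classifier on fabricated three-voxel "activity patterns" and should not be read as the paper's method.

```python
# Toy stand-in for the decoding step: learn an average "activity pattern"
# per action label, then assign new patterns to the nearest average.
# The data here is fabricated for illustration.

def train_centroids(patterns, labels):
    """Average the training patterns for each label."""
    groups = {}
    for p, lab in zip(patterns, labels):
        groups.setdefault(lab, []).append(p)
    return {lab: [sum(col) / len(ps) for col in zip(*ps)]
            for lab, ps in groups.items()}

def predict(pattern, centroids):
    """Classify a pattern by its nearest centroid (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(pattern, centroids[lab]))

# Fabricated fMRI "patterns" for two action labels.
train = [([1.0, 0.1, 0.0], "sniffing"), ([0.9, 0.2, 0.1], "sniffing"),
         ([0.1, 0.9, 1.0], "eating"),   ([0.0, 1.0, 0.8], "eating")]
centroids = train_centroids([p for p, _ in train], [l for _, l in train])

print(predict([0.95, 0.15, 0.05], centroids))  # sniffing
```

A real decoder like Ivis learns far richer nonlinear structure than per-label averages, but the evaluation logic is the same: train on labeled brain data, then test whether held-out patterns are assigned the correct video label.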
The results for the two human subjects revealed that the model developed using the neural network showed 99% accuracy in mapping brain data to object-based and action-based classifiers.
When decoding the dogs’ video content, the model did not work for the object-based classifiers. It was, however, 75% to 88% accurate at decoding the action-based classifications from the dogs’ brain data.
The findings suggest major differences in brain function between humans and dogs.
“We humans are very object-oriented,” Berns says. “There are 10 times more nouns than verbs in English because we have a particular obsession with naming objects. Dogs seem less concerned with who or what they see and more concerned with the action itself.”
Dogs and humans also have major differences in their visual systems, Berns notes. Dogs see only in shades of blue and yellow, but have a slightly higher density of visual receptors designed to detect motion.
“It makes perfect sense that dogs’ brains are first and foremost very sensitive to actions,” he says. “Animals need to be very concerned about what is happening in their environment to avoid being eaten or to watch out for animals they might want to hunt. Action and movement are paramount.”
For Phillips, understanding how different animals perceive the world is important to her current field research on how the reintroduction of predators in Mozambique may impact ecosystems. “Historically, there hasn’t been much overlap between computer science and ecology,” she says. “But machine learning is a growing field that is beginning to find broader applications, including in ecology.”
Other authors of the paper include Daniel Dilks, associate professor of psychology at Emory, and Kirsten Gillette, who worked on the project as an undergraduate neuroscience and behavioral biology major at Emory. Gillette has since graduated and is now in a post-baccalaureate program at the University of North Carolina.
Daisy is owned by Rebecca Beasley and Bhubo is owned by Ashwin Sakhardande. The study’s human experiments were supported by a grant from the National Eye Institute.