- Researchers at Johns Hopkins University found it takes 80 milliseconds after seeing an object to begin processing value.
- Information goes first to the sensory cortex then flows to the frontal cortex, the place where decisions are thought to be made, then on to the motor cortex, where people produce actions.
- The findings could have implications for artificial intelligence.
There's a reason it takes hardly any time to scan clothes on a rack and decide whether to grab an item or keep looking.
When humans become experts on something, whether clothes, houses, or cars, their brains develop mechanisms to rapidly process visual value information, researchers at Johns Hopkins University have found. That processing begins incredibly quickly, just 80 milliseconds, less than a tenth of a second, after seeing something.
"Speed counts. Whether identifying fruits when foraging, assessing mates or trying to avoid predators, you need to understand the world as quickly as possible," said Ed Connor, senior author of the study and director of the Zanvyl Krieger Mind/Brain Institute at Johns Hopkins.
Previously, scientists attributed value processing to the prefrontal cortex of the brain, Connor said. The researchers found instead that information goes first to the sensory cortex, then flows to the frontal cortex, where decisions are thought to be made, and then on to the motor cortex, where people produce actions.
The findings, published today in the journal Current Biology, tell scientists more about the human brain, Connor said, but they could also find their way into the growing field of artificial intelligence. Computer vision systems have so far largely focused on object recognition, he said. Human vision, however, delivers far more than just identifying objects.
"When we look at an object, not only can we name the category it's in, we understand its precise 3D structure," Connor said. "We know about its materials. We can guess about its construction and history. We understand where it is physically. We know if it's a chair we can sit in it and how far we can lean back."
Connor predicts that one day developers will train computer vision systems to process information like this, but computational vision has so far focused largely on objects. His team is working with another Johns Hopkins professor, Alan Yuille, to study the relationship between neuroscience, vision, and so-called deep convolutional networks, with the aim of improving computer neural networks.
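The deep convolutional networks mentioned above are built by stacking a simple operation: sliding a small filter over an image and summing elementwise products at each position. As a minimal illustrative sketch (not code from the study, and ignoring the many layers, nonlinearities, and learned weights of a real network), that core operation can be written in plain Python:

```python
# Illustrative sketch only: the basic building block of a convolutional
# network is a small filter (kernel) swept across an image.

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most
    deep learning libraries) of a 2D list `image` with a 2D `kernel`."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Sum of elementwise products of the kernel and the image patch.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge filter responds where the image goes from dark to bright:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # strongest response at the dark-to-bright edge
```

A trained network learns thousands of such filters and stacks them in layers, which is what lets it recognize objects, the capability Connor notes is still far short of human visual understanding.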
If anything, this study reflects the sheer power of visual learning, Connor said.
"The amount of information we pull out about the world, and fast, it's very hard to fool us visually, is astounding stuff people haven't even tried with neural networks," he said. "One of the reasons we can be so incredibly good at pulling out so much detail and understand things with vision without even trying is we spend years and years and years learning to do that through life."