A project from Facebook's research division is attempting to rethink face recognition by imitating the way our own neurons deal with visual input—with promising results. "DeepFace" nearly matches human accuracy in telling whether two faces are the same or different.
Rather than comparing a given photo directly against others on surface features (matching colors, size, and shape), DeepFace analyzes the data at a much more abstract level, starting with hardly any built-in information about what faces look like or where the eyes ought to be.
In your brain, there are groups of neurons in the visual system that respond to, for example, vertical lines but not horizontal ones, or to curves but not straight edges. The DeepFace system simulates neural networks like this, doing millions of simple analyses in a fraction of a second. Where is the darkest point in the picture? Where is the longest unbroken line? How far is it from this local maximum to that one?
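To make the analogy concrete, here is a minimal sketch (not Facebook's actual code) of how a simple filter can respond to one edge orientation but not another, much like the orientation-selective neurons described above. The kernels and toy image are illustrative assumptions:

```python
# A toy demonstration of orientation-selective filtering: a "vertical"
# filter fires on a vertical edge while a "horizontal" filter stays silent.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution over nested lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            row.append(acc)
        out.append(row)
    return out

# Sobel-style kernels: one responds to vertical edges, one to horizontal.
VERTICAL = [[-1, 0, 1],
            [-2, 0, 2],
            [-1, 0, 1]]
HORIZONTAL = [[-1, -2, -1],
              [ 0,  0,  0],
              [ 1,  2,  1]]

# A toy 6x6 image: dark on the left, bright on the right (a vertical edge).
image = [[0, 0, 0, 1, 1, 1] for _ in range(6)]

def total_response(image, kernel):
    """Sum of absolute filter responses across the whole image."""
    return sum(abs(x) for row in convolve2d(image, kernel) for x in row)

print(total_response(image, VERTICAL))    # strong response: 32.0
print(total_response(image, HORIZONTAL))  # no response: 0.0
```

A deep network stacks millions of such simple, local measurements, with later layers combining the early edge-like responses into progressively more abstract features.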