At CES: Robots that can recognize if you're sad

Social robots like those shown off at the 2016 Consumer Electronics Show in Las Vegas may soon be interpreting many more smiles, frowns and tilts of the head, if recent forecasts come to pass.

Consumers will increasingly welcome so-called companion robots or social robots — like SoftBank's Pepper and NAO — into their homes, according to Juniper Research. And those robots will use facial recognition and other technologies to gauge users' emotions and respond accordingly.

One in 10 households will have a home robot by 2020, up from one in 25 last year, according to Juniper, although that figure includes simpler machines such as the Roomba robotic vacuum.

But the wider acceptance of robots in homes depends on people getting more comfortable with being watched, and read, by a robot.

Generally, facial recognition technology starts with identifying where a face is, then what its features are, explains Jonathon Phillips, face recognition program manager at the National Institute of Standards and Technology.

"[Robots] learn mathematically what distinguishes different faces," said Phillips, whose agency works with tech companies to develop technologies used by businesses and by the government for national security purposes.

Once a robot identifies a face, it makes additional calculations to measure expressions, Phillips said. Changes in features, such as whether the corners of the mouth move up or down, are sensed and interpreted. A variety of programs recognize faces and interpret expressions, many of them built on proprietary, closely guarded algorithms, he said.
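The pipeline Phillips describes, detect a face, locate its features, then track how they move, can be illustrated with a toy sketch. This is purely a hypothetical example, assuming mouth-corner landmarks have already been extracted; the coordinates and threshold are invented, not drawn from any product mentioned here.

```python
# Toy expression classifier: interpret whether mouth corners move up or down
# relative to the mouth's center, as in the pipeline described above.
# All values are hypothetical image-space pixel coordinates (smaller y = higher).

def classify_mouth(corners_y, center_y, threshold=2.0):
    """Classify an expression from mouth geometry.

    corners_y: vertical positions of the left/right mouth corners
    center_y:  vertical position of the mouth's center
    """
    avg_corner = sum(corners_y) / len(corners_y)
    lift = center_y - avg_corner  # positive: corners sit above the center
    if lift > threshold:
        return "smile"
    if lift < -threshold:
        return "frown"
    return "neutral"

print(classify_mouth(corners_y=[118, 117], center_y=124))  # corners raised -> smile
print(classify_mouth(corners_y=[131, 132], center_y=126))  # corners lowered -> frown
```

Real systems replace this single measurement with many tracked feature points and learned models, but the principle, geometric change mapped to an expression label, is the same.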

The more often a robot sees its user's face, the more accurate its emotion reading should become, Phillips said.
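One way to see why repeated exposure helps, a hypothetical sketch rather than any vendor's actual method, is that a robot can keep a running baseline of what a particular user's neutral face looks like and measure expressions against that baseline instead of a population-wide default.

```python
# Hypothetical per-user calibration: maintain an incremental (running) mean of a
# "neutral" feature value, then score new readings relative to that baseline.

class UserBaseline:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, value):
        # incremental mean: each new observation refines the user's neutral value
        self.n += 1
        self.mean += (value - self.mean) / self.n

    def deviation(self, value):
        # how far a new reading sits from this user's own neutral
        return value - self.mean

b = UserBaseline()
for reading in [5.0, 5.2, 4.8, 5.0]:  # toy neutral "mouth lift" observations
    b.update(reading)
print(round(b.mean, 2))      # the learned neutral baseline
print(b.deviation(8.0) > 2)  # a clearly raised mouth for this user
```

The more neutral observations the baseline absorbs, the less a single noisy frame skews it, which matches the intuition that accuracy improves with familiarity.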


"Even if [Pepper] has never seen you happy, he is able to recognize that you are happy because he has seen plenty of other people happy," said Rodolphe Gelin, chief of innovation at Aldebaran, the company that developed Pepper for SoftBank.

And Pepper could be getting smarter. IBM announced at the 2016 Consumer Electronics Show in Las Vegas that the robot will incorporate IBM's Watson cognitive learning system, which will help it pick out a speaker in a crowd and interpret expressions, gestures and body movement as well as voice, allowing it to interact with people and respond in a conversational manner.

Those are all new capabilities for Watson that have been developed and are being introduced for the first time with the help of Pepper, said Rob High, IBM's chief technology officer for Watson.

Pepper can't yet interpret a user rolling his or her eyes, for example, but that is the kind of feature IBM will be working on, High said.

"Watson's core capability is to be able to understand intention … we don't necessarily always use words," High said, adding that Pepper could be useful both in the home and at work.

"We think there is a huge variety of applications ... almost any point in which humans are interacting with computers," said High in a phone interview.

A number of other social and home robots that incorporate facial recognition were on display at CES.

Among them were the kid-friendly Tyche robot made by AiBrain; Buddy, the forthcoming family helper from Blue Frog Robotics; and another home robot, Jibo.

While social robots are still on their way into more U.S. homes and businesses, facial recognition is already being widely used for other purposes.

Facebook uses facial recognition to identify people in photos and suggest that users "tag" those individuals as being in those photos.

And Windows Hello, a feature included in Microsoft's Windows 10, uses facial recognition like a password prompt for users to gain access to devices running on that operating system.

Similar to Watson, though on a smaller scale, Windows Hello requires special hardware to work, such as a dedicated fingerprint reader or infrared and other sensors, according to Microsoft. Thanks to those sensors, holding up a picture of a user won't pass the facial recognition test, a Microsoft spokeswoman said in an email.