
Everyone keeps talking about A.I.—here's what it really is and why it's so hot now

Key Points
  • The term A.I. dates to the 1950s, and there have been past boomlets in the field.
  • Improvements in computing power have brought about a revolution in A.I. in the past five years.
  • Alphabet, Amazon, Apple, Facebook and Microsoft are all investing heavily in A.I.
Ke Jie, left, takes on the A.I. Go player AlphaGo, represented at the board by Aja Huang, during the second of three games at the Future of Go Summit in China in May 2017.

Artificial intelligence is everywhere, from Apple's iPhone keyboard to Zillow's home price estimates. There's also a lot of stuff out there that marketers call A.I. but really isn't.

Perhaps things reached a new high point last month when AlphaGo, a virtual player of the ancient Chinese board game Go developed by Alphabet's DeepMind AI research group, trounced the top human player in the world, China's Ke Jie.

A moment of drama encapsulates the achievement: After Ke resigned in the second of the three games, the 19-year-old lingered in his chair, staring down at the board for several minutes, fidgeting with game pieces and scratching his head. Aja Huang, the DeepMind senior research scientist tasked with moving game pieces on behalf of AlphaGo, eventually got up from his chair and walked offstage, leaving Ke alone for a moment.

Still, it's generally true that a human being like Ke has more brainpower than a computer. A person can perform a wide range of tasks well, while an A.I.-enhanced program like AlphaGo can edge out people at only a narrow set of them.

But the prospect of A.I. becoming smarter than people at most tasks is the single biggest driver of debates about its effects on employment, creativity and even human existence.

Here's an overview of what A.I. really is, and what the biggest companies are doing with it.

So what is AI, really?

Given that everybody's talking about A.I. now, you would think it's new. But the underlying techniques are not. The field got its start in the mid-twentieth century, and one of its most popular methods came about in the 1980s.

A.I. first took hold in the 1950s. While some of the field's underlying concepts are older, the term itself dates to 1956, when John McCarthy, a math professor at Dartmouth College, proposed a summer research project based on the idea that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

In the next few years A.I. research labs popped up at the Massachusetts Institute of Technology (MIT) and Stanford University. Research touched on computer chess, robotics and natural-language communication.

The first definition of AI: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
John McCarthy
Professor, Dartmouth College -- 1956

Interest in the field fluctuated over time. A.I. winters came in the 1970s and 1980s as public interest waned and outside funding dried up. Startups boasting promising capabilities and venture capital backing in the mid-1980s abruptly disappeared, as John Markoff detailed in his 2015 book "Machines of Loving Grace."

There are several other terms you often hear in connection with A.I.

Machine learning generally entails teaching a machine how to do a particular thing, like recognizing a number, by feeding it a bunch of data and then directing it to make predictions on new data.

The big deal about machine learning now is that it's getting easier to build software that learns over time and gets smarter as it accumulates more data. But machine learning often requires people to hand-engineer the features the machine looks for, which can be complex and time-consuming.
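
To make that concrete, here is a minimal sketch of the train-then-predict workflow described above, written with the open-source scikit-learn library and its bundled handwritten-digit images. The library and data set are illustrative choices, not tools any company mentioned here is confirmed to use.

```python
# A minimal machine-learning sketch: show a model labeled examples of
# handwritten digits, then ask it to predict labels for digits it has
# never seen. Uses scikit-learn's bundled 8x8 digit images.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 tiny images, each flattened to 64 pixels

# The "features" here are raw pixel intensities; older systems often
# relied on hand-engineered features such as stroke or edge detectors.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "feeding it a bunch of data"

# "directing it to make predictions on new data"
print("accuracy on unseen digits:", model.score(X_test, y_test))
print("prediction for one new image:", model.predict(X_test[:1]))
```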

Deep learning is one type of machine learning that demands less hand-engineering of features. The approach often involves artificial neural networks, mathematical systems loosely inspired by the way neurons work together in the human brain. Neuroscientist Warren McCulloch and mathematician Walter Pitts came up with the first such system in 1943. Over the years, researchers advanced the concept with various techniques, including stacking multiple layers. Each successive layer can detect higher-level features in the original data, improving the final prediction, and the layers pick out those features themselves rather than relying on hand-engineering. But using more layers demands more computing power.
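
For a rough feel of what "layers" mean, here is a toy sketch in plain NumPy. The weights are random stand-ins, since a real network would learn them from data, and the layer sizes are arbitrary assumptions.

```python
# A toy illustration of neural-network layers in plain NumPy. Each layer
# multiplies its input by a weight matrix and applies a nonlinearity;
# stacking layers lets later ones respond to higher-level patterns.
# The weights below are random stand-ins, not learned values.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    W = rng.normal(size=(x.shape[0], n_out))  # untrained, random weights
    return np.maximum(0.0, x @ W)             # ReLU nonlinearity

x = rng.normal(size=64)                   # e.g., a flattened 8x8 image
h1 = layer(x, 32)                         # first layer: low-level features
h2 = layer(h1, 16)                        # second layer: combinations
scores = h2 @ rng.normal(size=(16, 10))   # scores for 10 possible digits
print("predicted class:", int(np.argmax(scores)))
```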

Why is it suddenly so hot?

Through the years, hardware has gotten more powerful, and chipmakers including Nvidia have refined their products to better suit A.I. computations. Larger data sets in many domains have become available to train models more extensively as well.

In 2012, Google made headlines when it trained a neural network with 16,000 central processing unit (CPU) cores on 10 million images from YouTube videos and taught it to recognize cats. But later that year, the world of image recognition was rocked when an eight-layer neural network trained on two graphics processing units (GPUs) outdid all others in a competition to accurately classify images based on their content. Months later, Google acquired DNNresearch, the University of Toronto team behind the breakthrough.

Since then, A.I. activity has only accelerated, with the world's top technology companies -- now also the world's most valuable -- leading the way and continuing to publish research on their latest gains, which only adds to the fascination.

Who's leading the field?

Google and its parent company Alphabet have made several A.I. acquisitions, the most significant being the reported $500 million purchase of DeepMind in 2014. While DeepMind's AlphaGo project has brought Alphabet lots of attention, DeepMind's A.I. software has also delivered real business value, cutting the cost of cooling Google's data centers by some 40 percent.

Demis Hassabis, co-founder of Google's artificial intelligence (AI) startup DeepMind.
Jeon Heon-Kyun | Getty Images

Meanwhile, Google has enhanced its core search engine, Gmail, Google Street View, Google Photos, Google Translate, YouTube and other applications using A.I.

In recent years several open-source frameworks for deep learning have emerged, but Google's TensorFlow is thought to be the most popular. Googlers have even developed a tensor processing unit (TPU) to accelerate neural network training and predictions beyond what's currently possible with commercially available silicon. And Alphabet's Waymo is at the forefront of autonomous vehicle research.
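
For a flavor of what TensorFlow code looks like, here is a minimal sketch using the framework's high-level Keras interface. The layer sizes and the random placeholder data are made-up assumptions, just enough to show the shape of a training run.

```python
# A minimal TensorFlow sketch using the high-level Keras API: define a
# small stack of layers, then train it. The data here is random noise,
# used only to demonstrate the training call.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(256, 64).astype("float32")   # fake "images"
y = np.random.randint(0, 10, size=256)          # fake labels
model.fit(X, y, epochs=2, batch_size=32)
```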

Alphabet research scientists regularly publish academic papers on their latest achievements, which is somewhat rare in such a highly competitive industry that values secrecy. In fact, A.I. is so important to the company that Google CEO Sundar Pichai has begun describing Google as an AI-first company.

Amazon has long used A.I. to recommend products in its e-commerce business, and it employs robots to move products around fulfillment centers.

But in the past few years the company has brought in some revenue by selling Amazon Echo speakers through which people can talk to Amazon's Alexa virtual assistant. While Alexa's speech recognition isn't perfect, it does quickly respond to user input, and it plugs into an increasing number of third-party services and devices.

Building on the public's fascination with Alexa, Amazon last year introduced AI services for recognizing objects in images and understanding voice and text input. Amazon has also opened a convenience store that uses A.I. to identify the products that customers grab off the shelves.

Apple has looked to A.I. to recognize handwriting, lengthen device battery life and even detect the selectable text in PDF files. Siri, Apple's virtual assistant on iPhones and other Apple hardware, now uses deep learning, and the company recently announced the HomePod speaker, which has Siri built in.

The new Apple HomePod smart speaker on display during Apple's Worldwide Developers Conference in San Jose, California, on June 5, 2017.
Josh Edelson | AFP | Getty Images

Apple has also sought to improve image recognition in its Photos app and emoji prediction in the QuickType keyboard on iOS.

Most recently, Apple introduced Core ML, a software library for running machine learning workloads, including neural networks, on Apple devices. Apple is also reported to be developing an AI chip that could be tucked inside its mobile devices.
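
To give a sense of how Core ML fits into a developer's workflow, here is a hedged sketch using Apple's coremltools Python package, which converts models trained elsewhere into the .mlmodel format Core ML runs on-device. The scikit-learn classifier and the file name are illustrative assumptions.

```python
# A sketch of preparing a model for Core ML: train a classifier with
# scikit-learn, convert it with Apple's coremltools package, and save
# the resulting .mlmodel file for use inside an iOS or macOS app.
# The model choice and file name are illustrative assumptions.
import coremltools
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
clf = LogisticRegression(max_iter=1000).fit(digits.data, digits.target)

# Convert the trained scikit-learn classifier to Core ML's format.
mlmodel = coremltools.converters.sklearn.convert(clf)
mlmodel.save("DigitClassifier.mlmodel")  # add to an Xcode project
```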

On the whole, Apple has sought to differentiate its A.I. efforts by emphasizing privacy. The company has not published much of its A.I. research, although it did recently hire a prominent researcher, Russ Salakhutdinov, as its director of A.I. research. It has also accumulated talent through acquisitions of startups like Perceptio.

Facebook set up an A.I. research group in 2013 with the hire of Yann LeCun, who is known for popularizing a technique called convolutional neural networks, an approach he was demonstrating on video as early as 1993.

The group often publishes research papers and has released the Caffe2 and PyTorch open-source A.I. frameworks. It has even come up with special server hardware that's optimized for deep learning with GPUs.

Facebook has also looked to A.I. to better rank posts in the News Feed, translate users' posts into different languages and even generate text descriptions of users' images. Most recently, Facebook said it would try to curtail terrorism-related content with the help of A.I.

Last year Facebook CEO Mark Zuckerberg cited A.I. -- along with virtual reality, augmented reality and connectivity -- in the company's 10-year roadmap, pointing to vision, language, reasoning and planning as areas of exploration.

Microsoft has employed A.I. researchers for many years, although in September A.I. appeared to become a higher priority with the formation of the Microsoft AI and Research Group.

Microsoft has incorporated A.I. into Cortana, Word, PowerPoint, Skype and SQL Server. Earlier this year it introduced Story Remix, a video editing app that can keep additions like stickers tied to specific objects as they move through a video, thanks to A.I. Microsoft has also introduced services for speech recognition, computer vision, emotion detection and video understanding that developers can use in their own apps. And Microsoft fields the Cognitive Toolkit open-source A.I. framework.

So where do we go from here?

None of this gets at what's possible in the future.

First, more of human labor could be automated. Drivers at app-enabled cab companies like Uber and Lyft, for example, could find themselves without work as self-driving cars using A.I. become acceptable for everyday travel. Machine translation systems could render human translators unnecessary except for specialty jobs. Banks would not need to employ tellers once ATMs can create new accounts and give out loans. Fewer journalists would be needed to write the news. The thought of those developments has led to discussion of alternative economic models like universal basic income, which Zuckerberg recently talked up.

Beyond that, perhaps in a few decades, an A.I. system with superhuman capabilities in most domains -- sometimes referred to as artificial general intelligence, or AGI -- could emerge. Depending on whom you ask, that could be either very good or very bad. In an extreme case, an AGI system could make humans extinct. But if things go well, perhaps AGI will supercharge humans and help them live longer. The prospect of either scenario is perhaps what draws so much attention to A.I. development today, and what has inspired so much science fiction in the past.

But for now, what people generally see is narrow A.I. -- intelligence applied to a small number of domains -- and it doesn't always work the way it should. Look at Alexa, Cortana, the Google Assistant or Siri -- they misunderstand spoken words all the time.

The thing is, the biggest companies in the world are now investing in A.I. like never before. And that trend is not about to let up.