Facebook, Google and Microsoft are tapping the power of a vintage computer gaming chip to raise your smartphone's IQ with artificially intelligent programs that recognize faces and voices, translate conversations on the fly and make searches faster and more accurate.
It’s part of a trend toward the use of an artificial intelligence technique called “deep learning” that is creating a sense that a new era of smart technology is only dawning.
“It has become difficult to tell the difference between state-of-the-art technology and magic, thanks to new sources of data, cheap memory storage, new algorithms and really powerful computers,” said Stephen J. Eglash, executive director of Stanford’s data science and artificial intelligence laboratories.
“We’re all carrying these supercomputers around in our pockets. They do amazing things, yet I think they have barely scratched the surface of what’s possible,” he said.
The artificial intelligence trend accelerated with the discovery that a computer gaming chip called a graphics processing unit, or GPU — developed by Santa Clara’s Nvidia 15 years ago and now made by several other companies, as well — could be used to make AI programs run faster on “neural networks” of computer chips that work a little bit like the human brain.
These programs are composed of layers that are assigned specific tasks. Then they’re exposed repeatedly to mountains of data and given simple instructions on how to make sense of it.
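The layered, repeated-exposure training described above can be sketched in a few lines. Everything here is illustrative and assumed for the example, not any company's actual system: a toy XOR task stands in for the "mountains of data," and two small layers stand in for the stacked layers of a real network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, a classic problem a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers, each with its own job: the first extracts features,
# the second turns them into a prediction. Sizes are arbitrary.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

losses = []
for epoch in range(2000):          # repeated exposure to the data
    h = sigmoid(X @ W1)            # hidden layer: learned features
    p = sigmoid(h @ W2)            # output layer: the prediction
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagation: nudge each layer's weights to shrink the error.
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dp
    W1 -= 0.5 * X.T @ dh

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The "simple instructions" the article mentions correspond to the last four lines of the loop: the network is never told what XOR is, only how to adjust itself when it is wrong.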
Facebook has trained neural networks to read stories, answer questions, play games and learn tasks that are not specifically assigned, by observing examples. It is currently using artificial intelligence to caption news feeds for the blind.
“We’re early in figuring out how this technology is going to make Facebook more powerful and simpler for our users, which is really the goal for the company,” said Serkan Piantino, director of engineering for Facebook’s artificial intelligence research.
In the past half-year, Google has introduced several new products that rely on neural networks and deep learning, which Jen-Hsun Huang, CEO of Nvidia, calls “the big bang” of modern artificial intelligence — in other words, the nucleus of an explosion in the use of AI.
There's the Google Photos app, which searches unlabeled images by their content ("find all photos with waterfalls"); a new email feature that suggests short replies based on a message's content; RankBrain, which uses machine intelligence to handle search queries built from terms it has never seen before; and faster, more accurate search overall. Google's Translate app now translates menus and street signs in foreign languages without requiring a wireless or cellular connection, thanks to a very small neural network contained in the app.
Microsoft has introduced a Skype translator that lets people talk to one another on Skype in different languages, translating the conversation for each person as they speak.
Most of these achievements wouldn’t be possible without the GPU’s ability to handle the many simultaneous calculations required by deep learning programs.
Until recently, for example, teaching a computer to recognize cats in photos was “incredibly computationally intense” and could take six months as the computer was exposed to millions of photos, said Ian Buck, vice president and general manager of Nvidia’s accelerated computing business.
“Somewhere along the line, a bunch of researchers discovered algorithms that matched very well with GPUs and can be accelerated many times over,” he said. “They reduced training times to almost a single day.”
At the same time, the accuracy of the networks has grown to a point where some are right 90 to 93 percent of the time, he said.
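The GPU's edge comes from running thousands of multiply-accumulate operations at once. A CPU stand-in for the idea, with shapes chosen purely for illustration: the same matrix product computed one element at a time, as a naive sequential program would, versus as a single call whose inner products are all independent and so can run simultaneously on many-core hardware.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 32))
B = rng.normal(size=(32, 16))

# Sequential: one multiply-accumulate at a time.
slow = np.zeros((64, 16))
for i in range(64):
    for j in range(16):
        for k in range(32):
            slow[i, j] += A[i, k] * B[k, j]

# Parallel-friendly: each of the 64 x 16 inner products is independent,
# which is exactly the structure a GPU exploits across its cores.
fast = A @ B

assert np.allclose(slow, fast)
```

Deep learning training is dominated by products like this, which is why the researchers' algorithms "matched very well with GPUs" and cut training from months to a day.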
Throw a mountain of data at a neural network running clever deep learning algorithms and, with very little help, it figures out what to do with it. The technique lets the computer seemingly perform miracles, like learning to play computer games on its own.
In 2015, Microsoft and a Chinese university said they used a deep learning program to achieve IQ test scores at the college postgraduate level. That was followed by an announcement from Chinese Web services company Baidu that its Deep Speech 2 program had learned English and Mandarin with a single algorithm.
Google's DeepMind unit grabbed headlines in January when its learning program AlphaGo swept the human European champion in a five-game match of the ancient board game Go.
In computer science, neural nets and deep learning are merging previously independent research areas.
“It’s a huge hammer, and it is replacing large fields of research,” Nvidia’s Buck said. “There used to be a whole research field in computer vision, and it’s all being replaced by AI. People who are 20-year veterans in those fields are learning AI.”
The problems are so complex that researchers are pooling their findings. Google open-sourced its TensorFlow software library, originally developed by its Brain Team for building artificial intelligence programs, explaining that it hopes to provide an open standard for exchanging research ideas and designing products.
Facebook recently open-sourced its Big Sur computer, which is specifically designed for artificial intelligence.
Big Sur is twice as fast as Facebook's previous neural network machine: it can be trained twice as fast and can explore networks twice as large.
“We’ve had a ton of inbound (requests) from people wanting to know how it works and wanting to use it,” Facebook’s Piantino said.
The next step in deep learning is visual questioning, said Pieter Abbeel, a UC Berkeley deep learning researcher who develops robots that learn. That involves having a computer deduce things from an image. For example, looking at a photo of a man driving a car with a child seat, it might deduce that the man probably has a child.
Abbeel recently posted a simulation of a robot learning to stand up through a trial-and-error sequence that has tapped into people’s fascination with artificial intelligence.
The robot had only bare-bones instructions to keep its head pointing “north” or up, and was penalized for how hard it hit the ground when it fell over, he said.
“Thing is failing, failing, failing until, all of a sudden, it invents how to do this,” he said. “It’s drawn the most attention of all the videos I’ve ever shown in 15 years.”
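The fail-until-it-works loop Abbeel describes can be sketched as a toy trial-and-error search. This is an illustrative random-search sketch under invented assumptions (the "robot" is just a parameter vector, the score is a made-up stand-in rewarding head height and penalizing impact), not the actual physics simulation or the reinforcement learning algorithm it used.

```python
import numpy as np

rng = np.random.default_rng(2)

def score(params):
    head_height = -np.sum((params - 1.0) ** 2)   # best near an upright pose
    impact_penalty = 0.1 * np.sum(params ** 2)   # rough "hit the ground" cost
    return head_height - impact_penalty

# Start from a random attempt and keep only trials that do better.
best = rng.normal(size=4)
best_score = score(best)
history = [best_score]

for trial in range(500):            # fail, fail, fail... then improve
    candidate = best + rng.normal(scale=0.3, size=4)
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s
    history.append(best_score)

print(f"score: {history[0]:.2f} -> {history[-1]:.2f}")
```

The bare-bones instruction and the fall penalty are the whole specification; the behavior that eventually emerges is never programmed in directly, which is what makes such demonstrations so striking.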