Exploring Facebook’s massive, picture-painting AI brain

Inside a 350,000-square-foot building in the hills of Prineville, OR, slotted into a nondescript server rack, sits one of Facebook’s most valuable artificial intelligence tools. It’s called Big Sur, and it’s a hardware system for training software to improve itself over time. It uses enormous amounts of data, funneled in from all over the world, and taps into the building’s extraordinary computing power to compress a process that once took months into a matter of hours. With Big Sur, Facebook is able to train the AI systems that power its board-game-playing programs and help its software “read” photos and describe their contents back to people.

Big Sur systems can be found inside the second of Facebook’s Prineville data centers, on a site where ground was first broken only six years ago. Prineville, a tiny Central Oregon city of just over 9,000 residents, is home to the first of the social networking company’s US server farms, built to accommodate the meteoric rise in Facebook users and the site’s growing computational needs. Today the campus encompasses more than 1 million square feet, with nearly half a dozen monolithic gray buildings stretching in all directions off Route 126, at 735 Connect Way.

The operation looks more like a government facility than a data center, and security guards cover every way in and out. To those outside the tech industry, it’s easy to imagine Big Sur as the equivalent of a cache of classified documents, stored deep inside a locked-down, multi-level complex.

Inside the second of two operational data centers at Facebook Prineville; a third is under construction.

The real surprise: none of it is really kept under wraps. In fact, Facebook announced last year that Big Sur would be an open-source project before it had even placed the system in its Prineville data center and a handful of other locations around the country. The company has since submitted Big Sur’s designs to the Open Compute Project, a data center community Facebook started in 2011 to make hardware more energy-efficient and to share what the company and its competitors learn from the ever-growing number of server farms around the country.

You could even build a rudimentary version of Big Sur yourself, using eight off-the-shelf — albeit very expensive — Nvidia GPUs and reference designs from manufacturer Quanta, just like Facebook does. But without rigging thousands of those GPU-based systems together, as the company has done in Prineville, you can’t achieve the kind of AI training capabilities it was designed for. 

Building a true Big Sur installation requires the kinds of resources that only a large company, such as Google or Microsoft, would be willing to invest. (Both of those companies are part of the Open Compute Project and can build a version of Big Sur if they so choose.)

"We’re not in the business of having secret things," says Kevin Lee, a technical program manager at Facebook who oversees Big Sur and other server designs at Prineville. "Our goal is to understand the world, to push AI." Of course, Google has its own open-source AI-training software, TensorFlow, so Facebook has a competitive reason to continue sharing its secrets as well.

Lee says AI is one of the three core pillars of Facebook’s future. When outlining the company’s 10-year road map at the F8 developer conference in April, CEO Mark Zuckerberg explained that Facebook.com was the company’s first step and that its many mobile apps were the second. Ten years from now, Zuckerberg wants Facebook to be taking the lead on internet connectivity and drones, augmented and virtual reality, and AI.

AI is helping Facebook software see and understand the world, decipher human language, reason on its own, and plan its own courses of action. Some of it is already operational. Facebook’s new multilingual composer lets you compose text in one language and have it automatically translated into others, for example. Another new feature uses Facebook’s AI to analyze photos and describe them to blind and visually impaired users. And every time you upload a photo, a Big Sur-trained image recognition algorithm recognizes the faces and suggests which people to tag.

Central to every one of these features is machine learning, an AI training technique that’s nearly as old as the field of AI itself. But thanks to the massive data sets now available and recent leaps in computing power, machine learning has become an increasingly effective way to improve this type of software over time. Facebook, like many of its competitors, uses machine learning to train neural networks, which are algorithms inspired by the human brain that draw patterns and pluck probabilistic findings out of complex data sets.
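
To make the idea concrete, here is a minimal, illustrative sketch of a neural network training loop in Python, using the open-source PyTorch library. Everything in it (the random stand-in images, the made-up labels, the tiny model) is invented for the example; it isn't Facebook's production code, just the general shape of the technique described above.

import torch
from torch import nn

# Stand-in data set: 256 random "images" (32x32 pixels, 3 color channels),
# each assigned one of 10 made-up class labels. A real system would load
# millions of labeled photos instead.
images = torch.randn(256, 3 * 32 * 32)
labels = torch.randint(0, 10, (256,))

# A tiny neural network: layers of weighted connections, loosely inspired
# by neurons, that map an image to a score for each of the 10 classes.
model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# GPUs like the ones packed into Big Sur accelerate exactly this kind of
# arithmetic; fall back to the CPU if none is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
images, labels = images.to(device), labels.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# "Training" is this loop: compare the network's guesses against the
# labels, measure the error, and nudge every weight to reduce it.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")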

"The first time we trained a single neural net, it took three months," says Ian Buck, Nvidia’s VP of accelerated computing, who works closely with Facebook’s AI and data center teams. After optimizing the training hardware with newer Nvidia GPUs, the time was cut down to one month. With Big Sur using the latest Nvidia hardware, he adds, it’s now less than a single day to train a neural net to perform a task that once required a human being.

Nvidia's Ian Buck standing next to a Big Sur-trained neural network creating art based on more than 12,000 paintings.

Deep in a lower level of Facebook’s Prineville data center, Buck shows this off in real time. A Facebook AI, trained on Big Sur, consumes countless paintings that look like the work of French impressionists and begins to paint on its own: not with a virtual easel and brush, but by generating image files of what it thinks paintings are, based on its examples. Buck says the team fed it about 12,000 pieces of art, and within 30 minutes it began outputting original works. There are even granular ways to steer it, he adds, by telling the AI to focus more on the paintings that have, say, fewer clouds and less on those with overcast skies.
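
The article doesn't say which technique powers this demo, but a common approach to image generation of this kind is a generative adversarial network (GAN), in which one network learns to produce images while a second learns to judge whether they look real. The sketch below is a toy Python/PyTorch illustration under that assumption, with random tensors standing in for the 12,000 paintings.

import torch
from torch import nn

# Stand-in "paintings": 512 random 28x28 grayscale images. A real run
# would load the roughly 12,000 artworks mentioned above instead.
paintings = torch.randn(512, 28 * 28)

# Generator: turns random noise into an image-shaped output.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                  nn.Linear(256, 28 * 28), nn.Tanh())

# Discriminator: scores how "real" an image looks.
D = nn.Sequential(nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    # Teach the discriminator to tell real paintings from generated fakes.
    reals = paintings[torch.randint(0, len(paintings), (64,))]
    fakes = G(torch.randn(64, 64)).detach()
    d_loss = (loss_fn(D(reals), torch.ones(64, 1)) +
              loss_fn(D(fakes), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Teach the generator to fool the discriminator.
    g_loss = loss_fn(D(G(torch.randn(64, 64))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# An "original work" is then just the generator's output for fresh noise.
new_image = G(torch.randn(1, 64)).reshape(28, 28)

Steering the output the way Buck describes would then amount, roughly, to reweighting which training images the networks see most often.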

For now, this is nothing but a tech demo; Google’s DeepDream neural net similarly uses computer vision to construct surreal images. But Facebook’s proof of concept suggests that its plans for AI go far beyond photo tagging and translation — and the company is just beginning to explore the possibilities.

Down the line, Facebook hopes to improve Big Sur with ever-more-powerful parts. Lee says the system is modular, so it can support newer GPUs and different server and rack designs. As it stands today, Facebook data scientists and AI researchers can log in to servers in Prineville and use Big Sur to train algorithms offline before they’re put into live use. Those algorithms sometimes train for weeks or even months, Lee says.


But the company isn’t shying away from giving third-party researchers access to those resources, either. Through the Open Compute Project, companies and individuals can join the community and both contribute to and use the open-source hardware and software.

"Keeping the hardware secret is not one of the things we want to do," Lee says. You can tell he means it, too, as a group of visiting reporters gathers around a Big Sur system he’s slid out of a server rack. Photos snap away at aggressive frequency, as Lee takes out components and explains their functions. The voracious appetite for its inner workings probably seems silly to Lee — there’s a 95-page PDF online telling you exactly what Big Sur looks like, how it works, and how to build it yourself.


Source: The Verge
