Facebook gets ready for New Year’s Eve

As 2011 gradually checks out this New Year’s Eve in time zones from Sydney to Mumbai, to London, New York and on to the Bay Area, people will pull out their phones, shoot a photo and — probably more than 1 billion times — share it on Facebook.

The holiday season is a dead zone for things like freeway traffic, stock exchange trades and real estate transactions, but for social networks like Facebook and Twitter, holidays like New Year’s Eve and even Halloween unleash a tsunami of traffic, as people use their smartphones to share photos and post holiday greetings to their friends.

Facebook received more than a billion photo uploads for the first time on Halloween night 2010 and the following day, when it accepted 1.2 billion uploads. The Menlo Park social network, which has at least 200 million more members than it had last Halloween, could top that total on New Year’s Eve 2011, said Jay Parikh, Facebook’s director of engineering.

Facebook this week has been girding to handle the annual flood of New Year’s Eve traffic, running checks on its hardware and software to ferret out early signs of problems, designating engineers to be on call in case something does break, and preparing to bring additional capacity on line in its data centers around the Bay Area and in Prineville, Oregon.

“I think it’s part of the interesting things of what we’ve done, is that it’s not a special event,” Parikh said of Facebook’s preparations for New Year’s Eve. “There is a little bit of a special preparation in terms of watching over it, but it’s not this fire drill.”

Experts and insiders like Parikh say the robust data infrastructure behind the social network — CEO Mark Zuckerberg has long been focused on having the digital horsepower to support unbridled growth — is a key reason for the 800-million-member network’s success.

“That psychology of always being overbuilt for whatever potential use case might emerge has been part of their culture since the beginning,” said David Kirkpatrick, the author of “The Facebook Effect”, a 2010 book that follows the company’s seven-year history. “It’s not something that’s that uncommon for mature Internet companies, but I think that’s one of the things that really differentiated them from (early social networks) Friendster and MySpace in a really critical way.”

Since its founding in 2004, Facebook has faced plenty of criticism for its approach to privacy and its introduction of new features. But the Menlo Park-based social network, unlike Twitter, has virtually never had problems with the functioning of its site, even as its global membership has exploded in size. Facebook now reaches 55 percent of the global Internet audience, according to Internet metrics firm comScore, and accounts for one in every seven minutes spent online around the world.

“The primary reason Friendster died was because it couldn’t handle the volume of usage it had. And believe me, that was a lesson that was well learned by” Zuckerberg, Facebook co-founder Dustin Moskovitz, and founding president Sean Parker, Kirkpatrick said. “They always talked about not wanting to be ‘Friendstered,’ and they meant not being overwhelmed by excess usage that they hadn’t anticipated.”

Google (GOOG) and Amazon typically get more attention for their “clouds” — networks of many thousands of computer servers interconnected by the Internet in a far-flung system of data centers that store the companies’ data and run their software. But because of three key factors, Facebook’s cloud is unique, Parikh said.

To manage Facebook’s data infrastructure, “you kind of need to have this sense of amnesia,” said Parikh, who previously held executive engineering jobs at Palo Alto-based Ning and Greater Boston-based Akamai Technologies. “Nothing you learned or read about earlier in your career applies here.”

One aspect of Facebook’s uniqueness is its membership — so large and so scattered around the world, with more than three quarters outside the United States. Second is that the social network’s members are so interconnected, with each account linked to an average of 130 friends. Third is that the Facebook experience would not work if photos and posts did not appear on friends’ pages almost instantly.

“It’s very important that it’s real time,” Parikh said. “If I were to make a friend request to you, and you wouldn’t see it till tomorrow, I would get confused.”

Twitter also gets a surge on New Year’s Eve, recording a peak of 6,939 Tweets per second on New Year’s Eve 2010. But Facebook doesn’t have a “Fail Whale” — the infamous page that appears when Twitter is overwhelmed with traffic. One reason, Parikh said, is that Facebook has “emergency parachutes” that allow the social network to survive system failures or unanticipated surges in traffic, like the May 2 death of Osama bin Laden, which produced the biggest global spike of status updates on Facebook in 2011.

“Let’s say we’re having a capacity problem: Maybe we serve slightly smaller photos … and that saves us some bandwidth,” he said. Facebook has many subtle options “to sort of degrade gracefully when and if a problem happens, so we don’t just go off-line and the whole thing disappears. You get a slightly less optimal experience, but you still have users being able to engage in that feature.”
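
How such a “parachute” might work can be pictured with a small sketch. The load metric, thresholds and photo-variant names below are hypothetical, not Facebook’s actual system; the point is only that under pressure the site serves a cheaper response instead of an error page.

```python
# A minimal sketch of the "emergency parachute" idea: when a (hypothetical)
# load metric crosses a threshold, serve a smaller photo variant instead of
# failing outright. Names and thresholds are illustrative, not Facebook's.

PHOTO_VARIANTS = {
    "full":    "photo_2048.jpg",   # normal experience
    "reduced": "photo_960.jpg",    # slightly smaller, saves bandwidth
    "minimal": "photo_480.jpg",    # last resort before going offline
}

def choose_photo_variant(current_load: float, capacity: float) -> str:
    """Degrade gracefully: pick a photo size based on how close the system
    is to capacity, rather than returning an error page."""
    utilization = current_load / capacity
    if utilization < 0.80:
        return PHOTO_VARIANTS["full"]
    if utilization < 0.95:
        return PHOTO_VARIANTS["reduced"]
    return PHOTO_VARIANTS["minimal"]

# Example: at 90 percent of capacity, users still get photos, just smaller ones.
print(choose_photo_variant(current_load=9_000, capacity=10_000))
```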

Different types of events stress different parts of Facebook. Holidays require hundreds of terabytes of capacity for photo and video uploads — more data than the entire web archive collected by the Library of Congress, and enough data for more than 80 years of music recorded on CDs.
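
A rough back-of-the-envelope check shows those two comparisons are consistent. The per-disc figures below are common approximations for an audio CD, not numbers from the article:

```python
# Rough sanity check of the "80 years of music on CDs" comparison.
# Per-disc figures are common audio-CD approximations, not from the article.
MINUTES_PER_CD = 74     # typical audio CD running time
MB_PER_CD = 650         # typical audio CD capacity in megabytes

minutes_of_music = 80 * 365.25 * 24 * 60         # 80 years of continuous music
discs = minutes_of_music / MINUTES_PER_CD
terabytes = discs * MB_PER_CD / 1_000_000        # megabytes -> terabytes

print(f"about {discs:,.0f} CDs, roughly {terabytes:.0f} TB")
# -> about 568,605 CDs, roughly 370 TB: "hundreds of terabytes," as the article says
```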

But news events like bin Laden’s death and big sports events can affect News Feed, the primary feature where users’ status updates appear.

“The Super Bowl, the World Series, any sort of major sports or news event, whatever might be happening in the world that people are talking about, can really impact this part of the site,” Parikh said.

The simple act of opening your Facebook home page draws on about 100 different computer servers across the company’s network.

“We then have to rank, sort, and privacy-check all of that data, and then render that data in this user experience that you see in front of you on your desktop,” Parikh said. “And arguably, all this has to happen in less than a second, because I don’t think if we were to spend any more time doing that, people would stay on the site and be as engaged as they are today.”
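
A highly simplified sketch of that fan-out-and-assemble pattern: query many backends at once, keep only what the viewer is allowed to see, rank the rest and stop when the time budget runs out. The server names, scores and privacy flags below are invented for illustration.

```python
# Simplified sketch of assembling a page from many backend servers:
# fan out requests concurrently, privacy-check and rank the results,
# and keep the whole operation under a sub-second budget.
# Server names, scores and privacy flags are invented for illustration.
import concurrent.futures
import random
import time

SERVERS = [f"feed-backend-{i}" for i in range(100)]   # ~100 servers per page load

def fetch_stories(server: str) -> list[dict]:
    """Stand-in for a network call to one backend server."""
    time.sleep(random.uniform(0.01, 0.05))            # simulated latency
    return [{"server": server,
             "score": random.random(),
             "visible_to_viewer": random.random() > 0.1}]

def build_feed(time_budget_s: float = 1.0) -> list[dict]:
    stories: list[dict] = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        futures = [pool.submit(fetch_stories, s) for s in SERVERS]
        try:
            for fut in concurrent.futures.as_completed(futures, timeout=time_budget_s):
                stories.extend(fut.result())
        except concurrent.futures.TimeoutError:
            pass   # render with whatever arrived in time rather than blow the budget
    # Privacy-check, then rank and sort before rendering.
    visible = [s for s in stories if s["visible_to_viewer"]]
    return sorted(visible, key=lambda s: s["score"], reverse=True)

print(f"assembled {len(build_feed())} stories")
```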

Contact Mike Swift at 408-271-3648. Follow him at Twitter.com/swiftstories and view his Google+ profile.

Four key reasons why Facebook never goes down

1) Horizontal scaling. Facebook’s cloud has been designed to be “horizontally scalable,” meaning that the social network can easily tie additional servers into its global network to cope with spikes in traffic, either from a holiday like New Year’s Eve or an unplanned news event.
2) Instrumentation. Facebook has invested extensively in “dashboard” systems that monitor traffic within its cloud and give engineers early warnings when software or hardware is starting to fail (a sketch of this kind of early-warning check follows the list).
3) People on call. For a big event like Halloween or New Year’s Eve, Facebook makes sure it has skilled engineers ready to respond.
4) “Emergency Parachutes.” Facebook has extensive controls that allow it to slow parts of the site down in subtle ways to avoid a complete failure.
Source: Facebook
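
A minimal sketch of the kind of early-warning check behind the “instrumentation” point above: compare a live metric against its recent baseline and flag large deviations before they become an outage. The metric names and thresholds are hypothetical, not Facebook’s actual tooling.

```python
# Minimal sketch of an early-warning check a monitoring dashboard might run:
# compare a live metric against its recent baseline and flag large deviations
# before they turn into an outage. Metric names and thresholds are hypothetical.
from statistics import mean
from typing import Optional

def check_metric(name: str, recent_values: list[float], current: float,
                 tolerance: float = 0.5) -> Optional[str]:
    """Return a warning string if the current reading strays too far
    from the recent baseline, otherwise None."""
    baseline = mean(recent_values)
    if baseline == 0:
        return None
    deviation = abs(current - baseline) / baseline
    if deviation > tolerance:
        return f"WARNING: {name} is {deviation:.0%} off baseline ({current} vs {baseline:.0f})"
    return None

# Example: photo-upload errors per minute suddenly triple.
alert = check_metric("photo_upload_errors_per_min", [40, 38, 42, 41], 130)
if alert:
    print(alert)   # an on-call engineer would be paged here
```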
