
Facebook Engineering - BigPipe - Pipelining web pages for high performance

Site speed is one of the most critical company goals for Facebook. In 2009, we successfully made the Facebook site twice as fast, as described in this post. Several key innovations from our engineering team made this possible. In this blog post, I will describe one of the secret weapons behind that achievement: a technology we call BigPipe.

BigPipe is a fundamental redesign of the dynamic web page serving system. The general idea is to decompose web pages into small chunks called pagelets, and pipeline them through several execution stages inside web servers and browsers. This is similar to the pipelining performed by most modern microprocessors: multiple instructions are pipelined through different execution units of the processor to achieve the best performance. Although BigPipe is a fundamental redesign of the existing web serving process, it does not require changing existing web browsers or servers; it is implemented entirely in PHP and JavaScript.
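BigPipe itself is implemented in PHP and JavaScript; as a rough illustration only (not the production code), the Python sketch below shows the core flow: flush an HTML skeleton with empty placeholders first, then stream each pagelet's payload as soon as it is generated so the browser can render the page progressively. The onPageletArrive loader is a hypothetical stand-in for the client-side JavaScript that injects each payload into its placeholder.

# Illustrative sketch only -- BigPipe itself is implemented in PHP and
# JavaScript; this Python generator just shows the core idea of flushing
# pagelet payloads as soon as each one is generated.
import json
import time


def render_pagelet(pagelet_id):
    """Stand-in for the expensive, per-pagelet generation work."""
    time.sleep(0.1)  # pretend to query databases, render markup, etc.
    return {"id": pagelet_id, "content": f"<div>{pagelet_id} markup</div>"}


def bigpipe_response(pagelet_ids):
    """Yield the page in chunks instead of one monolithic response."""
    # 1. Flush an unclosed HTML skeleton with empty placeholder divs so the
    #    browser can start downloading static resources immediately.
    placeholders = "".join(f'<div id="{p}"></div>' for p in pagelet_ids)
    yield f"<html><head><title>demo</title></head><body>{placeholders}"

    # 2. As each pagelet finishes generating, flush a small script tag whose
    #    JSON payload a client-side loader would inject into its placeholder.
    for pagelet_id in pagelet_ids:
        payload = json.dumps(render_pagelet(pagelet_id))
        yield f"<script>onPageletArrive({payload});</script>"

    # 3. Close the document once every pagelet has been delivered.
    yield "</body></html>"


if __name__ == "__main__":
    for chunk in bigpipe_response(["composer", "navigation", "news_feed"]):
        print(chunk)  # a real server would flush each chunk to the socket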

Facebook Engineering - Making Facebook 2x Faster

Everyone knows the internet is better when it's fast. At Facebook, we strive to make our site as responsive as possible; we've run experiments that prove users view more pages and get more value out of the site when it runs faster. Google and Microsoft presented similar conclusions for their properties at the 2009 O'Reilly Velocity Conference.

So how do we go about making Facebook faster? The first thing we have to get right is a way to measure our progress. We want to optimize for users seeing pages as fast as possible, so we look at the three main components that contribute to the performance of a page load: network time, generation time, and render time.
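To make that split concrete, here is an illustrative breakdown (not our actual instrumentation; the timestamp fields and numbers below are made up for the example) showing how a single page view's total time can be divided into those three components.

# Illustrative sketch, not real instrumentation: given a handful of
# client/server timestamps for one page view, split the total load time
# into the three components described above.
from dataclasses import dataclass


@dataclass
class PageViewTimestamps:
    request_sent: float       # browser sends the request
    generation_start: float   # server begins generating the page
    generation_end: float     # server finishes producing the response
    response_received: float  # browser has the full response
    render_complete: float    # browser has finished displaying the page


def breakdown(ts: PageViewTimestamps) -> dict:
    generation = ts.generation_end - ts.generation_start
    # Time on the wire (request plus response) that is not server work.
    network = (ts.response_received - ts.request_sent) - generation
    render = ts.render_complete - ts.response_received
    return {
        "network": round(network, 3),
        "generation": round(generation, 3),
        "render": round(render, 3),
    }


if __name__ == "__main__":
    ts = PageViewTimestamps(0.00, 0.05, 0.35, 0.55, 0.90)  # seconds, made up
    print(breakdown(ts))  # -> {'network': 0.25, 'generation': 0.3, 'render': 0.35}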

Facebook Engineering - Under the hood: MySQL Pool Scanner (MPS)

Facebook has one of the largest MySQL database clusters in the world. This cluster comprises many thousands of servers across multiple data centers on two continents.

Operating a cluster of this size with a small team is achieved by automating nearly everything a conventional MySQL Database Administrator (DBA) might do, so that the cluster can almost run itself. One of the core components of this automation is a system we call MPS, short for “MySQL Pool Scanner.”

MPS is a sophisticated state machine written mostly in Python. It replaces a DBA for many routine tasks and enables us to perform maintenance operations in bulk with little or no human intervention.
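As a rough illustration of what such a pool scanner might look like (the state names, thresholds, and transitions below are invented for the example and are not MPS's actual logic), a single pass could examine every instance and decide its next state:

# Minimal sketch of the "pool scanner" state-machine idea: not MPS itself;
# the states and transitions here are illustrative only.
from enum import Enum, auto


class State(Enum):
    PRODUCTION = auto()   # instance is serving traffic
    NEEDS_COPY = auto()   # instance should be cloned elsewhere
    DRAINED = auto()      # data removed, instance awaiting reuse
    SPARE = auto()        # empty instance, back in the pool


def decide_next_state(instance):
    """Pure decision function: look at an instance and pick its next state."""
    if instance["state"] is State.PRODUCTION and instance["disk_used_pct"] > 90:
        return State.NEEDS_COPY          # too full -> schedule a rebalance copy
    if instance["state"] is State.NEEDS_COPY and instance["copy_done"]:
        return State.DRAINED             # data is safely elsewhere, drain it
    if instance["state"] is State.DRAINED:
        return State.SPARE               # drained instances rejoin the pool
    return instance["state"]             # nothing to do this pass


def scan(instances):
    """One scanner pass: apply the decision function to every instance."""
    for inst in instances:
        new_state = decide_next_state(inst)
        if new_state is not inst["state"]:
            print(f"{inst['name']}: {inst['state'].name} -> {new_state.name}")
            inst["state"] = new_state


if __name__ == "__main__":
    pool = [
        {"name": "db001:3306", "state": State.PRODUCTION,
         "disk_used_pct": 95, "copy_done": False},
        {"name": "db002:3307", "state": State.DRAINED,
         "disk_used_pct": 10, "copy_done": True},
    ]
    scan(pool)  # a real scanner would run passes like this continuously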

A closer look at a single database node
Every one of the thousands of database servers we have can house a certain number of MySQL instances. An instance is a separate MySQL process, listening on a separate port with its own data set. For simplicity, we'll assume exactly two instances per server in our diagrams and examples. 

The entire data set is split into thousands of shards, and every instance holds a group of such shards, each in its own database schema. A Facebook user's profile is assigned to a shard when the profile is created and every shard holds data relating to thousands of users.

The layout of a single database server makes this easier to picture, as sketched below.
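In lieu of a diagram, here is a toy Python model of the layout just described: one server housing two MySQL instances on separate ports, each holding a group of shards in its own schema, plus a lookup from a user to the shard and instance holding their data. The modulo-based shard assignment is an assumption made for the example only; a profile is simply assigned to a shard when it is created.

# Toy model of the layout described above (not production code): a server
# houses two MySQL instances, each instance holds a group of shards, and a
# user's profile lives on exactly one shard. The user_id % num_shards rule
# below is an assumption for the example.
from dataclasses import dataclass, field


@dataclass
class Instance:
    port: int                                    # each instance is its own mysqld process
    shards: list = field(default_factory=list)   # schemas like "shard_0001"


@dataclass
class Server:
    hostname: str
    instances: list = field(default_factory=list)  # two per server in our examples


def build_example_server() -> Server:
    return Server(
        hostname="db1234.example",
        instances=[
            Instance(port=3306, shards=["shard_0001", "shard_0002"]),
            Instance(port=3307, shards=["shard_0003", "shard_0004"]),
        ],
    )


def locate_user(server: Server, user_id: int, num_shards: int = 4) -> tuple:
    """Map a user to the (instance port, shard schema) on this toy server."""
    shard_name = f"shard_{(user_id % num_shards) + 1:04d}"
    for inst in server.instances:
        if shard_name in inst.shards:
            return inst.port, shard_name
    raise LookupError(shard_name)


if __name__ == "__main__":
    srv = build_example_server()
    print(locate_user(srv, user_id=42))  # -> (3307, 'shard_0003')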

Facebook Engineering - TAO: The power of the graph

Facebook puts an extremely demanding workload on its data backend. Every time any one of over a billion active users visits Facebook through a desktop browser or on a mobile device, they are presented with hundreds of pieces of information from the social graph. Users see News Feed stories; comments, likes, and shares for those stories; photos and check-ins from their friends -- the list goes on. The high degree of output customization, combined with a high update rate of a typical user’s News Feed, makes it impossible to generate the views presented to users ahead of time. Thus, the data set must be retrieved and rendered on the fly in a few hundred milliseconds.