Big Web 2.0 Technology Challenges: Utility-class scaling of dynamic data

Managed scalability is one of those things that many do not appreciate until they really need it, i.e., when their site gets really popular. Once this happens, they start screaming about the need for scalability (look at AOL’s “Access Crisis” or Twitter’s “Fail Whale” if you need an example). By this time it is too late: your reputation has taken a hit that is hard to forget, and the cost of emergency scaling is usually very high (often involving very large, expensive servers).

It is much more efficient to plan and design for scalability from the start. When you do this, you are ready to adjust and respond to traffic loads when they arrive. Of course, not many people will appreciate what you have achieved (ironic, isn’t it?).

Why the Focus on Scaling Dynamic Data?

Scaling static data is easy. By definition, it does not change very often (i.e., its read/write ratio is usually enormously high). The most common approach is to create cached copies of static information, which provides highly reliable, low-cost scaling.
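As a rough illustration, a read-through cache with a long TTL is usually all static content needs. A minimal sketch in Python, where the in-memory dictionary and the `fetch_from_origin` helper are hypothetical stand-ins for a CDN or memcached tier and an origin database:

```python
import time

CACHE_TTL_SECONDS = 3600   # static content tolerates long TTLs
_cache = {}                # hypothetical stand-in for a CDN or memcached tier

def fetch_from_origin(key):
    # Hypothetical origin lookup; stands in for a database or file read.
    return f"<content for {key}>"

def get_static(key):
    """Read-through cache: serve the cached copy, refresh only on expiry."""
    entry = _cache.get(key)
    if entry and time.time() - entry["stored_at"] < CACHE_TTL_SECONDS:
        return entry["value"]
    value = fetch_from_origin(key)
    _cache[key] = {"value": value, "stored_at": time.time()}
    return value
```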

Scaling dynamic data has always been more difficult. These data elements change all the time, which makes them much harder to scale. The two most common approaches involve either management of near-real-time data replicates or horizontal partitioning of data:

Read-only replicates are useful when the amount of data per user is small (e.g., master file records, account profiles). However, their utility breaks down, due to replication lag, when the volume of data updates per user grows. In these situations the more complex horizontal partitioning model is more useful.
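To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical connection names) of the two routing patterns: read-only replicates send every write to the primary and spread reads across lagging copies, while horizontal partitioning sends each user's reads and writes to one shard:

```python
import random

# Hypothetical connection handles; in practice these would be database clients.
primary = "primary-db"
replicas = ["replica-1", "replica-2", "replica-3"]
shards = ["shard-0", "shard-1", "shard-2", "shard-3"]

def route_with_replicas(operation):
    """Read-only replicates: all writes hit the primary, reads spread out.
    Works well at high read/write ratios, but replicas lag the primary."""
    if operation == "write":
        return primary
    return random.choice(replicas)

def route_with_shards(user_id):
    """Horizontal partitioning: each user's reads AND writes go to one shard,
    so write volume scales, at the cost of cross-shard queries."""
    return shards[hash(user_id) % len(shards)]
```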

Web 2.0 Needs Break Both of These Models

In Web 2.0, dynamic data pursues a much more distributed path. This essentially requires you to scale in three dimensions:

  1. Enable many, many people to author (insert and edit) data from many places
  2. Enable a small group from a single place to moderate (edit) data from many authors
  3. Enable a single site to display (in one place) the most recent, highest-rated, etc. data from many authors

Unfortunately, none of the single-approach scaling methods addresses all three dimensions:

All Data in One Big System: This makes it easy to look at data from all perspectives (authoring, moderating, and publishing). However, it creates a single point of failure. When the system goes down, everyone gets a “Fail Whale”.

Caching or Read-only Replicates: This does not enable you to scale the high volume of data writes without using many primaries (which essentially moves you to a horizontally-partitioned model). Also, maintaining data integrity when replicating from multiple sources becomes a nightmare.

Horizontal Partitioning: This approach is perfect when I only need to access my own data. However, it falls apart when managing moderation and publication of live feeds. Essentially, you have to pull data from many different places to create single views and feeds. This is hard for publication (read-only) and extremely difficult for moderation (reading and updating).
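A small sketch may help show why. Assuming a hypothetical user-keyed set of shards and a stubbed `query` helper, “my data” is a single-shard call, while a site-wide “most recent” view has to fan out to every shard and merge:

```python
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]  # hypothetical user-keyed partitions

def query(shard, sql, *params):
    # Hypothetical stand-in for a real database call against one shard.
    return []

def shard_for(user_id):
    return SHARDS[hash(user_id) % len(SHARDS)]

def my_posts(user_id):
    # Easy case: one user's data lives on exactly one shard.
    return query(shard_for(user_id), "SELECT * FROM posts WHERE author = ?", user_id)

def latest_posts_sitewide(limit=20):
    # Hard case: a "most recent across all authors" feed has to touch every
    # shard, then merge and re-sort the partial results.
    partials = [query(s, "SELECT * FROM posts ORDER BY created_at DESC LIMIT ?", limit)
                for s in SHARDS]
    merged = [row for partial in partials for row in partial]
    return sorted(merged, key=lambda row: row["created_at"], reverse=True)[:limit]
```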

What Model Should I Use?

There is no single “silver bullet” like there was in the Web 1.0 world. As a result, you need to apply some engineering analysis:

  • Analyze the read/write ratios by dynamic content type (blog posts, forum thread posts, profiles, ideas, etc.)
  • Examine how people use this data, e.g., what data they request at the same time
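One way to ground that analysis is to tally reads and writes per content type from your access logs. A toy sketch, assuming the log has already been parsed into (content type, operation) pairs:

```python
from collections import Counter, defaultdict

def read_write_ratios(events):
    """events: iterable of (content_type, operation) pairs,
    where operation is 'read' or 'write'."""
    counts = defaultdict(Counter)
    for content_type, operation in events:
        counts[content_type][operation] += 1
    return {ct: c["read"] / max(c["write"], 1) for ct, c in counts.items()}

# Example: profiles are read-heavy, forum posts are comparatively write-heavy.
sample = [("profile", "read")] * 900 + [("profile", "write")] * 3 + \
         [("forum_post", "read")] * 200 + [("forum_post", "write")] * 50
print(read_write_ratios(sample))  # {'profile': 300.0, 'forum_post': 4.0}
```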

After you have done this, you will find you need to use a balanced recursive combination of multiple techniques:

  1. Use caching and read-only replicates for items with low dynamism, e.g., profiles, blog titles (not blog posts)
  2. Next, partition dynamic data by content type. This allows you to split load without interfering with the ability to easily manage moderation (moderation is usually performed by content type)
  3. Now denormalize your dynamic data using a noun-adjective model (I know, your DBA just started screaming…). This allows you to further split load without interfering with the ability to moderate content by type and status.
  4. If this still does not give you enough scalability, you need to partition either at the physical level (something many database management systems do well) or at the logical level (most likely by user or content ID). Logical partitioning will require you to use a grid computing model involving multiple, parallel calls to different databases, as sketched below this list. This is not a trivial exercise, but it enables massive scaling (I have done this on both commodity Linux and expensive mainframe architectures, scaling to support tables with several billion rows of data).
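To give a feel for that grid-style pattern, here is a minimal sketch in Python: issue the same query to every logical partition in parallel, then merge and re-sort the partial results into a single view. The partition names, the `query_partition` helper, and the `created_at` merge key are all hypothetical stand-ins, not part of any particular product.

```python
from concurrent.futures import ThreadPoolExecutor

PARTITIONS = ["db-00", "db-01", "db-02", "db-03"]  # hypothetical logical partitions

def query_partition(partition, content_type, limit):
    # Hypothetical stand-in for a real query against one partition's database.
    return []

def scatter_gather(content_type, limit=50):
    """Fan the query out to every partition in parallel, then merge and
    re-sort the partial results into a single view."""
    with ThreadPoolExecutor(max_workers=len(PARTITIONS)) as pool:
        futures = [pool.submit(query_partition, p, content_type, limit)
                   for p in PARTITIONS]
        partials = [f.result() for f in futures]
    merged = [row for partial in partials for row in partial]
    return sorted(merged, key=lambda row: row.get("created_at", 0), reverse=True)[:limit]
```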

Of course, now you are looking for an illustrative diagram showing how I have done this for high-volume, multi-tenant, cross-community social networking architectures. Unfortunately, if I shared this diagram I would be widely distributing a set of trade secrets. However, I am willing to discuss more details of this approach in environments less public than an open blog post (perhaps I will have a chance to do so at Wharton later this month).

Update:

The technology world has evolved since this post was written. Now, many new database architectures give you these scaling models “out of the box”. However, for some data needs, even this is not enough. In those cases, we recommend the Lambda Architecture (invented by Nathan Marz, a brilliant engineer acqui-hired by Twitter). We are big fans of the Lambda Architecture, as well as the slightly smaller-scale variants used at places like Netflix. If you are interested in learning more, please see our Big Data and Analytics services.
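For readers new to it, the core idea of the Lambda Architecture is to answer queries by merging a precomputed, complete-but-stale batch view with a small real-time view of recent events. A highly simplified sketch for a counter-style metric, with hypothetical view stores:

```python
# Hypothetical view stores. In a real Lambda deployment, the batch view is
# rebuilt periodically from the master dataset, and the real-time view is
# maintained by a stream processor over events since the last batch run.
batch_view = {"post_views:123": 10_400}   # complete but hours stale
realtime_view = {"post_views:123": 37}    # only the most recent events

def query(key):
    """Serving layer: merge the stale-but-complete batch view with the
    recent-but-partial real-time view."""
    return batch_view.get(key, 0) + realtime_view.get(key, 0)

print(query("post_views:123"))  # 10437
```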
