Google Big Table paper

Google Labs have just released a paper about Big Table. Big Table is “a sparse, distributed, persistent multidimensional sorted map” that provides other Google components with a scalable and reliable persistent data storage layer. It is one of several infrastructure components that Google's applications are built on, alongside MapReduce, GFS, etc.
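
To make the “sorted map” description a bit more concrete, here is a minimal, single-process sketch of that data model in Python: a map indexed by (row key, column key, timestamp) with uninterpreted byte-string values and rows kept in sorted order. The class and method names are mine and purely illustrative – this is not Google's API – and it ignores tablets, distribution and persistence entirely.

import bisect
from collections import defaultdict


class ToySortedMap:
    """A toy stand-in for the (row key, column key, timestamp) -> bytes map."""

    def __init__(self):
        self._row_keys = []              # row keys kept in lexicographic order
        self._cells = defaultdict(dict)  # (row, column) -> {timestamp: value}

    def put(self, row, column, timestamp, value):
        i = bisect.bisect_left(self._row_keys, row)
        if i == len(self._row_keys) or self._row_keys[i] != row:
            self._row_keys.insert(i, row)          # keep rows sorted on insert
        self._cells[(row, column)][timestamp] = value

    def get(self, row, column, timestamp=None):
        versions = self._cells.get((row, column), {})
        if not versions:
            return None
        ts = max(versions) if timestamp is None else timestamp  # newest by default
        return versions.get(ts)

    def scan_rows(self, start, end):
        """Yield row keys in [start, end) -- cheap range scans are the point of sorting."""
        lo = bisect.bisect_left(self._row_keys, start)
        hi = bisect.bisect_left(self._row_keys, end)
        return iter(self._row_keys[lo:hi])


# Roughly the Webtable example from the paper: reversed-domain row keys keep
# pages from the same site adjacent, so a prefix scan picks them all up.
t = ToySortedMap()
t.put("com.cnn.www", "contents:", 3, b"<html>...</html>")
t.put("com.cnn.www", "anchor:cnnsi.com", 9, b"CNN")
print(t.get("com.cnn.www", "contents:"))          # b'<html>...</html>'
print(list(t.scan_rows("com.cnn.", "com.cnn/")))  # ['com.cnn.www']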

The thought that is going through my head is whether Google might open up this system and allow the public to use it. Clearly you could use Big Table as a file storage medium. This could then become a direct competitor to Amazon’s S3 file storage service. That would be an interesting battle 🙂

Via Greg (as usual)

3 thoughts on “Google Big Table paper”

  1. And having read it I can tell you that there’s another paper due for release on Google’s HA distributed lock manager (apparently called Chubby, errr, wouldn’t have been my choice).

  2. I agree – “interesting” choice of name.

    A few other points I thought were interesting on an initial skim:

    1. the volume of data being stored in the system, i.e. the web crawl contains 800TB in 1000 billion cells. 1000 billion is quite a large number. I would have thought 800TB is a low figure for the size of the web though…

    2. the fact that random read performance suffers quite badly as the number of tablet servers increases. As they point out, this is because they ship around 64KB blocks even though the required cell data might only be a KB or so, so the network/CPU overhead ends up saturating. Would have been nice to see how the system performs with a reduced block size – 8KB? (See the back-of-envelope sketch at the end of this comment.)

    3. Finally:

    “One lesson we learned is that large distributed systems are vulnerable to many types of failures, not just the standard network partitions and fail-stop failures assumed in many distributed protocols.”

    Yep – this stuff is hard!
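
    To make the block-size point concrete, here's a quick back-of-envelope sketch. The 1 Gbps link figure is my own assumption, not a number from the paper; the idea is just that each ~1KB random read drags a full SSTable block across the network from GFS, so the block size caps per-server read throughput.

def reads_per_sec_at_saturation(block_size_bytes, nic_bytes_per_sec):
    """Random reads/sec one tablet server can sustain before its network link
    saturates, assuming every read misses the block cache and pulls one full
    SSTable block from GFS."""
    return nic_bytes_per_sec / block_size_bytes


GIGABIT_LINK = 125e6  # ~1 Gbps, an assumed figure -- not a number from the paper

for block_size in (64 * 1024, 8 * 1024):
    rate = reads_per_sec_at_saturation(block_size, GIGABIT_LINK)
    amplification = block_size / 1024  # bytes shipped per ~1KB of useful cell data
    print(f"{block_size // 1024:>2}KB blocks: ~{rate:,.0f} random reads/s per server, "
          f"~{amplification:.0f}x transfer amplification")

    On these assumptions each server tops out at a couple of thousand random reads a second, and adding tablet servers just multiplies that fairly low per-server ceiling, which would explain the poor aggregate scaling. Shrinking the block size would raise the ceiling roughly in proportion, presumably at the cost of larger block indexes.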

  3. Re: data volume – I couldn’t tell whether they were talking compressed or uncompressed form.

    Re: random read – I seem to recall they allow the programmer to specify a transfer size and I think they also said, typically, it’s 8k. I guess they were after finding worst case performance. Certainly would’ve liked to see the updated figures as well 🙂

    I also noticed that they tend to run several “services” on the same machine (BigTable, GFS, etc.) – I would've expected more segmentation, but maybe I/O is more of an issue for them than memory or CPU.
