Large-scale Incremental Processing Using Distributed Transactions and Notifications

Added by gromgull on 2012-07-11 14:52

Creator(s): Daniel Peng, Frank Dabek

Description:
Updating an index of the web as documents are crawled requires continuously transforming a large repository of existing documents as new documents arrive. This task is one example of a class of data processing tasks that transform a large repository of data via small, independent mutations. These tasks lie in a gap between the capabilities of existing infrastructure. Databases do not meet the storage or throughput requirements of these tasks: Google's indexing system stores tens of petabytes of data and processes billions of updates per day on thousands of machines. MapReduce and other batch-processing systems cannot process small updates individually as they rely on creating large batches for efficiency.
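
To make the gap concrete, here is a minimal sketch of the two processing styles; the repository and index structures below are illustrative assumptions, not Google's infrastructure. The incremental path touches only the arriving document, while the batch path must reprocess the whole repository:

    # Illustrative in-memory sketch; a real system would use Bigtable-scale
    # storage. "repository" and "inverted_index" are assumed names.
    repository = {}      # url -> document contents
    inverted_index = {}  # term -> set of urls containing it

    def index_document(url, contents):
        """Small, independent mutation: index one newly crawled document."""
        repository[url] = contents
        for term in contents.split():
            inverted_index.setdefault(term, set()).add(url)

    def batch_reindex():
        """Batch alternative: rebuild the whole index even for one change."""
        inverted_index.clear()
        for url, contents in repository.items():
            for term in contents.split():
                inverted_index.setdefault(term, set()).add(url)

    # Incremental cost is proportional to the new document alone.
    index_document("example.com/a", "incremental web indexing")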

We have built Percolator, a system for incrementally processing updates to a large data set, and deployed it to create the Google web search index. By replacing a batch-based indexing system with an indexing system based on incremental processing using Percolator, we process the same number of documents per day, while reducing the average age of documents in Google search results by 50%.
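
As a rough illustration of the two mechanisms named in the title, the sketch below pairs writes with change notifications that trigger registered observers, so derived data is recomputed only for rows that changed. The names here (table, on, write, run_observers) are assumptions for this sketch, not Percolator's API; Percolator itself implements snapshot-isolated multi-row transactions on top of Bigtable and triggers observers when watched columns change.

    # Toy sketch of transactions-plus-notifications; plain dicts stand in
    # for Bigtable, and these names are not Percolator's actual API.
    table = {}      # (row, column) -> value
    observers = {}  # column -> list of callbacks taking a row key
    dirty = []      # pending (row, column) notifications

    def on(column, callback):
        """Register an observer to run when `column` changes in any row."""
        observers.setdefault(column, []).append(callback)

    def write(row, column, value):
        """In Percolator this would run inside a multi-row, snapshot-isolated
        transaction; here it is a plain write plus a notification."""
        table[(row, column)] = value
        dirty.append((row, column))

    def run_observers():
        """Worker loop: deliver notifications; observers may write further
        columns, cascading more notifications."""
        while dirty:
            row, column = dirty.pop()
            for callback in observers.get(column, []):
                callback(row)

    # Example: keep a derived word count current as a document's contents
    # change, without touching any other row.
    on("contents", lambda row: write(
        row, "wordcount", str(len(table[(row, "contents")].split()))))
    write("example.com/a", "contents", "hello incremental world")
    run_observers()
    print(table[("example.com/a", "wordcount")])  # -> 3

In the paper's design, pending notifications are persisted in a Bigtable column and scanned by distributed workers, so a crashed worker delays pending work rather than losing it.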
