
Preparing for the Storm


In a previous post, I described Big Data in terms of three criteria: Volume, Velocity and Structure (probably worth revising to Variety, as that is now more commonly used). While Hadoop is currently the primary choice for analytics on Big Data, it is not designed to handle velocity. That is where Storm comes in.

Hadoop is designed to operate in batch mode: its strength is in consolidating a lot of data, applying operations to it in a massively parallel way and reducing the results to a file. Since this is a batch job, it has a defined beginning and a defined end. But what about data that is continuous and needs to be processed in real time? Given the popularity of Hadoop, several open source projects have risen to meet this challenge. Most notably, HBase is designed to work with Hadoop and provide storage for large quantities of sparse data, leveraging the Hadoop Distributed File System (HDFS).
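To make the batch model concrete, here is a minimal sketch of the canonical Hadoop WordCount job (a hedged example, assuming Hadoop 2's org.apache.hadoop.mapreduce API): it reads a finite input, maps over it in parallel, sums per-word counts in the reducer and terminates, leaving its results in a file.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Map: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce: sum the counts for each word and write the total.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // defined beginning: a finite input
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // defined end: results land in a file
    System.exit(job.waitForCompletion(true) ? 0 : 1);        // the job terminates
  }
}

The important part for this discussion is waitForCompletion(): the job has a defined end, which is exactly what a continuous stream does not.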

Storm Use Cases

But what about real-time analysis of a Twitter stream? This use case is a perfect fit for Storm. A recent project open-sourced by Twitter, Storm is similar to Hadoop but tuned for high volume and velocity: it is intended to handle a real-time stream of information with no defined end, so it can keep analyzing a stream of data indefinitely. If you are familiar with Yahoo’s S4, Storm is similar. It handles three broad use cases (but not exclusively these):

  • Stream processing
  • Real-time computation
  • Distributed remote procedure calls (RPC)

Storm is relatively new but has been adopted by several companies, most notably Twitter (obviously), Groupon and Alibaba. You’ll notice that even though these companies are in different industries, they all have to deal with large amounts of streaming data.

Components

Storm integrates with queuing systems (like Kestrel or JMS) and then writes results to a database. It is designed to be massively scalable, processing very high throughputs of messages with very low latency. It is also fault-tolerant: if any workers or nodes in a cluster die, they are automatically restarted. This gives it the ability to guarantee data processing, since it tracks the lineage of each tuple; it knows when data hasn’t been fully processed and can replay it until it has been. Since Storm has no concept of storage, you will need to store the results in another system (like a database or another application).
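As a hedged illustration of what that guarantee looks like in code (assuming the backtype.storm API that shipped with Storm's early releases; FilterBolt itself is a hypothetical example), a bolt anchors each emitted tuple to its input and acks it when done, which is what lets Storm track lineage and replay anything that fails:

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

// Hypothetical bolt: passes along non-empty messages in lowercase.
public class FilterBolt extends BaseRichBolt {
  private OutputCollector collector;

  @Override
  public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
    this.collector = collector;
  }

  @Override
  public void execute(Tuple input) {
    String message = input.getString(0);
    if (message != null && !message.isEmpty()) {
      // Anchor the emitted tuple to its input so a failure downstream
      // triggers a replay from the spout.
      collector.emit(input, new Values(message.toLowerCase()));
    }
    collector.ack(input); // mark the input tuple as fully processed
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("message"));
  }
}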

Storm also has its own components, which bear some explaining. Although there are several components in a Storm cluster (including ZooKeeper, which is used for coordination and maintaining state), there are three that are referenced frequently; a sketch of how they fit together follows the list:

  • Topologies – this is the equivalent of a MapReduce job in Hadoop, except that it doesn’t really end (unless you kill it, of course)
  • Spouts – this is the primitive that takes source data and emits it into streams for processing
  • Bolts – this is the primitive that takes the data and does the actual processing (like filtering, joining and applying functions). This is similar to the part of the MapReduce WordCount example that sums the count of each word.
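Here is the sketch promised above: a minimal word-count topology, again assuming the early backtype.storm API. SentenceSpout, SplitSentenceBolt and WordCountBolt are hypothetical stand-ins for the spout and bolts just described.

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;

public class WordCountTopology {
  public static void main(String[] args) throws Exception {
    TopologyBuilder builder = new TopologyBuilder();

    // Spout: the source of the stream (SentenceSpout is a hypothetical stand-in).
    builder.setSpout("sentences", new SentenceSpout(), 2);

    // Bolt: split sentences into words; shuffleGrouping distributes tuples
    // randomly across the bolt's tasks.
    builder.setBolt("split", new SplitSentenceBolt(), 4)
           .shuffleGrouping("sentences");

    // Bolt: count words; fieldsGrouping sends the same word to the same
    // task every time, so per-word counts stay consistent.
    builder.setBolt("count", new WordCountBolt(), 4)
           .fieldsGrouping("split", new Fields("word"));

    Config conf = new Config();
    conf.setDebug(true);

    // Run in-process for testing. Unlike a MapReduce job, this topology
    // keeps running until it is explicitly killed.
    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology("word-count", conf, builder.createTopology());
  }
}

The choice of grouping is the design decision here: shuffle groupings balance load, while fields groupings give the partitioned, stateful behavior a running count needs.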

Since Storm is intended to run in a highly distributed fashion, it is also a natural fit for the cloud. In fact, there is a project that makes it easy to deploy a Storm cluster on AWS. As an open source project, it can run on any Linux machine and works well with servers that can be horizontally scaled. If you want to learn more technical details, check out the excellent wiki on GitHub, where you can also download the latest version.