2012 is the year of Big Data

This is an article I wrote recently for the Cloud Computing Journal. While 2011 may have been the year of the cloud, 2012 is proving to be the year that Big Data breaks through in a big way. I give a brief history of Big Data and explain why I think the cloud and Big Data are a natural fit. The key is ensuring that the right architecture is available to support the performance needs of Big Data applications. Click on the link below to read the full article.

The Big Data Revolution — For many years, companies collected data from various sources that often found its way into relational databases like Oracle and MySQL. However, the rise of the Internet, Web 2.0, and, more recently, social media has driven an enormous increase not only in the amount of data created but also in the types of data. Data is no longer confined to types that fit neatly into standard database fields; it now arrives as photos, geographic information, chats, Twitter feeds, and emails. The age of Big Data is upon us.

A study by IDC titled “The Digital Universe Decade” projects a 45-fold increase in annual data by 2020. In 2010, the amount of digital information was 1.2 zettabytes (1 zettabyte equals 1 trillion gigabytes). To put that in perspective, 1.2 zettabytes is the equivalent of a full-length episode of “24” running continuously for 125 million years, according to IDC. That’s a lot of data. More important, all of that data has to go somewhere, and IDC’s report projects that by 2020, more than one-third of all digital information created annually will either live in or pass through the cloud. With this much data being created, the challenge will be how to collect and store it all, and then how to analyze what it means.
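Purely as a back-of-envelope sanity check (my own sketch, not part of the IDC report), those figures do hang together. The snippet below confirms the zettabyte-to-gigabyte conversion and shows that the “24 for 125 million years” comparison implies a video bitrate of roughly 2.4 Mbit/s, which is in the right range for standard-definition video.

```python
# Back-of-envelope check of the IDC figures quoted above.
# Assumption (mine, not IDC's): a year is ~365.25 days.

ZETTABYTE = 10**21   # bytes; 1 ZB = 1 trillion (10**12) gigabytes
GIGABYTE = 10**9     # bytes

digital_universe_2010 = 1.2 * ZETTABYTE  # IDC's 2010 estimate

# 1 ZB expressed in gigabytes: prints 1e12, i.e. one trillion GB
print(ZETTABYTE / GIGABYTE)

# Implied bitrate if 1.2 ZB equals 125 million years of continuous video
seconds = 125e6 * 365.25 * 24 * 3600
bytes_per_second = digital_universe_2010 / seconds
print(bytes_per_second * 8 / 1e6)  # ~2.4 Mbit/s, plausible for SD video
```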