
Published on December 8th, 2015 | by Guest


Real-world scenarios where Hadoop can help organizations with Big Data

Big Data has become a crucial component for any organization that wishes to differentiate itself and maintain a competitive edge, because Big Data provides unprecedented analytic capabilities to identify customer trends, behaviour and preferences. Hadoop is a powerful data storage and analysis platform that can store large quantities of structured and unstructured data inexpensively, and it is well suited to running sophisticated analytics on large volumes of data. This has started a trend that is driving businesses large and small to transform into data-driven organizations.


Data Staging

A traditional data architecture has three main components: application data captured in relational databases, ETL tools that extract, transform and load this data into a data warehouse, and finally an analytics tool that interacts with the data warehouse.

In the above scenario, Hadoop can take care of the ETL processing. You can use MapReduce, Hadoop's distributed processing framework, to write jobs that load the application data into the Hadoop Distributed File System (HDFS), transform it, and then send the transformed data on to the data warehouse. Hadoop's low-cost storage can easily hold both the unprocessed application data and the transformed data, which means all your data is in one place where it is easy to manage, reprocess and analyze at any time. Used this way, Hadoop reduces the need for separate ETL tools, which means a lower-cost data staging process.
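To make the staging step concrete, here is a minimal, hypothetical map-only MapReduce job written against the standard org.apache.hadoop.mapreduce API. It reads raw comma-separated application records that have already landed in HDFS and writes cleaned, tab-separated rows for a warehouse loader to pick up. The class names, paths and field layout (customer id, order date, amount) are assumptions for illustration only, not part of any particular product.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Map-only staging job: reads raw comma-separated application records from HDFS
// and writes cleaned, tab-separated rows that a warehouse loader can pick up.
public class StagingJob {

  public static class CleanMapper extends Mapper<Object, Text, NullWritable, Text> {
    private final Text out = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      // Hypothetical raw layout: customerId,orderDate,amount (field names are assumptions)
      String[] fields = value.toString().split(",");
      if (fields.length != 3) {
        return; // drop malformed records instead of loading them into the warehouse
      }
      out.set(fields[0].trim() + "\t" + fields[1].trim() + "\t" + fields[2].trim());
      context.write(NullWritable.get(), out);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "staging-clean");
    job.setJarByClass(StagingJob.class);
    job.setMapperClass(CleanMapper.class);
    job.setNumReduceTasks(0);                    // map-only: no shuffle or reduce needed
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // raw data landed in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // cleaned data for the warehouse
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

In a real pipeline the cleaned output directory would then be exported to the warehouse, for example with a bulk-load tool, while the raw input stays in HDFS for reprocessing.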

Data Processing

Hadoop provides an inexpensive alternative to using costly data warehouse resources to update data. Instead of performing updates inside the warehouse, you can send the data that needs to be updated to Hadoop, let MapReduce apply the updates, and then load the updated data back into the warehouse. With Big Data, where you are dealing with large volumes of structured and unstructured data, the cost of updating inside the data warehouse becomes considerable, so Hadoop is a cheaper and effective alternative.
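A sketch of this update pattern, assuming both the base records and the update records share a simple recordId,timestamp,payload layout, could look like the mapper/reducer pair below (a driver like the one in the staging example would wire them into a job). The reducer keeps only the newest version of each record, so the merged result can be loaded back into the warehouse; the field layout is an assumption for illustration.

```java
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// "Update in Hadoop" pattern: base and update records use the same
// recordId,timestamp,payload layout; the newest version of each record wins.
public class MergeUpdates {

  public static class KeyByIdMapper extends Mapper<Object, Text, Text, Text> {
    private final Text id = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split(",", 2); // recordId, rest of record
      if (fields.length == 2) {
        id.set(fields[0]);
        context.write(id, value); // group every version of a record under its id
      }
    }
  }

  public static class LatestWinsReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      long newestTs = Long.MIN_VALUE;
      String newest = null;
      for (Text v : values) {
        String[] fields = v.toString().split(",", 3); // recordId, timestamp, payload
        if (fields.length < 3) {
          continue; // skip malformed versions (assumes a numeric timestamp otherwise)
        }
        long ts = Long.parseLong(fields[1].trim());
        if (ts > newestTs) {        // later timestamp wins
          newestTs = ts;
          newest = v.toString();
        }
      }
      if (newest != null) {
        context.write(key, new Text(newest)); // one merged row per record id
      }
    }
  }
}
```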

Data Archiving

What if you want to keep decade-old data? Traditionally you would have to destroy data after its regulatory life in order to save on storage costs, even though such data can be very valuable to your company. Hadoop provides an easy and cost-effective way to retain historical data: it runs on commodity hardware that scales easily and quickly, giving organizations an effective way to store and archive large volumes of data at a lower cost.
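For example, a small program using Hadoop's FileSystem API could copy an expired extract into a dated HDFS directory instead of deleting it. This is a minimal sketch; the paths and directory layout are purely illustrative assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal archival sketch: copy an expired local extract into a dated HDFS directory
// instead of destroying it. Paths and directory layout are illustrative assumptions.
public class ArchiveToHdfs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();      // picks up core-site.xml / hdfs-site.xml
    try (FileSystem fs = FileSystem.get(conf)) {
      Path local = new Path("/data/exports/orders-2005.csv");   // hypothetical local file
      Path archive = new Path("/archive/orders/year=2005/");    // hypothetical HDFS layout
      fs.mkdirs(archive);
      fs.copyFromLocalFile(local, archive);        // cheap HDFS storage keeps the history
    }
  }
}
```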

These capabilities of Hadoop can be used not only with large volumes of data but also when the amount of data is limited. Hadoop is a powerful and effective solution to many data-related problems. Its ability to handle large volumes of unstructured data and derive meaning from them has made it very popular among organizations that depend on their data, and MapReduce can yield results quickly even when it is dealing with large volumes of data.

Unstructured Data

HDFS, the Hadoop Distributed File System, is very useful for unstructured data. Hadoop handles structured and unstructured data efficiently by breaking it into blocks and distributing those blocks across a number of servers. The infrastructure comprises HDFS, MapReduce and YARN. MapReduce works on the concept of divide and conquer, acting as the compute engine that “maps” and “reduces” data.
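The classic word-count example makes this concrete: the mapper “divides” the text held in each block into (word, 1) pairs on the node that stores the block, and the reducer “conquers” by summing the counts for each word. The job driver is omitted for brevity; this is only a sketch of the standard pattern.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Classic word count: the mapper emits (word, 1) for every token it sees,
// and the reducer sums the counts per word.
public class WordCount {

  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);   // emit one count per occurrence
      }
    }
  }

  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum)); // total occurrences of this word
    }
  }
}
```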

Live Examples of Hadoop implementation

Nokia

Nokia collects and analyzes vast amounts of data from mobile phones.

Problem: 
(1) Dealing with 100TB of structured data and 500TB+ of semi-structured data
(2) 10s of PB across Nokia, 1TB / day

Solution: An HDFS-based data warehouse stores all the semi-structured and multi-structured data and offers processing at petabyte scale

Hadoop Vendor: Cloudera

Cluster/Data Size:
(1) 500TB
(2) 10s of PB across Nokia, 1TB / day

NextBio

NextBio is using Hadoop MapReduce and HBase to process massive amounts of human genome data.

Problem: Processing multi-terabyte data sets wasn’t feasible using traditional relational databases like MySQL.

Solution: NextBio uses Hadoop MapReduce to process genome data in batches and uses HBase as a scalable data store.
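As a rough illustration of the HBase side, a client write might look like the sketch below. The table name, row-key layout and column names are assumptions for illustration only and do not reflect NextBio’s actual schema.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical sketch of writing one genome variant into an HBase table.
// Table name, row-key layout and column names are illustrative assumptions.
public class VariantWriter {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("variants"))) {
      // Row key: sample id + chromosome + position keeps one sample's variants together
      Put put = new Put(Bytes.toBytes("sample42#chr1#1234567"));
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("ref"), Bytes.toBytes("A"));
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("alt"), Bytes.toBytes("G"));
      table.put(put);
    }
  }
}
```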

Hadoop Vendor: Intel

 

About the Author:

Vaishnavi Agrawal loves pursuing excellence through writing and has a passion for technology. She has successfully managed and run personal technology magazines and websites. She currently writes for intellipaat.in, a global training company that provides e-learning and professional certification training.

The courses offered by Intellipaat address the unique needs of working professionals. She is based out of Bangalore and has five years of experience in the field of content writing and blogging. Her work has been published on various sites related to Hadoop, Big Data Training, Business Intelligence, Cloud Computing, IT, SAP, Project Management and more.


