ALL ABOUT HADOOP

Hadoop

Hadoop is a scalable and reliable platform for storing and processing big data. It is an open-source software framework for storing data and running applications on clusters of commodity hardware. It can store massive volumes of data of any kind, and it can process a virtually limitless number of concurrent tasks or jobs.

History of Hadoop

Hadoop was created by Doug Cutting and Mike Cafarella in 2005. It was originally developed to support distribution for the Nutch search engine. The open-source software is now managed by the Apache Software Foundation. Hadoop was Yahoo's attempt to break the big data problem down into small pieces.

Layers in Hadoop

Hadoop is divided into three layers:

  1. Cluster Management
  2. Storage
  3. Processing

Importance of Hadoop

  - Ability to store and process huge amounts of any kind of data, quickly
  - Computing power
  - Fault tolerance
  - Flexibility
  - Low cost
  - Scalability

How does Hadoop work?

Hadoop is divided into two main systems:

Hadoop Distributed File System (HDFS): The storage system, which is spread out over multiple machines to reduce cost and increase reliability.

MapReduce: The programming model that filters and sorts the input data and then processes it in parallel, in whatever way the application requires.
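
To make this concrete, here is a minimal, hedged sketch in Java of how the two systems fit together: a driver that submits a MapReduce job (using Hadoop's classic org.apache.hadoop.mapred API) which reads its input from one HDFS path and writes its results back to another. The class name PassThroughJob is illustrative; the identity mapper and reducer simply pass each record through unchanged.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.TextOutputFormat;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class PassThroughJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(PassThroughJob.class);
            conf.setJobName("pass-through");

            // With TextInputFormat each record is (byte offset, line of text).
            conf.setInputFormat(TextInputFormat.class);
            conf.setOutputFormat(TextOutputFormat.class);
            conf.setOutputKeyClass(LongWritable.class);
            conf.setOutputValueClass(Text.class);

            // The identity mapper and reducer pass every record through,
            // so the job exercises the full map -> shuffle -> reduce pipeline.
            conf.setMapperClass(IdentityMapper.class);
            conf.setReducerClass(IdentityReducer.class);

            // Input is read from HDFS and output is written back to HDFS.
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
        }
    }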

How does HDFS work?

Data in HDFS is written once on the server and then read and re-used many times thereafter by users. This differs from the repeated read-write actions of most other file systems and explains the speed with which Hadoop operates. HDFS is excellent at dealing with high volumes and velocities of data.

HDFS works with a single NameNode and multiple DataNodes in a commodity hardware cluster. All the nodes are usually organized in the same rack in the data center. Data is broken down into separate blocks that are distributed among the various DataNodes.
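
As a small illustration of this write-once, read-many pattern, the sketch below uses Hadoop's Java FileSystem API to create a file on HDFS and then read it back. It assumes a reachable cluster configured via fs.defaultFS, and the path /demo/events.log is a made-up placeholder.

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteOnceReadMany {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // fs.defaultFS would normally point at the NameNode,
            // e.g. hdfs://namenode:8020 (placeholder address).
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/demo/events.log"); // made-up path

            // Write once: the file is split into blocks and
            // distributed across the DataNodes.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read many: each subsequent reader streams the blocks
            // back from the DataNodes that hold them.
            try (FSDataInputStream in = fs.open(file)) {
                byte[] buf = new byte[128];
                int n = in.read(buf);
                System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
            }
        }
    }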

The NameNode is the "smart" node in the cluster: it knows the location of every DataNode within the machine cluster and what data each one holds. It also manages all access to the data, including creates, reads, writes, deletes, and replication.

HDFS also monitors data integrity through a number of capabilities. DataNodes constantly communicate with the NameNode to complete tasks, and through this constant communication the NameNode stays aware of each DataNode's status.
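
You can watch this bookkeeping in action by asking the NameNode where each block of a file lives, via the FileSystem.getFileBlockLocations call, as in the sketch below; the file path is again a made-up placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockLocations {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FileStatus status = fs.getFileStatus(new Path("/demo/events.log")); // made-up path

            // The NameNode answers this query from its metadata: it knows
            // which DataNodes hold a replica of each block of the file.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + String.join(",", block.getHosts()));
            }
        }
    }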

How does MapReduce work?

MapReduce is a programming model developed and maintained by the Apache Hadoop project. It breaks the data down into smaller, manageable pieces and processes them in parallel across the distributed cluster.

Once the data is in a form acceptable to the map phase, each key-value pair is processed by a mapping function. The programmer uses an OutputCollector to track and collect the output data, while another interface, the Reporter, helps tasks report on their progress. Hadoop MapReduce is the heart of Hadoop: it processes data in a resilient, fault-tolerant manner.
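
Here is a hedged sketch of such a mapping function, written against the classic org.apache.hadoop.mapred API where OutputCollector and Reporter live. The class name TokenMapper and the counter names are purely illustrative: the mapper emits a (word, 1) pair for every token it sees.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class TokenMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
            // Each input key-value pair is handed to this mapping function;
            // the OutputCollector gathers the intermediate (word, 1) pairs.
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    output.collect(word, ONE);
                    // The Reporter lets a task update counters and report
                    // progress so the framework knows it is still alive.
                    reporter.incrCounter("demo", "tokens", 1); // illustrative counter names
                }
            }
        }
    }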

Hadoop was ranked as a top technology on the popular professional network LinkedIn, which gave it high priority in its survey of the previous year. Learning Hadoop can open the door to a lucrative career in this competitive world.

 
