Hadoop

1. Introduction

Hadoop is a scalable, open-source, distributed, fault-tolerant framework for data-intensive computing, capable of handling thousands of nodes and petabytes of data. It comprises three main subprojects:

  • Hadoop Common: common utilities package

  • HDFS: Hadoop Distributed File System

  • MapReduce: A software framework for distributed processing

When people talk about Hadoop, they often mean the Hadoop Ecosystem, which includes the components of the Apache Hadoop software library itself as well as related tools and projects provided by the Apache Software Foundation.

2. Nodes

A Hadoop cluster is organized into master and slave nodes, and either node type may take on several roles. For example, the master node contains:

  • Job tracker node (MapReduce layer)

  • Task tracker node (MapReduce layer)

  • Name node (HDFS layer)

  • Data node (HDFS layer)

A slave node, meanwhile, may contain (see the example slaves file after this list):

  • Task tracker node (MapReduce layer)

  • Data node (HDFS layer)
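
To make the master/slave split concrete, here is a sketch of how a classic Hadoop 1.x deployment declares its slave hosts: the conf/slaves file lists one hostname per line, and the start-up scripts then launch a data node and task tracker on each of them. The hostnames below are placeholders.

    # $HADOOP_HOME/conf/slaves
    slave1.example.com
    slave2.example.com
    slave3.example.com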

3. Installation

Here is a very crude example of how you can install a Hadoop distribution. It is only a sketch: the version number and install paths are placeholders, so adjust them to your environment.
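
    # download and unpack a Hadoop release (version is a placeholder)
    wget https://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
    tar -xzf hadoop-1.2.1.tar.gz -C /opt

    # make the hadoop command available (paths are assumptions)
    export HADOOP_HOME=/opt/hadoop-1.2.1
    export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
    export PATH=$PATH:$HADOOP_HOME/bin

    # verify the installation
    hadoop version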

4. HDFS

There are two shells to interact with the file system: one for the local file system and one for the distributed file system. The following line lists local files, including any HDFS block files that happen to be stored at that particular location:
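
    # an ordinary local shell listing
    ls -l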

Then we can use the following line to list files stored in a distributed fashion:
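
    # list the root of the distributed file system
    hadoop fs -ls /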

One would typically get a file onto the local system in some way, by downloading it for example. After that, one would put the file onto HDFS (the URL and paths here are placeholders):
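
    # download a sample file, then copy it into HDFS
    wget http://example.com/sample.txt
    hadoop fs -mkdir /user/hadoop/data
    hadoop fs -put sample.txt /user/hadoop/data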

Notice that we specify a folder instead of a filename when we put the file onto HDFS. Now that it's there, we can inspect its contents using -cat or -tail:
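
    # print the whole file, or only its last kilobyte
    hadoop fs -cat /user/hadoop/data/sample.txt
    hadoop fs -tail /user/hadoop/data/sample.txt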

5. MapReduce

Before the rise of abstractions such as Hive, Pig, and Impala, one would typically write a MapReduce JAR program that contained the map and reduce code and the configuration to run a Hadoop job.

The distribution ships with some example jobs, which you can run as such (the JAR name and location are assumptions that vary per version):
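
    # run the classic wordcount example against the data folder from above
    hadoop jar $HADOOP_HOME/hadoop-examples-1.2.1.jar wordcount \
        /user/hadoop/data /user/hadoop/output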

After the job finishes, the output folder will contain a _SUCCESS flag to indicate that the processing was successful. If something appears to be going wrong, you can stop a running job as such:
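
    # on success, the output folder contains a _SUCCESS marker
    hadoop fs -ls /user/hadoop/output

    # to stop a misbehaving job, look up its id and kill it
    hadoop job -list
    hadoop job -kill <job_id>   # <job_id> is a placeholder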

6. Hadoop Ecosystem

Some components that comprise the ecosystem are:

  • HBase: non-relational, distributed database

  • Oozie: workflow scheduler

  • Sqoop: bulk data transfer between Hadoop and relational databases

  • Gobblin: data ingestion framework

  • Hive: SQL-like data warehouse infrastructure

  • Impala: distributed SQL query engine

  • Pig: high-level dataflow language

7. Hive, Impala, Pig

Hive

  • Data warehouse infrastructure on Hadoop

  • Generates queries at compile time

  • Suffers from a cold-start problem

  • More universal, pluggable language

Impala

  • Runtime code generation

  • Always ready, no cold start

  • Brute-force processing for fast analytic results

Pig

  • High-level dataflow language that compiles to MapReduce jobs
