Why is Big Data a Big Deal?
Installing Hadoop in a Local Environment
The MapReduce "Hello World"
Run a MapReduce Job
Juicing your MapReduce - Combiners, Shuffle and Sort, and the Streaming API
HDFS and YARN
MapReduce Customizations For Finer Grained Control
The Inverted Index, Custom Data Types for Keys, Bigram Counts and Unit Tests!
Input and Output Formats and Customized Partitioning
Recommendation Systems using Collaborative Filtering
Hadoop as a Database
Setting up a Hadoop Cluster
You, this course and Us
- The Big Data Paradigm
- Serial vs Distributed Computing
- What is Hadoop?
- HDFS or the Hadoop Distributed File System
- MapReduce Introduced
- YARN or Yet Another Resource Negotiator
- Big Data PDF
- Hadoop Install Modes
- Hadoop Standalone Mode Install
- Hadoop Pseudo-Distributed Mode Install
- Install Guides ZIP File
- The basic philosophy underlying MapReduce
- MapReduce - Visualized And Explained
- MapReduce - Digging a little deeper at every step
- "Hello World" in MapReduce
- The Mapper
- The Reducer
- The Job
- MR Intro & Source Code Downloads
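The "Hello World" of MapReduce is word count. As a sketch of the three phases covered here (plain Python standing in for the Hadoop API; function names like `map_phase` are illustrative, not Hadoop's):

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word in every input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_sort(pairs):
    # Shuffle & Sort: group all values by key, keys in sorted order
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reduce_phase(groups):
    # Reducer: sum the counts for each word
    return {key: sum(values) for key, values in groups}

counts = reduce_phase(shuffle_sort(map_phase(["the quick fox", "the fox"])))
print(counts)  # {'fox': 2, 'quick': 1, 'the': 2}
```

The course's real Mapper, Reducer and Job are written against the Hadoop Java API; the pipeline shape is the same.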
- Get comfortable with HDFS
- Run your first MapReduce Job
- Source Code Download
- Parallelize the reduce phase - use the Combiner
- Not all Reducers are Combiners
- How many mappers and reducers does your MapReduce have?
- Parallelizing reduce using Shuffle And Sort
- MapReduce is not limited to the Java language - Introducing the Streaming API
- Python for MapReduce
- Downloads
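A combiner pre-aggregates each mapper's local output so fewer pairs cross the network during the shuffle. A plain-Python sketch of the idea (illustrative names, not the Hadoop API):

```python
from collections import Counter

def mapper_output(split):
    # One mapper's raw output: a (word, 1) pair per word
    return [(w, 1) for line in split for w in line.split()]

def sum_reduce(pairs):
    # Word count's reducer sums values per key; because summing is
    # associative and commutative, the same function works as a combiner.
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return totals

# Two input splits, processed by two (simulated) mappers
splits = [["the cat sat", "the mat"], ["the cat"]]

# Without a combiner: every raw pair is shuffled to the reducers
raw = [p for s in splits for p in mapper_output(s)]

# With a combiner: each mapper pre-aggregates its own output first
combined = [p for s in splits for p in sum_reduce(mapper_output(s)).items()]

assert sum_reduce(raw) == sum_reduce(combined)  # same final counts
print(len(raw), "pairs shuffled without a combiner,", len(combined), "with one")
```

This also hints at why not all reducers are combiners: summing can be applied piecewise, but a reducer that computes an average, for example, cannot simply be reused as a combiner.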
- HDFS - Protecting against data loss using replication
- HDFS - Name nodes and why they're critical
- HDFS - Checkpointing to backup name node information
- YARN - Basic components
- YARN - Submitting a job to YARN
- YARN - Plug in scheduling policies
- YARN - Configure the scheduler
- Downloads
- Setting up your MapReduce to accept command line arguments
- The Tool, ToolRunner and GenericOptionsParser
- Configuring properties of the Job object
- Customizing the Partitioner, Sort Comparator, and Group Comparator
- Downloads
- The heart of search engines - The Inverted Index
- Generating the inverted index using MapReduce
- Custom data types for keys - The Writable Interface
- Represent a Bigram using a WritableComparable
- MapReduce to count the Bigrams in input text
- Setting up your Hadoop project
- Test your MapReduce job using MRUnit
- Downloads
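An inverted index maps each word to the documents containing it, and it parallelizes naturally: mappers emit (word, doc_id) pairs, reducers collect the document list per word. A plain-Python sketch (illustrative names; the course builds this with the Hadoop Java API):

```python
from collections import defaultdict

def map_phase(docs):
    # Mapper: for each (doc_id, text) record, emit (word, doc_id)
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            yield (word, doc_id)

def reduce_phase(pairs):
    # Reducer: collect the sorted list of documents containing each word
    index = defaultdict(set)
    for word, doc_id in pairs:
        index[word].add(doc_id)
    return {word: sorted(ids) for word, ids in index.items()}

index = reduce_phase(map_phase({"d1": "big data", "d2": "big deal"}))
print(index["big"])  # ['d1', 'd2']
```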
- Introducing the File Input Format
- Text and Sequence File Formats
- Data partitioning using a custom partitioner
- Make the custom partitioner real in code
- Total Order Partitioning
- Input Sampling, Distribution, Partitioning and configuring these
- Secondary Sort
- Downloads
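The idea behind total order partitioning: sample the input to pick split points, route keys to partitions by range, and let each reducer sort locally; concatenating the partition outputs then yields a globally sorted result. A plain-Python sketch of that mechanism (this is the concept, not Hadoop's actual sampler/partitioner classes):

```python
import bisect
import random

random.seed(42)  # deterministic demo data
keys = [random.randrange(1000) for _ in range(200)]

# Input sampling: estimate the key distribution from a small sample
sample = sorted(random.sample(keys, 20))
splits = [sample[7], sample[14]]  # 2 split points -> 3 range partitions

# Range partitioner: each key goes to the partition covering its range
partitions = [[], [], []]
for k in keys:
    partitions[bisect.bisect_left(splits, k)].append(k)

# Each "reducer" sorts only its own partition locally...
for p in partitions:
    p.sort()

# ...yet the concatenation of partition outputs is globally sorted
assert partitions[0] + partitions[1] + partitions[2] == sorted(keys)
```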
- Introduction to Collaborative Filtering
- Friend recommendations using chained MR jobs
- Get common friends for every pair of users - the first MapReduce
- Top 10 friend recommendations for every user - the second MapReduce
- Downloads
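The first job's core trick: if user `u` lists friends `a` and `b`, then `u` is a common friend of the pair `(a, b)`. A plain-Python sketch (illustrative; the course chains real Hadoop jobs, and the second job ranks pairs that are not already friends):

```python
from collections import defaultdict
from itertools import combinations

def map_phase(friend_lists):
    # Mapper: user u is a common friend of every pair (a, b) in u's list
    for user, friends in friend_lists.items():
        for a, b in combinations(sorted(friends), 2):
            yield ((a, b), user)

def reduce_phase(pairs):
    # Reducer: gather all common friends of each pair of users
    common = defaultdict(list)
    for pair, user in pairs:
        common[pair].append(user)
    return dict(common)

graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
common = reduce_phase(map_phase(graph))
print(common[("A", "B")])  # ['C'] -- C is a common friend of A and B
```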
- Structured data in Hadoop
- Running an SQL Select with MapReduce
- Running an SQL Group By with MapReduce
- A MapReduce Join - The Map Side
- A MapReduce Join - The Reduce Side
- A MapReduce Join - Sorting and Partitioning
- A MapReduce Join - Putting it all together
- Downloads
- What is K-Means Clustering?
- A MapReduce job for K-Means Clustering
- K-Means Clustering - Measuring the distance between points
- K-Means Clustering - Custom Writables for Inputs/Output
- K-Means Clustering - The Mapper and Reducer
- K-Means Clustering - Configuring the Job
- K-Means Clustering - The Iterative MapReduce Job
- Downloads
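One K-Means iteration fits the MapReduce shape exactly: the mapper assigns each point to its nearest centroid, the reducer averages each cluster's points into a new centroid, and the job is re-run until the centroids stop moving. A plain-Python sketch of a single iteration (assumes every cluster receives at least one point; the course's version uses custom Writables and the Hadoop Java API):

```python
from collections import defaultdict

def nearest(point, centroids):
    # Squared Euclidean distance suffices for picking the nearest centroid
    return min(range(len(centroids)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(point, centroids[i])))

def kmeans_iteration(points, centroids):
    # Map: assign each point to its nearest centroid (emit (centroid_id, point))
    clusters = defaultdict(list)
    for point in points:
        clusters[nearest(point, centroids)].append(point)
    # Reduce: new centroid = coordinate-wise mean of the assigned points
    return [tuple(sum(xs) / len(xs) for xs in zip(*pts))
            for _, pts in sorted(clusters.items())]

points = [(0.0, 0.0), (0.0, 2.0), (10.0, 0.0), (10.0, 2.0)]
print(kmeans_iteration(points, [(0.0, 0.0), (10.0, 0.0)]))  # [(0.0, 1.0), (10.0, 1.0)]
```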
- Manually configuring a Hadoop cluster (Linux VMs)
- Getting started with Amazon Web Services
- Start a Hadoop Cluster with Cloudera Manager on AWS
- Downloads
- Set up a Virtual Linux Instance (for Windows users)
- [For Linux/Mac OS Shell Newbies] Path and other environment variables
- Downloads
What will I learn?
- Develop advanced MapReduce applications to process Big Data.
- Master the art of "thinking parallel" - how to break up a task into Map/Reduce transformations.
- Set up your own mini-Hadoop cluster, whether it's a single node, a physical cluster or in the cloud.
- Use Hadoop + MapReduce to solve a wide variety of problems: from NLP to Inverted Indices to Recommendations.
- Understand HDFS, MapReduce and YARN and how they interact with each other.
- Understand the basics of performance tuning and managing your own cluster.
About the course
This course is taught by a 4-person team: 2 Stanford-educated ex-Googlers and 2 ex-Flipkart Lead Analysts, with decades of practical experience working with Java and with billions of rows of data.
This course is a zoom-in, zoom-out, hands-on workout involving Hadoop, MapReduce and the art of thinking parallel.
- Zoom-in, Zoom-Out: This course is both broad and deep. It covers the individual components of Hadoop in great detail, and also gives you a higher level picture of how they interact with each other.
- Hands-on workout involving Hadoop, MapReduce: This course will get you hands-on with Hadoop very early on. You'll learn how to set up your own cluster using both VMs and the Cloud. All the major features of MapReduce are covered - including advanced topics like Total Sort and Secondary Sort.
- The art of thinking parallel: MapReduce completely changed the way people thought about processing Big Data. Breaking down any problem into parallelizable units is an art. The examples in this course will train you to "think parallel".
Use MapReduce to:
- Recommend friends in a Social Networking site: Generate Top 10 friend recommendations using a Collaborative filtering algorithm.
- Build an Inverted Index for Search Engines: Use MapReduce to parallelize the humongous task of building an inverted index for a search engine.
- Generate Bigrams from text: Generate bigrams and compute their frequency distribution in a corpus of text.
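Bigram generation is a one-line mapper away from word count: emit consecutive word pairs instead of single words, then count them. A plain-Python sketch of that logic:

```python
from collections import Counter

def bigrams(text):
    # Mapper logic: emit each consecutive pair of words in a line
    words = text.lower().split()
    return list(zip(words, words[1:]))

# Reducer logic: tally the frequency of each bigram in the corpus
counts = Counter(bigrams("big data is a big deal"))
print(counts[("big", "data")])  # 1
```

In the course itself the bigram becomes a custom `WritableComparable` key so Hadoop can sort and group it.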
Build your Hadoop cluster:
- Install Hadoop in Standalone, Pseudo-Distributed and Fully Distributed modes.
- Set up a Hadoop cluster using Linux VMs.
- Set up a cloud Hadoop cluster on AWS with Cloudera Manager.
- Understand HDFS, MapReduce and YARN and their interaction.
Customize your MapReduce Jobs:
- Chain multiple MR jobs together.
- Write your own Customized Partitioner.
- Total Sort: globally sort a large amount of data by sampling input files.
- Secondary sorting.
- Unit tests with MRUnit.
- Integrate with Python using the Hadoop Streaming API.
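Secondary sort, the trickiest item above, in a nutshell: sort on a composite key but group on the natural key, so each reducer call receives its values already ordered. A plain-Python sketch of the effect (the real thing uses a custom Sort Comparator and Group Comparator):

```python
from itertools import groupby

# Records: (stock, day, price). Goal: each reducer sees one stock's prices
# already ordered by day, with no sorting or buffering inside the reducer.
records = [("IBM", 3, 11.0), ("AAPL", 1, 5.0), ("IBM", 1, 10.0), ("AAPL", 2, 6.0)]

# Sort Comparator: order by the composite key (stock, day)
records.sort(key=lambda r: (r[0], r[1]))

# Group Comparator: group reducer input on the natural key (stock) only
by_stock = {stock: [(day, price) for _, day, price in group]
            for stock, group in groupby(records, key=lambda r: r[0])}

print(by_stock["AAPL"])  # [(1, 5.0), (2, 6.0)] -- arrives pre-sorted by day
```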
... and of course all the basics:
- MapReduce: Mapper, Reducer, Sort/Merge, Partitioning, Shuffle and Sort.
- HDFS & YARN: Namenode, Datanode, Resource manager, Node manager, the anatomy of a MapReduce application, YARN Scheduling, Configuring HDFS and YARN to performance tune your cluster.
Who should take the course?
- Analysts who want to leverage the power of HDFS where traditional databases don't cut it anymore.
- Engineers who want to develop complex distributed computing applications to process lots of data.
- Data Scientists who want to add MapReduce to their bag of tricks for processing data.
Prerequisites & Requirements
- You'll need an IDE where you can write Java code or open the source code that's shared. IntelliJ and Eclipse are both great options.
- You'll need some background in object-oriented programming, preferably in Java. All the source code is in Java, and we dive right in without covering basics like objects and classes.
- A bit of exposure to Linux/Unix shells would be helpful, but it won't be a blocker.