Big Data, Hadoop, and Spark Basics
Organizations need skilled, forward-thinking Big Data practitioners who can apply their business and technical skills to unstructured data, such as tweets, posts, pictures, audio files, videos, sensor data, and satellite imagery, to identify the behaviors and preferences of prospects, clients, competitors, and others.
This course introduces you to Big Data concepts and practices. You will understand the characteristics, features, benefits, and limitations of Big Data and explore some of the Big Data processing tools. You'll explore how Hadoop, Hive, and Spark can help organizations overcome Big Data challenges and reap the rewards it offers.
Hadoop, an open-source framework, enables distributed processing of large data sets across clusters of computers using simple programming models. Each computer, or node, offers local computation and storage, allowing datasets to be processed faster and more efficiently. Hive, a data warehouse software, provides an SQL-like interface to efficiently query and manipulate large data sets in various databases and file systems that integrate with Hadoop.
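Production Hadoop MapReduce jobs are typically written in Java and distributed across a cluster; purely as an illustration, the map → shuffle → reduce pattern that Hadoop parallelizes can be sketched in plain Python (function names here are hypothetical, not Hadoop APIs):

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one input split.
    return [(word, 1) for word in document.lower().split()]

def shuffle_phase(mapped_pairs):
    # Shuffle: group values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values for each key.
    return {word: sum(values) for word, values in groups.items()}

documents = ["big data tools", "big data processing"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle_phase(mapped))
print(counts["big"])  # → 2
```

In a real cluster, each map task runs on the node holding its HDFS block, so computation moves to the data rather than the other way around.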
Open-source Apache Spark is a processing engine built around speed, ease of use, and analytics that gives users new ways to store and use big data.
You will discover how to leverage Spark to deliver reliable insights. The course provides an overview of the platform, going into the different components that make up Apache Spark. In this course, you will also learn how Resilient Distributed Datasets, known as RDDs, enable parallel processing across the nodes of a Spark cluster.
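The key idea behind RDDs, that a dataset is split into partitions processed independently by the same function, can be sketched in plain Python. This is only a conceptual illustration (threads stand in for cluster nodes; none of these names are Spark APIs):

```python
from concurrent.futures import ThreadPoolExecutor

# An RDD is a dataset split into partitions; each partition can be
# processed independently, which is what enables parallelism.
data = list(range(1, 101))
num_partitions = 4
partitions = [data[i::num_partitions] for i in range(num_partitions)]

def process_partition(partition):
    # The same function runs on every partition with no shared state,
    # which is what lets Spark schedule partitions on different nodes.
    return sum(x * x for x in partition)

# Here worker threads play the role of Spark cluster nodes.
with ThreadPoolExecutor(max_workers=num_partitions) as pool:
    partial_results = list(pool.map(process_partition, partitions))

total = sum(partial_results)  # combine step, analogous to an RDD reduce
```

In Spark itself, the equivalent would be a `parallelize`, `map`, and `reduce` chain on an RDD, with partitions distributed across executor nodes rather than threads.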
You'll gain practical skills when you learn how to analyze data in Spark using PySpark and Spark SQL and how to create a streaming analytics application using Spark Streaming, and more.
What you'll learn
After completing this course, you will be able to:
- Describe Big Data, its impact, processing methods and tools, and use cases.
- Describe Hadoop architecture, ecosystem, practices, and applications, including Distributed File System (HDFS), HBase, Spark, and MapReduce.
- Describe Spark programming basics, including parallel programming, DataFrames, Datasets, and Spark SQL.
- Describe how Spark uses RDDs, creates Datasets, and uses Catalyst and Tungsten to optimize Spark SQL.
- Apply Apache Spark development and runtime environment options.
| Rating | Not enough ratings |
|---|---|
| Length | 6 weeks |
| Effort | 6 weeks, 2–3 hours per week |
| Starts | On Demand (Start anytime) |
| Cost | $99 |
| From | IBM via edX |
| Instructors | Karthik Muthuraman, Aije Egwaikhide |
| Download Videos | On all desktop and mobile devices |
| Language | English |
| Subjects | Programming |
| Tags | Computer Science |
Careers
An overview of related careers and their average salaries in the US:
Volunteer Big Data Engineer $48k
Data Scientist - Big Data $68k
Big Data and AWS Data Lake $73k
Big Data Developer (Streaming Data) $77k
Big data developer with AWS $78k
Research Scientist Big Data $94k
Big Data Developer Consultant $98k
Big Data Engineer 6 $107k
Big data and ETL specialist $121k
Big Data Specialist $149k
Principal Big Data Architect $180k
Senior Big Data Sales $181k