In this course, you use processing methods to prepare structured and unstructured big data for analysis. You learn to organize this data into a variety of processing-efficient Hadoop Distributed File System (HDFS) storage formats using Apache Hive and Apache Pig. You also learn about SAS software technologies that integrate with Hive and Pig, and how to leverage these open-source capabilities by programming with Base SAS and SAS/ACCESS Interface to Hadoop.
Learn how to
- Move data in and out of the Hadoop Distributed File System (HDFS).
- Create processing-efficient Hadoop data storage formats.
- Use Hive to design a data warehouse in Hadoop.
- Perform data analysis using Hive Query Language (HiveQL).
- Join data sources using HiveQL.
- Perform extract, load, and transform operations.
- Create and access processing-efficient Hadoop storage formats using Hive table definitions.
- Perform analysis on unstructured data using Apache Pig.
- Join massive data sets using Pig.
- Use user-defined functions (UDFs).
- Analyze big data using Pig.
- Use SAS programming to submit Hive and Pig programs that execute in Hadoop and either store results in Hadoop or return them to SAS (see the sketches after this list).
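
As a taste of the last item, the following minimal sketch submits HiveQL from a SAS session with explicit SQL pass-through through SAS/ACCESS Interface to Hadoop. The server name, port, schema, and table are hypothetical placeholders, and the connection options you need depend on your site's Hive configuration.

```sas
/* Minimal sketch: submit HiveQL from SAS with explicit pass-through.    */
/* The server, schema, and table names below are hypothetical.           */
proc sql;
   connect to hadoop (server="hivehost" port=10000 schema=sales);

   /* The HiveQL inside the parentheses runs in Hadoop; only the         */
   /* aggregated result rows return to SAS as WORK.TOP_PRODUCTS.         */
   create table work.top_products as
   select * from connection to hadoop
      ( select product_id, sum(amount) as revenue
          from orders
         group by product_id
         order by revenue desc
         limit 10 );

   disconnect from hadoop;
quit;
```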
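
Moving data into HDFS and submitting a Pig program can likewise be driven from Base SAS with PROC HADOOP. This is a sketch under assumptions: the configuration file, credentials, HDFS directories, and Pig script paths are hypothetical and vary by site.

```sas
/* Minimal sketch: copy a local file into HDFS and submit a Pig program  */
/* from Base SAS with PROC HADOOP. All file paths, directories, and      */
/* credentials below are hypothetical and depend on your site setup.     */
filename cfg     "/opt/site/hadoop-config.xml";          /* Hadoop config */
filename pigcode "/home/student/scripts/summarize.pig";  /* Pig Latin     */

proc hadoop options=cfg username="student" password="secret" verbose;
   /* Create a target directory and load the raw file into HDFS */
   hdfs mkdir="/user/student/raw";
   hdfs copyfromlocal="/tmp/orders.csv" out="/user/student/raw/orders.csv";

   /* Submit the Pig Latin program; it executes inside the Hadoop cluster */
   pig code=pigcode;
run;
```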
Who should attend
Data scientists, programmers, database administrators, application developers, and ETL developers who are looking for an in-depth technical overview of data management in the Hadoop ecosystem.
A basic understanding of and experience with UNIX and SQL are preferred. For advanced topics such as user-defined functions, prior programming experience is necessary.
This course addresses Base SAS and SAS Data Connector to Hadoop software.