
Distributed Computing with Spark SQL

Description

This course is for students who have SQL experience and want to take the next step toward distributed computing with Spark. Students will learn when to use Spark and how Spark as an engine uniquely combines data and AI technologies at scale. The four modules build on one another, and by the end of the course students will understand Spark architecture, Spark DataFrames, optimized reading and writing of data, and how to build a machine learning model.

The first module introduces Spark, including how Spark performs distributed computing and what Spark DataFrames are. Module 2 covers core Spark concepts such as storage versus compute, caching, partitions, and the Spark UI. The third module looks at engineering data pipelines, covering connecting to databases, schemas and types, file formats, and writing good data. The final module applies Spark to machine learning through a business use case, with a short introduction to what machine learning is, building and applying models, and a course conclusion.

By understanding when to use Spark, whether to scale out when the model or data is too large to process on a single machine or simply to get faster results, students will hone their SQL skills and become more adept data scientists.
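To make the syllabus concrete, here is a minimal PySpark sketch touching each module's topics in turn: a DataFrame queried with Spark SQL, caching and partition inspection, reading and writing Parquet with an explicit schema, and a simple MLlib regression model. The dataset, column names, and file path are illustrative assumptions, not course material.

    # Minimal PySpark sketch of the four module topics. Data, column
    # names, and the /tmp path are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

    # Module 1: Spark DataFrames, queried through Spark SQL.
    df = spark.createDataFrame(
        [("a", 1.0, 10.0), ("b", 2.0, 18.0), ("c", 3.0, 31.0)],
        ["id", "feature", "label"],
    )
    df.createOrReplaceTempView("observations")
    spark.sql("SELECT id, feature FROM observations WHERE label > 15").show()

    # Module 2: caching and partitions.
    df.cache()                        # keep the DataFrame in memory across actions
    print(df.rdd.getNumPartitions())  # inspect how the data is partitioned

    # Module 3: writing and reading data with an explicit schema (Parquet here).
    schema = StructType([
        StructField("id", StringType()),
        StructField("feature", DoubleType()),
        StructField("label", DoubleType()),
    ])
    df.write.mode("overwrite").parquet("/tmp/observations.parquet")
    df2 = spark.read.schema(schema).parquet("/tmp/observations.parquet")

    # Module 4: building a simple machine learning model with Spark MLlib.
    assembled = VectorAssembler(inputCols=["feature"], outputCol="features").transform(df2)
    model = LinearRegression(featuresCol="features", labelCol="label").fit(assembled)
    print(model.coefficients, model.intercept)

    spark.stop()

Running this on a local Spark installation exercises the same DataFrame, caching, I/O, and MLlib APIs the modules cover, just at toy scale.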

This resource is offered by an affiliate partner. If you pay for training, we may earn a commission to support this site.

Career Relevance by Data Role

The techniques and tools covered in Distributed Computing with Spark SQL are most similar to the requirements found in Data Engineer job advertisements.

Similarity Scores (Out of 100)