
Distributed Computing with Spark SQL

Description

This course is for students who have SQL experience and now want to take the next step: gaining familiarity with distributed computing using Spark. Students will learn when to use Spark and how Spark, as an engine, uniquely combines data and AI technologies at scale. The four modules build on one another, and by the end of the course the student will understand Spark architecture, Spark DataFrames, optimizing reading and writing of data, and how to build a machine learning model. The first module introduces Spark, including how Spark handles distributed computing and what Spark DataFrames are. Module 2 covers the core concepts of Spark, such as storage vs. compute, caching, partitions, and the Spark UI. The third module looks at engineering data pipelines, covering connecting to databases, schemas and types, file formats, and writing good data. The final module looks at applying Spark to machine learning through a business use case, with a short introduction to what machine learning is, building and applying models, and a course conclusion. By understanding when to use Spark, whether scaling out because the model or data is too large to process on a single machine, or simply needing faster results, students will hone their SQL skills and become more adept data scientists.
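To give a sense of the workflow the course describes, here is a minimal, hypothetical PySpark sketch: reading data into a distributed DataFrame, querying it with Spark SQL, and caching a reused result. The file path, table name, and column names are illustrative assumptions, not taken from the course materials.

```python
from pyspark.sql import SparkSession

# Start (or reuse) a Spark session, the entry point for DataFrames and SQL.
spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

# Read a CSV file into a distributed DataFrame, letting Spark infer the schema.
# "data/trips.csv" is a placeholder path.
df = spark.read.csv("data/trips.csv", header=True, inferSchema=True)

# Register the DataFrame as a temporary view so it can be queried with SQL.
df.createOrReplaceTempView("trips")

# Run a familiar SQL query; Spark distributes the work across partitions.
result = spark.sql("""
    SELECT pickup_zone, COUNT(*) AS trip_count
    FROM trips
    GROUP BY pickup_zone
    ORDER BY trip_count DESC
""")

# Cache the result if it will be reused, then trigger execution with an action.
result.cache()
result.show(10)
```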

This resource is provided by an affiliate partner. If you pay for training, we may earn a commission that supports this site.

Career relevance by data job role

The technologies and tools covered in Distributed Computing with Spark SQL most closely match the requirements listed in Data Engineer job postings.

Similarity score (out of 100)