
Description

Manipulating big data distributed over a cluster using functional concepts is rampant in industry, and is arguably one of the first widespread industrial uses of functional ideas. This is evidenced by the popularity of MapReduce and Hadoop, and most recently Apache Spark, a fast, in-memory distributed collections framework written in Scala. In this course, we'll see how the data parallel paradigm can be extended to the distributed case, using Spark throughout. We'll cover Spark's programming model in detail, being careful to understand how and when it differs from familiar programming models, like shared-memory parallel collections or sequential Scala collections. Through hands-on examples in Spark and Scala, we'll learn when important issues related to distribution, such as latency and network communication, should be considered and how they can be addressed effectively for improved performance.
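To give a feel for the comparison the course draws between sequential Scala collections and Spark's distributed collections, here is a minimal sketch (not taken from the course materials) that applies the same map/filter pipeline to a local List and to a Spark RDD; the object name and sample data are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordLengths {
  def main(args: Array[String]): Unit = {
    // Sequential Scala collection: map and filter run eagerly on one machine.
    val local = List("spark", "scala", "cluster")
    val localLengths = local.map(_.length).filter(_ > 4)

    // Spark RDD: the same combinators, but transformations are lazy and
    // distributed across the cluster; work only happens when an action
    // (collect, count, ...) runs, which is where latency and network
    // communication enter the picture.
    val conf = new SparkConf().setAppName("word-lengths").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val distributed = sc.parallelize(local)
    val distributedLengths = distributed.map(_.length).filter(_ > 4).collect()

    println(localLengths.mkString(", "))
    println(distributedLengths.mkString(", "))
    sc.stop()
  }
}
```

The API surface is deliberately similar, which is exactly why the course stresses understanding where the distributed model behaves differently from the sequential one.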

This resource is provided by an affiliate partner. If you pay for training, we may earn a commission that supports this site.

Career relevance by data job role

The skills and tools covered in Big Data Analysis with Scala and Spark are most similar to the requirements found in Data Engineer job advertisements.

Similarity score (out of 100)