
Big Data Analysis with Scala and Spark

Manipulating big data distributed over a cluster using functional concepts is rampant in industry, and is arguably one of the first widespread industrial uses of functional ideas. This is evidenced by the popularity of MapReduce and Hadoop, and most recently Apache Spark, a fast, in-memory distributed collections framework written in Scala. In this course, we'll see how the data parallel paradigm can be extended to the distributed case, using Spark throughout. We'll cover Spark's programming model in detail, being careful to understand how and when it differs from familiar programming models, like shared-memory parallel collections or sequential Scala collections. Through hands-on examples in Spark and Scala, we'll learn when important issues related to distribution like latency and network communication should be considered and how they can be addressed effectively for improved performance.
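To make the connection to familiar Scala collections concrete, here is a minimal sketch of the kind of Spark program the course works with. It is not taken from the course materials: the object name, the sample data, and the use of a local `SparkSession` are illustrative assumptions. The point is that the same `map`/`flatMap`/`reduce` style applies, but transformations are lazy and some of them (such as `reduceByKey`) imply network communication when an action finally runs.

```scala
import org.apache.spark.sql.SparkSession

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    // Local SparkSession for experimentation; on a real cluster the master
    // would be supplied by spark-submit rather than hard-coded here.
    val spark = SparkSession.builder()
      .appName("word-count-sketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // A distributed collection (RDD) built from a small local sequence.
    val lines = sc.parallelize(Seq("spark is fast", "scala is fun", "spark uses scala"))

    val counts = lines
      .flatMap(_.split("\\s+"))   // transformation: nothing executes yet
      .map(word => (word, 1))
      .reduceByKey(_ + _)         // implies a shuffle across the network when executed

    // The action below triggers the actual distributed computation.
    counts.collect().foreach(println)

    spark.stop()
  }
}
```

In a sequential Scala program the equivalent `groupBy`/`mapValues` pipeline would run eagerly in local memory; here the lazy, distributed execution model is exactly the difference the course examines.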

Created by École Polytechnique Fédérale de Lausanne



What you’ll learn


In this course you can build the competencies organizations demanded in 2021. The most relevant technique it covers, and one frequently requested by employers, is Data Analysis; the most in-demand tool is SQL. You will also learn about Communication Skills, a trait frequently listed in job postings.

Who will benefit?


Comparing the description of this course with nearly 10,000 data-related job postings, we find that those in, or pursuing, Data Scientist roles have the most to gain.