Summarised from the GitBook: Mastering Apache Spark 2 (Spark 2.2+)
Author: Jacek Laskowski

Why Spark

For many, Spark is Hadoop++, i.e. MapReduce done in a better way.

And it should not come as a surprise: without Hadoop MapReduce (both its advances and its deficiencies), Spark would not have been born at all.

As a Scala developer, you may find Spark’s RDD API very similar (if not identical) to Scala’s Collections API.
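A minimal sketch of that similarity (the collection lines are plain Scala; the RDD lines assume a `SparkContext` named `sc` is in scope, as in the Spark shell):

```scala
// Plain Scala collections
val xs = List(1, 2, 3, 4, 5)
val evensDoubled = xs.filter(_ % 2 == 0).map(_ * 2)   // List(4, 8)

// The same pipeline with Spark's RDD API — identical combinators
val rdd = sc.parallelize(Seq(1, 2, 3, 4, 5))
val evensDoubledRdd = rdd.filter(_ % 2 == 0).map(_ * 2)
evensDoubledRdd.collect()                              // Array(4, 8)
```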

Apache Spark uses a directed acyclic graph (DAG) of computation stages (aka execution DAG). It postpones any processing until an action actually requires a result. Spark's lazy evaluation gives the engine plenty of opportunities for low-level optimizations (so users have to know less to do more).
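A sketch of lazy evaluation with the RDD API (assumes a `SparkContext` named `sc`; the input path is hypothetical):

```scala
// Transformations only describe the computation; nothing runs yet.
val lines  = sc.textFile("hdfs://path/to/input")   // hypothetical path
val words  = lines.flatMap(_.split("\\s+"))
val pairs  = words.map(word => (word, 1))
val counts = pairs.reduceByKey(_ + _)              // still lazy

// Only an action triggers execution of the whole DAG of stages.
counts.collect()
```

Until `collect()` (or another action such as `count()` or `saveAsTextFile()`) is called, Spark has only built the lineage graph, which is exactly what lets it plan and optimize the execution.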

Spark can cache intermediate data in memory for faster model building and training. Once the data is loaded into memory (as an initial step), reusing it multiple times incurs no further loading cost.
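A sketch of in-memory reuse for an iterative job (assumes a `SparkContext` named `sc`; the path and the `parseRecord` and `lossFor` functions are hypothetical placeholders for a real parsing step and loss computation):

```scala
val training = sc.textFile("hdfs://path/to/training-data")  // hypothetical path
  .map(parseRecord)   // parseRecord: String => Vector, assumed to exist
  .cache()            // mark the RDD for in-memory storage

// The first action materialises the RDD and fills the cache...
val n = training.count()

// ...subsequent passes (e.g. iterations of a training loop)
// read from memory instead of re-reading and re-parsing the input.
for (iteration <- 1 to 10) {
  val loss = training.map(lossFor).sum()   // lossFor assumed to exist
}
```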

Spark gives Extract, Transform and Load (ETL) a new look with its support for many programming languages - Scala, Java, Python, and (less commonly) R. You can use them all or pick the one best suited to a problem.
Scala in Spark, especially, makes for much less boilerplate code (compared to other languages and approaches like MapReduce in Java).
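As an illustration of the boilerplate difference, the classic word count fits in a handful of Scala lines with the RDD API, where the equivalent Hadoop MapReduce job needs a `Mapper` class, a `Reducer` class, and a driver (the paths below are hypothetical, and `sc` is an in-scope `SparkContext`):

```scala
sc.textFile("hdfs://path/to/input")          // hypothetical input path
  .flatMap(_.split("\\s+"))                  // "map" phase: split into words
  .map((_, 1))                               // emit (word, 1) pairs
  .reduceByKey(_ + _)                        // "reduce" phase: sum the counts
  .saveAsTextFile("hdfs://path/to/output")   // hypothetical output path
```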

Further reading or watching

Keynote: Spark 2.0 - Matei Zaharia, Apache Spark Creator and CTO of Databricks
Apache Spark 2.0 Preview: Machine Learning Model Persistence