Sunday, November 22, 2015

Apache Spark


"Apache Spark" is an open-source data analytics cluster computing framework originally developed in the AMPLab at UC Berkeley in 2009 and became an Apache open-source project in 2010. Spark fits into the Hadoop open-source community, building on top of the Hadoop Distributed File System (HDFS).However, Spark is not tied to the two-stage MapReduce paradigm, and promises performance up to 100 times faster than Hadoop MapReduce for certain applications. Spark provides primitives for in-memory cluster computing that allows user programs to load data into a cluster's memory and query it repeatedly, making it well suited to machine learning algorithms.Spark became an Apache Top-Level Project in February 2014 and was previously an Apache Incubator project since June 2013. It has received code contributions from large companies that use Spark, including Yahoo! and Intel as well as small companies and startups. By March 2014, over 150 individual developers had contributed code to Spark, representing over 30 different companies. Prior to joining Apache Incubator, versions 0.7 and earlier were licensed under the BSD License.


History Of Apache Spark (source)
Apache Spark is a fast and general cluster computing system for Big Data. Spark is a more general system than MapReduce: you can run both batch and streaming jobs on it. It outperforms its predecessor MapReduce by processing data faster in memory, and it is also more efficient on disk. It leverages in-memory processing through its basic data unit, the RDD (Resilient Distributed Dataset), which keeps as much of the dataset as possible in memory for the complete lifecycle of a job, saving on disk I/O; data that exceeds the memory limit can spill over to disk. Spark offers development APIs for Java, Scala, Python and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools, including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing. Spark runs on Hadoop YARN and Apache Mesos, and it also has its own standalone cluster manager.

Apache Spark Ecosystem (source)

The Spark core API is the base of the Apache Spark framework; it handles job scheduling, task distribution, memory management, I/O operations and recovery from failures. The main logical data unit in Spark is the RDD (Resilient Distributed Dataset), which stores data in a distributed way so that it can be processed in parallel later. Operations on RDDs are computed lazily, so memory need not be occupied the whole time and other jobs can make use of it.
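For instance, in the Scala shell (where a SparkContext is already available as sc) nothing runs until an action is called; the file path below is only illustrative:

    val lines   = sc.textFile("hdfs:///data/input.txt") // no data is read yet
    val lengths = lines.map(_.length)                   // transformation: still lazy
    lengths.reduce(_ + _)                               // action: triggers the actual computation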

Many problems do not lend themselves to the two-step process of map and reduce. Spark can do map and reduce much faster than Hadoop can, and one of the great things about Apache Spark is that it is a single environment with a single API from which you can call machine learning algorithms, do graph processing or run SQL. Spark's distributed data storage model, the resilient distributed dataset (RDD), guarantees fault tolerance, which in turn minimizes network I/O. RDDs achieve fault tolerance through the notion of lineage: if a partition of an RDD is lost, the RDD has enough information about how it was derived from other RDDs to rebuild just that partition, so you do not need to replicate data to achieve fault tolerance. In Spark's map/reduce, mapper output is kept in the OS buffer cache and reducers pull it to their side and write it directly to their memory, unlike Hadoop, where output gets spilled to disk and read back again. Spark's in-memory cache makes it a good fit for machine learning algorithms, where you need to use the same data over and over again. Spark can run complex jobs and multi-step data pipelines using Directed Acyclic Graphs (DAGs). Spark is written in Scala and runs on the JVM (Java Virtual Machine).
-------------------------------------------------------------
Overview:
At a high level, every Spark application consists of a driver program that runs the user's main function and executes various parallel operations on a cluster. The main abstraction Spark provides is the resilient distributed dataset (RDD), a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system), or an existing Scala collection in the driver program, and transforming it. Users may also ask Spark to persist an RDD in memory, allowing it to be reused efficiently across parallel operations. Finally, RDDs automatically recover from node failures. The Spark application (the driver) builds a DAG from the RDD operations; the DAG is split into tasks that are executed by workers, as shown in the block diagram.

Spark internals (source)
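Persisting an RDD for reuse, as mentioned above, looks roughly like this (the input path and filter strings are made up; sc is an existing SparkContext, e.g. from the spark-shell):

    val events = sc.textFile("hdfs:///logs/events.txt")        // illustrative path
    val errors = events.filter(_.contains("ERROR")).persist()  // ask Spark to keep it in memory
    errors.count()                                   // first action computes and caches the RDD
    errors.filter(_.contains("timeout")).count()     // later actions reuse the cached partitions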

 A second abstraction in Spark is shared variables that can be used in parallel operations. By default, when Spark runs a function in parallel as a set of tasks on different nodes, it ships a copy of each variable used in the function to each task. Sometimes, a variable needs to be shared across tasks, or between tasks and the driver program. Spark supports two types of shared variables: broadcast variables, which can be used to cache a value in memory on all nodes, and accumulators, which are variables that are only “added” to, such as counters and sums.
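A small sketch of both kinds of shared variables (the lookup table and input values are made up; sc.accumulator is the Spark 1.x API):

    val lookup  = sc.broadcast(Map("a" -> 1, "b" -> 2))  // read-only, cached once per node
    val missing = sc.accumulator(0)                      // tasks can only add to it

    val codes = sc.parallelize(Seq("a", "b", "x")).map { key =>
      if (!lookup.value.contains(key)) missing += 1      // count keys not found in the broadcast map
      lookup.value.getOrElse(key, 0)
    }
    codes.collect()                                      // action runs the tasks
    println(missing.value)                               // the driver reads the total, here 1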



Spark core API: RDDs, transformations and actions
The RDD (Resilient Distributed Dataset) is the main logical data unit in Spark. An RDD is a distributed collection of objects: each RDD is divided into multiple partitions, and each partition can reside in memory or on the disk of a different machine in the cluster. RDDs are immutable (read-only) data structures; you cannot change the original RDD, but you can always transform it into a different RDD with the changes you want. RDDs can be created in two ways, as sketched after the list below:

1. Parallelizing an existing collection.

2. Loading an external dataset from HDFS (or any other Hadoop-supported file system).
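A minimal sketch of both, assuming an existing SparkContext named sc (the HDFS path is only an example):

    // 1. Parallelizing an existing collection
    val numbers = sc.parallelize(1 to 100)

    // 2. Loading an external dataset
    val lines = sc.textFile("hdfs:///data/sample.txt")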

Creating SparkContext

To execute any operation in Spark, you first have to create an object of the SparkContext class. A SparkContext represents the connection to an existing Spark cluster and provides the entry point for interacting with Spark; we need a SparkContext instance so that we can interact with Spark and distribute our jobs. Spark provides a rich set of operators to manipulate RDDs. An RDD mainly supports two kinds of operations: transformations and actions.
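A minimal sketch of creating one in a standalone Scala application (the application name and master URL are placeholders; in the interactive shells a SparkContext is already created for you as sc):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf().setAppName("MyApp").setMaster("local[2]")
    val sc   = new SparkContext(conf)
    // ... create and operate on RDDs here ...
    sc.stop()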

Transformations: Transformations create a new RDD from an existing RDD; examples are map, reduceByKey and filter. Transformations are executed on demand, meaning they are computed lazily. We will look at lazy evaluation in more detail in the next part.
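A small sketch of these transformations (the input words are made up); note that none of these lines starts a job by itself:

    val words     = sc.parallelize(Seq("spark", "hadoop", "spark"))
    val pairs     = words.map(word => (word, 1))       // map
    val counts    = pairs.reduceByKey(_ + _)           // reduceByKey
    val onlySpark = counts.filter(_._1 == "spark")     // filter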

Lineage Graph: RDDs maintain a graph of how one RDD is transformed into another, called the lineage graph, which lets Spark recompute any intermediate RDD in case of failure. This is how Spark achieves fault tolerance.
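You can inspect that lineage with toDebugString; a rough example, again assuming sc:

    val counts = sc.parallelize(Seq("a", "b", "a")).map((_, 1)).reduceByKey(_ + _)
    println(counts.toDebugString)  // prints the chain of parent RDDs Spark would use to recompute lost partitions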

Hadoop v/s Spark Fault Tolerance
Actions: Actions return the final results of RDD computations. An action triggers execution: using the lineage graph, Spark loads the data into the original RDD, carries out all intermediate transformations, and returns the final result to the driver program or writes it out to the file system.
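A few common actions, sketched on a small in-memory RDD (the output path is only illustrative):

    val nums = sc.parallelize(1 to 10)
    nums.count()                                  // 10
    nums.reduce(_ + _)                            // 55
    nums.take(3)                                  // Array(1, 2, 3)
    nums.saveAsTextFile("hdfs:///tmp/nums-out")   // writes the RDD out to the file system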

RDD Transformation and action (source)
Basic setup instructions.

1) Building Spark
Spark is built using Apache Maven ("http://maven.apache.org/"). To build Spark and its example programs, run: "mvn -DskipTests clean package" (you do not need to do this if you downloaded a pre-built package). More detailed documentation is available from the project site at "http://spark.apache.org/docs/latest/building-spark.html".
2) Interactive Scala Shell: The easiest way to start using Spark is through the Scala shell: "./bin/spark-shell". Try the following command, which should return 1000: scala> sc.parallelize(1 to 1000).count()
3) Interactive Python Shell: Alternatively, if you prefer Python, you can use the Python shell: "./bin/pyspark". Then run the following command, which should also return 1000: sc.parallelize(range(1000)).count()
4) Example Programs: Spark also comes with several sample programs in the `examples` directory. To run one of them, use "./bin/run-example <class> [params]". For example, "./bin/run-example SparkPi" will run the Pi example locally. You can set the MASTER environment variable when running examples to submit them to a cluster. This can be a mesos:// or spark:// URL, "yarn-cluster" or "yarn-client" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance: MASTER=spark://host:7077 ./bin/run-example SparkPi. Many of the example programs print usage help if no params are given.
5) Running Tests: Testing requires building Spark (step 1 above). Once Spark is built, tests can be run using "./dev/run-tests". Steps to run the automated tests are described at "https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-AutomatedTesting".
6) A Note About Hadoop Versions:
Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at "http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions. See also "http://spark.apache.org/docs/latest/hadoop-third-party-distributions.html" for guidance on building a Spark application that works with a particular distribution.
7) Configuration: Please refer to the Configuration guide at "http://spark.apache.org/docs/latest/configuration.html" in the online documentation for an overview on how to configure Spark.

Conclusion:
Apache Spark is a fast and general engine for large-scale data processing.

  • Run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.
  • Write applications quickly in Java, Scala, Python, R.
  • Combine SQL, streaming, and complex analytics
  • Spark runs on Hadoop, Mesos, standalone, or in the cloud.
  • It can access diverse data sources including HDFS, Cassandra, HBase, and S3.
Hadoop v/s Spark Computation Model
Spark does not store all data in memory, but when data is in memory it makes the best use of its LRU cache to process it faster. It can be up to 100x faster when computing data in memory, and it is still faster than Hadoop on disk. Spark does not have its own storage system; it relies on HDFS for that. So Hadoop MapReduce is still good for certain batch jobs that do not involve much data pipelining. A new technology never completely replaces the old one; the two are likely to coexist.
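For datasets that may not fit entirely in memory, you can ask Spark to spill the remainder to disk instead; a rough sketch (the path is illustrative, sc is an existing SparkContext):

    import org.apache.spark.storage.StorageLevel

    val big = sc.textFile("hdfs:///data/large-input")
    big.persist(StorageLevel.MEMORY_AND_DISK)  // partitions that do not fit in memory go to local disk
    big.count()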

-------------------------------------------------------------
Spark is not tied specifically to Hadoop. Although it works with YARN, it can also work well with Apache Mesos and can read data from Cassandra. So although Spark may become the real-time engine for Hadoop, it can also live independently of it, with users leveraging its related projects such as Spark SQL, Spark Streaming, and MLlib (machine learning). I think this capability means that Spark will soon become more important to Big Data developers, and MapReduce will in turn become the solution for batch processing rather than the core paradigm for Hadoop. Specifically for batch use cases, MapReduce will for now remain stronger than Spark, especially for very large datasets. (Source: http://blog.gogrid.com/2014/07/15/mapreduce-dead/)
References:
  • https://spark.apache.org/
  • https://cwiki.apache.org/confluence/display/SPARK
  • http://spark.apache.org/documentation.html
  • http://data-informed.com/performing-mapreduce-in-memory-no-hadoop-needed/
  • http://stanford.edu/~rezab/sparkclass/slides/itas_workshop.pdf 
  • http://www.jorditorres.org/spark-ecosystem/
  • http://www.informationweek.com/big-data/big-data-analytics/will-spark-google-dataflow-steal-hadoops-thunder/a/d-id/1278959?page_number=2
  • https://github.com/SatyaNarayan1/spark-workshop/commit/caac3f9b7dd771c65d83398b57acc4e99876b62a 
  • http://www.edupristine.com/blog/apache-spark-vs-hadoop
