1. Why Learn Spark
Big data technology is developing rapidly, and Spark has brought new vitality and technical innovation to the Hadoop big data ecosystem. As for why one should learn Spark, there are a thousand Hamlets in a thousand readers' eyes.
Spark's official site puts it this way: Apache Spark™ is a fast and general engine for large-scale data processing. Its distinctive appeal has been rapidly winning over practitioners across the big data field, and it is worth a closer look.
Speed
Run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.
Apache Spark has an advanced DAG execution engine that supports acyclic data flow and in-memory computing.
Ease of Use
Write applications quickly in Java, Scala, Python, and R.
Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python and R shells.
Generality
Combine SQL, streaming, and complex analytics.
Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.
Runs Everywhere
Spark runs on Hadoop, Mesos, standalone, or in the cloud. It can access diverse data sources including HDFS, Cassandra, HBase, and S3.
2. Collected Spark Learning Resources
While learning Spark I have come across many good resources on the internet, which have been a great help and guide in my own study. I had not found time to organize them until now, so this post records and collects them; I will keep updating it so that it is convenient for myself and others to consult along the way.
2.1 Spark-Related Technical Blogs
2.1.1 过往记忆 (iteblog): https://www.iteblog.com/
2.1.5 高彦杰 (Gao Yanjie): http://blog.csdn.net/gaoyanjie55
2.1.6 saisai_shao: http://jerryshao.me/
2.2 Good Spark-Related GitHub Resources
2.2.1 Source code reading: https://github.com/apache/spark
2.2.2 Internals analysis: https://github.com/JerryLead/SparkInternals