Learning Spark

Spark basic concepts:

    1, RDD (resilient distributed dataset)
    2, Task: ShuffleMapTask and ResultTask (similar to map and reduce)
    3, Job: a job can be made of multiple tasks
    4, Stage: a job can have multiple stages
    5, Partition: an RDD can be partitioned across different machines
    6, NarrowDependency: base class for dependencies where each partition of the child RDD depends on a small number of partitions of the parent RDD. Narrow dependencies allow pipelined execution.
    7, ShuffleDependency: also called wide dependency; the child RDD depends on all partitions of the parent RDD
    8, DAG: directed acyclic graph; no parent RDD ever depends on a child RDD (see the sketch below)
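
    A minimal sketch of how these concepts fit together, assuming a hypothetical word-count job (the input path is illustrative): map and flatMap create narrow dependencies and are pipelined within one stage, while reduceByKey introduces a ShuffleDependency, so the DAG is cut into two stages at that point.

        import org.apache.spark.{SparkConf, SparkContext}

        val conf = new SparkConf().setMaster("local[*]").setAppName("WordCount")
        val sc = new SparkContext(conf)

        val lines  = sc.textFile("input.txt")                 // one partition per file block
        val pairs  = lines.flatMap(_.split(" ")).map((_, 1))  // narrow dependencies: pipelined, same stage
        val counts = pairs.reduceByKey(_ + _)                 // ShuffleDependency: stage boundary here
        counts.collect()                                      // the action submits the job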

Spark Core functions:

    1, SparkContext: the entry point of a driver application; we need to initialize a SparkContext before submitting Spark jobs
        SparkContext provides:
        1) communication
        2) distributed deployment
        3) messaging
        4) storage
        5) computation
        6) cache
        7) metrics system
        8) file service
        9) web UI
        An application uses the SparkContext API to create jobs,
            uses the DAGScheduler to plan the RDDs of the DAG into stages and submit those stages,
            uses the TaskScheduler to apply for resources, submit tasks, and request scheduling from the cluster (see the sketch below).
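
        Actions ultimately funnel through SparkContext.runJob, which hands the RDD graph to the DAGScheduler. A sketch using the counts RDD from the word-count example above (count() does essentially the same thing internally):

            // Run a function over every partition and collect the per-partition results
            val partitionSizes: Array[Long] =
              sc.runJob(counts, (it: Iterator[(String, Int)]) => it.size.toLong)
            val total = partitionSizes.sum  // equivalent to counts.count()
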
    2, Storage System
        1) Spark prefers memory: if memory is not enough, it falls back to disk, and it can also use Tachyon (a distributed in-memory file system) as off-heap storage; the storage level of an RDD is chosen as sketched below
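
        A short sketch of choosing where cached blocks may live (using the counts RDD from the earlier example):

            import org.apache.spark.storage.StorageLevel

            counts.persist(StorageLevel.MEMORY_AND_DISK)  // spill partitions to disk when memory is full
            // Other levels: MEMORY_ONLY (the default for cache(); evicted partitions are recomputed)
            //               OFF_HEAP    (off-heap storage, backed by Tachyon in Spark 1.x)
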
    3, Computation Engine: the DAGScheduler and RDDs in the SparkContext, plus the tasks run by executors on the worker nodes
    4, Deployment
        1) Standalone
        2) YARN
        3) Mesos
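
        The deployment mode is selected by the master URL passed to setMaster (or to spark-submit --master); the host names and ports here are illustrative:

            conf.setMaster("spark://master:7077")  // standalone cluster manager
            conf.setMaster("yarn")                 // run on Hadoop YARN
            conf.setMaster("mesos://master:5050")  // run on Apache Mesos
            conf.setMaster("local[*]")             // local mode, one thread per core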

Tuning Spark:

    1, Data Serialization:
        1) Java serialization (object --> bytes --> object): flexible, works with any Serializable class, but slow and produces large serialized forms
        2) Kryo serialization (also object --> bytes --> object, but often up to 10x faster and more compact than Java serialization); it does not support all Serializable types, and you should register the classes you use:
            // registerKryoClasses also switches spark.serializer to KryoSerializer;
            // registering classes lets Kryo write compact IDs instead of full class names
            val conf = new SparkConf().setMaster(...).setAppName(...)
            conf.registerKryoClasses(Array(classOf[MyClass1], classOf[MyClass2]))
            val sc = new SparkContext(conf)
    2, Memory Tuning:
        1) object header: each distinct Java object carries a header of about 16 bytes
        2) String header: a Java String has about 40 bytes of overhead beyond the raw characters
        3) common collection classes such as HashMap or LinkedList use a wrapper object per entry, including pointers (typically 8 bytes each) to the next entry
        4) collections of primitive types often store them as "boxed" objects such as java.lang.Integer (see the sketch below)
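
        Spark ships org.apache.spark.util.SizeEstimator to measure this overhead directly; a small sketch (exact numbers vary by JVM):

            import org.apache.spark.util.SizeEstimator

            // A boxed Integer costs an object header plus the value, far more than 4 bytes
            println(SizeEstimator.estimate(Integer.valueOf(1)))
            println(SizeEstimator.estimate("abcd"))  // ~40 bytes of overhead plus the characters
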
    3, Memory management overview
        1) Memory usage in Spark largely falls under one of two categories: execution and storage.
            a) Execution memory refers to memory used for computation in shuffles, joins, sorts, and aggregations.
            b) Storage memory refers to memory used for caching and propagating internal data across the cluster.
        2) M/R: execution and storage share a unified region M, with a protected storage subregion R
            a) When no execution memory is used, storage can acquire all of M, and vice versa; execution may evict storage only until storage usage falls to R.
            b) R describes a subregion within M where cached blocks are never evicted.
        3) This design ensures several desirable properties:
            a) First, applications that do not use caching can use the entire space for execution, obviating unnecessary disk spills.
            b) Second, applications that do use caching can reserve a minimum storage space (R) where their data blocks are immune to being evicted.
            c) Lastly, this approach provides reasonable out-of-the-box performance for a variety of workloads without requiring user expertise of how memory is divided internally.
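
        The sizes of M and R are controlled by two configuration properties; the values below are the defaults in Spark 2.x, shown for illustration:

            val conf = new SparkConf()
              .set("spark.memory.fraction", "0.6")         // M: fraction of (heap - 300MB) shared by execution and storage
              .set("spark.memory.storageFraction", "0.5")  // R: fraction of M where cached blocks are immune to eviction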