Hive on Spark: Getting Started

Hive on Spark provides Hive with the ability to utilize Apache Spark as its execution engine.

set hive.execution.engine=spark;

Hive on Spark was added in HIVE-7292.

Version Compatibility

Hive on Spark is only tested with a specific version of Spark, so a given version of Hive is only guaranteed to work with a specific version of Spark. Other versions of Spark may work with a given version of Hive, but that is not guaranteed. Below is a list of Hive versions and their corresponding compatible Spark versions.

Hive Version    Spark Version
master          2.3.0
3.0.x           2.3.0
2.3.x           2.0.0
2.2.x           1.6.0
2.1.x           1.6.0
2.0.x           1.5.0
1.2.x           1.3.1
1.1.x           1.2.0
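
If in doubt, the Spark version a given Hive release was built and tested against can be read from <spark.version> in the Hive root pom.xml (see also the installation notes below). A minimal sketch, assuming a checkout of the matching Hive source tree:

grep -m1 "<spark.version>" pom.xml    # run from the root of the Hive source tree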

Spark Installation

Follow the instructions to install Spark:

  • YARN Mode: http://spark.apache.org/docs/latest/running-on-yarn.html
  • Standalone Mode: https://spark.apache.org/docs/latest/spark-standalone.html

Hive on Spark supports Spark on YARN mode by default.

For the installation, perform the following tasks:

  1. Install Spark (either download pre-built Spark, or build assembly from source).  
    • Install/build a compatible version.  Hive root pom.xml's <spark.version> defines what version of Spark it was built/tested with. 
    • Install/build a compatible distribution.  Each version of Spark has several distributions, corresponding with different versions of Hadoop.
    • Once Spark is installed, find and keep note of the <spark-assembly-*.jar> location.
    • Note that you must have a version of Spark which does not include the Hive jars, meaning one that was not built with the Hive profile. If you will use Parquet tables, it's recommended to also enable the "parquet-provided" profile; otherwise there could be conflicts in the Parquet dependency. To remove the Hive jars from the installation, simply use the following command under your Spark repository:

      Prior to Spark 2.0.0:

      ./make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.4,parquet-provided"

      Since Spark 2.0.0:

      ./dev/make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.7,parquet-provided"

      Since Spark 2.3.0:

      ./dev/make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.7,parquet-provided,orc-provided"
  2. Start the Spark cluster.
    • Keep note of the <Spark Master URL>. This can be found in the Spark master WebUI.
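
      For example, for standalone mode, a minimal sketch (assuming an unpacked Spark distribution in $SPARK_HOME and the default ports; in YARN mode no Spark daemons need to be started, and spark.master is set to yarn-client or yarn-cluster instead of a spark:// URL):

      # standalone mode only; adjust the host name to your cluster
      cd $SPARK_HOME
      ./sbin/start-master.sh                      # the master URL (spark://<host>:7077 by default)
                                                  # appears in the master log and on the WebUI (port 8080)
      ./sbin/start-slave.sh spark://<host>:7077   # start a worker and register it with the master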

Configuring YARN

Instead of the capacity scheduler, the fair scheduler is required; it distributes an equal share of resources to the jobs in the YARN cluster. Set the following in yarn-site.xml:

yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler

Configuring Hive

  1. To add the Spark dependency to Hive:

    • Prior to Hive 2.2.0, link the spark-assembly jar to HIVE_HOME/lib.
    • Since Hive 2.2.0, Hive on Spark runs with Spark 2.0.0 and above, which doesn't have an assembly jar.
      • To run with YARN mode (either yarn-client or yarn-cluster), link the following jars to HIVE_HOME/lib (see also the sketch after this list).
        • scala-library
        • spark-core
        • spark-network-common
      • To run with LOCAL mode (for debugging only), link the following jars in addition to those above to HIVE_HOME/lib.
        • chill-java, chill, jackson-module-paranamer, jackson-module-scala, jersey-container-servlet-core
        • jersey-server, json4s-ast, kryo-shaded, minlog, scala-xml, spark-launcher
        • spark-network-shuffle, spark-unsafe, xbean-asm5-shaded
  2. Configure Hive execution engine to use Spark:

    set hive.execution.engine=spark;

    See the Spark section of Hive Configuration Properties for other properties for configuring Hive and the Remote Spark Driver.

     

  3. Configure Spark application configs for Hive.  See: http://spark.apache.org/docs/latest/configuration.html.  This can be done either by adding a file "spark-defaults.conf" with these properties to the Hive classpath, or by setting them in the Hive configuration (hive-site.xml). For instance:

    set spark.master=<Spark Master URL>;
    set spark.eventLog.enabled=true;
    set spark.eventLog.dir=<Spark event log folder (must exist)>;
    set spark.executor.memory=512m;
    set spark.serializer=org.apache.spark.serializer.KryoSerializer;

    Configuration property details

    • spark.executor.memory: Amount of memory to use per executor process.
    • spark.executor.cores: Number of cores per executor.
    • spark.yarn.executor.memoryOverhead: The amount of off-heap memory (in megabytes) to be allocated per executor when running Spark on YARN. This is memory that accounts for things like VM overheads, interned strings, and other native overheads. In addition to the executor's memory, the container in which the executor is launched needs some extra memory for system processes, and this is what this overhead is for.

    • spark.executor.instances: The number of executors assigned to each application.
    • spark.driver.memory: The amount of memory assigned to the Remote Spark Context (RSC). We recommend 4GB.
    • spark.yarn.driver.memoryOverhead: We recommend 400 (MB).
  4. Allow YARN to cache the necessary Spark dependency jars on nodes so that they do not need to be distributed each time an application runs (see the sketch after this list).

    • Prior to Hive 2.2.0, upload the spark-assembly jar to HDFS (for example: hdfs://xxxx:8020/spark-assembly.jar) and add the following in hive-site.xml:

      <property>
        <name>spark.yarn.jar</name>
        <value>hdfs://xxxx:8020/spark-assembly.jar</value>
      </property>
    • Since Hive 2.2.0, upload all jars in $SPARK_HOME/jars to an HDFS folder (for example: hdfs://xxxx:8020/spark-jars) and add the following in hive-site.xml:

      <property>
        <name>spark.yarn.jars</name>
        <value>hdfs://xxxx:8020/spark-jars/*</value>
      </property>
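
As a rough end-to-end sketch of steps 1 and 4 for Hive 2.2.0 and later on YARN (the paths and jar name patterns below are assumptions; use the jars shipped with your Spark distribution and your own HDFS location):

# Step 1: link the required Spark jars into Hive's lib directory.
ln -s $SPARK_HOME/jars/scala-library-*.jar        $HIVE_HOME/lib/
ln -s $SPARK_HOME/jars/spark-core_*.jar           $HIVE_HOME/lib/
ln -s $SPARK_HOME/jars/spark-network-common_*.jar $HIVE_HOME/lib/

# Step 4: let YARN cache the Spark jars on HDFS so they are not shipped with every application,
# then point spark.yarn.jars at hdfs://xxxx:8020/spark-jars/* in hive-site.xml as shown above.
hadoop fs -mkdir -p /spark-jars
hadoop fs -put $SPARK_HOME/jars/* /spark-jars/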

Configuring Spark

Setting executor memory size is more complicated than simply setting it to be as large as possible. There are several things that need to be taken into consideration:

  • More executor memory means it can enable mapjoin optimization for more queries.

  • More executor memory, on the other hand, becomes unwieldy from the GC perspective.

  • Some experiments show that the HDFS client doesn't handle many concurrent writers well, so it may face a race condition if there are too many executor cores.

The following settings need to be tuned for the cluster; they may also apply to the submission of Spark jobs outside of Hive on Spark:

Property                                Recommendation
spark.executor.cores                    Between 5-7, see Tuning Details section
spark.executor.memory                   yarn.nodemanager.resource.memory-mb * (spark.executor.cores / yarn.nodemanager.resource.cpu-vcores)
spark.yarn.executor.memoryOverhead      15-20% of spark.executor.memory
spark.executor.instances                Depends on spark.executor.memory + spark.yarn.executor.memoryOverhead, see Tuning Details section.

Tuning Details

When running Spark on YARN mode, we generally recommend setting spark.executor.cores to 5, 6 or 7, depending on what the typical node is divisible by. For instance, if yarn.nodemanager.resource.cpu-vcores is 19, then 6 is a better choice (all executors must have the same number of cores; if we chose 5, each node could still only run 3 executors and 4 cores would be wasted; if we chose 7, only 2 executors would be used and 5 cores would be wasted). If it's 20, then 5 is a better choice (since this way you'll get 4 executors, and no core is wasted).

For spark.executor.memory, we recommend calculating yarn.nodemanager.resource.memory-mb * (spark.executor.cores / yarn.nodemanager.resource.cpu-vcores) and then splitting that between spark.executor.memory and spark.yarn.executor.memoryOverhead. According to our experiments, we recommend setting spark.yarn.executor.memoryOverhead to around 15-20% of the total memory.

After you've decided how much memory each executor receives, you need to decide how many executors will be allocated to queries. In the GA release, Spark dynamic executor allocation will be supported; however, for this beta only static resource allocation can be used. Based on the physical memory in each node and the configuration of spark.executor.memory and spark.yarn.executor.memoryOverhead, you will need to choose the number of instances and set spark.executor.instances.

Now for a real-world example. Assume 10 nodes with 64GB of memory per node and 12 virtual cores, i.e., yarn.nodemanager.resource.cpu-vcores=12. One node will be used as the master, so the cluster will have 9 slave nodes. We'll configure spark.executor.cores to 6. Given 64GB of RAM, yarn.nodemanager.resource.memory-mb will be 50GB. We'll determine the amount of memory for each executor as follows: 50GB * (6 / 12) = 25GB. We'll assign 20% to spark.yarn.executor.memoryOverhead, or 5120 (MB), and 80% to spark.executor.memory, or 20GB.

On this 9-node cluster we'll have two executors per host, so we can configure spark.executor.instances to a value between 2 and 18. A value of 18 would utilize the entire cluster.
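
The arithmetic above, written out as a rough sketch (the values are this example's assumptions, not general recommendations):

# example cluster from above: 9 worker nodes, 12 vcores and 50GB of NodeManager memory per node
NM_MEMORY_MB=51200        # yarn.nodemanager.resource.memory-mb
NM_VCORES=12              # yarn.nodemanager.resource.cpu-vcores
EXECUTOR_CORES=6          # spark.executor.cores
WORKER_NODES=9

PER_EXECUTOR_MB=$(( NM_MEMORY_MB * EXECUTOR_CORES / NM_VCORES ))  # 25600 MB per executor container
OVERHEAD_MB=$(( PER_EXECUTOR_MB * 20 / 100 ))                     # 5120  -> spark.yarn.executor.memoryOverhead
HEAP_MB=$(( PER_EXECUTOR_MB - OVERHEAD_MB ))                      # 20480 -> spark.executor.memory (20g)
EXECUTORS_PER_NODE=$(( NM_VCORES / EXECUTOR_CORES ))              # 2
MAX_INSTANCES=$(( WORKER_NODES * EXECUTORS_PER_NODE ))            # 18    -> upper bound for spark.executor.instances
echo "memoryOverhead=${OVERHEAD_MB}m executor.memory=${HEAP_MB}m max executor.instances=${MAX_INSTANCES}"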

Common Issues

Issue: Error: Could not find or load main class org.apache.spark.deploy.SparkSubmit
Cause: Spark dependency not correctly set.
Resolution: Add the Spark dependency to Hive, see Step 1 above.

Issue: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 5.0:0 had a not serializable result: java.io.NotSerializableException: org.apache.hadoop.io.BytesWritable
Cause: Spark serializer not set to Kryo.
Resolution: Set spark.serializer to org.apache.spark.serializer.KryoSerializer, see Step 3 above.

Issue: [ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
Cause: Hive has upgraded to Jline2 but jline 0.94 exists in the Hadoop lib.
Resolution:
  1. Delete jline from the Hadoop lib directory (it's only pulled in transitively from ZooKeeper).
  2. export HADOOP_USER_CLASSPATH_FIRST=true
  3. If this error occurs during mvn test, perform a mvn clean install on the root project and itests directory.

Issue: Spark executor gets killed all the time and Spark keeps retrying the failed stage; you may find similar information in the YARN NodeManager log:

WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Container [pid=217989,containerID=container_1421717252700_0716_01_50767235] is running beyond physical memory limits. Current usage: 43.1 GB of 43 GB physical memory used; 43.9 GB of 90.3 GB virtual memory used. Killing container.

Cause: For Spark on YARN, the NodeManager kills the Spark executor if it uses more memory than the configured size of "spark.executor.memory" + "spark.yarn.executor.memoryOverhead".
Resolution: Increase "spark.yarn.executor.memoryOverhead" to make sure it covers the executor's off-heap memory usage.

Issue: Running a query results in an error like:

FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

In the Hive logs, it shows:

java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy
  at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:79)

Cause: Happens on Mac (not officially supported). This is a general Snappy issue with Mac and is not unique to Hive on Spark, but the workaround is noted here because it is needed for startup of the Spark client.
Resolution: Run this command before starting Hive or HiveServer2:

export HADOOP_OPTS="-Dorg.xerial.snappy.tempdir=/tmp -Dorg.xerial.snappy.lib.name=libsnappyjava.jnilib $HADOOP_OPTS"

Issue: Stack trace: ExitCodeException exitCode=1: .../launch_container.sh: line 27: $PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR.../usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:$PWD/__app__.jar:$PWD/*: bad substitution
Cause: The key mapreduce.application.classpath in /etc/hadoop/conf/mapred-site.xml contains a variable which is invalid in bash.
Resolution: From mapreduce.application.classpath in /etc/hadoop/conf/mapred-site.xml, remove:

:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar

Issue: Exception in thread "Driver" scala.MatchError: java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/TaskAttemptContext (of class java.lang.NoClassDefFoundError)
  at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:432)
Cause: MR is not on the YARN classpath.
Resolution: If on HDP, change

/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework

to

/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework

Issue: java.lang.OutOfMemoryError: PermGen space with spark.master=local
Cause: By default (SPARK-1879), Spark's own launch scripts increase PermGen to 128 MB, so we need to increase PermGen in the Hive launch script.
Resolution: If using JDK7, append the following to conf/hive-env.sh:

export HADOOP_OPTS="$HADOOP_OPTS -XX:MaxPermSize=128m"

If using JDK8, append the following to conf/hive-env.sh:

export HADOOP_OPTS="$HADOOP_OPTS -XX:MaxMetaspaceSize=512m"

Recommended Configuration

See HIVE-9153 for details on these settings.

mapreduce.input.fileinputformat.split.maxsize=750000000
hive.vectorized.execution.enabled=true

hive.cbo.enable=true
hive.optimize.reducededuplication.min.reducer=4
hive.optimize.reducededuplication=true
hive.orc.splits.include.file.footer=false
hive.merge.mapfiles=true
hive.merge.sparkfiles=false
hive.merge.smallfiles.avgsize=16000000
hive.merge.size.per.task=256000000
hive.merge.orcfile.stripe.level=true
hive.auto.convert.join=true
hive.auto.convert.join.noconditionaltask=true
hive.auto.convert.join.noconditionaltask.size=894435328
hive.optimize.bucketmapjoin.sortedmerge=false
hive.map.aggr.hash.percentmemory=0.5
hive.map.aggr=true
hive.optimize.sort.dynamic.partition=false
hive.stats.autogather=true
hive.stats.fetch.column.stats=true
hive.vectorized.execution.reduce.enabled=false
hive.vectorized.groupby.checkinterval=4096
hive.vectorized.groupby.flush.percent=0.1
hive.compute.query.using.stats=true
hive.limit.pushdown.memory.usage=0.4
hive.optimize.index.filter=true
hive.exec.reducers.bytes.per.reducer=67108864
hive.smbjoin.cache.rows=10000
hive.exec.orc.default.stripe.size=67108864
hive.fetch.task.conversion=more
hive.fetch.task.conversion.threshold=1073741824
hive.fetch.task.aggr=false
mapreduce.input.fileinputformat.list-status.num-threads=5
spark.kryo.referenceTracking=false
spark.kryo.classesToRegister=org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable,org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch
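
One way to apply these, sketched under the assumption that $HIVE_HOME/conf is on the Hive classpath: the hive.* and mapreduce.* properties go into hive-site.xml (or are issued per session with set), while the spark.* properties can also be placed in a spark-defaults.conf file on the Hive classpath, as described in step 3 of Configuring Hive above. For example:

# hypothetical path; adjust to your installation
cat >> $HIVE_HOME/conf/spark-defaults.conf <<'EOF'
spark.kryo.referenceTracking=false
spark.kryo.classesToRegister=org.apache.hadoop.hive.ql.io.HiveKey,org.apache.hadoop.io.BytesWritable,org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch
EOF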

See the Spark section of the configuration page for additional properties.

Design documents
