spark on kerberos-yarn

I. Environment:

-- Hadoop cluster

1. Hadoop cluster (dm46, dm47, dm48) with Kerberos security enabled; all cluster components are managed by K8S and run in Docker pods

2. Python inside YARN is 3.6, Java is 1.8

-- Client: dm45

1. Open-source Spark version: 3.3.1

2. conda has two environments: base with Python 3.6, and pyspark_env with Python 3.8

II. Goal:

From node dm45, submit Spark jobs to the Hadoop cluster in YARN mode through the Spark (3.3.1) client.

III. Steps (all performed on dm45):

1. Download the Spark client and unpack it

spark-3.3.1-bin-hadoop3.tgz

Spark home path: /home/xxxxx/kdh/spark

2. kinit yarn
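
Since the yarn principal's keytab already exists on dm45 (it is reused by spark-submit below), kinit can also be run non-interactively. A minimal sketch, assuming the principal yarn@TDH from the submit commands later in this post:

kinit -kt /home/xxxxx/soft/yarn.keytab yarn@TDH

klist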

3. Spark configuration

a. Copy hdfs-site.xml, core-site.xml and yarn-site.xml into /home/xxxxx/kdh/spark/conf, as sketched below
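
A sketch of the copy, assuming the site files live in the TDH-Client conf directory that HADOOP_CONF_DIR points to below:

cp /home/xxxxx/soft/TDH-Client/conf/hadoop/hdfs-site.xml /home/xxxxx/kdh/spark/conf/

cp /home/xxxxx/soft/TDH-Client/conf/hadoop/core-site.xml /home/xxxxx/kdh/spark/conf/

cp /home/xxxxx/soft/TDH-Client/conf/hadoop/yarn-site.xml /home/xxxxx/kdh/spark/conf/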

b. Edit spark-env.sh

cd /home/xxxxx/kdh/spark/conf

cp spark-env.sh.template spark-env.sh

vi /home/xxxxx/kdh/spark/conf/spark-env.sh

-------------------------------------------------------------------------------------------

HADOOP_CONF_DIR=/home/xxxxx/soft/TDH-Client/conf/hadoop

YARN_CONF_DIR=/home/xxxxx/soft/TDH-Client/conf/hadoop

YARN_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop-yarn

HADOOP_YARN_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop-yarn

HADOOP_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop

HADOOP_LIBEXEC_DIR=/home/xxxxx/soft/TDH-Client/hadoop/hadoop/libexec/

HADOOP_HDFS_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop-hdfs

HADOOP_COMMON_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop

HADOOP_MAPRED_HOME=/home/xxxxx/soft/TDH-Client/hadoop/hadoop-mapreduce

SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://dm47:8020/tmp/zzdb/sparklog/ -Dspark.history.fs.cleaner.enabled=true"
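
With these variables set and a valid ticket from step 2, the client should be able to reach both HDFS and the ResourceManager; a quick smoke test before going further:

hadoop fs -ls /

yarn node -list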

c. Edit spark-defaults.conf

cd /home/xxxxx/kdh/spark/conf

cp spark-defaults.conf.template spark-defaults.conf

vi /home/xxxxx/kdh/spark/conf/spark-defaults.conf

-------------------------------------------------------------------------------------------

spark.eventLog.enabled true

spark.eventLog.dir hdfs:///tmp/zzdb/sparklog

spark.eventLog.compress true

spark.yarn.historyServer.address dm45:18080

spark.yarn.jars hdfs:///tmp/zzdb/spark/jars/*.jar

d. Adjust the log level

cd /home/xxxxx/kdh/spark/conf

cp log4j2.properties.template log4j2.properties

e. Dependency jars (found inside the TDH-Client)

cp guardian-common-guardian-3.1.0.jar /home/xxxxx/kdh/spark/jars/

cp yarn-plugin-transwarp-6.2.0.jar /home/xxxxx/kdh/spark/jars/

hadoop fs -mkdir -p /tmp/zzdb/spark/jars

cd /home/xxxxx/kdh/spark/jars

hadoop fs -put * /tmp/zzdb/spark/jars/
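
spark.yarn.jars in step c points at this HDFS directory, so each submission reuses these jars instead of re-uploading the local ones; worth confirming the upload landed:

hadoop fs -ls /tmp/zzdb/spark/jars | head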

f. Miscellaneous

hadoop fs -mkdir -p /tmp/zzdb/sparklog

4. Start the history server

cd /home/xxxxx/kdh/spark/sbin

./start-history-server.sh

URL: http://dm45:18080
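
If the page does not come up, check that the process is alive and read its log (the file name pattern below is the standard one; the exact user/host parts vary):

jps | grep HistoryServer

tail -n 100 /home/xxxxx/kdh/spark/logs/spark-*-org.apache.spark.deploy.history.HistoryServer-*.out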

5. Tests

a. Pi job

-- the base env uses Python 3.6

conda activate base

cd /home/xxxxx/kdh/spark/bin

./spark-submit \

--master yarn \

--principal yarn@TDH \

--keytab /home/xxxxx/soft/yarn.keytab \

/home/xxxxx/kdh/spark/examples/src/main/python/pi.py 30
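
In client mode the driver runs locally, so on success the console should show a line like the following (the exact value varies from run to run):

Pi is roughly 3.141592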

b. spark-shell

./spark-shell \

--master yarn \

--deploy-mode client \

--driver-memory 4g \

--executor-memory 4g \

--num-executors 2 \

--executor-cores 2 \

--principal yarn@TDH \

--keytab /home/xxxxx/soft/yarn.keytab

c. Pi job, nitpicking: specified the higher Python version; the job shows SUCCEEDED on YARN, but I never actually saw the pi output. Maybe my eyesight is to blame (see the note after the command).

conda activate pyspark_env

cd /home/xxxxx/kdh/spark/bin

./spark-submit \

--master yarn \

--conf 'spark.pyspark.driver.python=/software/anaconda3/envs/pyspark_env/bin/python' \

--conf 'spark.pyspark.python=/software/anaconda3/envs/pyspark_env/bin/python' \

--principal yarn@TDH \

--keytab /home/xxxxx/soft/yarn.keytab \

/home/xxxxx/kdh/spark/examples/src/main/python/pi.py 30
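
A likely culprit: spark.pyspark.python must point at an interpreter that exists on every YARN node, and the cluster only ships Python 3.6. A common workaround (standard PySpark practice, not part of the original setup; the archive name and the "environment" alias are made up here) is to pack the conda env and ship it with --archives:

pip install conda-pack

conda pack -n pyspark_env -o pyspark_env.tar.gz

./spark-submit \

--master yarn \

--archives pyspark_env.tar.gz#environment \

--conf 'spark.pyspark.driver.python=/software/anaconda3/envs/pyspark_env/bin/python' \

--conf 'spark.pyspark.python=./environment/bin/python' \

--principal yarn@TDH \

--keytab /home/xxxxx/soft/yarn.keytab \

/home/xxxxx/kdh/spark/examples/src/main/python/pi.py 30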

d. Submit wordcount in cluster mode

conda activate base

cd /home/xxxxx/kdh/spark/bin

./spark-submit \

--master yarn \

--deploy-mode cluster \

--principal yarn@TDH \

--keytab /home/xxxxx/soft/yarn.keytab \

/home/xxxxx/workspace/pyspark_learn/02_pyspark_core/main/02_Wordcount_hdfs_yarn_cluster.py
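
In cluster mode the driver runs inside a YARN container, so print output never reaches the local console; fetch it from the aggregated logs instead, using the application id printed by spark-submit (the id below is a placeholder):

yarn logs -applicationId application_1234567890123_0001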

IV. Web UIs:

http://dm45:18080 (Spark history server)

http://dm48:19888 (MapReduce JobHistory server)

http://dm46:8088/cluster (YARN ResourceManager)

V. Behind the scenes

1. Swap the Java version in the yarn pod: the original was 1.7, replaced with 1.8 (JDK 8 is renamed onto the old JDK 7 path below, so anything that references the old path now resolves to JDK 8)

[root@dm46 ~]# docker images | grep yarn

dm46:5000/transwarp/yarn transwarp-6.2.1-final cb9ccbe898b6 3 years ago 2.22GB

transwarp/yarn transwarp-6.2.1-final cb9ccbe898b6 3 years ago 2.22GB

[root@dm46 ~]#

docker run -id dm46:5000/transwarp/yarn:transwarp-6.2.1-final bash

docker ps -a | grep yarn | grep bash

docker exec -it d0f513cd0780 bash

mv /usr/java/jdk1.7.0_71 /usr/java/jdk1.7.0_71-bak

mv /usr/java/jdk1.8.0_25 /usr/java/jdk1.7.0_71

docker tag dm46:5000/transwarp/yarn:transwarp-6.2.1-final dm46:5000/transwarp/yarn:transwarp-6.2.1-final-jdk17bak

docker commit d0f513cd0780 dm46:5000/transwarp/yarn:transwarp-6.2.1-final

docker push dm46:5000/transwarp/yarn:transwarp-6.2.1-final

2. The pods cannot resolve dm45 (it would be surprising if they could)

The hosts file that TDH pods map from the host machine: /etc/transwarp/conf/hosts
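
The fix is to add the client to that mapped hosts file so the pods can resolve it (the IP below is a placeholder for dm45's real address):

echo "192.168.x.x dm45" >> /etc/transwarp/conf/hosts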

3. YARN kept reporting: Operation category READ is not supported in state standby

It did not seem to have much impact, though. I added the line below, which did not seem to help at all:

export SPARK_MASTER_HOST=dm47

Calling a truce with it for now (see the note below).
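
For context: this message comes from HDFS/YARN high availability. A read request happened to hit the standby node, which rejects it; clients normally just retry the active node, which is why nothing visibly broke. Also, SPARK_MASTER_HOST only applies to Spark standalone mode, so on YARN it is a no-op. To see which NameNode is active (the service ids nn1/nn2 are assumptions; the real ones are listed under dfs.ha.namenodes.* in hdfs-site.xml):

hdfs haadmin -getServiceState nn1

hdfs haadmin -getServiceState nn2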

4. A blunder, caught too late: wrong version, and everything was wasted!!!

I put this last because it is the detour I took; it does not mean everything above was correct from the start.

(base) [root@dm45 bin]# hadoop version

Hadoop 2.7.2-transwarp-6.2.0

Subversion http://xxx:10080/hadoop/hadoop-2.7.2-transwarp.git -r f31230971c2a36e77e4886e0f621366826cec3a3

Compiled by jenkins on 2019-07-27T11:33Z

Compiled with protoc 2.5.0

From source with checksum 42cb923f1631e3c548d6b7e572aa6962

This command was run using /home/xxxxx/soft/TDH-Client/hadoop/hadoop/hadoop-common-2.7.2-transwarp-6.2.0.jar

The right download: https://dlcdn.apache.org/spark/spark-3.2.3/spark-3.2.3-bin-hadoop2.7.tgz
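
The mismatch: the cluster runs Hadoop 2.7.2-transwarp, while spark-3.3.1-bin-hadoop3 is built against Hadoop 3, so a Hadoop 2.7 build of Spark is needed. Swapping the client (paths reuse the layout above):

wget https://dlcdn.apache.org/spark/spark-3.2.3/spark-3.2.3-bin-hadoop2.7.tgz

tar -xzf spark-3.2.3-bin-hadoop2.7.tgz

Then repeat the configuration from section III against the new Spark home.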
