Problems Caused by Upgrading pyspark on CDH

1. Problem Description

I have recently been teaching myself pyspark and wanted to edit and run pyspark scripts locally in PyCharm. When I ran one, it complained that the pyspark module was missing, so I installed pyspark (the latest version) through PyCharm.

What's more, the package got installed onto the remote server, which turned out to be a real trap.

The trouble is that my cluster runs CDH 6.3.1, whose Spark version is 2.4.0-cdh6.3.1, while the latest pyspark on PyPI was 3.0.2.

After that, new pyspark applications stopped showing up in the Spark History Server.

Reinstalling pyspark 2.4.0 via pip did not help; applications were still not recorded by the History Server.
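One likely reason for the History Server gap: a pip-installed pyspark does not read the cluster's client configuration under /etc/spark/conf, so the event-logging settings that feed the History Server never take effect. A quick sanity check on a CDH gateway node (a sketch; the exact values and event-log directory depend on the cluster):

# The History Server only lists applications that write event logs.
# The CDH build reads these settings; a pip-installed build never sees them.
grep eventLog /etc/spark/conf/spark-defaults.conf
# Expect lines like spark.eventLog.enabled=true and a
# spark.eventLog.dir pointing into HDFS.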

[root@hp1 software]# pyspark
Python 2.7.5 (default, Apr  2 2020, 13:16:51) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
2021-04-09 17:22:21 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.0
      /_/

Using Python version 2.7.5 (default, Apr  2 2020 13:16:51)
SparkSession available as 'spark'.
>>> 
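Notice that the banner says plain "version 2.4.0", without the -cdh6.3.1 suffix: this is the pip build, not the parcel build. Before changing anything on the cluster, it is worth confirming which launcher the shell resolves and where pip put the package (standard pip/Python tooling; paths vary by environment):

which pyspark     # which launcher is first on PATH?
pip show pyspark  # the "Location:" line reveals the site-packages install
python -c "import pyspark; print(pyspark.__version__); print(pyspark.__file__)"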

2. Solution

My first thought was to remove the Spark service and then reinstall it.
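In hindsight, the least invasive fix would have been to remove the pip package that shadows the CDH launcher, before touching the cluster at all (a sketch; run it on the node where pip installed pyspark):

pip uninstall -y pyspark
hash -r          # drop bash's cached lookup for the old pyspark command
which pyspark    # should now resolve to the CDH wrapper, /usr/bin/pyspark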

2.1 Remove the Spark service

After updating the dependent configuration, I first stopped the Spark service and then deleted it from Cloudera Manager.

Even after the Spark service had been deleted in the Cloudera Manager UI, pyspark could still be launched. Deleting a service only removes it from management; the parcel binaries stay on disk:

[root@hp2 ~]# pyspark
Python 2.7.5 (default, Apr  2 2020, 13:16:51) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.0-cdh6.3.1
      /_/

Using Python version 2.7.5 (default, Apr  2 2020 13:16:51)
SparkSession available as 'spark'.
>>> exit()

2.2 Restart the whole cluster

I then restarted the whole cluster from Cloudera Manager.

2.3 Re-add the Spark service

Through the Add Service wizard in Cloudera Manager, I added the Spark service back to the cluster.

I restarted the whole cluster once more, but the problem persisted: pyspark on hp1 still resolved to the newly pip-installed Spark 2.4.0, not to Spark 2.4.0-cdh6.3.1.

2.4 Run the Spark under the CDH installation directory

For comparison, I looked at how the Spark under the CDH installation directory is wired up on hp2:

[root@hp2 ~]# which pyspark
/usr/bin/pyspark
[root@hp2 ~]# more /usr/bin/pyspark
#!/bin/bash
  # Reference: http://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in
  SOURCE="${BASH_SOURCE[0]}"
  BIN_DIR="$( dirname "$SOURCE" )"
  while [ -h "$SOURCE" ]
  do
    SOURCE="$(readlink "$SOURCE")"
    [[ $SOURCE != /* ]] && SOURCE="$BIN_DIR/$SOURCE"
    BIN_DIR="$( cd -P "$( dirname "$SOURCE"  )" && pwd )"
  done
  BIN_DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
  LIB_DIR=$BIN_DIR/../lib
export HADOOP_HOME=$LIB_DIR/hadoop

# Autodetect JAVA_HOME if not defined
. $LIB_DIR/bigtop-utils/bigtop-detect-javahome

exec $LIB_DIR/spark/bin/pyspark "$@"
[root@hp2 ~]# 
[root@hp2 ~]# find / -name pyspark
/home/pyspark
/etc/alternatives/pyspark
/var/lib/alternatives/pyspark
/usr/bin/pyspark
/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/pyspark
/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark
/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/bin/pyspark
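
The find results also explain why /usr/bin/pyspark keeps working after the service is removed: it is a link managed by the Linux alternatives system (note /etc/alternatives/pyspark and /var/lib/alternatives/pyspark above) that ultimately resolves into the parcel on disk. The chain can be inspected directly (standard RHEL tooling; output varies by node):

alternatives --display pyspark                   # current target of the managed link
ls -l /usr/bin/pyspark /etc/alternatives/pyspark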

Following hp2's layout, I ran /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/pyspark directly on hp1:

[root@hp1 lib]# /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/pyspark
/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/pyspark: line 24: /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/load-spark-env.sh: No such file or directory
/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/spark-class: line 24: /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/load-spark-env.sh: No such file or directory
Python 2.7.5 (default, Apr  2 2020, 13:16:51) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/spark-class: line 24: /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/load-spark-env.sh: No such file or directory
Exception in thread "main" java.lang.IllegalStateException: Cannot find any build directories.
        at org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:257)
        at org.apache.spark.launcher.AbstractCommandBuilder.getScalaVersion(AbstractCommandBuilder.java:245)
        at org.apache.spark.launcher.AbstractCommandBuilder.buildClassPath(AbstractCommandBuilder.java:196)
        at org.apache.spark.launcher.AbstractCommandBuilder.buildJavaCommand(AbstractCommandBuilder.java:117)
        at org.apache.spark.launcher.SparkSubmitCommandBuilder.buildSparkSubmitCommand(SparkSubmitCommandBuilder.java:261)
        at org.apache.spark.launcher.SparkSubmitCommandBuilder.buildCommand(SparkSubmitCommandBuilder.java:164)
        at org.apache.spark.launcher.Main.buildCommand(Main.java:110)
        at org.apache.spark.launcher.Main.main(Main.java:63)
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/shell.py", line 38, in <module>
    SparkContext._ensure_initialized()
  File "/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/context.py", line 303, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway(conf)
  File "/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/python/pyspark/java_gateway.py", line 94, in launch_gateway
    raise Exception("Java gateway process exited before sending its port number")
Exception: Java gateway process exited before sending its port number
>>> exit()
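
Both the bash complaints and the Java exception point at files missing from hp1's copy of the parcel. Bash process substitution gives a quick way to enumerate exactly what is missing (a sketch; assumes ssh access from hp1 to hp2):

diff <(ls /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin) \
     <(ssh root@hp2 ls /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin)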

A shell script turned out to be missing: load-spark-env.sh. I copied it from the same location on hp2 straight to hp1:

[root@hp2 bin]# scp ./load-spark-env.sh root@10.31.1.123:/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/
root@10.31.1.123's password: 
load-spark-env.sh  
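
To be safe, confirm the copy arrived intact; the checksums on both hosts should match (a sketch):

md5sum /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/load-spark-env.sh
ssh root@hp2 md5sum /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/load-spark-env.sh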

Then I ran /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/pyspark on hp1 again:

[root@hp1 bin]# /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/pyspark
Python 2.7.5 (default, Apr  2 2020, 13:16:51) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.0-cdh6.3.1
      /_/

Using Python version 2.7.5 (default, Apr  2 2020 13:16:51)
SparkSession available as 'spark'.
>>> 

As the banner now reads version 2.4.0-cdh6.3.1, pyspark is finally back to normal. The last step was to change /usr/bin/pyspark on hp1 so that it launches:

/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/spark/bin/pyspark
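
Rather than hand-editing the wrapper, the cleaner route is to re-register the link through the alternatives system that normally manages it on CDH nodes (a sketch; the priority value 10 is arbitrary, and the target is the parcel wrapper located by find above):

alternatives --install /usr/bin/pyspark pyspark \
    /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/bin/pyspark 10
alternatives --set pyspark /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/bin/pyspark

With the launcher pointing back at the 2.4.0-cdh6.3.1 build, new pyspark applications should be recorded by the History Server again.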