Integrating Kubernetes-native Spark with other applications (MySQL, PostgreSQL, Oracle, HDFS, HBase)

Installing and running Spark with native Kubernetes scheduling: https://blog.csdn.net/luanpeng825485697/article/details/83651742

Directory layout of the Dockerfiles:

.
├── driver
│   └── Dockerfile
├── driver-py
│   └── Dockerfile
├── executor
│   └── Dockerfile
├── executor-py
│   └── Dockerfile
├── init-container
│   └── Dockerfile
├── resource-staging-server
│   └── Dockerfile
├── shuffle-service
│   └── Dockerfile
└── spark-base
    ├── Dockerfile
    └── entrypoint.sh

Interactive Python Shell

From the Spark directory, run:

./bin/pyspark

Then run the following command, which should also return 1000:

sc.parallelize(range(1000)).count()

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1000:

scala> sc.parallelize(1 to 1000).count()

Rebuilding the images

Download the spark-2.2.0-k8s-0.5.0-bin-2.7.3 client; all of the image builds below are performed from this directory.

Understanding the structure: first build the spark-base image, which packages the JARs, libraries, and executables that Spark needs. For example, if we want Spark to connect to MySQL or HBase, the corresponding JARs are required, so the spark-base image has to be rebuilt. The driver-py and executor-py images (the Python versions of the driver and executor) are built on top of spark-base and add whatever is needed to run Python files, such as pip, numpy, and so on. Therefore, whenever spark-base is modified, the driver-py and executor-py images must be rebuilt as well.

Building the spark-base image

Build the spark-base image first. In the spark-2.2.0-k8s-0.5.0-bin-2.7.3 directory, run:

docker build -t spark-base -f dockerfiles/spark-base/Dockerfile .

The spark-base image uses Ubuntu 16.04 with JDK 1.8.

Packaging Python 3.6 and installing the required pip packages

Package Python 3.6 and the related Python packages.

Modify the spark-2.2.0-k8s-0.5.0-bin-2.7.3/dockerfiles/driver-py/Dockerfile file:

# Build command: docker build -t spark-driver-py:latest -f dockerfiles/driver-py/Dockerfile .
FROM spark-base

ADD examples /opt/spark/examples
ADD python /opt/spark/python

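# Install Python 3 and the common scientific Python packages on top of spark-base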
RUN apk add make automake gcc g++ subversion python3 python3-dev
RUN  ln -s /usr/bin/python3 /usr/bin/python
RUN  ln -s /usr/bin/pip3 /usr/bin/pip
RUN  pip install --upgrade pip
RUN  pip install --upgrade setuptools numpy pandas Matplotlib sklearn opencv-python
RUN  rm -r /root/.cache


# UNCOMMENT THE FOLLOWING TO START PIP INSTALLING PYTHON PACKAGES
# RUN apk add --update alpine-sdk python-dev


ENV PYTHON_VERSION 3.6.6
ENV PYSPARK_PYTHON python
ENV PYSPARK_DRIVER_PYTHON python
ENV PYTHONPATH ${SPARK_HOME}/python/:${SPARK_HOME}/python/lib/py4j-0.10.4-src.zip:${PYTHONPATH}

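# Build the driver classpath from any mounted/extra entries, collect the SPARK_JAVA_OPT_* options,
# then launch the JVM driver class that runs the PySpark application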
CMD SPARK_CLASSPATH="${SPARK_HOME}/jars/*" && \
    env | grep SPARK_JAVA_OPT_ | sed 's/[^=]*=\(.*\)/\1/g' > /tmp/java_opts.txt && \
    readarray -t SPARK_DRIVER_JAVA_OPTS < /tmp/java_opts.txt && \
    if ! [ -z ${SPARK_MOUNTED_CLASSPATH+x} ]; then SPARK_CLASSPATH="$SPARK_MOUNTED_CLASSPATH:$SPARK_CLASSPATH"; fi && \
    if ! [ -z ${SPARK_SUBMIT_EXTRA_CLASSPATH+x} ]; then SPARK_CLASSPATH="$SPARK_SUBMIT_EXTRA_CLASSPATH:$SPARK_CLASSPATH"; fi && \
    if ! [ -z ${SPARK_EXTRA_CLASSPATH+x} ]; then SPARK_CLASSPATH="$SPARK_EXTRA_CLASSPATH:$SPARK_CLASSPATH"; fi && \
    if ! [ -z ${SPARK_MOUNTED_FILES_DIR+x} ]; then cp -R "$SPARK_MOUNTED_FILES_DIR/." .; fi && \
    if ! [ -z ${SPARK_MOUNTED_FILES_FROM_SECRET_DIR+x} ]; then cp -R "$SPARK_MOUNTED_FILES_FROM_SECRET_DIR/." .; fi && \
    ${JAVA_HOME}/bin/java "${SPARK_DRIVER_JAVA_OPTS[@]}" -cp $SPARK_CLASSPATH -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $PYSPARK_PRIMARY $PYSPARK_FILES $SPARK_DRIVER_ARGS

Modify the spark-2.2.0-k8s-0.5.0-bin-2.7.3/dockerfiles/executor-py/Dockerfile file:

# docker build -t spark-executor-py:latest -f dockerfiles/executor-py/Dockerfile .

FROM spark-base

ADD examples /opt/spark/examples
ADD python /opt/spark/python

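# Install Python 3 and the common scientific Python packages on top of spark-base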
RUN apk add make automake gcc g++ subversion python3 python3-dev
RUN  ln -s /usr/bin/python3 /usr/bin/python
RUN  ln -s /usr/bin/pip3 /usr/bin/pip
RUN  pip install --upgrade pip
RUN  pip install --upgrade setuptools numpy pandas Matplotlib sklearn opencv-python
RUN  rm -r /root/.cache


ENV PYTHON_VERSION 3.6.6
ENV PYSPARK_PYTHON python
ENV PYSPARK_DRIVER_PYTHON python
ENV PYTHONPATH ${SPARK_HOME}/python/:${SPARK_HOME}/python/lib/py4j-0.10.4-src.zip:${PYTHONPATH}

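# Build the executor classpath in the same way, then launch CoarseGrainedExecutorBackend,
# which registers with the driver at SPARK_DRIVER_URL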
CMD SPARK_CLASSPATH="${SPARK_HOME}/jars/*" && \
    env | grep SPARK_JAVA_OPT_ | sed 's/[^=]*=\(.*\)/\1/g' > /tmp/java_opts.txt && \
    readarray -t SPARK_EXECUTOR_JAVA_OPTS < /tmp/java_opts.txt && \
    if ! [ -z ${SPARK_MOUNTED_CLASSPATH+x} ]; then SPARK_CLASSPATH="$SPARK_MOUNTED_CLASSPATH:$SPARK_CLASSPATH"; fi && \
    if ! [ -z ${SPARK_EXECUTOR_EXTRA_CLASSPATH+x} ]; then SPARK_CLASSPATH="$SPARK_EXECUTOR_EXTRA_CLASSPATH:$SPARK_CLASSPATH"; fi && \
    if ! [ -z ${SPARK_MOUNTED_FILES_DIR+x} ]; then cp -R "$SPARK_MOUNTED_FILES_DIR/." .; fi && \
    if ! [ -z ${SPARK_MOUNTED_FILES_FROM_SECRET_DIR+x} ]; then cp -R "$SPARK_MOUNTED_FILES_FROM_SECRET_DIR/." .; fi && \
    ${JAVA_HOME}/bin/java "${SPARK_EXECUTOR_JAVA_OPTS[@]}" -Dspark.executor.port=$SPARK_EXECUTOR_PORT -Xms$SPARK_EXECUTOR_MEMORY -Xmx$SPARK_EXECUTOR_MEMORY -cp $SPARK_CLASSPATH org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url $SPARK_DRIVER_URL --executor-id $SPARK_EXECUTOR_ID --cores $SPARK_EXECUTOR_CORES --app-id $SPARK_APPLICATION_ID --hostname $SPARK_EXECUTOR_POD_IP

Build the driver-py and executor-py images (the docker build commands are given in the comments at the top of each Dockerfile) and push them to your image registry with docker push.

Submitting Python code

In the Spark client directory, run:

bin/spark-submit \
  --deploy-mode cluster \
  --master k8s://https://192.168.1.111:6443 \
  --kubernetes-namespace spark-cluster \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.executor.instances=5 \
  --conf spark.app.name=spark-pi \
  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver-py:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor-py:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.resourceStagingServer.uri=http://192.168.11.127:31000 \
  --jars local:///opt/spark/jars/RoaringBitmap-0.5.11.jar \
./demo_xxx.py
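
For reference, a minimal PySpark script that could be submitted this way might look like the following; it is a hypothetical stand-in for the demo_xxx.py above, and the app name is arbitrary.

from pyspark.sql import SparkSession

# Hypothetical example script; not the author's demo_xxx.py.
spark = SparkSession.builder.appName("spark-pi").getOrCreate()
sc = spark.sparkContext

# Simple smoke test: count 1000 numbers across the executors.
print(sc.parallelize(range(1000)).count())

spark.stop()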

Reading MySQL data

Place the mysql-connector-java-5.1.47-bin.jar file in the spark-2.2.0-k8s-0.5.0-bin-2.7.3/jars/ directory.

Download: https://dev.mysql.com/downloads/connector/j/5.1.html

Rebuild the spark-base image, and then rebuild all of the other images:

docker build -t spark-base -f dockerfiles/spark-base/Dockerfile .

Python demo for reading MySQL data:

from pyspark.sql import SparkSession
from pyspark.sql import SQLContext

spark = SparkSession.builder.appName("mysqltest") \
    .config('spark.some.config.option0', 'some-value') \
    .getOrCreate()
sqlContext = SQLContext(spark.sparkContext)
jdbcDf = sqlContext.read.format("jdbc").options(url="jdbc:mysql://139.9.0.111:3306/note",
                                                driver="com.mysql.jdbc.Driver",
                                                dbtable="article", user="root",
                                                password="xxxxx").load()

jdbcDf.select('label').show()   # read the label column; show() prints only 20 rows by default

Reading and writing PostgreSQL

Place the postgresql-42.2.5.jar file in the spark-2.2.0-k8s-0.5.0-bin-2.7.3/jars/ directory.

Download: https://jdbc.postgresql.org/download.html

Rebuild the spark-base image, and then rebuild all of the other images:

docker build -t spark-base -f dockerfiles/spark-base/Dockerfile .

Python demo for reading PostgreSQL data:



from pyspark.sql import SparkSession
from pyspark.sql import SQLContext

spark = SparkSession.builder.appName("postgre_test") \
    .config('spark.some.config.option0', 'some-value') \
    .getOrCreate()
sqlContext = SQLContext(spark.sparkContext)

jdbcDf = sqlContext.read.format("jdbc").options(url="jdbc:postgresql://192.168.1.111:31234/postgres",
                                                driver="org.postgresql.Driver",
                                                dbtable="account", user="postgres",
                                                password="xxxx").load()

jdbcDf.select('name').show()   # read the name column; show() prints only 20 rows by default
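
Since this section also covers writing, here is a minimal sketch of writing the DataFrame back over the same JDBC connection; the target table name account_copy is an assumption, not part of the original setup.

# Hypothetical write sketch: save jdbcDf into a table named "account_copy".
# mode("append") adds rows to an existing table; mode("overwrite") recreates it.
jdbcDf.write.format("jdbc").options(url="jdbc:postgresql://192.168.1.111:31234/postgres",
                                    driver="org.postgresql.Driver",
                                    dbtable="account_copy", user="postgres",
                                    password="xxxx").mode("append").save()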

Reading and writing Oracle

Place the ojdbc8.jar file in the spark-2.2.0-k8s-0.5.0-bin-2.7.3/jars/ directory.

Download: https://www.oracle.com/technetwork/database/application-development/jdbc/downloads/index.html

Rebuild the spark-base image, and then rebuild all of the other images:

docker build -t spark-base -f dockerfiles/spark-base/Dockerfile .
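
No Python demo is given here for Oracle; a minimal read sketch in the same style as the MySQL and PostgreSQL demos, assuming a hypothetical host, service name (XE), table (EMPLOYEES), and credentials, might look like this.

from pyspark.sql import SparkSession

# Hypothetical Oracle read sketch; the connection URL, table, and credentials are placeholders.
spark = SparkSession.builder.appName("oracle_test").getOrCreate()

jdbcDf = spark.read.format("jdbc").options(url="jdbc:oracle:thin:@192.168.1.111:1521/XE",
                                           driver="oracle.jdbc.driver.OracleDriver",
                                           dbtable="EMPLOYEES", user="system",
                                           password="xxxx").load()

jdbcDf.show()   # show() prints only 20 rows by default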

Reading HDFS data

First make sure the client can reach both the HDFS namenode and the datanodes, because the client first asks the namenode where the data lives and then connects to the datanodes to read it.

We need to exec into an HDFS datanode pod.
For basic Hadoop commands, see: https://blog.csdn.net/luanpeng825485697/article/details/83830569

# Create a file containing some random characters
echo "a e d s w q s d c x a w s z x d ew d" > aa.txt
# Put the file into the HDFS file system
hadoop fs -mkdir /aa
hadoop fs -put aa.txt /aa
# Check that the file exists in HDFS
hadoop fs -ls /aa
# View the file contents
hadoop fs -cat /aa/aa.txt
# Remove the directory
hadoop fs -rm -r /aa

Python demo for reading the data from HDFS:

from pyspark import SparkConf,SparkContext
from operator import add

conf = SparkConf().setAppName("hdfs_test")
sc = SparkContext(conf=conf)

file = sc.textFile("hdfs://192.168.11.127:32072/aa/aa.txt")
rdd = file.flatMap(lambda line:line.split(" ")).map(lambda word:(word,1))
count=rdd.reduceByKey(add)
result = count.collect()
print(result)

Reading HBase data

Copy the hadoop-* and hbase-* JARs from the hbase/lib directory into the spark-2.2.0-k8s-0.5.0-bin-2.7.3/jars/ directory.

In addition, copy the following JARs from hbase/lib into spark-2.2.0-k8s-0.5.0-bin-2.7.3/jars/ as well: zookeeper-3.4.6.jar, metrics-core-2.2.0.jar (if it is missing, hbase RpcRetryingCaller: Call exception keeps retrying the HBase connection without raising an error), htrace-core-3.1.0-incubating.jar, protobuf-java-2.5.0.jar, and guava-12.0.1.jar.

Note: Spark 2.0 and later no longer ships the JAR that converts HBase data into a form Python can read, so it has to be downloaded separately.
Download: https://mvnrepository.com/artifact/org.apache.spark/spark-examples?repo=typesafe-maven-releases
After downloading, place it in the spark-2.2.0-k8s-0.5.0-bin-2.7.3/jars/ directory as well.

The most recent version I am using is spark-examples_2.11-1.6.0-typesafe-001.jar.

Rebuild spark-base, then rebuild all of the other images:

docker build -t spark-base -f dockerfiles/spark-base/Dockerfile .

Python demo for reading HBase data:

print('=====================================')
import os
# HBase nodes resolve each other by hostname, so append the HBase master's hostname to /etc/hosts first
status = os.popen('echo "10.233.64.13   hbase-master-deployment-1-ddb859944-ctbrm">> /etc/hosts')
print(status.read())
print('=====================================')


from pyspark.sql import SparkSession
from pyspark.sql import SQLContext


spark = SparkSession.builder.appName("hbase_test").getOrCreate()
sc = spark.sparkContext

zookeeper = '10.233.9.11,10.233.9.12,10.233.9.13'
table = 'product'

# Read
conf = {
    "hbase.zookeeper.quorum": zookeeper,
    "hbase.zookeeper.property.clientPort":"2181",
    "hbase.regionserver.port":"60010",
    "hbase.master":"10.233.9.21:60000",
    "zookeeper.znode.parent":"/hbase",
    "hbase.mapreduce.inputtable": table
}
keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"
hbase_rdd = sc.newAPIHadoopRDD("org.apache.hadoop.hbase.mapreduce.TableInputFormat",
                               "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
                               "org.apache.hadoop.hbase.client.Result",
                               keyConverter=keyConv, valueConverter=valueConv, conf=conf)
count = hbase_rdd.count()
hbase_rdd.cache()
output = hbase_rdd.collect()
for (k, v) in output:
    print((k, v))
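
The spark-examples JAR downloaded above also ships write-side converters; a minimal sketch of writing a row back to the same table with saveAsNewAPIHadoopDataset (the column family cf, qualifier col1, and values are made-up sample data) might look like this.

# Hypothetical write sketch using the write-side converters from the same spark-examples JAR.
write_conf = {
    "hbase.zookeeper.quorum": zookeeper,
    "hbase.zookeeper.property.clientPort": "2181",
    "hbase.mapred.outputtable": table,
    "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
    "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"
}
keyConvW = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConvW = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"

# Each element is (row key, [row key, column family, qualifier, value]).
sc.parallelize([('row1', ['row1', 'cf', 'col1', 'value1'])]) \
    .saveAsNewAPIHadoopDataset(conf=write_conf, keyConverter=keyConvW, valueConverter=valueConvW)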

