Building a Highly Available Spark + PySpark Cluster on Kubernetes with Headless Services

1. Creating the Headless Service

A headless Service allocates no cluster virtual IP; instead it exposes DNS records for the Pods it selects. There is no built-in load balancing, so clients reach the Pod IPs directly. This makes headless Services useful whenever we need to talk to the real Pod IPs inside the cluster.
The key setting in the Service below is clusterIP: None, which suppresses the clusterIP so that DNS resolution goes straight to the Pods.

cat >ecc-spark-service.yaml <<EOF
---
kind: Service
apiVersion: v1
metadata:
  name: ecc-spark-service
  namespace: ecc-spark-cluster
spec:
  clusterIP: None
  ports:
    - port: 7077
      protocol: TCP
      targetPort: 7077
      name: spark
    - port: 10000
      protocol: TCP
      targetPort: 10000
      name: thrift-server-tcp
    - port: 8080
      targetPort: 8080
      name: http
    - port: 45970
      protocol: TCP
      targetPort: 45970
      name: thrift-server-driver-tcp  
    - port: 45980
      protocol: TCP
      targetPort: 45980
      name: thrift-server-blockmanager-tcp    
    - port: 4040
      protocol: TCP
      targetPort: 4040
      name: thrift-server-tasks-tcp              
  selector:
    app: ecc-spark-master  # must match the master Pod labels defined in section 2.1

EOF

The Service's fully qualified domain name is ecc-spark-service.ecc-spark-cluster.svc.cluster.local.
Pinging an FQDN from inside a container shows the difference: a regular Service resolves to its clusterIP, while a headless Service resolves directly to the Pod IPs.
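To verify the resolution behavior, run a lookup from inside the cluster (a sketch; busybox:1.36 is only a convenient image that ships nslookup, any Pod in the cluster works):

kubectl -n ecc-spark-cluster run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup ecc-spark-service.ecc-spark-cluster.svc.cluster.local
# Expect one A record per ready Pod rather than a single clusterIP.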

2. Building the Spark cluster

2.1 Creating the Spark master

The Spark master consists of two parts: a Deployment, saved as ecc-spark-master.yaml, and a Service exposing the master's port 7077 to the workers (the headless Service from section 1).

# The thriftserver is deployed on the master node, so the thriftserver, driver,
# and blockmanager ports must be exposed so that executors on the worker nodes
# can talk to the driver.
cat >ecc-spark-master.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ecc-spark-master
  namespace: ecc-spark-cluster
  labels:
    app: ecc-spark-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecc-spark-master
  template:
    metadata:
      labels:
        app: ecc-spark-master
    spec:
      serviceAccountName: spark-cdp
      securityContext: {}
      dnsPolicy: ClusterFirst
      hostname: ecc-spark-master
      containers:
        - name: ecc-spark-master
          image: spark:3.4.1
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c","sh /opt/spark/sbin/start-master.sh && tail -f /opt/spark/logs/spark--org.apache.spark.deploy.master.Master-1-*"]
          ports:
            - containerPort: 7077
            - containerPort: 8080
          volumeMounts:
            - mountPath: /opt/usrjars/
              name: ecc-spark-pvc
          livenessProbe:
            failureThreshold: 9
            initialDelaySeconds: 2
            periodSeconds: 15
            successThreshold: 1
            tcpSocket:
              port: 8080
            timeoutSeconds: 10
          resources:
            requests:
              cpu: "2"
              memory: "6Gi"
            limits:
              cpu: "2"
              memory: "6Gi"
          env:
            - name: SPARK_LOCAL_DIRS
              value: "/odsdata/sparkdirs/"
      volumes:
        - name: ecc-spark-pvc
          persistentVolumeClaim:
            claimName: ecc-spark-pvc-static
EOF
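With these ports exposed, connectivity to the Thrift Server can be checked from any Pod in the cluster. A minimal sketch, assuming /opt/spark/sbin/start-thriftserver.sh has already been launched in the master container:

/opt/spark/bin/beeline \
  -u "jdbc:hive2://ecc-spark-service.ecc-spark-cluster.svc.cluster.local:10000" \
  -e "show databases;"
# Success confirms that port 10000 on the headless Service reaches the thriftserver.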

2.2 Creating the Spark worker

The worker startup script needs the master's address. Thanks to Kubernetes DNS and the headless Service above (which selects the master Pod), the master is reachable at ecc-spark-service.ecc-spark-cluster.svc.cluster.local:7077.

cat >ecc-spark-worker.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ecc-spark-worker
  namespace: ecc-spark-cluster
  labels:
    app: ecc-spark-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecc-spark-worker
  template:
    metadata:
      labels:
        app: ecc-spark-worker
    spec:
      serviceAccountName: spark-cdp
      securityContext: {}
      dnsPolicy: ClusterFirst
      hostname: ecc-spark-worker
      containers:
        - name: ecc-spark-worker
          image: spark:3.4.1
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args: ["-c","sh /opt/spark/sbin/start-worker.sh spark://ecc-spark-master.ecc-spark-cluster.svc.cluster.local:7077;tail -f /opt/spark/logs/spark--org.apache.spark.deploy.worker.Worker*"]
          ports:
            - containerPort: 8081
          volumeMounts:
            - mountPath: /opt/usrjars/
              name: ecc-spark-pvc
          resources:
            requests:
              cpu: "2"
              memory: "2Gi"
            limits:
              cpu: "2"
              memory: "4Gi"
          env:
            - name: SPARK_LOCAL_DIRS
              value: "/odsdata/sparkdirs/"
      volumes:
        - name: ecc-spark-pvc
          persistentVolumeClaim:
            claimName: ecc-spark-pvc-static

EOF
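Once the worker Pod is up, it should register with the master. One way to confirm this (a sketch; the standalone master web UI also serves the cluster state as JSON):

kubectl -n ecc-spark-cluster port-forward deploy/ecc-spark-master 8080:8080 &
curl -s http://localhost:8080/json
# The "workers" array should contain one ALIVE entry per worker replica.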

2.3 Building the PySpark submission environment

A lightweight Flask service receives job requests over HTTP and hands each one to a thread pool, so that submission is asynchronous:

import json
import flask
from flask import Flask
from concurrent.futures import ThreadPoolExecutor

app = Flask(__name__)
pool = ThreadPoolExecutor(max_workers=8)

@app.route('/')
def hello_world():  # put application's code here
    return 'Hello World!'

@app.route('/downloadCode', methods=['POST'])
def download_file():
    model_id = flask.request.json.get('modelId')
    print(model_id)
    """
    异步提交任务:pool.submit()
    """
    return json.dumps(0, ensure_ascii=False)

@app.route('/modelRun', methods=['POST'])
def model_run():
    """Submit the model-run job asynchronously via pool.submit()."""
    return json.dumps(0, ensure_ascii=False)

if __name__ == '__main__':
    app.run()
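Once the service is running (Flask's development server defaults to 127.0.0.1:5000), a job can be triggered over HTTP; demo-model-001 is just a placeholder value:

curl -X POST http://127.0.0.1:5000/downloadCode \
  -H "Content-Type: application/json" \
  -d '{"modelId": "demo-model-001"}'
# Returns "0" immediately; the actual work runs on the thread pool.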
The spark:3.4.1 image already ships with a Python 3 interpreter, so the Flask service can run directly inside the master container:

spark@c67e6477b2f1:/opt/spark$ python3
Python 3.8.10 (default, May 26 2023, 14:05:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

To start the service automatically, append the Python invocation to the end of start-master.sh; the Flask port can then be exposed through Kubernetes (for example via the F5 entry point) so that spark-master accepts job submissions over HTTP.
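A minimal sketch, assuming the Flask app above is saved as /opt/spark/app.py (a hypothetical path):

# appended at the end of /opt/spark/sbin/start-master.sh
nohup python3 /opt/spark/app.py > /opt/spark/logs/flask-app.log 2>&1 &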

3. Installing the Spark cluster with spark-operator

As an alternative, refer to the Alibaba Cloud article “搭建Spark应用” (Building a Spark Application).
