Chapter 5 Cloud-Native Applications in Practice: Deploying Seata on Kubernetes
Author: Wang Ke
Email: 49186456@qq.com
Preface
This lesson walks through deploying a Seata cluster on Kubernetes. When deploying an application on Kubernetes, we often define a single YAML file and put every declarative API object into it. That file quickly grows huge and hard to maintain. We could instead split each resource object into its own YAML file, but is there a convention that governs such a layout? The answer is Helm, and that is the approach this chapter takes; see the body for details.
Versions
Kubernetes 1.23.0
Nacos 2.2.3
Seata v1.7.1
1. Preparing the Environment
- Required environment:
  - A working Kubernetes cluster. See "Chapter 2 Cloud-Native Applications in Practice: Setting Up a Kubernetes Environment".
  - MySQL deployed in Kubernetes. See "Chapter 3 Cloud-Native Applications in Practice: Deploying MySQL Master-Slave on Kubernetes".
  - Nacos deployed in Kubernetes. See "Chapter 4 Cloud-Native Applications in Practice: Deploying a Nacos Cluster on Kubernetes".
- Download seata-server.tar.gz
Download URL:
https://github.com/seata/seata/releases/download/v1.7.1/seata-server-1.7.1.tar.gz
- seata-server image
Image used: seataio/seata-server. Note that the chart's appVersion below is 1.7.1-slim, so the tag the Deployment actually pulls is seataio/seata-server:1.7.1-slim.
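The download steps above can be scripted; this is a sketch assuming wget and docker are available on the node:

```shell
# Fetch the server package; it contains the database init scripts used below.
wget https://github.com/seata/seata/releases/download/v1.7.1/seata-server-1.7.1.tar.gz
tar -zxvf seata-server-1.7.1.tar.gz

# Optionally pre-pull the image on each worker node so the Deployment starts faster.
docker pull seataio/seata-server:1.7.1
```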
2. Deploying Seata
2.1 Creating the Database
When coordinating global transactions, Seata's db storage mode persists transaction session data (the global, branch, and lock records) in a database. Seata supports several databases; here we use MySQL 8.
1) Create the database
CREATE DATABASE IF NOT EXISTS seata_server CHARACTER SET utf8 COLLATE utf8_general_ci;
2) Run the initialization script
The initialization script ships inside the server package at:
seata-server-1.7.1/seata/script/server/db/mysql.sql
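Assuming the MySQL from Chapter 3 runs as pod mysql-0 in namespace ns-mysql (pod name and credentials are assumptions; adjust to your setup), the two steps can be scripted:

```shell
# Copy the init script into the MySQL pod (pod name mysql-0 is an assumption
# based on Chapter 3's StatefulSet; change it to match your environment).
kubectl cp seata-server-1.7.1/seata/script/server/db/mysql.sql ns-mysql/mysql-0:/tmp/mysql.sql

# Create the database, then run the script against it.
kubectl exec -n ns-mysql mysql-0 -- mysql -uroot -p123456 \
  -e "CREATE DATABASE IF NOT EXISTS seata_server CHARACTER SET utf8 COLLATE utf8_general_ci;"
kubectl exec -n ns-mysql mysql-0 -- \
  sh -c "mysql -uroot -p123456 seata_server < /tmp/mysql.sql"

# The script creates the three tables the db store needs:
# global_table, branch_table, and lock_table.
kubectl exec -n ns-mysql mysql-0 -- mysql -uroot -p123456 \
  -e "SHOW TABLES FROM seata_server;"
```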
2.2 Nacos Configuration
We use Nacos as both Seata's registry and its configuration center, so the corresponding configuration must first be created in Nacos.
- Create namespaces
Create two namespaces in Nacos:

| Namespace name | Namespace ID | Description |
|---|---|---|
| microservice-prod | 4b974c46-a7cd-46c3-95ab-e67d0cb121cc | Microservice production environment |
| seata-prod | 95f1b3fd-2566-4864-8182-09279ba0a779 | Seata production environment |

- Configuration center entry
Create the following configuration under the seata-prod namespace in the Nacos configuration center:
Data ID: seataServer.properties
Group: SEATA_GROUP
Format: Properties
store.mode=db
# Database (db store) configuration
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://svc-mysql-write.ns-mysql:3306/seata_server?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=Asia/Shanghai
store.db.user=root
store.db.password=123456
store.db.minConn=5
store.db.maxConn=30
store.db.queryLimit=100
store.db.maxWait=5000
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.lockTable=lock_table
# Transaction recovery configuration
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
# Transaction group configuration:
# map the default transaction group to the Seata server's BJ cluster
service.vgroupMapping.service_default_tx_group=BJ
# Undo log retention
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
# Client-server transport
transport.serialization=seata
transport.compressor=none
# Disable metrics to reduce overhead
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
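Before pasting the properties into the Nacos console, it can help to sanity-check the file locally. A minimal sketch (the heredoc reproduces only a few representative keys):

```shell
# Save the config locally, then verify the keys the server cannot start without.
# This only checks key presence; the values still need to match your environment.
cat > seataServer.properties <<'EOF'
store.mode=db
store.db.datasource=druid
store.db.url=jdbc:mysql://svc-mysql-write.ns-mysql:3306/seata_server?useSSL=false
service.vgroupMapping.service_default_tx_group=BJ
EOF

# A missing vgroupMapping entry is a common cause of clients failing to
# find an available Seata server, so it is worth checking explicitly.
for key in store.mode store.db.datasource store.db.url \
           service.vgroupMapping.service_default_tx_group; do
  grep -q "^${key}=" seataServer.properties || echo "missing: ${key}"
done
```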
2.3 Creating the Chart
- Create the chart skeleton with Helm
helm create seata-1.7.1
- Chart.yaml
apiVersion: v2
name: seata
description: Seata Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 1.0.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 1.7.1-slim
- values.yaml
# Default values for seata.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
namespace: ns-seata

image:
  repository: seataio/seata-server
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: "seata-server"

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: NodePort
  port: 8091
  nodePort: 30091

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
- namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-seata
- configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: seata-server-config
  namespace: {{ .Values.namespace }}
data:
  application.yml: |
    server:
      port: 7091
    spring:
      application:
        name: seata-server
    logging:
      config: classpath:logback-spring.xml
      file:
        path: /root/
    console:
      user:
        username: seata
        password: seata
    seata:
      security:
        secretKey:
        tokenValidityInMilliseconds: 1800000
      config:
        type: nacos
        nacos:
          server-addr: nacos-headless.ns-nacos:8848
          namespace: 95f1b3fd-2566-4864-8182-09279ba0a779
          group: SEATA_GROUP
          data-id: seataServer.properties
      registry:
        type: nacos
        nacos:
          application: seata-server
          server-addr: nacos-headless.ns-nacos:8848
          group: MICROSERVICE_GROUP
          namespace: 4b974c46-a7cd-46c3-95ab-e67d0cb121cc
          cluster: BJ
- deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-seata-server
  namespace: {{ .Values.namespace }}
  labels:
    app: seata-server
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: seata-server
  template:
    metadata:
      labels:
        app: seata-server
    spec:
      containers:
        - name: seata-server
          image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: SEATA_CONFIG_NAME
              value: file:/root/seata-config/registry
          ports:
            - name: http
              containerPort: 8091
              protocol: TCP
          volumeMounts:
            - name: seata-config
              mountPath: /seata-server/resources/application.yml
              subPath: application.yml
      volumes:
        - name: seata-config
          configMap:
            name: seata-server-config
- service.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-seata-server
  namespace: {{ .Values.namespace }}
  labels:
    app: seata-server
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
      nodePort: {{ .Values.service.nodePort }}
      protocol: TCP
      name: http
  selector:
    app: seata-server
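Before installing, the chart can be rendered locally to catch template mistakes early; a sketch assuming helm and kubectl are on the PATH:

```shell
# Static checks: lint the chart, then render it and dry-run the manifests
# against the API server without creating anything.
helm lint ./seata-1.7.1
helm template seata ./seata-1.7.1 | kubectl apply --dry-run=client -f -
```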
- Deploy
Upload the seata-1.7.1 chart directory to /opt/kubernetes/ on the server and change into that directory.
helm install seata ./seata-1.7.1
Check the release:
helm list
- Verify the service in Nacos
In the Nacos console, switch to the microservice-prod namespace: the seata-server service should appear under group MICROSERVICE_GROUP with cluster BJ.
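The same checks can be done from the command line; a sketch using the names configured above (run the curl from inside the cluster, or substitute a node IP plus the Nacos NodePort):

```shell
# Watch the rollout in the ns-seata namespace.
kubectl get pods -n ns-seata -l app=seata-server

# The server logs a line once it has registered with Nacos.
kubectl logs -n ns-seata deploy/deploy-seata-server | grep -i nacos

# Nacos' open API can confirm the registered instance list.
curl "http://nacos-headless.ns-nacos:8848/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=MICROSERVICE_GROUP&namespaceId=4b974c46-a7cd-46c3-95ab-e67d0cb121cc"
```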
Summary
Those are all the steps for deploying Seata on Kubernetes. When installing, keep your component versions identical to the ones listed at the top of this chapter wherever possible; different versions can surface unexpected incompatibilities, and matching them spares you avoidable pitfalls.