Cloud Native: Implementing DevOps for a Mall Project with KubeSphere

1. Create Users and a Workspace

1.1 Create the user accounts

1.2 Create the workspace

2. Enable DevOps and Create a Project

2.1 Log in to the console as admin, click Platform in the upper-left corner, and select Cluster Management.

2.2 Click CRDs, type clusterconfiguration in the search box, and click the result to open its detail page.

2.3 Under Custom Resources, click the ⋮ menu on the right of ks-installer and select Edit YAML.

2.4 In the YAML file, search for devops and change its enabled field from false to true. When done, click OK in the lower-right corner to save the configuration.

Enable the App Store the same way.
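
If you prefer the command line, the same edit can be made with kubectl; a minimal sketch, assuming a default install in the kubesphere-system namespace:

# set devops.enabled: true (and the App Store's openpitrix section) in the editor
kubectl -n kubesphere-system edit clusterconfiguration ks-installer

# then watch the installer logs until the new components are ready
kubectl -n kubesphere-system logs deploy/ks-installer -f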

2.5 Create a project with a workspace-admin account.

3. Deploy the Middleware

3.1 Deploy MySQL as a StatefulSet

3.1.1 In the project, go to Configuration → ConfigMaps and create the configuration file:

[client]
default-character-set=utf8mb4
 
[mysql]
default-character-set=utf8mb4
 
[mysqld]
# duplicate init_connect keys would override each other, so merge both statements
init_connect='SET collation_connection = utf8mb4_unicode_ci; SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
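
The same ConfigMap can also be created from the command line; a sketch, assuming the file above is saved as my.cnf and the project namespace is mall:

kubectl -n mall create configmap mysql-conf --from-file=my.cnf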

3.1.2 Create the StatefulSet (MySQL): Application Workloads → Workloads → StatefulSets → Create.

Set the container image.

Enter the image name and tag to pull from Docker Hub, specify resource limits (leave resource requests empty), and use the default port.

In the environment variables, set the MySQL root password (MYSQL_ROOT_PASSWORD) and check Synchronize Host Timezone.

Set up the mounted storage. Here the persistent volume is created directly in the wizard; you can also create it in advance and simply select it.

Storage configuration (see screenshot).

ConfigMap mount (see screenshot).

The finished configuration (see screenshot).

For comparison, the docker run command that starts the same MySQL container:

docker run -p 3306:3306 --name mysql-01 \
-v /mydata/mysql/log:/var/log/mysql \
-v /mydata/mysql/data:/var/lib/mysql \
-v /mydata/mysql/conf:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=root \
--restart=always \
-d mysql:5.7 

Deploy the MySQL service for network access.

Delete the auto-generated service (careful: do not delete the containers or the StatefulSet itself).

Create a new service of type Specify Workload.

Selecting external access makes MySQL reachable from outside the cluster.

Open the corresponding NodePort in the server firewall, and you can connect with a local database client.
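
For example, with firewalld (the NodePort below is an assumption; use the one KubeSphere assigned to the service):

firewall-cmd --zone=public --add-port=30306/tcp --permanent
firewall-cmd --reload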

Inside the cluster, MySQL can be reached through its service DNS name:

mysql -uroot -h mall-mysql.mall -p
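
Service DNS follows <service>.<namespace>.svc.cluster.local, so the short form mall-mysql.mall resolves from any pod. A quick connectivity check from a throwaway client pod (a sketch, reusing the names above):

kubectl run mysql-client -it --rm --restart=Never --image=mysql:5.7 -- \
  mysql -uroot -h mall-mysql.mall -p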

3.2 Deploy Redis as a StatefulSet

3.2.1 In the project, go to Configuration → ConfigMaps and create the configuration file.

For comparison, the Docker commands for Redis:

mkdir -p /mydata/redis/conf && vim /mydata/redis/conf/redis.conf

appendonly yes
port 6379
bind 0.0.0.0

docker run -d -p 6379:6379 --restart=always \
-v /mydata/redis/conf/redis.conf:/etc/redis/redis.conf \
-v  /mydata/redis-01/data:/data \
 --name redis-01 redis:6.2.5 \
 redis-server /etc/redis/redis.conf

3.2.2 Create the StatefulSet (Redis): Application Workloads → Workloads → StatefulSets → Create.

Redis needs its configuration file at startup, so check Start Command and enter the following:
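
Filled in here to match the docker run example above:

redis-server /etc/redis/redis.conf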

Set the storage volume and the ConfigMap mount.

A service is generated automatically after creation. For external access, delete it and create a new Specify Workload service instead.

Note: when deleting, make sure not to delete anything else.

Specify the port mapping.

Add external access.

Open the port in the server firewall, as with MySQL above.

Test the connection from a local client.

To password-protect Redis, open redis.conf in the ConfigMap, append the directive below, then restart the replica.
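
The standard directive is requirepass; the value below is a placeholder:

requirepass your_redis_password

After restarting, verify from a local client (node IP and NodePort are placeholders):

redis-cli -h <node-ip> -p <node-port> -a your_redis_password ping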

If you no longer want external access, click as follows (see screenshot).

3.3 Deploy RabbitMQ from the App Store

You can enter the store either through Apps → Create or through the App Store link in the upper-left corner.

The rabbitmq service now appears under Services; for external access, edit it as shown in the screenshot.

After exposing the port, test that you can log in with the account and password.

3.4 Deploy Nacos as a StatefulSet

Create the Nacos configuration file: copy your local Nacos application.properties into the ConfigMap.

Remember to change the database address, account, and password to the values for the server environment.
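
A quick way to confirm the copied file points at the in-cluster database rather than localhost (the keys are the stock Nacos datasource keys; the expected values are assumptions based on the MySQL service from section 3.1):

grep -nE '^(spring\.datasource\.platform|db\.(num|url|user|password))' application.properties
# expect something along these lines:
#   spring.datasource.platform=mysql
#   db.num=1
#   db.url.0=jdbc:mysql://mall-mysql.mall:3306/nacos?characterEncoding=utf8
#   db.user.0=<your-db-user>
#   db.password.0=<your-db-password>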

Create the nacos service.

Fill in the ports.

Mount the configuration file.

If startup fails with the error shown in the screenshot, there are two ways to handle it:

1. Add your local nacos-logback.xml to the ConfigMap as well.

2. Mount each configuration file individually with a subPath (see the sketch below).
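
A sketch of option 2: mount each ConfigMap key as an individual file via subPath so the rest of the image's /home/nacos/conf directory is not shadowed (resource names here are assumptions):

kubectl -n mall edit statefulset nacos
# in the container spec, give each file its own mount:
#   volumeMounts:
#     - name: nacos-conf
#       mountPath: /home/nacos/conf/application.properties
#       subPath: application.properties
#     - name: nacos-conf
#       mountPath: /home/nacos/conf/nacos-logback.xml
#       subPath: nacos-logback.xml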

A successful start looks like this (see screenshot).

Adjust the port exposure.

Test it.

3.5 Deploy Sentinel as a StatefulSet

Create the sentinel service.

Use the default port.

Create a service reachable from outside the cluster.

Open the port in the firewall.

The dashboard is now accessible (see screenshot).

3.6 Deploy Seata as a StatefulSet

1. Create the namespace SEATA_GROUP in Seata's config center (Nacos).

Under the SEATA_GROUP namespace, create Seata's configuration file, seata-server:

#For details about configuration items, see https://seata.io/zh-cn/docs/user/configurations.html
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none

#Transaction routing rules configuration, only for the client
############################# transaction group name #############################
service.vgroupMapping.<your-tx-group-name>=default
#If you use a registry, you can ignore it
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false

#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h

#Log rule configuration, for client and server
log.exceptionRate=100
############################# switch to db mode #############################
#Transaction storage configuration, only for the server. The file, DB, and redis configuration values are optional.
store.mode=db
store.lock.mode=db
store.session.mode=db
#Used for password encryption
store.publicKey=

#If `store.mode,store.lock.mode,store.session.mode` are not equal to `file`, you can remove the configuration block.
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100

############################# point these at the MySQL deployed in KubeSphere #############################
#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://<your-db-host>:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=<your-db-user>
store.db.password=<your-db-password>
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000

#These configurations are required if the `store mode` is `redis`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `redis`, you can remove the configuration block.
store.redis.mode=single
store.redis.single.host=<your-redis-host>
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=<your-redis-password>
store.redis.queryLimit=100

#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=true

#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898

Add another configuration file, registry.conf; change the highlighted values (addresses and accounts) to your own:

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"
  loadBalance = "RandomLoadBalance"
  loadBalanceVirtualNodes = 10

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP_DEV"
    namespace = ""
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "nacos"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
    apolloAccesskeySecret = ""
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

Create the seata service.

Add the container image seataio/seata-server:1.4.2.

Mount the configuration files.
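
As a sketch of where the files go: the seataio/seata-server image reads its configuration from /seata-server/resources, so each ConfigMap key can be mounted there via subPath (the path is an assumption based on the official image layout; verify it for your image version):

kubectl -n mall edit statefulset seata-server
#   volumeMounts:
#     - name: seata-conf
#       mountPath: /seata-server/resources/registry.conf
#       subPath: registry.conf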

The service starts successfully.

4. Deploy the DevOps Project

4.1 Prepare the production environment

Adjust all *-prod configuration files to the production environment: addresses, accounts, passwords, and so on.

4.2 Create the DevOps project and pipeline

Create the pipeline.

The pipeline after creation:

4.3 Pull the code

Create the credential for the code repository.

4.4 Compile the project

Log in with the admin account and configure a Maven mirror for faster dependency downloads:

<mirror>
    <id>nexus-aliyun</id>
    <mirrorOf>central</mirrorOf>
    <name>Nexus aliyun</name>
    <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>

The build command is the same as the local one:

mvn clean package -Dmaven.test.skip=true

4.5 Build the images

If you are not sure how to write a Dockerfile, use the following as a reference and adjust it to your own project:

FROM openjdk:8-jdk

# load the <service-name>-prod.yml profile from Nacos automatically at startup
ENV PARAMS="--server.port=8080 --spring.profiles.active=prod --spring.cloud.nacos.server-addr=his-nacos.his:8848 --spring.cloud.nacos.config.file-extension=yml"
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

COPY target/*.jar /app.jar
EXPOSE 8080

ENTRYPOINT ["/bin/sh","-c","java -Dfile.encoding=utf8 -Djava.security.egd=file:/dev/./urandom -jar /app.jar ${PARAMS}"]
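
In the pipeline's maven container, each module image is then built from the compiled jar; a minimal sketch (the module name and directory are assumptions):

mvn clean package -Dmaven.test.skip=true
docker build -t yshop-gateway:v1 ./yshop-gateway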

If there are multiple modules, the image builds can run in parallel.

4.6 Push the images

Push the images built in the previous step to the Aliyun image registry so that every server can pull them. (A private registry of your own is even better; the steps are similar.)

The Aliyun Container Registry console address (see screenshot).

1. Create a namespace in the Aliyun Container Registry.

2. Add the Aliyun credential.

Find your username and set the registry password.

Use the credential.

3. Modify the environment section of the Jenkinsfile:

environment {
  DOCKER_CREDENTIAL_ID = 'dockerhub-id'
  GITHUB_CREDENTIAL_ID = 'github-id'
  KUBECONFIG_CREDENTIAL_ID = 'demo-kubeconfig'
  REGISTRY = 'registry.cn-beijing.aliyuncs.com'
  DOCKERHUB_NAMESPACE = 'jin-shop'
  GITHUB_ACCOUNT = 'kubesphere'
  APP_NAME = 'devops-java-sample'
}

4. Log in to the image registry with the credential.

This step can also run in parallel. Editing the Jenkinsfile directly is faster than clicking through the graphical editor:

  stage('al9gn2') {
      parallel {
        stage('推送yshop-gateway镜像') {
          agent none
          steps {
            container('maven') {
              withCredentials([usernamePassword(credentialsId: 'aliyun-docker-registry', passwordVariable: 'DOCKER_PWD_VAR', usernameVariable: 'DOCKER_USER_VAR')]) {
                sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
                sh 'docker tag  yshop-gateway:v1 $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-gateway:SNAPSHOT-$BUILD_NUMBER '
                sh 'docker push  $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-gateway:SNAPSHOT-$BUILD_NUMBER'
              }

            }

          }
        }

        stage('推送yshop-xxl-job-admin镜像') {
          agent none
          steps {
            container('maven') {
              withCredentials([usernamePassword(credentialsId: 'aliyun-docker-registry', passwordVariable: 'DOCKER_PWD_VAR', usernameVariable: 'DOCKER_USER_VAR')]) {
                sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
                sh 'docker tag  yshop-xxl-job-admin:v1 $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-xxl-job-admin:SNAPSHOT-$BUILD_NUMBER '
                sh 'docker push  $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-xxl-job-admin:SNAPSHOT-$BUILD_NUMBER'
              }

            }

          }
        }

        stage('推送yshop-auth镜像') {
          agent none
          steps {
            container('maven') {
              withCredentials([usernamePassword(credentialsId: 'aliyun-docker-registry', passwordVariable: 'DOCKER_PWD_VAR', usernameVariable: 'DOCKER_USER_VAR')]) {
                sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
                sh 'docker tag  yshop-auth:v1 $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-auth:SNAPSHOT-$BUILD_NUMBER '
                sh 'docker push  $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-auth:SNAPSHOT-$BUILD_NUMBER'
              }

            }

          }
        }

        stage('推送yshop-upms镜像') {
          agent none
          steps {
            container('maven') {
              withCredentials([usernamePassword(credentialsId: 'aliyun-docker-registry', passwordVariable: 'DOCKER_PWD_VAR', usernameVariable: 'DOCKER_USER_VAR')]) {
                sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
                sh 'docker tag  yshop-upms:v1 $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-upms:SNAPSHOT-$BUILD_NUMBER '
                sh 'docker push  $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-upms:SNAPSHOT-$BUILD_NUMBER'
              }

            }

          }
        }

        stage('推送yshop-mall镜像') {
          agent none
          steps {
            container('maven') {
              withCredentials([usernamePassword(credentialsId: 'aliyun-docker-registry', passwordVariable: 'DOCKER_PWD_VAR', usernameVariable: 'DOCKER_USER_VAR')]) {
                sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
                sh 'docker tag  yshop-mall:v1 $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-mall:SNAPSHOT-$BUILD_NUMBER '
                sh 'docker push  $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-mall:SNAPSHOT-$BUILD_NUMBER'
              }

            }

          }
        }

        stage('推送yshop-weixin镜像') {
          agent none
          steps {
            container('maven') {
              withCredentials([usernamePassword(credentialsId: 'aliyun-docker-registry', passwordVariable: 'DOCKER_PWD_VAR', usernameVariable: 'DOCKER_USER_VAR')]) {
                sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
                sh 'docker tag  yshop-weixin:v1 $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-weixin:SNAPSHOT-$BUILD_NUMBER '
                sh 'docker push  $REGISTRY/$DOCKERHUB_NAMESPACE/yshop-weixin:SNAPSHOT-$BUILD_NUMBER'
              }

            }

          }
        }

      }
    }

4.7 Deploy to the production/test environment

1. Add a deploy.yaml (the Kubernetes deployment manifest) to each module.

Use the following as a template and replace the angle-bracket placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: <module-name>
  name: <module-name>
  namespace: <your-namespace>   # the namespace is required
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  selector:
    matchLabels:
      app: <module-name>
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: <module-name>
    spec:
      imagePullSecrets:
        - name: aliyuncs-docker-hub  # create this secret with the Aliyun registry credentials in the project beforehand
      containers:
        - image: $REGISTRY/$DOCKERHUB_NAMESPACE/<module-name>:SNAPSHOT-$BUILD_NUMBER
#           readinessProbe:
#             httpGet:
#               path: /actuator/health
#               port: 8080
#             timeoutSeconds: 10
#             failureThreshold: 30
#             periodSeconds: 5
          imagePullPolicy: Always
          name: app
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: 300m
              memory: 600Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: <module-name>
  name: <module-name>
  namespace: <your-namespace>
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: <module-name>
  sessionAffinity: None
  type: ClusterIP
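
In the deploy stage, the pipeline can substitute the $REGISTRY, $DOCKERHUB_NAMESPACE, and $BUILD_NUMBER variables in this template and apply it with the kubeconfig credential; a sketch of the shell step (the file path is an assumption):

envsubst < deploy/deploy.yaml | kubectl apply -f -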

2. Add the demo-kubeconfig credential.

Configure the Aliyun registry image-pull secret globally.

If the following error occurs:

com.alibaba.nacos.api.exception.NacosException: Request nacos server failed:

open Nacos port 9848. (Nacos 2.x clients additionally use gRPC on the main port plus 1000, so 8848 maps to 9848.)
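
If the services reach Nacos through the Kubernetes Service from section 3.4, that Service must expose 9848 as well; a sketch (service and namespace names are assumptions):

kubectl -n mall edit svc nacos
# add a second port entry alongside 8848:
#   - name: grpc
#     port: 9848
#     targetPort: 9848
# for clients outside the cluster, also open 9848/tcp in the node firewall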

4.8 Send a confirmation email

Configure the mail settings.
