K8S Deployment

3. Deploying Kubernetes

3.1 Environment Preparation

| Node Address  | OS         | Specs | Role   | Components                                                      |
|---------------|------------|-------|--------|-----------------------------------------------------------------|
| 10.210.28.243 | CentOS 7.4 | 4C 8G | Master | etcd, kube-apiserver, kube-controller-manager, kube-scheduler   |
| 10.210.28.243 | CentOS 7.4 | 4C 8G | Node01 | kubelet, kube-proxy, Docker, Flannel                            |
| 10.210.28.244 | CentOS 7.4 | 4C 4G | Node02 | kubelet, kube-proxy, Docker, Flannel                            |

3.2 Deploying the Master

3.2.1 Kubernetes Installation Methods
  • Binary installation
  • Installation via kubeadm
3.2.2 Deploying etcd
[root@k8s-master data]# yum install etcd -y
[root@k8s-master data]# vim /etc/etcd/etcd.conf 
# etcd configuration file
[root@k8s-master data]# grep -v ^[#] /etc/etcd/etcd.conf 
# etcd data directory
ETCD_DATA_DIR="/data/etcd/k8s.etcd"
ETCD_LISTEN_CLIENT_URLS="http://10.210.28.23:2379"
ETCD_NAME="k8s"
ETCD_ADVERTISE_CLIENT_URLS="http://10.210.28.23:2379"
# Note the ownership: by default etcd runs as the etcd user
[root@k8s-master data]# chown etcd.etcd /data/etcd
# Start etcd
[root@k8s-master data]# systemctl start  etcd
# Enable etcd at boot
[root@k8s-master data]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service
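Before moving on, it is worth confirming that etcd is actually healthy. A minimal check sketch, assuming the listen URL configured in /etc/etcd/etcd.conf above:

```shell
# Query etcd cluster health (etcdctl v2 API); the endpoint must match
# ETCD_LISTEN_CLIENT_URLS from /etc/etcd/etcd.conf
etcdctl --endpoints="http://10.210.28.23:2379" cluster-health

# Or hit the health endpoint directly over HTTP
curl http://10.210.28.23:2379/health
# A healthy member returns: {"health": "true"}
```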
3.2.3 Deploying the Kubernetes Server Components

Kubernetes server download link

[root@k8s-master ~]# ls
kubernetes-server-linux-amd64.tar.gz
[root@k8s-master ~]# tar xvf kubernetes-server-linux-amd64.tar.gz 
[root@k8s-master ~]# ls
kubernetes  kubernetes-server-linux-amd64.tar.gz
[root@k8s-master ~]# cd kubernetes
[root@k8s-master kubernetes]# ls
addons  kubernetes-src.tar.gz  LICENSES  server

Create the management directories:

[root@k8s-master /]# mkdir -p /app/kubernetes/bin
[root@k8s-master /]# mkdir -p /app/kubernetes/config
[root@k8s-master kubernetes]# ls
bin  config
  • bin: holds the copied binaries
  • config: holds the configuration files
[root@k8s-master bin]# pwd
/root/kubernetes/server/bin
[root@k8s-master bin]# cp kube-apiserver kube-controller-manager kube-scheduler /app/kubernetes/bin/

Extract the kubernetes-src.tar.gz file to obtain the configuration scripts:

[root@k8s-master ~]# cd kubernetes/cluster/centos/
[root@k8s-master centos]# ls
build.sh  config-build.sh  config-default.sh  config-test.sh  deployAddons.sh  master  node  util.sh
[root@k8s-master centos]# cd master/scripts/
[root@k8s-master scripts]# ls
apiserver.sh  controller-manager.sh  etcd.sh  flannel.sh  post-etcd.sh  scheduler.sh
# Copy the scripts we need under /app
[root@k8s-master scripts]# cp apiserver.sh  controller-manager.sh  scheduler.sh  /app/kubernetes/bin/
[root@k8s-master scripts]# cd /app/kubernetes/bin/
[root@k8s-master bin]# ls -l
total 397908
-rwxr-xr-x 1 root root      5054 Apr  2 11:51 apiserver.sh
-rwxr-xr-x 1 root root      2237 Apr  2 11:51 controller-manager.sh
-rwxr-x--- 1 root root 209244331 Apr  2 11:38 kube-apiserver
-rwxr-x--- 1 root root 136621177 Apr  2 11:38 kube-controller-manager
-rwxr-x--- 1 root root  61566971 Apr  2 11:38 kube-scheduler
-rwxr-xr-x 1 root root      1706 Apr  2 11:51 scheduler.sh
3.2.4 Configuring the apiserver
[root@k8s-master bin]# vim apiserver.sh
# Master node IP address
MASTER_ADDRESS=${1:-"10.210.28.23"}
# etcd cluster endpoints
ETCD_SERVERS=${2:-"http://10.210.28.23:2379"}
# Service cluster IP range; here 10.0.0.0/24 is used
SERVICE_CLUSTER_IP_RANGE=${3:-"10.0.0.0/24"}
ADMISSION_CONTROL=${4:-""}
# Note where the configuration file is written
cat <<EOF >/app/kubernetes/config/kube-apiserver
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_ETCD_SERVERS="--etcd-servers=${ETCD_SERVERS}"
# etcd CA file, removed for now
# KUBE_ETCD_CAFILE="--etcd-cafile=/srv/kubernetes/etcd/ca.pem"
# etcd client certificate, removed for now
# KUBE_ETCD_CERTFILE="--etcd-certfile=/srv/kubernetes/etcd/client.pem"
# etcd client key, removed for now
# KUBE_ETCD_KEYFILE="--etcd-keyfile=/srv/kubernetes/etcd/client-key.pem"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ADVERTISE_ADDR="--advertise-address=${MASTER_ADDRESS}"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE}"
# Admission control plugins; empty by default
KUBE_ADMISSION_CONTROL="--admission-control=${ADMISSION_CONTROL}"
# Client CA certificate removed
# KUBE_API_CLIENT_CA_FILE="--client-ca-file=/srv/kubernetes/ca.crt"
# TLS private key removed
# KUBE_API_TLS_PRIVATE_KEY_FILE="--tls-private-key-file=/srv/kubernetes/server.key"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
# Note the paths of the config file and the binary
EnvironmentFile=-/app/kubernetes/config/kube-apiserver
ExecStart=/app/kubernetes/bin/kube-apiserver ${KUBE_APISERVER_OPTS}
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Run the script as ./apiserver.sh <Master IP> <etcd URL>, then check:

[root@k8s-master bin]# ./apiserver.sh 10.210.28.243 http://10.210.28.243:2379
[root@k8s-master bin]# systemctl status kube-apiserver 
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-04-02 13:23:57 CST; 1h 18min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 12339 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─12339 /app/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=http://10.210.28.243:2379 --insecure-bind-address=0.0.0.0 --in...
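Since the apiserver is listening on the insecure port, its health can also be verified directly over HTTP. A quick sanity-check sketch (8080 is the --insecure-port configured above):

```shell
# The /healthz endpoint returns "ok" when the apiserver is healthy
curl http://127.0.0.1:8080/healthz

# List the API versions the server exposes
curl http://10.210.28.243:8080/api
```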
3.2.5 Configuring the controller-manager
[root@k8s-master bin]# vim controller-manager.sh 
# Master node address
MASTER_ADDRESS=${1:-"10.210.28.23"}
# Note the config file location: everything goes under /app/kubernetes/config/
cat <<EOF >/app/kubernetes/config/kube-controller-manager
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
# Comment out the CA-certificate related options
#KUBE_CONTROLLER_MANAGER_ROOT_CA_FILE="--root-ca-file=/srv/kubernetes/ca.crt"
# Comment out the CA-certificate related options
# --service-account-private-key-file="": Filename containing a PEM-encoded private
# RSA key used to sign service account tokens.
#KUBE_CONTROLLER_MANAGER_SERVICE_ACCOUNT_PRIVATE_KEY_FILE="--service-account-private-key-file=/srv/kubernetes/server.key"
...
[Service]
# Note the paths of the config file and the binary
EnvironmentFile=-/app/kubernetes/config/kube-controller-manager
ExecStart=/app/kubernetes/bin/kube-controller-manager ${KUBE_CONTROLLER_MANAGER_OPTS}
Restart=on-failure
...

Run the script as ./controller-manager.sh <Master IP>, then check:

[root@k8s-master bin]# ./controller-manager.sh 10.210.28.23
[root@k8s-master bin]# systemctl status kube-controller-manager 
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-04-02 12:33:14 CST; 1h 58min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 11656 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─11656 /app/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=10.210.28.23:8080 --leader-elect
3.2.6 Configuring the scheduler
[root@k8s-master bin]# vim scheduler.sh
# Master node address
MASTER_ADDRESS=${1:-"10.210.28.23"}
# Note the config file location: everything goes under /app/kubernetes/config/
cat <<EOF >/app/kubernetes/config/kube-scheduler
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
KUBE_LEADER_ELECT="--leader-elect"
KUBE_SCHEDULER_ARGS=""
EOF
...
# Again, note the config file path and the binary directory
[Service]
EnvironmentFile=-/app/kubernetes/config/kube-scheduler
ExecStart=/app/kubernetes/bin/kube-scheduler ${KUBE_SCHEDULER_OPTS}
Restart=on-failure
...

Run the script as ./scheduler.sh <Master IP>; after it starts, check:

[root@k8s-master bin]# ./scheduler.sh 10.210.28.23
[root@k8s-master bin]# systemctl status kube-scheduler 
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-04-02 12:33:58 CST; 1h 50min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 11709 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─11709 /app/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=10.210.28.23:8080 --leader-elect
3.2.7 Configuring Environment Variables
echo "export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/app/kubernetes/bin">>/etc/profile
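To make the new PATH take effect in the current shell and confirm the binaries resolve (a quick check, not part of the original steps):

```shell
# Reload the profile in the current session
source /etc/profile

# Confirm the kubernetes binaries are now on the PATH;
# this should print /app/kubernetes/bin/kube-apiserver
which kube-apiserver
```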
3.2.8 Testing the Master
[root@k8s-master bin]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

The Master node deployment is complete.

3.3 Deploying the Nodes

3.3.1 Downloading the Node Binaries

K8S node binaries download link

3.3.2 Deploying the Node
[root@k8s-node01 ~]# tar xvf node.gz 
kubernetes/
kubernetes/LICENSES
kubernetes/node/
kubernetes/node/bin/
kubernetes/node/bin/kube-proxy
kubernetes/node/bin/kubelet
kubernetes/node/bin/kubeadm
kubernetes/node/bin/kubectl
kubernetes/kubernetes-src.tar.gz
[root@k8s-node01 ~]# ls
kubernetes  node.gz
# Create the binary and config directories
[root@k8s-node01 ~]# mkdir -p /app/kubernetes/{bin,config}
# Copy the binaries into the bin directory
[root@k8s-node01 ~]# cp kubernetes/node/bin/kubelet  /app/kubernetes/bin/
[root@k8s-node01 ~]# cp kubernetes/node/bin/kube-proxy /app/kubernetes/bin/
[root@k8s-node01 ~]# cd kubernetes/
[root@k8s-node01 kubernetes]# tar xvf kubernetes-src.tar.gz 
# Locate the node configuration scripts
[root@k8s-node01 ~]# cd kubernetes/cluster/centos/node/
[root@k8s-node01 node]# ls
bin  scripts
[root@k8s-node01 node]# cd scripts/
# Configuration/installation scripts
[root@k8s-node01 scripts]# ls
docker.sh  flannel.sh  kubelet.sh  proxy.sh
3.3.3 Configuring the kubelet
[root@k8s-node01 bin]# vim kubelet.sh 
# Configuration items to note
MASTER_ADDRESS=${1:-"10.210.28.243"}
NODE_ADDRESS=${2:-"10.210.28.244"}
DNS_SERVER_IP=${3:-"192.168.3.1"}
DNS_DOMAIN=${4:-"cluster.local"}
...
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
# Note the file locations
EnvironmentFile=-/app/kubernetes/config/kubelet
ExecStart=/app/kubernetes/bin/kubelet ${KUBELET_OPTS}
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
...

Run the script as ./kubelet.sh <Master IP> <Node IP> <default DNS IP>:

[root@k8s-node01 bin]# ./kubelet.sh 10.210.28.243 10.210.28.244 192.168.3.1
3.3.4 Checking the Status
[root@k8s-node01 bin]# systemctl status kubelet
3.3.5 Configuring kube-proxy
[root@k8s-node01 bin]# vim proxy.sh 
MASTER_ADDRESS=${1:-"10.210.28.243"}
NODE_ADDRESS=${2:-"10.210.28.244"}
# Note the file location
cat <<EOF >/app/kubernetes/config/kube-proxy
...
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
# Note the file location
EnvironmentFile=-/app/kubernetes/config/kube-proxy
ExecStart=/app/kubernetes/bin/kube-proxy ${KUBE_PROXY_OPTS}
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Run the script: ./proxy.sh 10.210.28.243 10.210.28.244

3.3.6 Checking the Status
[root@k8s-node01 bin]# systemctl status kube-proxy

3.4 Installing the Node Components on the Master

[root@k8s-master bin]# cp kube-proxy  /app/kubernetes/bin/
[root@k8s-master bin]# cp kubelet /app/kubernetes/bin/
[root@k8s-master bin]# pwd
/root/kubernetes/server/bin
[root@k8s-master /]# ls /app/kubernetes/bin/kube-proxy 
/app/kubernetes/bin/kube-proxy
3.4.1 Installing kube-proxy
[root@k8s-master /]# ls /app/kubernetes/bin/kubelet
/app/kubernetes/bin/kubelet
# Edit the kube-proxy configuration script
[root@k8s-master bin]# vim proxy.sh 
# Master node address
MASTER_ADDRESS=${1:-"10.210.28.243"}
# Node address
NODE_ADDRESS=${2:-"10.210.28.243"}
# omitted
...
[root@k8s-master bin]# ./proxy.sh 10.210.28.243 10.210.28.243
# Check the service status
[root@k8s-master bin]# systemctl status kube-proxy 
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-04-06 14:57:57 CST; 5min ago
 Main PID: 8723 (kube-proxy)
   Memory: 9.2M
   CGroup: /system.slice/kube-proxy.service
           ‣ 8723 /app/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.210.28.243 --master=http://10.210.28.243:8080
3.4.2 Installing the kubelet
[root@k8s-master bin]# vim kubelet.sh 
# Configuration items to note
MASTER_ADDRESS=${1:-"10.210.28.243"}
NODE_ADDRESS=${2:-"10.210.28.243"}
DNS_SERVER_IP=${3:-"192.168.3.1"}
DNS_DOMAIN=${4:-"cluster.local"}
...
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
# Note the file locations
EnvironmentFile=-/app/kubernetes/config/kubelet
ExecStart=/app/kubernetes/bin/kubelet ${KUBELET_OPTS}
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
...

Run the script as ./kubelet.sh <Master IP> <Node IP> <default DNS IP>:

[root@k8s-master bin]# ./kubelet.sh 10.210.28.243 10.210.28.243 192.168.3.1
3.4.3 Checking the kubelet
[root@k8s-master bin]# systemctl status kubelet 

4. Deploying the K8S Network Component Flannel

4.1 Downloading Flannel

Flannel download link

4.2 Deploying Flannel
[root@k8s-master ~]# tar xvf flannel-v0.9.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
[root@k8s-master ~]# etcdctl -endpoint="http://10.210.28.243:2379" set /coreos.com/network/config '{"Network": "172.17.0.0/16"}'
{"Network": "172.17.0.0/16"}
[root@k8s-master ~]# mv flanneld mk-docker-opts.sh  /usr/bin/
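The network configuration just written to etcd can be read back to confirm the key exists, using the same endpoint:

```shell
# Read back the flannel network configuration from etcd
etcdctl --endpoints="http://10.210.28.243:2379" get /coreos.com/network/config
# {"Network": "172.17.0.0/16"}
```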
4.3 Deploying Flannel on the Nodes
4.3.1 Downloading Flannel
# Download URL
[root@k8s-node01 ~]# wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
[root@k8s-node01 ~]# tar xvf flannel-v0.9.1-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md
4.3.2 Configuring Flannel
# Copy the binaries into /usr/bin
[root@k8s-node01 ~]# mv flanneld mk-docker-opts.sh  /usr/bin/
# Edit the flanneld configuration file
[root@k8s-node01 ~]# vim /etc/sysconfig/flanneld
[root@k8s-node01 ~]# cat /etc/sysconfig/flanneld 
FLANNEL_OPTIONS="--etcd-endpoints=http://10.210.28.243:2379 --ip-masq=true"
4.3.3 Managing Flannel with systemd

Since flannel was installed from the binary release, the system does not ship a systemd unit for it, so create one:

[root@k8s-master ~]# vim /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
ExecStart=/usr/bin/flanneld $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
4.3.4 Configuring the Docker Startup File

Specify the container IP addresses Docker assigns when starting containers:

# The key change is adding the environment file
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS --registry-mirror=https://8nmu96bm.mirror.aliyuncs.com

After restarting, Docker uses the subnet assigned by flanneld; the key parameter is --bip.
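For reference, mk-docker-opts.sh derives DOCKER_NETWORK_OPTIONS from the subnet that flanneld writes out. The contents will look roughly like this (the values are illustrative; the actual subnet depends on what flanneld leased):

```shell
# Example contents of /run/flannel/subnet.env, written by flanneld
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.6.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true

# mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
# then produces something like:
# DOCKER_NETWORK_OPTIONS=" --bip=172.17.6.1/24 --mtu=1472"
```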

4.3.5 Restarting the Services
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable flanneld
[root@k8s-master ~]# systemctl restart flanneld
[root@k8s-master ~]# systemctl restart docker
4.3.6 Checking the Network

There are two main things to check:
  • route: the routing table is correct
  • ip: the interface addresses are correct
Note that the flannel interface and docker0 must be in the same subnet.
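A minimal way to perform these checks on each node (the interface name is flannel0 with the udp backend, or flannel.1 with vxlan; adjust accordingly):

```shell
# Show the routing table; there should be routes for the flannel subnets
route -n

# Show interface addresses; the flannel interface and docker0 should be in
# the same /16, with docker0 inside the flannel-leased /24
ip addr show flannel0
ip addr show docker0

# The leased subnet that flanneld handed to Docker
cat /run/flannel/subnet.env
```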

4.4 Checking the Cluster Deployment Status
[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@k8s-master ~]# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
10.210.28.243   Ready     <none>    9h        v1.9.0
10.210.28.244   Ready     <none>    14h       v1.9.0
[root@k8s-master ~]# kubectl get all
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   20h
4.5 Checking the Running Services
4.5.1 Services on the Master
[root@k8s-master ~]# systemctl status kubelet kube-proxy kube-apiserver kube-scheduler kube-controller-manager flanneld
4.5.2 Services on the Node

Check the status of each service:

[root@k8s-node01 ~]# systemctl status kubelet kube-proxy flanneld
5. Testing
5.1 Working Around the Missing Google Pause Container
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
5.2 Creating a Pod on the Master
[root@k8s-master ~]# kubectl run php-test --replicas=4 --image=richarvey/nginx-php-fpm --labels='app=demo' --port=80

Get information about all pods:

[root@k8s-master /]# kubectl get pod -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP            NODE
demo-69895d6f65-5xpqc       1/1       Running   2          18h       172.17.6.8    10.210.28.244
demo-69895d6f65-9xwdw       1/1       Running   6          2d        172.17.6.7    10.210.28.244
demo-69895d6f65-m8fr5       1/1       Running   2          18h       172.17.6.2    10.210.28.244
demo-69895d6f65-rf46m       1/1       Running   6          2d        172.17.6.5    10.210.28.244
demo404-54d6c746fd-gfv2g    1/1       Running   0          16h       172.17.6.9    10.210.28.244
demo404-54d6c746fd-kflbw    1/1       Running   0          16h       172.17.23.2   10.210.28.243
demo404-54d6c746fd-mk2st    1/1       Running   0          16h       172.17.23.3   10.210.28.243
demo404-54d6c746fd-x6vds    1/1       Running   0          16h       172.17.23.4   10.210.28.243
php-679bc4b9c-trvrk         1/1       Running   11         3d        172.17.6.3    10.210.28.244
php-test-656db4fcd5-fjqnw   1/1       Running   11         2d        172.17.6.4    10.210.28.244
php-test-656db4fcd5-jwvbg   1/1       Running   8          2d        172.17.6.6    10.210.28.244

Exec into one of the containers:

[root@k8s-master /]# kubectl exec -it php-test-656db4fcd5-fjqnw  bash
bash-4.3# 
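Finally, the pod network path can be exercised with a simple request from any node, using a pod IP from the listing above. A sketch, assuming pod IP 172.17.6.4:

```shell
# Request the page served by the nginx-php-fpm pod;
# an HTTP/1.1 200 OK response confirms the pod network works
curl -I http://172.17.6.4:80
```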