JavaEE: Kubernetes Cluster Setup

Overview:

Kubernetes automates the deployment of applications to Docker and handles application scaling and management. It coordinates Docker containers running across a cluster of hosts.

Installing and Using Kubernetes:

I. Master node: install and start the etcd service (the Master node runs on a dedicated CentOS 7 VM):

1. Download the etcd package and upload it to CentOS 7:

(1) Download etcd-v3.4.10-linux-amd64.tar.gz:

https://github.com/etcd-io/etcd/releases

(2) Create the /usr/local/k8s directory and cd into it.

(3) Upload the package to /usr/local/k8s with a file-transfer tool such as FinalShell.

2. Install etcd:

(1) Extract the package (in /usr/local/k8s):

[root@localhost k8s]# tar -zxvf etcd-v3.4.10-linux-amd64.tar.gz

(2) Copy the etcd and etcdctl binaries to /usr/bin (in /usr/local/k8s/etcd-v3.4.10-linux-amd64):

[root@localhost etcd-v3.4.10-linux-amd64]# cp etcd etcdctl /usr/bin/

(3) Create the systemd unit file etcd.service:

[root@localhost etcd-v3.4.10-linux-amd64]# vi /usr/lib/systemd/system/etcd.service

Contents (create the /var/lib/etcd directory manually first, or the etcd service will fail to start):

[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=simple
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd
Restart=on-failure
[Install]
WantedBy=multi-user.target
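The unit's EnvironmentFile=- line points at /etc/etcd/etcd.conf, which this walkthrough never creates (the leading "-" makes the file optional, so etcd starts with built-in defaults without it). If you want to set options explicitly, a minimal single-node sketch, with assumed values, would be:

```ini
# /etc/etcd/etcd.conf -- optional environment file for etcd.
# ETCD_NAME is an arbitrary member name; the URLs assume etcd only
# serves local clients on the default port 2379.
ETCD_NAME=default
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_CLIENT_URLS=http://127.0.0.1:2379
ETCD_ADVERTISE_CLIENT_URLS=http://127.0.0.1:2379
```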

3. Start the etcd service:

(1) Reload systemd and enable the service (this creates the unit symlink):

[root@localhost etcd-v3.4.10-linux-amd64]# systemctl daemon-reload
[root@localhost etcd-v3.4.10-linux-amd64]# systemctl enable etcd.service

(2) Start the service:

[root@localhost etcd-v3.4.10-linux-amd64]# systemctl start etcd.service

(3) Check the service:

# Check whether the service is running
[root@localhost etcd-v3.4.10-linux-amd64]# systemctl status etcd
# Check endpoint health
[root@localhost etcd-v3.4.10-linux-amd64]# etcdctl endpoint health

II. Master node: install and start the Kubernetes services (the Master node runs on a dedicated CentOS 7 VM):

1. Download the kubernetes-server package and upload it to CentOS 7:

(1) Download kubernetes-server-linux-amd64.tar.gz:

https://github.com/kubernetes/kubernetes/releases

(2) cd into the /usr/local/k8s directory (created in step I).

(3) Upload the package to /usr/local/k8s with FinalShell.

2. Install kubernetes-server:

(1) Extract the package (in /usr/local/k8s):

[root@localhost k8s]# tar -zxvf kubernetes-server-linux-amd64.tar.gz

(2) Copy the kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl binaries to /usr/bin (in /usr/local/k8s/kubernetes/server/bin):

[root@localhost bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/

3. Create the kube-apiserver unit file:

[root@localhost bin]# vi /usr/lib/systemd/system/kube-apiserver.service

Contents (create the /etc/kubernetes directory manually first):

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
[Install]
WantedBy=multi-user.target

4. Configure the apiserver options file (create the /var/log/kubernetes directory manually first):

[root@localhost kubernetes]# vi /etc/kubernetes/apiserver

Contents:

KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

5. Configure the kube-controller-manager service (depends on kube-apiserver):

(1) Create kube-controller-manager.service:

[root@localhost kubernetes]# vi /usr/lib/systemd/system/kube-controller-manager.service

Contents:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

(2) Create the controller-manager options file:

[root@localhost kubernetes]# vi /etc/kubernetes/controller-manager

Contents (192.168.233.129 is this VM's IP):

KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.233.129:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

6. Configure the kube-scheduler service (depends on kube-apiserver):

(1) Create kube-scheduler.service:

[root@localhost kubernetes]# vi /usr/lib/systemd/system/kube-scheduler.service

Contents:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

(2) Create the scheduler options file:

[root@localhost kubernetes]# vi /etc/kubernetes/scheduler

Contents (192.168.233.129 is this VM's IP):

KUBE_SCHEDULER_ARGS="--master=http://192.168.233.129:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

7. Start the services in order:

[root@localhost kubernetes]# systemctl daemon-reload
[root@localhost kubernetes]# systemctl enable kube-apiserver.service
[root@localhost kubernetes]# systemctl start kube-apiserver.service
[root@localhost kubernetes]# systemctl enable kube-controller-manager.service
[root@localhost kubernetes]# systemctl start kube-controller-manager.service
[root@localhost kubernetes]# systemctl enable kube-scheduler.service
[root@localhost kubernetes]# systemctl start kube-scheduler.service
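The seven commands above can also be expressed as a small loop. This is just a convenience sketch of the same sequence: it prints each command so you can review them, then pipe the output to sh on the master to execute.

```shell
# Print the enable/start sequence for the three control-plane services,
# in dependency order (apiserver first). Run with:
#   master_service_cmds | sh
master_service_cmds() {
  echo "systemctl daemon-reload"
  for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    echo "systemctl enable ${svc}.service"
    echo "systemctl start ${svc}.service"
  done
}
master_service_cmds
```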

III. Node installation (multiple Nodes can be added; each Node runs on a dedicated CentOS 7 VM):

1. Configure the kubelet service:

(1) Copy the kubelet and kube-proxy binaries to /usr/bin (in /usr/local/k8s/kubernetes/server/bin):

[root@localhost bin]# cp kubelet kube-proxy /usr/bin/

(2) Create kubelet.service:

[root@localhost bin]# vi /usr/lib/systemd/system/kubelet.service

Contents (create the /var/lib/kubelet working directory manually first):

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
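Note that this unit (and the kube-proxy unit below) also reads /etc/kubernetes/config, which these steps never create; the leading "-" makes it optional, but you can supply the shared variables there instead of repeating them. A sketch using this guide's addresses (the values are assumptions):

```ini
# /etc/kubernetes/config -- optional shared settings read by the kubelet
# and kube-proxy units ($KUBE_LOGTOSTDERR, $KUBE_LOG_LEVEL, $KUBE_MASTER).
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=2"
KUBE_MASTER="--master=http://192.168.233.129:8080"
```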

(3) Create the kubelet options file (set this Node's IP in --hostname-override):

[root@localhost bin]# vi /etc/kubernetes/kubelet

Contents (192.168.233.130 is this Node's IP):

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.233.130 --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false"

(4) Configure the connection to the Master apiserver:

vi /etc/kubernetes/kubeconfig

Contents:

apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://192.168.233.129:8080
    name: local
contexts:
  - context:
      cluster: local
    name: mycontext
current-context: mycontext

2. Configure the kube-proxy service:

(1) Create kube-proxy.service:

[root@localhost kubernetes]# vi /usr/lib/systemd/system/kube-proxy.service

Contents:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
KillMode=process

[Install]
WantedBy=multi-user.target

(2) Create the proxy options file:

[root@localhost kubernetes]# vi /etc/kubernetes/proxy

Contents (192.168.233.129 is the Master's IP; 192.168.233.130 is this Node's IP):

KUBE_PROXY_ARGS="--master=http://192.168.233.129:8080 --hostname-override=192.168.233.130 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"

3. Start the services:

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl enable kube-proxy
systemctl start kube-proxy

IV. Health check:

1. Copy the kubectl binary to /usr/bin (in /usr/local/k8s/kubernetes/server/bin):

cp kubectl /usr/bin/

2. Check the cluster nodes:

kubectl get nodes

If no nodes are listed, apply the following fix:

vi /etc/kubernetes/apiserver

# Remove ServiceAccount from the --admission-control list in KUBE_API_ARGS
# (leave the other values, --admission-control=…, in place),
# then restart the service:
systemctl restart kube-apiserver

3. Check the status of the master components:

kubectl get cs

4. Create nginx-rc.yaml in /usr/local/k8s:

(1) Create the file:

vi nginx-rc.yaml

Contents:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

(2) Run the command:

kubectl create -f nginx-rc.yaml

5. Create nginx-svc.yaml in /usr/local/k8s:

(1) Create the file:

vi nginx-svc.yaml

Contents:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 3333
  selector:
    app: nginx

(2) Run the command:

kubectl create -f nginx-svc.yaml
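Once the controller and service exist, a quick end-to-end check is to hit the NodePort from any machine that can reach a node. The node IP below is an assumption for this guide's addresses; substitute your own, and wait until the pods are Running first.

```shell
# Build the NodePort URL for the nginx service defined above.
NODE_IP=192.168.233.130   # assumed node address; substitute your own
NODE_PORT=3333            # matches nodePort in nginx-svc.yaml
URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"
# Then, on the master:
#   kubectl get pods -l app=nginx    # wait until all replicas are Running
#   curl "$URL"                      # should return the nginx welcome page
```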

6. Fix for image pull failures:

docker pull kubernetes/pause
docker tag docker.io/kubernetes/pause:latest 192.168.133.128:5000/google_containers/pause-amd64:3.0
docker push 192.168.133.128:5000/google_containers/pause-amd64:3.0
vi /etc/kubernetes/kubelet

Append the following flag to KUBELET_ARGS (pull the pause image from this private registry):

--pod-infra-container-image=192.168.133.128:5000/google_containers/pause-amd64:3.0

Restart the kubelet service:

systemctl restart kubelet

V. Common commands:

1. List the pods in the current namespace:

kubectl get pods

2. List all resources:

kubectl get all

3. Create resources from a file:

kubectl create -f kubernate-pvc.yaml

4. Delete resources:

kubectl delete pods/test-pd
kubectl delete -f rc-nginx.yaml

5. Show which node a pod is running on:

kubectl get pod test-pd -o wide

6. View a container's logs:

kubectl logs nginx-8586cf59-mwwtc

7. Open an interactive shell in a container:

kubectl exec -it nginx-8586cf59-mwwtc -- /bin/bash

8. For a pod named mypod with multiple containers, enter a specific container with --container:

kubectl exec -it mypod --container mydocker -- /bin/bash

9. List services:

kubectl get svc

 
