A Full Record of Installing Kubernetes on Ubuntu


Inspect the generated certificate:

openssl x509 -noout -text -in kubernetes.pem
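Beyond dumping the whole certificate with `-text`, a couple of targeted openssl queries are handy for checking just the subject and the expiry date. A self-contained sketch: it generates a throwaway self-signed certificate so the commands can be tried anywhere; substitute your real kubernetes.pem.

```shell
# Generate a throwaway self-signed cert purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=kubernetes" 2>/dev/null

# Targeted queries: who the cert is for, and when it expires
openssl x509 -noout -subject -in /tmp/demo-cert.pem
openssl x509 -noout -enddate -in /tmp/demo-cert.pem
```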


11. Distribute the certificates:



Copy the generated certificate and key files (the ones ending in .pem) to /etc/kubernetes/ssl on every machine for later use;

mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
scp *.pem root@10.20.100.236:/etc/kubernetes/ssl
scp *.pem root@192.168.174.128:/etc/kubernetes/ssl
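With more nodes the scp lines multiply; a small loop keeps the host list in one place. This sketch is a dry run: the `echo` prints each command instead of running it, so nothing is copied until you delete the `echo`. The host list is the one from this setup; adjust it to yours.

```shell
# Nodes that need the certificates (dry run: echo prints instead of copying)
NODES="10.20.100.236 192.168.174.128"
for n in $NODES; do
  echo scp "*.pem" "root@$n:/etc/kubernetes/ssl"
done
```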


Then I found that configuring all of this was far too slow. After some digging, I decided to skip the verification files for now.
So none of the above was actually applied; the real work starts below.


### Installing all the components


### Prepare a working directory first; all downloads and operations below happen in it



cd /mnt/
mkdir k8s
cd k8s
sudo swapoff -a
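Note that `swapoff -a` only lasts until reboot; kubelet will complain again after a restart unless the swap entry in /etc/fstab is also commented out. A sketch that operates on a demo copy of fstab (on the real machine, run the same sed against /etc/fstab, keeping the `.bak` backup it makes):

```shell
# Demo fstab standing in for the real /etc/fstab
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Comment out every line that mentions swap so the change survives reboots
sed -i.bak '/\bswap\b/ s/^/#/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```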


Install etcd



tar zxvf etcd-v3.3.0-linux-amd64.tar.gz

sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /etc/etcd/
sudo vim /etc/etcd/etcd.conf

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.20.100.236:2379"


Create the systemd unit file



sudo vim /lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
User=sunht
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/mnt/k8s/etcd-v3.3.0-linux-amd64/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target


Start the service



sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd


Check the service and the port



sudo systemctl status etcd

netstat -apn | grep 2379
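If netstat is missing (newer Ubuntu releases ship `ss` instead), bash's built-in /dev/tcp pseudo-device gives a dependency-free reachability check. A sketch, probing localhost for the etcd client port; it is bash-specific, so run it under bash:

```shell
# Probe TCP port 2379 via bash's /dev/tcp (no netstat or ss required)
if (exec 3<>/dev/tcp/127.0.0.1/2379) 2>/dev/null; then
  echo "etcd port 2379: open"
else
  echo "etcd port 2379: closed"
fi
```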


Create the etcd network entry



etcdctl set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'
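A silent killer here is curly "smart" quotes pasted from a web page: etcd stores the mangled string and flannel later fails to parse it. The JSON payload can be sanity-checked locally before handing it to etcdctl, since `python3 -m json.tool` exits non-zero on invalid JSON:

```shell
# Validate the flannel network config before writing it into etcd
CONFIG='{ "Network": "172.17.0.0/16" }'
echo "$CONFIG" | python3 -m json.tool
```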


This etcd entry is the network flannel will carve up for Docker. Docker's current bridge gateway is 172.17.0.1, so I reuse the 172.17.0.0/16 range here.
If you deploy an etcd cluster, the steps above must be repeated on every etcd server. I am running standalone, so my etcd setup is done.


### Kubernetes common configuration


Create the Kubernetes configuration directory



sudo mkdir /etc/kubernetes
sudo vim /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.20.100.236:6060"


Port 8080 was already taken, so I use 6060 instead; keep an eye out for other places that need the same change.


#### Configure the kube-apiserver service on the master host as well



tar -xzvf kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-node-linux-amd64.tar.gz

sudo vim /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=6060"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.20.100.236:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.17.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""


#### Create the systemd unit file



sudo vim /lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
Wants=etcd.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
# Note: the binary is kube-apiserver -- copy-paste this rather than typing it; a typo here cost me half a day
ExecStart=/mnt/k8s/kubernetes/server/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


### Configure the kube-controller-manager service



sudo vim /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS=""


Create the systemd unit file



sudo vim /lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
After=kube-apiserver.service
Requires=etcd.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/mnt/k8s/kubernetes/server/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


### Configure the kube-scheduler service


Create the kube-scheduler configuration file



sudo vim /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS=""


Create the systemd unit file



sudo vim /lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/mnt/k8s/kubernetes/server/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_MASTER
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


### Start the Kubernetes master services



sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler


After the services start successfully, the master side is ready.


### Configure Kubernetes on the node


/etc/kubernetes/config is the same as on the master.


### flannel configuration


#### Create the configuration directory and file



sudo vim /etc/default/flanneld.conf
FLANNEL_ETCD_ENDPOINTS="http://10.20.100.236:2379"
FLANNEL_ETCD_PREFIX="/coreos.com/network"


The FLANNEL_ETCD_PREFIX option is the etcd network path configured a moment ago.
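Concretely, flannel looks up the key `<prefix>/config`, which is exactly the key written with `etcdctl set` earlier. A tiny sketch showing the relationship:

```shell
# flannel reads its network config from "<FLANNEL_ETCD_PREFIX>/config"
FLANNEL_ETCD_PREFIX="/coreos.com/network"
echo "flannel reads key: ${FLANNEL_ETCD_PREFIX}/config"
```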


Create the systemd unit file



sudo vim /lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
After=etcd.service
Before=docker.service

[Service]
User=root
EnvironmentFile=/etc/default/flanneld.conf
ExecStart=/mnt/k8s/flanneld \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
        $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/flannel/mk-docker-opts.sh -k DOCKER_OPTS -d /run/flannel/docker
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service


Then start the flanneld service



sudo systemctl daemon-reload
sudo systemctl enable flanneld
sudo systemctl start flanneld


Check whether the service started



sudo systemctl status flanneld

● flanneld.service - Flanneld
Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-03-27 14:54:17 HKT; 16s ago
Docs: https://github.com/coreos/flannel
Process: 6840 ExecStartPost=/usr/bin/flannel/mk-docker-opts.sh -k DOCKER_OPTS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
Main PID: 6814 (flanneld)
Tasks: 23
Memory: 7.4M
CPU: 113ms
CGroup: /system.slice/flanneld.service
└─6814 /mnt/k8s/flanneld -etcd-endpoints=http://10.20.100.236:2379 -etcd-prefix=/coreos.com/network

Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.328085 6814 main.go:505] Defaulting external address to interface address (10.20.100.236)
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.328186 6814 main.go:235] Created subnet manager: Etcd Local Manager with Previous Subnet: 172.17.40.0/24
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.328194 6814 main.go:238] Installing signal handlers
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.329064 6814 main.go:353] Found network config - Backend type: udp
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.343361 6814 local_manager.go:147] Found lease (172.17.40.0/24) for current IP (10.20.100.236), reusing
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.352201 6814 main.go:300] Wrote subnet file to /run/flannel/subnet.env
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.352214 6814 main.go:304] Running backend.
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.352311 6814 udp_network_amd64.go:100] Watching for new subnet leases
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.360031 6814 main.go:396] Waiting for 22h59m59.983567988s to renew lease
Mar 27 14:54:17 ubuntu2 systemd[1]: Started Flanneld.
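The "Wrote subnet file to /run/flannel/subnet.env" line in the log is the handoff point to Docker: mk-docker-opts.sh turns that file into DOCKER_OPTS. A sketch of the mechanism, using a copy of the subnet file with this host's values; the real derivation is done by mk-docker-opts.sh, this only illustrates the mapping:

```shell
# Recreate the subnet file flannel wrote (values from this host's lease, see the log)
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.40.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
EOF

# mk-docker-opts.sh derives Docker's flags roughly like this
. /tmp/subnet.env
echo "DOCKER_OPTS=\"--bip=${FLANNEL_SUBNET} --ip-masq=${FLANNEL_IPMASQ} --mtu=${FLANNEL_MTU}\""
```

The resulting --bip, --ip-masq, and --mtu values are the ones that show up on the dockerd command line once Docker is restarted with the flannel drop-in.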


### Docker installation and configuration



sudo apt-get install docker.io


#### Apply the flannel network to Docker


Modify Docker's systemd configuration with a drop-in file.



sudo mkdir /lib/systemd/system/docker.service.d
sudo vim /lib/systemd/system/docker.service.d/flannel.conf

[Service]
EnvironmentFile=-/run/flannel/docker


Restart the Docker service.



sudo systemctl daemon-reload
sudo systemctl restart docker


Check whether Docker picked up the flannel network.



sudo ps -ef | grep docker

root 7039 1 0 14:58 ? 00:00:00 /usr/bin/dockerd -H fd:// --bip=172.17.40.1/24 --ip-masq=true --mtu=1472


### Configure the kubelet service


#### Create the kubelet data directory



sudo mkdir /var/lib/kubelet


Create the kubelet configuration file


kubelet's dedicated configuration file is /etc/kubernetes/kubelet



sudo vim /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=10.20.100.236"
KUBELET_PORT="--kubelet-port=10250"
#KUBELET_API_SERVER="--api-servers=http://10.20.100.236:6060"
KUBELET_API_SERVER="--kubeconfig=/var/lib/kubelet/kubeconfig"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true"


Create the systemd unit file



sudo vim /lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target


Start the kubelet service



sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
journalctl -xe
journalctl -xefu kubelet ## I got stuck at this step for a long time with the error: exitCode 2 invalidArgument


Since v1.8, kubelet no longer supports the api-servers flag. How does a newer kubelet talk to the API server, then? Through the kubeconfig flag, which points at a config file. (This is a big pitfall: with the old style of configuration, the master will not find this node.)
In the /etc/kubernetes/kubelet configuration file there is one setting:



> 
> KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubeconfig"
> 
> 
> 
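Rather than hand-writing /var/lib/kubelet/kubeconfig, kubectl can also generate it with its `config` subcommands. A dry-run sketch that only prints the commands (remove the `echo`s to actually write the file; the path and names match this setup):

```shell
# Print (dry run) the kubectl commands that would generate the kubeconfig
KC=/var/lib/kubelet/kubeconfig
echo kubectl config set-cluster myk8s --server=http://10.20.100.236:6060 --kubeconfig="$KC"
echo kubectl config set-context myk8s-context --cluster=myk8s --kubeconfig="$KC"
echo kubectl config use-context myk8s-context --kubeconfig="$KC"
```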


### Edit the configuration file /var/lib/kubelet/kubeconfig



apiVersion: v1
clusters:
- cluster:
    server: http://10.20.100.236:6060
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: ""
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users: []

The configuration on the local VM.



KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=192.168.174.128"
KUBELET_PORT="--kubelet-port=10250"
#KUBELET_API_SERVER="--api-servers=http://10.20.100.236:6060"
KUBELET_API_SERVER="--kubeconfig=/var/lib/kubelet/kubeconfig"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true"


The systemd configuration on the local VM



vi /lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/mnt/k8s/kubernetes/server/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target


### Configure the kube-proxy service


Create the kube-proxy configuration file



sudo vi /etc/kubernetes/proxy

KUBE_PROXY_ARGS=""


#### Create the systemd unit file



sudo vim /lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/mnt/k8s/kubernetes/server/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


Start the proxy



sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy


#### Check node status


Run kubectl get node to check node status. When every node shows Ready, it has successfully connected to the master; otherwise, go to that node and find the cause, for example by reading the kubelet logs with journalctl -u kubelet.service.



$ kubectl get node
NAME STATUS AGE
192.168.56.160 Ready d
192.168.56.161 Ready d ## this sample output is copied from someone else's post


Because I configured port 6060 locally, one extra step is needed: point kubectl at port 6060.



kubectl --server=http://10.20.100.236:6060 get nodes

NAME STATUS ROLES AGE VERSION
10.20.100.236 Ready 23m v1.9.11
192.168.174.128 Ready 25m v1.9.11
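Typing `--server=...` on every invocation gets old; a tiny wrapper function keeps the address in one place. A sketch (the `echo` makes it a printable dry run for demonstration; remove it so the wrapper actually invokes kubectl):

```shell
# Wrapper so every call targets the API server on port 6060 (echo = dry run)
k() { echo kubectl --server=http://10.20.100.236:6060 "$@"; }

k get nodes
```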


### Kubernetes test

Test whether Kubernetes was installed successfully.


#### Write the YAML file


On the Kubernetes master, create an nginx YAML file that defines an nginx ReplicationController.



vim rc_nginx.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx


### Create the pod


Run kubectl create to create the ReplicationController. It is configured with two replicas, and our environment has two Kubernetes nodes, so it should run one Pod on each node.
Note: this step can take a long time, since it pulls the nginx image from the internet, plus the essential pod-infrastructure image.
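Since the image pull is the slow part, the images can be fetched on each node ahead of time. A dry-run sketch that prints the pull commands (remove the `echo` to actually pull on each node):

```shell
# Images every node will need (dry run: echo prints instead of pulling)
for img in nginx registry.access.redhat.com/rhel7/pod-infrastructure:latest; do
  echo docker pull "$img"
done
```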



kubectl --server=http://10.20.100.236:6060 create -f ./rc_nginx.yaml


### Check the status


Run kubectl get pod and kubectl get rc to check the pod and rc status. The pods may sit in ContainerCreating at first; once the required images finish downloading, the containers are created and the pod status should show Running.



kubectl --server=http://10.20.100.236:6060 get rc

NAME DESIRED CURRENT READY AGE
nginx 2 2 0 2m


The corresponding pod status:



kubectl --server=http://10.20.100.236:6060 get pod -o wide

NAME READY STATUS RESTARTS AGE IP NODE
nginx-mk58l 0/1 ContainerCreating 0 5m 10.20.100.236
nginx-xbx2p 0/1 ContainerCreating 0 5m 192.168.174.128

which later became:
NAME READY STATUS RESTARTS AGE IP NODE
nginx-mk58l 1/1 Running 1 42m 172.17.40.5 10.20.100.236
nginx-xbx2p 1/1 Running 0 42m 172.17.35.2 192.168.174.128

docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
96ed1c12b646 nginx@sha256:c8a861b8a1eeef6d48955a6c6d5dff8e2580f13ff4d0f549e082e7c82a8617a2 “nginx -g 'daemon …” 9 minutes ago Up 9 minutes k8s_nginx_nginx-mk58l_default_13d7d944-512c-11e9-b685-da6056ccded9_1
00f082a41e57 registry.access.RedHat.com/rhel7/pod-infrastructure:latest “/usr/bin/pod” 9 minutes ago Up 9 minutes k8s_POD_nginx-mk58l_default_13d7d944-512c-11e9-b685-da6056ccded9_1
8d3d4d2c692b nginx@sha256:c8a861b8a1eeef6d48955a6c6d5dff8e2580f13ff4d0f549e082e7c82a8617a2 “nginx -g 'daemon …” 9 minutes ago Exited (0) 9 minutes ago k8s_nginx_nginx-mk58l_default_13d7d944-512c-11e9-b685-da6056ccded9_0
54dd5cafa1cc registry.access.RedHat.com/rhel7/pod-infrastructure:latest “/usr/bin/pod” 10 minutes ago Exited (0) 9 minutes ago k8s_POD_nginx-mk58l_default_13d7d944-512c-11e9-b685-da6056ccded9_0_20ebccfa


Now it's just waiting; no telling how long the deployment will take.


Well, it finally all worked!


Then, on reflection, I stopped the local VM and checked again after a while:



kubectl --server=http://10.20.100.236:6060 get rc

NAME DESIRED CURRENT READY AGE
nginx 2 2 2 1h

kubectl --server=http://10.20.100.236:6060 get pods

NAME READY STATUS RESTARTS AGE
nginx-dbfr4 1/1 Running 0 17m
nginx-mk58l 1/1 Running 1 1h
nginx-xbx2p 1/1 Unknown 0 1h

