Kubernetes Learning Notes, Part 2: Installation

1. Overall Architecture

etcd — open-source software providing reliable distributed data storage; it persistently stores the configuration and state of the K8s cluster.

K8s API server — the interface through which user programs (such as kubectl) and the other K8s components communicate. The other components do not talk to each other directly; everything goes through the API server. For example, only the API server connects to etcd, so any component that wants to update cluster state must read and write etcd's data via the API server.

Scheduler — the scheduling component; it assigns a worker node to each deployable component of a user application.

Controller Manager — performs cluster-level functions such as replicating components, tracking worker-node status, and handling node failures. It is composed of multiple controllers, many of which correspond to K8s resource types, e.g. the Replication Manager (managing ReplicationController resources), the ReplicaSet controller, and the PersistentVolume controller.

kube-proxy — load-balances network traffic between application components.

Kubelet — manages the containers on a worker node.

Container runtime — the component that actually runs the containers, e.g. Docker or rkt.

 

As the entry point of the Kubernetes system, the API server wraps the create/read/update/delete operations on the core objects and exposes them as a RESTful interface to external clients and internal components.
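To make "RESTful" concrete: every namespaced core resource lives under a predictable URL path on the API server. The sketch below only builds such paths using this cluster's master address (k8s_url is our own illustrative helper, not part of kubectl; kubectl constructs these calls for you):

```shell
# Build the REST path for a namespaced core-v1 resource on the API server.
# Illustrative helper only.
k8s_url() {
  # $1 = namespace, $2 = resource type (plural), $3 = optional object name
  base="https://192.168.6.100:6443/api/v1/namespaces/$1/$2"
  if [ -n "${3:-}" ]; then echo "$base/$3"; else echo "$base"; fi
}

k8s_url kube-system pods            # the path behind `kubectl get pods -n kube-system`
k8s_url default services kubernetes # a single named object
# e.g. curl -k -H "Authorization: Bearer $TOKEN" "$(k8s_url kube-system pods)"
```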

 

2. Two deployment approaches:

  1. k8s, docker and the other components are all installed directly on the physical machines (as system-level daemons). This is the traditional manual approach: complex, though ready-made scripts can be found online.

 

  2. The kubeadm approach: kubelet and docker are installed on the physical machines, and the other components are deployed as containers in pods (static pods by default, though they can also run as regular pods). Simple.


3. Deployment steps (installing via the second approach above)

  1. Master and nodes: install kubelet, kubeadm, docker
  2. Master: kubeadm init
  3. Nodes: kubeadm join

4. Hands-on deployment

Hosts:
192.168.6.100 master
192.168.6.101 node1
192.168.6.102 node2

Default pod network: 10.244.0.0/16
Default service network: 10.96.0.0/12
Node network: 172.20.0.0/16

System initialization
Disable the firewall and related services, and let bridged traffic be processed by iptables:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables 
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables 
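The echo settings above do not survive a reboot. To make them persistent they can go into a sysctl drop-in file (the file name k8s.conf is our choice), reloaded with sysctl --system:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```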

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0

Then run yum repolist to verify that the new repositories are usable.

wget http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import yum-key.gpg

yum install docker-ce kubelet kubeadm kubectl

Start the docker service:
systemctl daemon-reload
systemctl start docker
systemctl enable docker
docker info


Distribute SSH keys from the master to the nodes:
[root@master ~]# ssh-keygen
[root@master ~]# ssh-copy-id root@192.168.6.101
[root@master ~]# ssh-copy-id root@192.168.6.102
[root@master ~]# ssh node1
The authenticity of host 'node1 (192.168.6.101)' can't be established.
ECDSA key fingerprint is SHA256:Xz7Q7WFVuisRYJASODutHkF//Kw/RJZPpondCB6US7o.
ECDSA key fingerprint is MD5:ae:cc:de:f5:09:a4:e3:ec:2b:1a:93:3f:49:60:5f:b1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1' (ECDSA) to the list of known hosts.
Last login: Mon Feb 25 08:28:30 2019 from 10.249.100.226
[root@node01 ~]# exit
logout
Connection to node1 closed.

[root@master ~]# kubeadm config images list
I0225 22:03:15.247260   35483 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0225 22:03:15.248140   35483 version.go:95] falling back to the local client version: v1.13.3
k8s.gcr.io/kube-apiserver:v1.13.3
k8s.gcr.io/kube-controller-manager:v1.13.3
k8s.gcr.io/kube-scheduler:v1.13.3
k8s.gcr.io/kube-proxy:v1.13.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.6

docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
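The pull/tag commands above can be folded into a loop for the four images that share the mirrorgooglecontainers/<name>-amd64:v1.13.3 naming; pause, etcd and coredns use different names or tags and stay as the explicit commands above. The sketch below only builds and prints the command list (CMDS is our own variable) so it can be reviewed before piping to sh:

```shell
# Generate the pull/tag command pairs for the images that follow the
# mirrorgooglecontainers/<name>-amd64 naming pattern.
VER=v1.13.3
CMDS=""
for name in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  CMDS="${CMDS}docker pull mirrorgooglecontainers/${name}-amd64:${VER}
docker tag mirrorgooglecontainers/${name}-amd64:${VER} k8s.gcr.io/${name}:${VER}
"
done
printf '%s' "$CMDS"   # review first, then execute with: printf '%s' "$CMDS" | sh
```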

kubeadm init  --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap  

As instructed by the init output, set up kubectl for the current user:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config 
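For a root-only session there is a lighter alternative to copying the file: point kubectl at admin.conf through the KUBECONFIG environment variable. The copy above remains the right approach for regular users:

```shell
# Make kubectl use the cluster-admin credentials for this shell session only.
export KUBECONFIG=/etc/kubernetes/admin.conf
```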

Save the join command from the output; it is what the nodes will execute:
kubeadm join 192.168.6.100:6443 --token 2wh7mm.9zd1w12wyuqsdm7i --discovery-token-ca-cert-hash sha256:d0864f7935eff8a635f134e767c85b1a99629d3bc13508d3fc1c586e8a7f55b4

Check component status:
kubectl get cs
kubectl get componentstatus

Check node status:
kubectl get nodes

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   24h   v1.13.3


---------------------------------------
Deploy the pod network
https://github.com/coreos/flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
docker image ls
kubectl get pods
kubectl get pods -n kube-system
kubectl get ns

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-29lwf         1/1     Running   0          21m
kube-system   coredns-86c58d9df4-5n755         1/1     Running   0          21m
kube-system   etcd-master                      1/1     Running   0          20m
kube-system   kube-apiserver-master            1/1     Running   0          20m
kube-system   kube-controller-manager-master   1/1     Running   0          20m
kube-system   kube-flannel-ds-amd64-6fj5c      1/1     Running   0          10m
kube-system   kube-proxy-ldskb                 1/1     Running   0          21m
kube-system   kube-scheduler-master            1/1     Running   0          20m


Maintenance commands
kubectl --namespace kube-system logs  kube-flannel-ds-amd64-6fj5c
-------------------------------
scp the repo files to the nodes
Enable docker and kubelet to start on boot
Start docker

Node installation
yum install docker-ce kubelet kubeadm
Then run the join command saved from the master earlier.
The token had expired at this point; generating a new one fixed it:
kubeadm join 192.168.6.100:6443 --token o6vu26.dlq1knjxwqq93aht --discovery-token-ca-cert-hash sha256:d0864f7935eff8a635f134e767c85b1a99629d3bc13508d3fc1c586e8a7f55b4

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   Ready      master   25h    v1.13.3
node01   NotReady   <none>   49s    v1.13.3
node02   NotReady   <none>   105s   v1.13.3
The environment is now fully installed.
==================
Problem 1
kubelet fails to start (do not start kubelet before running kubeadm init; init is what generates its config file):
Feb 25 21:56:53 master systemd: kubelet.service holdoff time over, scheduling restart.
Feb 25 21:56:53 master systemd: Stopped kubelet: The Kubernetes Node Agent.
Feb 25 21:56:53 master systemd: Started kubelet: The Kubernetes Node Agent.
Feb 25 21:56:53 master kubelet: F0225 21:56:53.242340   33085 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Feb 25 21:56:53 master systemd: kubelet.service: main process exited, code=exited, status=255/n/a
Feb 25 21:56:53 master systemd: Unit kubelet.service entered failed state.
Feb 25 21:56:53 master systemd: kubelet.service failed

Problem 2:
[root@master ~]# docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3
Error response from daemon: Get https://registry-1.docker.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused
Fix:
Remove the proxy configuration.

Problem 3
Feb 25 22:39:53 master kubelet: W0225 22:39:53.428422    3123 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Feb 25 22:39:53 master kubelet: E0225 22:39:53.428599    3123 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The CNI config file from flannel (or calico) had not been generated yet.
Once it was generated, the error cleared:
[root@master ~]# ll /etc/cni/net.d/
total 4
-rw-r--r--. 1 root root 267 Feb 25 22:51 10-flannel.conflist
 
Problem 4
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting

Fixed by upgrading CentOS 7 to a newer Linux kernel.
Add the ELRepo repository:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

With the repo enabled, list the available kernel-related packages:
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

Next, install the latest kernel (kernel-ml is the mainline build):
yum --enablerepo=elrepo-kernel install kernel-ml

Set the default boot kernel.
Regenerate the GRUB boot menu from the kernels found under /boot/:
grub2-mkconfig -o /boot/grub2/grub.cfg

List the available GRUB menu entries:
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg

Set the default kernel by menu index:
grub2-set-default 0

Alternatively, edit /etc/default/grub and set the default entry there (0 means the first one):
GRUB_DEFAULT=0

After rebooting into the new kernel, everything worked normally.

Problem 5
[root@node02 ~]# kubeadm join 192.168.6.100:6443 --token 2wh7mm.9zd1w12wyuqsdm7i --discovery-token-ca-cert-hash sha256:d0864f7935eff8a635f134e767c85b1a99629d3bc13508d3fc1c586e8a7f55b4
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.6.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.6.100:6443"
[discovery] Failed to connect to API Server "192.168.6.100:6443": token id "2wh7mm" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token

Caused by an expired token. On the node, reset before retrying the join:
kubeadm reset
List the existing tokens on the master:
[root@master ~]# kubeadm token list
TOKEN     TTL       EXPIRES   USAGES    DESCRIPTION   EXTRA GROUPS
The list is empty here, so create a new token (kubeadm token create --print-join-command would also print the complete join command, including the CA cert hash):
[root@master ~]# kubeadm token create
o6vu26.dlq1knjxwqq93aht
[root@master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
o6vu26.dlq1knjxwqq93aht   23h       2019-02-27T23:40:25+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
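Bootstrap tokens follow a fixed format: 6 lowercase alphanumerics, a dot, then 16 lowercase alphanumerics. A quick sanity check before pasting a token into kubeadm join (check_token is a hypothetical helper of ours, not a kubeadm command):

```shell
# Return success iff $1 matches the kubeadm bootstrap token format
# [a-z0-9]{6}.[a-z0-9]{16}.
check_token() {
  echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

check_token o6vu26.dlq1knjxwqq93aht && echo "format OK"
check_token o6vu26 || echo "not a valid token format"
```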

 

Study group: k8s学习群 (QQ group: 153144292)
