Kubernetes 1.13 cluster deployment + dashboard + heapster

This article walks through deploying a v1.13.0 Kubernetes cluster with kubeadm: system configuration, cluster initialization, installing Docker, deploying a Pod network, joining worker nodes, and installing the Dashboard. It also covers integrating Heapster for monitoring and provides steps for tearing the cluster down.

v2.1

1. System configuration

1.1 Disable the firewall and SELinux

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux: setenforce for the running system, sed to persist across reboots
setenforce 0
sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config

1.2 Create the kernel-parameter file /etc/sysctl.d/k8s.conf

# Write the file
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load the br_netfilter module (required for the bridge-nf-call keys to exist), then apply with sysctl
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

1.3 Disable swap


# Turn off swap on the running system
swapoff -a

# Discourage swapping persistently via sysctl
echo "vm.swappiness=0" >> /etc/sysctl.d/k8s.conf

# Edit /etc/fstab and comment out the swap auto-mount, e.g.:
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
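The fstab edit can also be scripted. A minimal sketch, demonstrated on a scratch copy here; on a real node, back up /etc/fstab and point the same sed at it:

```shell
# Demo on a scratch copy; on a real node back up /etc/fstab and edit that instead
cat > fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Comment out every uncommented line that mounts a swap device
sed -ri 's|^([^#].*\bswap\b.*)$|#\1|' fstab.demo

cat fstab.demo   # the swap line is now prefixed with '#'
```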

2. Configure the yum repositories

# Add Alibaba Cloud's docker-ce repo
yum-config-manager --add-repo  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Add the Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3. Install Docker

# Build the yum cache
yum makecache fast

# Install dependencies
yum install -y xfsprogs yum-utils device-mapper-persistent-data lvm2

# Install a known-good Docker version (pinned to 18.06.1, not the latest)
yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7

# The available Docker versions can be listed with:
yum list docker-ce.x86_64  --showduplicates |sort -r

# Start Docker and enable it on boot
systemctl start docker
systemctl enable docker
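Optionally, direct Docker Hub pulls (used in section 5 for the control-plane images) can be accelerated by adding a registry mirror to the daemon configuration. A sketch; the mirror URL below is only an example, so substitute whichever accelerator endpoint you have access to:

```shell
# The mirror URL is an example; replace it with your own accelerator endpoint
mkdir -p /etc/docker
cat <<'EOF' > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF

# Restart the daemon so the mirror takes effect
systemctl restart docker
```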

 

4. Install kubeadm, kubelet and kubectl on every node

# Install kubelet, kubeadm and kubectl (unpinned, yum installs the latest 1.13.x;
# pin versions, e.g. kubelet-1.13.0, to match the target release exactly)
yum install -y kubelet kubeadm kubectl

# Enable kubelet so it comes back after a reboot (kubeadm init will start it)
systemctl enable kubelet

5. Prepare the required images

# kubeadm pulls its images from k8s.gcr.io, which is blocked, so the images must be prepared in advance.
# 'kubeadm config images list' shows which images are needed; the timeout in the output below also
# confirms that k8s.gcr.io cannot be reached.

[root@K8s-master ~]# kubeadm config images list
I0116 10:11:34.091410   91268 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0116 10:11:34.107299   91268 version.go:95] falling back to the local client version: v1.13.2
k8s.gcr.io/kube-apiserver:v1.13.2
k8s.gcr.io/kube-controller-manager:v1.13.2
k8s.gcr.io/kube-scheduler:v1.13.2
k8s.gcr.io/kube-proxy:v1.13.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

Based on that list (this article targets v1.13.0), find the corresponding images on Docker Hub, pull them, and re-tag them:

kube-apiserver:v1.13.0 
kube-controller-manager:v1.13.0
kube-scheduler:v1.13.0
kube-proxy:v1.13.0
pause:3.1
etcd:3.2.24
coredns:1.2.6

Once downloaded, tag each image as k8s.gcr.io/*

For example:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.0

docker tag  mirrorgooglecontainers/kube-apiserver:v1.13.0  k8s.gcr.io/kube-apiserver:v1.13.0 
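The per-image pull/tag pairs can be generated in one loop. A sketch (not from the original article): it only prints the commands so they can be reviewed before running; note that coredns is published under its own `coredns` organization on Docker Hub rather than under `mirrorgooglecontainers`:

```shell
# Generate the docker pull/tag commands for every required v1.13.0 image
gen_cmds() {
  for img in kube-apiserver:v1.13.0 kube-controller-manager:v1.13.0 \
             kube-scheduler:v1.13.0 kube-proxy:v1.13.0 \
             pause:3.1 etcd:3.2.24; do
    echo "docker pull mirrorgooglecontainers/$img"
    echo "docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img"
  done
  # coredns comes from its own Docker Hub organization
  echo "docker pull coredns/coredns:1.2.6"
  echo "docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6"
}

gen_cmds          # review the printed commands; execute them with: gen_cmds | sh
```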
 

[root@K8s-master ~]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                  v1.13.0             8fa56d18961f        6 weeks ago         80.2MB
k8s.gcr.io/kube-apiserver              v1.13.0             f1ff9b7e3d6e        6 weeks ago         181MB
k8s.gcr.io/kube-controller-manager     v1.13.0             d82530ead066        6 weeks ago         146MB
k8s.gcr.io/kube-scheduler              v1.13.0             9508b7d8008d        6 weeks ago         79.6MB
k8s.gcr.io/coredns                     1.2.6               f59dcacceff4        2 months ago        40MB
k8s.gcr.io/etcd                        3.2.24              3cab8e1b9802        3 months ago        220MB
k8s.gcr.io/pause                       3.1                 da86e6ba6ca1        13 months ago       742kB

The images are ready; now initialize the cluster.

 

6. Initialize the cluster

On the master node

6.1 Initialize your master node

The master is the machine that runs the control-plane components, including etcd (the cluster database) and the API server (which the kubectl CLI talks to).

  1. Choose a Pod network add-on, and check whether it requires any arguments to be passed to kubeadm init. Depending on the provider you choose, you may need to set --pod-network-cidr to its expected CIDR. See Installing a network add-on.
  2. (Optional) Unless specified otherwise, kubeadm advertises the master's IP on the network interface associated with the default gateway. To use a different interface, pass --apiserver-advertise-address=<ip-address> to kubeadm init. To deploy an IPv6 cluster, pass an IPv6 address, e.g. --apiserver-advertise-address=fd00::101.
  3. (Optional) Run kubeadm config images pull before kubeadm init to verify connectivity to gcr.io.

-- from the official Kubernetes documentation

https://kubernetes.io/zh/docs/setup/independent/create-cluster-kubeadm/

Start the initialization

# Initialize the cluster on the master
[root@K8s-master ~]# kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.146.10
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.146.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.146.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.146.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in
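On success, kubeadm init finishes by printing a `kubeadm join ...` command (save it; it is run on each worker node when joining the cluster) along with instructions for configuring kubectl on the master, which per the official docs are:

```shell
# Run as the user that will operate kubectl on the master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```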