K8s 1.19 Deployment Study Notes

  1. Docker deployed and working
  2. Prepare the environment (all nodes; see the commands below):
  • firewall
  • selinux
  • swap
  • ip_forward
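A minimal prep sketch (assuming CentOS 7 with firewalld; the sed edits make the selinux/swap changes survive reboots):

# firewall: stop it, or open ports 6443/10250 instead
systemctl stop firewalld && systemctl disable firewalld
# selinux: permissive now and after reboot
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# swap: off now and after reboot
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

For ip_forward and the bridge netfilter settings: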
[root@vms129 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF
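The heredoc only writes the file; the bridge keys also need the br_netfilter module loaded before the values can take effect (a standard follow-up step, not captured in the original transcript):

modprobe br_netfilter
sysctl --system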
  3. Install Kubernetes

(1) Install the base packages (run on all nodes)

[root@vms129 ~]# yum install -y kubelet-1.19.0-0 kubeadm-1.19.0-0 kubectl-1.19.0-0 --disableexcludes=kubernetes
[root@vms129 ~]# systemctl start kubelet ; systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
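The --disableexcludes flag assumes a yum repo named kubernetes is configured with an exclude=kube* line; a commonly used Aliyun-mirror repo file looks roughly like this (the baseurl and gpgcheck settings are assumptions, adjust to your mirror):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
exclude=kube*
EOF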

(2) Initialize the cluster

# kubeadm init --config kube.conf    (deploy using an existing config file on this host)
[root@vms129 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.19.0 --pod-network-cidr=10.244.0.0/16
W0906 23:32:02.870615    4817 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vms129.example.com] and IPs [10.96.0.1 192.168.139.129]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vms129.example.com] and IPs [192.168.139.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vms129.example.com] and IPs [192.168.139.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.509319 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vms129.example.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node vms129.example.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zte0k0.7rth90a4jk5qx1fc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.139.129:6443 --token zte0k0.7rth90a4jk5qx1fc \
    --discovery-token-ca-cert-hash sha256:9d3a523e9c0301b5c83ae4b90dacb67bdee7d57699babb83aee92eedeabf4309

kubeconfig authentication
kubectl uses ~/.kube/config by default; a different file can be selected with export KUBECONFIG=/etc/xxx. Just run the commands that kubeadm init printed:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
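With the kubeconfig in place, kubectl can reach the API server; note the node will stay NotReady until a pod network is installed in step (3):

kubectl get nodes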

Joining worker nodes:

kubeadm join 192.168.139.129:6443 --token zte0k0.7rth90a4jk5qx1fc \
    --discovery-token-ca-cert-hash sha256:9d3a523e9c0301b5c83ae4b90dacb67bdee7d57699babb83aee92eedeabf4309

If this command was not saved after kubeadm init finished on the master, it can be regenerated:

kubeadm token create --print-join-command
W0906 23:49:01.109037    6363 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.139.129:6443 --token lka8tw.pej201bzp6mb7e3f     --discovery-token-ca-cert-hash sha256:9d3a523e9c0301b5c83ae4b90dacb67bdee7d57699babb83aee92eedeabf4309
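The regenerated command carries a new token because bootstrap tokens expire (24h TTL by default). Existing tokens can be inspected with:

kubeadm token list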

(3) Calico network

wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml

After downloading the file, edit it so the pool CIDR matches the --pod-network-cidr given to kubeadm init:
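In the v3.14 manifest the pool is set through the CALICO_IPV4POOL_CIDR environment variable on the calico-node container; it ships commented out, so uncomment it and match the init CIDR (a sketch of the edited snippet):

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

Then apply the manifest:

kubectl apply -f calico.yaml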

(4) kubectl command completion setup

[root@vms129 ~]# kubectl --help | grep bash
  completion    Output shell completion code for the specified shell (bash or zsh)
[root@vms129 ~]# kubectl completion bash
# add one line to /etc/profile
[root@vms129 ~]# cat /etc/profile
# /etc/profile
source <(kubectl completion bash)

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

[root@vms129 ~]# source /etc/profile
  4. Other notes

Issue 1: why does kubectl stop working after a node reboot?

If kubectl is broken after a reboot, check the following items:

  1. All NICs have statically configured IPs
  2. swap and selinux remain disabled after a reboot
  3. kubelet is enabled to start at boot
  4. docker is enabled to start at boot (verified below)
[root@vms129 ~]# systemctl is-active docker; systemctl is-enabled docker
active
enabled
[root@vms129 ~]# systemctl is-active kubelet ; systemctl is-enabled kubelet
activating
enabled
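The `activating` state above typically means kubelet is still starting or retrying; as long as the unit is enabled it settles once the container runtime and API server are up. If either service is not enabled, fix it with:

systemctl enable docker kubelet
systemctl start docker kubelet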

Issue 2: pasting into vim mangles the indentation
Enable paste mode in vim before pasting:
:set paste
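A toggle mapping in ~/.vimrc makes this a single keystroke (F2 is just an example binding); remember :set nopaste afterwards, since paste mode disables autoindent:

set pastetoggle=<F2>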

Issue 3: viewing the kubectl/kubeadm configuration

  1. The default config file location is ~/.kube/config.
     View it in kubeconfig format:
[root@master ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.13.29:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
[root@master ~]# 
[root@master ~]# # view the current cluster's configuration (pod network, image repository, version, etc.); it can be saved and reused with kubeadm init
[root@master ~]# kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.5
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@master ~]# 
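To reuse this for the --config deployment path mentioned in step (2), dump it to a file and feed it back in (a sketch, using the kube.conf name from earlier):

kubeadm config view > kube.conf
kubeadm init --config kube.conf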
  2. Set the $KUBECONFIG environment variable; writing the export into /etc/profile makes it permanent:
[root@master ~]# cat /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.

pathmunge () {
    case ":${PATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                PATH=$PATH:$1
            else
                PATH=$1:$PATH
            fi
    esac
}


if [ -x /usr/bin/id ]; then
    if [ -z "$EUID" ]; then
        # ksh workaround
        EUID=`/usr/bin/id -u`
        UID=`/usr/bin/id -ru`
    fi
    USER="`/usr/bin/id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
else
    pathmunge /usr/local/sbin after
    pathmunge /usr/sbin after
fi

HOSTNAME=`/usr/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
    umask 002
else
    umask 022
fi

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        if [ "${-#*i}" != "$-" ]; then 
            . "$i"
        else
            . "$i" >/dev/null
        fi
    fi
done

unset i
unset -f pathmunge
export KUBECONFIG=/etc/kubernetes/admin.conf
  3. Resetting the cluster
[root@master ~]# # after a node has been deleted from the cluster, reset it before re-joining
[root@master ~]# kubectl delete node <node-name>
[root@master ~]# kubeadm reset    # run on the node being reset (note: there is no "kubeadm delete node" subcommand)
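A fuller removal sequence drains the node first so its pods are evicted cleanly (a sketch; --delete-local-data is the flag name in this 1.19-era kubectl):

kubectl drain <node-name> --ignore-daemonsets --delete-local-data
kubectl delete node <node-name>
# then, on the removed node itself:
kubeadm reset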