Kubernetes Study Notes

Based on Mr. Li's blog:
https://www.funtl.com/zh/service-mesh-kubernetes/

System Preparation

See the previous article on correctly installing Ubuntu and Kubernetes.

This installation uses Ubuntu Server X64 18.04 LTS to build a Kubernetes cluster of 1 master and 2 worker nodes. The virtual machines must meet a few basic requirements:

  • OS: Ubuntu Server X64 18.04 LTS (the steps are identical on 16.04; earlier versions differ)
  • CPU: at least 1 CPU with 2 cores
  • Memory: at least 2 GB
  • Disk: at least 20 GB

Configure each virtual machine as follows:

  • Disable swap: sudo swapoff -a

  • Keep swap disabled after reboot: comment out the swap entry in /etc/fstab

  • Disable the firewall: sudo ufw disable

  • Edit cloud.cfg so cloud-init does not reset the hostname:

  • vi /etc/cloud/cloud.cfg

     	# This defaults to false; change it to true
     	preserve_hostname: true
    
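The fstab edit above can be scripted instead of done by hand. A minimal sketch, demonstrated on a sample fstab line (on the real system, run the same sed with sudo and -i against /etc/fstab):

```shell
# Comment out any uncommented line containing the word "swap".
# On the real system: sudo sed -i 's/^[^#].*\bswap\b.*/# &/' /etc/fstab
echo '/swap.img none swap sw 0 0' | sed 's/^[^#].*\bswap\b.*/# &/'
# → # /swap.img none swap sw 0 0
```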

Installing the Software

1. Install Docker

# Update the package index
sudo apt-get update
# Install the required dependencies
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Add the package repository
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Update the package index again
sudo apt-get -y update
# Install Docker CE
sudo apt-get -y install docker-ce

2. Verify the installation

docker version

3. Configure a registry mirror (accelerator)

Set up a mirror as described at https://www.daocloud.io/mirror, then restart Docker:

sudo systemctl restart docker
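A typical accelerator setup writes a registry mirror into Docker's daemon.json. A sketch, where the mirror URL below is a placeholder to be replaced with the one DaoCloud issues:

```shell
# Placeholder mirror address — substitute the URL from https://www.daocloud.io/mirror
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
EOF
sudo systemctl restart docker
```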

4. Install kubeadm

kubeadm is the cluster installation tool for Kubernetes; it can bootstrap a Kubernetes cluster quickly.

  • Configure the package source

    Install the system tools:

    apt-get update && apt-get install -y apt-transport-https

    Install the GPG key:

    curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

    Add the repository. Note: the system codename is bionic, but the Aliyun mirror does not provide it yet, so we use xenial (the 16.04 codename) instead:

    cat << EOF >/etc/apt/sources.list.d/kubernetes.list
    deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
    EOF

    • Install kubeadm, kubelet, and kubectl
    • kubeadm: initializes the Kubernetes cluster
    • kubectl: the Kubernetes command-line tool, used to deploy and manage applications, inspect resources, and create, delete, and update components
    • kubelet: responsible for starting Pods and containers

    apt-get update
    apt-get install -y kubelet kubeadm kubectl
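Optionally, the three packages can be pinned so a routine apt upgrade does not bump the cluster components unexpectedly (a practice suggested by the official kubeadm install guide):

```shell
# Prevent unintended upgrades of the cluster components
sudo apt-mark hold kubelet kubeadm kubectl
```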

5. Set the hostnames

  • Set the hostname on the master and on each node respectively:

    hostnamectl set-hostname kubernetes-master
    hostnamectl set-hostname kubernetes-node1
    hostnamectl set-hostname kubernetes-node2

6. Configure a static IP

vi /etc/netplan/50-cloud-init.yaml

network:
    version: 2
    ethernets:
        ens33:
            addresses: [192.168.141.134/24]
            gateway4: 192.168.141.2
            nameservers:
                addresses: [192.168.141.2]
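After saving the file, the new address can be applied without a reboot:

```shell
# Apply the netplan configuration and confirm the interface address
sudo netplan apply
ip addr show ens33
```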

7. Configure DNS

vi /etc/systemd/resolved.conf
[Resolve]
DNS=114.114.114.114
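Restart systemd-resolved for the DNS change to take effect:

```shell
sudo systemctl restart systemd-resolved
```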

Configuring the Master

Installing Kubernetes essentially means pulling its component images, and kubeadm already bundles the basic images a cluster needs. Because of network restrictions in mainland China, these images cannot be pulled from the default registry; switching to the image repository provided by Aliyun solves the problem.

1. Create and edit the configuration

# Export the default configuration
kubeadm config print init-defaults > kubeadm.yml

   # Edit the configuration as follows
   apiVersion: kubeadm.k8s.io/v1beta1
   bootstrapTokens:
   - groups:
     - system:bootstrappers:kubeadm:default-node-token
     token: abcdef.0123456789abcdef
     ttl: 24h0m0s
     usages:
     - signing
     - authentication
   kind: InitConfiguration
   localAPIEndpoint:
     advertiseAddress: 192.168.174.201
     bindPort: 6443
   nodeRegistration:
     criSocket: /var/run/dockershim.sock
     name: kubernetes-master
     taints:
     - effect: NoSchedule
       key: node-role.kubernetes.io/master
   ---
   apiServer:
     timeoutForControlPlane: 4m0s
   apiVersion: kubeadm.k8s.io/v1beta1
   certificatesDir: /etc/kubernetes/pki
   clusterName: kubernetes
   controllerManager: {}
   dns:
     type: CoreDNS
   etcd:
     local:
       dataDir: /var/lib/etcd
   imageRepository: registry.aliyuncs.com/google_containers
   kind: ClusterConfiguration
   kubernetesVersion: v1.15.1
   networking:
     dnsDomain: cluster.local
     podSubnet: "10.244.0.0/16"
     serviceSubnet: 10.96.0.0/12
   scheduler: {}
   ---
   # Enable IPVS mode
   apiVersion: kubeproxy.config.k8s.io/v1alpha1
   kind: KubeProxyConfiguration
   mode: ipvs

2. List and pull the images

# List the required images
kubeadm config images list --config kubeadm.yml
# Pull the images
kubeadm config images pull --config kubeadm.yml

3. Initialize the Kubernetes master

# Note: from kubeadm v1.15 onward the flag is renamed to --upload-certs
kubeadm init --config=kubeadm.yml --experimental-upload-certs | tee kubeadm-init.log

  # On success the output looks like the following
  [init] Using Kubernetes version: v1.14.1
  [preflight] Running pre-flight checks
          [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Activating the kubelet service
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.141.130]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.141.130 127.0.0.1 ::1]
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.141.130 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [apiclient] All control plane components are healthy after 20.003326 seconds
  [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
  [upload-certs] Using certificate key:
  2cd5b86c4905c54d68cc7dfecc2bf87195e9d5d90b4fff9832d9b22fc5e73f96
  [mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: abcdef.0123456789abcdef
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy
  
  Your Kubernetes control-plane has initialized successfully!
  
  To start using your cluster, you need to run the following as a regular user:
  
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  
  Then you can join any number of worker nodes by running the following on each as root:
  
  # The following command will be needed later for the worker nodes to join
  kubeadm join 192.168.141.130:6443 --token abcdef.0123456789abcdef \
      --discovery-token-ca-cert-hash sha256:cab7c86212535adde6b8d1c7415e81847715cfc8629bb1d270b601744d662515

4. Configure kubectl

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# If running as a non-root user, fix the ownership
chown $(id -u):$(id -g) $HOME/.kube/config
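For the root user, a common alternative is to point kubectl at the admin kubeconfig directly instead of copying it:

```shell
# Use the admin kubeconfig directly (root only)
export KUBECONFIG=/etc/kubernetes/admin.conf
```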

5. Verify

kubectl get node
# If node information is printed, the installation succeeded; the node stays
# NotReady until a Pod network is installed (step 7 below)
NAME                STATUS     ROLES    AGE     VERSION
kubernetes-master   NotReady   master   8m40s   v1.14.1

6. Join the worker nodes

Run the join command printed at the end of step 3 on each worker node:

kubeadm join 192.168.141.130:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cab7c86212535adde6b8d1c7415e81847715cfc8629bb1d270b601744d662515
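The bootstrap token in the join command expires after 24 hours (the ttl set in kubeadm.yml). If it has expired, a fresh join command can be generated on the master:

```shell
# Create a new token and print the complete join command
kubeadm token create --print-join-command
```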

7. Configure the Pod network

CNI plugins in Kubernetes

CNI was conceived as a framework for dynamically configuring the appropriate network settings and resources when containers are created or destroyed. A CNI plugin is responsible for configuring and managing IP addresses for interfaces, and typically provides functionality for IP management, per-container IP allocation, and multi-host connectivity. The container runtime calls the network plugin to allocate an IP address and configure the network when a container starts, and calls it again when the container is deleted to clean up those resources.

The runtime or orchestrator decides which network a container should join and which plugin it needs to call. The plugin adds an interface into the container's network namespace as one side of a veth pair, then makes changes on the host, such as attaching the other end of the veth pair to a bridge. After that, it allocates an IP address and sets up routes by calling a separate IPAM (IP Address Management) plugin.

In Kubernetes, the kubelet invokes the plugins it finds at the appropriate times to configure networking automatically for the Pods it starts.
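Concretely, the kubelet discovers plugin configurations in /etc/cni/net.d/. An illustrative network configuration of the kind a CNI plugin consumes, written here to a local demo file (the bridge plugin, host-local IPAM, and all values are examples only; Calico, installed below, generates its own configuration):

```shell
# Illustrative CNI config; a real one would live in /etc/cni/net.d/
cat > 10-demo.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}
EOF
cat 10-demo.conf
```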

The CNI plugins available for Kubernetes include:

  • Flannel
  • Calico
  • Canal
  • Weave

Configuring the network with Calico

Install it following the official documentation:
https://docs.projectcalico.org/v3.7/getting-started/kubernetes/

  • Download the Calico manifest and edit it

    wget https://docs.projectcalico.org/v3.7/manifests/calico.yaml
    vi calico.yaml

Edit line 611 (in this version of the manifest), changing 192.168.0.0/16 to 10.244.0.0/16 so that it matches the podSubnet configured in kubeadm.yml. The following vi commands help locate it quickly:

  • Show line numbers: :set number
  • Search: type / followed by the text to find; press n to jump to the next match and N to the previous one
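The edit can also be scripted. A demo of the substitution on a sample manifest line (for the real file: sed -i 's#192\.168\.0\.0/16#10.244.0.0/16#' calico.yaml):

```shell
# The same substitution vi would perform, shown on a sample line
echo 'value: "192.168.0.0/16"' | sed 's#192\.168\.0\.0/16#10.244.0.0/16#'
# → value: "10.244.0.0/16"
```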

Install Calico:

kubectl apply -f calico.yaml
# The output looks like this
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

At this point, the Kubernetes cluster (one master, two workers) is fully deployed.

Ingress: a unified access entry point

Deploy Tomcat using the following manifest (a Deployment plus a ClusterIP Service):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-app
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-http
spec:
  ports:
    - port: 8080
      targetPort: 8080
  # ClusterIP, NodePort, LoadBalancer
  type: ClusterIP
  selector:
    name: tomcat
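Assuming the manifest above is saved as tomcat.yml (the file name is an assumption), deploy it with:

```shell
# Create the Deployment and Service defined above
kubectl apply -f tomcat.yml
```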
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

1. Edit the downloaded file: find the Deployment spec (search for serviceAccountName) and add hostNetwork: true below it.

2. Troubleshooting: the controller Pod stayed in ContainerCreating status for a whole night; the cause turned out to be a failing image pull.

This article finally pointed to the fix: https://www.cnblogs.com/guyeshanrenshiwoshifu/p/9147238.html

3. On all nodes, pull the replacement image:

docker pull yxmu2006/nginx-ingress-controller:0.23.0

4. In the manifest, change the image from

image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0

to

image: yxmu2006/nginx-ingress-controller:0.23.0

5. Deploy:

kubectl apply -f useingress.yml
6. Check
kubectl get deployment
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app    2/2     2            2           4h3m
tomcat-app   2/2     2            2           90m
kubectl get service
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP      10.96.0.1        <none>        443/TCP        4h27m
nginx-http    LoadBalancer   10.100.127.160   <pending>     80:31191/TCP   4h3m
tomcat-http   ClusterIP      10.107.117.196   <none>        8080/TCP       91m
kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP                NODE               NOMINATED NODE   READINESS GATES
nginx-ingress-controller-564dcb55c8-ldpzv   1/1     Running   0          3m59s   192.168.174.202   kubernetes-node2   <none>           <none>

7. Edit the hosts file on the client machine

# Add this entry
192.168.174.202 k8s.test.com

8. ping k8s.test.com to verify that the name resolves

9. Visit k8s.test.com in a browser
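These notes never show the Ingress resource that actually maps k8s.test.com to the tomcat-http Service. A minimal sketch (the resource and file names are assumptions, using the extensions/v1beta1 API current for this cluster version):

```shell
# Hypothetical Ingress manifest routing k8s.test.com to the tomcat-http Service
cat > tomcat-ingress.yml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-ingress
spec:
  rules:
  - host: k8s.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-http
          servicePort: 8080
EOF
# kubectl apply -f tomcat-ingress.yml
```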