Installing Kubernetes on Ubuntu

System Topology

External Network             Internal Network
  |                            |
  +--eth0--<Controller>--eth1--+--eth0--<Worker1>
  |                            |
  |                            +--eth0--<Worker2>
  |                            |

Configure NAT

Controller Configuration

  • Edit the configuration file /etc/netplan/*.yaml to give eth1 a static IP (the file is usually named 01-network-manager-all.yaml)
# Netplan YAML
# eth0 uses DHCP, so it does not need to be configured here.
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    eth1:
      dhcp4: no
      addresses:
      - 192.168.4.100/24
# IPTables
$ sudo nano -l /etc/sysctl.conf   # enable IP forwarding by adding or uncommenting the line below
net.ipv4.ip_forward=1
$ sudo iptables -F          # flush the default tables
$ sudo iptables -t nat -F   # flush the nat table
$ sudo iptables -P INPUT ACCEPT   # default behavior. just in case
$ sudo iptables -P FORWARD ACCEPT # default behavior. just in case
$ sudo iptables -t nat -A POSTROUTING -s 192.168.4.0/24 -o eth0 -j MASQUERADE
$ sudo iptables -t nat -L  # verify configuration
$ sudo apt install iptables-persistent
$ sudo service netfilter-persistent status  # check the status of the service
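
The forwarding setting edited above only takes effect once sysctl reloads it, and the iptables rules only survive a reboot once they are saved. A short sketch of applying and verifying both (assuming iptables-persistent from the step above is installed):

$ sudo sysctl -p                              # reload /etc/sysctl.conf so net.ipv4.ip_forward=1 takes effect
$ sudo netfilter-persistent save              # write the current rules to /etc/iptables/rules.v4
$ sudo iptables -t nat -L POSTROUTING -n -v   # the MASQUERADE rule for 192.168.4.0/24 should be listed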

Worker Configuration

  • Edit the configuration file /etc/netplan/*.yaml to configure a static IP and DNS (usually named 01-network-manager-all.yaml); applying the change is sketched after the YAML below
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    eth0:
      dhcp4: no
      addresses:
      - 192.168.4.101/24
      gateway4: 192.168.4.100
      nameservers:
        addresses:
        - <dns address assigned by your ISP>
        - 8.8.8.8 # or another well-known public DNS server
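
On both the controller and the workers the netplan change has to be applied before it takes effect. A quick sketch of applying and verifying it (interface names as in the topology above):

$ sudo netplan apply
$ ip addr show eth0   # on a worker; use eth1 on the controller
$ ip route            # on a worker the default route should point to 192.168.4.100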

Configure hosts on the Controller and Workers

$ cat /etc/hosts
...
192.168.4.100 controller
192.168.4.101 worker1
192.168.4.102 worker2
  • Test connectivity from the Workers
$ ping www.bing.com

Install the OpenSSH Server on All Machines

# install the OpenSSH server on all VMs.
$ sudo apt autoremove openssh-client
$ sudo apt install openssh-server

# generate ssh key on controller node
$ ssh-keygen -t rsa

# copy key to worker nodes
$ ssh-copy-id <uname>@worker1
$ ssh-copy-id <uname>@worker2
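
After the keys are copied, a quick check that password-less login works (same <uname> placeholder as above):

$ ssh <uname>@worker1 hostname   # should print "worker1" without asking for a password
$ ssh <uname>@worker2 hostname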

Install the Container Runtime on All Machines

Install containerd

$ wget https://github.com/containerd/containerd/releases/download/v1.7.2/containerd-1.7.2-linux-amd64.tar.gz
$ sudo tar Cxzvf /usr/local containerd-1.7.2-linux-amd64.tar.gz
$ sudo mkdir -p /etc/containerd/
$ containerd config default > config.toml
$ sudo cp config.toml /etc/containerd/
  • Then make the following updates to config.toml (a non-interactive sed sketch follows below)
  • Configure the systemd cgroup driver
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
  • Change the pause image version
[plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
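
The two edits above can also be made non-interactively. A minimal sed sketch, assuming the generated config.toml still carries the default SystemdCgroup = false and sandbox_image values:

$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ sudo sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
$ grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml   # verify both changes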

Install the rest.

$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ sudo mkdir -p /usr/local/lib/systemd/system
$ sudo cp containerd.service /usr/local/lib/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now containerd

$ wget https://github.com/opencontainers/runc/releases/download/v1.1.7/runc.amd64
$ sudo install -m 755 runc.amd64 /usr/local/sbin/runc

$ wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
$ sudo mkdir -p /opt/cni/bin
$ sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz
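
A quick check that the runtime pieces landed where the later steps expect them:

$ containerd --version   # installed under /usr/local/bin by the tarball above
$ runc --version         # installed to /usr/local/sbin
$ ls /opt/cni/bin        # CNI plugins such as bridge, loopback and host-local
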
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
$ sudo sysctl --system
$ lsmod | grep br_netfilter
$ lsmod | grep overlay
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# turn off swap (Kubernetes has had alpha swap support since 1.22, but it is kept disabled here)
$ sudo swapoff -a
$ cat /etc/fstab
# comment out swap file
...
# /swapfile                                 none            swap    sw              0       0
...
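
A quick check that swap is really off (and stays off once the fstab entry above is commented out):

$ swapon --show   # no output means swap is disabled
$ free -h         # the Swap line should show 0B total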

Install kubeadm, kubelet, kubectl

$ sudo apt update
$ sudo apt upgrade
$ sudo apt-get install -y apt-transport-https ca-certificates curl
$ curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
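
A quick version check before bootstrapping the cluster (the exact versions depend on what the mirror currently ships):

$ kubeadm version -o short
$ kubectl version --client
$ kubelet --version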

Bootstrap the Controller Node

# Note: In v1.22 and later, if the user does not set the cgroupDriver field under KubeletConfiguration, kubeadm defaults it to systemd, so no separate configuration file needs to be created here.

# update /etc/hosts to include controller-endpoint
controller $ sudo nano -l /etc/hosts
192.168.4.<controller ip>    controller-endpoint

# uses aliyun mirror
controller $ sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers \
--service-cidr 10.1.0.0/16 \
--pod-network-cidr 10.2.0.0/16 \
--apiserver-advertise-address 192.168.4.<controller ip> \
--control-plane-endpoint controller-endpoint \
--v=5

# run as current user without sudo
controller $ mkdir -p $HOME/.kube
controller $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
controller $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
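
At this point kubectl should reach the API server as the regular user; a quick check:

controller $ kubectl cluster-info
controller $ kubectl get nodes   # the controller shows up as NotReady until a network plugin is installed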

Join the Worker Nodes

# in case the original joining period expires, or you forgot the token
# issue the following on controller node
# controller$ kubeadm token list
# or recreate one
# controller$ sudo kubeadm token create --print-join-command

# on worker nodes, just copy & paste (remember to prefix with sudo)
worker $ sudo nano -l /etc/hosts
192.168.4.<controller ip> controller-endpoint
worker $ sudo kubeadm join <controller_192.168.4_ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Sanity Check

$ kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
controller   NotReady   control-plane   4m42s   v1.27.4
worker1      NotReady   <none>          7s      v1.27.3

$ kubectl get pod --namespace=kube-system
# Note that CoreDNS is not up yet because no network plugin has been installed.
NAME                                 READY   STATUS    RESTARTS        AGE
coredns-7bdc4cb885-564sh             0/1     Pending   0               6m45s
coredns-7bdc4cb885-7l7tg             0/1     Pending   0               6m45s
etcd-controller                      1/1     Running   1 (4m57s ago)   6m59s
kube-apiserver-controller            1/1     Running   1 (4m57s ago)   6m59s
kube-controller-manager-controller   1/1     Running   1 (4m57s ago)   7m
kube-proxy-kmbqm                     1/1     Running   0               2m27s
kube-proxy-twqt7                     1/1     Running   1 (4m57s ago)   6m45s
kube-scheduler-controller            1/1     Running   1 (4m57s ago)   6m59s

Install the Network Plugin (Cilium)
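
The two archives extracted below are not downloaded anywhere in this guide. A hedged sketch of fetching them first, assuming the upstream release locations of the cilium CLI and Helm:

$ CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
$ curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz
$ wget https://get.helm.sh/helm-v3.12.2-linux-amd64.tar.gz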

$ sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
$ tar -zxvf helm-v3.12.2-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/local/bin/helm
$ cilium install --version 1.14.0
# 1.14.0 is the Cilium version; available versions can be checked against the Helm charts at https://helm.cilium.io/

$ kubectl get pod --namespace=kube-system
NAME                                 READY   STATUS     RESTARTS      AGE
cilium-4ccd8                         1/1     Running    0             53s
cilium-b27hq                         0/1     Init:0/6   0             53s
cilium-operator-76c55fc6b6-wr5qb     1/1     Running    0             53s
coredns-7bdc4cb885-564sh             1/1     Running    0             61m
coredns-7bdc4cb885-7l7tg             1/1     Running    0             61m
etcd-controller                      1/1     Running    2 (16m ago)   61m
kube-apiserver-controller            1/1     Running    2 (16m ago)   61m
kube-controller-manager-controller   1/1     Running    2 (16m ago)   61m
kube-proxy-kmbqm                     1/1     Running    1 (16m ago)   56m
kube-proxy-twqt7                     1/1     Running    2 (16m ago)   61m
kube-scheduler-controller            1/1     Running    2 (16m ago)   61m
$ kubectl describe pod cilium-operator-76c55fc6b6-wr5qb --namespace=kube-system
$ cilium status --wait

# acceptance test
$ cilium connectivity test
$ kubectl get pod --namespace=cilium-test

# enable hubble
$ cilium hubble enable
$ cilium hubble status

# install hubble client (on amd64)
$ HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
$ HUBBLE_ARCH=amd64
$ curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz
$ sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin

# enable hubble service
$ cilium hubble port-forward&
$ cilium hubble enable --ui
$ cilium hubble ui
# a web browser should open
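
With the port-forward above running, the hubble client installed earlier can also be used directly from the command line; a minimal sketch (it talks to the relay on localhost:4245 by default):

$ hubble status
$ hubble observe --follow   # stream flow events from the cluster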
