Kubernetes Cluster Installation

Environment preparation

192.168.1.53 k8s-master

192.168.1.52 k8s-node-1

192.168.1.51 k8s-node-2

Set the hostname on each of the three machines:

On the master:
[root@localhost ~]# hostnamectl --static set-hostname k8s-master
On node1:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1
On node2:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2

Add the host entries on all three machines by running:

echo '192.168.1.53    k8s-master
192.168.1.53    etcd
192.168.1.53    registry
192.168.1.52    k8s-node-1
192.168.1.51    k8s-node-2' >> /etc/hosts

cat /etc/hosts

Disable the firewall on all three machines:

systemctl disable firewalld.service
systemctl stop firewalld.service

Install the required tools
Run the following on k8s-master, k8s-node-1, and k8s-node-2:

yum install -y kubelet kubeadm kubectl kubernetes-cni
If the default repositories are unreachable, add the following yum repositories:

# Docker yum repository
cat >> /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF

# Kubernetes yum repository
cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
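
As an optional sanity check, you can refresh the yum metadata and confirm the new repositories resolve before installing:

yum clean all    # drop any stale metadata
yum repolist     # the docker-repo and kubernetes repos should appear with a non-zero package count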

Edit /etc/resolv.conf and append at the end:
nameserver 8.8.8.8
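
If you prefer a non-interactive one-liner instead of vim, the same entry can be appended like this (assumes 8.8.8.8 is an acceptable upstream DNS server for your environment):

echo "nameserver 8.8.8.8" >> /etc/resolv.conf   # append a public DNS resolver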

yum install -y kubelet kubeadm kubectl kubernetes-cni
# To remove all nodes later: kubectl delete node --all
# To uninstall: sudo yum remove kubelet kubeadm kubectl kubernetes-cni
kubelet --version
kubeadm version
systemctl enable docker.service
systemctl enable kubelet.service

systemctl start kubelet
systemctl status kubelet   # check the status; at this point kubelet fails to start because it is not yet configured
Disable SELinux so that containers can access the host filesystem:
setenforce 0
To make the change persistent, edit /etc/sysconfig/selinux and set:
SELINUX=disabled
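
Equivalently, a non-interactive sketch of the same change (assumes SELINUX is currently set to enforcing or permissive in that file):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux   # persist the setting across reboots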

The kubelet's cgroup driver must match the one Docker uses (see /usr/lib/systemd/system/docker.service). If Docker already runs with the systemd cgroup driver, nothing needs to change:
--exec-opt native.cgroupdriver=systemd
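
To verify the two sides actually agree, a minimal check (and, if they do not, one way to switch Docker to the systemd driver; this assumes /etc/docker/daemon.json has no other settings yet and would need merging with the registry mirror configured later):

docker info 2>/dev/null | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd
# If it reports cgroupfs instead, one option is:
# echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' > /etc/docker/daemon.json
# systemctl restart docker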

Run on k8s-master:

kubeadm init --apiserver-advertise-address=192.168.1.53 --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=swap
This fails at first; enable bridged traffic to iptables and retry:

echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

systemctl enable docker.service    # may print a warning
systemctl enable kubelet.service   # may print a warning

On k8s-master (compare with the node's copy of this file further below; some of the settings here may be redundant):

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
#Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS


systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
journalctl -xefu kubelet   # view the kubelet logs

Add a Docker registry mirror so image pulls succeed; /etc/docker/daemon.json should contain:
cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["http://68e02ab9.m.daocloud.io"]
}
systemctl restart docker
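
To confirm Docker picked up the mirror after the restart, you can check (the exact label may vary by Docker version):

docker info 2>/dev/null | grep -A1 "Registry Mirrors"   # should list http://68e02ab9.m.daocloud.io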

Mirror images are available at https://hub.docker.com/r/warrior/

Pull the images first. (Note: the image versions below do not match Kubernetes v1.11.3; use the image set listed further down instead.)

docker pull warrior/pause-amd64:3.0
docker tag warrior/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker pull warrior/etcd-amd64:3.0.17
docker tag warrior/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
docker pull warrior/kube-apiserver-amd64:v1.6.0
docker tag warrior/kube-apiserver-amd64:v1.6.0 gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
docker pull warrior/kube-scheduler-amd64:v1.6.0
docker tag warrior/kube-scheduler-amd64:v1.6.0 gcr.io/google_containers/kube-scheduler-amd64:v1.6.0
docker pull warrior/kube-controller-manager-amd64:v1.6.0
docker tag warrior/kube-controller-manager-amd64:v1.6.0 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0
docker pull warrior/kube-proxy-amd64:v1.6.0
docker tag warrior/kube-proxy-amd64:v1.6.0  gcr.io/google_containers/kube-proxy-amd64:v1.6.0
docker pull gysan/dnsmasq-metrics-amd64:1.0
docker tag gysan/dnsmasq-metrics-amd64:1.0 gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
docker pull warrior/k8s-dns-kube-dns-amd64:1.14.1
docker tag warrior/k8s-dns-kube-dns-amd64:1.14.1 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
docker pull warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker tag warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
docker pull warrior/k8s-dns-sidecar-amd64:1.14.1
docker tag warrior/k8s-dns-sidecar-amd64:1.14.1 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 
docker pull awa305/kube-discovery-amd64:1.0
docker tag awa305/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0
docker pull gysan/exechealthz-amd64:1.2
docker tag gysan/exechealthz-amd64:1.2 gcr.io/google_containers/exechealthz-amd64:1.2


kubeadm init --apiserver-advertise-address=192.168.1.53 --kubernetes-version=v1.11.3 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all


kubeadm config images list --kubernetes-version=v1.11.3
Pull the required images on all three machines:
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3
docker tag  mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker tag mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
                
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.11.3  k8s.gcr.io/kube-proxy-amd64:v1.11.3     
docker pull mirrorgooglecontainers/pause:3.1 
docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker pull coredns/coredns:1.1.3
docker tag coredns/coredns:1.1.3  k8s.gcr.io/coredns:1.1.3
docker pull mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.11
docker tag mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.11 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.11
docker pull mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.11
docker tag mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.11 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.11
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
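
The same pulls can be scripted; a minimal sketch based on the image list above (coredns is handled separately because it lives under its own Docker Hub namespace):

for img in kube-apiserver-amd64:v1.11.3 kube-controller-manager-amd64:v1.11.3 \
           kube-scheduler-amd64:v1.11.3 kube-proxy-amd64:v1.11.3 \
           etcd-amd64:3.2.18 pause:3.1 pause-amd64:3.1; do
    docker pull mirrorgooglecontainers/$img                    # pull from the Docker Hub mirror
    docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img    # retag so kubeadm finds it locally
done
docker pull coredns/coredns:1.1.3
docker tag  coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3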

When output like the following appears, the master initialized successfully; note the join command for later:
kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228

After a successful init, run the following commands as suggested in the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
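
The join token shown above expires (by default after 24 hours). If it is lost or expired, a fresh join command can be generated on the master:

kubeadm token create --print-join-command   # prints a new 'kubeadm join ...' line with a valid token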

Run on k8s-node-1 and k8s-node-2:

echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228 --ignore-preflight-errors=swap

Because the test hosts also run other services and disabling swap could affect them, we instead add --fail-swap-on=false to the kubelet startup arguments to lift that restriction:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_DNS_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS

systemctl daemon-reload 
systemctl start kubelet   # it is fine if this fails; the kubeadm join below will start it and generate the config files
kubeadm join 192.168.1.53:6443 --token e1d3u3.ilw4fb5cpt51xjf0 --discovery-token-ca-cert-hash sha256:72d07cb010102ae7f1733753a1ac07d0a402125f3326a41056174c69de6fe228 --ignore-preflight-errors=swap

Then check the status; kubelet should now be running:
systemctl status kubelet
journalctl -xefu kubelet   # view the kubelet logs
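
Alternatively, if turning off swap is acceptable on a node, you can disable it and skip the --fail-swap-on=false override entirely; a minimal sketch:

swapoff -a                            # turn off swap immediately
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out the swap entry so it stays off after reboot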

Run on the master

Install flannel (reference: https://blog.csdn.net/zhuchuangang/article/details/76572157/)

kubectl get nodes shows the nodes as NotReady because the pod network is not yet in place.
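
A quick way to see what is missing before the network add-on is installed (the DNS pods will typically sit in Pending at this point):

kubectl get pods -n kube-system -o wide   # expect Pending DNS pods and no flannel pods yet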

mkdir /docker
cd /docker/
kubectl --namespace kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml

yum -y install wget
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml

cat kube-flannel.yml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.8.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.8.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg



kubectl --namespace kube-system apply -f ./kube-flannel.yml
kubectl get cs

kubectl get nodes   # after a short while the nodes show Ready
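
As an optional smoke test (a sketch; the nginx image and the nginx-test name are arbitrary), you can schedule a small workload and confirm the pods start and receive pod-network addresses:

kubectl run nginx-test --image=nginx --replicas=2   # on v1.11 this creates a Deployment
kubectl get pods -o wide                            # pods should be Running with 10.244.x.x addresses
kubectl delete deployment nginx-test                # clean up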

Reference:
https://blog.csdn.net/zhuchuangang/article/details/76572157/
