A Complete Record of Deploying Kubernetes Offline on a Single Machine from Binaries

  • Base environment setup

Install environment dependencies

yum install -y epel-release

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget

Firewall configuration

systemctl stop firewalld

systemctl disable firewalld

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

iptables -P FORWARD ACCEPT

Disable SELinux and the swap partition

setenforce 0

sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab

Load the IPVS kernel modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack_ipv4"
for kernel_module in \${ipvs_modules}; do
    if /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

Set the hostname

hostnamectl set-hostname k8s-test

Kernel upgrade (optional)

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

# After installation, check that the corresponding kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!

yum --enablerepo=elrepo-kernel install -y kernel-lt

# Boot from the new kernel by default

grub2-set-default 0
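The new kernel only takes effect after a reboot; a quick sanity check once the machine comes back up (assuming the kernel-lt 4.4.x series was installed above):

reboot

# after the reboot:
uname -r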

Edit the /etc/hosts file to add name resolution

cat <<EOF >>/etc/hosts
192.168.144.135 k8s-test
EOF

Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
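If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter module is probably not loaded yet. A minimal fix, assuming a kernel where br_netfilter ships as a separate module:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # reload at boot
sysctl --system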

Set the system time zone

# Adjust the system TimeZone

timedatectl set-timezone Asia/Shanghai

# Write the current UTC time to the hardware clock

timedatectl set-local-rtc 0

# Restart services that depend on the system time

systemctl restart rsyslog

systemctl restart crond

Configure the Docker repository and install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

 

yum install -y docker-ce-18.06.1.ce-3.el7

systemctl enable docker && systemctl start docker

docker version

  • Installing the core cluster components

Download the Kubernetes binaries

https://github.com/kubernetes/kubernetes/releases

Download the etcd database

https://github.com/coreos/etcd/releases/

Generate self-signed certificates

Use the OpenSSL tool on the Master server to create the CA certificate and private key files:

 

yum -y install openssl

 

cd /var/run/kubernetes/   # default certificate directory (note: /var/run is usually tmpfs, so files there do not survive a reboot)

 

openssl genrsa -out ca.key 2048

 

openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-test" -days 5000 -out ca.crt

 

openssl genrsa -out server.key 2048

 

Note: when generating ca.crt, the /CN value in the -subj parameter is the Master hostname.

 

Prepare the master_ssl.cnf file, which is used for x509 v3 certificates. In it you mainly need to set the Master server's hostname (k8s-test), its IP address (192.168.144.135), the virtual service names of the K8S Master Service (kubernetes.default and friends), and that virtual service's ClusterIP address (192.168.0.1).

 

The master_ssl.cnf file looks like this:

 

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-test
IP.1 = 192.168.0.1
IP.2 = 192.168.144.135

 

Create the server.csr and server.crt files based on master_ssl.cnf. When generating server.csr, the /CN value in the -subj parameter must again be the Master hostname:

 

openssl req -new -key server.key -subj "/CN=k8s-test" -config master_ssl.cnf -out server.csr

 

openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out server.crt

 

After all of this, the directory contains 7 files: ca.crt, ca.key, ca.srl, master_ssl.cnf, server.crt, server.csr, and server.key (six generated, plus the hand-written master_ssl.cnf).
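To confirm the SANs were actually embedded in the server certificate (a common mistake is omitting -extensions/-extfile during signing), inspect server.crt:

openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"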

Then set three kube-apiserver startup parameters, "--client-ca-file", "--tls-private-key-file", and "--tls-cert-file", which point to the CA root certificate, the server private key, and the server certificate respectively:

--client-ca-file=/var/run/kubernetes/ca.crt

--tls-private-key-file=/var/run/kubernetes/server.key

--tls-cert-file=/var/run/kubernetes/server.crt

(These flags are optional; the paths can be customized.)

You can also close the insecure port (set --insecure-port=0) and keep the secure port at 6443 (the default):

--insecure-port=0 --secure-port=6443

Finally, restart the kube-apiserver service:

systemctl restart kube-apiserver
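A quick check that the secure port is serving with the new certificate, run from the certificate directory (this sketch assumes the default AlwaysAllow authorization mode, so anonymous requests succeed):

curl --cacert ca.crt https://k8s-test:6443/version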

Installing the Master node

Installing etcd

(1) Install

Unpack the downloaded etcd package and copy the etcd and etcdctl binaries to /usr/bin.

(2) Create the etcd.service unit file

Create the file etcd.service under /usr/lib/systemd/system/ with the following content:

[Unit]
Description=Etcd Server

[Service]
Type=notify
TimeoutStartSec=0
Restart=always
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

 

mkdir -p /var/lib/etcd

WorkingDirectory is the etcd data directory and must be created before etcd is started.

(3) Create the configuration file /etc/etcd/etcd.conf

ETCD_NAME="ETCD Server"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.144.135:2379"

(4) Enable at boot and start

#systemctl daemon-reload

#systemctl enable etcd.service

#systemctl start etcd.service

(5) Verify that etcd was installed successfully

#etcdctl cluster-health
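A simple write/read round trip through the etcd v2 API is another useful smoke test:

#etcdctl set /test/hello world

#etcdctl get /test/hello

#etcdctl rm /test/hello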

Installing the kube-apiserver, kube-controller-manager, and kube-scheduler services

Unpack the kubernetes archive and copy the kube-apiserver, kube-controller-manager, and kube-scheduler binaries to /usr/bin.

 

Installing kube-apiserver

(1) Create and edit the kube-apiserver.service file

Path: /usr/lib/systemd/system/kube-apiserver.service, with content:

 

[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

EnvironmentFile is kube-apiserver's configuration file.

(2) Configuration file

mkdir -p /var/log/kubernetes/apiserver

The apiserver configuration file lives at /etc/kubernetes/apiserver, with content:

KUBE_API_ARGS="--etcd-servers=http://127.0.0.1:2379 \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--service-cluster-ip-range=192.168.0.0/16 \
--service-node-port-range=1-65535 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--tls-cert-file=/var/run/kubernetes/apiserver.crt \
--tls-private-key-file=/var/run/kubernetes/apiserver.key \
--logtostderr=false \
--log-dir=/var/log/kubernetes/apiserver \
--v=0"

The service cluster IP range and the TLS file paths can be customized (if you followed the certificate section above, the generated files were named server.crt and server.key; adjust the paths to match). The original also carried a commented-out --service-account-key-file=/var/run/kubernetes/apiserver.key flag, which goes together with the ServiceAccount admission plugin.

Installing kube-controller-manager

(1) Create and edit the kube-controller-manager.service file

Path: /usr/lib/systemd/system/kube-controller-manager.service, with content:

 

[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(2) Configuration file

The controller-manager configuration file lives at /etc/kubernetes/controller-manager. It references the apiserver.crt and apiserver.key files from the apiserver setup (paths can be customized); the service-account flags, highlighted in red in the original, must be enabled when configuring kube-dns:

KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.144.135:8080 --service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/ca.crt --kubeconfig=/etc/kubernetes/kubeconfig"

 

Create the configuration file (vi /etc/kubernetes/kubeconfig) used to register with the master (kube-scheduler, kube-proxy, and kubelet can all reuse it):

apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://192.168.144.135:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context

Installing kube-scheduler

(1) Create and edit the kube-scheduler.service file

Path: /usr/lib/systemd/system/kube-scheduler.service, with content:

 

[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(2) Configuration file

mkdir -p /var/log/kubernetes/scheduler

The kube-scheduler configuration file lives at /etc/kubernetes/scheduler, with content:

KUBE_SCHEDULER_ARGS="--master=http://192.168.144.135:8080 --logtostderr=true  --kubeconfig=/etc/kubernetes/kubeconfig --log-dir=/var/log/kubernetes/scheduler --v=2"

 

Enable all components at boot and start them

systemctl daemon-reload

systemctl enable kube-apiserver.service

systemctl start kube-apiserver.service

systemctl enable kube-controller-manager.service

systemctl start kube-controller-manager.service

systemctl enable kube-scheduler.service

systemctl start kube-scheduler.service

 

After installation, verify everything is correct

Run the command kubectl get cs
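If all three components are healthy, the output looks roughly like this (exact formatting varies by version):

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}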

 

Installing kube-proxy

# vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

 

 

mkdir -p /var/log/kubernetes/proxy

Create the configuration directory and add the configuration file:

# vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=http://192.168.144.135:8080 --logtostderr=false --log-dir=/var/log/kubernetes/proxy --v=2 --kubeconfig=/etc/kubernetes/kubeconfig --proxy-mode=ipvs"

Start the service

# systemctl daemon-reload

# systemctl enable kube-proxy

# systemctl start kube-proxy
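Because --proxy-mode=ipvs is set, the service rules kube-proxy programs can be inspected with ipvsadm (installed in the dependency step); the kubernetes service ClusterIP (192.168.0.1 in this setup) should show up as a virtual server:

# ipvsadm -Ln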

Installing kubelet

# vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

# mkdir -p /var/lib/kubelet   (note: this directory must be created)

# vi /etc/kubernetes/kubelet

The TLS flags (shown in red in the original) are required when accessing the HTTPS API; the certificate paths can be customized:

KUBELET_ARGS="--address=0.0.0.0 --pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest --hostname-override=192.168.144.135 --enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/etc/kubernetes/kubeconfig --cluster_dns=192.168.1.1 --cluster_domain=cluster.local --tls-cert-file=/var/run/kubernetes/apiserver.crt --tls-private-key-file=/var/run/kubernetes/apiserver.key --cert-dir=/var/run/kubernetes/"

Here "--hostname-override=192.168.144.135" is the node host's IP address.

Create the configuration file (vi /var/lib/kubelet/kubeconfig) used to register with the master:

apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://192.168.144.135:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context

Start kubelet and verify.

 

# systemctl daemon-reload

# systemctl start kubelet.service
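The kubelet should register the node with the master within a few seconds; verify with:

# kubectl -s http://192.168.144.135:8080 get nodes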

 

Installing Flannel

Download the package

wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz

 

Unpack

tar -xzvf flannel-v0.9.1-linux-amd64.tar.gz

Copy the flanneld binary and the mk-docker-opts.sh script to /usr/bin.

Register the network segment in etcd

#etcdctl --endpoints="http://192.168.144.135:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

 

Check the registration

#etcdctl --endpoints="http://192.168.144.135:2379" get /coreos.com/network/config

 

Configuration file

vi /etc/kubernetes/flanneld

 

Content:

FLANNEL_OPTIONS="--etcd-endpoints=http://192.168.144.135:2379"

 

Create the service unit: vi /usr/lib/systemd/system/flannel.service

Content:

 

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/flanneld
ExecStart=/usr/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
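The post does not spell out the start commands for flanneld; with the unit file above they would be:

systemctl daemon-reload
systemctl enable flannel
systemctl start flannel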

 

Modifying the Docker service file

Edit the file:

vi /usr/lib/systemd/system/docker.service

……
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

The EnvironmentFile= line and the $DOCKER_NETWORK_OPTIONS argument (shown in red in the original) are the modified parts.

 

Restart docker:

systemctl daemon-reload && systemctl restart docker
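To confirm Docker actually picked up the flannel subnet, compare the address of docker0 with the subnet flannel wrote out; the docker0 network should fall inside FLANNEL_SUBNET:

cat /run/flannel/subnet.env
ip -4 addr show flannel.1
ip -4 addr show docker0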

 

Installing kube-dns

Define the environment variables:

export DNS_SERVER_IP="192.168.1.1"   # customizable
export DNS_DOMAIN="cluster.local"
export DNS_REPLICAS=1

This sets the Cluster DNS Service IP to 192.168.1.1 (it must not collide with an already-allocated IP) and the local cluster domain to cluster.local.

 

Modify the kubelet startup parameters on every Node:

vim /etc/kubernetes/kubelet

Add to KUBELET_ARGS:

--cluster_dns=192.168.1.1
--cluster_domain=cluster.local

cluster_dns is the ClusterIP address of the DNS service; cluster_domain is the domain name configured for it.

Restart the kubelet service:

systemctl restart kubelet

 

Download the required yaml files from the official site

Download the deployment yaml files from: https://github.com/kubernetes/kubernetes/tree/release-1.8/cluster/addons/dns

Four files are needed: kubedns-cm.yaml, kubedns-controller.yaml.base, kubedns-sa.yaml, kubedns-svc.yaml.base

Rename them:

mv kubedns-controller.yaml.base kubedns-controller.yaml
mv kubedns-svc.yaml.base kubedns-svc.yaml

Edit the yaml files

kubedns-cm.yaml and kubedns-sa.yaml need no changes.

Edit kubedns-controller.yaml (this also includes changing the image names, e.g. to point at your private registry). The relevant excerpts, with the lines to adapt marked:

# kubedns container:
        args:
        - --domain=cluster.local.                    # set to $DNS_DOMAIN, i.e. the kubelet --cluster-domain value
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP

# dnsmasq container:
        #- --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053    # set to $DNS_DOMAIN, i.e. the kubelet --cluster-domain value
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config

# sidecar container:
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A   # domain suffix: $DNS_DOMAIN (kubelet --cluster-domain)
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A      # domain suffix: $DNS_DOMAIN (kubelet --cluster-domain)
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m

# pod spec:
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

Edit kubedns-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2                 # set to the kubelet --cluster-dns value (192.168.1.1 in this setup)
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Deploy kubedns

kubectl create -f kubedns-cm.yaml -f kubedns-sa.yaml -f kubedns-controller.yaml -f kubedns-svc.yaml

 

Check the deployment status

kubectl get pod,svc -n kube-system
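A functional DNS test, assuming a busybox image is reachable (in an offline environment you may need to load it from the private registry first; the image tag here is just an example):

kubectl run dns-test --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec dns-test -- nslookup kubernetes.default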

 

  • Building a private image registry

Assume the registry's domain name is: registry.guoyuan.com

Generate a self-signed certificate

> mkdir -p certs

> openssl req -newkey rsa:2048 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt

Generate the authentication password file

Note: replace username with your own user name and password with your own password.

$ mkdir auth

$ docker run --entrypoint htpasswd registry:2 -Bbn username password  > auth/htpasswd

Start the Registry

docker run -d -p 443:443 --restart=always --name registry \
   -v /root/auth:/auth \
   -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
   -e "REGISTRY_AUTH=htpasswd" \
   -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
   -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
   -v /root/data:/var/lib/registry \
   -v /root/certs:/certs \
   -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
   -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
   registry:2
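To verify the registry answers over TLS with authentication (assuming registry.guoyuan.com resolves to this host, e.g. via an /etc/hosts entry):

curl -u username:password --cacert certs/domain.crt https://registry.guoyuan.com/v2/_catalog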

Copy the certificate

Copy the generated self-signed certificate (the .crt file) into the /etc/docker/certs.d/registry.guoyuan.com directory; create the directory yourself if it does not exist.
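A minimal sketch of that copy; Docker expects the CA file to be named ca.crt inside the per-registry directory:

mkdir -p /etc/docker/certs.d/registry.guoyuan.com
cp certs/domain.crt /etc/docker/certs.d/registry.guoyuan.com/ca.crt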

 

Log in to the private registry

docker login registry.guoyuan.com

Enter the username and password at the prompts.

Create the Kubernetes cluster secret

kubectl create secret docker-registry guoyuan \
         --docker-server=registry.guoyuan.com \
         --docker-username=guoyuan \
         --docker-password='guoyuan2020' \
         --docker-email=yhj2046@126.com

This creates a secret named guoyuan.

Use the secret named guoyuan to pull images from the private registry:

 

 
 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pressure
  labels:
    name: test-pressure
spec:
  replicas: 1
  selector:
    matchLabels:
      name: test-pressure-pod
  template:
    metadata:
      labels:
        name: test-pressure-pod
    spec:
      containers:
      - name: test-pressure
        image: registry.guoyuan.com/test-pressure:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 22035
      imagePullSecrets:
      - name: guoyuan
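Apply the manifest and check that the image was pulled with the secret (the file name test-pressure.yaml is just an example):

kubectl create -f test-pressure.yaml
kubectl get pods -l name=test-pressure-pod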


  • Installing Docker offline

Prepare the installation files

On a machine with Internet access, download the files for the Docker version to be installed:

yum install --downloadonly --downloaddir=/docker/dockerRpm docker-ce-18.06.1.ce-3.el7

Copy the downloaded rpm packages to the machine on the isolated network, e.g. /home/docker/packages

Build a local yum repository

Download createrepo, which will be used to build the local repository:

yum install --downloadonly --downloaddir=/docker/createrepo createrepo

Install createrepo with rpm -ivh; the packages must be installed respecting their dependency relationships, as in the sketch below.
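A minimal sketch, assuming all the downloaded rpms sit in /docker/createrepo; installing them in a single rpm transaction lets rpm sort out the ordering among them:

cd /docker/createrepo
rpm -ivh *.rpm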

Remove the files under /etc/yum.repos.d/, then create docker-ce.repo with the following configuration:

[docker-ce]
name=docker
baseurl=file:///home/docker/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Centos-7

 

Run createrepo -d /home/docker/ to build the local repository; if the command succeeds, the local repository is set up.

Run yum repolist to check whether the newly built local repository is visible.

Clear the cache: yum clean all

Rebuild the cache: yum makecache

Check that the local repository works: run yum list and look for the new rpm packages; if they show up, the repository is working.

Install Docker

yum install -y docker-ce-18.06.1.ce-3.el7

systemctl enable docker && systemctl start docker

docker version

 
