A CentOS initialization script

Overview

Honestly, it's time for me to study k8s seriously and put everything else aside. For anyone building a k8s cluster inside China, the biggest headache is network access. In this post I'll walk through how to set up k8s painlessly from within China; follow along to the end and you'll have a working cluster.

Setting up the basic Docker environment

First, prepare a machine running CentOS 7 with at least 2 GB of RAM and 2 CPU cores. Mine is a VM with 4 GB of RAM and 3 cores. Other distributions will work too, but this tutorial targets CentOS 7, so adjust accordingly.

Setting a static IP

vim /etc/sysconfig/network-scripts/ifcfg-enp0s3

This part is simple enough that I won't walk through it; a reference file is below.

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPADDR="192.168.1.222"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="b099c0f5-d2cd-4ec8-9557-299599bb9f75"
DEVICE="enp0s3"
ONBOOT="yes"
DNS1="114.114.114.114"

systemctl restart network

Setting the hostname

hostnamectl set-hostname k8s-master

Editing the hosts file

vim /etc/hosts

Add:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 k8s-master
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 k8s-master

Disabling firewalld

systemctl stop firewalld

systemctl disable firewalld

Disabling SELinux

setenforce 0

vim /etc/selinux/config

Change the SELINUX setting in that file to disabled:

SELINUX=disabled
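If you'd rather do this non-interactively (handy in an init script), a sed one-liner like the following should work, assuming the file still contains the default SELINUX=enforcing line:

# Persist the change without opening an editor
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config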

Next comes installing Docker. I recommend the latest stable version; my general advice, at work or while learning, is to use the latest stable release of anything whenever you can instead of pinning everything to one old version forever. I really don't get companies that stay glued to a single framework version or a single OS version; all that achieves is a product that rots over time. Run the command below to install Docker:

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Next, install docker-compose. This tutorial barely uses it, but come on, what kind of basic Docker environment is missing docker-compose? So install it:

wget https://github.com/docker/compose/releases/download/1.23.1/docker-compose-Linux-x86_64

mv docker-compose-Linux-x86_64 /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose
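To verify the install:

docker-compose --version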

Next, configure a registry mirror. Edit the following file:

vim /etc/docker/daemon.json

Add:

{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}

Then restart Docker:

systemctl restart docker

Enable Docker to start on boot:

systemctl enable docker
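To confirm the registry mirror took effect, check docker info; the Registry Mirrors section should list the USTC mirror:

docker info | grep -A 1 "Registry Mirrors"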

The steps above are everything you need for a basic container environment.

Installing kubeadm, kubectl, and kubelet

For reasons we all know, installing kubeadm from the official repository is not going to work from inside China, so we use the Alibaba Cloud mirror instead. Edit the following file:

vim /etc/yum.repos.d/kubernetes.repo

Add:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Then:

yum makecache

Next, install kubeadm, kubelet, and kubectl:

yum install kubeadm kubelet kubectl
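The rest of this tutorial assumes v1.12.2, so it is safer to pin the package versions so a newer release doesn't slip in. The exact version strings below are an assumption; run yum list --showduplicates kubeadm to see what the mirror actually provides:

yum install -y kubeadm-1.12.2 kubelet-1.12.2 kubectl-1.12.2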

Then enable and start kubelet:

systemctl enable kubelet && systemctl start kubelet

Preparing the images

First, check which images we need by running the following command:

kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

Yes, that's all of them, but they are all hosted on gcr.io, so we need to relay them through GitHub and Docker Hub before pulling them locally. First create a directory, say k8s-images-1.12.2, and inside it create one subdirectory per image, each containing a Dockerfile, like this:

➜  code tree k8s-images-1.12
k8s-images-1.12
├── coredns
│   └── Dockerfile
├── etcd
│   └── Dockerfile
├── kube-apiserver
│   └── Dockerfile
├── kube-controller-manager
│   └── Dockerfile
├── kube-proxy
│   └── Dockerfile
├── kube-scheduler
│   └── Dockerfile
└── pause
    └── Dockerfile

7 directories, 7 files

Each Dockerfile contains a single line. For example, the one for kube-proxy is:

FROM k8s.gcr.io/kube-proxy:v1.12.2

and the one for pause is:

FROM k8s.gcr.io/pause:3.1

The rest follow the same pattern, so you get the idea. Then push this directory to GitHub; that step is simple enough that I won't spell it out.
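If you'd rather not create them all by hand, a small loop like this generates the whole tree (a sketch; the image list comes from the kubeadm output above):

# Create one directory and Dockerfile per image
for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 kube-scheduler:v1.12.2 kube-proxy:v1.12.2 pause:3.1 etcd:3.2.24 coredns:1.2.2; do
    name=${img%%:*}    # strip the tag to get the directory name
    mkdir -p "k8s-images-1.12.2/${name}"
    echo "FROM k8s.gcr.io/${img}" > "k8s-images-1.12.2/${name}/Dockerfile"
done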

If that sounds like too much trouble, you can just fork mine:

https://github.com/bboysoulcn/k8s-images.git

Next we set up Docker Hub. Log in, click Create Automated Build in the top right, pick the GitHub icon, and select our repository. Then set the name: for example, the first image I want to build is k8s.gcr.io/kube-apiserver:v1.12.2, so I name it kube-apiserver and put k8s.gcr.io/kube-apiserver:v1.12.2 in the description. Click Create, then open Build Settings and set Dockerfile Location to the path of that image's Dockerfile, here /k8s-images-1.12/kube-apiserver/. The tag should match the original version number, v1.12.2 in this case. Finally click Trigger and then Save. Create all the other images the same way.

Then we pull all of those images and re-tag them back to their original names, e.g. k8s.gcr.io/kube-apiserver:v1.12.2, and we're done.

Of course, if you find that tedious and have followed my steps exactly, just use the script below to pull all the images; it re-tags them automatically:

https://raw.githubusercontent.com/bboysoulcn/k8s-images/master/pull-1.12.2.sh
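For reference, the core of such a pull-and-retag script looks roughly like this (a sketch; the Docker Hub namespace below is an assumption, so substitute your own, and the author's actual script may differ):

HUB_USER=bboysoul    # assumption: the Docker Hub namespace holding the relayed images
for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 kube-scheduler:v1.12.2 kube-proxy:v1.12.2 pause:3.1 etcd:3.2.24 coredns:1.2.2; do
    docker pull "${HUB_USER}/${img}"
    docker tag "${HUB_USER}/${img}" "k8s.gcr.io/${img}"
done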

Bootstrapping the cluster

Run:

kubeadm init --apiserver-advertise-address 192.168.1.101 --pod-network-cidr=10.244.0.0/16

--apiserver-advertise-address specifies which interface on the master the other cluster nodes should use to reach it.

--pod-network-cidr specifies the address range for the pod network.

This fails with an error:

https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1106 05:32:05.346780   24728 version.go:94] falling back to the local client version: v1.12.2
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

This happens because swap is enabled; by default k8s refuses to run on a system with swap, so just turn it off:

swapoff -a

Then re-run:

kubeadm init --apiserver-advertise-address 192.168.1.101 --pod-network-cidr=10.244.0.0/16

and this time it succeeds.

When it completes, the output looks like this:

[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.101]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 24.503533 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: w6m8lw.0juxhx3d9nsa7w58
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.101:6443 --token w6m8lw.0juxhx3d9nsa7w58 --discovery-token-ca-cert-hash sha256:7ee51b8a8a545a6ec15b2db2781d37b865479410a22525381c88d666ca39b48d

To let root use the kubectl command, you also need to run the commands below, otherwise kubectl will complain:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you skip them, the error looks like this:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Next, confirm that the kubelet service is running normally:

systemctl status kubelet

Finally, run kubectl get nodes.

If it shows node information like the following, your node was created successfully:

[root@k8s-master home]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   7m25s   v1.12.2

Creating the pod network

For k8s to communicate across nodes, we need to install a pod network. k8s supports many network solutions, such as Calico, Weave, and flannel; I chose flannel.

Run:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

to create the network.

Once this is done, our k8s master node is complete.

Adding a worker node

This part is easy. First, run through every step in the "Setting up the basic Docker environment" section above.

Then pull the images we need; for convenience, just download and run my script:

https://raw.githubusercontent.com/bboysoulcn/k8s-images/master/pull-1.12.2.sh

chmod +x pull-1.12.2.sh

./pull-1.12.2.sh

Then install kubectl, kubeadm, and kubelet following the same steps as before: add the repository, then install.

Once that's done, join the node to the cluster.

As before, remember to turn off swap:

swapoff -a

To disable swap permanently, comment out the swap entry in /etc/fstab.
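For example, something like this should work (a sketch; double-check /etc/fstab afterwards, since the exact entry varies by install):

# Comment out any fstab line that mounts swap so it stays off after reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab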

The join command is simple; just run:

kubeadm join 192.168.1.101:6443 --token w6m8lw.0juxhx3d9nsa7w58 --discovery-token-ca-cert-hash sha256:7ee51b8a8a545a6ec15b2db2781d37b865479410a22525381c88d666ca39b48d

It's exactly the command printed after kubeadm init. If you didn't save it, no problem: list the tokens by running the following on the master node:

kubeadm token list

Note that a token is only valid for 24 hours; once it expires you'll need to generate a new one:

kubeadm token create
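If you'd rather have kubeadm regenerate the whole join command in one shot, it can print it directly:

kubeadm token create --print-join-command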

And if you forgot the --discovery-token-ca-cert-hash value, recover it on the master node with:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Once the join completes, you can list the current nodes with:

kubectl get nodes

[root@k8s-master home]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   36m     v1.12.2
k8s-slave    Ready    <none>   5m40s   v1.12.2

If the slave node shows NotReady, don't panic; it will turn Ready in a moment.

You can also check the status of all pods:

kubectl get pods --all-namespaces

[root@k8s-master home]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-lr6cq             1/1     Running   0          38m
kube-system   coredns-576cbf47c7-r675s             1/1     Running   0          38m
kube-system   etcd-k8s-master                      1/1     Running   0          31m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          31m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          31m
kube-system   kube-flannel-ds-amd64-4qppn          1/1     Running   0          33m
kube-system   kube-flannel-ds-amd64-m5plt          1/1     Running   0          7m49s
kube-system   kube-proxy-jbtq7                     1/1     Running   0          7m49s
kube-system   kube-proxy-jkgpm                     1/1     Running   0          38m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          31m

Installing the dashboard

With the cluster installed, let's set up a dashboard to play with.

First, pull the image:

docker pull bboysoul/kubernetes-dashboard:v1.10.0

Then re-tag it:

docker tag bboysoul/kubernetes-dashboard:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

Deploy it:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Then run:

kubectl proxy --address='0.0.0.0' -p 8081 --accept-hosts '^.*'

This tells the proxy to accept connections to the dashboard from any address, on port 8081.

Then visit the following URL from your own machine to reach the dashboard:

http://192.168.1.101:8081/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

The login page offers two authentication methods: a token or a kubeconfig file. To simplify the setup (honestly, I'm just too lazy to configure it properly), we'll simply grant the dashboard admin privileges.

First, download the dashboard's yml file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Then modify it to grant the dashboard admin privileges.
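One common way to do this (a sketch of the usual approach, not necessarily the exact edit the original made) is to bind the kubernetes-dashboard ServiceAccount to the built-in cluster-admin role:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Save this as dashboard-admin.yaml (a filename I'm choosing here) and apply it with kubectl apply -f dashboard-admin.yaml; after that, the dashboard's ServiceAccount token logs you in with full admin rights.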

Reposted from: https://juejin.im/post/5c18fb3d518825438f6b9dd3
