Contents
1. Pre-installation preparation
1. Prepare 3 machines (physical, cloud, or virtual), each with 2 GB or more of RAM, 2 or more CPU cores, and 30 GB or more of disk.
2. OS: CentOS 7.x, kernel 3.10 or later.
2. Installation procedure
2.1. Environment preparation
Work through these steps carefully, one by one. Steps marked optional can be skipped; everything else is checked later and will cause errors if missed.
1.# Set the hostname (run the matching command on each of the 3 machines)
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02
2.# Verify the change
hostname
3.# Edit the hosts file (on all 3 machines): open /etc/hosts and add the following, replacing the IPs with the addresses of your 3 hosts
192.168.0.200 master01
192.168.0.201 node01
192.168.0.202 node02
4.# Set up passwordless SSH (optional, run on master)
ssh-keygen
ssh-copy-id root@node01
ssh-copy-id root@node02
5.# Disable the firewall (on all 3 machines)
systemctl stop firewalld && systemctl disable firewalld
6.# Verify the firewall is stopped
systemctl status firewalld
7.# Disable SELinux (on all 3 machines) by running:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0
Alternatively, edit /etc/selinux/config directly and change
SELINUX=enforcing ===> SELINUX=disabled
8.# Verify SELinux is disabled
sestatus
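A substring replacement like `s/enforcing/disabled/` would also rewrite the word wherever it appears inside comment lines of the config file; anchoring the match to the assignment is safer. A minimal sketch of the difference, run on a throwaway copy rather than the real /etc/selinux/config:

```shell
#!/bin/sh
# Demonstrate the anchored replacement on a scratch copy of a
# typical /etc/selinux/config (temp file, not the real one).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# enforcing - SELinux security policy is enforced.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Anchored: only touches the actual SELINUX= assignment line.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"

grep '^SELINUX=' "$cfg"        # SELINUX=disabled
grep -c 'enforcing' "$cfg"     # comment line left untouched -> 1
rm -f "$cfg"
```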
9.# Disable swap (on all 3 machines)
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
sysctl --system
10.# Verify swap is off
free -mh
If the Swap row shows all zeros, swap has been disabled.
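The check can also be scripted. A minimal sketch that pulls the total out of the Swap row (`swap_total` is an invented helper name; it is shown against a canned sample so the parsing is clear, but on a real host you would pipe `free -m` into it):

```shell
#!/bin/sh
# Extract the "total" column of the Swap row from free(1)-style output.
swap_total() {
    awk '/^Swap:/ {print $2}'
}

sample='              total        used        free
Mem:           1990         600        1390
Swap:             0           0           0'

printf '%s\n' "$sample" | swap_total    # prints 0 when swap is disabled
```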
11.# Time synchronization (optional, on all 3 machines)
yum install ntpdate -y && timedatectl set-timezone Asia/Shanghai && ntpdate time.windows.com
12.# Docker manipulates iptables heavily, so confirm the bridge nf-call values are 1 (on all 3 machines)
cat /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
# If either value is not 1, add the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Then load the bridge module and apply the settings:
modprobe br_netfilter
ls /proc/sys/net/bridge
sysctl -p
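Re-running the environment prep should not append the two lines to /etc/sysctl.conf a second time. A sketch of an idempotent append (`ensure_line` is an invented helper name; the demo writes to a temp file standing in for /etc/sysctl.conf):

```shell
#!/bin/sh
# Append a line to a config file only if it is not already present.
# -x: match the whole line, -F: fixed string (no regex surprises).
ensure_line() {
    line=$1 file=$2
    grep -qxF "$line" "$file" || printf '%s\n' "$line" >> "$file"
}

conf=$(mktemp)   # stand-in for /etc/sysctl.conf
ensure_line 'net.bridge.bridge-nf-call-iptables = 1' "$conf"
ensure_line 'net.bridge.bridge-nf-call-ip6tables = 1' "$conf"
ensure_line 'net.bridge.bridge-nf-call-iptables = 1' "$conf"   # no-op on rerun

wc -l < "$conf"   # 2 (the duplicate append was skipped)
rm -f "$conf"
```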
2.2. Installing Docker
Install on all 3 machines, logged in to CentOS 7 as root.
1.# Make sure yum packages are up to date (optional)
yum update
2.# Remove old versions (if any were previously installed)
yum remove docker docker-common docker-selinux docker-engine
3.# Install the required packages: yum-utils provides yum-config-manager, and the other two are dependencies of the devicemapper storage driver
yum install -y yum-utils device-mapper-persistent-data lvm2
4.# Add the yum repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
5.# List all Docker versions available in the repos and pick one to install
yum list docker-ce --showduplicates | sort -r
6.# Install a suitable version of Docker; it does not have to be the latest. Here we install docker-ce-18.03.1.ce
yum install docker-ce-18.03.1.ce
7.# Add the Aliyun mirror of the Docker repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
8.# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
9.# Verify the installation (output showing both a Client and a Server section means Docker installed and started successfully)
docker version
10.# In the same directory, add the docker-ce yum repo and the kubernetes yum repo
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
vim kubernetes.repo
kubernetes.repo content:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
11.# Check that the yum repos are usable
yum repolist
Both repos should appear in the output, confirming they are usable.
2.3. Installing kubelet, kubeadm and kubectl
Install on all 3 machines, logged in to CentOS 7 as root.
- kubeadm: the command that bootstraps the cluster.
- kubelet: runs on every node to start pods and containers; roughly speaking, it turns Kubernetes instructions into container-runtime (Docker) operations.
- kubectl: the command-line tool for talking to the cluster.
The Aliyun Kubernetes yum repo was already added at the end of section 2.2, so there is nothing to repeat here.
1.# Install
yum install -y kubelet kubeadm kubectl
2.# Enable at boot and start (kubelet will crash-loop and keep restarting until the cluster is initialized in section 2.5; that is expected)
systemctl enable kubelet && systemctl start kubelet
2.4. Downloading the required images (master)
1.# First list the images kubernetes needs
kubeadm config images list
2.# Assuming step 1 reports version 1.20.4, create a down.sh script with the following content:
#!/bin/bash
images=(kube-proxy:v1.20.4 kube-scheduler:v1.20.4 kube-controller-manager:v1.20.4 kube-apiserver:v1.20.4 etcd:3.4.13-0 coredns:1.7.0 pause:3.2)
for imageName in ${images[@]} ; do
docker pull registry.aliyuncs.com/google_containers/$imageName
docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.aliyuncs.com/google_containers/$imageName
done
and a delete.sh script with the following content:
#!/bin/bash
images=(kube-proxy:v1.20.4 kube-scheduler:v1.20.4 kube-controller-manager:v1.20.4 kube-apiserver:v1.20.4 etcd:3.4.13-0 coredns:1.7.0 pause:3.2)
for imageName in ${images[@]}; do
docker rmi k8s.gcr.io/${imageName}
done
3.# Make the scripts executable and run down.sh to pull the images
chmod ugo+x down.sh
chmod ugo+x delete.sh
./down.sh
4.# Check that the required images were downloaded
docker images
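The hard-coded image list in down.sh goes stale with every Kubernetes release. As a sketch, the pull/tag/rmi commands can instead be generated from whatever `kubeadm config images list` prints (`mirror_cmds` is an invented helper; the demo feeds it two sample image names in place of the real kubeadm output, and only prints the commands so you can review them before piping to sh):

```shell
#!/bin/bash
# Read k8s.gcr.io image names (one per line, the format printed by
# `kubeadm config images list`) and emit the docker commands that
# mirror them via registry.aliyuncs.com/google_containers.
mirror_cmds() {
    mirror=registry.aliyuncs.com/google_containers
    while read -r img; do
        name=${img#k8s.gcr.io/}                 # strip the registry prefix
        echo "docker pull $mirror/$name"
        echo "docker tag $mirror/$name k8s.gcr.io/$name"
        echo "docker rmi $mirror/$name"
    done
}

# Dry run against two sample images; on the master you would run:
#   kubeadm config images list | mirror_cmds        # review, then | sh
printf 'k8s.gcr.io/kube-proxy:v1.20.4\nk8s.gcr.io/pause:3.2\n' | mirror_cmds
```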
2.5. Initializing the Kubernetes master
Run only on the master, logged in to CentOS 7 as root.
1.# Initialize the master
kubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16
When kubeadm prints "Your Kubernetes control-plane has initialized successfully!", the master node has been initialized.
2.# Then run the three commands it prompts you to run:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
2.6. Deploying the CNI network plugin
Run only on the master, logged in to CentOS 7 as root. The plugin chosen here is flannel.
1.# Pull the flannel image (the tag must match the one referenced in kube-flannel.yml below)
docker pull quay.io/coreos/flannel:v0.13.1-rc1
2.# Apply the yml file to deploy the network plugin
kubectl apply -f kube-flannel.yml
kube-flannel.yml content:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
2.7. Joining the worker nodes
Run only on the node machines, logged in to CentOS 7 as root.
1.# Download the images the nodes need
As on the master, create a down.sh script with the following content:
#!/bin/bash
images=(kube-proxy:v1.20.4 pause:3.2)
for imageName in ${images[@]} ; do
docker pull registry.aliyuncs.com/google_containers/$imageName
docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.aliyuncs.com/google_containers/$imageName
done
and a delete.sh script with the following content:
#!/bin/bash
images=(kube-proxy:v1.20.4 pause:3.2)
for imageName in ${images[@]}; do
docker rmi k8s.gcr.io/${imageName}
done
2.# Make the scripts executable and run down.sh to pull the images
chmod ugo+x down.sh
chmod ugo+x delete.sh
./down.sh
3.# On each node, run the kubeadm join command that was printed when the master finished initializing, for example:
kubeadm join 10.0.15.192:6443 --token ptc2mv.z4xyu5z50jg86sze \
--discovery-token-ca-cert-hash sha256:a22f5ac5211b03ac92b2d4570fb32f6bee0c81d5cadba0e6d24eeffcaf906ef8
4.# If you have lost the token and sha256 hash printed by kubeadm init, recover them as follows:
4.1# Recover the token: on the master, run
kubeadm token list
The default token is valid for 24 hours; you can also generate one that never expires:
# Create a token (expires after 24 hours)
kubeadm token create
# Create a token that never expires
kubeadm token create --ttl 0
4.2# Recover the sha256 hash: on the master, run
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
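That openssl pipeline simply SHA-256-hashes the CA's DER-encoded public key. It can be tried end to end on a throwaway self-signed certificate (generated here so the sketch is self-contained; on the master you would point it at /etc/kubernetes/pki/ca.crt instead):

```shell
#!/bin/sh
# Generate a scratch CA cert, then derive the discovery-token hash
# from it the same way kubeadm expects it for `kubeadm join`.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null

hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$hash"   # 64 hex characters, as used after --discovery-token-ca-cert-hash
rm -rf "$dir"
```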
The kubernetes cluster is now fully set up.
3. Common commands
kubectl get nodes
kubectl get pods -A
kubectl logs -f <pod-name> -n kube-system
kubectl describe pod <pod-name> -n kube-system
journalctl -u kubelet -f
kubectl drain <node-name> --delete-local-data # evict pods from the node, deleting local data
kubectl delete node <node-name> # remove the node from the cluster
kubeadm reset
kubectl get service <service-name> -A
1. Imperative resource management
# Namespaces
kubectl get namespace
kubectl get ns
# List all resources in the default namespace
kubectl get all -n default   # equivalent to plain: kubectl get all
kubectl create namespace app
kubectl delete namespace app
# Pod controllers --- deployment
kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:1.7.9 -n kube-public
kubectl get deployment -n kube-public
kubectl get pods -n kube-public -o wide # extended view of the resources
kubectl describe deployment nginx-dp -n kube-public # detailed view
kubectl delete deployment nginx-dp -n kube-public
kubectl scale deployment nginx-dp --replicas=2 -n kube-public # scale out
# Pod resources
kubectl get pods -n kube-public
kubectl exec -it nginx-dp-5dfc689474-x5nhb -n kube-public -- /bin/bash # works across hosts
kubectl delete pod nginx-dp-5dfc689474-x5nhb -n kube-public
# Service resources
kubectl expose deployment nginx-dp --port=80 -n kube-public
ipvsadm -Ln
kubectl get svc -n kube-public
kubectl describe svc nginx-dp -n kube-public
Reference (Chinese kubernetes docs): http://docs.kubernetes.org.cn/683.html
Create, read and delete are easy; update is the hard part (kubectl patch ...).
2. Declarative resource management
kubectl get pods -n kube-public
kubectl get pods nginx-dp-5dfc689474-x5nhb -o yaml -n kube-public
kubectl explain service.metadata
Write the yaml file by hand: vim nginx-ds-svc.yaml
Template content:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ds
  name: nginx-ds
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-ds
  sessionAffinity: None
  type: ClusterIP
kubectl create -f nginx-ds-svc.yaml
# Offline modification
kubectl apply -f nginx-ds-svc.yaml
# Online modification
kubectl edit svc nginx-ds
Declarative management relies on unified resource manifests (yaml files); syntax: kubectl create/apply/delete -f /path/to/yaml
tip1: Read plenty of manifests written by others until you can follow them.
tip2: Start from an existing file and adapt it to your needs.
tip3: When something is unclear, look it up with kubectl explain ...
tip4: As a beginner, do not try to conjure manifests from scratch.
4. Learning cfssl
Install cfssl, cfssl-json and cfssl-certinfo online:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
Create the CA root certificate. First create the CA certificate signing request file ca-csr.json:
mkdir /opt/certs && cd /opt/certs
ca-csr.json content:
{
  "CN": "OldboyEdu",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "beijing",
      "L": "beijing",
      "O": "od",
      "OU": "ops"
    }
  ],
  "ca": {
    "expiry": "175200h"
  }
}
The expiry of 175200h is 20 years.
cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
This produces ca.pem (the CA certificate) and ca-key.pem (the CA private key).
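As a quick sanity check on the "expiry": "175200h" value in ca-csr.json:

```shell
#!/bin/sh
# 175200 hours / 24 hours-per-day / 365 days-per-year = 20 years
echo $((175200 / 24 / 365))   # 20
```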
5. Learning harbor
Download harbor-offline-installer-v1.8.3.tgz (strongly recommended to use a version newer than 1.7.5; older versions have security vulnerabilities).
tar xf harbor-offline-installer-v1.8.3.tgz -C /opt/
cd /opt/harbor
vim harbor.yml
Set hostname: harbor.od.com
Change the http port to 180:
http:
  port: 180
Default password: Harbor12345 (can be changed)
Log location (can be changed): /data/harbor/logs
data_volume: /data/harbor
Harbor depends on docker-compose:
yum install -y docker-compose
Check the installed version:
rpm -qa docker-compose
Run the harbor install script:
cd /opt/harbor
./install.sh
This requires a working docker and docker-compose runtime environment.
docker-compose ps
Install nginx as a reverse proxy:
yum install nginx -y
Create /etc/nginx/conf.d/harbor.od.com.conf with the following content:
server {
    listen       80;
    server_name  harbor.od.com;
    client_max_body_size 1000m;
    location / {
        proxy_pass http://127.0.0.1:180;
    }
}
nginx -t
systemctl start nginx && systemctl enable nginx
Add a DNS record for the mapping:
vim /var/named/od.com.zone
harbor A 10.4.7.200
Remember to increment the serial number.
systemctl restart named
dig -t A harbor.od.com +short
Open harbor.od.com in a browser to verify.
vim /etc/docker/daemon.json
Add the following:
{
  "insecure-registries": ["harbor.od.com"]
}
docker login harbor.od.com
Username/password: admin/Harbor12345
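A syntax error in /etc/docker/daemon.json prevents the docker daemon from starting, so it is worth validating the JSON before restarting docker. A sketch on a scratch copy (assumes python3 is installed; on a real host the file is /etc/docker/daemon.json and a successful check would be followed by `systemctl restart docker`):

```shell
#!/bin/sh
# Write the registry config to a scratch file and syntax-check it
# with python's json module before it ever touches the real path.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "insecure-registries": ["harbor.od.com"]
}
EOF
python3 -m json.tool "$f" > /dev/null && echo "daemon.json: valid JSON"
rm -f "$f"
```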
Next chapter: Installing a kubernetes cluster from binaries (part 2)
https://blog.csdn.net/weixin_42211693/article/details/115077859