Cloud Native: the Kubernetes Server

Kubernetes (abbreviated K8s, because there are 8 characters between the "k" and the "s") is an open-source system for managing containerized applications across multiple hosts in a cloud platform.

Installing K8s requires installing Docker first.

1. Install the yum prerequisites and add the Docker repository

yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2. Install Docker and configure a registry mirror

sudo yum install docker-ce docker-ce-cli containerd.io
sudo systemctl enable docker --now
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://osgtxnjt.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

I use the Alibaba Cloud registry mirror here. Alibaba Cloud assigns every user a dedicated mirror address: log in to the Alibaba Cloud console and open Image Tools -> Registry Mirror (镜像加速器) to find yours.

That completes the Docker installation.
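
A quick sanity check that Docker and the mirror configuration took effect (the hello-world run is only a connectivity test, not required for K8s):

docker --version
sudo docker info | grep -A 2 'Registry Mirrors'   # should show the aliyun mirror address
sudo docker run --rm hello-world                  # pulls and runs a tiny test container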

3. Set the Linux hostname, disable SELinux, and turn off swap

hostnamectl set-hostname admin-domain
sudo setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
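
Before moving on, the three changes can be verified (expected results in the comments):

hostname                  # admin-domain
getenforce                # Permissive
free -h | grep -i swap    # Swap totals should read 0B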

4. Allow iptables to see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
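
The modules-load.d entry only takes effect at boot, so load the module now and verify the settings (both sysctls should print 1):

sudo modprobe br_netfilter                  # load immediately; modules-load.d applies at next boot
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables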

5. Install and enable kubelet, kubectl, and kubeadm

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

systemctl enable --now kubelet
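
At this point kubelet will restart in a crash loop every few seconds; that is expected, because it has no configuration until kubeadm init (or kubeadm join) generates one. To confirm the install:

systemctl status kubelet   # 'activating (auto-restart)' is normal before kubeadm init
kubeadm version            # should report v1.20.9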

6. Pull the required images. A worker node only needs the kube-proxy image; the master node needs all of them. The script below pulls everything:

sudo tee ./images.sh <<- 'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x images.sh 
./images.sh
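
Once the script finishes, all the images should be local; one way to confirm:

docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images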

7. Set up the master node. First, add the master's IP and domain name to /etc/hosts on every server:

 echo "10.112.2.8 cluster-endpoint" >> /etc/hosts

Then initialize the master. Run the following on the master node only. --apiserver-advertise-address is the master's IP and --control-plane-endpoint is the master's domain name; leave the other parameters unchanged if possible.

kubeadm init \
--apiserver-advertise-address=10.112.2.8 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

On success it prints output like the following:

[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
        [WARNING Hostname]: hostname "admin-domain" could not be reached
        [WARNING Hostname]: hostname "admin-domain": lookup admin-domain on 192.168.201.2:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [admin-domain cluster-endpoint kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.112.2.8]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [admin-domain localhost] and IPs [10.112.2.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [admin-domain localhost] and IPs [10.112.2.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.002278 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node admin-domain as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node admin-domain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 63xza1.0qp2nkzbd61oedxx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token 63xza1.0qp2nkzbd61oedxx \
    --discovery-token-ca-cert-hash sha256:f368ceeb91b38bddcbdaab51581ff7a44e02da15993829e3a3d10bb85002cd3b \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token 63xza1.0qp2nkzbd61oedxx \
    --discovery-token-ca-cert-hash sha256:f368ceeb91b38bddcbdaab51581ff7a44e02da15993829e3a3d10bb85002cd3b 

After it succeeds, follow the instructions in the output step by step.

The last two parts of the output join more nodes to the cluster: the command with --control-plane joins another control-plane (master) node, and the one without it joins a worker node. Both carry a bootstrap token that is valid for only 24 hours; after it expires, regenerate a join command with kubeadm token create --print-join-command (a sketch follows the quoted output below). The relevant part of the output again:
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token 63xza1.0qp2nkzbd61oedxx \
    --discovery-token-ca-cert-hash sha256:f368ceeb91b38bddcbdaab51581ff7a44e02da15993829e3a3d10bb85002cd3b \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token 63xza1.0qp2nkzbd61oedxx \
    --discovery-token-ca-cert-hash sha256:f368ceeb91b38bddcbdaab51581ff7a44e02da15993829e3a3d10bb85002cd3b 
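
For reference, a sketch of regenerating an expired join command; run it on the master (the upload-certs step is only needed when joining an additional control-plane node):

kubeadm token create --print-join-command   # prints a ready-to-run worker join command
# Only for joining another control-plane node: re-upload the certificates,
# then append --control-plane --certificate-key <printed key> to the join command
sudo kubeadm init phase upload-certs --upload-certs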

Above those two commands, the output also contains this passage:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Following those instructions, run these commands so kubectl can talk to the cluster as your user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

With that done, list the cluster's nodes with kubectl get nodes:

[root@admin-domain ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
admin-domain   NotReady   control-plane,master   13m   v1.20.9

8. Install the network plugin

curl https://docs.projectcalico.org/manifests/calico.yaml -O

kubectl apply -f calico.yaml

kubectl apply -f creates or updates cluster resources from a configuration file.

Once Calico is applied, you can again list the cluster's nodes with kubectl get nodes.

To see all pods in every namespace of the cluster: kubectl get pod -A
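
Right after applying calico.yaml the Calico pods go through ContainerCreating, and the node only switches from NotReady to Ready once they are Running. A quick way to watch the progress:

kubectl get pod -A        # wait for calico-node and calico-kube-controllers to be Running
kubectl get nodes         # STATUS should change from NotReady to Ready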

9. Install the K8s dashboard (web UI)

First download the dashboard's YAML manifest and install from it; to uninstall later, run kubectl delete -f against the same file.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml

If https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml cannot be downloaded directly from the Linux host, open the URL in a browser, create a file on the Linux machine, paste the contents into it, and install it with the same kubectl apply command.

Open the dashboard's service definition and change type: ClusterIP to type: NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

After the edit, check the result with kubectl get svc -A: the dashboard service's TYPE column now shows NodePort, and the high port in its PORT(S) column (32128 in my case) is the port for accessing the console.
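
The port is assigned randomly from the NodePort range, so yours will differ from 32128. Assuming the standard service name from recommended.yaml, a one-liner to read it back:

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'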

Open https://<node-ip>:32128 in a browser to reach the dashboard login page.

The next step is to create a login account.

First create a file named dash.yaml and copy the following into it:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Finally run kubectl apply -f dash.yaml to create the account, then run the command below to retrieve its token; log in to the dashboard with that token.

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
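
To print only the token itself instead of the whole secret description, this variant should also work on v1.20 (where a token secret is still created automatically for every ServiceAccount):

kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o go-template='{{.data.token | base64decode}}'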

10. Working with pods and deployments via kubectl

Create a pod with kubectl run <pod-name> --image=<image>; delete a pod with kubectl delete pod <pod-name>; inspect a pod's events and status with kubectl describe pod <pod-name> (for container logs, use kubectl logs <pod-name>).

List pods with extra detail (node, IP): kubectl get pod -owide

Open a shell inside a pod: kubectl exec -it <pod-name> -- /bin/bash

Create a deployment: kubectl create deployment <name> --image=<image> --replicas=<count>

Delete a deployment: kubectl delete deployment <name>

Deployments are self-healing: if a pod crashes or is deleted, the deployment recreates it.
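
Self-healing is easy to see for yourself: delete one of a deployment's pods and a replacement appears immediately (the pod name below is hypothetical; use one from your own list):

kubectl get pod                                # note one of the mytomcat pod names
kubectl delete pod mytomcat-6f5f895f4f-abcde   # hypothetical name; take one from the list above
kubectl get pod                                # a new pod with a different suffix replaces it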

Scale a deployment up or down: kubectl scale deploy/<name> --replicas=<count>

Roll out a new version: kubectl set image deploy/<deployment> <container>=<image>:<tag> --record; for example: kubectl set image deploy/mytomcat tomcat=tomcat:7.0.59 --record

Roll back a version: list the revision history with kubectl rollout history deploy/<name>, then roll back with kubectl rollout undo deploy/<name> --to-revision=<revision> (take the revision number from the history output).

kubectl run mynginx --image=nginx                                  # create a standalone pod
kubectl describe pod mynginx                                       # inspect its events and status
kubectl delete pod mynginx
kubectl get pod -owide
kubectl exec -it mynginx -- /bin/bash                              # shell into the pod
kubectl create deployment mytomcat --image=tomcat:8.5.68 --replicas=3
kubectl get deployment
kubectl delete deployment mytomcat
kubectl scale deploy/mytomcat --replicas=2                         # scale 3 -> 2 replicas
kubectl set image deploy/mytomcat tomcat=tomcat:7.0.59 --record    # rolling update
kubectl rollout history deploy/mytomcat                            # list revisions
kubectl rollout undo deploy/mytomcat --to-revision=2               # roll back to revision 2

In Kubernetes, a Deployment is for stateless applications, such as typical microservices. Stateful applications such as Redis should be deployed with a StatefulSet, which gives each pod a stable network identity and stable storage. A DaemonSet runs exactly one copy of a pod on each node, for daemon-style workloads such as log collection. Use Job/CronJob for one-off and scheduled tasks.
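
As a small illustration of the last point, a minimal CronJob sketch (name and schedule are made up), written in the same heredoc style as earlier; note that on v1.20 CronJob is still batch/v1beta1:

kubectl apply -f - <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-cron            # hypothetical name
spec:
  schedule: "*/5 * * * *"     # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello from cronjob"]
          restartPolicy: OnFailure
EOF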

11. Service operations: exposing ports

After the pods are up, expose them for access with kubectl expose deploy <deployment> --port=<service-port> --target-port=<container-port>, where --port is the port the service listens on and --target-port is the port the container itself serves on.

Adding --type=ClusterIP (the default) makes the service reachable only inside the cluster; --type=NodePort also makes it reachable from outside. NodePort ports are allocated from the 30000-32767 range by default, and the chosen port is opened on every node. For example:

[root@admin-domain ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          2d6h
mytomcat     NodePort    10.96.214.112   <none>        8080:30988/TCP   32s
Here 30988 is the node port opened on every node.

Incidentally, kubectl get service is equivalent to kubectl get svc.

Delete a service: kubectl delete service <name>

kubectl expose deploy mytomcat --port=8080 --target-port=8080
kubectl get service
kubectl get pod --show-labels
kubectl delete svc mytomcat
kubectl expose deploy mytomcat --port=8080 --target-port=8080 --type=NodePort
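
To confirm the NodePort service is actually reachable, the assigned port can be read back and tested from any machine that can reach a node (the node IP below is my master's, but any node IP works):

kubectl get svc mytomcat -o jsonpath='{.spec.ports[0].nodePort}'   # print the assigned node port
curl -I http://10.112.2.8:30988/   # any HTTP response (even a 404 from a bare Tomcat image) means the port is open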
