Kubernetes cluster backup and restore, Kubernetes cluster optimization, an introduction to full-link monitoring with SkyWalking, SkyWalking deployment, and SkyWalking configuration and usage

I. Kubernetes Cluster Backup and Restore

1. etcd database backup and restore

1) Get the etcdctl binary
Since the cluster was deployed with kubeadm, there is no etcdctl command on the machine, so we need to download a binary release. First check which etcd version is running:

kubectl -n kube-system exec -it $(kubectl get po -n kube-system |grep etcd- |head -1|awk '{print $1}') -- etcd --version

[root@k8s-master01 ~]#    kubectl -n kube-system exec -it $(kubectl get po -n kube-system |grep etcd- |head -1|awk '{print $1}') -- etcd --version
etcd Version: 3.5.9
Git SHA: bdbbde998
Go Version: go1.19.9
Go OS/Arch: linux/amd64
[root@k8s-master01 ~]# 

Then download the matching release package:

wget https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz

Extract it:

[root@k8s-master01 ~]# tar -zxvf etcd-v3.5.9-linux-amd64.tar.gz  -C /opt
etcd-v3.5.9-linux-amd64/
etcd-v3.5.9-linux-amd64/README.md
etcd-v3.5.9-linux-amd64/READMEv2-etcdctl.md
etcd-v3.5.9-linux-amd64/etcdutl
etcd-v3.5.9-linux-amd64/etcdctl
etcd-v3.5.9-linux-amd64/Documentation/
etcd-v3.5.9-linux-amd64/Documentation/README.md
etcd-v3.5.9-linux-amd64/Documentation/dev-guide/
etcd-v3.5.9-linux-amd64/Documentation/dev-guide/apispec/
etcd-v3.5.9-linux-amd64/Documentation/dev-guide/apispec/swagger/
etcd-v3.5.9-linux-amd64/Documentation/dev-guide/apispec/swagger/v3election.swagger.json
etcd-v3.5.9-linux-amd64/Documentation/dev-guide/apispec/swagger/rpc.swagger.json
etcd-v3.5.9-linux-amd64/Documentation/dev-guide/apispec/swagger/v3lock.swagger.json
etcd-v3.5.9-linux-amd64/README-etcdutl.md
etcd-v3.5.9-linux-amd64/README-etcdctl.md
etcd-v3.5.9-linux-amd64/etcd

Symlink the binary into /bin/:

ln -s /opt/etcd-v3.5.9-linux-amd64/etcdctl /bin/

2) Backup (run on one master)

The procedure is the same whether etcd is a single node or a cluster:

mkdir -p /opt/etcd_backup/

ETCDCTL_API=3 etcdctl \
snapshot save /opt/etcd_backup/snap-etcd-$(date +%F-%H-%M-%S).db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key 

[root@k8s-master01 ~]# mkdir -p /opt/etcd_backup/
[root@k8s-master01 ~]# cd /opt/etcd_backup/
[root@k8s-master01 etcd_backup]# ETCDCTL_API=3 etcdctl \
> snapshot save /opt/etcd_backup/snap-etcd-$(date +%F-%H-%M-%S).db \
> --endpoints=https://127.0.0.1:2379 \
> --cacert=/etc/kubernetes/pki/etcd/ca.crt \
> --cert=/etc/kubernetes/pki/etcd/server.crt \
> --key=/etc/kubernetes/pki/etcd/server.key
{"level":"info","ts":"2024-08-12T18:15:23.605478+0800","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db.part"}
{"level":"info","ts":"2024-08-12T18:15:23.61649+0800","logger":"client","caller":"v3@v3.5.9/maintenance.go:212","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2024-08-12T18:15:23.616586+0800","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2024-08-12T18:15:23.694468+0800","logger":"client","caller":"v3@v3.5.9/maintenance.go:220","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2024-08-12T18:15:23.70707+0800","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"5.1 MB","took":"now"}
{"level":"info","ts":"2024-08-12T18:15:23.707218+0800","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db"}
Snapshot saved at /opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db
[root@k8s-master01 etcd_backup]# ls
snap-etcd-2024-08-12-18-15-23.db
[root@k8s-master01 etcd_backup]# 
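Before relying on the snapshot, it can be worth sanity-checking it. A small sketch using the etcdutl binary from the package extracted above:

/opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot status /opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db --write-out=table
## prints the snapshot hash, revision, total keys, and size; a corrupt or truncated file fails here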

If the cluster was not deployed with kubeadm, for example etcd was deployed manually with its own TLS certificates (assume the certificate path is /etc/etcd/ssl), the backup command differs slightly:

mkdir -p /opt/etcd_backup/
ETCDCTL_API=3 etcdctl \
snapshot save /opt/etcd_backup/snap-etcd-$(date +%F-%H-%M-%S).db \
--endpoints=https://192.168.100.11:2379 \
--cacert=/etc/etcd/ssl/ca.pem \
--cert=/etc/etcd/ssl/server.pem \
--key=/etc/etcd/ssl/server-key.pem
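
Backups are only useful if they are taken regularly. A minimal cron sketch for the kubeadm layout above (the 02:00 schedule and the 7-day retention are arbitrary choices, adjust as needed):

cat > /etc/cron.d/etcd-backup <<'EOF'
# daily etcd snapshot at 02:00, keep the last 7 days
0 2 * * * root ETCDCTL_API=3 /bin/etcdctl snapshot save /opt/etcd_backup/snap-etcd-$(date +\%F-\%H-\%M-\%S).db --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key && find /opt/etcd_backup/ -name 'snap-etcd-*.db' -mtime +7 -delete
EOF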

3) Restore: single-node etcd

To verify the result later, delete a test deployment before restoring:

kubectl delete deploy testdp

Stop the kube-apiserver and etcd Pods:

[root@k8s-master01 etcd_backup]# mv /etc/kubernetes/manifests/ /etc/kubernetes/manifests_bak
[root@k8s-master01 etcd_backup]# 

Move the existing etcd data out of the way:

mv /var/lib/etcd/ /var/lib/etcd_bak

Restore the etcd data:

ETCDCTL_API=3 /opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot restore /opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db --data-dir=/var/lib/etcd
## the /var/lib/etcd/ directory is created automatically

[root@k8s-master01 etcd_backup]# ETCDCTL_API=3 /opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot restore /opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db --data-dir=/var/lib/etcd
2024-08-12T18:38:09+08:00	info	snapshot/v3_snapshot.go:248	restoring snapshot	{"path": "/opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db", "wal-dir": "/var/lib/etcd/member/wal", "data-dir": "/var/lib/etcd", "snap-dir": "/var/lib/etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\tgo.etcd.io/etcd/etcdutl/v3/snapshot/v3_snapshot.go:254\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\tgo.etcd.io/etcd/etcdutl/v3/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.snapshotRestoreCommandFunc\n\tgo.etcd.io/etcd/etcdutl/v3/etcdutl/snapshot_command.go:117\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.1.3/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.1.3/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.1.3/command.go:897\nmain.Start\n\tgo.etcd.io/etcd/etcdutl/v3/ctl.go:50\nmain.main\n\tgo.etcd.io/etcd/etcdutl/v3/main.go:23\nruntime.main\n\truntime/proc.go:250"}
2024-08-12T18:38:09+08:00	info	membership/store.go:141	Trimming membership information from the backend...
2024-08-12T18:38:09+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "cdf818194e3a8c32", "local-member-id": "0", "added-peer-id": "8e9e05c52164694d", "added-peer-peer-urls": ["http://localhost:2380"]}
2024-08-12T18:38:09+08:00	info	snapshot/v3_snapshot.go:269	restored snapshot	{"path": "/opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db", "wal-dir": "/var/lib/etcd/member/wal", "data-dir": "/var/lib/etcd", "snap-dir": "/var/lib/etcd/member/snap"}
[root@k8s-master01 etcd_backup]# 

Start the kube-apiserver and etcd Pods again:

mv /etc/kubernetes/manifests_bak /etc/kubernetes/manifests # move the directory back and the static Pods come up automatically

[root@k8s-master01 etcd_backup]# mv /etc/kubernetes/manifests_bak /etc/kubernetes/manifests
[root@k8s-master01 etcd_backup]# kubectl get pods
NAME                      READY   STATUS    RESTARTS       AGE
testdp-5b77968464-lt46x   1/1     Running   1 (101m ago)   2d13h
[root@k8s-master01 etcd_backup]# 

Now check the Pod that was deleted earlier:

kubectl get po ## the deployment that was just deleted is back

4) Restore: etcd in cluster mode
Note: in this setup there are three masters, with the three etcd members running on them. Because the backup was taken on only one master, the snapshot file has to be copied to the other two machines before restoring, along with the etcd binaries:

scp /opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db k8s-master02:/tmp/
scp /opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db k8s-master03:/tmp/
scp -r /opt/etcd-v3.5.9-linux-amd64/ k8s-master02:/opt/etcd-v3.5.9-linux-amd64/
scp -r /opt/etcd-v3.5.9-linux-amd64/ k8s-master03:/opt/etcd-v3.5.9-linux-amd64/

Stop the kube-apiserver and etcd Pods (on all three masters):

mv /etc/kubernetes/manifests/ /etc/kubernetes/manifests_bak ## renaming the directory stops the static Pods automatically

[root@k8s-master02 ~]# mv /etc/kubernetes/manifests/ /etc/kubernetes/manifests_bak 
[root@k8s-master02 ~]# 

Move the existing etcd data out of the way (on all three masters):

mv /var/lib/etcd/ /var/lib/etcd_bak

[root@k8s-master02 ~]# mv /var/lib/etcd/ /var/lib/etcd_bak

Restore on each of the three nodes.
Restore the etcd data on master01:

ETCDCTL_API=3 /opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot restore /opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db --data-dir=/var/lib/etcd --name k8s-master01 --initial-cluster="k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380,k8s-master03=https://192.168.100.13:2380" --initial-advertise-peer-urls="https://192.168.100.11:2380"

[root@k8s-master01 ~]# ETCDCTL_API=3 /opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot restore /opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db --data-dir=/var/lib/etcd --name k8s-master01 --initial-cluster="k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380,k8s-master03=https://192.168.100.13:2380" --initial-advertise-peer-urls="https://192.168.100.11:2380"
2024-08-12T22:30:31+08:00	info	snapshot/v3_snapshot.go:248	restoring snapshot	{"path": "/opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db", "wal-dir": "/var/lib/etcd/member/wal", "data-dir": "/var/lib/etcd", "snap-dir": "/var/lib/etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\tgo.etcd.io/etcd/etcdutl/v3/snapshot/v3_snapshot.go:254\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\tgo.etcd.io/etcd/etcdutl/v3/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.snapshotRestoreCommandFunc\n\tgo.etcd.io/etcd/etcdutl/v3/etcdutl/snapshot_command.go:117\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.1.3/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.1.3/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.1.3/command.go:897\nmain.Start\n\tgo.etcd.io/etcd/etcdutl/v3/ctl.go:50\nmain.main\n\tgo.etcd.io/etcd/etcdutl/v3/main.go:23\nruntime.main\n\truntime/proc.go:250"}
2024-08-12T22:30:31+08:00	info	membership/store.go:141	Trimming membership information from the backend...
2024-08-12T22:30:31+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "ae638890e671854", "added-peer-peer-urls": ["https://192.168.100.13:2380"]}
2024-08-12T22:30:31+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "2ba6e48b8cf1a0c1", "added-peer-peer-urls": ["https://192.168.100.11:2380"]}
2024-08-12T22:30:31+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "c70ab18d2d82b1e7", "added-peer-peer-urls": ["https://192.168.100.12:2380"]}
2024-08-12T22:30:31+08:00	info	snapshot/v3_snapshot.go:269	restored snapshot	{"path": "/opt/etcd_backup/snap-etcd-2024-08-12-18-15-23.db", "wal-dir": "/var/lib/etcd/member/wal", "data-dir": "/var/lib/etcd", "snap-dir": "/var/lib/etcd/member/snap"}
[root@k8s-master01 ~]# 

Check the data:

[root@k8s-master01 ~]# ls /var/lib/etcd
member
[root@k8s-master01 ~]# du -sh !$
du -sh /var/lib/etcd
66M	/var/lib/etcd
[root@k8s-master01 ~]# 

Restore the etcd data on master02:

ETCDCTL_API=3 /opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot restore /tmp/snap-etcd-2024-08-12-18-15-23.db --data-dir=/var/lib/etcd --name k8s-master02 --initial-cluster="k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380,k8s-master03=https://192.168.100.13:2380" --initial-advertise-peer-urls="https://192.168.100.12:2380"

[root@k8s-master02 opt]# ETCDCTL_API=3 /opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot restore /tmp/snap-etcd-2024-08-12-18-15-23.db --data-dir=/var/lib/etcd --name k8s-master02 --initial-cluster="k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380,k8s-master03=https://192.168.100.13:2380" --initial-advertise-peer-urls="https://192.168.100.12:2380"
2024-08-12T23:30:34+08:00	info	snapshot/v3_snapshot.go:248	restoring snapshot	{"path": "/tmp/snap-etcd-2024-08-12-18-15-23.db", "wal-dir": "/var/lib/etcd/member/wal", "data-dir": "/var/lib/etcd", "snap-dir": "/var/lib/etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\tgo.etcd.io/etcd/etcdutl/v3/snapshot/v3_snapshot.go:254\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\tgo.etcd.io/etcd/etcdutl/v3/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.snapshotRestoreCommandFunc\n\tgo.etcd.io/etcd/etcdutl/v3/etcdutl/snapshot_command.go:117\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.1.3/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.1.3/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.1.3/command.go:897\nmain.Start\n\tgo.etcd.io/etcd/etcdutl/v3/ctl.go:50\nmain.main\n\tgo.etcd.io/etcd/etcdutl/v3/main.go:23\nruntime.main\n\truntime/proc.go:250"}
2024-08-12T23:30:35+08:00	info	membership/store.go:141	Trimming membership information from the backend...
2024-08-12T23:30:35+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "ae638890e671854", "added-peer-peer-urls": ["https://192.168.100.13:2380"]}
2024-08-12T23:30:35+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "2ba6e48b8cf1a0c1", "added-peer-peer-urls": ["https://192.168.100.11:2380"]}
2024-08-12T23:30:35+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "c70ab18d2d82b1e7", "added-peer-peer-urls": ["https://192.168.100.12:2380"]}
2024-08-12T23:30:35+08:00	info	snapshot/v3_snapshot.go:269	restored snapshot	{"path": "/tmp/snap-etcd-2024-08-12-18-15-23.db", "wal-dir": "/var/lib/etcd/member/wal", "data-dir": "/var/lib/etcd", "snap-dir": "/var/lib/etcd/member/snap"}
[root@k8s-master02 opt]# 
[root@k8s-master02 opt]# ls /var/lib/etcd
member
[root@k8s-master02 opt]# du -sh !$
du -sh /var/lib/etcd
66M	/var/lib/etcd
[root@k8s-master02 opt]# 

Restore the etcd data on master03:

ETCDCTL_API=3 /opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot restore /tmp/snap-etcd-2024-08-12-18-15-23.db --data-dir=/var/lib/etcd --name k8s-master03 --initial-cluster="k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380,k8s-master03=https://192.168.100.13:2380" --initial-advertise-peer-urls="https://192.168.100.13:2380"

[root@k8s-master03 ~]# ETCDCTL_API=3 /opt/etcd-v3.5.9-linux-amd64/etcdutl snapshot restore /tmp/snap-etcd-2024-08-12-18-15-23.db --data-dir=/var/lib/etcd --name k8s-master03 --initial-cluster="k8s-master01=https://192.168.100.11:2380,k8s-master02=https://192.168.100.12:2380,k8s-master03=https://192.168.100.13:2380" --initial-advertise-peer-urls="https://192.168.100.13:2380"
2024-08-12T23:32:57+08:00	info	snapshot/v3_snapshot.go:248	restoring snapshot	{"path": "/tmp/snap-etcd-2024-08-12-18-15-23.db", "wal-dir": "/var/lib/etcd/member/wal", "data-dir": "/var/lib/etcd", "snap-dir": "/var/lib/etcd/member/snap", "stack": "go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\tgo.etcd.io/etcd/etcdutl/v3/snapshot/v3_snapshot.go:254\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\tgo.etcd.io/etcd/etcdutl/v3/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.snapshotRestoreCommandFunc\n\tgo.etcd.io/etcd/etcdutl/v3/etcdutl/snapshot_command.go:117\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.1.3/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.1.3/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.1.3/command.go:897\nmain.Start\n\tgo.etcd.io/etcd/etcdutl/v3/ctl.go:50\nmain.main\n\tgo.etcd.io/etcd/etcdutl/v3/main.go:23\nruntime.main\n\truntime/proc.go:250"}
2024-08-12T23:32:57+08:00	info	membership/store.go:141	Trimming membership information from the backend...
2024-08-12T23:32:57+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "ae638890e671854", "added-peer-peer-urls": ["https://192.168.100.13:2380"]}
2024-08-12T23:32:57+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "2ba6e48b8cf1a0c1", "added-peer-peer-urls": ["https://192.168.100.11:2380"]}
2024-08-12T23:32:57+08:00	info	membership/cluster.go:421	added member	{"cluster-id": "7076056d9cda336e", "local-member-id": "0", "added-peer-id": "c70ab18d2d82b1e7", "added-peer-peer-urls": ["https://192.168.100.12:2380"]}
2024-08-12T23:32:57+08:00	info	snapshot/v3_snapshot.go:269	restored snapshot	{"path": "/tmp/snap-etcd-2024-08-12-18-15-23.db", "wal-dir": "/var/lib/etcd/member/wal", "data-dir": "/var/lib/etcd", "snap-dir": "/var/lib/etcd/member/snap"}
[root@k8s-master03 ~]# 
[root@k8s-master03 ~]# ls /var/lib/etcd
member
[root@k8s-master03 ~]# du -sh !$
du -sh /var/lib/etcd
66M	/var/lib/etcd
[root@k8s-master03 ~]# 

Start the kube-apiserver and etcd Pods again (on all three machines):

mv /etc/kubernetes/manifests_bak /etc/kubernetes/manifests # move the directory back and the static Pods come up automatically
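
Once the static Pods are back, it is worth confirming that the restored three-member cluster is healthy. A sketch, run from k8s-master01 (where etcdctl was symlinked earlier); the endpoints match the IPs used above:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.100.11:2379,https://192.168.100.12:2379,https://192.168.100.13:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
## all three endpoints should report "is healthy"; kubectl get nodes should also work again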

II. Kubernetes Cluster Optimization

Since v1.6, Kubernetes has officially claimed support for up to 5,000 nodes in a single cluster. That is the theoretical ceiling; in practice, getting from 0 to 5,000 is a long road, and problems have to be solved one by one as they appear.

1. The official limits are:

  • No more than 5,000 nodes
  • No more than 150,000 Pods
  • No more than 300,000 containers
  • No more than 100 Pods per node

2. Master node sizing

Recommended CPU and memory for master nodes:

Worker nodes    CPU        Memory
1-5             1 core     3.75 GB
6-10            2 cores    7.5 GB
11-100          4 cores    15 GB
101-250         8 cores    30 GB
251-500         16 cores   60 GB
500+            32 cores   120 GB

3. kube-apiserver optimization

High availability

Run multiple kube-apiserver instances and balance the load across them with an external load balancer. The LB itself must also be highly available, so we use the Keepalived + HAProxy model; if the LB and kube-apiserver are on separate machines, Keepalived + LVS can be used for even better results.

Controlling the number of in-flight requests

A quick aside:
In Kubernetes, "mutating requests" are requests that change resources, such as creating, updating, or deleting them; they modify the cluster's state or configuration.
"Non-mutating requests", in contrast, do not change resources; they are typically reads or watches, such as getting a resource or listing resources.
The Kubernetes API server distinguishes the two so that it can apply different handling strategies and concurrency limits. Because mutating requests have a more direct impact on cluster state, they are usually handled more carefully and with tighter limits.

Two kube-apiserver flags control the number of in-flight requests:

--max-mutating-requests-inflight: limits the number of mutating requests being processed concurrently, ensuring that at any given moment the number of in-flight mutating requests does not exceed the configured limit. This helps protect the apiserver from overload, resource contention, and failures. 0 means unlimited; the default is 200.
--max-requests-inflight: limits the maximum number of concurrent non-mutating requests. 0 means unlimited; the default is 400.

How do you set these flags?
Edit /etc/kubernetes/manifests/kube-apiserver.yaml directly and add them (see the manifest excerpt after the recommendations below).

  • With 1,000-3,000 nodes, the recommendation is:

--max-requests-inflight=1500

--max-mutating-requests-inflight=500

  • With more than 3,000 nodes, the recommendation is:

--max-requests-inflight=3000
--max-mutating-requests-inflight=1000
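
For example, after adding the flags, the relevant part of /etc/kubernetes/manifests/kube-apiserver.yaml might look like the excerpt below (the surrounding fields come from your existing manifest; the kubelet restarts the static Pod automatically when the file changes):

spec:
  containers:
  - command:
    - kube-apiserver
    - --max-requests-inflight=1500          # non-mutating requests
    - --max-mutating-requests-inflight=500  # mutating requests
    # ... keep all of the existing flags ...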

There is also a memory-related flag:

--target-ram-mb: a memory budget for the apiserver, expressed as an integer number of megabytes (MB), used to size internal caches and keep memory consumption under control. For example, on a server with 4 GB of RAM you could set it to 1024.

4. kube-scheduler and kube-controller-manager optimization

High availability

kube-controller-manager and kube-scheduler achieve high availability through leader election, which is enabled with the following flags:

--leader-elect=true
--leader-elect-lease-duration=15s
--leader-elect-renew-deadline=10s
--leader-elect-resource-lock=leases
--leader-elect-retry-period=2s

If you deployed with the high-availability method described earlier, --leader-elect=true is already set automatically. To change these flags, edit /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-scheduler.yaml.

Controlling QPS (controller-manager)

The QPS limit for requests sent to kube-apiserver; the recommendation is:

--kube-api-qps=100

Controlling burst (controller-manager)

--kube-api-burst is a kube-controller-manager flag that controls the burst size of the requests it sends to the API server, that is, the number of requests it may issue in a short spike above the per-second limit defined by --kube-api-qps.
For example, --kube-api-burst=200 means that, on top of the per-second limit set by --kube-api-qps, up to 200 burst requests are allowed. The recommendation is:

--kube-api-burst=200

Both flags are set in /etc/kubernetes/manifests/kube-controller-manager.yaml, as sketched below.
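
A sketch of the corresponding change (excerpt only; kubeadm already sets --leader-elect=true):

spec:
  containers:
  - command:
    - kube-controller-manager
    - --leader-elect=true
    - --kube-api-qps=100     # client QPS towards kube-apiserver
    - --kube-api-burst=200   # short-term burst above the QPS limit
    # ... keep the existing flags ...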

5. Kubelet optimization

  • Set --image-pull-progress-deadline=30m: this flag defines the timeout for an image pull. If the pull does not finish within the deadline, the kubelet cancels it and reports a failure.
  • Set --serialize-image-pulls=false: this setting controls whether container images are pulled concurrently. When set to true, the kubelet pulls images serially, one after another; setting it to false enables parallel image pulls.
  • Set --max-pods=110: the maximum number of Pods the kubelet allows on a single node; the default is 110 and can be adjusted as needed (see the sketch after this list).
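
On kubeadm-managed nodes, settings with configuration-file equivalents are often easier to keep in the kubelet config file than in command-line flags. A minimal sketch, assuming the kubeadm default path /var/lib/kubelet/config.yaml:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false   # pull images in parallel
maxPods: 110                 # maximum Pods per node
# after editing: systemctl restart kubelet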

6. etcd optimization

High-availability deployment

Ideally, etcd is split out and deployed as its own cluster (kubeadm can be used for this), and the external etcd cluster is then referenced during initialization. See: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/

Disk I/O

Use SSDs for the etcd data disk to improve I/O performance.

Raise the I/O priority of the etcd process

Because etcd must persist data to its on-disk log files, disk activity from other processes can increase write latency, leading to etcd request timeouts and temporary leader loss. Giving etcd a high disk priority lets it run stably alongside those processes:

ionice -c2 -n0 -p $(pgrep etcd)

Increase the storage quota

The default etcd space quota is 2 GB; once it is exceeded, etcd stops accepting writes. Increase the quota with the --quota-backend-bytes flag; the maximum supported value is 8 GB.

Edit /etc/kubernetes/manifests/etcd.yaml and add the flag, for example:
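
A sketch of the relevant part of the etcd static Pod manifest after the change (excerpt only; 8589934592 bytes is 8 GB, the documented maximum):

spec:
  containers:
  - command:
    - etcd
    - --quota-backend-bytes=8589934592   # raise the space quota from the 2 GB default to 8 GB
    # ... keep the existing flags ...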

Store events separately

In a large cluster with many nodes and services, a huge number of events are generated; they put heavy pressure on etcd and consume a lot of etcd storage. To improve performance at scale, events can be stored in a dedicated etcd cluster.

Configure kube-apiserver:

--etcd-servers="http://etcd1:2379,http://etcd2:2379,http://etcd3:2379" --etcd-servers-overrides="/events#http://etcd4:2379,http://etcd5:2379,http://etcd6:2379"

III. Introduction to Full-Link Monitoring with SkyWalking

1. APM   

APM (Application Performance Management) collects and reports data through various probes, gathers key metrics, and pairs them with visualization to provide a systematic solution for managing application performance and faults.
The main APM tools today include Cat, Zipkin, Pinpoint, and SkyWalking. This chapter focuses on SkyWalking, an excellent APM tool originally from China that covers distributed tracing, performance metrics analysis, and application and service dependency analysis.
Monitoring systems such as Zabbix, Prometheus, and open-falcon focus mainly on server hardware metrics and the state of system services, whereas an APM system focuses on what happens inside the application and on the call chains between services. APM makes it easier to dig into the code and find the root cause of slow responses, so it complements Zabbix/Prometheus-style monitoring rather than replacing it.
What problems does APM solve?
In a large microservice architecture composed of dozens or hundreds of services, questions like the following typically come up:

  • How do you stitch together the whole call chain and locate problems quickly? For example, the data flow between an application and third-party services, or calls between applications.
  • How do you untangle the dependencies between microservices? For example, application A calls application B, which in turn calls application C.
  • How do you analyze the performance of each microservice's endpoints, and how do you trace the order in which an entire business flow is processed?

An APM tool such as SkyWalking can answer these questions quickly and automatically.

2. SkyWalking overview

SkyWalking is an open-source project from China, open-sourced by Wu Sheng in 2015 and accepted into the Apache Incubator in 2017. It is an application performance monitoring tool for distributed systems, designed for microservices, cloud-native, and container-based (Docker, Kubernetes, Mesos) architectures. It is an excellent APM tool that includes distributed tracing, performance metrics analysis, and application and service dependency analysis.

Official site: https://skywalking.apache.org/
GitHub: https://github.com/apache/skywalking
Documentation: https://skywalking.apache.org/docs/

1) SkyWalking monitoring dimensions

SkyWalking provides solutions for observing and monitoring distributed systems in many different scenarios. First, as in traditional approaches, it offers auto-instrumentation agents for services written in Java, C#, Node.js, Go, PHP, and Nginx LUA (with contributed SDKs for Python and C++).

For most of these languages, in continuously deployed environments, cloud-native infrastructure keeps getting more powerful, but also more complex.

SkyWalking's service mesh receiver lets it ingest telemetry from service mesh frameworks such as Istio, helping users understand the whole distributed system.

In short, SkyWalking provides observability for services, service instances, and endpoints. Service, Instance, and Endpoint are terms you see everywhere these days, so here is what they mean in SkyWalking:

  • Service: a group of workloads that provide the same behavior for incoming requests. When using an instrumentation agent or SDK you can define the service name; SkyWalking can also use names defined in platforms such as Istio.
  • Service Instance: each individual workload in that group, like a Pod in Kubernetes. A service instance is not necessarily a single operating-system process, but when you use an instrumentation agent, an instance is in fact one real OS process.
  • Endpoint: a request path handled by a particular service, such as an HTTP URI path or a gRPC service class name plus method signature.

With SkyWalking, users can see the topology between services and endpoints, view the performance metrics of every service, service instance, and endpoint, and define alerting rules.

2) SkyWalking architecture

SkyWalking is logically divided into four parts: probes, the platform backend, storage, and the user interface (UI).

  • Probes: they differ depending on the data source, but they all collect data and convert it into the format SkyWalking understands.
  • Platform backend: handles data aggregation and analysis, and drives the flow of data from the probes to the UI. The analysis covers SkyWalking's native traces and metrics as well as third-party sources, including Istio and Envoy telemetry and the Zipkin trace format. You can even use the Observability Analysis Language to customize the aggregation of native metrics and use the meter system for extended metrics.
  • Storage: stores SkyWalking data through an open, pluggable interface. You can pick an existing storage system, such as ElasticSearch, H2, or a MySQL cluster (managed by ShardingSphere), or implement your own.
  • UI: a highly customizable web application through which users can visualize and manage SkyWalking data.

3) Probes

A probe is an agent or SDK library integrated into the target system that collects telemetry data, including traces and performance metrics. Depending on the target technology stack, probes can work in very different ways, but fundamentally they all do the same thing: collect data, format it, and send it to the backend.

At a high level, SkyWalking probes fall into three groups:

  • Language-native agents. These run in the user space of the target service, as if they were part of the user code. For example, the SkyWalking Java agent uses the -javaagent command-line argument to manipulate code at runtime; other agents rely on hook functions or interception mechanisms provided by the target libraries. These probes are specific to a language and its libraries.
  • Service mesh probes. These collect data from the sidecars and control plane of a service mesh. In the past, a proxy was only used as the ingress of the whole cluster, but with a service mesh and sidecars we can now observe traffic from there as well.
  • Third-party instrumentation libraries. SkyWalking can also accept data formats produced by other popular instrumentation libraries, analyzing the data and converting it into its own trace and metric formats. Initially this only supported Zipkin span data.

IV. SkyWalking Deployment

We deploy SkyWalking with Helm and use Elasticsearch as the storage backend. Elasticsearch has to be installed separately, also with Helm.

1. Versions

Kubernetes 1.26.2
Skywalking 9.5.0
Elasticsearch 8.8.1

2. Install Elasticsearch

1) First confirm that the bitnami repo has already been added:

helm repo list |grep bitnami

If it has not, add it:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

2) Download the Elasticsearch chart

helm pull bitnami/elasticsearch --untar --version 19.10.2

[root@k8s-master01 ~]# helm pull bitnami/elasticsearch --untar --version 19.10.2
[root@k8s-master01 ~]# ls
anaconda-ks.cfg  calico.yaml  elasticsearch  etcd-v3.5.9-linux-amd64.tar.gz  helm-v3.13.3-linux-amd64.tar.gz
[root@k8s-master01 ~]# 

3) Install Elasticsearch

Note: make sure every Kubernetes node has more than 4 GB of memory, otherwise Elasticsearch will not run.
Edit values.yaml.

This assumes NFS-based dynamic provisioning is already configured.

cd elasticsearch
vi values.yaml
# set storageClass: "nfs-client"
# find memory: 2048Mi and change it to memory: 1024Mi
# find heapSize: 1024m and change it to heapSize: 512m
# find replicaCount: 2 and change it to replicaCount: 1   ## do not lower this in production

Each node should ideally have at least 4096 MB of memory allocated.

Install:

helm install skywalking-es .

[root@k8s-master01 elasticsearch]# helm install skywalking-es .
NAME: skywalking-es
LAST DEPLOYED: Tue Aug 13 02:33:00 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: elasticsearch
CHART VERSION: 19.10.2
APP VERSION: 8.8.1

-------------------------------------------------------------------------------
 WARNING

    Elasticsearch requires some changes in the kernel of the host machine to
    work as expected. If those values are not set in the underlying operating
    system, the ES containers fail to boot with ERROR messages.

    More information about these requirements can be found in the links below:

      https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html
      https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html

    This chart uses a privileged initContainer to change those settings in the Kernel
    by running: sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536

** Please be patient while the chart is being deployed **

  Elasticsearch can be accessed within the cluster on port 9200 at skywalking-es-elasticsearch.default.svc.cluster.local

  To access from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/skywalking-es-elasticsearch 9200:9200 &
    curl http://127.0.0.1:9200/
[root@k8s-master01 elasticsearch]# 

Check the Pods:

[root@k8s-master01 elasticsearch]# kubectl get pod
NAME                                         READY   STATUS    RESTARTS   AGE
skywalking-es-elasticsearch-coordinating-0   0/1     Running   0          <invalid>
skywalking-es-elasticsearch-data-0           0/1     Running   0          <invalid>
skywalking-es-elasticsearch-data-1           0/1     Running   0          <invalid>
skywalking-es-elasticsearch-ingest-0         0/1     Running   0          <invalid>
skywalking-es-elasticsearch-ingest-1         0/1     Running   0          <invalid>
skywalking-es-elasticsearch-master-0         0/1     Running   0          <invalid>
skywalking-es-elasticsearch-master-1         0/1     Running   0          <invalid>
testdp-5b77968464-svch7                      1/1     Running   0          8h
[root@k8s-master01 elasticsearch]# 

Check the Services:

kubernetes                                    ClusterIP   10.15.0.1      <none>        443/TCP             3d12h
skywalking-es-elasticsearch                   ClusterIP   10.15.125.50   <none>        9200/TCP,9300/TCP   2m19s
skywalking-es-elasticsearch-coordinating-hl   ClusterIP   None           <none>        9200/TCP,9300/TCP   2m19s
skywalking-es-elasticsearch-data-hl           ClusterIP   None           <none>        9200/TCP,9300/TCP   2m19s
skywalking-es-elasticsearch-ingest-hl         ClusterIP   None           <none>        9200/TCP,9300/TCP   2m19s
skywalking-es-elasticsearch-master-hl         ClusterIP   None           <none>        9200/TCP,9300/TCP   2m19s
testsvc                                       NodePort    10.15.56.110   <none>        80:30545/TCP        3d11h

Access Elasticsearch:

curl http://10.15.125.50:9200
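
If you would rather not hard-code the ClusterIP, it can be looked up from the Service; a small sketch:

ES_IP=$(kubectl get svc skywalking-es-elasticsearch -o jsonpath='{.spec.clusterIP}')
curl http://$ES_IP:9200   # should return the cluster name and version as JSON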

3. Install SkyWalking

1) Add the repo

helm repo add skywalking https://apache.jfrog.io/artifactory/skywalking-helm
helm repo update

[root@k8s-master01 ~]# helm repo add skywalking https://apache.jfrog.io/artifactory/skywalking-helm
"skywalking" has been added to your repositories
[root@k8s-master01 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "aliyun" chart repository
...Successfully got an update from the "skywalking" chart repository
...Successfully got an update from the "bitnami" chart repository
...Unable to get an update from the "helm_sh" chart repository (https://charts.helm.sh/stable):
	context deadline exceeded (Client.Timeout or context cancellation while reading body)
Update Complete. ⎈Happy Helming!⎈
[root@k8s-master01 ~]# 

2) Download the chart

helm pull skywalking/skywalking --version 4.3.0

[root@k8s-master01 ~]# helm pull skywalking/skywalking --version 4.3.0
[root@k8s-master01 ~]# ls
anaconda-ks.cfg  elasticsearch                   helm-v3.13.3-linux-amd64.tar.gz         nfs-subdir-external-provisioner.zip
calico.yaml      etcd-v3.5.9-linux-amd64.tar.gz  nfs-subdir-external-provisioner-master  skywalking-4.3.0.tgz
[root@k8s-master01 ~]# 

3) Edit values.yaml

tar zxf skywalking-4.3.0.tgz
cd skywalking
vi values.yaml   # a few settings need to change
elasticsearch:
  config:
    host: skywalking-es-elasticsearch.default   ## taken from kubectl get svc; the .default suffix is the namespace
    port:
      http: 9200
  enabled: false   ## change true to false so the chart does not install its own Elasticsearch, since we already installed it above
oap:
  image:
    tag: 9.5.0
  javaOpts: -Xmx512m -Xms512m   ## reduced memory; increase it appropriately in production
  replicas: 1   ## change 2 to 1 to save resources; do not do this in production
  storageType: elasticsearch    ## use Elasticsearch as storage
ui:
  image:
    tag: 9.5.0

4) Install

helm install skywalking . 

[root@k8s-master01 skywalking]# helm install skywalking  .
Error: INSTALLATION FAILED: cannot load values.yaml: error converting YAML to JSON: yaml: line 166: mapping values are not allowed in this context
[root@k8s-master01 skywalking]# vim values.yaml +166
[root@k8s-master01 skywalking]# helm install skywalking  .
NAME: skywalking
LAST DEPLOYED: Tue Aug 13 18:46:07 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
************************************************************************
*                                                                      *
*                 SkyWalking Helm Chart by SkyWalking Team             *
*                                                                      *
************************************************************************

Thank you for installing skywalking.

Your release is named skywalking.

Learn more, please visit https://skywalking.apache.org/

Get the UI URL by running these commands:
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward svc/skywalking-ui 8080:80 --namespace default

5) Port forwarding

nohup kubectl port-forward svc/skywalking-ui --address 192.168.100.11 8080:80 &

Note: this maps port 80 of the skywalking-ui Service to port 8080 on k8s-master01.

Also forward the OAP service's port 11800 so that agents outside the cluster can connect.
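
A sketch of the equivalent command, assuming the chart created the OAP Service under the name skywalking-oap (confirm with kubectl get svc):

nohup kubectl port-forward svc/skywalking-oap --address 192.168.100.11 11800:11800 &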

6) Access the UI

http://192.168.100.11:8080/

7) Check the data in Elasticsearch

curl 10.15.125.50:9200/_cat/indices?v

[root@k8s-master01 skywalking]# curl 10.15.125.50:9200/_cat/indices?v
health status index                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   sw_metrics-all-20240813 t7bPXdH2Q0G1wPIonyPlvg   1   1          0            0       494b           247b
[root@k8s-master01 skywalking]# 

V. SkyWalking Configuration and Usage

1. Instrument a Java application

1) Use a separate machine, or one of the Kubernetes nodes, and install Docker.
See the earlier chapters for the detailed steps.

yum install -y docker-ce

2) Install git

yum install -y git

3) Clone the zrlog source

git clone https://gitee.com/94fzb/zrlog-docker.git

4) Build

cd zrlog-docker
## download the SkyWalking Java agent
which wget &> /dev/null || yum install -y wget
wget https://archive.apache.org/dist/skywalking/java-agent/8.15.0/apache-skywalking-java-agent-8.15.0.tgz
tar zxf apache-skywalking-java-agent-8.15.0.tgz
## zip up the directory we just extracted
which zip &>/dev/null || yum install -y zip
zip -r java-agent.zip skywalking-agent
## edit the startup script run.sh inside the zrlog package, so that it loads the SkyWalking Java agent
which unzip &>/dev/null || yum install -y unzip
wget http://dl.zrlog.com/release/zrlog.zip
mkdir zrlog
unzip zrlog.zip -d zrlog/
vi zrlog/bin/run.sh ## change the contents to the following
java -javaagent:/opt/tomcat/skywalking-agent/skywalking-agent.jar -Dskywalking.agent.service_name=app1 -Dskywalking.collector.backend_service=192.168.222.101:11800 -Xmx128m -Dfile.encoding=UTF-8 -jar zrlogstarter.jar
# Notes:
# -javaagent: path to the skywalking-agent.jar file
# skywalking.agent.service_name: the name this application appears under in SkyWalking
# skywalking.collector.backend_service: the SkyWalking OAP address (the gRPC reporting endpoint; the default port is 11800)
## repackage zrlog
cd zrlog
zip -r zrlog.zip ./*
cd ..
/bin/mv zrlog/zrlog.zip ./
## edit the Dockerfile
vi Dockerfile # change it to the following
FROM openjdk:17
MAINTAINER "xiaochun" xchun90@163.com
CMD ["/bin/bash"]
ARG DUMMY
RUN mkdir -p /opt/tomcat
#RUN curl -o /opt/tomcat/ROOT.zip http://dl.zrlog.com/release/zrlog.zip?${DUMMY}
COPY zrlog.zip /opt/tomcat/ROOT.zip
RUN cd /opt/tomcat && jar -xf ROOT.zip
ADD /java-agent.zip /opt/tomcat/java-agent.zip
RUN cd /opt/tomcat && jar -xf java-agent.zip
ADD /bin/run.sh /run.sh
RUN chmod a+x /run.sh
#RUN rm /opt/tomcat/ROOT.zip
CMD /run.sh
## end of the Dockerfile
## build the image
sh build.sh

5) Deploy the zrlog application

Start the container first:

docker run -itd -p 28080:8080 -e DOCKER_MODE='true' zrlog /run.sh

Access it.
First check which host port zrlog is mapped to:

docker ps

Then open the following address:
http://192.168.222.101:28080
A MySQL instance is also required at this point; the detailed steps are omitted here, since one can simply be deployed in Kubernetes with Helm, as sketched below.
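
A minimal sketch of deploying that MySQL with the bitnami chart (the release name, password, and database name are arbitrary examples; the NodePort is only needed because the zrlog container runs on a Docker host outside the cluster):

helm install zrlog-mysql bitnami/mysql \
  --set auth.rootPassword=zrlog123 \
  --set auth.database=zrlog \
  --set primary.service.type=NodePort
## kubectl get svc zrlog-mysql shows the assigned NodePort; zrlog's install wizard
## can then point at <any-node-ip>:<nodeport>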

2. Access the SkyWalking UI

http://192.168.100.11:8080/

 
