Hyper-V Kubernetes commands

Changing the Ubuntu screen resolution

For now the only workaround is to set the resolution by hand; the steps are below.

Open the file /etc/default/grub,
find the line starting with GRUB_CMDLINE_LINUX_DEFAULT, and append
video=hyperv_fb:[resolution]
For example, if the resolution I want is 1600x900, the line after editing reads

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1600x900"
After saving, run sudo update-grub in a terminal.
Reboot, and Ubuntu comes up at the new resolution.
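
The same edit can be scripted; a minimal sketch, assuming the line currently reads exactly GRUB_CMDLINE_LINUX_DEFAULT="quiet splash":

# Append the Hyper-V framebuffer resolution and regenerate the GRUB config
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"$/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=hyperv_fb:1600x900"/' /etc/default/grub
sudo update-grub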

# Environment prep: static IP, hostname, /etc/hosts entries, disabling the firewall and SELinux, etc.
hostnamectl set-hostname master
#hostnamectl set-hostname node1   # run on node1
#hostnamectl set-hostname node2   # run on node2

cat >> /etc/hosts << EOF
192.168.8.140 master
192.168.8.141 node1
192.168.8.142 node2
EOF
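
An optional sanity check that the new entries resolve (getent reads /etc/hosts through NSS):

getent hosts master node1 node2
ping -c 1 node1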

# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux temporarily; for a permanent change edit /etc/selinux/config
setenforce 0

# Disable swap
swapoff -a                                    # temporary, until reboot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab # permanent: comment out the swap line
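
To confirm swap is really off:

swapon --show   # prints nothing when no swap is active
free -h         # the Swap row should show 0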


# Install Docker
apt-get install -y docker.io

# The official Docker registry is very slow to reach from inside China, so once
# Docker is installed, configure a domestic mirror (and set the systemd cgroup
# driver to match the kubelet):
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart Docker
systemctl restart docker
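
An optional check, not in the original steps, to confirm the mirror and the cgroup driver took effect:

docker info | grep -i -A1 -E 'cgroup driver|registry mirrors'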

# Install the Kubernetes components
#
# kubelet - runs on every worker node; manages Pod lifecycle and talks to the control plane
# kubectl - the Kubernetes CLI for talking to the control plane
# kubeadm - the official cluster bootstrap tool
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
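
A common follow-up, not part of the original steps, is to pin these three packages so a routine apt upgrade cannot move them out from under the cluster:

# Hold the current versions of the Kubernetes packages
apt-mark hold kubelet kubeadm kubectl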

systemctl daemon-reload
systemctl restart kubelet


# Pre-pull the coredns:1.8.4 image needed later; do this on every machine
docker pull coredns/coredns:1.8.4
 
# Re-tag it to the name kubeadm expects
docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4  

# Wipe any state left over from a previous kubeadm run
kubeadm reset

# Full init: advertise address, domestic image repository, explicit service and pod CIDRs
kubeadm init \
	--apiserver-advertise-address=192.168.129.9 \
	--image-repository registry.aliyuncs.com/google_containers \
	--service-cidr=10.1.0.0/16 \
	--pod-network-cidr=10.244.0.0/16

# Minimal variant: defaults everywhere except the advertise address and pod CIDR
kubeadm init \
	--apiserver-advertise-address=192.168.129.9 \
	--pod-network-cidr=10.244.0.0/16
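
Once init succeeds, kubeadm's own output asks you to copy the admin kubeconfig before using kubectl; the standard steps are:

# Give the current user kubectl access to the new cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config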

Shutdown procedure
0. Worker nodes
[root@dn02 ~]# systemctl stop kube-proxy
[root@dn02 ~]# systemctl stop kubelet
1. Master
[root@dn01 ~]# systemctl stop kube-scheduler
[root@dn01 ~]# systemctl stop kube-controller-manager
[root@dn01 ~]# systemctl stop kube-apiserver.service
2. Stop flanneld on the worker nodes
[root@dn02 ~]# systemctl stop flanneld
3. Stop etcd (and docker) on all nodes
[root@dn0X ~]# systemctl stop etcd
[root@dn0X ~]# systemctl stop docker
4. Power everything off
[root@dn01 ~]# init 0
Startup procedure (a script wrapping this order follows the list)
systemctl start etcd                     (all three nodes)
systemctl start flanneld                 (worker nodes)
systemctl start kube-apiserver.service   (enabled at boot by default; master)
systemctl start kube-scheduler           (enabled at boot by default; master)
systemctl start kube-controller-manager  (enabled at boot by default; master)
systemctl start kubelet                  (worker nodes)
systemctl start kube-proxy               (worker nodes)
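
A small sketch wrapping the startup order above in one script; it assumes the same binary-installed systemd units and simply skips the units a given node does not have:

#!/bin/bash
# start-k8s.sh -- hypothetical helper; starts whichever cluster units exist
# on this node, in the dependency order listed above (etcd first, kube-proxy last).
for unit in etcd flanneld kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy; do
    systemctl cat "${unit}.service" >/dev/null 2>&1 && systemctl start "${unit}"
done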

# If a previous attempt left state behind, kubeadm fails preflight with, e.g.:
# [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
# Clear it with:
kubeadm reset


root@k8s-master:/home/shi# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6

sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.2 
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.2 k8s.gcr.io/kube-apiserver:v1.24.2 
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2 
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.2 k8s.gcr.io/kube-controller-manager:v1.24.2 
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.2 
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.2 k8s.gcr.io/kube-scheduler:v1.24.2 
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.2 
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.2 k8s.gcr.io/kube-proxy:v1.24.2 
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7 
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7 k8s.gcr.io/pause:3.7
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0 
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0 k8s.gcr.io/etcd:3.5.3-0
sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.6
# Note: kubeadm expects the nested coredns/coredns name and a "v" prefix on the tag
sudo docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
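
The pull/tag pairs above can be collapsed into one loop; a sketch assuming the same Hangzhou mirror (coredns stays special-cased above because of its nested name):

for img in kube-apiserver:v1.24.2 kube-controller-manager:v1.24.2 \
           kube-scheduler:v1.24.2 kube-proxy:v1.24.2 pause:3.7 etcd:3.5.3-0; do
    # Pull from the mirror, then re-tag to the k8s.gcr.io name kubeadm expects
    sudo docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${img}
    sudo docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/${img} k8s.gcr.io/${img}
done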



# Retry, overriding only the image repository
kubeadm init --image-repository registry.aliyuncs.com/google_containers

[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.129.9]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.129.9 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.129.9 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


# Make sure the kubelet is enabled at boot and running
systemctl enable kubelet && systemctl start kubelet


# Workaround used here for the "required cgroups disabled" failure above:
# switch the kubelet's cgroup driver from systemd to cgroupfs and restart it.
sed -i "s/cgroupDriver: systemd/cgroupDriver: cgroupfs/g" /var/lib/kubelet/config.yaml
systemctl daemon-reload
systemctl restart kubelet

# Confirm the change took effect
cat /var/lib/kubelet/config.yaml
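
An alternative fix, assuming the runtime is containerd (as the crictl hints in the error output suggest): keep the recommended systemd driver on the kubelet and enable it on the containerd side instead.

# Regenerate a default containerd config, switch runc to the systemd cgroup
# driver, then restart containerd before retrying kubeadm init.
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd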


# When init hangs, inspect the kubelet first:
systemctl status kubelet
journalctl -xeu kubelet

# Manual flannel binary install (alternative to the DaemonSet shown below)
wget https://github.com/coreos/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz
tar -xf flannel-v0.12.0-linux-amd64.tar.gz 
cp flanneld /usr/local/bin/
cp mk-docker-opts.sh /usr/local/bin/
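
On a kubeadm cluster the usual route is the flannel DaemonSet rather than the raw binary; a sketch, assuming the v0.12.0 manifest path in the same repo and the 10.244.0.0/16 pod CIDR passed to kubeadm init above:

# Deploy flannel as a DaemonSet on the running cluster
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.12.0/Documentation/kube-flannel.yml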
 