Installing k8s on a single CentOS VM

Following the installation guide below, adjusted for my local environment:

https://www.kubernetes.org.cn/5462.html

 

Because my laptop only has one VM running at the moment.

 

Set the hostname

 

hostnamectl set-hostname node0

 

Set up hostname resolution

 

cat <<EOF >>/etc/hosts
192.168.220.128 node0
EOF
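
A quick sanity check (optional) that the hostname and the hosts entry took effect, using the node0 name and IP set above:

hostnamectl status
ping -c 1 node0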

 

While using the cluster I mistakenly ran kubeadm reset, and the k8s cluster stopped working.

The article I had been following is this one. Be careful: it describes a single-machine setup, while mine has 2 servers: https://blog.csdn.net/baobaoxiannv/article/details/86987171

 

Starting the k8s deployment over from scratch.

 

1. Initialize the cluster on node0

kubeadm init --kubernetes-version=1.14.2 --apiserver-advertise-address=192.168.220.128 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
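
Optionally, the control-plane images can be pre-pulled from the aliyun mirror before running init, which surfaces registry problems early. A sketch using the same flags as the init command above:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=1.14.2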

 

The version used here is 1.14.2; by the time I redeployed, the locally installed kubelet had already been upgraded, so init failed:

 

[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
                [ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.15.0" Control plane version: "1.14.2"
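
The skew can be confirmed by comparing the locally installed versions before picking a --kubernetes-version (the output depends on what yum installed):

kubeadm version -o short
kubelet --version

The cgroupfs warning is commonly handled by switching Docker to the systemd cgroup driver before init, roughly like this (a sketch; it overwrites /etc/docker/daemon.json, so merge by hand if that file already exists):

cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker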

Redeploy with the latest version instead:

kubeadm init --kubernetes-version=1.15.0 --apiserver-advertise-address=192.168.220.128 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

 

Init succeeds and we get the key (the join command with its token):

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.220.128:6443 --token vewwp4.h7ptrt13yyic888n \
    --discovery-token-ca-cert-hash sha256:04840487116f410a96bc7fc432595a0ef3ce02b6a97d09f56682d9d9738dee99

2. Configure the kubectl tool

mkdir -p /root/.kube

cp /etc/kubernetes/admin.conf /root/.kube/config

(If the worker nodes also want to use kubectl, this file needs to be copied to each of them; see the sketch at the end of this step.)

kubectl get nodes

kubectl get cs
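
A sketch of copying the config to a worker so it can run kubectl as well (assumes root SSH access from node0 to node1):

ssh root@node1 "mkdir -p /root/.kube"
scp /etc/kubernetes/admin.conf root@node1:/root/.kube/config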

 

3. Deploy the flannel network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
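
Once the flannel pods come up, the nodes should turn Ready; a quick check:

kubectl get pods -n kube-system | grep flannel
kubectl get nodes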

 

4. Join the cluster from the worker node1 (steps 2 and 3 must be done correctly first, otherwise you get the errors below)

 

kubeadm join 192.168.220.128:6443 --token vewwp4.h7ptrt13yyic888n \
    --discovery-token-ca-cert-hash sha256:04840487116f410a96bc7fc432595a0ef3ce02b6a97d09f56682d9d9738dee99

The join fails with the errors below, so kubeadm reset has to be run on node1 first.

 

[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

After running kubeadm reset, run the join command again:

kubeadm join 192.168.220.128:6443 --token vewwp4.h7ptrt13yyic888n \
    --discovery-token-ca-cert-hash sha256:04840487116f410a96bc7fc432595a0ef3ce02b6a97d09f56682d9d9738dee99
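
Worth noting: the token printed by kubeadm init expires after 24 hours by default. If the join happens later than that, a fresh join command can be generated on the master with:

kubeadm token create --print-join-command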

After the join, verifying the pods still errored; after redoing steps 2 and 3, the master no longer errors, but the worker node still does:

[root@node1 ~]# kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
 

I found a discussion on GitHub describing problems after tearing down a cluster and creating a new one, which matches my situation:

https://github.com/kubernetes/kops/issues/964

 

Time to reinstall from scratch, this time reading the installation steps carefully to reinforce my understanding.

It turns out I had not followed the tutorial: after kubeadm init you must copy the admin config and deploy the flannel network. The step records above have been corrected accordingly.

Now the master node node0 is fine, but node1 still fails with the same error; a possible cleanup is sketched below.
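
The x509 error on node1 usually means a stale kubeconfig from the previous cluster, whose CA no longer matches the new one. A possible cleanup on node1 (a sketch, assuming the default config path and root SSH to node0):

rm -rf /root/.kube
mkdir -p /root/.kube
scp root@node0:/etc/kubernetes/admin.conf /root/.kube/config
kubectl get nodes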

 

Test an nginx pod

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
This assigns a random port for external access; the service is then reached at that port on the IP of the node where the pod runs.

kubectl get pod,svc

An alternative way to expose it:
kubectl expose deployment nginx --port=8080 --target-port=80 --external-ip=192.168.220.129

Now http://192.168.220.129:8080/ can be accessed directly.
The explanations I found online left me confused; put simply, target-port is the nginx port inside the container, and port is the port reachable from outside.
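
To see what was actually exposed and confirm nginx answers (192.168.220.129 is the worker IP used above; the assigned NodePort will differ per cluster):

kubectl get svc nginx
curl -I http://192.168.220.129:8080/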

 

Deploy the dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
sed -i 's/k8s.gcr.io/loveone/g' kubernetes-dashboard.yaml
sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml

If create complains that the resources already exist, delete them first (sketched below).
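
A sketch of deleting and re-creating with the same manifest:

kubectl delete -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml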

 

kubectl get deployment kubernetes-dashboard -n kube-system

kubectl get pods -n kube-system -o wide

kubectl get services -n kube-system

netstat -ntlp|grep 30001

Access the dashboard at port 30001 on the master node node0:

https://192.168.220.128:30001/

 

Get the login token

kubectl create serviceaccount  dashboard-admin -n kube-system
kubectl create clusterrolebinding  dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
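
describe prints the whole secret; to grab just the token string for the dashboard login, the same awk filter can be reused like this:

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') -o jsonpath='{.data.token}' | base64 -d; echo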

At this point k8s is back, though all the software previously installed in it is gone.
