Creating a Kubernetes cluster with Kops requires a top-level domain or a subdomain, and setting up a Route 53 hosted zone. The domain allows the worker nodes to discover the master, and the master to discover all the etcd servers. It is also needed for kubectl to talk to the master directly. This works fine, but is yet another hassle for developers.
Kops 1.6.2 adds experimental support for gossip-based discovery of nodes. This makes the process of setting up a Kubernetes cluster with Kops much simpler, with no DNS setup required.
Let's take a look!
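The switch into gossip mode hinges on the cluster name: a name ending in the suffix .k8s.local tells Kops to use gossip-based discovery and skip DNS validation entirely (visible as "Gossip DNS: skipping DNS validation" in the output below). A minimal shell sketch of that suffix check, for illustration only (the real logic lives in the Kops source):

```shell
# Kops uses gossip-based discovery when the cluster name ends in
# ".k8s.local"; any other name falls back to DNS via Route 53.
# Illustrative suffix check only -- not the actual Kops code.
CLUSTER_NAME="cluster.k8s.local"

case "$CLUSTER_NAME" in
  *.k8s.local) MODE="gossip" ;;
  *)           MODE="dns"    ;;
esac

echo "$MODE"   # -> gossip
```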
- Install or upgrade Kops:
brew upgrade kops
- Check the version:
~ $ kops version
Version 1.6.2
- Create an S3 bucket to serve as the "state store":
aws s3api create-bucket --bucket kubernetes-arungupta-me
export KOPS_STATE_STORE=s3://kubernetes-arungupta-me
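Since the state store holds the full specification of every cluster, it is worth enabling S3 versioning on the bucket so an earlier cluster spec can be recovered after an accidental overwrite; the Kops docs recommend this. A sketch using the bucket name from above (guarded so it is a no-op where the AWS CLI is not installed):

```shell
# Enable versioning on the Kops state store bucket so previous versions
# of the cluster state can be restored if needed (recommended practice).
BUCKET="kubernetes-arungupta-me"

if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-versioning \
    --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled
fi
```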
- Create a Kubernetes cluster:
kops create cluster cluster.k8s.local --zones us-east-1a --yes
The output shows:
I0622 16:52:07.494558 83656 create_cluster.go:655] Inferred --cloud=aws from zone "us-east-1a"
I0622 16:52:07.495012 83656 create_cluster.go:841] Using SSH public key: /Users/argu/.ssh/id_rsa.pub
I0622 16:52:08.540445 83656 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-east-1a
I0622 16:52:16.327523 83656 apply_cluster.go:396] Gossip DNS: skipping DNS validation
I0622 16:52:25.539755 83656 executor.go:91] Tasks: 0 done / 67 total; 32 can run
I0622 16:52:29.843320 83656 vfs_castore.go:422] Issuing new certificate: "kubecfg"
I0622 16:52:30.108046 83656 vfs_castore.go:422] Issuing new certificate: "kubelet"
I0622 16:52:30.139629 83656 vfs_castore.go:422] Issuing new certificate: "kube-scheduler"
I0622 16:52:31.072229 83656 vfs_castore.go:422] Issuing new certificate: "kube-proxy"
I0622 16:52:31.082560 83656 vfs_castore.go:422] Issuing new certificate: "kube-controller-manager"
I0622 16:52:31.579158 83656 vfs_castore.go:422] Issuing new certificate: "kops"
I0622 16:52:32.742807 83656 executor.go:91] Tasks: 32 done / 67 total; 13 can run
I0622 16:52:43.057189 83656 executor.go:91] Tasks: 45 done / 67 total; 18 can run
I0622 16:52:50.047375 83656 executor.go:91] Tasks: 63 done / 67 total; 3 can run
I0622 16:53:02.047610 83656 vfs_castore.go:422] Issuing new certificate: "master"
I0622 16:53:03.027007 83656 executor.go:91] Tasks: 66 done / 67 total; 1 can run
I0622 16:53:04.197637 83656 executor.go:91] Tasks: 67 done / 67 total; 0 can run
I0622 16:53:04.884362 83656 update_cluster.go:229] Exporting kubecfg for cluster
Kops has set your kubectl context to cluster.k8s.local
Cluster is starting. It should be ready in a few minutes.
Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.cluster.k8s.local
The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
* read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md
Wait a few minutes for the cluster to be created.
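The create command above uses the Kops defaults: one m3.medium master and two t2.medium nodes, as the instance groups in the validation output below confirm. Instance counts and sizes can be overridden at creation time; a sketch using standard kops create cluster flags (sizes and count here are illustrative, and the command is guarded so it is a no-op where kops is not installed):

```shell
# Same gossip-based cluster, with explicit sizing instead of the defaults.
# The .k8s.local suffix keeps gossip mode on.
NAME="cluster.k8s.local"

if command -v kops >/dev/null 2>&1; then
  kops create cluster "$NAME" \
    --zones us-east-1a \
    --master-size m3.medium \
    --node-size t2.medium \
    --node-count 3 \
    --yes
fi
```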
- Validate the cluster:
~ $ kops validate cluster
Using cluster from kubectl context: cluster.k8s.local
Validating cluster cluster.k8s.local
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east-1a Master m3.medium 1 1 us-east-1a
nodes Node t2.medium 2 2 us-east-1a
NODE STATUS
NAME ROLE READY
ip-172-20-36-52.ec2.internal node True
ip-172-20-38-117.ec2.internal master True
ip-172-20-49-179.ec2.internal node True
Your cluster cluster.k8s.local is ready
- Get the list of nodes using kubectl:
~ $ kubectl get nodes
NAME STATUS AGE VERSION
ip-172-20-36-52.ec2.internal Ready,node 4h v1.6.2
ip-172-20-38-117.ec2.internal Ready,master 4h v1.6.2
ip-172-20-49-179.ec2.internal Ready,node 4h v1.6.2
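With all nodes Ready, a quick smoke test is to run a throwaway workload. The sketch below uses the stock nginx image from Docker Hub and an illustrative pod name; kubectl run creating a deployment (labeled run=<name>) was the idiom in the kubectl that shipped alongside Kubernetes 1.6. It is guarded so it is a no-op where kubectl is not installed:

```shell
# Run a disposable nginx workload as a smoke test of the new cluster,
# check that its pod is scheduled, then clean it up.
POD="smoke-test"   # illustrative name

if command -v kubectl >/dev/null 2>&1; then
  kubectl run "$POD" --image=nginx --port=80
  kubectl get pods -l run="$POD"
  kubectl delete deployment "$POD"
fi
```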
- Deleting the cluster is just as simple (running the command without --yes first prints a dry-run preview of the resources to be removed):
kops delete cluster cluster.k8s.local --yes
That's it!
github.com/arun-gupta/kubernetes-java-sample provides several samples to get started with Kubernetes.
File issues at github.com/kubernetes/kops/issues.
Translated from: https://www.javacodegeeks.com/2017/06/gossip-based-kubernetes-cluster-aws-using-kops.html