Getting Started with Kubernetes 1.4 using Spring Boot and Couchbase explained how to get started with Kubernetes 1.4 on Amazon Web Services. It created a Couchbase service in the cluster, and a Spring Boot application stored a JSON document in the database. It used the kube-up.sh
script from the Kubernetes binary download (at github.com/kubernetes/kubernetes/releases/download/v1.4.0/kubernetes.tar.gz) to start the cluster. That script can only create a Kubernetes cluster with a single master. This is a fundamental flaw for distributed applications, where the master becomes a single point of failure.
Meet kops – short for Kubernetes Operations.
It is the easiest way to get a highly available Kubernetes cluster up and running. The kubectl
CLI is used to run commands against a running cluster. Think of kops
as kubectl for clusters.
This blog will show how to create a highly available Kubernetes cluster on Amazon using kops
. Once the cluster is created, it will create a Couchbase service on it and run a Spring Boot application to store a JSON document in the database.
Many thanks to justinsb, sarahz, razic, jaygorrell, shrugs, bkpandey and others on the Kubernetes Slack channel for helping me understand the details!
Download kops and kubectl
Running kops without any arguments shows its usage:
Usage:
kops [command]
Available Commands:
create create resources
delete delete clusters
describe describe objects
edit edit items
export export clusters/kubecfg
get list or get objects
import import clusters
rolling-update rolling update clusters
secrets Manage secrets & keys
toolbox Misc infrequently used commands
update update clusters
upgrade upgrade clusters
version Print the client version information
Flags:
--alsologtostderr log to standard error as well as files
--config string config file (default is $HOME/.kops.yaml)
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files (default false)
--name string Name of cluster
--state string Location of state storage
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Use "kops [command] --help" for more information about a command.
- Download kubectl
:
curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/v1.4.1/bin/darwin/amd64/kubectl && chmod +x kubectl
- Include kubectl
in your
PATH
.
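Before creating the cluster, it helps to confirm that both binaries are actually on the PATH. The require_tool helper below is a hypothetical convenience function, not part of kops or kubectl; it is shown here checking for sh, which is always present:

```shell
# Hypothetical helper (not part of kops or kubectl): fail fast when a CLI
# tool is missing from the PATH.
require_tool() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

# In a real setup script you would call: require_tool kops; require_tool kubectl
require_tool sh && echo "sh found"
```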
Create Bucket and NS Records on Amazon
This currently involves some setup, which will hopefully be cleaned up in subsequent releases. Setting up a cluster on AWS provides detailed steps and more background. Here is what this blog followed:
- Pick a domain that will host the Kubernetes cluster. This blog uses the
kubernetes.arungupta.me
domain. You can pick either a top-level domain or a subdomain. - Amazon Route 53 is a highly available and scalable DNS service. Log in to the Amazon Console and create a hosted zone for this domain using the Route 53 service.
The created zone looks like:
The values shown in the Value
column are important, as they will be used later to create NS records.
- Create an S3 bucket using the Amazon Console to store the cluster configuration – this is called the
state store
.
- The
kubernetes.arungupta.me
domain is hosted on GoDaddy. For each value shown in the Value column of the Route 53 hosted zone, create an NS record for this domain using the GoDaddy Domain Control Center. Select the record type:
For each value, add a record as shown:
The complete set of records looks like:
Start Kubernetes Multimaster Cluster
Let's understand a bit about Amazon regions and zones:
Amazon EC2 is hosted in multiple locations worldwide. These locations are composed of regions and Availability Zones. Each region is a separate geographic area. Each region has multiple, isolated locations known as Availability Zones.
A highly available Kubernetes cluster can be created across zones, but not across regions.
- Find the Availability Zones within a region:
aws ec2 describe-availability-zones --region us-west-2
{
"AvailabilityZones": [
{
"State": "available",
"RegionName": "us-west-2",
"Messages": [],
"ZoneName": "us-west-2a"
},
{
"State": "available",
"RegionName": "us-west-2",
"Messages": [],
"ZoneName": "us-west-2b"
},
{
"State": "available",
"RegionName": "us-west-2",
"Messages": [],
"ZoneName": "us-west-2c"
}
]
}
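The ZoneName values in this output are exactly what the --zones flag expects, as a comma-separated list. As a sketch (over an embedded sample of the JSON above, and assuming python3 is available for JSON parsing), the list can be assembled like this:

```shell
# Embedded sample of the describe-availability-zones output above,
# used here for illustration only.
json='{"AvailabilityZones":[
  {"State":"available","RegionName":"us-west-2","ZoneName":"us-west-2a"},
  {"State":"available","RegionName":"us-west-2","ZoneName":"us-west-2b"},
  {"State":"available","RegionName":"us-west-2","ZoneName":"us-west-2c"}]}'

# Join the available ZoneName values into the format expected by kops --zones
zones=$(printf '%s' "$json" | python3 -c '
import sys, json
azs = json.load(sys.stdin)["AvailabilityZones"]
print(",".join(a["ZoneName"] for a in azs if a["State"] == "available"))')
echo "$zones"   # us-west-2a,us-west-2b,us-west-2c
```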
- Create a multimaster cluster:
kops-darwin-amd64 create cluster --name=kubernetes.arungupta.me --cloud=aws --zones=us-west-2a,us-west-2b,us-west-2c --master-size=m4.large --node-count=3 --node-size=m4.2xlarge --master-zones=us-west-2a,us-west-2b,us-west-2c --state=s3://kops-couchbase --yes
Most of the switches are self-explanatory. Some switches need a bit of explanation:
- Specifying multiple zones using
--master-zones
(must be an odd number) creates multiple masters across AZs. -
--cloud=aws
is optional if the cloud can be inferred from the zones. -
--yes
specifies that the cluster should be created right away. Otherwise, only the state is stored in the bucket, and the cluster needs to be created separately.
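The odd-number requirement on --master-zones exists because etcd needs an odd-sized quorum to elect a leader. A hypothetical pre-flight guard for that rule might look like:

```shell
# Hypothetical guard (not part of kops): reject an even number of master zones
# before kops does, since etcd quorum requires an odd member count.
master_zone_count_is_odd() {
  n=$(printf '%s' "$1" | tr ',' '\n' | grep -c .)
  [ $((n % 2)) -eq 1 ]
}

master_zone_count_is_odd "us-west-2a,us-west-2b,us-west-2c" && echo "odd: ok"
master_zone_count_is_odd "us-west-2a,us-west-2b" || echo "even: rejected"
```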
The complete set of CLI switches can be seen with:
./kops-darwin-amd64 create cluster --help
Creates a k8s cluster.
Usage:
kops create cluster [flags]
Flags:
--admin-access string Restrict access to admin endpoints (SSH, HTTPS) to this CIDR. If not set, access will not be restricted by IP.
--associate-public-ip Specify --associate-public-ip=[true|false] to enable/disable association of public IP for master ASG and nodes. Default is 'true'. (default true)
--channel string Channel for default versions and configuration to use (default "stable")
--cloud string Cloud provider to use - gce, aws
--dns-zone string DNS hosted zone to use (defaults to last two components of cluster name)
--image string Image to use
--kubernetes-version string Version of kubernetes to run (defaults to version in channel)
--master-size string Set instance size for masters
--master-zones string Zones in which to run masters (must be an odd number)
--model string Models to apply (separate multiple models with commas) (default "config,proto,cloudup")
--network-cidr string Set to override the default network CIDR
--networking string Networking mode to use. kubenet (default), classic, external. (default "kubenet")
--node-count int Set the number of nodes
--node-size string Set instance size for nodes
--out string Path to write any local output
--project string Project to use (must be set on GCE)
--ssh-public-key string SSH public key to use (default "~/.ssh/id_rsa.pub")
--target string Target - direct, terraform (default "direct")
--vpc string Set to use a shared VPC
--yes Specify --yes to immediately create the cluster
--zones string Zones in which to run the cluster
Global Flags:
--alsologtostderr log to standard error as well as files
--config string config file (default is $HOME/.kops.yaml)
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files (default false)
--name string Name of cluster
--state string Location of state storage
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
- Once the cluster is created, get more details about it:
kubectl cluster-info
Kubernetes master is running at https://api.kubernetes.arungupta.me
KubeDNS is running at https://api.kubernetes.arungupta.me/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
- Check the client and server versions of the cluster:
kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.3", GitCommit:"4957b090e9a4f6a68b4a40375408fdc74a212260", GitTreeState:"clean", BuildDate:"2016-10-16T06:20:04Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
- Check all the nodes in the cluster:
kubectl get nodes
NAME STATUS AGE
ip-172-20-111-151.us-west-2.compute.internal Ready 1h
ip-172-20-116-40.us-west-2.compute.internal Ready 1h
ip-172-20-48-41.us-west-2.compute.internal Ready 1h
ip-172-20-49-105.us-west-2.compute.internal Ready 1h
ip-172-20-80-233.us-west-2.compute.internal Ready 1h
ip-172-20-82-93.us-west-2.compute.internal Ready 1h
Or find out only the master nodes:
kubectl get nodes -l kubernetes.io/role=master
NAME STATUS AGE
ip-172-20-111-151.us-west-2.compute.internal Ready 1h
ip-172-20-48-41.us-west-2.compute.internal Ready 1h
ip-172-20-82-93.us-west-2.compute.internal Ready 1h
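Scripts can filter this plain-text listing too. As a sketch over an embedded sample of the output above, awk can count the masters that report Ready:

```shell
# Embedded sample of `kubectl get nodes -l kubernetes.io/role=master` output,
# used here for illustration only.
masters='NAME STATUS AGE
ip-172-20-111-151.us-west-2.compute.internal Ready 1h
ip-172-20-48-41.us-west-2.compute.internal Ready 1h
ip-172-20-82-93.us-west-2.compute.internal Ready 1h'

# Count data rows whose STATUS column is Ready, skipping the header row
ready=$(printf '%s\n' "$masters" | awk 'NR > 1 && $2 == "Ready" { n++ } END { print n+0 }')
echo "ready masters: $ready"   # ready masters: 3
```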
- Check all the clusters:
kops-darwin-amd64 get clusters --state=s3://kops-couchbase
NAME CLOUD ZONES
kubernetes.arungupta.me aws us-west-2a,us-west-2b,us-west-2c
Kubernetes Dashboard Addon
By default, a cluster created using kops does not have the UI dashboard. But it can be added as an addon:
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.4.0.yaml
deployment "kubernetes-dashboard-v1.4.0" created
service "kubernetes-dashboard" created
Now complete details about the cluster can be seen:
kubectl cluster-info
Kubernetes master is running at https://api.kubernetes.arungupta.me
KubeDNS is running at https://api.kubernetes.arungupta.me/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://api.kubernetes.arungupta.me/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The Kubernetes UI dashboard is at the URL shown. In our case, this is https://api.kubernetes.arungupta.me/ui
and looks like:
Credentials to access this dashboard can be obtained using the kubectl config view
command. The values are shown like:
- name: kubernetes.arungupta.me-basic-auth
user:
password: PASSWORD
username: admin
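To pull these credentials out in a script rather than by eye, a small awk sketch over an embedded sample of that output works (assuming the simple indented key/value layout shown above):

```shell
# Embedded sample fragment of `kubectl config view` output, for illustration only.
config='- name: kubernetes.arungupta.me-basic-auth
  user:
    password: PASSWORD
    username: admin'

# Grab the values of the password: and username: keys
password=$(printf '%s\n' "$config" | awk '$1 == "password:" { print $2 }')
username=$(printf '%s\n' "$config" | awk '$1 == "username:" { print $2 }')
echo "user=$username pass=$password"   # user=admin pass=PASSWORD
```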
Deploy Couchbase Service
As explained in Getting Started with Kubernetes 1.4 using Spring Boot and Couchbase, let's run the Couchbase service:
kubectl create -f ~/workspaces/kubernetes-java-sample/maven/couchbase-service.yml
service "couchbase-service" created
replicationcontroller "couchbase-rc" created
The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/couchbase-service.yml.
Get the list of services:
kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
couchbase-service 100.65.4.139 <none> 8091/TCP,8092/TCP,8093/TCP,11210/TCP 27s
kubernetes 100.64.0.1 <none> 443/TCP 2h
Describe the service:
kubectl describe svc/couchbase-service
Name: couchbase-service
Namespace: default
Labels: <none>
Selector: app=couchbase-rc-pod
Type: ClusterIP
IP: 100.65.4.139
Port: admin 8091/TCP
Endpoints: 100.96.5.2:8091
Port: views 8092/TCP
Endpoints: 100.96.5.2:8092
Port: query 8093/TCP
Endpoints: 100.96.5.2:8093
Port: memcached 11210/TCP
Endpoints: 100.96.5.2:11210
Session Affinity: None
Get the pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
couchbase-rc-e35v5 1/1 Running 0 1m
Run Spring Boot Application
The Spring Boot application runs against the Couchbase service and stores a JSON document in it.
Start the Spring Boot application:
kubectl create -f ~/workspaces/kubernetes-java-sample/maven/bootiful-couchbase.yml
job "bootiful-couchbase" created
The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/bootiful-couchbase.yml.
See the list of all the pods:
kubectl get pods --show-all
NAME READY STATUS RESTARTS AGE
bootiful-couchbase-ainv8 0/1 Completed 0 1m
couchbase-rc-e35v5 1/1 Running 0 3m
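Because the job pod moves through Pending and Running before it reaches Completed, automation usually polls for that final status. The sketch below models the pattern; pod_is_completed is a hypothetical stand-in for a real kubectl get pods query and simply reports completion on its third call:

```shell
# Hypothetical stand-in for querying the pod status with kubectl:
# reports "not completed" twice, then completed on the third call.
attempt=0
pod_is_completed() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]
}

# Poll until the pod reports Completed, up to a fixed number of tries
wait_for_completed() {
  tries=0
  while [ "$tries" -lt 10 ]; do
    if pod_is_completed; then return 0; fi
    tries=$((tries + 1))
    # sleep 5   # a real script would pause between polls
  done
  return 1
}

wait_for_completed && echo "job finished"
```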
Check the logs of the completed pod:
kubectl logs bootiful-couchbase-ainv8
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.4.0.RELEASE)
2016-11-02 18:48:56.035 INFO 7 --- [ main] org.example.webapp.Application : Starting Application v1.0-SNAPSHOT on bootiful-couchbase-ainv8 with PID 7 (/maven/bootiful-couchbase.jar started by root in /)
2016-11-02 18:48:56.040 INFO 7 --- [ main] org.example.webapp.Application : No active profile set, falling back to default profiles: default
2016-11-02 18:48:56.115 INFO 7 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@108c4c35: startup date [Wed Nov 02 18:48:56 UTC 2016]; root of context hierarchy
2016-11-02 18:48:57.021 INFO 7 --- [ main] com.couchbase.client.core.CouchbaseCore : CouchbaseEnvironment: {sslEnabled=false, sslKeystoreFile='null', sslKeystorePassword='null', queryEnabled=false, queryPort=8093, bootstrapHttpEnabled=true, bootstrapCarrierEnabled=true, bootstrapHttpDirectPort=8091, bootstrapHttpSslPort=18091, bootstrapCarrierDirectPort=11210, bootstrapCarrierSslPort=11207, ioPoolSize=8, computationPoolSize=8, responseBufferSize=16384, requestBufferSize=16384, kvServiceEndpoints=1, viewServiceEndpoints=1, queryServiceEndpoints=1, searchServiceEndpoints=1, ioPool=NioEventLoopGroup, coreScheduler=CoreScheduler, eventBus=DefaultEventBus, packageNameAndVersion=couchbase-java-client/2.2.8 (git: 2.2.8, core: 1.2.9), dcpEnabled=false, retryStrategy=BestEffort, maxRequestLifetime=75000, retryDelay=ExponentialDelay{growBy 1.0 MICROSECONDS, powers of 2; lower=100, upper=100000}, reconnectDelay=ExponentialDelay{growBy 1.0 MILLISECONDS, powers of 2; lower=32, upper=4096}, observeIntervalDelay=ExponentialDelay{growBy 1.0 MICROSECONDS, powers of 2; lower=10, upper=100000}, keepAliveInterval=30000, autoreleaseAfter=2000, bufferPoolingEnabled=true, tcpNodelayEnabled=true, mutationTokensEnabled=false, socketConnectTimeout=1000, dcpConnectionBufferSize=20971520, dcpConnectionBufferAckThreshold=0.2, dcpConnectionName=dcp/core-io, callbacksOnIoPool=false, queryTimeout=7500, viewTimeout=7500, kvTimeout=2500, connectTimeout=5000, disconnectTimeout=25000, dnsSrvEnabled=false}
2016-11-02 18:48:57.245 INFO 7 --- [ cb-io-1-1] com.couchbase.client.core.node.Node : Connected to Node couchbase-service
2016-11-02 18:48:57.291 INFO 7 --- [ cb-io-1-1] com.couchbase.client.core.node.Node : Disconnected from Node couchbase-service
2016-11-02 18:48:57.533 INFO 7 --- [ cb-io-1-2] com.couchbase.client.core.node.Node : Connected to Node couchbase-service
2016-11-02 18:48:57.638 INFO 7 --- [-computations-4] c.c.c.core.config.ConfigurationProvider : Opened bucket books
2016-11-02 18:48:58.152 INFO 7 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
Book{isbn=978-1-4919-1889-0, name=Minecraft Modding with Forge, cost=29.99}
2016-11-02 18:48:58.402 INFO 7 --- [ main] org.example.webapp.Application : Started Application in 2.799 seconds (JVM running for 3.141)
2016-11-02 18:48:58.403 INFO 7 --- [ Thread-5] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@108c4c35: startup date [Wed Nov 02 18:48:56 UTC 2016]; root of context hierarchy
2016-11-02 18:48:58.404 INFO 7 --- [ Thread-5] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2016-11-02 18:48:58.410 INFO 7 --- [ cb-io-1-2] com.couchbase.client.core.node.Node : Disconnected from Node couchbase-service
2016-11-02 18:48:58.410 INFO 7 --- [ Thread-5] c.c.c.core.config.ConfigurationProvider : Closed bucket books
The updated dashboard now looks like:
Delete Kubernetes Cluster
The Kubernetes cluster can be deleted as:
kops-darwin-amd64 delete cluster --name=kubernetes.arungupta.me --state=s3://kops-couchbase --yes
TYPE NAME ID
autoscaling-config master-us-west-2a.masters.kubernetes.arungupta.me-20161101235639 master-us-west-2a.masters.kubernetes.arungupta.me-20161101235639
autoscaling-config master-us-west-2b.masters.kubernetes.arungupta.me-20161101235639 master-us-west-2b.masters.kubernetes.arungupta.me-20161101235639
autoscaling-config master-us-west-2c.masters.kubernetes.arungupta.me-20161101235639 master-us-west-2c.masters.kubernetes.arungupta.me-20161101235639
autoscaling-config nodes.kubernetes.arungupta.me-20161101235639 nodes.kubernetes.arungupta.me-20161101235639
autoscaling-group master-us-west-2a.masters.kubernetes.arungupta.me master-us-west-2a.masters.kubernetes.arungupta.me
autoscaling-group master-us-west-2b.masters.kubernetes.arungupta.me master-us-west-2b.masters.kubernetes.arungupta.me
autoscaling-group master-us-west-2c.masters.kubernetes.arungupta.me master-us-west-2c.masters.kubernetes.arungupta.me
autoscaling-group nodes.kubernetes.arungupta.me nodes.kubernetes.arungupta.me
dhcp-options kubernetes.arungupta.me dopt-9b7b08ff
iam-instance-profile masters.kubernetes.arungupta.me masters.kubernetes.arungupta.me
iam-instance-profile nodes.kubernetes.arungupta.me nodes.kubernetes.arungupta.me
iam-role masters.kubernetes.arungupta.me masters.kubernetes.arungupta.me
iam-role nodes.kubernetes.arungupta.me nodes.kubernetes.arungupta.me
instance master-us-west-2a.masters.kubernetes.arungupta.me i-8798eb9f
instance master-us-west-2b.masters.kubernetes.arungupta.me i-eca96ab3
instance master-us-west-2c.masters.kubernetes.arungupta.me i-63fd3dbf
instance nodes.kubernetes.arungupta.me i-21a96a7e
instance nodes.kubernetes.arungupta.me i-57fb3b8b
instance nodes.kubernetes.arungupta.me i-5c99ea44
internet-gateway kubernetes.arungupta.me igw-b624abd2
keypair kubernetes.kubernetes.arungupta.me-18:90:41:6f:5f:79:6a:a8:d5:b6:b8:3f:10:d5:d3:f3 kubernetes.kubernetes.arungupta.me-18:90:41:6f:5f:79:6a:a8:d5:b6:b8:3f:10:d5:d3:f3
route-table kubernetes.arungupta.me rtb-e44df183
route53-record api.internal.kubernetes.arungupta.me. Z6I41VJM5VCZV/api.internal.kubernetes.arungupta.me.
route53-record api.kubernetes.arungupta.me. Z6I41VJM5VCZV/api.kubernetes.arungupta.me.
route53-record etcd-events-us-west-2a.internal.kubernetes.arungupta.me. Z6I41VJM5VCZV/etcd-events-us-west-2a.internal.kubernetes.arungupta.me.
route53-record etcd-events-us-west-2b.internal.kubernetes.arungupta.me. Z6I41VJM5VCZV/etcd-events-us-west-2b.internal.kubernetes.arungupta.me.
route53-record etcd-events-us-west-2c.internal.kubernetes.arungupta.me. Z6I41VJM5VCZV/etcd-events-us-west-2c.internal.kubernetes.arungupta.me.
route53-record etcd-us-west-2a.internal.kubernetes.arungupta.me. Z6I41VJM5VCZV/etcd-us-west-2a.internal.kubernetes.arungupta.me.
route53-record etcd-us-west-2b.internal.kubernetes.arungupta.me. Z6I41VJM5VCZV/etcd-us-west-2b.internal.kubernetes.arungupta.me.
route53-record etcd-us-west-2c.internal.kubernetes.arungupta.me. Z6I41VJM5VCZV/etcd-us-west-2c.internal.kubernetes.arungupta.me.
security-group masters.kubernetes.arungupta.me sg-3e790f47
security-group nodes.kubernetes.arungupta.me sg-3f790f46
subnet us-west-2a.kubernetes.arungupta.me subnet-3cdbc958
subnet us-west-2b.kubernetes.arungupta.me subnet-18c3f76e
subnet us-west-2c.kubernetes.arungupta.me subnet-b30f6deb
volume us-west-2a.etcd-events.kubernetes.arungupta.me vol-202350a8
volume us-west-2a.etcd-main.kubernetes.arungupta.me vol-0a235082
volume us-west-2b.etcd-events.kubernetes.arungupta.me vol-401f5bf4
volume us-west-2b.etcd-main.kubernetes.arungupta.me vol-691f5bdd
volume us-west-2c.etcd-events.kubernetes.arungupta.me vol-aefe163b
volume us-west-2c.etcd-main.kubernetes.arungupta.me vol-e9fd157c
vpc kubernetes.arungupta.me vpc-e5f50382
internet-gateway:igw-b624abd2 still has dependencies, will retry
keypair:kubernetes.kubernetes.arungupta.me-18:90:41:6f:5f:79:6a:a8:d5:b6:b8:3f:10:d5:d3:f3 ok
instance:i-5c99ea44 ok
instance:i-63fd3dbf ok
instance:i-eca96ab3 ok
instance:i-21a96a7e ok
autoscaling-group:master-us-west-2a.masters.kubernetes.arungupta.me ok
autoscaling-group:master-us-west-2b.masters.kubernetes.arungupta.me ok
autoscaling-group:master-us-west-2c.masters.kubernetes.arungupta.me ok
autoscaling-group:nodes.kubernetes.arungupta.me ok
iam-instance-profile:nodes.kubernetes.arungupta.me ok
iam-instance-profile:masters.kubernetes.arungupta.me ok
instance:i-57fb3b8b ok
instance:i-8798eb9f ok
route53-record:Z6I41VJM5VCZV/etcd-events-us-west-2a.internal.kubernetes.arungupta.me. ok
iam-role:nodes.kubernetes.arungupta.me ok
iam-role:masters.kubernetes.arungupta.me ok
autoscaling-config:nodes.kubernetes.arungupta.me-20161101235639 ok
autoscaling-config:master-us-west-2b.masters.kubernetes.arungupta.me-20161101235639 ok
subnet:subnet-b30f6deb still has dependencies, will retry
subnet:subnet-3cdbc958 still has dependencies, will retry
subnet:subnet-18c3f76e still has dependencies, will retry
autoscaling-config:master-us-west-2a.masters.kubernetes.arungupta.me-20161101235639 ok
autoscaling-config:master-us-west-2c.masters.kubernetes.arungupta.me-20161101235639 ok
volume:vol-0a235082 still has dependencies, will retry
volume:vol-202350a8 still has dependencies, will retry
volume:vol-401f5bf4 still has dependencies, will retry
volume:vol-e9fd157c still has dependencies, will retry
volume:vol-aefe163b still has dependencies, will retry
volume:vol-691f5bdd still has dependencies, will retry
security-group:sg-3f790f46 still has dependencies, will retry
security-group:sg-3e790f47 still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
internet-gateway:igw-b624abd2
security-group:sg-3f790f46
volume:vol-aefe163b
route-table:rtb-e44df183
volume:vol-401f5bf4
subnet:subnet-18c3f76e
security-group:sg-3e790f47
volume:vol-691f5bdd
subnet:subnet-3cdbc958
volume:vol-202350a8
volume:vol-0a235082
dhcp-options:dopt-9b7b08ff
subnet:subnet-b30f6deb
volume:vol-e9fd157c
vpc:vpc-e5f50382
internet-gateway:igw-b624abd2 still has dependencies, will retry
volume:vol-e9fd157c still has dependencies, will retry
subnet:subnet-3cdbc958 still has dependencies, will retry
subnet:subnet-18c3f76e still has dependencies, will retry
subnet:subnet-b30f6deb still has dependencies, will retry
volume:vol-0a235082 still has dependencies, will retry
volume:vol-aefe163b still has dependencies, will retry
volume:vol-691f5bdd still has dependencies, will retry
volume:vol-202350a8 still has dependencies, will retry
volume:vol-401f5bf4 still has dependencies, will retry
security-group:sg-3f790f46 still has dependencies, will retry
security-group:sg-3e790f47 still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
subnet:subnet-b30f6deb
volume:vol-e9fd157c
vpc:vpc-e5f50382
internet-gateway:igw-b624abd2
security-group:sg-3f790f46
volume:vol-aefe163b
route-table:rtb-e44df183
volume:vol-401f5bf4
subnet:subnet-18c3f76e
security-group:sg-3e790f47
volume:vol-691f5bdd
subnet:subnet-3cdbc958
volume:vol-202350a8
volume:vol-0a235082
dhcp-options:dopt-9b7b08ff
subnet:subnet-18c3f76e still has dependencies, will retry
subnet:subnet-b30f6deb still has dependencies, will retry
internet-gateway:igw-b624abd2 still has dependencies, will retry
subnet:subnet-3cdbc958 still has dependencies, will retry
volume:vol-691f5bdd still has dependencies, will retry
volume:vol-0a235082 still has dependencies, will retry
volume:vol-202350a8 still has dependencies, will retry
volume:vol-401f5bf4 still has dependencies, will retry
volume:vol-aefe163b still has dependencies, will retry
volume:vol-e9fd157c still has dependencies, will retry
security-group:sg-3e790f47 still has dependencies, will retry
security-group:sg-3f790f46 still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
internet-gateway:igw-b624abd2
security-group:sg-3f790f46
volume:vol-aefe163b
route-table:rtb-e44df183
volume:vol-401f5bf4
subnet:subnet-18c3f76e
security-group:sg-3e790f47
volume:vol-691f5bdd
subnet:subnet-3cdbc958
volume:vol-202350a8
volume:vol-0a235082
dhcp-options:dopt-9b7b08ff
subnet:subnet-b30f6deb
volume:vol-e9fd157c
vpc:vpc-e5f50382
subnet:subnet-b30f6deb still has dependencies, will retry
volume:vol-202350a8 still has dependencies, will retry
internet-gateway:igw-b624abd2 still has dependencies, will retry
subnet:subnet-18c3f76e still has dependencies, will retry
volume:vol-e9fd157c still has dependencies, will retry
volume:vol-aefe163b still has dependencies, will retry
volume:vol-401f5bf4 still has dependencies, will retry
volume:vol-691f5bdd still has dependencies, will retry
security-group:sg-3e790f47 still has dependencies, will retry
security-group:sg-3f790f46 still has dependencies, will retry
subnet:subnet-3cdbc958 still has dependencies, will retry
volume:vol-0a235082 still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
internet-gateway:igw-b624abd2
security-group:sg-3f790f46
volume:vol-aefe163b
route-table:rtb-e44df183
subnet:subnet-18c3f76e
security-group:sg-3e790f47
volume:vol-691f5bdd
volume:vol-401f5bf4
volume:vol-202350a8
subnet:subnet-3cdbc958
volume:vol-0a235082
dhcp-options:dopt-9b7b08ff
subnet:subnet-b30f6deb
volume:vol-e9fd157c
vpc:vpc-e5f50382
subnet:subnet-18c3f76e ok
volume:vol-e9fd157c ok
volume:vol-401f5bf4 ok
volume:vol-0a235082 ok
volume:vol-691f5bdd ok
subnet:subnet-3cdbc958 ok
volume:vol-aefe163b ok
subnet:subnet-b30f6deb ok
internet-gateway:igw-b624abd2 ok
volume:vol-202350a8 ok
security-group:sg-3f790f46 ok
security-group:sg-3e790f47 ok
route-table:rtb-e44df183 ok
vpc:vpc-e5f50382 ok
dhcp-options:dopt-9b7b08ff ok
Cluster deleted
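The repeated "still has dependencies, will retry" lines show the pattern kops uses here: attempt to delete every remaining resource, collect the ones that fail, and loop until the set is empty. A toy model of that loop (with simulated resources that only delete on the second round) looks like:

```shell
# Toy model of a delete-with-retries loop (not kops code). Resources that
# "still have dependencies" are carried over to the next round.
pending="igw sg vol"
round=0
delete_round() {
  round=$((round + 1))
  remaining=""
  for r in $pending; do
    # Simulation: every resource deletes successfully from round 2 onward
    if [ "$round" -lt 2 ]; then
      echo "$r still has dependencies, will retry"
      remaining="$remaining $r"
    else
      echo "$r ok"
    fi
  done
  pending=$remaining
}

while [ -n "$pending" ]; do delete_round; done
echo "Cluster deleted"
```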
couchbase.com/containers provides more details about how to run Couchbase in different container frameworks.
For more information about Couchbase:
Source: https://www.javacodegeeks.com/2016/11/multimaster-kubernetes-cluster-amazon-using-kops.html