Docker Swarm vs Kubernetes: How to Set Up Both in Two Virtual Machines

This tells Docker Compose to build the Dockerfile from the “sambaonly” directory, upload/pull built containers to my newly set up private registry, and export port 445 from the container.

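The compose manifest itself isn’t reproduced in this excerpt. Based on the description above and the image name that appears in the later output, it would look roughly like the following sketch — the service name samba, the compose file format version 3 (which docker stack deploy requires), and the exact port-publishing syntax are assumptions:

# docker-compose.yml — hypothetical sketch reconstructed from the description above
version: "3"
services:
  samba:
    build: ./sambaonly
    image: 127.0.0.1:5000/samba:latest
    ports:
      - "445:445"

The build: key is what docker-compose build uses; docker stack deploy ignores it (hence the “Ignoring unsupported options: build” warning below) and pulls the pushed image from the registry instead.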

To deploy this manifest, I followed Docker Swarm’s tutorial. I first used Docker Compose to build and upload the container to the private registry:

docker-compose build

docker-compose push

After the container is built, the app can be deployed with the docker stack deploy command, specifying the service name:

$ docker stack deploy --compose-file docker-compose.yml samba-swarm
Ignoring unsupported options: build
Creating network samba-swarm_default
Creating service samba-swarm_samba
zhuowei@dora:~/Documents/docker$ docker stack services samba-swarm
ID           NAME                  MODE       REPLICAS IMAGE PORTS
yg8x8yfytq5d samba-swarm_samba     replicated 1/1

And now the app is running under samba-swarm. I tested that it still works with smbclient:

zhuowei@dora:~$ smbclient \\\\localhost\\workdir -U %
WARNING: The "syslog" option is deprecated
Try "help" to get a list of possible commands.
smb: \> ls
.                               D        0  Fri Oct  5 12:14:43 2018
..                              D        0  Sun Oct  7 22:09:49 2018
hello.txt                       N       13  Fri Oct  5 11:17:34 2018

102685624 blocks of size 1024. 72252576 blocks available
smb: \>
Adding another node

Once again, Docker Swarm’s simplicity shines through here. To set up a second node, I first installed Docker, then ran the command that Docker gave me when I set up the swarm:

ralph:~# docker swarm join --token SWMTKN-1-abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx-abcdefghijklmnopqrstuvwxy 10.133.7.100:2377

This node joined a swarm as a worker.

To run my application on both nodes, I ran Docker Swarm’s scale command on the manager node:

zhuowei@dora:~/Documents/docker$ docker service scale samba-swarm_samba=2
samba-swarm_samba scaled to 2
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged

On the new worker node, the new container showed up:

ralph:~# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7539549283bd 127.0.0.1:5000/samba:latest "/usr/sbin/smbd -FS …" 20 seconds ago Up 18 seconds 445/tcp samba-swarm_samba.1.abcdefghijklmnopqrstuvwxy
Testing load balancing

Docker Swarm includes a built-in load balancer called the Mesh Router: requests made to any node’s IP address are automatically distributed across the Swarm.

To test this, I made 1000 connections to the manager node’s IP address with nc:

print("#!/bin/bash")
for i in range(1000):
    print("nc -v 10.133.7.100 445 &")
print("wait")

Samba spawns one new process for each connection, so if the load balancing works, I would expect about 500 Samba processes on each node in the Swarm. This is indeed what happens.

After I ran the script to make 1000 connections, I checked the number of Samba processes on the manager (10.133.7.100):

zhuowei@dora:~$ ps -ef|grep smbd|wc
506 5567 42504

and on the worker node (10.133.7.50):

ralph:~# ps -ef|grep smbd|wc
506 3545 28862

So exactly half of the requests made to the manager node were magically redirected to the first worker node, showing that the Swarm cluster is working properly.

I found Docker Swarm to be very easy to set up, and it performed well under (a light) load.

Kubernetes

Kubernetes is becoming the industry standard for container orchestration. It’s significantly more flexible than Docker Swarm, but this also makes it harder to set up. I found that it’s not too hard, though.

For this experiment, instead of using a pre-built Kubernetes dev environment such as minikube, I decided to set up my own cluster, using Kubeadm, WeaveNet, and MetalLB.

Setting up Kubernetes

Kubernetes has a reputation for being difficult to set up: you might’ve heard of the complicated multi-step process from the Kubernetes the Hard Way tutorial.

That reputation is no longer accurate: Kubernetes developers have automated almost every step into a very easy-to-use setup script called kubeadm.

Unfortunately, because Kubernetes is so flexible, there are still a few steps that the tutorial on using kubeadm doesn't cover, so I had to figure out which network and load balancer to use myself.

Here’s what I ended up running.

First I had to disable Swap on each node:

root@dora:~# swapoff -a
root@dora:~# systemctl restart kubelet.service

Next, I set up the master node (10.133.7.100) with the following command:

sudo kubeadm init --pod-network-cidr=10.134.0.0/16 --apiserver-advertise-address=10.133.7.100 --apiserver-cert-extra-sans=10.0.2.15

The --pod-network-cidr option assigns an internal network address range to all the nodes on the network, used for internal communications within Kubernetes.

The --apiserver-advertise-address and --apiserver-cert-extra-sans options were added because of a quirk in my VirtualBox setup: the main virtual network card on the VMs (which has IP 10.0.2.15) can only access the Internet. I had to clarify that other nodes have to access the master using the 10.133.7.100 IP address.

After running this command, Kubeadm printed some instructions:

Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root:

kubeadm join 10.133.7.100:6443 --token abcdefghijklmnopqrstuvw --discovery-token-ca-cert-hash sha256:abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijkl

I missed these instructions the first time, so I didn’t actually finish setup. I then spent a whole week wondering why none of my containers worked!

The Kubernetes developers must’ve been like:

After I finally read the instructions, I had to do three more things:

  • First, I had to run the commands given by kubeadm to set up a configuration file.

  • By default, Kubernetes won’t schedule containers on the master node, only on worker nodes. Since I only have one node right now, the tutorial showed me this command to allow running containers on the only node:

kubectl taint nodes --all node-role.kubernetes.io/master-
  • Finally, I had to choose a network for my cluster.

Installing networking

Unlike Docker Swarm, which must use its own mesh-routing layer for both networking and load balancing, Kubernetes offers multiple choices for networking and load-balancing.

The networking component allows containers to talk to each other internally. I did some research, and this comparison article suggested Flannel or WeaveNet as they are easy to set up. Thus, I decided to try WeaveNet. I followed the instructions from the kubeadm tutorial to apply WeaveNet’s configuration:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Next, to allow containers to talk to the outside world, I need a load balancer. From my research, I had the impression that most Kubernetes load balancer implementations are focused on HTTP services only, not raw TCP. Thankfully, I found MetalLB, a recent (one-year-old) project that’s plugging this gap.

To install MetalLB, I followed its Getting Started tutorial, and first deployed MetalLB:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

Next, I allocated the IP range 10.133.7.200–10.133.7.230 to MetalLB, by making and applying this configuration file:

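The configuration file’s contents aren’t shown in this excerpt. For MetalLB v0.7, a Layer 2 address pool covering that range would be declared in a ConfigMap roughly like the following sketch — the pool name default is an assumption:

# metallb-config.yaml — assumed MetalLB v0.7 Layer 2 configuration for the range above
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.133.7.200-10.133.7.230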

kubectl apply -f metallb-config.yaml
Deploying the app

Kubernetes’ service configuration files are more verbose than Docker Swarm’s, due to Kubernetes’ flexibility. In addition to specifying the container to run, like Docker Swarm, I have to specify how each port should be treated.

After reading Kubernetes’ tutorial, I came up with this Kubernetes configuration, made of one Service and one Deployment.

# https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
kind: Service
apiVersion: v1
metadata:
  name: samba
  labels:
    app: samba
spec:
  ports:
    - port: 445
      protocol: TCP
      targetPort: 445
  selector:
    app: samba
  type: LoadBalancer

---

This Service tells Kubernetes to export TCP port 445 from our Samba containers to the load balancer.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: samba
  labels:
    app: samba
spec:
  selector:
    matchLabels:
      app: samba
  replicas: 1
  template:
    metadata:
      labels:
        app: samba
    spec:
      containers:
        - image: 127.0.0.1:5000/samba:latest
          name: samba
          ports:
            - containerPort: 445
          stdin: true

This Deployment object tells Kubernetes to run my container and export a port for the Service to handle.

Note the replicas: 1 — that's how many instances of the container I want to run.

I can deploy this service to Kubernetes using kubectl apply:

zhuowei@dora:~/Documents/docker$ kubectl apply -f kubernetes-samba.yaml
service/samba configured
deployment.apps/samba configured

and, after rebooting my virtual machine a few times, the Deployment finally started working:

zhuowei@dora:~/Documents/docker$ kubectl get pods
NAME                   READY STATUS  RESTARTS AGE
samba-57945b8895-dfzgl 1/1   Running 0        52m
zhuowei@dora:~/Documents/docker$ kubectl get service samba
NAME  TYPE         CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
samba LoadBalancer 10.108.157.165 10.133.7.200 445:30246/TCP 91m

My service is now available at the External-IP assigned by MetalLB:

zhuowei@dora:~$ smbclient \\\\10.133.7.200\\workdir -U %
WARNING: The "syslog" option is deprecated
Try "help" to get a list of possible commands.
smb: \> ls
.                               D        0  Fri Oct  5 12:14:43 2018
..                              D        0  Sun Oct  7 22:09:49 2018
hello.txt                       N       13  Fri Oct  5 11:17:34 2018

102685624 blocks of size 1024. 72252576 blocks available
smb: \>
Adding another node

Adding another node in a Kubernetes cluster is much easier: I just had to run the command given by kubeadm on the new machine:

zhuowei@davy:~$ sudo kubeadm join 10.133.7.100:6443 --token abcdefghijklmnopqrstuvw --discovery-token-ca-cert-hash sha256:abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijkl

(snip...)

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Odd quirks of my setup

I had to make two changes due to my VirtualBox setup:

First, since my virtual machine has two network cards, I have to manually tell Kubernetes my machine’s IP address. According to this issue, I had to edit

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

and change one line to

Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --node-ip=10.133.7.101"

before restarting Kubernetes:

root@davy:~# systemctl daemon-reload
root@davy:~# systemctl restart kubelet.service

The other tweak is for the Docker registry: since the new node can’t access my private registry on the master node, I decided to do a terrible hack and share the registry from my master node to the new machine using ssh:

zhuowei@davy:~$ ssh dora.local -L 5000:localhost:5000

This forwards port 5000 from the master node, dora (which runs my Docker registry) to localhost, where Kubernetes can find it on this machine.

In actual production, one would probably host the Docker registry on a separate machine, so all nodes can access it.

Scaling up the application

With the second machine setup, I modified my original Deployment to add another instance of the app:

replicas: 2

After rebooting both the master and the worker a few times, the new instance of my app finally exited ContainerCreating status and started to run:

zhuowei@dora:~/Documents/docker$ kubectl get pods
NAME                   READY STATUS  RESTARTS AGE
samba-57945b8895-dfzgl 1/1   Running 0        62m
samba-57945b8895-qhrtl 1/1   Running 0        12m
Testing load balancing

I used the same procedure to open 1000 connections to Samba running on Kubernetes. The result is interesting.

Master:

zhuowei@dora$ ps -ef|grep smbd|wc
492 5411 41315

Worker:

zhuowei@davy:~$ ps -ef|grep smbd|wc
518 5697 43499

Kubernetes/MetalLB also balanced the load across the two machines, but the master machine got slightly fewer connections than the worker machine. I wonder why.

Anyways, this shows that I finally managed to set up Kubernetes after a bunch of detours. When I saw the containers working, I felt like this guy.

Comparison and conclusion

Features common to both: Both can manage containers and intelligently load-balance requests for the same TCP application across two different virtual machines. Both have good documentation for initial setup.

Docker Swarm’s strengths: simple setup with no configuration needed, tight integration with Docker.

Kubernetes’ strengths: flexible components, many available resources and add-ons.

Kubernetes vs Docker Swarm is a tradeoff between simplicity and flexibility.

I found it easier to set up Docker Swarm, but I can’t just, for example, swap out the load balancer for another component — there’s no way to configure it: I would have to disable it altogether.

On Kubernetes, finding the right setup took me a while thanks to the daunting number of choices, but in exchange, I can swap out parts of my cluster as needed, and I can easily install add-ons, such as a fancy dashboard.

If you just want to try Kubernetes without all this setup, I suggest using minikube, which offers a prebuilt Kubernetes cluster virtual machine, no setup needed.

Finally, I’m impressed that both engines supported raw TCP services: other infrastructure-as-a-service providers such as Heroku or Glitch only support HTTP(S) website hosting. The availability of TCP services means that one can deploy one’s database servers, cache servers, and even Minecraft servers using the same tools used to deploy web applications, making container orchestration management a very useful skill.

In conclusion, if I were building a cluster, I would use Docker Swarm. If I were paying someone else to build a cluster for me, I would ask for Kubernetes.

What I learned

  • How to work with Docker containers

  • How to set up a two-node Docker Swarm cluster

  • How to set up a two-node Kubernetes cluster, and which choices would work for a TCP-based app

  • How to deploy an app to Docker Swarm and Kubernetes

  • How to fix anything by rebooting a computer enough times, like I’m still using Windows 98

  • Kubernetes and Docker Swarm aren’t as intimidating as they sound


Translated from: https://www.freecodecamp.org/news/docker-swarm-vs-kubernetes-how-to-setup-both-in-two-virtual-machines-f8897fce7967/
