Docker Orchestration


Getting Started With Swarm Mode

In this scenario, you will learn how to initialise a Docker Swarm Mode cluster and deploy networked containers using the built-in Docker Orchestration. The environment has been configured with two Docker hosts.

What is Swarm Mode
In version 1.12, Docker introduced Swarm Mode. Swarm Mode lets you deploy containers across multiple Docker hosts, using overlay networks for service discovery and a built-in load balancer for scaling the services.

Swarm Mode is managed as part of the Docker CLI, making it a seamless part of the Docker ecosystem.

Key Concepts
Docker Swarm Mode introduces three new concepts which we’ll explore in this scenario.

  • Node: A Node is an instance of the Docker Engine connected to the Swarm. Nodes are either managers or workers. Managers schedule which containers run where; workers execute the tasks. By default, managers are also workers.

  • Services: A service is a high-level concept relating to a collection of tasks to be executed by workers. An example of a service is an HTTP Server running as a Docker Container on three nodes.

  • Load Balancing: Docker includes a load balancer to process requests across all containers in the service.

This scenario will help you learn how to deploy these new concepts.

Step 1 - Initialise Swarm Mode

By default, Docker works as an isolated single-node engine, and all containers are deployed onto that one engine. Swarm Mode turns a single-host Docker Engine into a multi-host, cluster-aware engine, with the initialising host becoming the manager.

The first node to initialise Swarm Mode becomes the manager. As new nodes join the cluster, their roles can be adjusted between manager and worker. You should run three to five managers in a production environment to ensure high availability.
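Should you need to change a node's role later, the Docker CLI provides promote and demote sub-commands. A minimal sketch, assuming a worker with the hypothetical hostname node2 has already joined:

# promote an existing worker (hypothetical hostname node2) to a manager
docker node promote node2

# demote it back to a worker
docker node demote node2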

Task: Create Swarm Mode Cluster
Swarm Mode is built into the Docker CLI. You can find an overview of the available commands via docker swarm --help

The most important one is how to initialise Swarm Mode. Initialisation is done via init.

docker swarm init

After running the command, the Docker Engine knows how to work with a cluster and becomes the manager.
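To confirm the engine is now part of a cluster, you can check the engine's info output; a quick sketch:

# "Swarm: active" indicates this engine has initialised or joined a cluster
docker info | grep -i swarm

# list the nodes known to this manager; for now, just this host
docker node ls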

In the next step, we will add more nodes and deploy containers across these hosts.

Step 2 - Join Cluster

With Swarm Mode enabled, it is possible to add additional nodes and issue commands across all of them. If a node disappears, for example because of a crash, the containers which were running on that host are automatically rescheduled onto other available nodes. Rescheduling ensures you do not lose capacity and provides high availability.

On each additional node you wish to add to the cluster, use the Docker CLI to join the existing group. Joining is done by pointing the new host at a current manager of the cluster; in this case, the first host.

Docker now uses an additional port, 2377, for managing the Swarm. This port should be blocked from public access and reachable only by trusted users and nodes. We recommend using VPNs or private networks to secure access.
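One way to lock the port down is a host firewall rule; a minimal iptables sketch, assuming your trusted nodes sit on a hypothetical 172.17.0.0/24 subnet:

# allow Swarm management traffic only from the trusted subnet
iptables -A INPUT -p tcp --dport 2377 -s 172.17.0.0/24 -j ACCEPT
# drop management traffic from anywhere else
iptables -A INPUT -p tcp --dport 2377 -j DROP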

Task

On the second host, join the cluster by requesting access via the manager.

docker swarm join 172.17.0.40:2377

By default, the manager will automatically accept new nodes being added to the cluster. You can view all nodes in the cluster using docker node ls

Step 3 - Create Overlay Network

Swarm Mode also introduces an improved networking model. In previous versions, Docker required an external key-value store, such as Consul, to keep the network state consistent across hosts. The consensus and key-value functionality is now built into Docker and no longer depends on external services.

The improved networking approach follows the same syntax as before. The overlay network is used to enable containers on different hosts to communicate. Under the covers, this is a Virtual Extensible LAN (VXLAN), designed for large-scale, cloud-based deployments.

Task

The following command will create a new overlay network called skynet. All containers registered to this network can communicate with each other, regardless of which node they are deployed onto.

docker network create -d overlay skynet
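You can verify the network exists and inspect its configuration; for example:

# list only the overlay networks visible on this host
docker network ls --filter driver=overlay

# show the subnet and driver details for skynet
docker network inspect skynet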

Step 4 - Deploy Service

By default, Docker uses a spread replication model to decide which containers should run on which hosts. The spread approach ensures containers are deployed across the cluster evenly. If one of the nodes is removed from the cluster, the containers it was running are spread across the other available nodes.

A new concept of Services is used to run containers across the cluster. This is a higher-level concept than containers. A service allows you to define how applications should be deployed at scale. By updating the service, Docker updates the containers required in a managed way.

Task

In this case, we are deploying the Docker Image katacoda/docker-http-server. We define a friendly name for the service, http, and attach it to the newly created skynet network.

To ensure replication and availability, we run two instances, or replicas, of the container across our cluster.

Finally, we load balance these two containers together on port 80. An HTTP request sent to any node in the cluster will be processed by one of the containers within the cluster. The node which accepted the request might not be the node where the responding container runs; instead, Docker load balances requests across all available containers.

docker service create --name http --network skynet --replicas 2 -p 80:80 katacoda/docker-http-server

You can view the services running on the cluster using the CLI command docker service ls

As containers start, you will see them using the ps command. You should see one instance of the container on each host.

List containers on the first host - docker ps

List containers on the second host - docker ps

If we issue an HTTP request to the public port, it will be processed by one of the two containers: curl docker
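To see the load balancer alternate between the containers, issue several requests in a row; a small sketch (the docker hostname is provided by this environment):

# each response names the container that served it
for i in 1 2 3 4; do curl docker; done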

Step 5 - Inspect State

The Service concept allows you to inspect the health and state of your cluster and the running applications.

Task

You can view the list of all the tasks associated with a service across the cluster. In this case, each task is a container: docker service tasks http

You can view the details and configuration of a service via docker service inspect --pretty http

On each node, you can ask what tasks it is currently running; self refers to the node being queried, here the manager node (the Leader): docker node tasks self

Using the ID of a node, you can query individual hosts: docker node tasks $(docker node ls -q | tail -n1)

In the next step, we will scale the service to run more instances of the container.

Step 6 - Scale Service

A Service allows us to scale how many instances of a task are running across the cluster. Because it understands how to launch containers and which containers are running, it can easily start, or remove, containers as required. At the moment the scaling is manual; however, the API could be hooked up to an external system such as a metrics dashboard.
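As a sketch of how such external tooling could drive scaling, the loop below polls a hypothetical metrics endpoint for a desired replica count and applies it; http://metrics.local/replicas is an invented URL, not a real API:

# poll a (hypothetical) metrics service, then resize the http service
while true; do
  desired=$(curl -s http://metrics.local/replicas)
  docker service scale http=$desired
  sleep 60
done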

Task

At present, we have two load-balanced containers running, which are processing our requests: curl docker

The command below will scale our http service to be running across five containers.

docker service scale http=5

On each host, you will see additional containers being started: docker ps

The load balancer is updated automatically, and requests will now be processed across the new containers. Try issuing more requests via curl docker

Try scaling the service down to see the result.
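For example, a sketch that scales back to two replicas:

docker service scale http=2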

Securely Joining Swarm Mode Cluster

This scenario covers how to securely manage new nodes joining a Swarm Mode cluster. The scenario covers how to define a password for joining along with manually accepting nodes into the cluster.

By default, anyone who can communicate with a Swarm manager via port 2377 can join the cluster. This port should be locked down and restricted to trusted machines only.

On a large network with additional security requirements, companies might want extra safeguards to ensure that only verified nodes are allowed to join.

Step 1 - Initialise Swarm Mode

The first stage to securing the Swarm Mode cluster is to require a password for any new nodes joining the cluster.

When initialising the cluster, the manager can define a secret. This secret must then be provided by any additional hosts wanting to join.

Task

Initialise the cluster using docker swarm init --secret mypassword

In the next step, we will use the password so our second host can join.

Step 2 - Join Cluster using password

With the cluster requiring a password, it must be specified when a host wants to join the cluster.

Example

If an attempt is made to join the cluster without the password, an error is returned.

docker swarm join 172.17.0.21:2377

Likewise, supplying an incorrect password results in an error.

docker swarm join 172.17.0.21:2377 --secret test

The only way to join is if the node knows the correct password for the cluster.

docker swarm join 172.17.0.21:2377 --secret mypassword

Afterwards, the node is added as expected: docker node ls

Step 3 - Disable Auto Accept

Once a node knows the password and joins the cluster, it will automatically start accepting workloads. A secondary check can be added which requires nodes to be manually accepted before work is assigned.

The Swarm Cluster previously created has been removed. The next steps will create one which requires a password and manually accepting nodes.

Task

When initialising the cluster, set auto-accept to none to force both workers and managers to require acceptance after joining.

docker swarm init --auto-accept=none --secret mypassword

Step 4 - Join without Auto-Accept

When auto-accept is turned off, nodes can still join. However, they will not be issued workloads until they have been manually accepted into the cluster via the CLI.

Task

Join the cluster with the password, as before.

docker swarm join 172.17.0.21:2377 --secret mypassword

However, this time, the node will be marked as pending. This indicates that it needs to be manually accepted.

docker node ls

Step 5 - Manually Accept Nodes

With a node pending, an operator needs to use the Docker CLI to accept it.

Task

The first task is to identify the ID or Hostname of the pending node. This is done via the CLI using docker node ls

If the pending node is at the bottom of the list, run docker node accept $(docker node ls -q | tail -n1)

If the pending node is at the top of the list, run docker node accept $(docker node ls -q | head -n1)

The sub-command returns the ID of the pending node, which is then used as a parameter to the accept command. The accept command also works with the hostname of the pending node.
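For example, assuming the pending node's hostname is node2 (a hypothetical name), the acceptance could equally be written as:

docker node accept node2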

With the node accepted, its status should change in the node list: docker node ls

Step 6 - Deploy Service

The accepted node is now able to start processing requests and running containers.

Task

Deploy an HTTP server. After a few moments, you should see it deployed onto the newly accepted host.

docker service create --replicas 2 -p 80:80 katacoda/docker-http-server

docker ps

Load Balancing and Service Discovery in Swarm Mode

In this scenario, you will learn how to use Docker to load balance network traffic to different containers. With the introduction of Swarm Mode and Services, containers can now be logically grouped by a friendly name and port.

Requests to this name/port will be load balanced across all available containers in the cluster, increasing availability and distributing load.

This functionality is provided by Swarm's routing mesh. Internally, it uses Linux IPVS, an in-kernel Layer 4 multi-protocol load balancer.

The environment has been configured with two Docker Hosts.

Step 1 - Initialise Cluster

Before beginning, initialise Swarm Mode and add the second host to the cluster.

Run the commands below.

docker swarm init

docker swarm join 172.17.0.28:2377
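With the cluster up, you can see the network backing the routing mesh; Swarm Mode automatically creates an overlay network named ingress for published ports:

# the ingress network carries routing-mesh traffic for published ports
docker network inspect ingress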

Step 2 - Port Load Balance

By default, requests to Services are load balanced based on the public port.

Task

The command below will create a new service called lbapp1 with two containers running. The service is exposed via port 81.

docker service create --name lbapp1 --replicas 2 -p 81:80 katacoda/docker-http-server

When requests are made on port 81 to any node in the cluster, Docker distributes the load across the two containers.

curl docker:81

The HTTP response indicates which container processed the request. Running the command on the second host gives the same result, with requests processed by containers on both hosts.

curl docker:81

In the next step, we will explore how to use this to deploy a realistic application.

Step 3 - Virtual IP and Service Discovery

Docker Swarm Mode includes a Routing Mesh that enables multi-host networking. It allows containers on two different hosts to communicate as if they were on the same host. It does this by creating a Virtual Extensible LAN (VXLAN), designed for cloud-based networking.

The routing works in two different ways. Firstly, based on the public port exposed on the service. Any requests to the port will be distributed. Secondly, the service is given a Virtual IP address that is routable only inside the Docker Network. When requests are made to the IP address, they are distributed to the underlying containers. This Virtual IP is registered with the Embedded DNS server in Docker. When a DNS lookup is made based on the service name, the Virtual IP is returned.

In this step, you will create a load-balanced HTTP service attached to an overlay network and look up its Virtual IP.

Task
docker network create -d overlay eg1

This network will be a “swarm-scoped network”, meaning that only containers launched as part of a service can attach themselves to it.

docker service create --name http --network eg1 --replicas 2 katacoda/docker-http-server

By calling the service http, Docker adds an entry to its embedded DNS server. Other containers on the network can use the friendly name to discover the IP address. Along with ports, it is this IP address which can be used inside the network to reach the load-balanced service.

Use dig to find the internal Virtual IP. Note: interactive services are not currently supported in Docker 1.12; this is a workaround.

docker service create --network eg1 benhall/dig dig http

docker logs $(docker ps -aq --filter="ancestor=benhall/dig:latest")

Pinging the name should also discover the IP address.

docker service create --network eg1 alpine ping -c5 http

docker logs $(docker ps -aq --filter="ancestor=alpine:latest")

This should match the Virtual IP given to the Service. You can discover this by inspecting the service.

docker service inspect --format="{{.Endpoint.VirtualIPs}}" http

Each container is still given its own unique IP address.

docker inspect --format="{{.NetworkSettings.Networks.eg1.IPAddress}}" $(docker ps -aq --filter="ancestor=katacoda/docker-http-server:latest" | head -n1)

The Virtual IP ensures that load balancing works as expected within the cluster, while the published port ensures it works from outside the cluster.

Step 4 - Multi-Host LB and Service Discovery

Virtual IP load balancing, port-based load balancing, and service discovery can all be used in a multi-host scenario, with applications communicating with different services on different hosts.

In this step, we will deploy a replicated Node.js application that communicates with Redis to store data.

Task

To start, there needs to be an overlay network that the application and data store can connect to.

docker network create -d overlay app1-network

When deploying Redis, the network can be attached. The application expects to connect to a Redis instance named redis, so to let it discover the Virtual IP via the embedded DNS server, we call the service redis.

docker service create --name redis --network app1-network redis:alpine

When deploying the application, a public port can be exposed, allowing requests to be load balanced across the replicas.

docker service create --name app1-web --network app1-network --replicas 4 -p 80:3000 katacoda/redis-node-docker-example

Each host should be running Node.js container instances, with one host also running the Redis container: docker ps

Calling the HTTP server stores the request in Redis and returns the results. The requests are load balanced, with the Node.js containers talking across the overlay network to the Redis container.

curl docker
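Issuing several requests in a row shows the responses coming from the different application replicas, all writing to the same Redis instance; a small sketch:

# each request is served by one of the replicas
for i in 1 2 3 4 5; do curl docker; done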

The application is now distributed across multiple hosts.

Apply Rolling Updates Across Swarm Cluster

In this scenario, you will learn how to apply rolling updates to your Services for configuration changes and new Docker Image versions without any downtime. The environment has been configured with two Docker Hosts.

A service is a high-level concept relating to a collection of tasks to be executed by workers. An example of a service is an HTTP Server running as a Docker Container on three nodes.

Step 1 - Update Limits

Services can be updated dynamically to control various settings and options. Internally, Docker manages how the updates should be applied. For certain commands, Docker will stop, remove and re-create the containers. The possibility of all containers stopping at once is an important consideration for managing connections and uptime.

There are various settings you can control; view the help via docker service update --help

Task

To start, deploy an HTTP service. We will use it to update and modify the container settings.

docker service create --name http --replicas 2 -p 80:80 katacoda/docker-http-server:v1

Once started, various properties can be updated, for example adding a new environment variable to the containers: docker service update --env KEY=VALUE http

Alternatively, updating the CPU and memory limits: docker service update --limit-cpu 2 --limit-memory 512mb http

Once executed, the results will be visible when you inspect the service: docker service inspect --pretty http

However, listing all containers, you will see that they have been re-created with every update: docker ps -a

Step 2 - Update Replicas

Not all updates require every container to be re-created. For example, scaling the number of replicas does not affect the existing containers.

Task

As an alternative to docker service scale, it is possible to use update to define how many replicas should be running. The command below updates the replicas from two to six; Docker will then schedule the additional four containers for deployment.

docker service update --replicas=6 http

The number of replicas is viewable when inspecting the service: docker service inspect --pretty http

Step 3 - Update Image

The most common scenario for updates is releasing a new version of the application via an updated Docker Image. As the Docker Image is a property of the container, it can be updated in the same way as the properties in the previous steps.

Task

The following command will re-create the instances of our HTTP service with the :v2 tag of the Docker Image.

docker service update --image katacoda/docker-http-server:v2 http

If you list the running containers, you will notice that all of them are stopped and re-created at the same time.

docker ps

This means your application cannot process any requests until the new image has been pulled and the containers re-created.

curl http://docker

In the next step, you will learn how to roll out updates without downtime.

Step 4 - Rolling Updates

The aim is to deploy a new Docker Image without incurring any downtime. Zero downtime can be achieved by setting parallelism and a delay in the rollout. Docker can batch updates and perform them as a rollout across the cluster.

update-parallelism defines how many containers Docker should update at once. The number of replicas determines how large your update batches should be.

update-delay defines how long to wait between each update batch. The delay is useful if your application has a warm-up time, for example when starting the JVM or CLR. By specifying a delay, you can ensure that requests can still be processed while the new containers are starting.

Task

The two parameters are applied when running docker service update. In the example below, Docker will update one container at a time, waiting 10 seconds between each update. The update here changes the Docker Image used, but the parameters can apply to any of the possible update values.

docker service update --update-delay=10s --update-parallelism=1 --image katacoda/docker-http-server:v3 http

After launching the update, you will slowly see new v3 versions of the containers start and replace the existing v2 containers: docker ps
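One way to watch the batches progress is to poll the image tags of the running containers; a sketch using docker ps formatting:

# repeat on each host to watch v2 containers drain and v3 containers appear
watch -n 2 'docker ps --format "{{.Names}}: {{.Image}}"'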

Issuing HTTP requests to the load balancer during the rollout will result in them being handled by both v2 and v3 containers, producing different output.

curl http://docker

It is important that your application takes this into account and can handle two different versions being live concurrently.
