Container Technologies

This article takes a close look at how containers and virtual machines work under the hood, highlighting the advantages containers offer in resource efficiency and application isolation. By comparing the two, it shows how containers use Linux Namespaces and Control Groups to isolate processes, and explains in detail how the Docker platform simplifies packaging, distributing, and running containerized applications.
A process running in a container

A process running in a container actually runs inside the host’s operating system, just like all the other processes (unlike VMs, where processes run in separate operating systems). But the process in the container is still isolated from other processes. To the process itself, it appears as though it is running on a dedicated machine and operating system all by itself.

COMPARING VIRTUAL MACHINES TO CONTAINERS

Compared to VMs, containers are much more lightweight, which allows you to run higher numbers of software components on the same hardware. This is mainly because each VM needs to run its own set of system processes, which requires additional compute resources on top of those consumed by the component’s own process. A container, on the other hand, is nothing more than a single isolated process running in the host OS, consuming only the resources the app consumes, without the overhead of any additional processes.

Underneath the VMs

Underneath those VMs is a hypervisor, which divides the actual hardware resources into smaller sets of virtual resources that can be used by the operating system inside each VM. Applications running inside those VMs perform system calls to the kernel inside the VM, and that kernel then performs x86 instructions on the host’s physical CPU through the hypervisor.

Containers

Containers, on the other hand, all perform system calls on the exact same kernel, and this single kernel is the only one performing x86 instructions on the host’s CPU. The CPU doesn’t need to do any kind of virtualization the way it does with VMs.

Benefits and trade-offs

The main benefit of virtual machines is the full isolation they provide, since VMs share just the hardware, while containers all call out to the same Linux kernel, which can clearly pose a security risk. If you have a limited amount of hardware resources, VMs are an option only when you have a small number of processes that you want to isolate. To run larger numbers of isolated processes on the same machine, containers are a much better choice because of their low overhead. Remember, each VM runs its own set of system services, while containers don’t; the host OS runs only one set of them. There’s also the fact that a VM needs to boot up before it can run your process, while a container is just that single process and can start up immediately.

THE MECHANISMS THAT MAKE CONTAINER ISOLATION POSSIBLE

By this point, you’re probably wondering how exactly containers are able to isolate processes if they are running on the same operating system. There are two mechanisms that make this possible.

  • The first one, Linux Namespaces, makes sure each process sees its own personal view of the system (files, processes, network interfaces, hostname, and so on).
  • The second one is Linux Control Groups (cgroups), which limit the amount of resources a process can consume (CPU, memory, network bandwidth, and so on).

ISOLATING PROCESSES WITH LINUX NAMESPACES

By default, each Linux system initially has one single namespace. All system resources, such as filesystems, process IDs, user IDs, and network interfaces, belong to this single namespace. But you can create additional namespaces and organize resources across them. When running a process, you can run it inside one of those namespaces, and the process will only see resources that are inside the same namespace. There are actually multiple kinds of namespaces, so a process doesn’t belong to just one namespace, but to one namespace of each kind.

Without going into more details, let’s just look at a simple example. The UTS namespace determines what hostname and domain name the process running inside that namespace will see. So, by assigning two different UTS namespaces to a pair of processes, you can make them see different local hostnames. In other words, to the two processes, it will appear as though they are running on two different machines (at least as far as the hostname is concerned).
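To make this concrete, here is a minimal Go sketch (not from the original text) that puts a child process into its own UTS namespace. It needs to run as root on a Linux machine; the /bin/sh command line and the hostname demo-container are purely illustrative.

```go
// Minimal sketch: run a child process in its own UTS namespace (Linux only,
// requires root or CAP_SYS_ADMIN). The child sets and prints a hostname that
// the host and all other processes never see.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// The child changes its hostname and prints it; because it lives in a
	// separate UTS namespace, the host's own hostname is left untouched.
	cmd := exec.Command("/bin/sh", "-c", "hostname demo-container && hostname")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{Cloneflags: syscall.CLONE_NEWUTS}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Adding further flags such as CLONE_NEWPID or CLONE_NEWNET would give the child its own view of process IDs or network interfaces in the same way.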

LIMITING RESOURCES AVAILABLE TO A PROCESS

The other half of container isolation deals with limiting the amount of system resources a container can consume. This is achieved with cgroups, a Linux kernel feature that limits the resource usage of a process (or a group of processes). A process will not be able to use more than the configured amount of CPU, memory, network bandwidth, and so on. This way, processes cannot hog resources reserved for other processes, exactly as if each process were running on a separate machine.
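As a rough illustration (a sketch that assumes cgroup v2 mounted at /sys/fs/cgroup, the memory controller enabled, and root privileges; the group name demo is made up), this Go program creates a cgroup, caps its memory, and moves the current process into it:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cg := "/sys/fs/cgroup/demo" // hypothetical cgroup name
	if err := os.MkdirAll(cg, 0755); err != nil {
		panic(err)
	}
	// memory.max caps the group's memory usage; here 100 MiB.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("104857600"), 0644); err != nil {
		panic(err)
	}
	// Writing a PID into cgroup.procs moves that process into the group,
	// so the limit above now applies to this very process.
	pid := []byte(fmt.Sprintf("%d", os.Getpid()))
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0644); err != nil {
		panic(err)
	}
	fmt.Println("this process is now limited to 100 MiB of memory")
}
```

Container runtimes do essentially this on your behalf, writing the limits you request into the cgroup filesystem before starting the containerized process.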

Docker container platform

While container technologies have been around for a long time, they only became widely known with the rise of the Docker container platform. Docker was the first container system that made containers easily portable across different machines. It made it really simple to package up not only the application, but also all of its libraries and other dependencies, even the whole OS filesystem, into a simple, portable package that can be used to provision the application to any other machine running Docker.

When you run an application packaged with Docker, it will always see the exact filesystem contents that you’ve prepared for it. It will see the same files whether it is running on your development machine or a production machine, even if the production server is running a completely different Linux OS. The application won’t see anything from the server it is running on, so it doesn’t matter if the server has a completely different set of installed libraries than your computer does. For example, if you’ve packaged up your application with the files of the whole Red Hat Enterprise Linux operating system, the application will think it is running inside RHEL, both when you run it on your development computer running Fedora and when you run it on a server running Debian or some other Linux distribution.

This is similar to creating a VM image by installing an operating system inside a VM, installing the app in there, and then distributing and running the whole VM image. Docker achieves the same effect, but instead of using VMs to achieve app isolation, it uses Linux container technologies to provide (almost) the same level of isolation that VMs do. And instead of using large, monolithic VM image files for distributing the application, its container images are composed of layers, which can be shared across multiple images.

Docker concepts

Docker is a platform for packaging, distributing and running applications. As we’ve already stated, it allows you to package your application together with its whole environment. This can be either just a few libraries that the app requires, or even all the files that are usually available on the filesystem of an installed operating system. Docker makes it possible to transfer this package to a central repository, from where it can then be transferred to any computer running Docker and executed there (for the most part, but not always, as we’ll soon explain).
There are three main concepts in Docker.

  • Images – A Docker container image is something you package your application and its environment into. It contains the filesystem that will be available to the application, plus other metadata, such as the path to the executable that should be run when the image is started.
  • Registries – A Docker registry is a repository that stores your Docker images and facilitates easy sharing of those images between different people and computers. When you build your image, you can either run it on the same computer you built it on, or you can push (upload) the image to a registry and then pull (download) it on another computer and run it there, as sketched in the example below. Some registries are public, allowing anyone to pull images from them, while others are private, accessible only to certain people or machines.
  • Containers – A Docker container is a running instance of a Docker image. A running container is an ordinary process on the host running Docker, but it is completely isolated from the host and from all other processes running on that host. The process is also resource-constrained, meaning it can only access and use the amount of resources (CPU, RAM, etc.) that are allocated to it.

[Figure: the relationship between Docker images, registries, and containers]
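A hedged sketch of that image → registry → container workflow, written in Go and shelling out to the docker CLI (the tag registry.example.com/myapp:1.0 is a placeholder, not a real registry, and a Dockerfile is assumed to exist in the current directory):

```go
package main

import (
	"os"
	"os/exec"
)

// docker runs the docker CLI with the given arguments, streaming its output.
func docker(args ...string) error {
	cmd := exec.Command("docker", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	const image = "registry.example.com/myapp:1.0" // hypothetical image tag

	steps := [][]string{
		{"build", "-t", image, "."}, // build an image from the local Dockerfile
		{"push", image},             // upload the image to a registry
		{"run", "--rm", image},      // run it as a container (here, on the same machine)
	}
	for _, step := range steps {
		if err := docker(step...); err != nil {
			panic(err)
		}
	}
}
```

In practice you would normally type these three commands straight into a shell; the point here is only the order of the steps and how the image, the registry, and the container relate to each other.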

COMPARING VIRTUAL MACHINES AND DOCKER CONTAINERS

I’ve already explained how Linux containers in general are like virtual machines, but much more lightweight. Now let’s take a quick look at how Docker containers specifically compare to virtual machines (and how Docker images compare to VM images).
[Figure: apps A and B running inside VMs compared to the same apps running as separate Docker containers on one host]
Notice that apps A and B have access to the same binaries and libraries both when running in a VM and when running as two separate containers.

IMAGE LAYERS

Docker images are composed of layers. Different images can contain the exact same layers because every Docker image is built on top of another image, so two different images can use the same parent image as their base. This speeds up the distribution of images across the network: layers that have already been transferred as part of one image don’t need to be transferred again when another image containing the same layers is transferred.
Layers don’t just make distribution more efficient, they also help reduce the storage footprint of images. Each layer is stored only once. Two containers created from two images that share the same base layers can therefore read the same files, yet they still maintain complete isolation from each other. If one container changes a file in one of those shared layers, the other containers will not see the change. Docker image layers are read-only. When a container is run, a new writable layer is created on top of all the image layers, and any modification of a file from one of the underlying layers creates a copy of the whole file in that top-most, writable layer.
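The copy-on-write idea can be captured in a small toy model (a conceptual sketch in Go, not how Docker’s storage drivers are actually implemented): read-only layers are searched from top to bottom, and every write lands in the container’s own writable layer.

```go
package main

import "fmt"

type layer map[string]string // path -> file contents

type container struct {
	readOnly []layer // shared image layers, bottom to top
	writable layer   // per-container writable layer
}

// read looks in the writable layer first, then in the image layers top-down.
func (c *container) read(path string) (string, bool) {
	if v, ok := c.writable[path]; ok {
		return v, true
	}
	for i := len(c.readOnly) - 1; i >= 0; i-- {
		if v, ok := c.readOnly[i][path]; ok {
			return v, true
		}
	}
	return "", false
}

// write never touches the shared layers; it shadows the file in the
// container's own writable layer.
func (c *container) write(path, contents string) {
	c.writable[path] = contents
}

func main() {
	base := layer{"/etc/os-release": "rhel"} // one shared read-only layer
	a := &container{readOnly: []layer{base}, writable: layer{}}
	b := &container{readOnly: []layer{base}, writable: layer{}}

	a.write("/etc/os-release", "patched")
	va, _ := a.read("/etc/os-release")
	vb, _ := b.read("/etc/os-release")
	fmt.Println(va, vb) // prints "patched rhel": b never sees a's change
}
```

Running it prints “patched rhel”, showing that container b never observes container a’s modification even though both share the same base layer.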

In theory, a Docker image can be run on any Linux machine running Docker, but there is one small caveat, related to the fact that all containers share the host’s Linux kernel. There is no guarantee that a container image that runs on one machine will also run on every other machine. If the machines run different versions of the Linux kernel, or don’t have the same kernel modules available, and the app has specific kernel-related requirements, it clearly won’t be able to run on all of them.
So, while having containers share the same kernel brings a lot of benefits over VMs, it also imposes certain constraints on the apps running inside them. VMs have no such constraints, since each VM runs its own kernel. And it’s not just about the kernel: a containerized app built for a specific hardware architecture or operating system can only run on other machines that have the same architecture and operating system, unless you run a virtual machine emulating that hardware and then run Docker inside it.
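If you want to see which kernel release and architecture a given host actually provides before assuming an image will run there, a quick check looks roughly like this (Linux-only sketch; runtime.GOARCH reports the architecture the Go binary itself was compiled for):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strings"
)

func main() {
	// /proc/sys/kernel/osrelease holds the running kernel's release string.
	rel, err := os.ReadFile("/proc/sys/kernel/osrelease")
	if err != nil {
		panic(err) // not a Linux host, or /proc isn't mounted
	}
	fmt.Println("kernel release:", strings.TrimSpace(string(rel)))
	// GOARCH is the architecture this binary was built for (e.g. amd64, arm64).
	fmt.Println("architecture:  ", runtime.GOARCH)
}
```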

rkt – an alternative to Docker

Docker was the first container platform that made containers mainstream. Docker doesn’t provide process isolation itself; the actual isolation of containers is done at the Linux kernel level, and Docker just makes it easy to use. After the success of Docker, the Open Container Initiative (OCI) was born to create open industry standards around container formats and runtimes. Docker is part of that initiative, as is rkt (pronounced “rock-it”), which is another Linux container engine.

Like Docker, rkt is a platform for running containers. It puts a strong emphasis on security, composability, and conforming to open standards. It uses the OCI container image format and can even run regular Docker container images.

A note on Kubernetes

Docker was initially the only container runtime supported by Kubernetes. More recently, Kubernetes has also started supporting rkt as a container runtime and will eventually support others as well.
Kubernetes is not a container orchestration system made specifically for Docker containers. In fact, the essence of Kubernetes is not orchestrating containers at all; it is a whole lot more. Containers just happen to be the best way to run apps on different cluster nodes.
