KVM

Source: Red Hat PDF, "KVM – Kernel Based Virtual Machine"


The Kernel-based Virtual Machine (KVM) project represents the latest generation of open source virtualization.

The goal of the project was to create a modern hypervisor that builds on the experience of previous
generations of technologies and leverages the modern hardware available today.
KVM is implemented as a loadable kernel module that converts the Linux kernel into a bare metal
hypervisor. There are two key design principles that the KVM project adopted which have helped it mature
rapidly into a stable, high-performance hypervisor and overtake other open source hypervisors.
Firstly, because KVM was designed after the advent of hardware-assisted virtualization, it did not have to
implement features that were provided by hardware. The KVM hypervisor requires Intel VT-x or AMD-V
enabled CPUs and leverages those features to virtualize the CPU.
By requiring hardware support rather than merely optimizing for it when available, KVM was able to deliver
an optimized hypervisor solution without the "baggage" of supporting legacy hardware or requiring
modifications to the guest operating system.
Secondly, the KVM team applied a tried-and-true adage: "don't reinvent the wheel."
There are many components that a hypervisor requires in addition to the ability to virtualize the CPU and
memory, for example: a memory manager, a process scheduler, an I/O stack, device drivers, a security
manager, a network stack, etc. In fact, a hypervisor is really a specialized operating system, differing from
its general-purpose peers only in that it runs virtual machines rather than applications.
Since the Linux kernel already includes the core features required by a hypervisor and has been hardened
into a mature and stable enterprise platform by over 15 years of support and development, it is more
efficient to build on that base than to write all the required components, such as a memory manager,
scheduler, etc., from the ground up.
In this regard the KVM project benefited from the experience of Xen. One of the key challenges of the
Xen architecture is the split between domain0 and the Xen hypervisor. Since the Xen hypervisor
provides the core platform features within the stack, it has needed to implement these features, such as the
scheduler and memory manager, from the ground up.
For example, while the Linux kernel has a mature and proven memory manager, including support for NUMA
and large-scale systems, the Xen hypervisor has had to build this support from scratch. Likewise,
features like power management, which are already mature and field-proven in Linux, had to be
re-implemented in the Xen hypervisor.
Another key decision made by the KVM team was to incorporate KVM into the upstream Linux kernel.
The KVM code was submitted to the Linux kernel community in December 2006 and was accepted into
the 2.6.20 kernel in January 2007. At this point KVM became a core part of Linux and is able to inherit key
features from the Linux kernel. By contrast, the patches required to build the Linux domain0 for Xen are still
not part of the Linux kernel and require vendors to create and maintain a fork of the Linux kernel. This has
led to an increased burden on distributors of Xen, who cannot easily leverage the features of the upstream
kernel: any new feature, bug fix or patch added to the upstream kernel must be back-ported to work with
the Xen patch sets.
In addition to the broad Linux community, KVM is supported by some of the leading vendors in the software
industry, including Red Hat, AMD, HP, IBM, Intel, Novell, Siemens, SGI and others.


KVM ARCHITECTURE
In the KVM architecture the virtual machine is implemented as a regular Linux process, scheduled by the
standard Linux scheduler. In fact, each virtual CPU appears as a regular Linux thread, scheduled like any
other task. This allows KVM to benefit from all the features of the Linux kernel.
Device emulation is handled by a modified version of QEMU that provides an emulated BIOS, PCI bus,
USB bus and a standard set of devices such as IDE and SCSI disk controllers, network cards, etc.
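
The split between the in-kernel module and the userspace process is visible in the /dev/kvm interface that
QEMU drives through ioctl() calls. The following is a minimal, illustrative sketch of that interface, not
QEMU's actual code; guest memory setup is simplified and the exit-handling loop is omitted:

/* Minimal sketch of the /dev/kvm userspace interface that QEMU builds on.
 * Illustrative only: error handling is abbreviated and no guest code is
 * loaded. Compile on Linux with: cc kvm_sketch.c -o kvm_sketch */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);        /* talk to the kernel module */
    if (kvm < 0 || ioctl(kvm, KVM_GET_API_VERSION, 0) != 12) {
        perror("kvm");
        return 1;
    }

    int vm = ioctl(kvm, KVM_CREATE_VM, 0);     /* one VM ...                */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);  /* ... one vCPU (a thread)   */

    /* Hand the VM 64 KB of ordinary anonymous memory as guest "RAM". */
    void *mem = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0,
        .memory_size = 0x10000,
        .userspace_addr = (unsigned long)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    /* The vCPU's shared state is mmap'ed from the vcpu fd; KVM_RUN enters
     * the guest and returns on the first exit the kernel cannot handle. */
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);
    ioctl(vcpu, KVM_RUN, 0);
    printf("first exit reason: %u\n", run->exit_reason);
    return 0;
}

A real virtual machine monitor loads guest code into that memory, then loops on KVM_RUN and services
each exit (programmed I/O, MMIO and so on) by emulating the appropriate device; that loop is precisely the
role QEMU plays in the KVM architecture.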
Security
Since a virtual machine is implemented as a Linux process, it leverages the standard Linux security model
to provide isolation and resource controls. The Linux kernel includes SELinux (Security-Enhanced Linux), a
project developed by the US National Security Agency to add mandatory access controls, multi-level and
multi-category security as well as policy enforcement. SELinux provides strict resource isolation and
confinement for processes running on the Linux kernel. The sVirt project builds on SELinux to provide an
infrastructure that allows an administrator to define policies for virtual machine isolation. Out of the box,
sVirt ensures that a virtual machine's resources cannot be accessed by any other process (or virtual
machine), and this can be extended by the administrator to define fine-grained permissions, for example to
group virtual machines together to share resources.
Any virtual environment is only as secure as the hypervisor itself. As organizations look to deploy
virtualization more pervasively throughout their infrastructure, security becomes a key concern, even more
so in cloud computing environments. The hypervisor is undoubtedly a tempting target for attackers, as an
exploited hypervisor could lead to the compromise of all the virtual machines it is hosting. Hypervisor
exploits have already been seen, for example the "Invisible Things Lab" exploit in 2008, where a Xen domU
was able to compromise the domain0 host. SELinux and sVirt provide an infrastructure that delivers a level
of security and isolation unmatched in the industry.
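
To make the mechanism concrete, the following sketch (an illustration added here, not taken from the
paper) reads the SELinux context that sVirt assigns to a guest's QEMU process through the libselinux API.
Each virtual machine receives a unique MCS category pair, e.g. system_u:system_r:svirt_t:s0:c123,c456,
so one guest's process cannot touch another guest's files or memory:

/* Sketch: print the per-VM sVirt label of a running QEMU process.
 * The PID argument is an assumption for illustration; link with -lselinux. */
#include <selinux/selinux.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    pid_t pid = (pid_t)atoi(argv[1]);      /* PID of a qemu-kvm process */
    char *context = NULL;

    if (getpidcon(pid, &context) != 0) {   /* libselinux: read process label */
        perror("getpidcon");
        return 1;
    }
    /* Two guests get different c<M>,c<N> category pairs, and their image
     * files carry matching labels, so cross-VM access is denied by policy. */
    printf("sVirt context of %d: %s\n", (int)pid, context);
    freecon(context);
    return 0;
}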
Memory Management
KVM inherits the powerful memory management features of Linux. The memory of a virtual machine is
stored as memory is for any other Linux process and can be swapped, backed by large pages for better
performance, shared or backed by a disk file. NUMA support allows virtual machines to efficiently access
large amounts of memory.
KVM supports the latest memory virtualization features from CPU vendors with support for Intel's Extended
Page Table (EPT) and AMD's Rapid Virtualization Indexing (RVI) to deliver reduced CPU utilization and
higher throughput.
Memory page sharing is supported through a kernel feature called Kernel Same-page Merging (KSM).
KSM scans the memory of each virtual machine and, where virtual machines have identical memory pages,
merges these into a single page that is shared between the virtual machines, storing only a single copy.
If a guest attempts to change this shared page it is given its own private copy.
When consolidating many virtual machines onto a host there are many situations in which memory pages
may be shared, for example unused memory within a Windows virtual machine, common DLLs, libraries,
kernels or other objects common between virtual machines. With KSM more virtual machines can be
consolidated on each host, reducing hardware costs and improving server utilization.
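
The userspace side of this mechanism is a single hint to the kernel: the process hosting the guest (QEMU
in KVM's case) marks its guest RAM as mergeable, and the KSM daemon scans and deduplicates it. A
minimal sketch, assuming KSM has been enabled on the host via /sys/kernel/mm/ksm/run:

/* Sketch: opt a memory range into KSM scanning, as QEMU does for guest RAM.
 * Assumes KSM is enabled on the host (echo 1 > /sys/kernel/mm/ksm/run). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define RAM_SIZE (64UL * 1024 * 1024)   /* 64 MB stand-in for guest RAM */

int main(void)
{
    void *ram = mmap(NULL, RAM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(ram, 0xAB, RAM_SIZE);        /* identical pages for KSM to merge */

    /* Ask the kernel to let the KSM daemon deduplicate this range; a later
     * write triggers copy-on-write, giving the writer a private copy. */
    if (madvise(ram, RAM_SIZE, MADV_MERGEABLE) != 0)
        perror("madvise(MADV_MERGEABLE)");

    getchar();   /* keep the mapping alive; watch /sys/kernel/mm/ksm/pages_sharing */
    return 0;
}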
Hardware support
Since KVM is part of Linux it leverages the entire hardware ecosystem, so any hardware device supported
by Linux can be used by KVM. Linux enjoys one of the largest ecosystems of hardware vendors, and the
nature of the open source community, where hardware vendors are able to participate in the development of
the Linux kernel, ensures that the latest hardware features are rapidly adopted in the Linux kernel, allowing
KVM to utilize a wide variety of hardware platforms.
As new features are added to the Linux kernel, KVM inherits them without additional engineering, and the
ongoing tuning and optimization of Linux immediately benefits KVM.
Storage
KVM is able to use any storage supported by Linux to store virtual machine images, including local disks
with IDE, SCSI and SATA, Network Attached Storage (NAS) including NFS and SAMBA/CIFS, or SANs with
support for iSCSI and Fibre Channel. Multipath I/O may be used to improve storage throughput and to
provide redundancy. Again, because KVM is part of the Linux kernel it can leverage a proven and reliable
storage infrastructure, with support from all the leading storage vendors, and a storage stack that has been
proven in production deployments worldwide.
KVM also supports virtual machine images on shared file systems such as the Global File System (GFS2),
allowing virtual machine images to be shared between multiple hosts or shared using logical volumes.
Disk images support thin provisioning, improving storage utilization by allocating storage only when it is
required by the virtual machine rather than allocating all of the storage up front.
The native disk format for KVM is QCOW2, which includes support for multiple levels of snapshots, as well
as compression and encryption.
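
The following sketch illustrates the thin-provisioning idea with a plain sparse file; QCOW2 implements the
same allocate-on-write behavior inside its own format. The image name and size are examples only:

/* Sketch: create a sparse 10 GB raw disk image. Its logical size is 10 GB,
 * but it consumes almost no disk blocks until the guest actually writes. */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "disk.img";                 /* example image name */
    off_t logical_size = 10LL * 1024 * 1024 * 1024;

    int fd = open(path, O_CREAT | O_WRONLY, 0644);
    if (fd < 0 || ftruncate(fd, logical_size) != 0) {
        perror("create image");
        return 1;
    }

    struct stat st;
    fstat(fd, &st);
    /* st_size is the provisioned (logical) size; st_blocks * 512 is what
     * the file actually occupies on disk right now. */
    printf("logical: %lld bytes, allocated: %lld bytes\n",
           (long long)st.st_size, (long long)st.st_blocks * 512);
    close(fd);
    return 0;
}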
Live Migration
KVM supports live migration, which provides the ability to move a running virtual machine between physical
hosts with no interruption to service.
Live migration is transparent to the end user: the virtual machine remains powered on, network connections
remain active and user applications continue to run while the virtual machine is relocated to a new physical
host.
In addition to live migration, KVM supports saving a virtual machine's current state to disk, allowing it to be
stored and resumed at a later time.
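
KVM hosts are commonly driven through a management API such as libvirt (an assumption here; the paper
does not name a specific toolchain). A hedged sketch of triggering a live migration via the libvirt C API, with
the domain name and destination URI as placeholders:

/* Sketch: live-migrate a running guest to another KVM host via libvirt.
 * "guest01" and the destination URI are placeholders. Compile with -lvirt. */
#include <libvirt/libvirt.h>
#include <stdio.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) {
        fprintf(stderr, "cannot connect to local hypervisor\n");
        return 1;
    }

    virDomainPtr dom = virDomainLookupByName(conn, "guest01");
    if (!dom) {
        virConnectClose(conn);
        return 1;
    }

    /* VIR_MIGRATE_LIVE keeps the guest running while its memory is copied;
     * only the final switchover pauses it, typically for milliseconds. */
    if (virDomainMigrateToURI(dom, "qemu+ssh://dest-host/system",
                              VIR_MIGRATE_LIVE, NULL, 0) != 0)
        fprintf(stderr, "migration failed\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}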
Guest Support
KVM supports a wide variety of guest operating systems, from mainstream operating systems such as Linux
and Windows to other platforms including OpenBSD, FreeBSD, OpenSolaris, Solaris x86 and MS-DOS.
In Red Hat's enterprise offerings, KVM has been certified under Microsoft's Server Virtualization Validation
Program (SVVP) to ensure users deploying Microsoft Windows Server on Red Hat Enterprise Linux and Red
Hat Enterprise Virtualization Hypervisor (RHEV-H) will receive full commercial support from Microsoft.
Device Drivers
KVM supports hybrid virtualization where paravirtualized drivers are installed in the guest operating system
to allow virtual machines to use an optimized I/O interface rather than emulated devices to deliver high
performance I/O for network and block devices.
For its paravirtualized drivers the KVM hypervisor uses the VirtIO standard, developed by IBM and Red Hat
in conjunction with the Linux community. VirtIO is a hypervisor-independent interface for building device
drivers, allowing the same set of device drivers to be used with multiple hypervisors and providing better
guest interoperability. Today many hypervisors use proprietary interfaces for paravirtualized device drivers,
which means that guest images are not portable between hypervisor platforms. As more vendors adopt the
VirtIO framework, guest images will become more easily transferable between platforms, reducing
certification testing and overhead.
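
On the guest side, the Linux kernel exposes VirtIO as an ordinary driver bus. The skeleton below shows
roughly what a guest VirtIO driver registers; it is heavily abbreviated (a real driver negotiates features and
sets up its virtqueues in probe()) and builds against kernel headers rather than standing alone:

/* Skeleton of a guest-side VirtIO driver (Linux kernel module), heavily
 * abbreviated for illustration; build against kernel headers. */
#include <linux/module.h>
#include <linux/virtio.h>
#include <linux/virtio_ids.h>

static const struct virtio_device_id id_table[] = {
    { VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID },   /* claim virtio network devices */
    { 0 },
};

static int demo_probe(struct virtio_device *vdev)
{
    /* A real driver finds its virtqueues and registers a netdev here. */
    dev_info(&vdev->dev, "virtio demo driver bound\n");
    return 0;
}

static void demo_remove(struct virtio_device *vdev)
{
    dev_info(&vdev->dev, "virtio demo driver unbound\n");
}

static struct virtio_driver demo_driver = {
    .driver.name  = "virtio-demo",
    .driver.owner = THIS_MODULE,
    .id_table     = id_table,
    .probe        = demo_probe,
    .remove       = demo_remove,
};

static int __init demo_init(void)  { return register_virtio_driver(&demo_driver); }
static void __exit demo_exit(void) { unregister_virtio_driver(&demo_driver); }
module_init(demo_init);
module_exit(demo_exit);
MODULE_DEVICE_TABLE(virtio, id_table);
MODULE_LICENSE("GPL");

Because the bus and queue layout are standardized, the same driver model works on any hypervisor that
implements VirtIO devices, which is what makes guest images portable across platforms.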
VirtIO drivers are included in modern Linux kernels (2.6.25 and later), included in Red Hat Enterprise Linux
4.8+ and 5.3+, and available for Red Hat Enterprise Linux 3.
Red Hat has developed VirtIO drivers for Microsoft Windows guests for optimized network and disk I/O that
have been certified under Microsoft's Windows Hardware Quality Labs certification program (WHQL).
Performance and Scalability
KVM inherits the performance and scalability of Linux, supporting virtual machines with up to 16 virtual
CPUs and 256 GB of RAM, and host systems with 256 cores and over 1 TB of RAM.
With 95%–135% of bare-metal performance for real-world enterprise workloads like SAP, Oracle, LAMP
and Microsoft Exchange; more than 1 million messages per second and sub-200-microsecond latency in
virtual machines running on a standard server; and the highest consolidation ratios, with more than 600
virtual machines running enterprise workloads on a single server, KVM allows even the most demanding
application workloads to be virtualized.
Improved scheduling and resource control
In the KVM model, a virtual machine (Windows or Linux) is a Linux process. It is scheduled and managed by
the standard Linux kernel. Over the past several years, the community has advanced the core Linux kernel
to a point where it has industry-leading features, performance, stability, security and enterprise robustness.
The current version of the Red Hat Enterprise Linux kernel supports setting relative priorities for any
process, including virtual machines. This priority applies to an aggregate measure of CPU, memory, network
and disk I/O for a given virtual machine, and provides the first level of Quality of Service (QoS) infrastructure
for virtual machines.
The modern Linux scheduler includes further enhancements that allow much finer-grained control of the
resources allocated to a Linux process, making it possible to guarantee a QoS for a particular process.
Since in the KVM model a virtual machine is a Linux process, these kernel advancements naturally accrue
to virtual machines operating under the KVM architecture. Specifically, enhancements including CFS,
control groups, network namespaces and real-time extensions form the core kernel-level infrastructure for
QoS, service levels and accounting for VMs.
The Linux kernel includes an advanced process scheduler called the Completely Fair Scheduler (CFS),
which provides advanced process scheduling facilities based on experience gained from large system
deployments. The CFS scheduler has been extended with the cgroups (control groups) resource manager,
which allows processes, and in the case of KVM virtual machines, to be given shares of system resources
such as memory, CPU and I/O. Unlike other virtual machine schedulers that give proportions of resources
to a virtual machine based on weights, cgroups allow minimums to be set, not just maximums, guaranteeing
resources to a virtual machine while still allowing it to use more resources if available. Network namespaces
are a similar kernel-level infrastructure, providing finer-grained control that can guarantee a minimum
network SLA for a given virtual machine.
These advanced features in the kernel allow resource management and control at all levels – CPU, memory,
network and I/O.
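
As a hedged sketch of how such controls are applied in practice, the following uses the cgroup filesystem
interface; the paths assume a cgroup v1 hierarchy mounted at /sys/fs/cgroup, and the group name, share
value and PID are illustrative:

/* Sketch: give one VM's QEMU process a CPU-share guarantee via cgroups.
 * Paths assume cgroup v1 with the cpu controller at /sys/fs/cgroup/cpu;
 * run as root. The PID and values are examples. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

static int write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%s\n", value);
    return fclose(f);
}

int main(void)
{
    /* One group per virtual machine, named after the guest. */
    mkdir("/sys/fs/cgroup/cpu/vm-guest01", 0755);

    /* Twice the default weight of 1024: a guaranteed minimum share under
     * contention, while the VM may still use idle CPU beyond it. */
    write_file("/sys/fs/cgroup/cpu/vm-guest01/cpu.shares", "2048");

    /* Move the VM's QEMU process (example PID) into the group. */
    write_file("/sys/fs/cgroup/cpu/vm-guest01/tasks", "12345");
    return 0;
}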
Lower latency and higher determinism
In addition to leveraging the new process scheduler and resource management features of the kernel to
guarantee the CPU, memory, network and disk I/O SLA for each VM, the Linux kernel will also feature real-
time extensions. These allow much lower latency for applications in virtual machines, and a higher degree
of determinism, which is important for mission-critical enterprise workloads. Under this operating model,
kernel processing that requires a long CPU time slice is divided into smaller components and
scheduled/processed accordingly by the kernel. In addition, mechanisms are put in place that allow
interrupts from virtual machines to be prioritized ahead of long-running kernel work. Hence, requests from
virtual machines can be processed faster, thereby significantly reducing application processing latency
and improving determinism.
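
One concrete, user-visible knob in this area (a sketch; actual policies are deployment-specific) is placing a
latency-sensitive vCPU thread under a real-time scheduling class so that its work preempts ordinary tasks:

/* Sketch: give a vCPU thread a real-time FIFO priority so the guest's
 * wakeups preempt ordinary (CFS-scheduled) work. The thread ID passed on
 * the command line is an example; run as root. */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    pid_t vcpu_tid = (pid_t)atoi(argv[1]);      /* TID of a QEMU vCPU thread */
    struct sched_param param = { .sched_priority = 50 };

    /* SCHED_FIFO tasks run ahead of all normal tasks until they block. */
    if (sched_setscheduler(vcpu_tid, SCHED_FIFO, &param) != 0) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("vCPU thread %d now runs SCHED_FIFO at priority 50\n", (int)vcpu_tid);
    return 0;
}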

