https://wiki.archlinux.org/index.php/KVM_(%E7%AE%80%E4%BD%93%E4%B8%AD%E6%96%87)
Enable the nested virtualization feature of the kvm_intel module on the host:
# modprobe -r kvm_intel
# modprobe kvm_intel nested=1
To make nested virtualization permanent (see Kernel modules#Setting module options):
/etc/modprobe.d/modprobe.conf
options kvm_intel nested=1
Verify that nested virtualization is activated:
$ systool -m kvm_intel -v | grep nested
nested = "Y"
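The check above can also be scripted. The following is a minimal sketch, assuming a GNU/Linux host; the helper name `nested_enabled` is ours, and the parameter file path is an argument so the logic can be exercised even on a machine where kvm_intel is not loaded:

```shell
# Hypothetical helper: report the state of the kvm_intel "nested"
# parameter. Newer kernels expose it as "1"/"0", older ones as "Y"/"N".
nested_enabled() {
  local param_file="${1:-/sys/module/kvm_intel/parameters/nested}"
  case "$(cat "$param_file" 2>/dev/null)" in
    Y|1) echo "nested: enabled" ;;
    N|0) echo "nested: disabled" ;;
    *)   echo "nested: unknown (is kvm_intel loaded?)" ;;
  esac
}
```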
Run the guest virtual machine with the following command:
$ qemu-system-x86_64 -enable-kvm -cpu host
Boot the virtual machine and check whether the vmx flag is present:
$ grep vmx /proc/cpuinfo
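The guest-side check can be wrapped in a small reusable function; this is a sketch (the name `has_vmx` is ours), with the cpuinfo path taken as a parameter so the parsing can be tried against a sample file:

```shell
# Check a cpuinfo-style file for the vmx flag; on a real guest call it
# with no argument so it reads /proc/cpuinfo.
has_vmx() {
  local cpuinfo="${1:-/proc/cpuinfo}"
  if grep -qw vmx "$cpuinfo"; then
    echo "vmx present"
  else
    echo "vmx absent"
  fi
}
```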
Live snapshots
A feature called external snapshotting allows one to take a live snapshot of a virtual machine without turning it off. Currently it only works with qcow2 and raw file based images.
Once a snapshot is created, KVM attaches the new snapshot image to the virtual machine as its new block device, storing any new data directly to it, while the original disk image is taken offline so you can easily copy or back it up. Afterwards you can merge the snapshot and original images back together, again without shutting down your virtual machine.
Here's how it works.
Current running vm
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 3     archey                         running
List all its current images
# virsh domblklist archey
Target     Source
------------------------------------------------
vda        /vms/archey.img
Notice the image file properties
# qemu-img info /vms/archey.img
image: /vms/archey.img
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 2.1G
cluster_size: 65536
Create a disk-only snapshot. The --atomic switch makes sure that the VM is not modified if snapshot creation fails.
# virsh snapshot-create-as archey snapshot1 --disk-only --atomic
List the snapshots if you want to see them:
# virsh snapshot-list archey
 Name                 Creation Time             State
------------------------------------------------------------
 snapshot1            2012-10-21 17:12:57 -0700 disk-snapshot
Notice the new snapshot image created by virsh and its image properties. It weighs just a few MiBs and is linked to its original "backing image/chain".
# qemu-img info /vms/archey.snapshot1
image: /vms/archey.snapshot1
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 18M
cluster_size: 65536
backing file: /vms/archey.img
At this point, you can go ahead and copy the original image with cp --sparse=always or rsync -S. Then you can merge the original image back into the snapshot.
# virsh blockpull --domain archey --path /vms/archey.snapshot1
Now that you have pulled the blocks out of the original image, the file /vms/archey.snapshot1 becomes the new disk image. Check its disk size to see what that means. After that is done, the original image /vms/archey.img and the snapshot metadata can be deleted safely. virsh blockcommit would work in the opposite direction to blockpull, but it seems to be currently under development in qemu-kvm 1.3 (including the snapshot-revert feature), scheduled to be released sometime next year.
This new feature of KVM will certainly come handy to the people who like to take frequent live backups without risking corruption of the file system.
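The whole cycle above can be collected into one script. This is a hedged sketch, not an official tool: the function name `live_backup` is ours, the domain name and paths come from the example, and with DRYRUN=1 the commands are only printed, so the flow can be exercised without a libvirt host:

```shell
# Live-backup cycle: snapshot, copy the now-quiescent base image,
# pull the base back into the snapshot, drop the snapshot metadata.
live_backup() {
  local vm="$1" disk="$2" snap="$3" backup="$4"
  run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }
  run virsh snapshot-create-as "$vm" snap1 --disk-only --atomic
  run cp --sparse=always "$disk" "$backup"
  run virsh blockpull --domain "$vm" --path "$snap"
  run virsh snapshot-delete "$vm" snap1 --metadata
}
```

For the example domain this would be invoked as `live_backup archey /vms/archey.img /vms/archey.snapshot1 /backup/archey.img`.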
KVM Virtualization: Nested Virtualization (nested)
The physical host used for testing in this article runs CentOS 6.5, uses the Intel VT virtualization architecture, and has qemu-kvm version 0.12 installed.
On Intel processors, KVM uses Intel's VMX (Virtual Machine eXtensions), i.e. hardware-assisted virtualization, to improve virtual machine performance. Now suppose we need to test an OpenStack cluster, or simply need several hosts with "vmx" support, but do not have many physical servers available. If our virtual machines could support "vmx" just like the physical host, the problem would be solved. Under normal circumstances, however, a virtual machine cannot make itself a hypervisor and have further virtual machines installed on it, because such guests do not support "vmx".
Nested virtualization (nested) is a feature that can be enabled through a kernel module parameter. It lets a virtual machine share the physical CPU's characteristics and support vmx (Intel) or svm (AMD) hardware virtualization. For a detailed introduction to nested, see here.
1. First, look at the CPU information of an ordinary KVM guest
[root@localhost ~]# lscpu
Architecture:          x86_64
...
Vendor ID:             GenuineIntel
Hypervisor vendor:     KVM
Virtualization type:   full
...
[root@localhost ~]# cat /proc/cpuinfo
processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 13
model name      : QEMU Virtual CPU version (cpu64-rhel6)
...
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm unfair_spinlock pni cx16 hypervisor lahf_lm
As you can see, this guest is fully virtualized, its CPU is emulated by QEMU, and it does not support hardware virtualization (there is no vmx in the flags).
2. Enable nested support on the physical server
To make the physical host's kernel support nested, first upgrade the kernel to a Linux 3.x version, then add a new boot parameter for the kernel. By default, the system does not support nested:
# Check whether the current system supports nested
systool -m kvm_intel -v | grep -i nested
    nested              = "N"

# Or check it this way
cat /sys/module/kvm_intel/parameters/nested
N
Step 1 is to upgrade the kernel; we test with the 3.18 kernel. Upgrading is simple: download a pre-built kernel RPM package (a download link is available here), install it, then edit grub.conf so that the new kernel is the default boot entry. Step 2, adding the boot parameter, is equally simple: just append "kvm-intel.nested=1" to the end of the kernel line.
# Upgrade the kernel
rpm -ivh kernel-ml-3.18.3-1.el6.elrepo.x86_64.rpm

# Edit grub.conf: make the new kernel (entry 0) the default
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (3.18.3-1.el6.elrepo.x86_64)
        root (hd0,0)
        kernel /vmlinuz-3.18.3-1.el6.elrepo.x86_64 ro root=UUID=9c1afc64-f751-473c-aaa6-9161fff08f6f rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet kvm-intel.nested=1
...
After making the changes above, reboot the system, check the running kernel with "uname -r", and verify that nested is now supported.
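Editing grub.conf by hand can also be scripted. A minimal sketch, assuming GNU sed; the function name `add_nested_param` is ours, and the file path is an argument so it can be tried on a copy first:

```shell
# Append kvm-intel.nested=1 to every "kernel" line in a grub.conf,
# skipping lines that already carry the flag (safe to run twice).
add_nested_param() {
  local grub="${1:-/boot/grub/grub.conf}"
  sed -i '/^[[:space:]]*kernel /{/kvm-intel\.nested=1/!s/$/ kvm-intel.nested=1/}' "$grub"
}
```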
3. Create a virtual machine that supports "vmx"
If you manage virtual machines with libvirt, modify the CPU definition in the guest's XML file. Any of the following three definitions works:
# Option 1
<cpu mode='custom' match='exact'>
  <model fallback='allow'>core2duo</model>
  <feature policy='require' name='vmx'/>
</cpu>
# This defines "core2duo" as the CPU model to emulate for the guest and adds the "vmx" feature to it
# Option 2
<cpu mode='host-model'>
  <model fallback='allow'/>
</cpu>
# Option 3
<cpu mode='host-passthrough'>
  <topology sockets='2' cores='2' threads='2'/>
</cpu>
# CPU passthrough: the vCPUs seen inside the guest have the same configuration as the physical CPU. The drawback is that if you migrate the guest, the destination server's hardware must match the current physical host.
If you start the guest with the qemu-kvm command line, you can simply add:
-enable-kvm -cpu qemu64,+vmx
# Sets the guest CPU to the qemu64 model and adds vmx support
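As a sketch, a full command line for a nested-capable guest might look like the following; the image path, memory size, and the helper name `nested_guest_args` are made up for illustration:

```shell
# Build the qemu-kvm argument string for a guest whose emulated
# qemu64 CPU additionally advertises vmx.
nested_guest_args() {
  local img="$1"
  echo "-enable-kvm -cpu qemu64,+vmx -m 2048 -drive file=$img,if=virtio"
}

# Typical use (assumed image path):
#   qemu-kvm $(nested_guest_args /vms/nested.img)
```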
Then start the guest and inspect its configuration:
# The guest below uses the "host-model" CPU definition
cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 26
model name      : Intel Core i7 9xx (Nehalem Class Core i7)
...
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc unfair_spinlock pni vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic
...
This article originally appeared at http://www.cnblogs.com/jython/p/4458807.html; please credit the source when reposting.
References
http://kashyapc.com/2012/01/14/nested-virtualization-with-kvm-intel/
http://networkstatic.net/nested-kvm-hypervisor-support/