NUMA and vNUMA


Posted by szamosattila on March 04, 2012
Tutorial, Virtualization

With the spread of SMP (Symmetric Multi-Processing) systems, a new scalability issue came up: CPU-memory communication became a bottleneck (again). Despite the newest and fastest multicore CPUs, the bandwidth of the Northbridge was not enough to take advantage of these new developments.

This is where NUMA came onto the stage. NUMA (Non-Uniform Memory Access) introduced the notions of local memory and remote memory: DIMMs are no longer connected to the IOH (I/O Hub), but to the CPU itself. With the appearance of the Nehalem processors, Intel departed from the traditional Front-Side Bus architecture and started to build integrated memory controllers, with two or three memory channels, into its processors. With this new architecture, a processor can read from and write to its local DIMMs faster than DIMMs attached to other CPUs.
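
On a NUMA-capable Linux host you can inspect this topology yourself. The commands below are only a rough sketch, assuming the numactl package is installed; they list the NUMA nodes, the memory attached to each node, the node distance matrix, and which CPUs belong to which node:

# Show NUMA nodes, per-node memory sizes and the node distance matrix
numactl --hardware

# Show how CPUs are grouped into NUMA nodes
lscpu | grep -i numa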

With the appearance of NUMA, operating systems and applications had to be prepared to take advantage of this technology, so many popular OSs, as well as ESX(i), became NUMA aware. NUMA scheduling is fully transparent to administrators: ESXi takes care of placing vCPUs onto physical CPU cores. If a VM has no more vCPUs than the number of cores in a NUMA node, the ESXi scheduler assigns those vCPUs to cores within the same NUMA node. This results in fast memory access, because guest memory is allocated to the virtual machine from that same NUMA node.

What if we have VMs with more vCPUs than the number of cores a NUMA node has? Then so-called wide virtual machines are created. In this case a virtual machine cannot fit into a single NUMA node, so the VM has local and remote memory at the same time, which can cause performance degradation on certain systems.

If a virtual machine has more vCPUs than the number of physical cores in a NUMA node, but no more than the node's number of logical processors (thanks to hyperthreading), there is an option to tell the scheduler to use hyperthreaded logical processors within the node instead of physical cores in another physical CPU. This results in the use of only local memory modules, which can be useful for memory-intensive applications. You can set this option on a per-VM basis with the following flag:

numa.vcpu.preferHT
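
As a minimal sketch, this flag is an advanced per-VM configuration entry, so it can be added to the VM's .vmx file (or through the vSphere Client's Configuration Parameters dialog), typically while the VM is powered off; whether preferring hyperthreads actually pays off depends on how memory-intensive the workload is:

# Per-VM advanced setting in the .vmx file: prefer hyperthreaded logical
# processors on the local NUMA node over physical cores on a remote node
numa.vcpu.preferHT = "TRUE"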

So basically, that is what NUMA is all about. But what about vNUMA? NUMA and vNUMA are essentially the same technology; the only difference is that NUMA awareness lives at the ESXi level, while vNUMA exposes the NUMA topology to the guest OS.

vNUMA is a new feature of vSphere 5.0. By default, vNUMA is enabled only for virtual machines with more than 8 vCPUs. If vNUMA is enabled for a VM, the guest OS running inside it can take advantage of NUMA technology. Homogeneity of the host servers is very important when using vNUMA: a vNUMA-enabled virtual machine takes on the NUMA topology of the underlying physical server when it is first powered on. If the VM is vMotioned to another host whose NUMA topology differs from the previous one, performance degradation can result.
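
As a sketch of how you might experiment with this, assuming the vSphere 5.x advanced option name numa.vcpu.min (the per-VM setting that controls the vCPU threshold mentioned above) and a Linux guest with numactl installed:

# .vmx: lower the vCPU threshold so that vNUMA is also exposed to
# smaller VMs (the default corresponds to "more than 8 vCPUs")
numa.vcpu.min = "2"

# Inside the Linux guest, verify the NUMA topology the VM actually sees
numactl --hardware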

- See more at: http://blog.szamosattila.hu/index.php/2012/03/numa-and-vnuma/
