What's New in Windows Server 2012 Networking (Part 4)

Introduction

In Part 1 of my ongoing series on What’s New in Windows Server 2012 Networking, I touched briefly on the topic of Data Center Bridging (DCB). It’s also one of the low latency technologies, the rest of which will be discussed in Part 5.

But DCB deserves a little more attention, so this month I’m taking a detour for a more detailed look at this feature. Unfortunately, the documentation on the TechNet website is very incomplete at the time of this writing; the planning, deployment, operations and troubleshooting content is all marked “not available.”

Low Latency Workloads technologies

The next new feature on our “What’s New” list is really a whole set of features. Some are new to Server 2012 and some were introduced in previous versions of Windows Server and have since been improved; all of them address scenarios in which low latency (that is, reduced delay between the steps of processing a workload) is important. A number of different types of scenarios fit that description.

If you have applications or processes that need the fastest possible communication between processes and/or between computers, these features can help to reduce network latency caused by many different factors, both hardware and software related.

Data Center Bridging (DCB) is one of these low latency features, and we’re going to look now at how it can help to ameliorate the effects of those pesky latency issues.

Data Center Bridging (DCB) overview

Data Center Bridging has been around for a while; it’s based on a set of IEEE standards that enhance Ethernet so that storage (SAN) and regular LAN traffic can share the same physical infrastructure, making Ethernet work more reliably in the data center and allowing many different protocols to run over the same physical links. DCB has also been known as Data Center Ethernet (DCE) and Converged Enhanced Ethernet (CEE).

The relevant IEEE standards are:

  • 802.1Qbb (priority-based flow control, a.k.a. per-priority PAUSE)
  • 802.1Qaz (enhanced transmission selection/bandwidth management)
  • 802.1Qau (congestion notification)
  • Data Center Bridging Capabilities Exchange Protocol (DCBX), defined as part of 802.1Qaz

All of these work together to reduce packet loss and make the network more reliable.

With DCB, bandwidth allocation is hardware based and flow control is priority-based. Instead of the operating system having to handle the traffic, it’s handled by a converged network adapter. A converged network adapter (CNA or C-NIC) combines the functions of a traditional NIC with those of a host bus adapter (HBA) for SAN connectivity, using Fibre Channel over Ethernet (FCoE), iSCSI, or RDMA over Converged Ethernet. Such adapters are offered by a number of vendors, including HP, Dell, QLogic, Broadcom and others.

So what are the advantages of convergence? You no longer have to run your Fibre Channel or iSCSI network separately from your Ethernet network, which saves money and simplifies management. The equipment takes up less space in the data center, uses less power, and generates less heat, which in turn lowers the need for cooling. You also need less cabling and fewer switches.

DCB and Fibre Channel

Gigabit Ethernet has become very affordable, and the prices for 10GigE are falling. That makes Ethernet a very cost-effective networking fabric. Fibre Channel is expensive in comparison to Ethernet, and no longer has a performance advantage, but companies that already have FC SANs don’t want to scrap that hardware, in which they have already invested considerable money.

Microsoft has also built into Windows Server 2012 the ability to connect to Fibre Channel directly from within Hyper-V virtual machines so you can support virtual workloads with your existing Fibre Channel storage devices. This further helps to extend the usability of your organization’s investments in Fibre Channel. You can find out more about that in the Hyper-V Virtual Fibre Channel Overview on the TechNet website.

With Fibre Channel over Ethernet (FCoE), the Fibre Channel frames are encapsulated within Ethernet frames. One problem with Fibre Channel is that it’s “finicky”: it requires a reliable network with very little (or ideally no) packet loss. That means there has to be a way to reduce or eliminate the packet loss that normally occurs on Ethernet networks when they’re congested. With DCB, organizations can create an Ethernet-based converged network that can communicate with the FC SAN.

DCB and iSCSI

What about iSCSI? This is a protocol for carrying SCSI traffic over an IP network on Ethernet, generally for shared storage. It usually costs less than Fibre Channel, so it’s especially attractive to organizations that need expanded storage on a tight budget. Deployment is generally not quite as complex, either, which can save money indirectly in terms of administrative overhead and/or consulting fees.

DCB can make an iSCSI deployment more reliable and predictable, and its bandwidth allocation capabilities enhance performance. DCB makes it easier to upgrade to 10GigE, converge the network traffic, and maintain performance for applications, and it’s very scalable, from small business to enterprise level. You’ll need a network adapter (or iSCSI host bus adapter) that supports iSCSI over DCB, a storage array that supports it, and finally a DCB-capable Ethernet switch.
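
To give a concrete flavor of what that looks like on the Windows Server 2012 side, here is a minimal sketch using the built-in iSCSI filter of the NetQos cmdlets to tag iSCSI traffic with a DCB priority. The policy name and the priority value 4 are placeholder choices, not recommendations; they must match whatever priority your switch and storage array are configured to treat as the iSCSI class.

  # Classify iSCSI traffic (TCP/UDP port 3260) and tag it with 802.1p priority 4
  New-NetQosPolicy "iSCSI" -iSCSI -PriorityValue8021Action 4

  # List the QoS policies now in effect on this server
  Get-NetQosPolicy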

DCB and RDMA

RDMA over Converged Ethernet (RoCE) is a link layer protocol by which two hosts can communicate using remote direct memory access over an Ethernet network, for shared storage or cluster computing. RDMA can also be done over an InfiniBand network, another point-to-point architecture similar to Fibre Channel that offers high bandwidth and low latency, but InfiniBand is not routable and doesn’t scale well. It’s also less familiar to most IT pros than Ethernet, which they’ve been working with for decades. RoCE is used primarily in High Performance Computing (HPC) environments (supercomputing).
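
On a Windows Server 2012 host, RDMA (NetworkDirect) traffic can be mapped to a DCB priority in much the same way as other traffic types. A minimal sketch follows; the port number 445 (used by SMB Direct, for example) and the priority value 3 are assumptions you would adapt to your own environment.

  # Tag NetworkDirect (RDMA) traffic on port 445 with 802.1p priority 3
  New-NetQosPolicy "RDMA" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3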

How DCB works

With DCB, the different types of traffic can run across the same physical media and you can manage them according to priorities. DCB produces a “lossless” environment; that is, no data frames are dropped, thanks to the use of flow control. This increases overall performance because there is no longer a need to retransmit lost frames, which can slow everything down.

As mentioned above, the four protocols that make up DCB create an Ethernet infrastructure that’s suitable for Fibre Channel and iSCSI:

Priority-based flow control, as its name implies, allows you to implement flow control on a per-priority basis. It works at the MAC Control sublayer and adds fields to the PAUSE frame so that transmission can be paused for specific priorities (traffic classes) while the rest keep flowing.
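
In Windows Server 2012 this is controlled per priority with the NetQos cmdlets. A minimal sketch, assuming you have chosen priority 3 for your lossless storage traffic (the number itself is just an example and must match what the switch expects):

  # Enable lossless behavior (per-priority PAUSE) only for priority 3
  Enable-NetQosFlowControl -Priority 3

  # Leave the remaining priorities as ordinary, lossy Ethernet
  Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

  # Confirm which priorities currently have flow control enabled
  Get-NetQosFlowControl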

Enhanced Transmission Selection lets you allocate bandwidth according to priorities. It also adds a new field, the Priority Group ID (PGID). Priorities can be assigned to a PGID and you can allocate bandwidth, on a percentage basis, to each PGID, so each group of traffic gets a predictable share of the link.
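
In Windows Server 2012, that allocation is expressed as traffic classes. The sketch below carves out example percentages for two hypothetical traffic types; the names, priorities and percentages are illustrative only, and whatever isn’t explicitly allocated stays in the default class.

  # Give traffic tagged with priority 3 a 40% minimum bandwidth share
  New-NetQosTrafficClass "Storage" -Priority 3 -Algorithm ETS -BandwidthPercentage 40

  # Give traffic tagged with priority 5 a 20% minimum bandwidth share
  New-NetQosTrafficClass "Backup" -Priority 5 -Algorithm ETS -BandwidthPercentage 20

  # Everything else falls into the default traffic class; review the result
  Get-NetQosTrafficClass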

Congestion notification transmits congestion information and calculates a measure of the degree of network congestion; it uses algorithms to reduce transmission rates upstream to avert secondary bottlenecks. There are two algorithms involved: CP (Congestion Point Dynamics) and RP (Reaction Point Dynamics, also known as a Rate Limiter).

Data Center Bridging Capabilities Exchange (DCBX) lets peers exchange configuration data and can also detect misconfigurations.
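
Windows Server 2012 exposes one DCBX-related choice through PowerShell: whether the server is “willing” to accept DCB settings pushed down from the switch, or sticks with its locally defined configuration. A quick sketch:

  # Let the server accept DCB configuration (traffic classes, PFC settings)
  # advertised by the switch through DCBX
  Set-NetQosDcbxSetting -Willing $true

  # Check the current setting; use -Willing $false to enforce local settings instead
  Get-NetQosDcbxSetting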

You can find much more detailed explanations of how each of these protocols work in this Data Center Bridging Tutorial from the University of New Hampshire.

Microsoft’s DCB solution

There are plenty of DCB solutions out there from different vendors. Windows Server’s implementation of DCB provides some distinct advantages. First, of course, is the fact that it’s a part of the Windows Server operating system and so you don’t have to buy a separate solution. Since it’s based on the IEEE standards, you get interoperability that you might not get with all proprietary DCB solutions. Note that you will need DCB-enabled Ethernet NICs and DCB-capable switches on the network in order to deploy Windows Server 2012 DCB.

Enabling DCB in Server 2012

To enable Data Center Bridging through the GUI on a Windows Server 2012 machine with a DCB-capable network adapter, open Server Manager and perform the following steps (the PowerShell equivalent appears after the list):

  1. Click Add Roles and Features.
  2. Select Role-based or feature-based installation.
  3. In the Select destination server dialog box, click Select a server from the server pool.
  4. In the Select server roles dialog box, click Next.
  5. In the Features list of the Select Features dialog box, check the box next to Data Center Bridging, then complete the wizard to install the feature.
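
For those who prefer the command line, the same feature can be added with a single cmdlet from an elevated PowerShell prompt:

  # Install the Data Center Bridging feature
  Install-WindowsFeature -Name Data-Center-Bridging

  # Verify that it is now installed
  Get-WindowsFeature -Name Data-Center-Bridging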

As mentioned, Microsoft has been slow to produce documentation for Windows Server 2012 DCB, but we hope that will be remedied in the near future.

Meanwhile, PowerShell fans will also be happy to know that the Windows Server 2012 implementation of DCB can be installed and configured via PowerShell cmdlets. You can find more information on that in the DCB Windows PowerShell User Scripting Guide on the TechNet website.
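
As a rough sketch of where such a configuration ends up: once policies, flow control and traffic classes are defined (as in the earlier snippets), they are applied to a DCB-capable adapter and then verified. The adapter name "Ethernet 2" below is a placeholder for whatever Get-NetAdapter reports on your server.

  # Apply the DCB/QoS configuration to a specific DCB-capable adapter
  Enable-NetAdapterQos -Name "Ethernet 2"

  # Review the traffic classes and flow control settings the adapter
  # hardware reports as operational
  Get-NetAdapterQos -Name "Ethernet 2"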

Summary

Data Center Bridging is an important technology for businesses that want to increase performance, decrease administrative overhead and leverage existing Fibre Channel or iSCSI mass storage assets. Microsoft’s implementation of DCB in Windows Server 2012 will make it easier and less expensive to deploy on Windows-based networks.


This article series is reposted from the WindowsNetworking.com website, http://www.windowsnetworking.com


