Some Notes on the Havana OpenStack VMware Plugin (by quqi99)

Author: Zhang Hua  Published: 2013-12-04
Copyright: you may repost this freely, but please attribute the original source and author with a hyperlink and keep this copyright notice.

( http://blog.csdn.net/quqi99 )


I have never worked with VMware, nor read the Nova VMware driver code, but today a colleague asked me whether the Nova VMware driver can use Neutron to create VLAN networks. After taking in their feedback and reasoning it through, my understanding is as follows.

First, Dan (a former Neutron PTL) said the following at https://communities.vmware.com/message/2315081 :
The only Neutron plugin that is validated to work with vSphere is the Nicira NVP (now renamed to VMware NSX) plugin.  With other plugins like the OVS plugin, the Quantum calls will succeed and an IP will be allocated, but the underlying "plumbing" won't work correctly, which is why DHCP traffic is not getting through.
 
If you wanted to use the OVS plugin in VLAN mode and only create a single network, it is technically possible to make DHCP work, as you could map the br-int port group to the same VLAN that the OVS plugin is putting the VM + DHCP server traffic on to, but this is really only a hack for very limited scenarios.
 
The Nicira NVP / VMware NSX plugin is available only via VMware and requires direct engagement with the VMware networking team (i.e., no generally available trial download).  You can send me a private message via this communities page if you have a near term production deployment opportunity and would like to be put in touch with the VMware networking team.
 
In the future, we are considering a basic Neutron plugin that uses vsphere port-groups, though the value of such a model is somewhat limited, as it won't support many key features, including security groups.  

You also need to know the three ways of VLAN tagging in VMware: VGT (tagging inside the guest), VST (tagging on the VMware virtual switch), and EST (tagging on the external physical switch); see my earlier post http://blog.csdn.net/quqi99/article/details/8727130 . A quick look at the nova vmware driver code shows that the tag is clearly applied on the virtual switch (the tagged object in VMware is called a port group), i.e. VST mode.

The concrete conclusions:
1. nova-network can call the vCenter API to apply the tag.
2. Neutron NVP can also call the vCenter API to apply the tag, but this requires the VMware NSX SDN controller, which is commercial software; obviously we would prefer to use OVS.
3. If we use the Neutron ML2 OVS plugin, the OVS agent clearly cannot call the vCenter API to apply the tag automatically (i.e. create the port group), so at first sight we would need to write an extra VMware agent to do this automatically.


Implementing this VMware agent is quite simple. Here are the concrete steps.
Step 1: be very clear about what the agent actually does. To explain:

1. Creating a network and subnet merely records them in the DB; nothing else happens.

2. The physical taps and bridges for internal and external networks are created by l3-agent: its internal_network_added and external_gateway_added methods call plug() in interface.py.

3. A VM's tap and bridge are created by nova-compute at spawn time via plug() in vif.py.

4. After the actual physical taps and bridges have been created per points 2 and 3, the neutron agent (running on the l3-agent and nova-compute nodes) handles the data-plane abstraction for each tap: it detects that a new tap has appeared on br-int, fetches the port information associated with that tap from the DB, and then performs the port-related setup, such as VLAN flow rules and security-group iptables rules, as sketched below.
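
To make point 4 concrete, here is a minimal sketch of the polling loop such an L2 agent runs. The helper names are hypothetical stand-ins: the real OVS agent shells out to ovs-vsctl to list ports and fetches port details from neutron-server over RPC.

import time

def list_bridge_ports(bridge):
    # Stand-in for "ovs-vsctl list-ports br-int".
    return set()

def get_device_details(tap_name):
    # Stand-in for the agent's RPC call to neutron-server, which returns
    # the port info recorded in the DB (network_type, segmentation_id, ...).
    return {'network_type': 'vlan', 'segmentation_id': 122}

def port_bound(tap_name, details):
    # Where the real agent programs local-VLAN flow rules and
    # security-group iptables rules for the port.
    print('binding %s to vlan %s' % (tap_name, details['segmentation_id']))

def daemon_loop(bridge='br-int', interval=2):
    known = set()
    while True:
        current = list_bridge_ports(bridge)
        # taps newly plugged by l3-agent (gateways) or nova-compute (VMs)
        for tap in current - known:
            port_bound(tap, get_device_details(tap))
        known = current
        time.sleep(interval)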


Step 2: much of the code already exists.
First, understand that l3-agent provides the gateway, so a VMware VM can reach the outside through the gateway port on Open vSwitch. But when we use the OVS agent, it never calls the vCenter API to set up the VLAN (i.e. the port group), which is why we would need to write this VMware agent. We could of course copy the OVS agent code and, at the point where it sets the VLAN, call the vCenter API to create the port group instead. Since the OVS agent code is rather complex, though, it is better to use the Hyper-V agent as the skeleton and move the necessary functions over from the OVS agent. As for how to call the vCenter SOAP API (wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl) to set up port groups, nova-network already contains that code ($nova/virt/vmwareapi/vim.py); we only need to move the ensure_vlan_bridge method from $nova/virt/vmwareapi/vif.py into the agent's port_bound method, under its "if network_type == p_const.TYPE_VLAN:" branch.
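
A hedged sketch of that relocated branch in the hypothetical VMware agent follows; the signatures are assumptions, and ensure_vlan_bridge stands for the function moved over from nova.

from neutron.plugins.common import constants as p_const


def ensure_vlan_bridge(session, vlan_id, port_group, cluster=None,
                       create_vlan=True):
    # Stand-in for the function moved out of nova/virt/vmwareapi/vif.py;
    # the real one calls network_util.create_port_group() over the vCenter
    # SOAP session (see Appendix 2).
    pass


class VMwareAgent(object):
    """Hypothetical agent skeleton modeled on the Hyper-V agent."""

    def __init__(self, session, cluster=None):
        self._session = session      # a VMwareAPISession
        self._cluster = cluster

    def port_bound(self, port_id, network_type, segmentation_id, port_group):
        if network_type == p_const.TYPE_VLAN:
            # Instead of programming OVS flows, ask vCenter to make sure a
            # port group tagged with this VLAN exists.
            ensure_vlan_bridge(self._session, segmentation_id, port_group,
                               cluster=self._cluster, create_vlan=True)

But this approach runs into a problem: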

With OVS, the nova-compute VIF driver normally creates the physical tap and bridge and plugs the tap into the bridge;
the neutron OVS agent merely notices the newly added tap, fetches the associated port information from the DB, and completes the VLAN and security-group setup.

With ESXi port groups, this create-tap, create-bridge, plug-tap-into-bridge three-step dance simply does not exist. So we do not actually need to write a VMware agent:
we can keep using the OVS agent to handle the gateway ports created by l3, and only need to write a VMware mech driver.

There is one precondition for using it: every ESXi host must use the same physical NIC (vlan_interface=vmnic0). To use different NICs on different hosts, you would have to carry that information in the binding:profile field of the port binding inside the mech driver. And once the port-binding feature is involved (port-binding is how nova and neutron exchange data), a VMware neutron agent becomes unavoidable after all, because otherwise the host_agents lookup in the AgentMechanismDriverBase class cannot succeed. A sketch of reading such a profile follows.
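
A hedged illustration of how the mech driver could honor a per-host NIC passed through binding:profile; the vlan_interface key inside the profile is purely hypothetical, while the fallback option is the one registered in Appendix 2.

from oslo.config import cfg

from neutron.extensions import portbindings


def _vlan_interface_for_port(context):
    # binding:profile is the free-form dict nova and neutron use to
    # exchange binding data; fall back to the global option when the
    # hypothetical per-host key is absent.
    profile = context.current.get(portbindings.PROFILE) or {}
    return profile.get('vlan_interface', cfg.CONF.vmware.vlan_interface)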


If you do not want to move that much nova-network code into neutron, a small change on the nova side can also make vCenter work with neutron (untested).

Changing it on the neutron side means the port group is created when the network is created; changing it on the nova side means the port group is created only when each VM is booted.

[hua@zhhuabj nova]$ git diff nova/network/neutronv2/api.py nova/virt/vmwareapi/vif.py
diff --git a/nova/network/neutronv2/api.py b/nova/network/neutronv2/api.py
index 7596567..c591760 100644
--- a/nova/network/neutronv2/api.py
+++ b/nova/network/neutronv2/api.py
@@ -199,6 +199,7 @@ class API(base.Base):
             are already formatted for the quantum v2 api.
             See nova/virt/driver.py:dhcp_options_for_instance for an example.
         """
+#         import pydevd;pydevd.settrace()
         hypervisor_macs = kwargs.get('macs', None)
         available_macs = None
         if hypervisor_macs is not None:
@@ -951,6 +952,8 @@ class API(base.Base):
         for net in networks:
             if port['network_id'] == net['id']:
                 network_name = net['name']
+                should_create_vlan = (net['provider:network_type'] == 'vlan')
+                vlan = net['provider:segmentation_id']

                 break

         bridge = None
@@ -975,7 +978,9 @@ class API(base.Base):
             bridge=bridge,
             injected=CONF.flat_injected,
             label=network_name,
-            tenant_id=net['tenant_id']
+            tenant_id=net['tenant_id'],
+            should_create_vlan=should_create_vlan,
+            vlan=vlan
             )
         network['subnets'] = subnets
         port_profile = port.get('binding:profile')
diff --git a/nova/virt/vmwareapi/vif.py b/nova/virt/vmwareapi/vif.py
index c4bd19e..31a4d63 100644
--- a/nova/virt/vmwareapi/vif.py
+++ b/nova/virt/vmwareapi/vif.py
@@ -52,10 +52,9 @@ def _get_associated_vswitch_for_interface(session, interface, cluster=None):
     return vswitch_associated


-def ensure_vlan_bridge(session, vif, cluster=None, create_vlan=True):
+def ensure_vlan_bridge(session, vif, cluster=None, create_vlan=True, bridge=None):
     """Create a vlan and bridge unless they already exist."""
     vlan_num = vif['network'].get_meta('vlan')
-    bridge = vif['network']['bridge']
     vlan_interface = CONF.vmware.vlan_interface

     network_ref = network_util.get_network_with_the_name(session, bridge,
@@ -143,11 +142,11 @@ def get_neutron_network(session, network_name, cluster, vif):

def get_network_ref(session, cluster, vif, is_neutron):
     if is_neutron:
-        network_name = (vif['network']['bridge'] or
-                        CONF.vmware.integration_bridge)
-        network_ref = get_neutron_network(session, network_name, cluster, vif)
+        port_group_name = vif['network']['label']
     else:
-        create_vlan = vif['network'].get_meta('should_create_vlan', False)
-        network_ref = ensure_vlan_bridge(session, vif, cluster=cluster,
-                                         create_vlan=create_vlan)
+        port_group_name = vif['network']['bridge']
+    create_vlan = vif['network'].get_meta('should_create_vlan', False)
+    network_ref = ensure_vlan_bridge(session, vif, cluster=cluster,
+                                     create_vlan=create_vlan,
+                                     bridge=port_group_name)
     return network_ref



If we instead move the code into neutron, the prototype looks like the following (untested); the complete untested draft patch is attached as Appendix 2 (all of the code was lifted from nova-network in ten-odd minutes and has never been run; it only illustrates the principle).

class VMwareVCMechanismDriver(mech_agent.AgentMechanismDriverBase):
    """Attach to networks using openvswitch L2 agent.

    The VMwareVCMechanismDriver integrates the ml2 plugin with the
    openvswitch L2 agent.
    """

    def __init__(self):
        super(VMwareVCMechanismDriver, self).__init__(
            constants.AGENT_TYPE_OVS,
            portbindings.VIF_TYPE_OVS,
            True)
        self._session = VMwareAPISession(scheme='https')
        self._cluster = cfg.CONF.VMWARE.cluster_name

    def check_segment_for_agent(self, segment, agent):
        return False

    def create_network_postcommit(self, context):
        """Provision the network on the vmware vCenter."""

        network = context.current
        vif.ensure_vlan_bridge(self._session, network,
                               cluster=self._cluster, create_vlan=True)
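
To wire such a driver into ML2, it would be listed alongside the OVS driver in ml2_conf.ini; the vmware alias below is a hypothetical entry-point name, and the [VMWARE] options are the ones registered in Appendix 2:

[ml2]
mechanism_drivers = openvswitch,vmware

[VMWARE]
host_ip = <vcenter-ip>
host_username = <user>
host_password = <password>
cluster_name = <cluster-name>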


Complete Operation Process:

Related components:
vCenter driver (nova-compute): run the vCenter driver "nova.virt.vmwareapi.driver.VMwareVCDriver" on any host to control vCenter; it uses the ESXi vSwitch to provide the L2 function.
vmware mech driver: creates the port group, so that the OVS bridge on the l3-agent (ovs) node and the VMware vSwitch on ESXi can interconnect via the standard 802.1Q VLAN protocol.
l3-agent with the OVS agent: provides the L3 function for VMware VMs.

Precondition:
1. Manually create an ESXi vSwitch named br-int backed by the physical NIC vmnic0; the related configuration is as below:
   integration_bridge=br-int    # default port group name
   vlan_interface=vmnic0        # physical NIC name

   esxcfg-vswitch -d vSwitch0
   esxcfg-vswitch -a vSwitch0
   esxcfg-vswitch --link=vmnic0 vSwitch0

Process:
1. Create the network and subnet; neutron-api records the following info in the DB.
   neutron net-create net_vlan --tenant_id=$TENANT_ID  --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 122
   neutron subnet-create --tenant_id $TENANT_ID --ip_version 4 --gateway 10.0.1.1 net_vlan 10.0.1.0/24

   The related configuration is as below:
     tenant_network_type=vlan
     network_vlan_ranges = physnet1:1:4094
     bridge_mappings = physnet1:br-phy
2. After the network and subnet are created, the vmware mech driver also creates the corresponding port group (network) on the VMware side.
3. After the network and subnet are created, l3-agent creates an OVS port as the gateway for the subnet and plugs it into the OVS bridge br-phy when it syncs its routers.
4. VMwareVCDriver retrieves the port group (network) info via the get_network_ref method in $nova/virt/vmwareapi/vif.py when a VM is created with the command "nova boot --flavor 1 --image <image-uuid> --nic net-id=<$NETWORK_ID> vm1".


Appendix 1: manipulating port groups with ESXi commands

1. List the vSwitches:
esxcfg-vswitch -l
2. Delete and re-add a vSwitch, then attach the physical NICs vmnic3 and vmnic4 to it:
esxcfg-vswitch -d MainGuestVirtualSwitch
esxcfg-vswitch -a MainGuestVirtualSwitch
esxcfg-vswitch --link=vmnic3 MainGuestVirtualSwitch
esxcfg-vswitch --link=vmnic4 MainGuestVirtualSwitch
3. Add two port groups:
esxcfg-vswitch --add-pg=PrivateNetwork MainGuestVirtualSwitch
esxcfg-vswitch --add-pg=ShopFloor MainGuestVirtualSwitch
4. Associate each port group with a VLAN ID; the port group then acts as a friendly network name:
esxcfg-vswitch --vlan=334 --pg=PrivateNetwork MainGuestVirtualSwitch
esxcfg-vswitch --vlan=332 --pg=ShopFloor MainGuestVirtualSwitch
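
For comparison, the same port-group creation expressed through the SOAP session code from Appendix 2 would look roughly like this (untested; create_port_group is the helper in network_util.py below):

from neutron.plugins.ml2.drivers.vmware import network_util
from neutron.plugins.ml2.drivers.vmware.apisession import VMwareAPISession

session = VMwareAPISession(scheme='https')
# Equivalent of --add-pg=PrivateNetwork plus --vlan=334 in a single call:
network_util.create_port_group(session, 'PrivateNetwork',
                               'MainGuestVirtualSwitch', vlan_id=334)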


Appendix 2: untested code, moved out of nova-network, that a VMware mech driver would need

commit 26c130145dfe1f6c96026eb62024a87f3537cde7
Author: zhhuabj <zhhuabj@XXX.com>
Date:   Fri Jan 10 22:54:38 2014 +0800

    The Draft of VMware vCenter Mech Driver
    
    Change-Id: I6d2bbdd38df74d050bbbe677549f32655fb838fd

diff --git a/neutron/plugins/ml2/drivers/vmware/__init__.py b/neutron/plugins/ml2/drivers/vmware/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/neutron/plugins/ml2/drivers/vmware/apisession.py b/neutron/plugins/ml2/drivers/vmware/apisession.py
new file mode 100644
index 0000000..52b6b87
--- /dev/null
+++ b/neutron/plugins/ml2/drivers/vmware/apisession.py
@@ -0,0 +1,224 @@
+import time
+
+from eventlet import event
+from oslo.config import cfg
+
+from neutron.common import exceptions as exception
+from neutron.openstack.common.gettextutils import _
+from neutron.openstack.common import log as logging
+from neutron.openstack.common import loopingcall
+from neutron.plugins.ml2.drivers.vmware import error_util
+from neutron.plugins.ml2.drivers.vmware import vim
+from neutron.plugins.ml2.drivers.vmware import vim_util
+
+
+LOG = logging.getLogger(__name__)
+
+vmwareapi_opts = [
+    cfg.StrOpt('host_ip',
+               help='URL for connection to VMware ESX/VC host.'),
+    cfg.StrOpt('host_username',
+               help='Username for connection to VMware ESX/VC host.'),
+    cfg.StrOpt('host_password',
+               help='Password for connection to VMware ESX/VC host.',
+               secret=True),
+    cfg.MultiStrOpt('cluster_name',
+               help='Name of a VMware Cluster ComputeResource. Used only if '
+                    'compute_driver is vmwareapi.VMwareVCDriver.'),
+    cfg.IntOpt('api_retry_count',
+               default=10,
+               help='The number of times we retry on failures, e.g., '
+                    'socket error, etc.'),
+    cfg.FloatOpt('task_poll_interval',
+                 default=5.0,
+                 help='The interval in seconds used for polling of remote '
+                      'tasks.'),
+    ]
+
+CONF = cfg.CONF
+CONF.register_opts(vmwareapi_opts, 'VMWARE')
+
+TIME_BETWEEN_API_CALL_RETRIES = 2.0
+
+
+class VMwareAPISession(object):
+    """
+    Sets up a session with the VC/ESX host and handles all
+    the calls made to the host.
+    """
+
+    def __init__(self, host_ip=CONF.VMWARE.host_ip,
+                 username=CONF.VMWARE.host_username,
+                 password=CONF.VMWARE.host_password,
+                 retry_count=CONF.VMWARE.api_retry_count,
+                 scheme="https"):
+        self._host_ip = host_ip
+        self._host_username = username
+        self._host_password = password
+        self._api_retry_count = retry_count
+        self._scheme = scheme
+        self._session_id = None
+        self.vim = None
+        self._create_session()
+
+    def _get_vim_object(self):
+        """Create the VIM Object instance."""
+        return vim.Vim(protocol=self._scheme, host=self._host_ip)
+
+    def _create_session(self):
+        """Creates a session with the VC/ESX host."""
+
+        delay = 1
+
+        while True:
+            try:
+                # Login and setup the session with the host for making
+                # API calls
+                self.vim = self._get_vim_object()
+                session = self.vim.Login(
+                               self.vim.get_service_content().sessionManager,
+                               userName=self._host_username,
+                               password=self._host_password)
+                # Terminate the earlier session, if possible ( For the sake of
+                # preserving sessions as there is a limit to the number of
+                # sessions we can have )
+                if self._session_id:
+                    try:
+                        self.vim.TerminateSession(
+                                self.vim.get_service_content().sessionManager,
+                                sessionId=[self._session_id])
+                    except Exception as excep:
+                        # This exception is something we can live with. It is
+                        # just an extra caution on our side. The session may
+                        # have been cleared. We could have made a call to
+                        # SessionIsActive, but that is an overhead because we
+                        # anyway would have to call TerminateSession.
+                        LOG.debug(excep)
+                self._session_id = session.key
+                return
+            except Exception as excep:
+                LOG.critical(_("Unable to connect to server at %(server)s, "
+                    "sleeping for %(seconds)s seconds"),
+                    {'server': self._host_ip, 'seconds': delay})
+                time.sleep(delay)
+                delay = min(2 * delay, 60)
+
+    def __del__(self):
+        """Logs-out the session."""
+        if hasattr(self, 'vim') and self.vim:
+            self.vim.Logout(self.vim.get_service_content().sessionManager)
+
+    def _is_vim_object(self, module):
+        """Check if the module is a VIM Object instance."""
+        return isinstance(module, vim.Vim)
+
+    def _call_method(self, module, method, *args, **kwargs):
+        """
+        Calls a method within the module specified with
+        args provided.
+        """
+        args = list(args)
+        retry_count = 0
+        exc = None
+        last_fault_list = []
+        while True:
+            try:
+                if not self._is_vim_object(module):
+                    # If it is not the first try, then get the latest
+                    # vim object
+                    if retry_count > 0:
+                        args = args[1:]
+                    args = [self.vim] + args
+                retry_count += 1
+                temp_module = module
+
+                for method_elem in method.split("."):
+                    temp_module = getattr(temp_module, method_elem)
+
+                return temp_module(*args, **kwargs)
+            except error_util.VimFaultException as excep:
+                # If it is a Session Fault Exception, it may point
+                # to a session gone bad. So we try re-creating a session
+                # and then proceeding ahead with the call.
+                exc = excep
+                if error_util.FAULT_NOT_AUTHENTICATED in excep.fault_list:
+                    # Because of the idle session returning an empty
+                    # RetrievePropertiesResponse and also the same is returned
+                    # when there is say empty answer to the query for
+                    # VMs on the host ( as in no VMs on the host), we have no
+                    # way to differentiate.
+                    # So if the previous response was also an empty
+                    # and after creating a new session, we get the same empty
+                    # response, then we are sure of the response being supposed
+                    # to be empty.
+                    if error_util.FAULT_NOT_AUTHENTICATED in last_fault_list:
+                        return []
+                    last_fault_list = excep.fault_list
+                    self._create_session()
+                else:
+                    # No re-trying for errors for API call has gone through
+                    # and is the caller's fault. Caller should handle these
+                    # errors. e.g, InvalidArgument fault.
+                    break
+            except error_util.SessionOverLoadException as excep:
+                # For exceptions which may come because of session overload,
+                # we retry
+                exc = excep
+            except Exception as excep:
+                # If it is a proper exception, say not having furnished
+                # proper data in the SOAP call or the retry limit having
+                # exceeded, we raise the exception
+                exc = excep
+                break
+            # If retry count has been reached then break and
+            # raise the exception
+            if retry_count > self._api_retry_count:
+                break
+            time.sleep(TIME_BETWEEN_API_CALL_RETRIES)
+
+        LOG.critical(_("In vmwareapi:_call_method, "
+                     "got this exception: %s") % exc)
+        raise
+
+    def _get_vim(self):
+        """Gets the VIM object reference."""
+        if self.vim is None:
+            self._create_session()
+        return self.vim
+
+    def _wait_for_task(self, instance_uuid, task_ref):
+        """
+        Return a Deferred that will give the result of the given task.
+        The task is polled until it completes.
+        """
+        done = event.Event()
+        loop = loopingcall.FixedIntervalLoopingCall(self._poll_task,
+                                                    instance_uuid,
+                                                    task_ref, done)
+        loop.start(CONF.VMWARE.task_poll_interval)
+        ret_val = done.wait()
+        loop.stop()
+        return ret_val
+
+    def _poll_task(self, instance_uuid, task_ref, done):
+        """
+        Poll the given task, and fires the given Deferred if we
+        get a result.
+        """
+        try:
+            task_info = self._call_method(vim_util, "get_dynamic_property",
+                            task_ref, "Task", "info")
+            task_name = task_info.name
+            if task_info.state in ['queued', 'running']:
+                return
+            elif task_info.state == 'success':
+                LOG.debug(_("Task [%(task_name)s] %(task_ref)s "
+                            "status: success"),
+                          {'task_name': task_name, 'task_ref': task_ref})
+                done.send("success")
+            else:
+                error_info = str(task_info.error.localizedMessage)
+                LOG.warn(_("Task [%(task_name)s] %(task_ref)s "
+                          "status: error %(error_info)s"),
+                         {'task_name': task_name, 'task_ref': task_ref,
+                          'error_info': error_info})
+                done.send_exception(exception.NeutronException(error_info))
+        except Exception as excep:
+            LOG.warn(_("In vmwareapi:_poll_task, Got this error %s") % excep)
+            done.send_exception(excep)
diff --git a/neutron/plugins/ml2/drivers/vmware/error_util.py b/neutron/plugins/ml2/drivers/vmware/error_util.py
new file mode 100644
index 0000000..9b5a570
--- /dev/null
+++ b/neutron/plugins/ml2/drivers/vmware/error_util.py
@@ -0,0 +1,163 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2011 Citrix Systems, Inc.
+# Copyright 2011 OpenStack Foundation
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""
+Exception classes and SOAP response error checking module.
+"""
+
+from neutron.common import exceptions
+from neutron.openstack.common.gettextutils import _
+
+
+FAULT_NOT_AUTHENTICATED = "NotAuthenticated"
+FAULT_ALREADY_EXISTS = "AlreadyExists"
+
+
+class VimException(Exception):
+    """The VIM Exception class."""
+
+    def __init__(self, exception_summary, excep):
+        Exception.__init__(self)
+        self.exception_summary = exception_summary
+        self.exception_obj = excep
+
+    def __str__(self):
+        return self.exception_summary + str(self.exception_obj)
+
+
+class SessionOverLoadException(VimException):
+    """Session Overload Exception."""
+    pass
+
+
+class VimAttributeError(VimException):
+    """VI Attribute Error."""
+    pass
+
+
+class VimFaultException(Exception):
+    """The VIM Fault exception class."""
+
+    def __init__(self, fault_list, excep):
+        Exception.__init__(self)
+        self.fault_list = fault_list
+        self.exception_obj = excep
+
+    def __str__(self):
+        return str(self.exception_obj)
+
+
+class FaultCheckers(object):
+    """
+    Methods for fault checking of SOAP response. Per Method error handlers
+    for which we desire error checking are defined. SOAP faults are
+    embedded in the SOAP messages as properties and not as SOAP faults.
+    """
+
+    @staticmethod
+    def retrievepropertiesex_fault_checker(resp_obj):
+        """
+        Checks the RetrievePropertiesEx response for errors. Certain faults
+        are sent as part of the SOAP body as property of missingSet.
+        For example NotAuthenticated fault.
+        """
+        fault_list = []
+        if not resp_obj:
+            # This is the case when the session has timed out. ESX SOAP server
+            # sends an empty RetrievePropertiesResponse. Normally missingSet in
+            # the returnval field has the specifics about the error, but that's
+            # not the case with a timed out idle session. It is as bad as a
+            # terminated session for we cannot use the session. So setting
+            # fault to NotAuthenticated fault.
+            fault_list = ["NotAuthenticated"]
+        else:
+            for obj_cont in resp_obj.objects:
+                if hasattr(obj_cont, "missingSet"):
+                    for missing_elem in obj_cont.missingSet:
+                        fault_type = missing_elem.fault.fault.__class__
+                        # Fault needs to be added to the type of fault for
+                        # uniformity in error checking as SOAP faults define
+                        fault_list.append(fault_type.__name__)
+        if fault_list:
+            exc_msg_list = ', '.join(fault_list)
+            raise VimFaultException(fault_list, Exception(_("Error(s) %s "
+                    "occurred in the call to RetrievePropertiesEx") %
+                    exc_msg_list))
+
+
+class VMwareDriverException(exceptions.NeutronException):
+    """Base class for all exceptions raised by the VMware Driver.
+
+    All exceptions raised by the VMwareAPI drivers should raise
+    an exception descended from this class as a root. This will
+    allow the driver to potentially trap problems related to its
+    own internal configuration before halting the nova-compute
+    node.
+    """
+    msg_fmt = _("VMware Driver fault.")
+
+
+class VMwareDriverConfigurationException(VMwareDriverException):
+    """Base class for all configuration exceptions.
+    """
+    msg_fmt = _("VMware Driver configuration fault.")
+
+
+class UseLinkedCloneConfigurationFault(VMwareDriverConfigurationException):
+    msg_fmt = _("No default value for use_linked_clone found.")
+
+class NotFound(Exception):
+    msg_fmt = _("Resource could not be found.")
+    code = 404
+
+class DatastoreNotFound(NotFound):
+    msg_fmt = _("Could not find the datastore reference(s) which the VM uses.")
+
+class NoValidHost(Exception):
+    msg_fmt = _("No valid host was found. %(reason)s")
+
+class Invalid(Exception):
+    msg_fmt = _("Unacceptable parameters.")
+    code = 400
+
+class InstanceNotFound(NotFound):
+    ec2_code = 'InvalidInstanceID.NotFound'
+    msg_fmt = _("Instance %(instance_id)s could not be found.")
+
+class NetworkAdapterNotFound(NotFound):
+    msg_fmt = _("Network adapter %(adapter)s could not be found.")
+
+class SwitchNotFoundForNetworkAdapter(NotFound):
+    msg_fmt = _("Virtual switch associated with the "
+                "network adapter %(adapter)s not found.")
+
+class NetworkNotFound(NotFound):
+    msg_fmt = _("Network %(network_id)s could not be found.")
+
+class NetworkNotFoundForBridge(NetworkNotFound):
+    msg_fmt = _("Network could not be found for bridge %(bridge)s")
+
+class InvalidVLANTag(Invalid):
+    msg_fmt = _("VLAN tag is not appropriate for the port group "
+                "%(bridge)s. Expected VLAN tag is %(tag)s, "
+                "but the one associated with the port group is %(pgroup)s.")
+
+class InvalidVLANPortGroup(Invalid):
+    msg_fmt = _("vSwitch which contains the port group %(bridge)s is "
+                "not associated with the desired physical adapter. "
+                "Expected vSwitch is %(expected)s, but the one associated "
+                "is %(actual)s.")
diff --git a/neutron/plugins/ml2/drivers/vmware/mech_vmware.py b/neutron/plugins/ml2/drivers/vmware/mech_vmware.py
new file mode 100644
index 0000000..e8a36da
--- /dev/null
+++ b/neutron/plugins/ml2/drivers/vmware/mech_vmware.py
@@ -0,0 +1,56 @@
+# Copyright (c) 2014 OpenStack Foundation
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from oslo.config import cfg
+
+from neutron.common import constants
+from neutron.extensions import portbindings
+from neutron.openstack.common import log
+from neutron.plugins.ml2.drivers import mech_agent
+from neutron.plugins.ml2.drivers.vmware import vif
+from neutron.plugins.ml2.drivers.vmware.apisession import VMwareAPISession
+
+LOG = log.getLogger(__name__)
+
+cfg.CONF.import_opt('cluster_name',
+                    'neutron.plugins.ml2.drivers.vmware.apisession',
+                    'VMWARE')
+
+
+class VMwareVCMechanismDriver(mech_agent.AgentMechanismDriverBase):
+    """Attach to networks using openvswitch L2 agent.
+
+    The VMwareVCMechanismDriver integrates the ml2 plugin with the
+    openvswitch L2 agent.
+    """
+
+    def __init__(self):
+        super(VMwareVCMechanismDriver, self).__init__(
+            constants.AGENT_TYPE_OVS,
+            portbindings.VIF_TYPE_OVS,
+            True)
+        self._session = VMwareAPISession(scheme='https')
+        self._cluster = cfg.CONF.VMWARE.cluster_name
+
+    def check_segment_for_agent(self, segment, agent):
+        return False
+
+    def create_network_postcommit(self, context):
+        """Provision the network on the vmware vCenter."""
+
+        network = context.current
+        vif.ensure_vlan_bridge(self._session, network,
+                               cluster=self._cluster, create_vlan=True)
+
diff --git a/neutron/plugins/ml2/drivers/vmware/network_util.py b/neutron/plugins/ml2/drivers/vmware/network_util.py
new file mode 100644
index 0000000..dbdf0b1
--- /dev/null
+++ b/neutron/plugins/ml2/drivers/vmware/network_util.py
@@ -0,0 +1,183 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2012 VMware, Inc.
+# Copyright (c) 2011 Citrix Systems, Inc.
+# Copyright 2011 OpenStack Foundation
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""
+Utility functions for ESX Networking.
+"""
+
+import netaddr
+
+from neutron.common import exceptions as exception
+from neutron.openstack.common.gettextutils import _
+from neutron.openstack.common import log as logging
+from neutron.plugins.ml2.drivers.vmware import error_util
+from neutron.plugins.ml2.drivers.vmware import vim_util
+from neutron.plugins.ml2.drivers.vmware import vm_util
+
+LOG = logging.getLogger(__name__)
+
+
+def get_network_with_the_name(session, network_name="vmnet0", cluster=None):
+    """
+    Gets reference to the network whose name is passed as the
+    argument.
+    """
+    host = vm_util.get_host_ref(session, cluster)
+    if cluster is not None:
+        vm_networks_ret = session._call_method(vim_util,
+                                               "get_dynamic_property", cluster,
+                                               "ClusterComputeResource",
+                                               "network")
+    else:
+        vm_networks_ret = session._call_method(vim_util,
+                                               "get_dynamic_property", host,
+                                               "HostSystem", "network")
+
+    # Meaning there are no networks on the host. suds responds with a ""
+    # in the parent property field rather than a [] in the
+    # ManagedObjectReference property field of the parent
+    if not vm_networks_ret:
+        return None
+    vm_networks = vm_networks_ret.ManagedObjectReference
+    network_obj = {}
+    LOG.debug(vm_networks)
+    for network in vm_networks:
+        # Get network properties
+        if network._type == 'DistributedVirtualPortgroup':
+            props = session._call_method(vim_util,
+                        "get_dynamic_property", network,
+                        "DistributedVirtualPortgroup", "config")
+            # NOTE(asomya): This only works on ESXi if the port binding is
+            # set to ephemeral
+            if props.name == network_name:
+                network_obj['type'] = 'DistributedVirtualPortgroup'
+                network_obj['dvpg'] = props.key
+                dvs_props = session._call_method(vim_util,
+                                "get_dynamic_property",
+                                props.distributedVirtualSwitch,
+                                "VmwareDistributedVirtualSwitch", "uuid")
+                network_obj['dvsw'] = dvs_props
+        else:
+            props = session._call_method(vim_util,
+                        "get_dynamic_property", network,
+                        "Network", "summary.name")
+            if props == network_name:
+                network_obj['type'] = 'Network'
+                network_obj['name'] = network_name
+    if (len(network_obj) > 0):
+        return network_obj
+
+
+def get_vswitch_for_vlan_interface(session, vlan_interface, cluster=None):
+    """
+    Gets the vswitch associated with the physical network adapter
+    with the name supplied.
+    """
+    # Get the list of vSwicthes on the Host System
+    host_mor = vm_util.get_host_ref(session, cluster)
+    vswitches_ret = session._call_method(vim_util,
+                "get_dynamic_property", host_mor,
+                "HostSystem", "config.network.vswitch")
+    # Meaning there are no vSwitches on the host. Shouldn't be the case,
+    # but just doing code check
+    if not vswitches_ret:
+        return
+    vswitches = vswitches_ret.HostVirtualSwitch
+    # Get the vSwitch associated with the network adapter
+    for elem in vswitches:
+        try:
+            for nic_elem in elem.pnic:
+                if str(nic_elem).split('-')[-1].find(vlan_interface) != -1:
+                    return elem.name
+        # Catching Attribute error as a vSwitch may not be associated with a
+        # physical NIC.
+        except AttributeError:
+            pass
+
+
+def check_if_vlan_interface_exists(session, vlan_interface, cluster=None):
+    """Checks if the vlan_interface exists on the esx host."""
+    host_mor = vm_util.get_host_ref(session, cluster)
+    physical_nics_ret = session._call_method(vim_util,
+                "get_dynamic_property", host_mor,
+                "HostSystem", "config.network.pnic")
+    # Meaning there are no physical nics on the host
+    if not physical_nics_ret:
+        return False
+    physical_nics = physical_nics_ret.PhysicalNic
+    for pnic in physical_nics:
+        if vlan_interface == pnic.device:
+            return True
+    return False
+
+
+def get_vlanid_and_vswitch_for_portgroup(session, pg_name, cluster=None):
+    """Get the vlan id and vswicth associated with the port group."""
+    host_mor = vm_util.get_host_ref(session, cluster)
+    port_grps_on_host_ret = session._call_method(vim_util,
+                "get_dynamic_property", host_mor,
+                "HostSystem", "config.network.portgroup")
+    if not port_grps_on_host_ret:
+        msg = _("ESX SOAP server returned an empty port group "
+                "for the host system in its response")
+        LOG.error(msg)
+        raise exception.NeutronException(msg)
+    port_grps_on_host = port_grps_on_host_ret.HostPortGroup
+    for p_gp in port_grps_on_host:
+        if p_gp.spec.name == pg_name:
+            p_grp_vswitch_name = p_gp.vswitch.split("-")[-1]
+            return p_gp.spec.vlanId, p_grp_vswitch_name
+
+
+def create_port_group(session, pg_name, vswitch_name, vlan_id=0, cluster=None):
+    """
+    Creates a port group on the host system with the vlan tags
+    supplied. VLAN id 0 means no vlan id association.
+    """
+    client_factory = session._get_vim().client.factory
+    add_prt_grp_spec = vm_util.get_add_vswitch_port_group_spec(
+                    client_factory,
+                    vswitch_name,
+                    pg_name,
+                    vlan_id)
+    host_mor = vm_util.get_host_ref(session, cluster)
+    network_system_mor = session._call_method(vim_util,
+        "get_dynamic_property", host_mor,
+        "HostSystem", "configManager.networkSystem")
+    LOG.debug(_("Creating Port Group with name %s on "
+                "the ESX host") % pg_name)
+    try:
+        session._call_method(session._get_vim(),
+                "AddPortGroup", network_system_mor,
+                portgrp=add_prt_grp_spec)
+    except error_util.VimFaultException as exc:
+        # There can be a race condition when two instances try
+        # adding port groups at the same time. One succeeds, then
+        # the other one will get an exception. Since we are
+        # concerned with the port group being created, which is done
+        # by the other call, we can ignore the exception.
+        if error_util.FAULT_ALREADY_EXISTS not in exc.fault_list:
+            raise exception.NeutronException(exc)
+    LOG.debug(_("Created Port Group with name %s on "
+                "the ESX host") % pg_name)
+
+def is_valid_ipv6(address):
+    try:
+        return netaddr.valid_ipv6(address)
+    except Exception:
+        return False
diff --git a/neutron/plugins/ml2/drivers/vmware/vif.py b/neutron/plugins/ml2/drivers/vmware/vif.py
new file mode 100644
index 0000000..8e7dc95
--- /dev/null
+++ b/neutron/plugins/ml2/drivers/vmware/vif.py
@@ -0,0 +1,152 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2011 Citrix Systems, Inc.
+# Copyright 2011 OpenStack Foundation
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""VIF drivers for VMware."""
+
+from oslo.config import cfg
+
+from neutron.openstack.common.gettextutils import _
+from neutron.openstack.common import log as logging
+from neutron.plugins.ml2.drivers.vmware import error_util
+from neutron.plugins.ml2.drivers.vmware import network_util
+from neutron.plugins.ml2.drivers.vmware import vim_util
+from neutron.plugins.ml2.drivers.vmware import vm_util
+
+LOG = logging.getLogger(__name__)
+CONF = cfg.CONF
+
+vmwareapi_vif_opts = [
+    cfg.StrOpt('vlan_interface',
+               default='vmnic0',
+               help='Physical ethernet adapter name for vlan networking'),
+    cfg.StrOpt('integration_bridge',
+               default='br-int',
+               help='Name of the integration port group on the ESXi host'),
+]
+
+CONF.register_opts(vmwareapi_vif_opts, 'vmware')
+
+
+def _get_associated_vswitch_for_interface(session, interface, cluster=None):
+    # Check if the physical network adapter exists on the host.
+    if not network_util.check_if_vlan_interface_exists(session,
+                                        interface, cluster):
+        raise error_util.NetworkAdapterNotFound(adapter=interface)
+    # Get the vSwitch associated with the Physical Adapter
+    vswitch_associated = network_util.get_vswitch_for_vlan_interface(
+                                    session, interface, cluster)
+    if not vswitch_associated:
+        raise error_util.SwitchNotFoundForNetworkAdapter(adapter=interface)
+    return vswitch_associated
+
+
+def ensure_vlan_bridge(session, network, cluster=None, create_vlan=True):
+    """Create a vlan and bridge unless they already exist."""
+    vlan_num = network.get_meta('vlan')
+    bridge = network['bridge']
+    vlan_interface = CONF.vmware.vlan_interface
+
+    network_ref = network_util.get_network_with_the_name(session, bridge,
+                                                         cluster)
+    if network_ref and network_ref['type'] == 'DistributedVirtualPortgroup':
+        return network_ref
+
+    if not network_ref:
+        # Create a port group on the vSwitch associated with the
+        # vlan_interface corresponding physical network adapter on the ESX
+        # host.
+        vswitch_associated = _get_associated_vswitch_for_interface(session,
+                                 vlan_interface, cluster)
+        network_util.create_port_group(session, bridge,
+                                       vswitch_associated,
+                                       vlan_num if create_vlan else 0,
+                                       cluster)
+        network_ref = network_util.get_network_with_the_name(session,
+                                                             bridge,
+                                                             cluster)
+    elif create_vlan:
+        # Get the vSwitch associated with the Physical Adapter
+        vswitch_associated = _get_associated_vswitch_for_interface(session,
+                                 vlan_interface, cluster)
+        # Get the vlan id and vswitch corresponding to the port group
+        _get_pg_info = network_util.get_vlanid_and_vswitch_for_portgroup
+        pg_vlanid, pg_vswitch = _get_pg_info(session, bridge, cluster)
+
+        # Check if the vswitch associated is proper
+        if pg_vswitch != vswitch_associated:
+            raise error_util.InvalidVLANPortGroup(
+                bridge=bridge, expected=vswitch_associated,
+                actual=pg_vswitch)
+
+        # Check if the vlan id is proper for the port group
+        if pg_vlanid != vlan_num:
+            raise error_util.InvalidVLANTag(bridge=bridge, tag=vlan_num,
+                                       pgroup=pg_vlanid)
+    return network_ref
+
+
+def _is_valid_opaque_network_id(opaque_id, bridge_id, integration_bridge,
+                                num_networks):
+    return (opaque_id == bridge_id or
+            (num_networks == 1 and
+             opaque_id == integration_bridge))
+
+
+def _get_network_ref_from_opaque(opaque_networks, integration_bridge, bridge):
+    num_networks = len(opaque_networks)
+    for network in opaque_networks:
+        if _is_valid_opaque_network_id(network['opaqueNetworkId'], bridge,
+                                       integration_bridge, num_networks):
+            return {'type': 'OpaqueNetwork',
+                    'network-id': network['opaqueNetworkId'],
+                    'network-name': network['opaqueNetworkName'],
+                    'network-type': network['opaqueNetworkType']}
+    LOG.warning(_("No valid network found in %(opaque)s, from %(bridge)s "
+                  "or %(integration_bridge)s"),
+                {'opaque': opaque_networks, 'bridge': bridge,
+                 'integration_bridge': integration_bridge})
+
+
+def get_neutron_network(session, network_name, cluster, vif):
+    host = vm_util.get_host_ref(session, cluster)
+    try:
+        opaque = session._call_method(vim_util, "get_dynamic_property", host,
+                                      "HostSystem",
+                                      "config.network.opaqueNetwork")
+    except error_util.VimFaultException:
+        opaque = None
+    if opaque:
+        bridge = vif['network']['id']
+        opaque_networks = opaque.HostOpaqueNetworkInfo
+        network_ref = _get_network_ref_from_opaque(opaque_networks,
+                CONF.vmware.integration_bridge, bridge)
+    else:
+        bridge = network_name
+        network_ref = network_util.get_network_with_the_name(
+                session, network_name, cluster)
+    if not network_ref:
+        raise error_util.NetworkNotFoundForBridge(bridge=bridge)
+    return network_ref
+
+
+# def get_network_ref(session, cluster, vif, is_neutron):
+#     if is_neutron:
+#         network_name = (vif['network']['bridge'] or
+#                         CONF.vmware.integration_bridge)
+#         network_ref = get_neutron_network(session, network_name, cluster, vif)
+#     else:
+#         create_vlan = vif['network'].get_meta('should_create_vlan', False)
+#         network_ref = ensure_vlan_bridge(session, vif, cluster=cluster,
+#                                          create_vlan=create_vlan)
+#     return network_ref
diff --git a/neutron/plugins/ml2/drivers/vmware/vim.py b/neutron/plugins/ml2/drivers/vmware/vim.py
new file mode 100644
index 0000000..a77410c
--- /dev/null
+++ b/neutron/plugins/ml2/drivers/vmware/vim.py
@@ -0,0 +1,240 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2012 VMware, Inc.
+# Copyright (c) 2011 Citrix Systems, Inc.
+# Copyright 2011 OpenStack Foundation
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""
+Classes for making VMware VI SOAP calls.
+"""
+
+import httplib
+
+from oslo.config import cfg
+import suds
+
+from neutron.openstack.common.gettextutils import _
+from neutron.plugins.ml2.drivers.vmware import error_util
+from neutron.plugins.ml2.drivers.vmware import network_util
+
+RESP_NOT_XML_ERROR = 'Response is "text/html", not "text/xml"'
+CONN_ABORT_ERROR = 'Software caused connection abort'
+ADDRESS_IN_USE_ERROR = 'Address already in use'
+
+vmwareapi_wsdl_loc_opt = cfg.StrOpt('wsdl_location',
+        help='Optional VIM Service WSDL Location '
+             'e.g http://<server>/vimService.wsdl. '
+             'Optional over-ride to default location for bug work-arounds')
+
+CONF = cfg.CONF
+CONF.register_opt(vmwareapi_wsdl_loc_opt, 'vmware')
+
+
+def get_moref(value, type):
+    """Get managed object reference."""
+    moref = suds.sudsobject.Property(value)
+    moref._type = type
+    return moref
+
+
+def object_to_dict(obj, list_depth=1):
+    """Convert Suds object into serializable format.
+
+    The calling function can limit the amount of list entries that
+    are converted.
+    """
+    d = {}
+    for k, v in suds.sudsobject.asdict(obj).iteritems():
+        if hasattr(v, '__keylist__'):
+            d[k] = object_to_dict(v, list_depth=list_depth)
+        elif isinstance(v, list):
+            d[k] = []
+            used = 0
+            for item in v:
+                used = used + 1
+                if used > list_depth:
+                    break
+                if hasattr(item, '__keylist__'):
+                    d[k].append(object_to_dict(item, list_depth=list_depth))
+                else:
+                    d[k].append(item)
+        else:
+            d[k] = v
+    return d
+
+
+class VIMMessagePlugin(suds.plugin.MessagePlugin):
+    def addAttributeForValue(self, node):
+        # suds does not handle AnyType properly.
+        # VI SDK requires type attribute to be set when AnyType is used
+        if node.name == 'value':
+            node.set('xsi:type', 'xsd:string')
+
+    def marshalled(self, context):
+        """suds will send the specified soap envelope.
+        Provides the plugin with the opportunity to prune empty
+        nodes and fixup nodes before sending it to the server.
+        """
+        # suds builds the entire request object based on the wsdl schema.
+        # VI SDK throws server errors if optional SOAP nodes are sent
+        # without values, e.g. <test/> as opposed to <test>test</test>
+        context.envelope.prune()
+        context.envelope.walk(self.addAttributeForValue)
+
+
+class Vim:
+    """The VIM Object."""
+
+    def __init__(self,
+                 protocol="https",
+                 host="localhost"):
+        """
+        Creates the necessary Communication interfaces and gets the
+        ServiceContent for initiating SOAP transactions.
+
+        protocol: http or https
+        host    : ESX IPAddress[:port] or ESX Hostname[:port]
+        """
+        if not suds:
+            raise Exception(_("Unable to import suds."))
+
+        self._protocol = protocol
+        self._host_name = host
+        self.wsdl_url = Vim.get_wsdl_url(protocol, host)
+        self.url = Vim.get_soap_url(protocol, host)
+        self.client = suds.client.Client(self.wsdl_url, location=self.url,
+                                         plugins=[VIMMessagePlugin()])
+        self._service_content = self.retrieve_service_content()
+
+    def retrieve_service_content(self):
+        return self.RetrieveServiceContent("ServiceInstance")
+
+    @staticmethod
+    def get_wsdl_url(protocol, host_name):
+        """
+        allows override of the wsdl location, making this static
+        means we can test the logic outside of the constructor
+        without forcing the test environment to have multiple valid
+        wsdl locations to test against.
+
+        :param protocol: https or http
+        :param host_name: localhost or other server name
+        :return: string to WSDL location for vSphere WS Management API
+        """
+        # optional WSDL location over-ride for work-arounds
+        if CONF.vmware.wsdl_location:
+            return CONF.vmware.wsdl_location
+
+        # calculate default WSDL location if no override supplied
+        return Vim.get_soap_url(protocol, host_name) + "/vimService.wsdl"
+
+    @staticmethod
+    def get_soap_url(protocol, host_name):
+        """
+        Calculates the location of the SOAP services
+        for a particular server. Created as a static
+        method for testing.
+
+        :param protocol: https or http
+        :param host_name: localhost or other vSphere server name
+        :return: the url to the active vSphere WS Management API
+        """
+        if network_util.is_valid_ipv6(host_name):
+            return '%s://[%s]/sdk' % (protocol, host_name)
+        return '%s://%s/sdk' % (protocol, host_name)
+
+    def get_service_content(self):
+        """Gets the service content object."""
+        return self._service_content
+
+    def __getattr__(self, attr_name):
+        """Makes the API calls and gets the result."""
+        def vim_request_handler(managed_object, **kwargs):
+            """
+            Builds the SOAP message and parses the response for fault
+            checking and other errors.
+
+            managed_object    : Managed Object Reference or Managed
+                                Object Name
+            **kwargs          : Keyword arguments of the call
+            """
+            # Dynamic handler for VI SDK Calls
+            try:
+                request_mo = self._request_managed_object_builder(
+                             managed_object)
+                request = getattr(self.client.service, attr_name)
+                response = request(request_mo, **kwargs)
+                # To check for the faults that are part of the message body
+                # and not returned as Fault object response from the ESX
+                # SOAP server
+                if hasattr(error_util.FaultCheckers,
+                                attr_name.lower() + "_fault_checker"):
+                    fault_checker = getattr(error_util.FaultCheckers,
+                                attr_name.lower() + "_fault_checker")
+                    fault_checker(response)
+                return response
+            # Catch the VimFaultException that is raised by the fault
+            # check of the SOAP response
+            except error_util.VimFaultException:
+                raise
+            except suds.MethodNotFound:
+                raise
+            except suds.WebFault as excep:
+                doc = excep.document
+                detail = doc.childAtPath("/Envelope/Body/Fault/detail")
+                fault_list = []
+                for child in detail.getChildren():
+                    fault_list.append(child.get("type"))
+                raise error_util.VimFaultException(fault_list, excep)
+            except AttributeError as excep:
+                raise error_util.VimAttributeError(_("No such SOAP method "
+                     "'%s' provided by VI SDK") % (attr_name), excep)
+            except (httplib.CannotSendRequest,
+                    httplib.ResponseNotReady,
+                    httplib.CannotSendHeader) as excep:
+                raise error_util.SessionOverLoadException(_("httplib "
+                                "error in %s: ") % (attr_name), excep)
+            except Exception as excep:
+                # Socket errors which need special handling for they
+                # might be caused by ESX API call overload
+                if (str(excep).find(ADDRESS_IN_USE_ERROR) != -1 or
+                        str(excep).find(CONN_ABORT_ERROR) != -1):
+                    raise error_util.SessionOverLoadException(_("Socket "
+                                "error in %s: ") % (attr_name), excep)
+                # Type error that needs special handling for it might be
+                # caused by ESX host API call overload
+                elif str(excep).find(RESP_NOT_XML_ERROR) != -1:
+                    raise error_util.SessionOverLoadException(_("Type "
+                                "error in  %s: ") % (attr_name), excep)
+                else:
+                    raise error_util.VimException(
+                       _("Exception in %s ") % (attr_name), excep)
+        return vim_request_handler
+
+    def _request_managed_object_builder(self, managed_object):
+        """Builds the request managed object."""
+        # Request Managed Object Builder
+        if isinstance(managed_object, str):
+            mo = suds.sudsobject.Property(managed_object)
+            mo._type = managed_object
+        else:
+            mo = managed_object
+        return mo
+
+    def __repr__(self):
+        return "VIM Object"
+
+    def __str__(self):
+        return "VIM Object"
diff --git a/neutron/plugins/ml2/drivers/vmware/vim_util.py b/neutron/plugins/ml2/drivers/vmware/vim_util.py
new file mode 100644
index 0000000..1ec6cd5
--- /dev/null
+++ b/neutron/plugins/ml2/drivers/vmware/vim_util.py
@@ -0,0 +1,285 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2011 Citrix Systems, Inc.
+# Copyright 2011 OpenStack Foundation
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""
+The VMware API utility module.
+"""
+
+from oslo.config import cfg
+
+from neutron.openstack.common.gettextutils import _
+from neutron.openstack.common import log as logging
+
+vmware_opts = cfg.IntOpt('maximum_objects', default=100,
+                         help='The maximum number of ObjectContent data '
+                              'objects that should be returned in a single '
+                              'result. A positive value will cause the '
+                              'operation to suspend the retrieval when the '
+                              'count of objects reaches the specified '
+                              'maximum. The server may still limit the count '
+                              'to something less than the configured value. '
+                              'Any remaining objects may be retrieved with '
+                              'additional requests.')
+CONF = cfg.CONF
+CONF.register_opt(vmware_opts, 'vmware')
+LOG = logging.getLogger(__name__)
+
+
+def build_selection_spec(client_factory, name):
+    """Builds the selection spec."""
+    sel_spec = client_factory.create('ns0:SelectionSpec')
+    sel_spec.name = name
+    return sel_spec
+
+
+def build_traversal_spec(client_factory, name, spec_type, path, skip,
+                         select_set):
+    """Builds the traversal spec object."""
+    traversal_spec = client_factory.create('ns0:TraversalSpec')
+    traversal_spec.name = name
+    traversal_spec.type = spec_type
+    traversal_spec.path = path
+    traversal_spec.skip = skip
+    traversal_spec.selectSet = select_set
+    return traversal_spec
+
+
+def build_recursive_traversal_spec(client_factory):
+    """
+    Builds the Recursive Traversal Spec to traverse the object managed
+    object hierarchy.
+    """
+    visit_folders_select_spec = build_selection_spec(client_factory,
+                                    "visitFolders")
+    # For getting to hostFolder from datacenter
+    dc_to_hf = build_traversal_spec(client_factory, "dc_to_hf", "Datacenter",
+                                    "hostFolder", False,
+                                    [visit_folders_select_spec])
+    # For getting to vmFolder from datacenter
+    dc_to_vmf = build_traversal_spec(client_factory, "dc_to_vmf", "Datacenter",
+                                     "vmFolder", False,
+                                     [visit_folders_select_spec])
+    # For getting to virtual machine from Host System
+    h_to_vm = build_traversal_spec(client_factory, "h_to_vm", "HostSystem",
+                                   "vm", False,
+                                   [visit_folders_select_spec])
+
+    # For getting to Host System from Compute Resource
+    cr_to_h = build_traversal_spec(client_factory, "cr_to_h",
+                                   "ComputeResource", "host", False, [])
+
+    # For getting to datastore from Compute Resource
+    cr_to_ds = build_traversal_spec(client_factory, "cr_to_ds",
+                                    "ComputeResource", "datastore", False, [])
+
+    rp_to_rp_select_spec = build_selection_spec(client_factory, "rp_to_rp")
+    rp_to_vm_select_spec = build_selection_spec(client_factory, "rp_to_vm")
+    # For getting to resource pool from Compute Resource
+    cr_to_rp = build_traversal_spec(client_factory, "cr_to_rp",
+                                "ComputeResource", "resourcePool", False,
+                                [rp_to_rp_select_spec, rp_to_vm_select_spec])
+
+    # For getting to child res pool from the parent res pool
+    rp_to_rp = build_traversal_spec(client_factory, "rp_to_rp", "ResourcePool",
+                                "resourcePool", False,
+                                [rp_to_rp_select_spec, rp_to_vm_select_spec])
+
+    # For getting to Virtual Machine from the Resource Pool
+    rp_to_vm = build_traversal_spec(client_factory, "rp_to_vm", "ResourcePool",
+                                "vm", False,
+                                [rp_to_rp_select_spec, rp_to_vm_select_spec])
+
+    # Get the assorted traversal spec which takes care of the objects to
+    # be searched for from the root folder
+    traversal_spec = build_traversal_spec(client_factory, "visitFolders",
+                                  "Folder", "childEntity", False,
+                                  [visit_folders_select_spec, dc_to_hf,
+                                   dc_to_vmf, cr_to_ds, cr_to_h, cr_to_rp,
+                                   rp_to_rp, h_to_vm, rp_to_vm])
+    return traversal_spec
+
+
+def build_property_spec(client_factory, type="VirtualMachine",
+                        properties_to_collect=None,
+                        all_properties=False):
+    """Builds the Property Spec."""
+    if not properties_to_collect:
+        properties_to_collect = ["name"]
+
+    property_spec = client_factory.create('ns0:PropertySpec')
+    property_spec.all = all_properties
+    property_spec.pathSet = properties_to_collect
+    property_spec.type = type
+    return property_spec
+
+
+def build_object_spec(client_factory, root_folder, traversal_specs):
+    """Builds the object Spec."""
+    object_spec = client_factory.create('ns0:ObjectSpec')
+    object_spec.obj = root_folder
+    object_spec.skip = False
+    object_spec.selectSet = traversal_specs
+    return object_spec
+
+
+def build_property_filter_spec(client_factory, property_specs, object_specs):
+    """Builds the Property Filter Spec."""
+    property_filter_spec = client_factory.create('ns0:PropertyFilterSpec')
+    property_filter_spec.propSet = property_specs
+    property_filter_spec.objectSet = object_specs
+    return property_filter_spec
+
+
+def get_object_properties(vim, collector, mobj, type, properties):
+    """Gets the properties of the Managed object specified."""
+    client_factory = vim.client.factory
+    if mobj is None:
+        return None
+    usecoll = collector
+    if usecoll is None:
+        usecoll = vim.get_service_content().propertyCollector
+    property_filter_spec = client_factory.create('ns0:PropertyFilterSpec')
+    property_spec = client_factory.create('ns0:PropertySpec')
+    property_spec.all = (properties is None or len(properties) == 0)
+    property_spec.pathSet = properties
+    property_spec.type = type
+    object_spec = client_factory.create('ns0:ObjectSpec')
+    object_spec.obj = mobj
+    object_spec.skip = False
+    property_filter_spec.propSet = [property_spec]
+    property_filter_spec.objectSet = [object_spec]
+    options = client_factory.create('ns0:RetrieveOptions')
+    options.maxObjects = CONF.vmware.maximum_objects
+    return vim.RetrievePropertiesEx(usecoll, specSet=[property_filter_spec],
+                                    options=options)
+
+
+def get_dynamic_property(vim, mobj, type, property_name):
+    """Gets a particular property of the Managed Object."""
+    property_dict = get_dynamic_properties(vim, mobj, type, [property_name])
+    return property_dict.get(property_name)
+
+
+def get_dynamic_properties(vim, mobj, type, property_names):
+    """Gets the specified properties of the Managed Object."""
+    obj_content = get_object_properties(vim, None, mobj, type, property_names)
+    if hasattr(obj_content, 'token'):
+        vim.CancelRetrievePropertiesEx(token=obj_content.token)
+    property_dict = {}
+    if obj_content.objects:
+        if hasattr(obj_content.objects[0], 'propSet'):
+            dynamic_properties = obj_content.objects[0].propSet
+            if dynamic_properties:
+                for prop in dynamic_properties:
+                    property_dict[prop.name] = prop.val
+        # The object may have information useful for logging
+        if hasattr(obj_content.objects[0], 'missingSet'):
+            for m in obj_content.objects[0].missingSet:
+                LOG.warning(_("Unable to retrieve value for %(path)s "
+                              "Reason: %(reason)s"),
+                            {'path': m.path,
+                             'reason': m.fault.localizedMessage})
+    return property_dict
+
+
+def get_objects(vim, type, properties_to_collect=None, all=False):
+    """Gets the list of objects of the type specified."""
+    if not properties_to_collect:
+        properties_to_collect = ["name"]
+
+    client_factory = vim.client.factory
+    object_spec = build_object_spec(client_factory,
+                        vim.get_service_content().rootFolder,
+                        [build_recursive_traversal_spec(client_factory)])
+    property_spec = build_property_spec(client_factory, type=type,
+                                properties_to_collect=properties_to_collect,
+                                all_properties=all)
+    property_filter_spec = build_property_filter_spec(client_factory,
+                                [property_spec],
+                                [object_spec])
+    options = client_factory.create('ns0:RetrieveOptions')
+    options.maxObjects = CONF.vmware.maximum_objects
+    return vim.RetrievePropertiesEx(
+            vim.get_service_content().propertyCollector,
+            specSet=[property_filter_spec], options=options)
+
+
+def cancel_retrieve(vim, token):
+    """Cancels the retrieve operation."""
+    return vim.CancelRetrievePropertiesEx(
+            vim.get_service_content().propertyCollector,
+            token=token)
+
+
+def continue_to_get_objects(vim, token):
+    """Continues to get the list of objects of the type specified."""
+    return vim.ContinueRetrievePropertiesEx(
+            vim.get_service_content().propertyCollector,
+            token=token)
+
+
+def get_prop_spec(client_factory, spec_type, properties):
+    """Builds the Property Spec Object."""
+    prop_spec = client_factory.create('ns0:PropertySpec')
+    prop_spec.type = spec_type
+    prop_spec.pathSet = properties
+    return prop_spec
+
+
+def get_obj_spec(client_factory, obj, select_set=None):
+    """Builds the Object Spec object."""
+    obj_spec = client_factory.create('ns0:ObjectSpec')
+    obj_spec.obj = obj
+    obj_spec.skip = False
+    if select_set is not None:
+        obj_spec.selectSet = select_set
+    return obj_spec
+
+
+def get_prop_filter_spec(client_factory, obj_spec, prop_spec):
+    """Builds the Property Filter Spec Object."""
+    prop_filter_spec = client_factory.create('ns0:PropertyFilterSpec')
+    prop_filter_spec.propSet = prop_spec
+    prop_filter_spec.objectSet = obj_spec
+    return prop_filter_spec
+
+
+def get_properties_for_a_collection_of_objects(vim, type,
+                                               obj_list, properties):
+    """
+    Gets the list of properties for the collection of
+    objects of the type specified.
+    """
+    client_factory = vim.client.factory
+    if len(obj_list) == 0:
+        return []
+    prop_spec = get_prop_spec(client_factory, type, properties)
+    lst_obj_specs = []
+    for obj in obj_list:
+        lst_obj_specs.append(get_obj_spec(client_factory, obj))
+    prop_filter_spec = get_prop_filter_spec(client_factory,
+                                            lst_obj_specs, [prop_spec])
+    options = client_factory.create('ns0:RetrieveOptions')
+    options.maxObjects = CONF.vmware.maximum_objects
+    return vim.RetrievePropertiesEx(
+            vim.get_service_content().propertyCollector,
+            specSet=[prop_filter_spec], options=options)
+
+
+def get_about_info(vim):
+    """Get the About Info from the service content."""
+    return vim.get_service_content().about
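+
+
+# Usage note (illustrative comment, not executed code): with a nova-style
+# VMwareAPISession whose _call_method(module, method, *args) dispatches
+# into the functions above, enumerating hosts looks roughly like this;
+# 'session' is an assumption borrowed from nova's vmwareapi code, and the
+# ML2 agent would hold an equivalent session wrapper:
+#
+#     results = session._call_method(vim_util, "get_objects",
+#                                    "HostSystem", ["name"])
+#     while results:
+#         for obj_content in results.objects:
+#             LOG.debug(obj_content.propSet[0].val)
+#         token = getattr(results, 'token', None)
+#         if not token:
+#             break
+#         results = session._call_method(vim_util,
+#                                        "continue_to_get_objects", token)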
diff --git a/neutron/plugins/ml2/drivers/vmware/vm_util.py b/neutron/plugins/ml2/drivers/vmware/vm_util.py
new file mode 100644
index 0000000..f3c75b1
--- /dev/null
+++ b/neutron/plugins/ml2/drivers/vmware/vm_util.py
@@ -0,0 +1,1138 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
+# Copyright (c) 2012 VMware, Inc.
+# Copyright (c) 2011 Citrix Systems, Inc.
+# Copyright 2011 OpenStack Foundation
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+"""
+The VMware API VM utility module to build SOAP object specs.
+"""
+
+import collections
+import copy
+
+from neutron.plugins.ml2.drivers.vmware import error_util as exception
+from neutron.openstack.common.gettextutils import _
+from neutron.openstack.common import log as logging
+from neutron.plugins.ml2.drivers.vmware import vim_util
+
+LOG = logging.getLogger(__name__)
+
+Mi = 1024 ** 2
+
+def build_datastore_path(datastore_name, path):
+    """Build the datastore compliant path."""
+    return "[%s] %s" % (datastore_name, path)
+
+
+def split_datastore_path(datastore_path):
+    """
+    Split the VMware style datastore path to get the Datastore
+    name and the entity path.
+    """
+    spl = datastore_path.split('[', 1)[1].split(']', 1)
+    path = ""
+    if len(spl) == 1:
+        datastore_url = spl[0]
+    else:
+        datastore_url, path = spl
+    return datastore_url, path.strip()
+
+
+def get_vm_create_spec(client_factory, instance, data_store_name,
+                       vif_infos, os_type="otherGuest"):
+    """Builds the VM Create spec."""
+    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
+    config_spec.name = instance['uuid']
+    config_spec.guestId = os_type
+
+    # Allow nested ESX instances to host 64 bit VMs.
+    if os_type == "vmkernel5Guest":
+        config_spec.nestedHVEnabled = "True"
+
+    vm_file_info = client_factory.create('ns0:VirtualMachineFileInfo')
+    vm_file_info.vmPathName = "[" + data_store_name + "]"
+    config_spec.files = vm_file_info
+
+    tools_info = client_factory.create('ns0:ToolsConfigInfo')
+    tools_info.afterPowerOn = True
+    tools_info.afterResume = True
+    tools_info.beforeGuestStandby = True
+    tools_info.beforeGuestShutdown = True
+    tools_info.beforeGuestReboot = True
+
+    config_spec.tools = tools_info
+    config_spec.numCPUs = int(instance['vcpus'])
+    config_spec.memoryMB = int(instance['memory_mb'])
+
+    vif_spec_list = []
+    for vif_info in vif_infos:
+        vif_spec = create_network_spec(client_factory, vif_info)
+        vif_spec_list.append(vif_spec)
+
+    device_config_spec = vif_spec_list
+
+    config_spec.deviceChange = device_config_spec
+
+    # add vm-uuid and iface-id.x values for Neutron
+    extra_config = []
+    opt = client_factory.create('ns0:OptionValue')
+    opt.key = "nvp.vm-uuid"
+    opt.value = instance['uuid']
+    extra_config.append(opt)
+
+    i = 0
+    for vif_info in vif_infos:
+        if vif_info['iface_id']:
+            opt = client_factory.create('ns0:OptionValue')
+            opt.key = "nvp.iface-id.%d" % i
+            opt.value = vif_info['iface_id']
+            extra_config.append(opt)
+            i += 1
+
+    config_spec.extraConfig = extra_config
+
+    return config_spec
+
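+# Usage note (illustrative comment): for an instance with a single NIC the
+# spec built above ends up carrying extraConfig entries of the form
+#     nvp.vm-uuid    = <instance uuid>
+#     nvp.iface-id.0 = <neutron port uuid>
+# which is what an NVP/NSX-style integration uses to correlate the VIF
+# with its Neutron port.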
+
+def get_vm_resize_spec(client_factory, instance):
+    """Provides updates for a VM spec."""
+    resize_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
+    resize_spec.numCPUs = int(instance['vcpus'])
+    resize_spec.memoryMB = int(instance['memory_mb'])
+    return resize_spec
+
+
+def create_controller_spec(client_factory, key, adapter_type="lsiLogic"):
+    """
+    Builds a Config Spec for the LSI or Bus Logic Controller's addition
+    which acts as the controller for the virtual hard disk to be attached
+    to the VM.
+    """
+    # Create a controller for the Virtual Hard Disk
+    virtual_device_config = client_factory.create(
+                            'ns0:VirtualDeviceConfigSpec')
+    virtual_device_config.operation = "add"
+    if adapter_type == "busLogic":
+        virtual_controller = client_factory.create(
+                                'ns0:VirtualBusLogicController')
+    elif adapter_type == "lsiLogicsas":
+        virtual_controller = client_factory.create(
+                                'ns0:VirtualLsiLogicSASController')
+    else:
+        virtual_controller = client_factory.create(
+                                'ns0:VirtualLsiLogicController')
+    virtual_controller.key = key
+    virtual_controller.busNumber = 0
+    virtual_controller.sharedBus = "noSharing"
+    virtual_device_config.device = virtual_controller
+    return virtual_device_config
+
+
+def create_network_spec(client_factory, vif_info):
+    """
+    Builds a config spec for the addition of a new network
+    adapter to the VM.
+    """
+    network_spec = client_factory.create('ns0:VirtualDeviceConfigSpec')
+    network_spec.operation = "add"
+
+    # Keep compatibility with the vif_model values used by other hypervisors.
+    if vif_info['vif_model'] == "e1000":
+        vif_info['vif_model'] = "VirtualE1000"
+
+    vif = 'ns0:' + vif_info['vif_model']
+    net_device = client_factory.create(vif)
+
+    # NOTE(asomya): Only works on ESXi if the portgroup binding is set to
+    # ephemeral. A static binding is an invalid configuration, and with a
+    # dynamic binding the NIC does not come up on boot.
+    network_ref = vif_info['network_ref']
+    network_name = vif_info['network_name']
+    mac_address = vif_info['mac_address']
+    backing = None
+    if network_ref and network_ref['type'] == 'OpaqueNetwork':
+        backing_name = ''.join(['ns0:VirtualEthernetCard',
+                                'OpaqueNetworkBackingInfo'])
+        backing = client_factory.create(backing_name)
+        backing.opaqueNetworkId = network_ref['network-id']
+        backing.opaqueNetworkType = network_ref['network-type']
+    elif (network_ref and
+            network_ref['type'] == "DistributedVirtualPortgroup"):
+        backing_name = ''.join(['ns0:VirtualEthernetCardDistributed',
+                                'VirtualPortBackingInfo'])
+        backing = client_factory.create(backing_name)
+        portgroup = client_factory.create(
+                    'ns0:DistributedVirtualSwitchPortConnection')
+        portgroup.switchUuid = network_ref['dvsw']
+        portgroup.portgroupKey = network_ref['dvpg']
+        backing.port = portgroup
+    else:
+        backing = client_factory.create(
+                  'ns0:VirtualEthernetCardNetworkBackingInfo')
+        backing.deviceName = network_name
+
+    connectable_spec = client_factory.create('ns0:VirtualDeviceConnectInfo')
+    connectable_spec.startConnected = True
+    connectable_spec.allowGuestControl = True
+    connectable_spec.connected = True
+
+    net_device.connectable = connectable_spec
+    net_device.backing = backing
+
+    # The server assigns a key to the device. Here we pass a negative
+    # temporary key: actual keys are positive numbers, so a negative value
+    # cannot clash with the key the server associates with the device.
+    net_device.key = -47
+    net_device.addressType = "manual"
+    net_device.macAddress = mac_address
+    net_device.wakeOnLanEnabled = True
+
+    network_spec.device = net_device
+    return network_spec
+
+
+def get_vmdk_attach_config_spec(client_factory,
+                                adapter_type="lsiLogic",
+                                disk_type="preallocated",
+                                file_path=None,
+                                disk_size=None,
+                                linked_clone=False,
+                                controller_key=None,
+                                unit_number=None,
+                                device_name=None):
+    """Builds the vmdk attach config spec."""
+    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
+
+    # The controller key pertains to the key of the LSI Logic controller,
+    # which controls this hard disk.
+    device_config_spec = []
+    # For IDE devices, there are these two default controllers created in the
+    # VM having keys 200 and 201
+    if controller_key is None:
+        if adapter_type == "ide":
+            controller_key = 200
+        else:
+            controller_key = -101
+            controller_spec = create_controller_spec(client_factory,
+                                                     controller_key,
+                                                     adapter_type)
+            device_config_spec.append(controller_spec)
+    virtual_device_config_spec = create_virtual_disk_spec(client_factory,
+                                controller_key, disk_type, file_path,
+                                disk_size, linked_clone,
+                                unit_number, device_name)
+
+    device_config_spec.append(virtual_device_config_spec)
+
+    config_spec.deviceChange = device_config_spec
+    return config_spec
+
+
+def get_cdrom_attach_config_spec(client_factory,
+                                 datastore,
+                                 file_path,
+                                 cdrom_unit_number):
+    """Builds and returns the cdrom attach config spec."""
+    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
+
+    device_config_spec = []
+    # For IDE devices, there are these two default controllers created in the
+    # VM having keys 200 and 201
+    controller_key = 200
+    virtual_device_config_spec = create_virtual_cdrom_spec(client_factory,
+                                                           datastore,
+                                                           controller_key,
+                                                           file_path,
+                                                           cdrom_unit_number)
+
+    device_config_spec.append(virtual_device_config_spec)
+
+    config_spec.deviceChange = device_config_spec
+    return config_spec
+
+
+def get_vmdk_detach_config_spec(client_factory, device,
+                                destroy_disk=False):
+    """Builds the vmdk detach config spec."""
+    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
+
+    device_config_spec = []
+    virtual_device_config_spec = detach_virtual_disk_spec(client_factory,
+                                                          device,
+                                                          destroy_disk)
+
+    device_config_spec.append(virtual_device_config_spec)
+
+    config_spec.deviceChange = device_config_spec
+    return config_spec
+
+
+def get_vm_extra_config_spec(client_factory, extra_opts):
+    """Builds extra spec fields from a dictionary."""
+    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
+    # add the key value pairs
+    extra_config = []
+    for key, value in extra_opts.iteritems():
+        opt = client_factory.create('ns0:OptionValue')
+        opt.key = key
+        opt.value = value
+        extra_config.append(opt)
+    # Assign once, after the loop, rather than on every iteration.
+    config_spec.extraConfig = extra_config
+    return config_spec
+
+
+def get_vmdk_path_and_adapter_type(hardware_devices):
+    """Gets the vmdk file path and the storage adapter type."""
+    if hardware_devices.__class__.__name__ == "ArrayOfVirtualDevice":
+        hardware_devices = hardware_devices.VirtualDevice
+    vmdk_file_path = None
+    vmdk_controller_key = None
+    disk_type = None
+    unit_number = 0
+
+    adapter_type_dict = {}
+    for device in hardware_devices:
+        if device.__class__.__name__ == "VirtualDisk":
+            if device.backing.__class__.__name__ == \
+                    "VirtualDiskFlatVer2BackingInfo":
+                vmdk_file_path = device.backing.fileName
+                vmdk_controller_key = device.controllerKey
+                if getattr(device.backing, 'thinProvisioned', False):
+                    disk_type = "thin"
+                else:
+                    if getattr(device.backing, 'eagerlyScrub', False):
+                        disk_type = "eagerZeroedThick"
+                    else:
+                        disk_type = "preallocated"
+            if device.unitNumber > unit_number:
+                unit_number = device.unitNumber
+        elif device.__class__.__name__ == "VirtualLsiLogicController":
+            adapter_type_dict[device.key] = "lsiLogic"
+        elif device.__class__.__name__ == "VirtualBusLogicController":
+            adapter_type_dict[device.key] = "busLogic"
+        elif device.__class__.__name__ == "VirtualIDEController":
+            adapter_type_dict[device.key] = "ide"
+        elif device.__class__.__name__ == "VirtualLsiLogicSASController":
+            adapter_type_dict[device.key] = "lsiLogicsas"
+
+    adapter_type = adapter_type_dict.get(vmdk_controller_key, "")
+
+    return (vmdk_file_path, vmdk_controller_key, adapter_type,
+            disk_type, unit_number)
+
+
+def get_rdm_disk(hardware_devices, uuid):
+    """Gets the RDM disk key."""
+    if hardware_devices.__class__.__name__ == "ArrayOfVirtualDevice":
+        hardware_devices = hardware_devices.VirtualDevice
+
+    for device in hardware_devices:
+        if (device.__class__.__name__ == "VirtualDisk" and
+            device.backing.__class__.__name__ ==
+                "VirtualDiskRawDiskMappingVer1BackingInfo" and
+                device.backing.lunUuid == uuid):
+            return device
+
+
+def get_copy_virtual_disk_spec(client_factory, adapter_type="lsiLogic",
+                               disk_type="preallocated"):
+    """Builds the Virtual Disk copy spec."""
+    dest_spec = client_factory.create('ns0:VirtualDiskSpec')
+    dest_spec.adapterType = get_vmdk_adapter_type(adapter_type)
+    dest_spec.diskType = disk_type
+    return dest_spec
+
+
+def get_vmdk_create_spec(client_factory, size_in_kb, adapter_type="lsiLogic",
+                         disk_type="preallocated"):
+    """Builds the virtual disk create spec."""
+    create_vmdk_spec = client_factory.create('ns0:FileBackedVirtualDiskSpec')
+    create_vmdk_spec.adapterType = get_vmdk_adapter_type(adapter_type)
+    create_vmdk_spec.diskType = disk_type
+    create_vmdk_spec.capacityKb = size_in_kb
+    return create_vmdk_spec
+
+
+def get_rdm_create_spec(client_factory, device, adapter_type="lsiLogic",
+                        disk_type="rdmp"):
+    """Builds the RDM virtual disk create spec."""
+    create_vmdk_spec = client_factory.create('ns0:DeviceBackedVirtualDiskSpec')
+    create_vmdk_spec.adapterType = get_vmdk_adapter_type(adapter_type)
+    create_vmdk_spec.diskType = disk_type
+    create_vmdk_spec.device = device
+    return create_vmdk_spec
+
+
+def create_virtual_cdrom_spec(client_factory,
+                              datastore,
+                              controller_key,
+                              file_path,
+                              cdrom_unit_number):
+    """Builds spec for the creation of a new Virtual CDROM to the VM."""
+    config_spec = client_factory.create(
+        'ns0:VirtualDeviceConfigSpec')
+    config_spec.operation = "add"
+
+    cdrom = client_factory.create('ns0:VirtualCdrom')
+
+    cdrom_device_backing = client_factory.create(
+        'ns0:VirtualCdromIsoBackingInfo')
+    cdrom_device_backing.datastore = datastore
+    cdrom_device_backing.fileName = file_path
+
+    cdrom.backing = cdrom_device_backing
+    cdrom.controllerKey = controller_key
+    cdrom.unitNumber = cdrom_unit_number
+    cdrom.key = -1
+
+    connectable_spec = client_factory.create('ns0:VirtualDeviceConnectInfo')
+    connectable_spec.startConnected = True
+    connectable_spec.allowGuestControl = False
+    connectable_spec.connected = True
+
+    cdrom.connectable = connectable_spec
+
+    config_spec.device = cdrom
+    return config_spec
+
+
+def create_virtual_disk_spec(client_factory, controller_key,
+                             disk_type="preallocated",
+                             file_path=None,
+                             disk_size=None,
+                             linked_clone=False,
+                             unit_number=None,
+                             device_name=None):
+    """
+    Builds spec for the creation of a new, or the attaching of an already
+    existing, Virtual Disk to the VM.
+    """
+    virtual_device_config = client_factory.create(
+                            'ns0:VirtualDeviceConfigSpec')
+    virtual_device_config.operation = "add"
+    if (file_path is None) or linked_clone:
+        virtual_device_config.fileOperation = "create"
+
+    virtual_disk = client_factory.create('ns0:VirtualDisk')
+
+    if disk_type == "rdm" or disk_type == "rdmp":
+        disk_file_backing = client_factory.create(
+                            'ns0:VirtualDiskRawDiskMappingVer1BackingInfo')
+        disk_file_backing.compatibilityMode = "virtualMode" \
+            if disk_type == "rdm" else "physicalMode"
+        disk_file_backing.diskMode = "independent_persistent"
+        disk_file_backing.deviceName = device_name or ""
+    else:
+        disk_file_backing = client_factory.create(
+                            'ns0:VirtualDiskFlatVer2BackingInfo')
+        disk_file_backing.diskMode = "persistent"
+        if disk_type == "thin":
+            disk_file_backing.thinProvisioned = True
+        else:
+            if disk_type == "eagerZeroedThick":
+                disk_file_backing.eagerlyScrub = True
+    disk_file_backing.fileName = file_path or ""
+
+    connectable_spec = client_factory.create('ns0:VirtualDeviceConnectInfo')
+    connectable_spec.startConnected = True
+    connectable_spec.allowGuestControl = False
+    connectable_spec.connected = True
+
+    if not linked_clone:
+        virtual_disk.backing = disk_file_backing
+    else:
+        virtual_disk.backing = copy.copy(disk_file_backing)
+        virtual_disk.backing.fileName = ""
+        virtual_disk.backing.parent = disk_file_backing
+    virtual_disk.connectable = connectable_spec
+
+    # The server assigns a key to the device. Here we pass a negative
+    # random key: actual keys are positive numbers, so a negative value
+    # cannot clash with the key the server associates with the device.
+    virtual_disk.key = -100
+    virtual_disk.controllerKey = controller_key
+    virtual_disk.unitNumber = unit_number or 0
+    virtual_disk.capacityInKB = disk_size or 0
+
+    virtual_device_config.device = virtual_disk
+
+    return virtual_device_config
+
+
+def detach_virtual_disk_spec(client_factory, device, destroy_disk=False):
+    """
+    Builds spec for detaching an already existing Virtual Disk from the VM.
+    """
+    virtual_device_config = client_factory.create(
+                            'ns0:VirtualDeviceConfigSpec')
+    virtual_device_config.operation = "remove"
+    if destroy_disk:
+        virtual_device_config.fileOperation = "destroy"
+    virtual_device_config.device = device
+
+    return virtual_device_config
+
+
+def clone_vm_spec(client_factory, location,
+                  power_on=False, snapshot=None, template=False):
+    """Builds the VM clone spec."""
+    clone_spec = client_factory.create('ns0:VirtualMachineCloneSpec')
+    clone_spec.location = location
+    clone_spec.powerOn = power_on
+    if snapshot:
+        clone_spec.snapshot = snapshot
+    clone_spec.template = template
+    return clone_spec
+
+
+def relocate_vm_spec(client_factory, datastore=None, host=None,
+                     disk_move_type="moveAllDiskBackingsAndAllowSharing"):
+    """Builds the VM relocation spec."""
+    rel_spec = client_factory.create('ns0:VirtualMachineRelocateSpec')
+    rel_spec.datastore = datastore
+    rel_spec.diskMoveType = disk_move_type
+    if host:
+        rel_spec.host = host
+    return rel_spec
+
+
+def get_dummy_vm_create_spec(client_factory, name, data_store_name):
+    """Builds the dummy VM create spec."""
+    config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
+
+    config_spec.name = name
+    config_spec.guestId = "otherGuest"
+
+    vm_file_info = client_factory.create('ns0:VirtualMachineFileInfo')
+    vm_file_info.vmPathName = "[" + data_store_name + "]"
+    config_spec.files = vm_file_info
+
+    tools_info = client_factory.create('ns0:ToolsConfigInfo')
+    tools_info.afterPowerOn = True
+    tools_info.afterResume = True
+    tools_info.beforeGuestStandby = True
+    tools_info.beforeGuestShutdown = True
+    tools_info.beforeGuestReboot = True
+
+    config_spec.tools = tools_info
+    config_spec.numCPUs = 1
+    config_spec.memoryMB = 4
+
+    controller_key = -101
+    controller_spec = create_controller_spec(client_factory, controller_key)
+    disk_spec = create_virtual_disk_spec(client_factory, controller_key,
+                                         disk_size=1024)
+
+    device_config_spec = [controller_spec, disk_spec]
+
+    config_spec.deviceChange = device_config_spec
+    return config_spec
+
+
+def get_machine_id_change_spec(client_factory, machine_id_str):
+    """Builds the machine id change config spec."""
+    virtual_machine_config_spec = client_factory.create(
+                                  'ns0:VirtualMachineConfigSpec')
+
+    opt = client_factory.create('ns0:OptionValue')
+    opt.key = "machine.id"
+    opt.value = machine_id_str
+    virtual_machine_config_spec.extraConfig = [opt]
+    return virtual_machine_config_spec
+
+
+def get_add_vswitch_port_group_spec(client_factory, vswitch_name,
+                                    port_group_name, vlan_id):
+    """Builds the virtual switch port group add spec."""
+    vswitch_port_group_spec = client_factory.create('ns0:HostPortGroupSpec')
+    vswitch_port_group_spec.name = port_group_name
+    vswitch_port_group_spec.vswitchName = vswitch_name
+
+    # VLAN ID of 0 means that VLAN tagging is not to be done for the network.
+    vswitch_port_group_spec.vlanId = int(vlan_id)
+
+    policy = client_factory.create('ns0:HostNetworkPolicy')
+    nicteaming = client_factory.create('ns0:HostNicTeamingPolicy')
+    nicteaming.notifySwitches = True
+    policy.nicTeaming = nicteaming
+
+    vswitch_port_group_spec.policy = policy
+    return vswitch_port_group_spec
+
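+# Usage note (illustrative comment): this spec is consumed the way nova's
+# network_util.create_port_group() does it, and the agent's VLAN handling
+# can reuse the same pattern:
+#
+#     spec = get_add_vswitch_port_group_spec(client_factory,
+#                                            vswitch_name, pg_name, vlan_id)
+#     network_system_mor = session._call_method(
+#         vim_util, "get_dynamic_property", host_mor,
+#         "HostSystem", "configManager.networkSystem")
+#     session._call_method(session._get_vim(), "AddPortGroup",
+#                          network_system_mor, portgrp=spec)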
+
+def get_vnc_config_spec(client_factory, port):
+    """Builds the vnc config spec."""
+    virtual_machine_config_spec = client_factory.create(
+                                    'ns0:VirtualMachineConfigSpec')
+
+    opt_enabled = client_factory.create('ns0:OptionValue')
+    opt_enabled.key = "RemoteDisplay.vnc.enabled"
+    opt_enabled.value = "true"
+    opt_port = client_factory.create('ns0:OptionValue')
+    opt_port.key = "RemoteDisplay.vnc.port"
+    opt_port.value = port
+    extras = [opt_enabled, opt_port]
+    virtual_machine_config_spec.extraConfig = extras
+    return virtual_machine_config_spec
+
+
+def search_datastore_spec(client_factory, file_name):
+    """Builds the datastore search spec."""
+    search_spec = client_factory.create('ns0:HostDatastoreBrowserSearchSpec')
+    search_spec.matchPattern = [file_name]
+    return search_spec
+
+
+def _get_token(results):
+    """Get the token from the property results."""
+    return getattr(results, 'token', None)
+
+
+def _get_reference_for_value(results, value):
+    for object in results.objects:
+        if object.obj.value == value:
+            return object
+
+
+def _get_object_for_value(results, value):
+    for object in results.objects:
+        if object.propSet[0].val == value:
+            return object.obj
+
+
+def _get_object_from_results(session, results, value, func):
+    while results:
+        token = _get_token(results)
+        object = func(results, value)
+        if object:
+            if token:
+                session._call_method(vim_util,
+                                     "cancel_retrieve",
+                                     token)
+            return object
+
+        if token:
+            results = session._call_method(vim_util,
+                                           "continue_to_get_objects",
+                                           token)
+        else:
+            return None
+
+
+def _cancel_retrieve_if_necessary(session, results):
+    token = _get_token(results)
+    if token:
+        results = session._call_method(vim_util,
+                                       "cancel_retrieve",
+                                       token)
+
+
+def get_vm_ref_from_name(session, vm_name):
+    """Get reference to the VM with the name specified."""
+    vms = session._call_method(vim_util, "get_objects",
+                "VirtualMachine", ["name"])
+    return _get_object_from_results(session, vms, vm_name,
+                                    _get_object_for_value)
+
+
+def get_vm_ref_from_uuid(session, instance_uuid):
+    """Get reference to the VM with the uuid specified."""
+    vms = session._call_method(vim_util, "get_objects",
+                "VirtualMachine", ["name"])
+    return _get_object_from_results(session, vms, instance_uuid,
+                                    _get_object_for_value)
+
+
+def get_vm_ref(session, instance):
+    """Get reference to the VM through uuid or vm name."""
+    vm_ref = get_vm_ref_from_uuid(session, instance['uuid'])
+    if not vm_ref:
+        vm_ref = get_vm_ref_from_name(session, instance['name'])
+    if vm_ref is None:
+        raise exception.InstanceNotFound(instance_id=instance['uuid'])
+    return vm_ref
+
+
+def get_host_ref_from_id(session, host_id, property_list=None):
+    """Get a host reference object for a host_id string."""
+
+    if property_list is None:
+        property_list = ['name']
+
+    host_refs = session._call_method(
+                    vim_util, "get_objects",
+                    "HostSystem", property_list)
+    return _get_object_from_results(session, host_refs, host_id,
+                                    _get_reference_for_value)
+
+
+def get_host_id_from_vm_ref(session, vm_ref):
+    """
+    This method allows you to find the managed object
+    ID of the host running a VM. Since vMotion can
+    change the value, you should not presume that this
+    is a value that you can cache for very long and
+    should be prepared to allow for it to change.
+
+    :param session: a vSphere API connection
+    :param vm_ref: a reference object to the running VM
+    :return: the host_id running the virtual machine
+    """
+
+    # to prevent typographical errors below
+    property_name = 'runtime.host'
+
+    # a property collector in VMware vSphere Management API
+    # is a set of local representations of remote values.
+    # property_set here, is a local representation of the
+    # properties we are querying for.
+    property_set = session._call_method(
+            vim_util, "get_object_properties",
+            None, vm_ref, vm_ref._type, [property_name])
+
+    prop = property_from_property_set(
+        property_name, property_set)
+
+    if prop is not None:
+        prop = prop.val.value
+    else:
+        # reaching here represents an impossible state
+        raise RuntimeError(
+            "Virtual Machine %s exists without a runtime.host!"
+            % (vm_ref))
+
+    return prop
+
+
+def property_from_property_set(property_name, property_set):
+    '''
+    Use this method to filter property collector results.
+
+    Because network traffic is expensive, multiple
+    VMwareAPI calls will sometimes pile-up properties
+    to be collected. That means results may contain
+    many different values for multiple purposes.
+
+    This helper will filter a list for a single result
+    and filter the properties of that result to find
+    the single value of whatever type resides in that
+    result. This could be a ManagedObjectReference ID
+    or a complex value.
+
+    :param property_name: name of property you want
+    :param property_set: all results from query
+    :return: the value of the property.
+    '''
+
+    for prop in property_set.objects:
+        p = _property_from_propSet(prop.propSet, property_name)
+        if p is not None:
+            return p
+
+
+def _property_from_propSet(propSet, name='name'):
+    for p in propSet:
+        if p.name == name:
+            return p
+
+
+def get_host_ref_for_vm(session, instance, props):
+    """Get the ESXi host running a VM by its name."""
+
+    vm_ref = get_vm_ref(session, instance)
+    host_id = get_host_id_from_vm_ref(session, vm_ref)
+    return get_host_ref_from_id(session, host_id, props)
+
+
+def get_host_name_for_vm(session, instance):
+    """Get the ESXi host running a VM by its name."""
+    host_ref = get_host_ref_for_vm(session, instance, ['name'])
+    return get_host_name_from_host_ref(host_ref)
+
+
+def get_host_name_from_host_ref(host_ref):
+    p = _property_from_propSet(host_ref.propSet)
+    if p is not None:
+        return p.val
+
+
+def get_vm_state_from_name(session, vm_name):
+    vm_ref = get_vm_ref_from_name(session, vm_name)
+    vm_state = session._call_method(vim_util, "get_dynamic_property",
+                vm_ref, "VirtualMachine", "runtime.powerState")
+    return vm_state
+
+
+def get_stats_from_cluster(session, cluster):
+    """Get the aggregate resource stats of a cluster."""
+    cpu_info = {'vcpus': 0, 'cores': 0, 'vendor': [], 'model': []}
+    mem_info = {'total': 0, 'free': 0}
+    # Get the Host and Resource Pool Managed Object Refs
+    prop_dict = session._call_method(vim_util, "get_dynamic_properties",
+                                     cluster, "ClusterComputeResource",
+                                     ["host", "resourcePool"])
+    if prop_dict:
+        host_ret = prop_dict.get('host')
+        if host_ret:
+            host_mors = host_ret.ManagedObjectReference
+            result = session._call_method(vim_util,
+                         "get_properties_for_a_collection_of_objects",
+                         "HostSystem", host_mors, ["summary.hardware"])
+            for obj in result.objects:
+                hardware_summary = obj.propSet[0].val
+                # Total vcpus is the sum of all pCPUs of individual hosts
+                # The overcommitment ratio is factored in by the scheduler
+                cpu_info['vcpus'] += hardware_summary.numCpuThreads
+                cpu_info['cores'] += hardware_summary.numCpuCores
+                cpu_info['vendor'].append(hardware_summary.vendor)
+                cpu_info['model'].append(hardware_summary.cpuModel)
+
+        res_mor = prop_dict.get('resourcePool')
+        if res_mor:
+            res_usage = session._call_method(vim_util, "get_dynamic_property",
+                            res_mor, "ResourcePool", "summary.runtime.memory")
+            if res_usage:
+                # maxUsage is the memory limit of the cluster available to VMs
+                mem_info['total'] = int(res_usage.maxUsage / Mi)
+                # overallUsage is the hypervisor's view of memory usage by VMs
+                consumed = int(res_usage.overallUsage / Mi)
+                mem_info['free'] = mem_info['total'] - consumed
+    stats = {'cpu': cpu_info, 'mem': mem_info}
+    return stats
+
+
+def get_cluster_ref_from_name(session, cluster_name):
+    """Get reference to the cluster with the name specified."""
+    cls = session._call_method(vim_util, "get_objects",
+                               "ClusterComputeResource", ["name"])
+    return _get_object_from_results(session, cls, cluster_name,
+                                    _get_object_for_value)
+
+
+def get_host_ref(session, cluster=None):
+    """Get reference to a host within the cluster specified."""
+    if cluster is None:
+        results = session._call_method(vim_util, "get_objects",
+                                       "HostSystem")
+        _cancel_retrieve_if_necessary(session, results)
+        host_mor = results.objects[0].obj
+    else:
+        host_ret = session._call_method(vim_util, "get_dynamic_property",
+                                        cluster, "ClusterComputeResource",
+                                        "host")
+        if not host_ret or not host_ret.ManagedObjectReference:
+            msg = _('No host available on cluster')
+            raise exception.NoValidHost(reason=msg)
+        host_mor = host_ret.ManagedObjectReference[0]
+
+    return host_mor
+
+
+def propset_dict(propset):
+    """Turn a propset list into a dictionary
+
+    PropSet is an optional attribute on ObjectContent objects
+    that are returned by the VMware API.
+
+    You can read more about these at:
+    http://pubs.vmware.com/vsphere-51/index.jsp
+        #com.vmware.wssdk.apiref.doc/
+            vmodl.query.PropertyCollector.ObjectContent.html
+
+    :param propset: a property "set" from ObjectContent
+    :return: dictionary representing property set
+    """
+    if propset is None:
+        return {}
+
+    #TODO(hartsocks): once support for Python 2.6 is dropped
+    # change to {prop.name: prop.val for prop in propset}
+    return dict([(prop.name, prop.val) for prop in propset])
+
+
+def _get_datastore_ref_and_name(data_stores, datastore_regex=None):
+    """Find a usable datastore in a given RetrieveResult object.
+
+    Selects the datastore with the most free space.
+
+    :param data_stores: a RetrieveResult object from vSphere API call
+    :param datastore_regex: an optional regular expression to match names
+    :return: datastore_ref, datastore_name, capacity, freespace
+    """
+    DSRecord = collections.namedtuple(
+        'DSRecord', ['datastore', 'name', 'capacity', 'freespace'])
+
+    # we lean on checks performed in caller methods to validate the
+    # datastore reference is not None. If it is, the caller handles
+    # a None reference as appropriate in its context.
+    found_ds = DSRecord(datastore=None, name=None, capacity=None, freespace=0)
+
+    # datastores is actually a RetrieveResult object from vSphere API call
+    for obj_content in data_stores.objects:
+        # the propset attribute "need not be set" by returning API
+        if not hasattr(obj_content, 'propSet'):
+            continue
+
+        propdict = propset_dict(obj_content.propSet)
+        # vSphere does not support CIFS or vfat for datastores, so such
+        # entries are filtered out below.
+        ds_type = propdict['summary.type']
+        ds_name = propdict['summary.name']
+        if ((ds_type == 'VMFS' or ds_type == 'NFS') and
+                propdict['summary.accessible']):
+            if datastore_regex is None or datastore_regex.match(ds_name):
+                new_ds = DSRecord(
+                    datastore=obj_content.obj,
+                    name=ds_name,
+                    capacity=propdict['summary.capacity'],
+                    freespace=propdict['summary.freeSpace'])
+                # find the largest freespace to return
+                if new_ds.freespace > found_ds.freespace:
+                    found_ds = new_ds
+
+    #TODO(hartsocks): refactor driver to use the DSRecord namedtuple
+    # throughout; it will help keep related information together and
+    # improve readability and organisation of the code.
+    if found_ds.datastore is not None:
+        return (found_ds.datastore, found_ds.name,
+                    found_ds.capacity, found_ds.freespace)
+
+
+def get_datastore_ref_and_name(session, cluster=None, host=None,
+                               datastore_regex=None):
+    """Get the datastore list and choose the first local storage."""
+    if cluster is None and host is None:
+        data_stores = session._call_method(vim_util, "get_objects",
+                    "Datastore", ["summary.type", "summary.name",
+                                  "summary.capacity", "summary.freeSpace",
+                                  "summary.accessible"])
+    else:
+        if cluster is not None:
+            datastore_ret = session._call_method(
+                                        vim_util,
+                                        "get_dynamic_property", cluster,
+                                        "ClusterComputeResource", "datastore")
+        else:
+            datastore_ret = session._call_method(
+                                        vim_util,
+                                        "get_dynamic_property", host,
+                                        "HostSystem", "datastore")
+
+        if not datastore_ret:
+            raise exception.DatastoreNotFound()
+        data_store_mors = datastore_ret.ManagedObjectReference
+        data_stores = session._call_method(vim_util,
+                                "get_properties_for_a_collection_of_objects",
+                                "Datastore", data_store_mors,
+                                ["summary.type", "summary.name",
+                                 "summary.capacity", "summary.freeSpace",
+                                 "summary.accessible"])
+    while data_stores:
+        token = _get_token(data_stores)
+        results = _get_datastore_ref_and_name(data_stores, datastore_regex)
+        if results:
+            if token:
+                session._call_method(vim_util,
+                                     "cancel_retrieve",
+                                     token)
+            return results
+        if token:
+            data_stores = session._call_method(vim_util,
+                                               "continue_to_get_objects",
+                                               token)
+        else:
+            if datastore_regex:
+                raise exception.DatastoreNotFound(
+                    _("Datastore regex %s did not match any datastores")
+                    % datastore_regex.pattern)
+            else:
+                raise exception.DatastoreNotFound()
+    raise exception.DatastoreNotFound()
+
+
+def get_vmdk_backed_disk_uuid(hardware_devices, volume_uuid):
+    if hardware_devices.__class__.__name__ == "ArrayOfVirtualDevice":
+        hardware_devices = hardware_devices.VirtualDevice
+
+    for device in hardware_devices:
+        if (device.__class__.__name__ == "VirtualDisk" and
+                device.backing.__class__.__name__ ==
+                "VirtualDiskFlatVer2BackingInfo" and
+                volume_uuid in device.backing.fileName):
+            return device.backing.uuid
+
+
+def get_vmdk_backed_disk_device(hardware_devices, uuid):
+    if hardware_devices.__class__.__name__ == "ArrayOfVirtualDevice":
+        hardware_devices = hardware_devices.VirtualDevice
+
+    for device in hardware_devices:
+        if (device.__class__.__name__ == "VirtualDisk" and
+                device.backing.__class__.__name__ ==
+                "VirtualDiskFlatVer2BackingInfo" and
+                device.backing.uuid == uuid):
+            return device
+
+
+def get_vmdk_volume_disk(hardware_devices):
+    if hardware_devices.__class__.__name__ == "ArrayOfVirtualDevice":
+        hardware_devices = hardware_devices.VirtualDevice
+
+    for device in hardware_devices:
+        if (device.__class__.__name__ == "VirtualDisk"):
+            return device
+
+
+def get_res_pool_ref(session, cluster, node_mo_id):
+    """Get the resource pool."""
+    # Initialise to None so that a cluster whose value does not match
+    # node_mo_id cannot leave res_pool_ref unbound at the return below.
+    res_pool_ref = None
+    if cluster is None:
+        # With no cluster named, use the root resource pool.
+        results = session._call_method(vim_util, "get_objects",
+                                       "ResourcePool")
+        _cancel_retrieve_if_necessary(session, results)
+        # The 0th resource pool is always the root resource pool on both ESX
+        # and vCenter.
+        res_pool_ref = results.objects[0].obj
+    else:
+        if cluster.value == node_mo_id:
+            # Get the root resource pool of the cluster
+            res_pool_ref = session._call_method(vim_util,
+                                                  "get_dynamic_property",
+                                                  cluster,
+                                                  "ClusterComputeResource",
+                                                  "resourcePool")
+
+    return res_pool_ref
+
+
+def get_all_cluster_mors(session):
+    """Get all the clusters in the vCenter."""
+    try:
+        results = session._call_method(vim_util, "get_objects",
+                                       "ClusterComputeResource", ["name"])
+        _cancel_retrieve_if_necessary(session, results)
+        return results.objects
+    except Exception as excep:
+        LOG.warn(_("Failed to get cluster references %s") % excep)
+
+
+def get_all_res_pool_mors(session):
+    """Get all the resource pools in the vCenter."""
+    try:
+        results = session._call_method(vim_util, "get_objects",
+                                       "ResourcePool")
+        _cancel_retrieve_if_necessary(session, results)
+        return results.objects
+    except Exception as excep:
+        LOG.warn(_("Failed to get resource pool references %s") % excep)
+
+
+def get_dynamic_property_mor(session, mor_ref, attribute):
+    """Get the value of an attribute for a given managed object."""
+    return session._call_method(vim_util, "get_dynamic_property",
+                                mor_ref, mor_ref._type, attribute)
+
+
+def find_entity_mor(entity_list, entity_name):
+    """Returns managed object ref for given cluster or resource pool name."""
+    return [mor for mor in entity_list if (hasattr(mor, 'propSet') and
+                                           mor.propSet[0].val == entity_name)]
+
+
+def get_all_cluster_refs_by_name(session, path_list):
+    """Get reference to the Cluster, ResourcePool with the path specified.
+
+    The path is the display name. This can be the full path as well.
+    The input will have the list of clusters and resource pool names
+    """
+    cls = get_all_cluster_mors(session)
+    if not cls:
+        return
+    res = get_all_res_pool_mors(session)
+    if not res:
+        return
+    path_list = [path.strip() for path in path_list]
+    list_obj = []
+    for entity_path in path_list:
+        # entity_path could be unique cluster and/or resource-pool name
+        res_mor = find_entity_mor(res, entity_path)
+        cls_mor = find_entity_mor(cls, entity_path)
+        cls_mor.extend(res_mor)
+        for mor in cls_mor:
+            list_obj.append((mor.obj, mor.propSet[0].val))
+    return get_dict_mor(session, list_obj)
+
+
+def get_dict_mor(session, list_obj):
+    """The input is a list of objects in the form
+    (manage_object,display_name)
+    The managed object will be in the form
+    { value = "domain-1002", _type = "ClusterComputeResource" }
+
+    Output data format:
+    dict_mors = {
+                  'respool-1001': { 'cluster_mor': clusterMor,
+                                    'res_pool_mor': resourcePoolMor,
+                                    'name': display_name },
+                  'domain-1002': { 'cluster_mor': clusterMor,
+                                    'res_pool_mor': resourcePoolMor,
+                                    'name': display_name },
+                }
+    """
+    dict_mors = {}
+    for obj_ref, path in list_obj:
+        if obj_ref._type == "ResourcePool":
+            # Get owner cluster-ref mor
+            cluster_ref = get_dynamic_property_mor(session, obj_ref, "owner")
+            dict_mors[obj_ref.value] = {'cluster_mor': cluster_ref,
+                                        'res_pool_mor': obj_ref,
+                                        'name': path,
+                                        }
+        else:
+            # Get default resource pool of the cluster
+            res_pool_ref = get_dynamic_property_mor(session,
+                                                    obj_ref, "resourcePool")
+            dict_mors[obj_ref.value] = {'cluster_mor': obj_ref,
+                                        'res_pool_mor': res_pool_ref,
+                                        'name': path,
+                                        }
+    return dict_mors
+
+
+def get_mo_id_from_instance(instance):
+    """Return the managed object ID from the instance.
+
+    The instance['node'] will have the hypervisor_hostname field of the
+    compute node on which the instance exists or will be provisioned.
+    This will be of the form
+    'respool-1001(MyResPoolName)' or
+    'domain-1001(MyClusterName)'.
+    """
+    return instance['node'].partition('(')[0]
+
+
+def get_vmdk_adapter_type(adapter_type):
+    """Return the adapter type to be used in vmdk descriptor.
+
+    The adapter type in the vmdk descriptor is the same for LSI-SAS and
+    LSILogic, because the Virtual Disk Manager API does not recognize the
+    newer controller types.
+    """
+    if adapter_type == "lsiLogicsas":
+        vmdk_adapter_type = "lsiLogic"
+    else:
+        vmdk_adapter_type = adapter_type
+    return vmdk_adapter_type
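
With the patch in place, the remaining glue is the VLAN handling inside the agent's port_bound(). What follows is a minimal sketch only: it assumes a nova-style VMwareAPISession held by the agent plus the module paths from the patch above, and the helper name ensure_vlan_port_group, the port-group naming and the vswitch_name parameter (e.g. the vSwitch that owns the vmnic configured as vlan_interface) are illustrative assumptions rather than part of the patch:

from neutron.plugins.ml2.drivers.vmware import vim_util
from neutron.plugins.ml2.drivers.vmware import vm_util


def ensure_vlan_port_group(session, pg_name, vlan_id, vswitch_name):
    """Create pg_name on vswitch_name tagged with vlan_id (idempotent)."""
    host_mor = vm_util.get_host_ref(session)
    # configManager.networkSystem is the HostNetworkSystem managed object
    # that owns the host's vSwitches and port groups.
    network_system_mor = session._call_method(
        vim_util, "get_dynamic_property", host_mor,
        "HostSystem", "configManager.networkSystem")
    client_factory = session._get_vim().client.factory
    spec = vm_util.get_add_vswitch_port_group_spec(
        client_factory, vswitch_name, pg_name, vlan_id)
    try:
        session._call_method(session._get_vim(), "AddPortGroup",
                             network_system_mor, portgrp=spec)
    except Exception:
        # vCenter raises an AlreadyExists fault when the port group is
        # already present; swallowing it keeps the call idempotent.
        pass

The agent's port_bound() then reduces to calling ensure_vlan_port_group(self.session, pg_name, segmentation_id, self.vswitch_name) in its if network_type == p_const.TYPE_VLAN: branch; no tap or bridge plumbing is needed on the ESXi side.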

