Launching a VM with XenServer, Java and Apache CloudStack


Apache CloudStack is an open-source management server for running a private cloud infrastructure. Because it’s backed by Citrix, CloudStack has enterprise-class support for scaling out VMs on XenServer hosts. The CloudStack management server controls the XenServer host instances using the Java bindings of the XenServer Management API. Since the CloudStack source code is on github, it serves as an instructive example of how to use the XenServer Java API. In this article we’ll do a code inspection on CloudStack and learn the mechanics of how to scale out a private cloud with Xen and Java.


The intended audience for this article includes Java programmers interested in Java-based clouds, and also virtualization sysadmins looking under the hood of their CloudStack and/or XenServer systems. You don’t need CloudStack or XenServer installed to follow this code inspection, but if you want to try running a program using the XenServer Java API then you will definitely need an installation of XenServer and a VM template. Setting up XenServer and the VM template is beyond the scope of this article. Also, please note: the code extracts shown here are greatly simplified from the original CloudStack code; they are not complete Java programs that can be compiled directly. The real CloudStack code is pretty long, so I’ve heavily edited the original to produce something short enough to get the point across.


So with that introduction, let’s look at some CloudStack code.


When CloudStack needs to scale out on a XenServer hypervisor, it builds a specification of the kind of VM it wants to launch and embeds that spec inside a StartCommand. So our code inspection begins with method execute(StartCommand), found in the Java source file CitrixResourceBase.java. The CloudStack code lives in Java package com.cloud, and makes use of the XenServer management library in Java package com.xensource.


Below are the essentials of CloudStack’s execute(StartCommand) method.


package com.cloud.hypervisor.xen.resource;

import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Host;
import com.xensource.xenapi.VM;

import com.cloud.agent.api.StartCommand;
import com.cloud.agent.api.StartAnswer;
import com.cloud.agent.api.to.NicTO;
import com.cloud.agent.api.to.VirtualMachineTO;
import com.cloud.agent.api.to.VolumeTO;

public class CitrixResourceBase {

   public StartAnswer execute(StartCommand cmd) {
      VirtualMachineTO vmSpec = cmd.getVirtualMachine();
      String vmName = vmSpec.getName();
      Connection conn = getConnection();
      Host host = Host.getByUuid(conn, _host.uuid);
      VM vm = createVmFromTemplate(conn, vmSpec, host);
      for (VolumeTO disk : vmSpec.getDisks())
          createVbd(conn, disk, vmName, vm, vmSpec.getBootloader());
      for (NicTO nic : vmSpec.getNics())
          createVif(conn, vmName, vm, nic);
      startVM(conn, host, vm, vmName);
      return new StartAnswer(cmd);
   }

}

When the above method is finished, your XenServer will have a brand new VM guest running and ready to process whatever tasks your cloud is designed for. Let’s dig into the steps this method follows to launch that VM.


Get a Xen Connection and Login

The first step in execute(StartCommand) above is to get a Connection to the XenServer. To prepare a secure, encrypted Xen connection, simply construct a new Connection object over HTTPS using the IP address of the XenServer instance:


import java.net.URL;
import com.xensource.xenapi.Connection;
...
Connection conn = new Connection(new URL("https://" + ipAddress),
                                 NUM_SECONDS_TO_WAIT);

Having a Connection in hand, next you would proceed to login, which is what makes the first XML-RPC transmission to the XenServer. With normal XenServer credentials you could login like so:


import com.xensource.xenapi.APIVersion;
import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Session;
...
Session session = Session.loginWithPassword(conn, "username",
   "password", APIVersion.latest().toString());

Login is a tad more complicated when you have multiple XenServer instances configured in a Master/Slave resource pool. Normally you should connect only to the master, using method loginWithPassword, which produces a Session that is valid on any host in the pool. If you need to connect to a specific slave instance you would use slaveLocalLoginWithPassword, which gives you an “emergency mode” session usable only on that slave host. When presented with a XenServer pool, CloudStack’s intention is to make its permanent connection to the master. To be safe, it assumes that the IP address it’s been given to connect to could be a slave, so it does the slave local login first, then obtains the master host IP address from that login and re-authenticates with the master. (For more information on XenServer Master/Slave resource pools, see the XenServer System Recovery Guide.)


The CloudStack login code lives in XenServerConnectionPool.java. Here it is:


package com.cloud.hypervisor.xen.resource;

import java.net.URL;
import java.util.Map;

import com.xensource.xenapi.APIVersion;
import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Host;
import com.xensource.xenapi.Pool;
import com.xensource.xenapi.Session;
import com.xensource.xenapi.Types;

public class XenServerConnectionPool {

   protected Map<String, Connection> _conns;

   public Connection connect(String hostUuid, String poolUuid,
         String ipAddress, String username, String password,
         int wait) {
      Connection mConn = _conns.get(poolUuid); // Cached?
      if (mConn != null)  {
         try {
            Host.getByUuid(mConn, hostUuid); // Liveness check
         }
         catch (Types.SessionInvalid e) {
            Session.loginWithPassword(mConn, mConn.getUsername(),
               mConn.getPassword(), APIVersion.latest().toString());
         }
      }
      else {
         Connection sConn = new Connection(
            new URL("https://" + ipAddress), wait);
         Session.slaveLocalLoginWithPassword(sConn, username,
            password);
         Pool.Record pr = Pool.getAllRecords(sConn)
            .values().iterator().next(); //Just 1 pool, 1 record
         String masterIp = pr.master.getAddress(sConn);
         mConn = new Connection(new URL("https://" + masterIp), wait);
         Session.loginWithPassword(mConn, username, password,
            APIVersion.latest().toString());
         _conns.put(poolUuid, mConn);
      }
      return mConn;
   }

}

Notice above that the XenServer host and resource pool are identified by a UUID (Universally Unique ID), which is an object naming convention used in both XenServer and CloudStack.

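Since UUID strings are the keys used throughout these APIs, it can be worth validating one before making a round trip to the server. Here is a minimal sketch using the standard java.util.UUID class (the sample uuid below is made up, not taken from any real host):

```java
import java.util.UUID;

public class UuidDemo {
    // XenServer and CloudStack both name objects by UUID string; parsing
    // with java.util.UUID validates the format before an API call.
    static boolean isValidUuid(String s) {
        try {
            UUID.fromString(s);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical host uuid, for illustration only
        System.out.println(isValidUuid("b6b6e089-cae1-4c5b-a5c4-0131f2a87b1f")); // true
        System.out.println(isValidUuid("not-a-uuid"));                           // false
    }
}
```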

Create a Xen VM

The next step in execute(StartCommand) above is to create a VM from a template specified in the command. The CloudStack StartCommand contains a CloudStack-specific vmSpec of type VirtualMachineTO. (“TO” means Transfer Object, which applies to data records passed in and out of the command api. Transfer Objects are not persisted to a database.) vmSpec is a specification to the cloud management server requesting a VM with certain characteristics, such as number of cpus, clock speed, and min/max RAM. The CloudStack createVmFromTemplate method applies the vmSpec specification and produces a XenServer Java VM object. The Xen VM class represents a guest virtual machine which can run on a XenServer instance. The VM returned by this method is not started yet.

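A transfer object is just an immutable bag of getters that crosses the command api boundary without touching a database. A minimal sketch in the spirit of VirtualMachineTO (the class and field names here are illustrative, not CloudStack's actual API):

```java
// Illustrative transfer object; names are made up, not CloudStack's.
public class VmSpecTO {
    private final String name;
    private final int cpus;
    private final long minRam;

    public VmSpecTO(String name, int cpus, long minRam) {
        this.name = name;
        this.cpus = cpus;
        this.minRam = minRam;
    }

    // Getters only: the record is read on the far side, never mutated
    public String getName() { return name; }
    public int getCpus() { return cpus; }
    public long getMinRam() { return minRam; }

    public static void main(String[] args) {
        VmSpecTO spec = new VmSpecTO("web-01", 2, 512L * 1024 * 1024);
        System.out.println(spec.getName() + " cpus=" + spec.getCpus());
    }
}
```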

Here are the essentials of the CloudStack createVmFromTemplate method.


import java.util.HashMap;
import java.util.Map;

import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Host;
import com.xensource.xenapi.Types;
import com.xensource.xenapi.VM;

import com.cloud.agent.api.to.VirtualMachineTO;

public class CitrixResourceBase {

   protected VM createVmFromTemplate(Connection conn,
                        VirtualMachineTO vmSpec, Host host) {
      String guestOsTypeName = getGuestOsType(vmSpec.getOs());
      VM template = VM.getByNameLabel(conn, guestOsTypeName)
         .iterator().next(); //Just 1 template
      VM vm = template.createClone(conn, vmSpec.getName());

      vm.setIsATemplate(conn, false);
      vm.setAffinity(conn, host); //Preferred host
      vm.removeFromOtherConfig(conn, "disks");
      vm.setNameLabel(conn, vmSpec.getName());
      vm.setMemoryStaticMin(conn, vmSpec.getMinRam());
      vm.setMemoryDynamicMin(conn, vmSpec.getMinRam());
      vm.setMemoryDynamicMax(conn, vmSpec.getMinRam());
      vm.setMemoryStaticMax(conn, vmSpec.getMinRam());
      vm.setVCPUsMax(conn, (long)vmSpec.getCpus());
      vm.setVCPUsAtStartup(conn, (long)vmSpec.getCpus());

      Map<String, String> vcpuParams = new HashMap<String, String>();
      Integer speed = vmSpec.getSpeed();
      vcpuParams.put("weight", Integer.toString(
         (int)(speed*0.99 / _host.speed * _maxWeight)));
      vcpuParams.put("cap", Long.toString(
         !vmSpec.getLimitCpuUse()  ?  0
            :  ((long)speed * 100 * vmSpec.getCpus()) / _host.speed));
      vm.setVCPUsParams(conn, vcpuParams);

      vm.setActionsAfterCrash(conn, Types.OnCrashBehaviour.DESTROY);
      vm.setActionsAfterShutdown(conn, Types.OnNormalExit.DESTROY);
      vm.setPVArgs(conn, vmSpec.getBootArgs()); //if paravirtualized guest

      if (!guestOsTypeName.startsWith("Windows")) {
         vm.setPVBootloader(conn, "pygrub");
      }
      return vm;
   }

   // Current host, discovered with getHostInfo(Connection)
   protected final XsHost _host = ... ;

}

Notice above that every Xen VM method requires a Connection argument. Because the guest is virtually a computer, there can be many Connections to it simultaneously – so the Connection is not a property of the VM. On the other hand, a VM can only reside on one XenServer host at a time, so the VM.Record type has a field residentOn of type Host. The above CloudStack method does not set residentOn, but it does set another field of type Host, called affinity, which is a hint to CloudStack that the VM would “prefer” to be launched on a particular host. CitrixResourceBase also has a field _host of type XsHost, a CloudStack helper structure, which gets initialized with XenServer host info and uuids in a CloudStack method called getHostInfo, separately from what is shown in this code inspection.


The call to removeFromOtherConfig refers to the Citrix other-config map parameter, which is a map object that provides arbitrary key/value arguments to XenServer host commands. (See the xe command reference for some host commands that make use of the other-config parameter.) In this case the point is to strip away any initial assumptions about disks associated with the VM.

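Since other-config is just a string-to-string map, building arguments for it is ordinary Java Map handling. A small sketch, with mapPair written in the style of the CloudStack helper of the same name (the "disks" entry below is a hypothetical stale value, added only to show the stripping step):

```java
import java.util.HashMap;
import java.util.Map;

public class OtherConfigDemo {
    // One-entry map, in the style of CloudStack's mapPair helper
    static Map<String, String> mapPair(String key, String value) {
        Map<String, String> m = new HashMap<>();
        m.put(key, value);
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> otherConfig = mapPair("ovs-host-setup", "");
        otherConfig.put("disks", "stale-value"); // hypothetical stale entry
        otherConfig.remove("disks"); // mirrors what removeFromOtherConfig asks of the server
        System.out.println(otherConfig.keySet()); // [ovs-host-setup]
    }
}
```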

VCPUsParams weight and cap are defined in Citrix article CTX117960. A higher weight results in XenServer allocating more cpu to this VM than other guests on the host. cap is a percentage limit on how much cpu the VM can use.

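To make the weight and cap formulas from createVmFromTemplate concrete, here they are extracted into plain arithmetic. The numbers are made up for illustration (a 2000 MHz host, a request for 2 cpus at 500 MHz, and a maxWeight of 256 are assumptions, not values from CloudStack):

```java
public class VcpuParamsDemo {
    // weight formula as in createVmFromTemplate: scale the requested speed
    // against the host's speed, shaved by 1% to avoid rounding over-allocation
    static int weight(int speed, long hostSpeed, int maxWeight) {
        return (int) (speed * 0.99 / hostSpeed * maxWeight);
    }

    // cap formula: percentage of host cpu the VM may use; 0 means "no cap"
    static long cap(boolean limitCpuUse, int speed, int cpus, long hostSpeed) {
        return !limitCpuUse ? 0 : ((long) speed * 100 * cpus) / hostSpeed;
    }

    public static void main(String[] args) {
        System.out.println("weight=" + weight(500, 2000, 256)); // weight=63
        System.out.println("cap=" + cap(true, 500, 2, 2000));   // cap=50
    }
}
```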

Create a VBD for each disk

Next, execute(StartCommand) invokes the CloudStack method createVbd to create a VBD (Virtual Block Device) in the XenServer guest for each disk in the vmSpec. A VBD is a decorator for an underlying VDI (Virtual Disk Image). CloudStack recognizes four types of disk volumes: Root (the main drive), Swap, Datadisk (more local storage), and ISO (CD-ROM). The vmSpec disk info is passed to the volume method parameter below.


Below is CloudStack method createVbd. It does not do a whole lot, relying mainly on a pre-existing VDI. If you have done a Basic Installation of CloudStack, then you’ll have the VDI already. The key invocation of interest here is VBD.create, which allocates resources on the XenServer.


import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Types;
import com.xensource.xenapi.VBD;
import com.xensource.xenapi.VDI;
import com.xensource.xenapi.VM;

import com.cloud.agent.api.to.VolumeTO;
import com.cloud.storage.Volume;
import com.cloud.template.VirtualMachineTemplate.BootloaderType;

public class CitrixResourceBase {

   protected VBD createVbd(Connection conn, VolumeTO volume,
         String vmName, VM vm, BootloaderType bootLoaderType) {
      VDI vdi = VDI.getByUuid(conn, volume.getPath());
      VBD.Record vbdr = new VBD.Record();
      vbdr.VM = vm;
      vbdr.VDI = vdi;
      vbdr.userdevice = Long.toString(volume.getDeviceId());
      vbdr.mode = Types.VbdMode.RW;
      vbdr.type = Types.VbdType.DISK;
      if (volume.getType() == Volume.Type.ROOT
            && bootLoaderType == BootloaderType.PyGrub)
         vbdr.bootable = true;
      if (volume.getType() != Volume.Type.ROOT)
         vbdr.unpluggable = true;
      VBD vbd = VBD.create(conn, vbdr);
      return vbd;
   }

}

Although createVbd returns a VBD, the caller execute does not save it. Presumably this is because CloudStack could look it up again later from the XenServer using static method VBD.getAllRecords(Connection).


For simplicity I edited out the logic in createVbd involving Volume.Type.ISO, which is for CD-ROM drives.


Create a VIF for each network interface

Next, execute(StartCommand) invokes the CloudStack method createVif to create a new XenServer Java VIF (Virtual network Interface) for each of the NICs (Network Interface Controllers, aka network adapters) specified on the guest VM. The method appears at first to be pretty simple:


import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Network;
import com.xensource.xenapi.VIF;
import com.xensource.xenapi.VM;

import com.cloud.agent.api.to.NicTO;

public class CitrixResourceBase {

   protected VIF createVif(Connection conn, String vmName, VM vm,
         NicTO nic) {
      VIF.Record vifr = new VIF.Record();
      vifr.VM = vm;
      vifr.device = Integer.toString(nic.getDeviceId());
      vifr.MAC = nic.getMac();
      vifr.network = getNetwork(conn, nic);
      VIF vif = VIF.create(conn, vifr);
      return vif;
   }

}

But nothing is really quite simple when it comes to the network. CloudStack’s getNetwork method goes to nontrivial effort to piece together a Xen Network object. The overall procedure is essentially like this:


import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Network;
import com.xensource.xenapi.VLAN;

import com.cloud.agent.api.to.NicTO;
import com.cloud.network.Networks.BroadcastDomainType;

public class CitrixResourceBase {

   protected Network getNetwork(Connection conn, NicTO nic) {
      BroadcastDomainType nicType = nic.getBroadcastType();
      Network network = null;
      Network.Record nwr = new Network.Record();
      if (nic.getBroadcastType() == BroadcastDomainType.Native
       || nic.getBroadcastType() == BroadcastDomainType.LinkLocal) {
         network = ...
      }
      else if (nicType == BroadcastDomainType.Vlan) {
         network = ...
      }
      else if (nicType == BroadcastDomainType.Vswitch) {
         network = ...
      }
      return network;
   }

}

The basic network scenarios above are Native/Link-Local, Vlan, and Open vSwitch.


Native networking means the guest VM will send plain vanilla traffic through its ethernet adapter. In this case, CloudStack just takes note of the XenServer uuid for guest traffic on this VM and passes it to the static lookup Network.getByUuid. As mentioned above, CitrixResourceBase has called getHostInfo to obtain XenServer uuids and has saved them in the field called _host.


Link-Local networking, formally defined in RFC 3927, is for the networking setup where two machines are directly connected by ethernet cables with no intervening switch or router. You wouldn’t normally configure your hardware this way but in fact CloudStack does use link-local addressing to connect the XenServer host to a special system control VM called the Secondary Storage VM. When that VM starts it falls into this case, and since CloudStack already has its network adapter uuid stored in _host it makes sense to handle the Link-Local case in the same block of code as for Native networking:


protected Network getNetwork(Connection conn, NicTO nic) {
      ...
      if (nic.getBroadcastType() == BroadcastDomainType.Native
       || nic.getBroadcastType() == BroadcastDomainType.LinkLocal) {
         String uuid = null;
         switch (nic.getType()) {
            case Guest: uuid = _host.guestNetwork; break;
            case Control: uuid = _host.linkLocalNetwork; break;
            case Management: uuid = _host.privateNetwork; break;
            //other cases not shown
         }
         network = Network.getByUuid(conn, uuid);
      }
      ...
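As an aside, RFC 3927 addresses are easy to recognize: IPv4 link-local traffic lives in 169.254.0.0/16, and the JDK already knows the range. A quick check in plain Java:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LinkLocalDemo {
    // True if the literal IPv4 address falls in RFC 3927's 169.254.0.0/16.
    // getByName on a literal address parses it without a DNS lookup.
    static boolean isLinkLocal(String ip) {
        try {
            return InetAddress.getByName(ip).isLinkLocalAddress();
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isLinkLocal("169.254.10.1")); // true
        System.out.println(isLinkLocal("192.168.1.10")); // false
    }
}
```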

With Vlan, CloudStack makes a network name unique by adding a tag, to allow virtual LANs to coexist on a single network. CloudStack employs an additional trick of adding a timestamp to the Vlan name in case of clustered XenServer hosts concurrently trying to create the same Vlan; they will be made to choose the Vlan created with the earliest timestamp.


protected Network getNetwork(Connection conn, NicTO nic) {
      ...
      else if (nicType == BroadcastDomainType.Vlan) {
         long tag = Long.parseLong(nic.getBroadcastUri().getHost());
         // "network" at this point holds the underlying native network;
         // the lookup is elided from this excerpt
         nwr.nameLabel = "VLAN-" + network.getNetworkRecord(conn).uuid
            + "-" + tag;
         nwr.tags = new HashSet<String>();
         nwr.tags.add(generateTimeStamp());
         network = Network.create(conn, nwr);
         VLAN.create(conn, network.getPif(conn), tag, network);
      }
      ...
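The earliest-timestamp tie-break can be sketched with plain string tags. (The tag format below is made up; CloudStack's generateTimeStamp produces its own format.) Fixed-width, sortable tags make "earliest" the same as "lexicographically smallest":

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class VlanTagDemo {
    // Hypothetical timestamp tag: zero-padded millis so that
    // string ordering matches time ordering
    static String timeStampTag(long millis) {
        return String.format("cloud-%013d", millis);
    }

    // The Vlan network that "wins" the race is the one carrying
    // the earliest timestamp tag
    static String earliest(Collection<String> tags) {
        return Collections.min(tags);
    }

    public static void main(String[] args) {
        List<String> tags = Arrays.asList(
            timeStampTag(1700000001000L), timeStampTag(1700000000000L));
        System.out.println(earliest(tags)); // the tag from the earlier millis value
    }
}
```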

Open vSwitch, like CloudStack itself, is another open source virtualization product backed by Citrix. When the guest NIC is routed with Open vSwitch, CloudStack creates the VIF on the XenServer control domain (“dom0”) and temporarily plugs it in, which has the effect of creating a network bridge to the underlying PIF (Physical network Interface). Open vSwitch supports its own Vlan, or you can choose Tunnel-style networking to the underlying network interface.


protected Network getNetwork(Connection conn, NicTO nic) {
      ...
      else if (nicType == BroadcastDomainType.Vswitch) {
         String nwName = null;
         if (nic.getBroadcastUri().getAuthority().startsWith("vlan"))
            nwName = "vswitch";
         else {
            nwName = "OVSTunnel"
               + Long.parseLong(nic.getBroadcastUri().getHost());
            nwr.otherConfig = mapPair("ovs-host-setup", "");
         }
         nwr.nameLabel = nwName;
         network = Network.create(conn, nwr);
         VM dom0 = null;
         for (VM vm : Host.getByUuid(conn, _host.uuid)
                          .getResidentVMs(conn)) {
            if (vm.getIsControlDomain(conn)) {
               dom0 = vm;
               break;
            }
         }
         VIF.Record vifr = new VIF.Record();
         vifr.VM = dom0;
         vifr.device = getLowestAvailableVIFDeviceNum(conn, dom0);
         vifr.otherConfig = mapPair("nameLabel", nwName);
         vifr.MAC = "FE:FF:FF:FF:FF:FF";
         vifr.network = network;
         VIF dom0vif = VIF.create(conn, vifr);
         dom0vif.plug(conn); //XenServer creates a bridge
         dom0vif.unplug(conn);
      }
      ...

Start the VM

Finally, execute(StartCommand) calls the CloudStack method startVM. The sole job of startVM is to invoke the Xen method VM.startOnAsync. This is a potentially long-running operation, so it returns a Xen Task object that CloudStack will monitor, waiting for the VM to be done with startup. startOnAsync also takes two boolean flags, which in this case are set to start the VM in a running state (not paused), and to force startup regardless of whether the current VM configuration looks different than in its last startup.


If you ignore the try/catch handling, the process of starting a VM is nice and short:


import com.xensource.xenapi.Connection;
import com.xensource.xenapi.Host;
import com.xensource.xenapi.Task;
import com.xensource.xenapi.Types;
import com.xensource.xenapi.VM;

public class CitrixResourceBase {

   void startVM(Connection conn, Host host, VM vm, String vmName)
         throws Exception {
      boolean startPaused = false;
      boolean force = true;
      Task task = vm.startOnAsync(conn, host, startPaused, force);
      while (task.getStatus(conn) == Types.TaskStatusType.PENDING) {
         Thread.sleep(1000);
      }
      if (task.getStatus(conn) != Types.TaskStatusType.SUCCESS) {
         task.cancel(conn);
         throw new Types.BadAsyncResult();
      }
   }

}
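Note that the polling loop above spins until the task leaves PENDING, with no upper bound. A bounded variant is straightforward; this is a sketch rather than CloudStack's actual code, and the Supplier stands in for the task.getStatus(conn) == PENDING check:

```java
import java.util.function.Supplier;

public class TaskPollDemo {
    // Poll until stillPending returns false, giving up after maxWaitMillis
    static boolean waitUntilDone(Supplier<Boolean> stillPending,
                                 long maxWaitMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (stillPending.get()) {
            if (System.currentTimeMillis() >= deadline) return false; // timed out
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true; // task left the pending state in time
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Fake task that stays pending for roughly 50 ms
        Supplier<Boolean> pending = () -> System.currentTimeMillis() - start < 50;
        System.out.println(waitUntilDone(pending, 5000, 10)); // true
        System.out.println(waitUntilDone(() -> true, 100, 10)); // false
    }
}
```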

And there we have it! With this code inspection we’ve walked through the Java code in which CloudStack has made a connection to a XenServer hypervisor, logged in, created the guest VM, allocated its VBDs and VIFs, and started the guest. A brand new VM is fired up and running on the private cloud.


Translated from: https://www.sitepoint.com/launching-a-vm-with-xenserver-java-and-apache-cloudstack/
