Nova Source Code Analysis: Cold Migration and Resize

Cold Migration and Resize

1. Migration moves a virtual machine from one compute node to another. During a cold migration the VM is shut down or otherwise unavailable; a live migration, by contrast, must keep the VM running the whole time.

2. Resize adjusts a VM's compute capacity and resources to a new flavor. Resize and cold migration share the same workflow; the difference is that a resize applies a new flavor (whose configuration must not be smaller than the old one, e.g. disks cannot shrink), while a cold migration keeps the flavor unchanged.

3. The essential relationship between resize and cold migration:

  1. Both operations go through the same interface; the dispatch depends only on the arguments: if a flavor is passed it is a resize, if not it is a cold migration.

  2. A migration always moves the instance to a different node. By default a resize does the same, but with the allow_resize_to_same_host option enabled a resize may stay on its current node (see the sketch below).
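
A rough illustration of that dispatch (a simplified sketch, not Nova's code; the real decision in nova/compute/api.py also consults driver capabilities):

# Simplified sketch of the dispatch logic described above (not Nova's actual code).
def plan_move(flavor_id, allow_resize_to_same_host=False):
    """Return the action and whether the current host must be avoided."""
    if flavor_id is None:
        # no flavor supplied: plain cold migration, keep the current flavor
        action = 'cold-migrate'
    else:
        # flavor supplied: resize to the new flavor
        action = 'resize'
    # a resize may stay on the current host only if the operator allows it;
    # a cold migration to the same host would typically raise UnableToMigrateToSelf
    avoid_current_host = not (action == 'resize' and allow_resize_to_same_host)
    return action, avoid_current_host


print(plan_move(None))                                         # ('cold-migrate', True)
print(plan_move('m1.large'))                                   # ('resize', True)
print(plan_move('m1.large', allow_resize_to_same_host=True))   # ('resize', False)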

1. Entry API: nova/api/openstack/compute/migrate_server.py

class MigrateServerController(wsgi.Controller):
    ................

    def _migrate(self, req, id, body):
        """Permit admins to migrate a server to a new host."""
        context = req.environ['nova.context']

        instance = common.get_instance(self.compute_api, context, id,
                                       expected_attrs=['flavor', 'services'])
        context.can(ms_policies.POLICY_ROOT % 'migrate',
                    target={'project_id': instance.project_id})

        host_name = None
        if (api_version_request.is_supported(req, min_version='2.56') and
                body['migrate'] is not None):
            host_name = body['migrate'].get('host')

        try:
            self.compute_api.resize(req.environ['nova.context'], instance,
                                    host_name=host_name)

        ................

common.get_instance(self.compute_api, context, id, expected_attrs=['flavor', 'services']) looks the instance up by its id and verifies that it exists.

What it actually calls is self.compute_api.resize(req.environ['nova.context'], instance, host_name=host_name). compute_api.resize is defined in nova/compute/api.py.
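
For reference, the request that reaches this controller is the "migrate" server action. A minimal client-side sketch (the endpoint, token, and instance uuid below are placeholders):

import requests

NOVA = 'http://controller:8774/v2.1'               # placeholder: your Nova endpoint
TOKEN = '...'                                      # placeholder: a valid Keystone token
SERVER = 'f1e6f5b7-0000-0000-0000-000000000000'    # placeholder: instance uuid

# Cold-migrate the server; the "host" key is only honoured from microversion
# 2.56 on (see the is_supported() check above). {"migrate": None} lets the
# scheduler pick the destination.
resp = requests.post(
    f'{NOVA}/servers/{SERVER}/action',
    headers={'X-Auth-Token': TOKEN,
             'X-OpenStack-Nova-API-Version': '2.56'},
    json={'migrate': {'host': 'compute-02'}})
resp.raise_for_status()    # Nova answers 202 Accepted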

2. nova/compute/api.py

class API:
    """用于与计算管理器交互的API."""

@check_instance_lock
@check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED])
@check_instance_host(check_is_up=True)
def resize(self, context, instance, flavor_id=None, clean_shutdown=True,
           host_name=None, auto_disk_config=None):
 """调整(即,迁移)一个正在运行的实例的大小。

如果flavor_id为None,则认为该进程是迁移,保持

原flavor_id。如果flavor_id不是None,实例应该

迁移到新的主机并调整大小为新的flavor_id。

在调整大小的情况下,host_name总是None。

Host_name仅在冷迁移情况下可以设置。
    """

allow_cross_cell_resize = self._allow_cross_cell_resize(
    context, instance)
if host_name is not None:
    node = self._validate_host_for_cold_migrate(
        context, instance, host_name, allow_cross_cell_resize)

# validate the disk configuration
self._check_auto_disk_config(
    instance, auto_disk_config=auto_disk_config)
# fetch the current flavor for validation
current_flavor = instance.get_flavor()
# NOTE(aarents): Ensure image_base_image_ref is present, as it will be
# needed during finish_resize/cross_cell_resize. Instances upgraded from
# an older nova release may not have this property because of rebuild
# bug 1893618.
instance.system_metadata.update(
    {'image_base_image_ref': instance.image_ref}
)
volume_backed = None

# If no flavor_id is supplied, this is just a migration.
if not flavor_id:
    LOG.debug("flavor_id is None. Assuming migration.",
              instance=instance)

    # make sure the flavor stays the same before and after the migration
    new_flavor = current_flavor
else:
  ................
filter_properties = {'ignore_hosts': []}

# The allow_resize_to_same_host option decides whether a resize may land on
# the same compute node. If a migration does end up on the same node,
# nova-compute raises UnableToMigrateToSelf and the scheduler retries until a
# suitable node is found or the request fails (this relies on the scheduler's
# retry mechanism, e.g. the RetryFilter).
if not self._allow_resize_to_same_host(same_flavor, instance):
    filter_properties['ignore_hosts'].append(instance.host)
# fetch the RequestSpec (the new-style scheduling parameters) and add the ignored hosts to it
request_spec = objects.RequestSpec.get_by_instance_uuid(
    context, instance.uuid)
request_spec.ignore_hosts = filter_properties['ignore_hosts']
  ................

# Set the task_state to RESIZE_PREP; from this point the instance is locked and no other task is allowed on it

instance.task_state = task_states.RESIZE_PREP
instance.progress = 0
instance.auto_disk_config = auto_disk_config or False
instance.save(expected_task_state=[None])

# record the start of the action (MIGRATE or RESIZE)

if not flavor_id:
    self._record_action_start(context, instance, instance_actions.MIGRATE)
else:
    self._record_action_start(context, instance, instance_actions.RESIZE)
scheduler_hint = {'filter_properties': filter_properties}
  ................

# hand the migration/resize task over to the conductor
self.compute_task_api.resize_instance(
    context, instance,
    scheduler_hint=scheduler_hint,
    flavor=new_flavor,
    clean_shutdown=clean_shutdown,
    request_spec=request_spec,
    do_cast=True)

This step mainly validates the parameters and obtains the RequestSpec scheduling parameters, then calls the resize_instance function in nova/conductor/api.py.

3. nova/conductor/api.py

def resize_instance(self, context, instance, scheduler_hint, flavor,
                    reservations=None, clean_shutdown=True,
                    request_spec=None, host_list=None, do_cast=False):
    self.conductor_compute_rpcapi.migrate_server(
        context, instance, scheduler_hint, live=False, rebuild=False,
        flavor=flavor, block_migration=None, disk_over_commit=None,
        reservations=reservations, clean_shutdown=clean_shutdown,
        request_spec=request_spec, host_list=host_list,
        do_cast=do_cast)

This simply calls migrate_server at the conductor layer; the call goes out as a remote RPC.

4. nova/conductor/rpcapi.py

def migrate_server(self, context, instance, scheduler_hint, live, rebuild,
              flavor, block_migration, disk_over_commit,
              reservations=None, clean_shutdown=True, request_spec=None,
              host_list=None, do_cast=False):

cctxt = self.client.prepare(
     version=version,
     call_monitor_timeout=CONF.rpc_response_timeout,
     timeout=CONF.long_rpc_timeout)
if do_cast:
     return cctxt.cast(context, 'migrate_server', **kw)
return cctxt.call(context, 'migrate_server', **kw)

What is actually invoked is the conductor's migrate_server; the request is handled by the conductor's Manager.
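
The difference between cast and call matters here: cast() is fire-and-forget (the caller returns immediately and the migration proceeds asynchronously), while call() blocks until the conductor replies; do_cast selects between them. A minimal oslo.messaging sketch of the two, assuming a loaded nova.conf and illustrative target values:

import oslo_messaging as messaging
from oslo_config import cfg

# Assumes transport_url etc. have already been loaded into cfg.CONF.
transport = messaging.get_rpc_transport(cfg.CONF)
target = messaging.Target(topic='conductor', version='1.25')   # illustrative values
client = messaging.RPCClient(transport, target)

ctxt = {}                                     # placeholder request context
kwargs = {'live': False, 'rebuild': False}    # plus instance, flavor, ... in the real call

cctxt = client.prepare(timeout=1800)
cctxt.cast(ctxt, 'migrate_server', **kwargs)            # asynchronous, returns immediately
result = cctxt.call(ctxt, 'migrate_server', **kwargs)   # synchronous, waits for the reply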

5. nova/conductor/manager.py

@targets_cell
@wrap_instance_event(prefix='conductor')
def migrate_server(self, context, instance, scheduler_hint, live, rebuild,
        flavor, block_migration, disk_over_commit, reservations=None,
        clean_shutdown=True, request_spec=None, host_list=None):
    if instance and not isinstance(instance, nova_object.NovaObject):
        # NOTE(danms): Until v2 of the RPC API, we need to tolerate
        # old-world instance objects here
        attrs = ['metadata', 'system_metadata', 'info_cache',
                 'security_groups']
        instance = objects.Instance._from_db_object(
            context, objects.Instance(), instance,
            expected_attrs=attrs)
    # NOTE: Remove this when we drop support for v1 of the RPC API
    if flavor and not isinstance(flavor, objects.Flavor):
        # Code downstream may expect extra_specs to be populated since it
        # is receiving an object, so lookup the flavor to ensure this.
        flavor = objects.Flavor.get_by_id(context, flavor['id'])
    if live and not rebuild and not flavor:
        self._live_migrate(context, instance, scheduler_hint,
                           block_migration, disk_over_commit, request_spec)
    elif not live and not rebuild and flavor:
        instance_uuid = instance.uuid
        with compute_utils.EventReporter(context, 'cold_migrate',
                                         self.host, instance_uuid):
            self._cold_migrate(context, instance, flavor,
                               scheduler_hint['filter_properties'],
                               clean_shutdown, request_spec,
                               host_list)
    else:
        raise NotImplementedError()

migrate_server is a shared entry point: cold migration, resize, and live migration all go through it. For cold migration (and resize) the actual handling is done by _cold_migrate.

def _cold_migrate(self, context, instance, flavor, filter_properties,
                  clean_shutdown, request_spec, host_list):

# build or refresh the request_spec

request_spec = self._get_request_spec_for_cold_migrate(
    context, instance, flavor, filter_properties, request_spec)

# build the cold-migration task and execute it
task = self._build_cold_migrate_task(context, instance, flavor,
        request_spec, clean_shutdown, host_list)
try:
    task.execute()
except exception.NoValidHost as ex:

  ..................

# persist any changes made to the request_spec scheduling parameters

if request_spec.obj_what_changed():
    request_spec.save()

This wrapper mostly does exception handling and persists the request parameters. task.execute() delegates to _execute(), which is defined in nova/conductor/tasks/migrate.py.
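
The cold-migration task follows the conductor task pattern (nova/conductor/tasks/base.py): execute() wraps the subclass's _execute() and gives the task a chance to roll back on failure. A simplified sketch of that pattern (not the verbatim Nova base class):

import abc


class TaskBase(metaclass=abc.ABCMeta):
    """Simplified sketch of the conductor task pattern."""

    def execute(self):
        try:
            # the real work lives in the subclass, e.g. MigrationTask._execute()
            return self._execute()
        except Exception:
            # on failure the task may undo whatever it has already done
            self.rollback()
            raise

    @abc.abstractmethod
    def _execute(self):
        ...

    def rollback(self):
        pass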

6. nova/conductor/tasks/migrate.py

def _execute(self):

# reset any forced destination so it does not constrain scheduling

self.request_spec.reset_forced_destinations()

legacy_props = self.request_spec.to_legacy_filter_properties_dict()
scheduler_utils.setup_instance_group(self.context, self.request_spec)

if not ('requested_destination' in self.request_spec and
        self.request_spec.requested_destination and
        'host' in self.request_spec.requested_destination):
    scheduler_utils.populate_retry(legacy_props, self.instance.uuid)

port_res_req, req_lvl_params = (
    self.network_api.get_requested_resource_for_instance(
        self.context, self.instance.uuid))

# update the request_spec: attach the port resource requests and pin the destination cell

self.request_spec.requested_resources = port_res_req
self.request_spec.request_level_params = req_lvl_params
self._set_requested_destination_cell(legacy_props)

   .................

if self.host_list is None:
    # run the scheduler to pick a destination host
    selection = self._schedule()

   .................

else:
    # this is a reschedule: use the alternate hosts provided in host_list
    # as the destination
    selection = self._reschedule()

scheduler_utils.populate_filter_properties(legacy_props, selection)
(host, node) = (selection.service_host, selection.nodename)

# RPC to prep_resize on the destination node

self.compute_rpcapi.prep_resize(
    self.context, self.instance, self.request_spec.image,
    self.flavor, host, migration,
    request_spec=self.request_spec, filter_properties=legacy_props,
    node=node, clean_shutdown=self.clean_shutdown,
    host_list=self.host_list)

This step runs the scheduler, selects the destination host, and sends the prep_resize task there to start the migration. When the RPC request arrives, the destination node's nova-compute manager handles it with the corresponding method.

7. nova/compute/manager.py

def prep_resize(self, context, image, instance, flavor,
                request_spec, filter_properties, node,
                clean_shutdown, migration, host_list):

# validate the parameters

if node is None:
    node = self._get_nodename(instance, refresh=True)

instance_state = instance.vm_state
with self._error_out_instance_on_exception(
        context, instance, instance_state=instance_state),\
        errors_out_migration_ctxt(migration):

    # emit the prep_resize start notification
    self._send_prep_resize_notifications(
        context, instance, fields.NotificationPhase.START,
        flavor)
    try:
        scheduler_hints = self._get_scheduler_hints(filter_properties,
                                                    request_spec)

        try:
            self._validate_instance_group_policy(context, instance,
                                                 scheduler_hints)
        except exception.RescheduledException as e:
            raise exception.InstanceFaultRollback(inner_exception=e)

        self._prep_resize(context, image, instance,
                          flavor, filter_properties,
                          node, migration, request_spec,
                          clean_shutdown)

      ..............

    finally:
        # emit the prep_resize end notification
        self._send_prep_resize_notifications(
            context, instance, fields.NotificationPhase.END,
            flavor)

The handling is done by _prep_resize:

def _prep_resize(
    self, context, image, instance, flavor, filter_properties, node,
    migration, request_spec, clean_shutdown=True,
):

if not filter_properties:
    filter_properties = {}
if not instance.host:
    self._set_instance_obj_error_state(instance)
    msg = _('Instance has no source host')
    raise exception.MigrationError(reason=msg)

# Determine whether source and destination are the same host and, if so,
# whether the driver advertises supports_migrate_to_same_host.
same_host = instance.host == self.host

# if the flavor id is unchanged this is a migration; otherwise it is a resize
if same_host and flavor.id == instance['instance_type_id']:
    if not self.driver.capabilities.get(
            'supports_migrate_to_same_host', False):
        raise exception.InstanceFaultRollback(
            inner_exception=exception.UnableToMigrateToSelf(
                instance_id=instance.uuid, host=self.host))

# update the instance (stash the new flavor and the old vm_state) and save it

instance.new_flavor = flavor
vm_state = instance.vm_state
LOG.debug('Stashing vm_state: %s', vm_state, instance=instance)
instance.system_metadata['old_vm_state'] = vm_state
instance.save()

  ............

limits = filter_properties.get('limits', {})
allocs = self.reportclient.get_allocations_for_consumer(
    context, instance.uuid)

# claim resources for the resize on this (destination) node
with self.rt.resize_claim(
    context, instance, flavor, node, migration, allocs,
    image_meta=image, limits=limits,
) as claim:
    LOG.info('Migrating', instance=instance)

# RPC to the source host to start the actual resize/migration.
    self.compute_rpcapi.resize_instance(
        context, instance, claim.migration, image,
        flavor, request_spec, clean_shutdown)

Note the ping-pong: the API handed the request to the conductor, the conductor cast prep_resize to the destination host, and the destination now casts resize_instance back to the source host, whose nova-compute service carries out the actual migration.

8. nova/compute/rpcapi.py

def resize_instance(self, ctxt, instance, migration, image, flavor,
                    request_spec, clean_shutdown=True):
    version = '6.0'
    msg_args = {'instance': instance, 'migration': migration,
                'image': image,
                'flavor': flavor,
                'clean_shutdown': clean_shutdown,
                'request_spec': request_spec,
    }
    client = self.router.client(ctxt)
    if not client.can_send_version(version):
        version = self._ver(ctxt, '5.2')
        del msg_args['flavor']
        msg_args['instance_type'] = flavor
        if not client.can_send_version(version):
            msg_args.pop('request_spec')
            version = '5.0'
    cctxt = client.prepare(server=_compute_host(None, instance),
            version=version)
    cctxt.cast(ctxt, 'resize_instance', **msg_args)

As the code shows, compute_rpcapi.resize_instance casts to the instance's original (source) host, which is where the actual migration work begins. The can_send_version() checks simply downgrade the message for older computes (e.g. passing the flavor as instance_type for the 5.x RPC API).

9. nova/compute/manager.py

def resize_instance(self, context, instance, image,
                    migration, flavor, clean_shutdown,
                    request_spec):
 """启动一个正在运行的实例迁移到另一个主机。

这是从目标主机的' ' prep_resize ' '例程启动的并在源主机上运行。
 """
    try:
        self._resize_instance(context, instance, image, migration,
                              flavor, clean_shutdown, request_spec)
    except Exception:
        with excutils.save_and_reraise_exception():
            self._revert_allocation(context, instance, migration)

It mainly wraps _resize_instance; on failure it reverts the placement allocation for the migration.

def _resize_instance(
    self, context, instance, image, migration, flavor,
    clean_shutdown, request_spec,
):

instance_state = instance.vm_state
with self._error_out_instance_on_exception(
        context, instance, instance_state=instance_state), \
     errors_out_migration_ctxt(migration):

# fetch the instance's network info (this is handled by Neutron)
    network_info = self.network_api.get_instance_nw_info(context, instance)

# update the migration and instance status
    migration.status = 'migrating'
    migration.save()
    instance.task_state = task_states.RESIZE_MIGRATING
    instance.save(expected_task_state=task_states.RESIZE_PREP)
    bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
            context, instance.uuid)

# emit the resize_instance start notification
    self._send_resize_instance_notifications(
        context, instance, bdms, network_info,
        fields.NotificationPhase.START)
    # collect the instance's block device info
    block_device_info = self._get_instance_block_device_info(
        context, instance, bdms=bdms)
    timeout, retry_interval = self._get_power_off_values(
        instance, clean_shutdown)

# power off the instance and migrate its disks
    disk_info = self.driver.migrate_disk_and_power_off(
        context, instance, migration.dest_host,
        flavor, network_info,
        block_device_info,
        timeout, retry_interval)

# tear down the volume connections on the source host
    self._terminate_volume_connections(context, instance, bdms)
# start migrating the instance's networking
    self.network_api.migrate_instance_start(context,
                                            instance,
                                            migration)

# save the migration status
    migration.status = 'post-migrating'
    migration.save()

# migration done: point the instance at the destination host/node
    instance.host = migration.dest_compute
    instance.node = migration.dest_node
    instance.old_flavor = instance.flavor
    instance.task_state = task_states.RESIZE_MIGRATED
    instance.save(expected_task_state=task_states.RESIZE_MIGRATING)

# finish_resize on the destination: wrap up the migration, recreate and start the instance
    self.compute_rpcapi.finish_resize(context, instance,
        migration, image, disk_info, migration.dest_compute,
        request_spec)
    self._send_resize_instance_notifications(
        context, instance, bdms, network_info,
        fields.NotificationPhase.END)
    self.instance_events.clear_events_for_instance(instance)

Key steps:

1. Power off the instance and migrate its disks.

2. Migrate the networking.

3. Finish up: recreate the instance on the destination and start it.

Core steps in detail:

  1. Disk migration:
disk_info = self.driver.migrate_disk_and_power_off(
    context, instance, migration.dest_host,
    flavor, network_info,
    block_device_info,
    timeout, retry_interval)

driver.migrate_disk_and_power_off is defined in nova/virt/libvirt/driver.py (a standalone illustration of its resize-down check follows this list):

def migrate_disk_and_power_off(self, context, instance, dest,
                               flavor, network_info,
                               block_device_info=None,
                               timeout=0, retry_interval=0):

LOG.debug("Starting migrate_disk_and_power_off",
           instance=instance)
ephemerals = driver.block_device_info_get_ephemerals(block_device_info)

eph_size = (block_device.get_bdm_ephemeral_disk_size(ephemerals) or
            instance.flavor.ephemeral_gb)

# disk size validation: shrinking is not allowed, except for a volume-backed root disk

root_down = flavor.root_gb < instance.flavor.root_gb
ephemeral_down = flavor.ephemeral_gb < eph_size
booted_from_volume = self._is_booted_from_volume(block_device_info)
if (root_down and not booted_from_volume) or ephemeral_down:
    reason = _("Unable to resize disk down.")
    raise exception.InstanceFaultRollback(
        exception.ResizeError(reason=reason))

   .................

# if the instance path is not on shared storage, create the instance directory on the destination host

if not shared_instance_path:
    try:
        self._remotefs.create_dir(dest, inst_base)
    except processutils.ProcessExecutionError as e:
        reason = _("not able to execute ssh command: %s") % e
        raise exception.InstanceFaultRollback(
            exception.ResizeError(reason=reason))

# power off the instance
self.power_off(instance, timeout, retry_interval)
self.unplug_vifs(instance, network_info)

# get the volume mappings and disconnect each volume
block_device_mapping = driver.block_device_info_get_mapping(
    block_device_info)
for vol in block_device_mapping:
    connection_info = vol['connection_info']
    self._disconnect_volume(context, connection_info, instance)
disk_info = self._get_instance_disk_info(instance, block_device_info)

# rename the instance directory and prepare for the copy
try:
    self._cleanup_failed_instance_base(inst_base_resize)
    os.rename(inst_base, inst_base_resize)

    if shared_instance_path:
        dest = None
        fileutils.ensure_tree(inst_base)

    on_execute = lambda process: \
        self.job_tracker.add_job(instance, process.pid)
    on_completion = lambda process: \
        self.job_tracker.remove_job(instance, process.pid)
   ...............

    # copy the disk to the destination node (info comes from a per-disk loop elided above)
    compression = info['type'] not in NO_COMPRESSION_TYPES
    libvirt_utils.copy_image(from_path, img_path, host=dest,
                             on_execute=on_execute,
                             on_completion=on_completion,
                             compression=compression)

    # also copy disk.info (it records disk formats, injected data, etc.)
    src_disk_info_path = os.path.join(inst_base_resize, 'disk.info')
    if os.path.exists(src_disk_info_path):
        dst_disk_info_path = os.path.join(inst_base, 'disk.info')
        libvirt_utils.copy_image(src_disk_info_path,
                                 dst_disk_info_path,
                                 host=dest, on_execute=on_execute,
                                 on_completion=on_completion)

    libvirt_utils.save_and_migrate_vtpm_dir(
        instance.uuid, inst_base_resize, inst_base, dest,
        on_execute, on_completion)
except Exception:
    with excutils.save_and_reraise_exception():
        self._cleanup_remote_migration(dest, inst_base,
                                       inst_base_resize,
                                       shared_instance_path)
return jsonutils.dumps(disk_info)
  2. After the disk migration, the instance's networking is migrated.

  3. After the network migration, finish_resize is sent over RPC to the destination host to complete the resize there.
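
As an aside on the size check near the top of migrate_disk_and_power_off: shrinking disks is refused unless the root disk lives on a volume. A standalone illustration of that rule (not Nova code):

def resize_allowed(old_root_gb, new_root_gb, old_eph_gb, new_eph_gb,
                   booted_from_volume=False):
    """Mirror the check quoted above: the root disk may only shrink when the
    instance is volume-backed, and ephemeral disks may never shrink."""
    root_down = new_root_gb < old_root_gb
    ephemeral_down = new_eph_gb < old_eph_gb
    return not ((root_down and not booted_from_volume) or ephemeral_down)


print(resize_allowed(20, 40, 0, 0))                            # True: growing root is fine
print(resize_allowed(40, 20, 0, 0))                            # False: shrinking a local root disk
print(resize_allowed(40, 20, 0, 0, booted_from_volume=True))   # True: root is on a volume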

10. nova/compute/rpcapi.py

def finish_resize(self, ctxt, instance, migration, image, disk_info, host,
                  request_spec):

  .............

cctxt = client.prepare(
        server=host, version=version)
cctxt.cast(ctxt, 'finish_resize', **msg_args)

11. nova/compute/manager.py
def finish_resize(self, context, disk_info, image, instance,
                  migration, request_spec):

    """Completes the migration process.

    Sets up the newly transferred disk and turns on the instance on its new
    host machine.
    """
    try:
        self._finish_resize_helper(context, disk_info, image, instance,
                                   migration, request_spec)
    except Exception:
        with excutils.save_and_reraise_exception():
            LOG.info('Deleting allocations for old flavor on source node '
                     '%s after finish_resize failure. You may be able to '
                     'recover the instance by hard rebooting it.',
                     migration.source_compute, instance=instance)
            self._delete_allocation_after_move(
                context, instance, migration)

finish_resize calls _finish_resize_helper, which in turn calls _finish_resize:

def _finish_resize(self, context, instance, migration, disk_info,
                   image_meta, bdms, request_spec):
    resize_instance = False
    old_instance_type_id = migration.old_instance_type_id
    new_instance_type_id = migration.new_instance_type_id
    old_flavor = instance.flavor

    old_vm_state = instance.system_metadata.get('old_vm_state',
                                                vm_states.ACTIVE)

    if old_instance_type_id != new_instance_type_id:
        new_flavor = instance.new_flavor  # this is set in _prep_resize
        self._set_instance_info(instance, new_flavor)
        for key in ('root_gb', 'swap', 'ephemeral_gb'):
            if old_flavor[key] != new_flavor[key]:
                resize_instance = True
                break
        instance.apply_migration_context()

    # set up networking on the destination host and finish the network
    # migration
    self.network_api.setup_networks_on_host(context, instance,
                                            migration.dest_compute)
    provider_mappings = self._get_request_group_mapping(request_spec)
    self.network_api.migrate_instance_finish(
        context, instance, migration, provider_mappings)
    network_info = self.network_api.get_instance_nw_info(context, instance)

    instance.task_state = task_states.RESIZE_FINISH
    instance.save(expected_task_state=task_states.RESIZE_MIGRATED)
    self._send_finish_resize_notifications(
        context, instance, bdms, network_info,
        fields.NotificationPhase.START)
    self._update_volume_attachments(context, instance, bdms)

    # refresh the block device info
    block_device_info = self._get_instance_block_device_info(
        context, instance, refresh_conn_info=True, bdms=bdms)
    power_on = old_vm_state != vm_states.STOPPED
    allocations = self.reportclient.get_allocs_for_consumer(
        context, instance.uuid)['allocations']
    try:
        # hand the migration over to the virt driver to finish it
        self.driver.finish_migration(context, migration, instance,
                                     disk_info, network_info, image_meta,
                                     resize_instance, allocations,
                                     block_device_info, power_on)
    except Exception:
        with excutils.save_and_reraise_exception():
            if old_instance_type_id != new_instance_type_id:
                self._set_instance_info(instance, old_flavor)

    self._complete_volume_attachments(context, bdms)
    migration.status = 'finished'
    migration.save()

    # update the instance state
    instance.vm_state = vm_states.RESIZED
    instance.task_state = None
    instance.launched_at = timeutils.utcnow()
    instance.save(expected_task_state=task_states.RESIZE_FINISH)
    return network_info

finish_migration is defined in nova/virt/libvirt/driver.py.

12. nova/virt/libvirt/driver.py

def finish_migration(
    self,
    context: nova_context.RequestContext,
    migration: 'objects.Migration',
    instance: 'objects.Instance',
    disk_info: str,
    network_info: network_model.NetworkInfo,
    image_meta: 'objects.ImageMeta',
    resize_instance: bool,
    allocations: ty.Dict[str, ty.Any],
    block_device_info: ty.Optional[ty.Dict[str, ty.Any]] = None,
    power_on: bool = True,
) -> None:

# complete the migration on the destination host

# build the disk info and create the disk images

block_disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                          instance,
                                          image_meta,
                                          block_device_info)

self._create_image(context, instance, block_disk_info['mapping'],
                   block_device_info=block_device_info,
                   ignore_bdi_for_swap=True,
                   fallback_from_host=migration.source_compute)

self._ensure_console_log_for_instance(instance)
gen_confdrive = functools.partial(
    self._create_configdrive, context, instance,
    InjectionInfo(admin_pass=None, network_info=network_info,
                  files=None))

for info in jsonutils.loads(disk_info):
    path = info['path']
    disk_name = os.path.basename(path)

    if (disk_name != 'disk.config' and
            info['type'] == 'raw' and CONF.use_cow_images):
        self._disk_raw_to_qcow2(info['path'])

mdevs = self._allocate_mdevs(allocations)

self._finish_migration_vtpm(context, instance)

# generate the guest XML from the instance info
xml = self._get_guest_xml(context, instance, network_info,
                          block_disk_info, image_meta,
                          block_device_info=block_device_info,
                          mdevs=mdevs)

# create (and optionally start) the guest

guest = self._create_guest_with_network(
    context, xml, instance, network_info, block_device_info,
    power_on=power_on, vifs_already_plugged=True,
    post_xml_callback=gen_confdrive)
if power_on:
    timer = loopingcall.FixedIntervalLoopingCall(
                                            self._wait_for_running,
                                            instance)
    timer.start(interval=0.5).wait()

guest.sync_guest_time()
LOG.debug("finish_migration finished successfully.", instance=instance)

13. _create_guest_with_network creates the guest

_create_guest_with_network creates the guest on the destination host and starts it if the original state was active. That step is analyzed in the article on instance creation.
