OpenStack VM Live Migration Optimization (Victoria) (with source code and log analysis)

Optimization Goal

Improve the live migration success rate

Understanding Live Migration

Live migration is the process of transferring memory (and possibly storage). The source host keeps copying the VM's memory to the destination host until only a remainder small enough to transfer in a single pass is left; at that point the VM on the source is paused and the final portion is transferred. Strictly speaking, live migration is not fully uninterrupted: at the very end the VM is briefly suspended while the last memory copy completes. In the hypervisor, suspending a VM essentially means changing the vCPU scheduling so the VM temporarily receives no physical CPU time slices. This pause is on the order of 50 ms, which is almost imperceptible to user workloads.

How Live Migration Works

Live Migration Steps

  • Stage 1: Mark all of the VM's RAM as dirty memory.
  • Stage 2: Transfer all dirty memory, then recompute the newly dirtied memory; iterate until the exit condition is met.
  • Stage 3: Stop the GuestOS and transfer the remaining dirty memory along with the VM's device state.

The key step is Stage 2. The exit condition Nova uses is a dynamically configured max downtime. In each iteration of Libvirt's pre-copy live migration, the VM's newly dirtied memory is recomputed, and the time each iteration takes is used to estimate the bandwidth; from that bandwidth and the current iteration's dirty page count, the time needed to transfer the remaining data is computed. That time is the downtime. If it falls within the administrator-configured live migration max downtime, the loop exits and migration enters Stage 3.
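The exit check described above can be sketched as follows (a minimal illustration of the idea, not Libvirt's actual code; the function names and the 4 KiB page size are assumptions):

```python
PAGE_SIZE = 4096  # assumed guest page size in bytes

def estimated_downtime_ms(dirty_pages, bandwidth_bytes_per_s):
    """Time to transfer the remaining dirty pages at the measured
    bandwidth -- i.e. the downtime if the VM were paused right now."""
    remaining_bytes = dirty_pages * PAGE_SIZE
    return remaining_bytes / bandwidth_bytes_per_s * 1000.0

def should_enter_stage3(dirty_pages, bandwidth_bytes_per_s, max_downtime_ms):
    """Exit the pre-copy loop once the projected downtime fits within
    the administrator-configured maximum downtime."""
    return estimated_downtime_ms(dirty_pages, bandwidth_bytes_per_s) <= max_downtime_ms
```

With 100 dirty 4 KiB pages over a 1 GB/s link, the projected downtime is about 0.4 ms, so the loop may exit; if the workload keeps dirtying pages faster than they transfer, the projection never fits and the loop keeps iterating.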

Live Migration Parameters

live_migration_bandwidth
The maximum bandwidth to use during migration (in MiB/s). The default is 0, which means the migration bandwidth is unlimited.

live_migration_downtime, live_migration_downtime_steps, live_migration_downtime_delay
live_migration_downtime: the final maximum downtime value
live_migration_downtime_steps: the number of increments taken to reach that maximum
live_migration_downtime_delay: the interval between successive downtime increases

Common downtime tuning strategies, for a given maximum downtime:

Migration-speed-first: downtime is not a concern and the migration should finish as quickly as possible. Decrease live_migration_downtime_steps and live_migration_downtime_delay so the maximum downtime is reached quickly.

Shortest-interruption-first: total migration time is not a concern and the service interruption should be as short as possible. Increase live_migration_downtime_steps and live_migration_downtime_delay so the maximum downtime is approached gradually; the migration may then complete before the full maximum downtime is ever needed.
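The combined effect of the three options can be sketched with a generator modeled on downtime_steps in nova/virt/libvirt/migration.py (simplified for illustration: the config values become plain arguments with their upstream defaults, and the minimum-value clamping is omitted):

```python
def downtime_steps(data_gb, downtime=500, steps=10, delay=75):
    """Yield (elapsed_secs, max_downtime_ms) tuples. The allowed
    downtime ramps from downtime/steps up to the configured maximum;
    the delay between steps is scaled by the data volume in GiB."""
    delay = int(delay * data_gb)          # more data -> slower ramp
    base = downtime / steps               # first step's downtime
    offset = (downtime - base) / steps    # increment per step
    for i in range(steps + 1):
        yield (int(delay * i), int(base + offset * i))
```

For data_gb=2 this yields (0, 50), (150, 95), ..., (1500, 500): every 150 seconds of elapsed time the tolerated downtime is raised one step, until the 500 ms maximum is reached.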

live_migration_completion_timeout
The maximum time allowed for the whole migration, scaled by the amount of data to transfer (per GiB); if the migration has not completed within this time, it is cancelled and fails.

live_migration_permit_post_copy
live_migration_permit_post_copy enables post-copy migration; if unset, pre-copy mode is used by default.

Post-copy mode: first transfer the device state and a portion (roughly 10%) of the data to the destination host, then switch the VM to run on the destination; the remaining data is copied afterwards.
Pre-copy mode: all of the VM's memory is copied before switching over to the destination host.

live_migration_permit_auto_converge
live_migration_permit_auto_converge enables auto-converge mode (also known as throttled migration).
When a VM stays under heavy load for a long time and prevents the migration from converging, the vCPUs are throttled to reduce their load and slow the growth of dirty memory, so the migration can complete.

Live Migration Code Walkthrough (Victoria)

Preparation Phase

nova/api/openstack/compute/migrate_server.py

    def _migrate_live(self, req, id, body):
        ...
        # whether to perform block migration
        block_migration = body["os-migrateLive"]["block_migration"]
        # whether to run asynchronously
        async_ = api_version_request.is_supported(req, min_version='2.34')
        ...
            # whether to force the migration
            force = self._get_force_param_for_live_migration(body, host)
        ...
            # whether disk over-commit is allowed
            disk_over_commit = body["os-migrateLive"]["disk_over_commit"]

nova/nova/compute/api.py

    def live_migrate(self, context, instance, block_migration,
                     disk_over_commit, host_name, force=None, async_=False):
        """Migrate a server lively to a new host."""
       ...
        if force is False and host_name:
        ...
            # non-forced: set the requested destination host
            destination = objects.Destination(
                host=target.host,
                node=target.hypervisor_hostname
            )
            request_spec.requested_destination = destination
            ...
            self.compute_task_api.live_migrate_instance(context, instance,
                host_name, block_migration=block_migration,
                disk_over_commit=disk_over_commit,
                request_spec=request_spec, async_=async_)

nova/nova/conductor/manager.py

    def _live_migrate(self, context, instance, scheduler_hint,
                      block_migration, disk_over_commit, request_spec):
        # get the destination host
        destination = scheduler_hint.get("host")
        ...
        task = self._build_live_migrate_task(context, instance, destination,
                                             block_migration, disk_over_commit,
                                             migration, request_spec)
        ...
            task.execute()

nova/nova/conductor/tasks/live_migrate.py

class LiveMigrationTask(base.TaskBase):
    ...
    def _execute(self):
        # check that the instance is active
        self._check_instance_is_active()
        # check whether the instance uses NUMA
        self._check_instance_has_no_numa()
        # check that the source host's compute service is up
        self._check_host_is_up(self.source)
        ...
        # if no destination host was specified, have the scheduler pick one
        if not self.destination:
            self.destination, dest_node, self.limits = self._find_destination()
        else:
            # check that the destination host differs from the source
            self._check_destination_is_not_source()
            # check that the destination host's compute service is up
            self._check_host_is_up(self.destination)
            # check that the destination host has enough free memory
            self._check_destination_has_enough_memory()
            ...
                # check that the source and destination hypervisors match
                self._check_compatible_with_source_hypervisor(
            ...
                # check that the destination host can accept the live migration
                self._check_requested_destination()
            ...
        return self.compute_rpcapi.live_migration(self.context,
                host=self.source,
                instance=self.instance,
                dest=self.destination,
                block_migration=self.block_migration,
                migration=self.migration,
                migrate_data=self.migrate_data)

nova/nova/conductor/tasks/live_migrate.py
The _check_requested_destination function:

    def _check_requested_destination(self):
        """Performs basic pre-live migration checks for the forced host."""
        self._call_livem_checks_on_host(self.destination, {})
        ...

The _call_livem_checks_on_host function is also in live_migrate.py:

    def _call_livem_checks_on_host(self, destination, provider_mapping):
        ...
        try:
            self.migrate_data = self.compute_rpcapi.\
                check_can_live_migrate_destination(self.context, self.instance,
                    destination, self.block_migration, self.disk_over_commit,
                    self.migration, self.limits)
        ...

nova/nova/virt/libvirt/driver.py

    def check_can_live_migrate_destination(self, context, instance,
                                           src_compute_info, dst_compute_info,
                                           block_migration=False,
                                           disk_over_commit=False):
        """Run checks on the destination host and return the data
        needed for the source host's checks.
        """
        if disk_over_commit:
            disk_available_gb = dst_compute_info['free_disk_gb']
        else:
            disk_available_gb = dst_compute_info['disk_available_least']
        disk_available_mb = (
            (disk_available_gb * units.Ki) - CONF.reserved_host_disk_mb)

        # compare CPUs
        try:
            if not instance.vcpu_model or not instance.vcpu_model.model:
                source_cpu_info = src_compute_info['cpu_info']
                self._compare_cpu(None, source_cpu_info, instance)
            else:
                self._compare_cpu(instance.vcpu_model, None, instance)
        except exception.InvalidCPUInfo as e:
            raise exception.MigrationPreCheckError(reason=e)

        # create a shared-storage test file, to be checked on the source host
        filename = self._create_shared_storage_test_file(instance)

        data = objects.LibvirtLiveMigrateData()
        data.filename = filename
        data.image_type = CONF.libvirt.images_type
        data.graphics_listen_addr_vnc = CONF.vnc.server_listen
        data.graphics_listen_addr_spice = CONF.spice.server_listen
        ...
        return data

nova/nova/compute/manager.py

    def live_migration(self, context, dest, instance, block_migration,
                       migration, migrate_data):
        ...
        # set migration status to 'queued'
        self._set_migration_status(migration, 'queued')
        self._waiting_live_migrations[instance.uuid] = (None, None)
        ...
                # call _do_live_migration
                self._do_live_migration, context, dest, instance,
                block_migration, migration, migrate_data)
            self._waiting_live_migrations[instance.uuid] = (migration, future)


    def _do_live_migration(self, context, dest, instance, block_migration,
                           migration, migrate_data):
        ...
        # set migration status to 'preparing'
        self._set_migration_status(migration, 'preparing')
        source_bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
                context, instance.uuid)
        # have the destination host perform pre-live-migration setup
        migrate_data = self._do_pre_live_migration_from_source(
            context, dest, instance, block_migration, migration, migrate_data,
            source_bdms)
        ...
        # set migration status to 'running'
        self._set_migration_status(migration, 'running')

nova/nova/virt/libvirt/driver.py

    def _live_migration(self, context, instance, dest, post_method,
                        recover_method, block_migration,
                        migrate_data):
        ...
        # a nova.virt.libvirt.guest.Guest object
        guest = self._host.get_guest(instance)

        disk_paths = []
        device_names = []
        if (migrate_data.block_migration and
                CONF.libvirt.virt_type != "parallels"):
            # block migration: collect the local disk file paths;
            # without block migration, only memory data is transferred
            disk_paths, device_names = self._live_migration_copy_disk_paths(
                context, instance, guest)
        """
        spawn starts a thread running _live_migration_operation.
        _live_migration_operation issues the live migration command to
        libvirtd -- in effect calling the libvirt Python API method
        virDomainMigrateToURI to migrate the domain from the current
        host to the given destination host.
        """
        opthread = utils.spawn(self._live_migration_operation,
                                     context, instance, dest,
                                     block_migration,
                                     migrate_data, guest,
                                     device_names)
        ...
            # monitor libvirtd's migration progress
            self._live_migration_monitor(context, instance, guest, dest,
                                         post_method, recover_method,
                                         block_migration, migrate_data,
                                         finish_event, disk_paths)
        ...
Issuing the live migration command to libvirtd

nova/nova/virt/libvirt/driver.py

    def _live_migration_operation(self, context, instance, dest,
                                  block_migration, migrate_data, guest,
                                  device_names):
            ...
            # get the live migration URI
            migrate_uri = None
            if ('target_connect_addr' in migrate_data and
                    migrate_data.target_connect_addr is not None):
                dest = migrate_data.target_connect_addr
                if (migration_flags &
                    libvirt.VIR_MIGRATE_TUNNELLED == 0):
                    migrate_uri = self._migrate_uri(dest)
            # get the GuestOS XML
            new_xml_str = None
            ...
                new_resources = None
                if isinstance(instance, objects.Instance):
                    new_resources = self._sorted_migrating_resources(
                        instance, instance.flavor)
                new_xml_str = libvirt_migrate.get_updated_guest_xml(
                    guest, migrate_data, self._get_volume_config,
                    get_vif_config=get_vif_config, new_resources=new_resources)
            ...
            # a wrapper around libvirt.virDomain.migrate;
            # issues the live migration command to libvirtd
            guest.migrate(self._live_migration_uri(dest),
                          migrate_uri=migrate_uri,
                          flags=migration_flags,
                          migrate_disks=device_names,
                          destination_xml=new_xml_str,
                          bandwidth=CONF.libvirt.live_migration_bandwidth)

guest.migrate calls the migrate function in guest.py
nova/nova/virt/libvirt/guest.py

    def migrate(self, destination, migrate_uri=None, migrate_disks=None,
                destination_xml=None, flags=0, bandwidth=0):
        """Migrate the guest object from its current host to the
        destination.
        """
        params = {}
        params['bandwidth'] = bandwidth

        if destination_xml:
            params['destination_xml'] = destination_xml
            params['persistent_xml'] = destination_xml
        if migrate_disks:
            params['migrate_disks'] = migrate_disks
        if migrate_uri:
            params['migrate_uri'] = migrate_uri
        # configure libvirt migration details via flags
        if (flags & libvirt.VIR_MIGRATE_NON_SHARED_INC != 0 and
                not params.get('migrate_disks')):
            flags &= ~libvirt.VIR_MIGRATE_NON_SHARED_INC

        self._domain.migrateToURI3(
            destination, params=params, flags=flags)

The various flags are set via the live_migration_flag option in nova.conf:

VIR_MIGRATE_LIVE – do not pause the VM during migration
VIR_MIGRATE_PEER2PEER – direct connection between the source and destination hosts
VIR_MIGRATE_TUNNELLED – tunnel migration data over the libvirt RPC channel
VIR_MIGRATE_PERSIST_DEST – persist the domain on the destination host if migration succeeds
VIR_MIGRATE_UNDEFINE_SOURCE – undefine the domain on the source host if migration succeeds
VIR_MIGRATE_PAUSED – leave the domain paused on the remote side
VIR_MIGRATE_CHANGE_PROTECTION – protect against domain configuration changes during migration (set automatically when supported)
VIR_MIGRATE_UNSAFE – force migration even if it is considered unsafe
VIR_MIGRATE_OFFLINE – migrate offline
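Each flag name corresponds to a bit in libvirt's virDomainMigrateFlags enum, and the configured names are OR-ed into the single flags bitmask passed to migrateToURI3. A minimal sketch of that mapping (the numeric values shown are libvirt's public constants for these flags; the parser function itself is illustrative):

```python
# Numeric values of a few libvirt virDomainMigrateFlags constants.
FLAG_VALUES = {
    'VIR_MIGRATE_LIVE': 1,
    'VIR_MIGRATE_PEER2PEER': 2,
    'VIR_MIGRATE_TUNNELLED': 4,
    'VIR_MIGRATE_PERSIST_DEST': 8,
    'VIR_MIGRATE_UNDEFINE_SOURCE': 16,
}

def parse_migration_flags(flag_string):
    """OR the named flags into the bitmask passed to migrateToURI3."""
    flags = 0
    for name in flag_string.split(','):
        flags |= FLAG_VALUES[name.strip()]
    return flags
```

The live_migration_flag value used in the experiment below therefore yields a bitmask of 1|2|4|8|16 = 31.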

nova/nova/virt/libvirt/driver.py

    def _live_migration_monitor(self, context, instance, guest,
                                dest, post_method,
                                recover_method, block_migration,
                                migrate_data, finish_event,
                                disk_paths):

        on_migration_failure: ty.Deque[str] = deque()
        # total amount of data to migrate, including RAM and local disk files
        data_gb = self._live_migration_data_gb(instance, disk_paths)
        # compute downtime_steps from the user's downtime settings; each
        # step's max downtime increases until the user-configured maximum
        # tolerable downtime is reached
        downtime_steps = list(libvirt_migrate.downtime_steps(data_gb))
        migration = migrate_data.migration
        curdowntime = None

        migration_flags = self._get_migration_flags(
                                  migrate_data.block_migration)
        # poll counter
        n = 0
        start = time.time()
        is_post_copy_enabled = self._is_post_copy_enabled(migration_flags)
        # whether post-copy is enabled (note: vPMEM does not support post-copy)
        is_post_copy_enabled &= not bool(self._get_vpmems(instance))
        while True:
            # get the live migration job info
            info = guest.get_job_info()
            ...
            elif info.type == libvirt.VIR_DOMAIN_JOB_UNBOUNDED:
                # migration is still in progress
                # this is where tasks that change the live migration state
                # are run, e.g. changing max downtime, cancelling the
                # operation, or changing max bandwidth
                libvirt_migrate.run_tasks(guest, instance,
                                          self.active_migrations,
                                          on_migration_failure,
                                          migration,
                                          is_post_copy_enabled)

                now = time.time()
                elapsed = now - start

                completion_timeout = int(
                    CONF.libvirt.live_migration_completion_timeout * data_gb)

                # check whether the timeout action should be triggered
                if libvirt_migrate.should_trigger_timeout_action(
                        instance, elapsed, completion_timeout,
                        migration.status):
                    timeout_act = CONF.libvirt.live_migration_timeout_action
                    if timeout_act == 'force_complete':
                        self.live_migration_force_complete(instance)
                    else:
                        try:
                            # abort the migration on timeout
                            guest.abort_job()
                        except libvirt.libvirtError as e:
                            LOG.warning("Failed to abort migration %s",
                                    encodeutils.exception_to_unicode(e),
                                    instance=instance)
                            self._clear_empty_migration(instance)
                            raise
                # dynamically apply the next max-downtime step
                curdowntime = libvirt_migrate.update_downtime(
                    guest, instance, curdowntime,
                    downtime_steps, elapsed)

                if (n % 10) == 0:
                    remaining = 100
                    if info.memory_total != 0:
                        # compute the remaining migration percentage
                        remaining = round(info.memory_remaining *
                                          100 / info.memory_total)

                    libvirt_migrate.save_stats(instance, migration,
                                               info, remaining)
                    disk_remaining = 100
                    if info.disk_total != 0:
                        disk_remaining = round(info.disk_remaining *
                                               100 / info.disk_total)
                    # log at debug level every 10 polls,
                    # at info level every 60 polls
                    lg = LOG.debug
                    if (n % 60) == 0:
                        lg = LOG.info

                    lg("Migration running for %(secs)d secs, "
                       "memory %(remaining)d%% remaining "
                       "(bytes processed=%(processed_memory)d, "
                       "remaining=%(remaining_memory)d, "
                       "total=%(total_memory)d); "
                       "disk %(disk_remaining)d%% remaining "
                       "(bytes processed=%(processed_disk)d, "
                       "remaining=%(remaining_disk)d, "
                       "total=%(total_disk)d).",
                       {"secs": n / 2, "remaining": remaining,
                        "processed_memory": info.memory_processed,
                        "remaining_memory": info.memory_remaining,
                        "total_memory": info.memory_total,
                        "disk_remaining": disk_remaining,
                        "processed_disk": info.disk_processed,
                        "remaining_disk": info.disk_remaining,
                        "total_disk": info.disk_total}, instance=instance)

                n = n + 1
            # migration fully completed
            elif info.type == libvirt.VIR_DOMAIN_JOB_COMPLETED:
                # Migration is all done
                LOG.info("Migration operation has completed",
                         instance=instance)
                post_method(context, instance, dest, block_migration,
                            migrate_data)
                break
            # migration failed
            elif info.type == libvirt.VIR_DOMAIN_JOB_FAILED:
                # Migration did not succeed
                LOG.error("Migration operation has aborted", instance=instance)
                libvirt_migrate.run_recover_tasks(self._host, guest, instance,
                                                  on_migration_failure)
                recover_method(context, instance, dest, migrate_data)
                break
            # migration was cancelled
            elif info.type == libvirt.VIR_DOMAIN_JOB_CANCELLED:
                # Migration was stopped by admin
                LOG.warning("Migration operation was cancelled",
                            instance=instance)
                libvirt_migrate.run_recover_tasks(self._host, guest, instance,
                                                  on_migration_failure)
                recover_method(context, instance, dest, migrate_data,
                               migration_status='cancelled')
                break
            else:
                LOG.warning("Unexpected migration job type: %d",
                            info.type, instance=instance)

            time.sleep(0.5)
        self._clear_empty_migration(instance)

The function that updates the downtime, update_downtime:
nova/nova/virt/libvirt/migration.py

def update_downtime(guest, instance,
                    olddowntime,
                    downtime_steps, elapsed):
    """Update max downtime if needed

    :param guest: a nova.virt.libvirt.guest.Guest to set downtime for
    :param instance: a nova.objects.Instance
    :param olddowntime: current set downtime, or None
    :param downtime_steps: list of downtime steps
    :param elapsed: total time of migration in secs

    Determine if the maximum downtime needs to be increased based on
    the downtime steps. Each element in the downtime steps list should
    be a 2 element tuple. The first element contains a time marker and
    the second element contains the downtime value to set when the
    marker is hit.

    The guest object will be used to change the current downtime value
    on the instance. Any errors hit when updating downtime will be
    ignored.

    :returns: the new downtime value
    """
    LOG.debug("Current %(dt)s elapsed %(elapsed)d steps %(steps)s",
              {"dt": olddowntime, "elapsed": elapsed,
               "steps": downtime_steps}, instance=instance)
    thisstep = None
    for step in downtime_steps:
        if elapsed > step[0]:
            thisstep = step

    if thisstep is None:
        LOG.debug("No current step", instance=instance)
        return olddowntime

    if thisstep[1] == olddowntime:
        LOG.debug("Downtime does not need to change",
                  instance=instance)
        return olddowntime

    LOG.info("Increasing downtime to %(downtime)d ms "
             "after %(waittime)d sec elapsed time",
             {"downtime": thisstep[1],
              "waittime": thisstep[0]},
             instance=instance)

    try:
        guest.migrate_configure_max_downtime(thisstep[1])
    except libvirt.libvirtError as e:
        LOG.warning("Unable to increase max downtime to %(time)d ms: %(e)s",
                    {"time": thisstep[1], "e": e}, instance=instance)
    return thisstep[1]

Optimizations to Improve the Live Migration Success Rate

Optimization Points

1. Throttled migration (also known as auto-converge)
Throttle the vCPUs to reduce their load and slow the growth of dirty memory. In libvirtd, auto-converge is triggered when dirty memory is generated faster than migration can transfer it: libvirtd first cuts vCPU performance by 20%, and if dirty memory still grows faster than it can be transferred, it cuts a further 10% on top of that, repeating until the dirty-memory growth rate drops below the transfer rate.
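The throttling loop can be modeled roughly as follows (a toy simulation only: it assumes the dirty-memory rate scales linearly with the guest's remaining CPU share; the 20% initial cut and 10% increments mirror the figures above):

```python
def throttle_schedule(dirty_rate, transfer_rate, initial=0.20,
                      increment=0.10, max_throttle=0.99):
    """Return the sequence of CPU throttle fractions applied until the
    (modeled) dirty-page rate drops below the transfer rate."""
    throttle = initial
    steps = [throttle]
    # assume the dirty rate scales with the guest's remaining CPU share
    while dirty_rate * (1 - throttle) > transfer_rate and throttle < max_throttle:
        throttle = min(throttle + increment, max_throttle)
        steps.append(throttle)
    return steps
```

For a guest dirtying memory at twice the transfer rate, the model settles at a 50% throttle after four steps; a guest dirtying memory much faster ends up near the maximum throttle.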

2. Post-copy mode

First transfer the device state and a portion (roughly 10%) of the data to the destination host, then simply switch the VM to run there.
User traffic is served from the destination from that point on, while the source host's remaining memory (the other ~90%) keeps being copied over.
When the VM touches a memory page that has not yet arrived, a remote page fault is raised, triggering a fetch of that page from the source host (effectively jumping the queue: the pages the workload needs are pulled first).

The actual flow in QEMU:

  • post-copy first sends the CPU state and device state and stops the VM on src; dst marks all pages invalid and starts running
  • QEMU traps the page fault and requests the page from src
  • src puts the page into its send queue; dst waits for the page, notifies the kernel to resolve the page fault, and then continues running

Note: in post-copy mode, src keeps streaming pages to dst until everything has been copied; only pages dst urgently needs jump the queue and are sent first.
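The queue-jumping behaviour can be illustrated with a toy model (purely illustrative; real post-copy works inside QEMU at page granularity, resolving faults via mechanisms like userfaultfd):

```python
from collections import deque

def postcopy_page_order(pages, faulted):
    """Return the order pages reach the destination: faulted pages
    jump the background send queue; the rest drain in queue order."""
    queue = deque(pages)
    arrived = []
    for page in faulted:
        if page in queue:
            # remote page fault: pull this page ahead of everything else
            queue.remove(page)
            arrived.append(page)
    arrived.extend(queue)  # background copy stream drains the rest
    return arrived
```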

3. Tuning the downtime parameters

Risks:
  1. Throttled migration
    • Throttling can slow the VM down considerably; make sure the applications on the VM can tolerate the slowdown
    • Auto-converge does not guarantee that the live migration will succeed
  2. Post-copy
    • The page faults introduced by post-copy slow the VM down
    • If the network connection between the source and destination hosts is interrupted, page faults can no longer be resolved and the VM has to be restarted
  3. Downtime parameter tuning
    • The tuning must match the environment's network and the VM's workload; ill-chosen values can actually lower the migration success rate

Note: if throttled migration and post-copy are both enabled, throttled migration stays disabled.

Live Migration Optimization Experiment

Environment Setup

Note: with multiple nodes, apply the changes on every node.

1. Edit /etc/nova/nova.conf

# uncomment live_migration_flag
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"

# under the [vnc] section, check that vncserver_proxyclient_address is the current node's IP
vncserver_proxyclient_address=<current node IP>

# under the [vnc] section, check that novncproxy_base_url points at the VIP
novncproxy_base_url=http://<VIP>:6080/vnc_auto.html

# restart the nova-compute service
systemctl restart openstack-nova-compute

2. Edit /etc/libvirt/libvirtd.conf

# check listen_tcp
listen_tcp = 1

# check that tcp_port is uncommented
tcp_port = "16509"

# check that listen_addr is the current node's IP
listen_addr = "192.168.10.82"

# check auth_tcp
auth_tcp = "none"

3. Edit /etc/sysconfig/libvirtd

# start libvirtd-tls.socket
systemctl start libvirtd-tls.socket

# start libvirtd-tcp.socket
systemctl start libvirtd-tcp.socket

# mask the socket units
systemctl mask libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket

# modify LIBVIRTD_ARGS to add --listen
LIBVIRTD_ARGS="--timeout 120 --listen"

# restart the libvirtd service
systemctl restart libvirtd

Ways to Run a Live Migration

  • Command line
  • Web UI (Admin --> Compute --> Instances --> Live Migrate Instance)

Running a live migration from the command line:

# list VMs
openstack server list

# show VM details
openstack server show <VM UUID>

# list compute services
openstack compute service list

# check whether the destination host has enough resources for the migration
openstack host show <destination host name>

# run the migration (specifying the destination host)
nova live-migration <VM UUID> <destination host name>

# run the migration (no destination host; the Compute service picks one)
nova live-migration <VM UUID>

# list migration progress records (may come back empty if the migration finishes quickly) (gives the migration ID)
nova server-migration-list <VM UUID>

# show migration progress details
nova server-migration-show <VM UUID> <migration ID>

Throttled Migration (Auto-Converge) Mode

1. Edit /etc/nova/nova.conf

# add under the [libvirt] section
live_migration_permit_auto_converge=true

# restart the nova-compute service
systemctl restart openstack-nova-compute

2. Create a VM, install the stress tool inside it (installation steps omitted), and add cron jobs that run stress commands

# open crontab -e and add the following (without the comments)

# every minute: spawn 4 workers that loop computing sqrt() of rand()-generated numbers to burn CPU
* * * * * stress -c 4 --timeout 40
# every minute: spawn 3 workers, each allocating 300M of memory
* * * * * stress -m 3 --vm-bytes 300M --timeout 40
# every minute: spawn 2 workers repeatedly calling sync() to flush memory to disk, plus 4 workers each writing a 512M temp file in the current directory and then unlink()ing it
* * * * * stress -i 2 -d 4 --hdd-bytes 512M --timeout 40

Note: the stress commands are only a reference; tune them to your environment, and consult the stress documentation if needed.

3. Full log output (this experiment live-migrates a VM named live_migration from controller3 to controller2)

Note: in this environment controller2 and controller3 are combined controller+compute nodes, so do not be surprised that the migration targets a controller node; for this experiment they are used as compute nodes.

2021-06-16 13:59:42.769 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
2021-06-16 13:59:42.811 205830 DEBUG nova.compute.manager [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Triggering sync for uuid ca591761-ff41-4b9a-8574-b772f682f30e _sync_power_states /usr/lib/python3.6/site-packages/nova/compute/manager.py:9603
2021-06-16 13:59:42.812 205830 DEBUG oslo_concurrency.lockutils [-] Lock "ca591761-ff41-4b9a-8574-b772f682f30e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-06-16 13:59:42.813 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
2021-06-16 13:59:42.925 205830 DEBUG oslo_concurrency.lockutils [-] Lock "ca591761-ff41-4b9a-8574-b772f682f30e" released by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.113s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
2021-06-16 13:59:47.038 205830 DEBUG nova.virt.libvirt.driver [req-11c6f1ed-956f-4350-b692-276ada60a3c1 90f8ef71f39d4eb2b5588c17e052cbb3 0b953145f50344a1b3cc0b800ed12681 - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Check if temp file /var/lib/nova/instances/tmpgdoo947e exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:8989
2021-06-16 13:59:47.042 205830 DEBUG nova.virt.libvirt.driver [req-11c6f1ed-956f-4350-b692-276ada60a3c1 90f8ef71f39d4eb2b5588c17e052cbb3 0b953145f50344a1b3cc0b800ed12681 - default default] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:10068
2021-06-16 13:59:47.043 205830 DEBUG nova.compute.manager [req-11c6f1ed-956f-4350-b692-276ada60a3c1 90f8ef71f39d4eb2b5588c17e052cbb3 0b953145f50344a1b3cc0b800ed12681 - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=25600,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=False,filename='tmpgdoo947e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=0.0.0.0,image_type='default',instance_relative_path='ca591761-ff41-4b9a-8574-b772f682f30e',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=<?>,wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.6/site-packages/nova/compute/manager.py:7945
2021-06-16 13:59:48.432 205830 DEBUG nova.compute.manager [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Preparing to wait for external event network-vif-plugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec prepare_for_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:286
2021-06-16 13:59:48.433 205830 DEBUG oslo_concurrency.lockutils [-] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-06-16 13:59:48.433 205830 DEBUG oslo_concurrency.lockutils [-] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" released by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
2021-06-16 13:59:48.433 205830 DEBUG oslo_concurrency.lockutils [-] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:266
2021-06-16 13:59:48.434 205830 DEBUG oslo_concurrency.lockutils [-] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:282
2021-06-16 13:59:54.625 205830 DEBUG nova.compute.manager [req-227326f4-4f76-4ca6-ac85-1d9fb632698b 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Received event network-vif-unplugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec external_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:10382
2021-06-16 13:59:54.627 205830 DEBUG oslo_concurrency.lockutils [req-227326f4-4f76-4ca6-ac85-1d9fb632698b 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-06-16 13:59:54.627 205830 DEBUG oslo_concurrency.lockutils [req-227326f4-4f76-4ca6-ac85-1d9fb632698b 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" released by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
2021-06-16 13:59:54.628 205830 DEBUG nova.compute.manager [req-227326f4-4f76-4ca6-ac85-1d9fb632698b 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] No event matching network-vif-unplugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec in dict_keys([('network-vif-plugged', 'f8f2ad0c-d618-44dc-8566-d051a95a5cec')]) pop_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:335
# received event network-vif-unplugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec (for the instance being migrated)
2021-06-16 13:59:54.628 205830 DEBUG nova.compute.manager [req-227326f4-4f76-4ca6-ac85-1d9fb632698b 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Received event network-vif-unplugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec for instance with task_state migrating. _process_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:10165
# pre_live_migration() prepares the resources the VM needs on the destination host in advance
2021-06-16 13:59:54.632 205830 INFO nova.compute.manager [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Took 6.20 seconds for pre_live_migration on destination host controller2.
# received event
2021-06-16 13:59:55.960 205830 DEBUG nova.compute.manager [req-785a6f8c-952d-4f0f-80d1-82b075d8707a 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Received event network-vif-plugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec external_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:10382
2021-06-16 13:59:55.961 205830 DEBUG oslo_concurrency.lockutils [req-785a6f8c-952d-4f0f-80d1-82b075d8707a 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-06-16 13:59:55.961 205830 DEBUG oslo_concurrency.lockutils [req-785a6f8c-952d-4f0f-80d1-82b075d8707a 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" released by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
# Process the event received above
2021-06-16 13:59:55.962 205830 DEBUG nova.compute.manager [req-785a6f8c-952d-4f0f-80d1-82b075d8707a 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Processing event network-vif-plugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec _process_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:10147
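The `pop_instance_event` / `Received event` lines above show Nova's event rendezvous: a waiter registers interest in an event (here `network-vif-plugged-<port>` before the migration starts, since `wait_for_vif_plugged=True`), and the callback Neutron triggers through the Nova API pops and signals it; an event nobody registered for, like the `network-vif-unplugged` above, produces the "No event matching" line and is dropped. A minimal sketch of that mechanism (an assumption-level simplification of `nova.compute.manager.InstanceEvents`, which in reality also keys events per instance UUID and locks around the registry):

```python
import threading

class InstanceEvents:
    """Simplified per-instance event registry."""
    def __init__(self):
        self._events = {}  # (event_name, tag) -> threading.Event

    def prepare_for_instance_event(self, name, tag):
        # A waiter (e.g. _do_live_migration with wait_for_vif_plugged=True)
        # registers interest before triggering the action that emits the event.
        ev = threading.Event()
        self._events[(name, tag)] = ev
        return ev

    def pop_instance_event(self, name, tag):
        # None means nothing matched; Nova logs "No event matching <name>-<tag>"
        # and simply drops the event.
        return self._events.pop((name, tag), None)

def external_instance_event(events, name, tag):
    # Entry point for events Neutron sends through the Nova API.
    ev = events.pop_instance_event(name, tag)
    if ev is not None:
        ev.set()
    return ev is not None

events = InstanceEvents()
waiter = events.prepare_for_instance_event('network-vif-plugged', 'f8f2ad0c')
external_instance_event(events, 'network-vif-unplugged', 'f8f2ad0c')  # no match, dropped
external_instance_event(events, 'network-vif-plugged', 'f8f2ad0c')    # signals the waiter
```

Registering the waiter *before* plugging the VIF is what makes the handshake race-free: if the event arrives early, the `threading.Event` is already set when the waiter blocks on it.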
# live_migration data: LibvirtLiveMigrateData
2021-06-16 13:59:55.989 205830 DEBUG nova.compute.manager [-] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=25600,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=False,filename='tmpgdoo947e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=0.0.0.0,image_type='default',instance_relative_path='ca591761-ff41-4b9a-8574-b772f682f30e',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(41b84f0d-3008-4dbd-aa46-d58960ad6c86),old_vol_attachment_ids={a46d2d7c-8694-4818-9417-97d1b04b1dce='93018a62-249e-4ddc-9889-d95549d1c51e'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.6/site-packages/nova/compute/manager.py:8282
# Lazy-load 'migration_context' on the instance
2021-06-16 13:59:55.993 205830 DEBUG nova.objects.instance [-] Lazy-loading 'migration_context' on Instance uuid ca591761-ff41-4b9a-8574-b772f682f30e obj_load_attr /usr/lib/python3.6/site-packages/nova/objects/instance.py:1101
# Start monitoring the live migration (the migrated instance is named live-migration)
2021-06-16 13:59:55.995 205830 DEBUG nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Starting monitoring of live migration _live_migration /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9551
# The operation thread is still running (_live_migration_monitor)
2021-06-16 13:59:55.997 205830 DEBUG nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Operation thread is still running _live_migration_monitor /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9352
# Migration not running yet (_live_migration_monitor)
2021-06-16 13:59:55.997 205830 DEBUG nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Migration not running yet _live_migration_monitor /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9361
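The monitor lines above come from a polling loop: while the operation thread drives the actual `migrateToURI` call, `_live_migration_monitor` repeatedly asks libvirt for the job state, raises the allowed max downtime as the configured steps come due, and aborts if `live_migration_completion_timeout` is exceeded. A minimal sketch of that loop under stated assumptions (the control flow is simplified from `nova.virt.libvirt.driver._live_migration_monitor`; `get_job_type` and `set_max_downtime` stand in for the libvirt calls `virDomainGetJobInfo` and `virDomainMigrateSetMaxDowntime`):

```python
import time

# Job states, with the same values as libvirt's VIR_DOMAIN_JOB_* constants.
JOB_NONE, JOB_BOUNDED, JOB_UNBOUNDED, JOB_COMPLETED, JOB_FAILED, JOB_CANCELLED = range(6)

def live_migration_monitor(get_job_type, set_max_downtime, steps,
                           operation_done, completion_timeout=1800):
    """Poll the migration job; return True on success, False on failure."""
    start = time.monotonic()
    applied = None
    while True:
        elapsed = int(time.monotonic() - start)
        job = get_job_type()
        if job == JOB_NONE and not operation_done():
            pass  # "Migration not running yet": keep polling
        elif job in (JOB_NONE, JOB_COMPLETED):
            # Domain gone / job finished after the operation thread completed:
            # "Migration operation has completed".
            return True
        elif job in (JOB_FAILED, JOB_CANCELLED) or elapsed > completion_timeout:
            return False
        elif job == JOB_UNBOUNDED:
            # update_downtime: apply the largest step whose delay has elapsed.
            wanted = max((d for t, d in steps if t <= elapsed), default=None)
            if wanted is not None and wanted != applied:
                set_max_downtime(wanted)  # "Increasing downtime to <n> ms"
                applied = wanted
        time.sleep(0.5)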
# Found the matching serial number: pos=0, serial=a46d2d7c-8694-4818-9417-97d1b04b1dce (_update_volume_xml)
2021-06-16 13:59:56.021 205830 DEBUG nova.virt.libvirt.migration [-] Find same serial number: pos=0, serial=a46d2d7c-8694-4818-9417-97d1b04b1dce _update_volume_xml /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:230
# vif_type details (instance and backend parameters used to interact with neutron)
2021-06-16 13:59:56.022 205830 DEBUG nova.virt.libvirt.vif [-] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2021-06-16T01:44:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='live-migration',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(11),hidden=False,host='controller3',hostname='live-migration',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=11,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2021-06-16T01:46:36Z,launched_on='controller3',locked=False,locked_by=None,memory_mb=1024,metadata={},migration_context=None,new_flavor=None,node='controller3',numa_topology=None,old_flavor=None,os_type='linux',pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0b953145f50344a1b3cc0b800ed12681',ramdisk_id='',reservation_id='r-thql0o00',resources=None,root_device_name='/dev/vda',root_gb=20,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='',image_container_format='bare',image_disk_format='qcow2',image_min_disk='20',image_min_ram='0',image_os_type='linux',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2021-06-16T03:21:55Z,user_data=None,user_id='90f8ef71f39d4eb2b5588c17e052cbb3',uuid=ca591761-ff41-4b9a-8574-b772f682f30e,vcpu_model=<?>,vcpus=2,vm_mode=None,vm_state='active') vif={"id": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "address": "fa:16:3e:71:ab:4f", "network": {"id": "0116f1ae-9726-4c74-84d3-d779d10467c4", "bridge": "br-int", "label": "network", "subnets": [{"cidr": "192.168.10.0/24", "dns": [], "gateway": {"address": 
"192.168.10.10", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.10.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.10.1"}}], "meta": {"injected": false, "tenant_id": "0b953145f50344a1b3cc0b800ed12681", "mtu": 1500, "physical_network": "external", "tunneled": false}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapf8f2ad0c-d6", "ovs_interfaceid": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "meta": {}} virt_type=qemu get_config /usr/lib/python3.6/site-packages/nova/virt/libvirt/vif.py:549
# Convert the VIF
2021-06-16 13:59:56.023 205830 DEBUG nova.network.os_vif_util [-] Converting VIF {"id": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "address": "fa:16:3e:71:ab:4f", "network": {"id": "0116f1ae-9726-4c74-84d3-d779d10467c4", "bridge": "br-int", "label": "network", "subnets": [{"cidr": "192.168.10.0/24", "dns": [], "gateway": {"address": "192.168.10.10", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.10.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.10.1"}}], "meta": {"injected": false, "tenant_id": "0b953145f50344a1b3cc0b800ed12681", "mtu": 1500, "physical_network": "external", "tunneled": false}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapf8f2ad0c-d6", "ovs_interfaceid": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "meta": {}} nova_to_osvif_vif /usr/lib/python3.6/site-packages/nova/network/os_vif_util.py:501
# Converted object: VIFBridge
2021-06-16 13:59:56.024 205830 DEBUG nova.network.os_vif_util [-] Converted object VIFBridge(active=True,address=fa:16:3e:71:ab:4f,bridge_name='qbrf8f2ad0c-d6',has_traffic_filtering=True,id=f8f2ad0c-d618-44dc-8566-d051a95a5cec,network=Network(0116f1ae-9726-4c74-84d3-d779d10467c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8f2ad0c-d6') nova_to_osvif_vif /usr/lib/python3.6/site-packages/nova/network/os_vif_util.py:538
# Update the guest XML with the vif config: <interface type="bridge">
2021-06-16 13:59:56.025 205830 DEBUG nova.virt.libvirt.migration [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Updating guest XML with vif config: <interface type="bridge">
# About to invoke the migrate API (_live_migration_operation)
2021-06-16 13:59:56.025 205830 DEBUG nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] About to invoke the migrate API _live_migration_operation /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9183
# Compute downtime_steps from the configured downtime options; the max downtime of each step increases until it reaches the user-configured maximum tolerable downtime (current downtime is None, 0 seconds elapsed; the step list is in the log line below)
2021-06-16 13:59:56.500 205830 DEBUG nova.virt.libvirt.migration [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:499
# Increase downtime to 50 ms after 0 sec elapsed
2021-06-16 13:59:56.501 205830 INFO nova.virt.libvirt.migration [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Increasing downtime to 50 ms after 0 sec elapsed time
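The step list in the log can be reproduced from the three config options. A sketch of the schedule behind `update_downtime` (simplified from Nova's downtime-steps logic; note that in the real code the per-step delay is additionally scaled by the guest data size in GiB, so `live_migration_downtime_delay` is effectively "seconds per GiB"):

```python
def downtime_steps(max_downtime, steps, delay):
    """Yield (elapsed_seconds_threshold, max_downtime_ms) pairs."""
    base = max_downtime // steps
    offset = (max_downtime - base) // steps
    for i in range(steps + 1):
        yield (delay * i, base + offset * i)

# live_migration_downtime=500, live_migration_downtime_steps=10,
# live_migration_downtime_delay=150 reproduces the list in the log:
# [(0, 50), (150, 95), (300, 140), ..., (1500, 500)]
print(list(downtime_steps(500, 10, 150)))
```

So the monitor starts by allowing only 50 ms of downtime and, every 150 s, permits 45 ms more, only reaching the full 500 ms after 1500 s; this is exactly the "service interruption first" vs "speed first" trade-off the tuning section describes.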
# Migration running for 0 secs: memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0)
2021-06-16 13:59:56.700 205830 INFO nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
# Receive event network-changed-f8f2ad0c-d618-44dc-8566-d051a95a5cec
2021-06-16 13:59:56.705 205830 DEBUG nova.compute.manager [req-fc496424-f7b0-48f1-bef4-83073d355a8d 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Received event network-changed-f8f2ad0c-d618-44dc-8566-d051a95a5cec external_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:10382
# Refresh the instance network info cache due to event network-changed-f8f2ad0c-d618-44dc-8566-d051a95a5cec
2021-06-16 13:59:56.706 205830 DEBUG nova.compute.manager [req-fc496424-f7b0-48f1-bef4-83073d355a8d 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Refreshing instance network info cache due to event network-changed-f8f2ad0c-d618-44dc-8566-d051a95a5cec. external_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:10386
2021-06-16 13:59:56.706 205830 DEBUG oslo_concurrency.lockutils [req-fc496424-f7b0-48f1-bef4-83073d355a8d 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Acquired lock "refresh_cache-ca591761-ff41-4b9a-8574-b772f682f30e" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:266
# Refresh the network info cache for port f8f2ad0c-d618-44dc-8566-d051a95a5cec (_get_instance_nw_info)
2021-06-16 13:59:56.707 205830 DEBUG nova.network.neutron [req-fc496424-f7b0-48f1-bef4-83073d355a8d 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Refreshing network info cache for port f8f2ad0c-d618-44dc-8566-d051a95a5cec _get_instance_nw_info /usr/lib/python3.6/site-packages/nova/network/neutron.py:1829
# Run the periodic task that checks instance build time
2021-06-16 13:59:57.130 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Run the periodic task that heals the instance info cache
2021-06-16 13:59:57.131 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Start healing the instance info cache (_heal_instance_info_cache)
2021-06-16 13:59:57.132 205830 DEBUG nova.compute.manager [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9121
# Rebuild the list of instances whose info cache needs healing
2021-06-16 13:59:57.132 205830 DEBUG nova.compute.manager [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9125
# Current downtime 50 ms, 1 s elapsed; evaluate the downtime steps (update_downtime)
2021-06-16 13:59:57.204 205830 DEBUG nova.virt.libvirt.migration [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:499
# Downtime does not need to change (update_downtime)
2021-06-16 13:59:57.205 205830 DEBUG nova.virt.libvirt.migration [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Downtime does not need to change update_downtime /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:511
# NeutronError reported: the resource could not be found
2021-06-16 13:59:57.574 205830 DEBUG neutronclient.v2_0.client [req-fc496424-f7b0-48f1-bef4-83073d355a8d 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Error message: {"NeutronError": {"type": "HTTPNotFound", "message": "The resource could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py:258
# Emit event <LifecycleEvent: 1623823197.6696067, ca591761-ff41-4b9a-8574-b772f682f30e => Paused>
2021-06-16 13:59:57.670 205830 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1623823197.6696067, ca591761-ff41-4b9a-8574-b772f682f30e => Paused> emit_event /usr/lib/python3.6/site-packages/nova/virt/driver.py:1704
# VM paused (lifecycle event)
2021-06-16 13:59:57.671 205830 INFO nova.compute.manager [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] VM Paused (Lifecycle Event)
# Current downtime 50 ms, 1 s elapsed; evaluate the downtime steps (update_downtime)
2021-06-16 13:59:57.708 205830 DEBUG nova.virt.libvirt.migration [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:499
# Downtime does not need to change (update_downtime)
2021-06-16 13:59:57.708 205830 DEBUG nova.virt.libvirt.migration [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Downtime does not need to change update_downtime /usr/lib/python3.6/site-packages/nova/virt/libvirt/migration.py:511
# Check the VM power state
2021-06-16 13:59:57.742 205830 DEBUG nova.compute.manager [req-2cad8f84-0a5f-4b73-a747-39425e426a9a - - - - -] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Checking state _get_power_state /usr/lib/python3.6/site-packages/nova/compute/manager.py:1569
# The migrate API has completed (_live_migration_operation)
2021-06-16 13:59:57.952 205830 DEBUG nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Migrate API has completed _live_migration_operation /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9190
# The migration operation thread has finished (_live_migration_operation)
2021-06-16 13:59:57.952 205830 DEBUG nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Migration operation thread has finished _live_migration_operation /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9239
# The migration operation thread notifies thread_finished
2021-06-16 13:59:57.953 205830 DEBUG nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Migration operation thread notification thread_finished /usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py:9542
# Domain not found: no domain matches the instance uuid; the domain has shut down or gone away
2021-06-16 13:59:58.211 205830 DEBUG nova.virt.libvirt.guest [-] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'ca591761-ff41-4b9a-8574-b772f682f30e' (instance-0000001e) get_job_info /usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py:722
# The migration operation has completed
2021-06-16 13:59:58.213 205830 INFO nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Migration operation has completed
# _post_live_migration() is started on the instance
2021-06-16 13:59:58.214 205830 INFO nova.compute.manager [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] _post_live_migration() is started..
# Get connector properties (call)
2021-06-16 13:59:58.217 205830 DEBUG os_brick.utils [-] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.10.83', 'multipath': False, 'enforce_multipath': True, 'host': 'controller3', 'execute': None}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:153
# Run the privsep helper
2021-06-16 13:59:58.219 205830 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/nova/nova-dist.conf', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp8uynjxiy/privsep.sock']
# Update the VIF entry in the instance network info cache for port f8f2ad0c-d618-44dc-8566-d051a95a5cec
2021-06-16 13:59:58.324 205830 DEBUG nova.network.neutron [req-fc496424-f7b0-48f1-bef4-83073d355a8d 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Updated VIF entry in instance network info cache for port f8f2ad0c-d618-44dc-8566-d051a95a5cec. _build_network_info_model /usr/lib/python3.6/site-packages/nova/network/neutron.py:3122
# Update instance_info_cache with network_info
2021-06-16 13:59:58.326 205830 DEBUG nova.network.neutron [req-fc496424-f7b0-48f1-bef4-83073d355a8d 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Updating instance_info_cache with network_info: [{"id": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "address": "fa:16:3e:71:ab:4f", "network": {"id": "0116f1ae-9726-4c74-84d3-d779d10467c4", "bridge": "br-int", "label": "network", "subnets": [{"cidr": "192.168.10.0/24", "dns": [], "gateway": {"address": "192.168.10.10", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.10.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.10.1"}}], "meta": {"injected": false, "tenant_id": "0b953145f50344a1b3cc0b800ed12681", "mtu": 1500, "physical_network": "external", "tunneled": false}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapf8f2ad0c-d6", "ovs_interfaceid": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "controller2"}, "preserve_on_delete": false, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.6/site-packages/nova/network/neutron.py:117
2021-06-16 13:59:58.362 205830 DEBUG oslo_concurrency.lockutils [req-fc496424-f7b0-48f1-bef4-83073d355a8d 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Releasing lock "refresh_cache-ca591761-ff41-4b9a-8574-b772f682f30e" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:282
2021-06-16 13:59:58.363 205830 DEBUG oslo_concurrency.lockutils [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Acquired lock "refresh_cache-ca591761-ff41-4b9a-8574-b772f682f30e" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:266
# Forcefully refresh the network info cache for the instance
2021-06-16 13:59:58.363 205830 DEBUG nova.network.neutron [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.6/site-packages/nova/network/neutron.py:1826
# Lazy-load 'info_cache' on the instance
2021-06-16 13:59:58.364 205830 DEBUG nova.objects.instance [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Lazy-loading 'info_cache' on Instance uuid ca591761-ff41-4b9a-8574-b772f682f30e obj_load_attr /usr/lib/python3.6/site-packages/nova/objects/instance.py:1101
# NeutronError reported: the resource could not be found
2021-06-16 13:59:58.757 205830 DEBUG neutronclient.v2_0.client [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Error message: {"NeutronError": {"type": "HTTPNotFound", "message": "The resource could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.6/site-packages/neutronclient/v2_0/client.py:258
# Update instance_info_cache with network_info
2021-06-16 13:59:59.235 205830 DEBUG nova.network.neutron [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Updating instance_info_cache with network_info: [{"id": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "address": "fa:16:3e:71:ab:4f", "network": {"id": "0116f1ae-9726-4c74-84d3-d779d10467c4", "bridge": "br-int", "label": "network", "subnets": [{"cidr": "192.168.10.0/24", "dns": [], "gateway": {"address": "192.168.10.10", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.10.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.10.1"}}], "meta": {"injected": false, "tenant_id": "0b953145f50344a1b3cc0b800ed12681", "mtu": 1500, "physical_network": "external", "tunneled": false}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapf8f2ad0c-d6", "ovs_interfaceid": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "controller2"}, "preserve_on_delete": false, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.6/site-packages/nova/network/neutron.py:117
2021-06-16 13:59:59.264 205830 DEBUG oslo_concurrency.lockutils [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Releasing lock "refresh_cache-ca591761-ff41-4b9a-8574-b772f682f30e" lock /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:282
# Updated the network info_cache for the instance
2021-06-16 13:59:59.264 205830 DEBUG nova.compute.manager [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.6/site-packages/nova/compute/manager.py:9193
# Periodic task: poll rebooting instances
2021-06-16 13:59:59.265 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Periodic task: poll rescued instances
2021-06-16 13:59:59.266 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Periodic task: poll unconfirmed resizes
2021-06-16 13:59:59.266 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Periodic task: instance usage audit
2021-06-16 13:59:59.267 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Periodic task: poll volume usage
2021-06-16 13:59:59.267 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Periodic task: reclaim queued deletes
2021-06-16 13:59:59.268 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Skip reclaiming queued deletes (CONF.reclaim_instance_interval <= 0)
2021-06-16 13:59:59.268 205830 DEBUG nova.compute.manager [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.6/site-packages/nova/compute/manager.py:9810
# Periodic task: update available resources
2021-06-16 13:59:59.273 205830 DEBUG oslo_service.periodic_task [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:211
# Audit locally available compute resources for controller3
2021-06-16 13:59:59.347 205830 DEBUG nova.compute.resource_tracker [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Auditing locally available compute resources for controller3 (node: controller3) update_available_resource /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:879
# Print the hypervisor resource view for controller3
2021-06-16 13:59:59.416 205830 DEBUG nova.compute.resource_tracker [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Hypervisor/Node resource view: name=controller3 free_ram=6489MB free_disk=92.7729721069336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1111", "vendor_id": "1234", "numa_node": null, "label": "label_1234_1111", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_12_0", "address": "0000:00:12.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "0001", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_0001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1e_0", "address": "0000:00:1e.0", "product_id": "0001", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_0001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_13_0", "address": "0000:00:13.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", 
"address": "0000:00:03.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1004", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1004", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1032
2021-06-16 13:59:59.417 205830 DEBUG oslo_concurrency.lockutils [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
# Update resource usage from migration 41b84f0d-3008-4dbd-aa46-d58960ad6c86
2021-06-16 13:59:59.547 205830 INFO nova.compute.resource_tracker [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Updating resource usage from migration 41b84f0d-3008-4dbd-aa46-d58960ad6c86
# Migration 41b84f0d-3008-4dbd-aa46-d58960ad6c86 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 2, 'MEMORY_MB': 1024}}
2021-06-16 13:59:59.573 205830 DEBUG nova.compute.resource_tracker [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Migration 41b84f0d-3008-4dbd-aa46-d58960ad6c86 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 2, 'MEMORY_MB': 1024}}. _remove_deleted_instances_allocations /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1544
# Total usable vcpus: 6, total allocated vcpus: 2 (_report_final_resource_view)
2021-06-16 13:59:59.574 205830 DEBUG nova.compute.resource_tracker [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Total usable vcpus: 6, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1048
# Print the final resource view: name=controller3 phys_ram=15873MB used_ram=1536MB phys_disk=98GB used_disk=0GB total_vcpus=6 used_vcpus=2 pci_stats=[] (_report_final_resource_view)
2021-06-16 13:59:59.574 205830 DEBUG nova.compute.resource_tracker [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Final resource view: name=controller3 phys_ram=15873MB used_ram=1536MB phys_disk=98GB used_disk=0GB total_vcpus=6 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1070
# Spawn a new privsep daemon via rootwrap
2021-06-16 13:59:59.580 205830 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
# privsep daemon starting
2021-06-16 13:59:59.443 206092 INFO oslo.privsep.daemon [-] privsep daemon starting
# privsep process running with uid/gid: 0/0
2021-06-16 13:59:59.449 206092 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
# privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2021-06-16 13:59:59.452 206092 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
# privsep daemon running as pid 206092
2021-06-16 13:59:59.453 206092 INFO oslo.privsep.daemon [-] privsep daemon running as pid 206092
# Inventory has not changed in the ProviderTree for resource provider 1257854c-4d3d-41d4-9d53-ef8a47b3243f (update_inventory)
2021-06-16 13:59:59.642 205830 DEBUG nova.compute.provider_tree [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Inventory has not changed in ProviderTree for provider: 1257854c-4d3d-41d4-9d53-ef8a47b3243f update_inventory /usr/lib/python3.6/site-packages/nova/compute/provider_tree.py:181
# Print the resource provider inventory data
2021-06-16 13:59:59.667 205830 DEBUG nova.scheduler.client.report [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Inventory has not changed for provider 1257854c-4d3d-41d4-9d53-ef8a47b3243f based on inventory data: {'VCPU': {'total': 6, 'reserved': 0, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 2.0}, 'MEMORY_MB': {'total': 15873, 'reserved': 512, 'min_unit': 1, 'max_unit': 15873, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 98, 'reserved': 0, 'min_unit': 1, 'max_unit': 98, 'step_size': 1, 'allocation_ratio': 3.0}} set_inventory_for_provider /usr/lib/python3.6/site-packages/nova/scheduler/client/report.py:899
# Compute_service record updated for controller3
2021-06-16 13:59:59.668 205830 DEBUG nova.compute.resource_tracker [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Compute_service record updated for controller3:controller3 _update_available_resource /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:983
2021-06-16 13:59:59.668 205830 DEBUG oslo_concurrency.lockutils [req-a80c1634-1e1f-4628-9a29-fe469882092d - - - - -] Lock "compute_resources" released by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.251s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
# No Fibre Channel support detected on the system
2021-06-16 13:59:59.827 205830 DEBUG os_brick.initiator.linuxfc [-] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python3.6/site-packages/os_brick/initiator/linuxfc.py:157
2021-06-16 13:59:59.827 205830 DEBUG os_brick.initiator.linuxfc [-] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python3.6/site-packages/os_brick/initiator/linuxfc.py:157
# Get connector properties (return)
2021-06-16 13:59:59.836 205830 DEBUG os_brick.utils [-] <== get_connector_properties: return (1616ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.10.83', 'host': 'controller3', 'multipath': False, 'initiator': 'iqn.1994-05.com.redhat:b523fef7fa1', 'do_local_attach': False, 'system uuid': 'a61acc3c-6872-4d4a-8037-02e1b9422821'} trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:177
# Activate the port binding (bind port f8f2ad0c-d618-44dc-8566-d051a95a5cec to controller2)
2021-06-16 14:00:00.846 205830 DEBUG nova.network.neutron [-] Activated binding for port f8f2ad0c-d618-44dc-8566-d051a95a5cec and host controller2. activate_port_binding /usr/lib/python3.6/site-packages/nova/network/neutron.py:1436
# Call driver.post_live_migration_at_source with the original source VIFs from migrate_data
2021-06-16 14:00:00.847 205830 DEBUG nova.compute.manager [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "address": "fa:16:3e:71:ab:4f", "network": {"id": "0116f1ae-9726-4c74-84d3-d779d10467c4", "bridge": "br-int", "label": "network", "subnets": [{"cidr": "192.168.10.0/24", "dns": [], "gateway": {"address": "192.168.10.10", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.10.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.10.1"}}], "meta": {"injected": false, "tenant_id": "0b953145f50344a1b3cc0b800ed12681", "mtu": 1500, "physical_network": "external", "tunneled": false}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapf8f2ad0c-d6", "ovs_interfaceid": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "meta": {}}] _post_live_migration /usr/lib/python3.6/site-packages/nova/compute/manager.py:8598
# Print the vif_type information
2021-06-16 14:00:00.848 205830 DEBUG nova.virt.libvirt.vif [-] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2021-06-16T01:44:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='live-migration',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(11),hidden=False,host='controller3',hostname='live-migration',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=11,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2021-06-16T01:46:36Z,launched_on='controller3',locked=False,locked_by=None,memory_mb=1024,metadata={},migration_context=None,new_flavor=None,node='controller3',numa_topology=None,old_flavor=None,os_type='linux',pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0b953145f50344a1b3cc0b800ed12681',ramdisk_id='',reservation_id='r-thql0o00',resources=None,root_device_name='/dev/vda',root_gb=20,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='',image_container_format='bare',image_disk_format='qcow2',image_min_disk='20',image_min_ram='0',image_os_type='linux',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2021-06-16T05:59:44Z,user_data=None,user_id='90f8ef71f39d4eb2b5588c17e052cbb3',uuid=ca591761-ff41-4b9a-8574-b772f682f30e,vcpu_model=<?>,vcpus=2,vm_mode=None,vm_state='active') vif={"id": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "address": "fa:16:3e:71:ab:4f", "network": {"id": "0116f1ae-9726-4c74-84d3-d779d10467c4", "bridge": "br-int", "label": "network", "subnets": [{"cidr": "192.168.10.0/24", "dns": [], "gateway": {"address": 
"192.168.10.10", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.10.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.10.1"}}], "meta": {"injected": false, "tenant_id": "0b953145f50344a1b3cc0b800ed12681", "mtu": 1500, "physical_network": "external", "tunneled": false}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapf8f2ad0c-d6", "ovs_interfaceid": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "meta": {}} unplug /usr/lib/python3.6/site-packages/nova/virt/libvirt/vif.py:812
# Convert the VIF
2021-06-16 14:00:00.848 205830 DEBUG nova.network.os_vif_util [-] Converting VIF {"id": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "address": "fa:16:3e:71:ab:4f", "network": {"id": "0116f1ae-9726-4c74-84d3-d779d10467c4", "bridge": "br-int", "label": "network", "subnets": [{"cidr": "192.168.10.0/24", "dns": [], "gateway": {"address": "192.168.10.10", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.10.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.10.1"}}], "meta": {"injected": false, "tenant_id": "0b953145f50344a1b3cc0b800ed12681", "mtu": 1500, "physical_network": "external", "tunneled": false}}, "type": "ovs", "details": {"connectivity": "l2", "port_filter": true, "ovs_hybrid_plug": true, "datapath_type": "system", "bridge_name": "br-int"}, "devname": "tapf8f2ad0c-d6", "ovs_interfaceid": "f8f2ad0c-d618-44dc-8566-d051a95a5cec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "meta": {}} nova_to_osvif_vif /usr/lib/python3.6/site-packages/nova/network/os_vif_util.py:501
# Converted object VIFBridge
2021-06-16 14:00:00.850 205830 DEBUG nova.network.os_vif_util [-] Converted object VIFBridge(active=True,address=fa:16:3e:71:ab:4f,bridge_name='qbrf8f2ad0c-d6',has_traffic_filtering=True,id=f8f2ad0c-d618-44dc-8566-d051a95a5cec,network=Network(0116f1ae-9726-4c74-84d3-d779d10467c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8f2ad0c-d6') nova_to_osvif_vif /usr/lib/python3.6/site-packages/nova/network/os_vif_util.py:538
# Unplugging vif VIFBridge
2021-06-16 14:00:00.850 205830 DEBUG os_vif [-] Unplugging vif VIFBridge(active=True,address=fa:16:3e:71:ab:4f,bridge_name='qbrf8f2ad0c-d6',has_traffic_filtering=True,id=f8f2ad0c-d618-44dc-8566-d051a95a5cec,network=Network(0116f1ae-9726-4c74-84d3-d779d10467c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8f2ad0c-d6') unplug /usr/lib/python3.6/site-packages/os_vif/__init__.py:109
# Create the lookup_table indices
2021-06-16 14:00:01.430 205830 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Flow_Table.name autocreate_indices /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:86
2021-06-16 14:00:01.431 205830 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Mirror.name autocreate_indices /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:86
# Create the schema indices
2021-06-16 14:00:01.431 205830 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:104
2021-06-16 14:00:01.432 205830 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:104
2021-06-16 14:00:01.432 205830 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:104
2021-06-16 14:00:01.432 205830 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Manager.target autocreate_indices /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:104
2021-06-16 14:00:01.433 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.6/site-packages/ovs/reconnect.py:488
2021-06-16 14:00:01.434 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLOUT] on fd 29 __log_wakeup /usr/lib64/python3.6/site-packages/ovs/poller.py:263
2021-06-16 14:00:01.434 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.6/site-packages/ovs/reconnect.py:488
2021-06-16 14:00:01.435 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.6/site-packages/ovs/poller.py:263
2021-06-16 14:00:01.437 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.6/site-packages/ovs/poller.py:263
2021-06-16 14:00:01.440 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.6/site-packages/ovs/poller.py:263
2021-06-16 14:00:01.447 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.6/site-packages/ovs/poller.py:263
2021-06-16 14:00:01.447 205830 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(port=qvof8f2ad0c-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:88
2021-06-16 14:00:01.448 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.6/site-packages/ovs/poller.py:263
2021-06-16 14:00:01.450 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.6/site-packages/ovs/poller.py:263
# Successfully unplugged vif VIFBridge
2021-06-16 14:00:01.469 205830 INFO os_vif [-] Successfully unplugged vif VIFBridge(active=True,address=fa:16:3e:71:ab:4f,bridge_name='qbrf8f2ad0c-d6',has_traffic_filtering=True,id=f8f2ad0c-d618-44dc-8566-d051a95a5cec,network=Network(0116f1ae-9726-4c74-84d3-d779d10467c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8f2ad0c-d6')
2021-06-16 14:00:01.470 205830 DEBUG oslo_concurrency.lockutils [-] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-06-16 14:00:01.470 205830 DEBUG oslo_concurrency.lockutils [-] Lock "compute_resources" released by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
# Call driver.cleanup from _post_live_migration
2021-06-16 14:00:01.470 205830 DEBUG nova.compute.manager [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.6/site-packages/nova/compute/manager.py:8620
# Delete the instance files
2021-06-16 14:00:01.471 205830 INFO nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Deleting instance files /var/lib/nova/instances/ca591761-ff41-4b9a-8574-b772f682f30e_del
2021-06-16 14:00:01.471 205830 INFO nova.virt.libvirt.driver [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Deletion of /var/lib/nova/instances/ca591761-ff41-4b9a-8574-b772f682f30e_del complete
2021-06-16 14:00:01.943 205830 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.6/site-packages/ovs/poller.py:263
# Receive the network-vif-unplugged event
2021-06-16 14:00:02.026 205830 DEBUG nova.compute.manager [req-e97da698-6346-4ab2-8f5a-c2d635ffbb82 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Received event network-vif-unplugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec external_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:10382
2021-06-16 14:00:02.027 205830 DEBUG oslo_concurrency.lockutils [req-e97da698-6346-4ab2-8f5a-c2d635ffbb82 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-06-16 14:00:02.027 205830 DEBUG oslo_concurrency.lockutils [req-e97da698-6346-4ab2-8f5a-c2d635ffbb82 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" released by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
# No waiting events found dispatching network-vif-unplugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec
2021-06-16 14:00:02.027 205830 DEBUG nova.compute.manager [req-e97da698-6346-4ab2-8f5a-c2d635ffbb82 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] No waiting events found dispatching network-vif-unplugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec pop_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:324
2021-06-16 14:00:02.028 205830 DEBUG nova.compute.manager [req-e97da698-6346-4ab2-8f5a-c2d635ffbb82 67a46988b6bf4395bb5606c2712d22bf 2c0d465052c147fa9e81b320b7d2ceed - default default] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Received event network-vif-unplugged-f8f2ad0c-d618-44dc-8566-d051a95a5cec for instance with task_state migrating. _process_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:10165
2021-06-16 14:00:03.449 205830 DEBUG oslo_concurrency.lockutils [-] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-06-16 14:00:03.450 205830 DEBUG oslo_concurrency.lockutils [-] Lock "ca591761-ff41-4b9a-8574-b772f682f30e-events" released by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
# Audit locally available compute resources on controller3 and update them
2021-06-16 14:00:03.484 205830 DEBUG nova.compute.resource_tracker [-] Auditing locally available compute resources for controller3 (node: controller3) update_available_resource /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:879
# Print controller3's hypervisor resource view
2021-06-16 14:00:03.539 205830 DEBUG nova.compute.resource_tracker [-] Hypervisor/Node resource view: name=controller3 free_ram=6494MB free_disk=92.77292251586914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1111", "vendor_id": "1234", "numa_node": null, "label": "label_1234_1111", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_12_0", "address": "0000:00:12.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "0001", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_0001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1e_0", "address": "0000:00:1e.0", "product_id": "0001", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_0001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_13_0", "address": "0000:00:13.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1004", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1004", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1032
2021-06-16 14:00:03.539 205830 DEBUG oslo_concurrency.lockutils [-] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
# The migration record refers to another host's instance
2021-06-16 14:00:03.619 205830 DEBUG nova.compute.resource_tracker [-] Migration for instance ca591761-ff41-4b9a-8574-b772f682f30e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:912
2021-06-16 14:00:03.676 205830 DEBUG nova.compute.resource_tracker [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1397
2021-06-16 14:00:03.722 205830 DEBUG nova.compute.resource_tracker [-] Migration 41b84f0d-3008-4dbd-aa46-d58960ad6c86 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 2, 'MEMORY_MB': 1024}}. _remove_deleted_instances_allocations /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1544
2021-06-16 14:00:03.723 205830 DEBUG nova.compute.resource_tracker [-] Total usable vcpus: 6, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1048
2021-06-16 14:00:03.723 205830 DEBUG nova.compute.resource_tracker [-] Final resource view: name=controller3 phys_ram=15873MB used_ram=512MB phys_disk=98GB used_disk=0GB total_vcpus=6 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:1070
2021-06-16 14:00:03.787 205830 DEBUG nova.compute.provider_tree [-] Inventory has not changed in ProviderTree for provider: 1257854c-4d3d-41d4-9d53-ef8a47b3243f update_inventory /usr/lib/python3.6/site-packages/nova/compute/provider_tree.py:181
2021-06-16 14:00:03.813 205830 DEBUG nova.scheduler.client.report [-] Inventory has not changed for provider 1257854c-4d3d-41d4-9d53-ef8a47b3243f based on inventory data: {'VCPU': {'total': 6, 'reserved': 0, 'min_unit': 1, 'max_unit': 6, 'step_size': 1, 'allocation_ratio': 2.0}, 'MEMORY_MB': {'total': 15873, 'reserved': 512, 'min_unit': 1, 'max_unit': 15873, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 98, 'reserved': 0, 'min_unit': 1, 'max_unit': 98, 'step_size': 1, 'allocation_ratio': 3.0}} set_inventory_for_provider /usr/lib/python3.6/site-packages/nova/scheduler/client/report.py:899
# Compute_service record updated for controller3
2021-06-16 14:00:03.814 205830 DEBUG nova.compute.resource_tracker [-] Compute_service record updated for controller3:controller3 _update_available_resource /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:983
2021-06-16 14:00:03.815 205830 DEBUG oslo_concurrency.lockutils [-] Lock "compute_resources" released by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.275s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
# Migration of the instance to controller2 finished successfully
2021-06-16 14:00:03.840 205830 INFO nova.compute.manager [-] [instance: ca591761-ff41-4b9a-8574-b772f682f30e] Migrating instance to controller2 finished successfully.
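The source-host cleanup traced in the logs above follows a fixed order: activate the destination port binding, unplug each VIF from the source's OVS bridge, free PCI allocations, delete the instance files, then re-audit local resources. That sequence can be condensed into a minimal sketch; the function and step names below are illustrative stand-ins, not Nova's actual internal API:

```python
# Sketch of the source-side steps _post_live_migration performs, in the
# order the DEBUG trace above shows them. Names are illustrative only.

def post_live_migration_at_source(instance_uuid, vifs, dest_host):
    """Replay the source-host cleanup sequence seen in the log trace."""
    steps = []
    # 1. Activate the port binding on the destination host (neutron).
    steps.append(f"activate_port_binding:{dest_host}")
    # 2. Unplug each VIF from the source host's bridge (os-vif).
    for vif in vifs:
        steps.append(f"unplug_vif:{vif['id']}")
    # 3. Free PCI device allocations, then delete the instance files.
    steps.append("free_pci_device_allocations")
    steps.append(
        f"delete_instance_files:/var/lib/nova/instances/{instance_uuid}_del")
    # 4. Resource tracker audits local resources and updates placement.
    steps.append("update_available_resource")
    return steps

trace = post_live_migration_at_source(
    "ca591761-ff41-4b9a-8574-b772f682f30e",
    [{"id": "f8f2ad0c-d618-44dc-8566-d051a95a5cec"}],
    "controller2")
print(trace[0])   # activate_port_binding:controller2
```

Reading the log with this skeleton in mind makes it easy to see where a migration stalled: each numbered step maps to one cluster of DEBUG lines above.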

Post-copy mode
1. Edit /etc/nova/nova.conf

# Add under the [libvirt] section
live_migration_permit_post_copy=true

# Restart the nova-compute service
systemctl restart openstack-nova-compute
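When rolling this change out to many compute nodes, the same edit can be scripted. Below is a minimal sketch using Python's configparser; the file path and contents are assumptions, and real nova.conf files can contain duplicate options that strict parsing rejects, hence strict=False. It demonstrates against a throwaway copy rather than the live config:

```python
import configparser
import tempfile

def enable_post_copy(conf_path):
    """Ensure live_migration_permit_post_copy=true under [libvirt]."""
    cfg = configparser.ConfigParser(strict=False)
    cfg.read(conf_path)
    if not cfg.has_section("libvirt"):
        cfg.add_section("libvirt")
    cfg.set("libvirt", "live_migration_permit_post_copy", "true")
    with open(conf_path, "w") as f:
        cfg.write(f)

# Demonstrate on a temporary file standing in for /etc/nova/nova.conf.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("[DEFAULT]\ndebug = true\n")
    path = f.name

enable_post_copy(path)

check = configparser.ConfigParser(strict=False)
check.read(path)
print(check.get("libvirt", "live_migration_permit_post_copy"))  # true
```

After restarting nova-compute, subsequent live migrations can switch to post-copy once the pre-copy phase fails to converge, instead of iterating until the completion timeout aborts the migration.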

References:
https://docs.openstack.org/nova/victoria/admin/configuring-migrations.html
https://docs.openstack.org/nova/victoria/admin/live-migration-usage.html
https://www.cnblogs.com/jmilkfan-fanguiju/p/10589712.html#Nova__260
https://blog.csdn.net/Jmilk/article/details/88721288
https://blog.csdn.net/hxj3315/article/details/111561760
https://www.cnblogs.com/jmilkfan-fanguiju/p/11825024.html
https://is-cloud.blog.csdn.net/article/details/111597147
https://blog.zhaw.ch/icclab/tunneled-hybrid-live-migration/
https://liujiong63.github.io/
