[WIP] Some OpenStack bug notes (by quqi99)

Author: Zhang Hua  Published: 2015-05-07
Copyright: this article may be reproduced freely, provided the original source ( http://blog.csdn.net/quqi99 ), the author information and this copyright notice are clearly indicated with a hyperlink.

1, Why the VM cannot ping the tap device it is attached to

One possibility is that the VM's MAC address duplicates another VM's; check with the arp command.
The second possibility is ebtables; check with the ebtables -t nat -L command.
# ebtables -t nat -L
Bridge table: nat
Bridge chain: PREROUTING, entries: 1, policy: ACCEPT
-i tapc8f2fe4d-7f -j libvirt-I-tapc8f2fe4d-7f
Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
Bridge chain: POSTROUTING, entries: 1, policy: ACCEPT
-o tapc8f2fe4d-7f -j libvirt-O-tapc8f2fe4d-7f
Bridge chain: libvirt-I-tapc8f2fe4d-7f, entries: 1, policy: ACCEPT
-j DROP
Bridge chain: libvirt-O-tapc8f2fe4d-7f, entries: 1, policy: ACCEPT
-j DROP
The ebtables rules above are clearly added by the following libvirt nwfilter configuration ($nova/virt/libvirt/firewall.py); the filters live under /etc/libvirt/nwfilter/ and can be listed with virsh nwfilter-list:
<filterref filter='nova-instance-instance-00000010-fa163ea2e6b8'/>
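
The filter itself can be inspected directly with virsh, using the filter name referenced by the XML above:
# list the nwfilters libvirt currently knows about
virsh nwfilter-list
# dump the one referenced by this instance's <filterref>
virsh nwfilter-dumpxml nova-instance-instance-00000010-fa163ea2e6b8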

 

This happens because neutron security groups were not enabled, and nova security groups were not disabled either (by setting firewall_driver = nova.virt.firewall.NoopFirewallDriver in nova.conf).

Nova therefore fell back to its default iptables firewall driver, which happens to be rather limited: it uses libvirt nwfilter to install the rules above, and because of that limitation there is no way to keep the nova security group functionality while disabling nwfilter; see [1], [2].
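
In other words, when neutron security groups are meant to handle the filtering, nova's own firewall has to be disabled explicitly. A minimal nova.conf sketch of that combination, using the option names of that era:
[DEFAULT]
# hand security groups over to neutron ...
security_group_api = neutron
# ... and stop nova from installing its own iptables/nwfilter rules
firewall_driver = nova.virt.firewall.NoopFirewallDriver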

 

[1], https://answers.launchpad.net/nova/+question/156184
[2], https://answers.launchpad.net/nova/+question/234365

 

 

2, On Icehouse the instance gets deleted after a failed nova evacuate
1, First there is this bug: nova evacuate should set recreate=true so that the old instance is not deleted; the patch only landed in releases after Icehouse.
   https://git.openstack.org/cgit/openstack/nova/commit/?id=3de3f1066fa47312b8c3075abf790631034d67a3
2, When nova evacuate moves the instance to another host, binding:host_id is not updated, so the instance's port cannot be created, VirtualInterfaceCreateException is raised and the instance gets deleted. Use vif_plugging_is_fatal = false && vif_plugging_timeout = 10 as a workaround.
   https://answers.launchpad.net/ubuntu/+source/nova/+question/257358
3, The port-binding problem above under nova evacuate still cannot be fixed directly (there is a patch, https://review.openstack.org/#/c/169827/ , but it is not easy to backport to Icehouse). nova migrate can be used as a workaround instead, because nova migrate eventually calls migrate_instance_finish to update the port binding (port_req_body = {'port': {'binding:host_id': migration['dest_compute']}}); see also the binding check after the commands below.
$ nova evacuate --on-shared-storage <instance> <target_host>
$ nova migrate <instance>
$ nova confirm-resize <instance> 
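
To see whether a port is still bound to the old host after an evacuation, and to fix the binding by hand, something like the following works with admin credentials (the port ID is a placeholder and the commands are the pre-OSC neutron CLI of that era, written from memory):
# show which host the port is currently bound to
neutron port-show <port-id> | grep binding:host_id
# force the binding to the target host; normally migrate_instance_finish does this for you
neutron port-update <port-id> --binding:host_id=<target_host>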

Update 2021-04-08 - a customer hit VirtualInterfaceCreateException again. OpenStack had no octavia installed, but the k8s cluster on top was still using it, which made neutron-server error out and fail to start; support then masked the neutron-server service, after which this exception appeared.

The root cause was that enable-local-dhcp-and-metadata on the SR-IOV nodes was unexpectedly true (juju config neutron-openvswitch-sriov enable-local-dhcp-and-metadata). SR-IOV nodes should not run dhcp-agents at all; with enable-local-dhcp-and-metadata=true the dhcp_agent.ini fell back to defaults (Error loading interface driver 'None'), so "openstack network agent list --network $network_uuid" showed a dhcp agent associated with the network, while "openstack port list --network <net-id> --long" showed only the network:router_interface_distributed and network:router_centralized_snat ports and no network:dhcp port.
$ openstack network agent remove network --dhcp 1094154d-edc2-45ab-ba10-89d92dc1e402 ebc3e51c-b776-4f98-9fd6-59ec59c74966
$ openstack network agent remove network --dhcp 5b05ffe2-3724-4ab4-8497-60f158acbbf0 ebc3e51c-b776-4f98-9fd6-59ec59c74966
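
To confirm the state before and after, the commands already mentioned above can be reused (the network UUID is a placeholder; switching the charm option back to false is the implied fix here):
juju config neutron-openvswitch-sriov enable-local-dhcp-and-metadata=false
openstack network agent list --network <network-uuid>
openstack port list --network <network-uuid> --long | grep dhcp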

3, virsh and neutron report different NIC counts after repeatedly deleting and re-creating interfaces

https://review.openstack.org/#/c/130151/8/nova/compute/manager.py

https://git.openstack.org/cgit/openstack/nova/commit/?id=8694c1619d774bb8a6c23ed4c0f33df2084849bc&context=3&ignorews=0&dt=0 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0fb97014689b1b9575cafae88447db7f86ff4292&context=3&ignorews=0&dt=0 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3031adb857993d8196b4c9febca51ac82cf35fd6&context=3&ignorews=0&dt=0 

 

4, 'cinder delete <vol-id>' does nothing, and a cinder-volume service shows as down

Cinder builds its ipc host identifier as host@enabled_backends from the host and enabled_backends options in cinder.conf (e.g. cinder@cinder-ceph). If host is not set, the hostname is used by default (e.g. node1@cinder-ceph). Once node1 dies, the volumes it created still carry node1@cinder-ceph in the database, so 'cinder delete' on them fails. The usual practice is therefore to set the same host value, e.g. cinder, on every cinder node in the HA cluster; since the service is stateless, a 'cinder delete' can then reach any surviving cinder node via cinder@cinder-ceph and still delete the volume. See: http://blog.csdn.net/canxinghen/article/details/40895205
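
A minimal cinder.conf sketch of that convention (the backend name and driver are illustrative, matching the cinder-ceph example above):
[DEFAULT]
# identical on every HA cinder node, so the topic stays cinder@cinder-ceph
# no matter which node originally created the volume
host = cinder
enabled_backends = cinder-ceph

[cinder-ceph]
volume_backend_name = cinder-ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# plus the usual rbd_* options for the ceph backend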

 

5, Cron best practice

*/1 * * * * root timeout -s SIGINT 60 flock -xn /var/lock/my.lock -c '/bin/bash -x /usr/local/bin/test.sh' |logger -p local0.notice
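
The same one-liner, spelled out (comments only, behaviour unchanged):
# */1 * * * * root             run every minute, as root
# timeout -s SIGINT 60         send SIGINT if the job runs longer than 60 seconds
# flock -xn /var/lock/my.lock  take an exclusive lock, exit immediately if a previous run still holds it
# -c '/bin/bash -x ...'        the actual command, traced with bash -x
# | logger -p local0.notice    send the output to syslog instead of cron mail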

 

6, live-migration

libvirt side: qemuDomainMigrateBegin3 -> qemuMigrationBegin -> qemuMigrationBeginPhase -> qemuMigrationBakeCookie -> qemuMigrationCookieAddNetwork -> qemuMigrationCookieNetworkAlloc -> virNetDevOpenvswitchGetMigrateData

nova side:

_migrate_live(nova/api/openstack/compute/contrib/admin_actions.py) -> live_migrate(nova/compute/api.py) -> live_migrate_instance(nova/conductor/api.py) -> migrate_server(nova/conductor/manager.py) -> self._live_migrate -> LiveMigrationTask -> live_migration(nova/compute/manager.py):

  • pre_live_migration(destination host) -> connect_volume(cinder) -> plug_vifs -> setup_networks_on_host -> ensure_filtering_rules_for_instance
  • live_migration(nova/virt/libvirt/driver.py) -> _live_migration -> dom.migrateToURI(libvirt) -> _post_live_migration
  • post_live_migration(nova/virt/libvirt/driver.py)

   On source host:  disconnect_volume(cinder) -> terminate_connection(cinder) -> unfilter_instance(firewall_driver - Releasing security group ingress rule) -> network_migrate_instance_start(just pass) -> migrate_instance_finish(setting binding:host_id)

   On Dest host: post_live_migration_at_destination -> setup_networks_on_host -> migrate_instance_finish(setting binding:host_id) -> (libvirt: define the migrated instance's domain on the destination)
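
To exercise this path from the CLI with the nova client of that era (add --block-migrate only when there is no shared storage; the target host can be omitted to let the scheduler pick one):
$ nova live-migration <instance> <target_host>
$ nova show <instance> | grep OS-EXT-SRV-ATTR:host   # confirm which host it ended up on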


7, Why snapshots are slow for non-admin users (lp: 1786144)

When get_image_location in glance's policy.json is restricted to the admin role, snapshots (nova image-create vm1 mysnapshot) become slow.

The reason is that in /usr/lib/python2.7/dist-packages/nova/compute/manager.py#_snapshot_instance, after context.elevated() promotes the context to admin, the auth_token is neither regenerated nor re-serialized, so the glance side (/usr/lib/python2.7/dist-packages/glance/api/middleware/context.py#process_request) still sees a non-admin context. As a non-admin user, glance (/usr/lib/python2.7/dist-packages/glance/api/v2/images.py) then refuses to return the locations of the images, so nova cannot do a direct_snapshot straight against ceph and has to upload the snapshot through glance instead, which is slow. Code path:

manager.py#_snapshot_instance(context = context.elevated()) -> driver.py#direct_snapshot -> _get_parent_pool -> image_meta = IMAGE_API.get(context, base_image_id,include_locations=True) -> session, image_id = self._get_session_and_image_id(context, id_or_uri) -> glance.get_remote_image_service(context, id_or_uri) -> get_default_image_service() -> GlanceImageServiceV2() -> GlanceClientWrapper()
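
The policy entry in question, shown as two alternative fragments of glance's policy.json for comparison (the trailing annotations are not part of the file):
"get_image_location": "role:admin"    <- admin-only: non-admin snapshots fall back to uploading through glance
"get_image_location": ""              <- permissive: the direct ceph-to-ceph snapshot path works for non-admin users too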
 

8, Why deleting a volume is slow, and why no other volume operations work during that time

Cinder shares rados objects among multiple green threads through a pool, and each rados object in turn starts a local (native) thread to connect to rados. When a volume is being deleted, the periodic task that queries that volume throws an exception; because the exception is raised in that non-blocking local thread, the green thread cannot yield and therefore stops running, and the other green threads then break one by one for the same reason.

9, Analyzing MySQL

#!/bin/bash
PASS=$(juju run --unit mysql/0 'leader-get root-password')

# enable the slow query log; long_query_time=0.0 effectively logs every statement
for i in 0 1 2; do
    juju ssh mysql/${i} "mysql -u root --password=${PASS} -e 'SET GLOBAL slow_query_log_timestamp_always=ON, slow_query_log_use_global_control='all', long_query_time=0.0, slow_query_log=ON;'"
done

# turn the slow query log off again when done collecting
for i in 0 1 2; do
    juju ssh mysql/${i} "mysql -u root --password=${PASS} -e 'SET GLOBAL slow_query_log=OFF;'"
done
for i in 0 1 2; do
    juju ssh mysql/${i} "mysql -u root --password=${PASS} -e 'SET GLOBAL innodb_status_output_locks=ON;'"
done
# every 10 seconds, dump the processlist, global status and innodb status from each node
while true; do
    for i in 0 1 2 ; do
        juju ssh mysql/${i} "mysql -u root --password=${PASS} -e 'show full processlist;'" >mysql-${i}-processlist-`date +%F-%H-%M-%S` 2>&1
        juju ssh mysql/${i} "mysql -u root --password=${PASS} -e 'show global status; show engine innodb status;'" >mysql-${i}-status-`date +%F-%H-%M-%S` 2>&1
    done
    sleep 10
done

10, Analyzing the logs

# trace the network-vif events for one port across the nova and neutron logs
port=__PORT_UUID__
for s in nova neutron; do
    [ -d /var/log/$s ] || continue
    grep -q network-vif /var/log/$s/* || continue
    # collect every log line that mentions this port
    readarray -t out <<< "$(grep -h $port /var/log/$s/*)"
    for line in "${out[@]}"; do
        # keep only the network-vif-* event lines
        res="$(echo $line | grep network-vif)"
        [ -n "$res" ] || continue
        # extract the request id so related lines can be correlated
        req="$(echo $line | sed -r 's/.+(req-[[:alnum:]\-]+)\s+.+/\1/g;t;d')"
        # print timestamp plus request id and strip the noisy module path
        echo $line | sed -r -e "s/([0-9\-]+\s[0-9\:\.]+)\s+.+\]/\1 $req/g;t;d" | sed -r 's,/usr.+,,g' | egrep $port
    done
done

11, A case where heat creates instances slowly:

juju config nova-cloud-controller config-flags='disk_weight_multiplier=0'
juju config mysql wait-timeout=3600
juju config nova-cloud-controller scheduler-default-filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,AggregateCoreFilter,AggregateInstanceExtraSpecsFilter,IoOpsFilter

12, Instance creation fails - Failed to allocate the network

   https://bugs.launchpad.net/cloud-archive/+bug/1763442

# Failed to allocate the network(s), not rescheduling
vif_plugging_is_fatal=false
vif_plugging_timeout=0

13, Slow network

A customer reported that iperf between two VMs on different hosts was slow, while iperf between VMs on the same host was fine.

The iperf capture showed that packets entered OVS and got a vxlan header, but instead of leaving through br-ex they came back to br-int, forming an OVS loop. It turned out they had bridge_mappings = physnet1:br-ex with br-ex attached to bondB, while local_ip was also set to an IP on bondB (100.65.79.1). They should probably split bondB into two VLANs, one for local_ip and one for br-ex, as sketched below.
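
A hypothetical sketch of that split (the VLAN IDs and the /24 prefix are made up; 100.65.79.1 is the local_ip from the text above):
ip link add link bondB name bondB.101 type vlan id 101   # overlay VLAN carrying the vxlan local_ip
ip link add link bondB name bondB.102 type vlan id 102   # provider VLAN uplinking br-ex
ip addr add 100.65.79.1/24 dev bondB.101
ovs-vsctl add-port br-ex bondB.102
# then keep local_ip = 100.65.79.1 and bridge_mappings = physnet1:br-ex in the ovs agent config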

In addition, explicitly_egress_direct=true should be set - https://bugs.launchpad.net/neutron/+bug/1732067

With that, TCP is now fine, but they report that UDP still has occasional problems.

 

 
