Configuring Kubernetes to Manage Virtual Machines with KubeVirt

 

1. Introduction

KubeVirt extends Kubernetes with the ability to manage virtual machines. Its architecture is described in:

https://github.com/kubevirt/kubevirt/blob/master/docs/architecture.md

Before installing, complete the following preparation.

Install the libvirt and QEMU packages:

yum -y install qemu-kvm libvirt virt-install bridge-utils

Label the k8s nodes with their KubeVirt roles:

kubectl label node k8s-01 kubevirt.io=virt-controller
kubectl label node k8s-02 kubevirt.io=virt-controller
kubectl label node k8s-03 kubevirt.io=virt-api
kubectl label node k8s-04 kubevirt.io=virt-api
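To confirm the labels landed where intended, the assignments can be checked with label selectors (a quick sketch; it assumes the node names used above):

```shell
# Nodes labeled to run virt-controller
kubectl get nodes -l kubevirt.io=virt-controller
# Nodes labeled to run virt-api
kubectl get nodes -l kubevirt.io=virt-api
# Or show the kubevirt.io label as an extra column for all nodes
kubectl get nodes -L kubevirt.io
```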

Check the node capabilities:

kubectl get nodes -o yaml
allocatable:
  cpu: "2"
  devices.kubevirt.io/tun: "110"
  ephemeral-storage: "16415037823"
  hugepages-2Mi: "0"
  memory: 1780264Ki
  pods: "110"
capacity:
  cpu: "2"
  devices.kubevirt.io/tun: "110"
  ephemeral-storage: 17394Mi
  hugepages-2Mi: "0"
  memory: 1882664Ki
  pods: "110"

Check whether the node supports KVM hardware-assisted virtualization:

ls /dev/kvm
ls: cannot access /dev/kvm: No such file or directory
virt-host-validate qemu
QEMU: Checking for hardware virtualization : FAIL (Only emulated CPUs are available, performance will be significantly limited)
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'memory' controller mount-point : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpu' controller mount-point : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller mount-point : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller mount-point : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'devices' controller mount-point : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller mount-point : PASS
WARN (Unknown if this platform has IOMMU support)
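On this host /dev/kvm is absent because the CPU exposes no virtualization extensions. A direct way to check for them is to look at the CPU flags (a sketch; `vmx` is the Intel VT-x flag, `svm` the AMD-V one):

```shell
# Look for hardware virtualization flags in the CPU feature list.
# vmx = Intel VT-x, svm = AMD-V; no match means only software emulation is possible.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization supported"
else
    echo "no hardware virtualization; enable KubeVirt software emulation"
fi
```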

If the node does not support it, first create the configuration that tells KubeVirt to use software emulation:

kubectl create configmap -n kube-system kubevirt-config --from-literal debug.useEmulation=true

Reposted from https://blog.csdn.net/cloudvtech

2. Installing KubeVirt

2.1 Deploy KubeVirt

export VERSION=v0.8.0
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/$VERSION/kubevirt.yaml

2.2 Check the deployment result

virt-api-79c6f4d756-2vwqh 0/1 Running 0 9s
virt-controller-559c749968-7lwrl 0/1 Running 0 9s
virt-handler-2grjk 1/1 Running 0 9s
virt-handler-djfbr 1/1 Running 0 9s
virt-handler-r2pls 1/1 Running 0 9s
virt-handler-rb948 1/1 Running 0 9s

2.3 virt-controller logs

level=info timestamp=2018-10-06T14:46:37.788008Z pos=application.go:179 component=virt-controller msg="DataVolume integration disabled"
level=info timestamp=2018-10-06T14:46:37.790568Z pos=application.go:194 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
level=info timestamp=2018-10-06T14:46:55.398082Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer vmiInformer"
level=info timestamp=2018-10-06T14:46:55.398243Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer"
level=info timestamp=2018-10-06T14:46:55.398271Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer vmiPresetInformer"
level=info timestamp=2018-10-06T14:46:55.398300Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer vmInformer"
level=info timestamp=2018-10-06T14:46:55.398322Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer limitrangeInformer"
level=info timestamp=2018-10-06T14:46:55.398339Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer"
level=info timestamp=2018-10-06T14:46:55.398359Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer vmirsInformer"
level=info timestamp=2018-10-06T14:46:55.398372Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer configMapInformer"
level=info timestamp=2018-10-06T14:46:55.398385Z pos=virtinformers.go:117 component=virt-controller service=http msg="STARTING informer fakeDataVolumeInformer"
level=info timestamp=2018-10-06T14:46:55.398430Z pos=vm.go:113 component=virt-controller service=http msg="Starting VirtualMachine controller."
level=info timestamp=2018-10-06T14:46:55.399309Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller."
level=info timestamp=2018-10-06T14:46:55.399360Z pos=vmi.go:165 component=virt-controller service=http msg="Starting vmi controller."
level=info timestamp=2018-10-06T14:46:55.399393Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller."
level=info timestamp=2018-10-06T14:46:55.399416Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer."

2.4 virt-api logs

level=info timestamp=2018-10-06T14:46:49.538030Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:46:49.550993Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:46:51.623461Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:46:52.758037Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:46:52.975272Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:46:52.976837Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:46:55.124209Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:46:55.278349Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:46:58.350208Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:47:08.771700Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:47:21.831239Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:47:22.746803Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:47:25.637788Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:47:28.376151Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/10/06 14:47:28 http: TLS handshake error from 10.0.2.15:45040: EOF
2018/10/06 14:47:38 http: TLS handshake error from 10.0.2.15:45046: EOF
level=info timestamp=2018-10-06T14:47:38.972610Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/10/06 14:47:48 http: TLS handshake error from 10.0.2.15:45052: EOF
level=info timestamp=2018-10-06T14:47:51.847764Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:47:52.756364Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:47:52.979039Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:47:52.981050Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-10-06T14:47:55.842662Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-10-06T14:47:58.396208Z pos=filter.go:46 component=virt-api remoteAddress=10.244.11.128 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136

2.5 virt-handler logs

level=info timestamp=2018-10-06T14:46:38.210175Z pos=virt-handler.go:87 component=virt-handler hostname=k8s-01
level=info timestamp=2018-10-06T14:46:38.213346Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller."
level=info timestamp=2018-10-06T14:46:38.213831Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains"
level=info timestamp=2018-10-06T14:46:38.213918Z pos=cache.go:121 component=virt-handler msg="List domains from sock /var/run/kubevirt/sockets/41ad405c-c96b-11e8-9d0e-08002763f94a_sock"
level=error timestamp=2018-10-06T14:46:38.213995Z pos=cache.go:124 component=virt-handler reason="dial unix /var/run/kubevirt/sockets/41ad405c-c96b-11e8-9d0e-08002763f94a_sock: connect: connection refused" msg="failed to connect to cmd client socket"
level=info timestamp=2018-10-06T14:46:38.314489Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller"
level=info timestamp=2018-10-06T14:46:38.315207Z pos=device_controller.go:113 component=virt-handler msg="kvm device not found. Waiting."
level=info timestamp=2018-10-06T14:46:38.317441Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started"


3. Launching a VM

3.1 vm.yaml

apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false
  selector:
    matchLabels:
      guest: testvm
  template:
    metadata:
      labels:
        guest: testvm
        kubevirt.io/size: small
    spec:
      domain:
        devices:
          disks:
          - name: registrydisk
            volumeName: registryvolume
            disk:
              bus: virtio
          - name: cloudinitdisk
            volumeName: cloudinitvolume
            disk:
              bus: virtio
      volumes:
      - name: registryvolume
        registryDisk:
          image: kubevirt/cirros-registry-disk-demo
      - name: cloudinitvolume
        cloudInitNoCloud:
          userDataBase64: SGkuXG4=
---
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstancePreset
metadata:
  name: small
spec:
  selector:
    matchLabels:
      kubevirt.io/size: small
  domain:
    resources:
      requests:
        memory: 64M
    devices: {}
3.2 Deploy

kubectl apply -f vm.yaml
virtualmachine.kubevirt.io/testvm created
virtualmachineinstancepreset.kubevirt.io/small created

3.3 Check the deployment result

Since the VM has not been started yet, there are no pods and no vmis; only the vms resource exists:

[root@k8s-install-node kubevirt]# kubectl get pods
No resources found.
[root@k8s-install-node kubevirt]# kubectl get vms
NAME      AGE
testvm    43s
[root@k8s-install-node kubevirt]# kubectl get vmis
No resources found.
kubectl describe vms
Name:         testvm
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubevirt.io/v1alpha2","kind":"VirtualMachine","metadata":{"annotations":{},"name":"testvm","namespace":"default"},"spec":{"running":fals...
API Version:  kubevirt.io/v1alpha2
Kind:         VirtualMachine
Metadata:
  Creation Timestamp:  2018-10-06T14:54:30Z
  Generation:          1
  Resource Version:    630320
  Self Link:           /apis/kubevirt.io/v1alpha2/namespaces/default/virtualmachines/testvm
  UID:                 baf248ad-c977-11e8-9d0e-08002763f94a
Spec:
  Running:  false
  Selector:
    Match Labels:
      Guest:  testvm
  Template:
    Metadata:
      Labels:
        Guest:                 testvm
        Kubevirt . Io / Size:  small
    Spec:
      Domain:
        Devices:
          Disks:
            Disk:
              Bus:        virtio
            Name:         registrydisk
            Volume Name:  registryvolume
            Disk:
              Bus:        virtio
            Name:         cloudinitdisk
            Volume Name:  cloudinitvolume
      Volumes:
        Name:  registryvolume
        Registry Disk:
          Image:  kubevirt/cirros-registry-disk-demo
        Cloud Init No Cloud:
          User Data Base 64:  SGkuXG4=
        Name:  cloudinitvolume
Events:  <none>


4. Starting the Virtual Machine

4.1 Start the virtual machine

Download virtctl:

curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64
chmod +x virtctl

Start the VM:

virtctl start testvm
VM testvm was scheduled to start
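`virtctl start` works by flipping `spec.running` to `true` on the VirtualMachine object. If virtctl is not at hand, the same effect can be achieved with a merge patch (a sketch, assuming the testvm object created above):

```shell
# Equivalent of `virtctl start testvm`: set spec.running to true
kubectl patch virtualmachine testvm --type merge -p '{"spec":{"running":true}}'
# And to stop it again, set it back to false
kubectl patch virtualmachine testvm --type merge -p '{"spec":{"running":false}}'
```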

4.2 Check the status

kubectl get pods
NAME READY STATUS RESTARTS AGE
virt-launcher-testvm-jv6tk 2/2 Running 0 1m
kubectl get vmis
NAME AGE
testvm 1m

 
 
kubectl get vms -o yaml testvm
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kubevirt.io/v1alpha2","kind":"VirtualMachine","metadata":{"annotations":{},"name":"testvm","namespace":"default"},"spec":{"running":false,"selector":{"matchLabels":{"guest":"testvm"}},"template":{"metadata":{"labels":{"guest":"testvm","kubevirt.io/size":"small"}},"spec":{"domain":{"devices":{"disks":[{"disk":{"bus":"virtio"},"name":"registrydisk","volumeName":"registryvolume"},{"disk":{"bus":"virtio"},"name":"cloudinitdisk","volumeName":"cloudinitvolume"}]}},"volumes":[{"name":"registryvolume","registryDisk":{"image":"kubevirt/cirros-registry-disk-demo"}},{"cloudInitNoCloud":{"userDataBase64":"SGkuXG4="},"name":"cloudinitvolume"}]}}}}
  creationTimestamp: 2018-10-06T14:54:30Z
  generation: 1
  name: testvm
  namespace: default
  resourceVersion: "630920"
  selfLink: /apis/kubevirt.io/v1alpha2/namespaces/default/virtualmachines/testvm
  uid: baf248ad-c977-11e8-9d0e-08002763f94a
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        guest: testvm
        kubevirt.io/size: small
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: registrydisk
            volumeName: registryvolume
          - disk:
              bus: virtio
            name: cloudinitdisk
            volumeName: cloudinitvolume
        machine:
          type: ""
        resources: {}
      volumes:
      - name: registryvolume
        registryDisk:
          image: kubevirt/cirros-registry-disk-demo
      - cloudInitNoCloud:
          userDataBase64: SGkuXG4=
        name: cloudinitvolume
status:
  created: true
  ready: true

 


 
 
kubectl get vmis -o yaml
apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha2
  kind: VirtualMachineInstance
  metadata:
    annotations:
      presets.virtualmachines.kubevirt.io/presets-applied: kubevirt.io/v1alpha2
      virtualmachinepreset.kubevirt.io/small: kubevirt.io/v1alpha2
    creationTimestamp: 2018-10-06T14:58:39Z
    finalizers:
    - foregroundDeleteVirtualMachine
    generateName: testvm
    generation: 1
    labels:
      guest: testvm
      kubevirt.io/nodeName: k8s-01
      kubevirt.io/size: small
    name: testvm
    namespace: default
    ownerReferences:
    - apiVersion: kubevirt.io/v1alpha2
      blockOwnerDeletion: true
      controller: true
      kind: VirtualMachine
      name: testvm
      uid: baf248ad-c977-11e8-9d0e-08002763f94a
    resourceVersion: "630919"
    selfLink: /apis/kubevirt.io/v1alpha2/namespaces/default/virtualmachineinstances/testvm
    uid: 4f2f4897-c978-11e8-9d0e-08002763f94a
  spec:
    domain:
      devices:
        disks:
        - disk:
            bus: virtio
          name: registrydisk
          volumeName: registryvolume
        - disk:
            bus: virtio
          name: cloudinitdisk
          volumeName: cloudinitvolume
        interfaces:
        - bridge: {}
          name: default
      features:
        acpi:
          enabled: true
      firmware:
        uuid: 5a9fc181-957e-5c32-9e5a-2de5e9673531
      machine:
        type: q35
      resources:
        requests:
          memory: 64M
    networks:
    - name: default
      pod: {}
    volumes:
    - name: registryvolume
      registryDisk:
        image: kubevirt/cirros-registry-disk-demo
    - cloudInitNoCloud:
        userDataBase64: SGkuXG4=
      name: cloudinitvolume
  status:
    interfaces:
    - ipAddress: 10.244.61.218
    nodeName: k8s-01
    phase: Running
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

4.3 Logs

virt-controller logs

level=info timestamp=2018-10-06T14:58:46.837085Z pos=preset.go:142 component=virt-controller service=http namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-10-06T14:58:46.837188Z pos=preset.go:255 component=virt-controller service=http namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="VirtualMachineInstancePreset small matches VirtualMachineInstance"
level=info timestamp=2018-10-06T14:58:46.837223Z pos=preset.go:171 component=virt-controller service=http namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Marking VirtualMachineInstance as initialized"

virt-handler logs

level=info timestamp=2018-10-06T14:58:59.921866Z pos=vm.go:319 component=virt-handler msg="Processing vmi testvm, existing: true\n"
level=info timestamp=2018-10-06T14:58:59.921906Z pos=vm.go:321 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-10-06T14:58:59.921920Z pos=vm.go:339 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-10-06T14:58:59.921967Z pos=vm.go:426 component=virt-handler namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Processing vmi update"
level=info timestamp=2018-10-06T14:59:00.401528Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED"
level=info timestamp=2018-10-06T14:59:00.401800Z pos=vm.go:731 component=virt-handler namespace=default name=testvm kind=Domain uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Domain is in state Paused reason StartingUp"
level=info timestamp=2018-10-06T14:59:00.484518Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-10-06T14:59:00.486456Z pos=vm.go:762 component=virt-handler namespace=default name=testvm kind=Domain uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-10-06T14:59:00.488657Z pos=vm.go:450 component=virt-handler namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Synchronization loop succeeded."
level=info timestamp=2018-10-06T14:59:00.489459Z pos=vm.go:319 component=virt-handler msg="Processing vmi testvm, existing: true\n"
level=info timestamp=2018-10-06T14:59:00.489483Z pos=vm.go:321 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-10-06T14:59:00.489497Z pos=vm.go:339 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-10-06T14:59:00.489506Z pos=vm.go:341 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-10-06T14:59:00.489534Z pos=vm.go:429 component=virt-handler namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="No update processing required"
level=info timestamp=2018-10-06T14:59:00.500284Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-10-06T14:59:00.507488Z pos=vm.go:450 component=virt-handler namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Synchronization loop succeeded."
level=info timestamp=2018-10-06T14:59:00.507577Z pos=vm.go:319 component=virt-handler msg="Processing vmi testvm, existing: true\n"
level=info timestamp=2018-10-06T14:59:00.507590Z pos=vm.go:321 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-10-06T14:59:00.507602Z pos=vm.go:339 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-10-06T14:59:00.507611Z pos=vm.go:341 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-10-06T14:59:00.507651Z pos=vm.go:426 component=virt-handler namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Processing vmi update"

4.4 Inspecting the virt-launcher pod

Pod information:

kubectl get pod
NAME READY STATUS RESTARTS AGE
virt-launcher-testvm-jv6tk 2/2 Running 0 11m
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned default/virt-launcher-testvm-jv6tk to k8s-01
Normal Pulled 11m kubelet, k8s-01 Container image "kubevirt/cirros-registry-disk-demo" already present on machine
Normal Created 11m kubelet, k8s-01 Created container
Normal Started 11m kubelet, k8s-01 Started container
Normal Pulled 11m kubelet, k8s-01 Container image "docker.io/kubevirt/virt-launcher:v0.8.0" already present on machine
Normal Created 11m kubelet, k8s-01 Created container
Normal Started 11m kubelet, k8s-01 Started container

Pod logs:

2018-10-06 14:58:49.197+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:07.0/config': Read-only file system
2018-10-06 14:58:49.197+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:08.0/config': Read-only file system
2018-10-06 14:58:49.197+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:0d.0/config': Read-only file system
2018-10-06 14:58:51.058+0000: 47: error : virCommandWait:2600 : internal error: Child process (/usr/sbin/dmidecode -q -t 0,1,2,3,4,17) unexpected exit status 1: /dev/mem: No such file or directory
2018-10-06 14:58:51.069+0000: 47: error : virNodeSuspendSupportsTarget:336 : internal error: Cannot probe for supported suspend types
2018-10-06 14:58:51.069+0000: 47: warning : virQEMUCapsInit:1229 : Failed to get host power management capabilities
level=info timestamp=2018-10-06T14:58:58.295400Z pos=libvirt.go:276 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-10-06T14:58:58.307021Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/default_testvm"
level=info timestamp=2018-10-06T14:58:58.307374Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-10-06T14:58:58.307460Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready"
level=info timestamp=2018-10-06T14:58:59.926926Z pos=converter.go:381 component=virt-launcher msg="Hardware emulation device '/dev/kvm' not present. Using software emulation."
level=info timestamp=2018-10-06T14:58:59.935170Z pos=cloud-init.go:254 component=virt-launcher msg="generated nocloud iso file /var/run/libvirt/kubevirt-ephemeral-disk/cloud-init-data/default/testvm/noCloud.iso"
level=info timestamp=2018-10-06T14:58:59.937885Z pos=converter.go:811 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-10-06T14:58:59.937953Z pos=converter.go:812 component=virt-launcher msg="Found search domains in /etc/resolv.conf: default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-10-06T14:58:59.938724Z pos=dhcp.go:62 component=virt-launcher msg="Starting SingleClientDHCPServer"
level=error timestamp=2018-10-06T14:58:59.940154Z pos=common.go:126 component=virt-launcher msg="updated MAC for interface: eth0 - be:38:5d:23:70:ad"
level=info timestamp=2018-10-06T14:58:59.971683Z pos=manager.go:158 component=virt-launcher namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Domain defined."
level=info timestamp=2018-10-06T14:58:59.972022Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
2018-10-06 14:59:00.006+0000: 32: error : virDBusGetSystemBus:109 : internal error: Unable to get DBus system bus connection: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
2018-10-06 14:59:00.388+0000: 32: error : virCgroupDetect:714 : At least one cgroup controller is required: No such device or address
level=info timestamp=2018-10-06T14:59:00.394568Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID 5a9fc181-957e-5c32-9e5a-2de5e9673531"
level=info timestamp=2018-10-06T14:59:00.394747Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-10-06T14:59:00.398183Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-10-06T14:59:00.402066Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-10-06T14:59:00.470313Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-10-06T14:59:00.478060Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-10-06T14:59:00.484866Z pos=manager.go:189 component=virt-launcher namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Domain started."
level=info timestamp=2018-10-06T14:59:00.485661Z pos=server.go:74 component=virt-launcher namespace=default name=testvm kind= uid=4f2f4897-c978-11e8-9d0e-08002763f94a msg="Synced vmi"
level=info timestamp=2018-10-06T14:59:00.490557Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-10-06T14:59:00.490629Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-10-06T14:59:00.499447Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-10-06T14:59:00.500643Z pos=client.go:145 component=virt-launcher msg="processed event"

在POD内部信息


 
 
  1. [root@testvm /] # ps -ef
  2. UID PID PPID C STIME TTY TIME CMD
  3. root 1 0 0 14:58 ? 00:00:00 /bin/bash /usr/share/kubevirt/virt-launcher/entrypoint.sh --qemu-timeout
  4. root 7 1 0 14:58 ? 00:00:00 virt-launcher --qemu-timeout 5m --name testvm --uid 4f2f4897-c978-11e8-9
  5. root 17 7 0 14:58 ? 00:00:00 /bin/bash /usr/share/kubevirt/virt-launcher/libvirtd.sh
  6. root 18 7 0 14:58 ? 00:00:00 /usr/sbin/virtlogd -f /etc/libvirt/virtlogd.conf
  7. root 31 17 0 14:58 ? 00:00:00 /usr/sbin/libvirtd
  8. qemu 156 1 2 14:58 ? 00:00:23 /usr/bin/qemu-system-x86_64 -name guest=default_testvm,debug-threads=on
  9. root 2829 0 0 15:13 ? 00:00:00 bash
  10. root 2842 1 0 15:13 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1
  11. root 2843 2829 0 15:13 ? 00:00:00 ps -ef
  12. [root@testvm /] # ps -ef | grep 156
qemu 156 1 2 14:58 ? 00:00:23 /usr/bin/qemu-system-x86_64 -name guest=default_testvm,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-default_testvm/master-key.aes -machine pc-q35-2.12,accel=tcg,usb=off,dump-guest-core=off -cpu EPYC,acpi=on,ss=on,hypervisor=on,erms=on,mpx=on,pcommit=on,clwb=on,pku=on,ospke=on,la57=on,3dnowext=on,3dnow=on,vme=off,fma=off,avx=off,f16c=off,rdrand=off,avx2=off,rdseed=off,sha-ni=off,xsavec=off,fxsr_opt=off,misalignsse=off,3dnowprefetch=off,osvw=off -m 62 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 5a9fc181-957e-5c32-9e5a-2de5e9673531 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-default_testvm/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 -drive file=/var/run/libvirt/kubevirt-ephemeral-disk/registry-disk-data/default/testvm/disk_registryvolume/disk-image.raw,format=raw,if=none,id=drive-ua-registrydisk -device virtio-blk-pci,scsi=off,bus=pci.2,addr=0x0,drive=drive-ua-registrydisk,id=ua-registrydisk,bootindex=1 -drive file=/var/run/libvirt/kubevirt-ephemeral-disk/cloud-init-data/default/testvm/noCloud.iso,format=raw,if=none,id=drive-ua-cloudinitdisk -device virtio-blk-pci,scsi=off,bus=pci.3,addr=0x0,drive=drive-ua-cloudinitdisk,id=ua-cloudinitdisk -netdev tap,fd=23,id=hostua-default -device virtio-net-pci,netdev=hostua-default,id=ua-default,mac=be:38:5d:65:04:dd,bus=pci.1,addr=0x0 -chardev socket,id=charserial0,path=/var/run/kubevirt-private/4f2f4897-c978-11e8-9d0e-08002763f94a/virt-serial0,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -vnc vnc=unix:/var/run/kubevirt-private/4f2f4897-c978-11e8-9d0e-08002763f94a/virt-vnc -device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 -msg timestamp=on
root 2885 2829 0 15:13 ? 00:00:00 grep --color=auto 156
[root@testvm /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master k6t-eth0 state UP group default
    link/ether be:38:5d:23:70:ad brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::bc38:5dff:fe23:70ad/64 scope link
       valid_lft forever preferred_lft forever
5: k6t-eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether be:38:5d:23:70:ad brd ff:ff:ff:ff:ff:ff
    inet 169.254.75.10/32 brd 169.254.75.10 scope global k6t-eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bc38:5dff:fe65:4dd/64 scope link
       valid_lft forever preferred_lft forever
6: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master k6t-eth0 state UNKNOWN group default qlen 1000
    link/ether fe:38:5d:65:04:dd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc38:5dff:fe65:4dd/64 scope link
       valid_lft forever preferred_lft forever
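The listing above shows how KubeVirt wires the VM into the pod network: the pod's eth0 is enslaved to the bridge k6t-eth0 together with the tap device vnet0 that backs the VM's virtio NIC. Note the MAC addresses: the VM's NIC uses be:38:5d:65:04:dd (the mac= in the qemu command line), while vnet0 shows the same address with the first octet replaced by fe. This is a libvirt convention (the tap gets a high first octet so it never becomes the bridge's effective MAC); a minimal sketch of the rule:

```shell
# libvirt convention (assumption stated in the text above): the host-side tap MAC
# is the guest MAC with its first octet replaced by fe.
guest_mac="be:38:5d:65:04:dd"   # MAC assigned to the VM's virtio NIC in the qemu command line
tap_mac="fe:${guest_mac#*:}"    # strip the first octet, prepend fe
echo "$tap_mac"                 # fe:38:5d:65:04:dd — matches vnet0 in the listing
```
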

Process listing on the k8s node:
root 19315 19298 0 10:58 ? 00:00:00 /bin/bash /usr/share/kubevirt/virt-launcher/entrypoint.sh --qemu-timeout 5m --name testvm --uid 4f2f4897-c978-11e8-9d0e-08002763f94a --namespace default --kubevirt-share-dir /var/run/kubevirt --readiness-file /tmp/healthy --grace-period-seconds 45 --hook-sidecars 0 --use-emulation
root 19334 19315 0 10:58 ? 00:00:00 virt-launcher --qemu-timeout 5m --name testvm --uid 4f2f4897-c978-11e8-9d0e-08002763f94a --namespace default --kubevirt-share-dir /var/run/kubevirt --readiness-file /tmp/healthy --grace-period-seconds 45 --hook-sidecars 0 --use-emulation
root 19354 19334 0 10:58 ? 00:00:00 /bin/bash /usr/share/kubevirt/virt-launcher/libvirtd.sh
root 19355 19334 0 10:58 ? 00:00:00 /usr/sbin/virtlogd -f /etc/libvirt/virtlogd.conf
root 19368 19354 0 10:58 ? 00:00:00 /usr/sbin/libvirtd
root 19607 2 0 10:58 ? 00:00:00 [kworker/1:4]
qemu 19649 19315 2 10:58 ? 00:00:23 /usr/bin/qemu-system-x86_64 -name guest=default_testvm,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-default_testvm/master-key.aes -machine pc-q35-2.12,accel=tcg,usb=off,dump-guest-core=off -cpu EPYC,acpi=on,ss=on,hypervisor=on,erms=on,mpx=on,pcommit=on,clwb=on,pku=on,ospke=on,la57=on,3dnowext=on,3dnow=on,vme=off,fma=off,avx=off,f16c=off,rdrand=off,avx2=off,rdseed=off,sha-ni=off,xsavec=off,fxsr_opt=off,misalignsse=off,3dnowprefetch=off,osvw=off -m 62 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 5a9fc181-957e-5c32-9e5a-2de5e9673531 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-default_testvm/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 -drive file=/var/run/libvirt/kubevirt-ephemeral-disk/registry-disk-data/default/testvm/disk_registryvolume/disk-image.raw,format=raw,if=none,id=drive-ua-registrydisk -device virtio-blk-pci,scsi=off,bus=pci.2,addr=0x0,drive=drive-ua-registrydisk,id=ua-registrydisk,bootindex=1 -drive file=/var/run/libvirt/kubevirt-ephemeral-disk/cloud-init-data/default/testvm/noCloud.iso,format=raw,if=none,id=drive-ua-cloudinitdisk -device virtio-blk-pci,scsi=off,bus=pci.3,addr=0x0,drive=drive-ua-cloudinitdisk,id=ua-cloudinitdisk -netdev tap,fd=23,id=hostua-default -device virtio-net-pci,netdev=hostua-default,id=ua-default,mac=be:38:5d:65:04:dd,bus=pci.1,addr=0x0 -chardev socket,id=charserial0,path=/var/run/kubevirt-private/4f2f4897-c978-11e8-9d0e-08002763f94a/virt-serial0,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -vnc vnc=unix:/var/run/kubevirt-private/4f2f4897-c978-11e8-9d0e-08002763f94a/virt-vnc -device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 -msg timestamp=on
root 20444 2 0 10:59 ? 00:00:00 [kworker/0:1]
root 24979 2 0 11:04 ? 00:00:00 [kworker/1:0]
root 25450 665 0 09:46 ? 00:00:00 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-enp0s3.pid -lf /var/lib/NetworkManager/dhclient-d5c415a1-73dc-47da-afdd-9e44d451e97f-enp0s3.lease -cf /var/lib/NetworkManager/dhclient-enp0s3.conf enp0s3
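The long qemu-system-x86_64 command line is generated by libvirt from the VMI spec, so individual settings can be pulled back out of it with standard tools. A quick sketch (the sample string below is abbreviated from the ps output above):

```shell
# Pull selected flags back out of a qemu command line.
# The sample is abbreviated from the ps output above; on a live node you would
# capture it with something like: ps -eo args | grep [q]emu-system
cmdline='/usr/bin/qemu-system-x86_64 -name guest=default_testvm,debug-threads=on -m 62 -smp 1,sockets=1,cores=1,threads=1 -uuid 5a9fc181-957e-5c32-9e5a-2de5e9673531'
uuid=$(echo "$cmdline" | awk '{for (i = 1; i < NF; i++) if ($i == "-uuid") print $(i + 1)}')
mem=$(echo "$cmdline"  | awk '{for (i = 1; i < NF; i++) if ($i == "-m")    print $(i + 1)}')
echo "uuid=$uuid mem=${mem}MiB"
# → uuid=5a9fc181-957e-5c32-9e5a-2de5e9673531 mem=62MiB
```

The -m 62 value is slightly below the 64M the VMI requests because KubeVirt subtracts some overhead; the -uuid matches the domain's UUID used in the /var/run/kubevirt-private paths.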

4.5 Accessing the VM

virtctl console testvm
Successfully connected to testvm console. The escape sequence is ^]
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
testvm login: cirros
Password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether be:38:5d:65:04:dd brd ff:ff:ff:ff:ff:ff
    inet 10.244.61.218/32 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bc38:5dff:fe65:4dd/64 scope link tentative flags 08
       valid_lft forever preferred_lft forever
$ ps -ef
PID   USER   COMMAND
    1 root   init
    2 root   [kthreadd]
    3 root   [ksoftirqd/0]
    4 root   [kworker/0:0]
    5 root   [kworker/0:0H]
    7 root   [rcu_sched]
    8 root   [rcu_bh]
    9 root   [migration/0]
   10 root   [watchdog/0]
   11 root   [kdevtmpfs]
   12 root   [netns]
   13 root   [perf]
   14 root   [khungtaskd]
   15 root   [writeback]
   16 root   [ksmd]
   17 root   [crypto]
   18 root   [kintegrityd]
   19 root   [bioset]
   20 root   [kblockd]
   21 root   [ata_sff]
   22 root   [md]
   23 root   [devfreq_wq]
   24 root   [kworker/u2:1]
   25 root   [kworker/0:1]
   27 root   [kswapd0]
   28 root   [vmstat]
   29 root   [fsnotify_mark]
   30 root   [ecryptfs-kthrea]
   46 root   [kthrotld]
   47 root   [acpi_thermal_pm]
   48 root   [bioset]
   49 root   [bioset]
   50 root   [bioset]
   51 root   [bioset]
   52 root   [bioset]
   53 root   [bioset]
   54 root   [bioset]
   55 root   [bioset]
   56 root   [bioset]
   57 root   [bioset]
   58 root   [bioset]
   59 root   [bioset]
   60 root   [bioset]
   61 root   [bioset]
   62 root   [bioset]
   63 root   [bioset]
   64 root   [bioset]
   65 root   [bioset]
   66 root   [bioset]
   67 root   [bioset]
   68 root   [bioset]
   69 root   [bioset]
   70 root   [bioset]
   71 root   [bioset]
   72 root   [bioset]
   73 root   [bioset]
   77 root   [ipv6_addrconf]
   79 root   [kworker/u2:2]
   91 root   [deferwq]
   92 root   [charger_manager]
  123 root   [jbd2/vda1-8]
  124 root   [ext4-rsv-conver]
  125 root   [kworker/0:1H]
  152 root   /sbin/syslogd -n
  154 root   /sbin/klogd -n
  181 root   /sbin/acpid
  252 root   udhcpc -p /var/run/udhcpc.eth0.pid -R -n -T 60 -i eth0 -s /sbin/
  280 root   /usr/sbin/dropbear -R
  372 cirros -sh
  373 root   /sbin/getty 115200 tty1
  378 cirros ps -ef

Reposted from https://blog.csdn.net/cloudvtech

五、Starting Another Virtual Machine

5.1 vm1.yaml
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  name: testvm1
spec:
  running: true
  selector:
    matchLabels:
      guest: testvm1
  template:
    metadata:
      labels:
        guest: testvm1
        kubevirt.io/size: big
    spec:
      domain:
        devices:
          disks:
          - name: registrydisk
            volumeName: registryvolume
            disk:
              bus: virtio
          - name: cloudinitdisk
            volumeName: cloudinitvolume
            disk:
              bus: virtio
      volumes:
      - name: registryvolume
        registryDisk:
          image: kubevirt/cirros-registry-disk-demo
      - name: cloudinitvolume
        cloudInitNoCloud:
          userDataBase64: SGkuXG4=
---
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstancePreset
metadata:
  name: big
spec:
  selector:
    matchLabels:
      kubevirt.io/size: big
  domain:
    resources:
      requests:
        memory: 128M
    devices: {}
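Two details of this manifest are worth noting. First, the VirtualMachineInstancePreset named big matches the label kubevirt.io/size: big on the VMI template, so the 128M memory request is merged into testvm1 at creation time. Second, userDataBase64 carries the cloud-init user data as a base64 string; decoding the value used here shows it is just a placeholder (a real VM would normally receive a #cloud-config document):

```shell
# Decode the cloudInitNoCloud user data from vm1.yaml
echo 'SGkuXG4=' | base64 -d
# → the literal five characters: Hi.\n
```

To encode real user data for the field, pipe a #cloud-config file through base64 the same way in reverse (base64 -w0 on GNU coreutils to avoid line wrapping).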

5.2 Checking the Status
[root@k8s-install-node kubevirt]# kubectl get pod -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE
virt-launcher-testvm-jv6tk    2/2     Running   0          33m   10.244.61.218   k8s-01   <none>
virt-launcher-testvm1-dtxfl   2/2     Running   0          3m    10.244.179.39   k8s-02   <none>
[root@k8s-install-node ~]# ssh cirros@10.244.179.39
The authenticity of host '10.244.179.39 (10.244.179.39)' can't be established.
ECDSA key fingerprint is SHA256:OgAtlTnwN/EYYwCPFxWBrDMBaZeoZzDS+P7pEZ6CEwM.
ECDSA key fingerprint is MD5:c3:97:7f:e3:73:11:5d:8a:cd:dc:7e:27:a2:1b:c7:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.244.179.39' (ECDSA) to the list of known hosts.
cirros@10.244.179.39's password:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether b6:91:13:f1:b2:0f brd ff:ff:ff:ff:ff:ff
    inet 10.244.179.39/32 brd 10.255.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b491:13ff:fef1:b20f/64 scope link tentative flags 08
       valid_lft forever preferred_lft forever
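Note why a plain ssh from the install node works at all: each VMI inherits its virt-launcher pod's IP, so both testvm (10.244.61.218) and testvm1 (10.244.179.39) are ordinary addresses in the cluster pod network and are routed across nodes like any pod traffic. A trivial check of that observation, assuming the common flannel-style 10.244.0.0/16 pod CIDR used in this cluster:

```shell
# Hypothetical helper: does an address fall in the assumed 10.244.0.0/16 pod CIDR?
in_pod_cidr() { case "$1" in 10.244.*) echo yes ;; *) echo no ;; esac; }
in_pod_cidr 10.244.61.218   # yes  (testvm, on k8s-01)
in_pod_cidr 10.244.179.39   # yes  (testvm1, on k8s-02)
```
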
