Test OS: Windows Server 2016+
Guest OS:
[GPUs other than A100/A30] Windows Server, Windows 10
[A100/A30 only] Linux, e.g. RHEL 8.x/7.x
Note: A100/A30 support vCS only, for Linux guest VMs only - no Windows guest VM support.
Test tools: vCenter, VNC, FileZilla, BurnIn
1. Install Hyper-V on the SUT
Right-click "This PC" --> "Manage" --> "Add Roles and Features" --> accept the defaults until "Server Roles", then check "Hyper-V" --> "Add Features" --> Next --> Install --> restart once the installation completes successfully.
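The same role installation can also be done from an elevated PowerShell prompt instead of the GUI (this cmdlet is available on Windows Server):

```powershell
# Install the Hyper-V role plus management tools, then reboot.
# Equivalent to the GUI steps above; run from an elevated prompt.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```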
2. Set up the guest VM
2.1 Open Hyper-V Manager
Right-click "This PC" --> "Manage" --> Tools --> Hyper-V Manager
2.2 Add a virtual NIC
Click "Virtual Switch Manager for win-uj6l0muu8vr" --> External --> Create Virtual Switch --> External network: select the network port that has connectivity --> enter a name --> OK
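The external switch can also be created from PowerShell; "ExternalSwitch" and "Ethernet" below are example names - substitute the adapter name that Get-NetAdapter reports on your host:

```powershell
# List physical adapters and find one that is up (has connectivity)
Get-NetAdapter | Where-Object Status -eq 'Up'

# Create an external virtual switch bound to that adapter.
# -AllowManagementOS $true keeps host connectivity through the same NIC.
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```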
2.3 Add a VM
New --> New Virtual Machine Wizard --> Next --> enter a name --> select the memory size (greater than the combined memory of all GPUs, less than the physical machine's memory capacity) --> select the virtual NIC created in 2.2 --> select the VM's disk capacity and the location for storing the VM --> select the installation medium to mount --> Finish
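The wizard steps above correspond roughly to the following New-VM call; the name, sizes, and paths are examples only - size the memory larger than the combined GPU memory but below host RAM:

```powershell
# Create a Generation 2 VM with example values: 64 GB RAM, a 200 GB VHDX,
# and the external switch created in step 2.2.
New-VM -Name "ddatest1" -Generation 2 -MemoryStartupBytes 64GB `
       -NewVHDPath "C:\VMs\ddatest1.vhdx" -NewVHDSizeBytes 200GB `
       -SwitchName "ExternalSwitch"

# Attach the OS installation ISO (example path)
Add-VMDvdDrive -VMName "ddatest1" -Path "C:\ISO\WindowsServer.iso"
```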
2.4 Install the guest OS
The OS installation procedure is the same for a VM as for a physical machine, so it is not detailed here.
3. Open Windows PowerShell as Administrator and execute the commands below:
$pnpdevs = Get-PnpDevice -PresentOnly
$gpudevs = $pnpdevs | Where-Object {$_.Class -like "Display" -and $_.Manufacturer -like "NVIDIA"}
$gpudevs
4. Disable the GPU device(s) on the parent partition:
Disable-PnpDevice -InstanceId $gpudevs.InstanceId -Confirm:$false
5、Uninstall the GPU device from the parent partition using the command:
1)GPU0 : $locationPath0 = ($gpudevs[0] |Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
GPU1: $locationPath1 = ($gpudevs[1] |Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
2)
GPU0 : Dismount-VMHostAssignableDevice -force -LocationPath $locationPath0
GPU1: Dismount-VMHostAssignableDevice -force -LocationPath $locationPath1
Step: $pnp = Get-PnpDevice | Where-Object {$_.Present -eq $true} | Where-Object {$_.Class -eq "Display"}
Step: $pnp
6. Before performing the following operations, install the guest OS and shut down the VM, then set the VM name variable:
$vm = "ddatest1"
Note 1: replace ddatest1 with the name of your virtual machine.
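The shutdown itself can be done from the host; a minimal sketch using the $vm variable defined above:

```powershell
# Cleanly shut down the guest from the host
Stop-VM -Name $vm

# Confirm the VM state is "Off" before assigning the device
(Get-VM -Name $vm).State
```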
7. Assign the GPU device(s) to the VM(s)
Note 2: If you have multiple GPUs, assign one device per VM.
For example:
GPU0: Add-VMAssignableDevice -LocationPath $locationPath0 -VMName $vm (first VM)
GPU1: Add-VMAssignableDevice -LocationPath $locationPath1 -VMName $vm (second VM)
etc.
8. By default, each VM starts with 128 MB of low MMIO space and 512 MB of high MMIO space allocated to it, but a device might require more, or you may pass through multiple devices whose combined requirements exceed these values. Changing MMIO space is straightforward: run the following PowerShell commands for each virtual machine:
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vm
Set-VM -HighMemoryMappedIoSpace XGb -VMName $vm
X = double the video card's memory; e.g. for a V100 32GB, X = 64.
9. Edit the VM and set the "Automatic Stop Action" policy to "Turn off the virtual machine"; this allows the physical device to be passed through to the VM.
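The same setting can be applied from PowerShell instead of the VM settings dialog:

```powershell
# "TurnOff" corresponds to "Turn off the virtual machine" in the UI.
# DDA requires this because a saved-state VM cannot hold a physical device.
Set-VM -Name $vm -AutomaticStopAction TurnOff
```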
10. Start the VM --> upload the driver to the VM --> install the driver on the VM.
11. Run a GPU stress test on all VMs. Verify that GPU power consumption is >= 80% of the GPU's TDP while the stress test is running.
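Inside each guest, nvidia-smi can be used to check the consumption criterion; this assumes the NVIDIA driver from step 10 is installed:

```powershell
# Sample power draw against the board power limit once per second.
# Pass criterion: power.draw >= 80% of power.limit while under load.
nvidia-smi --query-gpu=name,power.draw,power.limit,utilization.gpu --format=csv -l 1
```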
12. To return the NVIDIA GPU graphics adapter to the parent (host) partition:
Step1: Shut down the VM that uses the NVIDIA GPU graphics card.
Step2: $pnpdevs = Get-PnpDevice
Step3: $gpudevs = $pnpdevs | Where-Object {$_.Description -like "*Dismounted"}
Step4: $gpudevs
Step5:
GPU0: $locationPath0=($gpudevs[0]| Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
GPU1: $locationPath1=($gpudevs[1]| Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
Step6:
$vm = "ddatest1"
Note 1: replace ddatest1 with the name of the virtual machine.
GPU0: Remove-VMAssignableDevice -LocationPath $locationPath0 -VMName $vm
GPU1: Remove-VMAssignableDevice -LocationPath $locationPath1 -VMName $vm
Step7:
GPU0: Mount-VMHostAssignableDevice -LocationPath $locationpath0
GPU1: Mount-VMHostAssignableDevice -LocationPath $locationpath1
Step8: Re-query the device list (the list captured in Step 2 is stale after mounting), then filter for the NVIDIA GPUs:
$pnpdevs = Get-PnpDevice -PresentOnly
$gpudevs = $pnpdevs | Where-Object {$_.Class -like "Display" -and $_.Manufacturer -like "NVIDIA"}
Step9: $gpudevs
Step10: Enable-PnpDevice -InstanceId $gpudevs.InstanceId -Confirm:$false
Step11: In the host's Device Manager, verify that all GPUs are available to the host.