Technology Stronghold

  • Pass-through a wired or wireless (Wi-Fi) NIC to a VM using Hyper-V Discrete Device Assignment (DDA) in Windows Server 2016

Hyper-V Discrete Device Assignment (DDA) was developed for attaching (passing through) video adapters to virtual machines. Similar features have existed on other hypervisors for a long time (for example, VMDirectPath I/O passthrough on VMware ESXi) and are used to virtualize applications that can leverage GPU hardware, such as scientific computing or video encoding workloads.

I tested DDA with network adapters. On Windows Server 2016 Technical Preview 5 I attached three network adapters (two wireless and one wired) to a Windows 10 Enterprise (version 1511) VM. As you can see in the screenshot, all interfaces are functional.


If you want to try the same thing, you can use my code. Just change the VM name and the InstanceId, which you can find in Device Manager (devmgmt.msc) or with the Get-PnpDevice cmdlet.

Please read the verbose stream to understand which steps you need to take.
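The original code listing is not reproduced here, so what follows is only a minimal sketch of the same approach. The VM name and InstanceId are placeholders you must replace with your own values:

#Minimal sketch: pass a NIC through to a VM with DDA (run elevated on the Hyper-V host)
$vmName     = 'TestVM'                              #your VM name
$instanceId = '<InstanceId from Get-PnpDevice>'     #your NIC's InstanceId

#The VM must be off and its automatic stop action set to TurnOff
Stop-VM -Name $vmName                               #skip if the VM is already off
Set-VM -Name $vmName -AutomaticStopAction TurnOff

#Disable the device on the host, look up its location path, and dismount it
Disable-PnpDevice -InstanceId $instanceId -Confirm:$false
$locationPath = (Get-PnpDeviceProperty -InstanceId $instanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

#Assign the dismounted device to the VM and start it
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName
Start-VM -Name $vmName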

Rudolf Vesely


Workspace ONE UEM Managing Devices

Device Assignments, Table of Contents

  • Managing Devices with Workspace ONE UEM
  • Filtering Devices in List View
  • Add a Device from List View
  • Certificate Management
  • Compliance Policies List View
  • View Devices Page
  • Compliance Policy Rules and Actions
  • Add a Compliance Policy
  • View Device Assignment, Compliance Policy
  • Compromised Device Detection with Health Attestation
  • Custom Attributes
  • Device Actions
  • Device Details
  • How Do You Enroll Devices in Workspace ONE UEM?
  • Additional Enrollment Restrictions
  • Basic vs. Directory Services Enrollment
  • Bring Your Own Device (BYOD) Enrollment
  • Configure Enrollment Options
  • Denylist and Allowlist Device Registrations
  • Device Registration
  • Enrollment Status
  • Self-Enrollment Versus Device Staging
  • User Enrollment OG Precedence Order
  • Profile Processing
  • Add a Device Profile
  • Profiles and Profile Resources Used in Workflows
  • Device Profile Editing
  • Compliance Profiles
  • Profile Resources
  • Geofence Areas
  • Time Schedules
  • View Device Assignment, Device Profile
  • Device Tags
  • Lookup Values
  • Privacy for BYOD Deployments
  • Make a Time Window and Assign it to Devices
  • Wipe Protection

Using Device Assignments, you can move devices across organization groups (OGs) and user names based on the network Internet protocol (IP) address range or on custom attributes. It is an alternative to organizing content by user groups in Workspace ONE UEM.

Instead of manually moving devices between OGs, you can direct the console to move devices automatically when they connect to a Wi-Fi network that you define. You can also move devices based on custom attribute rules that you define.

A typical use case for device assignments is a user who regularly changes roles and requires specialized profiles and applications for each role.

You must select between implementing User Groups and Device Assignments to move devices since Workspace ONE UEM does not support both functions on the same device.

  • NOTE: Windows devices do not support defining IP address ranges. Therefore, Windows devices must use user groups instead of device assignments for moving devices across OGs automatically. You can also move a device based on attributes in Workspace ONE Intelligence or by using a sensor. For details, see Workspace ONE Intelligence and Creating Sensors for Windows Desktop Devices .

Enable Device Assignments

Before you can move devices across organization groups (OG) and user names based on an Internet protocol (IP) or custom attribute, you must enable device assignments in Workspace ONE UEM. Device assignments can only be configured at a child organization group.

This screenshot shows the Devices & Users, General Advanced system settings, which you can use to enable device assignments.

  • Ensure you are in a customer type organization group.
  • Navigate to Groups & Settings > All Settings > Devices & Users > General > Advanced and select Override or Inherit for the Current Setting according to your needs.
  • Select Enabled in the Device Assignment Rules setting.

Select the management Type from the following.

  • Organization Group By IP Range – Moves the device to a specified OG when the device leaves one Wi-Fi network range and enters another. This move triggers the automatic push of profiles, apps, policies, and products.

  • Organization Group By Custom Attribute – Moves the device to an organization group based on custom attributes. Custom attributes enable administrators to extract specific values from a managed device and return them to the Workspace ONE UEM console. You can also assign the attribute value to devices for use in product provisioning or device lookup values.

  • User name By IP Range – When a device exits one network and enters another, the device changes user names instead of moving to another OG. This user name change triggers the same push of profiles, apps, policies, and products as an OG change does. This option is for customers with a limited ability to create organization groups, providing an alternate way to take advantage of the device assignment feature.

Important: If you want to change the assignment Type on an existing assignment configuration, you must delete all existing defined ranges. Remove IP Range assignments by navigating to Groups & Settings > Groups > Organization Groups > Network Ranges. Remove custom attribute assignments by navigating to Devices > Provisioning > Custom Attributes > Custom Attribute Assignment Rules.

Select the Device Ownership options. Only devices with the selected ownership types are assigned.

  • Corporate – Dedicated
  • Corporate – Shared
  • Employee Owned

You can add a network range by selecting the Click here to create a network range link.

You can alternatively visit this page by navigating to Groups & Settings > Groups > Organization Groups > Network Ranges. The Network Ranges settings selection is only visible if Device Assignments is enabled for the Organization Group you are in when you visit this location.

When selected, the Network Ranges page is displayed.

Select Save once all the options are set.

Define Device Assignment Rule or Network Range

When your device connects to Wi-Fi while managed by Workspace ONE UEM, the device authenticates and automatically installs profiles, apps, policies, and product provisions specific to the OG that you select.

You can also define rules based on custom attributes. When a device enrolls with an assigned attribute, the rule assigns the device to the configured organization group. The device can also be assigned in the case where the device receives a product provision containing a qualifying custom attribute.

Device assignments can only be configured at a child organization group.

This screenshot shows the Network Ranges screen, which is only visible after you enable device assignments.

Navigate to Groups & Settings > Groups > Organization Groups > Network Ranges.

The Network Ranges option is not visible until you enable device assignments, so if you cannot find Network Ranges in the Organization Groups navigation path, see the section above entitled Enable Device Assignments.

To add a single Internet protocol (IP) address range, select Add Network Range. On the Add/Edit Network Range page, complete the following settings and then select Save.

Overlapping network ranges result in the message, "Save Failed, Network Range exists."

If you have several network ranges to add, you can optionally select Batch Import to save time.

  • On the Batch Import page, select the Download template for this batch type link to view and download the bulk import template in CSV format.

Open the CSV file. The CSV file features several columns corresponding to the options on the Add Network Range screen. Enter the organization Group ID in the "OrganisationGroup" column instead of the organization group name.

Note: You can identify the Group ID of any organization group by 1) moving to the OG you want to identify and 2) hovering your pointer over the OG label which displays a popup that contains the Group ID.

This partial screen shot demonstrates how hovering your mouse pointer over the OG label in UEM causes a popup to display containing the group ID for the OG you are in.

Note: A CSV (comma-separated values) file is a plain-text file that stores tabular data (text and numbers). Each line of the file is a data record, and each record consists of one or more fields separated by commas. A CSV file can be opened and edited with any text editor, and also with Microsoft Excel.

When you open the CSV template, notice that sample data has been added to each column. The sample data shows what kind of data is required and the format it must be in; do not stray from that format. Complete the template by filling in each of the required columns for each network range you want to add (an illustrative sample row appears after this procedure).

  • Import the completed template using the Batch Import page.

Select Save .

The screenshot shows the Batch Import screen for the Network Range option.
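As an illustration only, a completed row in the template might look something like the following. Apart from the "OrganisationGroup" column mentioned above, the header names here are assumptions for the sake of the example, so always keep the exact headers that ship in the template you download.

OrganisationGroup,NetworkRangeName,StartIPAddress,EndIPAddress
Denver,Denver Office Wi-Fi,10.20.30.1,10.20.30.254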

Working Hard In IT

My view on IT from the trenches.


Setting up Discrete Device Assignment with a GPU

Introduction

Let’s take a look at setting up Discrete Device Assignment with a GPU. Windows Server 2016 introduces Discrete Device Assignment (DDA). This allows a PCI Express device that supports it to be passed directly through to a virtual machine.

The idea behind this is to gain extra performance. In our case we’ll use one of the four display adapters in our NVIDIA GRID K1 to assign to a VM via DDA. The other three can remain in use with RemoteFX. Perhaps we could even leverage DDA to let GPUs that do not support RemoteFX be used directly by a VM; we’ll see.

As we assign the hardware directly to the VM, we need to install the drivers for that hardware inside that VM, just as you would with real hardware.

I refer you to the starting blog of a series on DDA in Windows 2016:

  • Discrete Device Assignment — Guests and Linux

Here you can find a wealth of extra information. My experimentation with this feature relied heavily on those blogs, and Microsoft provides a script on GitHub to query a host for DDA-capable devices. That was very educational in regard to finding the PowerShell we needed to get DDA to work! Please see A 1st look at Discrete Device Assignment in Hyper-V for the output of this script and how we identified that our NVIDIA GRID K1 card was a DDA-capable candidate.
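Microsoft's survey script is far more thorough, but as a quick first pass on a host, a minimal sketch like the one below (my own, not the Microsoft script) lists the present display adapters together with the PCIe location paths that DDA works with:

#List present display adapters and their PCIe location paths (run elevated on the host)
Get-PnpDevice -PresentOnly -Class Display | ForEach-Object {
    [pscustomobject]@{
        FriendlyName = $_.FriendlyName
        InstanceId   = $_.InstanceId
        LocationPath = (Get-PnpDeviceProperty -InstanceId $_.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
    }
} | Format-Table -AutoSize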

Requirements

There are some conditions the host system needs to meet to even be able to use DDA. The host needs to support Access Control Services (ACS), which enables pass-through of PCI Express devices in a secure manner. The host also needs to support SLAT and Intel VT-d2 or an AMD I/O MMU. This is dependent on UEFI, which is not a big issue; all my W2K12R2 cluster nodes and member servers run UEFI already anyway. All in all, these requirements are covered by modern hardware. The hardware you buy today for Windows Server 2012 R2 meets those requirements when you buy decent enterprise-grade hardware such as the Dell PowerEdge R730 series. That's the model I had available to test with. Nothing about these requirements is shocking or unusual.

A PCI Express device that is used for DDA cannot be used by the host in any way. You'll see we actually dismount it from the host. It also cannot be shared amongst VMs; it's used exclusively by the VM it's assigned to. As you can imagine, this is not a scenario for live migration and VM mobility. This is a major difference between DDA and SR-IOV or virtual Fibre Channel, where live migration is supported in very creative, different ways. Now I'm not saying Microsoft will never be able to combine DDA with live migration, but to the best of my knowledge it's not available today.

The host requirements are also listed here: https://technet.microsoft.com/en-us/library/mt608570.aspx

  • The processor must have either Intel’s Extended Page Table (EPT) or AMD’s Nested Page Table (NPT).

  • The chipset must have interrupt remapping (Intel VT-d2 or any version of AMD I/O MMU), DMA remapping, and Access Control Services (ACS) on its PCI Express root ports.

  • The firmware tables must expose the I/O MMU to the Windows hypervisor. Note that this feature might be turned off in the UEFI or BIOS. For instructions, see the hardware documentation or contact your hardware manufacturer.

You get this technology on premises with Windows Server 2016, for guest virtual machines running Windows Server 2016, Windows 10 (1511 or higher), and Linux distributions that support it. It is also offered on high-end Azure VMs (IaaS). It supports both generation 1 and generation 2 virtual machines, albeit that generation 2 is x64 only, which might be important for certain client VMs. We dumped 32-bit operating systems over a decade ago, so to me this is a non-issue.

For this article I used a Dell PowerEdge R730 and an NVIDIA GRID K1 GPU, running Windows Server 2016 TPv4 with the March 2016 cumulative update, and Windows 10 Insider Build 14295 as the guest.

Microsoft currently supports two device classes for DDA:

  • GPUs (graphics adapters)
  • NVMe (Non-Volatile Memory Express) SSD controllers

Other devices might work but you’re dependent on the hardware vendor for support. Maybe that’s OK for you, maybe it’s not.

Below I describe the steps to get DDA working. There’s also a rough video out on my Vimeo channel: Discrete Device Assignment with a GPU in Windows 2016 TPv4 .

Preparing a Hyper-V host with a GPU for Discrete Device Assignment

First of all, you need a Windows Server 2016 host running Hyper-V. It needs to meet the hardware specifications discussed above, boot from UEFI with VT-d enabled, and you need a PCI Express GPU that can be used for Discrete Device Assignment.

It pays to have the most recent GPU driver installed; for our NVIDIA GRID K1 that was 362.13 at the time of writing.

clip_image001

On the host, when your installation of the GPU and drivers is OK, you'll see four NVIDIA GRID K1 display adapters in Device Manager.

clip_image002

We create a generation 2 VM for this demo. If you reuse a VM that already has a RemoteFX adapter, remove it. You want a VM that only has a Microsoft Hyper-V Video Adapter.

clip_image003

In Hyper-V Manager I also exclude the NVIDIA GRID K1 GPU I'll configure for DDA from being used by RemoteFX. In this showcase we'll use the first one.

clip_image005

OK, we’re all set to start with our DDA setup for an NVIDIA GRID K1 GPU!

Assign the PCI Express GPU to the VM

Prepping the GPU and host

As stated above, to have a GPU assigned to a VM we must make sure that the host no longer has use of it. We do this by dismounting the display adapter, which renders it unavailable to the host. Once that is done we can assign that device to a VM.

Let's walk through this. Tip: run PowerShell or the ISE as an administrator.

We run Get-VMHostAssignableDevice. This returns nothing, as no devices have yet been made available for DDA.

I now want to find my display adapters:

#Grab all the GPUs in the Hyper-V host
$MyDisplays = Get-PnpDevice | Where-Object {$_.Class -eq "Display"}
$MyDisplays | ft -AutoSize

This returns

clip_image007

As you can see, it lists all adapters. Let's limit this to the NVIDIA ones alone.

#We can get all NVIDIA cards in the host by querying for the nvlddmkm
#service, which is the NVIDIA kernel mode driver
$MyNVIDIA = Get-PnpDevice | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"}
$MyNVIDIA | ft -AutoSize

clip_image009

If you have multiple types of NVIDIA cards you might also want to filter on the friendly name. In our case, with only one GPU model, this doesn't filter anything out. What we really want to do is exclude any display adapter that has already been dismounted. For that we use the -PresentOnly parameter.

#We actually only need the NVIDIA GRID K1 cards, so let's filter some more;
#there might be other NVIDIA GPUs, and we might already have dismounted some
#of those GPUs before. For this exercise we want to work with the ones that
#are still mounted; the -PresentOnly parameter does just that.
$MyNVidiaGRIDK1 = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"} |
    Where-Object {$_.FriendlyName -eq "NVIDIA Grid K1"}
$MyNVidiaGRIDK1 | ft -AutoSize

Extra info: you may have already used one of the display adapters for DDA (its status then shows as "Unknown"), like in the screenshot below.

clip_image011

We can filter out any already dismounted device by using the -PresentOnly parameter. As we could have more NVIDIA adapters in the host, potentially different models, we'll also filter on the FriendlyName so we only get the NVIDIA GRID K1 display adapters.

clip_image013

In the example above you see three display adapters, as one of the four on the GPU is already dismounted. The "Unknown" one isn't returned anymore.

Anyway, when we run that query we get an array with the display adapters relevant to us. I'll use the first one (which I excluded from use with RemoteFX). As the array is zero based, this means I disable that display adapter as follows:

Disable-PnpDevice -InstanceId $MyNVidiaGRIDK1[0].InstanceId -Confirm:$false

clip_image015

When you now run the listing again, you'll see:

clip_image017

The disabled adapter now has "Error" as its status. This is the one we will dismount so that the host no longer has access to it. The array is zero based, so we grab the data about that display adapter as follows:

#Grab the data (multi string value) for the display adapter
$DataOfGPUToDDismount = Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths -InstanceId $MyNVidiaGRIDK1[0].InstanceId
$DataOfGPUToDDismount | ft -AutoSize

clip_image019

We grab the location path out of that data (it’s the first value, zero based, in the multi string value).

#Grab the location path out of the data (it's the first value, zero based)
#How do I know: read the MSFT blogs and read the script by MSFT I mentioned earlier.
$locationpath = ($DataOfGPUToDDismount).data[0]
$locationpath | ft -AutoSize

clip_image021

This locationpath is what we need to dismount the display adapter.

#Use this location path to dismount the display adapter
Dismount-VMHostAssignableDevice -LocationPath $locationpath -Force

Once you dismount a display adapter it becomes available for DDA. When we now run the -PresentOnly query again:

clip_image023

As you can see, the dismounted display adapter is no longer present in the display adapters when filtering with -PresentOnly. It's also gone from Device Manager.

clip_image024

Yes, it's gone from Device Manager; only three NVIDIA GRID K1 adapters are left. Do note that the device is dismounted and as such unavailable to the host, but it is still fully functional and can be assigned to a VM. The remaining NVIDIA GRID K1 adapters can still be used with RemoteFX for VMs.

It's not "lost", however. When we adapt our query to look for system devices that have "Dismounted" in the friendly name, we can still get to it (this is needed to restore the GPU to the host later). This means that -PresentOnly has a different outcome depending on the class: the device is no longer returned in the Display class, but it is in the System class.
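As a sketch, that adapted query is essentially the same one used later in this post when returning the GPU to the host:

#Dismounted DDA devices show up in the System class with "Dismounted" in the friendly name
Get-PnpDevice -PresentOnly |
    Where-Object {$_.Class -eq "System" -and $_.FriendlyName -like "*Dismounted*"} |
    ft -AutoSize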

clip_image026

And we can also see it under the System devices node in Device Manager, where it is labeled "PCI Express Graphics Processing Unit - Dismounted".

We now run Get-VMHostAssignableDevice again and see that our dismounted adapter has become available to be assigned via DDA.

clip_image029

This means we are ready to assign the display adapter exclusively to our Windows 10 VM.

Assigning a GPU to a VM via DDA

You need to shut down the VM

Change the automatic stop action for the VM to “turn off”

clip_image031

This is mandatory or you can't assign hardware via DDA; it will throw an error if you forget this.
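If you prefer PowerShell over Hyper-V Manager for this step, a minimal sketch (assuming the demo VM name RFX-WIN10ENT used throughout this post):

#Set the automatic stop action to TurnOff; DDA assignment requires this
Set-VM -Name RFX-WIN10ENT -AutomaticStopAction TurnOff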

I also set my VM configuration as described in https://blogs.technet.microsoft.com/virtualization/2015/11/23/discrete-device-assignment-gpus/

I give it up to 4GB of memory as that’s what this NVIDIA model seems to support. According to the blog the GPUs work better (or only work) if you set -GuestControlledCacheTypes to true.

“GPUs tend to work a lot faster if the processor can run in a mode where bits in video memory can be held in the processor’s cache for a while before they are written to memory, waiting for other writes to the same memory. This is called “write-combining.” In general, this isn’t enabled in Hyper-V VMs. If you want your GPU to work, you’ll probably need to enable it”

#Let's set the memory resources on our generation 2 VM for the GPU
Set-VM RFX-WIN10ENT -GuestControlledCacheTypes $True -LowMemoryMappedIoSpace 2000MB -HighMemoryMappedIoSpace 4000MB

You can query these values with Get-VM RFX-WIN10ENT | fl *
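As a quicker check, a sketch that narrows the output to the three relevant properties (property names as exposed by the Hyper-V PowerShell module; if they differ on your build, fl * shows them all):

#Verify the cache and MMIO space settings on the VM
Get-VM -Name RFX-WIN10ENT |
    Format-List Name, GuestControlledCacheTypes, LowMemoryMappedIoSpace, HighMemoryMappedIoSpace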

We now assign the display adapter to the VM using that same $locationpath

Add-VMAssignableDevice -LocationPath $locationpath -VMName RFX-WIN10ENT

Boot the VM, log in, and go to Device Manager.

clip_image033

We now need to install the device driver for our NVIDIA GRID K1 GPU, basically the one we used on the host.

clip_image035

Once that’s done we can see our NVIDIA GRID K1 in the guest VM. Cool!

clip_image037

You'll need to restart the VM for the hardware change to take effect. And the result after all that hard work is a very nice graphical experience compared to RemoteFX.

clip_image039

What, you don't believe it's using an NVIDIA GPU inside a VM? Open up Performance Monitor (perfmon) in the guest VM and add counters; you'll find the NVIDIA GPU counters and see you have a GRID K1 in there.

clip_image041

Start some GPU-intensive process and watch those counters rise.

image

Remove a GPU from the VM & return it to the host

When you no longer need a GPU for DDA to a VM you can reverse the process to remove it from the VM and return it to the host.

Shut down the VM guest OS that’s currently using the NVIDIA GPU graphics adapter.

In an elevated PowerShell prompt or ISE we grab the locationpath for the dismounted display adapter as follows

$DisMountedDevice = Get-PnpDevice -PresentOnly |
    Where-Object {$_.Class -eq "System" -AND $_.FriendlyName -like "PCI Express Graphics Processing Unit - Dismounted"}
$DisMountedDevice | ft -AutoSize

clip_image045

We only have one GPU that’s dismounted so that’s easy. When there are more display adapters unmounted this can be a bit more confusing. Some documentation might be in order to make sure you use the correct one.

We then grab the locationpath for this device, which is at index 0, as it is an array with one entry (zero based). So in this case we could even leave out the index.

$LocationPathOfDismountedDA = ($DisMountedDevice[0] | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
$LocationPathOfDismountedDA

clip_image047

Using that locationpath we remove the DDA GPU from the VM

#Remove the display adapter from the VM.
Remove-VMAssignableDevice -LocationPath $LocationPathOfDismountedDA -VMName RFX-WIN10ENT

We now mount the display adapter on the host using that same locationpath

#Mount the display adapter again.
Mount-VMHostAssignableDevice -LocationPath $LocationPathOfDismountedDA

We grab the display adapter that's now back: it shows as disabled in Device Manager, with an "Error" status in the Display class of the PnP devices.

#It will now show up in our query for -PresentOnly NVIDIA GRID K1 display adapters
#Its status will be "Error" (not "Unknown")

clip_image049

We grab that first entry to enable the display adapter (or do it in Device Manager):

#Enable the display adapter
Enable-PnpDevice -InstanceId $MyNVidiaGRIDK1[0].InstanceId -Confirm:$false

The GPU is now back and available to the host. When you run Get-VMHostAssignableDevice it won't return this display adapter anymore.

We've enabled the display adapter and it's ready for use by the host or RemoteFX again. Finally, we set the memory resources and configuration for the VM back to their defaults before starting it again (these defaults are the values on a standard VM that has never had a DDA GPU installed; that's where I got them).

#Let's set the memory resources on our VM for the GPU to the defaults
Set-VM RFX-WIN10ENT -GuestControlledCacheTypes $False -LowMemoryMappedIoSpace 256MB -HighMemoryMappedIoSpace 512MB

clip_image053

Now tell me all this wasn’t pure fun!


73 thoughts on "Setting up Discrete Device Assignment with a GPU"

“This is mandatory our you can’t assign hardware via DDA. It will throw an error if you forget this”

Are you actually saying that when DDA is used in a VM any reboot of the Host results in a brute “Power Off” of the VM ? Or can you set this back to shutdown or save after you have assigned the device…?

Nope, you cannot do that. It acts as hardware, both in the positive way (stellar performance for certain use cases) and in the negative way (you lose some capabilities you've become used to with virtualization). Now do note that this is TPv4, a v1 implementation; we'll see where this lands in the future. DDA is only for select use cases and needs where the benefits outweigh the drawbacks, and as it breaks through the virtualization layer it is also only for trusted admin scenarios.

Haha, yes, understood. But suppose you add an NVMe this way and then reboot the host while heavy IO is going on… "Power Off" -> Really??? 🙂 Even if it's real HW, you don't need to turn off or cut power to a real HW system either… Same goes for SR-IOV actually, so it just sounds like it's still in a beta-testing stage for that matter… Put differently: DDA is totally useless if Power Off will be your only choice @RTM…

I would not call that totally useless 🙂 A desktop is not totally useless because it can't save state when you have a brown-out. And you also manage a desktop; for planned events you shut it down. The use case determines what's most important.

Shutdown wasn't an option. Bye-bye CAU in a VDI environment… Or would you go shut down each VM manually? I guess it will get better by the time it RTMs. I reckon MS understands that as well…

Depends on use case. Ideally it comes without any restrictions. Keep the feedback coming. MSFT reads this blog every now and then 🙂 and don’t forget about uservoice https://windowsserver.uservoice.com/forums/295047-general-feedback !

So do you think some of the newer graphics cards that will “support” this type of DDA will be able to expose segments of their hardware? let’s say, an ATI FirePro s7150. It has the capability to serve ~16 users, but today, only one VM can use the entire card.

It’s early days yet and I do hope more information both from MSFT and card vendors will become available in the next months.

Pingback: The Ops Team #018 – “Bruised Banana” | The Ops Team | | php Technologies

Pingback: The Ops Team #018 - "Bruised Banana"

Pingback: Discrete Device Assignment with Storage Controllers - Working Hard In IT

I'm super close on this. I have the GPU assigned (a K620), but when I install the drivers and reboot, Windows is 'corrupt'. It won't boot, and it's not repairable. I can revert to snapshots pre-driver, but that's about it. I've tried this in a Win 8 VM and a Win 10 VM, both generation 2.

I have not seen that. Some issues with Fast Ring Windows 10 releases such as driver issues / errors but not corruption.

I think my issue is due to my video card. I'm testing this with a K620. I'm unclear if the K620 supports Access Control Services. I'm curious: your instructions use the -force flag on the Dismount-VmHostAssignableDevice command. Was the -force required with your GRID card as well? That card would absolutely support Access Control Services; I'm wondering if the -force was included for the card you were using, or for the PCI Express architecture. Thanks again for the article. I'm scrambling to find a card that supports Access Control Services to test further. I'm using the 620 because it does not require 6-pin power (my other Quadro cards do).

Hi, I'm still trying to get the details from MSFT/NVIDIA, but without the force it doesn't work and throws an error. You can always try that. It's very unclear what exactly is supported and what is not, and I've heard (and read) contradicting statements by the vendors involved. Early days yet.

The error is: The operation failed. The manufacturer of this device has not supplied any directives for securing this device while exposing it to a virtual machine. The device should only be exposed to trusted virtual machines. This device is not supported when passed through to a virtual machine.

Hi, I’m wondering if you have any information or experience with using DDA combined with windows server remoteApps technology. I have set up a generation 2 Windows 10 VM with a NVIDIA Grid K2 assigned to it. Remote desktop sessions work perfectly, however my remoteApp sessions occasionally freeze with a dwm.exe appcrash. I’m wondering if this could be something caused by the discrete device assignment? Are remoteApps stable with DDA?

I also used a PowerEdge R730 and a Tesla K80. Everything goes fine following your guide to the letter, until installing the driver on the VM, where I get a Code 12 error, "This device cannot find enough free resources that it can use. (Code 12)", in Device Manager (Problem Status: {Conflicting Address Range} The specified address range conflicts with the address space.)

Any ideas what might be causing this? The driver is the latest version and installed on the host without problems.

Hi, I have kind of the same problem, same error message, but on an IBM x3650 M3 with a GTX 970 (I think a GTX 960 works well). Did you fix it in any way? Thanks in advance =))

Same here with the R730 and a Tesla K80. Just finished up the above including installing Cuda and I get the same Code 12 error. Anyone figure out how to clear this error up?

I have the same problem with an HP DL380p Gen8:

I had the problem in the host system too; there I had to enable "PCI Express 64-Bit BAR Support" in the BIOS. Then the card works in the host system.

But not in the VM.

Nice read. I've been looking around for articles about using passthrough with non-Quadro cards, but haven't been able to find much; yours is actually the best I've read. By that I mean two NVIDIA retail GeForce series cards, one for the host and one for passthrough to a VM. From what I've read I don't see anything to stop it working, so long as the guest card is 700 series or above, since the earlier cards don't have support. Is that a fair assessment?

Hi. I have an error when running Dismount-VmHostAssignableDevice: "The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI settings." What should I check in the BIOS? Or maybe try Uninstall in Device Manager?

Hello, did you find a solution to this problem? I have the same problem on my HP Z600 running Windows Server 2016.

I assigned a K2 GPU to a VM but now I am not able to boot the VM anymore…

I get an error that a device is assigned to the VM and therefore it cannot be booted.

Cluster of 3x Dell R720 on Windows Server 2016; the VM is limited to a single node which has the K2 card (the other two nodes don't have graphics cards yet).

Sure you didn't assign it to 2 VMs by mistake? If both are shut down you can do it…

It looks like it just won't work when the VM is marked as highly available. When I remove this flag and store it on a local HDD of a cluster node, it works.

Do you know if HP m710x with Intel P580 support DDA?

No I don't. I've only used the NVIDIA GRID/Tesla series so far. Ask your HP rep?

Tried to add an NVIDIA Tesla M10 (GRID) card (4x GPU) to 4 different VMs. It worked flawlessly, but after that I could not get all the GPUs back when I tried to remove them from the VMs. After Remove-VMAssignableDevice the GPU disappeared from the VM's Device Manager, but I could not mount it back at the host. When listing, it shows the "System PCI Express Graphics Processing Unit – Dismounted" line with the "Unknown" status. The GPU disappeared from the VM but cannot be mounted and enabled as per your instructions. GPU disappeared? What could possibly have caused this?

I have not run into that issue yet. Sorry.

This is great work and amazing. I have tried with NVIDIA Quadro K2200 and able to use OpenGL for apps I need.

One thing I noticed: the desktop display is attached to the Microsoft Hyper-V Video adapter and dxdiag shows it as the primary video adapter. I am struggling to find out whether the Hyper-V Video adapter can be disabled so the VM can be forced to use the NVIDIA card as the primary video adapter for all display processing. Thoughts?

Well, my personal take on this: it's not removable, and it will function as it does on high-end systems with an onboard GPU and a specialty GPU. Such systems use the high-power GPU only when needed, to save energy (always good, very much so on laptops). But that's maybe not a primary concern. If your app is not being served by the GPU you want it to be served by, you can try to dive into the settings in the control panel / software of the card; NVIDIA allows for this. See if this helps you achieve what you need.

My VM is far from stable with a GPU through DDA (MSI R9 285 Gaming 2GB). Yes, it does work and performance is great, but sometimes the VM locks up and reports a GPU driver issue. I don't get errors that get logged, just reboots or black/blue screens. Sometimes the VM crashes and comes back online during the connection time of a remote connection (uptime reset).

I don't know if it is a problem with Ryzen (1600X), 16 GB, Gigabyte AB350 Gaming 3.

Launching HWiNFO64 within the VM completely locks up the host and the VMs. Outside the VM, no problems.

Still great guide, the only one I could find.

Vendors / MSFT need to state support – working versus supported and this is new territory.

I disabled ULPS this morning to prevent the GPU from idling. The VM stayed online for over 4 hours, but at some point it still goes down. Here are all the error codes of the blue screens -> http://imgur.com/a/vNWuf It seems like a driver issue to me.

When reading "Remove a GPU from the VM & return it to the host" there is a typo:

Where-Object {$_.Class -eq "System" -AND $_.FriendlyName -like "PCI Express Graphics Processing Unit – Dismounted"}

The en dash (–) in the friendly name should be a regular hyphen (-). I got stuck when trying to return the GPU back to the main OS, and this helped.

I see your website formats small -'s as big ones.

Hmm, now it doesn't. Anyway, the en dash should be a hyphen (guess I made a typo myself first).

OK, something weird is going on…

Pingback: MCSA Windows Server 2016 Study Guide – Technology Random Blog

We are running the same setup with Dell 730 and Grid K1, all the setup as you listed works fine and the VM works with the DDA but after a day or so the grid inside the VM reports “Windows has stopped this device because it has reported problems. (Code 43)”

I have read that NVidia only support Grid k2 and above with DDA, so I am wondering if that’s the reason the driver is crashing?

We are running 22.21.13.7012 driver version

Have you seen this occur in your setup

It's a lab setup only nowadays. The K1 is getting a bit old and there are no production installs I work with using it today. Some drivers do have known issues. Perhaps try R367 (370.16), the latest update of the branch that still supports K1/K2 with Windows Server 2016.

Thanks for your quick reply,

Yes, it is an older card. We actually purchased it some time ago for use with a Windows 2012 R2 RDS session host, not knowing that it wouldn't work with RemoteFX through a session host.

We are now hoping to make use of it with Server 2016 RemoteFX, but I don't think this works with a session host either, so we are now testing out DDA.

We installed version 370.12 which seems to install driver version 22.21.13.7012 listed in Device manager.

I will test this newer version and let you know the results

Thanks again.

Did a quick check:

RemoteFX & DDA with 22.21.13.7012 works, and after upgrading to 23.21.13.7016 it still does. I didn't run it for longer, naturally. I have seen error 43 with a RemoteFX VM a few times, but that's immediate and I have never found a good solution other than replacing the VM with a clean one. Good luck.

Hello, where can I read more on how to configure the BIOS, whether or not SR-IOV needs to be enabled, and what else will be needed?

I need help setting up the BIOS on a Supermicro X10DRG-Q motherboard with an NVIDIA Tesla K80 GPU.

I assigned the Tesla K80 video card and it is recognized in the virtual machine, but when I look at dxdiag I see an error.

I have attached a Grid K1 card via DDA to a Windows 10 VM and it shows up fine and installs drivers OK in the VM but the VM still seems to be using the Microsoft Hyper-V adapter for video and rendering (Tested with Heaven Benchmark). GPU Z does not pick up any adapter. When I try to access the Nvidia Control panel I get a message saying “You are not currently using a display attached to an Nvidia GPU” Host is Windows Server 2016 Standard with all updates.

If anyone has any ideas that would help a lot, thanks.

Hi Everybody,

Someone can help me out here? 🙂

I have a "problem" with the VM console after implementing DDA. Installing the drivers on the Hyper-V host, configuring DDA on the host, and assigning a GPU to the VM all work fine. The drivers also install just fine in the VM. But after installing them and rebooting the VM, I can no longer manage the VM through the Hyper-V console and the screen goes black. RDP to the VM works fine. What am I doing wrong here?

My setup is:

Server 2016 Datacenter Hyper-V on an HP ProLiant DL380, NVIDIA Tesla M10, 128 GB, profile: Virtualisation Optimisation.

I have tested version 4.7 NVIDIA GRID (on host and VM) and 6.2 NVIDIA Virtual GPU Software (on host and VM).

Kind regards

Does the GRID K1 need an NVIDIA vGPU license? I'm considering purchasing a K1 on eBay but am concerned that once I install it in my server the functionality of the card will be limited without a license. Is their licensing "honor" based? My intent is to use this card in a DDA configuration. If the functionality is limited I will likely need to return it. Please advise. Thanks!

Nah, that’s an old card pre-licensing era afaik.

Thanks! Looks like I have this installed per the steps in this thread – a BIG THANK YOU! My guest VM (Win 10) sees the K1 assigned to it, but it is not detected by the 3D apps I've tried. Any thoughts on this?

I was reading some other posts on nvidia’s gridforums and the nvidia reps said to stick with the R367 release of drivers (369.49); which I’ve done on both the host and guest VM (I also tried the latest 370.x drivers). Anyway, I launch the CUDA-Z utility from the guest and no CUDA devices found. Cinebench sees the K1 but a OpenGL benchmark test results in 11fps (probably b/c it’s OpenGL and not CUDA). I also launch Blender 2.79b and 2.8 and it does not see any CUDA devices. Any thoughts on what I’m missing here?

I'm afraid there is no CUDA support with DDA.

Thanks for the reply. I did get CUDA to work by simply spinning up a new guest… must have been something corrupt with my initial VM. I also use the latest R367 drivers with no issue (in case anyone else is wondering).

Good to know. Depending on what you read CUDA works with passthrough or is for shared GPU. The 1st is correct it seems. Thx.

Great post, thank you for the info. My situation is similar but slightly different. I’m running a Dell PE T30 (it was a black Friday spur of the moment buy last year) that I’m running windows 10 w/Hyper-V enabled. There are two guests, another Windows 10 which is what I use routinely for my day-to-day life, and a Windows Server 2016 Essentials. This all used to be running on a PE2950 fully loaded running Hyper-V 2012 R2 core…

When moving to the T30 (more as an experiment) I was blown away at how much the little GPU on the T30 improved my Windows 10 remote desktop experience. My only issue: not enough horsepower. It only has two cores and I'm running BlueIris video software, file/print services, and something called PlayOn for TV recording. This overwhelmed the CPU.

So this year I picked up a T130 with Windows 2016 standard with four cores and 8 threads. But, the T130 does not have a GPU, so, I purchased a video card and put it in. Fired it up, and, no GPU for the Hyper-V guests. I had to add the Remote desktop role to the 2016 server to then let Hyper-V use it, and then, yup, I needed an additional license at an additional fee, which I don’t really want to pay if I don’t have to… So my question:

– Is there an EASY way around this so I can use WS2016S as the host and the GPU for both guests but not have to buy a license? I say easy because DDA sounds like it would meet this need (for one guest?), but it also seems like more work than I'd prefer to embark on…

– Do I just use Windows 10 as my host and live with the limitations? It sounds like the only thing I care about is virtualizing GPUs using RemoteFX. But I'm also confused on this, since Windows 10 on the T30 is using the GPU to make my remote experience better. So I know I'm missing some concept here…

Thanks for the help – Ed.

I cannot dismount my GRID K1 as per your instructions. My setup is as follows:

Motherboard: Supermicro X10DAi (SR-IOV enabled)
Processor: Intel Xeon E5-2650 v3
Memory: 128 GB DDR4
GPU: NVIDIA GRID K1

When I try to dismount the card from the host I get the following:

Dismount-VmHostAssignableDevice : The operation failed. The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI settings.
At line:1 char:1
+ Dismount-VmHostAssignableDevice -force -locationpath $locationpath
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Dismount-VMHostAssignableDevice], VirtualizationException
+ FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.DismountVMHostAssignableDevice

Either it's a BIOS (UEFI) configuration issue that can be corrected, or the BIOS (UEFI) does not fully support OS control of the bus. Check the BIOS settings, but normally the vendors should specify what gear is compatible with DDA. If it is not there, a BIOS upgrade might introduce it. But this is a game of dotting the i's and crossing the t's before getting the hardware. Opportunistic lab builds with assembled gear might work, but no guarantees given.

OK, I now have 2 NVIDIA GRID K1 cards installed in an HP DL380p server; 6 GPUs are showing healthy, 2 are showing error code 43. I have tried every variation of driver, BIOS, and firmware, and am at my wits' end. I know these cards are not faulty.

Hi, thanks for the post. I followed all the steps of the article and I get an error when starting the Windows 10 generation 2 VM; I get the following message:

[Content] ‘VMWEBSCAN’ failed to start.

Virtual Pci Express Port (Instance ID 2614F85C-75E4-498F-BD71-721996234446): Failed to Power on with Error ‘A hypervisor feature is not available to the user.’.

[Expanded Information] ‘VMWEBSCAN’ failed to start. (Virtual machine ID B63B2531-4B4D-4863-8E3C-D8A36DC3E7AD)

‘VMWEBSCAN’ Virtual Pci Express Port (Instance ID 2614F85C-75E4-498F-BD71-721996234446): Failed to Power on with Error ‘A hypervisor feature is not available to the user.’ (0xC035001E). (Virtual machine ID B63B2531-4B4D-4863-8E3C-D8A36DC3E7AD)

I am using a PowerEdge R710 gen2 and Nvidia QUADRO P2000 that is supposed to support DDA.

Well, make sure you have the latest BIOS. But it is old hardware, and support for DDA is very specific as to models, hardware, Windows versions, etc.; the range of supported hardware is small. Verify everything: model of CPU, chipset, SR-IOV, VT-d/AMD-Vi, MSI/MSI-X, 64-bit PCI BAR, IRQ remapping. I would not even try with Windows Server 2019; that is only for the Tesla models, not even GRID is supported. Due to the age of the server and the required BIOS support, I'm afraid this might never work, and even if it does it can break at any time. Trial and error. You might get lucky, but it will never be supported and it might break with every update.

Pingback: How-to install a Ubuntu 18.04 LTS guest/virtual machine (VM) running on Windows Server 2019 Hyper-V using Discrete Device Assignment (DDA) attaching a NVIDIA Tesla V100 32Gb on Dell Poweredge R740 – Blog

Any idea if this will work on an iGPU such as Intel's UHD? I can't find anything about it on the net.

Can you add multiple GPUs to the VM?

As long as the OS can handle it sure, multiple GPUs, multiple NVMEs …

Need your advice. We are planning to create a Hyper-V cluster based on two HP DL380 servers. Both servers will have an NVIDIA GPU card inside. The question is whether it's possible to create a Hyper-V cluster based on those two nodes with most VMs highly available, and one VM on each node without HA but with DDA to the GPU. So, if I understand this thread and the comments correctly, I have to store VMs on shared storage as usual, but VMs with DDA I have to store on a local drive of the node, and I have to unflag HA for the VMs with DDA. That's all. Am I right?

Thanks in advance

You can also put them on shared storage but they cannot live migrate. the auto-stop action has to be set to shutdown. Whether you can use local storage depends on the storage array. On S2D having storage, other than the OS, outside of the virtual disks from the storage pool is not recommended. MSFT wants to improve this for DDA but when or if that will be available in vNext is unknown. Having DDA VM’s on shared storage also causes some extra work and planning if you want them to work on another node. Also see https://www.starwindsoftware.com/blog/windows-2016-makes-a-100-in-box-high-performance-vdi-solution-a-realistic-option “Now do note that the DDA devices on other hosts and as such also on other S2D clusters have other IDs and the VMs will need some intervention (removal, cleanup & adding of the GPU) to make them work. This can be prepared and automated, ready to go when needed. When you leverage NVME disks inside a VM the data on there will not be replicated this way. You’ll need to deal with that in another way if needed. Such as a replica of the NVMe in a guest and NVMe disk on a physical node in the stand by S2D cluster. This would need some careful planning and testing. It is possible to store data on a NVMe disk and later assign that disk to a VM. You can even do storage Replication between virtual machines, one does have to make sure the speed & bandwidth to do so is there. What is feasible and will materialize in real live remains to be seen as what I’m discussing here are already niche scenarios. But the beauty of testing & labs is that one can experiments a bit. Homo ludens, playing is how we learn and understand.”

Many thanks for your reply. Very useful. And what about GPU virtualization (GPU-PV)? Just as an idea: install a Windows 10 VM and use the GPU card in it. We'll install a CAD system on this VM and users will have access to it via RDP. Will it work fine?

Hyper-V only has RemoteFX, which is disabled by default as it has some security risks, being older technology. Then there is DDA. GPU-PV is not available, and while MSFT has plans and is working on improvements, I know of no roadmap or timeline details for this.

Pingback: How to Customize Hyper-V VMs using PowerShell

Hi, I try to use DDA on my Dell T30 with an i7-6700K built in. Unfortunately I get the error below when I try to dismount my desired device. Any idea? Is the system not able to use DDA?

Dismount-VMHostAssignableDevice : The operation failed. The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI settings.
At line:1 char:1
+ Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(0)#PCI(1400)#U …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Dismount-VMHostAssignableDevice], VirtualizationException
+ FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.DismountVMHostAssignableDevice

Kind regards Jens

I have only used DDA with certified OEM solutions. It is impossible for me to find out which motherboard/BIOS/GPU combinations will work and are supported.

Dell T30 with an i7-6700K, and all virtualization options enabled in the BIOS.

When I try to dismount the card from the host I get the following: Dismount-VmHostAssignableDevice : The operation failed. The current configuration does not allow for OS control of the PCI Express bus.

Did someone get this running with a Dell T30?


Lenovo Press


Introduction to Windows Server 2016 Hyper-V Discrete Device Assignment

Planning / Implementation


  • David Tanaka


This paper describes the steps on how to enable Discrete Device Assignment (also known as PCI Passthrough) available as part of the Hyper-V role in Microsoft Windows Server 2016.

Discrete Device Assignment is a performance enhancement that allows a specific physical PCI device to be directly controlled by a guest VM running on the Hyper-V instance.  Specifically, this new feature aims to deliver a certain type of PCI device class, such as Graphics Processing Units (GPU) or Non-Volatile Memory express (NVMe) devices, to a Windows Server 2016 virtual machine, where the VM will be given full and direct access to the physical PCIe device.

In this paper we describe how to enable and use this feature on Lenovo servers using Windows Server 2016 Technical Preview 4 (TP4). We provide the step-by-step instructions on how to make an NVIDIA GPU available to a Hyper-V virtual machine.

This paper is aimed at IT specialists and IT managers wanting to understand more about the new features of Windows Server 2016 and is part of a series of technical papers on the new operating system.

Table of Contents

  • Introduction
  • Installing the GPU and creating a VM
  • Enabling the device inside the VM
  • Restoring the device to the host system
  • Summary

Change History

Changes in the April 18 update:

  • Corrections to step 3 and step 4 in "Restoring the device to the host system" on page 12.

Related product families

Product families related to this document are the following:

  • Microsoft Alliance





Hyper-V Discrete Device Assignment - Hardware options?

  • Thread starter fuzzyfuzzyfungus
  • Start date Oct 6, 2022


fuzzyfuzzyfungus, Ars Tribunus Angusticlavius

  • Oct 6, 2022

I hope this is the correct forum; the question is part hardware, part software, so I apologize if I'm putting this in the wrong place.

For reasons involving dubious decision-making in the past (stuck with it now, just being handed the job of trying to mitigate it) I've found myself with the requirement of virtualizing a Win10 guest that depends on some very oddball USB peripherals, and so needs PCIe-level passthrough of the xHCI controller; the higher-level device passthrough options offered by Hyper-V, VMware (Workstation or ESX), etc. don't do the job. So I am looking for test candidate systems that specifically meet the requirements of Hyper-V Discrete Device Assignment, in addition to the baseline requirements for Hyper-V virtualization on Server 2019.

Unfortunately, I've had mixed results with really early spitball tests of hardware that happened to be on hand (a T14 Gen 2, AMD based, largely worked fine; a Dell Precision 1700 lacked PCIe Access Control Services and was a complete no-go), and I've been having some difficulty getting vendor confirmation of the presence or absence of the DDA-specific requirements: you can pretty much assume an IOMMU/SR-IOV, etc. at this point, but neither the spec sheets nor the reps seem to know much about ACS or specifically confirm/deny that DDA is supported.

Does anyone have any experience with this, and any suggestions either on specific model lines or on specific chipsets that I should either seek out or avoid? The host will need to share space with humans, so pedestal servers are OK, rackmount screamers are not; system requirements are pretty low (under 128GB of RAM, single socket, some PCIe but no punchy GPU or accelerator card requirements); it's just a matter of confirming these platform capabilities that seem to be oddly hard to find on spec sheets.

(Also, if anyone has experience with passing through USB expansion cards in particular, whether Hyper-V or other hypervisors, any recommendations and/or horror stories about specific chipsets or models would be appreciated; I'm at liberty to obtain a bunch of cards for testing, but if I can avoid going in blind and just selecting randomly that would be a plus.)

I appreciate any information anyone might have.

Ars Legatus Legionis

Would a network USB hub work?  

Hyper-V is not designed for USB passthrough no matter what you read. Your hardware would be better served with a VMware solution. ESXi on bare metal, or just VMware Workstation on Linux/Windows running VMs. Put it this way: VMware touts USB passthrough as a feature. Microsoft doesn't. Don't waste your time with Hyper-V.  

Ars Praefectus

What about: https://www.net-usb.com/hyper-v-usb/  

Ars Scholae Palatinae

  • Oct 7, 2022

Use a Digi AnywhereUSB network device to connect to your USB devices over the network. They are a bit $$, but 10000% worth the cost. Other "network USB" devices are not recognized as native USB, but Digi's solution is. https://www.digi.com/products/netwo...ment/usb-connectivity/usb-over-ip/anywhereusb . As long as the OS you are using is supported, I would use the Digi device without hesitation. Edit: the other advantage of using a network device is that you keep the ability to migrate a VM from one host to another without having to physically move the USB device.

Paladin

  • Oct 8, 2022

If you just need to pick hardware that has the features you need, I would look at the support manual for the hardware in question. If it mentions the option in the BIOS manual, then it should be supported. If not, make sure you have a good return policy.  

  • Nov 2, 2022

use a digi anywhereUSB network device to connect to your USB devices over the network

the higher-level device passthrough options offered by hyper-v, Vmware(workstation or ESX), etc. don't do the job

Brandon Kahler

Even better is to use a USB PCIe card with several discrete controllers so you can assign ports to multiple VMs. I use these in a few ESXi hosts. Each controller can be configured for PCIe passthrough individually, even on the free tier of ESXi. No SR-IOV needed. https://www.amazon.com/FebSmart-Supersp ... B07WCQ64RN I used the above card to attach USB HDMI capture dongles to four different VMs. Worked perfectly.  
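If you drive ESXi with PowerCLI, assigning one of the card's controllers looks roughly like the sketch below; this assumes PowerCLI is installed, the host already has the controller toggled for passthrough, and the VM is powered off with a full memory reservation. The server, VM, and device names are placeholders.

    # Connect, then pick the VM and one passthrough-capable PCI device (one xHCI controller on the card)
    Connect-VIServer -Server 'esxi01.example.local'
    $vm  = Get-VM -Name 'usb-guest'
    $dev = Get-PassthroughDevice -VMHost (Get-VMHost -Name 'esxi01.example.local') -Type Pci |
           Where-Object { $_.Name -match 'USB' } | Select-Object -First 1
    # Attach the controller to the VM; its ports then belong exclusively to that guest
    Add-PassthroughDevice -VM $vm -PassthroughDevice $dev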

Very fancy, I was going to ask if it did it with SR-IOV (which in itself is impressive) but it seems purpose designed.  

The important part is that it has 8 Safty fuses. I trust Safty for all my mission critical fuse needs.  



Citrix Virtual Apps and Desktops 7 2402 LTSR


GPU acceleration for Windows single-session OS


With HDX 3D Pro, you can deliver graphically intensive applications as part of hosted desktops or applications on Single-session OS machines. HDX 3D Pro supports physical host computers (including desktop, blade, and rack workstations) and GPU Passthrough and GPU virtualization technologies offered by XenServer, vSphere, Nutanix, and Hyper-V (passthrough only) hypervisors.

HDX 3D Pro offers the following features:

Adaptive H.264-based or H.265-based deep compression for optimal WAN and wireless performance. HDX 3D Pro uses CPU-based full-screen H.264 compression as the default compression technique for encoding. Hardware encoding with H.264 is used with NVIDIA, Intel, and AMD cards that support NVENC. Hardware encoding with H.265 is used with NVIDIA cards that support NVENC.

Lossless compression option for specialized use cases. HDX 3D Pro also offers a CPU-based lossless codec to support applications where pixel-perfect graphics are required, such as medical imaging. True lossless compression is recommended only for specialized use cases because it consumes more network and processing resources.

Caution: Editing the registry incorrectly can cause serious problems that might require you to reinstall your operating system. Citrix cannot guarantee that problems resulting from the incorrect use of Registry Editor can be solved. Use Registry Editor at your own risk. Be sure to back up the registry before you edit it.

Multiple and high-resolution monitor support. For Single-session OS machines, up to 8 4K monitors are supported. Users can arrange their monitors in any configuration and can mix monitors with different resolutions and orientations. The number of monitors is limited by the capabilities of the host computer GPU, the user device, and the available bandwidth. HDX 3D Pro supports all monitor resolutions and is limited only by the capabilities of the GPU on the host computer.

  • Dynamic resolution. You can resize the virtual desktop or application window to any resolution. Note: The only supported method to change the resolution is by resizing the VDA session window. Changing resolution from within the VDA session (using Control Panel > Appearance and Personalization > Display > Screen Resolution) is not supported .
  • Support for NVIDIA vGPU architecture. HDX 3D Pro supports NVIDIA vGPU cards. For information, see NVIDIA vGPU for GPU passthrough and GPU sharing. NVIDIA vGPU enables multiple VMs to have simultaneous, direct access to a single physical GPU, using the same NVIDIA graphics drivers that are deployed on non-virtualized operating systems.
  • Support for VMware vSphere and VMware ESX using Virtual Direct Graphics Acceleration (vDGA) - You can use HDX 3D Pro with vDGA for both RDS and VDI workloads.
  • Support for VMware vSphere/ESX.
  • Support for Microsoft Hyper-V using Discrete Device Assignment in Windows Server 2016.
  • Support for Data Center Graphics with Intel Xeon Processor E3 Family and Intel Data Center GPU Flex Series. For more information, see https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/flex-series.html .
  • Support for AMD GPUs.
Note: Support for AMD MxGPU (GPU virtualization) works with VMware vSphere vGPUs only. Citrix Hypervisor and Hyper-V are supported with GPU passthrough. For more information, see https://www.amd.com/en/graphics/workstation-virtual-graphics .
  • Access to a high-performance video encoder for NVIDIA GPUs, AMD GPUs, and Intel GPUs. A policy setting (enabled by default) controls this feature. The feature allows the use of hardware encoding for H.264, H.265, or AV1 encoding (where available). If such hardware is not available, the VDA falls back to CPU-based encoding using the software video codec. For more information, see Graphics policy settings .

As shown in the following figure:

  • When a user logs in to Citrix Workspace app and accesses the virtual application or desktop, the Controller authenticates the user. The Controller then contacts the VDA for HDX 3D Pro to broker a connection to the computer hosting the graphical application.

The VDA for HDX 3D Pro uses the appropriate hardware on the host to compress views of the complete desktop or of just the graphical application.

  • The desktop or application views and the user interactions with them are transmitted between the host computer and the user device. This transmission is done through a direct HDX connection between Citrix Workspace app and the VDA for HDX 3D Pro.

Diagram showing integration of HDX 3D Pro with Citrix Virtual Desktops and related components

  • Optimize the HDX 3D Pro user experience

When multiple users share a connection with limited bandwidth (for example, at a branch office), we recommend that you use the Overall session bandwidth limit policy setting to limit the bandwidth available to each user. Using this setting ensures that the available bandwidth does not fluctuate widely as users log on and off. Because HDX 3D Pro automatically adjusts to use all the available bandwidth, large variations in the available bandwidth over the course of user sessions can negatively impact performance.

For example, if 20 users share a 60 Mbps connection, the bandwidth available to each user can vary between 3 Mbps and 60 Mbps, depending on the number of concurrent users. To optimize the user experience in this scenario, determine the bandwidth required per user at peak periods and limit users to this amount always.

For users of a 3D mouse, we recommend that you increase the priority of the Generic USB Redirection virtual channel to 0. For information about changing the virtual channel priority, see the Knowledge Center article CTX128190 .

  • Lossless compression

When using lossless compression:

  • The lossless indicator, a notification area icon, notifies the user if the screen displayed is a lossy frame or a lossless frame. This icon helps when the Visual Quality policy setting specifies Build to lossless . The lossless indicator turns green when the frames sent are lossless.
  • The lossless switch enables the user to change to Always Lossless mode anytime within the session. To select or deselect Lossless anytime within a session, right-click the icon and click Switch to pixel perfect or use the shortcut ALT+SHIFT+1 .
  • For lossless compression: HDX 3D Pro uses the lossless codec for compression regardless of the codec selected through policy.
  • For lossy compression: HDX 3D Pro uses the original codec, either the default or the one selected through policy.
  • Lossless switch settings are not retained for subsequent sessions. To use a lossless codec for every connection, select Always lossless in the Visual quality policy setting.
  • Lossless hotkey

You can use a hotkey to select or clear Lossless at any time within a session, by using the default shortcut ALT+SHIFT+1 .

You can override the default shortcut, ALT+SHIFT+1 , in the Windows Registry. To configure a new Registry setting, set the following registry values:

  • Key : HKEY_CURRENT_USER\SOFTWARE\Citrix\Graphics
  • Name : HKLM_HotKey
  • Type : String

The format to configure a shortcut combination is C=0|1, A=0|1, S=0|1, W=0|1, K=val . Keys must be comma “,” separated without a space. The order of the keys doesn’t matter.

A, C, S, W and K are keys, where C=Control, A=ALT, S=SHIFT, W=Win, and K=a valid key where allowed values for K are 0–9, a–z, and any virtual key code.

For example,

  • For F10 , set K=0x79
  • For Ctrl + F10 , set C=1, K=0x79
  • For Alt + A , set A=1,K=a or A=1,K=A or K=A,A=1
  • For Ctrl + Alt + 5 , set C=1, A=1,K=5 or A=1,K=5,C=1
  • For Ctrl + Shift + F5 , set C=1,S=1,K=0x74

The following table depicts an example list of virtual key codes (the function keys):

F1 = 0x70, F2 = 0x71, F3 = 0x72, F4 = 0x73, F5 = 0x74, F6 = 0x75, F7 = 0x76, F8 = 0x77, F9 = 0x78, F10 = 0x79, F11 = 0x7A, F12 = 0x7B

Ensure that there is no space between the shortcut combinations. For example:

Correct: C=1,K=0x74 Incorrect: C=1, K=0x74
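The same value can also be created from PowerShell; a minimal sketch, run in the user's session, that maps the lossless switch to Ctrl+F10 (C=1,K=0x79):

    # Create the key if needed, then set the hotkey string described above
    $key = 'HKCU:\SOFTWARE\Citrix\Graphics'
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name 'HKLM_HotKey' -PropertyType String -Value 'C=1,K=0x79' -Force | Out-Null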


v林羽


[Windows How-To] Hyper-V PCIe passthrough: NICs, GPUs, and other devices

#Manual #Tutorial #System #Windows #VirtualMachine #Hyper-V

Starting with Windows Server 2016 TP4, Microsoft added Discrete Device Assignment (DDA) to Hyper-V, which lets a virtual machine take full control of certain hardware on the host (including, but not limited to, GPUs and network adapters). Hyper-V device passthrough has the following prerequisites:

  • You must use Windows Server Hyper-V.
  • The hardware must support an IOMMU (Intel VT-d / AMD-Vi / ARM SMMU).
  • The device itself must support passthrough; NVIDIA GeForce consumer cards, for example, are not supported (although the NVIDIA driver update from April 2021 reportedly removed the GeForce passthrough restriction).

The walkthrough below uses Windows Server 2022. Before starting, check in the BIOS that the IOMMU (Intel VT-d / AMD-Vi / ARM SMMU) is enabled.

Then check hardware support with the PowerShell script provided by Microsoft: Virtualization-Documentation/SurveyDDA.ps1 · GitHub.

When it finishes, you get a report like the following:

"All of the interrupts are line-based, no assignment can work." (shown in red) means the device is not supported. "And it has no interrupts at all -- assignment can work." (shown in green) means the device can be passed through. "PCIROOT(0)#PCI(0504)" is the device's location path.

Treat the report as a rough guide only; many devices reported as unsupported can still be passed through successfully. An NVIDIA GeForce GTX 970, for example, passes through just fine.

2. NIC passthrough using PowerShell commands

The vast majority of PCIe devices are supported. Here an Intel® Ethernet Connection I219-LM NIC is passed through as an example:

  • Get the location path of the device to be passed through: in Device Manager, right-click the device, open Properties, and read the Location Paths entry on the Details tab.
  • Disable the device in Device Manager.
  • Shut down the virtual machine completely from Hyper-V Manager.
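A minimal sketch of the usual command sequence, assuming an elevated prompt on the Hyper-V host (the VM name 'Win10-Guest' is a placeholder; the location path is the one from this example):

    # Location path copied from Device Manager for the I219-LM NIC
    $locationPath = 'PCIROOT(0)#PCI(1B03)#PCI(0000)'
    # Take the NIC away from the host so it becomes assignable (-Force: no partitioning driver)
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
    # Attach it to the stopped virtual machine and confirm the assignment
    Add-VMAssignableDevice -LocationPath $locationPath -VMName 'Win10-Guest'
    Get-VMAssignableDevice -VMName 'Win10-Guest'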

These commands return output confirming that the device at PCIROOT(0)#PCI(1B03)#PCI(0000) has been passed through successfully.

3. GPU passthrough using a graphical tool

I recently found a GUI passthrough tool on GitHub: GitHub - chanket/DDA, a graphical front end for Hyper-V Discrete Device Assignment.

The interface is simple and shows the passthrough state of the existing virtual machines. Let's test it by passing through a GPU.


Add the NVIDIA GeForce GTX 970 (reported as unsupported by the check above; let's try passing it through anyway).


After the VM boots, the card appears in Device Manager.


After installing the driver, the card is recognized correctly with no errors, so the NVIDIA driver really does seem to have removed the GeForce passthrough restriction. Next, let's exercise the card.


With enhanced session mode enabled, running a game shows the GPU load changing, which confirms the card is working normally.


Some devices (especially GPUs) require additional MMIO space to be assigned to the VM so that the device's memory can be accessed. By default, each VM starts with 128 MB of low MMIO space and 512 MB of high MMIO space. A device may need more, or several passed-through devices may push the combined requirement above these defaults. See Microsoft's explanation: Plan for deploying devices using Discrete Device Assignment | Microsoft Learn
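These apertures can be enlarged per VM from PowerShell. A minimal sketch, using example values in the range Microsoft's article discusses for a single GPU (the VM name is a placeholder; size the values to the device's actual BARs):

    # Enlarge the MMIO apertures before assigning the device
    Set-VM -VMName 'gpu-guest' -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB
    # Settings DDA also expects on the VM
    Set-VM -VMName 'gpu-guest' -GuestControlledCacheTypes $true -AutomaticStopAction TurnOff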

The graphical Discrete Device Assignment (DDA) tool also exposes three settings for this.


The values to use can also be read from Device Manager, under the GPU's Properties > Resources tab, which lists three memory ranges. They are shown in hexadecimal; converting them gives 16 MB + 256 MB + 32 MB.


4. Disk passthrough

A Hyper-V VM can use disks in three ways: virtual hard disks, directly attached physical disks, and DDA passthrough. In testing, the latter two both reach essentially bare-metal performance; in DDA mode the disk is also better recognized by the OS and by software, allowing more configuration and monitoring, such as correct identification of the disk model and temperature readings.

4.1. Using a physical disk

  • In Disk Management, right-click the disk and select Offline so that the disk is in the Offline state.


  • In the VM's settings, add a new hard disk and select the physical hard disk. (A PowerShell equivalent of these two steps is sketched below.)
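A sketch with a placeholder disk number and VM name (check the number with Get-Disk first):

    # Take physical disk 2 offline, then attach it to the VM as a pass-through disk
    Set-Disk -Number 2 -IsOffline $true
    Add-VMHardDiskDrive -VMName 'Win10-Guest' -ControllerType SCSI -DiskNumber 2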


4.2. DDA passthrough
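For DDA, the sequence is the same as the NIC passthrough in section 2, just pointed at the NVMe controller's location path; a minimal sketch with placeholder values:

    # Placeholder location path of the NVMe controller, copied from Device Manager
    $locationPath = 'PCIROOT(0)#PCI(1D00)#PCI(0000)'
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
    Add-VMAssignableDevice -LocationPath $locationPath -VMName 'Win10-Guest'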



Enabling Discrete Device Assignment on Windows 10

I've been experimenting with different versions of Windows 10, and I am almost ready to buy a license, but unfortunately there is one major problem that is a showstopper for me. For my work, I need to be able to pass through physical devices to a Linux-based Hyper-V virtual machine. Microsoft calls this feature DDA or Discrete Device Assignment, and as far as I can tell it is only supported on Server editions of Windows. Unfortunately it is not viable for me to use Server as my main workstation OS. Dual-booting with it is also not an option.

Is there any way to somehow enable this feature on Windows 10 with some sort of registry/group policy hack? Or perhaps is it possible to move the feature from a Server installation over to a Win10 installation so I can use it?

My master goal is to pass through a PCI device (a USB3.1 controller/root hub) to a linux-based virtual machine. GPU passthrough would certainly be useful in the future, but right now it's not as important as the root hub.

DDA seems like a mature technology at this point, so I think it should be included in Pro, or at least in Enterprise/Pro for Workstations. Linux can already do this out of the box, and it's a free operating system.

Currently this is the only thing that's holding me back from getting a proper license for Windows 10, and I would really appreciate it if someone from the Server or Hyper-V teams could point me in the right direction and show me how I can enable the feature.

Thanks in advance


I would like to check whether the reply was of help. If yes, please accept the answer so that others who run into a similar issue can find useful information quickly. If you have any other concerns or questions, please feel free to share feedback.

Best Regards, Joann

We haven't heard from you for a few days. I hope you have solved your issue and that my tips were useful to you. If so, please accept the answer so that others who run into a similar issue can find useful information quickly. If you have any other concerns or questions, please feel free to share feedback.

I don't know about getting it to work in Linux VMs, but Windows VMs should work. Here's info on getting a GPU passed through: https://www.reddit.com/r/sysadmin/comments/jym8xz/gpu_partitioning_is_finally_possible_in_hyperv/

That post seems to describe GPU partitioning, not DDA. For me, the most important thing to pass through is a USB root hub (a PCIe device), which is still impossible.

As I understand it, you want to know whether you can use Windows 10 to configure DDA so that you can pass through your PCIe device to your Linux-based VM.

According to the article, it only supports Microsoft Hyper-V Server 2016, Windows Server 2016, Microsoft Hyper-V Server 2019, Windows Server 2019.

https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment

After research, it seems there is no official method to enable DDA on Windows 10, so it's not recommended to use a third-party method to configure DDA on a Windows 10 host, in case you run into any unexpected issues. Thanks for your understanding!

Thank you for your time! If you have any other concerns or questions, please feel free to share feedback!

Best regards Joann

--------------------------------------------------------------------------------------------------------------------

If the Answer is helpful, please click " Accept Answer " and upvote it.

Note: Please follow the steps in our documentation to enable e-mail notifications if you want to receive the related email notification for this thread.

I am aware that it's only officially supported on Server; I am asking for this feature to be made available in Windows 10.

Is there perhaps an unofficial method to unlock DDA? As far as I can tell, the feature is there, but it's locked, which is quite annoying.

Again, I would buy a win10 license if there was a way to do this.

Thanks, Zsolt

Thank you for your reply!

I am afraid we can only point you to the official versions that support configuring DDA. In this sense, we strongly recommend that you report this request to Microsoft directly with the Feedback Hub app. The Feedback Hub app lets you tell Microsoft about any problems you run into while using Windows 10. For more information on getting in touch with us, click here: https://support.microsoft.com/en-us/windows/send-feedback-to-microsoft-with-the-feedback-hub-app-f59187f8-8739-22d6-ba93-f66612949332

Improving the quality of the products and services is a never-ending process for Microsoft. Thanks again for choosing us!

And I would appreciate it if you could click "Accept Answer" so that others with a similar issue can find this information quickly.

If you have any other concerns, please feel free to share feedback!

Best wishes Joann Zhu


