
PVS 7.15 CU3 - BNIStack failed - BSOD (CVhdMp.sys) XenServer

Tim Röser


Hi Community! 


I hope I can find someone who can help me with my problem; this is my first entry here. I have already read many articles, but none of the suggestions could resolve my problem.


I'm currently receiving a BSOD, as you can see in the attached screenshot.


This happens on every single one of our 3 different vDisks. The current versions boot up fine, but as soon as I create a new version, the next reboot ends in the BSOD. It doesn't matter whether I reboot within the new version or close the version and boot any server from it.


Here are my system specs: 

- 2 PVS servers running on VMware, version 7.15 CU3

- we have 13 XenServer hosts on version 7.1 CU2, running about 70 active VMs

- vDisks are installed with Windows Server 2012 R2 and PVS Target Device software 7.15 CU3


I already have an open ticket with Citrix Support, but no solution until now.


Here is what i already tried:

- added registry entries for BNIStack (SocketOpenRetryLimit and SocketOpenRetryIntervalMS); this resolved the problem once when we had this BSOD before, but not this time (I already tried increasing the values to the maximum)

- built a reverse image on a local disk of a VM and reinstalled XenTools and the PVS Target Device software (no blue screen on local reboots!), then converted it with the Imaging Wizard to a new vDisk. This vDisk boots up fine the first time, and I'm able to reboot it once. When I log in after this reboot, the message "PV Storage Host needs to reboot to finish installation" appears.


After this reboot the BSOD appears again! The same message was shown on the local image after installing XenTools, and rebooting was no problem with the local disk.

- deleted ghost NICs as suggested in other Citrix Discussions threads
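For reference, the two BNIStack retry values from the first bullet are DWORDs under the BNIStack driver's Parameters key. A sketch of the corresponding .reg file (the path is the usual service-parameters location and the values are purely illustrative, not a tuning recommendation; follow Citrix guidance for your environment):

```
Windows Registry Editor Version 5.00

; Illustrative values only -- tune per Citrix guidance.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters]
"SocketOpenRetryLimit"=dword:00000014
"SocketOpenRetryIntervalMS"=dword:000000c8
```

A reboot of the target device is needed before these take effect.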


I hope someone has some new tips for me.


Thanks & greetings from Germany!





Edited by Tim Röser
Added Screenshots!

Recommended Posts


We are experiencing VDAs freezing during the day once booted on PVS. The only way to resolve this is to hard-reset the VMs.


On the VDA OS, once you send a Ctrl+Alt+Del, the console just shows a blue spinning circle.


PVS: OS 2016, Citrix 7.15 CU3

DDC: OS 2016, Citrix 7.15 CU3

VDAs: OS 2016, Citrix 7.15 CU3

VMware ESXi 6.7 U2, build 13473784


I have raised this with Citrix Support, but they require Wireshark captures from the VDAs and PVS servers, which are hard to get as this only happens to around 8 VDAs per day.


Has anyone got any further updates?


We have provided this to Citrix Support, but they now need a Wireshark capture from before the target fails, plus one from the PVS server. As you can imagine, this is difficult to get as we can't predict which target devices will fail each day. I am currently looking at Machine Creation Services to replace PVS. I can't see this getting fixed any time soon.








We experience the bnistack connection failure with our new Windows Server 2016 image.

We have used different PVS versions and VMware versions; the errors remain:

[MIoProcessIosReadTransaction] Invalid reconnection handle returned.

[IosReconnectHA] Failed to connect to the server pausing before retry.


The biggest issue for us is that HA isn't working; we see no sudden freezes.

When your network isn't very stable, I think you could experience the same issue.


We have already been working with Citrix Support since 17 Oct 2019.

We did numerous Wireshark traces to collect more information (we forced HA by shutting down the Stream Service).


Citrix Support now thinks the error occurs because the PVS agent doesn't get an MTU value from the network card.

This is a requirement for HA to work. I will collect CDF traces tonight to find out the specific error and update Citrix Support.


The only difference from our old image (Win2008 R2) seems to be UEFI instead of a conventional BIOS.

Can anyone check whether they have similar issues with a conventional BIOS?


Our environment (at the moment):

VMware ESXi 6.7.0, build 15160138



PVS servers: Win2016

PVS agents: Win2016

Citrix Virtual Apps and Desktop 1912 LTSR

VMware Tools version: 11265




Hi Maurits and others,


I had EXACTLY the same issue, and exactly the same experience with Citrix Support.

I finally changed the NIC from VMXNET3 to E1000E, and my issue is solved. Of course, when switching the NIC type from VMXNET3 to E1000E, I had to reverse-image my master image and import it into PVS again after reinstalling the PVS Target Device software.

But I hope, once the real underlying issue is found and solved, to return to the VMXNET3 NIC, as this network adapter is much more performant than the legacy E1000E NIC.


So I'm more and more convinced it must be a combination of PVS server, PVS target, VMware ESXi version and maybe the VMware Tools version, as we only experienced it with the VMXNET3 NIC.


My Citrix Support ticket: 79445951 (opened on 20 Jan 2020).


I hope this helps.




We have managed to resolve our issue in both data centers by changing the ring buffers on the VMXNET3 network cards. We also had to make changes on the underlying host hardware (Cisco UCS).
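For anyone wanting to try the same change: the ring buffers are adjusted in the guest's VMXNET3 advanced adapter settings. A sketch of the relevant properties (names and maximums as they commonly appear in VMXNET3 drivers; verify against your driver version and test before rolling out fleet-wide):

```
Device Manager > vmxnet3 Ethernet Adapter > Properties > Advanced:
  Rx Ring #1 Size  : 512 (typical default) -> up to 4096
  Small Rx Buffers : 512 (typical default) -> up to 8192
```

Larger rings consume more guest memory, so raise them gradually rather than jumping straight to the maximum.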


We noticed issues in our Skype for Business environment and VMware support advised against changing any buffer settings. 


Our hardware vendor Cisco recommended the buffer changes and we have seen a big improvement across the whole virtual estate.


I am sure we didn't have these issues in earlier versions of VMware on the same hardware.


Hi mclayto98,


when you receive:

 running out of buffers:2289

 # of times the 1st ring is full:94


What is it telling you, and what is the expected value? Is there an upper limit before the VM crashes?

Did you run this on an already-crashed VM or on a running one?


For one of my VMs that crashed 2 days ago, running on a Dell host, I get:

   running out of buffers:840
   pkts receive error:0
   1st ring size:512
   2nd ring size:512
   # of times the 1st ring is full:62


Citrix have now published an article based on our findings: https://support.citrix.com/article/CTX272248


The figures "running out of buffers" and "# of times the 1st ring is full" reset each time the virtual machine is rebooted.


In an ideal world the buffers wouldn't fill up at all, and both figures would be 0. There are quite a few articles on the internet that explain the buffers in more detail.


You will see these even on a running virtual machine with no issues. The virtual machine will still run, but there may be packet loss or latency due to the bottleneck at the virtual networking level. I assume PVS must have a certain threshold, after which the target just becomes unresponsive.
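Since these counters have to be checked on many targets, a small script can help. A minimal Python sketch that parses the "name:value" counter lines quoted earlier in this thread and flags adapters whose RX ring has filled (the input format is assumed to match the output pasted above):

```python
import re

def parse_vmxnet3_stats(text):
    """Parse 'name:value' counter lines into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        m = re.match(r"\s*(.+?):(\d+)\s*$", line)
        if m:
            stats[m.group(1).strip()] = int(m.group(2))
    return stats

def ring_pressure(stats):
    """True if buffers ran out or the 1st RX ring filled since last boot."""
    return (stats.get("running out of buffers", 0) > 0
            or stats.get("# of times the 1st ring is full", 0) > 0)

# Sample taken from the counters quoted in this thread.
sample = """\
   running out of buffers:840
   pkts receive error:0
   1st ring size:512
   2nd ring size:512
   # of times the 1st ring is full:62
"""

stats = parse_vmxnet3_stats(sample)
print(stats["running out of buffers"])  # 840
print(ring_pressure(stats))             # True
```

Feeding each target's counter dump through this and collecting the results gives a quick overview of which VMs are under ring pressure and might benefit from larger buffers.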

