  • 1

PVS 7.15 CU3 - BNIStack failed - BSOD (CVhdMp.sys) XenServer


Tim Röser

Question

Hi Community! 

 

I hope I can find someone who can help me with my problem; this is my first entry here. I have already read many articles, but none of the suggestions could resolve my problem.

 

I'm currently receiving a BSOD as you can see in the attached screenshot

[Screenshot: BNIStack BSOD]

This happens on every single one of our 3 different vDisks. The current versions boot up fine, but as soon as I create a new version, the next reboot ends in the BSOD. It doesn't matter whether I reboot within the new version or close the version and boot any server from this new version.

 

Here are my system specs: 

- 2 PVS servers running on VMware, version 7.15.19.11 (CU3)

- 13 XenServer hosts on version 7.1 CU2, running about 70 active VMs

- vDisks are installed with Windows Server 2012 R2 and the PVS Target Device software 7.15 CU3

 

I already have an open ticket with Citrix Support, but no solution so far.

 

Here is what I already tried:

- Added registry entries for BNIStack (SocketOpenRetryLimit and SocketOpenRetryIntervalMS). This resolved the problem once when we had this BSOD before, but not this time (I already tried increasing the values to the maximum). A quick check of the current values is sketched just below this list.

- Built a reverse image on a local disk of a VM and reinstalled XenTools and the PVS target software (no blue screen on local reboots!), then converted it with the Imaging Wizard to a new vDisk. This vDisk boots up fine the first time and I'm able to reboot it once! When I log in after this reboot, the message "PV Storage Host needs to reboot to finish installation" appears.

[Screenshot: "PV Storage Host" reboot prompt]

After this reboot the BSOD appears again! The same message was shown on the local image after installing XenTools, and rebooting from the local disk was no problem.

- Deleted ghost NICs as suggested in other Citrix Discussions articles
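
For anyone wanting to double-check what those BNIStack values are currently set to inside the target image, here is a minimal sketch from an elevated command prompt; it assumes the default key location used by the PVS target device software:

rem Show the current BNIStack retry parameters (values are displayed in hex by default)
reg query "HKLM\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters"

rem Show a single value on its own
reg query "HKLM\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters" /v SocketOpenRetryLimit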

 

Hope someone has some new tips for me.

 

Thanks & greets from Germany!

 

Tim 

 

 

Edited by Tim Röser
Added Screenshots!

Recommended Posts

  • 1
On 1/10/2020 at 12:46 PM, Lewis Barclay said:

hi,

 

Did you get anywhere with this yet?

 

We don't have the answer yet!

 

Thanks

 

16 hours ago, Chris Marreel said:

Hi all,

 

I'm experiencing EXACTLY the same as cbenali69 and Lewis Barclay are experiencing.

On Saturday morning we do a weekly reboot of 5 XenApp workers using PVS. One or two days later a few of these 5 are hanging, and I can only REBOOT them to get them back online.

In the event log I first see:

- BNIStack error, Event ID 84 : [MIoWorkerThread] Too many retries Initiate reconnect.

- BNIStack information, Event ID 155 : [IosReconnectHA]  HA Reconnect in progress.

- BNIStack information, Event ID 156 : [IosReconnectHA]  Invalid socket error, trying another socket.

And this last event log message repeats thousands of times...

 

We are running the PVS Servers and the PVS targets on VMware.  Hypervisor:    VMware ESXi, 6.7.0, 15160138

 

I have no idea exactly when this began happening, as at first we simply RESTARTED the hanging VMs without investigating. Only after a while did I start to follow up on this; at the moment I don't see a pattern and I have no idea how to trigger it.

All the components on the PVS Servers and the PVS Targets are up-to-date:

- PVS Target version : 1912.0.0 LTSR

- PVS Server version : 1912.0.0 LTSR

- VMware Tools on the PVS Target : 11.0.5

 

I currently have a Citrix support ticket for this issue: 79445951 (opened on 20 Jan 2020).

Can others share their support ticket numbers so we can bundle all the gathered knowledge at Citrix Support?

Can others who are experiencing the same share their hypervisor versions?

 

We are a Citrix partner, and I think some other customer setups are experiencing exactly the same, but I haven't dug deeper into those environments, as I don't want it to happen more often.

 

 

 

Thanks,

  Chris Marreel

 

Hi both,

 

I am really pleased to say this issue was resolved for us after 3 months of what can only be described as hell. No crashed target devices or unregistered machines in going on two weeks.

The issue was the upgrade to VMware ESXi 6.7 U2. Investigation by a VMware support engineer revealed a memory leak on all of the hosts running this version; upgrading all the hosts, the vCenter Appliance and the Platform Services Controller to 6.7 U3 resolved the issue.

I must add that during the upgrade process the hosts were rebooted and DRS was switched off, which hasn't been switched back on again yet, so I can't rule those out as potential causes.

I don't know why any of this would affect target devices; it would be nice to know, but at this stage I'm just glad it's fixed.

I hope this helps you. If the problem you are experiencing is the same one, it was one of the most difficult issues I've dealt with in all my time working in IT, and I am really happy to see the back of it!

 

Thanks

Chris

 

  • Like 1
  • 1

This had to be done in two parts for us: the VMware hosts (in our case Cisco) and the guest OS. Both need to be done; doing just the guest OS wouldn't resolve the issue.

 

There are commands that can be run at a VMware level to show if the servers are experiencing the ring buffer issue.

 

Hope it all makes sense.

 

Checking buffers

 

Logon to an esx host

 

Run – esxtop

 

Press n for network and then t (to order them correctly)

 

The nine-digit number on the left is what you need (it identifies the NIC of the VM), e.g. 134218117, together with the port set name, e.g. DvsPortset-5.

 

Run the command – vsish

 

That takes you into the network diagnostic shell; then run:

 

cd /net/portsets/DvsPortset-5/ports/134218117/vmxnet3/

 

then run – cat rxSummary

 

which will show something like

 

/net/portsets/DvsPortset-5/ports/134218117/vmxnet3/> cat rxSummary

stats of a vmxnet3 vNIC rx queue {

   LRO pkts rx ok:2848

   LRO bytes rx ok:8321911

   pkts rx ok:84397268

   bytes rx ok:20721483722

   unicast pkts rx ok:83394480

   unicast bytes rx ok:20654601285

   multicast pkts rx ok:0

   multicast bytes rx ok:0

   broadcast pkts rx ok:1002788

   broadcast bytes rx ok:66882437

   running out of buffers:2289

   pkts receive error:0

   1st ring size:512

   2nd ring size:512

   # of times the 1st ring is full:94

   # of times the 2nd ring is full:0

   fail to map a rx buffer:0

   request to page in a buffer:0

   # of times rx queue is stopped:0

   failed when copying into the guest buffer:0

   # of pkts dropped due to large hdrs:0

   # of pkts dropped due to max number of SG limits:0

   pkts rx via data ring ok:0

   bytes rx via data ring ok:0

   Whether rx burst queuing is enabled:0

   current backend burst queue length:0

   maximum backend burst queue length so far:0

   aggregate number of times packets are requeued:0

   aggregate number of times packets are dropped by PktAgingList:0

   # of pkts dropped due to large inner (encap) hdrs:0

   number of times packets are dropped by burst queue:0

   number of packets delivered by burst queue:0

   number of packets dropped by packet steering:0
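
If you prefer not to work interactively, the same counters can usually be read in one shot from the ESXi shell; a sketch, assuming the port set and port number found via esxtop above (both are environment specific):

# Non-interactive read of the same rx statistics (port set/port number are examples)
vsish -e get /net/portsets/DvsPortset-5/ports/134218117/vmxnet3/rxSummary

The values to watch in the sample output above appear to be "running out of buffers" and "# of times the 1st ring is full"; non-zero and growing counters there are what point at the ring buffer problem being described.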

 

I have replaced some of the steps with XXX as these are bespoke to us

 

Cisco UCS Level

 

Template

 

Logon to the UCS manager and navigate to – Servers > Service Profile Templates > root > Sub-Organisations > XXX

Select ‘Ethernet Adapter Policies’

Select ‘Add’ and name the policy ‘Eth Adapter Policy XXX’

Set the transmit queues to 4 and the ring size to 1024

Set the receive queues to 4 and the ring size to 1024

Set the Completion Queues to 8

Set the Interrupts to 10

Set RSS to enabled

Save the profile

 

Apply the policy

 

Logon to the UCS manager and navigate to – Servers > Service Profile > root > Sub-Organisations > xxx > Service  Profile

Change the Adapter policy to xxx (in both vNic server profiles)

Change the VMQ Connection Policy to xxx

Reboot Host
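
As a sanity check after the host reboot, the ring sizes actually in effect on the uplink can be read back from the ESXi host on recent builds (6.5 and later, as far as I know); vmnic0 is only an example, use whichever uplink carries the PVS traffic:

# Ring sizes currently in effect on the uplink
esxcli network nic ring current get -n vmnic0

# Maximum ring sizes the NIC/driver supports
esxcli network nic ring preset get -n vmnic0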

 

Windows Guest Settings

 

Attached screenshots

 

 

 

[Attached screenshots 1.jpg, 2.jpg and 3.jpg (Windows guest NIC settings) are not available]
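
Since the screenshots are no longer visible: the Windows-side change for a vmxnet3 adapter is normally made in the NIC's Advanced properties. As a rough sketch of the equivalent PowerShell (the display names and values below follow VMware's usual vmxnet3 guidance, not necessarily what the screenshots showed, and "Ethernet0" is just an example adapter name):

# Inspect the current receive ring / buffer settings (display names can vary by driver version)
Get-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Rx Ring #1 Size","Small Rx Buffers"

# Raise them (example values; restart the adapter or reboot afterwards)
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Rx Ring #1 Size" -DisplayValue 4096
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Small Rx Buffers" -DisplayValue 8192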

  • Like 1
  • 0

I'll try this when I'm back at work on Friday - thanks for your response.

 

The case number currently is 78949189 with the XenServer team - the base case was 78909619, which is currently paused.

 

Sorry, I can't see the screenshots either. I'll paste them in again on Friday as well.

 

Tim

  • 0
On 29.5.2019 at 2:47 PM, Carl Fallis said:

I do not see the screenshots you mention. Make sure that the MCSIO driver is not installed; if it is, remove it - see https://support.citrix.com/article/CTX247843

Can you send me the case number so I can look at the data in the case?

 

Carl

 

I tried the steps mentioned in the article but it didn't solve the BSOD I'm receiving. :-( 

 

Tim

  • 0

I am having exactly the same issue, but running a vDisk on PVS 7.12 (Server 2012 R2) on HP hardware;

using XenServer 7.1 CU2 we see the exact same behavior.

Existing VDI images run fine as long as they are in read-only mode.

Creating a new vDisk version does work at the initial boot; after a restart we get the same BSOD.

We have 3 different vDisk images, all with the same issue.

No VMware NIC involved; PVS and XenServer are running on physical HP hardware.

 

My impression is that the issue started with the latest XenServer 7.1 CU2 updates.

 

Raised ticket as well: Case #78959146

Edited by barcoxsk
added ticket nr
  • 0

I think I have solved my issue with the following steps:

- create new vdisk version in maintenance mode

- start master, logon via xencenter console

- uninstall the NIC "XenServer PV Network Device #0"

- reboot master

- logon with cached credentials via Xencenter console

- let the NIC reinstall itself and restart master again

- reboot is working perfectly

 

Damaged vDisk is repaired.

So in my case, the existing "XenServer PV Network Device" was causing the BSOD.

No other hidden or unused NICs were found.
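
For anyone who wants to double-check the same thing before touching the adapter, this is a rough sketch (not from the original post) of listing the network-class devices, including non-present/ghost ones, from an elevated PowerShell prompt inside the target:

# List all known network-class devices, including ones that are not currently present
Get-PnpDevice -Class Net | Sort-Object Status | Format-Table Status, FriendlyName, InstanceId -AutoSize

# Narrow it down to the XenServer PV adapter mentioned above
Get-PnpDevice -Class Net -FriendlyName "*XenServer PV Network Device*"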

 

  • 0

@Sammy Knockaert This is what I'm testing as well - I went back through our past changes and noticed that we updated to 7.1 CU2, which fits with this. On Friday I saw outstanding updates 7.1 CU2 007 and 008, which include XenTools fixes. I updated one of our servers, and so far I have been trying to cleanly reinstall XenTools in a reverse image, which has failed up to now.

Please keep me up to date if you have any success with this error. :)

 

Tim

  • 0

I am experiencing this issue with VMware after upgrading the VMware Tools in Private mode. I used to get this on XenServer as well after upgrading the XenTools, and the fix was to remove the device_id on the NIC from the Xen CLI. However, I don't know if such parameters exist in ESXi.

 

Any update Tim?

  • 0

Hi Joe,

 

Since support wasn't able to cleanly reinstall XenTools in our vDisks either, we decided to update to XS 8.0 and start with a completely new installation of our vDisks with new XenTools, because we couldn't wait any longer to install new software. We also had to recreate all VMs on XS 8.0, because the VMs migrated from 7.1 CU2 caused the same blue screen after putting the vDisk into multi-device mode. :(

 

  • 0

Hey Tim,

 

Thanks for the reply. I was just about on the verge of suffering the same fate as you when I had the notion to try one last thing. I booted the VM, hit F8 to go to the Windows boot menu and chose "Last Known Good Configuration" (something I probably haven't used since the NT/2000 days) and, lo and behold, it booted! I was then able to uninstall VMware Tools and follow the method Aaron Silber documents in this article (https://www.helient.com/2017/10/updating-vmware-tools-pvs-secrets-revealed/) to update the VMware Tools successfully. I am happy to report that I was then able to seal the VM back up without having to redo hours of work. Hopefully this will help the next poor fool who makes the same mistake.

 

I have to add that I am really beginning to question moving platforms from XenServer to VMware, as it can be somewhat unforgiving when something goes wrong.

 

FWIW, this is what I used to have to do to the VMs in Xen to remove the device_id parameter to get them to boot after a XenTools update:

 

First, get the UUID of the vm in question:

xe vm-list

Then, verify the device_id parameter exists:

xe vm-param-get param-name=platform uuid=<VM UUID>

Lastly, remove the device_id parameter:

xe vm-param-remove uuid=<VM UUID> param-name=platform param-key=device_id
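
Building on the commands above, a quick sketch for checking which VMs still carry the parameter is a small loop run from dom0 (empty output means the key is not set for that VM):

# List device_id for every VM, skipping the control domain
for uuid in $(xe vm-list is-control-domain=false --minimal | tr ',' ' '); do
    echo "VM $uuid: $(xe vm-param-get uuid=$uuid param-name=platform param-key=device_id 2>/dev/null)"
done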

  • 0

So it looks like most of you have resolved your issue, so I will post this just for reference. It would not have helped those with network configuration issues.

 

Sometimes it takes longer than expected for the network to initialize and become available to the BNIStack driver. The error "Error: BNIStack failed, network stack could not be initialized" means that the PVS driver tried, waited, and could not find the network. In some cases it just takes longer; you can modify the wait using the following registry keys:

 

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters

    Name:SocketOpenRetryIntervalMS

    Type: REG_DWORD

    Data: <range of 200 (default) to 2000 (milliseconds)>

 

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters

    Name:SocketOpenRetryLimit

    Type: REG_DWORD

    Data: <range of 5 (default) to 40>

 

SocketOpenRetryLimit is the number of times BNIStack will try to initialize the network (default 5), and SocketOpenRetryIntervalMS is the wait between attempts (default 200 ms).
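
As a hedged sketch of applying those values inside the master image before resealing the vDisk (the key path is exactly the one listed above; 2000 and 40 are simply the documented maximums, so tune them to your environment):

rem Run from an elevated command prompt in the master image
reg add "HKLM\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters" /v SocketOpenRetryIntervalMS /t REG_DWORD /d 2000 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters" /v SocketOpenRetryLimit /t REG_DWORD /d 40 /f

rem Verify
reg query "HKLM\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters"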

  • 0

I am experiencing a similar issue; hopefully someone can help resolve it, as it has been ongoing for months.

 

Environment:

XenApp and PVS 7.15 CU3

ESXi 6.7 U2

40 machines booting from vDisk

 

What happens is that the machines can be up and running for a random amount of time (they are rebooted every night) and one will suddenly lock up, with a hard reset being the only way to get it to respond. We store the machine logs on disk so they are retained, and once the machine has restarted I see hundreds of thousands of the following errors:

 

Error 84: bnistack -  [MIoWorkerThread] Too many retries Initiate reconnect.

Error 158: bnistack - [MIoProcessIosReadTransaction] Invalid reconnection handle returned.

Information 155:  - [IosReconnectHA]  HA Reconnect in progress.

 

Then repeat hundreds of thousands of times.
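
In case it helps anyone correlate timings, this is a rough sketch of pulling those events back out of the System log with PowerShell once the machine is up again (the provider name 'BNIStack' is an assumption; check the provider shown in Event Viewer for these entries first):

# Count the retry/reconnect events by ID
Get-WinEvent -FilterHashtable @{ LogName='System'; ProviderName='BNIStack'; Id=84,155,158 } |
    Group-Object Id | Select-Object Name, Count

# Show when the storm started
Get-WinEvent -FilterHashtable @{ LogName='System'; ProviderName='BNIStack'; Id=84 } |
    Sort-Object TimeCreated | Select-Object -First 1 TimeCreated, Message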

 

There is nothing before the first event that indicates anything has happened.

 

This does not happen to every machine; we maybe get 1-2 machines per day and it seems to be totally random. The machines can be up for 5 hours or 15 hours; there doesn't seem to be any pattern.

 

We've tried updating to ESXi 6.7 U3, as there was a PVS fix mentioned in the release notes, but no dice.

 

Not sure if I am barking up the right tree here but thought I'd give it a go!

  • 0
On 9/4/2019 at 11:30 AM, Lewis Barclay said:

I am experiencing a similar issue; hopefully someone can help resolve it, as it has been ongoing for months.

 

Environment:

XenApp and PVS 7.15 CU3

ESXi 6.7 U2

40 machines booting from vDisk

 

What happens is that the machines can be up and running for a random amount of time (they are rebooted every night) and one will suddenly lock up, with a hard reset being the only way to get it to respond. We store the machine logs on disk so they are retained, and once the machine has restarted I see hundreds of thousands of the following errors:

 

Error 84: bnistack -  [MIoWorkerThread] Too many retries Initiate reconnect.

Error 158: bnistack - [MIoProcessIosReadTransaction] Invalid reconnection handle returned.

Information 155:  - [IosReconnectHA]  HA Reconnect in progress.

 

Then repeat hundreds of thousands of times.

 

There is nothing before the first event that indicates anything has happened.

 

This does not happen to every machine; we maybe get 1-2 machines per day and it seems to be totally random. The machines can be up for 5 hours or 15 hours; there doesn't seem to be any pattern.

 

We've tried updating to ESXi 6.7 U3, as there was a PVS fix mentioned in the release notes, but no dice.

 

Not sure if I am barking up the right tree here but thought I'd give it a go!

 

Hi Lewis,

 

This sounds very similar to an issue we are experiencing, where we are having to reboot the VDAs as a workaround, causing major issues for users. We have it logged with Citrix, but they haven't been able to resolve it.

We are running the same PVS and ESXi versions; the issue started around the time we upgraded to ESXi 6.7 U2, and we see the same Event IDs.

Did you manage to resolve the issue? If so, please let me know what you did; it would be very much appreciated!

 

Thanks

 

  • 0
On 12/12/2019 at 10:17 AM, Chris Benali1709159201 said:

 

Hi Lewis,

 

This sounds very similar to an issue we are experiencing, where we are having to reboot the VDAs as a workaround, causing major issues for users. We have it logged with Citrix, but they haven't been able to resolve it.

We are running the same PVS and ESXi versions; the issue started around the time we upgraded to ESXi 6.7 U2, and we see the same Event IDs.

Did you manage to resolve the issue? If so, please let me know what you did; it would be very much appreciated!

 

Thanks

 

hi,

 

Did you get anywhere with this yet?

 

We don't have the answer yet!

 

Thanks

  • 0
On 1/10/2020 at 12:46 PM, Lewis Barclay said:

hi,

 

Did you get anywhere with this yet?

 

We don't have the answer yet!

 

Thanks

 

Hi,

 

No, I'm afraid not. I'm still working with Citrix Support and Microsoft on the issue but not getting anywhere.

Two things I am looking at currently are updating software that could be causing the crash (though in my opinion that's doubtful) and creating a new delivery group and migrating machines across to it as they fail.

Please let me know if you have any breakthroughs, because at this point I'm at a complete loss.

 

Thanks

  • 0

Hi all,

 

I'm experiencing EXACTLY the same as cbenali69 and Lewis Barclay are experiencing.

On Saturday morning we do a weekly reboot of 5 XenApp workers using PVS. One or two days later a few of these 5 are hanging, and I can only REBOOT them to get them back online.

In the event log I first see:

- BNIStack error, Event ID 84 : [MIoWorkerThread] Too many retries Initiate reconnect.

- BNIStack information, Event ID 155 : [IosReconnectHA]  HA Reconnect in progress.

- BNIStack information, Event ID 156 : [IosReconnectHA]  Invalid socket error, trying another socket.

And this last event log message repeats thousands of times...

 

We are running the PVS Servers and the PVS targets on VMware.  Hypervisor:    VMware ESXi, 6.7.0, 15160138

 

I have no idea exactly when this began happening, as at first we simply RESTARTED the hanging VMs without investigating. Only after a while did I start to follow up on this; at the moment I don't see a pattern and I have no idea how to trigger it.

All the components on the PVS Servers and the PVS Targets are up-to-date:

- PVS Target version : 1912.0.0 LTSR

- PVS Server version : 1912.0.0 LTSR

- VMware Tools on the PVS Target : 11.0.5

 

I currently have a Citrix support ticket for this issue: 79445951 (opened on 20 Jan 2020).

Can others share their support ticket numbers so we can bundle all the gathered knowledge at Citrix Support?

Can others who are experiencing the same share their hypervisor versions?

 

We are a Citrix partner, and I think some other customer setups are experiencing exactly the same, but I haven't dug deeper into those environments, as I don't want it to happen more often.

 

 

 

Thanks,

  Chris Marreel

  • 0

Hi cbenali69, Hi Chris,

 

Thanks for your detailed input.

In one environment the ESXi level was already upgraded to "ESXi 6.7 P01", which is build number "15160138", and that is already NEWER than "ESXi 6.7 Update 3", which is build number "14320388". Even with this higher build number we experienced the issue.

So this is very strange. Can you check which build number your environment is running now? Just to be sure...
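
For reference, the running build can be read straight from the host over SSH or the ESXi shell; both of these are standard ESXi commands:

# Prints version and build, e.g. "VMware ESXi 6.7.0 build-15160138"
vmware -vl

# Same information via esxcli
esxcli system version get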

 

Besides the ESXi upgrade, you also mentioned that you switched off DRS, and that DRS is at the moment still switched OFF in your environment.

In other words, maybe switching OFF DRS is the solution for this annoying PVS issue...

 

What do you think ?

Thanks for sharing your thoughts.

 

Greetings,

  Chris Marreel

 

  • 0
16 hours ago, Chris Marreel said:

Hi cbenali69, Hi Chris,

 

Thanks for your detailed input.

In one environment the ESXi level was already upgraded to "ESXi 6.7 P01", which is build number "15160138", and that is already NEWER than "ESXi 6.7 Update 3", which is build number "14320388". Even with this higher build number we experienced the issue.

So this is very strange. Can you check which build number your environment is running now? Just to be sure...

 

Besides the ESXi upgrade, you also mentioned that you switched off DRS, and that DRS is at the moment still switched OFF in your environment.

In other words, maybe switching OFF DRS is the solution for this annoying PVS issue...

 

What do you think ?

Thanks for sharing your thoughts.

 

Greetings,

  Chris Marreel

 

 

Hi Chris,

 

The build number is 15160138. When we discussed it with VMware they said to upgrade from Update 2 to Update 3, but we ended up upgrading to 15160138 - it's the first time I've done it, and I found the build numbers etc. a little confusing at first.

DRS is still switched off for us due to a fault with a host, so I can't confirm whether it was the cause, but you could easily switch off DRS and manually migrate a test VDA from one host to another to see if it crashes and becomes unregistered. I have observed a VDA crash while being migrated and become unregistered with the DDC, but that could have been related to the faulty host, which I am hoping to resolve soon so I can start testing again; I'm just so far behind because of the core issue.

Just to reiterate: for us the core symptoms were that VDAs using vDisks via PVS were experiencing a full lock-up and becoming unregistered with the DDC, with the error message in Studio stating that the VDA should be registered but isn't. Only a restart of the VDA would bring it back up, and the issue was completely intermittent, affecting all VDAs.

 

I hope that helps

Chris

  • 0

Hi,

On 2/5/2020 at 8:19 AM, Chris Benali1709159201 said:

 

 

Hi both,

 

I am really pleased to say this issue was resolved for us after 3 months of what can only be described as hell. No crashed target devices or unregistered machines in going on two weeks.

The issue was the upgrade to VMware ESXi 6.7 U2. Investigation by a VMware support engineer revealed a memory leak on all of the hosts running this version; upgrading all the hosts, the vCenter Appliance and the Platform Services Controller to 6.7 U3 resolved the issue.

I must add that during the upgrade process the hosts were rebooted and DRS was switched off, which hasn't been switched back on again yet, so I can't rule those out as potential causes.

I don't know why any of this would affect target devices; it would be nice to know, but at this stage I'm just glad it's fixed.

I hope this helps you. If the problem you are experiencing is the same one, it was one of the most difficult issues I've dealt with in all my time working in IT, and I am really happy to see the back of it!

 

Thanks

Chris

 

Thanks for your reply. We updated to 6.7 U3 pretty much as soon as it came out, as I noted there were issues with memory leaks and there were also some specific PVS fix notes in that release; unfortunately this didn't cure our issue.

 

Do you have the build number you are on? We are on the original U3 release so maybe the newer one has the fixes.

 

Thanks for the help!

  • 0
Just now, Lewis Barclay said:

Hi,

Thanks for your reply. We updated to 6.7 U3 pretty much as soon as it came out, as I noted there were issues with memory leaks and there were also some specific PVS fix notes in that release; unfortunately this didn't cure our issue.

 

Do you have the build number you are on? We are on the original U3 release so maybe the newer one has the fixes.

 

Thanks for the help!

 

See my previous post which you missed as we literally just posted at the same time!

  • 0

Hi All,

 

cbenali69 described the issue as:

> Just to reiterate: for us the core symptoms were that VDAs using vDisks via PVS were experiencing a full lock-up

> and becoming unregistered with the DDC, with the error message in Studio stating that the VDA should be registered

> but isn't. Only a restart of the VDA would bring it back up, and the issue was completely intermittent, affecting all VDAs.

 

And I have 2 setups experiencing this PVS-issue.  And this is EXACTLY the same as we are experiencing.

 

One environment is running “VMware ESXi, 6.7.0, 13004448” -> this corresponds with “ESXi 6.7 EP 07”, with a release date of 03/28/2019.

The second environment is running “VMware ESXi, 6.7.0, 15160138” -> this corresponds with “ESXi 6.7 P01”, with a release date of 12/05/2019.

BOTH environments had VMware DRS (Distributed Resource Scheduler) active.

 

At this moment we have added an extra DRS-rule "VM Overrides" and DISABLED DRS for the VDA-workers/PVS-targets AND the PVS-servers.

And we are again evaluating this annoying issue.

 

Greetings,

  Chris

  • Like 1
  • 0
16 minutes ago, Chris Marreel said:

Hi All,

 

cbenali69 described the issue as:

> Just to reiterate: for us the core symptoms were that VDAs using vDisks via PVS were experiencing a full lock-up

> and becoming unregistered with the DDC, with the error message in Studio stating that the VDA should be registered

> but isn't. Only a restart of the VDA would bring it back up, and the issue was completely intermittent, affecting all VDAs.

 

And I have 2 setups experiencing this PVS-issue.  And this is EXACTLY the same as we are experiencing.

 

One environment is running “VMware ESXi, 6.7.0, 13004448” -> this corresponds with “ESXi 6.7 EP 07”, with a release date of 03/28/2019.

The second environment is running “VMware ESXi, 6.7.0, 15160138” -> this corresponds with “ESXi 6.7 P01”, with a release date of 12/05/2019.

BOTH environments had VMware DRS (Distributed Resource Scheduler) active.

 

At this moment we have added an extra DRS-rule "VM Overrides" and DISABLED DRS for the VDA-workers/PVS-targets AND the PVS-servers.

And we are again evaluating this annoying issue.

 

Greetings,

  Chris

Hey Chris,

 

Have you been running build 15160138 for a while and still been experiencing the issue? If so does that rule out that build as a fix?

 

How long have you had the DRS rules implemented?

 

Thanks for sharing!

 

Lewis

  • 0

Hi All, 

 

I'm a colleague of Chris M.

Today I noticed it in another environment as well.

XenApp worker: OS 2012 R2, VDA 7.15 CU1

PVS: OS 2012 R2, XenDesktop 1912

DDC: 2 x 2012 R2 servers with 1912 and 2 x 2019 servers with 1912 (in migration phase)

VMware: VMware ESXi, 6.7.0, 15160138

DRS is turned off

 

 

This issue can be gone for a few weeks and suddenly come back...

Hopefully this is sorted quickly as it is very annoying for the customers.

 

Kenneth

  • Like 1