
XenApp servers slow or stuck on bootup


Bit-101

Question


The machines get stuck at the following phases, as shown in the screenshots below.

We've never had this problem before, and no major changes were made around the time the problems started.

 

If you look at the vDisks, the machines that have problems show over 1,000 retries.
See screenshot: vDisk retries.

 

I am also attaching measurements of the MTU (maximum transmission unit). You can see from my ping -f -l test that the maximum MTU for PVS is 1472, but we have an existing MTU configuration that is set at 1506. It's the only thing we can imagine causing these problems, but we have always had that configuration (1506) on the PVS servers and it has worked historically.

 

The switches etc. are set to 1500 MTU. We have not yet tried changing the MTU to 1472 on the PVS servers. See screenshot: MTU_PVS.
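For reference, this is roughly the test behind the 1472 figure (the PVS address is just a placeholder):

ping -f -l 1472 <PVS_server_IP>

With -f (don't fragment) set, 1472 bytes of ICMP payload plus 8 bytes of ICMP header and 20 bytes of IP header add up to a 1500-byte packet on the wire, which lines up with the switches' 1500 MTU. Anything configured above that, such as 1506 on the PVS interfaces, is larger than the path can carry unfragmented.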

 

I am compiling data on the troubleshooting we have done; we are just waiting for the network troubleshooting, plus one more measurement that will capture the traffic on the virtual interfaces of a machine that has the boot problem.

(So far my network guy has not detected any network problems.)

 

XENSERVER 7.1 (hypervisor)

PROVISIONING SERVERS, PVS 7.1
Hardware
OS: Windows Server 2012 R2
2 servers

DDC
2 Desktop Delivery Controllers (virtual)
OS: Windows Server 2012 R2
Citrix XenDesktop 7.6
2 servers

TARGET MACHINES
vDisks
70 XenApp servers
OS: Windows Server 2008 R2 and 2012 R2 (mostly)
PVS Target Device 7.1
VDA 7.6
Citrix Receiver 4.12

 

-Anyone?

-I would really appreciate your answers.

 


8 answers to this question

Recommended Posts


Hey mate. Been there, done that: exactly your scenario. Our Windows Server 2012 R2 OS VDAs suffered from the same MTU threshold issue, whereas our Windows Server 2019 OS VDAs did not. Later we ran into even lower thresholds for Russian users because of some additional lower-MTU hops in a part of the infrastructure chain that we have no control over. Eventually I solved the problem by changing the default MTU setting from 1500 to 1400 in the Citrix PVS build that is assigned to all these machines. Since then, not a single problem regardless of any unexpected infrastructure or software change.
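In case it helps while you compare, something like the following (run inside the target/VDA OS) lists the current MTU per interface, so you can confirm what the machines are actually using; the interface names and indexes will of course differ in your environment:

netsh interface ipv4 show subinterfaces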

 

Hope this helps

2 hours ago, Peter Fällman said:

-Do you actually mean mtu=9000?

 

 

No. The value in that example was taken from my Windows Server 2019 VDA, where we want to use 9000 because we want high-speed 10Gb networking in the backbone area all the way into the Citrix session.

 

In your case, however, you will want the opposite and set 1400 as the value in order to address your MTU problems. The example line I gave was just to show the command structure so you can look it up and use it.
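So for your environment the equivalent command would look roughly like this, where the interface index is whatever netsh interface ipv4 show interfaces reports for your LAN adapter (the "12" here is only an example index, not a value from your system):

netsh interface ipv4 set subinterface "12" mtu=1400 store=persistent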

23 minutes ago, Peter Fällman said:

Thanks for your reply.
I have tried 1472 but the problem remains the same.

 

 

Peter,

 

Have you read my suggestion? It looks like you did not understand it. Try making a build update of your VDA golden image in which you set the MTU to exactly 1400 (not 1472!) and try connecting to that. That should solve it regardless of the underlying infrastructure. Here's what it should look like inside the VDA test server that runs the new golden image:

 

[Screenshot: netsh interface ipv4 show interfaces output]

 

Use the following command to set the MTU of your choice (1400) inside the build, in writable edit (private) PVS mode:

 

netsh interface ipv4 set subinterface "89" mtu=9000 store=persistent

 

where "89" is the Idx number of your LAN adapter; you can check it at any time using netsh interface ipv4 show interfaces, just as in the screenshot.

Important note: the new MTU value only takes effect after the next reboot, so reboot your golden image again in writable edit (private) PVS mode before you do the final shutdown. Otherwise your change will not take effect.
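Put together, the sequence inside the write-enabled (private mode) image would look roughly like this; the index 89 is just the example from the screenshot, so substitute your own:

netsh interface ipv4 show interfaces
netsh interface ipv4 set subinterface "89" mtu=1400 store=persistent
shutdown /r /t 0

After that reboot, netsh interface ipv4 show subinterfaces should report 1400 for the adapter before you do the final shutdown.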

 

The bottom line is that if any of the MTUs in the long chain of infrastructure devices is too low (for whatever reason), as in lower than your OS's MTU minus overhead, your VDA OS will start to fail. Therefore setting it as low as, for instance, 1400 inside the VDA OS will circumvent and solve all such problems immediately.

 

 

 

 


Thanks. I have not tested your proposal yet.

I have changed the MTU on the PVS servers.

The reason I am skeptical is this example: two VMs are rebooted on the same host but behave completely differently. One VM struggled with the boot problem and the other did not. The same scenario repeats for all the other VMs (servers).

So my thinking is: if MTU were the underlying problem, this scenario would not repeat itself, with one VM having no problem and another VM having the boot problem, both on the same XenServer host.

But I hope I'm wrong and that configuring the MTU can solve this strange problem.

 

 

-I have taken your answer seriously and will test it.

 

 

 

On 9/1/2021 at 4:55 PM, Andy Vanderbeken said:

 

Try making a build update of your VDA golden image in which you set the MTU to exactly 1400 (not 1472!). Use the following command inside the build, in writable edit (private) PVS mode:

netsh interface ipv4 set subinterface "89" mtu=9000 store=persistent

-Do you actually mean mtu=9000?

 
