How do I stress test Citrix Hypervisor 8.2?


Jerry Perkinson

Question

I am new to the hypervisor and I want to stress test a Citrix Hypervisor installation, not an individual VM. I have a tool I normally use for testing that will stress the CPU, memory, drives, etc., but it runs as a single instance, and setting it up on multiple VMs and trying to pull the data together would be a nightmare. How would I configure a single VM to best stress the server?

 

For example: if I used a server that has 2 Xeon processors with 12 cores per processor and 96 GB of memory, would I configure the VM to use something like 80-90% of the physical resources, i.e. 20 vCPUs (2 sockets with 10 cores per socket) and 80 GB of memory, or is there a better way?
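For reference, this is roughly how I would expect to set that up with the xe CLI on the host (the VM name and UUID are just placeholders, and I believe the VM has to be shut down first); corrections welcome:

    # look up the VM's UUID ("stress-vm" is only an example name)
    xe vm-list name-label=stress-vm params=uuid --minimal

    # 20 vCPUs, presented to the guest as 2 sockets x 10 cores
    xe vm-param-set uuid=<vm-uuid> VCPUs-max=20
    xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=20
    xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=10

    # 80 GiB of memory (values are in bytes: 80 * 1024^3 = 85899345920)
    xe vm-memory-limits-set uuid=<vm-uuid> \
        static-min=85899345920 dynamic-min=85899345920 \
        dynamic-max=85899345920 static-max=85899345920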


3 answers to this question


I do see that I will have to use multiple VMs simply to use up the vCPUs, as the actual server I am looking at has 96 threads (which Citrix seems to count rather than cores). Even dropping down to the initial example means 48 threads, so 2 VMs. Beyond that requirement, I don't understand why just creating more VMs would be a better approach if each VM is running a test program pushing its utilization towards 100%.

 

For example, if I build 50 VMs and assign each 5% of the cores and memory, or build 5 VMs and give each 50%, that goes beyond the physical capabilities of the server and definitely goes against what the documentation says to do. If I build 50 and give each 0.5%, or 5 with 10% each, it would only engage a fraction of the physical server. I am assuming that close to 100% assignment would be best, to mimic the physical hardware as closely as possible. I just don't know whether some capacity should be set aside for overhead, or whether there is something different about how the hypervisor handles the virtual resources that would make this kind of testing invalid.

 

Most of the documentation is aimed at normal use cases (as it should be), so the usual rules of thumb of multiplying threads/cores by 5 or 10 don't apply. My goal is to test the server/hypervisor combination at extremes over time. I am not testing usage of the VMs; the intent is just that they run at max. I was hoping that someone else had done similar testing and might know how much physical resource should remain unassigned for overhead, etc.
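In case it clarifies what I mean, this is the kind of thing I had in mind for spinning up several identical stress VMs from a halted template that already has the stress tool installed (names, counts, and sizes here are only examples, sized against the 96-thread / 96 GB host):

    # clone a prepared, halted template VM N times and start each copy
    N=4
    for i in $(seq 1 $N); do
        uuid=$(xe vm-clone uuid=<template-vm-uuid> new-name-label=stress-$i)
        # 4 x 24 vCPUs = 96, matching the host's thread count
        xe vm-param-set uuid=$uuid VCPUs-max=24 VCPUs-at-startup=24
        # 20 GiB each (in bytes), i.e. 4 x 20 GiB = 80 GiB of the 96 GB host
        xe vm-memory-limits-set uuid=$uuid \
            static-min=21474836480 dynamic-min=21474836480 \
            dynamic-max=21474836480 static-max=21474836480
        xe vm-start uuid=$uuid
    done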

 

 


Alas, the answer is nearly always "it depends" in situations like this. For highly variable loads from VMs, over-commitment is normal; for XenDesktop, for example, we had a CPU over-commitment factor of at least four or five. For very heavy loads that are constant, you are of course going to have to make sure your CPU load doesn't max out.

Also, take NUMA into consideration when assigning cores and deciding how VMs are distributed, as that can make a difference. Having enough dom0 resources (memory, in particular) is also important, as it affects storage I/O as well. Speaking of which, storage and networking can also be bottlenecks on heavily loaded hosts. If you can experiment with different setups, that is probably your best bet, though it will take a lot of time and effort.
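As a rough starting point (a sketch only; output and details may vary a bit by version), you can inspect the NUMA layout, current vCPU placement, and dom0's memory from the control domain with:

    # NUMA topology and per-node memory
    xl info -n

    # how each VM's vCPUs are currently placed on physical CPUs
    xl vcpu-list

    # dom0 shows up as the control-domain VM; check its memory allocation
    xe vm-list is-control-domain=true params=name-label,memory-actual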

This article and the previous two in the series might also be helpful: https://blogs.mycugc.org/2019/04/30/a-tale-of-two-servers-part-3-the-influence-of-numa-cpus-and-sockets-cores-persocket-plus-other-vm-settings-on-apps-and-gpu-performance/

 

-=Tobias

