
So confused with vCPU allocation


Jeremy Brook

Question

There are a lot of misleading articles dotted around the web and these forums regarding vCPU allocation, especially when you take into account modern CPUs with support for Hyper-Threading.

 

I have a XenServer host running a slightly older but capable CPU: dual socket, 6 physical cores per socket. With Hyper-Threading this means XenServer sees a total of 24 logical CPUs (2 sockets x 6 cores x 2 threads).
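Just to spell out the arithmetic I'm working from (a throwaway Python sketch, nothing XenServer-specific):

```python
# My host: 2 sockets, 6 physical cores per socket, Hyper-Threading enabled.
sockets = 2
cores_per_socket = 6
threads_per_core = 2          # 2 with Hyper-Threading, 1 without

physical_cores = sockets * cores_per_socket
logical_cpus = physical_cores * threads_per_core

print(f"physical cores: {physical_cores}")   # 12
print(f"logical CPUs:   {logical_cpus}")     # 24 - what the hypervisor sees
```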

 

I have read some articles that state Dom0 should be set to 50% of your vCPUs, so in my case that would be 12, but there are just as many contradicting articles that say to set Dom0 to 50% of your PHYSICAL cores, so in my case that would be 6. Who is correct?

 

Then we move on to the actual guests and their topology. If I don't want to over-provision, and I did set Dom0 to 50%, that would leave either 12 or 18 vCPUs, but how do you tie this into physical requirements versus what XenServer presents?

Let's say the Windows Server minimum spec is a 4-core CPU. On physical hardware that would mean 4 physical cores, never 2 cores + HT. How would you set this up in Xen? You would pick 4 vCPUs, but then you can also change the topology (1x4, 2x2 or 4x1). Why would that really matter? Because these are vCPUs, they aren't going to have the same sort of benefits as a physical server with dual sockets (better memory management, cache and PCI lanes), so in terms of Xen, which topology is correct?
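To make the topology question concrete, here are the layouts I'm choosing between for a 4 vCPU guest (again just illustrative Python, not any particular tooling):

```python
# Illustrative only: enumerate socket x cores-per-socket layouts for 4 vCPUs.
vcpus = 4

layouts = [(sockets, vcpus // sockets)
           for sockets in range(1, vcpus + 1)
           if vcpus % sockets == 0]

for sockets, cores in layouts:
    print(f"{sockets} socket(s) x {cores} core(s)")  # 1x4, 2x2, 4x1
```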


7 answers to this question


You shouldn't ever have to modify the dom0 settings, unless you are planning to run other software in that dom0 space.

 

The cores-per-socket setting is for vNUMA alignment. Presenting a single socket will keep all the cores and memory on the same memory controller, keeping data off the interconnect. It can also reduce some software costs that are charged per socket instead of per core.

 

But if software is specifically optimized for multiple sockets, then you would choose more sockets with fewer cores each.
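If you do want to force the single-socket presentation, this is roughly how it could be done with the Python XenAPI bindings. This is an untested sketch; the host, credentials and VM name are placeholders, and it uses the same cores-per-socket platform key the xe CLI exposes:

```python
# Rough sketch (untested): present all of a guest's vCPUs as a single socket
# by setting the cores-per-socket platform key equal to the vCPU count.
# Assumes the standard XenAPI Python bindings and a reachable pool master.
import XenAPI

session = XenAPI.Session("https://xenserver-host")         # placeholder host
session.xenapi.login_with_password("root", "password")      # placeholder creds
try:
    vm = session.xenapi.VM.get_by_name_label("my-vm")[0]    # placeholder VM name
    vcpus = int(session.xenapi.VM.get_VCPUs_max(vm))

    # One socket: cores-per-socket == total vCPUs (e.g. 4 vCPUs -> 1 x 4).
    platform = session.xenapi.VM.get_platform(vm)
    if "cores-per-socket" in platform:
        session.xenapi.VM.remove_from_platform(vm, "cores-per-socket")
    session.xenapi.VM.add_to_platform(vm, "cores-per-socket", str(vcpus))
finally:
    session.xenapi.session.logout()
```

The CLI equivalent is xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=<n>; either way the new topology is only picked up the next time the VM starts.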


The other thing to bear in mind with so few cores per socket (6 in your case) is that the order in which you start VMs will also eventually affect vNUMA, as the VMs can spread out only so much before there is overlap with another socket. Hence restarting VMs or servers, doing VM migrations, etc. can change the optimization. You can pin vCPUs, but that's a whole other topic.
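For completeness, pinning is normally done through the VCPUs-params mask on the VM (the same thing xe vm-param-set VCPUs-params:mask=... sets). A rough, untested sketch with the Python XenAPI bindings, with host, credentials and VM name as placeholders:

```python
# Rough sketch (untested): restrict a guest's vCPUs to specific host pCPUs
# via the VCPUs-params "mask" key. Check the host's actual CPU topology first,
# since logical CPU numbering (and Hyper-Threading siblings) varies by host.
import XenAPI

session = XenAPI.Session("https://xenserver-host")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("my-vm")[0]

    # Pin to logical CPUs 0-5; takes effect the next time the VM starts.
    params = session.xenapi.VM.get_VCPUs_params(vm)
    if "mask" in params:
        session.xenapi.VM.remove_from_VCPUs_params(vm, "mask")
    session.xenapi.VM.add_to_VCPUs_params(vm, "mask", "0,1,2,3,4,5")
finally:
    session.xenapi.session.logout()
```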

On 9/27/2023 at 9:26 AM, Mark Syms said:

That article should not be used (we're having it removed); it's actively dangerous on all modern hardware and doesn't do what it states on in-support versions of XenServer either.

 

Mark, 

 

So are Citrix going to release a new article that provides best-practice advice?

 

Regards

 

Ken Z

12 hours ago, Ken Zygmunt said:

So are Citrix going to release a new article that provides best-practice advice?

There are no plans to do so, no, mainly because the settings are almost always specific to a given deployment. Therefore giving any sort of guidance beyond letting the system manage things based on its own internal rules is next to impossible.


There are a number of good articles, blogs, etc. about optimizing vCPU configurations and NUMA/vNUMA concerns. As Mark says, each system is different. Sometimes you have to run your own tests to determine empirically what works best. Check out some of Frank Denneman's articles, for example: https://frankdenneman.nl/

I have a couple I published on the https://mycugc.org/ site, as well.

 

-=Tobias

