
Windows 10 VDI Desktops on ESX - Sockets and Cores


Ken Z

Question

5 answers to this question

Recommended Posts


It is better to go single socket until you exceed the number of cores on a die, and only then go to multiple sockets. This keeps memory allocation on a single die and avoids using the interconnect to access memory on another controller. VMware had a doc on this, though it is older: https://blogs.vmware.com/performance/2017/03/virtual-machine-vcpu-and-vnuma-rightsizing-rules-of-thumb.html

 

The benefits, though, are really tiny. Unless your environment is running at 80% or higher utilization, there may not be much difference.
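The sizing rule above can be sketched in code. This is a minimal illustration of the "single socket until you exceed the cores on a die" rule of thumb, not anything from the VMware doc itself; the function name and shape are my own:

```python
def vcpu_topology(vcpus, cores_per_die):
    """Suggest (virtual sockets, cores per socket) for a VM,
    given the physical cores per die on the host.

    Rule of thumb: stay on one virtual socket until the vCPU
    count exceeds the cores on a die, then split evenly across
    the fewest sockets that fit.
    """
    if vcpus <= cores_per_die:
        return (1, vcpus)
    sockets = -(-vcpus // cores_per_die)  # ceiling division
    if vcpus % sockets != 0:
        raise ValueError("vCPU count does not divide evenly across sockets")
    return (sockets, vcpus // sockets)

# A 4-vCPU VM on a host with 8 cores per die fits on one socket:
print(vcpu_topology(4, 8))   # (1, 4)
# A 12-vCPU VM must span two sockets, 6 cores each:
print(vcpu_topology(12, 8))  # (2, 6)
```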

 


Carl

 

Thanks for responding.

 

What I meant is: which is better in terms of performance and density? Can ESX provide a higher VM density using one config over the other? Can ESX switch between VMs faster using one method over the other?

 

I'm not an ESX expert, so this might be the wrong question to ask.

 

Regards

 

Ken


Guys, 

 

so if you specify a VM with (for example) 4 cores, and you're running ESX on a server with dual sockets and 8 cores (16 threads) per socket, will ESX attempt to allocate all cores from the same socket when it schedules that VM?

 

Regards

 

Ken

 
