Looking for Feedback on replacement server specs


Christopher Fluhrer

Question

We are looking to upgrade our Citrix environment to accommodate moving our users to Windows 10 with graphics acceleration and high availability. We are a not-for-profit, so I am looking to get the best bang for my buck.

 

Our current hardware consists of two HP DL380 G7s:

CPU: 1× Xeon X5670 2.93 GHz (6-core/12-thread)

Memory: 256GB PC3-10600R DDR3

MSA 2040 SAN for drive space

QLogic QLE4062C 1Gb dual-port iSCSI HBA connection to the SAN

1Gb connections to the switch, bonded (6 ports)

40 desktops on one host; 32 desktops (some are servers) on the other, all running Windows 7 or Windows Server 2016.

 

I am looking at DL380 G9s (three instead of two).

For the CPU I am debating between several scenarios:

(1 or 2) E5-2697 v3 14-core 2.6GHz 35MB 145W - $251 ($502 for dual)

E5-2698 v3 16-core 2.3GHz 40MB 135W - $831 ($1,662 for dual)

E5-2699 v3 18-core 2.3GHz 45MB 145W - $894 ($1,788 for dual)

 

I'm having a tough time deciding which spec is best for my needs. On Windows 7 we aren't even touching 50% total CPU, but I fully anticipate Windows 10 will eat more; still, probably any one- or two-CPU combination above would suffice. Would I be better off dropping down to 2× 8-core to get 16+ cores, with anything beyond that being overkill? There are so many options now, as opposed to when we bought the G7s, that it's rather overwhelming to get the best bang for your buck.

 

Second question: Is there a significant enough performance increase to even bother looking at v4 CPUs, and for memory, does going from DDR4-2133 to DDR4-2400 justify the cost variance? I would think 2133 would be fine if you go with a higher-end CPU, and 2400 if you go with a lower-end CPU.


6 answers to this question

Recommended Posts


The consultant answer is always: it depends :)

 

But here are some guidelines: 

CPU speed is more important than core count. As long as you stay within a 1:12 vCPU ratio you should be OK, but always go for the fastest CPU you can.
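As a rough illustration of that 1:12 rule of thumb, here is a back-of-the-envelope calculation. The core counts are the dual E5-2698 v3 option from the question; the 2 vCPUs per desktop is my assumption, not something stated in the thread:

```python
# Back-of-the-envelope vCPU sizing using the 1:12 vCPU:pCPU rule of thumb.
# Numbers are illustrative only, not a sizing tool.

physical_cores = 2 * 16            # dual E5-2698 v3 (16 cores each)
max_vcpus = physical_cores * 12    # 1:12 overcommit -> 384 vCPUs per host

desktops = 40 + 32                 # current load across both G7 hosts
vcpus_per_desktop = 2              # assumed Windows 10 desktop spec
vcpus_needed = desktops * vcpus_per_desktop

print(f"capacity per host: {max_vcpus} vCPUs, total demand: {vcpus_needed} vCPUs")
# -> capacity per host: 384 vCPUs, total demand: 144 vCPUs
```

By this crude measure any of the dual-CPU options has plenty of headroom, which is why the advice above favors clock speed over extra cores.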

You haven't mentioned what kind of VMs you are running (MCS, PVS, persistent, etc.). If you are running PVS, try to go for a 10Gb switch/NICs if you can.

Are you upgrading your storage? If so, go for SSD storage if you can. Most people go for hyper-converged systems these days instead of SANs, as they are faster/cheaper.

Would also recommend vGPU if you have the budget; it helps with performance!

 

There is a really good guide here - https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-15-ltsr/citrix-vdi-best-practices/design/design-userlayer5.html

 

Hope this helps!

 

On 10/7/2019 at 12:56 PM, Jose Pablo Benavides said:

Hello,

 

I would start by looking at the Hardware Compatibility List. Since the next LTSR version is going to be based on Citrix Hypervisor 8.x, I suggest you get compatible hardware.

 

For example, a DL380 Gen10 instead of a Gen9.

 

I did see one document that only listed the Gen10, but I found several others that listed both Gen9 and Gen10 380s.

 

In the end I think I have made my decisions after some careful research:

 

DL380 G9

1× E5-2687W v4 12-core 3.0 GHz 30MB cache 160W

256GB memory

NVIDIA Tesla M60

Upgrading our MSA 2040 from 1Gb iSCSI to 8Gb FC, so swapping to QLogic QLE2562 HBAs in the servers.

 

With the other misc. parts I have the price at $9,196 per server. As a not-for-profit struggling in the healthcare industry with all the new regulations, we are even more cautious about throwing big dollars at staying on the cutting edge.


For storage, a CoRaid device would cost you a lot less and give you better performance. Trying to save you a bunch of $$ since you are not-for-profit.

 

I run a development environment so have a tight budget also.

 

CoRaid H2441 enclosure (24 bays), two or three dual-port 10Gbit NICs plus two 1Gbit NICs on the array, and dual 10Gbit HBA cards (NICs) for each of the two host servers.

Then I added 14 × 4TB WD Red Pro NAS drives.

 

This setup ran me about $11K and gives me 56TB of raw storage, direct-connected at 10Gbit, which I divided into 5 × 4TB mirrors and a single 8TB RAID10 to spread my target load around.
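For anyone checking the math, the drive accounting works out. This sketch assumes simple two-drive mirrors and standard RAID10 (usable = raw / 2), which is my reading of the layout described above:

```python
# Drive/capacity accounting for the layout described above (4 TB drives).
DRIVE_TB = 4

raw_tb = 14 * DRIVE_TB                 # 56 TB raw across all 14 drives

mirror_drives = 5 * 2                  # five 2-drive mirrors
mirror_usable_tb = 5 * DRIVE_TB        # 20 TB usable (half of each pair)

raid10_drives = 4                      # RAID10 usable capacity = raw / 2
raid10_usable_tb = raid10_drives * DRIVE_TB // 2   # 8 TB usable

assert mirror_drives + raid10_drives == 14         # all 14 drives accounted for
print(raw_tb, mirror_usable_tb, raid10_usable_tb)  # -> 56 20 8
```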

 

I run about 100 CentOS Linux VMs and a handful of Windows VMs in this pool.

 

They have never been on the HCL and probably never will be, but I have been using CoRaid devices since 2009 without a single disk failure.


CoRaid SR-1521: I used it with Citrix XenServer 5.5.2 from September 2009 until October 2018, with 10 × 1TB drives connected via 2 × 1Gbit links to a switch shared by my four Xen hosts, each connected with two NICs.

 

CoRaid H2441: I am currently running Citrix XenServer 7.2 on it with 14 × 4TB drives (described above), connected using direct-connect cables from host to array, eliminating the need for a 10Gbit switch.

 

Install the HBA RPM and it auto-discovers the targets. No configuration is needed, and it doesn't use TCP/IP, so you can share one of the Gbit ports for SSH management or CEC.
