Michael Newiger Posted January 16, 2023

We intend to run MCS-built XenApp servers on Citrix Hypervisor, so no critical data will be hosted and VMs can be rebuilt quickly in case of hardware errors. For this scenario, what is an up-to-date recommendation for the local storage layout?

A single NVMe SSD offers the best performance at the lowest price, but in case of a hardware failure both the host and the VMs need to be rebuilt. Splitting the OS and the VMs across two SSDs does not provide any real advantage, does it?

The former best practice was SAS-based RAID, and this is what the vendor still proposes: XenServer on 2 SATA SSDs in RAID 1, VMs on 4x SAS SSDs in RAID 5. But is this still the best solution with current enterprise SSDs? Isn't the bottleneck a RAID card talking SAS?

One of my customers reported a single SSD failure across 500 hosts in 3 years. That is quite a good AFR and would not justify the money spent on multiple SSDs running in RAID. We appreciate any experience or recommendations on running XenServer on SSD-based local storage.
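The customer's failure figure can be turned into a rough annualized failure rate for comparison against vendor specs. A quick back-of-envelope sketch (the 1 failure / 500 hosts / 3 years numbers come from the post above; one SSD per host is an assumption):

```shell
# Rough annualized failure rate (AFR) from the observed numbers:
# 1 drive failure across 500 hosts over 3 years (assuming one SSD per host).
failures=1
hosts=500
years=3
afr=$(awk -v f="$failures" -v h="$hosts" -v y="$years" \
    'BEGIN { printf "%.3f", f / (h * y) * 100 }')
echo "Observed AFR: ${afr}%"
```

This works out to roughly 0.067% per drive-year, comfortably below the ~0.44% AFR implied by a typical 2-million-hour MTBF spec (8760 / 2,000,000 ≈ 0.44%).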
Sjoerd Van den Nieuwenhof Posted January 17, 2023

Hi, we advised companies to run the hypervisor OS on two disks (in RAID) and the XenApp VMs on an NVMe drive. But after a while, we also concluded that the SSD holding the OS never failed. So I agree with you that running the OS in RAID isn't necessary. You can also buy a spare SSD for the OS (so you can replace it quickly when an SSD fails) and simply reinstall the OS when the disk in a server fails.
Tobias Kreidl Posted January 18, 2023

I beg to differ. I have seen several SSDs fail and have always run the hypervisor installation on a RAID 1 configuration. For a pair of small SSDs, the added cost is pretty small compared to suffering a host failure. -=Tobias
Sjoerd Van den Nieuwenhof Posted January 18, 2023

Hi Tobias, I understand from Michael's question that money could be an issue. Redundancy is what you always want, but sometimes you need to watch your budget and can save some money by taking a risk. As mentioned, we also didn't see any failing SSDs in the last 3-4 years, so I can understand that this is a risk he could take, and reinstalling a hypervisor doesn't take that long.
Michael Newiger Posted January 19, 2023 (Author)

Thanks for your advice. Money is not the most important criterion. SAS RAID has been used over the last years, and it still offers advantages in terms of scalability, reliability and support. But in our scenario there will be no need for scalability. Looking at reliability, I have good (but limited) experience with enterprise SSDs, which is why I considered deploying a single NVMe disk, since it offers better performance at an even lower price. Tobias reports differently on reliability. Today we decided to stick with SAS, but plan to deploy a single volume in a RAID 5 configuration. Is there any advantage to deploying the hypervisor installation on RAID 1 and the guests on RAID 5?
Tobias Kreidl Posted January 20, 2023

IMO, RAID 1 is perfectly adequate for holding and running dom0; you don't want to overload the processor with a lot of extra storage I/O for your host OS. RAID 5 is not very efficient, and writes in particular suffer. It partly depends, of course, on how many disks there are, their size, etc. I got great read times with a 5-disk RAID 5 SSD array, but writes were better on a 20-disk RAID 20 instance. And yes, I have seen more than one SSD fail, one within less than a year. -=Tobias
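The write-efficiency point can be made concrete with the classic RAID write-penalty rule of thumb: each random write costs roughly 4 back-end I/Os on RAID 5 (read data, read parity, write data, write parity) versus 2 on RAID 1/10. The disk count and per-disk IOPS below are illustrative assumptions, not measurements from this thread:

```shell
# Rule-of-thumb effective random-write IOPS: raw_iops * disks / write_penalty.
# Illustrative numbers: 4 disks at 90,000 write IOPS each (assumed, not measured).
disks=4
iops_per_disk=90000
raid5=$(awk -v d="$disks" -v i="$iops_per_disk" 'BEGIN { print d * i / 4 }')   # penalty 4
raid10=$(awk -v d="$disks" -v i="$iops_per_disk" 'BEGIN { print d * i / 2 }')  # penalty 2
echo "RAID 5  effective write IOPS: $raid5"
echo "RAID 10 effective write IOPS: $raid10"
```

Under these assumptions, the same four disks deliver about twice the random-write throughput in RAID 10 as in RAID 5, which matches the "writes suffer" observation above.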
Erno Alanen Posted May 2, 2023

Hi, we have pooled random servers using IntelliCache on the hypervisor hosts. For years we have deployed the hypervisor itself to the same RAID array as the IntelliCache storage. Plain and simple, no problems or downtime so far. Yes, we have had a few disk failures over the past years, but that's nothing a RAID controller cannot handle. The latest servers we bought had 4x 960 GB NVMe SSDs in a RAID 10 configuration, with the hypervisor installed on that array and local storage configured as EXT3 for IntelliCache. Works fine.
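For reference, an IntelliCache setup like the one described above is configured with a local EXT SR plus host-local caching via the xe CLI. A rough sketch (the device path `/dev/nvme0n1p3` is a placeholder for your own partition, and the commands assume a standalone host; enabling caching requires the host to be temporarily disabled):

```shell
# Sketch: create a local EXT SR and enable IntelliCache on a host.
# /dev/nvme0n1p3 is a placeholder device -- substitute your own.
HOST_UUID=$(xe host-list --minimal)
SR_UUID=$(xe sr-create host-uuid="$HOST_UUID" name-label="Local EXT SR" \
    type=ext content-type=user device-config:device=/dev/nvme0n1p3)

# The host must be disabled while local storage caching is switched on.
xe host-disable uuid="$HOST_UUID"
xe host-enable-local-storage-caching sr-uuid="$SR_UUID"
xe host-enable uuid="$HOST_UUID"
```

Note that IntelliCache also has to be selected at VM level (shared VDIs with caching allowed); this fragment only covers the host-side SR and cache enablement.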