
Looking to upgrade to an iSCSI shared storage solution

Stephen D. Holder




Working with an SMB running Citrix Hypervisor on two physical servers, with everything on local storage. Looking to get them onto a budget-friendly shared storage solution.


I did a little pre-testing and connected an existing NAS via iSCSI over a 1 GbE switch; the physical servers are connected to the same switch. A single 74 GB VM took about 8 to 15 minutes to transfer, which I suspect is about right for a 1 GbE backbone over iSCSI (server-to-server copies took 20+ minutes).
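For a sanity check on those numbers (my own back-of-envelope math, not a vendor figure): 1 GbE tops out at 125 MB/s raw, and an effective iSCSI rate of around 110 MB/s is a reasonable assumption after protocol overhead, which puts a 74 GB transfer right in that 8-15 minute window:

```shell
# Back-of-envelope check: 74 GB VM over 1 GbE iSCSI.
# 1 GbE = 125 MB/s theoretical; ~110 MB/s effective is an assumed typical rate.
SIZE_MB=$(( 74 * 1024 ))                    # 74 GB expressed in MB
RATE_MB_S=110                               # assumed effective throughput
TRANSFER_S=$(( SIZE_MB / RATE_MB_S ))       # seconds for the full transfer
echo "$(( TRANSFER_S / 60 )) minutes"       # prints: 11 minutes
```

At a full 10 GbE line rate the same arithmetic lands closer to a minute or two, assuming the arrays on both ends can actually feed the wire.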


I'm looking to upgrade to a QNAP NAS with a 10 GbE fiber connection into a Ubiquiti 10 GbE switch. I also ordered a 10 GbE fiber HBA for each server to keep everything consistent at 10 GbE. The client will have anywhere from 14 to 18 VMs, and I expect total usable shared storage on RAID 10 of 4 to 6 TB. The customer is a light worker... no CAD or anything heavy like that. I plan to convert the existing NAS into the backup solution.


I did a quick search for iSCSI on the forums and came across our veteran friends Alan and Tobias on some topics around iSCSI (ref1 ref2 ref3). I'm not sure my customer is as corporate or enterprise as some of the other posters, but the discussions were still a good read.


I'm more interested in:


· Is iSCSI a good budget-friendly choice for a shared storage solution?

· Is there a noticeable benefit going from 1 GbE to 10 GbE? (I've read that if you go iSCSI for shared storage, you almost have to go 10 GbE; 2.5 GbE over Cat6 won't cut it.)

· The client currently has local storage on SSDs in RAID 10 with hot spares; each physical machine has its own array. I'm hoping to leverage the 10 GbE network and trade down to higher-density spinning HDDs to increase capacity. I understand spinning disks are innately slower, but I don't know whether SSDs are actually needed here.

· Any gripes/thoughts on the hardware mentioned above for constructing the solution?


Looking to hear what others' experiences are or have been with such a setup, as well as any advice... it's always welcome.


5 answers to this question

Recommended Posts



I'd certainly go with 10 GbE just to be future-proof, if nothing else. Performance is a complicated beast: the RAID configuration, the number and size of spindles/disks, and how the volume is carved up all factor in. SSDs can make a huge difference, in particular for reads; you can readily get sub-millisecond read times. For writes, RAID 10, 50, or 60 makes a huge difference vs. RAID 5 or 6. Fast CPUs and enough memory for dom0 also matter. iostat is a useful tool to check your queues and I/O parameters. My last solution was based on NexentaStor, but that may be overkill for your situation; you can get a trial license and give it a whirl. There are many other options out there, too many to easily say what might be better than what else.
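To expand on the iostat suggestion, here's roughly how I'd run it from the dom0 console (it's part of the sysstat package; the thresholds in the comments are rules of thumb, not hard limits):

```shell
# Extended device statistics, in MB/s, refreshed every 5 seconds.
# Watch avgqu-sz (request queue depth) and await (avg ms per I/O):
# sustained await much above ~10-20 ms on the iSCSI-backed devices
# usually means the storage path is saturated. %util near 100 tells
# you the device itself has no headroom left.
iostat -xm 5
```

Running it while you clone or migrate a VM gives you a much better picture than a one-off snapshot.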




If you're doing iSCSI, you really want storage that can do multipathing across multiple switches. If you run a single storage NIC through a single switch, then any upgrade will require downtime. 10 Gb is definitely the standard. iSCSI is excellent as a budget-friendly option; the downside is that it takes more configuration than an NFS solution would. Fortunately, a lot of devices will do both, so you can experiment with which works best for you.
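As a rough sketch of what the multipathed iSCSI side looks like in the xe CLI (the UUIDs, IPs, and IQN below are all placeholders; your array's values will differ, and newer CH releases also expose a first-class multipathing host field instead of other-config):

```shell
# Enable multipathing on each host before attaching any iSCSI SRs
# (classic other-config method; placeholder UUIDs throughout).
xe host-param-set uuid=<host-uuid> other-config:multipathing=true
xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp

# Create a shared lvmoiscsi SR against the array. With two storage NICs
# on separate subnets/switches, both paths to the target get used, so
# one switch can be serviced without taking the SR down.
xe sr-create name-label="QNAP iSCSI" type=lvmoiscsi shared=true \
    device-config:target=<target-ip-1>,<target-ip-2> \
    device-config:targetIQN=<array-iqn> \
    device-config:SCSIid=<scsi-id>
```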





NFS is simpler and easier to set up; the downside is no multipathing. iSCSI can do multipathing, but there is much more to it, both hardware and software. Performance of the two is on par. I prefer iSCSI only because I've used it for years and I'm comfortable with it. You can do thin provisioning with iSCSI as well, and what I have found on my Nimble storage is that even if I thick provision, the storage "lies" and it's still thin on the SAN. I do find it convenient that with NFS I can mount the storage and see files if I need to; with iSCSI the backend is more hidden.





Yes, there's no multipathing for NFS, but NIC bonding is possible and advisable. And yes, some storage devices will do the thin provisioning for you with iSCSI, even if it's not visible from the XenServer side as such; a number of storage devices provide that option. I've run both NFS and iSCSI over the years, as well as Fibre Channel. Current versions of CH do indeed seem to do about equally well with both NFS and iSCSI.
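For reference, the NIC bonding side is only a few xe calls (the UUIDs below are placeholders you'd read off your own pool):

```shell
# List PIFs and note the UUIDs of the two storage NICs to bond.
xe pif-list

# Create a network for the bond, then bond the two PIFs onto it.
xe network-create name-label="storage-bond"
xe bond-create network-uuid=<network-uuid> \
    pif-uuids=<pif1-uuid>,<pif2-uuid> \
    mode=active-backup    # LACP is also an option if the switch supports it
```

With the bond in place, losing one NIC or switch port doesn't drop the NFS mount, which covers much of what multipathing buys you on the iSCSI side.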


