Best Practices: Single iSCSI storage (4 10G ports), single 10G switch, two Dell vhost servers (2 10G ports each)


Pete Wason

Question

Looking for guidance on how to set this up. Right now the storage ports are bonded, and (I believe) the corresponding switch ports are bonded. (Do I need bonding at all? Some other threads seem to indicate not doing this.) Should I bond each pair of server ports? How do I do this in XenCenter while setting MTU=9000 (they are at 1500 now)? Or do I need to toss xe commands around? Currently getting R/W speeds on a Windows 10 guest of ~115 MB/s. Looking for considerably more speed (which is why we got the 10G switch).

 

Yes, I realize a second switch would provide more redundancy. It's in the works.
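
On the MTU question, a minimal xe sketch (the network name-label and all UUIDs below are placeholders, not from this setup): MTU is a property of the network object, and each attached PIF has to be replugged (or the host rebooted) for the change to take effect.

    # find the storage network and its PIFs (placeholder name-label/UUIDs)
    xe network-list name-label="Storage Network" params=uuid --minimal
    xe pif-list network-uuid=<network-uuid> params=uuid,host-name-label

    # set jumbo frames on the network object; its PIFs inherit the MTU
    xe network-param-set uuid=<network-uuid> MTU=9000

    # replug each PIF on that network so the new MTU takes effect
    xe pif-unplug uuid=<pif-uuid>
    xe pif-plug uuid=<pif-uuid>

Note that the switch ports and the storage array's interfaces have to be set to 9000 as well; jumbo frames only help when they're enabled end to end.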


4 answers to this question


Thanks for the link/advice. So, would each host only have a single eth line going to the switch? Are there any advantages to having two eth lines from each host, whether bonded or unbonded? (If I had a second switch, I could see splitting the two lines from each host between the two switches.) And what about my storage? Right now I've got 4 ports bonded to one IP, showing up as a single 40GbE connection -- is that also incorrect? Would I use just two ports, unbonded, to connect to the switch (also unbonded), one for each host? Or four ports? Like this:

 

host1 port0  IP .9
host1 port1  IP .10
host2 port0  IP .11
host2 port1  IP .12
(MPIO enabled/supported on both hosts)

switch (unbonded)

storage port5  IP .42
storage port6  IP .43
storage port7  IP .44
storage port8  IP .48
(MPIO enabled/supported on storage)
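
If you do go unbonded with MPIO, a rough xe sketch for host1's two storage PIFs (the UUIDs are placeholders, and the 192.168.1.x subnet and netmask are assumptions, just matching the .9/.10 layout above):

    # give each unbonded storage PIF a static IP (placeholder UUIDs, assumed subnet)
    xe pif-reconfigure-ip uuid=<pif0-uuid> mode=static IP=192.168.1.9 netmask=255.255.255.0
    xe pif-reconfigure-ip uuid=<pif1-uuid> mode=static IP=192.168.1.10 netmask=255.255.255.0

    # mark them as dedicated storage interfaces
    xe pif-param-set uuid=<pif0-uuid> disallow-unplug=true
    xe pif-param-set uuid=<pif0-uuid> other-config:management_purpose="Storage"
    xe pif-param-set uuid=<pif1-uuid> disallow-unplug=true
    xe pif-param-set uuid=<pif1-uuid> other-config:management_purpose="Storage"

Then the same on host2 with .11/.12.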

 


Your hosts will just have a single IP address on the storage network. Your storage can have multiple IPs as targets, which will give you more iSCSI sessions. But yeah, multipathing doesn't do much good without two true physical paths to go down; you need that second switch. I have bonded before and had two switches on a single subnet, but again, that isn't what you should do. You should have two switches, different networks, separate paths to your iSCSI storage.
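
To pick up those extra sessions, the iSCSI SR can be created against a comma-separated list of target IPs. A rough sketch, with the host UUID, IQN, and SCSIid as placeholders (and reusing the assumed 192.168.1.x subnet from above); multipathing should be enabled on the hosts first:

    # create a shared lvmoiscsi SR that logs in to every listed target IP
    # (placeholder UUID/IQN/SCSIid; run from the pool master)
    xe sr-create host-uuid=<host-uuid> name-label="iSCSI SR" shared=true \
       type=lvmoiscsi content-type=user \
       device-config:target=192.168.1.42,192.168.1.43,192.168.1.44,192.168.1.48 \
       device-config:targetIQN=<target-iqn> \
       device-config:SCSIid=<scsi-id>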

 

--Alan--

 


Without a second switch, extra ethernet connections won't help any. All traffic is going to be limited to a single path anyway, so you don't gain any bandwidth, at least not for any individual VM. Yes, 10Gb switches are expensive, but they're necessary for true redundancy.

 

For iSCSI connections, multipathing, not bonding, is the preferred connection mechanism.
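
On XS 7.x, the CLI equivalent of XenCenter's multipathing checkbox is roughly the following (host UUID is a placeholder; the host should be in maintenance mode, or have its storage PBDs unplugged, first):

    # enable dm-multipath handling on the host (XS 7.x-era syntax, placeholder UUID)
    xe host-param-set uuid=<host-uuid> other-config:multipathing=true
    xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp

    # then verify the active paths from dom0
    multipath -ll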

 

Remember also not to load too many VMs' storage onto a single SR: queue processing suffers once you get upwards of 30 logical disks or thereabouts, at least that's how it was under XS 7.x.
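
A quick way to sanity-check how many logical disks a given SR is carrying (SR UUID is a placeholder):

    # count the VDIs on one SR from dom0 (placeholder UUID)
    xe vdi-list sr-uuid=<sr-uuid> params=uuid --minimal | tr ',' '\n' | wc -l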

 

-=Tobias

