
XenServer 8.1, multiple 10 GbE iSCSI NICs and Nimble Storage


Ken Z

Question

Hi everyone

 

It's been a while since I've built a XenServer connected to an iSCSI shared storage device, and I wanted to know whether any best practices have changed since then.

 

The shared storage device is a Nimble HF20, which has 8 x 10 GbE iSCSI interfaces in total with a single discovery interface. (I know the Nimble comes with 2 x 10GBASE-T, but I'm getting RJ-45 to SFP+ transceivers for my fibre switches, plus I've added additional 2 x 10 GbE SFP+ FIO cards per controller.)

I've got four XenServer 8.1/HP ProLiant Gen10 servers, each with 4 x 10 GbE network interfaces (16 x 10 GbE in total): two will be dedicated to the iSCSI network and two to a dedicated Citrix VLAN for accessing XenApp/XenDesktop hosts.

 

My question is: do you recommend bonding the two iSCSI interfaces on each XenServer/HP host into a single interface, setting a management IP in the iSCSI VLAN and connecting to the Nimble that way, or leaving the two iSCSI interfaces un-bonded, giving each its own IP address in the iSCSI VLAN, enabling multipathing, and letting the system balance all traffic across the links? Bear in mind that the 10 GbE switches I have don't support LACP, so any bond would have to be switch-independent bonding.
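To make sure I'm describing the two options clearly, here's roughly what I mean at the xe CLI level (the UUIDs, NIC names and addresses below are just placeholders, not a tested recipe):

    # Option A: switch-independent (SLB) bond of the two iSCSI NICs, single IP per host
    xe network-create name-label="iSCSI bond network"
    xe bond-create network-uuid=<iscsi-net-uuid> pif-uuids=<eth2-pif-uuid>,<eth3-pif-uuid> mode=balance-slb
    xe pif-reconfigure-ip uuid=<bond-pif-uuid> mode=static IP=10.10.10.11 netmask=255.255.255.0

    # Option B: leave the NICs un-bonded, give each its own IP, then enable multipathing
    xe pif-reconfigure-ip uuid=<eth2-pif-uuid> mode=static IP=10.10.10.11 netmask=255.255.255.0
    xe pif-reconfigure-ip uuid=<eth3-pif-uuid> mode=static IP=10.10.10.12 netmask=255.255.255.0
    # multipathing is then enabled per host (host in maintenance mode first),
    # e.g. via the host properties in XenCenter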

 

Alternatively, can anyone point me in the direction of a good document with this type of configuration?

 

Regards

 

Ken Z


17 answers to this question

Recommended Posts


No worries, Alan; this is a fine point that is not very well documented, and at the same time it is very important when designing for and evaluating the minimum bandwidth requirements of VMs. This is why we went with 10 Gb NICs and VLANs some time ago: there is plenty of capacity as needed without being constrained by 1 Gb NICs, and a lot of the networking can be loaded onto just a pair of NICs. The 10 Gb NICs have never come close to saturation, and with VLANs there has been a lot of flexibility in adding or removing subnets.
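Just to illustrate what that flexibility looks like in practice (the network name and VLAN tag below are made-up examples, not from any particular deployment), adding another tagged subnet on top of an existing bond is only a couple of xe commands:

    # create a new XAPI network and attach it as a tagged VLAN on the bond's PIF
    xe network-create name-label="Citrix VLAN 120"
    xe vlan-create network-uuid=<new-network-uuid> pif-uuid=<bond-master-pif-uuid> vlan=120

Removing one again later is just as simple, which is why a pair of 10 Gb NICs plus VLANs has been easy to live with.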

 

As an added point, it's nice that the advanced XenMotion options let you specify which network the transfer goes over, as defaulting to the primary management interface is sometimes not the best choice.

 

Cheers,

-=Tobias


Okay, I had a lightly loaded server and checked it out. And I'm a big enough man to admit I'm wrong and Tobias is correct. A single VM moving data over multipathing, in my case at least, relied on a single NIC. I could have something set up incorrectly, I suppose, but it looks like multipathing isn't quite as smart as I thought it was.

 

--Alan--

 

(Attached screenshot: Capture.JPG)

8 minutes ago, Alan Lantz said:

Just off the top of my head, without ever having used Nimble, I always prefer multipathing, as that puts XenServer in charge of load balancing and of using both networks for storage traffic.

 

--Alan--

 

Again, thanks Alan

 

:-)

 

5 hours ago, Tobias Kreidl said:

I've always used multipathing with iSCSI connections: https://support.citrix.com/article/CTX118791

With some devices, like EqualLogic, you can go either way: https://discussions.citrix.com/topic/392767-xenserver-equallogic-multipath-vs-active-active-bond/

Follow the storage device manufacturer's recommendations.

 

-=Tobias

Thanks, everyone, for your thoughts.

 

I should have a bit of time at the beginning of the project, so what I'll do is try configuring the system both ways and run some performance tests to see which configuration comes out better. If anyone's interested, I'll post my results here. It'll be a few weeks before things start; I haven't ordered the kit yet...
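For the testing itself I'll probably just run something like fio from a Linux test VM against a dedicated virtual disk; roughly this kind of run, as a sketch (the device name, block sizes and run times are placeholders I'll tune later):

    # random 4k reads, queue depth 32, four workers, 60-second timed run
    fio --name=iscsi-randread --filename=/dev/xvdb --direct=1 --ioengine=libaio \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

    # large sequential reads to see how close one VM gets to a single 10 GbE link
    fio --name=iscsi-seqread --filename=/dev/xvdb --direct=1 --ioengine=libaio \
        --rw=read --bs=1M --iodepth=16 --numjobs=2 --runtime=60 --time_based --group_reporting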

 

Regards

 

Ken Z


My guess is that it probably won't make a difference in performance. A single VM instance can only use one connection at a time anyway, so it's not as though a bond will give any single VM twice the bandwidth of a single NIC. This will be the case regardless of whether you use a bond or multipathing.

 

-=Tobias


Yes, you should see both NICs active; I'm just saying an individual VM will only use one path. Multiple VMs, though, will distribute their paths among the available NICs. The upshot is that the overall, aggregate bandwidth is larger, but any individual VM will still be limited by the bandwidth of a single NIC path.

 

-=Tobias


My opinion is that multipathing is still superior, since it is in charge of the load balancing and I'm pretty sure it can provide a much lower failover time. The downside to multipathing is that the configuration is not automatic; it usually requires modifying the multipath.conf file, and the settings are specific to your storage vendor.
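For anyone who hasn't touched it before, the vendor-specific part is a device stanza along these lines; the values here are generic placeholders rather than Nimble's (or any vendor's) published settings, so take the real strings and tuning values from the array vendor's XenServer/Linux deployment guide:

    device {
        vendor                "VendorName"
        product               "ProductName"
        path_grouping_policy  group_by_prio
        prio                  alua
        path_checker          tur
        path_selector         "round-robin 0"
        failback              immediate
        no_path_retry         30
    }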

 

--Alan--

 

2 minutes ago, Alan Lantz said:

My opinion is that multipathing is still superior, since it is in charge of the load balancing and I'm pretty sure it can provide a much lower failover time. The downside to multipathing is that the configuration is not automatic; it usually requires modifying the multipath.conf file, and the settings are specific to your storage vendor.

 

--Alan--

 

Alan

 

I assume you meant to say "bonding is still superior..." ?

 

Regards

 

Ken Z


Bonds are funny things. In some cases, the path taken by a connecting VM will actually be multiplexed all by itself, unless a preferred path is defined and used. I wrote a comprehensive article on preferred paths with a Dell MD3600i, which sadly has disappeared since the xenserver.org blog entries were pretty much all removed a year or so ago.
