
VLANs, 10g DAC, 1g copper


Reyn Moran

Question

I'm revisiting our Xen network design. I was in charge of setting everything up initially after our IT administrator left, so I'm sure some of what I did wasn't best practice. I think I ran into one of those things today.

 

Some background on the installation: I have one 10G DAC connection to each host (2 hosts total) and 6 available 1G copper ports (4 on the HP network card, 2 onboard). I have 4 networks connected to each host (Xen Mgmt, Data, SAN, Security), each on its own VLAN. As is, all the connections are untagged, with the Data network using the 10G DAC connection.

 

So today's question is: why did I not just tag all the VLANs on my 10G DAC connection and use the coppers as passive backups (also tagged)? If I remember correctly, the management VLAN has to be separate or untagged? I forget... I may be missing a key part of this discussion. Please advise.


17 answers to this question


We run almost all connections over a pair of LACP-bonded 10 Gb NICs (excepting iSCSI, which should be on physically different NICs from anything else) and, with VLANs, do it all: management, VM traffic, and NFS storage. We have some servers hosting as many as 100 VMs, and it all works flawlessly without ever coming close to saturating the network.
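For reference, here's roughly what that looks like with the xe CLI; the UUIDs, names, and VLAN numbers below are placeholders, so treat this as a sketch rather than a recipe for your pool:

# Find the PIF UUIDs of the two 10Gb ports on the host
xe pif-list host-name-label=<host> params=uuid,device

# Create a network for the bond, then the LACP bond itself
xe network-create name-label="bond0"
xe bond-create network-uuid=<bond-net-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid> mode=lacp

# Layer each VLAN (management, VM traffic, NFS) on top of the bonded PIF
xe network-create name-label="VLAN50"
xe vlan-create network-uuid=<vlan-net-uuid> pif-uuid=<bond-pif-uuid> vlan=50

The switch side needs a matching LACP port channel with those VLANs tagged on it, of course.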

 

-=Tobias

1 hour ago, Alan Lantz said:

Prior to 7.3 they need to be untagged, so access ports that are members of VLAN 50 are fine, but they frowned on people tagging management interfaces. I prefer to keep them access ports as well; it's simpler and cleaner, especially if you need to reinstall, as I am doing this weekend.

 

--Alan--

 

Correct; it was in 7.3 that the option to use a VLAN -- and even a tagged VLAN -- for the primary management interface was first supported.
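If anyone wants to try that on 7.3 or later, the switch-over is done with host-management-reconfigure. A rough sketch from memory, with placeholder UUIDs and addresses (and run it from the host console, not over the management connection you're about to move):

# Create the VLAN PIF for management (VLAN 50 is just an example)
xe network-create name-label="mgmt-vlan50"
xe vlan-create network-uuid=<net-uuid> pif-uuid=<physical-pif-uuid> vlan=50

# Give the new VLAN PIF an IP, then move management onto it
xe pif-reconfigure-ip uuid=<vlan-pif-uuid> mode=static IP=10.0.50.11 netmask=255.255.255.0 gateway=10.0.50.1
xe host-management-reconfigure pif-uuid=<vlan-pif-uuid>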

 

I've been reading over various release notes recently, and another recently introduced feature is automated periodic snapshots of VMs, which is pretty useful for VMs that change a lot or that might react badly to an automated patch and need to be reverted to an earlier version.
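That's the VM snapshot schedule feature, if memory serves; something along these lines, where the names and schedule values are just illustrative:

# Create a daily schedule that snapshots at 02:00 and keeps the last 7
xe vmss-create name-label="nightly" type=snapshot frequency=daily schedule:hour=2 schedule:min=0 retained-snapshots=7 enabled=true

# Attach a VM to the schedule
xe vm-param-set uuid=<vm-uuid> snapshot-schedule=<vmss-uuid>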

 

-=Tobias


Prior to 7.3 they need to be untagged, so access ports that are members of VLAN 50 are fine, but they frowned on people tagging management interfaces. I prefer to keep them access ports as well; it's simpler and cleaner, especially if you need to reinstall, as I am doing this weekend.

 

--Alan--

 

1 hour ago, Tobias Kreidl said:

@Reyn,

Just not in the XenServer configuration itself. If you use VLANs in your organization, just about everything will be on a VLAN at layer 2, so no worries there!

 

-=Tobias

 

Excellent. I figured as much, but I swear that every day at the office I run into something IT takes for granted.

24 minutes ago, Tobias Kreidl said:

Since XS 7.3 the management interface no longer has to be untagged or kept off a VLAN, but we still do it that way.

 

-=Tobias

 

Sorry for going through this again, but are you saying that prior to 7.3 the management network should not be on a layer 2 VLAN on the switch, or just not tagged in the Xen network configuration? I ask because we currently have the Xen management network on VLAN 50 on our switch.


Since XS 7.3 the management interface no longer has to be untagged or kept off a VLAN, but we still do it that way.

 

Because iSCSI timing is pretty critical, Citrix strongly recommends not using iSCSI networks for anything else and dedicating separate ports exclusively to iSCSI; it's fine for those to be ports on an existing multi-port physical card.
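For anyone setting that up, the usual pattern is to give the dedicated iSCSI PIF its own IP and mark it as a storage interface; roughly like this, with placeholder UUIDs and addresses:

# Assign a static IP on the dedicated iSCSI NIC (no gateway needed on a flat storage subnet)
xe pif-reconfigure-ip uuid=<iscsi-pif-uuid> mode=static IP=192.168.100.11 netmask=255.255.255.0

# Flag it as a dedicated storage interface and keep it from being unplugged
xe pif-param-set uuid=<iscsi-pif-uuid> disallow-unplug=true other-config:management_purpose="iSCSI"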

 

-=Tobias


I would personally keep iSCSI networks on access ports and separated from any other traffic. You don't want network congestion to interfere with access to storage; bad things happen when VMs can't get to storage. I think XenServer now supports tagged management interfaces, but I still prefer to keep them access ports. Unbonding NICs can be disruptive.
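If you do end up unbonding, it's worth surveying what exists first. Roughly, with a placeholder UUID:

# See existing bonds, their member PIFs, and the bond mode
xe bond-list params=uuid,master,slaves,mode

# Destroying a bond interrupts its traffic until things are reconfigured, so plan a window
xe bond-destroy uuid=<bond-uuid>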

 

--Alan--

 

2 hours ago, Alan Lantz said:

It's hard to say, but you can slice and dice it many ways. And a 10Gb connection is not equivalent to ten 1Gb connections; it's usually more like 4 or 5. I would set it up this way if what I considered most critical was latency and speed of VM traffic in and out of those VMs. In my environment, 10Gb is storage, as I saw disk access as most critical. Another train of thought is that maybe those other connections are bonded for failover availability, and that was seen as more important for storage than throughput.

 

--Alan--

 

 

Understood.

 

1 hour ago, Tobias Kreidl said:

We run almost all connections over a pair of LACP-bonded 10 Gb NICs (excepting iSCSI, which should be on physically different NICs from anything else) and, with VLANs, do it all: management, VM traffic, and NFS storage. We have some servers hosting as many as 100 VMs, and it all works flawlessly without ever coming close to saturating the network.

 

-=Tobias

 

Understood. I think I have a plan for this now. If I remember correctly, as long as I keep my management port configured as-is, I can reconfigure all the host NICs (i.e., NIC 0 VLAN 1, ... NIC 0 VLAN 10, etc.) and then reconfigure the VM networks. What about unbonding NICs? Also, if I remember correctly, the Xen management network is advised to be on a separate untagged VLAN, correct? And I did not know the iSCSI networks were supposed to be on physically separate NICs. Are you referring to the card itself or just ports on the card?


It's hard to say, but you can slice and dice it many ways. And a 10Gb connection is not equivalent to ten 1Gb connections; it's usually more like 4 or 5. I would set it up this way if what I considered most critical was latency and speed of VM traffic in and out of those VMs. In my environment, 10Gb is storage, as I saw disk access as most critical. Another train of thought is that maybe those other connections are bonded for failover availability, and that was seen as more important for storage than throughput.

 

--Alan--

 


Archived

This topic is now archived and is closed to further replies.
