
Multipath config with MSA 2052 and Hypervisor 8.2


Nathan Platt

Question

Morning Everyone,

 

I've been seeing a performance issue for over a month now and have an ongoing case with HPE and Citrix, however the responses aren't coming fast enough and I wanted to ask everyone else's opinion. We realised early on that multipath wasn't configured correctly and were told to do this:

 

Quote

4 - BIG ISSUE - We could see that you are using multipathing and bond for storage network. This causes problems with packet flow. ifconfig -a shows that hosts have dropped packets on this bond.  Please use only multipathing and remove Bonding.

5 - BIG ISSUE - The Hypervisor host is expecting ALUA config (seen trying to be passed in kern log from later in Feb) but hosts are set up to only use one path at a time rather than all paths
3600c0ff0003c5f0fed61b75b01000000 dm-2 HPE     ,MSA 2050 SAN   
size=9.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active     <--ONLY PATH ACTIVE
| `- 10:0:0:1 sdh 8:112 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 11:0:0:1 sdj 8:144 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 8:0:0:1  sdd 8:48  active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 9:0:0:1  sdf 8:80  active ready running
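A quick way to spot this failover-only layout from a script is to count how many path groups report status=active. A minimal sketch against the group lines from the output above (on a live host you would pipe the real `multipath -ll` output instead of the heredoc):

```shell
# Count how many path groups multipath reports as carrying I/O.
# Sample `multipath -ll` group lines pasted as a heredoc for illustration.
groups=$(cat <<'EOF'
|-+- policy='service-time 0' prio=1 status=active
|-+- policy='service-time 0' prio=1 status=enabled
|-+- policy='service-time 0' prio=1 status=enabled
`-+- policy='service-time 0' prio=1 status=enabled
EOF
)
active=$(printf '%s\n' "$groups" | grep -c 'status=active')
echo "active path groups: $active"   # 1 means only one group carries I/O
```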

 

6 - Please reach out to HPE for the correct ALUA config to use with CentOS/RHEL 7.x, as this is what CHV uses as dom0. I did some digging and found this config that could work:
device {
        vendor                  "HP"
        product                 "MSA 2050"
        path_grouping_policy    group_by_prio
        uid_attribute           "ID_SERIAL"
        prio                    alua
        path_selector           "round-robin 0"
        path_checker            tur
        hardware_handler        "0"
        failback                immediate
        rr_weight               uniform
        rr_min_io_rq            1
        no_path_retry           18
}
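For what it's worth, the usual way to apply a multipath.conf change like this on a CentOS 7-based dom0 without a full reboot is to parse-check the file first and then tell the running daemon to reconfigure. A minimal sketch, shown here as a dry run that just prints the commands rather than touching a live host:

```shell
# Dry-run the usual apply sequence for a multipath.conf change; on a live
# dom0 you would run these three commands directly instead of printing them.
reload_steps() {
    echo 'multipath -t > /dev/null      # parse-check /etc/multipath.conf (prints "invalid keyword" warnings)'
    echo 'multipathd -k"reconfigure"    # push the new config to the running daemon'
    echo 'multipath -ll                 # confirm the resulting path groups'
}
reload_steps
```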

 

So, fair enough, I booked in some downtime and checked HPE SPOCK, as Citrix say 8.2 isn't supported by HPE. When I looked it up on SPOCK it is supported, so wires must be crossed somewhere. Anyway, according to HPE the ALUA should be configured like this:

 

Quote


        device {
                vendor                  "HP"
                product                 "MSA2[02]*"
                path_grouping_policy    multibus
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                path_selector           "round-robin 0"
                rr_weight               uniform
                prio_callout            "/bin/true"
                path_checker            tur
                hardware_handler        "0"
                failback                immediate
                no_path_retry           12
                rr_min_io               100
        }

 

So we did this, and when I restarted multipath we got these errors:

 

[root@starbase3 ~]# multipath -ll
Mar 25 19:39:22 | /etc/multipath.conf line 112, invalid keyword: getuid_callour
Mar 25 19:39:22 | /etc/multipath.conf line 115, invalid keyword: prio_callout
Mar 25 19:39:22 | /etc/multipath.conf line 116, invalid keyword: hardware_handlet
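Two things are going on in those errors: two of the flagged keywords are typos in the file (getuid_callour, hardware_handlet), but even spelled correctly, getuid_callout and prio_callout were removed from the multipath-tools shipped with CentOS/RHEL 7 in favour of uid_attribute and prio. A quick way to reproduce the complaint offline is to grep a scratch copy of the suspect lines for the legacy keywords:

```shell
# Write the legacy HPE-recommended keywords to a scratch file and count how
# many of them the CentOS 7-era multipath-tools no longer accepts.
cat > /tmp/msa-check.conf <<'EOF'
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout   "/bin/true"
EOF
hits=$(grep -cE 'getuid_callout|prio_callout' /tmp/msa-check.conf)
echo "legacy keywords found: $hits"
```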

 

I messed around and looked up a few forums; they suggested I change the defaults section to this:

 

defaults {
        user_friendly_names        yes
        polling_interval           10
        path_selector              "round-robin 0"
        path_grouping_policy       group_by_prio
        prio                       alua
        path_checker               tur
        rr_min_io_rq               100
        flush_on_last_del          no
        max_fds                    "max"
        rr_weight                  uniform
        failback                   immediate
        no_path_retry              18
        queue_without_daemon       no
}
 

When I restarted multipathing I got this:

 

[root@starbase3 ~]# multipath -ll
Mar 25 19:39:22 | /etc/multipath.conf line 112, invalid keyword: getuid_callour
Mar 25 19:39:22 | /etc/multipath.conf line 115, invalid keyword: prio_callout
Mar 25 19:39:22 | /etc/multipath.conf line 116, invalid keyword: hardware_handlet
mpathb (3600c0ff0003c5f0fa194615b01000000) dm-1 HPE     ,MSA 2050 SAN
size=10.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 10:0:0:0 sde 8:64 active ready running
| `- 8:0:0:0  sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 7:0:0:0  sdb 8:16 active ready running
  `- 9:0:0:0  sdd 8:48 active ready running

 

So round robin is working, but it still doesn't look right: checking the SAN, only 2 ports are ever used out of the 8 we have. We also enabled jumbo frames, setting the MTU to 8900 (as the MSA takes the last 100 for something).
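One way to see why only 2 of 8 array ports carry traffic is to count how many distinct portals the host actually has iSCSI sessions to — round robin can only spread I/O across the sessions that exist. A minimal sketch parsing sample `iscsiadm -m session` output (the portal IPs and IQN here are hypothetical; on a live host pipe the real command output instead):

```shell
# Count distinct SAN portals with an open iSCSI session.
# Each `iscsiadm -m session` line is one session to one array port.
sessions=$(cat <<'EOF'
tcp: [1] 10.0.10.1:3260,1 iqn.1986-03.com.hp:storage.msa2050.example (non-flash)
tcp: [2] 10.0.10.2:3260,2 iqn.1986-03.com.hp:storage.msa2050.example (non-flash)
EOF
)
ports=$(printf '%s\n' "$sessions" | awk '{print $3}' | cut -d, -f1 | sort -u | wc -l)
echo "distinct SAN portals with sessions: $ports"
```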

 

I'm really struggling with performance here: most of my VMs are only getting 20-40 MB/s read/write, while the SAN doesn't seem to be doing much and my switches seem pretty quiet as well.

 

Any ideas?

 

Quote

Environment Details

 

3 Physical Servers

Hypervisor 8.2 (12 GB Dom0)

All patches up to and including E016 complete

2 NICs per server (4 x 1Gb ports)

2 Aruba 2390F switches

1 MSA 2050 SAN

iSCSI over LVM

 

 

3 answers to this question


Make a backup of your multipath.conf file on one of your XenServers and create a new one with the below. I would also make the jumbo frames an even 9K and do pings with jumbo sizes to verify that your jumbo frames are indeed okay end to end. Also, I assume you have segregated the management/VM traffic on one NIC and the storage traffic on a separate NIC on a separate storage network. I don't like the errors you are seeing when doing multipath -ll. As far as performance goes, if you are using 1Gb links you should be able to hit over 70MB/sec on storage. 30-40MB/sec would be usable, but not as good as what I would expect.
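The jumbo-frame ping test mentioned above can be sketched like this. With the OP's 8900-byte MTU, the largest ICMP payload that fits is 8872 bytes (8900 minus the 20-byte IP header and 8-byte ICMP header), and `-M do` on Linux ping sets don't-fragment so the ping fails loudly if any hop has a smaller MTU. The target address here is a hypothetical SAN portal IP:

```shell
# Compute the largest non-fragmenting ICMP payload for a given MTU and
# print the ping command to run against a storage portal.
MTU=8900
PAYLOAD=$((MTU - 20 - 8))   # subtract IP and ICMP headers
echo "ping -M do -s $PAYLOAD -c 3 10.0.10.1"
```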

 

 

 

defaults {
        user_friendly_names        yes
        polling_interval           10
        path_selector              "round-robin 0"
        path_grouping_policy       group_by_prio
        prio                       alua
        path_checker               tur
        rr_min_io_rq               100
        flush_on_last_del          no
        max_fds                    "max"
        rr_weight                  uniform
        failback                   immediate
        no_path_retry              18
        queue_without_daemon       no
}

device {
        vendor                     "HP"
        product                    "MSA 2050"
        path_grouping_policy       group_by_prio
        uid_attribute              "ID_SERIAL"
        prio                       alua
        path_selector              "round-robin 0"
        path_checker               tur
        hardware_handler           "0"
        failback                   immediate
        rr_weight                  uniform
        rr_min_io_rq               1
        no_path_retry              18
}

 


I'm not sure which bits of your comments are accurate or not, but the error messages

Quote

 

[root@starbase3 ~]# multipath -ll
Mar 25 19:39:22 | /etc/multipath.conf line 112, invalid keyword: getuid_callour
Mar 25 19:39:22 | /etc/multipath.conf line 115, invalid keyword: prio_callout
Mar 25 19:39:22 | /etc/multipath.conf line 116, invalid keyword: hardware_handlet

 

 

show typos in the parameters: getuid_callour instead of getuid_callout, and hardware_handlet instead of hardware_handler. Are you sure the contents of multipath.conf are actually correct?

