
NFS Storage has high latency


Question

Hi,

We built a new XenServer 8.1 pool with Dell R740 servers using Intel X550 NICs.

The storage comes from an EMC Unity. Thousands of other VMs are running on the Unity, but they are presented over Fibre Channel, so the bottleneck should not be the Unity itself.

 

We attached the NFS export from the Unity to the XenServer pool. If I open Resource Monitor in my VM, I see disk latency of 200-500 ms. This is extremely high.

The BIOS settings of the servers are configured for high performance. Should we change any settings to optimize NFS storage?

We don't want to use block storage, because NFS has many more features in XenServer.


12 answers to this question


2 hours ago, Tobias Kreidl said:

Yes, /etc/mtab is where you see the mount point. Odd; maybe fstab has been superseded by something else under CH 8.1. You can also run the

mount -o remount

command (I need to check the exact syntax) with the rsize=131072,wsize=131072 parameters. Somewhere there is a place to make permanent changes so that these modifications survive a reboot.

 

There are also other factors involved, such as how many VMs are on a single SR, how similar or different the I/O characteristics of the VMs on a common SR are, and so on.

-=Tobias

 

That would be great if you can check for the right parameters and where to enter them. My Linux know-how is not the best, and asking Google doesn't turn up any articles.


Some TCP settings might help. However, maybe it's the storage configuration itself? What's the RAID configuration? Types of disks, number of disks, etc.?

With SSD drives I have seen sub-millisecond reads and around 10-30 ms latency for writes; for regular large spinning-disk arrays, a similar 10-30 ms for both reads and writes.
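As a rough cross-check inside a guest, synchronous writes with dd give a feel for per-write latency (a quick sketch, not a substitute for a real benchmark such as fio; /tmp/latency-probe is just an example path):

```shell
# Write 1000 x 4 KiB blocks, syncing each write to disk (oflag=dsync),
# so the reported throughput reflects per-write latency.
dd if=/dev/zero of=/tmp/latency-probe bs=4k count=1000 oflag=dsync

# Remove the probe file afterwards.
rm -f /tmp/latency-probe
```

At ~10 ms per synchronous write this works out to roughly 400 KB/s; at 200-500 ms it collapses to a few KB/s.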

 

-=Tobias


The RAID configuration makes a big difference. My RAID5 array -- even with SSD drives -- didn't perform too well, partly because there were only 5 drives total, and that's not so good for writes.

Also, make sure you've adjusted the NFS mount-point entries to use much larger rsize and wsize buffers. The defaults are generally way too small. Are you using 10 Gb or 1 Gb NICs?

 

-=Tobias


The EMC Unity has 24 drives. I don't know which RAID level. But all other VMs that run on the same RAID, with the LUN presented through a Fibre Channel HBA, work perfectly: no disk latency. Only the VMs running through the NFS share are affected.

We use Intel X550 10 Gbit NICs in an active-standby bond.

What do you mean by rsize and wsize? Which values are better?

5 hours ago, Tobias Kreidl said:

You need both... they should be something like 128 KB each in your /etc/fstab entry: rsize=131072,wsize=131072

Are you using NFSv3 or NFSv4?

 

-=Tobias

 

In my /etc/fstab there are no NFS mounts. My mounts are in /etc/mtab, but that file is read-only. The default size for both is 65536.

How can I change this?

Again: running my VMs through our hardware HBA over iSCSI, the disk latency is gone. The VMs run on the same RAID as with NFS, just over a different protocol (FC versus NFS).

With NFS we see latency around 100-500 ms.
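To confirm what was actually negotiated, the mount options for the NFS SR can be read from /proc/mounts on the XenServer host (read-only, safe to run):

```shell
# Show NFS mounts with their negotiated options (rsize=, wsize=, vers=, ...).
# Prints a fallback message if no NFS mount is present.
grep nfs /proc/mounts || echo "no NFS mounts on this host"
```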


Yes, /etc/mtab is where you see the mount point. Odd; maybe fstab has been superseded by something else under CH 8.1. You can also run the

mount -o remount

command (I need to check the exact syntax) with the rsize=131072,wsize=131072 parameters. Somewhere there is a place to make permanent changes so that these modifications survive a reboot.
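On the host, the one-off test would look roughly like this (the mount point is hypothetical; use the actual path shown in /etc/mtab). Note that NFS clients usually negotiate rsize/wsize at mount time, so a plain remount may not take effect; detaching and reattaching the SR can be the safer test:

```shell
# Hypothetical example: remount an existing NFS mount with larger buffers.
# Replace the path with the real mount point from /etc/mtab.
mount -o remount,rsize=131072,wsize=131072 /run/sr-mount/<sr-uuid>
```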

 

There are also other factors involved, such as how many VMs are on a single SR, how similar or different the I/O characteristics of the VMs on a common SR are, and so on.

-=Tobias


Things have changed a lot going to 8.1. My guess is that with the CLI you may be able to modify parameters with the device-config token; documentation in this area seems really sparse!

It used to be really easy: just edit the /etc/fstab file and restart the NFS daemon. :-( Unfortunately, I do not have access to an 8.1 system.
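If 8.1 does route this through the SR layer, an xe sketch along these lines might work when (re)creating the SR. The server address and export path here are made up, and the device-config keys beyond server/serverpath (nfsversion and options) are assumptions to verify against your version's documentation:

```shell
# Hypothetical sketch: pass NFS mount options at SR creation time.
# Verify the supported device-config keys on your release first.
xe sr-create name-label="NFS-Unity" type=nfs content-type=user \
  device-config:server=192.0.2.10 \
  device-config:serverpath=/export/xenpool \
  device-config:nfsversion=3 \
  device-config:options="rsize=131072,wsize=131072"
```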

 

-=Tobias


Not sure if XenServer 8.1 is the same as the 'newer' Citrix Hypervisor 8, but we did set up one server recently to test some things, and I can confirm that there is still an /etc/fstab file available.

In my eyes, removing it would be quite an important change for a minor update from 8.0 to 8.1.

This was just for information.

-t

