Posts posted by Mark Syms
-
The preparation VM must have exactly the same hardware configuration as the final VMs; it is the last point at which persistent OS configuration can be made to the base catalog disk. You need to have one GPU slot free, and the hypervisor should select a host with the available resource.
-
Yes, just update the configuration of the resource in the hosting node in the tree and select the new storage. When you create new VMs they'll use that storage as well as the existing one. You can also (possibly temporarily) set the other storage not to accept new VMs, which will allow you to balance the storage usage a bit.
-
On the VM xlk alone you have 5.8 TB used. If any of the other VMs have disks on the SR, then you have probably filled it as far as XenServer is concerned: the disks are fully provisioned at the hypervisor level, so even if only 4 TB has been written to the SAN, the rest of the space is reserved.
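You can check the provisioned (rather than written) total from the CLI by summing the virtual sizes of the VDIs on the SR. A sketch only: the SR name-label is a placeholder, and the `xe` calls only run on an actual XenServer host.

```shell
#!/bin/sh
# Helper: sum a comma-separated list of byte counts, as printed by
# `xe ... params=virtual-size --minimal`.
sum_bytes() {
    echo "$1" | tr ',' '\n' | awk '{ total += $1 } END { print total+0 }'
}

if command -v xe >/dev/null 2>&1; then
    # "Your SR" is a placeholder name-label; substitute your own.
    SR_UUID=$(xe sr-list name-label="Your SR" --minimal | cut -d, -f1)
    SIZES=$(xe vdi-list sr-uuid="$SR_UUID" params=virtual-size --minimal)
    echo "Provisioned bytes on SR: $(sum_bytes "$SIZES")"
fi
```

If the provisioned total is at or near the SR size, the SR is full from XenServer's point of view, regardless of what the SAN reports as written.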
-
Unfortunately XenServer 6.5 SP1 is long out of support now. This may very well be a bug in the software but it won't get fixed.
-
IntelliCache uses a local ext-type Storage Repository (SR), so the SSDs would need to be in an SR. You can do this from the XenServer command line.
First, identify the disk devices:
fdisk -l
will report them, and the sizes of the disks should allow you to work out which ones are the SSDs. Then create an SR:
xe sr-create type=ext name-label="Local SSDs" device-config:device=<comma-separated list of devices, e.g. /dev/sdc,/dev/sdd, taken from the fdisk output>
You then need to make this the cache SR for the host:
xe host-enable-local-storage-caching sr-uuid=<UUID printed by the sr-create command>
There is no need to reboot the host afterwards.
IntelliCache won't automatically get used by anything other than Citrix Virtual Apps and Desktops when creating new machine catalogs.
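The steps above can be collected into a short script. This is a sketch only: the SSD device names are examples, and the `xe` calls are guarded so they only run where the xe CLI actually exists (dom0 on a XenServer host).

```shell
#!/bin/sh
# Sketch of the IntelliCache setup described above. /dev/sdc and
# /dev/sdd are example SSD devices -- confirm yours with `fdisk -l`.

# Helper: turn a list of device names into the comma-separated form
# that device-config:device expects.
devices_csv() {
    echo "$*" | tr ' ' ','
}

SSD_DEVICES=$(devices_csv /dev/sdc /dev/sdd)

# Only attempt the xe calls on a real XenServer host.
if command -v xe >/dev/null 2>&1; then
    # Create the ext SR on the SSDs; sr-create prints the new SR UUID.
    SR_UUID=$(xe sr-create type=ext name-label="Local SSDs" \
        device-config:device="$SSD_DEVICES")

    # Make it the host's cache SR; no reboot is needed afterwards.
    xe host-enable-local-storage-caching sr-uuid="$SR_UUID"
fi
```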
-
Yes, that should work fine.
-
The SR that is failing to attach is f5fd8b5b-5f1e-d95c-dd69-7d0801a59e67, which is the one associated with /dev/sdc1:
uuid ( RO)                  : 36366d8b-366a-2d68-2977-2b7d1fd89910
             host-uuid ( RO): e131ed35-1eb9-4220-9f7d-47d83901fabb
               sr-uuid ( RO): f5fd8b5b-5f1e-d95c-dd69-7d0801a59e67
         device-config (MRO): device: /dev/sdc1
    currently-attached ( RO): false
Is this a second disk in your host or did you have an SR on an external USB drive that is no longer there? This SR failing to attach is blocking the startup operations and so the subsequent SRs aren't being plugged either. If this is an internal disk then check that it's still properly attached and recognised by the machine hardware bios/BMC etc.
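If you want to look at this from the CLI, you can list the PBDs that failed to attach and retry the plug by hand. A sketch, assuming the standard xe CLI; the UUID-parsing helper is just illustrative:

```shell
#!/bin/sh
# Helper: xe --minimal output is a comma-separated list; take the
# first entry.
first_uuid() {
    echo "$1" | cut -d, -f1
}

if command -v xe >/dev/null 2>&1; then
    # List every PBD that is not currently attached.
    UNPLUGGED=$(xe pbd-list currently-attached=false --minimal)

    # Retry the first one by hand; any failure will leave details
    # in /var/log/SMlog on the host.
    PBD=$(first_uuid "$UNPLUGGED")
    [ -n "$PBD" ] && xe pbd-plug uuid="$PBD"
fi
```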
-
This has been fixed in an update for XenServer 7.6 (XS76E004) and Citrix Hypervisor 8.0 never had the issue. XenServer 7.1.2 is not susceptible to this issue and all other versions are out of support.
-
7.4 is EOL, and has been for over a year.
-
It's not luck, it's design. The template definition and its virtual disk are only loosely associated, and as soon as there is a VM using the template disk it is shared and reference counted.
-
No, there is no copy down consolidation tool.
-
All the virtual disk elements are reference counted so even if the template were to be deleted the base disk for the template would not be deleted so long as there were VMs referencing it. The template's disk is read only so it's no more likely to be damaged than any other virtual disk in the system. As for the template itself (which is just a special type of VM) it doesn't play any role in the created VMs after they have been created.
-
XenServer 6 is long out of support. Sorry, you need to upgrade.
-
sr-probe doesn't work with GFS2, which is why sr-probe-ext was created. GFS2 is implemented with a new and mostly incompatible storage management framework architecture. It would have been better if the sr-probe operation had errored and made that clear, and I confess I don't know why that didn't happen.
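For reference, a GFS2 probe over iSCSI looks roughly like the following. This is a command template, not something to run verbatim: the target address, IQN and SCSI ID are placeholder variables you must fill in with your own storage details.

```shell
# Probe for GFS2 SRs over iSCSI. TARGET_IP, TARGET_IQN and SCSI_ID
# are placeholders for your own storage details.
if command -v xe >/dev/null 2>&1; then
    xe sr-probe-ext type=gfs2 \
        device-config:provider=iscsi \
        device-config:target="$TARGET_IP" \
        device-config:targetIQN="$TARGET_IQN" \
        device-config:SCSIid="$SCSI_ID"
fi
```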
Citrix Hybrid Cloud Platforms, storage team.
-
If there isn't then you'll need to create a ServerStatusReport and either contact technical support if you're licensed or file a bug at https://bugs.xenserver.org if you're a free user.
-
And there's nothing in the notification pane or status bar in XenCenter when you issue a repair operation request from the UI?
-
Please look in the /var/log/SMlog for errors at the point where you tried to run xe pbd-plug.
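To narrow that down, grep is usually enough. A sketch; the pattern list is just a starting point, not an exhaustive set of SMlog error markers:

```shell
#!/bin/sh
# Helper: pull likely error lines out of an SMlog-style file.
sm_errors() {
    grep -i -E 'error|exception|backtrace' "$1"
}

# On a XenServer host, inspect the most recent matches in the storage
# manager log from around the time of the failed pbd-plug.
if [ -r /var/log/SMlog ]; then
    sm_errors /var/log/SMlog | tail -n 40
fi
```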
-
That's very good to know. Was the array management software not giving you any pre-failure alerts? Doesn't seem like anything that the hypervisor would have any visibility of other than the performance being sucky (I guess the memory failures were causing the firmware on the RAID controller to do recovery operations, or even crash and restart).
-
No, 512e support is not new in CH8.
-
And that's a separate issue, not that this helps you. The ext local SR type uses ext3, not ext4, and ext3 has a maximum filesystem size of 8 TB.
-
512e drives are supported.
4Kn drives will work, but not with the default LVM local storage. If you create the local storage as ext, either by selecting the install-time option (something like "optimise for XenDesktop") or by deleting and recreating the local storage through the xe CLI tool, then it should work.
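If you go the CLI route, the rough shape is below. This is a destructive sketch (it destroys the existing local SR and anything on it), and the device path /dev/sda3 is only an example; check your own partition layout before running anything like this.

```shell
#!/bin/sh
# DESTRUCTIVE sketch: replace the default LVM local SR with an ext
# one. /dev/sda3 is an example device; verify yours first.

# Helper: xe --minimal output is comma-separated; take the first entry.
first_uuid() {
    echo "$1" | cut -d, -f1
}

if command -v xe >/dev/null 2>&1; then
    HOST=$(first_uuid "$(xe host-list --minimal)")
    SR=$(first_uuid "$(xe sr-list name-label="Local storage" --minimal)")
    PBD=$(first_uuid "$(xe pbd-list sr-uuid="$SR" --minimal)")

    # Detach and destroy the existing LVM local SR (data is lost).
    xe pbd-unplug uuid="$PBD"
    xe sr-destroy uuid="$SR"

    # Recreate it as ext (file-based), which also works on 4Kn drives.
    xe sr-create host-uuid="$HOST" type=ext content-type=user \
        name-label="Local storage" device-config:device=/dev/sda3
fi
```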
-
15 minutes ago, Christof Giesers said:
If I wouldn't know better, I'd guess you're joking.
25.7k IOPS out of 12 SSDs in RAID 0 should give several 100.000s of IOPS (if storage and network can handle).
I recommend to run that again on a clean environment and post again, because this is... something between ridiculous and embarrassing.
Even a single (enterprise) SSD should do more than your result (HPE usually gives numbers of between 60 and (for SAS) >100k IOPS for their SSDs and (also for SAS) > 1 GB/s).
Here's my result with our settings (as far as you've given the parameters) on LVMoHBA after running for 5 minutes:
It does ~300 MB/s with ~36.5k IOPS on a poor HPE MSA2040 SAN/FC with 2x FC16G on tiered SAS SSD/HDD storage (high priority, so should run on RAID 6 with 8 SAS SSDs).
Bottleneck is the MSA2040, as it's not a high-performance storage. HPE EVA and 3Par should add a digit there, if not busy with other loads.
I can't say what your bottleneck is, if it's GFS2 (that would make it an absolute no-go for any production) or somewhere else.
I expect you use 'slow' NFS, as bonding is nonsense for iSCSI and ('slow') usual bonding is for availability but doesn't help NFS performance.
Regards!
Christof
The bottleneck is the single 10G network card in the host, which is saturated.
-
Have you enabled multipath on the host after rebuilding it? What does
xe host-param-list uuid=<host uuid>
say for the affected host? Depending on the version you should see "multipathing: true" in other-config and on newer versions "multipathing ( RW): true" as a top level parameter.
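A quick way to check just that flag from the host; the parsing helper is illustrative, and on older versions you would look for the key inside other-config instead of a top-level parameter:

```shell
#!/bin/sh
# Helper: pull the value out of an "xe param-list" style line such as
#   multipathing ( RW): true
param_value() {
    sed -n 's/.*): *//p'
}

if command -v xe >/dev/null 2>&1; then
    HOST=$(xe host-list --minimal | cut -d, -f1)
    # Newer versions expose multipathing as a top-level parameter.
    xe host-param-list uuid="$HOST" | grep 'multipathing' | param_value
fi
```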
-
I would guess that it occurred due to some condition relating to the storage and that condition holds true still.
Posted in "Fibre Channel VDI migration" (Storage)
The network you are choosing is the network for the VM to be attached to, not the network used to import the data over. It makes no sense to be able to specify an HBA there, since the HBA is used for the storage on which the VM is placed.