Posts posted by horizon limit
-
Just a brief note: the vm-copy command copies the VM metadata and the VDI, and during the copy XAPI creates a VBD between dom0 and the VDI. That is why, in the Storage tab, you can see something like "Control domain on <hostname of the XenServer>" listed under the virtual machine against the VDI.
There are three ways to resolve this:
- Identify the VBD between the VDI and the control domain and destroy it.
- Forget the VDI in question, then rescan the SR to see the VDI again. Make sure the sr-scan works before forgetting the VDI.
- Reboot the XenServer host.
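The first two options could be sketched roughly like this (the UUIDs are placeholders; run on the XenServer host and double-check each UUID before destroying anything):

```shell
# Option 1: find the leftover VBD linking dom0 to the VDI.
xe vbd-list vdi-uuid=<vdi-uuid> params=uuid,vm-name-label
# If vm-name-label shows "Control domain on host: ...", destroy that VBD:
xe vbd-destroy uuid=<vbd-uuid>

# Option 2: forget the VDI and rescan the SR.
# Verify the scan works BEFORE forgetting the VDI:
xe sr-scan uuid=<sr-uuid>
xe vdi-forget uuid=<vdi-uuid>
xe sr-scan uuid=<sr-uuid>
```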
HTH!
-
Glad it worked! :)
-
Yes, that should work as a last resort; however, please check your inbox for more info.
-
The UUID of the PV was the wrong one; we need to verify it from the LVM backup file. Can you find the device using fdisk -l and paste the output here?
Edited by: kon vikkt on Oct 1, 2013 3:09 PM -
Please have a look at the txt file for the right command; for security reasons I am not able to post it as a comment.
-
We need the sr-uuid of the SR whose PV is missing; you can simply grab it from XenCenter. Next, open the file named VG_XenStorage-<sr-uuid in question> in /etc/lvm/backup using any editor; from it you can get the UUID of the PV and the other info about the PV (we need that to recreate the PV).
Then follow from step 2 (pvcreate) in that article and it should be all good. -
The article http://support.citrix.com/article/CTX116017 will tell you how to create the PV and restore the VG over it. :)
Hope this helps. -
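A minimal sketch of those recovery steps, assuming /dev/sdg is the missing PV's device (device name, UUIDs, and the pv0 section name are placeholders; the authoritative procedure is in the CTX116017 article):

```shell
# 1. Read the PV UUID recorded in the LVM backup file for this SR:
grep -A2 'pv0' /etc/lvm/backup/VG_XenStorage-<sr-uuid>

# 2. Recreate the PV with its original UUID, using the backup as metadata:
pvcreate --uuid "<pv-uuid-from-backup>" \
         --restorefile /etc/lvm/backup/VG_XenStorage-<sr-uuid> \
         /dev/sdg

# 3. Restore the volume group metadata onto the recreated PV:
vgcfgrestore -f /etc/lvm/backup/VG_XenStorage-<sr-uuid> VG_XenStorage-<sr-uuid>

# 4. Verify the PV and VG are visible again:
pvs && vgs
```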
Can you paste the output of cat VG-<sr-uuid>? And I hope you have zeroed the device. We will need to re-create the PV as described in the VG backup.
-
Right, so try this:
> Run dd if=/dev/zero of=/dev/sdg bs=512 count=1
> Now check whether pvs can read it.
> Then paste the pvs and vgs output. Also, can you please paste the output of /etc/lvm/backup/VG-<sr-uuid>? -
Alright, the screenshot has provided much info about the pool, so a few things here:
> Is the storage full of vdisks?
> We will need to create the PV and restore the VG on it, as there was no PV in the pvs output.
> We can also try one more thing: simply eject the server from the pool and add it back. Not sure if this would work, but we can try. :) -
OK, now run dd if=/dev/sdg bs=1024 count=1 | hexdump -C and paste the output here. I believe a partition has been made on the device and that's the issue.
Also, do we have a backup of the VG in /etc/lvm/backup?
Edited by: kon vikkt on Sep 25, 2013 12:51 PM -
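As a quick pointer for reading that hexdump: an MBR partition table ends with the signature bytes 55 aa at offset 0x1fe of the first sector, so seeing them there means a partition table has been written over the PV label. A narrower check (device name is a placeholder):

```shell
# Dump just the last two bytes of the first 512-byte sector;
# an MBR partition table ends with the signature 55 aa there.
dd if=/dev/sdg bs=1 skip=510 count=2 2>/dev/null | hexdump -C
# "55 aa" in the output indicates a partition table on the device.
```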
Hmmm, can you paste the pvs and vgs output for the problematic SR, and also fdisk -l?
-
Looks like the VG is missing. What happened to the SR, or what changes were made after which the issue occurred?
-
The vhd-util command was executed incorrectly. Please follow the article http://support.citrix.com/article/CTX123461 and provide the output; there is one LV as per the output.
-
That means there are no orphan disks; I tested the same script on my lab server and it gave the expected results. Another conclusion is that there are no orphan disks attached. Please attach the output of vhd-util scan -f -c -m "VHD-*" -l VG_XenStorage-<sr-uuid> -p and lvscan | grep <sr-uuid> here; let me have a look.
-
Yes, because the sr-uuid has not been provided. Please go through the post above; it clearly says
perl FindOrpahnVDI.pl <sr-uuid>. -
It will list all the VDIs which do not have a VBD connected to them. A VBD is a block device and the connector between the VM and the VDI; it exists even if the VM is shut down.
It will only list the VDI UUIDs and will not delete them; you would need to delete them yourself using xe vdi-destroy uuid=<>.
Now the big question: how can we judge whether it is listing the correct VDIs? Simply run xe vbd-list vdi-uuid=<> and xe vdi-list uuid=<vdi-uuid> (the latter gives info about the VDI in the XAPI DB, but the former is the main check). If xe vbd-list returns straight to the prompt, don't think much about it; go ahead and delete the VDI.
Hope this helps. -
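That verify-then-delete check could be scripted along these lines (a sketch; vdis.txt is a hypothetical file holding one candidate VDI UUID per line, e.g. the script's output):

```shell
# Destroy each candidate VDI only if no VBD still references it.
while read -r vdi; do
    # --minimal prints an empty string when no VBDs match.
    if [ -z "$(xe vbd-list vdi-uuid="$vdi" --minimal)" ]; then
        echo "No VBD found for $vdi; destroying it"
        xe vdi-destroy uuid="$vdi"
    else
        echo "Skipping $vdi: a VBD still references it"
    fi
done < vdis.txt
```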
Cool, this is lvmoiscsi as I was suspecting, so it is thick provisioned. Now can you please run this script as
perl FindOrpahnVDI.pl <sr-uuid>? This will tell you which VDIs are orphans and can be deleted.
Note that it will list the HA disks as orphans as well. You can confirm by running xe vbd-list vdi-uuid=<>; if it returns straight to the prompt, you can delete the VDI. -
Can you check the SR type with xe sr-list params=all uuid=<sr-uuid>? I am sure it is LVM and there are more LVs than there should be; it seems the space has still not been released by the snapshots.
-
Well, the SM logs can tell whether there is corruption or not. You can grep for the keyword corrupt in case there are any such entries; that will tell you whether the disks are corrupt.
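For example (log path as on a typical XenServer host; adjust if yours differs):

```shell
# Search the storage manager log, case-insensitively, for corruption
# messages, including rotated copies if any exist.
grep -i corrupt /var/log/SMlog /var/log/SMlog.* 2>/dev/null
```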
-
The storage issue and the lack of space when VHD coalescing is in action is a legacy problem now; I am not sure when there will be a fix for such problems.
The only workaround is to move/copy the VM to another storage, which will coalesce the disks.
From what I have seen, coalesce fails in cases where the VDIs are corrupt or the chain consists of more than 32 VDIs.
Iscsi Multipath - unknown key DMP
Posted in Storage
Hi Stuart,
This problem seems to be specific to multipath, I guess. If you disable multipath, the SR should connect. Also, make sure to get the multipath settings from the storage vendor.
HTH!