
Issue trying to replug an iSCSI LUN after a power outage.


Lee Quince

Question

Hi, looking for a bit of a lifeline if possible. After a controlled power outage, one of our LUNs will not replug in XenServer. I'm pretty sure the data is safe, but for the life of me I cannot get the SR to reconnect/replug. The LUN is 2 TB with 750 GB of data, so it does have free space available.

So far I have removed/forgotten the SR and updated XenServer, but I don't really want to start changing things with LVM without some advice.

In the SMlog I have the following. The issue is with SR 22465dab-b5a5-273e-17d8-9f64386946cb: it can see the VDIs but never connects.

Apr  1 20:15:20 xen01 SM: [5625] FAILED in util.pread: (rc 1) stdout: '', stderr: '/bin/dd: error writing '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986': Input/output error
Apr  1 20:15:20 xen01 SM: [5625] 1+0 records in
Apr  1 20:15:20 xen01 SM: [5625] 0+0 records out
Apr  1 20:15:20 xen01 SM: [5625] 0 bytes (0 B) copied, 0.000333901 s, 0.0 kB/s
Apr  1 20:15:20 xen01 SM: [5625] '
Apr  1 20:15:20 xen01 SM: [5625] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:20 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:20 xen01 SM: [5625] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:20 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:20 xen01 SM: [5625] ['/sbin/dmsetup', 'status', 'VG_XenStorage--22465dab--b5a5--273e--17d8--9f64386946cb-leaf_cc9b4018--1202--4581--b6cc--c09dd386f986_1']
Apr  1 20:15:20 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:20 xen01 SM: [5625] ['/sbin/lvremove', '-f', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/dmsetup', 'status', 'VG_XenStorage--22465dab--b5a5--273e--17d8--9f64386946cb-inflate_cc9b4018--1202--4581--b6cc--c09dd386f986_406847488']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/dmsetup', 'status', 'VG_XenStorage--22465dab--b5a5--273e--17d8--9f64386946cb-leaf_cc9b4018--1202--4581--b6cc--c09dd386f986_1']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/dmsetup', 'status', 'VG_XenStorage--22465dab--b5a5--273e--17d8--9f64386946cb-leaf_cc9b4018--1202--4581--b6cc--c09dd386f986_1']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] *** INTERRUPTED COALESCE-LEAF OP DETECTED ***
Apr  1 20:15:28 xen01 SMGC: [5625] === SR 22465dab-b5a5-273e-17d8-9f64386946cb: gc_force ===
Apr  1 20:15:28 xen01 SMGC: [5625] Requested no SR locking
Apr  1 20:15:28 xen01 SMGC: [5625] SR 22465dab-b5a5-273e-17d8-9f64386946cb not attached on this host, ignoring
Apr  1 20:15:28 xen01 SMGC: [5625] Not checking if we are Master (SR 22465dab-b5a5-273e-17d8-9f64386946cb)
Apr  1 20:15:28 xen01 SM: [5625] LVMCache created for VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb
Apr  1 20:15:28 xen01 SM: [5625] lock: tried lock /var/lock/sm/22465dab-b5a5-273e-17d8-9f64386946cb/running, acquired: True (exists: True)
Apr  1 20:15:28 xen01 SMGC: [5625] Nothing was running, clear to proceed
Apr  1 20:15:28 xen01 SM: [5625] LVMCache: refreshing
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/usr/bin/vhd-util', 'scan', '-f', '-c', '-m', 'VHD-*', '-l', 'VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SMGC: [5625] SR 2246 ('Synology Rackstation01 - Vol02 - Lun01') (34 VDIs in 12 VHD trees):
Apr  1 20:15:28 xen01 SMGC: [5625]         *75991afa[VHD](100.002G//57.309G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             9ec63a4f[VHD](100.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             cbccf961[VHD](100.002G//100.203G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             3a5859a1[VHD](100.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         b61961d4[VHD](10.002G//10.027G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         *e38a1438[VHD](80.002G//68.289G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             43621705[VHD](80.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             *ee0f901a[VHD](80.002G//924.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                 28034446[VHD](80.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                 *46d28b6f[VHD](80.002G//568.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                     b7085f34[VHD](80.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                     94f3fe0e[VHD](80.002G//80.164G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         75993936[VHD](40.004G//40.090G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         a0576c3b[VHD](35.000G//35.074G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         1c3db62d[VHD](300.000G//300.594G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         *c1faef5c[VHD](20.002G//15.684G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             941a3b00[VHD](20.002G//20.047G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         *6a1b97c9[VHD](80.002G//80.066G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             080e0b3b[VHD](80.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             *3bd105db[VHD](80.002G//736.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                 fc441b24[VHD](80.002G//80.164G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                 e2e19425[VHD](80.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         *c4c57b10[VHD](100.002G//95.516G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             *5394de67[VHD](100.002G//1.590G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                 *ac14710c[VHD](100.002G//1.629G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                     b84de79c[VHD](100.002G//100.203G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                     5ee762a0[VHD](100.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]                 cf4d72e2[VHD](100.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             d3153aa2[VHD](100.002G//8.000M|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         78ccc7d3[VHD](10.002G//10.027G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]         *3d7137c9[VHD](60.004G//24.332G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             315dd215[VHD](60.004G//60.129G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]             cc9b4018[VHD](60.004G//388.000M|a)
Apr  1 20:15:28 xen01 SMGC: [5625]         4cd5098b[VHD](16.000G//16.039G|n)
Apr  1 20:15:28 xen01 SMGC: [5625]
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:28 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:28 xen01 SM: [5625] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:29 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:29 xen01 SM: [5625] ['/sbin/dmsetup', 'status', 'VG_XenStorage--22465dab--b5a5--273e--17d8--9f64386946cb-leaf_cc9b4018--1202--4581--b6cc--c09dd386f986_1']
Apr  1 20:15:29 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:29 xen01 SMGC: [5625] *** FINISH LEAF-COALESCE
Apr  1 20:15:29 xen01 SM: [5625] lock: opening lock file /var/lock/sm/lvm-22465dab-b5a5-273e-17d8-9f64386946cb/cc9b4018-1202-4581-b6cc-c09dd386f986
Apr  1 20:15:29 xen01 SM: [5625] lock: acquired /var/lock/sm/lvm-22465dab-b5a5-273e-17d8-9f64386946cb/cc9b4018-1202-4581-b6cc-c09dd386f986
Apr  1 20:15:29 xen01 SM: [5625] Refcount for lvm-22465dab-b5a5-273e-17d8-9f64386946cb:cc9b4018-1202-4581-b6cc-c09dd386f986 (7, 0) + (1, 0) => (8, 0)
Apr  1 20:15:29 xen01 SM: [5625] Refcount for lvm-22465dab-b5a5-273e-17d8-9f64386946cb:cc9b4018-1202-4581-b6cc-c09dd386f986 set => (8, 0b)
Apr  1 20:15:29 xen01 SM: [5625] lock: released /var/lock/sm/lvm-22465dab-b5a5-273e-17d8-9f64386946cb/cc9b4018-1202-4581-b6cc-c09dd386f986
Apr  1 20:15:29 xen01 SM: [5625] lock: closed /var/lock/sm/lvm-22465dab-b5a5-273e-17d8-9f64386946cb/cc9b4018-1202-4581-b6cc-c09dd386f986
Apr  1 20:15:29 xen01 SM: [5625] ['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:29 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:29 xen01 SM: [5625] ['/sbin/lvchange', '-an', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1']
Apr  1 20:15:29 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:29 xen01 SM: [5625] ['/sbin/dmsetup', 'status', 'VG_XenStorage--22465dab--b5a5--273e--17d8--9f64386946cb-leaf_cc9b4018--1202--4581--b6cc--c09dd386f986_1']
Apr  1 20:15:29 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:29 xen01 SM: [5625] ['/sbin/lvcreate', '-n', 'inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488', '-L', '4', 'VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb', '--addtag', 'journaler', '-W', 'n']
Apr  1 20:15:29 xen01 SM: [5625]   pread SUCCESS
Apr  1 20:15:29 xen01 SM: [5625] ['/sbin/lvresize', '-L', '61572', '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986']
Apr  1 20:15:29 xen01 SM: [5625] FAILED in util.pread: (rc 5) stdout: '', stderr: '  /run/lvm/lvmetad.socket: connect failed: No such file or directory
Apr  1 20:15:29 xen01 SM: [5625]   WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Apr  1 20:15:29 xen01 SM: [5625]   Insufficient free space: 15296 extents needed, but only 14379 available
Apr  1 20:15:29 xen01 SM: [5625] '
Apr  1 20:15:29 xen01 SM: [5625] lock: released /var/lock/sm/22465dab-b5a5-273e-17d8-9f64386946cb/sr
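The lvresize failure at the end of that log is plain extent arithmetic: the VHD LV is currently 388 MiB, the resize target is 61572 MiB, and the PE size is 4 MiB, leaving the VG 917 extents short. A quick sanity check, using only the figures that appear in the log output:

```shell
# Re-derive the "Insufficient free space: 15296 extents needed, but only 14379
# available" error: lvresize -L 61572 means 61572 MiB, the LV is currently
# 388 MiB, and the physical extent size is 4 MiB.
pe_mib=4
current_mib=388
target_mib=61572
free_extents=14379                                # reported by lvresize
needed=$(( (target_mib - current_mib) / pe_mib ))
shortfall=$(( needed - free_extents ))
echo "needed=$needed free=$free_extents short=$shortfall"
# → needed=15296 free=14379 short=917
```

917 extents at 4 MiB each is roughly 3.6 GiB, which is why growing the LUN is a plausible way out of this particular error.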

 

pvdisplay shows:

/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 0: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 406781952: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 406839296: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 4096: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 0: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4128768: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4186112: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4096: Input/output error

--- Physical volume ---
  PV Name               /dev/sde
  VG Name               VG_XenStorage-4027c93d-4ae4-7bec-8ddb-2440b5640fe1
  PV Size               1.00 TiB / not usable 12.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              262141
  Free PE               262140
  Allocated PE          1
  PV UUID               tdUA5h-0tAT-Hf6Z-U7Mg-zaNE-Z1nu-HUXXEp

  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7
  PV Size               500.00 GiB / not usable 12.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              127997
  Free PE               77706
  Allocated PE          50291
  PV UUID               OFFqLs-1Zrb-LiuU-u6EO-1zMt-uxH4-yOwDVu

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               VG_XenStorage-ac65cdcd-5824-c9e7-2d39-e784b92e8c10
  PV Size               26.25 GiB / not usable 14.98 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              6716
  Free PE               6715
  Allocated PE          1
  PV UUID               GcxBpf-9tfB-821o-8GWj-a07D-jHaR-C2AoO6

 

 

So just looking for a steer or some help.

Kind Regards

Lee

 

 

 


15 answers to this question


I'd start with a "vgchange -ay", but it looks like that already ran. Perhaps run a new iscsiadm rediscovery process. There's also pvchange.

Are the links OK in the /dev area? Is there a PBD associated with the VDI (I guess not, since you are trying to plug it in, hence maybe it needs to be re-created)? Is there any multipathing involved?

 

Depending on the damage, you may be able to recover with an LVM backup.

 

Was the power outage on the storage, the server or both?

 

-=Tobias


Hi Tobias,

 

vgchange -ay - tried this already.

 

[root@xen01 lvm]# vgchange -ay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 0: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 406781952: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 406839296: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 4096: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 0: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4128768: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4186112: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4096: Input/output error
  Operation prohibited while global/metadata_read_only is set.
  1 logical volume(s) in volume group "VG_XenStorage-4027c93d-4ae4-7bec-8ddb-2440b5640fe1" now active
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  Operation prohibited while global/metadata_read_only is set.
  4 logical volume(s) in volume group "VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7" now active
  Operation prohibited while global/metadata_read_only is set.
  1 logical volume(s) in volume group "VG_XenStorage-ac65cdcd-5824-c9e7-2d39-e784b92e8c10" now active
[root@xen01 lvm]#
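Those "Operation prohibited while global/metadata_read_only is set" lines are expected on a XenServer host rather than a separate fault: dom0's LVM configuration makes metadata read-only so that only the storage manager, which overrides the setting per command, modifies SR metadata. The relevant fragment (assuming the stock XenServer /etc/lvm/lvm.conf) looks like:

```
# /etc/lvm/lvm.conf (stock XenServer dom0 setting -- assumption)
global {
    metadata_read_only = 1
}
```

SM overrides this per invocation with something like --config 'global{metadata_read_only=0}', which is one reason letting sr-attach/PBD-plug drive the activation is safer than hand-running vgchange against an SR-managed VG.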
 

 

iscsiadm - there is no actual session to the SAN, as it's not plugged.

The links don't look good either, as it never gets that far. Interestingly, though, it connects on a second host in the cluster.

No multipath. How can I do an LVM backup?

The power outage was a clean shutdown of all hardware; I have other LUNs that are working OK.

 

 


I'm a little out of my league on this one. Since it's remote storage, if the state of the LUN is good, what about forgetting the SR and reattaching to it? During the connection process it should see the SR on the LUN and prompt you to reattach. Just thinking what I would try. It's a lot harder when it's production data and you want to minimize the risk.

 

--Alan--

 


Can you make the LUN size larger? It was complaining about being out of space and was trying to create/extend some LV stuff. Have you tried that yet? Alas, when it comes to iSCSI and trying to repair the LUN, I'm not sure what one could do other than reverting to a previous snapshot on the SAN.

 

--Alan--

 


Already expanded the LUN; the size is now 2 TB. I've had a little luck: on the second host I seem to have the SR connected.

 

lvdisplay returns the LVs, but they are not displayed in an xe vdi-list output.

Wondering if there's a CLI way to copy out the VHDs?
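There is a CLI route, with the usual caveats: on the host that can see the SR, activate the LV (`lvchange -ay /dev/VG_XenStorage-<sr-uuid>/VHD-<vdi-uuid>`) and image it out with dd. A minimal sketch of the copy-and-verify step, using scratch files here in place of the real LV device (the device path and backup destination are assumptions you would substitute):

```shell
# Stand-in for: dd if=/dev/VG_XenStorage-<sr-uuid>/VHD-<vdi-uuid> \
#                  of=/backup/<vdi-uuid>.vhd bs=4M
SRC=$(mktemp)                                            # plays the LV device
DST=$(mktemp)                                            # plays the backup file
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null   # seed the scratch "LV"
dd if="$SRC" of="$DST" bs=1M conv=sparse 2>/dev/null     # the actual copy step
cmp -s "$SRC" "$DST" && echo "copy verified"
rm -f "$SRC" "$DST"
```

Deactivate with lvchange -an afterwards, and vhd-util check -n <file> can confirm the copied VHD is structurally intact before you rely on it.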


On the second host, lvscan returns this:

 

[root@xen02 backup]# lvscan
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-c1faef5c-3e1d-48fc-a75e-bdf30f805545' [15.68 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-4cd5098b-3f14-40b8-9c11-5a33ac6bdc9a' [16.04 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-3d7137c9-4673-440a-84b1-729425370920' [24.33 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-78ccc7d3-5cf3-4318-b693-1b87b5c85290' [10.03 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-75993936-fe29-4505-b506-4e88094b35c5' [40.09 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-b61961d4-b05e-4e06-a343-9351994cc6d0' [10.03 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-75991afa-4d2d-477e-8583-5a2d6cd5d4bd' [57.31 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-6a1b97c9-d72c-4e87-8242-a216f25c4a7b' [80.07 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-c4c57b10-09ad-41d9-bfd0-d790782a3d59' [95.52 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-1c3db62d-bd99-4d01-88dd-222b5060a01e' [300.59 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-e38a1438-6f95-443b-9426-7a979fc87c71' [68.29 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-a0576c3b-16c5-4d19-9144-1dbb14becfef' [35.07 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986' [388.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-ee0f901a-e7d0-4f5c-8cfc-c04ed7346668' [924.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-43621705-6c11-4d36-aa1c-f3e6d49eb066' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-5394de67-4543-45fc-a22e-e2d4ffb1eefc' [1.59 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-d3153aa2-c90a-4f35-9167-dbfe2c9a0660' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-3a5859a1-03f2-4e8f-bfb9-e0c3edc0f07b' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-3bd105db-9c88-4eb9-acc9-f5b4ff3b8b34' [736.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-080e0b3b-bb4e-4e93-9fe6-66fdc9cb16a2' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-46d28b6f-286b-4750-8b18-a3ed416514f5' [568.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-28034446-178c-4afb-ba64-dcc6b43aead7' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-ac14710c-b7b1-45f7-b7ec-c5113ecc8225' [1.63 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cf4d72e2-e911-4d96-b335-38b42142fa36' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-94f3fe0e-d895-48f7-98d9-b87fcba4fbdf' [80.16 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-b7085f34-bdca-4fd6-b642-e35e6c9f8d99' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-b84de79c-dac8-4cac-965e-047fe850d2d7' [100.20 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-5ee762a0-e524-496b-801c-43ff39e52fe7' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-9ec63a4f-c6e0-4fef-ac86-367c11724fa7' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-fc441b24-08bf-4cfd-b61d-ed05ab190f35' [80.16 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-e2e19425-cd8e-4b65-8a44-11153e62747a' [8.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-941a3b00-29f1-48f7-b517-6c38f76bed12' [20.05 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-315dd215-1045-4569-90cd-bf1b7a73d924' [60.13 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cbccf961-f0cc-47ae-8bb3-a688b133d16d' [100.20 GiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-4027c93d-4ae4-7bec-8ddb-2440b5640fe1/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-c7a1e4b6-e99b-4e37-bab2-2c1c6a2a2ca8' [40.09 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-99ea1480-a367-401e-bfb5-a7e145f9b84e' [6.13 GiB] inherit
  ACTIVE            '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-395fe158-b965-4cfe-9bfa-1bda9ae41fd0' [10.03 GiB] inherit
  ACTIVE            '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-58b694b7-d95f-447c-aaae-59623bfef801' [16.04 GiB] inherit
  ACTIVE            '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-888e280b-564f-4220-a2f9-be243d29556e' [10.03 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-524805ae-222c-4d9c-9b46-b839fc443835' [20.05 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-23513408-a051-4f18-bd4e-fd682e2172d0' [60.13 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-ba7267f2-3cbf-463b-bfc9-43777434e840' [40.09 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-1219ed3b-0b39-4fae-95fe-36af1fcb425c' [8.00 MiB] inherit
  ACTIVE            '/dev/VG_XenStorage-bc317a62-06b2-870b-7927-ab8d09d894f1/MGT' [4.00 MiB] inherit


On the cluster master it returns:

[root@xen01 backup]# lvscan
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 0: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 406781952: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 406839296: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986: read failed after 0 of 4096 at 4096: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 0: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4128768: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4186112: Input/output error
  /dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488: read failed after 0 of 4096 at 4096: Input/output error
  ACTIVE            '/dev/VG_XenStorage-4027c93d-4ae4-7bec-8ddb-2440b5640fe1/MGT' [4.00 MiB] inherit
  ACTIVE            '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/MGT' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-c7a1e4b6-e99b-4e37-bab2-2c1c6a2a2ca8' [40.09 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-99ea1480-a367-401e-bfb5-a7e145f9b84e' [6.13 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-395fe158-b965-4cfe-9bfa-1bda9ae41fd0' [10.03 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-58b694b7-d95f-447c-aaae-59623bfef801' [16.04 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-888e280b-564f-4220-a2f9-be243d29556e' [10.03 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-524805ae-222c-4d9c-9b46-b839fc443835' [20.05 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-23513408-a051-4f18-bd4e-fd682e2172d0' [60.13 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-ba7267f2-3cbf-463b-bfc9-43777434e840' [40.09 GiB] inherit
  inactive          '/dev/VG_XenStorage-45f9d04f-f7ba-9126-6d78-f7d495c260b7/VHD-1219ed3b-0b39-4fae-95fe-36af1fcb425c' [8.00 MiB] inherit
  ACTIVE            '/dev/VG_XenStorage-ac65cdcd-5824-c9e7-2d39-e784b92e8c10/MGT' [4.00 MiB] inherit

So can I just remove these two?

  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/leaf_cc9b4018-1202-4581-b6cc-c09dd386f986_1' [4.00 MiB] inherit
  inactive          '/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488' [4.00 MiB] inherit

Based on their IDs they relate to a VHD that I don't need, as I only need to save six VHDs here: 2x 80 GB, 2x 100 GB, a 35 GB and a 300 GB.

 

 


Several times.

In the end I deleted the following with lvremove:

 

/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/VHD-cc9b4018-1202-4581-b6cc-c09dd386f986
/dev/VG_XenStorage-22465dab-b5a5-273e-17d8-9f64386946cb/inflate_cc9b4018-1202-4581-b6cc-c09dd386f986_406847488

 

That allowed me to replug the SR and mount the drives I needed.


Archived

This topic is now archived and is closed to further replies.
