I think this is a coalesce issue, but I'm not sure and want to check. I've recently installed a 4TB SSD in our server so I can move everyone off our SAN. The SSD shows 3.6TB of usable space, so I copied two servers over to it first:
First one, 100GB
Second one, 1TB
The SSD now shows 2.1TB used of 3.6TB (3.1TB allocated). I've tried running a Rescan and checked /var/log/SMlog:
[root@starbase1 ~]# tail -f /var/log/SMlog
Mar 16 17:36:32 starbase1 SMGC: [5391] Coalesced size = 998.562G
Mar 16 17:36:32 starbase1 SM: [5391] ['/usr/bin/vhd-util', 'check', '--debug', '-n', '/dev/VG_XenStorage-90e06808-a71b-99ca-3836-1f3272e772c7/VHD-5603b81f-8eff-4720-93ad-a56299e5d8e2']
Mar 16 17:36:34 starbase1 SM: [5391] pread SUCCESS
Mar 16 17:36:34 starbase1 SM: [5391] ['/usr/bin/vhd-util', 'check', '--debug', '-n', '/dev/VG_XenStorage-90e06808-a71b-99ca-3836-1f3272e772c7/VHD-8da6a29d-dd75-481a-adec-39e9ae751ad2', '-B']
Mar 16 17:36:34 starbase1 SM: [5391] pread SUCCESS
Mar 16 17:36:34 starbase1 SM: [5391] Update-on-resize: *8da6a29d[VHD](1000.000G//998.562G|ao) not attached on any slave
Mar 16 17:36:34 starbase1 SMGC: [5391] Running VHD coalesce on *5603b81f[VHD](1000.000G//5.000G|ao)
Mar 16 17:36:34 starbase1 SM: [31219] ['/usr/bin/vhd-util', 'query', '--debug', '-s', '-n', '/dev/VG_XenStorage-90e06808-a71b-99ca-3836-1f3272e772c7/VHD-5603b81f-8eff-4720-93ad-a56299e5d8e2']
Mar 16 17:36:34 starbase1 SM: [31219] pread SUCCESS
Mar 16 17:36:34 starbase1 SM: [31219] ['/usr/bin/vhd-util', 'coalesce', '--debug', '-n', '/dev/VG_XenStorage-90e06808-a71b-99ca-3836-1f3272e772c7/VHD-5603b81f-8eff-4720-93ad-a56299e5d8e2']
Mar 16 17:39:42 starbase1 SM: [32432] lock: opening lock file /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:42 starbase1 SM: [32432] LVMCache created for VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10
Mar 16 17:39:42 starbase1 SM: [32432] lock: opening lock file /var/lock/sm/.nil/lvm
Mar 16 17:39:42 starbase1 SM: [32432] ['/sbin/vgs', '--readonly', 'VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:42 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:42 starbase1 SM: [32432] lock: acquired /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:42 starbase1 SM: [32432] LVMCache: will initialize now
Mar 16 17:39:42 starbase1 SM: [32432] LVMCache: refreshing
Mar 16 17:39:42 starbase1 SM: [32432] lock: acquired /var/lock/sm/.nil/lvm
Mar 16 17:39:42 starbase1 SM: [32432] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:43 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:43 starbase1 SM: [32432] lock: released /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] lock: released /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:43 starbase1 SM: [32432] Entering _checkMetadataVolume
Mar 16 17:39:43 starbase1 SM: [32432] lock: acquired /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:43 starbase1 SM: [32432] sr_scan {'sr_uuid': 'bd7638e0-3979-cf9a-dd96-96b378007d10', 'subtask_of': 'DummyRef:|954e9292-80d5-4231-ae8f-c2fbebb81f79|SR.scan', 'args': [], 'host_ref': 'OpaqueRef:df66acc6-fed1-4b98-92a9-45579218275a', 'session_ref': 'OpaqueRef:299002a4-36a4-4de3-9aa2-ed6da8cd82b7', 'device_config': {'device': '/dev/mapper/mpathc', 'SRmaster': 'true'}, 'command': 'sr_scan', 'sr_ref': 'OpaqueRef:fa34f30e-02a7-472e-96cd-dac7039d45b6', 'local_cache_sr': '39fbd0f6-8f83-d600-e811-d43c1b04947e'}
Mar 16 17:39:43 starbase1 SM: [32432] LVHDSR.scan for bd7638e0-3979-cf9a-dd96-96b378007d10
Mar 16 17:39:43 starbase1 SM: [32432] lock: acquired /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:43 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:43 starbase1 SM: [32432] lock: released /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] LVMCache: refreshing
Mar 16 17:39:43 starbase1 SM: [32432] lock: acquired /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:43 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:43 starbase1 SM: [32432] lock: released /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] ['/usr/bin/vhd-util', 'scan', '-f', '-m', 'VHD-*', '-l', 'VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:43 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:43 starbase1 SM: [32432] lock: acquired /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:43 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:43 starbase1 SM: [32432] lock: released /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] lock: opening lock file /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/running
Mar 16 17:39:43 starbase1 SM: [32432] lock: tried lock /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/running, acquired: False (exists: True)
Mar 16 17:39:43 starbase1 SM: [32432] LVMCache created for VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10
Mar 16 17:39:43 starbase1 SM: [32432] LVMCache: will initialize now
Mar 16 17:39:43 starbase1 SM: [32432] LVMCache: refreshing
Mar 16 17:39:43 starbase1 SM: [32432] lock: acquired /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:43 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:43 starbase1 SM: [32432] lock: released /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] LVMCache: refreshing
Mar 16 17:39:43 starbase1 SM: [32432] lock: acquired /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:43 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:43 starbase1 SM: [32432] lock: released /var/lock/sm/.nil/lvm
Mar 16 17:39:43 starbase1 SM: [32432] ['/usr/bin/vhd-util', 'scan', '-f', '-m', 'VHD-*', '-l', 'VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:43 starbase1 SM: [32432] pread SUCCESS
Mar 16 17:39:43 starbase1 SMGC: [32432] SR bd76 ('SSD') (4 VDIs in 2 VHD trees):
Mar 16 17:39:43 starbase1 SMGC: [32432] 9136f128[VHD](100.000G//100.203G|ao)
Mar 16 17:39:43 starbase1 SMGC: [32432] *59c4eb73[VHD](1024.000G//1018.367G|ao)
Mar 16 17:39:43 starbase1 SMGC: [32432] *8d2130bb[VHD](1024.000G//15.664G|ao)
Mar 16 17:39:43 starbase1 SMGC: [32432] f908a214[VHD](1024.000G//1026.008G|ao)
Mar 16 17:39:43 starbase1 SMGC: [32432]
Mar 16 17:39:43 starbase1 SM: [32432] A GC instance already running, not kicking
Mar 16 17:39:43 starbase1 SM: [32432] lock: released /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:44 starbase1 SM: [32505] lock: opening lock file /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:44 starbase1 SM: [32505] LVMCache created for VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10
Mar 16 17:39:44 starbase1 SM: [32505] lock: opening lock file /var/lock/sm/.nil/lvm
Mar 16 17:39:44 starbase1 SM: [32505] ['/sbin/vgs', '--readonly', 'VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:44 starbase1 SM: [32505] pread SUCCESS
Mar 16 17:39:44 starbase1 SM: [32505] lock: acquired /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:44 starbase1 SM: [32505] LVMCache: will initialize now
Mar 16 17:39:44 starbase1 SM: [32505] LVMCache: refreshing
Mar 16 17:39:44 starbase1 SM: [32505] lock: acquired /var/lock/sm/.nil/lvm
Mar 16 17:39:44 starbase1 SM: [32505] ['/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:44 starbase1 SM: [32505] pread SUCCESS
Mar 16 17:39:44 starbase1 SM: [32505] lock: released /var/lock/sm/.nil/lvm
Mar 16 17:39:44 starbase1 SM: [32505] lock: released /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:44 starbase1 SM: [32505] Entering _checkMetadataVolume
Mar 16 17:39:44 starbase1 SM: [32505] lock: acquired /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
Mar 16 17:39:44 starbase1 SM: [32505] sr_update {'sr_uuid': 'bd7638e0-3979-cf9a-dd96-96b378007d10', 'subtask_of': 'DummyRef:|4b7b0d5c-dc64-4f20-92b7-447746cf0d61|SR.stat', 'args': [], 'host_ref': 'OpaqueRef:df66acc6-fed1-4b98-92a9-45579218275a', 'session_ref': 'OpaqueRef:5095999a-bb6d-4b63-840a-48d62a50a73a', 'device_config': {'device': '/dev/mapper/mpathc', 'SRmaster': 'true'}, 'command': 'sr_update', 'sr_ref': 'OpaqueRef:fa34f30e-02a7-472e-96cd-dac7039d45b6', 'local_cache_sr': '39fbd0f6-8f83-d600-e811-d43c1b04947e'}
Mar 16 17:39:44 starbase1 SM: [32505] ['/sbin/vgs', '--readonly', 'VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:44 starbase1 SM: [32505] pread SUCCESS
Mar 16 17:39:44 starbase1 SM: [32505] Setting virtual_allocation of SR bd7638e0-3979-cf9a-dd96-96b378007d10 to 1209259786240
Mar 16 17:39:44 starbase1 SM: [32505] lock: acquired /var/lock/sm/.nil/lvm
Mar 16 17:39:44 starbase1 SM: [32505] ['/sbin/vgs', '--noheadings', '--nosuffix', '--units', 'b', 'VG_XenStorage-bd7638e0-3979-cf9a-dd96-96b378007d10']
Mar 16 17:39:44 starbase1 SM: [32505] pread SUCCESS
Mar 16 17:39:44 starbase1 SM: [32505] lock: released /var/lock/sm/.nil/lvm
Mar 16 17:39:44 starbase1 SM: [32505] Updating metadata : {'objtype': 'sr', 'name_description': '', 'name_label': 'SSD'}
Mar 16 17:39:44 starbase1 SM: [32505] entering updateSR
Mar 16 17:39:44 starbase1 SM: [32505] lock: released /var/lock/sm/bd7638e0-3979-cf9a-dd96-96b378007d10/sr
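To pick out just the garbage-collector messages from that wall of SM output, the SMGC tag can be filtered for. This is a minimal sketch run against a small sample excerpt; on the host itself you would point grep at the real /var/log/SMlog:

```shell
# Filter the garbage-collector (SMGC) lines out of an SMlog excerpt.
# On a live host, replace the here-doc with the real log, e.g.:
#   grep 'SMGC' /var/log/SMlog | tail -n 20
smlog_excerpt() {
cat <<'EOF'
Mar 16 17:36:32 starbase1 SMGC: [5391] Coalesced size = 998.562G
Mar 16 17:36:34 starbase1 SM: [5391] pread SUCCESS
Mar 16 17:36:34 starbase1 SMGC: [5391] Running VHD coalesce on *5603b81f[VHD](1000.000G//5.000G|ao)
EOF
}

# Keep only the SMGC lines
smlog_excerpt | grep 'SMGC'
```

In the full log above, the "A GC instance already running, not kicking" line shows a garbage-collection pass was already in progress when the rescan ran.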
Reading this, I can see:
Mar 16 17:39:43 starbase1 SMGC: [32432] *59c4eb73[VHD](1024.000G//1018.367G|ao)
Mar 16 17:39:43 starbase1 SMGC: [32432] *8d2130bb[VHD](1024.000G//15.664G|ao)
Mar 16 17:39:43 starbase1 SMGC: [32432] f908a214[VHD](1024.000G//1026.008G|ao)
This seems to be the issue, but I'm guessing that while the machine is running the space can't be reclaimed? Am I right, or is there another way around this?
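As a sanity check on where the 2.1TB goes, the physical utilisation of each VHD in the GC's tree listing (the figure after the `//`) can be summed. A rough sketch using the four lines from the scan above:

```shell
# Sum the physical-utilisation column (after "//") of each VHD reported
# by the SMGC tree listing. 100.203 + 1018.367 + 15.664 + 1026.008
# comes to ~2160G, which matches the 2.1TB the SR reports as used.
tree_listing() {
cat <<'EOF'
9136f128[VHD](100.000G//100.203G|ao)
*59c4eb73[VHD](1024.000G//1018.367G|ao)
*8d2130bb[VHD](1024.000G//15.664G|ao)
f908a214[VHD](1024.000G//1026.008G|ao)
EOF
}

tree_listing | awk -F'//' '{ sub(/G\|.*/, "", $2); total += $2 }
                           END { printf "%.3fG\n", total }'
```

The starred entries are hidden base copies in a snapshot chain awaiting coalesce; the earlier log already shows a coalesce job running, so the space should come back once the GC finishes and the SR is rescanned.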
(Question asked by Nathan Platt)