
Can I delete two "base copy" VDIs in the tree?


Pavel Hladik

Question

Hi, we have a problem on one of our HBA storage repositories.

 

When we do a storage re-scan (or another operation), we see the following in /var/log/SMlog:

Jan 18 19:14:35 host-1 SM: [5298] Found inflate journal e64f7449-6324-4579-8e70-a511024162d7, deflating /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7 to 58720256
Jan 18 19:14:35 host-1 SM: [5298] ['/usr/bin/vhd-util', 'modify', '--debug', '-s', '58720256', '-n', '/dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7']
Jan 18 19:14:35 host-1 SM: [5298] FAILED in util.pread: (rc 22) stdout: 'failed to set physical size to 58720256: -22
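The "inflate journal" in that message appears to be the resize journal that the SM backend keeps as an inflate_<uuid>_<size> LV inside the SR's volume group, so a quick way to list it with plain LVM tools might be (just a sketch):

$ lvs VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff | grep inflate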

When we run:

$ /usr/bin/vhd-util scan -f -m VHD-* -l VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff -p -a | grep -B 1 e64f7449-6324-4579-8e70-a511024162d7
vhd=VHD-7b85b983-3ff0-4645-9d41-8ffdc3d75761 capacity=21474836480 size=16840130560 hidden=0 parent=none
   vhd=VHD-e64f7449-6324-4579-8e70-a511024162d7 capacity=21474836480 size=21525168128 hidden=0 parent=VHD-7b85b983-3ff0-4645-9d41-8ffdc3d75761
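As a side check, one way to confirm that nothing else in the SR still chains off either of these two might be to grep the same scan output for children (a sketch; besides the e64f7449 line itself, nothing should show up):

$ /usr/bin/vhd-util scan -f -m "VHD-*" -l VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff -p -a | grep -E 'parent=VHD-(e64f7449-6324-4579-8e70-a511024162d7|7b85b983-3ff0-4645-9d41-8ffdc3d75761)'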

From the scan output we get the VHD's parent, and when we run:

$ xe vdi-list sr-uuid=500e39e1-fae3-9356-f5b4-a45ad7faadff | grep -A 6 7b85b983-3ff0-4645-9d41-8ffdc3d75761
uuid ( RO)                : 7b85b983-3ff0-4645-9d41-8ffdc3d75761
          name-label ( RW): base copy
    name-description ( RW):
             sr-uuid ( RO): 500e39e1-fae3-9356-f5b4-a45ad7faadff
        virtual-size ( RO): 21474836480
            sharable ( RO): false
           read-only ( RO): true

$ xe vdi-list sr-uuid=500e39e1-fae3-9356-f5b4-a45ad7faadff | grep -A 6 e64f7449-6324-4579-8e70-a511024162d7
uuid ( RO)                : e64f7449-6324-4579-8e70-a511024162d7
          name-label ( RW): base copy
    name-description ( RW):
             sr-uuid ( RO): 500e39e1-fae3-9356-f5b4-a45ad7faadff
        virtual-size ( RO): 21474836480
            sharable ( RO): false
           read-only ( RO): false

We see that both the problematic VDI e64f7449-6324-4579-8e70-a511024162d7 and its parent 7b85b983-3ff0-4645-9d41-8ffdc3d75761 are labelled "base copy".
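One more check before deleting anything might be to confirm that no VBD (and therefore no VM) still references either VDI; a sketch with standard xe commands, where empty output would mean nothing attaches them:

$ xe vbd-list vdi-uuid=e64f7449-6324-4579-8e70-a511024162d7
$ xe vbd-list vdi-uuid=7b85b983-3ff0-4645-9d41-8ffdc3d75761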

 

So, is it safe to delete those VDIs?

 

Thank you,
Pavel



Hi Tobias! Thanks for jumping in.

 

uuid ( RO)                    : e64f7449-6324-4579-8e70-a511024162d7
              name-label ( RW): base copy
        name-description ( RW):
           is-a-snapshot ( RO): false
             snapshot-of ( RO): <not in database>
               snapshots ( RO):
           snapshot-time ( RO): 19700101T00:00:00Z
      allowed-operations (SRO): generate_config; update; forget; destroy; snapshot; resize; copy; clone
      current-operations (SRO):
                 sr-uuid ( RO): 500e39e1-fae3-9356-f5b4-a45ad7faadff
           sr-name-label ( RO): xen1
               vbd-uuids (SRO):
         crashdump-uuids (SRO):
            virtual-size ( RO): 21474836480
    physical-utilisation ( RO): 58720256
                location ( RO): e64f7449-6324-4579-8e70-a511024162d7
                    type ( RO): System
                sharable ( RO): false
               read-only ( RO): false
            storage-lock ( RO): false
                 managed ( RO): false
     parent ( RO) [DEPRECATED]: <not in database>
                 missing ( RO): false
            is-tools-iso ( RO): false
            other-config (MRW):
           xenstore-data (MRO):
               sm-config (MRO): vhd-blocks: eJxjYBgF+EAAlOYiVoM8LgkFsux3IEsX9YADQwKYTj/BwMDBCBFjZAISLNjVMzYwcADJA9R2h/z/+v8g8IHaBhMADnS2Dx1UDLD9CQMcAgk0Nh8ANyENQw==; vhd-parent: 7b85b983-3ff0-4645-9d41-8ffdc3d75761; vdi_type: vhd
                 on-boot ( RW): persist
           allow-caching ( RW): false
         metadata-latest ( RO): false
        metadata-of-pool ( RO): <not in database>
                    tags (SRW):
             cbt-enabled ( RO): false
               
uuid ( RO)                    : 7b85b983-3ff0-4645-9d41-8ffdc3d75761
              name-label ( RW): base copy
        name-description ( RW):
           is-a-snapshot ( RO): false
             snapshot-of ( RO): <not in database>
               snapshots ( RO):
           snapshot-time ( RO): 19700101T00:00:00Z
      allowed-operations (SRO): generate_config; update; forget; destroy; snapshot; resize; copy; clone
      current-operations (SRO):
                 sr-uuid ( RO): 500e39e1-fae3-9356-f5b4-a45ad7faadff
           sr-name-label ( RO): xen1
               vbd-uuids (SRO):
         crashdump-uuids (SRO):
            virtual-size ( RO): 21474836480
    physical-utilisation ( RO): 16840130560
                location ( RO): 7b85b983-3ff0-4645-9d41-8ffdc3d75761
                    type ( RO): System
                sharable ( RO): false
               read-only ( RO): true
            storage-lock ( RO): false
                 managed ( RO): false
     parent ( RO) [DEPRECATED]: <not in database>
                 missing ( RO): false
            is-tools-iso ( RO): false
            other-config (MRW):
           xenstore-data (MRO):
               sm-config (MRO): vhd-blocks: eJx7wQAB/6Hgz384aACJ35HgYZIBYh4JDiYeASBmgGAOBgYgBkMozQDG6EABxuBnbGCQP4Amy4hFB+nAgA+rMPsHUgyx/38AmVsBJWoY5MGBcf8/KqCUTyx4wFAPJOvJ1I3bfnQ+fvv//6DcBeQBmP/vD6T9/+pxykNk/jFAVTSgqbwP4f+BCT8n1f4ftPE/ujtgPvwO5T+B8v9/gPjsQP1/7ACXOLXAB3bK9NdDKFj6fQCl4TEKAO6/3uo=; vdi_type: vhd
                 on-boot ( RW): persist
           allow-caching ( RW): false
         metadata-latest ( RO): false
        metadata-of-pool ( RO): <not in database>
                    tags (SRW):
             cbt-enabled ( RO): false
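For reference, the individual fields that matter most here (managed, vbd-uuids and the sm-config vhd-parent) could also be read one at a time; a sketch with the same UUID:

$ xe vdi-param-get uuid=e64f7449-6324-4579-8e70-a511024162d7 param-name=managed
$ xe vdi-param-get uuid=e64f7449-6324-4579-8e70-a511024162d7 param-name=vbd-uuids
$ xe vdi-param-get uuid=e64f7449-6324-4579-8e70-a511024162d7 param-name=sm-config param-key=vhd-parent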

And I see that there is an existing logical volume record, but the device itself does not exist under the /dev/ path. There is only a leftover inflate attempt.

$ lvdisplay | grep -A 5 -B 2 e64f7449-6324-4579-8e70-a511024162d7

  --- Logical volume ---
  LV Path                /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7
  LV Name                VHD-e64f7449-6324-4579-8e70-a511024162d7
  VG Name                VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff
  LV UUID                XJRHYo-32mN-xLBf-jYYv-Nf4J-Z1fq-rkmG7S
  LV Write Access        read/write
  LV Creation host, time host-1, 2020-12-05 12:39:31 +0100
  LV Status              NOT available

  --- Logical volume ---
  LV Path                /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/leaf_e64f7449-6324-4579-8e70-a511024162d7_1
  LV Name                leaf_e64f7449-6324-4579-8e70-a511024162d7_1
  VG Name                VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff
  LV UUID                GyAFFd-QOae-5k3E-9WU3-ipxB-opd5-XtVFkz
  LV Write Access        read/write
  LV Creation host, time host-1, 2021-01-17 20:46:44 +0100
  LV Status              NOT available

  --- Logical volume ---
  LV Path                /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/inflate_e64f7449-6324-4579-8e70-a511024162d7_58720256
  LV Name                inflate_e64f7449-6324-4579-8e70-a511024162d7_58720256
  VG Name                VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff
  LV UUID                vLirha-Zzep-LX7I-RUk3-bn7N-T4XL-aR3dJT
  LV Write Access        read/write
  LV Creation host, time host-1, 2021-01-17 20:46:45 +0100
  LV Status              available

$ ls -la /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7
ls: cannot access /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7: No such file or directory

$ ls -la /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/leaf_e64f7449-6324-4579-8e70-a511024162d7_1
ls: cannot access /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/leaf_e64f7449-6324-4579-8e70-a511024162d7_1: No such file or directory

$ ls -la /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/inflate_e64f7449-6324-4579-8e70-a511024162d7_58720256
lrwxrwxrwx 1 root root 125 Jan 17 20:46 /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/inflate_e64f7449-6324-4579-8e70-a511024162d7_58720256 -> /dev/mapper/VG_XenStorage--500e39e1--fae3--9356--f5b4--a45ad7faadff-inflate_e64f7449--6324--4579--8e70--a511024162d7_58720256

And similarly for the parent VHD:

$ lvdisplay | grep -A 5 -B 2 VHD-7b85b983-3ff0-4645-9d41-8ffdc3d75761

  --- Logical volume ---
  LV Path                /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-7b85b983-3ff0-4645-9d41-8ffdc3d75761
  LV Name                VHD-7b85b983-3ff0-4645-9d41-8ffdc3d75761
  VG Name                VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff
  LV UUID                PL4NLd-1NSc-dsZM-HFXD-Wip7-qtQF-PzJyNX
  LV Write Access        read/write
  LV Creation host, time host-1, 2020-12-03 22:40:14 +0100
  LV Status              NOT available

$ ls -la /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-7b85b983-3ff0-4645-9d41-8ffdc3d75761
ls: cannot access /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-7b85b983-3ff0-4645-9d41-8ffdc3d75761: No such file or directory
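The "LV Status NOT available" flag would explain the missing /dev nodes. If we wanted to inspect the VHD headers before removing anything, one possible approach (just a sketch, we did not end up needing it) would be to activate the LV temporarily:

$ lvchange -ay /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7    # make the device node appear
$ vhd-util query -n /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7 -p    # read the parent from the VHD header
$ lvchange -an /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7    # deactivate it again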

 

 


Tobias, thank you for helping me. I did these additional steps to solve the issue completely:

 

1. Remove the unwanted VDIs and their non-active LVs (a quick verification check is sketched after the commands below).

$ xe vdi-forget uuid=e64f7449-6324-4579-8e70-a511024162d7
$ xe vdi-forget uuid=7b85b983-3ff0-4645-9d41-8ffdc3d75761

$ lvremove /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-e64f7449-6324-4579-8e70-a511024162d7
$ lvremove /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/VHD-7b85b983-3ff0-4645-9d41-8ffdc3d75761
$ lvremove /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/leaf_e64f7449-6324-4579-8e70-a511024162d7_1
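To confirm the volumes are really gone, and to check whether the leftover inflate_* journal LV from the listing above also needs cleaning up, something like this might help (empty output means nothing is left):

$ lvs VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff | grep -E 'e64f7449|7b85b983'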

2. But there was another error message, "Introduce VDI 7b85b983-3ff0-4645-9d41-8ffdc3d75761 as it is present in metadata and not in XAPI", so I had to rename the MGT metadata record.

$ lvscan | grep MGT | grep 500e39e1-fae3-9356-f5b4-a45ad7faadff
  ACTIVE            '/dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/MGT' [4.00 MiB] inherit

$ lvrename /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/MGT /dev/VG_XenStorage-500e39e1-fae3-9356-f5b4-a45ad7faadff/MGT-backup

 

3. And recreate it with an sr-scan.

$ xe sr-scan uuid=500e39e1-fae3-9356-f5b4-a45ad7faadff
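A quick way to confirm that the scan recreated the MGT volume and that the two "base copy" VDIs are gone might be:

$ lvscan | grep MGT | grep 500e39e1-fae3-9356-f5b4-a45ad7faadff
$ xe vdi-list sr-uuid=500e39e1-fae3-9356-f5b4-a45ad7faadff | grep -E 'e64f7449|7b85b983'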

 

