
Why am I seeing a VHD chain with no snapshots?


Rob Hall

Question

Kind of related to my last thread: XenServer 7.3, attempting to use the migration wizard to migrate a VM to another cluster so I can upgrade the hypervisor. The migration failed, the source SR is now out of space, and it looks like snapshotting is the issue. A vhd-util scan reveals this:
 

vhd=VHD-bb9f4cb9-0c7f-445a-bb73-f192d14541f8 capacity=536870912000 size=537885933568 hidden=1 parent=none
   vhd=VHD-1652df38-e366-40ea-8c3d-e5f90c5429f4 capacity=536870912000 size=511705088 hidden=1 parent=VHD-bb9f4cb9-0c7f-445a-bb73-f192d14541f8
      vhd=VHD-5f509a7b-e50b-454d-bc5f-d65edff3c002 capacity=536870912000 size=537927876608 hidden=0 parent=VHD-1652df38-e366-40ea-8c3d-e5f90c5429f4
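
(For reference, on an LVM-based SR a tree like this comes from something along the lines of the command below; the volume group name follows the usual VG_XenStorage-<sr-uuid> convention, and the uuid is a placeholder.)

# list the VHD chains on an LVM-based SR (sr-uuid is a placeholder)
vhd-util scan -f -m "VHD-*" -l VG_XenStorage-<sr-uuid> -p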
 

However, XenCenter shows no snapshots for the VM.  Attempting a leaf coalesce returns "VM has no leaf-coalesceable VDIs".
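
(For context, a leaf coalesce is normally attempted via the coalesce-leaf host plugin, roughly as below; both uuids are placeholders.)

# ask the SM backend to leaf-coalesce this VM's VDIs (uuids are placeholders)
xe host-call-plugin host-uuid=<host-uuid> plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=<vm-uuid>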

Suggestions? Why did this happen?


3 answers to this question



Storage migration uses snapshots internally (they aren't visible in the UI). The reason for this is that you can't copy a disk file/volume while the VM is writing into it (it might write a block that has already been copied). So a snapshot is taken to give a read-only parent which can safely be copied, and the writeable leaf is set up with mirroring so all subsequent writes get sent to both the source and destination storage. Once the migration is completed the garbage collector "should" be able to collapse this tree, but that requires some amount of temporary free space in order to make the parent file/volume a complete superset of the child's data. It sounds like this is what now can't happen, as your storage is full.
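
(If some space can be freed on the SR, rescanning it should kick the garbage collector, and you can watch the coalesce in SMlog; a rough sketch, with the sr uuid as a placeholder:)

# trigger a GC pass on the source SR
xe sr-scan uuid=<sr-uuid>
# coalesce activity is logged here
tail -f /var/log/SMlog | grep -i coalesce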

10 hours ago, Mark Syms said:

Storage migration uses snapshots internally (they aren't visible in the UI). […]

There's enough temporary space to allow the garbage collector to do its thing, but it wouldn't run. The issue is moot now anyway; I got the VM to migrate to another cluster.

