
"Gave up on leaf coalesce..." on CH8.1

Jacob Degeling


Hi All,


I've been paying attention to SMlog of late, after CH8.0 and its garbage collection process caused issues (CA-327382). Basically it would find it couldn't coalesce VHDs, mark them for offline coalescing, and then promptly go around again and try to coalesce them, ignoring the offline-coalesce flag. CH8.1 fixes that issue. We stopped using snapshots and CBT for our backup software (Alike) because of this, which is disappointing because backups take way longer now. Alike doesn't have support for in-VM backups for Linux unfortunately, so those VMs still use the snapshot method. Yesterday I turned VM snapshotting back on for some hourly backups. This leads me to my issue.


For background I have 2 SRs of 10.9TB each, one 4.2TB used (5.1TB allocated) and the other 4.9TB used (5.2TB allocated). I am getting these messages in SMlog now, not many but enough: 

Snapshot-coalesce did not help, previous 29540864, current 86274560. Abandoning attempts
Set leaf-coalesce = offline for 68eb7e07[VHD](128.000G/82.278M/128.258G|a)
Removed leaf-coalesce from 68eb7e07[VHD](128.000G/82.278M/128.258G|a)


gc: EXCEPTION <class 'SR.SROSError'>, Gave up on leaf coalesce after leaf grew bigger than before snapshot taken [opterr=VDI=68eb7e07[VHD](128.000G/82.278M/128.258G|a)]

So GC tries a few times to coalesce a VHD tree, then marks the child VHD as needing offline coalescing, throws the above exception, and stops the GC process. During this time you can see the VHD tree grow by one VHD, inserted between the original parent and the original child. This new VHD doesn't stick around, however, which is good.


From reading through the code for the storage manager (SM) on GitHub, you can see that VHDs need to be coalesced offline for two reasons: not enough space on the SR, or being "unable to keep up with VDI". I guess that this is what is happening in my case, given the error message (leaf grew bigger). It is interesting, however, because the affected VMs are not high-use at all: one is our Jira Service Desk server, and one is our time clock management software (IIS, SQL Server) that handles people clocking in and out.
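Those two conditions could be sketched like this (illustrative Python only; the names here are made up and are not SM's actual API in cleanup.py):

```python
# Hypothetical sketch of the two reasons a leaf gets flagged for
# offline coalesce. Names and thresholds are illustrative, not SM's.

def needs_offline_coalesce(sr_free_bytes, leaf_size_bytes,
                           leaf_shrank_during_live_attempts):
    # Reason 1: not enough free space on the SR to hold the
    # temporary data a live coalesce needs.
    if sr_free_bytes < leaf_size_bytes:
        return True
    # Reason 2: "unable to keep up with VDI" -- the VM keeps
    # writing faster than the GC can drain the leaf.
    if not leaf_shrank_during_live_attempts:
        return True
    return False
```

In the "leaf grew bigger" case from the log, it is the second condition that fires, even though the SR has plenty of free space.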


Anyway, it's a pain because I know there is a performance hit with VHD trees, which I want to avoid, and I just want to be able to use snapshots for backups without worrying about VHD trees growing too large. The pain comes from having to suspend or shut down the VMs and then scan the SR to kick off the GC process. It doesn't usually take too long, but it is a hassle nevertheless.
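That manual workaround could be scripted along these lines. The `xe vm-shutdown`, `xe sr-scan`, and `xe vm-start` subcommands are real, but the UUIDs are placeholders and this helper is my own illustrative sketch, not part of any Citrix tooling:

```python
# Hedged sketch of the manual offline-coalesce workaround: shut the
# VM down, rescan the SR to kick off the GC, then start the VM again.
import subprocess

def offline_coalesce_commands(vm_uuid, sr_uuid):
    # Build the command sequence; vm-suspend could be used instead
    # of vm-shutdown if a suspend is sufficient for your setup.
    return [
        ["xe", "vm-shutdown", f"uuid={vm_uuid}"],
        ["xe", "sr-scan", f"uuid={sr_uuid}"],   # triggers a GC run
        ["xe", "vm-start", f"uuid={vm_uuid}"],
    ]

def run_offline_coalesce(vm_uuid, sr_uuid, dry_run=True):
    for cmd in offline_coalesce_commands(vm_uuid, sr_uuid):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```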


I've attached the SMlog from today with everything in it.


Any suggestions or tips are most welcome. Sorry about the length of the post. If I have left anything out (likely) please ask me and I will get you any information you need.



2 answers to this question



That does not strictly indicate a problem.


But, what it does mean is that your VM is writing data faster than the GC process can coalesce it to the parent.


This line



Snapshot-coalesce did not help, previous 29540864, current 86274560. Abandoning attempts


says that whilst the GC was coalescing 29M of data, the VM wrote 86M. The aim of the snapshot-coalesce is to get the leaf file small enough that the VM can be temporarily paused whilst the last bit of data is coalesced. In order to do this, the leaf must get smaller over time; in this case it is not doing so, and so the operation aborts.
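That abort rule can be sketched as a small loop (my own illustrative Python, not SM's real code in cleanup.py, which is more involved):

```python
def snapshot_coalesce(leaf_sizes, max_attempts=3):
    """Repeatedly snapshot-coalesce; abandon if the leaf grows.

    leaf_sizes simulates the leaf's observed size after each pass
    (in real life this is driven by the VM's write rate).
    """
    previous = leaf_sizes[0]
    for current in leaf_sizes[1:max_attempts + 1]:
        if current >= previous:
            # Matches the SMlog message:
            # "Snapshot-coalesce did not help, previous X, current Y"
            return ("abandon", previous, current)
        previous = current
    # Leaf kept shrinking: pause the VM and coalesce the remainder.
    return ("pause-and-finish", previous, previous)

# The sizes from the log above: the leaf grew rather than shrank.
print(snapshot_coalesce([29540864, 86274560]))
```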


Thanks Mark, that's what I thought. What puzzles me is that they are not high usage VMs, although 86MB isn't a lot of data.


Keeping an eye on this today, it does seem that these VHDs have been coalesced. So I guess that at the original time it couldn't coalesce, but on subsequent runs it has managed to. Which is good. I can probably stop worrying now :-S

