chkdsk runs after setting vDisk in "Standard Image" mode on PVS 7.1

itsw


Hi community


In our development environment I have a serious problem using a vDisk for XenApp 6.5 on W2K8R2SP1.


After creating the vDisk on the master device (a VM on XenServer) I change the mode of the vDisk to Standard on the PVS 7.1 and assign it to a target device (a VM as well).


As soon as I boot the target device, a chkdsk message appears automatically stating ".... One of your disks needs to be checked for consistency. You may cancel the disk check...."

Chkdsk runs to the end and the device reboots. Then chkdsk runs again, and again...

If I interrupt chkdsk, the target device boots correctly to the OS.


Changing the vDisk to Private mode, running chkdsk to the end, booting again (this time without a chkdsk message) to the OS, shutting down, and changing the vDisk back to "Standard Image" mode reproduces the problem.


Can anyone help me out here?




6 answers to this question

After spending a lot of time on it, I have been able to narrow this issue down a bit, but it is still an open case.

Here are the details:
This problem appears only in our development environment, where the PVS 7.1 servers are vSphere 5 based VMs.
I don't experience this behaviour in our production environment, where the PVS servers are physical.

Furthermore, I discovered on the PVS server (W2K12) that as soon as I change the mode of the vDisk from Private (read/write) to Standard (read only), two pop-ups appear stating
"System Reserved (F:). There's a problem with this drive. Scan the drive now and fix it." and the same message for drive G:.
F: and G: are the next free drive letters on the PVS server at that time. F: points to the "System Reserved" partition and G: points to the OS partition, which is the C: drive inside the vDisk.
These messages indicate that the PVS server mounts the vDisk(s) to prepare them for read-only mode, and that chkdsk will run at the next
start of the vDisk.

My workaround is the following:
1. Note the volume identifier of the "System Reserved" drive while the vDisk is in the chkdsk procedure. It looks like
\\?\Volume{008f092f-d6c8-11e3-85a7-122ea843dfca}. Interrupt the chkdsk procedure and shut the vDisk down.
2. Set the vDisk back to Private (r/w) mode.
3. Boot the vDisk to the OS, open the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager and
change the value of BootExecute from "autocheck autochk *"
to "autocheck autochk /k:C /k:D /k:\\?\Volume{008f092f-d6c8-11e3-85a7-122ea843dfca} *"
(I also included drive D: because at run time we have a local D: drive for the write cache).
4. Shut the OS down and change the mode to Standard again. If you watch Disk Management on the PVS server while changing the mode, you may see the vDisk drives being mounted and dismounted.
5. Boot the vDisk; if everything is right, chkdsk should no longer run at startup.
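To make step 3 less error-prone, the BootExecute line can be assembled programmatically. Here is a minimal Python sketch (my own, not from the original post) that builds the autochk exclusion string; the volume GUID is the example noted in step 1 and must be replaced with the one from your own environment.

```python
# Sketch: assemble the BootExecute autochk line from step 3.
# The /k: switch tells autochk to skip the named drive or volume at boot;
# the trailing * still checks every volume that is not excluded.
# The GUID below is the example from step 1 -- substitute your own.

def build_boot_execute(excluded):
    """Return the autocheck line that skips the given drives/volumes."""
    switches = " ".join(f"/k:{vol}" for vol in excluded)
    return f"autocheck autochk {switches} *"

volumes = [
    "C",  # vDisk OS drive
    "D",  # local write-cache drive
    r"\\?\Volume{008f092f-d6c8-11e3-85a7-122ea843dfca}",  # System Reserved
]
print(build_boot_execute(volumes))
# -> autocheck autochk /k:C /k:D /k:\\?\Volume{008f092f-d6c8-11e3-85a7-122ea843dfca} *
```

The generated string is what you paste into the BootExecute value (REG_MULTI_SZ) in step 3.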

I wish you a happy workaround ;-)
My conclusion is that the file system of vSphere 5 is the root cause of this problem. I couldn't find a real solution so far, and since it only happens in our development environment I don't want to spend more time on it.


The steps below resolved the issue for me. Both the F: and G: volumes return "not dirty", and this process does not modify the vhdx file in any way, so is something going wrong in the streaming?

  1. Mount the vhdx through Windows Explorer (after changing to Standard mode)
  2. Query the dirty bit using fsutil (optional)
    fsutil dirty query F:
    fsutil dirty query G:
  3. Detach the vhdx through Disk Management
  4. Boot the target device
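The fsutil check in step 2 can also be scripted. This is a small Python sketch of my own (assuming a Windows host with fsutil on PATH, and the vhdx mounted as F: and G: as in the post); the output-parsing helper is kept separate so it can be checked without a real volume.

```python
# Sketch: query the NTFS dirty bit on the mounted vhdx volumes (step 2).
# Assumes a Windows host with fsutil available; drive letters F: and G:
# correspond to the mounted vhdx volumes described in the post.
import shutil
import subprocess

def parse_fsutil_dirty(output):
    """Interpret fsutil's verdict: it reports '... is Dirty' or '... is NOT Dirty'."""
    return "not dirty" not in output.lower()

def volume_is_dirty(drive):
    """Run `fsutil dirty query <drive>` and parse the result."""
    out = subprocess.run(
        ["fsutil", "dirty", "query", drive],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_fsutil_dirty(out)

if shutil.which("fsutil"):  # only meaningful on Windows
    for drive in ("F:", "G:"):
        print(drive, "dirty" if volume_is_dirty(drive) else "not dirty")
```

If either volume reports dirty here, autochk will insist on checking it at the next boot, which matches the behaviour described in the question.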