Cannot Move VHD from LocalSR to iSCSI, "Object has been deleted" VDI Error


Trang Wardhani

Question

Hi community,

I need urgent help.

I am currently running XenServer 7.1 with a pool of 3 hosts and HA enabled.

I live-migrated VMs' VHDs from shared storage (iSCSI) to local storage on host-1 in the pool using XenCenter, and it succeeded.

I wanted to rebuild the storage and re-attach the shared iSCSI storage to the pool, so I detached and forgot the iSCSI SR.

I successfully re-attached the new shared storage to the pool and migrated some VMs' VHDs back from the local SR to iSCSI.

But unfortunately, there is one VM whose VHD I cannot move back from the local SR to iSCSI.
When I try to live-migrate that VM, the operation fails and XenCenter shows the error: "Object has been deleted. VDI: OpaqueRef:<numbers>".

I then shut the VM down, but now it won't start again; an error says the virtual disk could not be found, even though the VHD is still there on the local SR.

I took a snapshot of the VM beforehand, but it was also moved to the local SR.

 

My questions are:

1. Can I get the VDIs back? And why does the VM fail to start even though the VHD is there on the local SR?

2. Can anyone help me solve this issue, please? And why is it happening for only this one VM?

 

Here is the vdi-list output for the VM from the console:

[root@eng-svr-4 log]# xe vdi-list name-label=vhd-new-ares 
uuid ( RO)                : 7476144a-a3f6-4dd3-a137-6f95a7395caa
          name-label ( RW): vhd-new-ares
    name-description ( RW): 
             sr-uuid ( RO): bb3a2a96-fddb-2e72-edb1-169bf05f762e
        virtual-size ( RO): 85899345920
            sharable ( RO): false
           read-only ( RO): false


uuid ( RO)                : 9951eed5-5110-4ad5-a91e-9c95ef12b734
          name-label ( RW): vhd-new-ares
    name-description ( RW): 
             sr-uuid ( RO): bb3a2a96-fddb-2e72-edb1-169bf05f762e
        virtual-size ( RO): 85899345920
            sharable ( RO): false
           read-only ( RO): false

uuid ( RO)                : 785bbeb4-b98b-42df-8ccf-d45381561911
          name-label ( RW): vhd-new-ares
    name-description ( RW): 
             sr-uuid ( RO): bb3a2a96-fddb-2e72-edb1-169bf05f762e
        virtual-size ( RO): 85899345920
            sharable ( RO): false
           read-only ( RO): false


uuid ( RO)                : e6baa1ec-502f-4737-9432-3b282afc1fd0
          name-label ( RW): vhd-new-ares
    name-description ( RW): 
             sr-uuid ( RO): bb3a2a96-fddb-2e72-edb1-169bf05f762e
        virtual-size ( RO): 85899345920
            sharable ( RO): false
           read-only ( RO): false
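Four VDI records share the same name-label and the same SR here, which often points to leftover snapshot or migration artifacts. As a hedged sketch (the VM name "ares" is an assumption, substitute your VM's actual name-label), this is one way to check which record the VM's disk actually references:

```shell
# Sketch only: "ares" is an assumed VM name-label -- use your VM's real name.
VM_UUID=$(xe vm-list name-label=ares params=uuid --minimal)

# Show the VM's VBDs and the VDI each one points at
xe vbd-list vm-uuid="$VM_UUID" params=uuid,vdi-uuid,device,currently-attached

# A duplicate VDI record that no VBD references can be dropped from the
# XAPI database with vdi-forget (this does NOT delete the data on disk):
#   xe vdi-forget uuid=<unreferenced-vdi-uuid>
```

After a vdi-forget, rescanning the local SR (`xe sr-scan uuid=<sr-uuid>`) re-reads what is actually on disk and can re-introduce VDIs that still exist there.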

 

Thank you.

Attachments: error message.PNG, local storage.PNG, structure.PNG, VHD.PNG

4 answers to this question


Yeah, I have seen weirdness like that. I think the hosts get out of sync with each other. Try doing an xe-toolstack-restart on each host and verify that the time on all of them is close with ntpstat -S. If that doesn't work, it may require restarting all of the hosts. It is possible something is wrong with the VDI, but hopefully not.
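As a sketch, the suggestion above amounts to running the following on each host in the pool:

```shell
# Restart the XAPI toolstack on this host; running VMs are not affected,
# only the management services are restarted.
xe-toolstack-restart

# Check that this host's clock is NTP-synchronized and the offset is small
ntpstat -S
```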

 

--Alan--

 

 


Hi Alan,

 

I already recovered the VM by using the snapshot and creating a new VM from it. Thanks for that feature!
I suspect the Ares VM failed because, before maintenance, I created a snapshot with the memory parameter; I have 2 other VMs that show the same symptom and were backed up the same way.

But the other VMs' snapshots taken without the memory parameter were not affected, so hopefully this is not the reason and not related at all.

Never thought about restarting the toolstack, thanks Alan.
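For reference, the recovery described above was done through XenCenter's New VM From Snapshot; a related CLI path, rolling the existing VM back to a snapshot, can be sketched as follows (the snapshot name-label is a placeholder, and reverting overwrites the VM's current state):

```shell
# Placeholder snapshot name -- substitute the real snapshot's name-label
SNAP_UUID=$(xe snapshot-list name-label=ares-before-maintenance params=uuid --minimal)

# Roll the VM back to the state captured in the snapshot
xe snapshot-revert snapshot-uuid="$SNAP_UUID"
```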

 

Trang.
