
Invoke-XenVM -XenAction MigrateSend


Sergey Polyakov

Question

Using PowerShell Commands to Cross-Pool Migrate VMs

        Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru 

When the migration completes, the VM is rebooted with an unexpected error.

 

If I migrate using XenCenter instead, the migration completes without a reboot.
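One way to see what is actually failing is to keep the task handle that -Async -PassThru returns and read its record once the migration has finished. This is only a sketch reusing the variables from the command above (status and error_info are fields of the XenAPI task record):

    # capture the async task instead of letting it drop off the pipeline
    $task = Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -Async -PassThru
    Wait-XenTask -Task $task -SessionOpaqueRef $session1.opaque_ref -ShowProgress
    # read the finished task record for the failure details
    $rs = Get-XenTask -opaque_ref $task -SessionOpaqueRef $session1.opaque_ref
    $rs.status        # success or failure
    $rs.error_info    # error details reported by the pool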

 


20 answers to this question

Recommended Posts


xensource.txt (excerpt):

 

 

Dec 21 10:43:48 vdi-dgt-31 xapi: [debug||88 |org.xen.xapi.xenops.classic events D:4d964b047c26|xenops] xenopsd event: Updating VM be83f8f0-cb3b-216a-a68b-00936609eebd domid 40 memory target

 

Dec 21 10:45:43 vdi-dgt-31 xapi: [debug||90 ||xenops] Event on VM be83f8f0-cb3b-216a-a68b-00936609eebd; resident_here = true
Dec 21 10:45:44 vdi-dgt-31 xapi: [debug||90 ||xenops] xapi_cache: not updating cache for be83f8f0-cb3b-216a-a68b-00936609eebd


Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] Got QMP event, domain-40: RESET
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Current domains: 0, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 34, 35, 37, 40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Domain 40 may have changed state
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Removing watches for: domid 40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM be83f8f0-cb3b-216a-a68b-00936609eebd
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/kthread-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] QMP command for domid 40: {"execute":"query-migratable","id":"qmp-000219-40"}
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"] onto be83f8f0-cb3b-216a-a68b-00936609eebd:[  ]
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 ||xenops_server] Queue.pop returned ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"]
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 |events|xenops_server] Task 6807 reference events: ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"]
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/tapdisk-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/shutdown-done token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] received QMP response qmp-000219-40 (File "xc/device_common.ml", line 502, characters 49-56)
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] query-migratable precheck passed (domid=40)
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] xenstore-rm /local/domain/40/data/cant_suspend_reason
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/hotplug-status token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/params token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] Got QMP event, domain-40: RESET
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/state token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/kthread-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] QMP command for domid 40: {"execute":"query-migratable","id":"qmp-000220-40"}
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/tapdisk-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/shutdown-done token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] received QMP response qmp-000220-40 (File "xc/device_common.ml", line 502, characters 49-56)
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] query-migratable precheck passed (domid=40)
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] xenstore-rm /local/domain/40/data/cant_suspend_reason
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/hotplug-status token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/params token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/state token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/kthread-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/tapdisk-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/shutdown-done token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/hotplug-status token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/params token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/state token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenops] Cancelling watches for: domid 40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenops] removing device cache for domid 40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/attr token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/data/updated token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/data/ts token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/memory/target token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/memory/uncooperative token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/console/vnc-port token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/console/tc-port token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/qemu-pid-signal token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/control token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/device token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/rrd token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/vm-data token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/feature token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/vm/be83f8f0-cb3b-216a-a68b-000000000001/rtc/timeoffset token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/xenserver/attr token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 |events|xenops_server] VM.reboot be83f8f0-cb3b-216a-a68b-00936609eebd


 

The pool was clean-installed 11 days ago (CH 8.2, patched up to XS82E034).

 

 chronyc tracking

System time     : 0.000209621 seconds slow of NTP time
Last offset     : -0.000242432 seconds
RMS offset      : 0.000119454 seconds
Frequency       : 19.081 ppm slow
Residual freq   : -0.032 ppm
Skew            : 0.074 ppm
Root delay      : 0.002107475 seconds
Root dispersion : 0.053449847 seconds
Update interval : 1030.8 seconds
Leap status     : Normal

 

 


I don't know why, but when using:

 Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru | Wait-XenTask  -SessionOpaqueRef $session1.opaque_ref -ShowProgress

it completes with an error.

 

But when using:

$task = Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru
Wait-XenTask -Task $task -SessionOpaqueRef $session1.opaque_ref -ShowProgress

it works OK.
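Assuming the two-step form is what makes the difference, the working pattern can be wrapped in a small helper so every migration is started and waited on the same way. Just a sketch built from the commands above; the function name is made up:

    function Start-XenCrossPoolMigration {
        # hypothetical helper; reuses the pattern that works in this thread
        param ($vm, $dest, $vifmap, $vdimap, $session)
        $task = Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session.opaque_ref -Verbose -Async -PassThru
        Wait-XenTask -Task $task -SessionOpaqueRef $session.opaque_ref -ShowProgress
        # return the finished task record so the caller can check status / error_info
        Get-XenTask -opaque_ref $task -SessionOpaqueRef $session.opaque_ref
    }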

 

 


It also works OK if a delay is added after Invoke-XenVM:

 Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru | Wait-XenTask  -SessionOpaqueRef $session1.opaque_ref -ShowProgress

start-sleep -Seconds 180


             

 

$scriptjob = {
                    param ($psource,
                        $usource,
                        $pwsource,
                        $pdest,
                        $vmuuid,
                        $sruuid,
                        $xdadrs,
                        $xdui,
                        $xdchk
                    )
                    
                    $session1 = Connect-XenServer -url $psource -UserName $usource -Password $pwsource -PassThru
                    $session2 = Connect-XenServer -url $pdest -UserName $usource -Password $pwsource -PassThru
                    
                    $vm = Get-XenVM -uuid $vmuuid -SessionOpaqueRef $session1.opaque_ref
                    
                    #example; VDIs should be resolved from $vm.VBDs
                    $vdirefs = @()
                    foreach ($itemvbd in $vm.VBDs)
                    {
                        $vbdvm = Get-XenVBD -opaque_ref $itemvbd.opaque_ref -SessionOpaqueRef $session1.opaque_ref
                        if ($vbdvm.type -eq "Disk")
                        {
                            $vdirefs += Get-XenVDI -opaque_ref $vbdvm.VDI.opaque_ref -SessionOpaqueRef $session1.opaque_ref | ConvertTo-XenRef
                        }
                        
                    }
                    
                    #$vdirefs = Get-XenVDI -Name "MyVmDisk" -SessionOpaqueRef $session1.opaque_ref | ConvertTo-XenRef
                    $vifref = $vm.VIFs[0]
                    $vmvif = Get-XenVIF -opaque_ref $vifref.opaque_ref -SessionOpaqueRef $session1.opaque_ref
                    
                    $VMNetname = Get-XenNetwork -opaque_ref $vmvif.network.opaque_ref -SessionOpaqueRef $session1.opaque_ref
                    
                    
                    # pick the destination host with the fewest resident VMs, skipping the pool master
                    $host2s = Get-XenHost -SessionOpaqueRef $session2.opaque_ref
                    $hcount = 1000
                    $master = (Get-XenPool -SessionOpaqueRef $session2.opaque_ref).master.opaque_ref
                    foreach ($itemh in $host2s)
                    {
                        if (($itemh.resident_VMs.count -lt $hcount) -and ($itemh.opaque_ref -ne $master))
                        {
                            $hcount = $itemh.resident_VMs.count
                            $host2= $itemh
                        }
                    }
                    
                    $net2 = Get-XenNetwork -SessionOpaqueRef $session2.opaque_ref -Name "10.44.140.0/24"
                    # prepare the chosen host to receive the migration; $dest is the token passed to MigrateSend
                    $dest = Invoke-XenHost -XenHost $host2 -XenAction MigrateReceive -SessionOpaqueRef $session2.opaque_ref -Network $net2 -PassThru
                    
                    $net3ref = Get-XenNetwork -SessionOpaqueRef $session2.opaque_ref -Name $VMNetname.name_label | ConvertTo-XenRef
                    $sr2ref = Get-XenSR -uuid $sruuid -SessionOpaqueRef $session2.opaque_ref | ConvertTo-XenRef
                    
                    #the casts are needed because ConvertTo-XenRef internally uses the framework's
                    #WriteObject method which wraps the object we want into a PSObject
                    $vdimap = @{ }
                    foreach ($vdiref in $vdirefs)
                    {
                        $vdimap.Add([XenAPI.XenRef[XenAPI.VDI]]$vdiref, [XenAPI.XenRef[XenAPI.SR]]$sr2ref);
                    }
                    
                    $vifmap = @{ $vifref = [XenAPI.XenRef[XenAPI.Network]]$net3ref }
                    
                    # Options: passing @{ "copy" = "true" } copies a halted VM cross-pool;
                    # leave the options empty to migrate the VM instead
                    
                    $task=Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap `
                                 -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref `
                                 -verbose -Async -PassThru
                    Wait-XenTask -Task $task -SessionOpaqueRef $session1.opaque_ref -ShowProgress
                    $rs = Get-XenTask -opaque_ref $task -SessionOpaqueRef $session1.opaque_ref
                      
                    #$machine = Get-BrokerMachine -HostedMachineName $vm.name_label -MaxRecordCount 1 -AdminAddress $xdadrs
                    if ($xdchk)
                    {
                        $machine = Get-BrokerMachine -HostedMachineName $vm.name_label -MaxRecordCount 1 -AdminAddress $xdadrs | Set-BrokerMachine -HypervisorConnectionUid $xdui -ErrorVariable err
                        if ($err)
                        { }
                        else
                        {
                        #    Write-Progress ""
                        }
                    }
                    
                    #Start-Sleep -Seconds 120
                    Disconnect-XenServer -Session $session1
                    Disconnect-XenServer -Session $session2
                }
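The script block above can be launched as a background job, which also makes it easy to keep only a few migrations in flight at once. A minimal sketch, assuming the caller has its own variables matching the param() list (Start-Job, Get-Job and Start-Sleep are standard PowerShell):

    # start one migration as a background job
    $job = Start-Job -ScriptBlock $scriptjob -ArgumentList $psource, $usource, $pwsource, $pdest, $vmuuid, $sruuid, $xdadrs, $xdui, $xdchk

    # before queuing the next VM, wait until fewer than 3 migration jobs are running
    while ((Get-Job -State Running).Count -ge 3) {
        Start-Sleep -Seconds 30
    }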
 

35 minutes ago, Sergey Polyakov said:

By default, XenServer only allows live-migrating 3 VMs at a time.

I needed to transfer about 30 virtual machines between two pools.

To solve this, I wrote a PowerShell script that starts the next migration automatically.

Why would that be an issue, provided that the other migration jobs are just queued up and not executed until a free slot is available? Just curious how XS/CH handles requests for large numbers of migrations via CLI/PowerShell. After all, in XenCenter you can queue up a large number of VM migrations and it all seems to be handled just fine.

37 minutes ago, Tobias Kreidl said:

Why would that be an issue, provided that the other migration jobs are just queued up and not executed until a free slot is available? Just curious how XS/CH handles requests for large numbers of migrations via CLI/PowerShell. After all, in XenCenter you can queue up a large number of VM migrations and it all seems to be handled just fine.

1 hour ago, Carsten Eckert said:

Hey Sergey,

 

thank you so much for sharing your script with me / us! That will help me a lot.

 

Stay safe and healthy!

 

Best regards from Erfurt, Germany!

Maybe I'm doing something wrong, but I cannot figure out how to start a fourth migration.

XenCenter does not allow starting another migration if 3 are already running.

 

3 hours ago, Sergey Polyakov said:

Maybe I'm doing something wrong, but I cannot figure out how to start a fourth migration.

XenCenter does not allow starting another migration if 3 are already running.

 

Which version and edition of XS/CH are you running? This was not an issue at least with 7.3 and Enterprise as far as I recall.

7 hours ago, Sergey Polyakov said:

Maybe I'm doing something wrong, but I cannot figure out how to start a fourth migration.

XenCenter does not allow starting another migration if 3 are already running.

 

Ah, that's different for sure. I never tried that with simultaneous VM motion requests across pools; I just scripted it to do one at a time. Honestly, resource usage goes up so fast that running more than two concurrently already bogs things down enough to give you minimal gains. Make sure dom0 has plenty of memory and vCPUs to work with; you can monitor the load with xentop.

