Sergey Polyakov


Posts posted by Sergey Polyakov

  1. My steps (the order of the steps is important):

    1. Uninstall Citrix VM Tools with NO reboot

    2. Open "Device Manager"-> "Storage controllers"

    3. Uninstall the device "XenServer PV Storage Host Adapter"

    4. Check the "Delete the driver.." checkbox, press Uninstall, no reboot

    5. Uninstall the following devices under "System devices":

     - XenServer Interface

     - XenServer PV Network Class

     - XenServer PV BUS (002)

     - XenServer PV BUS (C02)

    selecting "Delete the driver.." for each, and no reboot

    6. Open the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services

    7. Delete the following keys (a scripted variant is sketched after these steps):

    - xenagent

    - xenbus

    - xenbus_monitor

    - xendisk

    - XENFILT

    - xeniface

    - XenInstall

    - xennet

    - XenSvc

    - xenvbd

    - xenvif

    8. Reboot the VM

    9. Reboot again when the message "XenServer PV Host Adapter needs to restart ..." appears

    10. Install Windows
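
    For steps 6-7, a minimal PowerShell sketch of the registry cleanup (run from an elevated prompt; it only removes the service keys listed above and simply skips any that are already gone):

     # hedged sketch for steps 6-7: delete the leftover Xen PV service keys
     $services = 'xenagent','xenbus','xenbus_monitor','xendisk','XENFILT',
                 'xeniface','XenInstall','xennet','XenSvc','xenvbd','xenvif'
     foreach ($name in $services) {
         # -ErrorAction SilentlyContinue skips keys that do not exist
         Remove-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\$name" -Recurse -Force -ErrorAction SilentlyContinue
     }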

     

     

     

  2. I have the exact same problem with the same network adapter on a new HP 380 G10 server.

    As a workaround, I added a task on all VMs (reset the VM network if Desktop Service event ID 1014 appears); see the sketch below.

    Another problem with this network adapter appears if an LACP bond is used:

    if usage grows, the port link starts flapping.
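
    A minimal sketch of the reset action the task runs (assumptions: the task is triggered in Task Scheduler on Application-log event ID 1014 from the Citrix Desktop Service, and the PV NIC can be matched by its interface description):

     # hypothetical reset action for the event-triggered task:
     # restart the XenServer PV network adapter so the Desktop Service re-registers
     Get-NetAdapter |
         Where-Object { $_.InterfaceDescription -like '*XenServer PV*' } |
         Restart-NetAdapter -Confirm:$false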

  3. 37 minutes ago, Tobias Kreidl said:

    Why would that be an issue, provided that the other migration jobs are just queued up and not executed until a free slot is available? Just curious how XS/CH commands via CLI/powershell handle requests for large numbers of migrations? After all, in XenCenter you can queue up a large number of VM migrations and it all seems to be handled just fine.

    1 hour ago, Carsten Eckert said:

    Hey Sergey,

     

    thank you so much for sharing your script with me / us! That will help me very much.

     

    Stay safe and healthy!

     

    Best regards from Erfurt, Germany!

    Maybe I'm doing something wrong. I cannot find how to start a fourth migration.

    XenCenter does not allow creating new migrations if 3 are already running.

     

  4.              

     

    $scriptjob = {
                        param ($psource,
                            $usource,
                            $pwsource,
                            $pdest,
                            $vmuuid,
                            $sruuid,
                            $xdadrs,
                            $xdui,
                            $xdchk
                        )
                        
                        $session1 = Connect-XenServer -url $psource -UserName $usource -Password $pwsource -PassThru
                        $session2 = Connect-XenServer -url $pdest -UserName $usource -Password $pwsource -PassThru
                        
                        $vm = Get-XenVM -uuid $vmuuid -SessionOpaqueRef $session1.opaque_ref
                        
                        #example; VDIs should be resolved from $vm.VBDs
                        $vdirefs = @()
                        foreach ($itemvbd in $vm.VBDs)
                        {
                            $vbdvm = Get-XenVBD -opaque_ref $itemvbd.opaque_ref -SessionOpaqueRef $session1.opaque_ref
                            if ($vbdvm.type -eq "Disk")
                            {
                                $vdirefs += Get-XenVDI -opaque_ref $vbdvm.VDI.opaque_ref -SessionOpaqueRef $session1.opaque_ref | ConvertTo-XenRef
                            }
                            
                        }
                        
                        #$vdirefs = Get-XenVDI -Name "MyVmDisk" -SessionOpaqueRef $session1.opaque_ref | ConvertTo-XenRef
                        $vifref = $vm.VIFs[0]
                        $vmvif = Get-XenVIF -opaque_ref $vifref.opaque_ref -SessionOpaqueRef $session1.opaque_ref
                        
                        $VMNetname = Get-XenNetwork -opaque_ref $vmvif.network.opaque_ref -SessionOpaqueRef $session1.opaque_ref
                        
                        
                        # pick a destination host: skip the pool master and choose
                        # the host with the fewest resident VMs
                        $host2s = Get-XenHost -SessionOpaqueRef $session2.opaque_ref
                        $hcount = 1000
                        $master= (Get-XenPool -SessionOpaqueRef $session2.opaque_ref).master.opaque_ref
                        foreach ($itemh in $host2s)
                        {
                            if (($itemh.resident_VMs.count -lt $hcount) -and ($itemh.opaque_ref -ne $master))
                            {
                                $hcount = $itemh.resident_VMs.count
                                $host2= $itemh
                            }
                        }
                        
                        $net2 = Get-XenNetwork -SessionOpaqueRef $session2.opaque_ref -Name "10.44.140.0/24"
                        $dest = Invoke-XenHost -XenHost $host2 -XenAction MigrateReceive -SessionOpaqueRef $session2.opaque_ref -Network $net2 -PassThru
                        
                        $net3ref = Get-XenNetwork -SessionOpaqueRef $session2.opaque_ref -Name $VMNetname.name_label | ConvertTo-XenRef
                        $sr2ref = Get-XenSR -uuid $sruuid -SessionOpaqueRef $session2.opaque_ref | ConvertTo-XenRef
                        
                        #the casts are needed because ConvertTo-XenRef internally uses the framework's
                        #WriteObject method which wraps the object we want into a PSObject
                        $vdimap = @{ }
                        foreach ($vdiref in $vdirefs)
                        {
                            $vdimap.Add([XenAPI.XenRef[XenAPI.VDI]]$vdiref, [XenAPI.XenRef[XenAPI.SR]]$sr2ref);
                        }
                        
                        $vifmap = @{ $vifref = [XenAPI.XenRef[XenAPI.Network]]$net3ref }
                        
                        # for a halted VM this will copy cross-pool; to migrate the VM leave it empty
                        # = @{"copy" = "true"}
                        
                        $task=Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap `
                                     -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref `
                                     -verbose -Async -PassThru
                        Wait-XenTask -Task $task -SessionOpaqueRef $session1.opaque_ref -ShowProgress
                        $rs = Get-XenTask -opaque_ref $task -SessionOpaqueRef $session1.opaque_ref
                          
                        # optionally repoint the Citrix Broker machine record at the
                        # destination pool's hypervisor connection
                        #$machine = Get-BrokerMachine -HostedMachineName $vm.name_label -MaxRecordCount 1 -AdminAddress $xdadrs
                        if ($xdchk)
                        {
                            $machine = Get-BrokerMachine -HostedMachineName $vm.name_label -MaxRecordCount 1 -AdminAddress $xdadrs | Set-BrokerMachine -HypervisorConnectionUid $xdui -ErrorVariable err
                            if ($err)
                            { }
                            else
                            {
                            #    Write-Progress ""
                            }
                        }
                        
                        #Start-Sleep -Seconds 120
                        Disconnect-XenServer -Session $session1
                        Disconnect-XenServer -Session $session2
                    }
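
     A hypothetical way to launch the script block above as throttled background jobs (all URLs, credentials and UUIDs below are placeholders; the XenServer module is imported inside each job, and the Citrix snap-ins would also have to be loaded there if the broker update were enabled). This is also how a fourth migration gets queued: it starts as soon as one of the three running jobs finishes.

     # placeholders only: adjust URLs, credentials and uuids to your environment
     $psource = 'https://pool-a-master'
     $pdest   = 'https://pool-b-master'
     $user    = 'root'
     $pass    = 'password'
     $sruuid  = 'destination-sr-uuid'
     $ddc     = 'ddc.example.local'

     $vmUuids = 'vm-uuid-1', 'vm-uuid-2', 'vm-uuid-3', 'vm-uuid-4'
     foreach ($uuid in $vmUuids)
     {
         # keep at most 3 migrations in flight; the next one starts when a slot frees up
         while (@(Get-Job -State Running).Count -ge 3) { Start-Sleep -Seconds 30 }

         # last two arguments ($xdui, $xdchk) disable the Citrix Broker update in this sketch
         Start-Job -InitializationScript { Import-Module XenServerPSModule } `
                   -ScriptBlock $scriptjob `
                   -ArgumentList $psource, $user, $pass, $pdest, $uuid, $sruuid, $ddc, 0, $false
     }
     Get-Job | Wait-Job | Receive-Job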
     

  5. It works OK if a timeout is used after Invoke-XenVM:

     Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru | Wait-XenTask  -SessionOpaqueRef $session1.opaque_ref -ShowProgress

    start-sleep -Seconds 180

  6. I don't know why, but when using:

     Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru | Wait-XenTask  -SessionOpaqueRef $session1.opaque_ref -ShowProgress

    it completes with an error.

     

    When using:

    $task=Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru
    Wait-XenTask -Task $task -SessionOpaqueRef $session1.opaque_ref -ShowProgress

    it works OK.
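
    A hedged variant of the working form that also checks the task outcome afterwards, instead of relying only on a fixed sleep (it assumes the SDK exposes the task status enum and error_info field as in the C# bindings):

    $task = Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru
    Wait-XenTask -Task $task -SessionOpaqueRef $session1.opaque_ref -ShowProgress

    # check the task result before disconnecting the sessions
    $result = Get-XenTask -opaque_ref $task -SessionOpaqueRef $session1.opaque_ref
    if ($result.status -ne [XenAPI.task_status_type]::success)
    {
        Write-Warning ("Migration task ended with: " + ($result.error_info -join '; '))
    }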

     

     

  7.  

    The pool was clean-installed 11 days ago (CH 8.2, patched up to XS82E034).

     

     chronyc tracking

    System time     : 0.000209621 seconds slow of NTP time
    Last offset     : -0.000242432 seconds
    RMS offset      : 0.000119454 seconds
    Frequency       : 19.081 ppm slow
    Residual freq   : -0.032 ppm
    Skew            : 0.074 ppm
    Root delay      : 0.002107475 seconds
    Root dispersion : 0.053449847 seconds
    Update interval : 1030.8 seconds
    Leap status     : Normal

     

     

  8. xensource.txt

     

     

    Dec 21 10:43:48 vdi-dgt-31 xapi: [debug||88 |org.xen.xapi.xenops.classic events D:4d964b047c26|xenops] xenopsd event: Updating VM be83f8f0-cb3b-216a-a68b-00936609eebd domid 40 memory target

     

    Dec 21 10:45:43 vdi-dgt-31 xapi: [debug||90 ||xenops] Event on VM be83f8f0-cb3b-216a-a68b-00936609eebd; resident_here = true
    Dec 21 10:45:44 vdi-dgt-31 xapi: [debug||90 ||xenops] xapi_cache: not updating cache for be83f8f0-cb3b-216a-a68b-00936609eebd


    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] Got QMP event, domain-40: RESET
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Current domains: 0, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 34, 35, 37, 40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Domain 40 may have changed state
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Removing watches for: domid 40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM be83f8f0-cb3b-216a-a68b-00936609eebd
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/kthread-pid token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] QMP command for domid 40: {"execute":"query-migratable","id":"qmp-000219-40"}
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"] onto be83f8f0-cb3b-216a-a68b-00936609eebd:[  ]
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 ||xenops_server] Queue.pop returned ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"]
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 |events|xenops_server] Task 6807 reference events: ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"]
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/tapdisk-pid token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/shutdown-done token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] received QMP response qmp-000219-40 (File "xc/device_common.ml", line 502, characters 49-56)
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] query-migratable precheck passed (domid=40)
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] xenstore-rm /local/domain/40/data/cant_suspend_reason
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/hotplug-status token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/params token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] Got QMP event, domain-40: RESET
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/state token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/kthread-pid token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] QMP command for domid 40: {"execute":"query-migratable","id":"qmp-000220-40"}
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/tapdisk-pid token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/shutdown-done token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] received QMP response qmp-000220-40 (File "xc/device_common.ml", line 502, characters 49-56)
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] query-migratable precheck passed (domid=40)
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] xenstore-rm /local/domain/40/data/cant_suspend_reason
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/hotplug-status token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/params token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/state token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/kthread-pid token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/tapdisk-pid token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/shutdown-done token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/hotplug-status token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/params token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/state token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenops] Cancelling watches for: domid 40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenops] removing device cache for domid 40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/attr token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/data/updated token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/data/ts token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/memory/target token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/memory/uncooperative token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/console/vnc-port token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/console/tc-port token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/qemu-pid-signal token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/control token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/device token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/rrd token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/vm-data token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/feature token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/vm/be83f8f0-cb3b-216a-a68b-000000000001/rtc/timeoffset token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/xenserver/attr token=xenopsd-xc:domain-40
    Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 |events|xenops_server] VM.reboot be83f8f0-cb3b-216a-a68b-00936609eebd

  9. Using PowerShell Commands to CrossPool Migrate VMs

            Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru 

    Whenever the migration completes, the VM is rebooted with an unexpected error.

     

    If XenCenter is used, the migration goes through without a reboot.

     
