Posts posted by Sergey Polyakov
-
UUIDLIST=`xe vm-list name-label="server1" | grep "uuid ( RO)" | awk '{print $5}'`
xe vm-param-set uuid=$UUIDLIST affinity=25533e7a-4263-4c4f-8951-8d2ef9ed7245
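The `awk '{print $5}'` part depends on the exact field layout of `xe vm-list` output; it can be sanity-checked offline against a sample line (the UUID below is just a placeholder taken from another post):

```shell
# Offline check of the awk extraction: splitting the sample uuid line on
# whitespace gives uuid / ( / RO) / : / <uuid>, so field 5 is the UUID itself
sample='uuid ( RO)           : 2a698fe1-a40a-3295-92a1-67d1662604ac'
echo "$sample" | awk '{print $5}'   # prints 2a698fe1-a40a-3295-92a1-67d1662604ac
```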
-
Invoke-XenVDI -Uuid <guid> -XenAction Resize -Size 80GB
-
My steps (the order of the steps is important):
1. Uninstall Citrix VM Tools with NO reboot
2. Open "Device Manager" -> "Storage controllers"
3. Uninstall the device "XenServer PV Storage Host Adapter"
4. Check "Delete the driver..", press Uninstall, no reboot
5. Uninstall the following devices in "System devices":
- XenServer Interface
- XenServer PV Network Class
- XenServer PV BUS (002)
- XenServer PV BUS (C02)
checking "Delete the driver.." for each, and no reboot
6. Open the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
7. Delete the keys:
- xenagent
- xenbus
- xenbus_monitor
- xendisk
- XENFILT
- xeniface
- XenInstall
- xennet
- XenSvc
- xenvbd
- xenvif
8. Reboot the VM
9. Reboot again when the message "XenServer PV Host Adapter needs to restart ..." appears
10. Install Windows
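Step 7 can be scripted instead of deleting each key by hand. A minimal sketch, assuming an elevated command prompt on the VM; here the `reg delete` commands are only printed so you can review them before running them:

```shell
# Dry run: print the reg delete command for each Xen service key from step 7;
# run the printed commands in an elevated cmd.exe to actually delete the keys
for key in xenagent xenbus xenbus_monitor xendisk XENFILT xeniface \
           XenInstall xennet XenSvc xenvbd xenvif; do
  echo "reg delete \"HKLM\\SYSTEM\\CurrentControlSet\\Services\\$key\" /f"
done
```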
-
xe vlan-destroy uuid=vlan_uuid
xe vlan-create network-uuid=network_uuid pif-uuid=pif_uuid vlan=VLAN
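Since the old VLAN object must be destroyed before the replacement is created, the two calls can be wrapped in one helper. A sketch with a dry-run echo (the function name and placeholder UUIDs are my own, not xe commands):

```shell
# Hypothetical dry-run helper: print the destroy/create pair needed to
# move a VLAN to a new tag on the same PIF and network; drop the echo
# (or pipe the output to sh) to actually run the commands
retag_vlan() {
  vlan_uuid=$1; network_uuid=$2; pif_uuid=$3; new_tag=$4
  echo "xe vlan-destroy uuid=$vlan_uuid"
  echo "xe vlan-create network-uuid=$network_uuid pif-uuid=$pif_uuid vlan=$new_tag"
}
retag_vlan VLAN-UUID NET-UUID PIF-UUID 140
```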
-
Nested virtualization must be enabled to run VirtualBox
xe vm-param-set uuid=UUID platform:nested-virt=1
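To apply this to several VMs, the same loop pattern from the earlier posts works. A dry-run sketch (the UUID list is a placeholder; drop the echo to actually set the flag):

```shell
# Dry run: print the nested-virt command for each VM UUID in a list
for uuid in UUID-1 UUID-2; do
  echo "xe vm-param-set uuid=$uuid platform:nested-virt=1"
done
```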
-
VMNL=`xe vm-list | grep ABC | awk '{print $4}'`
for VMN in $VMNL
do
  xe vm-start name-label=$VMN
done
-
-
I have the exact same problem with the same network adapter on a new HP 380 G10 server.
As a workaround, I added a task on all VMs (reset the VM network if the Desktop Service logs event ID 1014).
There is another problem with this network adapter when using an LACP bond:
if usage grows, the port link begins flapping.
-
-
You can install Win11 without waiting for TPM support.
-
You can create a task on the VM that will save the system event log to the file share when the VM is turned off.
-
I am using this script for live migration between two different pools.
-
CH 8.2 LTSR Premium
16 minutes ago, Tobias Kreidl said:
Which version and edition of XS/CH are you running? This was not an issue at least with 7.3 and Enterprise as far as I recall.
-
37 minutes ago, Tobias Kreidl said:
Why would that be an issue, provided that the other migration jobs are just queued up and not executed until a free slot is available? Just curious how XS/CH handles requests for large numbers of migrations via CLI/PowerShell commands. After all, in XenCenter you can queue up a large number of VM migrations and it all seems to be handled just fine.
1 hour ago, Carsten Eckert said:
Hey Sergey,
thank you so much for sharing your script with me / us! That will help me very much.
Stay safe and healthy!
Best regards from Erfurt, Germany!
Maybe I'm doing something wrong. I cannot figure out how to start the fourth migration:
XenCenter does not allow creating migrations if 3 are already running.
-
By default, XenServer only allows live-migrating 3 VMs at a time.
I needed to transfer about 30 virtual machines between two pools.
To solve this problem, I wrote a PowerShell script that starts the next migration automatically.
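The throttling idea (keep at most three migrations in flight, start the next one as soon as a slot frees up) can also be sketched shell-side. A toy version where `sleep` stands in for one migration (requires bash 4.3+ for `wait -n`):

```shell
# Toy throttle: keep at most 3 "migrations" (here: sleep) running at once
max_jobs=3
for vm in vm1 vm2 vm3 vm4 vm5 vm6; do
  while [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; do
    wait -n          # block until any one running job finishes
  done
  sleep 0.2 &        # stand-in for starting one live migration
done
wait                 # wait for the stragglers
echo "all migrations done"
```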
-
$scriptjob = {
    param (
        $psource,
        $usource,
        $pwsource,
        $pdest,
        $vmuuid,
        $sruuid,
        $xdadrs,
        $xdui,
        $xdchk
    )
    $session1 = Connect-XenServer -url $psource -UserName $usource -Password $pwsource -PassThru
    $session2 = Connect-XenServer -url $pdest -UserName $usource -Password $pwsource -PassThru
    $vm = Get-XenVM -uuid $vmuuid -SessionOpaqueRef $session1.opaque_ref
    # example; VDIs should be resolved from $vm.VBDs
    $vdirefs = @()
    foreach ($itemvbd in $vm.VBDs)
    {
        $vbdvm = Get-XenVBD -opaque_ref $itemvbd.opaque_ref -SessionOpaqueRef $session1.opaque_ref
        if ($vbdvm.type -eq "Disk")
        {
            $vdirefs += Get-XenVDI -opaque_ref $vbdvm.VDI.opaque_ref -SessionOpaqueRef $session1.opaque_ref | ConvertTo-XenRef
        }
    }
    #$vdirefs = Get-XenVDI -Name "MyVmDisk" -SessionOpaqueRef $session1.opaque_ref | ConvertTo-XenRef
    $vifref = $vm.VIFs[0]
    $vmvif = Get-XenVIF -opaque_ref $vifref.opaque_ref -SessionOpaqueRef $session1.opaque_ref
    $VMNetname = Get-XenNetwork -opaque_ref $vmvif.network.opaque_ref -SessionOpaqueRef $session1.opaque_ref
    # pick the destination host with the fewest resident VMs, skipping the pool master
    $host2s = Get-XenHost -SessionOpaqueRef $session2.opaque_ref
    $hcount = 1000
    $master = (Get-XenPool -SessionOpaqueRef $session2.opaque_ref).master.opaque_ref
    foreach ($itemh in $host2s)
    {
        if (($itemh.resident_VMs.count -lt $hcount) -and ($itemh.opaque_ref -ne $master))
        {
            $hcount = $itemh.resident_VMs.count
            $host2 = $itemh
        }
    }
    $net2 = Get-XenNetwork -SessionOpaqueRef $session2.opaque_ref -Name "10.44.140.0/24"
    $dest = Invoke-XenHost -XenHost $host2 -XenAction MigrateReceive -SessionOpaqueRef $session2.opaque_ref -Network $net2 -PassThru
    $net3ref = Get-XenNetwork -SessionOpaqueRef $session2.opaque_ref -Name $VMNetname.name_label | ConvertTo-XenRef
    $sr2ref = Get-XenSR -uuid $sruuid -SessionOpaqueRef $session2.opaque_ref | ConvertTo-XenRef
    # the casts are needed because ConvertTo-XenRef internally uses the framework's
    # WriteObject method, which wraps the object we want into a PSObject
    $vdimap = @{ }
    foreach ($vdiref in $vdirefs)
    {
        $vdimap.Add([XenAPI.XenRef[XenAPI.VDI]]$vdiref, [XenAPI.XenRef[XenAPI.SR]]$sr2ref)
    }
    $vifmap = @{ $vifref = [XenAPI.XenRef[XenAPI.Network]]$net3ref }
    # for a halted VM this will copy cross-pool; to migrate the VM leave it empty
    # = @{"copy" = "true"}
    $task = Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap `
        -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref `
        -verbose -Async -PassThru
    Wait-XenTask -Task $task -SessionOpaqueRef $session1.opaque_ref -ShowProgress
    $rs = Get-XenTask -opaque_ref $task -SessionOpaqueRef $session1.opaque_ref
    #$machine = Get-BrokerMachine -HostedMachineName $vm.name_label -MaxRecordCount 1 -AdminAddress $xdadrs
    if ($xdchk)
    {
        # repoint the Citrix broker machine at the destination hypervisor connection
        $machine = Get-BrokerMachine -HostedMachineName $vm.name_label -MaxRecordCount 1 -AdminAddress $xdadrs | Set-BrokerMachine -HypervisorConnectionUid $xdui -ErrorVariable err
        if ($err)
        { }
        else
        {
            # Write-Progress ""
        }
    }
    #Start-Sleep -Seconds 120
    Disconnect-XenServer -Session $session1
    Disconnect-XenServer -Session $session2
}
-
It works OK if a timeout is used after Invoke-XenVM:
Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru | Wait-XenTask -SessionOpaqueRef $session1.opaque_ref -ShowProgress
Start-Sleep -Seconds 180
-
I don't know why, but when using:
Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru | Wait-XenTask -SessionOpaqueRef $session1.opaque_ref -ShowProgress
it completes with an error.
When using:
$task = Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru
Wait-XenTask -Task $task -SessionOpaqueRef $session1.opaque_ref -ShowProgress
it works OK.
-
Pool clean-installed 11 days ago (CH 8.2, patched up to XS82E034)
chronyc tracking
System time : 0.000209621 seconds slow of NTP time
Last offset : -0.000242432 seconds
RMS offset : 0.000119454 seconds
Frequency : 19.081 ppm slow
Residual freq : -0.032 ppm
Skew : 0.074 ppm
Root delay : 0.002107475 seconds
Root dispersion : 0.053449847 seconds
Update interval : 1030.8 seconds
Leap status : Normal
-
Dec 21 10:43:48 vdi-dgt-31 xapi: [debug||88 |org.xen.xapi.xenops.classic events D:4d964b047c26|xenops] xenopsd event: Updating VM be83f8f0-cb3b-216a-a68b-00936609eebd domid 40 memory target
Dec 21 10:45:43 vdi-dgt-31 xapi: [debug||90 ||xenops] Event on VM be83f8f0-cb3b-216a-a68b-00936609eebd; resident_here = true
Dec 21 10:45:44 vdi-dgt-31 xapi: [debug||90 ||xenops] xapi_cache: not updating cache for be83f8f0-cb3b-216a-a68b-00936609eebd
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] Got QMP event, domain-40: RESET
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Current domains: 0, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 29, 34, 35, 37, 40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Domain 40 may have changed state
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] Removing watches for: domid 40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||5 ||xenops_server] Received an event on managed VM be83f8f0-cb3b-216a-a68b-00936609eebd
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/kthread-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] QMP command for domid 40: {"execute":"query-migratable","id":"qmp-000219-40"}
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||5 |queue|xenops_server] Queue.push ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"] onto be83f8f0-cb3b-216a-a68b-00936609eebd:[ ]
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 ||xenops_server] Queue.pop returned ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"]
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 |events|xenops_server] Task 6807 reference events: ["VM_check_state","be83f8f0-cb3b-216a-a68b-00936609eebd"]
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/tapdisk-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/shutdown-done token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] received QMP response qmp-000219-40 (File "xc/device_common.ml", line 502, characters 49-56)
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] query-migratable precheck passed (domid=40)
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] xenstore-rm /local/domain/40/data/cant_suspend_reason
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/hotplug-status token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/params token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] Got QMP event, domain-40: RESET
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/5696/state token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/kthread-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] QMP command for domid 40: {"execute":"query-migratable","id":"qmp-000220-40"}
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/tapdisk-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/shutdown-done token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] received QMP response qmp-000220-40 (File "xc/device_common.ml", line 502, characters 49-56)
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] query-migratable precheck passed (domid=40)
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||8 ||xenops] xenstore-rm /local/domain/40/data/cant_suspend_reason
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/hotplug-status token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/params token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vbd3/40/768/state token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/kthread-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/tapdisk-pid token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/shutdown-done token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/hotplug-status token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/params token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/0/backend/vif/40/0/state token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenops] Cancelling watches for: domid 40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenops] removing device cache for domid 40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/attr token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/data/updated token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/data/ts token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/memory/target token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/memory/uncooperative token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/console/vnc-port token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/console/tc-port token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/qemu-pid-signal token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/control token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/device token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/rrd token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/vm-data token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/feature token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/vm/be83f8f0-cb3b-216a-a68b-000000000001/rtc/timeoffset token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||7 ||xenstore_watch] xenstore unwatch path=/local/domain/40/xenserver/attr token=xenopsd-xc:domain-40
Dec 21 10:48:09 vdi-dgt-31 xenopsd-xc: [debug||15 |events|xenops_server] VM.reboot be83f8f0-cb3b-216a-a68b-00936609eebd
-
Using PowerShell Commands to CrossPool Migrate VMs
Invoke-XenVM -VM $vm -XenAction MigrateSend -Live $true -Dest $dest -VifMap $vifmap -VdiMap $vdimap -Options @null -VgpuMap @null -SessionOpaqueRef $session1.opaque_ref -verbose -Async -PassThru
Whenever the migration completes, the VM is rebooted with an unexpected error.
If XenCenter is used, the migration goes through without a reboot.
-
VMS=`xe vm-list | grep -E 'a40a|0146' | awk '{print $5}'`
for VMSU in $VMS
do
xe vm-list uuid=$VMSU
done
Result:
uuid ( RO) : 2a698fe1-a40a-3295-92a1-67d1662604ac
name-label ( RW): PC1
power-state ( RO): halted
uuid ( RO) : b0b5b91e-0146-19ac-f8ab-8ea2450a6c31
name-label ( RW): PC2
power-state ( RO): halted
-
Hi,
I suggest using high-quality audio, or you can try adaptive audio:
https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/multimedia/audio.html
According to my tests:
When the quality changes, the audio codec also changes.
The transmission protocol (UDP or TCP) does not affect the quality.
Decreasing the audio quality also decreases the volume level.
PowerShell: get VMs hosted on a particular pool member
in XenServer SDK
$h=Get-XenHost -Name "<host_name>"
$vms=Get-XenVM | Where-Object{ $_.opaque_ref -in ($h.resident_VMs )}