
Error: Cannot free this much memory


Harun Baris Bulut

Question

Hi all,

 

We are managing a XenServer 7.1 pool with 2 hosts. The hosts report that 64 GB of memory is available, yet Xen returns the error below. I suspect the RAM modules, but I am not sure. What do you think the problem could be?

Internal error: xenopsd internal error: Memory_interface.Cannot_free_this_much_memory(_)
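For reference, the per-host memory figures can also be queried from the CLI (a rough sketch; I am assuming memory-free-computed is exposed as a host parameter in 7.1):

bash$: xe host-list params=name-label,memory-total,memory-free-computed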

Thank you

 


The output of top looks like this. I don't think there is a problem with dom0 memory:

top - 14:04:02 up 295 days, 17:21,  2 users,  load average: 0.76, 0.52, 0.50
Tasks: 715 total,   1 running, 714 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.4 us,  0.7 sy,  0.0 ni, 98.5 id,  0.2 wa,  0.0 hi,  0.0 si,  0.2 st
KiB Mem :  4018772 total,  1266528 free,  1536512 used,  1215732 buff/cache
KiB Swap:  1048572 total,   992192 free,    56380 used.  2256256 avail Mem
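Note that top inside dom0 only reflects the control domain's own ~4 GB allocation, not the host's total memory. What the hypervisor itself reports as free can be checked on the destination host (a sketch, run as root in dom0):

host2$: xl info | grep -E 'total_memory|free_memory'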

There is no error in /var/log/messages; however, the xensource log output is below:

Mar  8 14:05:57 host2 xenopsd-xc: [debug|host2|145775 |Async.VM.pool_migrate R:fa2393a1c76c|xenops_server] VM.add {"id": "a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0", "name": "20-mustafatan-17065-test-ubuntu-14-04-lts-64-bit", "ssidref": 0, "xsdata": {"vm-data": ""}, "platformdata": {"featureset": "17cbfbff-f7fa3223-2d93fbff-00000123-00000001-001c0fbb-00000000-00000000-00000000", "generation-id": "", "timeoffset": "0", "usb": "true", "usb_tablet": "true", "nx": "true", "acpi": "1", "apic": "true", "pae": "true", "viridian": "true"}, "bios_strings": {"bios-vendor": "Xen", "bios-version": "", "system-manufacturer": "Xen", "system-product-name": "HVM domU", "system-version": "", "system-serial-number": "", "hp-rombios": "", "oem-1": "Xen", "oem-2": "MS_VM_CERT\/SHA1\/bdbeb6e0a816d43fa6d3fe8aaef04c2bad9d3e3d"}, "ty": ["HVM", {"hap": true, "shadow_multiplier": 1.000000, "timeoffset": "0", "video_mib": 4, "video": "Cirrus", "acpi": true, "serial": "pty", "pci_emulations": [], "pci_passthrough": false, "boot_order": "dc", "qemu_disk_cmdline": false, "qemu_stubdom": false}], "suppress_spurious_page_faults": false, "memory_static_max": 8589934592, "memory_dynamic_max": 8589934592, "memory_dynamic_min": 8589934592, "vcpu_max": 6, "vcpus": 6, "scheduler_params": {"priority": [256, 0], "affinity": []}, "on_crash": ["Start"], "on_shutdown": ["Shutdown"], "on_reboot": ["Start"], "pci_msitranslate": true, "pci_power_mgmt": false, "has_vendor_device": false}
Mar  8 14:06:02 host2 xapi: [debug|host2|128166 INET :::80|Querying services D:2abc55432309|xapi] hand_over_connection PUT /services/xenops/memory/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0 to /var/lib/xcp/xenopsd.forwarded
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|145776 ||xenopsd] Received request = [{"uri": "\/services\/xenops\/memory\/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0", "query": {"session_id": "OpaqueRef:9ebba918-dd2e-5bb8-d8de-1da9edf9fc83"}, "cookie": {"instance_id": "8c2e753e-b684-4a32-a1bd-aac6ef1b9345", "dbg": "Async.VM.pool_migrate R:fa2393a1c76c", "memory_limit": "8589934592"}}]
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|145776 |Async.VM.pool_migrate R:fa2393a1c76c|xenops_server] VM.receive_memory
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|145776 |Async.VM.pool_migrate R:fa2393a1c76c|xenops_server] VM.receive_memory id = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0 is_localhost = false
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|145776 |queue|xenops_server] Queue.push ["VM_receive_memory", ["a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0", 8589934592, 17]] onto a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0:[  ]
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 ||xenops_server] Queue.pop returned ["VM_receive_memory", ["a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0", 8589934592, 17]]
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|xenops_server] Task 120851 reference Async.VM.pool_migrate R:fa2393a1c76c: ["VM_receive_memory", ["a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0", 8589934592, 17]]
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|xenops_server] VM.receive_memory a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|xenops_server] VM.create a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0 memory_upper_bound = 8589934592
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; reloading stored domain-level configuration
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; using memory_upper_bound = 8589934592
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] Requesting a host memory reservation between 8462336 and 8462336
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] Cannot free 8665432064; only 548118528 are available
Mar  8 14:06:02 host2 xenopsd-xc: [ info|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|xenops_server] Caught Memory_interface.Cannot_free_this_much_memory(_) executing ["VM_receive_memory", ["a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0", 8589934592, 17]]: triggering cleanup actions
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] Domain for VM a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0 does not exist: ignoring
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|18 |Parallel:task=120852.atoms=2.(VBD.unplug vm=a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0)|memory] Failed to read /vm/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0/domains: has this domain already been cleaned up?
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|38 |Parallel:task=120852.atoms=2.(VBD.unplug vm=a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0)|memory] Failed to read /vm/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0/domains: has this domain already been cleaned up?
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|18 |Parallel:task=120852.atoms=2.(VBD.unplug vm=a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0)|memory] Failed to read /vm/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0/domains: has this domain already been cleaned up?
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|18 |Parallel:task=120852.atoms=2.(VBD.unplug vm=a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0)|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; does not exist in domain list
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|38 |Parallel:task=120852.atoms=2.(VBD.unplug vm=a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0)|memory] Failed to read /vm/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0/domains: has this domain already been cleaned up?
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|18 |Parallel:task=120852.atoms=2.(VBD.unplug vm=a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0)|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; VBD = xvda; Ignoring missing domain
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|38 |Parallel:task=120852.atoms=2.(VBD.unplug vm=a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0)|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; does not exist in domain list
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|38 |Parallel:task=120852.atoms=2.(VBD.unplug vm=a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0)|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; VBD = xvdd; Ignoring missing domain
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] Failed to read /vm/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0/domains: has this domain already been cleaned up?
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; does not exist in domain list
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] VM = 0; Ignoring missing domain
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] Failed to read /vm/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0/domains: has this domain already been cleaned up?
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; does not exist in domain list
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] VM = 2; Ignoring missing domain
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] Failed to read /vm/a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0/domains: has this domain already been cleaned up?
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] VM = a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0; does not exist in domain list
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] VM = 1; Ignoring missing domain
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|memory] Domain for VM a801059e-ba6d-cfec-5fd5-48c7c3dbf1e0 does not exist: ignoring
Mar  8 14:06:02 host2 xenopsd-xc: [error|host2|37 |Async.VM.pool_migrate R:fa2393a1c76c|task_server] Task 120851 failed; Memory_interface.Cannot_free_this_much_memory(_)
Mar  8 14:06:02 host2 xenopsd-xc: [debug|host2|37 ||xenops_server] TASK.signal 120851 = ["Failed", ["Internal_error", "Memory_interface.Cannot_free_this_much_memory(_)"]]
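For scale, converting the byte counts from the error line above (assuming both values are plain bytes; shell arithmetic just for illustration):

host2$: echo $(( 8665432064 / 1024 / 1024 ))   # ~8264 MiB requested (8 GiB static-max plus overhead)
8264
host2$: echo $(( 548118528 / 1024 / 1024 ))    # ~522 MiB reported as freeable on the destination
522

So xenopsd needs roughly 8.1 GiB for the incoming domain but believes only about 0.5 GiB can be freed on the destination host, regardless of what XenCenter displays.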

Also, XenCenter tells us that there is 65.2 GB available:

[Screenshot from XenCenter showing 65.2 GB of available memory]

 

 


In case others are still looking at this topic after so many years, here is the solution that worked in my case.

I have a cluster with 3 hosts and roughly 20 VMs in total. Due to some past operations, or for whatever reason, one host that apparently had 103 GB of RAM free was still running 2 ghost VMs that had in the meantime been migrated to another host. Even though XenCenter showed 103 GB of RAM free, I couldn't power on a VM with 45 GB of RAM; it failed with: Internal error: xenopsd internal error: Memory_interface.Cannot_free_this_much_memory(_)

 

Checking with the command "xe vm-list", I saw each VM listed only once, as usual, but when I ran "xl vm-list" on each host I noticed that Host 1 and Host 2 were running the same two VMs with identical UUIDs. A VM cannot run on two hosts at the same time in a normal configuration, so these were ghost VMs. I used XenCenter to check the duplicates and figure out where they were actually running, then killed the ghost VMs on the other host with "xl destroy <domain_id>".

 

xl destroy <dom_id> will not delete your VM data; it just cuts power to the domain and frees up its RAM.

 

"Visual example":

bash$: xe vm-list
vm1
vm2
vm3
vm4
vm5
vm6
...
vm20

 

host1$: xl vm-list
UUID      Domain ID    VM Name
uuid1     5134         vm1
uuid2     5135         vm2
uuid3     5136         vm3
uuid4     5137         vm4

host2$: xl vm-list
UUID      Domain ID    VM Name
uuid5     5138         vm5
uuid2     5139         vm2   <--- ghost VM
uuid3     5140         vm3   <--- ghost VM
uuid6     5141         vm6

 

XenCenter shows that vm2 and vm3 are running on Host1, so we kill the ghost VMs from Host2:

 

host2$: xl destroy 5139
host2$: xl destroy 5140
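
A quick cross-check (a sketch; I am assuming resident-on filtering works on your xe version, and "host2" stands for the host's name-label): list the VMs xapi believes are resident on the host and compare with what xl reports locally. Any UUID that appears only in the xl output is a candidate ghost domain.

host2$: xe vm-list resident-on=$(xe host-list name-label=host2 --minimal) params=uuid,name-label
host2$: xl vm-list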

