Using top to Find Memory Overhead for qemu/KVM

Having jumped from the VMware world, where the memory overhead and performance characteristics of ESXi under varied workloads are well known, I was surprised to learn that the same is not always true of KVM. In this post we talk a bit about KVM/Qemu and how its memory overhead breaks down.

Understanding Memory in KVM

The gist of KVM memory management, which you can read about in detail here, is that each guest is more or less a Linux process. That is, at spawn time, if the guest has 1GB of memory assigned, qemu will (more or less) malloc 1GB and then let the kernel manage it as needed.
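Because a guest is just a process, you can inspect its memory the same way you would any Linux process, via /proc. A minimal sketch, using the current shell's PID (`$$`) as a stand-in for a real qemu PID:

```shell
# Every Linux process exposes its memory accounting in /proc/<pid>/status.
# VmSize is total virtual memory (top's VIRT); VmRSS is resident memory (top's RES).
grep -E '^(VmSize|VmRSS)' "/proc/$$/status"
```

On a real host you would substitute the PID of a qemu process for `$$`.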

This enables interesting things like KSM, or Kernel Samepage Merging, which on the surface works much like Transparent Page Sharing (TPS) in the ESXi world. That is, the Linux kernel will periodically scan memory for identical pages and merge them.
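When KSM is enabled, the kernel exports its merge statistics under /sys/kernel/mm/ksm. A quick sketch for peeking at them; note that the directory only exists on kernels built with KSM support, so we guard for that:

```shell
ksm=/sys/kernel/mm/ksm
if [ -d "$ksm" ]; then
    # run=1 means the merge daemon is active; pages_sharing vs pages_shared
    # gives a rough sense of how many duplicate pages KSM has collapsed
    for f in run pages_shared pages_sharing; do
        printf '%s: %s\n' "$f" "$(cat "$ksm/$f")"
    done
else
    echo "KSM not available on this kernel"
fi
```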

Still with me?

A quick TL;DR for understanding KVM memory: KVM guests and their memory are managed as Linux processes. Thus, each guest inherits and shares memory from its parent qemu process, and KSM will in turn reduce memory duplication.

Finding the Per VM Overhead

So, knowing that memory is inherited from the parent qemu process and that KSM will in turn ‘dedupe’ memory pages, how do we find the per-VM overhead? The answer isn’t exactly straightforward. That is, there is no esxtop for KVM that will tell you this outright. That said, with top (or some scripting) you can get from A to Overhead fairly simply.
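As an example of the scripting route, you can skip top entirely and let ps and awk total things up. A sketch, assuming the qemu binary is named qemu-system-x86_64 as it is on Ubuntu (it prints zeros if no guests are running):

```shell
# Sum resident (RSS) and virtual (VSZ) memory, reported by ps in KiB,
# across all running qemu processes
ps -C qemu-system-x86_64 -o rss=,vsz= |
    awk '{rss += $1; vsz += $2}
         END {printf "RSS: %.1f MB  VIRT: %.1f MB\n", rss/1024, vsz/1024}'
```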

Our Environment

Our environment is a nested KVM setup on Ubuntu 14.04 with 6GB of total RAM. From there we are booting 2x Fedora 20 instances with 2GB of RAM assigned to each:

$ nova flavor-list
| ID  | Name      | Memory_MB | Disk |
| 1   | m1.tiny   | 512       | 1    |
| 2   | m1.small  | 2048      | 20   |


$ nova show 6e551cb0-9ace-4084-afb3-3f46274f3717
| Property                             | Value
| created                              | 2014-07-26T02:54:40Z
| flavor                               | m1.small (2)

Finding the Overhead With top

top is a great tool, and fairly easy to use. For what we’re looking for, we’re going to sort by memory (press “M” inside top) to get an idea of what’s going on:

18239 libvirt+  20   0 3487916 289892   9448 S   0.7  4.7   0:51.00 qemu-system-x86
15453 libvirt+  20   0 3491580 287156   9448 S   0.7  4.7   1:01.33 qemu-system-x86

There are a few interesting columns here:
– VIRT = the total virtual memory associated with the process, that is, everything it has requested. In this case about 3.3GB.
– RES = resident memory size, the physical memory actually backing the process right now. In this case about 283MB.
– SHR = shared memory, pages shared with other processes (this is where KSM shows up). In this case about 9.2MB.
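To turn top's KiB counts into friendlier units, you can run one of the lines above through awk. The field positions assume top's default column layout (PID, USER, PR, NI, VIRT, RES, SHR, ...):

```shell
# VIRT, RES and SHR are fields 5, 6 and 7 of a default top line, in KiB
line='18239 libvirt+  20   0 3487916 289892   9448 S   0.7  4.7   0:51.00 qemu-system-x86'
echo "$line" | awk '{printf "VIRT=%.1fGB RES=%.1fMB SHR=%.1fMB\n", $5/1048576, $6/1024, $7/1024}'
# -> VIRT=3.3GB RES=283.1MB SHR=9.2MB
```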

So, looking at these, it seems that for a 2GB VM, qemu claims roughly 1.4GB of additional virtual address space beyond the assigned memory (VIRT minus 2GB), of which about 9.2MB is shared. Actual resident usage is only about 283MB here, since these freshly booted guests have not yet touched most of their assigned memory.
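One way to sanity-check that overhead figure is to subtract the assigned 2GB from VIRT directly, using the KiB values exactly as top reports them:

```shell
virt_kb=3487916                 # VIRT from the top output above
guest_kb=$((2 * 1024 * 1024))   # 2GB assigned by the m1.small flavor
echo "virtual overhead: $(( (virt_kb - guest_kb) / 1024 )) MB"
# -> virtual overhead: 1358 MB
```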


In this post, we explained a bit about how memory works in KVM. We then booted some instances and finally used top, sorted by memory (“M”), to find the memory used by our VMs.

top is by no means the only way to do this. However, it provides a wealth of information and should be one of the first tools in your toolkit.

