
Microsoft details upcoming Hyper-V Dynamic Memory feature

Source

There’s a lot of interest around the just-announced Dynamic Memory feature, which will be included in Hyper-V as soon as Microsoft releases Service Pack 1 for Windows Server 2008 R2 (rumored to arrive no earlier than Q4 2010).

For a long time Microsoft downplayed VMware’s memory overcommitment techniques, suggesting that they are not the solution for every problem and that even the competitor itself recommends not using them. Now Dynamic Memory, which was originally planned for a 2009 release, looks exactly like a memory overcommitment feature.

James O’Neill, IT Pro Evangelist at Microsoft, shares some concrete details about the feature for the first time, trying to explain why Dynamic Memory is not about memory overcommitment:

…CPU is naturally dynamic: the CPU switches from one workload to another; we can reserve a quantity of CPU for a VM and/or cap the amount it gets, and if total CPU demand exceeds supply, weightings mean the shares granted to each VM do not need to be equal. Network is the same: packets go in or out – the NIC works for one task at a time, but multiple tasks are easily multiplexed through a single physical network interface. Storage is different – because stuff is stored whether it is actively being processed or not, so commitment of disk space is long term. We don’t have to allocate all the disk a VM might use when we create the VM: instead of a fixed-size disk, dynamic disks can grow as needed – the sum of maximum disk sizes can be greater than the disk capacity of the host, but at any given time we can’t use more disk than is present. Now we’re applying that to memory…

VMs either have fixed-size memory or a minimum, maximum and weight. So we don’t need to commit memory based on the peaks in load in the VM – dynamic memory will monitor demand for memory and use the hot-add capabilities of modern OSes to increase memory. Eventually all the memory will be committed, and since you can’t hot-unplug memory, we have a component to take memory out of use in one VM so that it can be given to another – and Hyper-V will take memory from the VMs which need it least. Dynamic memory won’t be supported on every possible operating system…

By de-allocating memory in VMs, Hyper-V ensures the total allocated remains below the total present: when a VM can’t receive any more memory, the OS in it will decide which caches should be abandoned and which pages should be swapped to disk. Hyper-V never swaps a VM’s memory to disk. You can have a design which over-allocates and swaps when over-committed (and uses page sharing to allow some over-allocation before reaching the point of over-committing), or one which doesn’t swap and so doesn’t need page sharing to reduce swapping – but can’t over-allocate either…
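
To make the mechanics easier to picture, here is a rough host-side sketch of the behaviour the excerpt describes: each VM gets a minimum, a maximum and a weight, memory is hot-added where demand grows, and a balloon reclaims it from the VMs that need it least, so the total allocated never exceeds physical RAM and the host never swaps. This is purely illustrative Python, not Hyper-V code; every name in it (VM, rebalance, HOST_PHYSICAL_MB and so on) is invented for the example.

```python
# Illustrative sketch only: not Hyper-V code, all names are hypothetical.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    minimum_mb: int     # memory the VM is always guaranteed
    maximum_mb: int     # ceiling the VM may grow to via hot-add
    weight: int         # relative priority when memory is scarce
    allocated_mb: int   # memory currently assigned to the VM
    demand_mb: int      # memory the guest currently appears to need

HOST_PHYSICAL_MB = 16_384   # total RAM; allocations must never exceed this

def rebalance(vms: list[VM], step_mb: int = 256) -> None:
    """Grow VMs under memory pressure, ballooning memory out of the VMs
    that need it least. Never swaps, never over-allocates physical RAM."""
    free_mb = HOST_PHYSICAL_MB - sum(vm.allocated_mb for vm in vms)

    # VMs whose demand exceeds their allocation, highest weight first.
    needy = sorted((vm for vm in vms
                    if vm.demand_mb > vm.allocated_mb
                    and vm.allocated_mb < vm.maximum_mb),
                   key=lambda vm: vm.weight, reverse=True)

    for vm in needy:
        grant = min(step_mb, vm.maximum_mb - vm.allocated_mb)
        if free_mb < grant:
            # Not enough free memory: reclaim from the VMs with the largest
            # surplus (allocation far above both demand and minimum).
            donors = sorted(vms,
                            key=lambda d: d.allocated_mb - max(d.demand_mb, d.minimum_mb),
                            reverse=True)
            for donor in donors:
                surplus = donor.allocated_mb - max(donor.demand_mb, donor.minimum_mb)
                take = min(surplus, grant - free_mb)
                if take > 0:
                    donor.allocated_mb -= take   # balloon inflates in the donor guest
                    free_mb += take
                if free_mb >= grant:
                    break
        grant = min(grant, free_mb)
        if grant > 0:
            vm.allocated_mb += grant             # hot-add memory to the needy guest
            free_mb -= grant
```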

So Dynamic Memory uses the memory ballooning technique. The problem is that multiple virtualization players, not just VMware, seem to agree that ballooning can be considered a form of memory overcommitment.
Oracle, for example.
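
For readers who have never looked at how ballooning works, the sketch below shows the concept from the guest’s side: a driver inside the guest allocates pages when the host needs memory back, the guest OS trims caches or pages out as if it were under real memory pressure, and the hypervisor reuses the surrendered pages for another VM. The class and the callbacks it takes are hypothetical, purely for illustration; because the guest’s configured memory remains larger than what physically backs it while the balloon is inflated, most of the industry files the technique under memory overcommitment.

```python
# Conceptual sketch of a guest-side balloon driver. Hypothetical names;
# not the actual driver code of any vendor.
class BalloonDriver:
    """Runs inside the guest OS and trades guest pages back to the host."""

    def __init__(self):
        self.ballooned_pages: list[int] = []   # guest pages pinned by the balloon

    def inflate(self, pages_needed: int, allocate_page, give_to_host) -> int:
        """Claim pages from the guest and hand them to the hypervisor."""
        reclaimed = 0
        for _ in range(pages_needed):
            page = allocate_page()      # pressures the guest into freeing memory
            if page is None:
                break                   # the guest cannot give up any more
            give_to_host(page)          # the host reuses the page for another VM
            self.ballooned_pages.append(page)
            reclaimed += 1
        return reclaimed

    def deflate(self, pages: int, return_from_host) -> None:
        """Release previously ballooned pages back to the guest."""
        for _ in range(min(pages, len(self.ballooned_pages))):
            return_from_host(self.ballooned_pages.pop())
```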

It may be just a matter of terminology, but this kind of thing confuses customers who, at the end of the day, need common terms of comparison to evaluate competing solutions.
The choice not to adopt the terminology the rest of the industry uses is not going to avoid comparison; it will just slow it down, leaving room for misunderstandings and misleading statements.

