Welcome back to my third article; I hope these posts provide some useful information. This one is based on my observations of the differences in how vSphere and Hyper-V manage memory on the host. Without further rambling, let's get started.
Hyper-V handles memory in a noticeably different way than vSphere does. This has taken some getting used to, but the biggest takeaway is that Hyper-V does not overcommit memory. Microsoft uses the term Dynamic Memory for its version, and based on my observations that's a good name for it.
Memory overallocation in vSphere is handled through VMware Tools and the balloon driver when the allocated memory is actually in use. Historically, to my understanding, this results in paging to disk when reclaimed memory isn't adequate to meet demand. This is where Dynamic Memory kind of breaks my brain. Unlike vSphere, which more or less just assumes you will overcommit at some point, Hyper-V requires you to explicitly enable this functionality. It's not complicated, just not something you might think to do when coming from a VMware environment.
The basics are less complicated than they seem at first glance. Startup RAM is exactly what it sounds like. The best description I've heard is that certain applications take more RAM on startup than they do during regular runtime, so you allocate more RAM for startup to make sure the program comes up without issues. You don't want to throw a ton of memory at this, but you do want enough to let everything fire up properly. The risk is that if you allocate too much, it can dynamically reduce the amount allocated to other machines, because this amount will be provided even at the expense of other systems.
Enabling Dynamic Memory is as simple as ticking the checkbox before powering the VM on. You can't change this while the VM is running, but I can't think of a circumstance where that would be a good idea anyway. After that, you have your minimum and maximum RAM allocations, and these are pretty much what you would expect: the minimum is the amount "reserved" for the VM in VMware terms, while the maximum is the upper limit of what will be provided. These values will probably vary somewhat, but unless you're extremely constrained on memory I don't expect many problems. I haven't set an upper limit on my hosts and it hasn't run wild on me. Again, I'm running a lab environment, so in production you need to pay more attention, but this is pretty forgiving.
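To make the minimum/maximum behavior concrete, here's a minimal Python sketch of the idea. This is my own illustration of the clamping concept, not Hyper-V's actual balancing algorithm:

```python
def dynamic_allocation(demand_mb, minimum_mb, maximum_mb):
    """Clamp a VM's demanded memory between its configured limits.

    Illustrative only -- Hyper-V's real balancer is more involved.
    """
    return max(minimum_mb, min(demand_mb, maximum_mb))

# A quiet VM never drops below its minimum ("reserved") amount...
print(dynamic_allocation(demand_mb=256, minimum_mb=512, maximum_mb=4096))   # 512
# ...and a hungry VM never exceeds its maximum.
print(dynamic_allocation(demand_mb=8192, minimum_mb=512, maximum_mb=4096))  # 4096
```

Anything in between simply tracks the demand, which is why the feature feels so hands-off in a lab.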
The memory buffer % value is how much additional memory Hyper-V will allocate on top of the demanded memory. This gives you some breathing room in the event of an unexpected spike in memory demand. Again, you will need to come up with an appropriate number, but unless you have a very erratic application it's unlikely that your memory needs will climb dramatically enough, and fast enough, to catch the host off guard.
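The buffer math itself is simple percentage headroom. A quick sketch (my own simplification, not Microsoft's exact formula), using the default buffer of 20%:

```python
def buffered_target(demand_mb, buffer_pct):
    """Add the configured buffer percentage on top of demanded memory.

    A conceptual sketch of the headroom idea, not Hyper-V's exact math.
    """
    return int(demand_mb * (1 + buffer_pct / 100))

# With a 20% buffer, a VM demanding 1000 MB is targeted at 1200 MB,
# leaving room for a sudden spike before the host has to react.
print(buffered_target(1000, 20))  # 1200
```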
Memory weight is more or less exactly what it says: the priority of the VM to get memory in comparison to other VMs. This is comparable to Shares in vSphere. If you have more VMs on the host than can all be provided with their maximum RAM (overcommit), it will prioritize as instructed to make sure memory gets allocated. I've never put myself into a situation that triggers this, so I'm not 100% certain how it differs from vSphere's algorithm. The TechNet article above goes into more detail on this subject if you need it.
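The priority idea can be sketched as proportional sharing. This is my own toy model of weight-based allocation under contention (the names and logic are mine, not Hyper-V's actual balancer):

```python
def allocate_by_weight(demands, weights, host_free_mb):
    """Split scarce host memory across VMs in proportion to their weights.

    demands/weights are parallel lists, one entry per VM.
    Hypothetical sketch; real Hyper-V balancing is more nuanced.
    """
    total_weight = sum(weights)
    grants = []
    for demand, weight in zip(demands, weights):
        share = host_free_mb * weight / total_weight
        grants.append(min(demand, int(share)))  # never grant more than demanded
    return grants

# Two VMs each want 4096 MB but only 6000 MB is free;
# the higher-weight VM gets its full demand, the other takes the cut.
print(allocate_by_weight([4096, 4096], [75, 25], 6000))  # [4096, 1500]
```

A real balancer would also redistribute the leftover slice, but the key point is the same: under contention, weight decides who eats first.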
Lastly, let's compare vSphere's and Hyper-V's memory-overcommitment technologies. vSphere makes use of Transparent Page Sharing (TPS), a method in which guests that have the same programs and data loaded into memory share a single copy of the duplicated memory pages: essentially RAM deduplication. There are some potential security issues there, what with multiple guests sharing the same copy of a page. Additionally, the host has to hash memory pages to verify that what's loaded is in fact identical, which isn't that intensive when it's simply something like a base Windows boot. However, when you get some heavier systems whose memory can diverge from the rest quite easily (SharePoint, Lync, SQL, etc.), you run into some fairly heavy computational penalties. I've never been in a virtual environment where CPU usage was high enough for this to matter, so it may never come up, but it could present an issue.
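The TPS concept can be sketched in a few lines. This is a deliberately simplified illustration of hash-based page deduplication, not VMware's implementation (real TPS also byte-compares pages after a hash match before collapsing them to one copy-on-write page):

```python
import hashlib

def shared_page_count(guest_pages):
    """Count how many physical pages TPS-style dedup would save.

    guest_pages: list of page contents (bytes) across all guests.
    Conceptual sketch only.
    """
    seen = set()
    saved = 0
    for page in guest_pages:
        digest = hashlib.sha256(page).digest()
        if digest in seen:
            saved += 1        # duplicate page: share the existing copy
        else:
            seen.add(digest)  # first copy must stay resident
    return saved

# Three guests booted from the same image hold identical kernel pages;
# two of the three copies can be shared, the unique page cannot.
pages = [b"windows-kernel-page"] * 3 + [b"unique-sql-buffer"]
print(shared_page_count(pages))  # 2
```

The hashing loop is also where the computational cost mentioned above comes from: every candidate page has to be scanned.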
Hyper-V's Smart Paging doesn't appear to be anything like TPS. Smart Paging will page to disk only on VM startup, and only if the amount of Startup RAM we set above is unavailable. In any other instance Hyper-V does not allow memory overcommit. Basically, Hyper-V does all of its math up front, so the possibility of RAM starvation is substantially reduced. Conversely, vSphere only reacts when memory becomes constrained; while there is no up-front computational overhead for that, TPS kind of negates the benefit, and when memory does become constrained it needs to activate the balloon driver. Once you've triggered the balloon driver you've already run into paging issues, so it's a reactive technology as opposed to what can be termed a more proactive one.
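The Smart Paging decision itself boils down to a shortfall check at boot. A minimal sketch of that idea (my own illustration; function name and numbers are hypothetical):

```python
def smart_paging_needed_mb(startup_mb, host_free_mb):
    """Return how much memory must be temporarily paged to disk
    so a VM can reach its Startup RAM figure at (re)start.

    Illustrative only: Smart Paging applies at VM startup, and
    at runtime Hyper-V simply refuses to overcommit.
    """
    return max(0, startup_mb - host_free_mb)

print(smart_paging_needed_mb(startup_mb=2048, host_free_mb=3000))  # 0   (RAM suffices)
print(smart_paging_needed_mb(startup_mb=2048, host_free_mb=1500))  # 548 (page the shortfall)
```

Once the guest settles into its runtime demand, that paged amount is released, which is why Smart Paging is a boot-time safety net rather than an overcommit mechanism.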
In my opinion, all of this balances out to a degree. You can't overcommit memory in Hyper-V, so there is a harder cap on the number of VMs you can run. The positive is that you more effectively negate the possibility of paging issues in the event all of your overcommitments come due. On the VMware side of the argument, how often do you really have every VM attempt to access all of its allocated memory at once?