While both Xen and OpenVZ are open-source server virtualization technologies, there are some big differences between the two. Potential VPS customers should check the applications they need to host to determine which virtualization technology is preferable.

On one hand you have Xen, a para-virtualization platform that gives you much of the behaviour of a dedicated server. You run your own instance of the Linux kernel, you can load your own kernel modules, you have properly virtualized memory, I/O and scheduling, and it is stable and predictable. On the other hand you have OpenVZ, an operating-system-level virtualization system that is just a thin layer on top of the underlying OS. It is simple to understand and has lower overhead, which usually translates to better performance.

OpenVZ Memory Model

First of all, when VPSLink’s OpenVZ Link-3 plan says “256MB guaranteed”, it actually means around 232MB of “privvmpages”, 14MB of “kmemsize” and other miscellaneous resources. When an application calls malloc(), the allocated amount is added to “privvmpages”; when “privvmpages” hits the limit, malloc() fails and returns NULL. If the host server itself runs out of memory, processes in any VE (virtual environment, OpenVZ’s term for a VPS) that has exceeded its “oomguarpages” will be terminated, although I do not think that will ever apply to VPSLink.

There are a few problems and a few advantages with OpenVZ’s approach to memory management. One of the biggest problems is that the amount of memory an application uses and the amount it allocates are actually different, and the difference varies a lot depending on the application. Take Java for example: it usually allocates a huge chunk of memory up front — often sized from everything it can see the host node has — but it might only use/commit a small fraction of that allocation. This can render a Java program unusable, as you will pretty much hit the privvmpages limit straight away. A bit of tweaking of the bootstrapping parameters might fix it, but it is definitely not as clean as Xen or a dedicated server. In fact, almost all applications that use an internal memory allocator suffer from this issue under OpenVZ.

Then there are issues with /proc/meminfo itself. While OpenVZ already provides a way to virtualise it, the “free” command on my OpenVZ VPS at VPSLink still shows the memory size of the host node. That makes some tasks, like running Java or heavy compilation with gcc, almost impossible on a small VPS.

The advantage of OpenVZ’s memory model is that it is simple to understand: on a VPSLink OpenVZ VPS you are pretty much limited only by privvmpages. Unlike on a dedicated server or Xen, your disk cache and buffered pages are not counted against your overall memory usage. Therefore an under-sold OpenVZ system, with lots of cache and buffer memory on the host server, might actually perform better than a similarly spec’d Xen VPS.

Xen Memory Model

The memory model for a Xen VPS is much easier to explain. A 256MB Xen VPS is just like a 256MB dedicated server: that segment of memory is reserved for this VPS only, and no other VPS node can touch it. Like a real dedicated server, it counts only resident pages, i.e. only memory blocks that are both allocated and used.

Moreover, what happens when you run out of memory? Your VPS starts to swap. Each VPSLink Xen VPS comes with a swap partition twice the size of its memory. When your applications require more memory, the least-used pages are swapped out to make room. Therefore a 256MB Xen VPS actually has 768MB of total memory (256MB RAM + 512MB swap), and believe me, swap space is very useful for handling a sudden spike in demand.

So Xen is always much better than OpenVZ? Not quite. While your 256MB VPS can theoretically use up to 768MB of memory, in reality the kernel, cache and buffers all take up memory as well.

Swapping kills performance.

Yes, you can tune the swappiness to keep shrinking cache and buffers before touching swap, but performance will suffer. On the other hand, you can push memory usage on an OpenVZ VPS all the way to the privvmpages limit without much performance degradation, provided the host node still has room to spare. That is a good thing, but it can also be a bad thing, which I will explain later.

Performance vs. Predictability

In the end it comes down to performance versus predictability, and my preference has always been Xen. While Xen has more overhead, and thus possibly a slower VPS, its out-of-memory behaviour is much more predictable than OpenVZ’s, and that predictability won me over. As I have already said, OpenVZ will continue to perform well as its memory usage approaches the limit. However, once privvmpages is exhausted, the next malloc() returns a NULL pointer, and depending on how the application handles a NULL pointer, it either dies gracefully or dies with a segmentation fault (usually the latter). It is like driving at 100km/h and suddenly hitting a brick wall: you do not even know there is a memory shortage, because everything sails along so smoothly until then.

On the other hand, when a Xen VPS uses up its free memory, it will start taking memory from buffers and cached pages. Then it will start swapping. And it will finally die only when the last bit of the swap partition is exhausted. Performance starts to degrade as soon as swapping begins: the load goes up, and the server becomes less and less responsive, spending more of its time swapping pages in and out than actually handling tasks. Even if it dies in the end, it is a long, very noticeable struggle rather than a head-on smash, just like on a dedicated server!

My preference? Predictable performance. I would rather have my sites slow to a crawl than have them crash and burn when memory is exhausted.

What About Burstable Memory in OpenVZ?

“Burstable memory” in OpenVZ is overrated IMHO, as it makes the behaviour of your VPS even less predictable. It is often advised to set privvmpages (the burstable amount) to twice vmguarpages (the guaranteed amount), because the allocated-to-used ratio is usually around 2:1. However, that is not always the case. At work, where we do a lot of Java development, it is more like 5:1, but on my VPSLink OpenVZ VPS, which mostly runs Lighttpd, MySQL and PHP, the ratio is as low as about 1.45:1.

Therefore out-of-memory can still happen even with the burstable amount (privvmpages) set to twice the guaranteed amount (vmguarpages). On the other hand, while VPSLink’s “no burst, no swap” policy, i.e. making the burstable amount the same as the guaranteed amount, gives each VPS less memory to play with, it at least guarantees that when OOM occurs, no VE will be held responsible, as all of them will be under their guaranteed limits.
