Linux-VServer 

Info 

Linux-VServer is a virtual private server implementation, created by adding operating system-level virtualization capabilities to the Linux kernel.

Source 

http://linux-vserver.org/Welcome_to_Linux-VServer.org

http://en.wikipedia.org/wiki/Linux-VServer

Description 

http://en.wikipedia.org/wiki/Linux-VServer

Linux-VServer is a jail mechanism in that it can be used to securely partition resources on a computer system (such as the file system, CPU time, network addresses and memory) in such a way that processes cannot mount a denial-of-service attack on anything outside their partition.

Booting a virtual private server is then simply a matter of kickstarting init in a new security context; likewise, shutting it down simply entails killing all processes with that security context. The contexts themselves are robust enough to boot many Linux distributions unmodified, including Debian and Fedora Core.
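
For illustration, assuming the util-vserver tools are installed, that boot/kill cycle maps onto a couple of commands (the guest name web1 is just a placeholder):

    # Boot a guest: its init is started in a fresh security context
    vserver web1 start

    # Get a shell inside the running context
    vserver web1 enter

    # Shut it down: every process in that context is stopped
    vserver web1 stop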

Virtual private servers are commonly used in web hosting services, where they are useful for segregating customer accounts, pooling resources and containing any potential security breaches. To save space on such installations, each virtual server's file system can be created as a tree of copy-on-write hard links to a "template" file system. The hard link is marked with a special filesystem attribute and when modified, is securely and transparently replaced with a real copy of the file.
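
A rough sketch of the idea, assuming a template tree under /vservers/.template (the paths are illustrative; util-vserver's own unification tools such as vunify/vhashify additionally take care of marking the links with the special attribute described above):

    # Create a new guest root as a tree of hard links to the template
    cp -la /vservers/.template /vservers/web1
    # Writes inside the guest then break the link and get a private copy;
    # the copy-on-write behaviour comes from the attribute-marked links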

Advantages 

Disadvantages 

Linux-VServer vs Linux Virtual Server 

Linux-VServer is not related to the Linux Virtual Server project, which implements network load balancing.

Linux-VServer vs XEN/UML/QEMU 

http://linux-vserver.org/Frequently_Asked_Questions#Is_VServer_comparable_to_XEN.2FUML.2FQEMU.3F

Q: Is VServer comparable to XEN/UML/QEMU?

A: Nope. XEN/UML/QEMU and VServer are just good friends. Since you ask, you probably know what XEN/UML/QEMU are. In contrast to XEN/UML/QEMU, VServer does not "emulate" any hardware on which you run a kernel. You can even run a VServer kernel inside a XEN/UML/QEMU guest; this is confirmed to work at least with Linux 2.6/vs2.0.

Q: Performance?

A: For a single guest, we basically have native performance. Some tests showed insignificant overhead (about 1-2%); others ran faster than on an unpatched kernel. This is IMVHO significantly less overhead than other solutions incur, especially if you have more than a single guest (because of the resource sharing).

derjohn

Linux-VServer vs OpenVZ 

http://lwn.net/Articles/162718/

Compared to Linux-VServer, I could say OpenVZ has better resource management mechanisms (such as User Beancounters, or UBC for short) and full network stack virtualization (called venet in OpenVZ and ngnet in VServer) - something which is still not ready in VServer.

On how OpenVZ differs from the Linux-VServer project:

UBC is an OpenVZ kernel component which implements resource accounting and limiting for VPSs. There is a set of different resources (about 20 of them) for each VPS; for each one you can set a barrier and a limit (think of them as a soft and a hard limit) and see the current usage.

The thing is, there are some resources (like kernel memory or open file descriptors) which are not limited in any other way, so a single VPS can abuse such a resource, rendering the whole system (which can host hundreds of such VPSs) unusable.

So UBC is used to prevent such abuses (by different means - from returning ENOMEM from a syscall to killing the offending process); this is an important part of isolation. But it is not UBC's only use.

The other UBC use is to provide guarantees for VPSs.
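
As a rough illustration with the OpenVZ tools (the VPS ID 101 and the byte values are arbitrary):

    # Set a barrier (soft) and limit (hard) for kernel memory used by VPS 101
    vzctl set 101 --kmemsize 2211840:2359296 --save

    # Inspect current usage, limits and failure counters for all beancounters
    cat /proc/user_beancounters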

PS OpenVZ VPS == VServer guest

documented on: Dec 6, 2005, kolyshkin

Linux VServer Project is doing just fine now 

Linux-VServer's 2.6.12.4-vs2.0 is the most recent stable release; there are patches available for 2.6.13 and 2.6.14, but they are not "blessed" as stable (although they contain bugfixes and are very likely *stable*).
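
For reference, applying such a patch to a matching vanilla kernel tree looks roughly like this (the patch file name here is illustrative; take the real one from linux-vserver.org):

    cd /usr/src/linux-2.6.12.4
    patch -p1 < ../patch-2.6.12.4-vs2.0.diff   # exact file name may differ
    make menuconfig && make && make modules_install install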

From the mailing list on the subject of the differences:

(will use Z for OpenVZ and S for Linux-VServer)

>> Factors of interest are
>> - stability,

Z: the announcement reads "first stable OVZ version"

S: we are at version 2.0.1 (> two years stable releases)

>> - Debian support,

Z: AFAIK they are Red Hat-oriented (and recently trying to get Gentoo support done)

S: L-VS is in sarge (although with older/broken packages), etch and sid; either using recent packages or compiling the tools yourself works pretty well on Debian

>> - hardware utilization,

Z: no idea

S: support for 90% of all kernel archs at (almost) native speed (utilization? I'd say 100% if required)

>> - documentation and

Z: no idea (*Comment*: it has an extensive 100-page user's guide. Also, all utilities have man pages, and there are some short, to-the-point howtos on the site and the forum.)

S: the wiki, the L-VS paper(s) and google

>> - community support,

Z: irc channel and forum/bug tracker

S: ML, irc channel (I guess we have excellent support)

>> - security.

I guess both projects are trying to maintain high security, and IMHO the security is at least as high as with the vanilla kernel release …

documented on: Dec 6, 2005 by micah

Linux-Vserver vs Xen 

http://allmybrain.com/2008/01/14/linux-vserver-vs-xen/

January 14th, 2008

By: Dennis
Tags: linux, linux-vserver, virtual servers, Xen

A while back, I found myself running out of hardware and wanting to host more sites than I currently was. In addition, I wanted to create a little bit more redundancy for some of the services I host.

At the time, I was hosting a number of services with Xen. One physical server hosted 3 or 4 virtual servers. After a certain amount of reading over different solutions, I decided to convert all my production virtual servers to Linux-vserver. I'm not advocating either solution here. I'm simply going to point out my reasons for changing and hopefully help my readers understand the issue more.

About Xen 

Xen is powered by what the Xen team has named the "Hypervisor". The Hypervisor is a thin operating system that runs between the hardware and the real operating system, like Linux or BSD. Having the Hypervisor run in place of an operating system makes it possible to start multiple virtual operating systems that each function with the same level of access to the physical hardware. In other words, it would be possible to allow a guest operating system to access a disk drive without talking to the host operating system first.

In Xen terminology, the first operating system to boot with the Hypervisor is called "Dom0". From Dom0, you can then use the Xen tools to boot other operating systems. These are termed "DomU" systems. Each DomU system needs a special kernel that is compiled to run under the Dom0 operating system. Xen allows a range of different operating systems to be run under Dom0. I read some time ago that the overhead for running multiple virtual systems was somewhere around 3%. In my experience running Xen, it really did seem to perform without a lot of overhead.

One downside of Xen is that you have to partition your physical resources for your virtual machines. You have to decide how much memory a virtual server needs and then allocate that memory when you start the system. Although there are tools to change, in real time, the memory that a virtual server is taking, most people are probably not going to attempt this on a regular basis.
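
As a sketch of what that partitioning looks like in practice (the DomU name, config file and sizes are made up):

    # /etc/xen/web1.cfg (fragment) - memory is reserved up front for this DomU
    name   = "web1"
    memory = 256

    # Resize a running DomU's allocation later, bounded by its maxmem setting
    xm mem-set web1 192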

Overall, Xen works quite well when you need to run different operating systems on one machine. For me, however, I use virtual servers to separate business needs. In other words, I don't want to install DNS, mail, and a number of other dissimilar servers in one operating system. This allows me to easily scale up or reconfigure a particular service if I need to, without interfering with the other services. I was running the same operating system on all my virtual servers. I basically found that after partitioning the physical memory a number of times, I didn't have room to start additional virtual servers and had to look for more resources. While it's true that some services could probably have been configured to use less memory, I don't like to limit those virtual machines if possible.

About Linux-Vserver 

Linux-Vserver uses a simpler approach to virtualization than Xen does. Linux-Vserver exists as a patch to the Linux kernel, plus a set of userspace tools that start virtual machines using the new kernel features. Basically, each guest machine runs on the same host kernel. The guests share the same memory and may or may not use the same disk drive. This has the advantage that while virtual machines can still be kept separate, they don't require more memory than the processes they contain. For example, instead of allocating 125 megabytes of memory for a server, you just start it up and it takes however much memory it takes. If it only takes 12 megabytes to run that type of server, you can effectively start up 10 instances on one physical server where the Xen solution only had room for 1.
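
To see this sharing from the host side, util-vserver ships small status tools; a hedged example (exact output columns vary between versions):

    # Show running security contexts with process count and memory footprint
    vserver-stat

    # ps across all contexts, annotated with the guest each process belongs to
    vps aux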

Now, you might argue that you could allocate less memory for the Xen virtual servers. But what if a virtual server suddenly needs more than you allocated? With Linux-vserver, as long as you have physical memory available, virtual servers can grow. Of course, if you run out of physical memory, you'll have a problem. Even though Linux-vserver guests could potentially use all of the physical memory, they can be configured with a maximum amount of memory per guest.
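
One way to do that with util-vserver, as a hedged sketch: per-guest limits live as small files in util-vserver's configuration tree, and the rss value is counted in pages (so 65536 pages is roughly 256 MB with 4 KB pages). The file names and layout should be checked against the util-vserver documentation:

    # Cap guest "web1" at roughly 256 MB resident memory
    mkdir -p /etc/vservers/web1/rlimits
    echo 65536 > /etc/vservers/web1/rlimits/rss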

Summary 

You'll have to decide which solution works best for you. In my case, I've been able to add quite a few more virtual servers to our existing hardware by switching to Linux-vserver. I've been able to use the same methods I was using to make backup copies of virtual servers with both Xen and Linux-vserver. Most of the disk images I was using with Xen actually worked with only minor modification when starting them with Linux-vserver. Overall, the transition has been quite painless and the solution suits my needs.

documented on: 2008-02-03

Experimenting with Linux-Vserver 

After my disappointing XEN experiments, I tried vserver. The way it handles virtualization is just a lot simpler than Xen's. The limitation is that you only get Linux installations: everything runs on the same kernel, and every guest is separated using a security context. It has an advantage over a chrooted environment (I have also tried this, but it is not worth writing a blog entry about): it can natively use several IPs/hostnames.
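
A hedged sketch of how a guest gets its own address and hostname in util-vserver's configuration tree (guest name, address and exact layout are illustrative; verify against the util-vserver documentation):

    # Give guest "web1" a dedicated IP on eth0 and its own node name
    mkdir -p /etc/vservers/web1/interfaces/0 /etc/vservers/web1/uts
    echo eth0             > /etc/vservers/web1/interfaces/0/dev
    echo 192.0.2.10       > /etc/vservers/web1/interfaces/0/ip
    echo 24               > /etc/vservers/web1/interfaces/0/prefix
    echo web1.example.org > /etc/vservers/web1/uts/nodename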

It is a lot less fun than Xen: you cannot run Windows concurrently with Linux at native speed. But it is a lot more stable. I did some tests, and they showed that it has more or less the same stability as the Linux kernel. So far, I have been able to run my server for 9 days. I think this is stable.

Concerning my other need:

Just to give a quick summary of my network problem:

documented on: 16 June 2007, gildor

From Zero to Virtualization: Linux-Vserver vs. OpenVZ 

https://www.fsfe.org/en/fellows/rca/from_out_there/from_zero_to_virtualization_linux_vserver_vs_openvz

We're currently evaluating solutions for virtualizing GNU/Linux servers at the HGKZ. Gino is evaluating OpenVZ while I'm looking at Linux-Vserver. Both solutions have a similar approach: Don't create virtual machines. Instead, create virtual servers that are sealed away from each other, but running on the same kernel. This has its own set of advantages and disadvantages, but I won't go into that; you can read about it elsewhere.

Here's a handy comparison table of what we found out so far:

[... comparison table omitted, doesn't quite make sense to me ...]

After all of this probing, compiling, tickling, testing and general mayhem, we have decided to go with OpenVZ. To reach that decision, we of course evaluated both solutions on many levels (don't let that table fool you). There are clear philosophical and architectural differences between the two solutions, but one key factor in our decision was that for the administrator, both systems are almost too similar.

Yes, OpenVZ takes a more complicated approach to networking, but Linux-Vserver takes a more complicated approach to configuration. Yes, a Linux-Vserver host's default config is mostly what you want and just seems to work out of the box, but this lowers your motivation for learning the details of the resource management system. And details, as you surely know, are nearly always ugly.

With OpenVZ, you are forced to learn these things up front, which presents a steeper learning curve but gifts you with a more solid grasp of the technology. You get to flex your math muscle to fit virtual servers into your actual hardware's limitations without creating an impossible physical paradox that rips a hole into space-time, and that's quite handy. With Linux-Vserver, these things might come back to haunt you later, when you're trying to put vserver no. 22 onto your machine and discover your 16 GB of memory are already spent, and that's when details bite a tasty chunk right out of your lower backside. The decrepit state of Linux-Vserver's documentation does nothing to ease your fears in this department, either. Convoluted configuration is okay, as long as it's well-documented convoluted configuration.

Now what if we are wrong, and within the next 8 months someone writes The Linux-Vserver Bible (Illustrated Swimsuit Edition) and SWSoft decides to pull the plug on support for OpenVZ, leaving us without any burly Russian engineers to take care of the code? That may seem sad, but it paves the way for such a beautiful pink-colored fluffy thought that it nearly makes my skull burst: We would still be fine. Both of the solutions are open. No proprietary formats. No secrets. We can migrate from one to the other at any time.

documented on: 13 April 2007, rca