Linux-VServer is a virtual private server implementation done by adding operating system-level virtualization capabilities to the Linux kernel.
http://en.wikipedia.org/wiki/Linux-VServer
Linux-VServer is a jail mechanism in that it can be used to securely partition resources on a computer system (such as the file system, CPU time, network addresses and memory) in such a way that processes cannot mount a denial-of-service attack on anything outside their partition.
Booting a virtual private server is then simply a matter of kickstarting init in a new security context; likewise, shutting it down simply entails killing all processes with that security context. The contexts themselves are robust enough to boot many Linux distributions unmodified, including Debian and Fedora Core.
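The boot/shutdown description above can be sketched as a toy model. This is an illustration of the idea only, not the kernel's actual API: every process carries a context id, "booting" a guest starts init in a fresh context, and "shutdown" kills everything tagged with that context (the class and method names here are invented for the sketch).

```python
class Process:
    def __init__(self, pid, xid, name):
        self.pid, self.xid, self.name = pid, xid, name
        self.alive = True

class ContextTable:
    """Toy stand-in for the kernel's per-process security contexts."""
    def __init__(self):
        self.procs = []
        self.next_pid = 1
        self.next_xid = 1

    def boot_guest(self):
        """Boot = kickstart init in a brand-new security context."""
        xid = self.next_xid
        self.next_xid += 1
        self.spawn(xid, "init")
        return xid

    def spawn(self, xid, name):
        p = Process(self.next_pid, xid, name)
        self.next_pid += 1
        self.procs.append(p)
        return p

    def shutdown_guest(self, xid):
        """Shutdown = kill every process sharing that context."""
        for p in self.procs:
            if p.xid == xid:
                p.alive = False

    def visible(self, xid):
        # Processes in one context cannot see those in another.
        return [p.name for p in self.procs if p.xid == xid and p.alive]

table = ContextTable()
guest = table.boot_guest()
table.spawn(guest, "sshd")
other = table.boot_guest()
print(table.visible(guest))   # ['init', 'sshd']
table.shutdown_guest(guest)
print(table.visible(guest))   # []
print(table.visible(other))   # ['init']
```

Note how tearing down one context leaves the other guest's processes untouched, which is the isolation property the excerpt describes.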
Virtual private servers are commonly used in web hosting services, where they are useful for segregating customer accounts, pooling resources and containing any potential security breaches. To save space on such installations, each virtual server's file system can be created as a tree of copy-on-write hard links to a "template" file system. The hard link is marked with a special filesystem attribute and when modified, is securely and transparently replaced with a real copy of the file.
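The copy-on-write hard-link trick can be demonstrated in userspace. This is an assumed simplification of the unification behaviour described above (not VServer's actual implementation, which uses a filesystem attribute and kernel support): a guest file shares an inode with the template via a hard link, and a write first "breaks" the link by swapping in a private copy, so the template is never modified.

```python
import os
import shutil
import tempfile

def cow_write(path, data):
    """Append to `path`; if it is still a shared hard link, break it first."""
    if os.stat(path).st_nlink > 1:       # still shared with the template?
        private = path + ".cowtmp"
        shutil.copy2(path, private)      # real copy on a new inode
        os.replace(private, path)        # atomically swap the copy in
    with open(path, "a") as f:
        f.write(data)

root = tempfile.mkdtemp()
template = os.path.join(root, "template.conf")
guest = os.path.join(root, "guest.conf")
with open(template, "w") as f:
    f.write("shared\n")
os.link(template, guest)                 # guest tree shares the inode

cow_write(guest, "guest-local change\n")
print(open(template).read())             # still just "shared\n"
print(os.stat(guest).st_nlink)           # 1: the link was broken
```

Until the first write, the guest file costs no extra disk space; afterwards only that one file is duplicated, which is why the template scheme saves space across many guests.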
Linux-VServer is not related to the Linux Virtual Server project, which implements network load balancing.
http://linux-vserver.org/Frequently_Asked_Questions#Is_VServer_comparable_to_XEN.2FUML.2FQEMU.3F
Q: Is VServer comparable to XEN/UML/QEMU?
A: Nope. XEN/UML/QEMU and VServer are just good friends. Since you ask, you probably know what XEN/UML/QEMU are. VServer, in contrast to XEN/UML/QEMU, does not "emulate" any hardware to run a kernel on. You can run a VServer kernel inside a XEN/UML/QEMU guest. This is confirmed to work at least with Linux 2.6/vs2.0.
Q: Performance?
A: For a single guest, we basically have native performance. Some tests showed insignificant overhead (about 1-2%); others ran faster than on an unpatched kernel. This is IMVHO significantly less than other solutions waste, especially if you have more than a single guest (because of the resource sharing).
derjohn
http://lwn.net/Articles/162718/
Compared to Linux-VServer, I could say OpenVZ has better resource management mechanisms (such as User Beancounters, or UBC for short) and full network stack virtualization (called venet in OpenVZ and ngnet in VServer) - something which is still not ready in VServer.
On how OpenVZ differs from the Linux-VServer project:
UBC is an OpenVZ kernel component which implements VPS resource accounting and limiting. There is a set of different resources (about 20 of them) for each VPS; you can set their barrier and limit (think of them as a soft and a hard limit) and see their current usage.
The thing is, there are some resources (like kernel memory or open file descriptors) which are not limited in any other way, so any single VPS can abuse such a resource, rendering the whole system (which can host hundreds of such VPSs) unusable.
So UBC is used to prevent such abuses (by different means - from returning ENOMEM from a syscall to killing the process that abuses resources); this is an important part of isolation. But that is not its only use.
The other UBC use is to provide guarantees for VPSs.
PS OpenVZ VPS == VServer guest
documented on: Dec 6, 2005, kolyshkin
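The barrier/limit scheme described in the quote can be sketched as a toy counter. The names and interface here are illustrative, not OpenVZ's actual UBC API: each resource has a barrier (soft limit) and a limit (hard limit), charges above the ceiling are refused the way a syscall would fail with ENOMEM, and refusals are counted.

```python
class Beancounter:
    """Toy UBC-style resource counter (illustrative, not the OpenVZ API)."""
    def __init__(self, barrier, limit):
        assert barrier <= limit
        self.barrier, self.limit = barrier, limit
        self.held = 0       # current usage
        self.failcnt = 0    # how often a charge was refused

    def charge(self, amount, strict=False):
        """Account `amount`; refuse above the hard limit, or above the
        barrier when `strict` (mimicking non-guaranteed allocations)."""
        ceiling = self.barrier if strict else self.limit
        if self.held + amount > ceiling:
            self.failcnt += 1
            return False    # the kernel would return ENOMEM here
        self.held += amount
        return True

    def uncharge(self, amount):
        self.held = max(0, self.held - amount)

kmem = Beancounter(barrier=100, limit=120)
print(kmem.charge(90))          # True  (under the barrier)
print(kmem.charge(25))          # True  (over barrier, still under limit)
print(kmem.charge(10))          # False (would exceed the hard limit)
print(kmem.held, kmem.failcnt)  # 115 1
```

Refusing the charge (rather than letting the allocation through) is what stops one guest from exhausting an otherwise-unlimited resource like kernel memory and taking down every other guest on the host.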
Linux-VServer's 2.6.12.4-vs2.0 is the most recent stable release; there are patches available for 2.6.13 and 2.6.14, but they are not "blessed" as stable (although they contain bugfixes and are very likely *stable*).
From the mailing list on the subject of the differences:
(will use Z for OpenVZ and S for Linux-VServer)
>> Factors of interest are
>> - stability,
Z: the announcement reads "first stable OVZ version"
S: we are at version 2.0.1 (> two years stable releases)
>> - Debian support,
Z: afaik they are redhat oriented (and recently trying to get gentoo support done)
S: L-VS is in sarge (although with older/broken packages), etch and sid but either using recent packages or compiling the tool yourself works pretty fine on debian
>> - hardware utilization,
Z: no idea
S: support for 90% of all kernel archs at (almost) native speed (utilization? I'd say 100% if required)
>> - documentation and
Z: no idea (*Comment*: it has an extensive 100-page user's guide. Also, all utilities have man pages, and there are some short, to-the-point howtos on the site and the forum.)
S: the wiki, the L-VS paper(s) and google
>> - community support,
Z: irc channel and forum/bug tracker
S: ML, irc channel (I guess we have excellent support)
>> - security.
guess both projects are trying to keep high security and IMHO the security is at least as high as with the vanilla kernel release …
documented on: Dec 6, 2005 by micah