flexible restore/install system 

Newsgroups:  gmane.linux.debian.user
Date:        Tue, 18 Oct 2005 14:18:37 +0200

At home I run a server with, frankly, too much stuff on it to be safe, but I wanted to test a lot of things and it evolved into this situation. It has ldap, samba, courier, spamassassin, clamav, squirrel, exim4,… Anyway, for normal quick restores, restoring a backup is quick and painless.

But I was thinking of a way not only to "restore" the system but also to "move" it. For instance, my system sometimes acts weird because of my tampering with it, so I would love to start from a clean sarge install but with all my data and services running on it, without spending too much time on the reinstall.

To manage things better I've begun moving services into uml instances on the server, which keeps things organised. This also makes restoring the services easy: one would only have to install a base system, install the needed utils (uml, bridge), copy over the uml files containing the systems, and start those.
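A minimal sketch of that restore sequence might look like this (the package names are real sarge packages, but the image paths and the UML kernel invocation are assumptions for illustration):

```shell
# Sketch of the UML-based restore; assumes the filesystem images
# were backed up to /backup/uml/ -- adjust paths to taste.

# 1. Install the host-side utilities.
apt-get install uml-utilities bridge-utils

# 2. Copy the UML root filesystem images back into place.
mkdir -p /srv/uml
cp /backup/uml/*.img /srv/uml/

# 3. Start each instance with a UML kernel (the 'linux' binary),
#    handing it its image as the ubd0 block device.
for img in /srv/uml/*.img; do
    linux ubd0="$img" eth0=tuntap,,,192.168.0.254 mem=128M &
done
```

The networking argument (`eth0=tuntap,...`) would of course have to match however you've bridged the instances on the host.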

As for other files like /etc/profile, /etc/inputrc and /etc/environment, I guess you could make a package containing those files, so they get installed along with this "customizing" package, or whatever you would call such a package.
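One low-tech way to build such a "customizing" package (a sketch, not necessarily clean Debian practice — the package name and version are made up, and note that shipping files owned by other packages in /etc can clash with their conffile handling):

```shell
# Build a trivial .deb that ships customized config files.
# 'mycustom' and its version are invented names for this example.
mkdir -p mycustom/DEBIAN mycustom/etc
cp /etc/profile /etc/inputrc /etc/environment mycustom/etc/

cat > mycustom/DEBIAN/control <<'EOF'
Package: mycustom
Version: 0.1
Architecture: all
Maintainer: you <you@example.org>
Description: local config files for quick reinstalls
EOF

dpkg-deb --build mycustom mycustom_0.1_all.deb
# On a fresh box: dpkg -i mycustom_0.1_all.deb
```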

A server reinstall or recovery would look like this:

  1. Install a clean base (sarge) system.
  2. Install the "customizing" package with your config files and the base apps.

This would leave you with a server "customized" to your liking and with the base apps you can't do without. Then further customizing would be required to run the services:

  3. Install the needed utils (uml, bridge).
  4. Copy back the uml files containing the systems and start them.

Next would be restoring the data:

  5. Restore /root and /home from backup.

As for backups, you would need to back up /root, /home and the uml systems, plus keep track of the changes you make to files that are shipped in the custom packages.
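A backup along those lines could be as simple as one dated tarball (a sketch; the backup target and source paths are assumptions):

```shell
# Minimal backup sketch: one dated tarball of the data and the
# UML images. BACKUP_DIR and the source paths are assumptions.
BACKUP_DIR=/backup
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"
tar czf "$BACKUP_DIR/server-$DATE.tar.gz" /root /home /srv/uml
```

For the UML images you'd want the instances shut down (or their filesystems synced) first, so the images are consistent.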

As said, this is not the fastest restore method, but compared to reinstalling a server from scratch it might be pretty quick and versatile.

  1. Is this doable? Anything I'm overlooking/comments/…
  2. What would be an easy way to make such custom packages, be it for installing config files or fake packages used to pull in your favourite apps?
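For the "fake package" half of question 2, Debian's equivs package is one existing answer: it builds a metapackage whose only job is to depend on your favourite apps (the package name and dependency list below are made-up examples):

```shell
# Build a metapackage that pulls in favourite apps via Depends.
# Requires the 'equivs' package; the names below are examples.
apt-get install equivs
equivs-control my-favourites    # writes a template control file

cat > my-favourites <<'EOF'
Section: misc
Priority: optional
Standards-Version: 3.6.2
Package: my-favourites
Depends: vim, screen, rsync, less
Description: metapackage pulling in my usual tools
EOF

equivs-build my-favourites      # produces a my-favourites_*.deb
dpkg -i my-favourites_*.deb
```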

Benedict Verheyen

Re: flexible restore/install system

> As said, this is not the fastest restore method but with regards to
> reinstalling a server it might be pretty quick and versatile.
>
> 1. Is this doable? Any things i'm overlooking/comments/...
> 2. What would be an easy way to making such custom packages be it for
> installing config files or fake packages used to install your favourite
> apps?

There are folks who have been thinking about this problem for a lot longer than you or me. Check out http://infrastructures.org for the concepts and ISConf[1] for the software.

Ryan Nowakowski

Re: flexible restore/install system

> There are folks who have been thinking about this problem for a lot

Very interesting site. Wish I had known about it earlier.

It's just that there is not much said about it on the web. Can you give more links? Because, skimming through it, I get the feeling we are talking about apples and oranges here. The OP's goal is a flexible restore/install system, but I think the site talks about a monster, organization-wide distributed cloning system.

,-----
| It's not for use in environments where you want to still make manual or
| ad-hoc changes to machines at the same time.
|
| * No gold server. You work from the command line of any representative
| target machine.
|
| * No central repository. Packages and change orders are stored in a
| distributed cache, checksummed, replicated, and spread across all
| participating machines.
|
| * No CVS server. See the previous point.
|
| * No single point of failure. See above.
|
| * Better workflow. No more futzing around with CVS checkins, rsync updates,
| or ssh'ing back to the gold server -- there isn't one. You log into one of
| the machines you want to change -- more of a natural sysadmin workflow.
`-----

T

Re: flexible restore/install system

> | * No gold server. You work from the command line of any representative
> | target machine.

a bad thing to have …. if it fails and there is no silver or bronze server with identical contents

> | * No central repository. Packages and change orders are stored in a
> | distributed cache, checksummed, replicated, and spread across all
> | participating machines.

a repository is nice … but people need to be disciplined and comment all their changes before releasing to the rest of the machines

  • there are many hundred ways to do that "task" so that only tested files and patches get out to the rest of the machines

> | * No CVS server. See the previous point.

ditto

> | * No single point of failure. See above.

ditto

> | * Better workflow.

if it takes more than a few minutes per day .. something is wrong

if the admin is afraid of 50 or 100 or 500 or 1000 or 5000 servers, something is wrong with the admin

the hardest part is to maintain and test the first 10-20 systems

and clone those patches/fixes onto the next 10-20 machines and the rest of the world picks up its changes from the "release" servers

>  No more futzing around with CVS checkins, rsync updates,

that'd be a good or bad thing … depending on where you're looking

for a small world of machines … say 5-25 …

make a cdrom with minimum drivers to support a network,
then install off the net or clone any of your own local servers

it'd be more (geeky) fun to boot from a (network) floppy
and do a network install … or even a usb-stick if you need
more than 1.44/2.88MB to boot and install

*
* if you lose your cdrom or floppy or usb-stick,
* you'd better have a few backups or a way to recreate a new one
*
  • or do a network boot for all machines, so they are all identical, except that the "pxe servers" will have to have a kernel that supports all the various client boxes
  • you cannot get away from a "single point of failure"
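a sketch of what the "pxe server" side of that network boot might look like, using dnsmasq and pxelinux (the subnet, paths and kernel names are example values; sarge-era setups often used a separate dhcpd + tftpd instead, so treat this as illustrative):

```shell
# Illustrative PXE boot server setup with dnsmasq; all the
# addresses, paths and kernel names are example values.
#
# /etc/dnsmasq.conf would carry:
#   dhcp-range=192.168.0.100,192.168.0.200,12h
#   dhcp-boot=pxelinux.0
#   enable-tftp
#   tftp-root=/srv/tftp

mkdir -p /srv/tftp/pxelinux.cfg
cp /usr/lib/syslinux/pxelinux.0 /srv/tftp/

# Default boot entry: one kernel built with drivers for all
# the various client boxes, as noted above.
cat > /srv/tftp/pxelinux.cfg/default <<'EOF'
DEFAULT install
LABEL install
  KERNEL vmlinuz
  APPEND initrd=initrd.gz
EOF
```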

fun stuff and ez to do …. in 5 min or 5 hr or 5 days … it would depend on what else *one* needs to do during the other 100 hrs of the week, and on what happens when any of the machines decides to go on holiday while you're on your honeymoon or vacation

alvin