Discussion:
[Fwd: Re: softraid+encryption+ubuntu = fail!]
Steve Holdoway
2011-06-03 00:34:18 UTC
Permalink
Sorry, I added code to refuse mail servers without reverse DNS
entries... seems nzlug may be one of these. Anyway I missed the answers.

This is the conclusion I've come to. LTS is just a sop - they just want
the shiny bits ( schoolboy programming - functionality at all costs and
bugger the quality ) - and it's really time to revert... to Debian or
CentOS I suppose. Unfortunately, it's a circular argument... every major
player has screwed up at some point in time, and now it seems to be
Ubuntu's turn - the last security update I did screwed up PAM - not a big
deal, as the relevant services just needed restarting - but another mistake.

I'm getting further now. Remove all irrelevant hardware - a floppy and a
buggered CD drive - and if I go through and redo each step when I get an
error ( and there are a lot! - trying to mount the swap partition is my
fave so far ), the install moves on. But only from a CD, mind - the USB
stick is way faster, but keeps on generating phantom RAID partitions. It
really is like the days when installfests were a necessary function.

It's good to know I'm no madder than usual...

Cheers,

Steve

On Fri, 2011-06-03 at 12:06 +1200, a friend forwarded this to me...
-------- Forwarded Message --------
Subject: Re: [nzlug] softraid+encryption+ubuntu = fail!
Date: Fri, 3 Jun 2011 10:20:58 +1200
I asked on #ubuntu about a similar setup, a brand new Ubuntu Server 10.04.2
install with software RAID1 on two drives, but it kept failing to install
grub on both. The usually helpful people on #ubuntu told me I should "go
back to windows", to which I said something that got me banned from the
channel for the rest of the week.
Ubuntu is borked. I've since set up that server and another similar one
using debian, which is probably what I should have done in the first place.
This setup is for my favourite local charity, so input would be
gratefully received.
The idea is to have an unencrypted root partition so support people can
remote in via OpenVPN to get the rest of the systems ( KVM images on an
encrypted partition ) up and running in the event of a power or similar
failure, but still render the data useless in case of theft. To do this
we use a manually entered password to encrypt the partition.
For robustness, we're running this encrypted partition on a pair of disks
in software RAID 1.
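Roughly, the manual side of that setup looks like this - device names, the
md number and the mapper name are only illustrative, not the exact ones in
use here:

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
cryptsetup luksFormat /dev/md1
cryptsetup luksOpen /dev/md1 cryptdata     # prompts for the password
mkfs.ext4 /dev/mapper/cryptdata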
Initial setup was done on 10.04 LTS and worked fine. Security upgrades
to 10.04.2 have made this a real mess, as the process requesting the
password no longer times out or actually talks to a physical device - so
it has to be killed off and manually started. I can probably get around
that with a noauto in the fstab, but that's not my real worry.
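( By noauto I mean an fstab line along these lines - mount point and mapper
name illustrative:

/dev/mapper/cryptdata  /srv/crypt  ext4  noauto  0  0

so the boot no longer blocks waiting for the password. )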
I'm trying to replicate the softraid/encryption configuration on another
server as a backup. It just will not install. I don't have any 10.04
alternate discs around, so I used 10.04.2 rather than 11.04. Every attempt
fails. Manually creating the encrypted disk works fine, but even with all
of the necessary bits in crypttab/fstab, the config is lost on reboot, and
no password is ever requested. I can find no answers, either in the logs
or on Google.
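For what it's worth, the crypttab entry is along these lines (names
illustrative):

# /etc/crypttab
cryptdata   /dev/md1   none   luks

plus a matching /dev/mapper/cryptdata line in fstab; the "none" key field
is what should make the boot prompt for the password.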
Does anyone have an idea as to what has changed / what is so terrible
about what I'm trying to do??
Cheers,
Steve
--
http://www.greengecko.co.nz
Skype: sholdowa
_______________________________________________
http://www.linux.net.nz/cgi-bin/mailman/listinfo/nzlug
--
Steve Holdoway BSc(Hons) MNZCS <***@greengecko.co.nz>
http://www.greengecko.co.nz
MSN: ***@greengecko.co.nz
Skype: sholdowa
Jan Bakuwel
2011-06-03 01:43:59 UTC
Permalink
Hi Steve,

+1 for using Debian on a server.

If you want to be as trouble-free as you can get, consider:

Set up partitions for three independent systems: one "rescue" system just
used to boot, and two for whatever OS you want to use. This way you don't
have to throw away your old shoes before the new ones fit (no matter what
the shoemakers say about the process being seamless). You can have two
Debian Squeeze OSes that are identical. When an update comes along,
update only one of them and, when happy, switch over to that one as your
default OS. The next updates should be applied to the former default OS,
and so on. Saved my bacon many times, especially with systems you're
supporting remotely. The "rescue" partition is there so you're in full
control of the boot process rather than grub2 messing it all up. I'd
even recommend using Extlinux on your rescue OS rather than grub2. You'd
chainload the two other OSes from the "rescue" partition.
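A minimal extlinux.conf for that chainloading could look something like
this (partition numbers are only an example: rescue on partition 1, the
two OSes on partitions 2 and 3):

DEFAULT os-a
PROMPT 1
TIMEOUT 50

LABEL os-a
    COM32 chain.c32
    APPEND hd0 2

LABEL os-b
    COM32 chain.c32
    APPEND hd0 3

(chain.c32 just needs to be copied into the same directory as
extlinux.conf.)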

Don't use LVM or encrypted discs for your root fs.

Avoid booting off software RAID if you can help it. Either use a
hardware RAID controller or simply boot off one disc while making sure
you keep the equivalent partition on the other disc fully up to date
(including MBR and boot records) manually.
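Keeping the spare disc in sync can be as simple as something like this -
device names illustrative, and the dd is destructive if pointed at the
wrong disc:

# copy the MBR boot code only (bs=512 would also copy the partition table)
dd if=/dev/sda of=/dev/sdb bs=446 count=1
# keep the OS copy on the spare disc up to date
mount /dev/sdb1 /mnt/spare
rsync -aHAXx --delete / /mnt/spare/
umount /mnt/spare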

Offload as many tasks as possible to VMs, i.e. your host OS just provides
the basic functionality to host VMs.

Hope this helps,

Jan
Volker Kuhlmann
2011-06-03 07:58:53 UTC
Permalink
Post by Jan Bakuwel
Avoid booting off software RAID if you can help it. Either use a
hardware RAID controller or simply boot off one disc while making sure
you keep the equivalent partition on the other disc fully up to date
(including MBR and boot records) manually.
Sorry, but that's just nonsense. I've been booting off raid1 for almost
10 years and it just works. It's also required if you want to keep on
booting if one of your disks self-destructs. If your distro is too
stupid to install the boot loader properly you have to kick it into
shape manually. I had to do that a few times.

The only really critical issue is to make sure a boot loader is installed
on *both* disks. It's slightly easier if the location (start/end LBA) of
the constituent partitions is the same on both disks. Neither Linux SW
RAID nor the boot loader requires this, but differing layouts make it a
bit more complicated (maybe too complicated for some distro tools).

It looks something like this:

# cat /etc/grub.conf
setup --stage2=/boot/grub/stage2 --force-lba (hd0) (hd0,2)
setup --stage2=/boot/grub/stage2 --force-lba (hd1) (hd1,2)
quit

Plus the usual in /boot/grub/menu.lst.
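(The setup lines above are grub legacy shell commands - typically replayed
with something like grub --batch < /etc/grub.conf - putting stage1 into the
MBR of each disk and pointing it at the stage2 on that disk's third
partition. The menu.lst side could then look something like this, with
illustrative kernel paths:)

default 0
fallback 1

title Linux (hd0)
    root (hd0,2)
    kernel /boot/vmlinuz root=/dev/md0
    initrd /boot/initrd

title Linux (hd1)
    root (hd1,2)
    kernel /boot/vmlinuz root=/dev/md0
    initrd /boot/initrd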

Each disk may have to have an active partition too, but that might
depend on exactly where you put the boot loader. MBR is always a smart
choice.
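Setting the flag is a one-liner per disk, e.g. with parted (partition
number 3 here matches the (hdX,2) above, but that's only an example):

parted /dev/sda set 3 boot on
parted /dev/sdb set 3 boot on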

For raid5 things may be a bit different and you probably can't boot off
a striped array, but for raid1 it's trivial.

Volker
--
Volker Kuhlmann is list0570 with the domain in header.
http://volker.dnsalias.net/ Please do not CC list postings to me.

Jan Bakuwel
2011-06-04 00:32:32 UTC
Permalink
Hello Volker,
Post by Volker Kuhlmann
Post by Jan Bakuwel
Avoid booting off software RAID if you can help it. Either use a
hardware RAID controller or simply boot off one disc while making sure
you keep the equivalent partition on the other disc fully up to date
(including MBR and boot records) manually.
Sorry, but that's just nonsense. I've been booting off raid1 for almost
10 years and it just works. It's also required if you want to keep on
booting if one of your disks self-destructs. If your distro is too
stupid to install the boot loader properly you have to kick it into
shape manually. I had to do that a few times.
Nonsense? I'd just call it a different approach :-)

My point is that you do not need RAID1 if you want to keep on booting when
one of the disks self-destructs. You do indeed need to configure the
boot loader taking that into account though (as well as manually doing
the work that the RAID system would otherwise do automatically).
Particularly for systems that are managed remotely, one should seriously
consider hardware RAID, at least for the root file systems, so the
systems can be booted and remotely accessed in case of disc failures.

I guess the point I'm trying to make is that it sometimes helps to keep
things as simple as possible, especially if access to systems is limited
(to remote access only). I prefer to keep the root filing system easily
accessible (ideally on hardware RAID, but if that's not possible then
just on a single disc and not on software RAID, not on LVM, not on an
encrypted disc) and keep a copy of the OS on the other disc in sync. The
data itself, of course, resides on the RAID array.

When all services have been moved to VMs, the virtualization host's OS
is fairly static and keeping a mirror on a 2nd (3rd) disc is not a lot
of work. It even has some advantages as you can upgrade your currently
running OS while keeping a copy of your old shoes so to speak.

I do recall having had issues booting off software RAID arrays, but it's
been so long (easily over 10 years) that I don't recall anymore exactly
what it was (perhaps a boot loader on one disc referring to the other) -
what I do recall though is that I wasn't able to boot the system.

Jan


Volker Kuhlmann
2011-06-04 02:15:29 UTC
Permalink
On Sat 04 Jun 2011 12:32:32 NZST +1200, Jan Bakuwel wrote:

Hi Jan,
Post by Jan Bakuwel
Nonsense? I'd just call it a different approach :-)
I was primarily responding to your advice "Avoid booting off software
RAID if you can help it". That's bad advice, unqualified like that. The
advantage of booting from softraid1 is that it's easy to do, and you
still have a booting system after disk failure.

Of course it's always better to have a hardware raid card - except for
your wallet. They're hideously expensive. Obviously some purposes will
warrant the cost easily, but my desktop PC doesn't :-) and there booting
from soft raid1 has saved my bacon a few times. For a budget server I'd
use the same approach. For remote server locations you ideally have to pull
other strings anyway. Real performance of course costs real money. If you
meant "avoid soft raid in favour of hard raid" - well, that's just a
cost/performance question without a generic answer.
Post by Jan Bakuwel
I guess the point I'm trying to make is that it sometimes helps to keep
things as simple as possible, especially if access to systems is limited
(to remote access only).
I would call soft raid 1 easy. You can mount each constituent disk by
itself without going through the md device - dangerous of course, don't
do it with the md running, but it works (or at least used to). It can be
useful for emergency repairs.
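With the old 0.90 (or 1.0) metadata the superblock sits at the end of the
partition, so with the array stopped something like

mount -o ro /dev/sdb3 /mnt

works on the bare member; with 1.1/1.2 metadata (superblock at the start)
it won't mount directly like that.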
Post by Jan Bakuwel
When all services have been moved to VMs, the virtualization host's OS
is fairly static and keeping a mirror on a 2nd (3rd) disc is not a lot
of work. It even has some advantages as you can upgrade your currently
running OS while keeping a copy of your old shoes so to speak.
That should be possible with soft raid too:
* Mark one disk as bad, remove it from the array.
* Upgrade the copy on that disk while the system keeps running from the
  (degraded) array.
* (Here it gets fuzzier) Tell the system to run the array from the updated
  disk instead, on reboot.
* Hot-add the not-updated disk to the array.
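The bookend steps are plain mdadm (device names illustrative); the "switch
to the updated half" step in the middle is exactly the fuzzy part:

mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3   # drop one half out
# ... upgrade the copy on the removed half, arrange to boot from it ...
mdadm /dev/md0 --add /dev/sdb3                       # hot-add, resyncs
cat /proc/mdstat                                     # watch the rebuild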

I wouldn't try that on a remote server first though...

Volker
--
Volker Kuhlmann is list0570 with the domain in header.
http://volker.dnsalias.net/ Please do not CC list postings to me.
