[svlug] New server plans moving forward

Daniel Gimpelevich daniel at gimpelevich.san-francisco.ca.us
Sun Jan 21 13:37:57 PST 2007


On Sun, 21 Jan 2007 01:18:50 -0800, SVLUG President wrote:

> With some serious help and SCSI Mojo from Daniel Gimpelvich, we've got

Tip for names: highlighting text copies it, and a middle-click pastes it.
Console mode? sudo apt-get install gpm

> SCSI Channel B is SE - single-ended - and is internal to the server

Well, both channels are internal to the server. I'm sure not everybody
here is aware of the status of that cable out the back.

> - the individual partitions are raid0 mirrored

They're raid1 mirrored.

> SCSI Channel A is LVD (low-voltage-differential?) and is the external JBOD

I believe it would technically be termed an external RAID device,
regardless of any RAID capability it may or may not have internally.

> - they're set up as 4 raid0 mirrored pairs

They are raid1 mirrored pairs.

> - they're a RAID1+0 setup - a raid1 stripe joining 4 raid0 mirrored
> pairs into one big huge ~144Gb virtual drive.

It's a raid0 stripe joining 4 raid1 mirrored pairs.
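
For reference, assembling that kind of layout with mdadm boils down to
roughly the following (device names here are placeholders, not what's
actually on the box):

  # four raid1 mirrored pairs (placeholder device names)
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
  mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
  # one raid0 stripe across the four mirrors -> the ~144GB virtual drive
  mdadm --create /dev/md5 --level=0 --raid-devices=4 /dev/md1 /dev/md2 /dev/md3 /dev/md4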

> - it's permanent mount point is /media/jbod

In hindsight, since we're not actually using any JBOD configuration, but
only RAID, perhaps /media/raid would have been more appropriate?
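
Renaming it would only be a one-line change in /etc/fstab anyway,
something like this (the md device and filesystem type are guesses here,
not necessarily what's on the box):

  /dev/md5  /media/raid  ext3  defaults  0  2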

> 20-20 hindsight wise, I'd have gone with LILO for our boot loader, since
> its architecture would probably work better with raided boot.
> Experiments are still needed to verify that, but from recent experience
> I can tell you, grub is definitely NOT the right boot loader to use with
> raided partitions.

No, that's 20/40 hindsight. A good night's sleep made me realize that LILO
needs to be reinstalled from scratch with every kernel update, which would
be quite impractical for this application. The current boot situation
really is the best we can do.
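
The killer is that LILO records the raw block locations of the kernel
image in its map file, so every kernel upgrade would have to be followed
by rerunning it by hand:

  /sbin/lilo -v    # forget this after a kernel update and the box won't boot

Grub at least reads its config and kernel from the filesystem at boot
time, so there's no extra step to forget, even with its problems on
raided partitions.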

> Some of you are probably wondering why we'd give up 50% of our drive
> capacity to do raid0 mirroring all over the place.  Well, right in the

Again, raid1 mirroring.

> middle of installing and configuring all of this, I got a clear answer
> as to why - one of the 8 drives in the jbod failed... but we were still
> able to proceed, and finish configuring the box, even with a drive out
> of service.  That put a smile on my face!
> 
> AFAIK 36GB SCA SCSI drives are pretty abundant.  I'm sure hoping that's
> true, since we now need at least one, to replace the one that failed. 
> If we can get a few more extras as spares, that'd be great. So, if you
> have any old 36GB drives laying around, PLEASE let me know, so I can
> twist your arm till you donate them. :-)))

Don't forget to mention that they must be SCA.
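
Once a replacement is in hand, swapping it into the degraded mirror
should just be a matter of something like this (md and sd names below are
placeholders; /proc/mdstat says which ones are real):

  cat /proc/mdstat                       # confirm which mirror is degraded
  mdadm /dev/md2 --remove /dev/sdc1      # drop the failed member before pulling the drive
  sfdisk -d /dev/sdb | sfdisk /dev/sdc   # after the swap, clone the partition table from the surviving twin
  mdadm /dev/md2 --add /dev/sdc1         # add the new member; the mirror resyncs on its own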

> We need to investigate mdadm's monitoring capability, and add something
> to rc2.d to autorun monitoring, so we can get emails if/when something
> significant like a drive failure happens to the various RAID instances. 
> Maybe setup a mailinglist for "sysadmins" that we can have software send
> emails to?

That may already be in Debian. It's easy to check.
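
If memory serves, the Debian mdadm package already ships a monitor
daemon, so it should mostly be a matter of giving it an address and
making sure it starts at boot (file location and options below are from
memory, so verify against the package):

  # in /etc/mdadm/mdadm.conf: where failure reports get mailed
  MAILADDR sysadmins@example.org    # placeholder until such a list exists

  # run the monitor once by hand to confirm mail actually goes out
  mdadm --monitor --scan --test --oneshot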

> After the "configure it" phase (probably a week or two), we'll take it
> live, letting it take over for the current server.    Once we're sure
> everything's working right, we can (after a backup of course) wipe the
> old server and repurpose it.  I'm thinking it might make a good
> honeypot, or otherwise help us with our ongoing war against spam... but
> I'm totally open to other ideas as well.  Maybe it can run vmware or
> xen?  Who knows.

It's only two 700MHz CPUs, so if you want to look into that, and if the
VA Linux Systems 9008 really does support some sort of hardware RAID
after all, it would really pay to redo the array that way, to take some
load off the processors.




