[svlug] Lots o' disk space
Brian De Smet
svlug.org at breg.fief.org
Mon Sep 29 12:21:54 PDT 2008
> A friend of mine does video for a living and has
> amassed between 3 and 4 terabytes of data that
> currently reside on a plethora of external drives
> all over the place.
> She'd like to put together a box to back it all up
> too, so I did some Googling around and came
> across this article that makes it look pretty
> painless and straightforward.
How about a real-world example, with some advice and snide comments thrown
in? My own home server:
An Antec 300 case. This has six internal 3.5" bays and three external 5.25"
ones. I have two three-drive arrays in the six internal bays, and a DVD
drive, a fan controller, and a single drive in the three 5.25" bays. The
single drive holds the OS. The case has room for two 120mm fans right in
front of the drives. In my un-air-conditioned house, a little airflow over
the six internal drives is required when it hits 90 degrees in the room.
There are full-tower cases that can handle many more drives; I believe the
Cooler Master Stacker series is one.
The motherboard is a SuperMicro X2SBA+II (but really most any modern
motherboard should be fine).
I use two Promise TX4 four-port SATA cards. In theory I could have gotten
away with one of these cards plus the six onboard SATA ports. Whatever you
do, I would strongly avoid any cheap "RAID" card (where cheap is anything
under $200, I believe). Most motherboards that claim to do RAID are
actually doing software RAID with a special driver. Either buy a real RAID
card from 3ware, Areca, or Adaptec, or go with Linux software RAID.
I use CentOS 5 as it's nice and stable, quite boring, and used widely
enough that there are plenty of others on the internets to commiserate
with when I have problems. I performed a basic install on the single disk.
I really should migrate to a mirrored pair for my OS (particularly given
all the SMART errors from that disk in the last day or so); installing to
a mirrored set of disks is easy with at least CentOS 5 or Ubuntu 8.04.
Once the system is up and running, I would add the other drives and
hand-create the RAID arrays.
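Hand-creating an array really is only a handful of commands. A sketch, assuming three new disks showed up as /dev/sdb through /dev/sdd and you want a RAID 5 mounted at /srv/array0 (the device names, mount point, and ext3 choice are my examples, not gospel):

```shell
# Build a three-disk RAID 5 from whole disks.
# Double-check the device names with fdisk -l first -- mdadm
# will happily eat the wrong disks.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync progress.
cat /proc/mdstat

# Put a filesystem on it and mount it.
mkfs.ext3 /dev/md0
mkdir -p /srv/array0
mount /dev/md0 /srv/array0

# Record the array so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

Then add the mount to /etc/fstab like any other filesystem.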
On the matter of LVM: I avoid it like the plague. Sure, you can use it to
grow a disk array in place, but it makes recovery more complicated, it
adds another layer of abstraction to the map you have to hold in your
head, and its performance appears lacking. I suggest migrating to new
disks instead of growing the current array. When one of my arrays is
nearing full, I buy three new disks, open the case, plug them in hanging
out of it, and migrate my smallest array onto the new one. Once the
smallest array is empty, I put the new drives where the smallest drives
were, and once again I have disk space available. This of course assumes
that drive sizes will continue to increase at a reasonable pace (it's held
true for my migrations from 120 GB PATA disks through my just-installed
1TB array).
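The migration itself is mostly one long copy. A sketch, assuming the old array (/dev/md0) is mounted at /srv/old and the new disks are already assembled into /dev/md1 (all names made up for illustration):

```shell
# Filesystem on the new array, mounted somewhere temporary.
mkfs.ext3 /dev/md1
mkdir -p /srv/new
mount /dev/md1 /srv/new

# Copy everything, preserving permissions, ownership, times,
# and hard links. Run it a second time to catch anything that
# changed during the first pass.
rsync -aH /srv/old/ /srv/new/
rsync -aH /srv/old/ /srv/new/

# Once you're satisfied the copy is complete, retire the old array.
umount /srv/old
mdadm --stop /dev/md0
```

Note the trailing slashes on the rsync paths: they copy the *contents* of /srv/old into /srv/new rather than creating /srv/new/old.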
As for setting up the RAID arrays, I can't point to a specific how-to,
because I don't know of one that's very good. The Software-RAID HOWTO at
the Linux Documentation Project isn't a bad starting point. I found
software RAID fairly simple to set up from just the mdadm man page. That
said, I did have a friend or two I could bounce questions off of when I
was confused.
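Once the arrays are running, most of the day-to-day care is checking two places and making sure you hear about failures. For example (array name is again just an example):

```shell
# Quick overview of every array and any rebuild in progress.
cat /proc/mdstat

# Full detail on one array: state, member disks, failed devices.
mdadm --detail /dev/md0

# Have mdadm email you when a disk drops out. CentOS 5 runs this
# for you via the mdmonitor init script if MAILADDR is set in
# /etc/mdadm.conf, which is the cleaner way to do it.
mdadm --monitor --scan --mail=root --daemonise
```

A dead disk in a RAID 5 that nobody notices is just a slow-motion data loss, so the monitoring part is not optional.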
All that said, I'm not sure I would recommend this route unless you like
your friend enough to be her technical support and check up on the machine
on occasion. Have you considered one of the pre-assembled disk devices?
I'm currently playing with a Drobo as a backup device for my server. It's
certainly not perfect, and it is expensive, but it's elegant in its
simplicity, and I think I could recommend it to my parents.
http://www.drobo.com/. I've also heard mostly positive things about the
TeraStation and ReadyNAS devices. They have limitations in business use,
but for home use, where security and high performance for a dozen users
aren't needed, they seem excellent. Sometimes black-box appliances are
worth it.
--Brian De Smet