[volunteers] RAID array rework

Daniel Gimpelevich daniel at gimpelevich.san-francisco.ca.us
Mon Mar 19 16:39:14 PST 2007


> What happens if you just ignore the existing stripe information stored
> on those drives and try to create new stripe sets using mdadm's
> "create" mode?

I asked this very same question at the installfest. Unfortunately, I 
don't remember what Paul's response was.
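
For what it's worth, a non-destructive way to see exactly what stripe
information mdadm still thinks is on those drives is its examine mode.
A minimal sketch, assuming the surviving members are /dev/sda1 through
/dev/sdg1 (my guess at the names, not checked against the box):

# mdadm --examine /dev/sda1   ## dump the old RAID superblock, if any
# mdadm --examine --scan      ## summarize every array mdadm can find

My reading of the manpage is that "mdadm -C" over members that still
carry such a superblock merely warns and asks for confirmation before
overwriting it, so ignoring the old stripe sets ought to work.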

> If mdadm is actually balking at so doing, which would seem a really odd
> thing for it to do, then wouldn't it suffice to just overwrite the first
> few sectors of the drive using "dd"?  I mean, what happens after you do
> that?  That's the natural way to zero out troublesome data on a hard
> drive, right?

I don't think that prescription is indicated quite yet, but it's always 
an option. BTW, mdadm apparently looks at the last sectors, not the 
first ones. Anyway, I would expect any balking to be the result of a 
stale mdadm.conf file.
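
If we do end up wanting to clear the old metadata first, mdadm can do
that itself without resorting to dd. A sketch under the same guessed
device names, to be run with the old array stopped:

# mdadm --stop /dev/md0               ## make sure nothing is assembled
# mdadm --zero-superblock /dev/sda1   ## erase the stale superblock;
                                      ## repeat for each old member
# grep ^ARRAY /etc/mdadm/mdadm.conf   ## prune any stale ARRAY lines too

(The old-style 0.90 superblock lives near the end of the device, which
is why mdadm looks at the last sectors rather than the first.)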

> In case it will help others, here's what I wrote when you asked me about
> this stuff a few weeks ago, in private mail:
>
> > Can you help me construct the right set of commands to
> > convert that mess into what it should be now?
>
> Er, sorry to say, I have almost zero experience with the mdadm utility,
> so I'm a bit at sea about details of its proper use.  Literally the only
> time I've created an "md" driver array, I used the Debian "etch" 4.0
> installer to do it, rather than wielding mdadm from the command line.
> (I _do_ have a lot of experience with RAID at a prior firm, but it was
> always hardware RAID.)

I originally tried to create the current setup with the etch installer,
but it wouldn't allow that at all, so I dove headfirst into the mdadm
manpage. None of us uses mdadm often enough for that step to be
avoidable.

> So, any "recipe" I attempt to conjure out of thin air is extremely
> likely to be missing some essential steps, among other things.  For
> example, can I just ignore what's currently on the drives and do
> "mdadm -C" to create fresh array data?  I have no idea (though I do
> suspect one can).

Not just you -- none of us does. Paul's request for advice from the
sidelines is ultimately unworkable here; someone needs to sit down and
actually try stuff.

> I'm unclear on how many good drives are now in the VA Linux model 9008
> JBOD (but you would know).  For purposes of this mail, let's assume 8,
> and that we're creating a single 6-drive RAID array with 2 spare drives.
> I'll assume that /dev/sda through /dev/sdh each have a single partition
> taking up the whole drive, having partition type "fd" (Linux raid
> autodetect).

This was addressed in the very message you quote:
> You'll also remember one of the drives died, so one mirror's
> degraded, and we have 7 drives to play with, not 8.
>
> So... I'd like to reconfigure the array, and I've managed to
> mess up the old config a bit, but haven't reached a new config yet.
>
> I'd like to set it up as RAID5 using 7 drives.  That'd be 5 drives
> plus 2 spares.


> # mknod /dev/sdh1 b 8 113  ## You already did this step.
> # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=6 \
>     --spare-devices=2 --chunk=256 /dev/sd[a-h]1
> # mdadm --detail --scan >> /etc/mdadm/mdadm.conf
> # mkfs.ext3 -b 4096 -R stride=8 /dev/md0
>
> Then, create a mountpoint (/media/jbod, or whatever), mount it, and add
> an appropriate line for /dev/md0 to /etc/fstab.
>
> But I should stress that I really don't know what I'm doing, so the
> above may have some holes in it.  I'm guesstimating that it's basically
> right, though.

Looks OK to me so far, but none of it means anything until it's
actually tried.
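
For the record, adapting that recipe to the 7 usable drives (5 active
plus 2 spares, per the plan above), and again guessing the members are
/dev/sda1 through /dev/sdg1, would presumably look like:

# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 \
    --spare-devices=2 --chunk=256 /dev/sd[a-g]1
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# mkfs.ext3 -b 4096 -R stride=64 /dev/md0  ## stride = chunk/block = 256/4
# mkdir -p /media/jbod
# mount /dev/md0 /media/jbod

plus an /etc/fstab line along the lines of:

/dev/md0   /media/jbod   ext3   defaults   0   2

One nit: if I read the mke2fs manpage right, stride should be the chunk
size expressed in filesystem blocks, so with a 256K chunk and 4K blocks
that works out to 64 rather than 8. Worth double-checking before anyone
actually runs any of this.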




