[svlug] Data Recovery company recommendations

Scott Hess scott at doubleu.com
Thu Oct 16 07:52:41 PDT 2003


On Wed, 15 Oct 2003, Ira Weiny wrote:
> Well it has finally happened to me...
> 
> I have had a "home server" running for the last 2 years, and the data
> drive in it failed.  I have about 5 gigs of stuff that was not burned
> to CDs since about July.  The drive was 60 gigs and was only about half
> full.  Right now it just clicks and holds up the IDE bus.  Neither my
> G3 Mac nor an Intel machine at work would even recognize the drive.

Bummer.

> Questions:
> 
> 1) Any recommendations on data recovery companies and/or what they would
>    cost? (I can call for the 2nd answer.)

I have never used such a company, but from what I've read about how they
salvage data, there are roughly three classes of problems:

 - The drive can be recognized, but there are discrete sections of errors,
   or it only works for a short period.  They can probably read all of the
   data off and reconstruct it without physical intervention, which will
   cost a few hundred dollars.

 - The mechanics are generally fine, but the electronics are messed up.
   Best case, the controller board can be replaced; worst case, they
   transfer the drive's internals into a donor drive.  Either way, you're
   looking at hundreds of dollars an hour for a large number of hours, so
   this will cost thousands.

 - The platters themselves are iffy and require extreme measures, like
   spinning them very slowly and using statistical methods to read off
   the data.  This is almost certain to cost tens of thousands of
   dollars.

You sound like the second case.  The problem is that the process requires
a high degree of skill and a lot of labor, so it will probably cost more
than $1k per day of work for labor alone.

> 2) I am probably going to set up RAID mirroring on 2 new drives I am
>    buying.  Any pointers on that?  I have never set up RAID and I am
>    not a sys-admin.  (But I do write a lot of C code.)

RAID1 is _easy_.  Well, compared to four or five years ago!  Most
distributions have the current generation of drivers built in.  Rather
than write an impromptu howto here, I'll point you at:

  http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html

If you're starting from scratch, I believe you can also set up RAID as
part of the install process for at least Red Hat, and perhaps others.
Converting an existing system to RAID1 is somewhat finicky: you build your
system on one drive, set up the other as a degraded RAID1, get the system
booting from that drive, then add the original drive to the RAID1 (a rough
sketch of the commands is below).  LILO fun!  But good to know, for
if/when a drive heads south.
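
In mdadm terms (a sketch, assuming your distribution ships mdadm rather
than the older raidtools, and with made-up IDE device names -- hda is the
existing system drive, hdb the new one, partitioned to match):

  # Build a degraded RAID1 with one slot deliberately "missing":
  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1

  # Make a filesystem, copy the system over, point /etc/fstab and
  # lilo.conf at /dev/md0, rerun lilo, and reboot onto the array:
  mke2fs -j /dev/md0

  # Once running from the array, add the original drive and let the
  # mirror rebuild:
  mdadm --add /dev/md0 /dev/hda1
  cat /proc/mdstat          # watch the resync progress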

The main suggestions I have are:

 - Keep an offline log of what you do.  Or an online log in a distinct
   place (blogs can be useful for this).

 - Make certain you have a bootable CD which can handle all of this and
   actually boots your hardware.  It's much easier to muck with this stuff 
   if you haven't actually _mounted_ the drive.

 - I've never trusted /boot on RAID1, though you can apparently do it.  I
   just have a small duplicate /boot on the front of each drive.  With a
   little work, you could have duplicate lilo entries to boot either
   drive first, and even install the bootloader on both drives.  Or keep
   the bootloader on the /boot partition, and have a cron job dd one
   partition to the other (see the dd sketch after this list).  [Getting
   things just right so you can boot from anywhere in any circumstance
   can take some effort.  I choose to rely on a bootable CD.  Maybe it's
   time to learn grub?]

 - Consider getting another IDE controller.  RAID1 means you'll be
   accessing both drives all the time, so if a CD-R burner shares an IDE
   channel with one of the drives, you may hit bus contention more often.

 - Avoid cheap hardware RAID1.  If the controller fails, it can be
   almost impossible to rebuild the system.  Besides, the cheap
   controllers almost always have no real hardware RAID support; they're
   just BIOS trickery with post-boot software-RAID drivers.  ["Cheap" in
   this context means <$300 or so.]

 - Don't put swap on RAID1; there have been kernel issues with this in
   the past, and I'm not certain how useful it is in the first place.
   Just put a swap partition in the same place on each drive, and use
   them both (fstab sketch after this list).  If one drive goes down,
   you just have less swap.

 - If you're paranoid about failures, consider buying different drive
   models, or even drives from different manufacturers.  Performance may
   suffer slightly, but the drives should be less likely to fail
   together.  Note, though, that the drives _will_ have different sizes,
   so it's important to build the RAID1 starting with the smaller drive
   (wasting space on the larger, of course, but that's fine).

 - Consider LVM.  It may be easier to have one giant RAID1 partition with
   LVM on top of it than 5 small RAID1 partitions (sketch after this
   list).  I'm in the latter situation, and I'm wondering if I should
   convert to LVM.  LVM makes the bootable CD even more important.
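
For the duplicate-/boot bullet, the cron job could be as simple as this
(a sketch; /dev/hda1 and /dev/hdb1 are example partitions that must be
identically sized, and dd will cheerfully destroy the wrong target, so
double-check the devices):

  #!/bin/sh
  # /etc/cron.daily/sync-boot (hypothetical name) -- copy the live
  # /boot partition to its twin on the other drive.
  dd if=/dev/hda1 of=/dev/hdb1 bs=1M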
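
For the swap bullet, the /etc/fstab entries would look something like
this (partition numbers made up; equal pri= values tell the kernel to
interleave swap across both drives):

  # swap on both drives, same priority, no RAID involved
  /dev/hda2   none   swap   sw,pri=1   0 0
  /dev/hdb2   none   swap   sw,pri=1   0 0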
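
And for the LVM bullet, the layering is roughly this (a sketch assuming
the LVM tools are installed; the volume group and volume names are made
up):

  pvcreate /dev/md0              # the big RAID1 becomes a physical volume
  vgcreate vg0 /dev/md0          # one volume group on top of it
  lvcreate -L 10G -n home vg0    # carve out a logical volume
  mke2fs -j /dev/vg0/home        # ext3 on the logical volume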

Um, that's all I can think of, offhand.

Later,
scott