[svlug] summary of nvidia/sil soft raid & Linux soft raid

Rick Moen rick at linuxmafia.com
Mon Mar 13 17:56:37 PST 2006


Quoting Ivan Sergio Borgonovo (mail at webthatworks.it):

> OK... different levels of hardness:
> 
> 1) really software RAID, like md
> 2) software RAID with a BIOS that takes care of reconstructing sets
> and writing metadata. After POST, RAID is done by the OS/drivers on
> the CPU.
> 3) hardware-assisted RAID. You still need drivers and most of the job
> is done by the CPU... but there is a small bit of work done by the
> chipset.
> 4) true hardware RAID. All the work is done by the controller; the OS
> just sees one disk.
> I was wondering if the 3rd category really exists.

I can think of a good candidate:  Those Promise FastTrak models equipped
with the Promise PDC20621 ASIC chip get a "hardware assist" for the XOR
calculations, with dedicated RAM for them -- but all the other RAID
overhead still lands on the host CPU.  Garzik's driver is designed to
take advantage of that.
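
For the curious, here's roughly what that XOR engine computes, as a toy
Python sketch (the data blocks and sizes here are invented; a real
controller works on whole sectors in its dedicated RAM, and a fakeraid
design does the same arithmetic on the host CPU):

   # Toy sketch of RAID-5 parity math.  Real hardware XORs whole
   # sectors; fakeraid does this work on the host CPU instead.

   def parity(blocks):
       """XOR equal-length data blocks into one parity block."""
       result = bytearray(len(blocks[0]))
       for block in blocks:
           for i, byte in enumerate(block):
               result[i] ^= byte
       return bytes(result)

   data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "disks"
   p = parity(data)

   # Lose any one block; XORing the survivors with parity rebuilds it.
   assert parity([data[0], data[2], p]) == data[1]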

HighPoint's proprietary "hptmv" driver probably supports similar
functionality in cards bearing the HighPoint HPT601 XOR engine chip.  

All of this is covered, at least to some degree, on my page.

> Maybe the Nvidia or SiI provide at least hardware CRC or XORing... or
> maybe they don't, and everything is done by the CPU.

Both of those operations are reportedly left to the host CPU, in
Nvidia, SiI, and all other fakeraid designs -- except as noted above.

I think Garzik must have gotten tired of being asked exactly that sort
of question, because eventually he snapped and wrote this rather
hilarious FAQ:  http://linux-ata.org/faq-sata-raid.html  (You probably
have to have a rather dry sense of humour, but *I* thought it was funny,
anyway.)


> I can't tell clearly whether SiI or Nvidia are in category 2 or 3.

Category 2.  Also, "reconstructing sets" would be handled primarily by
the host CPU, to the best of my understanding.  All the BIOS does is
handle boot logic and establish stripe sets on blank disks.  All the
work involved in actual disk writes is CPU-directed.
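
To make "CPU-directed" concrete:  once the BIOS has recorded the stripe
geometry in its metadata, every single I/O still has to be translated
from an array-logical address to a per-disk address by the driver, on
the host CPU.  A rough Python sketch of that address math for a
two-disk RAID-0 set (the stripe size and disk count are made up):

   # Rough sketch of the per-I/O address math a fakeraid driver
   # performs on the host CPU for a RAID-0 stripe set.
   # Parameters are invented for illustration.

   STRIPE_SECTORS = 128   # e.g. 64 KiB stripes of 512-byte sectors
   NUM_DISKS = 2

   def map_sector(logical):
       """Map an array-logical sector to (disk index, disk sector)."""
       stripe, offset = divmod(logical, STRIPE_SECTORS)
       disk = stripe % NUM_DISKS
       physical = (stripe // NUM_DISKS) * STRIPE_SECTORS + offset
       return disk, physical

   print(map_sector(0))     # (0, 0)   first stripe, disk 0
   print(map_sector(128))   # (1, 0)   next stripe starts on disk 1
   print(map_sector(300))   # (0, 172) stripe 2 wraps back to disk 0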

> BTW... I think I already wrote it in previous emails... you were
> asking about the Nvidia controller...  Another point it has over SiI
> is that the driver supports queueing.

Sure about that?  The latest libata status report Garzik has put on the
Web (dated 2006-01-26) says:

   NVIDIA
   
   Summary: No TCQ/NCQ. Looks like a PATA controller, but with full SATA
   control including hotplug and PM.

   Update: NVIDIA has provided information (under NDA) that permits
   implementation of NCQ support.

   libata driver status: Beta. 

It's not clear to me what use information provided only under NDA would
be, in coding an open-source driver[1] -- but it's clear that there was
no tagged command queuing _or_ native command queuing as of the date of
that report.  Anyway, onwards:

The SiI data are divided into two categories, because of course there
were the older and newer chipset families -- and it makes quite a
difference which one you have:

   Silicon Image 3112/3114

   Summary: No TCQ/NCQ. Looks like a PATA controller, but with full SATA
   control including hotplug and PM.

   libata driver status: Production, but appears to have issues with newer
   Seagate NCQ drives, and issues with "screaming interrupts."

   Update: The "screaming interrupts" issues appear to be fixed, when
   polling is disabled.

   drivers/ide driver status: Beta?


   Silicon Image 3124

   Summary: Full TCQ/NCQ support, with full SATA control including hotplug
   and PM.

   libata driver status: beta.

   The 3124 is a nice, open design. 

(Cards based on the SiI 3132 use the same really good sata_sil24
driver as the ones based on SiI chip models 3124 and 3124-2.)

Frankly, I'd definitely favour the newer SiI family over Nvidia's stuff,
based on everything I've heard.
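
(For anyone wondering why the NCQ difference matters:  a drive with a
command queue can service pending requests in seek order instead of
arrival order, cutting head travel.  A toy Python illustration of the
idea, with invented track numbers -- real NCQ also accounts for
rotational position:)

   # Toy illustration of NCQ's benefit: servicing a queue of requests
   # in (greedy) nearest-first order instead of arrival order.

   requests = [95, 3, 88, 10, 90]   # pending requests, by track
   head = 50                        # current head position

   def seek_cost(order, start):
       """Total head travel to service requests in the given order."""
       cost, pos = 0, start
       for track in order:
           cost += abs(track - pos)
           pos = track
       return cost

   # Greedy shortest-seek-first, roughly what a queueing drive can do:
   pending, pos, reordered = list(requests), head, []
   while pending:
       nearest = min(pending, key=lambda t: abs(t - pos))
       pending.remove(nearest)
       reordered.append(nearest)
       pos = nearest

   print(seek_cost(requests, head))    # FIFO: 380 tracks of travel
   print(seek_cost(reordered, head))   # queued: 137 tracks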

[1] He might mean an NDA with an expiration date aligned with the
intended driver release date.  The XFree86/X.org people have been known
to negotiate such things.



