View Full Version : Hardware RAID

02-11-2001, 01:26 AM
Ok, figure someone is gonna ask this eventually. And I'm certainly curious. How does hardware-based IDE RAID compare to software-based? Software-based doesn't increase things much beyond the normal drive speeds.

02-11-2001, 01:00 PM

Should get you started. The first links are about IDE HW RAIDs. The last one is a great debate: IDE vs SCSI.

02-11-2001, 01:11 PM
Have you seen this? http://www.barefeats.com/hard13.htm It has some interesting results comparing:

The Ultra ATA controller:
UltraTek/66 Includes two cables and SoftRAID. (Courtesy of SmartDisk/VST)

AHARD RAID 66 (Courtesy of Acard), which turns any pair of IDE drives formatted by Apple's driver into a striped array with the flip of a switch and a reboot. Switch settings can also be used for mirroring or standard HFS. (FLASH: Since posting this page, Sonnet Technologies has announced they will offer this product under a joint venture with Acard. They call it the Tempo RAID66 and offer in-depth documentation and an installation guide.)
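For anyone unclear on what "turning a pair of drives into a striped array" actually does: consecutive chunks of the logical volume alternate between the drives, which is where the speedup comes from. A minimal sketch of the address mapping (Python, function name hypothetical, not how any of these cards is actually implemented):

```python
def locate(logical_block, stripe_blocks, num_drives=2):
    """Map a logical block number on a striped array to
    (drive index, block number on that drive)."""
    stripe = logical_block // stripe_blocks   # which stripe unit overall
    offset = logical_block % stripe_blocks    # position inside that unit
    drive = stripe % num_drives               # units alternate across drives
    block_on_drive = (stripe // num_drives) * stripe_blocks + offset
    return drive, block_on_drive

# With 256-block (128KB) stripe units on two drives, blocks 0-255 land on
# drive 0, blocks 256-511 on drive 1, blocks 512-767 back on drive 0, etc.
print(locate(300, 256))  # → (1, 44)
```

Because a large sequential read touches both drives in turn, both spindles contribute to throughput.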


02-11-2001, 01:59 PM
saw it.

the $200 Tempo is the same thing as the ACard, or so we've been told. They are listed at the bottom of the IDE page (http://www.macgurus.com/shoppingcart/showrampage.cgi?idepagesofheck.html).

02-11-2001, 03:29 PM
Interesting. I use a Beige G3 MT. I have two 60GB DiamondMax Plus drives and a VST UltraTek card. I made them into a RAID with SoftRAID, and I've had no problems with stability. It all plugged in with no trouble. I always keep a separate drive on the original ATA bus (the stock 6GB one right now) for backups every 2 or 3 days, because of course I know something will happen to the RAID eventually, whether IDE or SCSI.

Couple of questions I can't find many answers to:

1. I've heard that only one ATA drive on a bus can send data while the other drive has to wait. Is this true? If so, is this a problem with the controller, or something inherent in the interface itself that can't be overcome?

2. If it is just a controller problem, are there any cards or upgrades that fix it?

3. I usually spin down my drives for 8-9 hours each night while I'm asleep. I'm assuming this extends the life of a drive, since it's not constantly spinning, but I'm looking for verification. Any data on this?

02-11-2001, 03:50 PM
1. Inherent to the IDE interface protocols. In essence, one of the myriad reasons it is cheaper. The interface and controller arrangement are not set up to handle simultaneous I/O tasking the way SCSI is.
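A back-of-the-envelope way to see why this matters for a two-drive stripe (a deliberately simplified model that ignores seek overlap and caching; the 30MB/s figure is just an assumed per-drive rate):

```python
def transfer_time(mb, drive_rate_mb_s=30, drives=2, shared_bus=True):
    """Rough time (seconds) to read `mb` split evenly across `drives`.
    On a shared IDE channel the transfers serialize; on independent
    channels (or SCSI) they can proceed concurrently."""
    per_drive = mb / drives / drive_rate_mb_s
    return per_drive * drives if shared_bus else per_drive

print(transfer_time(100, shared_bus=True))   # both drives on one channel
print(transfer_time(100, shared_bus=False))  # each drive on its own channel
```

In this toy model the shared channel takes twice as long, which is why IDE RAID cards put each drive on its own channel.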

2. Never heard of any way around it. That is the point of it being an interface limitation: it can't be overcome. I've heard rumors of the ATA protocols changing to try to eliminate this. It would be a great thing if we could get Ultra160 RAID performance out of IDE drives. Now, if we could only do something about most manufacturers offering shorter warranties on IDE drives.

3. Check out the power consumption figures for your drives. I have always held the notion that things wear less if they are kept at a constant state, which is why a lot of us never shut down our machines at night. Temperature and voltage surges (and dips) are a source of accelerated wear. Same reason an engine lasts far longer if it is kept running rather than being repeatedly stopped and restarted.

Me personally, I have enough going on that I love my Macs being on 24/7. It's nice to wake up to some emails, or to have MP3s playing into the night, lulling me to sleep or drowning out the noise. And I won't even mention how many hours of productivity are saved by letting backups or low-level formats run at night. I work on two machines at home, and at least once a week I am working on someone else's drive(s). But your mileage will vary.

02-11-2001, 08:37 PM
Hmm, interesting. I'll just leave the RAID drives running, then. I always keep the backup drive spun down, though, except when doing the backups (it's above the Zip, so it gets pretty hot).

02-11-2001, 08:48 PM
Above all else, do what makes the most sense for what you (in particular) are doing. That sounds like a plan to me.

02-12-2001, 03:35 AM
A little bit off topic, but reading Kaye's link to the Barefeats test, I was surprised by the following:
(Mods please feel free to move this to the raid or scsi forums if it's not appropriate here)


Look at the performance comparison between the ATTO UL3 card and the Adaptec card.
Any comments?

[This message has been edited by Ton (edited 12 February 2001).]

02-12-2001, 01:05 PM

I have pondered that since he posted that page in Sept. 2000. One of the problems is that he does not identify what software he used for the X15 striped RAID tests, either with the ATTO or the Adaptec card. Nor does he mention whether he let that software reconfigure Mode Page Parameters or drivers, what stripe size he used, or whether each drive's onboard write cache was enabled with both cards and on all drives. There are a lot of unmentioned details that can be very important. My first inclination, if I had gotten results like that, would be to question what I had done wrong, because there are so many details with multiple drives. Each drive in the stripe has to be set the same for best results. Maybe he did that, but he does not say.

One very important item I got from someone else who asked Seagate Tech Support about MPP for the X15: why so many cache segments by default, 20? Their answer was that it was optimized for maximum benchmark performance on a PC. I found that pretty close on my Macs, the S900 and PTP. I tried a bunch of different settings and found 23 best. However, most MPP software will not allow that many segments and will reduce the number to 10 or 11, or something even lower. My only way to get back to the default 20, or my preferred 23, for each drive was/is to reboot into OS 7.6.1 (PTP) or 8.0 (S900) and then use HDT 3.0.2 to manually set 20 or 23 cache segments. HDT 4 will not allow that many segments.

Anyway, that page leaves too many unanswered questions to know for certain which card is faster, or whether they really are the same in sustained writes. My 2 cents. k

02-12-2001, 01:23 PM

I agree with you that he's a bit scarce with information about how the tests were done, but I assume both cards were tested under the same circumstances. Your remark about the cache segments puzzles me, though.

Quote from the Softraid manual on Seagate disks:

Cache Segments - Most SCSI drives are preoptimized and never need adjustment with the number of cache segments. Typically, you would see "zero" as a setting, which allows a drive to use its defaults. A low number of cache segments is also good (e.g. 3), if the drive does not accept "0" as a setting. Many Seagate drives are preset to 16 cache segments, which is optimized to Windows, and actually perform relatively poorly on Mac OS systems.

My cache segments are set to 4. I have 10k Cheetahs, not 15's. Should I increase the cache segments?

[This message has been edited by Ton (edited 12 February 2001).]

02-12-2001, 03:14 PM

I think the SoftRAID manual information is dated. If anything, it is based on 1st or 2nd generation 10k Cheetahs. With my ST39204LW 10k Cheetahs, I tested cache segments from very low, 2 or 3, up to 10 or 11, and found 9 cache segments best for me. I tested with EPT 8MB and MB5 Disk and Pub Disk, and leaned slightly towards MB5 Disk, though not by much. With the improved seek/access of these latest 10k Seagates, I did not find that a low number of cache segments improved performance in any of the tests, particularly the supposed improvements in EPT 8MB for pure I/O throughput.

I would agree that 16 cache segments on a 10k Cheetah under Mac OS would be slower, because I found a significant dropoff above 9, at 10 cache segments. My tests with the 10k Cheetahs were on the S900 with a Newer G3-500, a single Miles2, and single, dual, and triple ST39204LW 10k Cheetahs.

YMMV. Just try and see what your results are. k

02-12-2001, 05:11 PM
Right Kaye,
Within the next couple of days I'm gonna fiddle around with the cache segment settings and get back to you. FYI, I've got dual Miles2's, a 500 G3 upgrade, and a 9500 with four ST39204 Cheetahs, which are now capable of about 68MB/s reads and 61.5MB/s writes. I seriously doubt there will be much improvement, but we'll see.

02-12-2001, 05:32 PM
I think Rob Art may have isolated a driver bug or something, as his results were specifically confined to duplicating a 101MB folder consisting of 265 documents. My question is, who told him to do this?

I am more interested in his blanket statement that the 39160N is 20% faster than the UL3D in sustained writes. I sure didn't see it that way the last time I used a 39160N.

Aside from the fact that I lack patience to deal with Adaptec (I can barely stand to deal with ATTO), the problems the later Adaptecs have in vintage PCI Power Macs also put me off.

I think Jorge was going to get an Adaptec board and try to duplicate Rob Art's tests. Not sure where he's at with that.

02-12-2001, 09:17 PM

I agree. There is a lot left unsaid on that page.


My 10k Cheetah ST39204LW tests, as I mentioned, were all on the S900 with a single Miles2. ATTO 8MB:

2x striped Cheetahs SR 65MB/s, SW 40MB/s
3x striped Cheetahs SR 75MB/s, SW 40MB/s

I think you can do better in SR, and obviously, my SW with one Miles2 is at its limit. k

02-13-2001, 04:03 AM
Kaye, you were absolutely right!
Increasing the cache segment setting to 9 raised my sustained reads to about 72000 KB/s and writes to about 64000 KB/s.
Thanks, man, for helping me tweak this stuff!

I just ordered a 4 x X15 Cheetah Gurus RAID along with an ATTO UL3, so would you recommend raising the cache segment setting to 23 with HDT 3/OS 8 for this config too, just as you did on your PTP/S900? (It's going to be hooked up to a new 533 MP.)
Thanks again, Ton

[This message has been edited by Ton (edited 13 February 2001).]

02-13-2001, 12:19 PM
First, just to be sure, which ATTO, UL3D (dual channel) or UL3S (single channel)? k

02-13-2001, 12:35 PM
The dual channel.

02-13-2001, 04:02 PM
Ton, that UL3D with two X15s on each channel in a 533DP is going to be so fast, I don't know. For me, 23 cache segments was best in the PTP and S900, but you may want to do some testing in that new box. One other tidbit: I found a 128KB stripe size, or as SoftRAID puts it, 256 SU (stripe units), worked fastest. 128KB = 256 SU. The default in SoftRAID is 64KB, or 128 SU, as I recall. I don't know why SoftRAID counts in stripe units; all other RAID software gives it in KB. k
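SoftRAID's "stripe units" are 512-byte disk blocks, which is why 128KB works out to 256 SU. A quick conversion sketch (assuming 512-byte blocks; function names are mine, not SoftRAID's):

```python
BLOCK = 512  # bytes per stripe unit (one disk block)

def kb_to_su(kb):
    """Stripe size in KB -> SoftRAID stripe units."""
    return kb * 1024 // BLOCK

def su_to_kb(su):
    """SoftRAID stripe units -> stripe size in KB."""
    return su * BLOCK // 1024

print(kb_to_su(128))  # → 256 SU, Kaye's fastest setting
print(su_to_kb(128))  # → 64 KB, the SoftRAID default
```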

02-13-2001, 04:25 PM
Kaye, as you say, guess we'll have to do some testing when the stuff arrives.
In any event I now know how to take advantage of MPP settings.
Once again, proof of how valuable all the combined knowledge on this forum is.
Will keep you posted, Ton

02-14-2001, 08:01 AM
Kaye, final question: do you have to reformat to change the stripe unit size?
Mine is 128 and I want to try the 256 setting.
Thanks, Ton

02-14-2001, 11:29 AM
Good question, Ton. At the time I was running tests, I had nothing on the drives to lose, and I did initialize each time to be sure I was starting from scratch and accessing the outside of the platters (the first partition, if you partition), so that each test was on a level playing field.

The very first time I partitioned, I set up six partitions, each with a different stripe size, knowing that the inner partitions would take a speed hit compared to an outer partition, just to see if any size or sizes stuck out as faster. 32KB (64 SU), 64KB (128 SU), and 128KB (256 SU) did in SoftRAID. HDT 3.0.2 striped RAID was a little murkier. So then I started over and over with each of those stripe sizes on the outer partition.

I don't think I ever just changed stripe size, except once each in SoftRAID and HDT, just to see if it could be done on an existing but empty partition. My recollection is that it was possible, but it can change the usable size of an already-created partition: when you go from a smaller stripe size to a larger one, sometimes fewer whole stripes fit on the partition. I think it was HDT that complained about that but still let me do it; SoftRAID just let me do it. Once I realized the result of doing that, I never did it again. And I never tested it that way. k
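The usable-size effect Kaye describes follows from the array only addressing whole stripes: as the stripe size grows, fewer complete stripes fit in a fixed partition, and the remainder is wasted. A toy calculation (Python; the partition size is arbitrary, chosen only to make the rounding visible):

```python
def usable_kb(partition_kb, stripe_kb, drives=2):
    """KB actually addressable when `partition_kb` per drive
    is carved into whole stripes of `stripe_kb`."""
    stripes_per_drive = partition_kb // stripe_kb  # partial stripes don't count
    return stripes_per_drive * stripe_kb * drives

part = 10_050  # KB per drive, for illustration only
print(usable_kb(part, 64))   # → 20096 KB usable with 64KB stripes
print(usable_kb(part, 128))  # → 19968 KB usable with 128KB stripes
```

Same partition, smaller usable total with the larger stripe size, which is presumably what HDT was complaining about.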

02-14-2001, 01:45 PM
I think simply initializing will do here. It shouldn't be necessary to do a low-level format, right?

02-14-2001, 01:54 PM
Right, simply initialize, no low level format. k