
Thread: hardware raid controller vs software

  1. #1
    Join Date
    Dec 2000
    Location
    Kampen, OV, Netherlands
    Posts
    4

    Default

    Hello all,

I've read numerous messages in this forum, all of which were of great help. However, I'd like some opinions on the subject in the title.

I want to build a workgroup server with as much speed as possible. The gigabit interfaces on the new G4 are capable of handling a lot of data. I want a storage volume of about 300 - 400 GB, so the obvious choice is to put a bunch of disks in a RAID configuration. I want to use the fastest drives possible, probably 15k RPM ones with an LVD interface in a separate enclosure. This part seems pretty obvious to me.

However, I think that to get the most out of the gigabit interface, I have to offload low-level tasks from the computer so the server software can perform at maximum speed. Because of this, I think a software RAID solution is out of the picture for obvious reasons. I want to use a dedicated SCSI-to-SCSI RAID host controller so the server sees one big volume without the hassle of special drivers or other software components that can cause trouble. This is of extra importance since I haven't decided which OS to use (OS 9 / OS X). I'm even considering a separate PCI gigabit NIC so the SCSI controller and NIC can use DMA to exchange data.
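To make concrete what "low-level tasks" a software RAID pushes onto the host CPU, here is a minimal sketch of the address translation a RAID 0 driver has to perform for every I/O (illustrative Python; the function name and parameters are mine, not from any real driver):

```python
# Minimal sketch of the per-I/O work a software RAID 0 driver does:
# translating a logical block address into (disk, physical block).
# Names and parameters are illustrative, not from any real driver.

def stripe_map(logical_block, num_disks, stripe_blocks):
    """Map a logical block to (disk_index, block_on_disk) for RAID 0."""
    stripe_num, offset = divmod(logical_block, stripe_blocks)
    disk_index = stripe_num % num_disks
    block_on_disk = (stripe_num // num_disks) * stripe_blocks + offset
    return disk_index, block_on_disk

# Example: 4 disks, 16-block stripes.
# Logical blocks 0-15 land on disk 0, 16-31 on disk 1, and so on.
print(stripe_map(0, 4, 16))   # -> (0, 0)
print(stripe_map(16, 4, 16))  # -> (1, 0)
print(stripe_map(70, 4, 16))  # -> (0, 22)
```

A SCSI-to-SCSI controller does exactly this mapping in its own firmware, which is why the host just sees one big disk.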

    Has anyone used or tested a similar storage configuration? Or can someone give me information why I should go for a particular setup and use specific components?

    Thanks, Edgar

  2. #2
    Join Date
    Sep 2000
    Location
    boston
    Posts
    405

    Default

    http://www.macgurus.com/ubb/Forum1/HTML/000060.html

    Magician's post should answer your question.

    j

  3. #3
    Join Date
    Dec 2000
    Location
    Kampen, OV, Netherlands
    Posts
    4

    Default

Thanks for the tip. However, I think the RAID controllers mentioned are PCI ones. I too believe those solutions are error prone. I want to use a SCSI-to-SCSI host controller, like for instance MicroNet's Genesis One, MacAlly's Arena, or the old Mylex DAC960S (forgive me if I am wrong on this one). These solutions present themselves to the computer's SCSI controller as one big disk. I don't see why this approach should cause the problems mentioned in your link. (I know, I'm curious :-)

I'm particularly interested because I want to build a fileserver that will outrun a USD 150k Compaq NT server solution in both speed and price (and I mean cheaper, folks.. ;-)

    [This message has been edited by Edgar (edited 19 December 2000).]

  4. #4
    Join Date
    Jun 2000
    Posts
    769

    Default

    Hey Edgar,

I'm not sure why you think connecting a SCSI abstraction backplane to a PCI SCSI host controller is all that different from connecting a software array to the same card. You cannot take the PCI bus out of the equation, because it is the only bus that can accept a SCSI host adapter. Sure, the capabilities of RAID 0+1 and beyond can only be implemented in hardware right now, but that is not the reason you are giving for preferring a hardware solution. The Gurus do not endorse any hardware schemes at this time because of firmware problems, and they are in the business of building arrays. I would simply heed their advice. With current SCSI controllers and the command structure of SCSI-3, there is very little load on the processor. Your server would still be quite multifunctional while running a large array.

    Regards

  5. #5
    Join Date
    May 2000
    Location
    wherever I hang my hat
    Posts
    3,575

    Default

    the RAID controllers I refer to include hardware abstraction backplanes--apparently what you consider "SCSI-to-SCSI."

  6. #6
    Join Date
    Jul 2000
    Location
    Walnut Creek,Ca.US
    Posts
    172

    Default

    Hi

I suspect that Adaptec's current neutered standard of U3 is one of the reasons for the current problems with RAID cards.

ATTO's U3 works fine. Another forum, storagereview.com, has endless threads on Adaptec POS 2100s etc. that are rated as Ultra160 cards but run at UW speeds (less than 40 MB/sec).

Currently I am looking for a decent PCI RAID card that transfers data at a reasonable rate and is stable.

I don't know of ANY in the $500-1000 range.

I have not yet seen ONE decent RAID controller at a halfway reasonable price mentioned in the forum.

I think this may be because the major players are waiting to develop for real U3, and not Adaptec's market-fraud 160.

    So far, most of the cards Adaptec has done have had problems.

    The cards that work, Mylex, etc. are usually LVD limited.

Finding a hardware RAID card on the PC side that does what you want it to do is a search for a needle in a haystack, unless you don't mind paying $1000 plus, and/or buying a preconfigured, big-bucks backplane setup.

Another possibility is the ACARD for the Mac. They are cheap, IDE, bootable, and may provide an alternative.

    Good luck Edgar, and have a Merry Christmas all.

    gs

  7. #7
    Join Date
    Dec 2000
    Location
    Kampen, OV, Netherlands
    Posts
    4

    Default

    Ok, thanks for your reply.

I'm convinced too that the ATTOs work fine. One will be my first choice when I build my server. But what to use from that point on? I still do not want to use a software RAID, for a number of reasons: one, I haven't decided on the OS yet (OS 9, X Server, or maybe Linux (very slight chance)), and I don't want to be dependent on a software solution when something goes wrong. I've used SoftRAID and I think it does a marvelous job, but it's not my choice in this setup.

I want to use a separate disk enclosure with a built-in controller. Somehow nobody can give me a real reason why this apparently so-called "SCSI abstraction backplane" will not work or isn't stable, except for some pointers to (rather general) opinions (no pun). Maybe it's true for homebrew solutions (again, no pun), but I really have a hard time believing that I cannot use a RAID array of the kind that's used with, for instance, a Compaq NT server.

    It would be nice if someone had already tried a configuration like this. Of course all data on this is welcome.

  8. #8
    Join Date
    Sep 2000
    Location
    Vienna
    Posts
    278

    Default

    quote:
Somehow nobody can give me a real reason why this apparently so-called "SCSI abstraction backplane" will not work or isn't stable, except for some pointers to (rather general) opinions (no pun). Maybe it's true for homebrew solutions (again, no pun), but I really have a hard time believing that I cannot use a RAID array of the kind that's used with, for instance, a Compaq NT server.

I'll take a stab at this. You are looking for some serious server-side storage. This may not be available on the Mac side, as mostly NT or Sun or SGI machines would use something like this. So the 'drivers' or 'firmware' for such cards may not work on the Mac - or they may not work any better than SoftRAID.

I would think you could get some serious speed and redundancy by using TWO striped RAIDs with SoftRAID. Get the 73 GB 10k Cheetah drives, so you would only need 4 to 8 to get 300 - 400 GB. You could run two of these RAIDs off a dual-channel U160 ATTO card (or you could get two cards). Even better, you could get two gigabit-ethernet Macs, put one RAID on each, and then back up via the GigE link (1,000 Mbps = 125 MB/s).
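The arithmetic behind those figures, sketched out (using the numbers above; drive counts round up, and this ignores any redundancy overhead):

```python
import math

drive_gb = 73  # 73 GB Cheetah drives, as above

# Drives needed for 300-400 GB of raw capacity (no redundancy)
for target_gb in (300, 400):
    drives = math.ceil(target_gb / drive_gb)
    print(f"{target_gb} GB needs {drives} drives")  # 5 and 6 drives

# Theoretical Gigabit Ethernet line rate: 1,000 Mbps / 8 bits per byte
print(1000 / 8, "MB/s")  # -> 125.0 MB/s
```

Mirroring the two striped arrays against each other is what pushes the count toward the top of the 4-to-8 range.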

This should work on CLASSIC Mac OS. I would imagine ATTO will have to support their cards under X because they are so expensive and fast.

I wonder what kinds of solutions Mac OS X Server has?

    I would doubt that LinuxPPC has the support for such a system/setup. Maybe BlackLabs Linux or some other 'business end' Linux has this?

    Maybe you are a Mac-PPC PIONEER?
    Seagate should be releasing a 180GB drive soon...

    Not sure that helped much.... but that's my $0.02 !

    ------------------
    Have fun storming the castle!

  9. #9
    Join Date
    May 2000
    Location
    wherever I hang my hat
    Posts
    3,575

    Default

    we have tested several. All exhibit firmware problems that make them problematic on the Mac. Examples include failures during extended data transfers, refusal to mount partitions on the desktop (except manually), and an inability to use all installed drives.

    we are considering MicroNet arrays, for those enterprise customers who aren't spending their own money (they are insanely expensive), but cannot endorse, at this time, any other array using a hardware abstraction backplane.

    you are welcome to bitch all you like, here, but we have no further information to share at this time.

  10. #10
    Join Date
    Dec 2000
    Location
    Kampen, OV, Netherlands
    Posts
    4

    Default

OK, it seems I had to stir a little harder to get the answers I need. ;-)

I'll start testing very soon. Any interesting data will be posted here. There is a slight chance I can get my hands on a hardware RAID (don't know the manuf. yet) before it is put to use in an NT server.

I want to thank everybody who has taken the time to answer me.

  11. #11
    Join Date
    Dec 2000
    Location
    Lynwood, CA USA
    Posts
    2

    Default

I have attached a MicroNet DD7000 hardware RAID to a Mac server running AppleShare. It works fine, but the environment only has about half a dozen Macs hitting the server via 100BaseT. The customer hasn't seen any major problems.

There was a comment in an earlier post about Mac-to-Mac Gigabit transfers. I doubt you'd see more than 20 MB/sec transfer rates between two Mac servers over GigE.
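As a back-of-envelope check on that estimate (the 20 MB/sec is the guess above, not a measurement of any specific setup):

```python
line_rate = 1000 / 8   # GigE theoretical maximum: 125 MB/s
observed = 20          # the estimated real-world rate above
print(f"{observed / line_rate:.0%} of line rate")  # -> 16% of line rate
```

Protocol overhead, disk speed, and the server's TCP/IP stack all eat into the theoretical 125 MB/s, so utilization this low would not be surprising on hardware of this era.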

    - Allen

  12. #12
    Join Date
    May 2000
    Location
    wherever I hang my hat
    Posts
    3,575

    Default

    hi, Allen. Thanks for posting!

    how much capacity does this array have, and how much did it cost, ball-park?

    just curious! Is the customer using RAID 5?

    thanks!

