View Full Version : Mac Server ?

09-19-2001, 05:23 PM
I am going to put in a new file server and RIP. I would love to replace the NT servers we are currently using with Mac servers, but I am having a lot of trouble finding a vendor who will spec a high-end server solution around a Mac. I suppose I could buckle under and go with another NT station, but I would really like to keep this whole solution in the Mac camp.

Has anyone built a large Mac OS file server? I need about 500GB of RAID, and I need tremendous speed over the network to the Macs. My vendors are pushing me toward an NT Fibre Channel solution. How much faster is Fibre Channel than Ultra160 SCSI served over gigabit copper? Has anyone done Fibre Channel on the Mac?

The other part of the problem is that I need a RIP that will run on a Mac. Specifically, I am looking to generate 1-bit TIFFs out of the RIP; those files will be ported to an output queue. Harlequin-based RIPs can do this, but they only run on NT.

Thanks for the help gang,

09-19-2001, 09:34 PM
Well - I'm using ASIP right now in our lab. No 500 GB RAID, but several 72 GB arrays. It works well, serving data to our 23 mixed-platform machines.

What I sense, though, is that ASIP is on its way out in favour of Mac OS X Server, which with the Unix kernel should be very nice and stable. Note that all the new machines have Gigabit Ethernet built in, so if you have the cabling infrastructure in place, you should be good to go.

No idea on the Fibre Channel stuff - we have some bioinformatics folks who have that running, but they are all under Unix.


09-19-2001, 10:02 PM
Have you checked into PowerRIP? Check them out here: http://www.powerrip.com/

I use it for several different RIP solutions at the Art Department I support and have had pretty good luck with it. It used to be terrible when it was still owned by Birmy, Inc., but iProof Systems has done a good job. It even runs on Macs!

09-19-2001, 11:32 PM
Not that I have set any up, yet... but

One limit of HFS+/Mac OS (classic?) is that throughput in the Finder is fairly slow. Software like Timbuktu can almost double data transfers. Not sure what Mac OS X or OS X Server can do with HFS+. With X/X Server you can run with UFS, but I haven't heard many good things about this.

10.1 should support some types of RAID, and Ultra160/320 is DAMN fast (on Mac or x86). Gigabit Ethernet is cool, but Fibre Channel (from what I understand) is faster at the 'same bandwidth' - that is, it is more efficient with packet sizes and overhead - but it is also a LOT more expensive than Gigabit Ethernet.

500GB of RAID on a Mac - that should not be an issue.
Get 4x180GB 7200RPM Cudas, or
Get 6x72GB 10K RPM Cheetahs, or
Get 12x36GB 15K RPM Cheetahs http://macgurus.com/infopop/emoticons/icon_biggrin.gif
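As a rough sanity check on those drive options, here is a quick capacity sketch. The numbers are assumptions, not measurements: RAID 0 is taken as striping all disks with no overhead, RAID 5 as giving up one disk's worth of space to parity, and real formatted capacities will come in a bit lower.

```python
# Rough usable-capacity estimates for the drive options above.
# RAID 0 uses every disk; RAID 5 loses one disk's capacity to parity.

def usable_gb(count, size_gb, level):
    """Approximate usable capacity in GB for a simple array."""
    if level == "raid0":
        return count * size_gb
    if level == "raid5":
        return (count - 1) * size_gb
    raise ValueError(f"unknown level: {level}")

options = [(4, 180), (6, 72), (12, 36)]
for count, size in options:
    print(f"{count} x {size} GB: "
          f"RAID 0 = {usable_gb(count, size, 'raid0')} GB, "
          f"RAID 5 = {usable_gb(count, size, 'raid5')} GB")
```

By these figures, only the 4x180GB option clears 500GB at these drive counts (720GB striped, 540GB with RAID-5 parity); the 10K and 15K options trade capacity for spindle speed.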

Not sure if there is any support for RAID-5. I know classic does not have this, but X might. One other thing: I have not seen ANY Mac/PPC systems with redundant hot-swap power supplies. You may be able to get around BOTH of these issues by running load-balanced servers under X. Again, not sure about this, but it should be available soon - if Apple is serious.

On a side note, I (with the admin) will probably set up a Mac server for backup/Retrospect, FTP and virus update software. Remember - if they say a Mac can not do this or that, it is mostly KRAP. Macs are usually better, and often do not have the same issues with viruses, security and hardware.

One Nation, under God; with liberty and justice for all.

09-20-2001, 01:08 PM
Thanks gang,
I definitely want to use Mac OS X Server. I would love feedback on how difficult this would be to implement. I need to serve large graphic files in an all-Mac environment. I am upgrading my network switch to Gigabit. There will be about 20 Macs in all. The primary function is serving files to the Mac workstations; secondary will be its interface to the Mac-based RIP. I have uncovered a few Mac-based RIPs so far and will be doing more research in this area. On a positive note, there do seem to be options out there.

I am really sick of vendors who try to push you into Windows NT-based solutions. So many people seem to be brainwashed into thinking NT is the only option. I believe I will be able to pull this all off with Macs. The real questions now seem to be which RIP is best, what RAID to use (the Gurus may be able to really build something http://www.macgurus.com/ubb/kickass.gif for me), and then what about automated backup. I will need some sort of automated backup library for the file server, capable of backing up gigs of data automatically.

I greatly appreciate all the help.

[This message has been edited by FrozenTundra (edited 20 September 2001).]

09-20-2001, 08:57 PM
Re: automated back-up...

For 500GB you need an AIT library or something like that - maybe DLT. I'm using the VXA system - 66 GB to a tape, and I have 3 drives set up for automated backup through Retrospect. Note that Dantz has a Mac OS X client (beta); I'm using it in our lab now, with a G3-upgraded 7300 as the backup server.
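For sizing a tape library, a quick back-of-envelope helps. This sketch assumes the 66 GB native VXA capacity mentioned above and ignores compression, which would stretch each tape further.

```python
import math

def tapes_needed(data_gb, tape_gb=66):
    """Minimum tapes for one full backup, native capacity, no compression."""
    return math.ceil(data_gb / tape_gb)

# One full backup of a 500 GB file server on 66 GB VXA tapes.
print(tapes_needed(500))
```

A full 500 GB backup would need 8 tapes at native capacity, so a multi-drive setup or a loader matters more than raw tape size for unattended runs.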

09-20-2001, 10:48 PM
You can't really compare Fibre Channel and Gigabit Ethernet. GbE is a local area network, whereas FC is a dedicated storage area network. With GbE you will get limited speed because of the nature of Ethernet - it might be 200Mbit/s (25MB/s) at best.

With Fibre Channel, you should be directly accessing the storage media, and it should be much faster with fewer interruptions. Don't hold me to it, because I haven't built one, but I have a little knowledge of SANs. It really depends how fast you want to go.
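To put those bandwidth numbers in perspective, here is a rough transfer-time comparison. Both throughput figures are assumptions, not measurements: ~25 MB/s sustained for GbE as estimated above, and ~90 MB/s for 1 Gb Fibre Channel after protocol overhead.

```python
def transfer_seconds(size_mb, throughput_mb_s):
    """Time to move a file at a given sustained throughput."""
    return size_mb / throughput_mb_s

file_mb = 1024  # a 1 GB graphics file
rates = [("GbE, ~25 MB/s sustained", 25),
         ("Fibre Channel, ~90 MB/s assumed", 90)]
for name, rate in rates:
    print(f"{name}: {transfer_seconds(file_mb, rate):.0f} s per 1 GB file")
```

Under these assumptions a 1 GB file takes about 41 seconds over GbE versus about 11 over FC; whether that gap is worth the cost depends on how many files actually cross the wire each day.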

I have implemented an Ethernet-based network tape backup on a Mac, but that requires much lower speeds.

.... Taffy.C

09-21-2001, 09:39 AM
I have scheduled a trip to a company that makes Fibre Channel SAN solutions. They also have high-end RAIDs and everything else that a graphics or video company needs to build a complete system. The main reason for the trip is to see a SAN and test its speed against Gigabit Ethernet to a server with a RAID. Then there is the whole cost-analysis side of this. SAN is not cheap!!!

I really need to consider how my production staff works. Suppose someone opens a few very large files from a server, works on them locally with a x15 internal RAID, saves several iterations of a job throughout the day (figure an average of 24 one-gig files), and finally, upon completion, saves 2 completed files back over the network to the server. Would that be faster than doing all the work over the SAN? If not, how much faster is the SAN, and at what cost?
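That workflow can be roughed out numerically. Every figure here is an assumption built from the scenario above: say 3 GB of opens plus the 2 final saves cross the wire in the local-work model, while the SAN model also carries all 24 iteration saves; the 25 MB/s GbE and 90 MB/s SAN rates are the same guesses used earlier in the thread.

```python
def total_minutes(gb_moved, throughput_mb_s):
    """Minutes spent moving a day's worth of data at a sustained rate."""
    return gb_moved * 1024 / throughput_mb_s / 60

# Local-work model: a few opens (~3 GB) plus 2 final saves go over GbE;
# the 24 iteration saves stay on the internal RAID.
local_model_gb = 3 + 2
# SAN model: the same opens plus all 24 iteration saves and 2 final
# saves travel over the SAN.
san_model_gb = 3 + 24 + 2

print(f"GbE + local RAID: {total_minutes(local_model_gb, 25):.1f} min/day")
print(f"SAN at ~90 MB/s:  {total_minutes(san_model_gb, 90):.1f} min/day")
```

The interesting result of this sketch is that working locally and saving twice can keep total wire time in the same ballpark as the SAN, because the SAN's extra speed is spent carrying 6x the data; the real argument for the SAN would be per-save latency, not daily totals.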

Think of how much I can put into the workstations and the server if I don't do a SAN.