
Thread: PCI bus throughput

  1. #1
    Join Date
    Jan 2001
    Location
    Chicago, IL 60610
    Posts
    339

    Default PCI bus

    I am thinking of adding a Magma chassis (4-slot, PCI-PCI):

    http://www.mobl.com/expansion/pci/

    so as to free up some PCI slots and in particular, use an external SATA array.

    Currently, I'm running a MotU PCI-424 in slot 2 (audio interface card), a Universal Audio UAD-1 in slot 3 (audio effects processing card), and a PowerCore in slot 4 (another audio effects processing card).

    The AGP slot is, I believe, called slot 1. The PowerCore, which is dual 33/66 MHz, is by itself on the fastest bus. The UAD and PCI-424 are both 33MHz cards.

    Here are my questions; I would greatly appreciate any input, either positive or negative.

    I'd like to add an external SATA array to avoid the whole FireWire/AMD bridge-chip boondoggle that appears to trouble systems running the PowerCore and UAD-1. I was thinking that it might be a good time to calculate/estimate the amount of throughput/bandwidth that the various cards, and thus the buses, would be likely to need.

    For instance, I guess Digital Performer runs 24-bit/48kHz audio uncompressed so that 1 track is:

    (24 bits/sample)*(48k samples/sec) =
    (3 bytes/sample)*(48000 samples/sec) = 144,000 bytes/sec

    Question #1: What is the bandwidth of a PCI slot? That is, how much throughput can be handled by a single, let's say 33MHz, slot?

    Question #2: Can one estimate the maximum data being processed by a 33MHz card like the PCI-424 or UAD-1? Would/could a 33MHz slot on a G5 get flooded if the cards were sufficiently bandwidth-hungry? That is, if several cards were housed in a break-out chassis, is there a possibility that the throughput used by the cards might exceed the capacity of the slot/bus on the G5?

    Question #3: Assuming the use of a Firmtek Seritek/1SE2 SATA card for external drives (which is also dual 33/66MHz), would you put that card on the 133MHz bus and "slow down" the PowerCore, or would you leave the PowerCore to run at its full 66MHz and put the SATA card on the shared bus with the Magma chassis (which would hold the PCI-424 and UAD-1s)? This is arguably more of an audio question (in fact, I've got some queries out regarding the throughput of the PowerCore), but, perhaps more to the point, do you think the Seritek would function well on the shared bus with the Magma chassis?

    Just to make a relatively high estimate, let's say the goal is to be able to handle 40 tracks of audio. Using the above single-track estimate that would translate into:

    (40 tracks)*(144,000 bytes/track-sec) = 5,760,000 bytes/sec

    This of course is perfect-world arithmetic.
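
    For anyone who wants to rerun this arithmetic with a different bit depth, sample rate, or track count, here is a quick Python sketch of the same perfect-world numbers (variable names are mine):

    Code:
    # Back-of-the-envelope bandwidth for uncompressed multitrack audio.
    # Assumes one uncompressed stream per track and zero driver/protocol overhead.
    BIT_DEPTH = 24        # bits per sample
    SAMPLE_RATE = 48_000  # samples per second
    TRACKS = 40

    bytes_per_track = (BIT_DEPTH // 8) * SAMPLE_RATE  # 3 bytes * 48,000 = 144,000 bytes/sec
    total_bytes = bytes_per_track * TRACKS            # 5,760,000 bytes/sec

    print(f"per track: {bytes_per_track:,} bytes/sec")
    print(f"{TRACKS} tracks: {total_bytes:,} bytes/sec ({total_bytes / 1e6:.2f} MB/sec)")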

    What think ye?

    Any input is greatly appreciated!

  2. #2
    Join Date
    Nov 2004
    Location
    Germany
    Posts
    2,352

    Post

    Hello Revi,

    To answer your first question, here are some specs for the PCI and PCI-X bus systems.

    PCI = Peripheral Component Interconnect.

    PCI 32-bit, 33 MHz = 132 MB/s (theoretical throughput)
    PCI 64-bit, 33 MHz = 264 MB/s (theoretical throughput)
    PCI 32-bit, 66 MHz = 264 MB/s (theoretical throughput)
    PCI 64-bit, 66 MHz = 528 MB/s (theoretical throughput)

    PCI-X 64-bit, 133 MHz = 1064 MB/s (theoretical throughput)
    PCI-X 2.0 64-bit, 266 MHz = 2128 MB/s (theoretical throughput)
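
    All of those peaks fall out of the same formula: bus width in bytes times clock rate in MHz. A small Python sketch that reproduces the table (note the real clocks are 33.33/66.66/133.33 MHz, so spec sheets sometimes quote slightly higher figures such as 133, 533, and 1066 MB/s):

    Code:
    # Theoretical PCI/PCI-X peak = bus width (bytes) x clock (MHz), in MB/s.
    buses = [
        ("PCI 32-bit, 33 MHz",        32,  33),
        ("PCI 64-bit, 33 MHz",        64,  33),
        ("PCI 32-bit, 66 MHz",        32,  66),
        ("PCI 64-bit, 66 MHz",        64,  66),
        ("PCI-X 64-bit, 133 MHz",     64, 133),
        ("PCI-X 2.0 64-bit, 266 MHz", 64, 266),
    ]
    for name, width_bits, clock_mhz in buses:
        peak_mb = (width_bits // 8) * clock_mhz  # one transfer per clock
        print(f"{name}: {peak_mb} MB/s theoretical")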

    Regards

    Nicolas

  3. #3
    Join Date
    Jan 2001
    Location
    Chicago, IL 60610
    Posts
    339

    Default Huzzah! Just what I was looking for!

    Nicolas,

    Thanks!

    I'm guessing that most cards, even if they are compatible with PCI-X slots, are probably still plain PCI cards. Does that make sense?

    It sure would seem, at least naively, that it would take a lot of processing to flood a PCI bus!

    Thanks again for the info!

    Iver

  4. #4
    Join Date
    Aug 2001
    Location
    Grangeville, ID USA
    Posts
    9,142

    Default

    The limits are lower than that usually.

    230 MB/sec for the entire PCI bridge on a Mirrored Drive Doors G4.

    220 MB/sec for all the Uni-N-based G4s.

    600 MB/sec looks to be a good round maximum for the G5.

    These are the total throughput figures for the entire PCI bus. Each slot and ALL the devices soldered to the motherboard south of the PCI bridge share that total (USB, FW, audio, graphics, etc.).

    A 33 MHz slot has the theoretical maximums that Nicolas listed, but most of the time you will only get 60 or 70% of that at best when running a single card.
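
    Putting rough numbers on that rule of thumb (a quick sketch; the 132 MB/sec peak is the 32-bit/33 MHz figure from above):

    Code:
    # Realistic single-card throughput on a 33 MHz PCI slot, assuming the
    # 60-70%-of-theoretical rule of thumb holds.
    THEORETICAL_PEAK_MB = 132  # 32-bit / 33 MHz PCI

    low = 0.60 * THEORETICAL_PEAK_MB
    high = 0.70 * THEORETICAL_PEAK_MB
    print(f"expect roughly {low:.0f}-{high:.0f} MB/sec from a single card")  # ~79-92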

    I had not heard that a Magma expander would work in a G5. In fact I thought they wouldn't. If you find out differently let us all know.

    Rick
    molṑn labe'
    "I am a mortal enemy to arbitrary government and unlimited power. I am naturally very jealous for the rights and liberties of my country, and the least encroachment of those invaluable privileges is apt to make my blood boil."
--Ben Franklin

  5. #5
    Join Date
    Aug 2000
    Location
    Concord, CA
    Posts
    7,056

    Default

    I only looked at the MAGMA 64-bit/66MHz 7-Slot PCI-to-PCI Expansion System, and it is listed as G5-compatible. See the system requirements:
    http://www.mobl.com/expansion/pci/7s...uirements.html
    Just a tad 'spensive tho. k

  6. #6
    Join Date
    Nov 2004
    Location
    Germany
    Posts
    2,352

    Post

    Hello Revi,

    The main design difference between PCI and PCI-X is that PCI supports 5V cards, while PCI-X supports only 3.3V cards. If you have a card that can work with both voltages, it could work in a PCI-X slot, but I would always check that with the manufacturer's tech support staff. PCI-X is backward compatible with 3.3V PCI; PCIe is not. The complete PCI specifications are published by the PCI-SIG.

    Some cards, like simple USB or audio cards, do not need the entire throughput of PCI-X, so their manufacturers don't need to design an entirely new card, or a bridge chip as with PCIe (PCI Express).


    Hello Rick,

    I thought the MDD had the KeyLargo IC for IDE and related I/O, and the U2 for PCI, AGP, and so on, with each of them on its own bus.

    I know the G5 has two controllers, one for the 133MHz slot and one for the two 100MHz slots.

    I am about to install a Radeon Mac Edition (32MB) in my MDD, instead of using the 9000 Pro's second port, but if the entire PCI throughput is 230 MB/s, the performance of the two UL3Ds would degrade.

    Or would it not be that bad?

    Regards

    Nicolas
    Last edited by Nicolas; 02-08-2005 at 06:42 AM.

  7. #7
    Join Date
    Aug 2001
    Location
    Grangeville, ID USA
    Posts
    9,142

    Default

    Max you can ever get through an MDD with any number of SCSI cards is 230 MB/sec. At least that has been our experience. Love to be proved wrong.

    All devices on the south end go through the same choke point, the PCI-PCI bridge. The G5 upped the bandwidth considerably, but the two separate ASICs running the two separate PCI-X buses still come together at the PCI-PCI bridge, and that is where it all has to share. That includes onboard SATA, EIDE, USB, FireWire, audio... everything. It makes no difference whether a device is soldered to the motherboard or plugged into a PCI slot; it all has to go through the PCI bridge. The G5 placed the AGP graphics port on its own access point to the memory controller, which is a huge benefit.

    molṑn labe'
    "I am a mortal enemy to arbitrary government and unlimited power. I am naturally very jealous for the rights and liberties of my country, and the least encroachment of those invaluable privileges is apt to make my blood boil."
--Ben Franklin

  8. #8
    Join Date
    Nov 2004
    Location
    Germany
    Posts
    2,352

    Post

    Hello Rick,

    Thank you for this info!

    I have been using Linux on my MDD for a week now, and the DualHead feature does not work in Linux.

    Would it be better, or even worse, performance-wise to use the Radeon Mac Edition for the second screen in OS X?

    Regards

    Nicolas

  9. #9
    Join Date
    Jan 2001
    Location
    Chicago, IL 60610
    Posts
    339

    Default

    Quote Originally Posted by ricks
    The limits are lower than that usually.

    230 MB/sec for the entire PCI bridge on a Mirrored Drive Doors G4.
    220 MB/sec for all the Uni-N-based G4s.
    600 MB/sec looks to be a good round maximum for the G5.

    These are the total throughput figures for the entire PCI bus. Each slot and ALL the devices soldered to the motherboard south of the PCI bridge share that total (USB, FW, audio, graphics, etc.).

    A 33 MHz slot has the theoretical maximums that Nicolas listed, but most of the time you will only get 60 or 70% of that at best when running a single card.
    Dig, that makes sense. I'm waiting on info from Universal Audio and TC Electronic as to what they think their cards would need per audio track.

    Now, by "south of the PCI bridge", that would mean _per_ bus though, right? (i.e., slots 2 and 3 would have the stated value, and slot 4 would also have that same amount of bandwidth, independent of 2 and 3)

    If my estimate of about 5.76 MB/sec for 40 tracks of 24-bit/48kHz audio is accurate, it would seem that there's headroom to spare. Not that I'm complaining either!
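
    Spelling that headroom out (a sketch using the numbers from this thread; the derating is Rick's 60-70% rule of thumb):

    Code:
    # Compare 40-track audio demand to a single 33 MHz PCI slot's realistic ceiling.
    demand_mb = 5.76                    # MB/sec for 40 tracks of 24-bit/48kHz audio
    slot_peak_mb = 132                  # theoretical 32-bit/33 MHz peak
    realistic_mb = slot_peak_mb * 0.60  # pessimistic end of the 60-70% rule

    print(f"demand:   {demand_mb} MB/sec")
    print(f"ceiling: ~{realistic_mb:.0f} MB/sec realistic")
    print(f"headroom: ~{realistic_mb / demand_mb:.0f}x")  # roughly 14x to spare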

    Quote Originally Posted by ricks
    I had not heard that a Magma expander would work in a G5. In fact I thought they wouldn't. If you find out differently let us all know.
    Apparently they will... which is certainly good given the shortage of PCI slots on the G5. Here's what I received from Mobility/Magma:

    "Thank you for your interest in Magma products. Our 4-slot PCI to PCI expansion system should work nicely with the configuration that you have listed below. We have worked closely with UAD, MOTU and TC Powercore for some time now. We have loaned each of these companies our products for testing and development and have had no problems running PCI to PCI. The only card(s) that you are planning to use that I am not familiar with is the SATA RAID cards that you are planning to use. Since you are concerned with bandwidth of the TC Powercore (runs at 133MHz). I would suggest the following set-up:

    Installed inside your G5:

    TC Powercore
    MAGMA PCI host interface card (that connects G5 to 4-slot)

    Installed inside the 4-slot (PCI4DR):

    MOTU
    UAD
    SATA RAID cards

    I am not sure if your RAID cards have hard drives, but the 4-slot also can fit up to (4) 1" disk drives OR (2) 1" disk drives and (1) 5.25" removable disk drive."

    I'd probably have the SATA card in its own slot (e.g., slot 2 or 3 with the Magma in the other). Still have to figure out whether it would be better to put the PowerCore in slot 4 or the SATA card in slot 4.

    They sent me a quote too...roughly 1.4 kilobux (kaye's right, they ain't cheap).

    If these numbers are correct (i.e., of bandwidth and throughput so that adding a SATA card and a Magma make sense), I reckon I'm going to start saving nickels.

    Cripes, I'll never own a car (heh, other than the ones in my home office). ;-)

    Thanks for all this info, this has been very informative. I'll let you know what I hear from Universal Audio and TC Electronic.

  10. #10
    Join Date
    Sep 2004
    Location
    Petaluma
    Posts
    13

    Default Slots different in any substantial way?

    Rick,

    I have a question about this. The diagram was great. I just got a new video card, the Nvidia 6800 GT DDL. I run twin monitors and this is supposed to give me 256 MB for both... vs. the 9800 giving me 64 MB. The system is a dual 2 GHz G5 with 8 GB of RAM.

    However... it takes up two slots, presumably 1 and 2. I have two Seritek 1EV4 4-channel external cards and I am wondering if there is a problem with slot 3. Some have told me that there is a difference in bandwidth between the two... you say in your post below that it is all bottlenecked at the bridge, so maybe this is not an issue.

    I guess my question is: is there any difference in bandwidth between slots 3 and 4 (or 2 and 3), and should I bother to return this video card and get another one that lets me use slot 2 for my SATA card?

    TIA
    Lenny

    Quote Originally Posted by ricks
    Max you can ever get through a MDD with any number of SCSI cards is 230 MB/sec. At least that has been our experience. Love to be proved wrong.
    All devices on the south end go through the same choke point, the PCI-PCI bridge. The G5 upped the bandwidth considerably, but the two separate ASICs running the two separate PCI-X buses still come together at the PCI-PCI bridge, and that is where it all has to share. That includes onboard SATA, EIDE, USB, FireWire, audio... everything. It makes no difference whether a device is soldered to the motherboard or plugged into a PCI slot; it all has to go through the PCI bridge. The G5 placed the AGP graphics port on its own access point to the memory controller, which is a huge benefit.

  11. #11
    Join Date
    Aug 2001
    Location
    Grangeville, ID USA
    Posts
    9,142

    Default

    A bunch of new technologies regulate the total maximum throughput of the G5 PCI bridge. They call it HyperTransport Technology, for some reason, and it enables much higher throughput. Here are Apple's Developer Notes:
    The HyperTransport bus between the U3 and the K2 IC is 8 bits wide, supporting a total of 1.6 GBps bidirectional throughput. The HyperTransport bus between the U3H IC and the PCI-X bridge is 16 bits wide, supporting total of 4.8 GBps bidirectional throughput. Between the PCI-X bridge and the K2 IC, the bus width is 8 bits, supporting total of 1.6 GBps bidirectional throughput.
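
    Those figures are consistent with HyperTransport's usual accounting: link width in bytes, times transfer rate, times two directions. A quick sketch; the MT/s values below are back-computed from Apple's totals, not quoted from the note:

    Code:
    # HyperTransport bandwidth = width (bytes) x transfer rate (MT/s) x 2 directions.
    # The MT/s values are inferred from Apple's quoted totals, not stated in the note.
    links = [
        ("U3 <-> K2, 8-bit",             1,  800),  # 1.6 GBps bidirectional
        ("U3H <-> PCI-X bridge, 16-bit", 2, 1200),  # 4.8 GBps bidirectional
        ("PCI-X bridge <-> K2, 8-bit",   1,  800),  # 1.6 GBps bidirectional
    ]
    for name, width_bytes, mt_s in links:
        gbps = width_bytes * mt_s * 2 / 1000
        print(f"{name}: {gbps:.1f} GBps bidirectional")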
    And a picture is worth a thousand words:

    [Apple's G5 system-architecture diagram appeared here; the image is missing.]
    A G4 was limited to a TOTAL of around 230 MB/sec through a separate PCI-PCI Bridge ASIC that attached immediately below the Uni-N Memory Controller.

    On a G5, HyperTransport Technology is *SUPPOSED* to allow 4.8 GBps from all of your buses to the PCI-PCI bridge (built into the U3 ASIC). Whether we can actually see that remains to be seen. I have yet to hear of any throughput test that exceeded 600 MB/sec. But 600 is a BUNCH.

    The totals I am quoting come from adding together all activity from all devices below the PCI-PCI bridge: audio, FireWire, USB, all PCI slots (regardless of whether you have a single bus or a dual bus like your G5), Bluetooth, Ethernet, internal hard drives, optical drives, and so on. All are limited to a maximum composite throughput that is determined by the PCI-PCI bridge.

    Each individual device is also limited to the maximum of the bus itself. You mention having 2 x Seritek 4-port cards. No problem. They are each capable of 250 or 300 MB/sec or more of burst-rate throughput, and you have them installed on separate PCI buses. One is running at 133 MHz while the other is limited to 100 MHz because it is installed in a 100 MHz PCI slot. With fast enough drives you could most likely burst them at the maximum possible bandwidth of the PCI bus they are installed on! That is, if you don't have other devices hogging that bandwidth at the same time.
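
    For a sanity check, here are the per-slot ceilings against one card's burst rate (the Seritek's bus width is my assumption; the conclusion holds either way):

    Code:
    # Per-slot theoretical ceilings on the G5's PCI-X slots vs. one Seritek card's
    # burst rate. Card bus width is an assumption; it fits comfortably either way.
    CARD_BURST_MB = 300  # MB/sec, upper end of the burst estimate above

    for slot_mhz in (133, 100):
        for width_bits in (32, 64):
            ceiling = (width_bits // 8) * slot_mhz  # MB/s
            verdict = "fits" if CARD_BURST_MB <= ceiling else "bus-limited"
            print(f"{width_bits}-bit card @ {slot_mhz} MHz: {ceiling} MB/s -> {verdict}")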

    You have plenty of bandwidth for what you are doing. You could almost do it with a G4, I would bet.

    Rick
    molṑn labe'
    "I am a mortal enemy to arbitrary government and unlimited power. I am naturally very jealous for the rights and liberties of my country, and the least encroachment of those invaluable privileges is apt to make my blood boil."
--Ben Franklin

  12. #12
    Join Date
    Jan 2001
    Location
    Mobius Strip
    Posts
    13,045

    Lightbulb

    There is a G5-only Radeon 9800 Special Edition with 256MB; it is the 4X version that has 128MB (64MB per monitor).

  13. #13
    Join Date
    May 2005
    Posts
    1

    Default Firewire PCI versus Daisy Chained FW400 drives

    New to this forum and in need of a simple comment.

    I run Logic Pro and make use of the VSL sample library.
    I have read threads on the VSL forum about chaining FireWire drives and the potential throughput problems.

    So my simple question is:

    Would attaching 3 FW400 drives to a FireWire PCI card be a much better system than daisy-chaining those drives?

    G5 Dual 2.0 GHz, 4 GB RAM, Tiger 10.4.1

    I can expand on the details here if necessary.

    Stuart Warren

  14. #14
    Join Date
    Jan 2001
    Location
    Mobius Strip
    Posts
    13,045

    Default Daisy-chain FW drives?

    The ideal setup would be a four-channel Serial ATA controller, unless you have already invested in FireWire cases or have ATA drives sitting around that you need to use. One drive per channel, not just per bus. There is a bit too much voodoo to FW drives and chaining, and FW400 is limited in performance, though maybe that's not an issue for your needs. Also, FW RAID on G5s is slower than on G4s, and I doubt that has changed even now with Tiger.

  15. #15
    Join Date
    Jan 2001
    Location
    Mobius Strip
    Posts
    13,045

    Lightbulb FW800 hubs and Intel

    Interesting, Intel is beginning to support FW800:
    Intel FW800 chip
