View Full Version : PCI Architecture

12-08-2004, 08:00 AM
A Primer on PCI (http://arstechnica.com/articles/paedia/hardware/pcie.ars/2)

dictionary glossary? (http://www.techworld.com/html/dictionary.cfm)
PCI roadmap (1999) (http://www.geek.com/techupdate/aug99/pcifacelift.htm)
PCI Express will kill off AGP (http://www.geek.com/news/geeknews/2004Feb/bch20040211023816.htm) (good info here)

PCI 2.1 and 2.2 each have their own variations.

Some cards are 33MHz/64-bit
Or capable of 33 or 66MHz (putting a 66 MHz card in a 33MHz bus might work, but putting a 33 MHz card in a fixed frequency 66MHz PCI bus can damage the card)

Then there is how the card is "keyed" and whether it supports
5v (won't work in G5's PCI-X 100/133MHz bus)
Universal 3.3v/5v
PCI-E takes more to explain and understand

Silicon Image has engineering samples of a PCI-E SATA chip.

NVIDIA and ATI are working on PCI-E video. In some cases you will be able to use an AGP Pro 8X card design in PCI-E to make the transition easier and to reduce development costs.

You may want to look at the motherboard and how PCI bus is built in.
IBM has - HyperTransport high-speed point-to-point (http://www-03.ibm.com/technology/power/newsletter/december2004/article4.html) (also useful for servers, routers and networking gear as well as on the mobo).
Intel PCI Express roadmap (http://www.geek.com/news/geeknews/2003Sep/bch20030919021855.htm)

PCIe mainstream video cards (http://www.xbitlabs.com/articles/video/display/mainstream-pciexpress.html) is a good article and helps in understanding the whole PCI issue.

What it is and what you get

PCI Express incorporates several fundamental changes compared to PCI and PCI-X. Unlike these older architectures, PCI Express is a point-to-point switching architecture, which creates high-speed, bidirectional links between the CPU and peripherals such as video, network or storage adapters. Each of these links may be made over one or more "lanes" comprising four wires, two for transmitting and two for receiving data.

PCI Express is the new high-performance I/O bus architecture. As a point-to-point switching architecture, PCI Express offers multiple, bidirectional high-speed links between the CPU and peripheral adapters.

In its initial implementation, PCI Express will provide transfer speeds of 250MB/sec. in each direction for each lane. Early PCI Express adapter cards are shipping in configurations of one, four, eight or 16 lanes (called x1, x4, x8 and x16). Therefore, an x16 PCI Express card can provide as much as 4GB/sec. of throughput in each direction.
The new I/O express lane (http://www.computerworld.com/hardwaretopics/storage/story/0,10801,100559,00.html)

According to ATTO:
The UL3 and UL4 adapters have PCI and PCI-X connectors. PCI Express is a different type of connector, so the only SCSI adapters that will fit into these slots would be our up-and-coming UL5D. This adapter has the same dual-channel Ultra320 SCSI bus as the UL4D, but with the PCI Express connection bus.

PCI-Express II (http://www.theinquirer.net/?article=25774) (2nd generation PCI-e).
PCI Express Specifications (http://www.pcisig.com/specifications/pciexpress/review_zone#pcie)
LSI Logic PCI Express (http://www.lsilogic.com/products/pci_express_cores/)
LSI Logic: PCI Express 16-lane (http://www.lsilogic.com/products/pci_express_16_lane_link_core/index.html)

PCI-X 2.0 Specification (http://www.pcisig.com/specifications/pcix_20/)

02-07-2005, 06:50 PM
I am thinking of adding a Magma chassis (4-slot, PCI-PCI) so as to free up some PCI slots and, in particular, use an external SATA array.

Currently, I'm running a MotU PCI-424 in slot 2 (audio interface card), a Universal Audio UAD-1 in slot 3 (audio effects processing card), and a PowerCore in slot 4 (another audio effects processing card).

The AGP slot is, I believe, called slot 1. The PowerCore, which is dual 33/66MHz, is by itself on the fastest bus. The UAD and PCI-424 are both 33MHz cards.

Here are my questions; I would greatly appreciate any input, either positive or negative.

I'd like to add an external SATA array to avoid the whole FireWire/AMD bridge-chip boondoggle that appears to trouble systems running the PowerCore and UAD-1. I was thinking that it might be a good time to calculate/estimate the amount of throughput/bandwidth that the various cards, and thus the busses, would be likely to need.

For instance, I guess Digital Performer runs 24-bit/48kHz audio uncompressed so that 1 track is:

(24 bits/sample)*(48k samples/sec) =
(3 bytes/sample)*(48000 samples/sec) = 144,000 bytes/sec

Question #1: What is the bandwidth of a PCI slot? That is, how much throughput can be handled by a single, let's say 33MHz, slot?

Question #2: Can one estimate the maximum data being processed by a 33MHz card like the PCI-424 or UAD-1? Would/could a 33MHz slot on a G5 get flooded if the cards were sufficiently bandwidth-hungry? That is, if several cards were housed on a break-out chassis, is there the possibility that the throughput used by the cards might exceed the capacity of the slot/bus on the G5?

Question #3: Assuming the use of a Firmtek Seritek/1SE2 SATA card for external drives (which is also dual 33/66MHz), would you put that card on the 133MHz bus and "slow down" the PowerCore, or would you leave the PowerCore to run at its full 66MHz and put the SATA card on the shared bus with the Magma chassis (which would hold the PCI-424 and UAD-1s)? This is arguably more of an audio question (in fact, I've got some queries out regarding the throughput of the PowerCore), but, perhaps more to the point, do you think the Seritek would function well on the shared bus with the Magma chassis?

Just to make a relatively high estimate, let's say the goal is to be able to handle 40 tracks of audio. Using the above single-track estimate that would translate into:

(40 tracks)*(144,000 bytes/track-sec) = 5,760,000 Bytes/sec

This of course is perfect-world arithmetic.
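For anyone who wants to play with these numbers, the same perfect-world arithmetic can be scripted. A small sketch in Python, with the track count and audio format as parameters (the figures match the hand calculation above; no driver or bus overhead is modeled):

```python
def audio_bandwidth_bytes(tracks, bit_depth=24, sample_rate=48000):
    """Uncompressed PCM bandwidth in bytes/sec: tracks x bytes-per-sample x rate."""
    bytes_per_sample = bit_depth // 8  # 24 bits -> 3 bytes
    return tracks * bytes_per_sample * sample_rate

print(audio_bandwidth_bytes(1))   # one 24-bit/48kHz track: 144000 bytes/sec
print(audio_bandwidth_bytes(40))  # forty tracks: 5760000 bytes/sec (~5.76 MB/sec)
```

Swapping in 96kHz or 32-bit float is just a matter of changing the defaults.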

What think ye?

Any input is greatly appreciated!

02-07-2005, 07:18 PM
Hello Revi,

to your first question, here are some specs for PCI/PCI-X bus system.

PCI = Peripheral Component Interconnect.

PCI 32-bit, 33MHz = 132MB/s (theoretical throughput)
PCI 64-bit, 33MHz = 264MB/s (theoretical throughput)
PCI 32-bit, 66MHz = 264MB/s (theoretical throughput)
PCI 64-bit, 66MHz = 528MB/s (theoretical throughput)

PCI-X 64-bit, 133MHz = 1064MB/s (theoretical throughput)
PCI-X 2.0 64-bit, 266MHz = 2128MB/s (theoretical throughput)
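These peak numbers are just bus width in bytes times clock rate. A quick sketch using nominal integer clocks (the real PCI clocks are 33.33, 66.66, 133.33MHz and so on, which is why published figures vary slightly, e.g. 133 vs. 132MB/s):

```python
def pci_peak_mbs(width_bits, clock_mhz):
    """Theoretical peak PCI throughput in MB/s: bus width in bytes x clock in MHz."""
    return (width_bits // 8) * clock_mhz

# width/clock pairs for plain PCI, PCI-X, and PCI-X 2.0
for width, clock in [(32, 33), (64, 33), (32, 66), (64, 66), (64, 133), (64, 266)]:
    print(f"{width}-bit @ {clock}MHz = {pci_peak_mbs(width, clock)}MB/s")
```

Real-world sustained rates are lower, as Rick notes later in the thread.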



02-07-2005, 07:20 PM
I'm guessing that one could assume that most cards, even if they were compatible with the PCI-X slots, would likely still be PCI cards. Does that make sense? It sure would seem, at least naively, that it would take a lot of processing to flood a PCI bus.

They squarely point the finger of blame at Apple's firmware updates (http://docs.info.apple.com/article.html?artnum=302212)

The Power Mac G5 (Late 2004) System Firmware Update for 10.3.9 and the Power Mac G5 (Late 2004) System Firmware Update for 10.4.3 are available for download.

"The July 2004 G5/1.8 GHz dual-processor model has a problem with its PCI-33 slots. This problem affects PCI cards from various manufacturers including DeckLink cards. The problem will need to be resolved by Apple." and more specifically, "The same problem also currently affects the April 2005 dual-processor G5/2.0 GHz machines with PCI slots and we have subsequently added that information to Radar bug 3736801 in the hope that Apple will be able to address it.

This machine originally worked fine with standard definition DeckLink cards but a subsequent firmware update for the April 2005 series of G5's stopped video playback"


02-07-2005, 09:41 PM
The limits are lower than that usually.

230 MB/sec for the entire PCI Bridge on a Mirrored Drive Door.

220 for all the Uni-N based G4s

600 looks to be a good round maximum for the G5.

These are total throughput of the entire PCI bus. Each slot and ALL the devices soldered to the motherboard south of the PCI bridge share that total (USB, FireWire, Audio, Graphics, etc.).

A 33 MHz slot has the theoretical maximums that Nicolas listed, but most times you will only get 60 or 70% of that at best when running a single card.

I had not heard that a Magma expander would work in a G5. In fact I thought they wouldn't. If you find out differently let us all know.


02-08-2005, 06:29 AM
I only looked at the MAGMA 64-bit/66MHz 7 Slot PCI-to-PCI Expansion System.
See http://www.mobl.com/expansion/pci/7slot6466/7slot_6466_requirements.html
Just a tad 'spensive tho. k

02-08-2005, 07:20 AM
Hello Revi,

the main design difference between PCI and PCI-X is that PCI supports 5V cards while PCI-X supports only 3.3V. If you have a card that can work with both voltages, it could work in a PCI-X slot, but I would always check that with the manufacturer's tech support staff. PCI-X is backward compatible with 3.3V PCI; PCIe is not. The complete PCI specifications (http://www.pcisig.com/specifications)

Some cards, like simple USB or audio cards, do not need the entire throughput of PCI-X, and their manufacturers don't need to design an entirely new card or a bridge as they would with PCIe (PCI-Express (http://en.wikipedia.org/wiki/PCIe)).

Hello Rick,

I thought the MDD has the KeyLargo IC for IDE and stuff and the U2 for PCI, AGP and stuff, and each of them is on its own bus.

I know the G5 has two controllers, one for the 133MHz slot and one for the two 100MHz slots.

I am about to install a Radeon Mac Edition (32MB) in my MDD, instead of using the 9000 Pro's second port, but if the entire PCI throughput is 230MB/s, the two UL3Ds' performance would degrade.

Or would it be not that bad?



02-08-2005, 09:30 AM
Max you can ever get through a MDD with any number of SCSI cards is 230 MB/sec. At least that has been our experience. Love to be proved wrong.

All devices on the south end go through the same choke point, the PCI-PCI bridge. The G5 upped the bandwidth considerably, but the two separate ASICs running the two separate PCI-X buses still come together at the PCI-PCI bridge, and that is where it all has to share. That includes onboard SATA, EIDE, USB, FireWire, Audio... everything. It makes no difference whether a device is soldered to the motherboard or plugged into a PCI slot; it all has to go through the PCI bridge. The G5 placed the AGP graphics port on its own access point to the memory controller, which is a huge benefit.


02-08-2005, 10:29 AM
Hello Rick,

thank you for this info!

I have been using Linux on my MDD for a week and the DualHead feature is not working in Linux.

Would it be better or even worse, performance-wise, to use the Radeon Mac Edition for the second screen in OS X?



02-08-2005, 03:51 PM
The limits are lower than that usually.

230 MB/sec for the entire PCI Bridge on a Mirrored Drive Door.
220 for all the Uni-N based G4s
600 looks to be a good round maximum for the G5.

These are total throughput of the entire PCI bus. Each slot and ALL the devices soldered to the motherboard south of the PCI bridge share that total (USB, FireWire, Audio, Graphics, etc.).

A 33 MHz slot has the theoretical maximums that Nicolas listed, but most times you will only get 60 or 70% of that at best when running a single card.

Dig, that makes sense. I'm waiting on info from Universal Audio and TC Electronic as to what they think their cards would need per audio track.

Now, by "south of the PCI bridge", that would mean _per_ bus though, right? (i.e., slots 2 and 3 would have the stated value and slot 4 would also have that same amount of bandwidth, independent of 2 and 3)

If my estimate of about 5.76 MB/sec for 40 tracks of 24-bit/48kHz audio is accurate, it would seem that there's headroom to spare. Not that I'm complaining either!

I had not heard that a Magma expander would work in a G5. In fact I thought they wouldn't. If you find out differently let us all know.

Apparently they will... which is certainly good given the shortage of PCI slots on the G5. Here's what I received from Mobility/Magma:

"Thank you for your interest in Magma products. Our 4-slot PCI to PCI expansion system should work nicely with the configuration that you have listed below. We have worked closely with UAD, MOTU and TC Powercore for some time now. We have loaned each of these companies our products for testing and development and have had no problems running PCI to PCI. The only card(s) that you are planning to use that I am not familiar with is the SATA RAID cards that you are planning to use. Since you are concerned with bandwidth of the TC Powercore (runs at 133MHz). I would suggest the following set-up:

Installed inside your G5:

TC Powercore
MAGMA PCI host interface card (that connects G5 to 4-slot)

Installed inside the 4-slot (PCI4DR):


I am not sure if your RAID cards have hard drives, but the 4-slot also can fit up to (4) 1" disk drives OR (2) 1" disk drives and (1) 5.25" removable disk drive."

I'd probably have the SATA card in its own slot (e.g., slot 2 or 3 with the Magma in the other). Still have to figure out whether it would be better to put the PowerCore in slot 4 or the SATA card in slot 4.

They sent me a quote too...roughly 1.4 kilobux (kaye's right, they ain't cheap).

If these numbers are correct (i.e., of bandwidth and throughput so that adding a SATA card and a Magma make sense), I reckon I'm going to start saving nickels.

Cripes, I'll never own a car (heh, other than the ones in my home office). ;-)

Thanks for all this info, this has been very informative. I'll let you know what I hear from Universal Audio and TC Electronic.

10-21-2005, 06:43 AM
Blackmagic Design today announced DeckLink Extreme with PCI Express (PCIe) compatibility for the new Power Macintosh G5s.

DeckLink Extreme PCIe is a PCI Express version of Blackmagic Design's famous DeckLink Extreme capture card that features a new high speed PCI Express 1 lane interface running at 2.5 Gbps, which is more than 6 times faster than FireWire-based solutions.

DeckLink Extreme PCIe connects to any standard definition deck in broadcast video, combining both 10-bit SDI digital video and analog YUV component video into a single compact low cost PCI Express card.

DeckLink Extreme PCIe includes full support for all QuickTime based Mac OS X applications including Final Cut Pro HD, Apple Shake, Adobe Photoshop, DVD Studio Pro, Adobe After Effects, Apple Motion, iDVD, and many more. The $900 board offers full component analog or NTSC/PAL video, with HDTV playback to SD and realtime effects.

10-21-2005, 06:45 AM
From the current Apple store detail for newest G5 towers:

Fibre Channel Card

A Fibre Channel PCI Express card is required to connect Xserve RAID to the new Power Mac G5. This PCI Express Card option is a Dual Channel 2Gb Fibre Channel PCI Express Host Bus Adapter and ships with two 2.9-meter Copper Fibre Channel SFP to SFP (small form factor pluggable) interconnect cables. The cables are used to connect to the SFP port on Xserve RAID with SFP connectors. The Fibre Channel PCI Express card will run at full bandwidth in either four-lane or eight-lane PCI Express slots on the new Power Mac G5.

SFP connectors on the card allow use of copper or optical cabling and provide the capability of directly connecting to Xserve RAID or Fibre Channel switches over long distances up to 500m.

The Apple Fibre Channel PCI Express card offers leading performance and compatibility with Mac OS X and Xserve RAID.

The PCI Express Fibre Channel Card is not compatible with previous Power Mac G5 systems that support PCI-X or 64-bit, 33MHz PCI.


* Installation of Apple Fibre Channel PCI Express in Power Mac G5 reclassifies these systems as FCC Class A devices.
* Optical connection requires SFP transceivers and optical cables.
* Customers who select this option and intend to connect to Xserve RAID products with HSSDC2 connection must purchase a separate HSSDC2 to SFP cable kit (part number M9360).
Apple Fibre Channel PCI Express

I wonder even IF developers have had access to one of the PCIe dual-core G5s to begin to play with and test or begin building products - sort of Catch-22 that you need access to one (or a PC) to even begin work.

Modern serial expansion architecture

PCI Express is a modern industry standard sponsored by the Peripheral Component
Interconnect Special Interest Group (PCI SIG). Because older parallel technologies
placed multiple devices on a single bus, the slowest device determined the speed
of the entire bus. A serial technology, PCI Express guarantees each device dedicated
bandwidth to and from the system controller.

PCI Express communicates in 250-MBps “data lanes.” PCI Express cards and slots are
defined by their bandwidth, or number of data lanes—typically one lane, four lanes,
eight lanes, or 16 lanes. At 250 MBps per lane, a four-lane slot can transfer data at
up to 1 GBps and an eight-lane slot, up to 2 GBps—approximately twice as fast as a
133MHz PCI-X slot.
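The lane arithmetic in the Apple excerpt is easy to check. A minimal sketch using the PCIe 1.x figure of 250MB/s per lane, per direction:

```python
PCIE1_LANE_MBS = 250  # PCIe 1.x payload rate per lane, per direction, in MB/s

def pcie_link_mbs(lanes):
    """Peak per-direction bandwidth of a PCIe 1.x link in MB/s."""
    return lanes * PCIE1_LANE_MBS

# the shipping widths: x1, x4, x8, x16
for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: {pcie_link_mbs(lanes)}MB/s")
```

So a four-lane slot moves up to 1GB/s and an eight-lane slot up to 2GB/s each way, roughly double a 133MHz PCI-X slot's shared 1064MB/s, as the excerpt says.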

Three expansion slots

In addition to the 16-lane graphics slot, the Power Mac G5 features three PCI Express
expansion slots: two four-lane slots and one eight-lane slot. Each slot uses a standard
connector that can accommodate a card of any size. This means a four-lane card works
perfectly in an eight-lane slot. If the card has more lanes than the slot, the card adjusts
to the bandwidth available and “downshifts” to that data rate.

With the high-bandwidth architecture in the new Power Mac G5, your system not
only will achieve faster performance today, but will be ready for future technologies
as well. For example, 10-gigabit networking technology, which can achieve up to
2.5 GBps of data throughput, will require an eight-lane slot. This promises to be an
ideal solution for working with uncompressed HD video, which demands over 120
MBps per individual stream—and far more in a multiple-stream or multiple-camera

Apple Fibre Channel PCI Express $599 (http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore?productLearnMore=MA139G/A)

11-01-2005, 08:32 AM
Digidesign Pro Tools|HD PCI Express (PCIe)
Digidesign PCIe Pro Tools Cards, Chassis

With PCIe now adopted by Apple for its desktop models, many manufacturers will announce PCIe versions of their various products in the coming months. Today, it is Digidesign's turn.

Digidesign will soon be offering two versions of its award-winning, professional Pro Tools|HD® digital audio production systems. By continuing to provide the existing PCI version and releasing a PCI Express (PCIe) version, Digidesign is ensuring continued compatibility of its Pro Tools|HD systems with a wide range of PCI, PCI-X, and PCIe-equipped computers. Existing Pro Tools|HD core systems are designed to work in PCI and PCI-X slots only and are not compatible with PCIe technology.

PCIe-compatible Pro Tools|HD core systems are scheduled to ship before the end of this year and will be sold at the same price as their existing PCI counterparts. Digidesign will offer a cross-grade program for PCI users who wish to switch to a PCIe solution. Full details and pricing of this offer will be communicated soon.

To avoid the problems of a format change leaving end users without expansion card options, Digidesign will also release the Digidesign Expansion|HD chassis, a brand-new six-slot PCI expansion chassis that connects to the host Windows or Mac computer using either a PCI/PCI-X or PCIe expansion slot.

11-04-2005, 10:14 AM
Texas Instruments has announced a PCI-EXPRESS <--> PCI converter: the XIO2000.

This is able to convert signals between both formats, and can "transform" a single PCIe x1 slot into a 6-slot PCI or PCI-X device. This can have many applications:

- This could turn many PC PCI cards (sound card, acquisition, FireWire, USB2, SATA, etc...) into Mac-compatible cards without any deep modifications.

- This card could be added directly on the motherboard to add PCI/PCI-X slots without changing the chipset.

- And it could be used to build PCI/PCI-X expansion kit for the new PMG5 dualcore at a relatively low cost.

TI is proposing a reference design for such an expansion card.

It could be the ideal solution for many users who have deeply invested in PCI cards for adding functionalities/capabilities to their Mac hardware.

For additional information on the XIO2000 :



DALLAS (November 1, 2005) - Texas Instruments Inc. (TI) (NYSE: TXN) announces another addition to its PCI Express product line with the availability of a new PCI Express-to-PCI Bridge. The XIO2000 allows seamless bridging between legacy PCI devices and the latest PCI Express applications used in PCs, docking stations, ExpressCard, split-chassis systems, and test and measurement instruments.

"The XIO2000 allows customers to bridge their existing PCI application to the newer PCI Express architecture without having to worry about compliance and interoperability issues," said Jawaid Ahmad, product marketing manager for TI's Interface Business Unit. "Prior to release we did extensive validation and testing to ensure customers get a compatible device to use with the plethora of add-in cards and root complex devices available in today's market."

The device has a x1 lane upstream port connection to PCI Express root complex, providing support for up to six x1 PCI endpoint devices on its downstream side. By using TI's XIO2000 customers are now able to expand one x1 PCI Express lane to support up to six devices.

Available Today

The XIO2000 PCI Express-to-PCI Bridge is available now from TI and its authorized distributors. The XIO2000 comes in a 201-pin MicroStar BGA™ package. Suggested resale pricing in quantities of 1,000 starts at $14.95.

TI´s Commitment to PCI Express

TI's roadmap includes a complete portfolio of products built for the PCI Express architecture, enabling chip-to-chip interconnect, I/O interconnect for adapter cards and an I/O attach point to other interconnects such as PCI, 1394 (FireWire) and USB.

About PCI Express

PCI Express is a point-to-point serial differential low-voltage interconnect that consolidates application requirements for use by multiple market segments. PCI Express architecture is a high-performance, highly flexible, scalable, reliable, stable and cost-effective general-purpose I/O architecture that seamlessly complements existing PCI buses and will transition the market over the next several years, allowing system and communication designers to use new topologies.

Ars: PCIe to PCI bridging (http://arstechnica.com/articles/paedia/hardware/pcie.ars/7)

11-17-2005, 10:11 AM
ComputerWorld looks at the Quad-G5. Here is one snippet.

PCI Express Bus

The Power Mac G5 now supports PCI Express. Each processor core has direct access to the front-side bus controller, and the new architecture supports 16 lanes of 250MB/sec. each for a total throughput of 4GB/sec. -- roughly four times as fast as a 133MHz PCI-X slot. There are 150 watts of power in the bus, enabling current and future high-end display cards to operate at peak performance. In addition to the 16-lane graphics slot, the Power Mac G5 features three PCI Express expansion slots: two four-lane slots and one eight-lane slot. You can install a PCI Express graphics card in any PCI Express slot, allowing a single Power Mac G5 to support four, six or even eight displays.

There are PCI Express cards available for video from Blackmagic Design Pty. Ltd. and Aja Video. National Instruments Corp. has cards for specialized scientific applications, and Digidesign Inc. announced that cards for its Pro Tools systems are imminent.

On the graphics display, the Nvidia Quadro FX 4500 is amazingly fast, sporting 512MB of Synchronous Dynamic RAM just for the card, along with the requisite heat sinks and fan. And the piping gives it a look reminiscent of a Harley. The stereoscopic viewing option for the card enables scientists to experience molecules and other complex objects in true 3-D. The 3-D system is also Maya-certified, and great for IMAX production work.

Review (http://www.computerworld.com/printthis/2005/0,4814,106263,00.html)

11-22-2005, 03:18 PM
Aaxeon PCI Express Cards
Some of their cards have a "Mac G5 Compatible" icon on the page.
Seems like deep sleep isn't working though.

* USB2/FW800 * UFC2412 USB/1394B Combo PCI Express Card (http://www.aaxeon.com/products/Productdetail.aspx?cate=8&modelno=UFC2412) $150 G5 compatible.

* USB2 4 external ports * USB4414N USB 4+1 ports PCI Express Card (http://www.aaxeon.com/products/Productdetail.aspx?cate=8&modelno=USB4414N) $89, reported to work.

FireWire 800 FWB3414 Firewire 2 ext. + 1 int. 1394B & 1 ext. 1394A port PCIe Card (http://www.aaxeon.com/products/Productdetail.aspx?cate=8&modelno=FWB3414) $159.00, poor write speeds.

Serial ATA II 1+1 channel PCIe Card SATA1414 (http://www.aaxeon.com/products/Productdetail.aspx?cate=8&modelno=SATA1414) $79.00

Serial ATA II 2 channel PCI Express Card SATA2400 (http://www.aaxeon.com/products/Productdetail.aspx?cate=8&modelno=SATA2400) $69.00

11-23-2005, 08:07 AM
Tom's Hardware: Comparing PCI-X vs. PCI Express SATA II RAID (http://www.tomshardware.com/storage/20051123/index.html)

HighPoint's RocketRAID 2320 bandwidth vs. the PCI-X version, with a good explanation of PCI Express technology.

11-25-2005, 07:56 AM
PCMCIA: broad acceptance of ExpressCard likely in 2H 2007


04-11-2006, 02:48 PM
April 11th, 2006 -- PCI Express 8 Lane Slot (3) does produce a real world speed drop compared to the 16 Lane Slot (1). We finally got around to installing the GeForce 7800 GT in slot 3 of our Quad-Core. We ran various apps to see if there was any speed penalty to using the 8 lane PCI Express slot compared to the 16 lane slot (1) factory default. In other words, if you add a second graphics card in the 8 lane slot, will it actually run slower than the one in the 16 lane slot?

The answer is, "Yes, but not always by much." Of course, CPU intensive tasks won't be affected, but when we ran Motion and iMaginator -- which hand off Core Image functions to the graphics card -- the 16 lane slot was 3% and 7% faster respectively. With OpenGL 3D games at 1600x1200 High Quality, the advantage of the 16 lane slot varied from 2% with Doom 3 to 45% with Unreal Tournament 2004's Flyby. Most other game scenarios were 13% faster with the 16 lane slot in use.


Also, AMUG's Sonnet E4P PCIe review (http://www.amug.org/amug-web/html/amug/reviews/articles/sonnet/e4p/) notes problems using some slots. HighPoint's RocketRAID is a PCI-X board with a PCIe bridge; Sonnet's is "native" PCIe.

05-05-2006, 01:43 PM
Two-port FW800 Express Card Adapter

NitroAV.com, a developer and manufacturer of FireWire800/1394 hardware peripherals, storage and RAID subsystems, today announced the ExpressWay Series 34mm FireWire800 Express Card Adapters.

The 34mm "ExpressWay" series adapter brings two FireWire 800/1394b ports. It offers 2x 9-pin connectors as an easy and affordable way to add FireWire 800 storage and devices to the MacBook Pro.

It will be available for $90 to registered partners and resellers. End users can purchase the cards from a variety of resellers, including FireWireDirect.

05-11-2006, 07:52 AM
Digital Cowboy is going to sell an adapter allowing the use of a PCI card in a PCI-Express slot in Japan.

If the adapter functioned on the Mac, it would be possible to reuse a number of half-height PCI cards, such as FireWire or ATA/SATA cards.

As soon as it is available over there, we will enlist one of our team members who lives in Japan to perform some tests for us.

06-26-2006, 11:12 AM
Customers use multilane for both storage and connectivity. Products using multilane for storage tend to top out at 16 drives most of the time, spread across one to four RAID cards, although a few products go up to 24 drives.

Infiniband connectors are also commonly seen here for 10Gbps connections.


Like Fibre Channel (http://en.wikipedia.org/wiki/Fibre_Channel), PCI Express (http://en.wikipedia.org/wiki/PCI_Express), and many other modern interconnects, InfiniBand uses a bidirectional serial bus (http://en.wikipedia.org/wiki/Serial_bus) so that it can avoid the signal skew problems of parallel busses when communicating over relatively long (room- or building-scale) distances. Although it is a serial connection, it is very fast, with 2.5 gigabits per second (Gbit/s) links in each direction per connection. InfiniBand also supports double and quad data rates for 5 Gbit/s or 10 Gbit/s respectively. Links use 8B/10B encoding (http://en.wikipedia.org/wiki/8B/10B_encoding) so every 10 bits sent carry 8 bits of data, such that the actual data rate is 4/5ths the raw rate. Thus the single, double and quad data rates carry 2, 4 or 8 Gbit/s respectively.

Links can be aggregated in units of 4 or 12, called 4X or 12X. A quad-rate 12X link therefore carries 120 Gbit/s raw, or 96 Gbit/s of user data. Most systems today use a 4X single data rate connection, though the first double-data-rate products are already entering the market. Larger systems with 12X links are typically used for cluster (http://en.wikipedia.org/wiki/Computer_cluster) and supercomputer (http://en.wikipedia.org/wiki/Supercomputer) interconnects and for inter-switch (http://en.wikipedia.org/wiki/Network_switch) connections.
Wikipedia - InfiniBand (http://en.wikipedia.org/wiki/InfiniBand)
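The 8B/10B arithmetic in the Wikipedia excerpt is worth making explicit: multiply the signaling rate by 8/10 for user data, and by the link width for 4X or 12X aggregation. A small sketch:

```python
def ib_data_rate_gbps(signal_gbps, width=1):
    """InfiniBand user data rate in Gbit/s: 8B/10B carries 8 data bits per 10 sent."""
    return signal_gbps * width * 8 / 10

# single, double and quad data rates on one link: 2.5, 5 and 10 Gbit/s raw
print([ib_data_rate_gbps(r) for r in (2.5, 5.0, 10.0)])  # [2.0, 4.0, 8.0]
print(ib_data_rate_gbps(10.0, width=12))                  # 96.0 (quad-rate 12X)
```

This matches the excerpt's figures of 2, 4 and 8 Gbit/s per link and 96 Gbit/s for a quad-rate 12X link.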

The most common configuration of the PCI bus is a 32-bit 33MHz version that provides a bandwidth of 133MB per second, although the 2.2 version of the specification allows for a 64-bit version at 33MHz for a bandwidth of 266MB per second and even a 64-bit 66MHz version for a bandwidth of 533MB per second.

Even today's powerful desktop machines have lots of capacity available with the PCI bus in the typical configuration, but server machines are starting to hit the upper limits of the shared bus architecture.

The availability of multiport Gigabit Ethernet NICs, along with one or more Fibre Channel I/O controllers, can easily consume even the highest 64-bit, 66MHz version of the PCI bus.
To resolve this limitation on the bandwidth of the PCI bus, a number of solutions are becoming available in the market as interim solutions such as PCI-X and PCI DDR (Mellanox Technologies, "Understanding PCI Bus, 3GIO, and InfiniBand Architecture (http://www.mellanox.com/)"). Both of them are backwards compatible upgrade paths to the current PCI bus. The PCI-X specification allows for a 64-bit version of the bus operating at the clock rate of 133 MHz, but this is achieved by easing some of the timing constraints. The shared bus nature of the PCI-X bus forces it to lower its fanout in order to achieve the high clock rate of 133 MHz.

A PCI-X system that is running at 133 MHz can have only one slot on the bus; two PCI-X slots would allow a maximum clock rate of 100 MHz, whereas the four-slot configuration would drop down to a clock rate of 66 MHz (Compaq Computer Corporation, "PCI-X: An Evolution of the PCI Bus (http://www.compaq.com/)," September 1999, TC990903TB.). So, despite the temporary resolution of the PCI bandwidth limitation through these new upgrade technologies, a long-term solution is needed that does not rely on a shared bus architecture.

InfiniBand breaks through the bandwidth and fanout limitations of the PCI bus by migrating from the traditional shared bus architecture into a switched fabric architecture.

Each connection between nodes, switches, and routers is a point-to-point, serial connection. This basic difference brings about a number of benefits:

Because it is a serial connection, it only requires four wires, as opposed to the wide parallel connection of the PCI bus.
The point-to-point nature of the connection provides the full capacity of the connection to the two endpoints because the link is dedicated to the two endpoints. This eliminates the contention for the bus as well as the resulting delays that emerge under heavy loading conditions in the shared bus architecture.
The InfiniBand channel is designed for connections between hosts and I/O devices within a Data Center. Due to the well defined, relatively short length of the connections, much higher bandwidth can be achieved than in cases where much longer lengths may be needed.
Other related interesting developments include the definition of the SCSI RDMA Protocol (ftp://download.intel.com/technology/infiniband/data/IBAS87.htm) (SRP) over InfiniBand which is work in progress and the definition of the Sockets Direct Protocol (ftp://download.intel.com/technology/infiniband/data/IBAS83.htm) (SDP) whose goal is to define a sockets type API over InfiniBand.

An Introduction to InfiniBand Architecture (http://www.oreillynet.com/pub/a/network/2002/02/04/windows.html)

08-09-2007, 01:19 PM
PCI Express 3.0 to push 8GT/s
by Geoff Gasior (dissonance@techreport.com) - 01:20 pm, August 9, 2007

PCI-SIG has announced (http://www.pcisig.com/news_room/08_08_07/) that the next, next-gen PCI Express 3.0 spec will offer a maximum bit rate of 8GT/s—double that of PCI Express 2.0. Interestingly, PCI-SIG considered pushing a 10GT/s maximum bit rate, but settled on a slightly slower speed, in part to maintain backwards compatibility.
“Experts in the PCIe Electrical Workgroup analyzed both 10GT/s and 8GT/s as target bit rates for the next generation of PCIe architecture, and after careful consideration of several factors, including power, implementation complexity and silicon area, recommended 8GT/s,” said Al Yanes, PCI-SIG chairman.

“This allows us to satisfy the next generation performance requirements for all existing PCIe applications while maintaining backward compatibility, and at the same time broadening the adoption of this pervasive technology into new and emerging applications and usage models.” Along with additional bandwidth, PCI Express 3.0 adds support for "transmitter and receiver equalization, PLL improvements, clock data recovery, and channel enhancements for currently supported topologies."

Don't expect to see PCIe 3.0 anytime soon, though. Core logic chipsets with PCI Express 2.0 support aren't due until the fall, and it could be 2010 before products built for PCIe 3.0 become available.
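The article doesn't spell out how an 8GT/s line rate can double 5GT/s throughput. The widely documented reason is that PCIe 3.0 also replaces 8b/10b encoding with the much leaner 128b/130b scheme, so nearly all of the extra signaling goes to payload. A back-of-the-envelope sketch:

```python
def pcie_lane_mbs(gt_per_sec, data_bits, coded_bits):
    """Effective per-lane payload in MB/s from line rate and encoding overhead."""
    payload_bits = gt_per_sec * 1e9 * data_bits / coded_bits
    return payload_bits / 8 / 1e6  # bits -> bytes -> MB

gen2 = pcie_lane_mbs(5, 8, 10)     # PCIe 2.0: 5GT/s with 8b/10b encoding
gen3 = pcie_lane_mbs(8, 128, 130)  # PCIe 3.0: 8GT/s with 128b/130b encoding
print(round(gen2), round(gen3))    # per-lane payload roughly doubles
```

That works out to about 500MB/s per lane for 2.0 and about 985MB/s for 3.0, which is how the spec doubles bandwidth without the signal-integrity cost of 10GT/s.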

Thanks to Xbit Labs (http://www.xbitlabs.com/news/other/display/20070808225423.html) for the tip.