Interview with Tom Petersen – Technical Marketing Manager, NVIDIA MCP Group

LR: What about mobile? That seems like a huge market opportunity for NVIDIA.

NV: It’s coming. Sooner than you think!

LR: ATI is getting ready to release RD580 and has been critical of the PCIe implementation in your NVIDIA SLI X16 architecture, which splits the dual PCIe x16 lanes between two chips. Care to comment?

NV: Well, we haven’t seen RD580, so I couldn’t comment on their product or how it will be perceived in the market. That said, we have noticed a higher level of misinformation about our products floating around recently.

LR: Like what?

NV: Well, for example, there have been reports that the link between our SPP and MCP is limited to an 8x HyperTransport connection. This is not true. The link width is 16x in both directions between the SPP and MCP. The HyperTransport clock on this link is 1000MHz (a 5x multiplier), which amounts to a staggering 4GB/sec of bandwidth between the two chips. We have the same link between the SPP and the AMD CPU. Remember, the AMD CPU has its memory controller built in.
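
As a quick back-of-the-envelope check of those HyperTransport figures, the sketch below reproduces the 4GB/sec number from the quoted link width and clock, assuming nominal HT 1.x signalling (double data rate on a 16-bit link); these are published HyperTransport characteristics, not figures supplied in the interview.

```python
# Back-of-the-envelope check of the quoted HyperTransport link bandwidth.
# Assumes nominal HT 1.x signalling: a 16-bit link per direction, double data rate.

link_width_bytes = 16 / 8      # 16-bit link in each direction
clock_hz = 1000e6              # 1000MHz HT clock (5x multiplier on a 200MHz base)
transfers_per_clock = 2        # HyperTransport transfers data on both clock edges

per_direction = link_width_bytes * clock_hz * transfers_per_clock
print(f"{per_direction / 1e9:.1f} GB/s per direction")   # prints 4.0 GB/s
```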

LR: But doesn’t having the PCIe x16 lanes split between two chips impede performance?

NV: Nope. There are no performance issues caused by having the PCI Express x16 slots connected to different chips. NVIDIA SLI graphics performance is predominantly determined by the GPUs and their bandwidth to memory and to each other, not by latency. Since NVIDIA’s X16 architecture provides more than enough bandwidth through its full 16x HyperTransport link, there is no appreciable performance difference.

As we said, the NVIDIA two-chip design consists of the System Platform Processor (SPP) and the Media and Communications Processor (MCP). The HyperTransport 16x link between them delivers 4GB/sec of bandwidth. This is four times the bandwidth of other chipsets, including those from ATI and Intel, which use only a PCIe x4 or x2 link between their two chips. We use HyperTransport. They don’t.
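
To put the "four times" comparison in context, the sketch below lines up nominal per-direction peak rates, assuming first-generation PCIe lanes at 250MB/sec after 8b/10b encoding; the pairings are those named in the interview and the numbers are theoretical peaks, not measurements.

```python
# Nominal per-direction chip-to-chip interconnect bandwidth (theoretical peaks).
# PCIe 1.x: 250 MB/s per lane per direction after 8b/10b encoding.

pcie_lane_mb = 250

links_mb = {
    "HyperTransport 16x @ 1000MHz": 4000,                # from the calculation above
    "PCIe x4 chip interconnect":    4 * pcie_lane_mb,    # 1000 MB/s
    "PCIe x2 chip interconnect":    2 * pcie_lane_mb,    # 500 MB/s
}

for name, mb in links_mb.items():
    print(f"{name}: {mb / 1000:.1f} GB/s per direction")
```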

LR: Yes, but what about a situation where you fully utilize the bandwidth of the link?

NV: Well, the truth is, you would never have a real-world environment where this would happen. Nobody, including Intel, designs chipsets for hypothetical usage models. At no time would you ever encounter a situation where you would have to sustain peak bandwidth for PCIe, SATA, and networking all at the same time. All chipset designers can do is design for real-world usage models. This applies not just to NVIDIA, but to everyone else too, including ATI and Intel.

For example, ATI uses a slow 2x PCIe interconnect between their northbridge and southbridge in their SB400 family, and a 4x PCIe interconnect when they pair their northbridge with the ULi/NVIDIA southbridges. In these cases as well, based on a hypothetical usage model, they too would not have sufficient bandwidth to handle all of the southbridge functions (dual GigE, four SATA 3Gb/sec drives, USB, etc.). The same applies to Intel, which also uses a 4x PCIe interconnect between their two chips.
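
For illustration, the sketch below works through the kind of hypothetical "everything at peak" load being described, using nominal interface rates and an assumed device mix; protocol overhead is ignored and the port counts are illustrative, not a measured configuration.

```python
# Hypothetical worst case: every southbridge device streaming at its nominal peak.
# Assumed device mix and nominal rates; real sustained throughput is lower.

peak_mb = {
    "dual GigE":     2 * 125,   # 1 Gbit/s ~ 125 MB/s each
    "4x SATA 3Gb/s": 4 * 300,   # ~300 MB/s each after 8b/10b encoding
    "USB 2.0":       60,        # 480 Mbit/s ~ 60 MB/s
}

total = sum(peak_mb.values())
print(f"aggregate device peak: {total / 1000:.2f} GB/s")    # ~1.51 GB/s
print(f"PCIe x2 chip link:     {2 * 250 / 1000:.2f} GB/s")  # 0.50 GB/s
print(f"PCIe x4 chip link:     {4 * 250 / 1000:.2f} GB/s")  # 1.00 GB/s
```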
