Interview with Tom Petersen – Technical Marketing Manager, NVIDIA MCP Group
To address your claim, NVIDIA provides plenty of bandwidth between the SPP and MCP for games. And, outside of games, that link provides plenty of bandwidth for the PCI, SATA, gigabit Ethernet, USB 2.0 and other connectors tied to it. The same cannot be said for the competition.
While hypothetical arguments are fun to engage in, they really have no basis for drawing performance conclusions, since they don’t relate to real-world usage models.
LR: Is there an architectural advantage to combining both x16 PCI Express slots on a single chip rather than splitting them between the SPP and MCP, as NVIDIA does with the NVIDIA nForce4 SLI X16 architecture?
NV: Not really. The platform is only as good as the design of the overall architecture. HyperTransport is a high-speed, low-latency, chip-to-chip interface that is optimized for data throughput. As we mentioned earlier, we provide sufficient bandwidth in the inter-chip HyperTransport link to yield the same effective performance.
One way to think about this intuitively is to understand that the functions accomplished by both architectures are identical. Both single-chip and dual-chip designs must queue up PCIe packets at the GPU interfaces and then arbitrate, steer and transfer the packets to the CPU and main memory. It turns out that for an average packet, the transfer latency is dominated by the first three functions, and the actual transfer between the SPP and MCP represents a small fraction of total packet latency.
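To make that latency argument concrete, here is a rough back-of-envelope sketch. The stage names follow Petersen's description; all of the nanosecond figures are illustrative assumptions made up for the example, not NVIDIA measurements.

```python
# Rough back-of-envelope model of the packet-latency argument above.
# All timing numbers are illustrative assumptions, not NVIDIA figures.

def packet_latency_ns(queue_ns, arbitrate_ns, steer_ns, interchip_transfer_ns=0):
    """Total latency for one PCIe packet on its way to the CPU and main memory."""
    return queue_ns + arbitrate_ns + steer_ns + interchip_transfer_ns

# Single-chip design: queue, arbitrate and steer, with no SPP-to-MCP hop.
single_chip = packet_latency_ns(queue_ns=120, arbitrate_ns=80, steer_ns=60)

# Dual-chip design: the same common stages plus a small inter-chip transfer.
dual_chip = packet_latency_ns(queue_ns=120, arbitrate_ns=80, steer_ns=60,
                              interchip_transfer_ns=20)

print(f"single-chip: {single_chip} ns")
print(f"dual-chip:   {dual_chip} ns "
      f"(+{100 * (dual_chip - single_chip) / single_chip:.1f}%)")
```

With numbers in this ballpark, the extra inter-chip hop adds only a single-digit percentage to total packet latency, which is the intuition behind the claim that the dual-chip split does not change effective performance.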
When you look at the actual architecture of most chipset solutions, the link you really need to examine is the one between the northbridge and the CPU. That is where all data eventually has to go in order to reach main memory.
LR: But we’ve seen some reviews showing negative scaling when going from NVIDIA nForce4 SLI (dual x8) to the new NVIDIA nForce4 SLI X16. Why is that?
NV: (laughs) Yes, we have seen those too. Unfortunately, some of those early reviews were conducted with improper BIOS settings. It is imperative to run with proper BIOS settings to ensure full performance on NVIDIA nForce4 SLI X16 platforms. Running the HyperTransport link at x8, as some have done in comparative reviews, unfairly penalizes performance. The HyperTransport link width of the NVIDIA nForce4 SLI X16 MCP is x16 in both directions, and platforms should be reviewed accordingly. For those interested, there is measurable scaling between NVIDIA dual x8 and dual x16 platforms with selected applications, most notably when turning on SLI AA modes.
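For readers who want to see why the link width setting matters, the raw HyperTransport bandwidth works out as below. The 1 GHz link clock is an assumption typical of this platform generation, not a figure quoted in the interview.

```python
# Per-direction HyperTransport bandwidth: width (bytes) x clock x 2 (double data rate).
def ht_bandwidth_gbs(width_bits, clock_mhz):
    """Per-direction HyperTransport bandwidth in GB/s (decimal)."""
    return (width_bits / 8) * clock_mhz * 2 / 1000

# Assuming a 1 GHz HyperTransport clock:
for width in (8, 16):
    print(f"{width}-bit link: {ht_bandwidth_gbs(width, 1000):.1f} GB/s per direction")
# 8-bit  -> 2.0 GB/s per direction
# 16-bit -> 4.0 GB/s per direction
```

Halving the link width to 8 bits therefore halves the available inter-chip bandwidth, which is why a BIOS misconfiguration of this setting skews comparative results.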
LR: Why does NVIDIA continue to offer SLI technology only on NVIDIA-based platforms?
NV: NVIDIA has an extensive certification program for the entire SLI ecosystem, which now consists of GPUs, motherboards, power supplies, memory, PC cases and full PC systems. This certification program, together with the testing of the interoperability of all of these components, is very complex and requires many resources. As a result, we have elected to focus our certification program on motherboards featuring NVIDIA nForce MCP technology. Motherboards are available for both Intel and AMD platforms, at a variety of price segments, starting as low as $79 USD.