
Asus A8N32-SLI nForce4 SLI X16 Motherboard Review

Author: Ryan Shrout
Subject: Motherboards
Manufacturer: Asus

NVIDIA nForce4 SLI X16 Specifications

Before we get into the specifics of this particular Asus motherboard, I think we need to look at the features and options that the new NVIDIA chipsets offer.


Below we have a block diagram of the AMD-based nForce4 SLI X16 chipset.



As you can see, this chipset returns NVIDIA to a two-chip design for their AMD chipsets, something that has been missing since the nForce3 was first released.  The NF4 SLI X16 north bridge, or SPP, provides a total of 18 lanes of PCI Express: 16 for a single GPU slot and two x1 PCIe slots.  A HyperTransport connection travels to the south bridge, where the X16 MCP provides another full 16 PCIe lanes for either a second GPU or any other PCIe peripheral.  In addition, there are three more x1 PCIe connections for additional peripherals.
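The lane arithmetic above is easy to lose track of, so here is a quick back-of-the-envelope tally in Python using only the figures quoted in the text (the variable names are our own, not NVIDIA's terminology):

```python
# Tally of PCI Express lanes as described for the AMD nForce4 SLI X16.
# Figures come straight from the text above; this is just a sanity check.
spp_lanes = 16 + 2 * 1   # north bridge (SPP): one x16 GPU slot plus two x1 slots
mcp_lanes = 16 + 3 * 1   # south bridge (MCP): second x16 GPU slot plus three x1 slots
total_lanes = spp_lanes + mcp_lanes

print(spp_lanes, mcp_lanes, total_lanes)  # 18 19 37
```

Splitting the lanes across two chips like this is exactly what lets each GPU slot get a full 16 electrical lanes.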


The rest of the chipset's features remain the same, including support for NVIDIA Gigabit networking with the ActiveArmor firewall, 10 USB 2.0 connections, 7.1 audio, and four IDE and four SATA II channels.  Up to five legacy PCI slots are also supported.



The Intel version of this chipset is very similar to its AMD counterpart, with a few required exceptions.  First, the SPP, or north bridge, features support for the Intel 1066 MHz FSB and DDR2 memory at speeds up to 667 MHz.  Just as with the AMD chipset, the two x16 graphics card slots are separated between the north and south bridge and communicate via a HyperTransport connection.  Interestingly, though, the Intel chipset has four PCIe x1 slots available in the SPP in addition to the x16 lanes for the GPU, while the AMD version has two in the SPP and three in the MCP.


The nForce4 SLI X16 chipset is the first desktop product to offer two full x16 PCI Express slots for GPU connectivity; the previous nForce4 SLI chipset used two x8 PCIe electrical connections on two x16 PCIe slots. 
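To put the x16-versus-x8 difference in concrete terms, here is a rough sketch in Python, assuming first-generation PCI Express signaling (250 MB/s per lane in each direction, which is a property of the PCIe 1.x standard, not something specific to this chipset):

```python
# Per-direction bandwidth of a PCIe 1.x slot scales with its electrical
# lane count: each lane carries 250 MB/s in each direction.
MB_PER_LANE = 250  # PCIe 1.x, per direction

def slot_bandwidth_gbs(lanes):
    """Per-direction bandwidth in GB/s for a slot with the given lane count."""
    return lanes * MB_PER_LANE / 1000

full_x16 = slot_bandwidth_gbs(16)  # nForce4 SLI X16: 4.0 GB/s each way per slot
old_x8 = slot_bandwidth_gbs(8)     # previous nForce4 SLI: 2.0 GB/s each way
print(full_x16, old_x8)
```

So each graphics slot on the new chipset has twice the electrical bandwidth of the x8 connections used by the original nForce4 SLI, even though the physical slots looked identical.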



This new arrangement raises the question: why didn't they just include BOTH x16 connections in the north bridge?  Chances are the die for that chip would have been too hot and too large (in transistors and traces) for simple integration onto motherboards.  NVIDIA's method of separating the two GPU slots addresses that issue, but it also adds one additional "hop" for data traveling from GPU to GPU.  In other words, any data that has to get from one video card to the other via the PCI Express bus must now jump down to its host chip, across the HyperTransport bus, and then through the other chip to the card.  This means that HyperTransport bus speeds will become more important in SLI systems as we move forward.
