When mining some form of cryptographic coin, very few components in the system are actually utilized. A GPU is basically a self-contained, massively parallel number cruncher with its own memory and logic. The host system just needs to batch out the tasks, which leads to PCs with dirt-cheap CPUs, a very modest amount of RAM, and quite literally a half-dozen high-end graphics cards.

If you thought that gaming machines skew a little too much towards GPUs, you should see a mining rig with five R9 290X cards fed by a Sempron.

As you can guess, since many GPUs are double-slot, it might be difficult to fit seven of them on a seven-slot motherboard with a limited number of PCIe lanes. To get around this limitation, miners attach their graphics cards to extension cables (risers). Thankfully (for them), mining does not pass a lot of data across the bus to the host system. Even a single PCIe lane apparently fails to be a bottleneck.
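As a back-of-envelope check (a quick sketch; the bandwidth, work-unit size, and batch rate below are my own assumptions, not figures from the article), the traffic involved is a rounding error next to even one lane:

```python
# Rough sketch of why a single PCIe lane is plenty for mining traffic.
# All numbers below are assumptions for illustration, not measured values.
pcie2_x1_bandwidth = 500e6   # ~500 MB/s each way for one PCIe 2.0 lane
work_unit_out = 80           # bytes of work header pushed to the GPU per batch
result_back = 32             # bytes of result/share returned to the host
batches_per_second = 1_000   # generous guess at how often new work is issued

traffic = batches_per_second * (work_unit_out + result_back)
print(f"~{traffic / pcie2_x1_bandwidth:.4%} of one PCIe 2.0 x1 link")
```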

Anyway, the Corsair blog created an open-air rack which hangs six graphics cards (five HD 7970s and an R9 290X) above a motherboard housing an Intel Celeron G1830. For air, a quartet of Corsair fans pulls air upward and around the graphics cards. For power, of course they use the Corsair AX1500i, because why not mine with an arc welding torch? It apparently had more power capacity than the breaker they originally hooked it up to. Whoops.

While this rig is ridiculous, I do hope to see systems with multiple (even mismatched) graphics processors as we move toward batches of general mathematics. PhysX was not entirely successful in teaching users that GPUs do not need to be in SLI or CrossFire configurations to load balance. The trick is just finding an appropriate way to split tasks across devices without introducing a lot of bottlenecks (or setup headaches).
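As a rough illustration (a minimal sketch, assuming PyOpenCL is installed and the OpenCL runtime can see your GPUs; the kernel and the even split are purely hypothetical), a host program can enumerate every GPU it finds, mismatched or not, and hand each one its own slice of a workload with no SLI or CrossFire involved:

```python
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void square(__global const float *in, __global float *out) {
    int i = get_global_id(0);
    out[i] = in[i] * in[i];
}
"""

# Collect every GPU across all platforms (AMD, NVIDIA, Intel, whatever is present).
gpus = [d for p in cl.get_platforms()
          for d in p.get_devices(device_type=cl.device_type.GPU)]

data = np.arange(1_000_000, dtype=np.float32)
chunks = np.array_split(data, len(gpus))   # naive even split across devices

results = []
for dev, chunk in zip(gpus, chunks):
    ctx = cl.Context([dev])                # one context and queue per device
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, KERNEL).build()
    mf = cl.mem_flags
    buf_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=chunk)
    buf_out = cl.Buffer(ctx, mf.WRITE_ONLY, chunk.nbytes)
    prog.square(queue, chunk.shape, None, buf_in, buf_out)
    out = np.empty_like(chunk)
    cl.enqueue_copy(queue, out, buf_out)   # read each device's slice back
    results.append(out)

print(np.concatenate(results)[:5])         # [ 0.  1.  4.  9. 16.]
```

A naive even split like this ignores that an R9 290X will finish its slice long before an HD 7970 does; a real scheduler would weight each slice by the device's measured throughput.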

I might not mine coins, but I could see some benefit to having 35 TeraFLOPs across seven compute devices. I could also see Corsair wanting to sell me a power supply for said PC.