Alongside the Titan X, NVIDIA has announced the Quadro M6000. In terms of hardware, they are basically the same component: 12 GB of GDDR5 on a 384-bit memory bus, 3072 CUDA cores, and a reduction in double precision performance to 1/32nd of its single precision. The memory, but not the cache, is capable of ECC (error-correction) for enterprises who do not want a stray photon to mess up their computation. That might be the only hardware difference between it and the Titan X.
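For context on what ECC buys you: the memory stores extra check bits alongside the data so that a single flipped bit can be located and corrected on read. As a rough illustration of the principle only (a toy Hamming(7,4) code in Python; this is not the scheme GDDR5 actually uses, and the function names are mine):

```python
def hamming74_encode(nibble):
    """Pack 4 data bits plus 3 parity bits into a 7-bit codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]  # covers codeword positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]  # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]  # covers positions 4, 5, 6, 7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_correct(codeword):
    """Locate and flip a single-bit error, then return the 4 data bits."""
    bits = [(codeword >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 0 = clean, else error position
    if syndrome:
        bits[syndrome - 1] ^= 1
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

word = hamming74_encode(0b1011)
corrupted = word ^ (1 << 3)               # a stray bit flip in storage
print(bin(hamming74_correct(corrupted)))  # 0b1011 -- data recovered
```

Any single flipped bit in the 7-bit word produces a non-zero syndrome that points directly at the bad position; real ECC DRAM applies the same idea at much wider word sizes.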
Compared to other Quadro cards, it loses some double precision performance, as mentioned earlier, but it will be an upgrade in single precision (FP32). The add-in board connects to the power supply with just a single eight-pin plug. Technically, with its 250W TDP, it is slightly over the rating for one eight-pin PCIe connector, but NVIDIA told AnandTech that they're confident it won't matter for the card's intended systems.
That is probably true, but I wouldn't put it past someone to do something spiteful given recent events.
The lack of double precision performance (IEEE 754 FP64) could be disappointing for some. While NVIDIA would definitely know their own market better than I do, I was under the impression that a common workstation system for GPU compute was a Quadro driving a few Teslas (such as two of these). It would seem weird for a company to have such a high-end GPU be paired with Teslas that have such a significant difference in FP64 compute. I wonder what this means for the Tesla line, and whether we will see a variant of Maxwell with a large boost in 64-bit performance, or if that line will be in an awkward place until Pascal.
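To illustrate why the FP32/FP64 gap matters to compute customers: single precision carries a 24-bit significand against double precision's 53 bits, and the difference shows up quickly with large values and long accumulations. A quick sketch using only Python's standard library to emulate FP32 rounding (the helper name is mine):

```python
import struct

def to_float32(x):
    """Round a Python float (IEEE 754 binary64) to binary32 precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# FP32 carries a 24-bit significand versus FP64's 53 bits, so the first
# integer FP32 cannot represent exactly is 2**24 + 1 = 16777217.
print(float(2**24 + 1))       # 16777217.0 (FP64 holds it exactly)
print(to_float32(2**24 + 1))  # 16777216.0 (FP32 rounds it away)

# The gap compounds over long accumulations, e.g. summing 0.1 repeatedly:
acc32 = acc64 = 0.0
for _ in range(100_000):
    acc32 = to_float32(acc32 + to_float32(0.1))
    acc64 += 0.1
print(abs(acc64 - 10000.0))  # tiny rounding error
print(abs(acc32 - 10000.0))  # many orders of magnitude larger
```

This is exactly the kind of drift that pushes scientific and financial workloads toward FP64 hardware, which is where the 1/32nd rate stings.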
Or maybe not? Maybe NVIDIA is planning to launch products based on an unannounced, FP64-focused architecture? The aim could be to let the Quadro handle the heavy FP32 calculations, while the customer loads up co-processors according to their double precision needs. It's an interesting thought as I sit here at my computer musing to myself, but then I immediately wonder why they did not announce it at GTC if that is the case. If it is (and honestly I doubt it, because I'm just typing unfiltered thoughts here), you would think the two would kind-of need to be sold together. Or maybe not. I don't know.
Pricing and availability are not currently known, except that it is “soon”.
Photons don’t cause memory errors.
It seems, or it is theorized anyway (we have SO MUCH to learn about dark matter, I’m getting away from myself…), that anti-protons, anti-electrons, or possibly other dark matter passing through transistors are responsible for at least some of the unexplained random computer errors we have been getting since the dawn of the microchip.
It’s possible that’s what he was referencing, or he was just shooting the shit casual-style; either way, I think we got what he was saying.
I was speaking casually, but gamma and X-ray radiation could mess with your computer. I never claimed how likely it is, though.
Most, not all, of the "cosmic ray" radiation is blocked by the atmosphere, but that's not even the only source. Some coal power plants have been known to kick up kilograms of uranium per year that was mixed in with the fuel, which decays in the air because smoke goes where smoke wants to. Many places, like Nova Scotia, Canada, are built atop giant slabs of granite, which contain elements that give off an above-average amount of nuclear radiation (which for some reason leads to lower rates of cancer — it is assumed that it builds up immunity or something). Even just the iodine in your bones gives off (very little) gamma radiation.
But non-professional graphics drivers can sure screw up a 12-hour render, along with the non-ECC memory found in most gaming rigs. It’s the cost of certifying the drivers for the professional graphics software that makes these professional SKUs cost more, that and the ECC. So photon and ray interactions simulated in software can be royally screwed up by memory errors, ruining a 12-hour render for that giant trade show graphic that has to be perfect before it is sent to the printer (a high-definition large-format laser printer) or displayed on a jumbo LCD screen. Just have your graphics show up late for a trade show, and see your contract go up in smoke, or lose your job.
It looks like Nvidia is further segmenting their professional graphics line from their HPC/supercomputing/server line of GPU accelerators, to keep those pro graphics SKUs from being used for any heavy double precision scientific work. A bit like Intel does with their Xeon server kit and their E series Core i7 processors: gimping the E series i7s down just enough to not cross over into usable for server/workstation workloads. You want ECC? Get a Xeon and pay. Same with Nvidia: you want DP, get a scientific/HPC SKU and pay.
Put your device out in space, and watch those gamma ray photons do a number on the memory. Ever wonder why space probes and satellites have CPUs fabricated on large-nanometer process nodes and are enclosed in radiation-hardened cases? And a camera flash can cause a reboot on some kit. Photons come in lots of wavelengths, and some can ruin your day permanently.
http://www.pcpro.co.uk/components/1000375/why-a-camera-flash-will-reboot-your-raspberry-pi-2
As others have stated, gamma radiation is high-energy photons. I am not sure what the current manufacturing processes are like, but in the past a lot of memory errors would have been caused by locally occurring radiation. We always have a low level of background radiation around from naturally occurring heavy elements, and also from unstable isotopes of common elements like carbon-14. Some of the radiation could come from the elements used to make or package the memory chips themselves. I believe they attempt to use somewhat purified materials to avoid including radioactive isotopes, but this is not perfect, as separating isotopes is difficult and expensive.
I have had files get corrupted by bad system memory before, so I am tempted to build my next system with a Xeon, a server board, and registered ECC memory. The error did not cause noticeable instability, but it did cause a lot of files written to disk to be corrupted. It is more expensive to get ECC, but it mostly does not perform much differently anymore. Currently, after doing hardware replacement on systems, I always run a full pass of memtest86. With the way things are going, it may be best to run a GPU memory test also, since errors there are more likely to cause a crash with GPUs acting more like CPUs. In the past, errors in GPU memory would mostly just cause some display corruption. As processes get smaller, it seems like memory errors will be more likely to occur. I think they should include ECC everywhere rather than trying to keep it as a server/workstation feature. At least detecting errors would be nice; otherwise you can end up with a bunch of corrupted files before the error is discovered.
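For what it's worth, the core of what a tool like memtest86 does is conceptually simple: write known patterns over memory, read them back, and flag mismatches. A toy sketch in Python over a bytearray (names are mine; a real tester runs bare-metal against physical RAM, not a managed buffer):

```python
def pattern_test(buf, patterns=(0x00, 0xFF, 0x55, 0xAA)):
    """memtest-style sweep: write each pattern everywhere, read it back.

    Returns a list of (index, expected, actual) mismatches. 0x55/0xAA
    alternate adjacent bits, which helps catch stuck or coupled bit lines.
    A real tester (memtest86) does this bare-metal against physical RAM;
    this sketch only exercises a Python bytearray.
    """
    mismatches = []
    for p in patterns:
        for i in range(len(buf)):
            buf[i] = p                      # write pass
        for i, b in enumerate(buf):
            if b != p:                      # read-back pass
                mismatches.append((i, p, b))
    return mismatches

scratch = bytearray(1 << 16)  # 64 KiB scratch buffer
print(pattern_test(scratch))  # [] on healthy memory
```

The same write/read-back idea is what GPU memory test tools apply to VRAM, which is why running one after a hardware swap is a reasonable habit.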