NVIDIA Launches Tesla K20X Accelerator Card, Powers Titan Supercomputer

Subject: General Tech | November 12, 2012 - 06:29 AM |
Tagged: tesla, supercomputer, nvidia, k20x, HPC, CUDA, computing

Graphics card manufacturer NVIDIA launched a new Tesla K20X accelerator card today that supplants the existing K20 as the top-of-the-line model. The new card cranks up the double and single precision floating point performance, beefs up the memory capacity and bandwidth, and brings some efficiency improvements to the supercomputer space.

[Image: NVIDIA Tesla K20X and K20 GPU accelerators]

While it is not yet clear how many CUDA cores the K20X has, NVIDIA has stated that it uses the GK110 GPU and runs with 6GB of memory at 250 GB/s of bandwidth – a nice improvement over the K20's 5GB at 208 GB/s. Both the new K20X and the K20 are based on the company's Kepler architecture, but NVIDIA has managed to wring more performance out of the K20X: the K20 is rated at 1.17 TFlops peak double precision and 3.52 TFlops peak single precision, while the K20X is rated at 1.31 TFlops and 3.95 TFlops respectively.
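Run the quoted numbers and the uplift is consistent across the board (straightforward arithmetic from NVIDIA's own figures):

    1.31 ÷ 1.17 ≈ 1.12  →  roughly 12% more peak double precision
    3.95 ÷ 3.52 ≈ 1.12  →  roughly 12% more peak single precision
    250 ÷ 208   ≈ 1.20  →  roughly 20% more memory bandwidth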


The K20X manages to score 1.22 TFlops in DGEMM, which puts it at almost three times the speed of the previous generation Tesla M2090 accelerator based on the Fermi architecture.
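For context, DGEMM is double-precision dense matrix multiplication, which on Tesla hardware is normally driven through NVIDIA's cuBLAS library. Below is a minimal sketch of the sort of test that produces a number like 1.22 TFlops; the matrix size, single-iteration timing, and missing error checking are our simplifications for illustration, not NVIDIA's actual benchmark code:

    // Minimal DGEMM throughput sketch using cuBLAS (compile: nvcc -lcublas).
    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 4096;                         // square matrices, illustrative size
        const double alpha = 1.0, beta = 0.0;
        std::vector<double> h((size_t)n * n, 1.0);  // dummy input data

        double *dA, *dB, *dC;
        cudaMalloc(&dA, sizeof(double) * n * n);
        cudaMalloc(&dB, sizeof(double) * n * n);
        cudaMalloc(&dC, sizeof(double) * n * n);
        cudaMemcpy(dA, h.data(), sizeof(double) * n * n, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, h.data(), sizeof(double) * n * n, cudaMemcpyHostToDevice);

        cublasHandle_t handle;
        cublasCreate(&handle);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, dA, n, dB, n, &beta, dC, n);   // C = A * B
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // By convention a GEMM is counted as 2*n^3 floating point operations.
        printf("DGEMM: %.2f TFlops\n", (2.0 * n * n * n) / (ms * 1e-3) / 1e12);

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }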


Aside from pure performance, NVIDIA is also touting efficiency gains with the new K20X accelerator card. When two K20X cards are paired with a 2P Sandy Bridge server, NVIDIA claims 76% efficiency, versus 61% for a 2P Sandy Bridge server equipped with two previous generation M2090 accelerator cards. Additionally, NVIDIA claims its new cards enabled the Titan supercomputer to reach the #1 spot among the top 500 green supercomputers with a rating of 2,120.16 MFLOPS/W (million floating point operations per second per watt).


NVIDIA claims to have already shipped 30 PFLOPS worth of GPU-accelerated computing power. Interestingly, most of that computing power is housed in the recently unveiled Titan supercomputer, which contains 18,688 Tesla K20X (Kepler GK110) GPUs paired with 18,688 16-core AMD Opteron 6274 processors (299,008 CPU cores in total). It consumes about 9 megawatts of power and is rated at a theoretical peak of 27 petaflops, with 17.59 petaflops sustained on the Linpack benchmark. Further, when compared to Sandy Bridge processors, NVIDIA claims the K20 series offers between 8.2 and 18.1 times more performance on several scientific applications.
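Those figures also line up with the Green500 rating mentioned above, since MFLOPS/W is simply sustained Linpack throughput divided by the power drawn during the run. Working backwards from the quoted numbers (straight arithmetic, assuming the rating was measured on the 17.59-petaflop run):

    17.59 PFLOPS = 1.759 × 10^10 MFLOPS
    1.759 × 10^10 MFLOPS ÷ 2,120.16 MFLOPS/W ≈ 8.3 × 10^6 W

In other words, Titan drew roughly 8.3 megawatts during the measured run, a bit under its 9 megawatt rating.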


While the Tesla cards undoubtedly use more power than CPUs, you need far fewer accelerator cards than processors to hit the same performance numbers. That is where NVIDIA gets its power efficiency numbers.

NVIDIA is aiming the accelerator cards at researchers and businesses doing 3D graphics, visual effects, high performance computing, climate modeling, molecular dynamics, earth science, simulations, fluid dynamics, and other such computationally intensive tasks. Using CUDA and the parallel nature of the GPU, the Tesla cards can achieve performance much higher than a CPU-only system can. NVIDIA has also engineered two features to better parallelize workloads and keep the GPU accelerators fed with data, which the company calls Hyper-Q and Dynamic Parallelism respectively.
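Dynamic Parallelism, for example, lets code that is already running on the GPU launch additional kernels itself rather than returning to the CPU to request more work. Here is a minimal sketch of the idea; the kernel names and launch sizes are invented for illustration, and device-side launches require GK110-class hardware with nvcc flags along the lines of -arch=sm_35 -rdc=true:

    #include <cuda_runtime.h>

    // Hypothetical child kernel: do finer-grained work for one coarse cell.
    __global__ void refine_cell(int cell) {
        // ... finer computation for this cell would go here ...
    }

    // Parent kernel: each thread inspects one cell and, if it needs more
    // work, launches a child grid directly from the GPU - no host round trip.
    __global__ void process_grid(const float *error, float threshold, int ncells) {
        int cell = blockIdx.x * blockDim.x + threadIdx.x;
        if (cell < ncells && error[cell] > threshold)
            refine_cell<<<1, 64>>>(cell);   // device-side launch (Dynamic Parallelism)
    }

    int main() {
        // The host launches only the parent grid once; the GPU generates
        // any further work itself, which is the point of the feature.
        const int ncells = 1 << 20;
        float *d_error;
        cudaMalloc(&d_error, ncells * sizeof(float));
        cudaMemset(d_error, 0, ncells * sizeof(float));
        process_grid<<<(ncells + 255) / 256, 256>>>(d_error, 0.5f, ncells);
        cudaDeviceSynchronize();
        cudaFree(d_error);
        return 0;
    }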

It is interesting to see NVIDIA bring out a new flagship, especially another GK110 card. Systems using the K20 and the new K20X are available now, with cards shipping this week and general availability later this month.

You can find the full press release below and a look at the GK110 GPU in our preview.

AnandTech also managed to get a look inside the Titan supercomputer at Oak Ridge National Laboratory, where you can see the Tesla K20X cards in action.

NVIDIA's Tesla K10 offers serious single-precision performance

Subject: General Tech | June 19, 2012 - 03:04 PM |
Tagged: nvidia, tesla, K10, GK104, HPC

One of NVIDIA's line of Tesla HPC cards, the Tesla K10, has actually been seen in the wild.  The new Tesla series is split between the GK104-based K10, designed specifically for single-precision tasks, and the GK110-based Tesla K20, which is optimized for double-precision work.  The K10 is capable of 4.58 teraflops thanks to a pair of GK104s with 8GB of GDDR5, whereas the K20 should in theory deliver around 2 teraflops of double-precision performance – roughly double Intel's Xeon Phi – but that has yet to be demonstrated.  The K10 that was demonstrated also showed off another benefit of NVIDIA's new architecture: even with two GPUs the card remains within a 225W thermal envelope, something that is incredibly important if you are building a cluster.  The Register has gathered together some of the benchmarks and slides from NVIDIA's release, which you can see here.
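The single-precision figure also checks out against the silicon (simple arithmetic; the implied clock is our inference, not an NVIDIA-quoted spec). Each GK104 carries 1,536 CUDA cores, each capable of 2 single-precision FLOPS per cycle:

    4.58 TFlops ÷ 2 GPUs = 2.29 TFlops per GK104
    2.29 TFlops ÷ (1,536 cores × 2 FLOPS/cycle) ≈ 745 MHz core clock

That suggests the K10's GPUs are clocked well below a consumer GTX 680, presumably to stay inside that 225W envelope.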

[Image: NVIDIA Tesla K10 benchmark slides, via The Register]

"The Top 500 supercomputer ranking is based on the performance of machines running the Linpack Fortran matrix math benchmark using double-precision floating point math, but a lot of applications will do just fine with single-precision math. And it is for these workloads, graphics chip maker and supercomputing upstart Nvidia says, that it designed the new Tesla K10 server coprocessors."


Source: The Register

Intel Introduces Xeon Phi: Larrabee Unleashed

Subject: Processors | June 19, 2012 - 11:46 AM |
Tagged: Xeon Phi, xeon e5, nvidia, larrabee, knights corner, Intel, HPC, gpgpu, amd

Intel does not respond well when asked about Larrabee.  Though Intel has received a lot of bad press from the gaming community about what it was trying to do, that does not necessarily mean that Intel was wrong about how it set up the architecture.  The problem with Larrabee was that it was being positioned as a consumer level product with an eye toward breaking into the HPC/GPGPU market.  At the consumer level, Larrabee would have been a disaster.  Intel simply would not have been able to compete with AMD and NVIDIA for gamers' hearts.
 
The problem with Larrabee and the consumer space was a matter of focus, process decisions, and die size.  Larrabee is unique in that it is almost fully programmable and features really only one fixed function unit.  In this case, that fixed function unit was all about texturing.  Everything else relied upon the large array of x86 processors and their attached vector units.  This turns out to be very inefficient when it comes to rendering games, which is the majority of work for the consumer market in graphics cards.  While no outlet was able to get a hold of a Larrabee sample and run benchmarks on it, the general feeling was that Intel would easily be a generation behind in performance.  When considering how large the die size would have to be to even get to that point, it was simply not economical for Intel to produce these cards.
 
 
Xeon Phi is essentially an advanced part based on the original Larrabee architecture.
 
This is not to say that Larrabee does not have a place in the industry.  The actual design lends itself very nicely to HPC applications.  With each chip hosting many x86 processors with powerful vector units attached, these products can provide tremendous performance in HPC applications that can leverage those units.  Because Intel utilized x86 processors instead of the more homogeneous designs that AMD and NVIDIA use (lots of stream units doing vector and scalar work, but no x86 units and no traditional networking fabric to connect them), Intel has a leg up on the competition when it comes to programming.  While GPGPU applications rely on products like OpenCL, C++ AMP, and NVIDIA's CUDA, Intel is able to lean on the many current programming languages and tools that already target x86.  With the addition of wide vector units on each x86 core, it is relatively simple to adjust existing code to utilize the new features, compared to porting it over to OpenCL.
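As a rough sketch of that claim (our illustration, not Intel sample code): a standard C/C++ loop like the one below can be spread across the MIC's many cores with a plain OpenMP pragma and across its wide vector units by the compiler's auto-vectorizer, rather than being rewritten in OpenCL or CUDA.

    #include <cstddef>

    // A simple SAXPY-style loop. The pitch for an x86-based MIC part is that
    // this exact source can be threaded across the many cores via OpenMP and
    // auto-vectorized across the wide vector units by the compiler, with no
    // port to OpenCL, C++ AMP, or CUDA required.
    void saxpy(float a, const float *x, float *y, std::size_t n) {
        #pragma omp parallel for
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }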
 
So this leads us to the Intel Xeon Phi.  This is the first commercially available product based on an updated version of the Larrabee technology; the exact code name is Knights Corner.  It is a new MIC (many integrated cores) product based on Intel's latest 22 nm Tri-Gate process technology.  Details are scarce on how many cores the product actually contains, but it looks to be more than 50 of a very basic "Pentium"-style core: low die space, in-order, and all connected by a robust networking fabric that allows fast data transfer between the memory interface, the PCI-E interface, and the cores.
 
 
Each Xeon Phi promises more than 1 TFLOP of performance (as measured by Linpack).  When combined with the new Xeon E5 series of processors, these products can provide a huge amount of computing power.  Furthermore, with the addition of the Cray interconnect technology that Intel acquired this year, clusters of these systems could power some of the fastest supercomputers on the market.  While it will take until at least the end of this year to integrate these products into a massive cluster, it will happen, and Intel expects these products to be at the forefront of driving performance from the petascale to the exascale.
 
 
These are the building blocks that Intel hopes to use to corner the HPC market.  Providing powerful CPUs and dozens if not hundreds of MIC units per cluster, the potential computing power should bring us to the exascale that much sooner.
 
Time will of course tell if Intel will be successful with Xeon Phi and Knights Corner.  The idea behind this product seems sound, and the addition of powerful vector units attached to simple x86 cores should make the software migration to massively parallel computing just a wee bit easier than what we are seeing now with GPU-based products from AMD and NVIDIA.  The areas where those other manufacturers have advantages over Intel are their many years of work with educational institutions (research), software developers (gaming, GPGPU, and HPC), and industry standards groups (Khronos).  Xeon Phi has a ways to go before being fully embraced by these organizations, and its future is certainly not set in stone.  We have yet to see third-party groups get a hold of these products and put them to the test.  While Intel CPUs are certainly class leading, we still do not know the full potential of these MIC products as compared to what is currently available in the market.
 

The one positive thing for Intel’s competitors is that it seems their enthusiasm for massively parallel computing is justified.  Intel just entered that ring with a unique architecture that will certainly help push high performance computing more towards true heterogeneous computing. 

Source: Intel

AMD and SeaMicro partnering to develop a processor agnostic HPC interconnect

Subject: General Tech | March 28, 2012 - 01:21 PM |
Tagged: amd, seamicro, interconnect, purchase, HPC, 3d torus, freedom

At the beginning of March it was announced that AMD would be spending $334 million to purchase SeaMicro, a company that holds the patents on the 3D torus interconnect for High Performance Computing and servers.  This interconnect uses PCIe lanes to connect large numbers of processors together to create what was commonly referred to as a supercomputer and is now more likely to be labelled an HPC machine.  SeaMicro's current SM10000 chassis can hold 64 processor cards, each of which has a processor socket, chipset, and memory slots, which makes the entire design beautifully modular.
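To see why a 3D torus is attractive at this scale, consider that every node connects to exactly six neighbors, with links wrapping around at the edges, so there is no central switch to saturate. The toy sketch below is ours, not SeaMicro's; the node IDs and the 4 × 4 × 4 layout (64 nodes, conveniently matching a 64-card chassis) are invented for illustration:

    #include <cstdio>

    // Toy 3D torus addressing: nodes sit at (x, y, z) on a DIM^3 grid and
    // each links to six neighbors; coordinates wrap at the edges, and that
    // wraparound is what makes the mesh a torus.
    const int DIM = 4;   // 4 x 4 x 4 = 64 nodes

    int node_id(int x, int y, int z) { return (x * DIM + y) * DIM + z; }

    void print_neighbors(int x, int y, int z) {
        int xm = (x + DIM - 1) % DIM, xp = (x + 1) % DIM;
        int ym = (y + DIM - 1) % DIM, yp = (y + 1) % DIM;
        int zm = (z + DIM - 1) % DIM, zp = (z + 1) % DIM;
        printf("node %2d links to %d %d %d %d %d %d\n", node_id(x, y, z),
               node_id(xm, y, z), node_id(xp, y, z),
               node_id(x, ym, z), node_id(x, yp, z),
               node_id(x, y, zm), node_id(x, y, zp));
    }

    int main() {
        print_neighbors(0, 0, 0);   // even a corner node has six neighbors
        return 0;
    }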

One of the more interesting features of the Freedom system design is that it can currently utilize either Atom or Xeon chips on those processor cards.  With AMD now in the mix, you can expect to see compatibility with Opteron chips in the very near future, which will give AMD a chance to grab market share from Intel in the HPC segment.  The Opteron series may not be as powerful as the current Xeons, but they cost noticeably less, which makes them very attractive for customers who cannot afford 64 Xeons but need more power than an Atom can provide.

The competition is not just about price, however; with Intel's recent purchase of QLogic's InfiniBand interconnect technology, AMD needs to ensure it can also provide a backbone of comparable speed.  The current Freedom interconnect has 1.28 Tb/s of aggregate bandwidth across a 3D torus and supports up to sixteen 10-Gigabit Ethernet links or sixty-four Gigabit Ethernet links, which is in the same ballpark as a 64-channel InfiniBand-based system.  The true speed will depend on which processors AMD puts into these systems, but as Michael Detwiler told The Register, that will depend on what customers actually want, not on what AMD thinks will be best.

[Image: SeaMicro Freedom interconnect diagram]

"As last week was winding down, Advanced Micro Devices took control of upstart server maker SeaMicro, and guess what? AMD is still not getting into the box building business, even if it does support SeaMicro's customers for the foreseeable future out of necessity.

Further: Even if AMD doesn't have aspirations to build boxes, the company may be poised to shake up the server racket as a component supplier. Perhaps not as dramatically as it did with the launch of the Opteron chips nearly a decade ago, but then again, maybe as much or more - depending on how AMD plays it and Intel and other server processor makers react."


Source: The Register

SeaMicro spurns Atom and cleaves to AMD

Subject: General Tech | March 1, 2012 - 02:09 PM |
Tagged: amd, seamicro, interconnect, purchase, HPC

There is more movement in the low power server market as AMD purchases SeaMicro for $334 million, an investment that may help it keep its share of the server market.  You might have thought that a company that arrived on the scene with a server based on 512 single-core Atoms would either stick with Intel or even consider ARM, but instead it was AMD that grabbed them.  It is an important move for AMD to retain competitiveness against Intel, considering Intel's purchase of QLogic's InfiniBand interconnect technology, which could lead to entirely new server architectures.  Using SeaMicro's experience connecting a large number of individually weak processors into a powerful server, AMD will be able to develop the SoC business it has been pursuing for quite a while now.  Check out the full story at The Inquirer.


"AMD's new CEO Rory Read was fired up about executing better in the server racket at the company's analyst day earlier this month and has wasted little time in stirring things up with the acquisition of low-power server start-up SeaMicro for $334m."


Source: The Register

Intel is thinking even bigger and likely leveraging their McAfee assets

Subject: General Tech | January 24, 2012 - 01:24 PM |
Tagged: Intel, QLogic, purchase, Infiniband, HPC

Intel blew a tiny $125 million piece of its record-breaking quarterly income to purchase QLogic's InfiniBand business, which gives it access to a networking technology significantly faster than Ethernet.  InfiniBand is what is referred to as a switched fabric technology, which allows multiple switches to connect to multiple hosts or data stores, as opposed to the single-path, point-to-point links of current Ethernet-based networks.

[Diagram: a switched fabric topology]

That may look familiar to some, but not as a network technology; it matches the communications architecture behind PCIe and SATA.  As we have seen with those interfaces, the move from parallel buses to fast serial links brought impressive speed gains, and InfiniBand's fastest implementation is currently capable of transferring 25 Gbit/s per lane.  That is significantly faster than the roughly 8 Gbit/s (about 1 GB/s) per lane that PCIe 3.0 can provide, which is why some current implementations of InfiniBand are used in High Performance Computing (HPC) applications.  InfiniBand also offers incredibly low switch latency of between 100 and 200 nanoseconds, depending on the implementation.
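Those per-lane figures add up quickly because InfiniBand links are normally aggregated, with 4x being the common port width (straightforward arithmetic from the numbers above):

    InfiniBand 4x port: 4 lanes × 25 Gbit/s ≈ 100 Gbit/s
    PCIe 3.0 x4 link:   4 lanes × 8 Gbit/s  ≈ 32 Gbit/s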

[Chart: InfiniBand speed roadmap]

Getting hold of this interconnect technology gives Intel a huge boost in its ability to create high performance networking products.  Intel has been looking for a way to grow in that area and push Application Specific Integrated Circuit (ASIC) manufacturers out of the market, replacing those chips with low power Xeons or future Intel chips.  This would open up an entirely new market for Intel, whose already impressive growth could increase significantly.  Intel could become even more attractive to customers by leveraging its ownership of McAfee to place virus/malware protection directly onto its switches.  We have already seen evidence of one project along these lines at IDF 2011, when Intel announced DeepSAFE, software that operates below the OS level to provide what the company refers to as "hardware-assisted" security.  With that OS-agnostic approach it would be possible to run the security software on a network switch or on an HPC interconnect, which could give Intel not only the fastest interconnect technology but also the most secure.

When discussing the deal with The Inquirer, Intel's Kirk Skaugen stated that this purchase will help Intel design and produce an exaflop-level supercomputer by 2018.  It is unlikely that this is Intel's only goal: with last summer's purchase of Fulcrum Microsystems, a company which designs ASICs for 10 Gbit/s and 40 Gbit/s Ethernet switches and routers, Intel is well on its way to designing network switches for HPC applications.  The Register ponders what this could mean for companies which have built their products on InfiniBand technology.  Will they be snatched up by a networking company like Cisco, could AMD pick them up and provide competition in this industry, or will they consider offering themselves to Intel the best alternative?  We will be keeping an eye on this, as it will not only develop into the next generation of networking technology but could also drive the successor to PCIe.


"The high-performance networking market just got a whole lot more interesting, with Intel shelling out $125m to acquire the InfiniBand switch and adapter product lines from upstart QLogic.

Intel has made no secret that it wants to bolster its Data Center and Connected Systems business by getting network equipment providers to use Xeon processors inside of their networking gear – that Intel division posted $10.1bn in revenues in 2011, and the company wants to break $20bn in the next five years."


Source: The Register

Cray Announces AMD Bulldozer CPU and NVIDIA Tesla GPU Supercomputer Capable of 50 Petaflops

Subject: Systems | May 24, 2011 - 09:07 PM |
Tagged: tesla, supercomputer, petaflop, HPC, bulldozer

Cray has been a huge name in the supercomputer market for years, and with the new XK6 it is promising to deliver a supercomputer capable of 50 thousand trillion operations per second (50 petaflops). Powered by AMD Opteron CPUs and NVIDIA GPUs, each XK6 blade comprises two Gemini interconnects pairing four AMD Opteron CPUs with four NVIDIA Tesla X2090 embedded graphics cards. The graphics cards in each blade have access to 6GB of GDDR5 memory each and are connected via PCI-E 2.0 links to the Opteron processors. The CPUs have access to four DDR3 memory slots "running at 1.6GHz for every G34 socket," according to The Register, which amounts to 32GB per two-socket node when using 4GB sticks.

[Image: Cray XK6]

Cray plans to wait for AMD to release the 16-core 32nm Opteron CPUs, dubbed the Opteron 6200s, in Q3. The Register quotes AMD's interim CEO Thomas Seifert as promising that the processors, based on the new Bulldozer cores (and compatible with the current G34 sockets), "would ship by summer."

Further, The Register notes that Cray's goal with the XK6 was to keep the new blades within the same thermal boundaries as their predecessor, despite the inclusion of GPUs in the mix. Cray has indicated that, because it succeeded in remaining within that thermal envelope, customers will be able to use XE6 and XK6 blades interchangeably and customize their supercomputer load-out to meet the demands of their specific computing workloads.

[Image: Cray XK6 blade]

Each cabinet can hold up to 24 blades and can deliver up to 50 kilowatts of power. Each Tesla X2090 GPU is capable of 665 gigaflops in double-precision floating point operations, something GPUs excel at. As each XK6 blade contains four GPUs and each cabinet can hold 24 blades, customers are looking at 63.8 teraflops of computing power from the graphics cards alone. On the CPU side of things, Cray is not able to release specifications as AMD has yet to deliver the chips in question; The Register estimates that each XK6 blade will provide 3.5 teraflops of floating point computing power, which amounts to approximately 84 teraflops per cabinet.

With a claimed capability of up to 300 cabinets full of XK6 blades, customers are looking at approximately 44 petaflops of computing horsepower, with the GPUs delivering 19.14 petaflops and the CPUs estimated to provide 25.2 petaflops of floating point computational power.

The first customer of this system will be the Swiss National Supercomputing Centre. According to the Seattle Times, the center's director, Professor Thomas Schulthess, stated that they chose the Cray XK6-based supercomputer not for its raw performance, but because "the Cray XK6 promises to be the first general-purpose supercomputer based on GPU technology, and we are very much looking forward to exploring its performance and productivity on real applications relevant to our scientists."

Source: The Register