AMD Announces Radeon Vega Frontier Edition Graphics Cards

Subject: Graphics Cards | May 16, 2017 - 07:39 PM |
Tagged: Vega, reference, radeon, graphics card, gpu, Frontier Edition, amd

AMD has revealed their concept of a premium reference GPU for the upcoming Radeon Vega launch, with the "Frontier Edition" of the new graphics cards.

Vega FE Slide.png

"Today, AMD announced its brand-new Radeon Vega Frontier Edition, the world’s most powerful solution for machine learning and advanced visualization aimed to empower the next generation of data scientists and visualization professionals -- the digital pioneers forging new paths in their fields. Designed to handle the most demanding design, rendering, and machine intelligence workloads, this powerful new graphics card excels in:

  • Machine learning. Together with AMD’s ROCm open software platform, Radeon Vega Frontier Edition enables developers to tap into the power of Vega for machine learning algorithm development. Frontier Edition delivers more than 50 percent more performance than today’s most powerful machine learning GPUs.
  • Advanced visualization. Radeon Vega Frontier Edition provides the performance required to drive increasingly large and complex models for real-time visualization, physically-based rendering and virtual reality through the design phase as well as rendering phase of product development.
  • VR workloads. Radeon Vega Frontier Edition is ideal for VR content creation supporting AMD’s LiquidVR technology to deliver the gripping content, advanced visual comfort and compatibility needed for next-generation VR experiences.
  • Revolutionized game design workflows. Radeon Vega Frontier Edition simplifies and accelerates game creation by providing a single GPU optimized for every stage of a game developer’s workflow, from asset production to playtesting and performance optimization."

Vega FE.jpg

From the image provided on the official product page it appears that there will be both liquid-cooled (the gold card in the background) and air-cooled variants of these "Frontier Edition" cards, which AMD states will arrive with 16GB of HBM2 and offer 1.5x the FP32 performance and 3x the FP16 performance of the Fury X.

From AMD:

Radeon Vega Frontier Edition

  • Compute units: 64
  • Single precision compute performance (FP32): ~13 TFLOPS
  • Half precision compute performance (FP16): ~25 TFLOPS
  • Pixel Fillrate: ~90 Gpixels/sec
  • Memory capacity: 16 GB of High Bandwidth Cache
  • Memory bandwidth: ~480 GB/sec
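
As a quick sanity check, AMD's claimed multipliers over the Fury X line up with these numbers. Here is a minimal back-of-the-envelope sketch in Python, assuming the Fury X's well-known 8.6 TFLOPS rated peak and the fact that Fiji runs FP16 at the same rate as FP32:

```python
# Back-of-the-envelope check of AMD's "1.5x FP32 / 3x FP16 over Fury X"
# claim against the ~13 / ~25 TFLOPS figures quoted above.
fury_x_fp32 = 8.6  # TFLOPS, rated peak of the R9 Fury X
fury_x_fp16 = 8.6  # TFLOPS; Fiji has no double-rate FP16, so 1:1 with FP32

print(f"1.5 x Fury X FP32 = {1.5 * fury_x_fp32:.1f} TFLOPS (quoted: ~13)")
print(f"3.0 x Fury X FP16 = {3.0 * fury_x_fp16:.1f} TFLOPS (quoted: ~25)")
```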

The availability of the Radeon Vega Frontier Edition was announced as "late June", so we should not have too long to wait for further details, including pricing.

Source: AMD

NVIDIA Announces Q1 2018 Results

Subject: Editorial | May 10, 2017 - 09:45 PM |
Tagged: nvidia, earnings, revenues, Q1 2018, Q1, v100, data center, automotive, gpu, gtx 1080 ti

NVIDIA had a monster Q1. The quarter before, the company posted the highest revenue numbers in its history. Q1 is typically a more difficult time and the second weakest quarter of the year: the holiday rush is over and the market slows down. For NVIDIA, this was not exactly the case. While NVIDIA made $2.173 billion in Q4 2017, they came remarkably close to that with Q1 revenues of $1.937 billion. While the roughly $236 million drop is significant, it is not an unexpected one. In fact, it shows NVIDIA coming in slightly stronger than expectations.

NVIDIA-Logo.jpg

The past year has shown tremendous growth for NVIDIA. Their GPUs remain strong, and they have the highest-performing parts in the upper midrange and high-end markets. AMD simply has not been able to compete with NVIDIA at the top end, much less overtake them with higher-performing parts. GPUs still make up the largest portion of NVIDIA's income, but the company continues to invest in new areas, and those investments are starting to pay off.

Automotive is still in the growth stages for the company, but they have successfully pivoted the Tegra division away from the cellphone and tablet markets. NVIDIA continues to support their Shield products, but the main focus looks to be the automotive industry, where these high-performance, low-power parts with advanced graphics capabilities fit well. Professional graphics continues to be a stronghold for NVIDIA; while it dropped quite a bit from the previous quarter, it is a high-margin area that helps bolster revenues.

The biggest mover over this past year has been the Data Center business. Last year NVIDIA focused on delivering entire solutions to the market as well as individual GPUs. In the span of two years the company has gone from essentially no income in this area to a $400 million quarter. That is simply tremendous growth in an area that is still relatively untapped when it comes to GPU compute.

results.png

NVIDIA continues to be very aggressive in product design and introductions. They have simply owned the $300+ range of graphics cards with the GTX 1070, GTX 1080, and the recently introduced GTX 1080 Ti, to say nothing of the even higher-end Titan Xp that is priced well above most enthusiasts’ budgets. Today they announced the V100, our first glimpse of a high-end part running on TSMC’s new 12nm FinFET process. It also features 16 GB of HBM2 memory and a whopping 21 billion transistors in total.

Next quarter looks to be even better than this one, which is a shock because Q2 has traditionally been the slowest quarter of the year. NVIDIA expects around $1.95 billion in revenue (actually an increase over Q1). NVIDIA is also rewarding shareholders not only with a quarterly dividend but by actively buying back shares (which tends to keep share prices healthy). Early last year NVIDIA's share price was around $30; today it is trending well above $100.

SXM2-VoltaChipDetails.png

If NVIDIA keeps this up while continuing to expand in automotive and the data center, it is a fairly safe bet that they will easily top $8 billion in revenue for the year. Q3 and Q4 will be stronger still if they continue to advance in those areas while retaining market share on the GPU side. With rumors hinting that AMD will not have a product to top the GTX 1080 Ti, it is also a safe bet that NVIDIA can adjust prices across the board to stay competitive with whatever AMD throws at them.

It is interesting to look back to when AMD was shopping around for a graphics firm and wonder what could have happened. Hector Ruiz was in charge of AMD and tried to work out a deal with NVIDIA; rumor has it that Jensen Huang would not agree to it unless he was made CEO. Hector laughed that off and talked to ATI instead, who were more than happy to sell (and to cover up some real weaknesses in the company). We all know what happened to Hector, and how his policies and actions started the spiral that AMD is only now recovering from. What would it have been like if Jensen had actually become CEO of that merged company?

Source: NVIDIA

Build and Upgrade Components

Spring is in the air! And while many traditionally use this season for cleaning out their homes, what could be the point of reclaiming all of that space besides filling it up again with new PC hardware and accessories? If you answered, "there is no point, other than what you just said," then you're absolutely right. Spring is a great time to procrastinate about housework and build up a sweet new gaming PC (what else would you really want to use that tax return for?), so our staff has listed their favorite PC hardware right now, from build components to accessories, to make your life easier. (Let's make this season far more exciting than taking out the trash and filing taxes!)

While our venerable Hardware Leaderboard has been serving the PC community for many years, it's still worth listing some of our favorite PC hardware for builds at different price points here.

Processors - the heart of the system.

No doubt about it, AMD's Ryzen CPU launch has been the biggest news of the year so far for PC enthusiasts, and while the 6- and 4-core variants are right around the corner, the 8-core R7 processors are still a great choice if you have the budget for a $300+ CPU. To that end, we really like the value proposition of the Ryzen R7 1700, which offers much of the performance of its more expensive siblings for a really compelling price and can potentially be overclocked to match the higher-clocked members of the Ryzen lineup, though moving up to either the R7 1700X or R7 1800X will net you higher clocks out of the box (without increasing voltage and power draw).

box1.jpg

Really, any of these processors are going to provide a great overall PC experience with incredible multi-threaded performance for your dollar in many applications, and they can of course handle any game you throw at them - with optimizations already appearing to make them even better for gaming.

Don't forget about Intel, which has some really compelling options starting at the very low end (the Pentium G4560, when you can find one in stock near its ~$60 MSRP), thanks to its newest Kaby Lake CPUs. The high-end option from Intel's 7th-gen Core lineup is the Core i7-7700K (currently $345 on Amazon), which provides very fast gaming performance and plenty of power if you don't need as many cores as the R7 1700 (or Intel's high-end LGA 2011 parts). Core i5 processors provide a much more cost-effective way to power a gaming system, and an i5-7500 is nearly $150 less than the Core i7 while providing excellent performance if you don't need an unlocked multiplier or those additional threads.

Continue reading our Spring Buyer's Guide for selections of graphics cards, motherboards, memory and more!

Report: AMD to Launch Radeon RX 500 Series GPUs in April

Subject: Graphics Cards | March 1, 2017 - 05:04 PM |
Tagged: video card, RX 580, RX 570, RX 560, RX 550, rx 480, rumor, report, rebrand, radeon, graphics, gpu, amd

According to a report from VideoCardz.com, we can expect AMD Radeon RX 500-series graphics cards next month, with an April 4th launch for the RX 580 and RX 570 and a subsequent RX 560/550 launch on April 11. The bad news? According to the report, "all cards, except RX 550, are most likely rebranded from Radeon RX 400 series".

Polaris10.jpg

AMD Polaris 10 GPU (Image credit: Heise Online)

Until official confirmation of the specs arrives, this is still speculative; however, if Vega is not ready for an April launch and AMD is indeed refreshing its Radeon lineup, a speed bump/rebrand in the vein of the R9 300 series is not out of the realm of possibility. VideoCardz offers (unconfirmed, at this point) specs of the upcoming RX 500-series cards, with RX 400 numbers for comparison:

videocardz_chart_1.png

Chart credit: VideoCardz.com

The first chart shows an increased GPU boost clock of ~1340 MHz for the rumored RX 580, against the existing RX 480's 1266 MHz. Both would be Polaris 10 GPUs with otherwise identical specs. The same largely holds for the rumored RX 570 specs, though this GPU would presumably ship with faster memory clocks as well. On the RX 560 side, however, the Polaris 11-powered replacement for the RX 460 might be based on the 1024-core variant we have seen in the Chinese market.
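
To put the rumored clock bump in perspective, here is a rough sketch of the theoretical FP32 uplift, assuming the RX 580 keeps the RX 480's 2304 stream processors and that peak throughput scales as shaders x 2 FLOPs per clock x boost clock:

```python
# Theoretical peak FP32 for the RX 480 vs. the rumored RX 580,
# assuming an unchanged 2304-shader Polaris 10 configuration.
shaders = 2304
clocks_ghz = {"RX 480": 1.266, "RX 580 (rumored)": 1.340}

for name, clk in clocks_ghz.items():
    tflops = shaders * 2 * clk / 1000
    print(f"{name}: {tflops:.2f} TFLOPS peak FP32")

uplift = clocks_ghz["RX 580 (rumored)"] / clocks_ghz["RX 480"] - 1
print(f"Clock-for-clock uplift: {uplift:.1%}")  # ~5.8%
```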

videocardz_chart_2.png

Chart credit: VideoCardz.com

No specifics on the RX 550 are known yet; VideoCardz says it "is most likely equipped with Polaris 12, a new low-end GPU". These rumors come via heise.de (German language), which states that those "hoping for a Vega card will be disappointed - the cards are intended to be rebrands with known GPUs". We will have to wait until next month to know for sure, but even if this is the case, expect faster clocks and better performance for the same money.

Source: VideoCardz

Palit Introduces Fanless GeForce GTX 1050 Ti KalmX GPU

Subject: Graphics Cards | February 6, 2017 - 11:43 AM |
Tagged: video card, silent, Passive, palit, nvidia, KalmX, GTX 1050 Ti, graphics card, gpu, geforce

Palit is offering a passively-cooled GTX 1050 Ti option with their new KalmX card, which features a large heatsink and (of course) zero fan noise.

kalmx_1.jpg

"With passive cooler and the advanced powerful Pascal architecture, Palit GeForce GTX 1050 Ti KalmX - pursue the silent 0dB gaming environment. Palit GeForce GTX 1050 Ti gives you the gaming horsepower to take on today’s most demanding titles in full 1080p HD @ 60 FPS."

kalmx_3.jpg

The specs are identical to a reference GTX 1050 Ti (4GB GDDR5 @ 7 Gb/s, Base 1290/Boost 1392 MHz, etc.), so expect the full performance of this GPU - with some moderate case airflow, no doubt.
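
For reference, the quoted 7 Gb/s memory speed implies the usual GTX 1050 Ti bandwidth figure. A minimal sketch, assuming the card keeps the reference 128-bit memory bus:

```python
# Peak memory bandwidth from the quoted GDDR5 speed, assuming the
# GTX 1050 Ti's reference 128-bit bus.
bus_width_bits = 128
data_rate_gbps = 7  # GDDR5, per pin

print(f"{bus_width_bits * data_rate_gbps / 8:.0f} GB/s")  # 112 GB/s
```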

kalmx_2.jpg

We don't have specifics on pricing or availability just yet.

Source: Palit

Sapphire Releases AMD Radeon RX460 with 1024 Shaders

Subject: Graphics Cards | January 18, 2017 - 08:43 PM |
Tagged: video, unlock, shaders, shader cores, sapphire, radeon, Polaris, graphics, gpu, gaming, card, bios, amd, 1024

As reported by WCCFtech, AMD partner Sapphire has listed a new 1024-stream-processor version of the RX460 on its site (Chinese language). This product reveal comes, of course, after it became known that RX460 graphics cards had the potential to have their stream processor count unlocked from 896 to 1024 via a BIOS update.

Sapphire_201712151752.jpg

Sapphire RX460 1024SP 4G D5 Ultra Platinum OC (image credit: Sapphire)

The Sapphire RX460 1024SP edition offers a fully enabled Polaris 11 core operating at 1250 MHz, and it otherwise matches the specifications of a stock RX460 graphics card. Whether this product will be available outside of China is unknown, as is its potential pricing should it reach the USA. A 4GB Radeon RX460 retails for $99, while the current step-up option, the RX470, doubles this 1024SP RX460's shader count to 2048 at a price roughly 70% higher ($169).
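
The step-up math works out as follows; a tiny sketch, with the caveat that the $99 figure is the current US price for the 896SP RX460, and the 1024SP card's US price (if it ever ships there) is an assumption:

```python
# Shader count vs. price for the step-up from a 1024SP RX460 to an RX470.
# Using the article's $99 RX460 and $169 RX470 figures.
rx460_shaders, rx460_price = 1024, 99
rx470_shaders, rx470_price = 2048, 169

print(f"Shader increase: {rx470_shaders / rx460_shaders - 1:.0%}")  # 100%
print(f"Price increase:  {rx470_price / rx460_price - 1:.0%}")      # ~71%
```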

Radeon_Chart.PNG

AMD Polaris GCN 4.0 GPU lineup (Credit WCCFtech)

As you may note from the chart above, there is also an RX470D option between these cards that features 1792 shaders, though this option is also China-only.

Source: WCCFtech

(Leak) AMD Vega 10 and Vega 20 Information Leaked

Subject: Graphics Cards | January 8, 2017 - 03:53 AM |
Tagged: vega 11, vega 10, navi, gpu, amd

During CES, AMD showed off demo machines running Ryzen CPUs and Vega graphics cards, and gave the world a bit of information on the underlying architecture of Vega in an architectural preview that you can read about (or watch) here. AMD's Vega GPU is coming, and it is poised to compete with NVIDIA at the high end (an area that has been left to NVIDIA for a while now) in a big way.

Thanks to VideoCardz, we have a bit more info on the products we might see this year and what we can expect in the future. Specifically, the slides suggest that Vega 10 – the first GPU to be based on the company's new architecture – may be available by the end of the first half of 2017. Following that, a dual-GPU Vega 10 product is slated for release in Q3 or Q4 of 2017, and a refreshed GPU on a smaller process node with more HBM2 memory, called Vega 20, in the second half of 2018. The leaked slides also suggest that Navi (Vega's successor) might launch as soon as 2019 and will come in two variants, Navi 10 and Navi 11 (with Navi 11 being the smaller / less powerful GPU).

AMD Vega Leaked Info.jpg

The 14nm Vega 10 GPU allegedly offers up 64 NCUs and as much as 12 TFLOPS of single precision and 750 GFLOPS of double precision compute performance. Half precision performance is twice the FP32 rate at 24 TFLOPS (which would be good for things like machine learning); the NCUs allegedly run FP16 at 2x and DPFP at 1/16 the single precision rate. If each NCU has 64 shaders like Polaris 10 and other GCN GPUs, then we are looking at a top-end Vega 10 chip with 4096 shaders, rivaling Fiji. Further, Vega 10 supposedly has a TDP of up to 225 watts.

For comparison, the 28nm, 8.9-billion-transistor Fiji-based R9 Fury X ran at 1050 MHz with a TDP of 275 watts and a rated peak of 8.6 TFLOPS. While we do not know Vega 10's clock speeds, the numbers suggest that AMD has been able to clock the GPU much higher than Fiji while still using less power (and thus putting out less heat). This is possible with the move to the smaller process node, though I do wonder what yields will be like at first for the top-end (and highest-clocked) versions.
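
Working backwards from the leaked figures makes the clock speed implication concrete. A minimal sketch, assuming 64 shaders per NCU (as on Polaris) and 2 FLOPs per shader per clock:

```python
# Implied Vega 10 clock from the leaked 12 TFLOPS FP32 figure,
# assuming 64 NCUs x 64 shaders and 2 FLOPs/shader/clock.
shaders = 64 * 64       # 4096
fp32_flops = 12e12

implied_mhz = fp32_flops / (shaders * 2) / 1e6
print(f"Implied Vega 10 clock: ~{implied_mhz:.0f} MHz (Fury X: 1050 MHz)")

# The leaked rate ratios then reproduce the other numbers:
print(f"FP16 (2x FP32):   {2 * fp32_flops / 1e12:.0f} TFLOPS")   # 24
print(f"FP64 (1/16 FP32): {fp32_flops / 16 / 1e9:.0f} GFLOPS")   # 750
```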

Vega 10 will be paired with two stacks of HBM2 memory on package, offering 16GB of memory with 512 GB/s of bandwidth. The increase in per-stack bandwidth is thanks to the move from HBM to HBM2 (Fiji needed four HBM dies to hit 512 GB/s and offered only 4GB).
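
The per-stack arithmetic shows why two HBM2 stacks can match Fiji's four: each stack keeps the 1024-bit interface, but HBM2 doubles the per-pin data rate. A minimal sketch, using the commonly cited 1 Gbps and 2 Gbps per-pin rates for HBM on Fiji and first-generation HBM2:

```python
# Per-stack bandwidth: 1024-bit interface x per-pin data rate / 8 bits.
def stack_gbs(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits * rate_gbps / 8

hbm1 = stack_gbs(1024, 1.0)  # 128 GB/s per stack on Fiji
hbm2 = stack_gbs(1024, 2.0)  # 256 GB/s per stack on Vega 10

print(f"Fiji:    4 x {hbm1:.0f} GB/s = {4 * hbm1:.0f} GB/s")  # 512 GB/s
print(f"Vega 10: 2 x {hbm2:.0f} GB/s = {2 * hbm2:.0f} GB/s")  # 512 GB/s
```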

The slide also hints at a "Vega 10 x2" in the second half of the year, presumably a dual-GPU product. The slide states that Vega 10 x2 will have four stacks of HBM2 (1TB/s), though it is not clear whether they are simply adding the two stacks per GPU to claim the 1TB/s number, or whether each GPU will have four stacks (the latter is unlikely, though, as there does not appear to be room on the package for two more stacks per GPU, and I am not sure they could make the package big enough to fit them either). Even if we assume they really mean 2x 512 GB/s per GPU (and maybe they can get more out of that in specific workloads spanning both GPUs), the doubling of cores and potential compute performance will be big. This is going to be a serious number-crunching and machine learning card, and of course a gaming card as well. Clock speeds will likely have to be much lower than the single-GPU Vega 10 (especially with the stated TDP of 300W), and workloads won't scale perfectly, so compute performance will not be quite 2x, but it should still be a decent per-card boost.

AMD-Vega-GPU.jpg

Raja Koduri holds up a Vega GPU at CES 2017 via eTeknix

Moving into the second half of 2018, the leaked slides suggest that a Vega 20 GPU will be released on a 7nm process node with 64 CUs, paired with four stacks of HBM2 for 16 GB or 32 GB of memory and 1TB/s of bandwidth. Interestingly, the shaders will still run half precision at twice the single precision rate, but will not take nearly the hit on double precision that Vega 10 does, running at 1/2 the single precision rate rather than 1/16. The GPU(s) will use between 150W and 300W of power, and these look set to be the real professional and workstation workhorses. A Vega 20 with 1/2-rate DPFP compute would hit 6 TFLOPS of double precision, which is not bad (and it would hopefully be more than this thanks to faster clocks and architectural improvements).

Beyond that, the slides mention Navi's existence and that it will come in Navi 10 and Navi 11 but no other details were shared which makes sense as it is still far off.

You can see the leaked slides here. In all, it is an interesting look at potential Vega 10 and beyond GPUs, but definitely keep in mind that this is leaked information, allegedly from an internal presentation that likely showed the graphics processors in their best possible/expected light. It does add a bit more fuel to the fire of excitement for Vega, though, and I hope that AMD pulls it off, as my unlocked 6950 is no longer supported and it is only a matter of time before new games perform poorly or not at all!

Source: eTeknix.com

Report: NVIDIA GeForce GTX 1050 Ti Based on Pascal GP107

Subject: Graphics Cards | October 2, 2016 - 12:12 PM |
Tagged: rumor, report, pascal, nvidia, GTX 1050 Ti, graphics card, gpu, GP107, geforce

A report published by VideoCardz.com (via Baidu) contains pictures of an alleged NVIDIA GeForce GTX 1050 Ti graphics card, which is apparently based on a new Pascal GP107 GPU.

NVIDIA-GTX-1050-Ti-PCB-1.jpg

Image credit: VideoCardz

The card shown is also equipped with 4GB of GDDR5 memory, and contains a 6-pin power connector - though such a power requirement might be specific to this particular version of the upcoming GPU.

NVIDIA-GTX-1050-Ti-PCB-2.jpg

Image credit: VideoCardz

Specifications for the GTX 1050 Ti were previously reported by VideoCardz via a purported GPU-Z screenshot. The card will apparently feature 768 CUDA cores and a 128-bit memory bus, with clock speeds (for this particular sample) of 1291 MHz base and 1392 MHz boost (with some room to overclock, judging from the screenshot).
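
Those reported numbers imply a peak FP32 figure in the usual way; a quick sketch, assuming the standard 2 FLOPs per CUDA core per clock:

```python
# Peak FP32 implied by the reported GTX 1050 Ti specs.
cuda_cores = 768
boost_mhz = 1392

tflops = cuda_cores * 2 * boost_mhz * 1e6 / 1e12
print(f"~{tflops:.1f} TFLOPS peak FP32")  # ~2.1 TFLOPS
```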

NVIDIA-GeForce-GTX-1050-Ti-GPUZ-Specs.jpg

Image credit: VideoCardz

An official announcement for the new GPU has not been made by NVIDIA, though if these PCB photos are real it probably won't be far off.

Source: VideoCardz

NVIDIA Teases Low Power, High Performance Xavier SoC That Will Power Future Autonomous Vehicles

Subject: Processors | October 1, 2016 - 06:11 PM |
Tagged: xavier, Volta, tegra, SoC, nvidia, machine learning, gpu, drive px 2, deep neural network, deep learning

Earlier this week at its first GTC Europe event in Amsterdam, NVIDIA CEO Jen-Hsun Huang teased a new SoC code-named Xavier that will be used in self-driving cars and will feature the company's newest custom ARM CPU cores and a Volta GPU. The new chip will begin sampling at the end of 2017, with products using the future Tegra processor (if they keep that name) arriving as soon as 2018.

NVIDIA_Xavier_SOC.jpg

NVIDIA's Xavier is promised to be the successor to the company's Drive PX 2 system, which uses two Tegra X2 SoCs and two discrete Pascal MXM GPUs on a single water-cooled platform. The claims are all the more impressive considering that NVIDIA is promising not only to replace those four processors with a single chip, but reportedly to do so at 20W – less than a tenth of the Drive PX 2's TDP!

The company has not revealed all the nitty-gritty details, but it did tease out a few bits of information. The new processor will feature 7 billion transistors, will be built on a refined 16nm FinFET process, and will consume a mere 20W. It can process two 8K HDR video streams and can hit 20 TOPS (NVIDIA's own rating for deep learning INT8 operations).

Specifically, NVIDIA claims that the Xavier SoC will use eight custom ARMv8 (64-bit) CPU cores (it is unclear whether these will be a refined Denver architecture or something else) and a GPU based on its upcoming Volta architecture with 512 CUDA cores. In an interesting twist, NVIDIA is also including a "Computer Vision Accelerator" on the SoC, though the company did not go into many details. This bit of silicon may explain how the ~300mm2 die with 7 billion transistors is able to match the 7.2-billion-transistor Pascal-based Tesla P4 (2560 CUDA cores) at deep learning (tera-operations per second) tasks, on top of the incremental improvements from moving to Volta, a new ARMv8 CPU architecture, and a refined 16nm FF+ process.

|              | Drive PX                                      | Drive PX 2                                | NVIDIA Xavier                             | Tesla P4                 |
|--------------|-----------------------------------------------|-------------------------------------------|-------------------------------------------|--------------------------|
| CPU          | 2 x Tegra X1 (8 x A57 total)                  | 2 x Tegra X2 (8 x A57 + 4 x Denver total) | 1 x Xavier SoC (8 x Custom ARM + 1 x CVA) | N/A                      |
| GPU          | 2 x Tegra X1 (Maxwell) (512 CUDA cores total) | 2 x Tegra X2 GPUs + 2 x Pascal GPUs       | 1 x Xavier SoC GPU (Volta) (512 CUDA cores) | 2560 CUDA cores (Pascal) |
| TFLOPS       | 2.3 TFLOPS                                    | 8 TFLOPS                                  | ?                                         | 5.5 TFLOPS               |
| DL TOPS      | ?                                             | 24 TOPS                                   | 20 TOPS                                   | 22 TOPS                  |
| TDP          | ~30W (2 x 15W)                                | 250W                                      | 20W                                       | up to 75W                |
| Process Tech | 20nm                                          | 16nm FinFET                               | 16nm FinFET+                              | 16nm FinFET              |
| Transistors  | ?                                             | ?                                         | 7 billion                                 | 7.2 billion              |

For comparison, the currently available Tesla P4, based on the Pascal architecture, has a TDP of up to 75W and is rated at 22 TOPS. This would suggest that Volta is a much more efficient architecture (at least for deep learning and half precision)! I am not sure how NVIDIA is able to match its GP104 with only 512 Volta CUDA cores, though its definition of a "core" could have changed, and/or the CVA processor may be responsible for closing that gap. Unfortunately, NVIDIA did not disclose a TFLOPS rating for Xavier, so it is difficult to compare, and it may not match GP104 in higher precision workloads; it could be wholly optimized for INT8 operations rather than floating point performance. Beyond that, I will let Scott dive into those particulars once we have more information!
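
The efficiency comparison driving that observation is easy to make concrete; a minimal sketch using the TOPS and TDP figures from the table above:

```python
# Deep learning efficiency (TOPS per watt) from the table above.
parts = {
    "Drive PX 2 (Pascal)": (24, 250),  # TOPS, TDP in watts
    "Tesla P4 (Pascal)":   (22, 75),
    "Xavier (Volta)":      (20, 20),
}
for name, (tops, watts) in parts.items():
    print(f"{name}: {tops / watts:.2f} TOPS/W")
# Xavier at 1.0 TOPS/W vs ~0.29 for the Tesla P4 and ~0.10 for Drive PX 2
```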

Xavier is more of a teaser than anything, and the chip could very well change dramatically and/or not hit the claimed performance targets. Still, it sounds promising, and it is always nice to speculate over road maps. It is an intriguing chip, and I am ready for more details, especially on the Volta GPU and just what exactly that Computer Vision Accelerator is (and whether it will be easy to program for). I am a big fan of the "self-driving car," and I hope that it succeeds. The momentum certainly looks set to continue as Tesla, VW, BMW, and other automakers push the envelope of what is possible and plan future cars that will include smart driving assists and even cars that can drive themselves. The more local computing power we can throw at automobiles the better; while massive datacenters can be used to train the neural networks, local hardware to run them and make decisions is necessary (you don't want internet latency contributing to the decision of whether to brake or not!).

I hope that NVIDIA's self-proclaimed "AI Supercomputer" turns out to be at least close to the performance they claim! Stay tuned for more information as it gets closer to launch (hopefully more details will emerge at GTC 2017 in the US).

What are your thoughts on Xavier and the whole self-driving car future?

Source: NVIDIA

Podcast #414 - AMD Zen Architecture Details, Lightning Headphones, AMD GPU Market Share and more!

Subject: General Tech | August 25, 2016 - 10:51 AM |
Tagged: Zen, video, seasonic, Polaris, podcast, Omen, nvidia, market share, Lightning, hp, gtx 1060 3gb, gpu, brix, Audeze, asus, architecture, amd

PC Perspective Podcast #414 - 08/25/2016

Join us this week as we discuss the newly released architecture details of AMD Zen, Audeze headphones, AMD market share gains and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts:  Ryan Shrout, Allyn Malventano, Josh Walrath and Jeremy Hellstrom

Program length: 1:37:15
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
  4. Closing/outro