Build and Upgrade Components
Spring is in the air! And while many traditionally use this season for cleaning out their homes, what's the point of reclaiming all that space if not to fill it back up with new PC hardware and accessories? If you answered, "there is no point, other than what you just said," then you're absolutely right. Spring is a great time to procrastinate on housework and build up a sweet new gaming PC (what else would you really want to use that tax return for?), so our staff has listed their favorite PC hardware right now, from build components to accessories, to make your life easier. (Let's make this season far more exciting than taking out the trash and filing taxes!)
While our venerable Hardware Leaderboard has been serving the PC community for many years, it's still worth listing some of our favorite PC hardware for builds at different price points here.
Processors - the heart of the system.
No doubt about it, AMD's Ryzen CPU launch has been the biggest news of the year so far for PC enthusiasts, and while the 6- and 4-core variants are right around the corner, the 8-core R7 processors are still a great choice if you have the budget for a $300+ CPU. To that end, we really like the value proposition of the Ryzen R7 1700, which offers much of the performance of its more expensive siblings at a really compelling price, and can potentially be overclocked to match the higher-clocked members of the Ryzen lineup. Moving up to either the R7 1700X or R7 1800X will net you higher clocks (without increasing voltage and power draw) out of the box.
Really, any of these processors will provide a great overall PC experience, with incredible multi-threaded performance for your dollar in many applications, and they can of course handle any game you throw at them - with optimizations already appearing to make them even better for gaming.
Don't forget about Intel, which has some really compelling options thanks to its newest Kaby Lake CPUs, starting even at the very low end (the Pentium G4560, when you can find one in stock near its ~$60 MSRP). The high-end option from Intel's 7th-gen Core lineup is the Core i7-7700K (currently $345 on Amazon), which provides very fast gaming performance and plenty of power if you don't need as many cores as the R7 1700 (or Intel's high-end LGA-2011 parts). Core i5 processors provide a much more cost-effective way to power a gaming system: an i5-7500 is nearly $150 less than the Core i7 while providing excellent performance if you don't need an unlocked multiplier or those additional threads.
Subject: Graphics Cards | March 1, 2017 - 05:04 PM | Sebastian Peak
Tagged: video card, RX 580, RX 570, RX 560, RX 550, rx 480, rumor, report, rebrand, radeon, graphics, gpu, amd
According to a report from VideoCardz.com, we can expect AMD Radeon RX 500-series graphics cards next month, with an April 4th launch of the RX 580 and RX 570, and a subsequent RX 560/550 launch on April 11. The bad news? According to the report, "all cards, except RX 550, are most likely rebranded from Radeon RX 400 series".
Until official confirmation of specs arrives, this is still speculative; however, if Vega is not ready for an April launch and AMD will indeed be refreshing its Radeon lineup, a speed bump/rebrand along the lines of the R9 300 series is not out of the realm of possibility. VideoCardz offers (unconfirmed, at this point) specs of the upcoming RX 500-series cards, with RX 400 numbers for comparison:
Chart credit: VideoCardz.com
The first chart shows an increased GPU boost clock of ~1340 MHz for the rumored RX 580, against the existing RX 480's 1266 MHz. Both would be Polaris 10 GPUs with otherwise identical specs. The same largely holds for the rumored specs of the RX 570, though this GPU would presumably ship with faster memory clocks as well. On the RX 560 side, however, the Polaris 11-powered replacement for the RX 460 might be based on the 1024-core variant we have seen in the Chinese market.
Chart credit: VideoCardz.com
No specifics on the RX 550 are yet known, which VideoCardz says "is most likely equipped with Polaris 12, a new low-end GPU". These rumors come via heise.de (German language), which states that those "hoping for Vega-card will be disappointed - the cards are intended to be rebrands with known GPUs". We will have to wait until next month to know for sure, but even if this is the case, expect faster clocks and better performance for the same money.
Subject: Graphics Cards | February 6, 2017 - 11:43 AM | Sebastian Peak
Tagged: video card, silent, Passive, palit, nvidia, KalmX, GTX 1050 Ti, graphics card, gpu, geforce
Palit is offering a passively-cooled GTX 1050 Ti option with their new KalmX card, which features a large heatsink and (of course) zero fan noise.
"With passive cooler and the advanced powerful Pascal architecture, Palit GeForce GTX 1050 Ti KalmX - pursue the silent 0dB gaming environment. Palit GeForce GTX 1050 Ti gives you the gaming horsepower to take on today’s most demanding titles in full 1080p HD @ 60 FPS."
The specs are identical to a reference GTX 1050 Ti (4GB GDDR5 @ 7 Gb/s, Base 1290/Boost 1392 MHz, etc.), so expect the full performance of this GPU - with some moderate case airflow, no doubt.
We don't have specifics on pricing or availability just yet.
Subject: Graphics Cards | January 18, 2017 - 08:43 PM | Sebastian Peak
Tagged: video, unlock, shaders, shader cores, sapphire, radeon, Polaris, graphics, gpu, gaming, card, bios, amd, 1024
As reported by WCCFtech, AMD partner Sapphire has listed a new 1024 stream processor version of the RX460 on their site (Chinese language). This product reveal comes, of course, after it became known that RX460 graphics cards had the potential to have their stream processor count unlocked from 896 to 1024 via a BIOS update.
Sapphire RX460 1024SP 4G D5 Ultra Platinum OC (image credit: Sapphire)
The Sapphire RX460 1024SP edition offers a full Polaris 11 core operating at 1250 MHz, and it otherwise matches the specifications of a stock RX460 graphics card. Whether this product will be available outside of China is unknown, as is potential US pricing should it arrive here. A 4GB Radeon RX460 retails for $99, while the current step-up option, the RX470, doubles this card's 1024 shaders to 2048 for about 70% more money ($169).
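As a quick sanity check of the value math above (using the quoted US prices; the 1024SP card's own price is unknown, so the $99 4GB RX460 price serves as a stand-in):

```python
# Rough value comparison between the RX460 tier and the RX470 step-up,
# using the US prices quoted in the text.
rx460_price, rx460_shaders = 99, 1024
rx470_price, rx470_shaders = 169, 2048

price_increase = (rx470_price - rx460_price) / rx460_price
shader_increase = rx470_shaders / rx460_shaders

print(f"{price_increase:.0%} more money for {shader_increase:.0f}x the shaders")
```

So the RX470 asks roughly 71% more money for exactly twice the stream processors - a reasonable deal if the 1024SP RX460 lands near the standard card's price.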
AMD Polaris GCN 4.0 GPU lineup (Credit WCCFtech)
As you may note from the chart above, there is also an RX470D option between these cards, featuring 1792 shaders, though that option is also China-only.
Subject: Graphics Cards | January 8, 2017 - 03:53 AM | Tim Verry
Tagged: vega 11, vega 10, navi, gpu, amd
During CES, AMD showed off demo machines running Ryzen CPUs and Vega graphics cards, and gave the world a bit of information on the underlying architecture of Vega in an architectural preview that you can read about (or watch) here. AMD's Vega GPU is coming, and it is poised to compete with NVIDIA on the high end (an area that has been left to NVIDIA for a while now) in a big way.
Thanks to Videocardz, we have a bit more info on the products we might see this year and what we can expect in the future. Specifically, the slides suggest that Vega 10 – the first GPU based on the company's new architecture – may be available by the end of the first half of 2017. Following that, a dual-GPU Vega 10 product is slated for release in Q3 or Q4 of 2017, and a refreshed GPU based on a smaller process node with more HBM2 memory, called Vega 20, in the second half of 2018. The leaked slides also suggest that Navi (Vega's successor) might launch as soon as 2019 and will come in two variants, Navi 10 and Navi 11 (with Navi 11 being the smaller / less powerful GPU).
The 14nm Vega 10 GPU allegedly offers 64 NCUs and as much as 12 TFLOPS of single precision and 750 GFLOPS of double precision compute performance. Half precision performance is twice that of FP32 at 24 TFLOPS (which would be good for things like machine learning); the NCUs allegedly run FP16 at 2x and DPFP at 1/16 the FP32 rate. If each NCU has 64 shaders, like Polaris 10 and other GCN GPUs, then we are looking at a top-end Vega 10 chip with 4096 shaders, rivaling that of Fiji. Further, Vega 10 supposedly has a TDP of up to 225 watts.
For comparison, the 28nm 8.9 billion transistor Fiji-based R9 Fury X ran at 1050 MHz with a TDP of 275 watts and had a rated peak compute of 8.6 TFLOPS. While we do not know clock speeds of Vega 10, the numbers suggest that AMD has been able to clock the GPU much higher than Fiji while still using less power (and thus putting out less heat). This is possible with the move to the smaller process node, though I do wonder what yields will be like at first for the top end (and highest clocked) versions.
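We can check that clock-speed inference with GCN's usual peak-throughput formula (shaders × 2 FLOPs per clock × clock), assuming 64 shaders per NCU as with prior GCN parts:

```python
# Back-of-the-envelope check of the rumored Vega 10 numbers, assuming
# the usual GCN rate of 2 FP32 FLOPs per shader per clock.

def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

# Fury X: 4096 shaders at 1050 MHz reproduces its rated ~8.6 TFLOPS peak.
fury_x = fp32_tflops(4096, 1050)

# Working backward, a 12 TFLOPS Vega 10 with 4096 shaders would need a
# clock of roughly 1465 MHz - a big jump over Fiji, at lower power.
vega10_clock_mhz = 12e12 / (4096 * 2) / 1e6

print(round(fury_x, 1), round(vega10_clock_mhz))
```

That implied ~1465 MHz clock is exactly why the leaked numbers suggest AMD has pushed frequency well past Fiji on the 14nm node.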
Vega 10 will be paired with two stacks of HBM2 memory on package which will offer 16GB of memory with memory bandwidth of 512 GB/s. The increase in memory bandwidth is thanks to the move to HBM2 from HBM (Fiji needed four HBM dies to hit 512 GB/s and had only 4GB).
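The bandwidth parity with Fiji falls directly out of the per-stack numbers (assuming ~256 GB/s per HBM2 stack versus ~128 GB/s per first-generation HBM stack):

```python
# How the rumored 512 GB/s figure matches Fiji's, with half the stacks.
def total_bandwidth_gbs(stacks: int, gbs_per_stack: int) -> int:
    return stacks * gbs_per_stack

hbm2_vega10 = total_bandwidth_gbs(2, 256)  # two HBM2 stacks
hbm1_fiji = total_bandwidth_gbs(4, 128)    # four first-gen HBM stacks

print(hbm2_vega10, hbm1_fiji)  # same 512 GB/s total bandwidth
```

HBM2's doubled per-stack bandwidth (and much larger per-stack capacity) is what lets Vega 10 hit 16GB at 512 GB/s where Fiji needed four stacks to reach 512 GB/s with only 4GB.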
The slide also hints at a "Vega 10 x2" in the second half of the year, which is presumably a dual-GPU product. The slide states that Vega 10 x2 will have four stacks of HBM2 (1TB/s), though it is not clear whether they are simply summing the two stacks per GPU to claim the 1TB/s number, or whether each GPU will have four stacks (the latter is unlikely, as there does not appear to be room on the package for two more stacks per GPU, and I am not sure they could make the package big enough to fit them either). Even if we assume they really mean 2x 512 GB/s per GPU (and maybe they can get more out of that in specific workloads spanning both GPUs), the doubling of cores and potential compute performance will be big. This is going to be a big number crunching and machine learning card, as well as a gaming card of course. Clock speeds will likely have to be much lower compared to the single-GPU Vega 10 (especially with a stated TDP of 300W), and workloads won't scale perfectly, so potential compute performance will not be quite 2x, but it should still be a decent per-card boost.
Raja Koduri holds up a Vega GPU at CES 2017 via eTeknix
Moving into the second half of 2018, the leaked slides suggest that a Vega 20 GPU will be released based on a 7nm process node with 64 CUs, paired with four stacks of HBM2 for 16 GB or 32 GB of memory and 1TB/s of bandwidth. Interestingly, the shaders will still be set up such that the GPU can do half precision calculations at twice the single precision rate, but it will not take nearly the hit on double precision that Vega 10 does, running at 1/2 the single precision rate rather than 1/16. The GPU(s) will use between 150W and 300W of power, and it seems these are set to be the real professional and workstation workhorses. A Vega 20 with 1/2-rate DPFP compute would hit 6 TFLOPS, which is not bad (and it would hopefully be more than this due to faster clocks and architecture improvements).
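To make the double precision gap concrete, here is the FP64 throughput implied by the two rumored DPFP ratios, starting from the same 12 TFLOPS FP32 base:

```python
# FP64 throughput at the two rumored DPFP rates, from a 12 TFLOPS FP32 base.
fp32_tflops = 12.0

vega10_fp64 = fp32_tflops / 16  # 1/16 rate -> 0.75 TFLOPS (the 750 GFLOPS above)
vega20_fp64 = fp32_tflops / 2   # 1/2 rate  -> 6 TFLOPS

print(vega10_fp64, vega20_fp64)  # 0.75 vs. 6.0 - an 8x jump in FP64
```

That 8x swing in double precision is what separates a consumer-focused part from a serious HPC/workstation product.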
Beyond that, the slides mention Navi's existence and that it will come in Navi 10 and Navi 11 variants, but no other details were shared, which makes sense as it is still far off.
You can see the leaked slides here. In all, it is an interesting look at potential Vega 10 and beyond GPUs, but definitely keep in mind that this is leaked information, allegedly from an internal presentation that likely showed the graphics processors in their best possible light. It does add a bit more fuel to the fire of excitement for Vega, though, and I hope that AMD pulls it off, as my unlocked 6950 is no longer supported and it is only a matter of time before new games perform poorly or not at all!
- AMD Vega GPU Architecture Preview: Redesigned Memory Architecture
- CES 2017: AMD Vega Running DOOM at 4K
- AMD GPU Roadmap: Capsaicin Names Upcoming Architectures
- AMD's Raja Koduri talks moving past CrossFire, smaller GPU dies, HBM2 and more.
Subject: Graphics Cards | October 2, 2016 - 12:12 PM | Sebastian Peak
Tagged: rumor, report, pascal, nvidia, GTX 1050 Ti, graphics card, gpu, GP107, geforce
A report published by VideoCardz.com (via Baidu) contains pictures of an alleged NVIDIA GeForce GTX 1050 Ti graphics card, which is apparently based on a new Pascal GP107 GPU.
Image credit: VideoCardz
The card shown is also equipped with 4GB of GDDR5 memory, and contains a 6-pin power connector - though such a power requirement might be specific to this particular version of the upcoming GPU.
Image credit: VideoCardz
Specifications for the GTX 1050 Ti were previously reported by VideoCardz, with a reported GPU-Z screenshot. The card will apparently feature 768 CUDA cores and a 128-bit memory bus, with clock speeds (for this particular sample) of 1291 MHz base, 1392 MHz boost (with some room to overclock, from this screenshot).
Image credit: VideoCardz
An official announcement for the new GPU has not been made by NVIDIA, though if these PCB photos are real it probably won't be far off.
Subject: Processors | October 1, 2016 - 06:11 PM | Tim Verry
Tagged: xavier, Volta, tegra, SoC, nvidia, machine learning, gpu, drive px 2, deep neural network, deep learning
Earlier this week at its first GTC Europe event in Amsterdam, NVIDIA CEO Jen-Hsun Huang teased a new SoC code-named Xavier that will be used in self-driving cars and feature the company's newest custom ARM CPU cores and Volta GPU. The new chip will begin sampling at the end of 2017 with product releases using the future Tegra (if they keep that name) processor as soon as 2018.
NVIDIA's Xavier is promised to be the successor to the company's Drive PX 2 system, which uses two Tegra X2 SoCs and two discrete Pascal MXM GPUs on a single water-cooled platform. The claims are even more impressive when you consider that NVIDIA is not only promising to replace those four processors with a single chip, but to reportedly do so at 20W – less than a tenth of the Drive PX 2's TDP!
The company has not revealed all the nitty-gritty details, but it did tease out a few bits of information. The new processor will feature 7 billion transistors, will be based on a refined 16nm FinFET process, and will consume a mere 20W. It can process two 8K HDR video streams and can hit 20 TOPS (NVIDIA's own rating for INT8 deep learning operations).
Specifically, NVIDIA claims that the Xavier SoC will use eight custom ARMv8 (64-bit) CPU cores (it is unclear whether these cores will be a refined Denver architecture or something else) and a GPU based on its upcoming Volta architecture with 512 CUDA cores. Also, in an interesting twist, NVIDIA is including a "Computer Vision Accelerator" on the SoC, though the company did not go into many details. This bit of silicon may explain how the ~300mm2 die with 7 billion transistors is able to match the 7.2 billion transistor Pascal-based Tesla P4 (2560 CUDA cores) graphics card at deep learning (tera-operations per second) tasks - that, of course, in addition to the incremental improvements from moving to Volta and a new ARMv8 CPU architecture on a refined 16nm FF+ process.
| | Drive PX | Drive PX 2 | NVIDIA Xavier | Tesla P4 |
|---|---|---|---|---|
| CPU | 2 x Tegra X1 (8 x A57 total) | 2 x Tegra X2 (8 x A57 + 4 x Denver total) | 1 x Xavier SoC (8 x Custom ARM + 1 x CVA) | N/A |
| GPU | 2 x Tegra X1 (Maxwell) (512 CUDA cores total) | 2 x Tegra X2 GPUs + 2 x Pascal GPUs | 1 x Xavier SoC GPU (Volta) (512 CUDA cores) | 2560 CUDA cores (Pascal) |
| TFLOPS | 2.3 TFLOPS | 8 TFLOPS | ? | 5.5 TFLOPS |
| DL TOPS | ? | 24 TOPS | 20 TOPS | 22 TOPS |
| TDP | ~30W (2 x 15W) | 250W | 20W | up to 75W |
| Process Tech | 20nm | 16nm FinFET | 16nm FinFET+ | 16nm FinFET |
| Transistors | ? | ? | 7 billion | 7.2 billion |
For comparison, the currently available Tesla P4, based on the Pascal architecture, has a TDP of up to 75W and is rated at 22 TOPS. This would suggest that Volta is a much more efficient architecture (at least for deep learning and half precision)! I am not sure how NVIDIA is able to match its GP104 with only 512 Volta CUDA cores, though their definition of a "core" could have changed and/or the CVA processor may be responsible for closing that gap. Unfortunately, NVIDIA did not disclose what it rates Xavier at in TFLOPS, so it is difficult to compare, and it may not match GP104 at higher precision workloads - it could be wholly optimized for INT8 operations rather than floating point performance. Beyond that I will let Scott dive into those particulars once we have more information!
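Putting the teased efficiency claim in numbers (using the P4's 75W maximum TDP from the table above):

```python
# Deep-learning efficiency implied by the teased figures: TOPS per watt
# for Xavier vs. the Pascal-based Tesla P4.
xavier_tops_per_watt = 20 / 20    # 20 TOPS at 20 W
tesla_p4_tops_per_watt = 22 / 75  # 22 TOPS at up to 75 W

print(round(xavier_tops_per_watt / tesla_p4_tops_per_watt, 1))
```

If the numbers hold, Xavier would deliver roughly 3.4x the inferencing throughput per watt of the P4 - though the P4's real-world power draw is often below its TDP cap, so treat this as an upper bound on the gap.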
Xavier is more of a teaser than anything, and the chip could very well change dramatically and/or not hit the claimed performance targets. Still, it sounds promising, and it is always nice to speculate over road maps. It is an intriguing chip, and I am ready for more details, especially on the Volta GPU and just what exactly that Computer Vision Accelerator is (and whether it will be easy to program for). I am a big fan of the "self-driving car" and I hope that it succeeds. The trend certainly looks set to continue, as Tesla, VW, BMW, and other automakers push the envelope of what is possible and plan future cars with smart driving assists and even full self-driving capability. The more local computing power we can throw at automobiles the better; while massive datacenters can be used to train the neural networks, local hardware to run them and make decisions is necessary (you don't want internet latency contributing to the decision of whether to brake or not!).
I hope that NVIDIA's self-proclaimed "AI Supercomputer" turns out to be at least close to the performance they claim! Stay tuned for more information as it gets closer to launch (hopefully more details will emerge at GTC 2017 in the US).
What are your thoughts on Xavier and the whole self-driving car future?
- NVIDIA Teases Xavier, a High-Performance ARM SoC for Drive PX & AI @ AnandTech
- Tegra Related News @ PC Perspective
- Tesla P4 Specifications @ NVIDIA
- CES 2016: NVIDIA Launches DRIVE PX 2 With Dual Pascal GPUs Driving A Deep Neural Network @ PC Perspective
Subject: General Tech | August 25, 2016 - 10:51 AM | Ryan Shrout
Tagged: Zen, video, seasonic, Polaris, podcast, Omen, nvidia, market share, Lightning, hp, gtx 1060 3gb, gpu, brix, Audeze, asus, architecture, amd
PC Perspective Podcast #414 - 08/25/2016
Join us this week as we discuss the newly released architecture details of AMD Zen, Audeze headphones, AMD market share gains and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Allyn Malventano, Josh Walrath and Jeremy Hellstrom
Week in Review:
News items of interest:
Hardware/Software Picks of the Week
Subject: Graphics Cards | August 18, 2016 - 02:28 PM | Sebastian Peak
Tagged: nvidia, gtx 1060 3gb, gtx 1060, graphics card, gpu, geforce, 1152 CUDA Cores
NVIDIA has officially announced the 3GB version of the GTX 1060 graphics card, and it indeed contains fewer CUDA cores than the 6GB version.
The GTX 1060 Founders Edition
The product page on NVIDIA.com now reflects the 3GB model, and board partners have begun announcing their versions. The MSRP on this 3GB version is set at $199, and availability of partner cards is expected in the next couple of weeks. The two versions will be designated only by their memory size, and no other capacities of either card are forthcoming.
| | GeForce GTX 1060 3GB | GeForce GTX 1060 6GB |
|---|---|---|
| CUDA Cores | 1152 | 1280 |
| Base Clock | 1506 MHz | 1506 MHz |
| Boost Clock | 1708 MHz | 1708 MHz |
| Memory Speed | 8 Gbps | 8 Gbps |
As you can see from the above table, the only specification that has changed (aside from memory capacity) is the CUDA core count - 1152 versus 1280 - with base/boost clocks, memory speed and interface, and TDP identical. As to performance, NVIDIA says the 6GB version holds a 5% performance advantage over this lower-cost version, which at $199 is 20% less expensive than the GTX 1060 6GB.
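Those two figures make the value proposition easy to quantify (the $249 6GB price below is inferred from the "20% less expensive" claim; actual street prices vary):

```python
# Value math from NVIDIA's own figures: ~5% slower, 20% cheaper.
price_3gb, price_6gb = 199, 249   # $249 inferred from the 20% discount claim
perf_3gb, perf_6gb = 0.95, 1.00   # 6GB card holds a ~5% performance edge

discount = 1 - price_3gb / price_6gb
value_gain = (perf_3gb / price_3gb) / (perf_6gb / price_6gb) - 1

print(f"{discount:.0%} cheaper, {value_gain:.0%} more performance per dollar")
```

In other words, the 3GB card gives up ~5% of the performance for a ~20% price cut, netting roughly 19% better performance per dollar - as long as 3GB of VRAM is enough for your games and settings.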
Subject: Graphics Cards | August 10, 2016 - 08:22 PM | Sebastian Peak
Tagged: video card, strix rx470, strix rx460, strix, rx 470, rx 460, ROG, Republic of Gamers, graphics, gpu, gaming, asus
Ryan posted details about the Radeon RX 470 and 460 graphics cards at the end of last month, and both are now available. Now the largest of the board partners, ASUS, has added both of these new GPUs to their Republic of Gamers STRIX series.
The STRIX Gaming RX 470 (Image: ASUS)
ASUS announced the Radeon RX 470 STRIX Gaming cards last week, and today the more affordable RX 460 GPU variant has been announced. The RX 470 is certainly a capable gaming option as it's a slightly cut-down version of the RX 480 GPU, and with the two versions of the STRIX Gaming cards offering varying levels of overclocking, they can come even closer to the performance of a stock RX 480.
The STRIX Gaming RX 460 (Image: ASUS)
The new STRIX Gaming RX 460 is significantly slower, with just 896 stream processors (versus 2048 on the RX 470) and a 128-bit memory interface (compared to 256-bit). Part of the appeal of the reference RX 460 - aside from low cost - is low power draw, as staying under 75W allows for slot-powered board designs. This STRIX Gaming version adds a 6-pin power connector, however, which should provide additional overhead for further overclocking.
| | STRIX Gaming RX 470 OC | STRIX Gaming RX 470 | STRIX Gaming RX 460 |
|---|---|---|---|
| GPU | AMD Radeon RX 470 | AMD Radeon RX 470 | AMD Radeon RX 460 |
| Memory | 4GB GDDR5 | 4GB GDDR5 | 4GB GDDR5 |
| Memory Clock | 6600 MHz | 6600 MHz | 7000 MHz |
| Core Clock | 1270 MHz (OC Mode) / 1250 MHz (Gaming Mode) | 1226 MHz (OC Mode) / 1206 MHz (Gaming Mode) | 1256 MHz (OC Mode) / 1236 MHz (Gaming Mode) |
| Video Output | DVI-D x2 | | |
| Dimensions | 9.5" x 5.1" x 1.6" | 9.5" x 5.1" x 1.6" | 7.6" x 4.7" x 1.4" |
The STRIX Gaming RX 470 OC 4GB is priced at $199, matching the (theoretical) retail of the 4GB RX 480, and the STRIX Gaming RX 470 is just behind at $189. The considerably lower-end STRIX Gaming RX 460 is $139. A check of Amazon/Newegg shows listings for these cards, but no in-stock units as of early this afternoon.