Manufacturer: AMD

AMD gets aggressive

At its Computex 2016 press conference in Taipei today, AMD announced the branding, pricing, and basic specifications for one of its upcoming Polaris GPUs shipping later this June. The Radeon RX 480, based on Polaris 10, will cost just $199 and will offer more than 5 TFLOPS of compute capability. This is an incredibly aggressive move, obviously aimed at continuing to gain market share at NVIDIA's expense. Details of the product are listed below.

|   | RX 480 | GTX 1070 | GTX 980 | GTX 970 | R9 Fury | R9 Nano | R9 390X | R9 390 |
|---|---|---|---|---|---|---|---|---|
| GPU | Polaris 10 | GP104 | GM204 | GM204 | Fiji Pro | Fiji XT | Hawaii XT | Grenada Pro |
| GPU Cores | 2304 | 1920 | 2048 | 1664 | 3584 | 4096 | 2816 | 2560 |
| Rated Clock | ? | 1506 MHz | 1126 MHz | 1050 MHz | 1000 MHz | up to 1000 MHz | 1050 MHz | 1000 MHz |
| Texture Units | ? | 120 | 128 | 104 | 224 | 256 | 176 | 160 |
| ROP Units | ? | 64 | 64 | 56 | 64 | 64 | 64 | 64 |
| Memory | 4/8GB | 8GB | 4GB | 4GB | 4GB | 4GB | 8GB | 8GB |
| Memory Clock | 8000 MHz | 8000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz | 6000 MHz | 6000 MHz |
| Memory Interface | 256-bit | 256-bit | 256-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit | 512-bit |
| Memory Bandwidth | 256 GB/s | 256 GB/s | 224 GB/s | 196 GB/s | 512 GB/s | 512 GB/s | 384 GB/s | 384 GB/s |
| TDP | 150 watts | 150 watts | 165 watts | 145 watts | 275 watts | 175 watts | 275 watts | 230 watts |
| Peak Compute | > 5.0 TFLOPS | 5.7 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 7.20 TFLOPS | 8.19 TFLOPS | 5.63 TFLOPS | 5.12 TFLOPS |
| Transistor Count | ? | 7.2B | 5.2B | 5.2B | 8.9B | 8.9B | 6.2B | 6.2B |
| Process Tech | 14nm | 16nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $199 | $379 | $499 | $329 | $549 | $499 | $389 | $329 |

The RX 480 will ship with 36 CUs totaling 2304 stream processors based on the current GCN breakdown of 64 stream processors per CU. AMD didn't list clock speeds and instead is only telling us that the performance offered will exceed 5 TFLOPS of compute; how much is still a mystery and will likely change based on final clocks.
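Since AMD is only quoting a compute figure, we can work backward to the clock speed it implies. A minimal sketch of that napkin math in Python, assuming the standard two FLOPs (one fused multiply-add) per stream processor per clock:

```python
# Back out the minimum clock implied by ">5 TFLOPS" on 2304 stream processors.
# Assumes the usual 2 FLOPs (one fused multiply-add) per core per clock.

stream_processors = 36 * 64   # 36 CUs x 64 SPs per CU = 2304
min_tflops = 5.0

min_clock_mhz = min_tflops * 1e12 / (2 * stream_processors) / 1e6
print(f"Implied minimum clock: {min_clock_mhz:.0f} MHz")  # ~1085 MHz
```

In other words, for the RX 480 to clear 5 TFLOPS, its clock has to land somewhere north of roughly 1085 MHz.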

The memory system is powered by a 256-bit GDDR5 memory controller running at 8 Gbps and hitting 256 GB/s of throughput. This is the same resulting memory bandwidth as NVIDIA's new GeForce GTX 1070 graphics card.
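That bandwidth figure follows directly from the bus width and data rate; a quick check, with nothing assumed beyond the specs above:

```python
# GDDR5 bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.

bus_width_bits = 256
data_rate_gbps = 8   # 8 Gbps effective per pin

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # 256 GB/s, matching the GTX 1070
```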

AMD also tells us that the TDP of the card is 150 watts, again matching the GTX 1070, though without more accurate performance data it's hard to draw any conclusions about the architectural efficiency of the Polaris GPUs built on GlobalFoundries' 14nm process.

Obviously the card will support FreeSync and all of AMD's VR features, in addition to being DP 1.3 and 1.4 ready. 

AMD stated that the RX 480 will launch on June 29th.

I know that many of you will want us to start guessing at where the new RX 480 will actually fall in terms of performance, and trust me, I've been trying to figure it out. Based on the TFLOPS rating and memory bandwidth alone, it seems possible that the RX 480 could compete with the GTX 1070. But if that were the case, I don't think even AMD is crazy enough to set the price this far below the GTX 1070's $379 launch price.

I would expect the configuration of the GCN architecture to remain mostly unchanged on Polaris, compared to Hawaii, for the same reasons that we saw NVIDIA leave Pascal's basic compute architecture unchanged compared to Maxwell. Moving to the new process node was the primary goal and adding to that with drastic shifts in compute design might overly complicate product development.

In the past, we have observed that AMD's GCN architecture tends to convert its rated maximum compute capability into gaming performance slightly less efficiently than Maxwell, and now Pascal. With that in mind, the >5 TFLOPS offered by the RX 480 likely places it somewhere between the Radeon R9 390 and R9 390X in realized gaming output. If that is the case, the Radeon RX 480 should perform somewhere between the GeForce GTX 970 and the GeForce GTX 980.

AMD claims that the RX 480 at $199 is set to offer a "premium VR experience" that has previously been limited to $500 graphics cards (another reference to the original price of the GTX 980, perhaps...). The company says this should have a dramatic impact on increasing the TAM (total addressable market) for VR.

In a notable market survey, price was a leading barrier to adoption of VR. The $199 SEP for select Radeon™ RX Series GPUs is an integral part of AMD’s strategy to dramatically accelerate VR adoption and unleash the VR software ecosystem. AMD expects that its aggressive pricing will jumpstart the growth of the addressable market for PC VR and accelerate the rate at which VR headsets drop in price:

  • More affordable VR-ready desktops and notebooks
  • Making VR accessible to consumers in retail
  • Unleashing VR developers on a larger audience
  • Reducing the cost of entry to VR

AMD calls this strategy of starting with the mid-range product its "Water Drop" strategy, with the goal of "releasing new graphics architectures in high volume segments first to support continued market share growth for Radeon GPUs."

So what do you guys think? Are you impressed with what Polaris is shaping up to be?

Subject: Processors
Manufacturer: AMD

Bristol Ridge Takes on Mobile: E2 Through FX

It is no secret that AMD has faced an uphill battle since the release of the original Core 2 processors from Intel.  While they stayed mostly competitive through the Phenom II years, they hit some major performance issues when moving to the Bulldozer architecture.  While on paper the idea of Clustered Multi-Threading sounded fantastic, AMD was never able to get the per-thread performance up to expectations.  While their CPUs performed well in heavily multi-threaded applications, they were just never seen in as positive a light as the competing Intel products.

The other part of the performance equation that has hammered AMD is the lack of a new process node that would allow them to compete more adequately with Intel.  When AMD was at 32nm PD-SOI, Intel had already introduced its 22nm TriGate/FinFET process.  AMD then transitioned to a 28nm HKMG planar process that was more size-optimized than 32nm, but did not drastically improve upon power and transistor switching performance.

So AMD had a double whammy on their hands: an underperforming architecture and limited to no access to advanced process nodes that would actually improve their power and speed situation.  They could not force their foundry partners to spend billions on a crash course in FinFET technology to bring it to market faster, so they had to iterate and innovate on their designs.

Bristol Ridge is the fruit of that particular labor.  It is also the end point of the architecture that was introduced with Bulldozer way back in 2011.

Click here to read the entire introduction of AMD's Bristol Ridge lineup!

Subject: Processors
Manufacturer: Intel

Broadwell-E Platform

It has been nearly two years since the release of the Haswell-E platform, which began with the launch of the Core i7-5960X processor. Back then, the introduction of an 8-core consumer processor was the primary selling point, along with the new X99 chipset and DDR4 memory support. At the time, I heralded the processor as “easily the fastest consumer processor we have ever had in our hands” and “nearly impossible to beat.” So what has changed over the course of 24 months?

Today Intel is launching Broadwell-E, the follow-up to Haswell-E, and things look very much the same as they did before. There are definitely a couple of changes worth noting and discussing, including the move to a 10-core processor option as well as Turbo Boost Max Technology 3.0, which is significantly more interesting than its marketing name implies. Intel is sticking with the X99 platform (good for users that might want to upgrade), though the pricing of these new processors is more than slightly disappointing given trends elsewhere in the market.

This review of the new Core i7-6950X 10-core Broadwell-E processor is going to be quick and to the point: what changed, what is the performance, how does it overclock, and what will it cost you?

Go.

Continue reading our review of the new Core i7-6950X 10-core processor!!

Subject: General Tech
Manufacturer: ARM

New Products for 2017

PC Perspective was invited to Austin, TX on May 11 and 12 to participate in ARM’s yearly tech day.  Also invited were a handful of editors and analysts that cover the PC and mobile markets.  Those folks were all pretty smart, so it is a bit of a mystery why they invited me.  Perhaps word of my unique talent of screenshotting PDFs into near-unreadable JPGs preceded me?  Regardless of the reason, I was treated to two full days of in-depth discussion of the latest generation of CPU and GPU cores, 10nm test chips, and information on new licensing options.

Today ARM is announcing their next CPU core, the Cortex-A73, and unwrapping the latest Mali-G71 graphics technology.  Other technologies, such as the CCI-550 interconnect, are also being revealed.  It is a busy and important day for ARM, especially in light of Intel seemingly abandoning the low-power mobile market.

Cortex-A73

ARM previously announced the Cortex-A72 in February 2015.  Since that time it has appeared in most flagship mobile devices, from late 2015 and throughout 2016.  The market continues to evolve, and as such the workloads and form factors have pushed ARM to continue to develop and improve their CPU technology.

The Sofia Antipolis, France design group is behind the new A73.  The previous several core architectures had been developed by the Cambridge group.  As such, the new design differs quite dramatically from the previous A72.  I was actually somewhat taken aback by the differences in the design philosophy of the two groups and the changes between the A72 and A73, but knowing this, the generational jumps we have seen in the past make a bit more sense to me.

The marketplace is constantly changing when it comes to workloads and form factors.  More and more complex applications are being ported to mobile devices, including hot technologies like AR and VR.  Other drivers include 3D/360 degree video, greater-than-20 MP cameras, and 4K/8K displays and their video playback formats.  Form factors, on the other hand, have continued to decrease in size, especially in overall thickness.  We have relatively large screens on most premium devices, but designers have continued to make these phones thinner and thinner throughout the years.  This has put a lot of pressure on ARM and their partners to increase performance while keeping TDPs in check, and even reducing them so they more adequately fit the power envelope of these extremely thin devices.

Click here to continue reading about ARM's Tech Day 2016!

Manufacturer: NVIDIA

GP104 Strikes Again

It’s only been three weeks since NVIDIA unveiled the GeForce GTX 1080 and GTX 1070 graphics cards at a live streaming event in Austin, TX. But it feels like those two GPUs, one of which hadn't even been reviewed until today, have already drastically shifted the landscape of graphics, VR, and PC gaming.

Half of the “new GPU” stories have now been told, with AMD due to follow up soon with Polaris, but it was clear to anyone watching the enthusiast segment with a hint of history that a line was drawn in the sand that day. There is THEN, and there is NOW. Today’s detailed review of the GeForce GTX 1070 completes NVIDIA’s first wave of NOW products, following closely behind the GeForce GTX 1080.

Interestingly, and in a move that is very uncharacteristic of NVIDIA, detailed specifications of the GeForce GTX 1070 were released on GeForce.com well before today’s reviews. With information on the CUDA core count, clock speeds, and memory bandwidth, it was possible to get a solid sense of where the GTX 1070 would perform, and I imagine that many of you have already done the napkin math to figure that out. There is no more guessing though - reviews and testing are all done, and I think you'll find that the GTX 1070 is as exciting, if not more so, than the GTX 1080 thanks to the combination of performance and pricing that it provides.

Let’s dive in.

Continue reading our review of the GeForce GTX 1070 8GB Founders Edition!!

Introduction

We’ve probably all lost data at some point, and many of us have tried various drive recovery solutions over the years. Disk Drill has been available to Mac OS X users for some time, and the company released a Windows-compatible version last year. The best part? It’s totally free (and not in the ad-ridden, drowning-in-popups kind of way). So does it work? Using some of my own data as a guinea pig, I decided to find out.

The interface is clean and simple.

To begin with, I’ll list the features of Disk Drill as Clever Files describes them on their product page:


  • Any Drive
    • Our free data recovery software for Windows PC can recover data from virtually any storage device - including internal and external hard drives, USB flash drives, iPods, memory cards, and more.
  • Recovery Options
    • Disk Drill has several different recovery algorithms, including Undelete Protected Data, Quick Scan, and Deep Scan. It will run through them one at a time until your lost data is found.
  • Speed & Simplicity
    • It’s as easy as one click: Disk Drill scans start with just the click of a button. There’s no complicated interface with too many options, just click, sit back and wait for your files to appear.
  • All File Systems
    • Different types of hard drives and memory cards have different ways of storing data. Whether your media has a FAT, exFAT, or NTFS file system, is an HFS+ Mac drive, or uses Linux EXT2/3/4, Disk Drill can recover deleted files.
  • Partition Recovery
    • Sometimes your data is still on your drive, but a partition has been lost or reformatted. Disk Drill can help you find the “map” to your old partition and rebuild it, so your files can be recovered.
  • Recovery Vault
    • In addition to deleted file recovery, Disk Drill also protects your PC from future data loss. Recovery Vault keeps a record of all deleted files, making it much easier to recover them.

The Recovery Process


My recovery process involved an old 320GB IDE drive, which I had used for backups until power outage-related corruption left me without a valid partition (I didn’t own a UPS at the time, and the drive was in the middle of a write). At one point I had given up and formatted the drive, thinking all of my original backup was lost. Thankfully I didn’t use it much after that, and it has been sitting on a shelf for years.

There are different methods that can be employed to recover lost or deleted data. One of these is to scan for file headers (or signatures), which contain information about what type of file it is (i.e. Microsoft Word document, JPEG image, etc.). More advanced recovery methods attempt to reconstruct an entire file system, preserving the folder structures and the original file names. Unfortunately, that is not a simple (or fast) process, and it is generally left to the professionals.
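To make the signature-scanning idea concrete, here is a minimal sketch of a JPEG "carver" in Python. This is emphatically not how Disk Drill works internally (its algorithms are proprietary); it only illustrates the basic header/footer scan described above, ignoring the fragmentation handling and validation that real tools must perform:

```python
import sys

# Scan a raw disk image for JPEG start-of-image (FF D8 FF) and
# end-of-image (FF D9) markers, dumping whatever lies between them.

JPEG_SOI = b"\xff\xd8\xff"
JPEG_EOI = b"\xff\xd9"

def carve_jpegs(image_path):
    with open(image_path, "rb") as f:
        data = f.read()  # fine for a demo; real tools stream in chunks

    count = 0
    start = data.find(JPEG_SOI)
    while start != -1:
        end = data.find(JPEG_EOI, start + len(JPEG_SOI))
        if end == -1:
            break
        with open(f"recovered_{count:04d}.jpg", "wb") as out:
            out.write(data[start:end + len(JPEG_EOI)])
        count += 1
        start = data.find(JPEG_SOI, end + len(JPEG_EOI))
    return count

if __name__ == "__main__":
    print(f"Carved {carve_jpegs(sys.argv[1])} candidate JPEG(s)")
```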

Continue reading our look at Clever Files Disk Drill Windows File Recovery Software!!

Manufacturer: NVIDIA

First, Some Background

 
TL;DR:
NVIDIA's Rumored GP102
 
Based on two rumors, NVIDIA seems to be planning a new GPU, called GP102, that sits between GP100 and GP104. This would change how their product stack has flowed since Fermi and Kepler. GP102's performance, in both single-precision and double-precision, will likely signal NVIDIA's product plans going forward.
  • GP100's ideal 1 : 2 : 4 FP64 : FP32 : FP16 ratio is inefficient for gaming
  • GP102 either extends GP104's gaming lead or bridges GP104 and GP100
  • If GP102 is a bigger GP104, the future is unclear for smaller GPGPU devs
    • That is, unless GP100 can be significantly up-clocked for gaming.
  • If GP102 matches (or outperforms) GP100 in gaming, and has better than 1 : 32 double-precision performance, then GP100 would be the first time that NVIDIA designed an enterprise-only, high-end GPU.

When GP100 was announced, Josh and I discussed, internally, how it would make sense for the gaming industry. Recently, an article on WCCFTech cited anonymous sources, which should always be taken with a dash of salt, claiming that NVIDIA was planning a second large Pascal chip, GP102, positioned between GP104 and GP100. As I was writing this editorial about it, relating it to our own speculation about the physics of Pascal, VideoCardz claimed to have been contacted by the developers of AIDA64, seemingly on the record, also citing a GP102 design.

I will retell chunks of the rumor, but also add my opinion to it.

In the last few generations, each architecture has had a flagship chip that was released in both gaming and professional SKUs. Neither audience had access to a chip that was larger than the other's largest of that generation. Clock rates and disabled portions varied by specific product, with gaming usually getting the more aggressive clocks for slightly better benchmarks. Fermi had GF100/GF110, Kepler had GK110/GK210, and Maxwell had GM200. Each of these was available in Tesla, Quadro, and GeForce cards, including Titans.

Maxwell was interesting, though. NVIDIA was unable to move off of 28nm, which Kepler launched on, so they created a second architecture for that node. To increase performance without access to greater transistor density, you need to make your design bigger, more optimized, or simpler. GM200 was giant and optimized, but, to reach the performance levels it achieved, it also needed to be simpler. Something had to go, and double-precision (FP64) performance was the big omission. NVIDIA was upfront about it at the Titan X launch, and told their GPU compute customers to keep purchasing Kepler if they valued FP64.
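To put that FP64 trade-off in rough numbers, here is a back-of-the-envelope comparison using each chip's rated core count, clock, and FP64:FP32 ratio. The figures are approximate (clocks vary by SKU), and they are my own arithmetic rather than anything NVIDIA publishes in this form:

```python
# Approximate FP64 throughput: 2 FLOPs (FMA) per core per clock,
# scaled by each chip's FP64:FP32 ratio.

def fp64_tflops(cores, clock_mhz, fp64_ratio):
    return 2 * cores * clock_mhz * 1e6 * fp64_ratio / 1e12

print(f"GK110 (GTX Titan Black, 1:3): {fp64_tflops(2880, 889, 1/3):.2f} TFLOPS")
print(f"GM200 (GTX Titan X, 1:32):    {fp64_tflops(3072, 1000, 1/32):.2f} TFLOPS")
print(f"GP100 (Tesla P100, 1:2):      {fp64_tflops(3584, 1480, 1/2):.2f} TFLOPS")
print(f"GP104 (GTX 1080, 1:32):       {fp64_tflops(2560, 1733, 1/32):.2f} TFLOPS")
```

Kepler's big chip could do roughly 1.7 TFLOPS of double-precision; GM200 dropped that to under 0.2 TFLOPS, which is exactly why compute customers were told to stay on Kepler.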

Fast-forward to Pascal.

Subject: Storage
Manufacturer: Toshiba (OCZ)

Introduction, Specifications and Packaging

Introduction:

The OCZ RevoDrive has been around for a good long while. We looked at the first-ever RevoDrive back in 2010. It was a bold move for the time, as PCIe SSDs were both rare and very expensive. OCZ's innovation was to implement a new VCA RAID controller, which kept latencies low and scaled properly with increased queue depth. OCZ got a lot of use out of this formula, later expanding to the RevoDrive 3 x2, which moved to four parallel SSDs, and all the way to the enterprise Z-Drive R4, which pushed that out to eight RAIDed SSDs.


The latter was a monster of an SSD, both in physical size and storage capacity. Its performance was also impressive, given that it launched five years ago. After being acquired by Toshiba, OCZ re-spun the old VCA-driven SSD one last time in the form of the RevoDrive 350, but it was the same old formula with high-latency SandForce controllers (updated with in-house Toshiba flash). The RevoDrive line needed to ditch that dated tech and move into the world of NVMe, and today it has!

Here is the new 'Toshiba OCZ RD400', branded as such under the recent rebadging that took place on OCZ's site (the Trion 150 and Vertex 180 have also been relabeled as the TR150 and VT180). The new RD400 brings some significant changes over previous iterations of the line. The big one is that it is now a lean M.2 part, available with an optional adapter card for those without a free M.2 slot.

Read on for our full review of the new OCZ RD400!

Subject: Motherboards
Manufacturer: GIGABYTE

Introduction and Technical Specifications

Introduction

The X99P-SLI motherboard is the newest member of GIGABYTE's Ultra Durable board line, updated with the newest technological innovations, including USB 3.1 and Thunderbolt 3. The board supports all Intel LGA2011-3 processors paired with DDR4 memory in up to a quad-channel configuration. GIGABYTE priced the X99P-SLI at an approachable MSRP of $249.99.

Like all members of the Ultra Durable board line, the X99P-SLI was over-engineered to take whatever abuse is thrown its way, featuring a 6+4-phase digital power system with International Rectifier Gen 4 digital PWM controllers and Gen 3 PowIRstage ICs, Server Level chokes, and long-life Durable Black solid capacitors. The board also features GIGABYTE's next-generation PCIe x16 slots with PCIe Metal Shielding - steel-reinforced overlays that provide extra vertical support for graphics cards with large, heavy coolers.

Continue reading our review of the GIGABYTE X99P-SLI motherboard!

Subject: Processors
Manufacturer: ARM

10nm Sooner Than Expected?

It seems like only yesterday that we had the first major GPU released on 16nm FF+, and now we are talking about ARM receiving their first 10nm FF test chips!  Well, in fact it was yesterday that NVIDIA formally released performance figures for the latest GeForce GTX 1080, which is based on TSMC’s 16nm FF+ process technology.  TSMC is currently going full bore on their latest process node and producing the fastest current graphics chip around.  It has taken the foundry industry as a whole a lot longer than expected to develop FinFET technology, but now that they seemingly have that piece of the puzzle mastered, they are moving to new process nodes at an accelerated rate.

TSMC’s 10nm FF is not yet well understood by press and analysts, but we gather that it is more of a marketing term than a true drop to 10nm features.  Intel has yet to get past 14nm and does not expect 10nm production until well into next year.  TSMC is promising their version in the second half of 2016.  We cannot assume that TSMC’s version will match what Intel will be doing in terms of geometries and electrical characteristics, but we do know that it is a step past TSMC’s 16nm FF products.  Lithography will likely get a boost with triple-patterning exposure.  My guess is that the back end will also move away from the “20nm metal” stages that we see with 16nm.  All in all, it should be an improvement over what we see with 16nm, but time will tell if it can match the performance and density of competing lines that bear the 10nm name from Intel, Samsung, and GLOBALFOUNDRIES.

ARM has a history of porting their architectures to new process nodes, but they are being a bit more aggressive here than we have seen in the past.  It used to be that ARM would announce a new core or technology, and it would take up to two years for it to be introduced into the market.  Now we are seeing technology announcements and actual products hitting the scene about nine months later.  With the mobile market continuing to grow, we expect products to come to market even more quickly.

The company designed a simplified test chip to tape out and send to TSMC for test production on the aforementioned 10nm FF process.  The chip was taped out in December 2015 and the design was shipped to TSMC for mask production and wafer starts.  ARM expects the finished wafers to arrive this month.

Click here to continue reading about ARM's test foray into 10nm!

Manufacturer: NVIDIA

A new architecture with GP104

The summer of change for GPUs has begun with today’s review of the GeForce GTX 1080. NVIDIA has endured leaks, speculation, and criticism for months now, with enthusiasts calling out NVIDIA for not including HBM technology or for not having asynchronous compute capability. Last week NVIDIA’s CEO Jen-Hsun Huang went on stage and officially announced the GTX 1080 and GTX 1070 graphics cards, with a healthy amount of information about their supposed performance and price points. Issues around cost, and what exactly a Founders Edition is, aside, the event was well received and clearly showed a performance and efficiency improvement that we were not expecting.

The question is, does the actual product live up to the hype? Can NVIDIA overcome some users’ negative view of the Founders Edition and craft a product message that gives the wide range of PC gamers looking for an upgrade path an option they’ll take?

I’ll let you know through the course of this review, but what I can tell you definitively is that the GeForce GTX 1080 clearly sits alone at the top of the GPU world.

Continue reading our review of the GeForce GTX 1080 Founders Edition!!

Manufacturer: NVIDIA

An Overview

 
TL;DR:
NVIDIA's Ansel Technology
 
Ansel is a utility that expands the concept of screenshots in the direction of photography. When fully enabled, it allows the user to capture still images with HDR exposure, gigapixel levels of resolution, 360-degree views for VR, 3D stereo projection, and post-processing filters, all from either the game's view or from a free-roaming camera (if available). While it must be implemented by the game developer, mostly to prevent the user from cheating or seeing hidden parts of the world, such as an inventory or minimap rendering room, NVIDIA claims that it is a tiny burden.
  • NVIDIA's blog claims "GTX 600-series and up"
  • UI/UX is NVIDIA controlled
    • Allows NVIDIA to provide a consistent UI across all supported games
    • Game developers don't need to spend UX and QA effort on their own
  • Can signal the game to use its highest-quality assets during the shot
  • NVIDIA will provide an API for users to create their own post-process shaders
    • Will allow access to Color, Normal, Depth, Geometry, (etc.) buffers
  • When asked about implementing Ansel with ShadowPlay: "Stay tuned."

“In-game photography” is an interesting concept. Not too long ago, it was difficult to capture even the user's direct experience with a title. Print Screen could only hold a single screenshot at a time, which allowed Steam and FRAPS to provide a better user experience. FRAPS also made video more accessible to the end user, but it output huge files and, while it wasn't too expensive, it needed to be purchased online, which was a big issue ten or so years ago.

Seeing that their audience would enjoy video captures, NVIDIA introduced ShadowPlay a couple of years ago. The feature allowed users not only to record video, but also to capture the last few minutes of gameplay after the fact. It did this with hardware acceleration, and it did it for free (on compatible GPUs). While I don't use ShadowPlay myself, preferring the control of OBS, it's a good example of how NVIDIA wants to support their users. They see these features as a value-add that draws people to their hardware.

Read on to learn more about NVIDIA Ansel

Subject: Storage
Manufacturer: ICY DOCK

Introduction, Specifications, and Packaging

Introduction

ICY DOCK has made themselves into a sort of Swiss Army knife of dockable and hot-swappable storage solutions. From multi-bay desktop external devices to internal hot-swap enclosures, these guys have just about every conceivable way to convert storage form factors covered. We’ve looked at some of their other offerings in the past, but this week we will focus on a pair of their ToughArmor series products.

These two enclosures aim to cram as many 2.5”, 7mm form factor devices into the smallest space possible. They also offer hot-swap capability and feature front-panel power and activity LEDs. As the name implies, they are built to be extremely durable, with ICY DOCK proudly running them over with a truck in some of their product photos.

Read on for our full review of the ICY DOCK ToughArmor MB998SP-B and MB993SK-B!

Subject: Processors
Manufacturer: AMD

Lower Power, Same Performance

AMD is in a strange position: there is a lot of excitement about their upcoming Zen architecture, but we are still many months away from its introduction.  AMD obviously needs to keep the dollars flowing in, and part of that means we get refreshes of current products now and then.  The “Kaveri” products that have been powering the latest APUs from AMD have received one of those refreshes.  AMD has done some redesigning of the chip and tweaked the process technology used to manufacture it.  The resulting product is the “Godavari” refresh, which offers slightly higher clockspeeds as well as better overall power efficiency compared to the previous “Kaveri” products.

One of the first refreshes was the A8-7670K, which hit the ground in November 2015.  This is a slightly cut-down part that features 6 GPU compute units vs. the 8 of a fully enabled Godavari chip.  It continues to be an FM2+ based chip with a 95 watt TDP.  The CPU clockspeed of this part ranges from 3.6 GHz to 3.9 GHz.  The GPU portion runs at the same 757 MHz as the original A10-7850K.  It is interesting to note that it is still a 95 watt TDP part with essentially the same clockspeeds as the 7850K, but with two fewer GPU compute units.

The other product covered here is a bit more interesting.  The A10-7860K looks to be a larger improvement over the previous 7850K in terms of power and performance.  It shares the same CPU clockspeed range as the 7850K (3.6 GHz to 3.9 GHz), but improves upon the GPU clockspeed by hitting around 800 MHz.  At first this seems underwhelming, until we realize that AMD has lowered the TDP from 95 watts down to 65 watts.  Less power consumed and less heat produced for the same CPU performance and improved GPU performance seems like a nice advance.

AMD continues to utilize GLOBALFOUNDRIES’ 28nm Bulk/HKMG process for their latest APUs and will continue to do so until Zen is released late this year.  This is not the same 28nm process that we were introduced to over four years ago.  Over that time, improvements have been made to improve yields and bins, as well as to optimize power and clockspeed.  GF can also adjust the process on a per-batch basis to improve certain aspects of a design (higher speed, more leakage, lower power, etc.).  They cannot produce miracles, though.  Do not expect 22nm FinFET performance or density with these latest AMD products.  Those kinds of improvements will show up with Samsung/GF’s 14nm LPP and TSMC’s 16nm FF+ lines.  While AMD will be introducing GPUs on 14nm LPP this summer, the Zen launch in late 2016 will be the first AMD CPU to utilize that advanced process.

Click here to read the entire AMD A10-7860K and A10-7670K Review!

Subject: Motherboards
Manufacturer: ECS

Introduction and Technical Specifications

Introduction

The ECS Z170-Claymore motherboard is the newest offering in ECS' L337 product line with support for the Intel Z170 Express chipset. The Z170-Claymore is a more enthusiast-friendly design than some of their previous offerings, with a slew of features sure to entice gamers and power users alike. ECS priced this board competitively with an MSRP of $159.99, a price point sure to appeal to a wide swath of users given the board's integrated feature set.

ECS pulled out all of the stops with the Z170-Claymore, integrating a host of features together with high-quality components for a compelling product. The board was designed with a 12-phase digital power delivery system, using high-efficiency chokes and MOSFETs as well as solid-core capacitors for optimal board performance under any operating conditions. ECS integrated the following features into the Z170-Claymore board: four SATA 3 ports; one SATA Express port; a PCIe x2 M.2 port; a Realtek GigE NIC; five PCI-Express x16 slots; a 2-digit diagnostic LED display; on-board power and reset buttons; a Realtek audio solution; integrated DisplayPort and HDMI video outputs; and USB 2.0, 3.0, and 3.1 Gen2 port support.

Continue reading our review of the ECS Z170-Claymore motherboard!

Manufacturer: AMD

History and Specifications

The Radeon Pro Duo has had an interesting history. Originally shown as an unbranded dual-GPU PCB during E3 2015 last June, AMD touted it as the ultimate graphics card for both gamers and professionals. At that time, the company thought an October launch was feasible, but that clearly didn’t work out. When pressed for information in the Oct/Nov timeframe, AMD said they had delayed the product into Q2 2016 to better correlate with the launch of the VR systems from Oculus and HTC/Valve.

During a GDC press event in March, AMD finally unveiled the Radeon Pro Duo brand, but the company was also walking back the idea of the dual-Fiji beast being aimed at the gaming crowd, even partially. Instead, AMD talked up the benefits for game developers and content creators, such as using its 8192 stream processors for offline rendering, or aiding game devs in the implementation and improvement of multi-GPU support for upcoming games.

Anyone who pays attention to the graphics card market can see why AMD would make this positional shift with the Radeon Pro Duo. The Fiji architecture is on the way out, with Polaris due in June by AMD’s own proclamation. At $1500, the Radeon Pro Duo will stand in stark contrast to the prices of the Polaris GPUs this summer, and it sits well above any part in NVIDIA's GeForce line. And, though CrossFire has made drastic improvements over the last several years thanks to new testing techniques, the multi-GPU ecosystem is going through a major shift with both DX12 and VR bearing down on it.

So yes, the Radeon Pro Duo has both RADEON and PRO right there in the name. What’s a respectable PC Perspective graphics reviewer supposed to do with a card like that if it finds its way into his office? Test it, of course! I’ll take a look at a handful of recent games as well as a new feature that AMD has integrated with 3ds Max, called FireRender, to showcase some of the professional chops of the new card.

Continue reading our review of the AMD Radeon Pro Duo!!

Manufacturer: NZXT

Introduction and First Impressions

The NZXT Manta is a mini-ITX enclosure that boasts better than average room for components and cooling, and is packaged in a rather unusual, rounded design.

There is a reason for the Manta's somewhat bulbous appearance, and it's part of a recent trend in mini-ITX enclosure design: bigger is better. While you might think that mITX is all about fitting components into the smallest enclosure possible, there have been some recent examples of cases which expand the chassis to micro-ATX sizes (or above).

The Manta from NZXT is actually large enough to be a micro-ATX case, and its total volume exceeds that of NZXT's own S340, a full ATX design (!). So why on earth would you want a mini-ITX enclosure with that much volume? Three words: cooling, cooling, and cooling.

The Manta's protruding top and front panels provide the additional space needed to allow for thicker cooling setups.

Before we dive in for a closer look at the new Manta enclosure, let's take a look at the full specs from NZXT:


Specifications:

  • Motherboard Support: mini-ITX
  • Expansion Slots: 2
  • Drive Bays
    • Internal 3.5”: 2 
    • Internal 2.5”: 3
  • Cooling System
    • Front: 2 x 140/120mm (2 x 120mm included) 
    • Top: 2 x 140/120mm 
    • Rear: 1 x 120mm (Included)
  • Radiator Support
    • Front: Up to 280mm 
    • Top: Up to 280mm 
    • Rear: 120mm
  • Clearance
    • CPU Clearance: 160mm
    • GPU Clearance: 363mm 
    • PSU Length: 363mm
  • Power Supply Support: ATX
  • External Electronics:
    • I/O Panel LED On/Off
    • 1x Audio/Mic
    • USB 3.0
  • Dimensions (WxHxD): 245 x 426 x 450mm (9.65 x 16.77 x 17.72 inches)
  • Weight: 7.2 kg (15.87 lbs)
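To put the "bigger is better" point in numbers, converting the dimensions above to volume shows just how much enclosure you are getting for a mini-ITX board. A quick sketch (the S340 figures are NZXT's published dimensions as best I can tell, so treat that comparison as approximate):

```python
# Case volume in liters from W x H x D in millimeters.

def liters(w_mm, h_mm, d_mm):
    return w_mm * h_mm * d_mm / 1_000_000  # 1 L = 1,000,000 mm^3

manta = liters(245, 426, 450)  # from the spec list above
s340  = liters(200, 445, 432)  # NZXT S340 (full ATX), approximate
print(f"Manta: {manta:.1f} L vs. S340: {s340:.1f} L")  # ~47.0 L vs ~38.4 L
```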

Our thanks to NZXT for providing the Manta enclosure for our review.

First Impressions

At first glance the Manta is a departure from typical enclosure design, but the rounded panels are built around a standard rectangular frame, so it's really quite conventional underneath.

The view from the front of the enclosure really shows off the rounded sides, and this will certainly not be everyone's favorite look - but anything beyond the norm tends to be divisive in this market.

Continue reading our review of the NZXT Manta Mini ITX Enclosure!!

Manufacturer: be quiet!

Introduction, Features and Specifications

Introduction

The German manufacturer be quiet! is best known for their attention to silence. They are currently introducing four new models in their entry-level Pure Power series: the Pure Power 9 400W, 500W, 600W, and 700W power supplies. be quiet! is targeting these power supplies at budget-minded users building silent PCs, office machines, multimedia systems, and home theater PCs. The new Pure Power 9 series replaces the Pure Power L8 series.

All of the Pure Power 9 series power supplies are 80 Plus Silver certified for high efficiency (the L8 series was Bronze certified) and feature modular cables and a 120mm cooling fan. The power supplies are designed with dual +12V rails and incorporate new active clamp and synchronous rectification technology, with zero-voltage and zero-current switching for increased efficiency. We will be taking a detailed look at the Pure Power 9 600W power supply in this review.
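As a rough illustration of what the Silver rating means at the wall: 80 Plus Silver requires at least 85/88/85% efficiency at 20/50/100% load (at 115V input), while be quiet! claims up to 91% for this unit. A hedged sketch of the math, using the certification minimums rather than measured results:

```python
# Estimate AC wall draw for a given DC load and efficiency.
# Efficiencies below are the 80 Plus Silver minimums at 115V,
# not measurements of this specific unit.

def wall_draw(dc_load_w, efficiency):
    return dc_load_w / efficiency

for load, eff in [(120, 0.85), (300, 0.88), (600, 0.85)]:  # 20/50/100% of 600W
    waste = wall_draw(load, eff) - load
    print(f"{load:>3}W DC @ {eff:.0%} -> {wall_draw(load, eff):.0f}W AC ({waste:.0f}W lost as heat)")
```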

be quiet! Pure Power 9 600W PSU Key Features:

•    Exceptionally quiet operation (be quiet! silence-optimized 120mm fan)
•    80 Plus Silver certified with up to 91% power conversion efficiency
•    Two +12V rails and four PCI-E connectors for multi-GPU systems
•    Active Clamp + Synchronous Rectification circuit design
•    Modular cable management with flat ribbon-style peripheral cables
•    Meets latest Intel C6/C7, ErP and Energy Star guidelines
•    Active Power Factor correction (0.99) with Universal AC input
•    Product conception, design and QC in Germany, manufactured in China
•    3-Year warranty

Please continue reading our review of the be quiet! Pure Power 9 600W PSU!!!

Manufacturer: AMD

The Dual-Fiji Card Finally Arrives

This weekend, leaks on both WCCFTech and VideoCardz.com revealed just about everything about the pending release of AMD’s dual-GPU giant, the Radeon Pro Duo. While no one at PC Perspective has been briefed on the product officially, all of the interesting data surrounding the product is clearly outlined in the slides on those websites, minus some independent benchmark testing that we are hoping to get to next week. Based on the reports from both sites, the Radeon Pro Duo will be released on April 26th.

AMD actually revealed the product and branding for the Radeon Pro Duo back in March, during its live-streamed Capsaicin event surrounding GDC. At that point we were given the following information:

  • Dual Fiji XT GPUs
  • 8GB of total HBM memory
  • 4x DisplayPort (this has since been modified)
  • 16 TFLOPS of compute
  • $1499 price tag

The card follows the same industrial design as the reference Radeon Fury X, and it integrates a dual-pump cooler with an external fan/radiator to keep both GPUs running cool.

Based on the slides leaked today, AMD has revised the Radeon Pro Duo design to include a set of three DisplayPort connections and one HDMI port. This was a necessary change, as the Oculus Rift requires an HDMI port to work; only the HTC Vive has built-in support for a DisplayPort connection, and even in that case you would need a full-size to mini-DisplayPort cable.

The 8GB of HBM (high bandwidth memory) on the card is split between the two Fiji XT GPUs, just like other multi-GPU options on the market. The 350 watt power draw mark is exceptionally high, exceeded only by AMD’s previous dual-GPU beast, the Radeon R9 295X2, which used 500+ watts, and the NVIDIA GeForce GTX Titan Z, which draws 375 watts!

Here is the specification breakdown of the Radeon Pro Duo. The card has 8192 total stream processors and 128 Compute Units, split evenly between the two GPUs. You are getting two full Fiji XT GPUs in this card, an impressive feat made possible in part by the use of High Bandwidth Memory and its smaller physical footprint.

|   | Radeon Pro Duo | R9 Nano | R9 Fury | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | R9 290X |
|---|---|---|---|---|---|---|---|---|
| GPU | Fiji XT x 2 | Fiji XT | Fiji Pro | Fiji XT | GM200 | GM200 | GM204 | Hawaii XT |
| GPU Cores | 8192 | 4096 | 3584 | 4096 | 2816 | 3072 | 2048 | 2816 |
| Rated Clock | up to 1000 MHz | up to 1000 MHz | 1000 MHz | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1000 MHz |
| Texture Units | 512 | 256 | 224 | 256 | 176 | 192 | 128 | 176 |
| ROP Units | 128 | 64 | 64 | 64 | 96 | 96 | 64 | 64 |
| Memory | 8GB (4GB x 2) | 4GB | 4GB | 4GB | 6GB | 12GB | 4GB | 4GB |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) x 2 | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 512-bit |
| Memory Bandwidth | 1024 GB/s | 512 GB/s | 512 GB/s | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 320 GB/s |
| TDP | 350 watts | 175 watts | 275 watts | 275 watts | 250 watts | 250 watts | 165 watts | 290 watts |
| Peak Compute | 16.38 TFLOPS | 8.19 TFLOPS | 7.20 TFLOPS | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.63 TFLOPS |
| Transistor Count | 8.9B x 2 | 8.9B | 8.9B | 8.9B | 8.0B | 8.0B | 5.2B | 6.2B |
| Process Tech | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm |
| MSRP (current) | $1499 | $499 | $549 | $649 | $649 | $999 | $499 | $329 |

The Radeon Pro Duo has a rated clock speed of up to 1000 MHz. That’s the same clock speed as the R9 Fury and the rated “up to” frequency on the R9 Nano. It’s worth noting that we did see a handful of instances where the R9 Nano’s power limiting capability resulted in some extremely variable clock speeds in practice. AMD recently added a feature to its Crimson driver to disable power metering on the Nano, at the expense of more power draw, and I would assume the same option would work for the Pro Duo.
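As a sanity check on the table above, the Pro Duo's headline numbers fall straight out of the clocks and bus widths. A quick sketch, assuming both GPUs actually sustain the full 1000 MHz:

```python
# Derive the Radeon Pro Duo's rated peak compute and memory bandwidth.

cores_per_gpu = 4096      # Fiji XT stream processors
clock_hz = 1000e6         # the "up to" 1000 MHz figure

# 2 GPUs x 2 FLOPs (FMA) per core per clock
peak_tflops = 2 * 2 * cores_per_gpu * clock_hz / 1e12

bus_bits = 4096           # HBM interface width per GPU
mem_clock_hz = 500e6      # HBM clock, double data rate
bandwidth_gb_s = 2 * (bus_bits / 8) * mem_clock_hz * 2 / 1e9  # 2 GPUs, DDR

print(f"Peak compute: {peak_tflops:.2f} TFLOPS")       # 16.38 TFLOPS
print(f"Memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 1024 GB/s combined
```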

Continue reading our preview of the AMD Radeon Pro Duo!!

Subject: General Tech
Manufacturer: Various

Intro and Xbox One

Introduction to Remote Streaming

The ability to play console games on the PC is certainly nothing new. A wide range of emulators have long offered PC owners access to thousands of classic games. But the recent advent of personal game streaming gives users the ability to legally enjoy current generation console games on their PCs.

Both Microsoft and Sony now offer streaming from their respective current generation consoles to the PC, but via quite different approaches. For PC owners contemplating console streaming, we set out to discover how each platform works and compares, what level of quality discerning PC gamers can expect, and what limitations and caveats console streaming brings. Read on for our comparison of Xbox One Streaming in Windows 10 and PS4 Remote Play for the PC and Mac.

Xbox One Streaming in Windows 10

Xbox One Streaming was introduced alongside the launch of Windows 10 last summer, and the feature is limited to Microsoft's latest (and last?) operating system via its built-in Xbox app. To get started, you first need to enable the Game Streaming option in your Xbox One console's settings (Settings > Preferences > Game DVR & Streaming > Allow Game Streaming to Other Devices).

Once that's done, head to your Windows 10 PC, launch the Xbox app, and sign in with the same Microsoft account you use on your Xbox One. By default, the app will offer to sign you in with the same Microsoft account you're currently using for Windows 10. If your Xbox gamertag profile is associated with a different Microsoft account, just click Microsoft account instead of your current Windows 10 account name to sign in with the correct credentials.

Note, however, that as part of Microsoft's relentless efforts to get everyone in the Virgo Supercluster to join the online Microsoft family, the Xbox app will ask those using a local Windows 10 account if they want to "sign in to this device" using the account associated with their Xbox gamertag, thereby creating a new "online" account on your Windows 10 PC tied to your Xbox account.

If that's what you want, just type your current local account's password and click Next. If, like most users, you intentionally created your local Windows 10 account and have no plans to change it, click "Sign in to just this app instead," which will allow you to continue using your local account while still having access to the Xbox app via your gamertag-associated online Microsoft account.

Once you're logged in to the Xbox app, find and click on the "Connect" button in the sidebar on the left side of the window, which will let you add your Xbox One console as a device in your Windows 10 Xbox app.

Continue reading our comparison of Xbox One Streaming and PlayStation 4 Remote Play!!