Yes, We're Writing About a Forum Post
Update - July 19th @ 7:15pm EDT: Well, that was fast. Futuremark published their statement today. I haven't read it through yet, but there's no reason to wait until I do to link it.
Update 2 - July 20th @ 6:50pm EDT: We interviewed Jani Joki, Futuremark's Director of Engineering, on our YouTube page. The interview is embedded just below this update.
Original post below
Comments on a previous post pointed us to an Overclock.net thread whose author claims that 3DMark's implementation of asynchronous compute is designed to show NVIDIA in the best possible light. At the end of the linked post, they note that "asynchronous compute" is a blanket term, and that we should better understand what is actually going on.
So, before we address the controversy, let's actually explain what asynchronous compute is. The main problem is that it really is a broad term: asynchronous compute could describe any optimization that allows tasks to execute when it is most convenient, rather than blindly running them one after another.
This is asynchronous computing.
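As a loose analogy, here is a minimal Python sketch of that idea: two independent tasks overlap in time instead of executing strictly back-to-back. (This is only an illustration of scheduling in general; actual GPU async compute overlaps graphics and compute queues in hardware, not coroutines.)

```python
# Analogy only: independent tasks run when convenient, overlapping in time,
# rather than serializing. Task names are purely illustrative.
import asyncio
import time

async def task(name, seconds):
    await asyncio.sleep(seconds)   # stand-in for independent work
    return name

async def main():
    start = time.monotonic()
    # Both "queues" are in flight at once; wall time ~= the longer task,
    # not the sum of both.
    results = await asyncio.gather(task("graphics", 0.2), task("compute", 0.2))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 1))
```

Run serially, the two 0.2-second tasks would take about 0.4 seconds; overlapped, the pair finishes in roughly 0.2 seconds.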
Twelve days ago, NVIDIA announced its competitor to the AMD Radeon RX 480, the GeForce GTX 1060, based on a new Pascal GPU: GP106. Though that story was just a brief preview of the product and a pictorial of the GTX 1060 Founders Edition card we were initially sent, it set the community ablaze with discussion around which mainstream enthusiast platform was going to be the best for gamers this summer.
Today we are allowed to show you our full review: benchmarks of the new GeForce GTX 1060 against the likes of the Radeon RX 480, the GTX 970 and GTX 980, and more. Starting at $250, the GTX 1060 has the potential to be the best bargain in the market today, though much of that will be decided based on product availability and our results on the following pages.
Does NVIDIA’s third consumer product based on Pascal make enough of an impact to dissuade gamers from buying into AMD Polaris?
All signs point to a bloody battle this July and August and the retail cards based on the GTX 1060 are making their way to our offices sooner than even those based around the RX 480. It is those cards, and not the reference/Founders Edition option, that will be the real competition that AMD has to go up against.
First, however, it’s important to find our baseline: where does the GeForce GTX 1060 find itself in the wide range of GPUs?
Introduction and First Impressions
A newcomer in the PC enclosure space, RIOTORO has a lineup of unique-looking products to offer in a market flooded with options at every price-point. With this full-tower PRISM CR1280 enclosure the company says that they are providing not just a home for your components, but “the world’s 1st fully RGB case with unparalleled personalization options”.
Clearly, RGB lighting has been one of the biggest trends in PC hardware for the past year or so, and if you are so inclined the PRISM CR1280 promises fully customizable color with lighted accents on the front of the case, and included RGB intake fans.
Beyond the RGB lighting, however, the PRISM CR1280 has a rather unusual industrial design. There is angular black plastic over a steel body, and a large edge-to-edge side panel window (not to mention those bare aluminum feet). It looks like a premium enclosure, and it’s certainly priced like one with an MSRP of $169.99 (selling for $149 currently). Is it worth it? Read on to find out!
Through the looking glass
Futuremark has been the most consistent and most utilized benchmark company for PCs for quite a long time. While other companies have faltered and faded, Futuremark continues to push forward with new benchmarks and capabilities in an attempt to maintain a modern way to compare performance across platforms with standardized tests.
Back in March of 2015, 3DMark added support for an API Overhead test to help gamers and editors understand the performance advantages of Mantle and DirectX 12 compared to existing APIs. Though the results were purely “peak theoretical” numbers, the data helped showcase to consumers and developers what low-level APIs brought to the table.
Today Futuremark is releasing a new benchmark that focuses on DX12 gaming. No longer just a feature test, Time Spy is a fully baked benchmark with its own rendering engine and scenarios for evaluating the performance of graphics cards and platforms. It requires Windows 10 and a DX12-capable graphics card, and includes two different graphics tests and a CPU test. Oh, and of course, there is a stunningly gorgeous demo mode to go along with it.
I’m not going to spend much time here dissecting the benchmark itself, but it does make sense to have an idea of what kind of technologies are built into the game engine and tests. The engine is based purely on DX12, and integrates technologies like asynchronous compute, explicit multi-adapter and multi-threaded workloads. These are highly topical ideas and will be the focus of my testing today.
Futuremark provides an interesting diagram to demonstrate the advantages DX12 has over DX11. Below you will find a listing of the average number of vertices, triangles, patches and shader calls in 3DMark Fire Strike compared with 3DMark Time Spy.
It’s not even close here – the new Time Spy engine issues more than ten times as many processing calls for some of these items. As Futuremark states, however, this kind of capability isn’t free.
With DirectX 12, developers can significantly improve the multi-thread scaling and hardware utilization of their titles. But it requires a considerable amount of graphics expertise and memory-level programming skill. The programming investment is significant and must be considered from the start of a project.
Introduction, Specifications, and Packaging
Everyone expects SSD makers to keep pushing out higher and higher capacity SSDs, but what holds them back is whether there is sufficient market demand for that capacity. With that in mind, it appears Samsung decided it was high time for a 4TB model of their 850 EVO. Today we will be looking at this huge capacity point, paying close attention to any performance dips that sometimes result from pushing a given SSD controller / architecture to extreme capacities.
This new 4TB model benefits from the higher density of Samsung’s 48-layer V-NAND. We performed a side-by-side comparison of 32 and 48 layer products back in March, and found the newer flash brought Latency Percentile profiles closer to those of the MLC-equipped Pro model than the 32-layer (TLC) EVO managed:
Latency Percentile showing reduced latency of Samsung’s new 48-layer V-NAND
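For readers unfamiliar with how a percentile profile is built, here is a rough sketch of the underlying idea using hypothetical latency samples and a simple nearest-rank percentile (our actual Latency Percentile processing is considerably more involved):

```python
# Sketch: a nearest-rank percentile over I/O latency samples. The sample
# values below are hypothetical, chosen to show how a single slow I/O
# dominates the tail of the profile.
def percentile(samples, p):
    s = sorted(samples)
    k = int(round(p / 100 * (len(s) - 1)))  # nearest-rank index
    return s[max(0, min(len(s) - 1, k))]

lat_us = [95, 100, 105, 110, 2000]  # latencies in microseconds (made up)
print(percentile(lat_us, 50))   # median: 105
print(percentile(lat_us, 99))   # tail: 2000 -> the outlier defines QoS
```

The median looks healthy, but the 99th percentile exposes the outlier; that tail behavior is exactly what distinguishes TLC drives from their MLC siblings in our plots.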
We’ll be looking into all of this in today’s review, along with trying our hand at some new mixed paced workload testing, so let’s get to it!
Introduction and Technical Specifications
Courtesy of Primochill
The Praxis WetBench open-air test bench is the newest version of Primochill's test bench line. The updated version of the WetBench features a dual steel-and-acrylic design, offering a stronger base than the original. The acrylic accents give the test bench a unique and compelling aesthetic, and it is offered in over 20 different configurations. The open design and quick-remove panels allow easy access to the motherboard and PCIe cards without the hassle of the case panels and mounting screws involved in a typical motherboard swap. With a starting MSRP of $184.99, the Praxis WetBench is competitively priced against other test bench solutions.
Courtesy of Primochill
Courtesy of Primochill
Like its predecessor, the Praxis WetBench is unique in its design - built to support custom water cooling solutions from the ground up and re-engineered with a stronger structure for added support. Primochill designed the Praxis so a water cooling kit's radiator mounts to the back panel, with support for up to a 280mm or 360mm radiator (or 2 x 140mm or 3 x 120mm fans). The back panel allows for radiator mounting on either the inside or outside of the panel surface.
Radeon Software 16.7.1 Adjustments
Last week we posted a story that looked at a problem found with the new AMD Radeon RX 480 graphics card’s power consumption. The short version of the issue was that AMD’s new Polaris 10-based reference card was drawing more power than its stated 150 watt TDP, and that it was drawing more power through the motherboard PCI Express slot than the connection was rated for. Sometimes that added power draw was significant, both at stock settings and overclocked. Seeing current over a connection rated at just 5.5A peak above 7A at stock settings raised an alarm (validly), and our initial report detailed the problem very specifically.
AMD responded initially that “everything was fine here,” but the company eventually saw the writing on the wall and started to work on potential solutions. The Radeon RX 480 is a very important product for the future of Radeon graphics, and this was a launch that needed to be as perfect as it could be. Though the risk to users’ hardware from the higher than expected current draw is muted somewhat by motherboard-based over-current protection, it’s crazy to think that AMD actually believed that was the ideal scenario. Depending on the “circuit breaker” in any system to save you, when standards exist for exactly that purpose, is nuts.
Today AMD has released a new driver, version 16.7.1, that actually introduces a pair of fixes for the problem. One of them is hard coded into the software and adjusts power draw from the different +12V sources (PCI Express slot and 6-pin connector) while the other is an optional flag in the software that is disabled by default.
Reconfiguring the power phase controller
The Radeon RX 480 uses a very common power controller (IR3567B) on its PCB to cycle through the 6 power phases providing electricity to the GPU itself. Allyn did some simple multimeter trace work to tell us which phases were connected to which sources and the result is seen below.
The power controller is responsible for pacing the power coming in from the PCI Express slot and the 6-pin power connection to the GPU, in phases. Phases 1-3 come in from the power supply via the 6-pin connection, while phases 4-6 source power from the motherboard directly. At launch, the RX 480 drew nearly identical amounts of power from both the PEG slot and the 6-pin connection, essentially giving each of the 6 phases at work equal time.
That might seem okay, but it’s far from what we have seen in the past. In no other case have we measured a graphics card drawing as much power from the PEG slot as from an external power connector on the card. (Obviously for cards without external power connections, that’s a different discussion.) In general, with other AMD and NVIDIA based graphics cards, the motherboard slot provides no more than 50-60 watts of power, with anything above that coming from the 6/8-pin connections on the card. In many cases I saw power draw through the PEG slot as low as 20-30 watts when the external power connections provided plenty of headroom for the target TDP of the product.
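The arithmetic behind the concern is simple enough to sketch. If the rated 150 watt board power were split evenly between the slot and the 6-pin connector, the slot's 12V rail would carry more current than its 5.5A rating allows:

```python
# Back-of-envelope check: an even power split between the PEG slot and the
# 6-pin connector pushes the slot's 12V rail past its 5.5 A rating.
def rail_current(power_w, volts=12.0):
    """Current in amps drawn from a rail at the given voltage."""
    return power_w / volts

board_power = 150.0               # RX 480 rated TDP in watts
peg_share = board_power / 2       # even split: 75 W through the slot
print(rail_current(peg_share))    # 6.25 A -> exceeds the 5.5 A slot rating
```

And that is at stock settings; the overclocked readings we measured imply slot currents well beyond even that figure.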
It’s probably not going to come as a surprise to anyone that reads the internet, but NVIDIA is officially taking the covers off its latest GeForce card in the Pascal family today, the GeForce GTX 1060. As the number scheme would suggest, this is a more budget-friendly version of NVIDIA’s latest architecture, lowering performance in line with expectations. The GP106-based GPU will still offer impressive specifications and capabilities and will probably push AMD’s new Radeon RX 480 to its limits.
Let’s take a quick look at the card’s details.
|GTX 1060||RX 480||R9 390||R9 380||GTX 980||GTX 970||GTX 960||R9 Nano||GTX 1070|
|GPU||GP106||Polaris 10||Grenada||Tonga||GM204||GM204||GM206||Fiji XT||GP104|
|Rated Clock||1506 MHz||1120 MHz||1000 MHz||970 MHz||1126 MHz||1050 MHz||1126 MHz||up to 1000 MHz||1506 MHz|
|Texture Units||80 (?)||144||160||112||128||104||64||256||120|
|ROP Units||48 (?)||32||64||32||64||56||32||64||64|
|Memory Clock||8000 MHz||7000 MHz||6000 MHz||5700 MHz||7000 MHz||7000 MHz||7000 MHz||500 MHz||8000 MHz|
|Memory Interface||192-bit||256-bit||512-bit||256-bit||256-bit||256-bit||128-bit||4096-bit (HBM)||256-bit|
|Memory Bandwidth||192 GB/s||224 GB/s||384 GB/s||182.4 GB/s||224 GB/s||196 GB/s||112 GB/s||512 GB/s||256 GB/s|
|TDP||120 watts||150 watts||275 watts||190 watts||165 watts||145 watts||120 watts||275 watts||150 watts|
|Peak Compute||3.85 TFLOPS||5.1 TFLOPS||5.1 TFLOPS||3.48 TFLOPS||4.61 TFLOPS||3.4 TFLOPS||2.3 TFLOPS||8.19 TFLOPS||5.7 TFLOPS|
The GeForce GTX 1060 will sport 1280 CUDA cores with a GPU Boost clock speed rated at 1.7 GHz. Though the card will be available in only 6GB varieties, the reference / Founders Edition will ship with 6GB of GDDR5 memory running at 8.0 GHz / 8 Gbps. With 1280 CUDA cores, the GP106 GPU is essentially one half of a GP104 in terms of compute capability. NVIDIA decided not to cut the memory interface in half though, instead going with a 192-bit design compared to the GP104 and its 256-bit option.
The rated GPU clock speeds paint an interesting picture for peak performance of the new card. At its rated boost clock, the GeForce GTX 1070 produces 6.46 TFLOPS of performance. The GTX 1060 by comparison will hit 4.35 TFLOPS, a 48% difference. The GTX 1080 offers nearly the same performance delta above the GTX 1070; clearly NVIDIA has settled on consistent performance scaling across the Pascal product stack.
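Those peak numbers fall out of a simple formula: shader count times clock speed times two FLOPS per clock (one fused multiply-add). A quick sketch, using the GTX 1070's 1920 CUDA cores and 1683 MHz boost clock alongside the GTX 1060's figures:

```python
# Peak FP32 throughput = CUDA cores x boost clock x 2 FLOP/cycle (FMA),
# expressed in TFLOPS.
def peak_tflops(cores, boost_ghz):
    return cores * boost_ghz * 2 / 1000.0

print(round(peak_tflops(1280, 1.7), 2))    # GTX 1060: 4.35 TFLOPS
print(round(peak_tflops(1920, 1.683), 2))  # GTX 1070: 6.46 TFLOPS
```

The same formula applied to the GTX 1080 (2560 cores at 1733 MHz boost) lands near 8.9 TFLOPS, which is where the similar step-up between tiers comes from.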
NVIDIA wants us to compare the new GeForce GTX 1060 to the GeForce GTX 980 in gaming performance, but the peak theoretical performance results don’t really match up. The GeForce GTX 980 is rated at 4.61 TFLOPS at its BASE clock speed, while the GTX 1060 doesn’t hit that number even at its Boost clock. Pascal does improve on performance with memory compression advancements, but the 192-bit memory bus is only able to run at 192 GB/s, compared to the 224 GB/s of the GTX 980. Obviously we’ll have to wait for performance results from our own testing to be sure, but it seems possible that NVIDIA’s performance claims might depend on technology like Simultaneous Multi-Projection and VR gaming to be validated.
Introduction, Specifications and Packaging
It's been too long since we took a look at enterprise SSDs here at PC Perspective, so it's high time we get back to it! The delay has stemmed from some low-level re-engineering of our test suite to unlock some really cool QoS and Latency Percentile possibilities involving PACED workloads. We've also done a lot of work to distill hundreds of hours of test results into fewer yet more meaningful charts. More on that as we get into the article. For now, let's focus on today's test subject:
Behold the Micron 9100 MAX Series. Inside that unassuming 2.5" U.2 enclosure sits 4TB of flash and over 4GB of DRAM. It's capable of 3 GB/s reads, 2 GB/s writes, and 750,000 IOPS. All from inside that little silver box! There's not a lot more to say here, because nobody is going to read much past that 3/4 MILLION IOPS figure I just slipped in, so I'll just get into the rest of the article now :).
The 9100s come in two flavors and form factors. The MAX series (1.2TB and 2.4TB in the above list) comes with very high levels of performance and endurance, while the PRO series comes with lower overprovisioning, enabling higher capacity points for a given flash loadout (800GB, 1.6TB, 3.2TB). Those five capacity / performance points are available in both PCIe (HHHL) and U.2 (2.5") form factors, making for 10 total available SKUs. All products are PCIe 3.0 x4, using NVMe as their protocol, and should be bootable on systems capable of UEFI/NVMe BIOS enumeration.
Idle power consumption is a respectable 7W, while active consumption is selectable between 20W, 25W, and 'unlimited' settings. While >25W operation technically exceeds the PCIe specification for non-GPU devices, we know the physical slot is capable of 75W for GPUs, so why shouldn't SSDs have some fun too? That said, even in unlimited mode the 9100s should stick relatively close to 25W, and in our testing they did not exceed 29W under any workload. Detailed power testing is coming in future enterprise articles, but for now, the extent is what was measured and noted in this paragraph.
Our 9100 MAX samples came only in anti-static bags, so no fancy packaging to show here. Enterprise parts typically come in white/brown boxes with little flair.
The New Corinthian Leather?
I really do not know what happened to me, but I used to hate racing games. I mean, really hate them. I played old, old racing games on Atari. I had some of the first ones available on PC. They did not appeal to me in the least. Instant buyer’s remorse for the most part. Then something strange happened. 3D graphics technology changed that opinion. Not only did hardware accelerated 3D help me get over my dislike, but the improvements in physical simulations also allowed a greater depth of experience. Throw in getting my first force feedback device and NFS: Porsche Unleashed and I was hooked from then on out.
The front of the box shows the lovely Ferrari 599XX supercar with the wheel in the foreground.
The itch to improve the driving experience only grows as time goes on. More and more flashy looking titles are released, some of which actually improve upon the simulation with complex physics rewrites, all of which consume more horsepower from the CPU and GPU. This then leads to more hardware upgrades. The next thing a person knows they are ordering multiple monitors so they can just experience racing in Surround/Eyefinity (probably the best overall usage for the technology).
One bad thing about having a passion for something is that the itch to improve the experience never goes away. DiRT 2 inspired me to purchase my first FFB wheel, the TM Ferrari F420 model. Several games later, my disappointment with the F420’s 270 degrees of steering had me pursue my next purchase: a TX F458 Ferrari Edition racing wheel. This featured the TX base, the stock plastic Ferrari wheel, and the two-pedal set. The upgrade from the older TM F420 to 900 degrees of rotation and far better FFB effects was tremendous. Not only that, but the TX platform was upgradeable. The gate leading to madness was now open.
The TX base can fit a variety of 2 and 3 pedal systems, but the big push is towards the actual wheel itself. Thrustmaster has several rims that fit the base, featuring materials such as plastic, rubber, and leather. These products go from $120 on up to around $150, comprising three GT style wheels and one F1 wheel. All of them look pretty interesting and are a big step up from the bundled F458 replica that comes standard with the TX set.
The rear shows the rim itself at actual size.
I honestly had not thought about upgrading to any of these units, as I was pleased with the feel and performance of the stock wheel. It seemed to fit my needs. Then it happened. Thrustmaster announced the Ferrari 599XX EVO wheel with honest-to-goodness Alcantara™ construction. The more I read about this wheel, the more I wanted it. The only problem in my mind was that it is priced at a rather dramatic $179. I had purchased the entire TX F458 setup on sale for only $280 some months before! Was the purchase of the 599XX worth it? Would it dramatically change my gaming experience? I guess there is only one way to find out. I hid the credit card statement and told my wife, “Hey, look what I got in for review!”
Introduction and Specifications
Lenovo made quite a splash with the introduction of the original X1 Carbon notebook in 2012; with its ultra-thin, ultra-light, and carbon fiber-infused construction, it became the flagship ThinkPad notebook. Fast-forward to late 2013, and the introduction of the ThinkPad Yoga; the business version of the previous year's consumer Yoga 2-in-1. The 360-degree hinge was novel for a business machine at the time, and the ThinkPad Yoga had a lot of promise, though it was far from perfect.
Now we fast-forward again, to the present day. It's 2016, and Lenovo has merged their ThinkPad X1 Carbon and ThinkPad Yoga together to create the X1 Yoga. This new notebook integrates the company's Yoga design (in appearance this is akin to the recent ThinkPad Yoga 260/460 revision) into the flagship ThinkPad X lineup, and provides what Lenovo is calling "the world's lightest 14-inch business 2-in-1".
Yoga and Carbon Merge
When Lenovo announced the marriage of the X1 Carbon notebook with the ThinkPad Yoga, I took notice. As a buyer of the original ThinkPad Yoga S1 (with which I had a love/hate relationship), I wondered if the new X1 version of the business-oriented Yoga convertible would win me over. On paper it checks all the right boxes, and the slim new design looks great. I couldn't wait to get my hands on one for some real-world testing, and to see if my complaints about the original TP Yoga design were still valid.
As one would expect from a notebook carrying Lenovo’s ThinkPad X1 branding, this new Yoga is quite slim, and made from lightweight materials. Comparing this new Yoga to the X1 Carbon directly, the most obvious difference is that 360° hinge, which is the hallmark of the Yoga series, and exclusive to those Lenovo designs. This hinge allows the X1 Yoga to be used as a notebook, tablet, or any other imaginable position in between.
|Lenovo ThinkPad X1 Yoga (base configuration, as reviewed)|
|Processor||Intel Core i5-6200U (Skylake)|
|Graphics||Intel HD Graphics 520|
|Screen||14-in 1920x1080 IPS Touch (with digitizer, active pen)|
|Storage||256GB M.2 SSD|
|Camera||720p / Digital Array Microphone|
|Wireless||Intel 8260 802.11ac + BT 4.1 (Dual Band, 2x2)|
|Connectivity||3x USB 3.0, Audio combo jack|
|Dimensions||333mm x 229mm x 16.8mm (13.11" x 9.01" x 0.66")|
|Weight||2.8 lbs. (1270 g)|
|OS||Windows 10 Pro|
|Price||$1349 - Amazon.com|
Pre and Post Update Testing
Samsung launched their 840 Series SSDs back in May of 2013, which is over three years ago as of this writing. They were well-received as a budget unit but rapidly eclipsed by the follow-on release of the 840 EVO.
A quick check of our test 840 revealed inconsistent read speeds.
We broke news of Samsung’s TLC SSDs being affected by a time-based degradation of read speeds in September of 2014, and since then we have seen nearly every affected product patched by Samsung, with one glaring exception - the original 840 SSD. While the 840 EVO was a TLC SSD with a built-in SLC static data cache, the preceding 840 was a pure TLC drive. With the focus being on the newer / more popular drives, I had done only spot-check testing of our base 840 sample here at the lab, but once I heard there was finally a patch for this unit, I set out to do some pre-update testing so that I could gauge any improvements to read speed from this update.
As a refresher, ‘stale’ data on an 840 EVO would see reduced read speeds over a period of months after those files were written to the drive. This issue was properly addressed in a firmware update issued back in April of 2015, but there were continued grumbles from owners of other affected drives, namely the base model 840. With the Advanced Performance Optimization patch being issued so long after the others, I’m left wondering why there was such a long delay on this one. Differences in the base-840’s demonstration of this issue revealed themselves in my pre-patch testing:
Too much power to the people?
UPDATE (7/1/16): I have added a third page to this story that looks at the power consumption and power draw of the ASUS GeForce GTX 960 Strix card. This card was pointed out by many readers on our site and on reddit as having the same problem as the Radeon RX 480. As it turns out...not so much. Check it out!
UPDATE 2 (7/2/16): We have an official statement from AMD this morning.
As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).
Honestly, that doesn't tell us much. And AMD appears to be deflecting slightly by using words like "some RX 480 boards". I don't believe this is limited to a subset of cards, or review samples only. AMD does indicate that the 8 Gbps memory on the 8GB variant might be partially to blame - which is an interesting correlation to test out later. The company does promise a fix for the problem via a driver update on Tuesday - we'll be sure to give that a test and see what changes are measured in both performance and in power consumption.
The launch of the AMD Radeon RX 480 has generally been considered a success. Our review of the new reference card shows impressive gains in architectural efficiency, improved positioning against NVIDIA’s competing parts in the same price range as well as VR-ready gaming performance starting at $199 for the 4GB model. AMD has every right to be proud of the new product and should have this lone position until the GeForce product line brings a Pascal card down into the same price category.
If you read carefully through my review, there was some interesting data that cropped up around the power consumption and delivery on the new RX 480. Looking at our power consumption numbers, measured directly from the card rather than at the wall, it was drawing slightly more than the 150 watt TDP it is advertised at. This was done at 1920x1080 and tested in both Rise of the Tomb Raider and The Witcher 3.
When overclocked, the results were even higher, approaching the 200 watt mark in Rise of the Tomb Raider!
A portion of the review over at Tom’s Hardware produced similar results but detailed the power consumption from the motherboard PCI Express connection versus the power provided by the 6-pin PCIe power cable. There has been a considerable amount of discussion in the community about the amount of power the RX 480 draws through the motherboard, whether it is out of spec and what kind of impact it might have on the stability or life of the PC the RX 480 is installed in.
As it turns out, we have the ability to measure the exact same kind of data, albeit through a different method than Tom’s, and wanted to see if the result we saw broke down in the same way.
Our Testing Methods
This is a complex topic so it makes sense to detail the methodology of our advanced power testing capability up front.
How do we do it? Simple in theory but surprisingly difficult in practice, we are intercepting the power being sent through the PCI Express bus as well as the ATX power connectors before they go to the graphics card and are directly measuring power draw with a 10 kHz DAQ (data acquisition) device. A huge thanks goes to Allyn for getting the setup up and running. We built a PCI Express bridge that is tapped to measure both 12V and 3.3V power and built some Corsair power cables that measure the 12V coming through those as well.
The result is data that looks like this.
What you are looking at here is the power measured from the GTX 1080. From time 0 to time 8 seconds or so, the system is idle, from 8 seconds to about 18 seconds Steam is starting up the title. From 18-26 seconds the game is at the menus, we load the game from 26-39 seconds and then we play through our benchmark run after that.
There are four lines drawn in the graph, the 12V and 3.3V results are from the PCI Express bus interface, while the one labeled PCIE is from the PCIE power connection from the power supply to the card. We have the ability to measure two power inputs there but because the GTX 1080 only uses a single 8-pin connector, there is only one shown here. Finally, the blue line is labeled total and is simply that: a total of the other measurements to get combined power draw and usage by the graphics card in question.
From this we can see a couple of interesting data points. First, the idle power of the GTX 1080 Founders Edition is only about 7.5 watts. Second, under a gaming load of Rise of the Tomb Raider, the card is pulling about 165-170 watts on average, though there are plenty of intermittent spikes. Keep in mind we are sampling the power at 10 kHz, so this kind of behavior is more or less expected.
Different games and applications impose different loads on the GPU and can cause it to draw drastically different power. Even if a game runs slowly, it may not be drawing maximum power from the card if a certain system on the GPU (memory, shaders, ROPs) is bottlenecking other systems.
One interesting note on our data compared to what Tom’s Hardware presents – we are using a second order low pass filter to smooth out the data, making it more readable and more indicative of how power draw is handled by the components on the PCB. Tom’s story reported “maximum” power draw at 300 watts for the RX 480, and while that is technically accurate, those figures represent instantaneous power draw. That is interesting data in some circumstances, and may actually indicate other potential issues with excessively noisy power circuitry, but to us, it makes more sense to sample data at a high rate (10 kHz) but then filter it and present it in a more readable way that better meshes with the continuous power delivery capabilities of the system.
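To illustrate why the filtering matters, here is a simplified sketch of a second-order low-pass built from two cascaded single-pole IIR stages. The 50 Hz cutoff is a hypothetical value for illustration, not the exact filter we use:

```python
import math

# Sketch: smooth 10 kHz power samples with two cascaded single-pole IIR
# stages (approximating a second-order low-pass). Cutoff is hypothetical.
def low_pass(samples, fs=10_000, cutoff=50.0):
    alpha = 1 - math.exp(-2 * math.pi * cutoff / fs)  # per-sample smoothing factor
    def one_pole(xs):
        y, out = xs[0], []
        for x in xs:
            y += alpha * (x - y)   # move a fraction of the way toward the input
            out.append(y)
        return out
    return one_pole(one_pole(samples))  # cascade twice -> second-order rolloff

# A single-sample 300 W spike in otherwise steady 150 W data barely moves
# the filtered trace, which tracks continuous delivery instead of transients.
trace = [150.0] * 500 + [300.0] + [150.0] * 500
print(max(low_pass(trace)))
```

An instantaneous-capture approach reports that trace as "300 watts"; the filtered trace stays within a few watts of 150, which is far closer to what the power delivery components actually have to sustain.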
Image source: E2E Texas Instruments
An example of instantaneous voltage spikes on power supply phase changes
Some gamers have expressed concern over that “maximum” power draw of 300 watts on the RX 480 that Tom’s Hardware reported. While that power measurement is technically accurate, it doesn’t represent the continuous power draw of the hardware. Instead, that measure is a result of a high frequency data acquisition system that may take a reading at the exact moment that a power phase on the card switches. Any DC switching power supply that is riding close to a certain power level is going to exceed that on the leading edges of phase switches for some minute amount of time. This is another reason why our low pass filter on power data can help represent real-world power consumption accurately. That doesn’t mean the spikes they measure are not a potential cause for concern, that’s just not what we are focused on with our testing.
Polaris 10 Specifications
It would be hard at this point to NOT know about the Radeon RX 480 graphics card. AMD and the Radeon Technologies Group have been talking publicly about the Polaris architecture since December of 2015, with lofty ambitions. Given the precarious position the company is in, well behind in market share and struggling to compete with the dominant player in the market (NVIDIA), the team was willing to sacrifice sales of current generation parts (300-series) in order to excite the user base for the upcoming move to Polaris. It is a risky bet, and one that will play out over the next few months in the market.
Since then, AMD has released information a bit at a time. First there were details on the new display support, then information about the advantages of the 14nm process technology. We saw demos of working silicon at CES with targeted form factors, and then, at events in Macau, the company walked press through the full architecture details. At Computex it announced rough performance metrics and a price point. Finally, at E3, AMD discussed the RX 460 and RX 470 cousins and the release date of…today. It’s been quite a whirlwind.
Today the rubber meets the road: is the Radeon RX 480 the groundbreaking and stunning graphics card that we have been promised? Or does it struggle again to keep up with the behemoth that is NVIDIA’s GeForce product line? AMD’s marketing team would have you believe that the RX 480 is the start of some kind of graphics revolution – but will the coup be successful?
Join us for our second major graphics architecture release of the summer and learn for yourself if the Radeon RX 480 is your next GPU.
A new competitor has entered the arena!
When we first saw the announcement of the MateBook in Spain back in March, pricing was immediately impressive. The base model of the tablet starts at just $699; $200 less than the lowest-priced Surface Pro 4, with features and performance that pretty closely match one another.
The MateBook only ships with Core m processors, a necessity of the incredibly thin and fanless design that Huawei is using. That will obviously put the MateBook behind other tablets and notebooks that use Core i3/i5/i7 processors, but with a power consumption advantage along the way. Honestly, the performance differences between the Core m3, m5, and m7 parts are pretty small – all share the same 4.5 watt TDP, and all have fairly low base clock speeds and high boost clocks. The Core m5-6Y54 that rests in our test sample has a base clock of 1.1 GHz and a maximum Turbo Boost clock of 2.7 GHz; the top-end Core m7-6Y75 has a base of 1.2 GHz and a boost of 3.1 GHz. The secret, of course, is that these processors run at Turbo clocks very infrequently – only during touch interactions and when applications demand performance.
If your workload regularly requires intensive transcoding, video editing, or even high-resolution photo manipulation, the Core m parts are going to be slower than the Core i-series options available in other solutions. If you just occasionally need to use an application like Photoshop, the MateBook has no problem doing so.
|Huawei MateBook Tablet PC||
|---|---|
|Screen|12-in 2160x1440 IPS|
|CPU|Core m3 / Core m5 / Core m7|
|GPU|Intel HD Graphics 515|
|Network|802.11ac MIMO (2.4 GHz, 5.0 GHz), Gigabit Ethernet (MateDock)|
|Display Output|HDMI / VGA (through MateDock)|
|Connectivity|USB 3.0 Type-C, USB 3.0 x 2 (MateDock)|
|Audio|Dual Digital Mic|
|Weight|640g (1.41 lbs)|
|Dimensions|278.8mm x 194.1mm x 6.9mm (10.9-in x 7.6-in x 0.27-in)|
|Operating System|Windows 10 Home / Pro|
Update: The Huawei Matebook is now available on Amazon.com!
At the base level, both the Surface Pro 4 and the MateBook have identical specs, but the Huawei unit is priced $200 lower. After that, things get more complicated: the Surface Pro 4 moves to Core i5 and Core i7 processors while the MateBook sticks with m5 and m7 parts, though storage capacities and memory sizes scale similarly. The lowest entry point for a MateBook with 256GB of storage and 8GB of memory is $999, with a Core m5 processor; a comparable Surface Pro 4 uses a Core i5 CPU instead but will run you $1199. If you want to move from 256GB to 512GB of storage, Microsoft wants $400 more for your SP4, while Huawei’s price only goes up $200.
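The price comparison above, as simple arithmetic (mid-2016 list prices as quoted in the article; configuration labels simplified):

```python
# Prices are the 2016 list prices quoted in the text.
matebook_256 = 999    # MateBook: Core m5, 256GB storage, 8GB RAM
sp4_256 = 1199        # Surface Pro 4: Core i5, 256GB storage, 8GB RAM

matebook_512 = matebook_256 + 200   # Huawei's 256GB -> 512GB upcharge
sp4_512 = sp4_256 + 400             # Microsoft's 256GB -> 512GB upcharge

print(f"gap at 256GB: ${sp4_256 - matebook_256}")   # $200
print(f"gap at 512GB: ${sp4_512 - matebook_512}")   # $400
```

The gap widens as you climb the storage tiers, which is where the MateBook's value case is strongest.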
Introduction and Features
In this review we will take a detailed look at one of be quiet!’s top-of-the-line power supplies, the Dark Power Pro 11 750W. There are currently six power supplies in the Dark Power Pro 11 Series: 550W, 650W, 750W, 850W, 1000W and 1200W models. As you might expect, be quiet! continues to focus on delivering virtually silent power supplies, and they are one of the top-selling brands in Europe. All of the Dark Power Pro 11 models are certified for high efficiency (80 Plus Platinum) and come with modular cables.
be quiet! designed the Dark Power Pro 11 Series to provide high efficiency with minimal noise for systems that demand whisper-quiet operation without compromising on power quality. In addition to the Dark Power Pro 11 Series, be quiet! offers a full range of power supplies in ATX, SFX, and TFX form factors.
(Courtesy of be quiet!)
All of the Dark Power Pro 11 Series power supplies are semi-modular (all cables are modular except for the fixed 24-pin ATX cable). Along with 80 Plus Platinum certified high efficiency and quiet operation, the Dark Power Pro 11 750W PSU features an “overclocking” key to select between multi-rail and single rail +12V outputs. The power supply uses be quiet!’s latest SilentWings3 135mm fan for virtually silent operation. The fan speed starts out very slow and remains slow and quiet through mid-power levels. And the Dark Power Pro 11 power supplies allow connecting up to four case fans, whose speed will be controlled by the PSU.
be quiet! Dark Power Pro 11 750W PSU Key Features:
• 750W continuous DC output (ATX12V v2.4, EPS 2.92 compliant)
• Virtually inaudible SilentWings3 135mm FDB cooling fan
• 80 PLUS Platinum certified efficiency (up to 94%)
• Premium 105°C rated parts enhance stability and reliability
• Powerful GPU support with seven PCI-E connectors
• User-friendly cable management reduces clutter and improves airflow
• NVIDIA SLI Ready and AMD CrossFire X certified
• ErP 2014 ready and meets Energy Star 6.0 guidelines
• Zero load design supports Intel’s Deep Power Down C6 & C7 modes
• Overclocking key selects between single or multiple +12V rails
• Active Power Factor correction (0.99) with Universal AC input
• Intelligent speed control for up to four case fans
• Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
• 5-Year warranty
Here is what be quiet! has to say about the Dark Power Pro 11 750W PSU: "It is a fact of the modern world that high technology requires constant refinement and unending improvement – and that is even truer for those who would be leaders. Dark Power Pro power supplies are renowned as the world’s quietest and most efficient high-performance PSUs. The Dark Power Pro 11 750W model takes that a step further with a power conversion topology that delivers 80Plus Platinum performance, add to that an unparalleled array of enhancements that augment this unit’s compatibility, convenience of use, reliability, and safety, and the result is the most technologically-advanced power supply be quiet! has ever built.”
Introduction, Specifications, and Packaging
Western Digital launched their My Passport Wireless nearly two years ago. It was a nifty device that could back up or offload SD cards without the need for a laptop, making it ideal for photographers in the field. I came away from that review wondering just how much more you could pack into a device like that, and today I get to find out:
Not to be confused with the My Passport Pro (a TB-connected portable RAID storage device), the My Passport Wireless Pro is meant for on-the-go photographers who seek to back up their media while in the field but also lighten their backpacks. The concept is simple - have a small device capable of offloading (or backing up) SD cards without having to lug along your laptop and a portable hard drive to do so. Add in a wireless hotspot with WAN pass-through along with mobile apps to access the media and you can almost get away without bringing a laptop at all. Oh, and did I mention this one can also import photos and videos from your smartphone while charging it via USB?
- Capacity: 2TB and 3TB
- Battery: 6,400 mAh / 24 Wh
- UHS-I SD Card Reader
- USB 3.0 (upstream) port for data and charging
- USB 2.0 (downstream) port for importing and charging smartphones
- 802.11ac + n dual-band (2.4 / 5 GHz) WiFi
- 2.4A Travel Charge Adapter (included)
- Plex Media Server capable
- Available 'My Cloud' mobile apps
No surprises here. A 2.4A travel charge adapter is included this time around, which is a nice touch.
ARM Releases Egil Specs
The final product that ARM showed us at that Austin event is the latest video processing unit that will be integrated into their Mali GPUs. The Egil video processor is a next generation unit that will be appearing later this year with the latest products that utilize Mali GPUs up and down the spectrum. It is not tied to the latest G71 GPU, but rather can be used with a multitude of current Mali products.
Video is one of the biggest use cases for modern SOCs in mobile devices. People constantly stream and record video from their handhelds and tablets, and there are some real drawbacks in current video processor products from a variety of sources. We have seen an amazing increase in pixel density on phones and tablets, and the power required to render video effectively on these products has gone up. We have also seen the introduction of new codecs that require a serious amount of processing capability to decode.
Egil is a scalable product that can go from one core to six. A single core can display video from a variety of codecs at 1080p and up to 80 fps, while the six-core solution can play back 4K video at 120 Hz. This assumes the Egil processor is produced on a 16nm FF process or smaller and running at 800 MHz. This gives SOC manufacturers a lot of flexibility, allowing them to tailor their products for specific targets and markets.
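As a sanity check on the quoted figures, raw pixel-rate arithmetic (a rough proxy – codec complexity and bitrate also matter) shows the one-core and six-core configurations scale essentially linearly:

```python
def pixel_rate(width, height, fps):
    """Pixels per second for a given resolution and frame rate."""
    return width * height * fps

one_core = pixel_rate(1920, 1080, 80)      # quoted single-core: 1080p at 80 fps
six_cores = pixel_rate(3840, 2160, 120)    # quoted six-core: 4K at 120 Hz

print(one_core)               # 165888000 px/s per core
print(six_cores / one_core)   # 6.0 -- exactly linear across six cores
```

That the six-core figure works out to exactly six times the single-core figure suggests ARM specified the configurations around even scaling of the fixed-function blocks.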
The cores themselves are fixed-function blocks with dedicated controllers and control logic. Previous video processors were weighted more heavily toward decode than encode. Now that streaming from mobile devices is more pervasive and cameras/optics can support higher resolutions and bitrates, ARM has redesigned Egil to offer extensive encoding capabilities: it can not only decode at 4K but also encode four 1080p30 streams at the same time.
Egil will eventually find its way into other products such as TVs. These custom SOCs will become even more important as 4K playback and media grow more common, along with potential new functionality that has yet to be implemented effectively on TVs. For the time being we will likely see this in mobile first, with the initial products hitting the market in the second half of 2016.
ARM is certainly on a roll this year, introducing new CPU, GPU, and now video processor designs. We will start to see these products being introduced through the end of this year and into the next. The company certainly has not been resting or letting potential competitors get the edge on it. Its products have always focused on consuming low amounts of power, but the potential performance looks to satisfy even power-hungry users in the mobile and appliance markets. Egil is another solid-looking addition to the lineup, bringing impressive performance and codec support for both decoding and encoding.
Introduction, Dynamic Write Acceleration, and Packaging
Micron joined Intel in announcing their joint-venture production of IMFT 3D NAND just over a year ago. The industry was naturally excited, since IMFT has historically enabled relatively efficient production, ultimately resulting in reduced SSD prices over time. I suspect this time will be no different, as IMFT's 3D flash has been aiming at high die capacities since its inception, and I suspect their second generation will *double* per-die capacities while keeping speeds reasonable, thanks to a quad-plane design implemented from the start of this endeavor. Of course, I'm getting ahead of myself a bit, as there are no consumer products sporting this flash just yet - well, not until today at least:
Marketed under Micron's consumer brand Crucial, the MX300 is the first consumer SSD sporting IMFT 3D NAND. Crucial is known for their budget-minded SSDs, and for the MX300 they chose to go with the best cost/GB they could manage, which meant putting this new 3D NAND into TLC mode. Now there are many TLC haters out there, but remember this is 3D NAND. Samsung's 850 EVO can exceed 500 MB/sec writes to TLC at its 500GB capacity point, and the MX300 is launching with *only* a 750GB capacity, so its TLC speed should be at least reasonable.
(the return of) Dynamic Write Acceleration
Dynamic Write Acceleration in action during a sequential fill - that last slowest part was my primary concern for the MX300.
TLC is not the only story here because Crucial has included their Dynamic Write Acceleration (DWA) technology into the MX300. This is a tech where the SSD controller is able to dynamically switch flash programming modes of the flash pool, doing so at the block level. This appears to be a feature unique to IMFT flash, as every other 'hybrid' SSD we have tested had a static SLC cache area. DWA's ability to switch flash modes on-the-fly has always fascinated me on paper, but I just haven't been impressed by Micron's previous attempts to implement it. The M600 was a bit all over the place on its write consistency, and that SSD was flipping blocks between SLC and MLC. With the MX300 flipping between SLC and *TLC*, there was a possibility of far more noticeable slow downs in the cases where large writes were taking place and the controller was caught trying to scavenge space in the background.
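A toy two-speed model, with purely illustrative numbers rather than measured MX300 figures or Micron's actual firmware behavior, shows why a sequential fill can start fast and end slow once the SLC-mode blocks run out:

```python
def fill_time(total_gb, cache_gb, slc_mb_s, tlc_mb_s):
    """Seconds to sequentially fill a drive under a simple two-speed model:
    writes land at SLC speed until the cache is exhausted, then drop to
    native TLC speed."""
    fast = min(total_gb, cache_gb)
    slow = max(0.0, total_gb - cache_gb)
    return (fast * 1024) / slc_mb_s + (slow * 1024) / tlc_mb_s

# Hypothetical: 100GB written, 20GB worth of blocks flipped to SLC mode,
# 500 MB/s in SLC mode vs 150 MB/s in TLC mode. All numbers assumed.
t = fill_time(100, 20, 500, 150)
avg = 100 * 1024 / t
print(f"fill takes {t:.0f}s, average {avg:.0f} MB/s")
```

Even with a fast cache, the average is dominated by the slower TLC phase, which is exactly the "last slowest part" visible in the sequential-fill plot. The real MX300 is more complex, since DWA can reclaim and re-flip blocks in the background.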
New Latency Percentile vs. legacy IO Percentile, shown here highlighting a performance inconsistency seen in the Toshiba OCZ RD400. Note which line more closely represents the Latency Distribution (gray) also on this plot.
Introduction and Features
SilverStone is a veteran in the PC power supply industry and they continue to offer a full line of enclosures, power supplies, fans, coolers, and PC accessories. They have raised the bar in their Strider power supply series, which now includes three 80 Plus Titanium certified units, the ST60F-TI, ST70F-TI, and ST80F-TI. These three units are billed as being “the world’s smallest 80 Plus Titanium, full-modular ATX power supplies”; with a chassis that is only 150mm (5.9”) deep.
The 80 Plus Titanium efficiency standards were introduced in 2012 and are the most demanding specifications to date. In addition to raising the efficiency requirements at 20%, 50%, and 100% loads, the Titanium standard adds a new requirement at 10% load. This ensures that a Titanium certified power supply will operate with at least 90% efficiency over the full range of loads and deliver up to 94% efficiency at a 50% load.
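To put those percentages in wall-outlet terms, here is a quick sketch for a 600W unit like the ST60F-TI, using only the efficiency figures quoted above:

```python
def wall_draw(dc_load_w, efficiency):
    """AC input power needed to deliver a given DC load; the difference
    between input and output is dissipated as heat in the PSU."""
    return dc_load_w / efficiency

# 600W unit at 50% load (300W DC) at the 94% Titanium peak efficiency.
half_load = wall_draw(300, 0.94)
print(f"AC input: {half_load:.0f}W, waste heat: {half_load - 300:.0f}W")
```

At 94% efficiency the PSU only turns about 19W into heat while delivering 300W, which is what lets a Titanium unit keep its fan slow and quiet through the mid-power range.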
I’m also happy to report that SilverStone is now providing a 5-year warranty on both the Strider Titanium and Strider Platinum series power supplies; up from their standard 3-year warranty.
SilverStone ST60F-TI Power Supply Key Features:
• 80 Plus Titanium certified for super-high efficiency
• Compact design with a depth of 150mm for easy integration
• All-modular, flat ribbon-style cables
• 100% Japanese-made capacitors
• Strict ±3% voltage regulation and low AC ripple and noise
• Powerful single +12V rail
• Four PCI-E connectors for multiple GPU support
• Safety Protections: OCP, OTP, OPP, UVP, OVP, and SCP
• Quiet 120mm fan with Fluid Dynamic bearing
• 140mm dust filter
• 5-Year warranty