The Killer 1535 Wi-Fi adapter was the first 2x2 MU-MIMO compatible adapter on the market when it launched earlier this year, and is only found in a few products right now. We had a chance to test it out with the recently reviewed MSI G72 Dominator Pro G-Sync laptop, using the new Linksys EA8500 MU-MIMO router. How did it perform, and just what is MU-MIMO? Read on to find out!
Killer Networks certainly haven't skimped on the hardware with their new wireless adapter, as the Wireless-AC 1535 features two external 5 GHz signal amplifiers and is 802.11ac Wave 2 compliant with its support for MU-MIMO and Transmit Beamforming. And while the adapter itself certainly sounds impressive, the real star here – besides the MU-MIMO support – is the Killer software. With these two technologies Killer has a unique product on the market, and if it works as advertised it creates an attractive alternative to the typical Wi-Fi solution.
MU-MIMO: What is it?
With an increasing number of devices using Wi-Fi in the average connected home the strain on a wireless network can often be felt. Just as one download can bring your internet connection to a crawl, one computer can hog nearly all available bandwidth from your router. MU-MIMO offers a solution to the network limitations of a typical multi-user home, and in fact the MU in MU-MIMO stands for Multi-User. The technology is part of the Wave 2 spec for 802.11ac, and it works differently than standard MIMO (multiple input, multiple output) technology. What’s the difference?
With standard MIMO (also known as Single-User MIMO), compatible devices take advantage of multiple spatial data streams to provide faster data to a single device than would otherwise be available. Multiple antennas on both the base station and the client device are used to create the transmit/receive streams needed for the added bandwidth, and the number of streams is equal to the number of antennas (1x1 supports one stream, 2x2 supports two streams, etc.).
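To make the multi-user benefit concrete, here is a toy airtime model (my own illustration, not something from Killer or Linksys): with SU-MIMO the router serves one client per transmit opportunity, so clients round-robin the airtime, while a 2x2 MU-MIMO router can serve up to two 1x1 clients in the same transmit opportunity.

```python
# Toy model of per-client throughput under airtime sharing.
# (Hypothetical numbers; real Wi-Fi adds overhead this ignores.)

def per_client_throughput(link_rate_mbps, n_clients, simultaneous=1):
    """Average throughput per client when the router round-robins
    groups of `simultaneous` clients over the air."""
    groups = -(-n_clients // simultaneous)  # ceiling division
    return link_rate_mbps / groups

# Four 1x1 clients on a 433 Mbps 802.11ac link:
su = per_client_throughput(433, 4, simultaneous=1)  # one at a time: 108.25
mu = per_client_throughput(433, 4, simultaneous=2)  # paired via MU-MIMO: 216.5
```

Even in this idealized sketch, pairing clients halves the number of airtime slots and doubles what each client sees, which is the basic pitch behind MU-MIMO.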
For multi-device users
One of the things that we still wrestle with here at PC Perspective is keeping a host of phones, tablets and mobile gaming devices charged and ready to go when we need them. Reviewing items means we need to have multiple devices ready to go to run tests and benchmarks at any given time. Keeping that collection of technology powered up can be a pain in the rear - adapters everywhere, cables strewn across the shelf, etc.
The same is true for me at home - even though we are only a two adult household, my wife and I each have a tablet we use regularly, smartphones and a host of accessories like wireless headphones, smart watches and more. And when company comes over it is expected that at least someone will need to top off the power to their phone.
Skiva has a USB charging accessory to help alleviate much of the headache involved with these situations. The Powerflow 7 Stand Charger combines a 7-port USB charger capable of 2.4A to each port with a simple stand to support 7 tablets and phones vertically. The result is a neatly organized set of hardware that is accessible when you need it.
Specs and Hardware
The AMD Radeon Nano graphics card is unlike any product we have ever tested at PC Perspective. As I wrote and described to the best of my ability (without hardware in my hands) late last month, AMD is targeting a totally unique and different classification of hardware with this release. As a result, there is quite a bit of confusion, criticism, and concern about the Nano, and, to be upfront, not all of it is unwarranted.
After spending the past week with an R9 Nano here in the office, I am prepared to say this immediately: for users matching specific criteria, there is no other option that comes close to what AMD is putting on the table today. That specific demographic though is going to be pretty narrow, a fact that won’t necessarily hurt AMD simply due to the obvious production limitations of the Fiji and HBM architectures.
At $650, the R9 Nano comes with a flagship cost but it does so knowing full well that it will not compete in terms of raw performance against the likes of the GTX 980 Ti or AMD’s own Radeon R9 Fury X. However, much like Intel has done with the Ultrabook and ULV platforms, AMD is attempting to carve out a new market that is looking for dense, modest power GPUs in small form factors. Whether or not they have succeeded is what I am looking to determine today. Ride along with me as we journey on the roller coaster of a release that is the AMD Radeon R9 Nano.
Introduction and Specifications
It has been a while since we took a look at some hard drives here at PC Perspective. While seemingly everyone is pushing hard into solid state storage, those spinning platters have gotten the computer industry by for several decades, and they won't be going away any time soon so long as magnetic domains can store bits more cheaply than electrons can. SSDs have been eating away at the market for OS and single-drive mobile needs, but when it comes to bulk storage, nothing beats a great hard drive for the money. Since many users would rather avoid maintaining a large array of drives, pushing the capacity of each 3.5" unit higher is still a need, especially for storage-hungry consumers. Enterprise units have been pushing into 8TB territory lately, but the consumer sweet spot currently remains at 6TB. Western Digital entered this area in July of last year, pushing their popular Green and Red lines up to 6TB. While the capacity was great, those two lines are meant to be power-saving, slower spinning drives. When platter speeds are low, the laws of physics (and of rotational latency) dictate that they could never perform as well as their 7200 RPM counterparts.
...and now they have filled that gap, with their Black and Red Pro models now available in capacities up to 6TB. To clarify the product lines here, the Green and Black products are intended for usage as a single drive, while the Red and Red Pro are meant for operating in NAS devices and use in a RAID. The two drives in this review are the faster spinning models, so we should see better performance all around. Spinning those platters faster means more power drawn and more heat generated by air friction across the platters, as we will see below:
Western Digital Red Pro 6TB:
- Model: WD6001FFWX
- Max Sequential Read: 214 MB/s
- Form Factor: 3.5”
- Interface Type: SATA 6.0 Gb/s (SATA 3)
- UBER: <1 in 10^15
- Power (active/idle/standby): 10.6W/7.4W/1.6W
- Warranty: 5 years
Western Digital Black 6TB:
- Model: WD6001FZWX
- Max Sequential Read: 218 MB/s
- Form Factor: 3.5”
- Interface Type: SATA 6.0 Gb/s (SATA 3)
- UBER: <1 in 10^14
- Power (active/idle/standby): 10.6W/7.6W/1.6W
- Warranty: 5 years
For comparison, the slower spinning 6TB Red and Green models run at 5.3W/3.4W/0.4W. Lesson learned: moving from ~5400 RPM to 7200 RPM roughly doubles the power draw of a high capacity 3.5" HDD. Other manufacturers are doing things like hermetically sealing their drives and filling them with helium, but that is a prohibitively expensive proposition for the consumer / small business drives that the Black and Red Pro lines are meant to satisfy. Helium-filled drives have also proven to be less than ideal when their track geometry is not optimized as well as it could be.
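A bit of spec-sheet arithmetic (my own worked example from the figures listed above) backs up that "roughly doubles" claim, and also shows why the UBER rating matters at these capacities:

```python
# Power figures from the spec lists above (watts).
red_pro = {"active": 10.6, "idle": 7.4}   # 7200 RPM Red Pro 6TB
red_56  = {"active": 5.3,  "idle": 3.4}   # ~5400 RPM Red/Green 6TB

active_ratio = red_pro["active"] / red_56["active"]  # 2.0x
idle_ratio   = red_pro["idle"]   / red_56["idle"]    # ~2.18x

# UBER context: the Black's <1 error in 10^14 bits works out to one
# unrecoverable read error expected per ~12.5 TB read, i.e. roughly
# two full passes of a 6 TB drive.
bits_per_error = 10**14
tb_per_error = bits_per_error / 8 / 1e12  # 12.5 TB
```

The Red Pro's 10^15 rating pushes that expectation out to ~125 TB, one reason the NAS-oriented line carries the longer odds against a rebuild-killing read error.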
Handcrafted in Brooklyn, NY
First impressions usually count for a lot, correct? Well, my first impression of a Grado product was not all that positive. I had a small LAN party at my house one night and I invited over the audio lead for Ritual Entertainment and got him set up on one of the test machines. He pulled out a pair of Grado SR225 headphones and plugged them in. I looked at them and thought, “Why does this audio guy have such terrible headphones?” Just like most others that have looked at Grados the first time, I thought these were similar to a set of WWII headsets, and likely sounded about as good. I offered my friend a more “gaming friendly” set of headphones. He laughed at me and said no thanks.
The packaging is relatively bland as compared to other competing "high end" headphones. Grado has a reputation of under-promising, yet overperforming.
I of course asked him about the headphones he was so enamored with, and he told me a little about how good they actually were and that he was quite happy to game on them. This got me quite interested in what exactly Grado had to offer. Those “cheap looking” headphones are anything but cheap. The aesthetics can be debated, but what can’t be is that Grado makes a pretty great series of products.
Grado was founded by Joseph Grado in 1953. Sadly, Joseph passed away this year. Though he had been retired for some time, the company is still family owned and we are now seeing the 3rd generation of Grados getting involved in the day to day workings of the company. The headquarters was actually the site of the family fruit business before Joseph decided to go into the audio industry. They originally specialized in phonograph heads as well as other phono accessories, and it wasn’t until 1989 that Grado introduced their first headphones. Headphones are not exactly a market with massive technological leaps, and there have been three distinct generations of headphone designs from Grado with the Prestige series: the originals introduced in the mid-90s, the updated “i” series from the mid-2000s, and finally the latest “e” models that were released last year.
The company also offers five different lines of headphones that range from the $50 eGrado up to the $1700 PS1000E. They also use a variety of materials from plastic, to metal, and finally the very famous wood based headphones. In fact, they have a limited edition Grado Heritage run that was made from a maple tree cut down in Brooklyn very near to the workshop where Grado still handcrafts their headphones.
That townhouse in the middle? That is where the vast majority of Grado headphones are made. Not exactly what most expect considering the reputation of the Grado brand. (Photo courtesy of Jonathan Grado)
I was sent the latest SR225e models to listen to some time back. I finally got to a place where I could sit down and pen my thoughts on my experience with these headphones.
Introduction and Technical Specifications
Courtesy of Corsair
Corsair's newest enthusiast-targeted DDR4 memory kit features 4 x 4GB DDR4 modules rated for operating speeds of up to 3200MHz, catering to both Intel X99 and Intel Z170 motherboards. The modules are passively cooled with Corsair's Vengeance LPX aluminum heat spreaders. The kit also comes with two Corsair Vengeance Airflow memory fans for additional active cooling.
Courtesy of Corsair
Courtesy of Corsair
The modules included with the Vengeance DDR4-3200 16GB kit feature the latest design innovations in Corsair's Vengeance DDR4 memory line, including redesigned LPX heat spreaders for cool running at their rated 1.35V. The modules have been optimized for quad channel operation with an Intel X99 motherboard as well as dual channel operation in an Intel Z170 motherboard, pairing well with both Intel Haswell-E and Skylake processors. The modules also support the latest version of Intel XMP (Extreme Memory Profile), XMP 2.0.
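The dual vs. quad channel distinction matters for peak bandwidth. A quick worked example (standard DDR4 math with an 8-byte bus per channel, and treating the rated "3200MHz" as 3200 MT/s, which is the usual marketing shorthand):

```python
# Peak theoretical DDR4 bandwidth: transfer rate (MT/s) x 8 bytes
# per channel x number of channels.

def ddr4_bandwidth_gbs(transfer_rate_mts, channels):
    return transfer_rate_mts * 8 * channels / 1000  # GB/s

dual = ddr4_bandwidth_gbs(3200, 2)   # Z170 dual channel: 51.2 GB/s
quad = ddr4_bandwidth_gbs(3200, 4)   # X99 quad channel: 102.4 GB/s
```

So the same four modules deliver twice the theoretical bandwidth on X99 as on Z170, simply because the platform exposes twice the channels.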
AM3+ Keeps Chugging Along
Consumers cannot say that MSI has not attempted to keep the AM3+ market interesting, with a handful of new products based upon that socket. Throughout this past year MSI has released three different products addressing multiple price points and feature sets. The 970 Gaming was the first, the 970 KRAIT introduced USB 3.1 to the socket, and the latest 990FXA-Gaming board provides the most feature-rich implementation of the socket plus USB 3.1.
AMD certainly has not done the platform any real favors as of late in terms of new CPUs and architectures to inhabit that particular socket. The last refresh we had was around a year ago with the release of the FX-8370 and 8370e. These are still based on the Piledriver based Vishera core that was introduced three years ago. Unlike the GPU market, the CPU market has certainly not seen the leaps and bounds in overall performance that we had enjoyed in years past.
MSI has taken the now geriatric 990FX (based upon the 890FX chipset released in 2010; I think AMD might have gotten their money out of this particular chipset iteration) and implemented it in a new design that embraces many of the top-end features desired by enthusiasts. AMD still has a solid following, and their products are very competitive from a price/performance standpoint (check out Ryan's price/perf graphs from his latest Intel CPU review).
The packing material is pretty basic: just cardboard and no foam. Still, it fits nicely and is quite snug.
The idea behind the 990FXA-Gaming is to provide a very feature-rich product that appeals to gamers and enthusiasts. The key is to provide those features at a price point that will not scare away the budget enthusiasts. Just as MSI has done with the 970 Gaming, there were decisions made to keep costs down. We will get into these tradeoffs shortly.
To the Max?
Much of the PC enthusiast internet, including our comments section, has been abuzz with “Asynchronous Shader” discussion. Normally, I would explain what it is and then outline the issues that surround it, but I would like to swap that order this time. Basically, the Ashes of the Singularity benchmark utilizes Asynchronous Shaders in DirectX 12, but they disable it (by Vendor ID) for NVIDIA hardware. They say that this is because, while the driver reports compatibility, “attempting to use it was an unmitigated disaster in terms of performance and conformance”.
AMD's Robert Hallock claims that NVIDIA GPUs, including Maxwell, cannot support the feature in hardware at all, while all AMD GCN graphics cards do. NVIDIA has yet to respond to our requests for an official statement, although we haven't poked every one of our contacts yet. We will certainly update and/or follow up if we hear from them. For now though, we have no idea whether this is a hardware or software issue. Either way, it seems more than just politics.
So what is it?
Simply put, Asynchronous Shaders allow a graphics driver to cram workloads into portions of the GPU that are idle but not otherwise available. For instance, if a graphics task is hammering the ROPs, the driver would be able to toss an independent physics or post-processing task into the shader units alongside it. Kollock from Oxide Games used the analogy of HyperThreading, which allows two CPU threads to be executed on the same core at the same time, as long as it has the capacity for them.
Kollock also notes that compute is becoming more important in the graphics pipeline, and that it is possible to bypass graphics altogether. The fixed-function bits may never go away, but at least some engines may skip them entirely -- maybe even their engine, several years down the road.
But, like always, you will not get an infinite amount of performance by reducing your waste. You are always bound by the theoretical limits of your components, and you cannot optimize past that (except for obviously changing the workload itself). The interesting part is: you can measure that. You can absolutely observe how long a GPU is idle, and represent it as a percentage of a time-span (typically a frame).
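The measurement itself is simple in principle. A small sketch (my own illustration of the idea, not profiler code from any vendor): given the busy intervals recorded for one frame, the idle share is whatever part of the frame time no unit was executing.

```python
# Compute GPU idle time as a percentage of one frame, given a list
# of (start_ms, end_ms) busy intervals that may overlap.

def idle_percentage(frame_ms, busy_intervals):
    merged = []
    for start, end in sorted(busy_intervals):
        if merged and start <= merged[-1][1]:
            # Overlapping or touching intervals collapse into one.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    busy = sum(end - start for start, end in merged)
    return 100.0 * (frame_ms - busy) / frame_ms

# A 16.7 ms (60 FPS) frame, GPU busy 0-10 ms and 12-14 ms:
pct = idle_percentage(16.7, [(0, 10), (12, 14)])  # ~28% idle
```

That idle percentage is the theoretical ceiling for what Asynchronous Shaders can claw back on a given workload; it cannot manufacture throughput beyond it.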
And, of course, game developers profile GPUs from time to time...
According to Kollock, he has heard of some console developers getting up to 30% increases in performance using Asynchronous Shaders. Again, this is on console hardware, so the figure may increase or decrease on the PC. In an informal chat with a developer at Epic Games (so a massive grain of salt is required), his late-night, ballpark, “totally speculative” guesstimate is that, on the Xbox One, the GPU could theoretically accept a maximum of ~10-25% more work in Unreal Engine 4, depending on the scene. He also said that memory bandwidth gets in the way, which Asynchronous Shaders would be fighting against. It is something that they are interested in and investigating, though.
This is where I speculate on drivers. When Mantle was announced, I looked at its features and said “wow, this is everything that a high-end game developer wants, and a graphics developer absolutely does not”. From the OpenCL-like multiple GPU model taking much of the QA out of SLI and CrossFire, to the memory and resource binding management, this should make graphics drivers so much easier.
It might not be free, though. Graphics drivers might still have a bunch of games to play to make sure that work is stuffed through the GPU as tightly packed as possible. We might continue to see “Game Ready” drivers in the coming years, even though much of that burden has been shifted to the game developers. On the other hand, maybe these APIs will level the whole playing field and let all players focus on chip design and efficient ingestion of shader code. As always, painfully always, time will tell.
That is a lotta SKUs!
The slow, gradual release of information about Intel's Skylake-based product portfolio continues forward. We have already tested and benchmarked the desktop variant flagship Core i7-6700K processor and also have a better understanding of the microarchitectural changes the new design brings forth. But today Intel's 6th Generation Core processors get a major reveal, with all the mobile and desktop CPU variants, from 4.5 watts up to 91 watts, getting detailed specifications. Not only that, but it also marks the first day that vendors can announce and begin selling Skylake-based notebooks and systems!
All indications are that vendors like Dell, Lenovo and ASUS are still some weeks away from having any product available, but expect to see your feeds and favorite tech sites flooded with new product announcements. And of course with a new Apple event coming up soon...there should be Skylake in the new MacBooks this month.
Since I have already talked about the architecture and the performance changes from Haswell/Broadwell to Skylake in our 6700K story, today's release is just a bucket of specifications and information surrounding 46 different 6th Generation Skylake processors.
Intel's 6th Generation Core Processors
At Intel's Developer Forum in August, the media learned quite a bit about the new 6th Generation Core processor family including Intel's stance on how Skylake changes the mobile landscape.
Skylake is being broken up into four different lines of Intel processors: S-series for desktop DIY users, H-series for mobile gaming machines, U-series for your everyday Ultrabooks and all-in-ones, and Y-series for tablets and 2-in-1 detachables. (Side note: Intel does not reference an "Ultrabook" anymore. Huh.)
As you would expect, Intel has some impressive gains to claim with the new 6th Generation processor. However, it is important to put them in context. All of the claims above, including 2.5x performance, 30x graphics improvement and 3x longer battery life, are comparing Skylake-based products to CPUs from 5 years ago. Specifically, Intel is comparing the new Core i5-6200U (a 15 watt part) against the Core i5-520UM (an 18 watt part) from mid-2010.
Introduction and First Impressions
The Enthoo Pro M is the new mid-tower version of the Enthoo Pro, previously a full-tower ATX enclosure from the PC cooler and enclosure maker. This new enclosure adds another option to the $79 case market, which already has a number of solid options. Let's see how it stacks up!
I was very impressed by the Phanteks Enthoo EVOLV ATX enclosure, which received our Editor’s Choice award when reviewed earlier this year. The enclosure was very solidly made and had a number of excellent features, and even with a primarily aluminum construction and premium design it can be found for $119, which is rather unheard-of for this combination in the enclosure market. So what changes from that design might we expect to see with the $79 Enthoo Pro M?
The Pro M is a very businesslike design, constructed of steel and plastic and with a very understated appearance. It is not exactly “boring”, as it does have some personality beyond the typical rectangular box: a brushed finish on the front panel, a vented front fan opening, and a side panel window to show off your build. But I think the real story here is the intelligent internal design, which is nearly identical to that of the EVOLV ATX.
Introduction and Technical Specifications
Courtesy of ASUS
The Z170-A motherboard is among the initial offerings in ASUS' channel line of Intel Z170 chipset boards. The board features ASUS' new Channel line aesthetics, with white and black coloration to differentiate the line from their gold-themed Z97 offerings. ASUS uses the Z170-A to redefine the base-line motherboard, integrating many "upper-tier style" features not normally found on lower tier offerings. The board's integrated Intel Z170 chipset supports the latest Intel LGA1151 Skylake processor line as well as dual channel DDR4 memory. Offered at a price-competitive MSRP of $165, the Z170-A threatens to give the rest of the Z170-based boards a run for their money.
Courtesy of ASUS
The Z170-A shares the same DIGI+ style power system as its higher priced siblings, featuring an 8-phase digital power delivery system. ASUS integrated the following features into the Z170-A board: four SATA 3 ports; one SATA-Express port; one M.2 PCIe x4 capable port; an Intel I219-V Gigabit NIC; three PCI-Express x16 slots; two PCI-Express x1 slots; one PCI slot; on-board power and MemOK! buttons; EZ XMP and TPU switches; the Crystal Sound 3 audio subsystem; integrated DisplayPort, HDMI, DVI, and VGA video ports; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Courtesy of ASUS
The Z170-A motherboard comes standard with ASUS' latest iteration of their sound technology, dubbed Crystal Sound 3. Like its predecessors, Crystal Sound 3 places the audio components on a PCB region isolated from the other main board components, minimizing noise generated by those other integrated devices. ASUS designed the audio subsystem with high-quality Japanese-sourced audio and power circuitry for a top-notch audio experience.
The Tiniest Fiji
Way back on June 16th, AMD held a live stream event during E3 to announce a host of new products. In that group was the AMD Radeon R9 Fury X, R9 Fury and the R9 Nano. Of the three, the Nano was the most intriguing to the online press as it was the one we knew the least about. AMD promised a full Fiji GPU in a package with a 6-in PCB and a 175 watt TDP. Well today, AMD is, uh, re-announcing (??) the AMD Radeon R9 Nano with more details on specifications, performance and availability.
First, let’s get this out of the way: AMD is making this announcement today because they publicly promised the R9 Nano for August. And with the final days of summer creeping up on them, rather than answer questions about another delay, AMD is instead going the route of a paper launch, but one with a known end date. We will apparently get our samples of the hardware in early September with reviews and the on-sale date following shortly thereafter. (Update: AMD claims the R9 Nano will be on store shelves on September 10th and should have "critical mass" of availability.)
Now let’s get to the details that you are really here for. And rather than start with the marketing spin on the specifications that AMD presented to the media, let’s dive into the gory details right now.
| | R9 Nano | R9 Fury | R9 Fury X | GTX 980 Ti | TITAN X | GTX 980 | R9 290X |
|---|---|---|---|---|---|---|---|
| GPU | Fiji XT | Fiji Pro | Fiji XT | GM200 | GM200 | GM204 | Hawaii XT |
| Rated Clock | 1000 MHz | 1000 MHz | 1050 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1000 MHz |
| Memory Clock | 500 MHz | 500 MHz | 500 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 5000 MHz |
| Memory Interface | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM) | 384-bit | 384-bit | 256-bit | 512-bit |
| Memory Bandwidth | 512 GB/s | 512 GB/s | 512 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 320 GB/s |
| TDP | 175 watts | 275 watts | 275 watts | 250 watts | 250 watts | 165 watts | 290 watts |
| Peak Compute | 8.19 TFLOPS | 7.20 TFLOPS | 8.60 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 5.63 TFLOPS |
AMD wasn’t fooling around: the Radeon R9 Nano graphics card does indeed include a full implementation of the Fiji GPU and HBM, including 4096 stream processors, 256 texture units and 64 ROPs. The GPU core clock is rated “up to” 1.0 GHz, nearly the same as the Fury X (1050 MHz), and the only difference I can see in the paper specifications is that the Nano is rated at 8.19 TFLOPS of theoretical compute performance while the Fury X is rated at 8.60 TFLOPS.
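Those headline figures can be cross-checked with the standard formulas (my own arithmetic, assuming the usual conventions: FP32 peak is shaders x 2 ops per FMA x clock, and memory bandwidth is bus width x effective transfer rate / 8):

```python
# Fiji peak compute: 4096 shaders x 2 FLOPs (fused multiply-add)
# x 1.0 GHz boost clock.
shaders, clock_ghz = 4096, 1.0
tflops = shaders * 2 * clock_ghz / 1000   # 8.192 TFLOPS

# HBM bandwidth: 4096-bit bus x 500 MHz x 2 (double data rate),
# divided by 8 to convert bits to bytes.
bus_bits, mem_mhz = 4096, 500
gbs = bus_bits / 8 * mem_mhz * 2 / 1000   # 512 GB/s
```

Both numbers land exactly on the table's 8.19 TFLOPS and 512 GB/s, which confirms the Nano's "up to 1.0 GHz" rating is what feeds the spec sheet rather than a sustained clock.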
Retail Card Design
AMD is in an interesting spot right now. The general consensus is that both the AMD Radeon R9 Fury X and the R9 Fury graphics cards had successful launches into the enthusiast community. We found that the performance of the Fury X was slightly under that of the GTX 980 Ti from NVIDIA, but also that the noise levels and power draw were so improved on Fiji over Hawaii that many users would dive head first into the new flagship from the red team.
The launch of the non-X AMD Fury card was even more interesting – here was a card with a GPU performing better than the competition at a price point where NVIDIA didn’t have an exact answer. The performance gap between the GTX 980 and GTX 980 Ti left room for a $550 graphics card with which AMD scored a victory. Add in the third Fiji-based product due out in a few short weeks, the R9 Nano, and you have a robust family of products that don’t exactly dominate the market but do put AMD in a positive position unlike any it has seen in recent years.
But there are some problems. First and foremost for AMD: continuing drops in market share. With the most recent reports from multiple sources claiming that AMD’s Q2 2015 share has dropped to 18%, an all-time low for the last decade or so, AMD needs some growth and it needs it now. Here’s the catch: AMD can’t make enough of the Fiji chip to affect that number at all. The Fury X, Fury and Nano are going to be hard to find for the foreseeable future thanks to production limits on the HBM (high bandwidth memory) integration, the same feature that helps make Fiji the compelling product it is. I have been keeping an eye on the stock of the Fury and Fury X products and found that they often can’t be found anywhere in the US for purchase. Maybe even more damning is the fact that the Radeon R9 Fury, the card that is supposed to be the model customizable by AMD board partners, still only has two options available: the Sapphire, which we reviewed when it launched, and the ASUS Strix R9 Fury that we are reviewing today.
AMD’s product and financial issues aside, the fact is that the Radeon R9 Fury 4GB and the ASUS Strix iteration of it are damned good products. ASUS has done its usual job of improving on the design of the reference PCB and cooler, added in some great features and packaged it up at a price that is competitive and well worth the investment for enthusiast gamers. Our review today will only lightly touch on out-of-box performance of the Strix card, mostly because it is so similar to that of the initial Fury review we posted in July. Instead I will look at the changes to the positioning of the AMD Fury product (if any) and how the cooler and design of the Strix product help it stand out. Overclocking, power consumption and noise will all be evaluated as well.
Introduction, Specifications, and Packaging
We have reviewed a lot of Variable Refresh Rate displays over the past several years now, and for the most part, these displays have come with some form of price premium attached. NVIDIA’s G-Sync tech requires an additional module that adds some cost to the parts list for those displays. AMD took a while to get their FreeSync tech pushed through the scaler makers, and with the added effort needed to implement these new parts, display makers naturally pushed the new features into their higher end displays first. Just look at the specs of these displays:
- ASUS PG278Q 27in TN 1440P 144Hz G-Sync
- Acer XB270H 27in TN 1080P 144Hz G-Sync
- Acer XB280HK 28in TN 4K 60Hz G-Sync
- Acer XB270HU 27in IPS 1440P 144Hz G-Sync
- LG 34UM67 34in IPS 2560x1080 21:9 48-75Hz FreeSync
- BenQ XL2730Z 27in TN 1440P 40-144Hz FreeSync
- Acer XG270HU 27in TN 1440P 40-144Hz FreeSync
- ASUS MG279Q 27in IPS 1440P 144Hz FreeSync (35-90Hz)
Most of the reviewed VRR panels are 1440P or higher, and the only 1080P display currently runs $500. This unfortunately leaves VRR technology at a price point that is simply out of reach of gamers unable to drop half a grand on a display. What we needed was a good 1080P display with a *full* VRR range. Bonus points for high refresh rates and, in the case of a FreeSync display, a minimum refresh rate low enough that a typical game will not run below it. This shouldn’t be too hard, since 1080P is not that demanding on even lower cost hardware these days. Who was up to this challenge?
Nixeus has answered this call with their new Nixeus Vue display. This is a 24” 1080P 144Hz FreeSync display with a VRR lower limit of 30 FPS. It comes in two models, distinguished by a trailing letter in the model number. The NX-VUE24B includes a ‘base’ stand with only tilt support, while the NX-VUE24A includes a ‘premium’ stand with full height, rotation, and tilt adjustment.
Does the $330-350 Nixeus Vue 24" FreeSync monitor fit the bill?
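One quick way to judge a FreeSync range is whether it can support frame doubling when a game dips below the minimum. A small sketch (my own check, using the commonly cited rule of thumb that AMD's Low Framerate Compensation wants a max refresh of at least 2.5x the min; that ratio is not a figure from this article):

```python
# Rough LFC eligibility check for a FreeSync VRR range.
# Assumption: LFC needs vrr_max >= 2.5 * vrr_min (rule of thumb).

def supports_lfc(vrr_min_hz, vrr_max_hz, ratio=2.5):
    return vrr_max_hz >= ratio * vrr_min_hz

nixeus_vue = supports_lfc(30, 144)   # True: 144 >= 75
lg_34um67  = supports_lfc(48, 75)    # False: 75 < 120
```

By that yardstick, the Vue's wide 30-144Hz window comfortably qualifies, while a narrow range like the LG 34UM67's 48-75Hz does not.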
The Dell Venue 10 7000 Series tablet features a stunning 10.5" OLED screen and is designed to mate perfectly with the optional keyboard. So how does it perform as both a laptop and a tablet? Read on for the full review!
To begin with I will simply say the keyboard should not be an optional accessory. There, I've said it. As I used the Venue 10 7000, which arrived bundled with the keyboard, I was instantly excited about this design. The Venue 10 is a device as remarkable for its incredible screen as for any other feature, but once coupled with the magnetically attached keyboard it becomes something more - and quite different from existing implementations of the transforming tablet. More than a simple accessory, the keyboard felt like it was really a part of the device when connected, and made it feel like a real laptop.
I'm getting way ahead of myself here so let's go back to the beginning, and back to a world where one might consider purchasing this tablet by itself. At $499 for the 16GB model you might reasonably ask how it compares to the identically-priced Apple iPad Air 2. Well, most of the comparison is going to be software/app related as the Venue 10 7000 is running Android 5.1 Lollipop, and of course the iPad runs iOS. The biggest difference between these tablets (besides the keyboard integration) becomes the 10.5-inch, 2560x1600 OLED screen, and oh what a screen it is!
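For context on how sharp that panel is, the standard pixel density formula (diagonal pixel count over diagonal inches; the resolution and size are from the paragraph above) puts it well past typical laptop territory:

```python
import math

# Pixel density (PPI) = diagonal resolution in pixels / diagonal inches.
def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

venue10 = ppi(2560, 1600, 10.5)   # ~287 PPI
```

At roughly 287 PPI the Venue 10's OLED panel is comfortably in "retina" territory at tablet viewing distances, before even counting OLED's contrast advantage.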
A third primary processor
As the Hot Chips conference begins in Cupertino this week, Qualcomm is set to divulge another round of information about the upcoming Snapdragon 820 processor. Earlier this month the company revealed details about the Adreno 5xx GPU architecture, showcasing improved performance and power efficiency while also adding a new Spectra 14-bit image processor. Today we shift to what Qualcomm calls the “third pillar in the triumvirate of programmable processors” that make up the Snapdragon SoC. The Hexagon DSP (digital signal processor), introduced by Qualcomm in 2004, has gone through a massive architecture shift and even a programmability shift over the last 10 years.
Qualcomm believes that building a balanced SoC for mobile applications is all about heterogeneous computing with no one processor carrying the entire load. The majority of the work that any modern Snapdragon processor must handle goes through the primary CPU cores, the GPU or the DSP. We learned about upgrades to the Adreno 5xx series for the Snapdragon 820 and we are promised information about Kryo CPU architecture soon as well. But the Hexagon 600-series of DSPs actually deals with some of the most important functionality for smartphones and tablets: audio, voice, imaging and video.
Interestingly, Qualcomm opened up the DSP to programmability just four years ago, giving developers the ability to write custom code and software to take advantage of the specific performance capabilities that the DSP offers. Custom photography, videography and sound applications could benefit greatly in terms of performance and power efficiency if utilizing the QC DSP rather than the primary system CPU or GPU. As of this writing, Qualcomm claims there are “hundreds” of developers actively writing code targeting its family of Hexagon processors.
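The audio, imaging and video workloads Qualcomm cites are typically dominated by fixed-point multiply-accumulate loops, which is exactly the pattern a DSP's wide vector units execute far more efficiently than a general-purpose CPU core. As a rough illustration (plain portable C, not actual Hexagon SDK code), this is the sort of FIR-filter kernel a developer might move off the CPU:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative Q15 fixed-point FIR filter: the multiply-accumulate
 * pattern that DSP vector units are built to run in parallel.
 * Plain C sketch only -- not Hexagon SDK code. */
void fir_q15(const int16_t *in, const int16_t *coeff,
             int16_t *out, size_t n, size_t taps)
{
    for (size_t i = 0; i + taps <= n; i++) {
        int32_t acc = 0;
        for (size_t t = 0; t < taps; t++)
            acc += (int32_t)in[i + t] * coeff[t];   /* the MAC hot loop */
        out[i] = (int16_t)(acc >> 15);              /* Q15 rescale */
    }
}
```

On a CPU this inner loop runs a couple of elements per cycle at best; a DSP can issue many such multiply-accumulates per cycle at a fraction of the power, which is the efficiency argument Qualcomm is making.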
The Hexagon DSP in Snapdragon 820 consists of three primary partitions. The main compute DSP works in conjunction with the GPU and CPU cores and will do much of the heavy lifting for encompassed workloads. The modem DSP aids the cellular modem in communication throughput. The new guy here is the lower power DSP in the Low Power Island (LPI) that shifts how always-on sensors can communicate with the operating system.
Introduction and First Impressions
The ASUS PB258Q is a "frameless" monitor with a full 2560x1440 resolution from a fairly compact 25-inch size, and at first glance it might appear to be a bare LCD panel affixed to a stand. This attractive design also features 100% sRGB coverage and full height/tilt/swivel and rotation adjustment. The price? Less than $400. We'll put it to the test to see just what kind of value to expect here.
A beautiful looking monitor even with nothing on the display
The ASUS PB258Q came out of nowhere one day when I was looking to replace a smaller 1080p display on my desk. Given some pretty serious size constraints I was hesitant to move up to the 27 - 30 inch range for 2560x1440 monitors, but I didn't want to settle for 1920x1080 again. The ASUS PB258Q intrigued me immediately not only due to its interesting size/resolution of 25-inch/1440p, but also for the claimed 100% sRGB coverage and fully adjustable stand. And then I looked over at the price. $376.99 shipped from Amazon with Prime shipping? Done.
The pricing (and compact 25-inch size) made it a more compelling choice to me than the PB278Q, ASUS's "professional graphics monitor" which uses a PLS panel, though this larger display has recently dropped in price to the $400 range. When the PB258Q arrived a couple of days later I was first struck by how compact it is, and how nice the monitor looked without even being powered up.
Another Maxwell Iteration
The mainstream end of the graphics card market is about to get a bit more complicated with today’s introduction of the GeForce GTX 950. Based on a slightly cut down GM206 chip, the same used in the GeForce GTX 960 that was released almost 8 months ago, the new GTX 950 will fill a gap in the product stack for NVIDIA, resting right at $160-170 MSRP. Until today that next-down spot from the GTX 960 was filled by the GeForce GTX 750 Ti, the very first iteration of Maxwell (we usually call it Maxwell 1) that came out in February of 2014!
Even though that is a long time to go without refreshing the GTX x50 part of the lineup, NVIDIA was likely hesitant to do so based on the overwhelming success of the GM107 for mainstream gaming. It was low cost, incredibly efficient and didn’t require any external power to run. That led us down the path of upgrading OEM PCs with the GTX 750 Ti, an article and video that still draw hundreds of views and dozens of comments a week.
The GTX 950 has some pretty big shoes to fill. I can tell you right now that it uses more power than the GTX 750 Ti, and it requires a 6-pin power connector, but it does so while increasing gaming performance dramatically. The primary competition from AMD is the Radeon R7 370, a Pitcairn GPU that is long in the tooth and missing many of the features that Maxwell provides.
And NVIDIA is taking a secondary angle with the GTX 950 launch – targeting MOBA players (DOTA 2 in particular) directly and aggressively. With the success of this style of game over the last several years, and the impressive $18M+ purse for the largest DOTA 2 tournament just behind us, there isn’t a better area of PC gaming to be going after today. But are the tweaks and changes to the card and software really going to make a difference for MOBA gamers or is it just marketing fluff?
Let’s dive into everything GeForce GTX 950!
Core and Interconnect
The Skylake architecture is Intel’s first to get a full release on the desktop in more than two years. While that might not seem like a long time in the grand scheme of technology, for our readers and viewers that is a noticeable change and shift from the recent history that Intel has created with its tick-tock model of releases. Yes, Broadwell was released last year and was a solid product, but Intel focused almost exclusively on the mobile platforms (notebooks and tablets) with it. Skylake will become ubiquitous much more quickly than even Haswell did.
Skylake represents Intel’s most scalable architecture to date. I don’t mean only frequency scaling, though that is an important part of this design, but rather scaling in terms of market segments. Thanks to brilliant engineering and design from Intel’s Israeli group, Intel will be launching Skylake designs ranging from 4.5 watt TDP Core M solutions all the way up to the 91 watt desktop processors that we have already reviewed in the Core i7-6700K. That’s a range that we really haven’t seen before; in the past Intel has depended on the Atom architecture to make up ground on the lowest power platforms. While I don’t know for sure if Atom is finally trending towards the dodo once Skylake’s reign is fully implemented, it does make me wonder how much life is left there.
Scalability also refers to the package size – something that ensures that the designs the engineers created can actually be built and run in the platform segments they are targeting. Starting with the desktop designs for LGA platforms (the DIY market), which fit in a 1400 mm2 package for the 91 watt TDP implementation, Intel scales all the way down to a 330 mm2 BGA1515 package for the 4.5 watt TDP designs. Only with a total product size like that can you hope to get Skylake in a form factor like the Compute Stick – which is exactly what Intel is doing. And note that the smaller packages require the inclusion of the platform IO chip as well, something that the H- and S-series CPUs can depend on the motherboard to integrate.
Finally, scalability will also include performance scaling. Clearly the 4.5 watt part will not offer the same performance, or target the same goals, as the 91 watt Core i7-6700K. The screen resolution, attached accessories and target applications allow Intel to be selective about how much power it requires for each series of Skylake CPUs.
The fundamental design theory in Skylake is very similar to what exists today in Broadwell and Haswell, with a handful of significant changes and hundreds of minor ones that make Skylake a large step ahead of previous designs.
This slide from Julius Mandelblat, Intel Senior Principal Engineer, shows a high-level overview of the entirety of the consumer integration of Skylake. You can see that Intel’s goals included a bigger and wider core design, higher frequency, improved ring architecture and fabric design and more options for eDRAM integration. Readers of PC Perspective will already know that Skylake supports both DDR3L and DDR4 memory technologies, but the inclusion of the camera ISP is new information for us.
I knew that the move to DirectX 12 was going to be a big shift for the industry. Since the introduction of the AMD Mantle API along with the Hawaii GPU architecture we have been inundated with game developers and hardware vendors talking about the potential benefits of lower level APIs, which give more direct access to GPU hardware and enable more flexible threading for CPUs to game developers and game engines. The results, we were told, would mean that your current hardware would be able to take you further and future games and applications would be able to fundamentally change how they are built to enhance gaming experiences tremendously.
I knew that reader interest in DX12 was outstripping my expectations when I did a live blog of the official DX12 unveil by Microsoft at GDC. In a format that consisted simply of my text commentary and photos of the slides that were being shown (no video at all), we had more than 25,000 live readers that stayed engaged the whole time. Comments and questions flew into the event – more than my staff or I could possibly handle in real time. It turned out that gamers were indeed very much interested in what DirectX 12 might offer them with the release of Windows 10.
Today we are taking a look at the first real world gaming benchmark that utilizes DX12. Back in March I was able to do some early testing with an API-specific test from Futuremark’s 3DMark that evaluates the overhead implications of DX12, DX11 and even AMD Mantle. This first look at DX12 was interesting and painted an amazing picture about the potential benefits of the new API from Microsoft, but it wasn’t built on a real game engine. In our Ashes of the Singularity benchmark testing today, we finally get an early look at what a real implementation of DX12 looks like.
And as you might expect, not only are the results interesting, but there is a significant amount of controversy about what those results actually tell us. AMD has one story, NVIDIA another, and Stardock and the Nitrous engine developers yet another. It’s all incredibly intriguing.