AM4 Edging Closer to Retail
Many of us were thinking that one of the bigger stories around CES would be the unveiling of a goodly chunk of AM4 motherboards. AM4 has been around for about half a year now, but only in system integrator builds (such as HP), all based around the Bristol Ridge APU (essentially an updated Carrizo). These SoCs are not exactly barn burners, but they provide a solid foundation for a low-cost build. The APUs feature 2 modules/4 cores, a GCN-based GPU, and limited southbridge I/O functionality.
During all this time the motherboards available from these OEMs have been very basic units not fit for retail. Users simply could not go out and buy a Bristol Ridge APU and motherboard for themselves from Newegg, Amazon, or elsewhere. Now, after much speculation, we finally got to see the first AM4 retail-style boards unveiled at this year’s CES. AMD showed off around 16 boards based on the previously unseen B350 and X370 chipsets.
AMD has introduced a pretty limited number of chipsets over the years. Their FM2+ offerings spanned the A series of chipsets, but those added very little functionality compared to the 900 series that populates the AM3+ world. The latest offering from AMD was the A88X, released in September 2013. At one time there was supposed to be a 1000 series of chipsets for AM3+, but those were cancelled and we have had the 900 series (which is identical to the previous 800 series) since 2011. This has been a pretty stagnant area for AMD and their partners. Third-party chips have helped shore up the feature divide between AMD and Intel's regular cadence of new chipsets and the technologies attached to them.
There are three primary chipsets being released, as well as two physical-layer chips that allow the use of the onboard southbridge on Ryzen and Bristol Ridge: the X370 for the enthusiast market, the B350 for the mainstream, and the budget A320. The two options for utilizing the SoC's southbridge functionality are the X300 and A/B300.
Before we jump into the chipsets, we should take a look at what kind of functionality Ryzen and Bristol Ridge have that can be leveraged by motherboard manufacturers. Bristol Ridge is a true SoC in that it contains the GPU, CPU, and southbridge functionality needed to stand alone. Ryzen is different in that it does not have the GPU portion, so it still requires a standalone graphics card. Bristol Ridge is based on the older Carrizo design and does not feature the flexibility in its I/O connections that Ryzen does.
Bristol Ridge features up to 8 lanes of PCI-E 3.0. Its I/O includes two native SATA 6Gb/s ports as well as the ability to either use two more PCI-E lanes for general connectivity or dedicate them as an x2 NVMe link. That is about as flexible as it gets. It also natively supports four USB 3.1 Gen 1 ports. For a chip that was designed as a mobile-focused SoC, it makes sense that it does not max out PCI-E lanes or SATA ports. It is still enough to satisfy most mobile and SFF builds.
High Bandwidth Cache
Apart from AMD’s other new architecture due out in 2017, its Zen CPU design, no product has had as much build-up and excitement surrounding it as its Vega GPU architecture. After the world learned that Polaris would be a mainstream-only design, released as the Radeon RX 480, the focus for enthusiasts turned straight to Vega. It has been on the public-facing roadmaps for years and signifies the company’s return to the world of high-end GPUs, something it has been missing since the release of the Fury X in mid-2015.
Let’s be clear: today does not mark the release of the Vega GPU or products based on Vega. In reality, we don’t even know enough to make highly educated guesses about performance without more details on the specific implementations. That being said, the information released by AMD today is interesting and shows that Vega will be much more than simply an increase in shader count over Polaris. It reminds me a lot of the build-up to the Fiji GPU release, when information and speculation about how HBM would affect power consumption, form factor, and performance flourished. What we can hope for, and what AMD’s goal needs to be, is a cleaner and more consistent product launch than the Fury X turned out to be.
The Design Goals
AMD began its discussion about Vega last month by talking about the changes in the world of GPUs and how data sets and workloads have evolved over the last decade. No longer are GPUs only worried about games; instead they must address professional, enterprise, and scientific workloads. Even more interestingly, just as we have discussed the growing gap between CPU performance and CPU memory bandwidth, AMD posits that the gap between memory capacity and GPU performance is a significant hurdle and limiter to performance and expansion. Game installs, professional graphics sets, and compute data sets continue to skyrocket. Game installs are now regularly over 50GB, while compute workloads can exceed petabytes. Even as we saw GPU memory capacities increase from megabytes to gigabytes, reaching as high as 12GB in high-end consumer products, AMD thinks there should be more.
Coming from a company that chose to release a high-end product limited to 4GB of memory in 2015, it’s a noteworthy statement.
The High Bandwidth Cache
Bold enough to claim a direct nomenclature change, Vega 10 will feature an HBM2-based high bandwidth cache (HBC) along with a new memory hierarchy to call it into play. This HBC will be a collection of memory on the GPU package, just like we saw on Fiji with the first HBM implementation, and will be measured in gigabytes. Why the move to calling it a cache will be covered below. (But can’t we all get behind the removal of the term “frame buffer”?) Interestingly, this HBC doesn’t have to be HBM2; in fact, I was told that you could expect to see other memory systems on lower-cost products going forward. Cards that integrate this new memory topology with GDDR5X or some equivalent seem assured.
With the new year comes a new push for performance, efficiency, and feature leadership from Qualcomm and its Snapdragon line of mobile SoCs. The Snapdragon 835 was officially announced in November of last year, when the partnership with Samsung on 10nm process technology was revealed, but we now have the freedom to share more of the details on this new part and how it changes Qualcomm’s position in the ultra-device market. Devices with the new 835 part won’t be on the market for several more months, though announcements are likely coming at CES this year.
Qualcomm frames the story around the Snapdragon 835 processor with what they call the “five pillars” – five different aspects of mobile processor design that they have addressed with updates and technologies. Qualcomm lists them as battery life (efficiency), immersion (performance), connectivity, and security.
Starting where they start, on battery life and efficiency, the SD 835 has a unique focus that might surprise many. Rather than talking up the improvements in performance of the new processor cores, or the power of the new Adreno GPU, Qualcomm is firmly focused on looking at Snapdragon through the lens of battery life. The Snapdragon 835 uses half the power of the Snapdragon 801.
The company touts usage claims of 1+ day of talk time, 5+ days of music playback, 11 hours of 4K video playback, 3 hours of 4K video capture, and 2+ hours of sustained VR gaming. These sound impressive, but as always in this market, we must wait for consumer devices from Qualcomm partners to really measure how well the platform does. Going through a typical power-user comparison of a device built on the Snapdragon 835 to one using the 820, Qualcomm thinks the 835 could result in 2 or more hours of additional battery life at the end of the day.
We have already discussed the new Quick Charge 4 technology, which can offer 5 hours of use with just 5 minutes of charge time.
Courtesy of ASUS
The Strix Z270E Gaming motherboard is among the Z270-based offerings in the ASUS ROG Strix product line. The board's Intel Z270 chipset supports the latest Intel LGA1151 Kaby Lake processor line as well as dual-channel DDR4 memory. With an MSRP of $199, the Strix Z270E Gaming board comes at a premium, more than justified by its feature set.
Courtesy of ASUS
The new Strix board does not disappoint with its sleek black appearance and 10-phase digital power delivery system. ASUS integrated the following features into the Strix Z270E Gaming board: six SATA 3 ports; two M.2 PCIe x4 capable ports; an Intel I219-V Gigabit NIC; a 2x2 802.11ac WiFi adapter; three PCI-Express x16 slots; four PCI-Express x1 slots; two RGB 12V headers; a SupremeFX 8-channel audio subsystem; integrated DisplayPort, HDMI, and DVI video ports; and USB 3.0 and 3.1 Type-A and Type-C port support.
It probably doesn't surprise any of our readers that there has been a tepid response to the leaks and reviews that have come out about the new Core i7-7700K CPU ahead of the scheduled launch of Kaby Lake-S from Intel. Replacing the Skylake-based 6700K part as the new "flagship" consumer enthusiast CPU, the 7700K has quite a bit stacked against it. We know that Kaby Lake is the first "optimize" step in Intel's new tick-tock-optimize sequence, and thus there are few architectural changes to any portion of the chip. However, that does not mean that the 7700K and Kaby Lake in general don't offer new capabilities (HEVC) or performance (clock speed).
The Core i7-7700K is in an interesting spot as well with regard to motherboards and platforms. Nearly all motherboards that run the Z170 chipset will be able to run the new Kaby Lake parts without requiring an upgrade to the newly released Z270 chipset. However, the likelihood that any user on a Z170 platform today using a Skylake processor will feel the NEED to upgrade to Kaby Lake is minimal, to say the least. The Z270 chipset only offers a couple of new features compared to last generation, so the upgrade path is again somewhat limited in excitement.
Let's start by taking a look at the Core i7-7700K and how it compares to the previous top-end parts from the consumer processor line and then touch on the changes that Kaby Lake brings to the table.
With the beginning of CES just days away (as I write this), Intel is taking the wrapping paper off of its first gift of 2017 to the industry. As you can see from the slide above, more than just the Kaby Lake-S consumer socketed processors are launching today, though some components, including Iris Plus graphics implementations and quad-core notebook parts, will need to wait for another day.
For DIY builders and OEMs, Kaby Lake-S, now known as the 7th Generation Core processor family, offers some changes and additions. First, we get a dual-core HyperThreaded processor with an unlocked designation in the Core i3-7350K. Along with the aforementioned Z270 chipset, Kaby Lake will be the first platform compatible with Intel Optane memory. (To be extra clear, I was told that previous processors will NOT be able to utilize Optane in its M.2 form factor.)
Though we have already witnessed Lenovo announcing products using Optane, this is the first official Intel discussion about it. Optane memory will be available in M.2 modules that can be installed on Z270 motherboards, improving snappiness and responsiveness. It seems this will be launched later in the quarter as we don't have any performance numbers or benchmarks to point to demonstrating the advantages that Intel touts. I know both Allyn and I are very excited to see how this differs from previous Intel caching technologies.
| | Core i7-7700K | Core i7-6700K | Core i7-5775C | Core i7-4790K | Core i7-4770K | Core i7-3770K |
|---|---|---|---|---|---|---|
| Architecture | Kaby Lake | Skylake | Broadwell | Haswell | Haswell | Ivy Bridge |
| Socket | LGA 1151 | LGA 1151 | LGA 1150 | LGA 1150 | LGA 1150 | LGA 1155 |
| Base Clock | 4.2 GHz | 4.0 GHz | 3.3 GHz | 4.0 GHz | 3.5 GHz | 3.5 GHz |
| Max Turbo Clock | 4.5 GHz | 4.2 GHz | 3.7 GHz | 4.4 GHz | 3.9 GHz | 3.9 GHz |
| Memory Speeds | Up to 2400 MHz | Up to 2133 MHz | Up to 1600 MHz | Up to 1600 MHz | Up to 1600 MHz | Up to 1600 MHz |
| Cache (L4 Cache) | 8MB | 8MB | 6MB (128MB) | 8MB | 8MB | 8MB |
| System Bus | DMI3 - 8.0 GT/s | DMI3 - 8.0 GT/s | DMI2 - 6.4 GT/s | DMI2 - 5.0 GT/s | DMI2 - 5.0 GT/s | DMI2 - 5.0 GT/s |
| Graphics | HD Graphics 630 | HD Graphics 530 | Iris Pro 6200 | HD Graphics 4600 | HD Graphics 4600 | HD Graphics 4000 |
| Max Graphics Clock | 1.15 GHz | 1.15 GHz | 1.15 GHz | 1.25 GHz | 1.25 GHz | 1.15 GHz |
Introduction and Motherboard Layout
With the release of the Intel Z270 chipset, GIGABYTE is unveiling its AORUS line of products. The AORUS branding will be used to differentiate enthusiast- and gamer-friendly products from their other product lines, similar to how ASUS uses the ROG branding for its high-performance line. The Z270X-Gaming 5 is among the first boards released as part of GIGABYTE's AORUS line. It features the black and white branding common to the AORUS product line, with the rear panel cover and chipset cooler carrying the brand logos. The board is designed around the Intel Z270 chipset with support for the latest Intel LGA1151 Kaby Lake processor line (as well as Skylake processors) and dual-channel DDR4 memory. The Z270X-Gaming 5 will be available at an MSRP of $189.99.
GIGABYTE designed the Z270X-Gaming 5 with a total of 11 digital power phases, more than enough to handle any load thrown at the system. The CPU VRMs are cooled by large coolers above and to the right of the socket.
The board supports a total of two PCIe x4 M.2 slots, one to the left of the CPU socket and below the primary PCIe x1 slot, and the other underneath the third PCIe x1 slot. The M.2 slot by the CPU supports M.2 cards up to 110mm in length, while the other supports cards up to 80mm in length.
Introduction and Technical Specifications
Courtesy of ASUS
The Maximus VIII Impact is one of the Intel Z170 chipset offerings in the ROG (Republic of Gamers) board line. The board features the standard black and red ROG aesthetics in a mini-ITX form factor to accommodate space-constrained system builds. ASUS chose to integrate black-chrome heat sinks into the board's build, giving it a sleek and modern appearance. The board's Intel Z170 chipset supports the latest Intel LGA1151 Skylake processor line as well as dual-channel DDR4 memory. With an MSRP of $250, the Maximus VIII Impact comes at a price premium for a high-quality and feature-packed product.
Courtesy of ASUS
ASUS integrated the following features into the Maximus VIII Impact board: four SATA 3 ports; one U.2 32Gbps port; an Intel I219-V Gigabit NIC; 2x2 802.11ac WiFI adapter; one PCI-Express x16 slot; on-board power, reset, Clear CMOS, and USB BIOS Flashback buttons; 2-digit Q-Code LED diagnostic display; ROG SupremeFX Impact III 8-Channel audio subsystem; integrated DisplayPort and HDMI video ports; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.
Courtesy of ASUS
The Maximus VIII Impact features an eight phase digital power system, providing more than enough power to the CPU for any task you throw its way. The power delivery system itself consists of International Rectifier PowIRStage MOSFETs, MicroFine alloy chokes, and 10k-rated Japanese-sourced black-metallic capacitors.
Introduction and Technical Specifications
Courtesy of ASUS
The X99-A II motherboard is among the latest boards from ASUS, updating their existing product line with many new features including RGB LEDs and USB 3.1 support. The board supports all Intel LGA2011-3 based processors, including the new Broadwell-E processors, paired with DDR4 memory in up to a quad-channel configuration. The X99-A II can be found at retail at an MSRP of $249, a virtual steal for the features and performance offered by this board.
Courtesy of ASUS
ASUS integrated the following features into the X99-A II board: eight SATA 3 ports; one SATA-Express port; one U.2 32Gbps port; one M.2 PCIe x4 capable port; an Intel I218-V Gigabit NIC; four PCI-Express x16 slots; two PCI-Express x1 slots; on-board power, reset, MemOK!, and USB BIOS Flashback buttons; Aura RGB LED 4-pin power header; RGB Q-Slot support; 2-digit Q-Code LED diagnostic display; Q-LED support; Crystal Sound 3 8-Channel audio subsystem; and USB 3.0 and 3.1 Gen2 Type-A / Type-C port support.
Introduction, Specifications and Packaging
Earlier this year we covered the lower two capacities of the Samsung 750 EVO. We had some requests for a review of the 500GB model as soon as it was added to the lineup, and Samsung promptly sent a sample, but I delayed that review in the interest of getting the full 750 EVO lineup tested under our new storage test suite. I've been running batches of SSDs through this new suite, and we now have enough data points to begin cranking out some reviews. The 750 EVO was at the head of the line, so we will start with it. I'm 'reissuing' our review as a full-capacity roundup of the 750 EVO lineup, as these are fresh results on a completely new test suite.
These are the 'Rev. 2' specifications from Samsung, which include the 500GB model of the 750 EVO. The changes are not significant, mainly a slight bump to random performance of the top capacity model along with a changeover to lower power DDR3 (of twice the capacity) for the 500GB model's system cache.
Nothing new here. This is the standard Samsung packaging for their SATA products.
Vulkan 1.0, OpenGL 4.5, and OpenGL ES 3.2 on a console
A few days ago, sharp eyes across the internet noticed that Nintendo’s Switch console had been added to lists of compliant hardware at The Khronos Group. Vulkan 1.0 was the eye-catcher, although the other tabs also claim conformance with OpenGL 4.5 and OpenGL ES 3.2. The device is not listed as compatible with OpenCL, although that does not really surprise me for a single-GPU gaming system; the other three APIs have compute shaders designed around the needs of game developers. So the Nintendo Switch conforms to the latest standards of the three most important graphics APIs that a gaming device should use -- awesome.
But what about performance?
In other news, Eurogamer / Digital Foundry and VentureBeat uncovered information about the hardware. It will apparently use a Tegra X1, based on second-generation Maxwell, under-clocked relative to what we see in the Shield TV. When docked, the GPU will be able to reach 768 MHz on its 256 CUDA cores. When undocked, this will drop to 307.2 MHz (although the system can utilize this mode while docked, too). This puts the performance at ~315 GFLOPS when mobile, pushing up to ~785 GFLOPS when docked.
You might compare this to the Xbox One, which runs at ~1310 GFLOPS, and the PlayStation 4, which runs at ~1840 GFLOPS. That puts the Nintendo Switch somewhat behind both, although the gap is even larger than those numbers suggest. The FLOP figures for Sony and Microsoft are calculated as 2 x shader count x frequency, but the calculation for Nintendo’s Switch is 4 x shader count x frequency. FMA accounts for the first factor of two, but the extra factor of two in Nintendo’s case...
Yup, the Switch’s performance rating is calculated as FP16, not FP32.
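The arithmetic behind those peak-throughput figures is easy to verify. A quick sketch, using the core counts and clocks quoted above (the factor-of-two assumptions are the FMA and packed-FP16 factors described in the text):

```python
# Peak throughput in GFLOPS: (ops per shader per clock) x shaders x clock (GHz).
# The baseline factor of 2 comes from FMA (fused multiply-add, two ops per cycle);
# the Switch's headline rating uses an extra factor of 2 from packed FP16 math.

def gflops(shaders, clock_ghz, ops_per_clock=2):
    return ops_per_clock * shaders * clock_ghz

switch_docked_fp16 = gflops(256, 0.768, ops_per_clock=4)   # ~786 GFLOPS (the ~785 quoted)
switch_mobile_fp16 = gflops(256, 0.3072, ops_per_clock=4)  # ~315 GFLOPS
switch_docked_fp32 = gflops(256, 0.768)                    # ~393 GFLOPS at full precision

print(switch_docked_fp16, switch_mobile_fp16, switch_docked_fp32)
```

Note that the FP32 number, the apples-to-apples comparison against the Xbox One and PlayStation 4 ratings, is half the headline figure.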
Snippet from an alleged leak of what Nintendo is telling developers.
If true, it's very interesting that FP16 values are being discussed as canonical.
Reducing shader precision to 16-bit is common for mobile devices. It takes fewer transistors to store and translate half-precision values, and accumulated error will be muted by the fact that you’re viewing the result on a mobile screen. The Switch isn’t always a mobile device, though, so it will be interesting to see how this reduction of lighting and shading precision affects games on your home TV, especially in titles that don’t follow Nintendo’s art styles. That said, shaders could use 32-bit values, but then you are cutting your performance for those instructions in half when you are already somewhat behind your competitors.
As for the loss of performance when undocked, it shouldn’t be too much of an issue if Nintendo pressures developers to hit 1080p when docked. If that’s the case, the lower resolution, 720p mobile screen will roughly scale with the difference in clock.
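That rough scaling claim checks out arithmetically. A quick sketch using the resolutions and clocks mentioned above:

```python
# Compare the extra pixels a docked 1080p target demands against the
# extra GPU clock available when docked.
pixels_1080p = 1920 * 1080                 # docked render target
pixels_720p = 1280 * 720                   # mobile screen
pixel_ratio = pixels_1080p / pixels_720p   # 2.25x the pixels

clock_ratio = 768 / 307.2                  # 2.5x the GPU clock when docked

print(pixel_ratio, clock_ratio)  # 2.25 2.5
```

The docked clock gain (2.5x) slightly exceeds the extra pixel load (2.25x), which is why the drop shouldn't hurt much at 720p.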
Lastly, there are a bunch of questions surrounding Nintendo’s choice of operating system: basically, all the questions. It’s being developed by Nintendo, but we have no idea what they forked it from. NVIDIA supports the Tegra SoC on both Android and Linux, it would be legal for Nintendo to fork either one, and Nintendo could have simply asked for drivers even if NVIDIA didn’t already support the platform in question. Basically, anything is possible from the outside, and I haven’t seen any solid leaks from the inside.
The Nintendo Switch launches in March.
Introduction and Features
The Strider Platinum Series is one of SilverStone’s most complete and popular lines of power supplies among PC enthusiasts. Starting at 550W and going up to 1200W, the Strider Platinum Series includes six different models. The four 550W through 850W models feature a compact chassis measuring only 140mm deep. The two high-output models (1000W and 1200W) forgo the small enclosure and step up to a larger chassis with a 140mm fan.
All of the SilverStone Strider Platinum power supplies come with 80 Plus Platinum certification for high efficiency and fully modular cables. Earlier this year, we took a look at two of the compact chassis units (ST55F-PT and ST85F-PT) and found them to be very capable power supplies. In this review, we will be taking a detailed look at the Strider Platinum Series ST1000-PT.
SilverStone Strider Platinum Series ST1000-PT Key Features:
• 1000W DC power output
• High efficiency with 80 Plus Platinum certification
• 100% Modular cables
• Intelligent semi-fanless operation
• Quiet 140mm cooling fan with FF141 dust filter
• 24/7 Continuous power output with 40°C operating temperature
• Strict ±3% voltage regulation and low AC ripple
• Dedicated single +12V rail (83A/996W)
• Universal AC input (90-264V) with Active PFC
• DC Output protections: UVP, OVP, OPP, SCP, OCP, and OTP
• Dimensions: 150mm (W) x 86mm (H) x 180mm (L)
• 5-Year warranty
• MSRP : $189.99 USD
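As a quick sanity check, the single +12V rail's quoted amperage and wattage in the list above agree with each other:

```python
# The +12V rail is rated at 83 A; voltage times current gives the rail's
# power budget, matching the quoted 996W (just shy of the unit's 1000W total).
rail_volts = 12.0
rail_amps = 83.0
rail_watts = rail_volts * rail_amps
print(rail_watts)  # 996.0
```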
It's that time of year again where the holidays are upon us, it's freezing outside, and balancing work, school, and family events makes us all a bit crazy. If you are still procrastinating on your holiday shopping or are just not sure what to get the techie that seems to have everything already, PC Perspective has you covered! And if you dare not venture outside into the winter wasteland, there is still time to order online and have it arrive in time!
Following the same format as previous years, the first set of pages contains picks the staff has put together collectively: recommendations for PC hardware components such as CPUs, graphics cards, and coolers; mobile hardware (phones, tablets, etc.); and finally accessories and audio. Beyond that, each staff member is given a section to suggest picks of their own that may not fit into one of the main categories but are still thoughtful and useful gift ideas!
Good luck out there, and thank you for another wonderful year of your valued readership! May you have safe travels and memorable holidays!
Image courtesy maf04 via Flickr creative commons.
Intel Core i7-6700K Quad-Core Unlocked Processor - $344, Amazon
It's a bit of an interesting time for CPUs, with Intel's Kaby Lake (e.g. the 7700K) not out yet and AMD's Zen-based Ryzen processors (e.g. the 8-core Summit Ridge) slated for release early next year. Last year the i7-6700K was our top pick, and due to the timing of upcoming releases, it remains the pick for this year, though it may be wise to look at other gift ideas unless your giftee really wants that gaming PC ASAP. On the plus side, it is a bit cheaper than last year! You can read our review of the Core i7-6700K here.
AMD Athlon X4 880K Unlocked Quad Core - $92, Amazon
On the AMD side of things, the Athlon X4 880K is a great processor to base a budget gaming build around. It is priced a bit cheaper than the old 860K while offering slightly faster clock speeds and a better stock cooler.
Continue reading our holiday gift guide for our picks for graphics cards, storage, and more!
The NVIDIA GTX 1080 is the current consumer-grade performance king, and there are a number of brands to choose from. Whichever you go with, the Pascal-based graphics card is ready for 1440p and even 4K gaming, along with VR (virtual reality) gaming. The EVGA FTW Hybrid is a beast of a card that can easily be overclocked while keeping temperatures in check.
If you are looking for something a bit cheaper, AMD's RX 480 is a great midrange graphics card that can easily handle 1080p with the details cranked up. Sapphire has a good factory-overclocked card in the RX 480 Nitro+, which can be found for $250. For reference, check out our reviews of the RX 480 (we also have a video of the Sapphire card specifically) and its competitor, the GTX 1060.
Samsung 960 Evo 1TB NVMe M.2 SSD - $480, Amazon
Samsung's recently released 960 Evo is not the fastest SSD available, but it's no slouch either. Using Samsung's TLC V-NAND flash, its Polaris controller, and 1GB of DDR3 cache, the drive packs 1TB of speedy storage into the M.2 form factor. The NVMe SSD is rated at 3,200 MB/s sequential reads, 1,900 MB/s sequential writes, 380,000 4k random reads and 360,000 4k random writes (IOPS ratings at QD32). It is rated at 1.5 million hours MTBF and while it does not have the lifetime or write speeds of the pro version (960 Pro), it is quite a bit cheaper! The gamer in your life will appreciate the super fast loading times too!
If you are looking for something a bit more down to earth, SATA SSDs continue to get cheaper and more capacious and there are even some good budget M.2 options these days!
MyDigitalSSD BPX 480GB NVMe M.2 SSD - $200, Amazon
MyDigitalSSD's BPX solid state drive pairs a Phison PS5007-E7 controller with up to 480GB of 2D MLC NAND. The drive is rated at a respectable 2,600 MB/s sequential reads, 1,300 MB/s sequential writes, and approximately 208,728 4k random read IOPS and 202,713 4k random write IOPS. Despite being from a less well known company, the budget drive puts up very nice numbers for the price and comes with a 5 year warranty, 2 million hours MTBF rating, and 1,400 TB total bytes written rating on the flash. Pricing is much more budget friendly at $200 for the 480GB model, $115 for the 240GB, and $70 for the 120GB drive.
SATA SSD Recommendations
SATA SSDs are still great options for a system build and/or HDD upgrade, and Allyn has a few picks later on in this guide.
The following pages are individual selections / gift ideas from each staff member!
Das Keyboard describes their products as "the ultimate experience for badasses", and the Austin, TX-based company has delivered premium designs since their initial (completely blank) keyboard in 2005. The Prime 13 is a traditional 104-key design (with labeled keys), and features Cherry MX Brown switches and simple white LED backlighting. So is it a truly "badass" product? Read on to find out!
"Das Keyboard Prime 13 is a minimalist mechanical keyboard designed to take productivity to the next level. Free of fancy features, the Prime 13 delivers an awesome typing experience by focusing on premium material and simple design. Featuring an anodized aluminum top panel, Cherry MX switches with white LEDs, USB pass-through and an extra-long braided cable, the Prime 13 is the ideal mechanical keyboard for overachievers who want to get the job done."
I don't need to tell prospective mechanical keyboard buyers that the market is very crowded, and it seems to grow every month. Just about every PC accessory maker offers at least one option, and many have tried to distinguish themselves with RGB lighting effects and software with game-specific profiles and the like. So is there still room for a simple, non-RGB keyboard with no special software involved? I think so, but it will need to be quite a premium design to justify a $149 price tag, which is what the Prime 13 will run at retail. First impressions are very good, but I'll try to cover the experience as well as I can in text and photos in this review.
A close look at the MX Brown switches within the Prime 13
Ryzen coming in 2017
As much as we might want it to be, today is not the day that AMD launches its new Zen processors to the world. We’ve been teased with them for years now, with trickles of information at event after event…but we are going to have to wait a little bit longer, with one more tease at least. Today AMD is announcing the official branding of the consumer processors based on Zen, previously code-named Summit Ridge, along with a clock speed data point and a preview of technologies that will help it be competitive with the Intel Core lineup.
The future consumer desktop processor from AMD will now officially be known as Ryzen. That’s pronounced “RISE-IN”, not “RIS-IN”, just so we are all on the same page. CEO Lisa Su was on stage during the reveal at a media event last week and claimed that while media, fans, and AMD fell in love with the Zen name, the product needed differentiation from the architecture itself. The name is solid – not earth-shattering, though I foresee a long life of mispronunciation ahead of it.
Now that we have the official branding behind us, let’s get to the rest of the disclosed information we can reveal today.
We already knew that Summit Ridge would ship with an 8 core, 16 thread version (with lower core counts at lower prices very likely) but now we know a frequency and a cache size. AMD tells us that there will be a processor (the flagship) that will have a base clock of 3.4 GHz with boost clocks above that. How much above that is still a mystery – AMD is likely still tweaking its implementation of boost to get as much performance as possible for launch. This should help put those clock speed rumors to rest for now.
The 20MB of cache matches the Core i7-6900K, though obviously, with some dramatic architecture differences between Broadwell and Zen, the effect and utilization of that cache will be interesting to measure next year.
We already knew that Ryzen will utilize the AM4 platform, but it’s nice to see AMD reiterate its modern feature set and expandability. DDR4 memory, PCI Express Gen3, native USB 3.1 and NVMe support – these are all necessary building blocks for a modern consumer and enthusiast PC. We still need to see how many of these ports the chipset offers and how aggressive motherboard companies like ASUS, MSI, and Gigabyte are in their designs. I am hoping there are as many options as we would see for an X99/Z170 platform, including budget boards in the $100 space as well as “anything and everything” options for those buyers that want to adopt AMD’s new CPU.
AMD Enters Machine Learning Game with Radeon Instinct Products
NVIDIA has been diving into the world of machine learning for quite a while, positioning itself and its GPUs at the forefront of artificial intelligence and neural net development. Though the strategies are still filling out, I have seen products like the DIGITS DevBox place a stake in the ground for neural net training and platforms like Drive PX perform inference tasks on those neural nets in self-driving cars. Until today, AMD had remained mostly quiet on its plans to enter and address this growing and complex market, instead depending on the compute prowess of its latest Polaris and Fiji GPUs to make a general statement on their own.
The new Radeon Instinct brand of accelerators based on current and upcoming GPU architectures will combine with an open-source approach to software and present researchers and implementers with another option for machine learning tasks.
The statistics and requirements that come along with the machine learning evolution in the compute space are mind-boggling. More than 2.5 quintillion bytes of data are generated daily and stored on phones, PCs and servers, both on-site and through a cloud infrastructure. That includes 500 million tweets, 4 million hours of YouTube video, 6 billion Google searches and 205 billion emails.
Machine intelligence is going to allow software developers to address some of the most important areas of computing for the next decade. Automated cars depend on deep learning to train, medical fields can utilize this compute capability to more accurately and expeditiously diagnose and find cures for cancer, and security systems can use neural nets to locate potential and current risk areas before they affect consumers; there are more uses for this kind of network and capability than we can imagine.
Third annual release
For the past two years, AMD has made a point of releasing one major software update to Radeon users and gamers annually. In 2014 this started with Catalyst Omega, where a dramatic jump in performance, compatibility testing and new features were the story. We were told that, for the first time in a very long while, AMD was going to focus on building great software with regular and repeated updates – for me, the most important aspect of the release. In 2015 we got a rebrand along with the release: Radeon Software Crimson Edition. AMD totally revamped the visual and user experience of the driver software, bringing it into the modern world of style and function. New features and added performance were also the hallmarks of this release, with a stronger promise to produce more frequent drivers to address any performance gaps, stability concerns and to include new features.
For December 2016 and into the new year, AMD is launching the Radeon Software Crimson ReLive Edition driver. While the name might seem silly, it will make sense as we dive into the new features.
While you may have seen the slides leak out through some other sites over the past 48 hours, I thought it was worth offering my input on the release.
Not a performance focused story
The first thing that should be noted with the ReLive Edition is that AMD isn’t making any claims of substantially improved performance. Instead, the Radeon Technologies Group software team is dedicated to continued and frequent iterations that improve performance gradually over time.
As you can see in the slide above, AMD is showing modest 4-8% performance gains on the Radeon RX 480 with the Crimson ReLive driver, and even then, it’s being compared to the launch driver of 16.6.2. That is significantly lower than the claims made in previous major driver releases. Talking with AMD about this concern, we were told that it doesn’t foresee any dramatic, single large step increases in performance going forward. The major design changes that were delivered over the last several years, starting with a reconstruction of the CrossFire system thanks to our testing, have been settled. All we should expect going forward is a steady trickle of moderate improvements.
(Obviously, an exception may occur here or there, like with a new game release.)
Radeon ReLive Capture and Streaming Feature
So, what is new? The namesake feature for this driver is the Radeon ReLive application that is built in. ReLive is a capture and streaming tool that will draw obvious comparisons to what NVIDIA has done with GeForce Experience. The ReLive integration is clean and efficient, well designed and seems easy to use in my quick time with it. There are several key capabilities it offers.
First, you can record your gameplay with the press of a hotkey; this includes the ability to record and capture the desktop as well. AMD has included a bevy of settings for your captures to adjust quality, resolution, bitrate, FPS and more.
ReLive supports resolutions up to 4K30 with the Radeon R9 series of GPUs and up to 1440p30 with the RX 480/470/460. Both AVC (H.264) and HEVC (H.265) encoding are supported.
Along with recording is support for background capture, called Instant Replay. This allows the gamer to always record in the background, up to 20 minutes, so you can be sure you capture amazing moments that happen during your latest gaming session. Hitting a hotkey will save the clip permanently to the system.
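Conceptually, a background capture feature like Instant Replay amounts to a rolling ring buffer: encoded frames are continuously appended, the oldest are discarded once the buffer is full, and a hotkey snapshots whatever is currently held. The sketch below is a hypothetical illustration of that pattern in Python, not AMD's actual implementation (the class and frame format are invented for the example):

```python
from collections import deque

class InstantReplayBuffer:
    """Rolling capture buffer: retains only the most recent clip_seconds
    of encoded frames, silently discarding the oldest as new ones arrive."""

    def __init__(self, fps=30, clip_seconds=20 * 60):
        # A bounded deque drops its oldest element automatically once full.
        self.frames = deque(maxlen=fps * clip_seconds)

    def push(self, encoded_frame):
        self.frames.append(encoded_frame)

    def save_clip(self):
        # A hotkey press would snapshot the current contents to disk.
        return list(self.frames)

buf = InstantReplayBuffer(fps=30, clip_seconds=5)  # tiny 5-second buffer
for i in range(300):                               # push 10 seconds of frames
    buf.push(f"frame-{i}")
clip = buf.save_clip()
print(len(clip), clip[0])  # only the last 5 s (150 frames) survive
```

The appeal of this design is bounded memory use: recording "always" costs no more than the configured window, which is why a 20-minute cap exists at all.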
Introduction and Features
2016 has been another busy year for EVGA as they continue to expand their product offerings. EVGA recently introduced five power supplies in the new Supernova G3 Series. Compared to the original G2 Series, these new power supplies offer improved performance, a smaller chassis, and incorporate a hydraulic dynamic bearing fan. We will be taking a detailed look at the entry level 550 G3 in this review.
• EVGA SuperNOVA 550W G3 ($89.99 USD)
• EVGA SuperNOVA 650W G3 ($109.99 USD)
• EVGA SuperNOVA 750W G3 ($129.99 USD)
• EVGA SuperNOVA 850W G3 ($139.99 USD)
• EVGA SuperNOVA 1000W G3 ($159.99 USD)
The Supernova G3 Series is based on EVGA’s popular G2 series but now comes in a smaller chassis measuring only 150mm (5.9”) deep. The G3 Series also uses a 130mm cooling fan with a hydraulic dynamic bearing for quiet operation and extended life.
The Supernova G3 series power supplies are 80 Plus Gold certified for high efficiency and feature fully modular cables, high-quality Japanese brand capacitors, and EVGA’s ECO Intelligent Thermal Control System which enables fan-less operation at low to mid power. All G3 series power supplies are NVIDIA SLI and AMD Crossfire Ready and are backed by either a 7-year (550W and 650W) or 10-year (750W, 850W and 1000W) EVGA warranty.
EVGA SuperNOVA 550W G3 PSU Key Features:
• 80 PLUS Gold certified, with up to 90%/92% efficiency (115VAC/240VAC)
• Highest quality Japanese brand capacitors ensure long-term reliability
• Fully modular cables to reduce clutter and improve airflow
• Quiet 130mm hydraulic dynamic bearing fan for long life
• ECO Intelligent Thermal Control allows silent, fan-less operation at low power
• NVIDIA SLI & AMD Crossfire Ready
• Active Power Factor correction (0.99) with Universal AC input
• Heavy-duty protections: OVP, UVP, OCP, OPP, SCP, and OTP
• 7-Year warranty with unparalleled EVGA Customer Support
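To put the 80 Plus Gold efficiency figures above in perspective: efficiency relates DC output to AC wall draw, with the difference dissipated as heat inside the PSU. A quick back-of-the-envelope sketch (the 400 W load is an illustrative figure, not a tested result):

```python
def wall_draw(dc_load_w, efficiency):
    """AC power pulled from the wall for a given DC load, plus the
    difference that the PSU dissipates as heat."""
    ac_w = dc_load_w / efficiency
    heat_w = ac_w - dc_load_w
    return ac_w, heat_w

# A 400 W DC load at 90% efficiency (115 VAC, Gold territory):
ac, heat = wall_draw(400, 0.90)
print(round(ac, 1), round(heat, 1))  # ~444.4 W from the wall, ~44.4 W as heat
```

This is also why fan-less operation at low to mid power is plausible: at light loads the absolute heat to remove is small enough for passive cooling.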
Maybe Good that Valve Called their API OpenVR?
Update, December 6th, 2016 @ 2:46pm EST: Khronos has updated the images on their website, and those changes are now implemented on our post. The flow-chart image changed dramatically, but the members image has also added LunarG.
Original Post Below
The Khronos Group has just announced their VR initiative, which is in the early, call-for-participation stage. The goal is to produce an API that drivers from each vendor can target, so that applications can be written once and run on all compatible devices. The current list of participants is: Epic Games, Google, Oculus VR, Razer, Valve, AMD, ARM, Intel, NVIDIA, VeriSilicon, Sensics, and Tobii. The point of this announcement is to get even more companies involved before the specification matures.
Image Credit: The Khronos Group
Valve, in particular, has donated their OpenVR API to Khronos Group. I assume that this will provide the starting point for the initiative, similar to how AMD donated Mantle to found Vulkan, which overcomes the decision paralysis of a blank canvas. Also, especially for VR, I doubt these decisions would significantly affect individual implementations. If it does, though, now would be the time for them to propose edits.
In terms of time-frame, it’s early enough that the project scope hasn’t even been defined, so schedules can vary. They do claim that, based on past experiences, about 18 months is “often typical”.
That’s about it for the announcement; on to my analysis.
Image Credit: The Khronos Group, modified
First, it’s good that The Khronos Group are the ones taking this on. Not only do they have the weight to influence the industry, especially with most of these companies having already collaborated on other projects, like OpenGL, OpenCL, and Vulkan, but their standards tend to embrace extensions. This allows Oculus, Valve, and others to add special functionality that can be picked up by applications, but still be compatible at a base level with the rest of the ecosystem. To be clear, the announcement said nothing about extensions, but it would definitely make sense for VR, which can vary with interface methods, eye-tracking, player tracking, and so forth.
If extensions end up being a thing, this controlled competition allows the standard as a whole to evolve. If an extension ends up being popular, that guides development of multi-vendor extensions, which eventually may be absorbed into the core specification. On the other hand, The Khronos Group might decide that, for VR specifically, the core functionality is small and stable enough that extensions would be unnecessary. Who knows at this point.
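For readers unfamiliar with how Khronos-style extensions work in practice, the pattern is simple capability negotiation: the application queries which optional features the runtime reports, enables the ones it wants that are present, and falls back to core functionality otherwise. The sketch below is a generic illustration of that pattern; the extension names are entirely hypothetical and not from any announced VR API:

```python
def negotiate_features(supported_extensions, wanted_extensions):
    """Enable only the optional extensions the runtime actually reports;
    core functionality works regardless of what is missing."""
    enabled = [ext for ext in wanted_extensions if ext in supported_extensions]
    missing = [ext for ext in wanted_extensions if ext not in supported_extensions]
    return enabled, missing

# Hypothetical runtime that supports eye tracking but not body tracking:
runtime = {"VENDOR_eye_tracking", "VENDOR_async_reprojection"}
enabled, missing = negotiate_features(
    runtime, ["VENDOR_eye_tracking", "VENDOR_body_tracking"])
print(enabled, missing)
```

The point of the pattern is exactly what the paragraph above describes: vendors can ship special functionality without breaking baseline compatibility, and popular extensions can later graduate into the core specification.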
Second, The Khronos Group stated that Razer joined for this initiative specifically. A few days ago, we posted news and assumed that they wanted to have input into an existing initiative, like Vulkan. While they still might, their main intentions are to contribute to this VR platform.
Third, there are a few interesting omissions from the list of companies.
Microsoft, who recently announced a VR ecosystem for Windows 10 (along with the possibly-applicable HoloLens of course), and is a member of the Khronos Group, isn’t part of the initiative, at least not yet. This makes sense from a historical standpoint, as Microsoft tends to assert control over APIs from the ground up. They are, or I should say were, fairly reluctant to collaborate, unless absolutely necessary. This has changed recently, starting with their participation with the W3C, because good God I hope web browsers conform to a standard, but also their recent membership with the Khronos Group, hiring ex-Mozilla employees, and so forth. Microsoft has been lauding how they embrace openness lately, but not in this way yet.
Speaking of Mozilla, that non-profit organization has been partnered with Google on WebVR for a few years now. While Google is a member of this announcement, it seems to be mostly based around their Daydream initiative. The lack of WebVR involvement with whatever API comes out of this initiative is a bit disappointing, but, again, it’s early days. I hope to see Mozilla and the web browser side of Google jump in and participate, especially if video game engines continue to experiment with cross-compiling to Web standards.
It's also surprising to not see Qualcomm's name on this list. The dominant mobile SoC vendor is a part of many Khronos-based groups including Vulkan, OpenCL, and others, so it's odd to have this omission here. It is early, so there isn't any reason to have concern over a split, but Qualcomm's strides into VR with development kits, platform advancements and other initiatives have picked up in recent months and I imagine it will have input on what this standard becomes.
And that’s all that I can think of at the moment. If you have any interests or concerns, be sure to drop a line in the comments. Registration is not required.
Introduction and First Impressions
The GENOME is the world’s first computer case with an integrated liquid-cooling system, and this unique design allows users to simply drop in the main system components and have a complete system with a liquid cooling loop (and with very little effort).
“One of the first things many of us look at when considering the purchase of a new case is whether it will accommodate the cooling subsystem that we’d like to install in our next build. Can you install big enough radiators? Is there room in the main interior space for the reservoir and pump that you have your eye on? How will it look when everything is put together? To improve PC user experience is why DEEPCOOL comes up with GENOME, which is a PC hardware component, consists of an ATX PC case and an extreme liquid cooling system.”
When I first heard about the GENOME I was nonplussed, wondering how I would even go about reviewing it since it defies conventional classification. It’s as much a CPU cooler as a case, and DEEPCOOL calls this simply a “cooling system”. But however you label it, there is no doubt that this novel concept has the potential to produce a polished build with minimal effort (if it is well designed, of course).
If you have switched cases as often as I do (no one should - I do it once every week or two), you might appreciate any sort of labor-saving design in a case. As a reviewer moving a test system from one enclosure to the next, I just want an easy build with adequate clearance and good cable management (these requirements are true for most normal people as well). Some cases are much easier to build in than others, and I was very curious to see how something which sounds quite complex would actually come together.
Move Over T150...
The Thrustmaster TMX was released this past summer to address the Xbox One ecosystem with an affordable, entry-level force-feedback wheel. This is essentially the Xbox version of the previously reviewed Thrustmaster T150 for the PS3/PS4. There are many things that these two wheels have in common, but there are a few significant differences as well. The TMX is also PC compatible, which is how I tested it.
A no-nonsense box design that lets the buyer know exactly what systems this product is for.
The TMX is priced at an MSRP of $199. Along with the T150, this is truly an entry-level FFB wheel with all of the features that racers desire. The wheel itself is 11” wide and the base is compact, with a solid feel. Unlike the T150, the TMX is entirely decked out in multiple shades of black. The majority of the unit is a dark, slick black while the rubber grips have a matte finish. The buttons on the wheel are colored appropriately according to the Xbox controller standards (yellow, blue, green, and red). The other buttons are black, with a couple of them having some white stenciling on them.
The motor in this part is not nearly as powerful as what we find in the TX and T300rs base units. Those are full pulley based parts with relatively strong motors while the TMX is a combination gear and pulley system. This provides a less expensive setup than the full pulley systems of the higher priced parts, but it still is able to retain pretty strong FFB. Some of the more subtle effects may be lost due to the setup, but it is far and away a better solution than units that feature bungee cords and basic rumble functionality.
The back shows a basic diagram of the mixed pulley and geared subsystem for force-feedback.
The wheel features a 12-bit optical pickup sensor for motion on the wheel. This translates into 4096 values through 360 degrees of rotation. This is well below the 16-bit units of the TX and T300rs bases, but in my racing I did not find it to be holding me back. Yes, the more expensive units are more accurate and utilize the Hall effect rather than an optical pickup, but the TMX provides more than enough precision for the vast majority of titles out there. The pedals look to feature the same 10-bit resolution that most other Thrustmaster pedals offer, or 1024 values over several inches of travel.
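Those resolution figures are simple powers of two, and the practical difference between the sensors is the angular step between adjacent counts. A quick check (the 360-degree range mirrors the figure quoted above; a wider rotation range would coarsen the step proportionally):

```python
def sensor_resolution(bits, rotation_degrees=360):
    """Number of discrete positions an n-bit sensor reports over the
    rotation range, and the angular step between adjacent counts."""
    counts = 2 ** bits
    return counts, rotation_degrees / counts

counts12, step12 = sensor_resolution(12)  # TMX wheel sensor
counts16, step16 = sensor_resolution(16)  # TX / T300rs class sensors
print(counts12, counts16)                 # 4096 vs 65536 positions
```

So the 16-bit bases resolve sixteen times finer steps, which matters for measurement bragging rights far more than it does for most racing titles.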