Introduction and Specifications
Logitech has been releasing gaming headsets with steady regularity of late, and this summer we have another new model to examine: the G433 Gaming Headset, which has just been released (along with the G233). This wired, 7.1-channel-capable headset is quite different visually from previous Logitech models, as it is finished with an interesting “lightweight, hydrophobic fabric shell” and offered in various colors (our review pair is a bright red). But the G433 has function to go along with the style, as Logitech has focused on both digital and analog sound quality with this, the third model to incorporate Logitech’s Pro-G drivers. How do they sound? We’ll find out!
One of the main reasons to consider a gaming headset like this in the first place is the ability to take advantage of multi-channel surround sound from your PC, and with the G433 (as with the previously reviewed G533) this is accomplished via DTS Headphone:X, a technology which in my experience is capable of producing a convincing sound field that is very close to that of multiple surround drivers. All of this is created by the same pair of left/right drivers that handle music, and here Logitech is able to boast of some very impressive engineering in the Pro-G driver introduced two years ago. An included DAC/headphone amp interfaces with your PC via USB to drive the surround experience, and without it you still have a standard stereo headset that can connect to anything with a 3.5 mm jack.
The G433 is available in four colors, of which we have the red on hand today
If you have not read up on Logitech’s exclusive Pro-G driver, you will find in their description far more similarities to an audiophile headphone company than what we typically associate with a computer peripheral maker. Logitech explains the thinking behind the technology:
“The intent of the Pro-G driver design innovation is to minimize distortion that commonly occurs in headphone drivers. When producing lower frequencies (<1kHz), most speaker diaphragms operate as a solid mass, like a piston in an engine, without bending. When producing many different frequencies at the same time, traditional driver designs can experience distortion caused by different parts of the diaphragm bending when other parts are not. This distortion caused by rapid transition in the speaker material can be tuned and minimized by combining a more flexible material with a specially designed acoustic enclosure. We designed the hybrid-mesh material for the Pro-G driver, along with a unique speaker housing design, to allow for a more smooth transition of movement resulting in a more accurate and less distorted output. This design also yields a more efficient speaker due to less overall output loss due to distortion. The result is an extremely accurate and clear sounding audio experience putting the gamer closer to the original audio of the source material.”
Logitech’s claims about the Pro-G have, in my experience with the previous models featuring these drivers (G633/G933 Artemis Spectrum and G533 Wireless), been spot on, and I have found them to produce a clarity and detail that rivals ‘audiophile’ stereo headphones.
Astute readers of the site might remember the original story we did on Bitcoin mining in 2011, the good ole' days where the concept of the blockchain was new and exciting and mining Bitcoin on a GPU was still plenty viable.
However, that didn't last long, as the race for cash led people to develop Application-Specific Integrated Circuits (ASICs) dedicated solely to mining Bitcoin quickly while sipping power. The use of these expensive ASICs drove the difficulty of mining Bitcoin through the roof and killed any chance of profitability for mere mortals mining cryptocurrency.
Cryptomining saw a resurgence in late 2013 with the popular adoption of alternate cryptocurrencies, specifically Litecoin, which was based on the Scrypt algorithm instead of SHA-256 like Bitcoin. This meant that the ASICs developed for mining Bitcoin were useless. This is also the period of time that many of you may remember as the "Dogecoin" era; Dogecoin remains my personal favorite cryptocurrency of all time.
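The practical difference between the two hash functions can be sketched with Python's `hashlib`, which ships both. The header bytes and salt below are placeholders invented for the example (real Litecoin uses the block header itself as the scrypt salt), but the scrypt parameters match Litecoin's published n=1024, r=1, p=1:

```python
import hashlib

block_header = b"example block header bytes"

# Bitcoin-style proof of work uses double SHA-256, which is compute-bound
# and maps very well onto dedicated ASIC silicon.
sha = hashlib.sha256(hashlib.sha256(block_header).digest()).hexdigest()

# Litecoin-style proof of work uses scrypt, which is deliberately
# memory-hard: the n/r/p parameters force a sizable working set, which
# (at the time) made ASICs much harder to build economically.
scrypt_hash = hashlib.scrypt(block_header, salt=b"example-salt",
                             n=1024, r=1, p=1, dklen=32).hex()

print(sha)
print(scrypt_hash)
```

The memory-hardness, not the hash output itself, is what defenders of Scrypt coins were counting on.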
Defenders of these new "altcoins" claimed that Scrypt was different enough that ASICs would never be developed for it, and GPU mining would remain viable for a larger portion of users. As it turns out, the promise of money always wins out, and we soon saw Scrypt ASICs. Once again, the market for GPU mining crashed.
That brings us to today, and what I am calling "Third-wave Cryptomining."
While the mass populace stopped caring about cryptocurrency as a whole, the dedicated group that remained continued to develop altcoins. These currencies are based on various algorithms and other proofs of work (see technologies like Storj, which uses the blockchain for a decentralized, Dropbox-like service!).
As you may have predicted, another very popular cryptocurrency has emerged from this wave of development, for reasons that might be difficult to quantify historically: Ethereum.
Ethereum is based on the Dagger-Hashimoto algorithm and has a whole host of quirks that make it different from other cryptocurrencies. We aren't here to get deep into the weeds on the methods behind different blockchain implementations, but if you have some time, check out the Ethereum White Paper. It's all very fascinating.
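For a feel of what miners actually compute, here is a deliberately simplified proof-of-work loop in Python. To be clear, this is *not* Dagger-Hashimoto/Ethash (which adds memory-hard reads from a multi-gigabyte dataset); it only sketches the generic search for a nonce whose hash falls under a difficulty target, with made-up header bytes:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that sha256(header + nonce) < target.
    Real Ethash also mixes in dataset reads to stay memory-hard; this
    simplified sketch shows only the nonce search itself."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Low difficulty so the example finishes quickly; real networks tune
# the difficulty so the whole network finds a block every few seconds.
nonce = mine(b"example header", difficulty_bits=16)
print(nonce)
```

Raising `difficulty_bits` by one roughly doubles the expected number of hashes, which is how the network throttles block production as more hash power joins.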
Logitech G413 Mechanical Gaming Keyboard
The rise in popularity of mechanical gaming keyboards has been accompanied by the spread of RGB backlighting. But RGBs, which often include intricate control systems and software, can significantly raise the price of an already expensive peripheral. There are many cheaper non-backlit mechanical keyboards out there, but they are often focused on typing, and lack the design and features that are unique to the gaming keyboard market.
Gamers on a budget, or those who simply dislike fancy RGB lights, are therefore faced with a relative dearth of options, and it's exactly this market segment that Logitech is targeting with its G413 Mechanical Gaming Keyboard.
A Data Format for Whole 3D Scenes
The Khronos Group has finalized the glTF 2.0 specification, and they recommend that interested parties integrate this 3D scene format into their content pipeline starting now. It’s ready.
glTF is a format for delivering 3D content, especially full scenes, in a compact, quick-loading data structure. These features differentiate glTF from other 3D formats, like Autodesk’s FBX and even the Khronos Group’s own Collada, which are more like intermediate formats for moving assets between tools, such as 3D editing software (e.g., Maya and Blender) and game engines. The Khronos Group doesn’t see a competing format for final scenes that are designed to be ingested directly, quickly, and with a small footprint.
glTF 2.0 makes several important changes.
The previous version of glTF was based on a defined GLSL material, which limited how it could be used, although it did align with WebGL at the time (and that spurred some early adoption). The new version switches to Physically Based Rendering (PBR) workflows to define its materials, which has a few advantages.
First, PBR can represent a wide range of materials with just a handful of parameters. Rather than dictating a specific shader, the data structure can just... structure the data. The industry has settled on two main workflows, metallic-roughness and specular-gloss, and glTF 2.0 supports them both. (Metallic-roughness is the core workflow, but specular-gloss is provided as an extension, and they can be used together in the same scene. Also, during the briefing, I noticed that transparency was not explicitly mentioned in the slide deck, but the Khronos Group confirmed that it is stored as the alpha channel of the base color, and thus supported.) Because the format is now based on existing workflows, the implementation can be programmed in OpenGL, Vulkan, DirectX, Metal, or even something like a software renderer. In fact, Microsoft was a specification editor on glTF 2.0, and they have publicly announced using the format in their upcoming products.
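To make the “the data structure can just structure the data” point concrete, here is a sketch of a minimal metallic-roughness material laid out the way glTF 2.0’s JSON stores it. The property names follow the published spec; the material name and values are invented for the example:

```python
import json

# A minimal glTF 2.0 metallic-roughness material. The format stores
# *parameters*, not shader code, so any renderer (OpenGL, Vulkan,
# DirectX, Metal, or a software rasterizer) can consume it.
# Transparency rides along as the alpha channel of baseColorFactor,
# interpreted according to alphaMode.
material = {
    "name": "scuffed_gold",
    "pbrMetallicRoughness": {
        "baseColorFactor": [1.0, 0.77, 0.34, 1.0],  # RGBA; A = opacity
        "metallicFactor": 1.0,                      # fully metallic
        "roughnessFactor": 0.35,                    # light surface scuffing
    },
    "alphaMode": "OPAQUE",
}

print(json.dumps({"materials": [material]}, indent=2))
```

A handful of scalar and color parameters like these is all an engine needs to light the surface plausibly in any environment, which is exactly why the spec no longer has to dictate a shader.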
The original GLSL material, from glTF 1.0, is available as an extension (for backward compatibility).
A second advantage of PBR is that it is lighting-independent. When you define a PBR material for an object, it can be placed in any environment and it will behave as expected. Notable, albeit extreme, examples of where this would have been useful are the outdoor scenes of Doom 3 and the indoor scenes of Battlefield 2. It also simplifies asset creation. Some applications, like Substance Painter and Quixel, let artists stencil materials onto their geometry, like gold, rusted iron, and scuffed plastic, and automatically generate the appropriate textures. PBR also aligns well with deferred rendering (see below), which performs lighting as a post-process step and thus skips pixels (fragments) that are overwritten.
PBR Deferred Buffers in Unreal Engine 4 Sun Temple.
Lighting is applied to these completed buffers, not every fragment.
glTF 2.0 also improves support for complex animations by adding morph targets. Most 3D animations, beyond just moving, rotating, and scaling whole objects, are based on skeletal animations. This method works by binding vertexes to bones, and moving, rotating, and scaling a hierarchy of joints. This works well for humans, animals, hinges, and other collections of joints and sockets, and it was already supported in glTF 1.0. Morph targets, on the other hand, allow the artist to directly control individual vertices between defined states. This is often demonstrated with a facial animation, interpolating between smiles and frowns, but, in an actual game, this is often approximated with skeletal animations (for performance reasons). Regardless, glTF 2.0 now supports morph targets, too, letting the artists make the choice that best suits their content.
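The vertex math behind morph targets is simple enough to sketch. Assuming the common convention in which a target stores per-vertex offsets from the base mesh and an animation weight blends them in (a single-target illustration, not engine code):

```python
# Morph targets store per-vertex offsets from a base mesh; an animation
# weight blends them in. This sketch interpolates toward one target.
def apply_morph(base, target_offsets, weight):
    """Return blended vertex positions: base + weight * offset."""
    return [
        tuple(b + weight * o for b, o in zip(bv, ov))
        for bv, ov in zip(base, target_offsets)
    ]

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # neutral expression vertices
smile = [(0.0, 0.2, 0.0), (0.0, 0.3, 0.0)]  # offsets toward a "smile" pose
halfway = apply_morph(base, smile, 0.5)     # weight 0.5 = half a smile
print(halfway)
```

With multiple targets (smile, frown, blink, etc.), each contributes `weight * offset` and the sums are added to the base, which is why artists can mix expressions freely.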
Speaking of performance, the Khronos Group is also promoting “enhanced performance” as a benefit of glTF 2.0. I asked whether they had anything to elaborate on, and they responded with a little story. While glTF 1.0 validators were being created, one of the engineers compiled a list of design choices that would lead to minor performance issues. The fixes for these were originally supposed to be embodied in a glTF 1.1 specification, but PBR workflows and Microsoft’s request to abstract the format away from GLSL led to glTF 2.0, which is where the performance optimizations finally ended up. Basically, there weren’t just one or two changes that made a big impact; it was the result of many tiny changes that added up.
Also, the binary version of glTF is now a core feature in glTF 2.0.
The slide looks at the potential future of glTF, after 2.0.
Looking forward, the Khronos Group has a few items on their glTF roadmap. These did not make glTF 2.0, but they are current topics for future versions. One potential addition is mesh compression, via the Google Draco team, to further decrease file size of 3D geometry. Another roadmap entry is progressive geometry streaming, via Fraunhofer SRC, which should speed up runtime performance.
Yet another roadmap entry is “Unified Compression Texture Format for Transmission”, specifically Basis by Binomial, for texture compression that remains as small as possible on the GPU. Graphics processors can only natively operate on a handful of formats, like DXT and ASTC, so textures need to be converted when they are loaded by an engine. Often, when a texture is loaded at runtime (rather than imported by the editor) it will be decompressed and left in that state on the GPU. Some engines, like Unity, have a runtime compress method that converts textures to DXT, but the developer needs to explicitly call it, and the documentation says it’s lower quality than the algorithm used by the editor (although I haven’t tested this). Suffice it to say, having a format that can circumvent all of that would be nice.
Again, if you’re interested in adding glTF 2.0 to your content pipeline, then get started. It’s ready. Microsoft is doing it, too.
ARM Refreshes All the Things
This past April, ARM invited us to visit Cambridge, England so they could discuss their plans for the next year. Quite a bit has changed for the company since our last ARM Tech Day in 2016. They were acquired by SoftBank, but continue to operate essentially as their own company. They now have access to more funds, are less risk-averse, and have a greater ability to expand in the ever-growing mobile and IoT marketplaces.
The ARM of today is certainly quite different from the company we knew 10 years ago, when we saw their technology used in the first iPhone. ARM back then had good technology, but a relatively small headcount. They kept pace with the industry, but were not nearly as aggressive as other chip companies in some areas. Over the past 10 years they have grown not only in numbers, but in the technologies they have constantly expanded upon. The company became more PR savvy and communicated more effectively with the press and, ultimately, their primary users. Where once ARM would announce new products and not expect to see shipping silicon for upwards of three years, we now see the company being much more aggressive with its designs and getting them out to partners, so that production happens in months rather than years.
Several days of meetings and presentations left us a bit overwhelmed by what ARM is bringing to market towards the end of 2017 and most likely beginning of 2018. On the surface it appears that ARM has only done a refresh of the CPU and GPU products, but once we start looking at these products in the greater scheme and how they interact with DynamIQ we see that ARM has changed the mobile computing landscape dramatically. This new computing concept allows greater performance, flexibility, and efficiency in designs. Partners will have far more control over these licensed products to create more value and differentiation as compared to years past.
We have previously covered DynamIQ at PCPer this past March. ARM wanted to seed that concept before they jumped into more discussions on their latest CPUs and GPUs. Previous Cortex products cannot be used with DynamIQ. To leverage that technology we must have new CPU designs. In this article we are covering the Cortex-A55 and Cortex-A75. These two new CPUs on the surface look more like a refresh, but when we dig in we see that some massive changes have been wrought throughout. ARM has taken the concepts of the previous A53 and A73 and expanded upon them fairly dramatically, not only to work with DynamIQ but also by removing significant bottlenecks that have impeded theoretical performance.
It Started with an OpenCL 2.2 Press Release
Update (May 18 @ 4pm EDT): A few commenters across the internet believe that the statements from The Khronos Group were inaccurately worded, so I emailed them yet again. The OpenCL working group has released yet another statement:
OpenCL is announcing that their strategic direction is to support CL style computing on an extended version of the Vulkan API. The Vulkan group is agreeing to advise on the extensions.
In other words, this article was and is accurate. The Khronos Group are converging OpenCL and Vulkan into a single API: Vulkan. There was no misinterpretation.
Original post below
Earlier today, we published a news post about the finalized specifications for OpenCL 2.2 and SPIR-V 1.2. This was announced through a press release that also contained an odd little statement at the end of the third paragraph.
We are also working to converge with, and leverage, the Khronos Vulkan API — merging advanced graphics and compute into a single API.
This statement seems to suggest that OpenCL and Vulkan are expecting to merge into a single API for compute and graphics at some point in the future. This seemed like a huge announcement to bury that deep into the press blast, so I emailed The Khronos Group for confirmation (and any further statements). As it turns out, this interpretation is correct, and they provided a more explicit statement:
The OpenCL working group has taken the decision to converge its roadmap with Vulkan, and use Vulkan as the basis for the next generation of explicit compute APIs – this also provides the opportunity for the OpenCL roadmap to merge graphics and compute.
This statement adds a new claim: The Khronos Group plans to merge OpenCL into Vulkan, specifically, at some point in the future. Making the move in this direction, from OpenCL to Vulkan, makes sense for a handful of reasons, which I will highlight in my analysis, below.
Going Vulkan to Live Long and Prosper?
The first reason for merging OpenCL into Vulkan, from my perspective, is that Apple, who originally created OpenCL, still owns the trademarks (and some other rights) to it. The Khronos Group licenses these bits of IP from Apple. Vulkan, based on AMD’s donation of the Mantle API, should be easier to manage from the legal side of things.
The second reason for going in that direction is the actual structure of the APIs. When Mantle was announced, it looked a lot like an API that wrapped OpenCL with a graphics-specific layer. Also, Vulkan isn’t specifically limited to GPUs in its implementation.
Aside: When you create a device queue, you can query the driver to see what type of device it identifies as by reading its VkPhysicalDeviceType. Currently, as of Vulkan 1.0.49, the options are Other, Integrated GPU, Discrete GPU, Virtual GPU, and CPU. While this is just a clue, to make it easier to select a device for a given task, and isn’t useful to determine what the device is capable of, it should illustrate that other devices, like FPGAs, could support some subset of the API. It’s just up to the developer to check for features before they’re used, and target it at the devices they expect.
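To illustrate how that “clue” might be used in practice, here is a toy device-selection sketch. The enum values mirror Vulkan’s `VkPhysicalDeviceType`, but the device list is mocked and the ranking heuristic is invented for the example; a real program would enumerate devices via `vkEnumeratePhysicalDevices` and still check features per-device:

```python
from enum import IntEnum

# Values mirror Vulkan's VkPhysicalDeviceType (as of Vulkan 1.0).
class VkPhysicalDeviceType(IntEnum):
    OTHER = 0
    INTEGRATED_GPU = 1
    DISCRETE_GPU = 2
    VIRTUAL_GPU = 3
    CPU = 4

# A typical (but purely illustrative) preference order for rendering work.
PREFERENCE = [VkPhysicalDeviceType.DISCRETE_GPU,
              VkPhysicalDeviceType.INTEGRATED_GPU,
              VkPhysicalDeviceType.VIRTUAL_GPU,
              VkPhysicalDeviceType.CPU,
              VkPhysicalDeviceType.OTHER]

def pick_device(devices):
    """Pick the device whose reported type ranks highest.
    The type is only a hint; feature support must be queried separately."""
    return min(devices, key=lambda d: PREFERENCE.index(d["deviceType"]))

# Mocked device list standing in for vkEnumeratePhysicalDevices output.
devices = [
    {"name": "SoftwareRasterizer", "deviceType": VkPhysicalDeviceType.CPU},
    {"name": "iGPU", "deviceType": VkPhysicalDeviceType.INTEGRATED_GPU},
    {"name": "dGPU", "deviceType": VkPhysicalDeviceType.DISCRETE_GPU},
]
print(pick_device(devices)["name"])
```

Note the `CPU` and `OTHER` entries: the enum already leaves room for non-GPU implementations, which is the hook that makes OpenCL-style device diversity plausible under Vulkan.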
If you were to go in the other direction, you would need to wedge graphics tasks into OpenCL. You would be creating Vulkan all over again. From my perspective, pushing OpenCL into Vulkan seems like the path of least resistance.
The third reason (that I can think of) is probably marketing. DirectX 12 isn’t attempting to seduce FPGA developers. Telling a game studio to program their engine on a new, souped-up OpenCL might make them break out in a cold sweat, even if both parties know that it’s an evolution of Vulkan with cross-pollination from OpenCL. OpenCL developers, on the other hand, are probably using the API because they need it, and are less likely to be shaken off.
What OpenCL Could Give Vulkan (and Vice Versa)
From the very outset, OpenCL and Vulkan occupied similar spaces, but there are some things that OpenCL does “better”. The most obvious, and previously mentioned, element is that OpenCL supports a wide range of compute devices, such as FPGAs. That’s not the limit of what Vulkan can borrow, although it could make for an interesting landscape if FPGAs become commonplace in the coming years and decades.
Personally, I wonder how SYCL could affect game engine development. This standard attempts to guide GPU- (and other device-) accelerated code into a single-source, C++ model. For over a decade, Tim Sweeney of Epic Games has talked about writing engines like he did back in the software-rendering era, but without giving up the ridiculous performance (and efficiency) provided by GPUs.
The reason for bringing up this anecdote is because, if OpenCL is moving into Vulkan, and SYCL is still being developed, then it seems likely that SYCL will eventually port into Vulkan. If this is the case, then future game engines can gain benefits that I was striving toward without giving up access to fixed-function features, like hardware rasterization. If Vulkan comes to web browsers some day, it would literally prune off every advantage I was hoping to capture, and it would do so with a better implementation.
More importantly, SYCL is something that Microsoft cannot provide with today’s DirectX.
Admittedly, it’s hard to think of something that OpenCL can acquire from Vulkan, besides just a lot more interest from potential developers. Vulkan was already somewhat of a subset of OpenCL that had graphics tasks (cleanly) integrated over top of it. On the other hand, OpenCL has been struggling to acquire mainstream support, so that could, in fact, be Vulkan’s greatest gift.
The Khronos Group has not provided a timeline for this change. It’s just a roadmap declaration.
YouTube Tries Everything
Back in March, Google-owned YouTube announced a new live TV streaming service called YouTube TV to compete with the likes of Sling, DirecTV Now, PlayStation Vue, and upcoming offerings from Hulu, Amazon, and others. All of these services aim to deliver curated bundles of channels to cord cutters, running over the top of customers’ internet-only connections as replacements for (or additions to) cable television subscriptions. YouTube TV is the latest entrant to this market, with the service currently available in only seven test markets, but it is off to a good start with a decent selection of content and features, including both broadcast and cable channels, on-demand media, and live and DVR viewing options. A responsive user interface and a generous number of family sharing options (six account logins and three simultaneous streams) must be balanced against the requirement to watch ads (even on some DVR’ed shows) and the $35 per month cost.
YouTube TV launched in five cities, with more on the way. Fortunately, I live close enough to Chicago to be in-market and could test out Google’s streaming TV service. While not a full review, the following are my first impressions of YouTube TV.
Setup / Sign Up
YouTube TV is available with a one-month free trial, after which you will be charged $35 a month. Sign-up is a simple affair and can be started by going to tv.youtube.com or clicking the YouTube TV link from the “hamburger” menu on YouTube. If you are on a mobile device, YouTube TV uses a separate app from the default YouTube app, weighing in at 9.11 MB for the Android version. After verifying your location, the following screens show you the channels available in your market and give you the option of adding Showtime ($11) and/or Fox Soccer ($15) for additional monthly fees. After that, you are prompted for a payment method, which can be the one already linked to your Google account and used for app purchases and other subscriptions. As for the free trial, I was not charged anything, and there was no hold on my account for the $35. I like that Google makes it easy to see exactly how many days you have left on your trial and when you will be charged if you do not cancel. Further, the cancel link is not buried away; it is intuitively found by clicking your account photo in the upper right > Personal > Membership. Google is doing things right here. After signup, a tour is offered to show you the various features, but you can skip this if you want to get right to it.
In my specific market, I have the following channels. When I first started testing, some of the channels were not available and were just added today. I hope to see more networks added, and if Google can manage that, YouTube TV and its $35/month price are going to shape up to be a great deal.
- ABC 7, CBS 2, Fox 32, NBC 5, ESPN, CSN, CSN Plus, FS1, CW, USA, FX, Free Form, NBC SN, ESPN 2, FS2, Disney, E!, Bravo, Oxygen, BTN, SEC ESPN Network, ESPN News, CBS Sports, FXX, Syfy, Disney Junior, Disney XD, MSNBC, Fox News, CNBC, Fox Business, National Geographic, FXM, Sprout, Universal, Nat Geo Wild, Chiller, NBC Golf, YouTube Red Originals
- Plus: AMC, BBC America, IFC, Sundance TV, We TV, Telemundo, and NBC Universal (just added).
- Optional Add-Ons: Showtime and Fox Soccer.
I tested YouTube TV on my Windows PCs and an Android phone. You can also watch YouTube TV on iOS devices, and on your TV using Android TV and Chromecast devices (at the time of writing, Google will send you a free Chromecast after your first month; see here for a full list of supported devices). There are currently no Roku or Apple TV apps.
Each YouTube TV account can share out the subscription to 6 total logins where each household member gets their own login and DVR library. Up to three people can be streaming TV at the same time. While out and about, I noticed that YouTube TV required me to turn on location services in order to use the app. Looking further into it, the YouTube TV FAQ states that you will need to verify your location in order to stream live TV and will only be able to stream live TV if you are physically in the markets where YouTube TV has launched. You can watch your DVR shows anywhere in the US. However, if you are traveling internationally you will not be able to use YouTube TV at all (I’m not sure if VPNs will get around this or if YouTube TV blocks this like Netflix does). Users will need to login from their home market at least once every 3 months to keep their account active and able to stream content (every month for MLB content).
YouTube TV verifying location in Chrome (left) and on the android app (right).
On one hand, I can understand this was probably necessary in order for YouTube TV to negotiate a licensing deal, and their terms do seem pretty fair. I will have to do more testing on this as I wasn’t able to stream from the DVR without turning on location services on my Android – I can chalk this up to growing pains though and it may already be fixed.
Features & First Impressions
YouTube TV has an interface that is perhaps best described as a slimmed-down YouTube that takes cues from Netflix (things like the horizontal scrolling of shows in categories). The main interface is broken into three sections: Library, Home, and Live, with Home being the first screen you see when logging in. You navigate by scrolling and clicking, and by pulling menus up from the bottom while streaming, much like YouTube’s own player.
Despite its surprise launch a few weeks ago, the Corsair ONE feels like it was inevitable. Corsair's steady expansion from RAM modules to power supplies, cases, SSDs, CPU coolers, co-branded video cards, and most recently barebones systems pointed to an eventual complete Corsair system. However, what we did not expect was the form it would take.
Did Corsair hit it out of the park on their first foray into prebuilt systems, or do they still have some work to do?
It's a bit difficult to get an idea of the scale of the Corsair ONE. Even the old joke of "Is it bigger than a breadbox?" doesn't quite work here, given the system's impressively breadbox-like size and shape.
Essentially, when you don't take the fins on the top and the bottom into account, the Corsair ONE is as tall as a full-size graphics card — such as the GeForce GTX 1080 — and that's no coincidence.
| Corsair ONE Pro (configuration as reviewed) | |
|---|---|
| Processor | Intel Core i7-7700K (Kaby Lake) |
| Graphics | NVIDIA GeForce GTX 1080 (water-cooled) |
| Motherboard | Custom MSI Z270 Mini-ITX |
| Storage | 960 GB Corsair Force LE |
| Power Supply | Corsair SF400 80+ Gold SFX |
| Wireless | Intel 8265 802.11ac + BT 4.2 (dual band, 2x2) |
| Connections | 1x USB 3.1 Gen 2 Type-C, 3x USB 3.1 Gen 1 Type-A, 2x USB 2.0 Type-A, 1x PS/2, 1x HDMI 2.0, 2x DisplayPort, 1x S/PDIF |
| Dimensions | 7.87 x 6.93 x 14.96 inches (20 x 17.6 x 38 cm); 15.87 lbs (7.2 kg) |
| OS | Windows 10 Home |
| Price | $2299.99 - Corsair.com |
Taking a look at the full specifications, we see all the components for a capable gaming PC. In addition to the aforementioned GTX 1080, you'll find Intel's flagship Core i7-7700K, a Mini-ITX Z270 motherboard produced by MSI, a 960GB SSD, and 16GB of DDR4 memory.
Build and Upgrade Components
Spring is in the air! And while many traditionally use this season for cleaning out their homes, what could be the point of reclaiming all of that space besides filling it up again with new PC hardware and accessories? If you answered, "there is no point, other than what you just said," then you're absolutely right. Spring is a great time to procrastinate on housework and build up a sweet new gaming PC (what else would you really want to use that tax return for?), so our staff has listed their favorite PC hardware right now, from build components to accessories, to make your life easier. (Let's make this season far more exciting than taking out the trash and filing taxes!)
While our venerable Hardware Leaderboard has been serving the PC community for many years, it's still worth listing some of our favorite PC hardware for builds at different price points here.
Processors - the heart of the system.
No doubt about it, AMD's Ryzen CPU launch has been the biggest news of the year so far for PC enthusiasts, and while the 6- and 4-core variants are right around the corner, the 8-core R7 processors are still a great choice if you have the budget for a $300+ CPU. To that end, we really like the value proposition of the Ryzen R7 1700, which offers much of the performance of its more expensive siblings at a really compelling price, and can potentially be overclocked to match the higher-clocked members of the Ryzen lineup, though moving up to either the R7 1700X or R7 1800X will net you higher clocks out of the box (without raising voltage and power draw yourself).
Really, any of these processors are going to provide a great overall PC experience with incredible multi-threaded performance for your dollar in many applications, and they can of course handle any game you throw at them - with optimizations already appearing to make them even better for gaming.
Don't forget about Intel, which has some really compelling options starting even at the very low end (Pentium G4560, when you can find one in stock near its ~$60 MSRP), thanks to their newest Kaby Lake CPUs. The high-end option from Intel's 7th-gen Core lineup is the Core i7-7700K (currently $345 on Amazon), which provides very fast gaming performance and plenty of power if you don't need as many cores as the R7 1700 (or Intel's high-end LGA-2011 parts). Core i5 processors provide a much more cost-effective way to power a gaming system, and an i5-7500 is nearly $150 less than the Core i7 while providing excellent performance if you don't need an unlocked multiplier or those additional threads.
New "Fabric" for ARM
There are cars that get you from point A to point B, and then there are luxurious grand touring cars which will get you there with power, comfort, and style - for a price. Based on the cost alone ($269.99 MSRP!) it seems like a safe bet to say that the REALFORCE RGB keyboard will be a similarly premium experience. Let’s take a look!
There is as much personal taste at issue when considering a keyboard (or dream car!) as almost any other factor, and regardless of build quality or performance a keyboard is probably not going to work out for you if it doesn’t feel right. Mechanical keyboards are obviously quite popular, and more companies than ever offer their own models, many using Cherry MX key switches (or generic ‘equivalents’ - which vary in quality). Topre keys are different, as they are a capacitive key with a rubber dome and metal spring, and have a very smooth, fast feel to them - not clicky at all.
“Topre capacitive key switches are a patented hybrid between a mechanical spring based switch, a rubber dome switch, and a capacitive sensor which, combined, provide tactility, comfort, and excellent durability. The unique electrostatic design of Topre switches requires no physical mechanical coupling and therefore key switch bounce/chatter is eliminated.”
Introduction and Specifications
The G533 Wireless headset is the latest offering from Logitech, combining the company’s premium Pro-G drivers, 15-hour battery life, and a new, more functional style. Obvious comparisons can be made to last year’s G933 Artemis Spectrum, since both are wireless headsets using Logitech’s Pro-G drivers; but this new model comes in at a lower price while offering much of the same functionality (while dropping the lighting effects). So does the new headset sound any different? What about the construction? Read on to find out!
The G533 exists alongside the G933 Artemis Spectrum in Logitech’s current lineup, taking most of the features of that high-end wireless model and paring the package down into a lean, mean option for gamers who don’t need (or want) RGB lighting effects. The 40 mm Pro-G drivers are still here, and the new G533 offers a longer battery life (15 hours) than the G933 could manage even with its lighting effects disabled (12 hours). 7.1-channel surround effects and full EQ and soundfield customization remain, though only DTS effects are present (no Dolby this time).
What do these changes translate to? First of all, the G533 headset is being introduced with a $149 MSRP, which is $50 lower than the G933 Artemis Spectrum at $199. I think many of our readers would trade RGB effects for a lower cost, making this a welcome change (especially considering lighting effects don’t really mean much when you are wearing the headphones). Another difference is weight: at 12.5 oz, the G533 is 0.5 oz lighter than the 13 oz G933.
Is Mechanical Mandatory?
The Logitech G213 Prodigy gaming keyboard offers the company's unique Mech-Dome keys and customizable RGB lighting effects, and it faces some stiff competition in a market overflowing with gaming keyboards for every budget (including mechanical options). But it really comes down to performance, feel, and usability; and I was interested in giving these new Mech-Dome keys a try.
“The G213 Prodigy gaming keyboard features Logitech Mech-Dome keys that are specially tuned to deliver a superior tactile response and performance profile similar to a mechanical keyboard. Mech-Dome keys are full height, deliver a full 4mm travel distance, 50g actuation force, and a quiet sound operation.
The G213 Prodigy gaming keyboard was designed for gaming, featuring ultra-quick, responsive feedback that is up to 4x faster than the 8ms report rate of standard keyboards and an anti-ghosting matrix that keeps you in control when you press multiple gaming keys simultaneously.”
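To put the quoted "up to 4x faster than the 8ms report rate" claim in more familiar terms, a report interval converts directly to a polling frequency (this quick sketch is my own arithmetic, not anything from Logitech's spec sheet beyond the quoted figures):

```python
# A report interval in milliseconds is simply 1000 / interval
# reports per second.

def reports_per_second(interval_ms):
    return 1000 / interval_ms

standard = reports_per_second(8)    # typical keyboard: 125 Hz
g213 = reports_per_second(8 / 4)    # "up to 4x faster" implies a 2 ms interval
print(standard, g213)               # 125.0 500.0
```

In other words, the claim amounts to polling at up to 500 Hz versus the 125 Hz typical of standard keyboards.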
I will say that at $69.99 the G213 occupies a somewhat odd space in the current gaming keyboard market. It would be well positioned in a retail setting, where at a local Best Buy it would look compelling next to $100+ mechanical boards; but savvy internet shoppers see the growing number of <$70 mechanical keyboards available and might question the need for a ‘quasi-mechanical’ option like this. I don’t review products from a marketing perspective, however, and I simply set out to determine if the G213 is a well-executed product on the hardware front.
Bluetooth has come a long way since the technology was introduced in 1998. The addition of the Advanced Audio Distribution Profile (A2DP) in 2003 brought support for high-quality audio streaming, but Bluetooth still didn’t offer anywhere near the quality of a wired connection. This unfortunate fact is often overlooked in favor of the technology's convenience factor, but what if we could have the best of both worlds? This is where Qualcomm's aptX comes in, and it is a departure from the methods in place since the introduction of Bluetooth audio.
What is aptX audio? It's actually a codec that compresses audio in a very different manner than that of the standard Bluetooth codec, and the result is as close to uncompressed audio as the bandwidth-constrained Bluetooth technology can possibly allow. Qualcomm describes aptX audio as "a bit-rate efficiency technology that ensures you receive the highest possible sound quality from your Bluetooth audio device," and there is actual science to back up this claim. After doing quite a bit of reading on the subject as I prepared for this review, I found that the technology behind aptX audio, and its history, is very interesting.
A Brief History of aptX Audio
The aptX codec has actually been around since long before Bluetooth, with its invention in the 1980s and first commercial applications beginning in the 1990s. The version now found in compatible Bluetooth devices is 4th-generation aptX, and in the very beginning it was actually a hardware product (the APTX100ED chip). The technology has had a continued presence in pro audio for three decades now, with a wider reach than I had ever imagined when I started researching the topic. For example, aptX is used for ISDN line connections for remote voice work (voice over, ADR, foreign language dubs, etc.) in movie production, and even for mix approvals on film soundtracks. In fact, aptX was also the compression technology behind DTS theater sound, which had its introduction in 1993 with Jurassic Park. It is in use in over 30,000 radio stations around the world, where it has long been used for digital music playback.
So, while it is clear that aptX is a respected technology with a long history in the audio industry, how exactly does this translate into improvements for someone who just wants to listen to music over a bandwidth-constrained Bluetooth connection? The nature of the codec and its differences/advantages vs. A2DP is a complex topic, but I will attempt to explain in plain language how it actually can make Bluetooth audio sound better. Having science behind the claim of better sound goes a long way in legitimizing perceptual improvements in audio quality, particularly as the high-end audio industry is full of dubious - and often ridiculous - claims. There is no snake-oil to be sold here, as we are simply talking about a different way to compress and uncompress an audio signal - which is the purpose of a codec (code, decode) to begin with.
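To make the "code, decode" idea concrete, here is a toy lossless codec. To be clear, this is not aptX (which is a subband ADPCM design), and the function names are my own; it simply illustrates the principle that a codec re-represents the signal in a form that is cheaper to transmit, then reverses the transformation on the other end:

```python
# Toy illustration of the "code, decode" roundtrip at the heart of
# any codec. This is NOT aptX; it is a minimal delta coder: store the
# first sample, then only the difference from each previous sample.
# Differences in real audio are usually small numbers, which is what
# makes later bit-packing/quantization stages effective.

def encode(samples):
    """Return the first sample followed by successive differences."""
    deltas = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        deltas.append(cur - prev)
    return deltas

def decode(deltas):
    """Rebuild the original samples by accumulating differences."""
    samples = [deltas[0]]
    for d in deltas[1:]:
        samples.append(samples[-1] + d)
    return samples

pcm = [100, 102, 105, 103, 98]   # pretend 16-bit PCM samples
packed = encode(pcm)             # [100, 2, 3, -2, -5]
assert decode(packed) == pcm     # lossless round trip
```

Real Bluetooth codecs such as SBC and aptX are lossy, trading some fidelity for a bitrate that fits the link; the engineering argument for aptX is that its transformation discards less perceptually important information than the default.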
Price and Other Official Information
Since our last Nintendo Switch post, the company had their full reveal event, which confirmed the two most critical values: it will launch on March 3rd for $299.99 USD ($399.99 CDN). This is basically what the rumors have pointed to for a little while, and it makes sense. That was last week, but this week gave rise to a lot more information, mostly from either an interview with Nintendo of America’s President and COO, Reggie Fils-Aimé, or from footage that was recorded and analyzed by third parties, like Digital Foundry.
From the GameSpot interview, above, Reggie was asked about the launch bundle and why it didn’t include any game, like 1 - 2 - Switch. His response was blunt and honest: they wanted to hit $299 USD, and the game fell below the cut-off point. While I can respect that, I cannot see enough people bothering with the title at full price for it to have been green-lit in the first place. If Nintendo wasn’t willing to simply eat the cost of that game’s development to shape public (and developer) perceptions, then at least its price wasn’t folded into the system’s, though they may end up taking the loss anyway if the game doesn’t sell.
Speaking of price, we are also seeing what the accessories sell at.
From the controller side of things, the more conventional option, the Nintendo Switch Pro Controller, has an MSRP of $69.99 USD. Compared to its competitors, the DualShock 4 for the PlayStation 4 and the Xbox Wireless Controller for the Xbox One, both at $49, this is notably higher. While it has a bunch of interesting features, like “HD rumble”, motion sensing, and some support for amiibo, its competitors offer similar functionality for $20 less.
The Switch-specific controllers, called “Joy-Con”, are $10 more expensive than the Pro Controller, at $79.99 USD for the pair, or $49.99 USD for either the left or right half. (Some multiplayer titles only require a half, so Nintendo doesn’t force you to buy the whole pair at the expense of extra SKUs, which is also probably helpful if you lose one.) This seems high, and could be a significant problem going forward.
As for its availability? Nintendo has disclosed that they are pushing 2 million units into the channel, so they are not expecting shortages like the NES Classic had. They do state that demand is up-in-the-air a bit, though.
It's that time of year again where the holidays are upon us, it's freezing outside, and balancing work, school, and family events makes us all a bit crazy. If you are still procrastinating on your holiday shopping or are just not sure what to get the techie that seems to have everything already, PC Perspective has you covered! And if you dare not venture outside into the winter wasteland, there is still time to order online and have it arrive in time!
Following the same format as previous years, the first set of pages are picks that the staff has put together collectively and are recommendations for things like PC hardware components like CPUs, graphics cards, and coolers, mobile hardware (phones, tablets, etc.), and finally accessories and audio. Beyond that, the staff members are given a section to suggest picks of their own that may not fit into one of the main categories but are still thoughtful and useful gift ideas!
Good luck out there, and thank you for another wonderful year of your valued readership! May you have safe travels and memorable holidays!
Image courtesy maf04 via Flickr creative commons.
Intel Core i7-6700K Quad-Core Unlocked Processor - $344, Amazon
It's a bit of an interesting time for CPUs, with Intel's Kaby Lake (e.g. 7700K) not out yet and AMD's Zen-based Ryzen processors (e.g. the 8-core Summit Ridge) slated for release early next year. Last year the i7-6700K was our top pick, and due to the timing of these upcoming releases it remains the pick this year, though it may be wise to consider other gift ideas unless the recipient really wants that gaming PC ASAP. On the plus side, it is a bit cheaper than last year! You can read our review of the Core i7-6700K here.
AMD Athlon X4 880K Unlocked Quad Core - $92, Amazon
On the AMD side of things, the Athlon X4 880K is a great processor to base a budget gaming build around. The pricing works out a bit cheaper than the old 860K as well as offering slightly faster clockspeeds and a better stock cooler.
Continue reading our holiday gift guide for our picks for graphics cards, storage, and more!
The NVIDIA GTX 1080 is the current consumer-grade performance king, and there are a number of brands to choose from. Whichever you go with, the Pascal-based graphics card is ready for 1440p and even 4K gaming, along with VR (virtual reality). The EVGA FTW Hybrid is a beast of a card that can easily be overclocked while keeping temperatures in check.
If you are looking for something a bit cheaper, AMD's RX 480 is a great midrange graphics card that can easily handle 1080p with the details cranked up. Sapphire has a good factory-overclocked card in the RX 480 Nitro+, which can be found for $250. For reference, check out our reviews of the RX 480 (we also have a video of the Sapphire card specifically) and its competitor, the GTX 1060.
Samsung 960 Evo 1TB NVMe M.2 SSD - $480, Amazon
Samsung's recently released 960 Evo is not the fastest SSD available, but it's no slouch either. Using Samsung's TLC V-NAND flash, its Polaris controller, and 1GB of DDR3 cache, the drive packs 1TB of speedy storage into the M.2 form factor. The NVMe SSD is rated at 3,200 MB/s sequential reads, 1,900 MB/s sequential writes, 380,000 4k random reads and 360,000 4k random writes (IOPS ratings at QD32). It is rated at 1.5 million hours MTBF and while it does not have the lifetime or write speeds of the pro version (960 Pro), it is quite a bit cheaper! The gamer in your life will appreciate the super fast loading times too!
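For a sense of what the random IOPS ratings above mean in bandwidth terms, each 4k random I/O moves 4 KiB, so the ratings convert to throughput as follows (my own back-of-envelope arithmetic from the rated figures, not a Samsung specification):

```python
# Converting rated 4k random IOPS into effective throughput,
# assuming each I/O transfers 4 KiB (4096 bytes). Results are in
# decimal MB/s, matching how sequential ratings are quoted.

def iops_to_mbps(iops, io_bytes=4096):
    return iops * io_bytes / 1_000_000

read_mbps = iops_to_mbps(380_000)    # ~1556 MB/s at QD32
write_mbps = iops_to_mbps(360_000)   # ~1475 MB/s at QD32
```

So even in the worst-case random workload, the drive's rated throughput is roughly half its 3,200 MB/s sequential read figure, which is a large part of why NVMe drives feel so much faster than SATA for mixed workloads.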
If you are looking for something a bit more down to earth, SATA SSDs continue to get cheaper and more capacious and there are even some good budget M.2 options these days!
MyDigitalSSD BPX 480GB NVMe M.2 SSD - $200, Amazon
MyDigitalSSD's BPX solid state drive pairs a Phison PS5007-E7 controller with up to 480GB of 2D MLC NAND. The drive is rated at a respectable 2,600 MB/s sequential reads, 1,300 MB/s sequential writes, and approximately 208,728 4k random read IOPS and 202,713 4k random write IOPS. Despite being from a less well known company, the budget drive puts up very nice numbers for the price and comes with a 5 year warranty, 2 million hours MTBF rating, and 1,400 TB total bytes written rating on the flash. Pricing is much more budget friendly at $200 for the 480GB model, $115 for the 240GB, and $70 for the 120GB drive.
SATA SSD Recommendations
SATA SSDs are still great options for a system build and/or HDD upgrade, and Allyn has a few picks later on in this guide.
The following pages are individual selections / gift ideas from each staff member!
Das Keyboard describes their products as "the ultimate experience for badasses", and the Austin, TX-based company has delivered premium designs since their initial (completely blank) keyboard in 2005. The Prime 13 is a traditional 104-key design (with labeled keys), and features Cherry MX Brown switches and simple white LED backlighting. So is it a truly "badass" product? Read on to find out!
"Das Keyboard Prime 13 is a minimalist mechanical keyboard designed to take productivity to the next level. Free of fancy features, the Prime 13 delivers an awesome typing experience by focusing on premium material and simple design. Featuring an anodized aluminum top panel, Cherry MX switches with white LEDs, USB pass-through and an extra-long braided cable, the Prime 13 is the ideal mechanical keyboard for overachievers who want to get the job done."
I don't need to tell prospective mechanical keyboard buyers that the market is very crowded, and it seems to grow every month. Just about every PC accessory maker offers at least one option, and many have tried to distinguish themselves with RGB lighting effects and software with game-specific profiles and the like. So is there still room for a simple, non-RGB keyboard with no special software involved? I think so, but it will need to be quite a premium design to justify a $149 price tag, and that's what the Prime 13 will run at retail. First impressions are very good, but I'll try to cover the experience as well as I can in text and photos in this review.
A close look at the MX Brown switches within the Prime 13
Maybe Good that Valve Called their API OpenVR?
Update, December 6th, 2016 @ 2:46pm EST: Khronos has updated the images on their website, and those changes are now implemented on our post. The flow-chart image changed dramatically, but the members image has also added LunarG.
Original Post Below
The Khronos Group has just announced their VR initiative, which is in the early, call for participation stage. The goal is to produce an API that can be targeted by drivers from each vendor, so that applications can write once and target all compatible devices. The current list of participants are: Epic Games, Google, Oculus VR, Razer, Valve, AMD, ARM, Intel, NVIDIA, VeriSilicon, Sensics, and Tobii. The point of this announcement is to get even more companies involved, before it matures.
Image Credit: The Khronos Group
Valve, in particular, has donated their OpenVR API to Khronos Group. I assume that this will provide the starting point for the initiative, similar to how AMD donated Mantle to found Vulkan, which overcomes the decision paralysis of a blank canvas. Also, especially for VR, I doubt these decisions would significantly affect individual implementations. If it does, though, now would be the time for them to propose edits.
In terms of time-frame, it’s early enough that the project scope hasn’t even been defined, so schedules can vary. They do claim that, based on past experiences, about 18 months is “often typical”.
That’s about it for the announcement; on to my analysis.
Image Credit: The Khronos Group, modified
First, it’s good that The Khronos Group are the ones taking this on. Not only do they have the weight to influence the industry, especially with most of these companies having already collaborated on other projects, like OpenGL, OpenCL, and Vulkan, but their standards tend to embrace extensions. This allows Oculus, Valve, and others to add special functionality that can be picked up by applications, but still be compatible at a base level with the rest of the ecosystem. To be clear, the announcement said nothing about extensions, but it would definitely make sense for VR, which can vary with interface methods, eye-tracking, player tracking, and so forth.
If extensions end up being a thing, this controlled competition allows the standard as a whole to evolve. If an extension ends up being popular, that guides development of multi-vendor extensions, which eventually may be absorbed into the core specification. On the other hand, The Khronos Group might decide that, for VR specifically, the core functionality is small and stable enough that extensions would be unnecessary. Who knows at this point.
Second, The Khronos Group stated that Razer joined for this initiative specifically. A few days ago, we posted news and assumed that they wanted to have input into an existing initiative, like Vulkan. While they still might, their main intentions are to contribute to this VR platform.
Third, there are a few interesting omissions from the list of companies.
Microsoft, who recently announced a VR ecosystem for Windows 10 (along with the possibly-applicable HoloLens of course), and is a member of the Khronos Group, isn’t part of the initiative, at least not yet. This makes sense from a historical standpoint, as Microsoft tends to assert control over APIs from the ground up. They are, or I should say were, fairly reluctant to collaborate, unless absolutely necessary. This has changed recently, starting with their participation with the W3C, because good God I hope web browsers conform to a standard, but also their recent membership with the Khronos Group, hiring ex-Mozilla employees, and so forth. Microsoft has been lauding how they embrace openness lately, but not in this way yet.
Speaking of Mozilla, that non-profit organization has been partnered with Google on WebVR for a few years now. While Google is a member of this announcement, it seems to be mostly based around their Daydream initiative. The lack of WebVR involvement with whatever API comes out of this initiative is a bit disappointing, but, again, it’s early days. I hope to see Mozilla and the web browser side of Google jump in and participate, especially if video game engines continue to experiment with cross-compiling to Web standards.
It's also surprising to not see Qualcomm's name on this list. The dominant mobile SoC vendor is a part of many Khronos-based groups including Vulkan, OpenCL, and others, so it's odd to have this omission here. It is early, so there isn't any reason to have concern over a split, but Qualcomm's strides into VR with development kits, platform advancements and other initiatives have picked up in recent months and I imagine it will have input on what this standard becomes.
And that’s all that I can think of at the moment. If you have any interests or concerns, be sure to drop a line in the comments. Registration is not required.
Move Over T150...
The Thrustmaster TMX was released this past summer to address the Xbox One ecosystem with an affordable, entry-level force-feedback wheel. It is essentially the Xbox version of the previously reviewed Thrustmaster T150 for the PS3/PS4. The two wheels have much in common, but there are a few significant differences as well. The TMX is also PC compatible, and that is the platform I tested it on.
A no-nonsense box design that lets the buyer know exactly what systems this product is for.
The TMX is priced at an MSRP of $199. Along with the T150 this is truly an entry level FFB wheel with all of the features that racers desire. The wheel itself is 11” wide and the base is compact, with a solid feel. Unlike the T150, the TMX is entirely decked out in multiple shades of black. The majority of the unit is a dark, slick black while the rubber grips have a matte finish. The buttons on the wheel are colored appropriately according to the Xbox controller standards (yellow, blue, green, and red). The other buttons are black with a couple of them having some white stenciling on them.
The motor in this part is not nearly as powerful as what we find in the TX and T300rs base units. Those are fully pulley-based designs with relatively strong motors, while the TMX uses a combination gear-and-pulley system. This makes for a less expensive setup than the full pulley systems of the higher-priced parts, but it still retains pretty strong FFB. Some of the more subtle effects may be lost with this setup, but it is far and away a better solution than units that rely on bungee cords and basic rumble functionality.
The back shows a basic diagram of the mixed pulley and geared subsystem for force-feedback.
The wheel features a 12-bit optical sensor to track rotation, which translates into 4096 values through 360 degrees of rotation. This is well below the 16-bit units of the TX and T300rs bases, but in my racing I did not find it to be holding me back. Yes, the more expensive units are more accurate and use Hall effect sensors rather than an optical pickup, but the TMX provides more than enough precision for the vast majority of titles out there. The pedals appear to feature the same 10-bit resolution as most other Thrustmaster pedals, or about 1024 values over several inches of travel.
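The resolution numbers above follow directly from the bit depths. A quick sketch of the arithmetic, using the review's 360-degree figure for both sensors (the function name is my own, purely for illustration):

```python
# Angular resolution implied by an n-bit rotation sensor: 2**n
# discrete positions spread across the available rotation range.

def degrees_per_step(bits, degrees=360):
    steps = 2 ** bits
    return degrees / steps

tmx = degrees_per_step(12)      # 4096 steps: ~0.088 degrees per step
hi_end = degrees_per_step(16)   # TX/T300rs class: ~0.0055 degrees per step
```

At under a tenth of a degree per step, the 12-bit pickup is already finer than most players can steer, which matches the on-track impression that it never held me back.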
Introduction and First Impressions
Aukey, a prominent seller of mobile accessories on Amazon, has an interesting product for PC enthusiasts: an RGB mechanical gaming keyboard for $59.99. The price is definitely right, but is it any good? We’ll take a look!
“The AUKEY KM-G3 mechanical keyboard takes the gaming experience to a new level. Tactile, responsive mechanical keys put you in control for an outstanding typing or gaming experience. The KM-G3 offers preloaded multi-color RGB backlit lighting effects and patterns. Ideal for FPS, CF, COD, LOL and Racing games - Just use the Function key to easily switch between gaming presets.”
The KM-G3 keyboard is a standard 104-key design, using blue switches (presumably generic, as no brand is listed), and there is RGB lighting which can be cycled between various colors and patterns, or switched off if desired. Aukey is also offering a 2-year warranty on the keyboard, which should help allay any fear about a purchase.