
Wolfenstein II is new, but you don't necessarily need new hardware to enjoy it

Subject: General Tech | March 21, 2018 - 03:27 PM |
Tagged: gaming, wolfenstein ii, the new colossus

Are you on the fence about picking up the new Wolfenstein because you aren't sure your GPU can handle it?  Overclockers Club tested the game with some older hardware as well as the current generation, including the GTX 770 and GTX 980 in addition to an RX Vega 64 and GTX 1080.  After running through the benchmarks, they found that the GTX 980 is more than capable of handling this game, so grab it if you have a GPU of that calibre.  If you are looking for the best possible experience, the Vega 64 is the way to go.


"Having additional GPUs may have proven useful for this work since we leapt from barely playable on the GTX 770 to max settings on the GTX 980. The GTX 1080 naturally surpassed the GTX 980 and the RX Vega 64 beat them all, both at stock and with the undervolt and power limit. Based on this sampling of performance data, if you could pick any GPU to play Wolfenstein II: The New Colossus on, the RX Vega 64 would be the best of those test. However, you can very comfortably go with something older and cheaper, like the GTX 980 without compromising a setting. To my mind, that is pretty impressive for a modern game with modern graphics."

April releases are coming from AMD and Intel

Subject: General Tech | March 21, 2018 - 02:48 PM |
Tagged: H310, H370, B360, Q360, Q370, Intel, amd, ryzen 2000, x470, b450

With both AMD and Intel scheduled to release new chips in a few weeks, it looks like it will be a busy April for reviewers.  Motherboard manufacturers are hoping the retail market will be just as busy, as they have all seen slower sales this quarter than they achieved a year ago.  Indeed, total global motherboard shipments slipped 15% in 2017, a noticeable slowdown.  Intel will be refreshing Coffee Lake and adding several new chipsets, while AMD will be introducing Ryzen 2000 as well as two new chipsets.

From the looks of the names, which are listed at DigiTimes, the naming conventions for the two competing companies will remain annoyingly similar.


"Asustek Computer, ASRock, Gigabyte Technology and Micro-Star International (MSI) have all begun making deployments, hoping their motherboard shipments in the second quarter can at least remain at levels similar to those a year ago, according market watchers."



Source: DigiTimes

GDC 2018: Qualcomm Talks Future of VR and AR with Upcoming Dev Kit

Subject: General Tech | March 21, 2018 - 09:20 AM |
Tagged: xr, VR, Tobii, qualcomm, HMD, GDC 2018, GDC, eye-tracking, developers, dev kit, AR

We have recently covered news of Qualcomm's ongoing VR/AR efforts (the two terms now combine as "XR", for eXtended reality), including the Snapdragon 845-powered reference HMD and, more recently, the collaboration with Tobii to bring eye-tracking to the Qualcomm development platform. Today at GDC, Qualcomm is mapping out its vision for the future of XR, providing additional details about the Snapdragon 845 dev kit, and announcing support for the HTC Vive Wave SDK.


From Qualcomm:

For the first time, many new technologies that are crucial for an optimal and immersive VR user experience will be supported in the Snapdragon 845 Virtual Reality Development Kit. These include:


  • Room-scale 6DoF SLAM: The Snapdragon 845 Virtual Reality Development Kit is engineered to help VR developers create applications that allow users to explore virtual worlds, moving freely around in a room, rather than being constrained to a single viewing position. Un-tethered mobile VR experiences like these can benefit from the Snapdragon 845 Virtual Reality Development Kit’s pre-optimized hardware and software for room-scale six degrees of freedom (6DoF) with “inside-out” simultaneous localization and mapping (SLAM). All of this is designed to be accomplished without any external setup in the room by the users, and without any cables or wires.
  • Qualcomm® Adreno™ Foveation: Our eyes are only able to observe significant details in a very small center of our field of vision - this region is called the “fovea”. Foveated rendering utilizes this understanding to boost performance & save power, while also improving visual quality. This is accomplished through multiple technology advancements for multi-view, tile-based foveation with eye-tracking and fine grain preemption to help VR application developers deliver truly immersive visuals with optimal power efficiency.


  • Eye Tracking: Users naturally convey intentions about how and where they want to interact within virtual worlds through their eyes. Qualcomm Technologies worked with Tobii AB to develop an integrated and optimized eye tracking solution for the Snapdragon 845 VR Development Kit. The cutting-edge eye tracking solution on Snapdragon 845 VR Development Kit is designed to help developers utilize Tobii’s EyeCore™ eye tracking algorithms to create content that utilizes gaze direction for fast interactions, and superior intuitive interfaces.
  • Boundary System: The new SDK for the Snapdragon 845 VR Development Kit supports a boundary system that is engineered to help VR application developers accurately visualize real-world spatial constraints within virtual worlds, so that their applications can effectively manage notifications and play sequences for VR games or videos, as the user approaches the boundaries of the real-world play space.


In addition to enhancing commercial reach for the VR developer community, Qualcomm Technologies is excited to announce support for the HTC Vive Wave™ VR SDK on the Snapdragon 845 Virtual Reality Development Kit, anticipated to be available later this year. The Vive Wave™ VR SDK is a comprehensive tool set of APIs that is designed to help developers create high-performance, Snapdragon-optimized content across diverse hardware vendors at scale, and offer a path to monetizing applications on future HTC Vive ready products via the multi-OEM Viveport™ application store.

The Snapdragon 845 HMD/dev kit and SDK are expected to be available in Q2 2018.

Source: Qualcomm

AMD finalizing fixes for Ryzen, EPYC security vulnerabilities

Subject: Processors | March 20, 2018 - 04:33 PM |
Tagged: ryzenfall, masterkey, fallout, cts labs, chimera, amd

AMD CTO Mark Papermaster published a blog post today that both acknowledges the security vulnerabilities first disclosed in a CTS Labs report last week and lays the foundation for the mitigations to be released. Though the company had already acknowledged the report, and at least one other independent security company validated the claims, we had yet to hear from AMD officially on the potential impact and what fixes might be possible for these concerns.

In the write-up, Papermaster is clear to call out the short period of time AMD was given with this information, quoting “less than 24 hours” from the time it was notified to the time the story was public on news outlets and blogs across the world. For those who may not follow the security landscape closely, it is important to note that this has no relation to the Spectre and Meltdown issues affecting the industry, and that what CTS did find has nothing to do with the Zen architecture itself. Instead, the problem revolves around the embedded security protocol processor; while that is an important distinction moving forward, from a practical view for customers this is one and the same.


AMD states that it has “rapidly completed its assessment and is in the process of developing and staging the deployment of mitigations.” Rapidly is an understatement – going from blindsided to an organized response is a delicate process and AMD has proven its level of sincerity with the priority it placed on this.

Papermaster goes on to mention that all these exploits require administrative access to the computer being infected, a key differentiator from the Spectre/Meltdown vulnerabilities. The post points out that “any attacker gaining unauthorized administrative access would have a wide range of attacks at their disposal well beyond the exploits identified in this research.” I think AMD does an excellent job threading the needle in this post balancing the seriousness of these vulnerabilities with the overzealous hype that was created upon their initial release and the accompanying financial bullshit that followed.

AMD provides an easy-to-understand table with a breakdown of the vulnerabilities, the potential impact of the security risk, and what the company sees as its mitigation capability. Both sets that affect the secure processor in the Ryzen and EPYC designs are addressable with a firmware update for the secure unit itself, distributed through a standard BIOS update. For the Promontory chipset issue, AMD is utilizing a combination of a BIOS update and additional work with ASMedia to further enhance the security updates.



That is the end of the update from AMD at this point. In my view, the company is doing a satisfactory job addressing the problems on what must be an insanely accelerated timetable. I do wish AMD were willing to offer more specific timetables for the distribution of those security patches, and how long we should expect to wait to see them in the form of BIOS updates for consumer and enterprise customers. For now, we’ll monitor the situation and look for other input from AMD, CTS, or secondary security firms to see if the risks laid out ever materialize.

For what could have been a disastrous week for AMD, it has pivoted to provide a controlled, well-executed plan. Despite the hype and hysteria that might have started with stock-shorting and buzzwords, the plight of the AMD processor family looks stable.

Source: AMD
Manufacturer: Corsair

Introduction and Features



Corsair is a well-respected name in the PC industry and they continue to offer a complete line of products for enthusiasts, gamers, and professionals alike.  Today we are taking a detailed look at Corsair’s latest flagship power supply, the AX1600i Digital ATX power supply unit. This is the most technologically advanced power supply we have reviewed to date. Over time, we often grow numb to marketing terms like “most technologically advanced”, “state-of-the-art”, “ultra-stable”, “super-high efficiency”, etc., but in the case of the AX1600i Digital PSU, we have seen these claims come to life before our eyes.

1,600 Watts: 133.3 Amps on the +12V outputs!

The AX1600i Digital power supply is capable of delivering up to 1,600 watts of continuous DC power (133.3 Amps on the +12V rails) and is 80 Plus Titanium certified for super-high efficiency. If that’s not impressive enough, the PSU can do it while operating on 115 VAC mains and with an ambient temperature up to 50°C (internal case temperature). This beast was made for multiple power-hungry graphics adapters and overclocked CPUs.
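Those headline figures are simple arithmetic on the 1,600 watt rating; as a quick sanity check (with the ~40% zero-RPM threshold taken from Corsair's spec sheet), the numbers work out like this:

    # Sanity-checking the AX1600i's headline numbers (values taken from Corsair's spec sheet).
    rated_watts = 1600            # continuous DC output
    rail_voltage = 12.0           # single +12V rail
    fan_threshold = 0.40          # approximate zero-RPM fan mode ceiling (~40% load)

    print(f"+12V current at full load: {rated_watts / rail_voltage:.1f} A")   # ~133.3 A
    print(f"Zero-RPM fan mode up to:   {rated_watts * fan_threshold:.0f} W")  # ~640 W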

The AX1600i is a digital power supply, which provides two distinct advantages. First, it incorporates Digital Signal Processing (DSP) on both the primary and secondary sides, which allows the PSU to deliver extremely tight voltage regulation over a wide range of loads. And second, the AX1600i features the digital Corsair Link, which enables the PSU to be connected to the PC’s motherboard (via USB) for real-time monitoring (efficiency, voltage regulation, and power usage) and control (over-current protection and fan speed profiles).

Quiet operation with a semi-fanless mode (zero-rpm fan mode up to ~40% load) might not be at the top of your feature list when shopping for a 1,600 watt PSU, but the AX1600i is up to the challenge.


(Courtesy of Corsair)

Corsair AX1600i Digital ATX PSU Key Features:

•    Digital Signal Processor (DSP) for extremely clean and efficient power
•    Corsair Link Interface for monitoring and adjusting performance
•    1,600 watts continuous power output (50°C)
•    Dedicated single +12V rail (133.3A) with user-configurable virtual rails
•    80 Plus Titanium certified, delivering up to 94% efficiency
•    Ultra-low noise 140mm Fluid Dynamic Bearing (FDB) fan
•    Silent, Zero RPM mode up to ~40% load (~640W)
•    Self-test switch to verify power supply functionality
•    Premium components (GaN transistors and all Japanese made capacitors)
•    Fully modular cable system
•    Conforms to ATX12V v2.4 and EPS 2.92 standards
•    Universal AC input (100-240V) with Active PFC
•    Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
•    Dimensions: 150mm (W) x 86mm (H) x 200mm (L)
•    10-Year warranty and legendary Corsair customer service
•    $449.99 USD

Please continue reading our review of the AX1600i Digital PSU!!!

Manufacturer: Microsoft

It's all fun and games until something something AI.

Microsoft announced the Windows Machine Learning (WinML) API about two weeks ago, but they did so in a sort-of abstract context. This week, alongside the 2018 Game Developers Conference, they are grounding it in a practical application: video games!


Specifically, the API provides the mechanisms for game developers to run inference on the target machine. The trained models that it runs against would be in the Open Neural Network Exchange (ONNX) format from Microsoft, Facebook, and Amazon. As the initial announcement suggests, it can be used for any application, not just games, but… you know. If you want to get a technology off the ground, and it requires a high-end GPU, then video game enthusiasts are good lead users. When run in a DirectX application, WinML kernels are queued on the DirectX 12 compute queue.
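WinML itself is surfaced as a WinRT API, so the snippet below is only a stand-in: it uses the cross-platform onnxruntime Python package to illustrate the same load-a-model, bind-inputs, run-inference flow against an ONNX file. The model path and input shape are hypothetical.

    # Illustrative stand-in for the WinML flow, using onnxruntime rather than the WinRT API.
    # "model.onnx" and the input tensor shape are placeholders, not from Microsoft's announcement.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx")                 # load a trained ONNX model
    input_name = session.get_inputs()[0].name                    # discover the model's input binding

    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)    # stand-in for game-generated data
    outputs = session.run(None, {input_name: frame})             # run inference on the target machine
    print(outputs[0].shape)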

We’ve discussed the concept before. When you’re rendering a video game, simulating an accurate scenario isn’t your goal – the goal is to look like you are. The direct way of looking like you’re doing something is to do it. The problem is that some effects are too slow (or, sometimes, too complicated) to correctly simulate. In these cases, it might be viable to make a deep-learning AI hallucinate a convincing result, even though no actual simulation took place.

Fluid dynamics, global illumination, and up-scaling are three examples.

Previously mentioned SIGGRAPH demo of fluid simulation without fluid simulation...
... just a trained AI hallucinating a scene based on input parameters.

Another place where AI could be useful is… well… AI. One way of making AI is to give it some set of data from the game environment, often including information that a player in its position would not be able to know, and having it run against a branching logic tree. Deep learning, on the other hand, can train itself on billions of examples of good and bad play, and produce results based on input parameters. While the two methods do not sound that different, moving from logic that is designed to logic that is assembled from an abstract good/bad dataset somewhat abstracts away the potential for assumptions and programmer error. Of course, it shifts that potential for error into the training dataset, but that’s a whole other discussion.
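To make that distinction concrete, here is a hedged sketch (the state fields, encode(), and policy_net are invented for illustration): the hand-authored version spells out its assumptions as branches, while the learned version buries the same state-to-action mapping in trained weights.

    # Hand-authored branching logic: every assumption is an explicit, inspectable branch.
    def guard_behavior(state):
        if state["can_see_player"]:
            return "attack" if state["health"] > 30 else "retreat"
        return "patrol"

    # Learned alternative: the same mapping, but the "branches" live in trained weights,
    # so errors come from the training dataset rather than from a programmer's assumptions.
    # policy_net and encode are hypothetical stand-ins for a trained model and its featurizer.
    def learned_guard_behavior(state, policy_net, encode, actions=("attack", "retreat", "patrol")):
        scores = policy_net(encode(state))                   # one score per candidate action
        best = max(range(len(actions)), key=lambda i: scores[i])
        return actions[best]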

The third area that AI could be useful is when you’re creating the game itself.

There’s a lot of grunt and grind work when developing a video game. Licensing prefab solutions (or commissioning someone to do a one-off asset for you) helps ease this burden, but that gets expensive in terms of both time and money. If some of those assets could be created by giving parameters to a deep-learning AI, then those are assets that you would not need to make, allowing you to focus on other assets and how they all fit together.

These are three of the use cases that Microsoft is aiming WinML at.


Sure, these are smooth curves of large details, but the antialiasing pattern looks almost perfect.

For instance, Microsoft is pointing to an NVIDIA demo where they up-sample a photo of a car, once with bilinear filtering and once with a machine learning algorithm (although not WinML-based). The bilinear algorithm behaves exactly as anyone who has used Photoshop would expect. The machine learning algorithm, however, was able to identify the objects that the image intended to represent, and it drew the edges that it thought made sense.
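For reference, bilinear up-sampling is just a weighted average of the four nearest source pixels, which is exactly why it blurs edges rather than reconstructing them. A minimal sketch of the idea (grayscale only, and not the code used in the demo):

    # Minimal bilinear up-sampler for a 2D (grayscale) image: every output pixel is a
    # weighted average of its four nearest source pixels, so no new edges are invented.
    import numpy as np

    def bilinear_upscale(img, factor):
        h, w = img.shape
        ys = np.clip((np.arange(h * factor) + 0.5) / factor - 0.5, 0, h - 1)
        xs = np.clip((np.arange(w * factor) + 0.5) / factor - 0.5, 0, w - 1)
        y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
        x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
        wy = (ys - y0)[:, None]                              # fractional vertical weights
        wx = (xs - x0)[None, :]                              # fractional horizontal weights
        top = img[y0][:, x0] * (1 - wx) + img[y0][:, x0 + 1] * wx
        bot = img[y0 + 1][:, x0] * (1 - wx) + img[y0 + 1][:, x0 + 1] * wx
        return top * (1 - wy) + bot * wy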


Like their DirectX Raytracing (DXR) announcement, Microsoft plans to have PIX support WinML “on Day 1”. As for partners? They are currently working with Unity Technologies to provide WinML support in Unity’s ML-Agents plug-in. That’s all the game industry partners they have announced at the moment, though. It’ll be interesting to see who jumps in and who doesn’t over the next couple of years.

Manufacturer: Microsoft

O Rayly? Ya Rayly. No Ray!

Microsoft has just announced a raytracing extension to DirectX 12, called DirectX Raytracing (DXR), at the 2018 Game Developers Conference in San Francisco.


The goal is not to completely replace rasterization… at least not yet. Instead, raytracing will mostly be used for effects that require supplementary datasets, such as reflections, ambient occlusion, and refraction. Rasterization, the typical way that 3D geometry gets drawn on a 2D display, converts triangle coordinates into screen coordinates, and then a point-in-triangle test runs across every sample. This will likely occur once per AA sample (minus pixels that the triangle can’t possibly cover -- such as a pixel outside of the triangle's bounding box -- but that's just optimization).


For rasterization, each triangle is laid on a 2D grid corresponding to the draw surface.
If any sample is in the triangle, the pixel shader is run.
This example shows the rotated grid MSAA case.
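To make the coverage test concrete, here is a simplified sketch of the edge-function approach most rasterizers use (plain Python, screen-space 2D only, ignoring fill rules and real AA sample patterns): a sample is covered when it falls on the same side of all three edges.

    # Simplified rasterizer coverage test: a sample is inside the triangle when it lies
    # on the same side of all three edges (same sign for all three edge functions).
    def sample_covered(tri, px, py):
        signs = []
        for i in range(3):
            ax, ay = tri[i]
            bx, by = tri[(i + 1) % 3]
            signs.append((bx - ax) * (py - ay) - (by - ay) * (px - ax))
        return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

    # Walk the triangle's bounding box and test one sample per pixel center
    # (a GPU would test one sample per AA sample position instead).
    def rasterize(tri, width, height):
        xs, ys = zip(*tri)
        covered = []
        for y in range(max(0, int(min(ys))), min(height, int(max(ys)) + 1)):
            for x in range(max(0, int(min(xs))), min(width, int(max(xs)) + 1)):
                if sample_covered(tri, x + 0.5, y + 0.5):
                    covered.append((x, y))       # this is where the pixel shader would run
        return covered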

A program, called a pixel shader, is then run with some set of data that the GPU could gather on every valid pixel in the triangle. This set of data typically includes things like world coordinate, screen coordinate, texture coordinates, nearby vertices, and so forth. This lacks a lot of information, especially things that are not visible to the camera. The application is free to provide other sources of data for the shader to crawl… but what?

  • Cubemaps are useful for reflections, but they don’t necessarily match the scene.
  • Voxels are useful for lighting, as seen with NVIDIA’s VXGI and VXAO.

This is where DirectX Raytracing comes in. There are quite a few components to it, but it’s basically a new pipeline that handles how rays are cast into the environment. After being queued, it starts out with a ray-generation stage, and then, depending on what happens to the ray in the scene, there are closest-hit, any-hit, and miss shaders. Ray generation is where the developer sets up how the rays are cast, by calling an HLSL intrinsic instruction, TraceRay (which is a clever way of invoking them, by the way). This function takes an origin and a direction, so you can choose, for example, to cast rays only in the direction of lights if your algorithm were to approximate partially occluded soft shadows from a non-point light. (There are better algorithms to do that, but it's just the first example that came off the top of my head.) The closest-hit, any-hit, and miss shaders occur at the point where the traced ray ends.

To connect this with current technology, imagine that ray-generation is like a vertex shader in rasterization, where it sets up the triangle to be rasterized, leading to pixel shaders being called.


Even more interesting – the closest-hit, any-hit, and miss shaders can call TraceRay themselves, which is used for multi-bounce and other recursive algorithms (see: figure above). The obvious use case might be reflections, which is the headline of the GDC talk, but they want it to be as general as possible, aligning with the evolution of GPUs. Looking at NVIDIA’s VXAO implementation, it also seems like a natural fit for a raytracing algorithm.
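As a mental model (and nothing more), that control flow maps onto something like the toy CPU-side sketch below. This is plain Python standing in for HLSL and is not the DXR API; the scene, camera, and hit objects are assumed interfaces. The point is the shape of the pipeline: ray generation launches rays, a closest-hit or miss "shader" runs depending on the outcome, and a hit shader can trace again for bounces.

    # Toy sketch of the DXR control flow (not the real API; scene/camera/hit are assumed objects).
    def trace_ray(scene, origin, direction, depth=0):
        hit = scene.intersect(origin, direction)      # in DXR, traversal is handled for you
        if hit is None:
            return miss_shader(direction)
        return closest_hit_shader(scene, hit, direction, depth)

    def miss_shader(direction):
        return (0.2, 0.3, 0.8)                        # e.g. return a sky color

    def closest_hit_shader(scene, hit, direction, depth, max_depth=2):
        color = hit.albedo
        if depth < max_depth:                         # hit shaders may call TraceRay again...
            bounce = trace_ray(scene, hit.position, reflect(direction, hit.normal), depth + 1)
            color = mix(color, bounce, 0.5)           # ...which is how multi-bounce reflections work
        return color

    # Ray generation "shader": decide which rays to cast, analogous to calling TraceRay(origin, dir).
    def ray_generation(scene, camera, width, height):
        return [[trace_ray(scene, camera.position, camera.ray_through_pixel(x, y))
                 for x in range(width)] for y in range(height)]

    def reflect(d, n):                                # small vector helpers to keep this self-contained
        dot = sum(di * ni for di, ni in zip(d, n))
        return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

    def mix(a, b, t):
        return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))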

Speaking of data structures, Microsoft also detailed what they call the acceleration structure. Each object is composed of two levels. The top level contains per-object metadata, like its transformation and whatever else data that the developer wants to add to it. The bottom level contains the geometry. The briefing states, “essentially vertex and index buffers” so we asked for clarification. DXR requires that triangle geometry be specified as vertex positions in either 32-bit float3 or 16-bit float3 values. There is also a stride property, so developers can tweak data alignment and use their rasterization vertex buffer, as long as it's HLSL float3, either 16-bit or 32-bit.
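One rough way to picture that two-level layout is the conceptual mock-up below (illustrative Python only; the field names are invented, and the real API uses D3D12 descriptor structs rather than anything like this):

    # Conceptual mock-up of the two-level acceleration structure described above.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Float3 = Tuple[float, float, float]

    @dataclass
    class BottomLevelGeometry:
        vertices: List[Float3]          # vertex positions (16- or 32-bit float3 in DXR)
        indices: List[int]              # "essentially vertex and index buffers"
        stride_bytes: int = 12          # stride lets an existing rasterization vertex buffer be reused

    @dataclass
    class TopLevelInstance:
        geometry: BottomLevelGeometry   # which bottom-level entry this instance refers to
        transform: List[List[float]]    # per-object transformation
        metadata: Dict = field(default_factory=dict)   # whatever else the developer wants to attach

    # A scene's acceleration structure is then a list of TopLevelInstance entries that
    # share BottomLevelGeometry between objects using the same mesh.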

As for the tools to develop this in…


Microsoft announced PIX back in January 2017. This is a debugging and performance analyzer for 64-bit, DirectX 12 applications. Microsoft will upgrade it to support DXR as soon as the API is released (specifically, “Day 1”). This includes the API calls, the raytracing pipeline resources, the acceleration structure, and so forth. As usual, you can expect Microsoft to support their APIs with quite decent – not perfect, but decent – documentation and tools. They do it well, and they want to make sure it’s available when the API is.


Example of DXR via EA's in-development SEED engine.

In short, raytracing is here, but it’s not taking over rasterization. It doesn’t need to. Microsoft is just giving game developers another, standardized mechanism to gather supplementary data for their games. Several game engines have already announced support for this technology, including the usual suspects of anything top-tier game technology:

  • Frostbite (EA/DICE)
  • SEED (EA)
  • 3DMark (Futuremark)
  • Unreal Engine 4 (Epic Games)
  • Unity Engine (Unity Technologies)

They also said, “and several others we can’t disclose yet”, so this list is not even complete. But, yeah, if you have Frostbite, Unreal Engine, and Unity, then you have a sizeable market as it is. There is always a question about how much each of these engines will support the technology. Currently, raytracing is not portable outside of DirectX 12, because it’s literally being announced today, and each of these engines intends to support more than just Windows 10 and Xbox.

Still, we finally have a standard for raytracing, which should drive vendors to optimize in a specific direction. From there, it's just a matter of someone taking the risk to actually use the technology for a cool work of art.

If you want to read more, check out Ryan's post about the also-announced RTX, NVIDIA's raytracing technology.

NVIDIA RTX Technology Accelerates Ray Tracing for Microsoft DirectX Raytracing API

Subject: Graphics Cards | March 19, 2018 - 01:00 PM |
Tagged: rtx, nvidia, dxr

The big news from the Game Developers Conference this week was Microsoft’s reveal of its work on a new ray tracing API for DirectX called DirectX Raytracing. As the name would imply, this is a new initiative to bring the image quality improvements of ray tracing to consumer hardware with the push of Microsoft’s DX team. Scott already has a great write up on that news and current and future implications of what it will mean for PC gamers, so I highly encourage you all to read that over before diving more into this NVIDIA-specific news.

Ray tracing has been the holy grail of real-time rendering. It is the gap between movies and games – though ray tracing continues to improve in performance, it takes the power of offline server farms to render the images for your favorite flicks. Modern game engines continue to use rasterization, an efficient method for rendering graphics but one that depends on tricks and illusions to recreate the intended image. Ray tracing inherently solves the problems that rasterization works around, including shadows, transparency, refraction, and reflection, but it does so at a prohibitive performance cost. That will be changing with Microsoft’s enablement of ray tracing through a common API and technology like what NVIDIA has built to accelerate it.


Alongside support and verbal commitment to DXR, NVIDIA is announcing RTX Technology. This is a combination of hardware and software advances to improve the performance of ray tracing algorithms on its hardware, and it works hand in hand with DXR. NVIDIA believes this is the culmination of 10 years of development on ray tracing, much of which we have talked about on this site from the world of professional graphics systems. Think Iray, OptiX, and more.

RTX will run on Volta GPUs only today, which does limit its usefulness to gamers. With the only graphics card on the market that comes even close to being considered a gaming product being the $3,000 TITAN V, RTX is more of a forward-looking technology announcement for the company. We can obviously assume, then, that RTX technology will be integrated into any future consumer gaming graphics cards, be that a revision of Volta or something completely different. (NVIDIA refused to acknowledge plans for any pending Volta consumer GPUs during our meeting.)

The idea I get from NVIDIA is that today’s RTX is meant as a developer enablement platform, getting them used to the idea of adding ray tracing effects into their games and engines and to realize that NVIDIA provides the best hardware to get that done.

I’ll be honest with you – NVIDIA was light on the details of what RTX exactly IS and how it accelerates ray tracing. One very interesting example I was given was seen first with the AI-powered ray tracing optimizations for OptiX from last year’s GDC. There, NVIDIA demonstrated that using the Volta Tensor cores it could run an AI-powered de-noiser on the ray traced image, effectively improving the quality of the resulting image and emulating much higher ray counts than are actually processed.

By using the Tensor cores with RTX for DXR implementation on the TITAN V, NVIDIA will be able to offer image quality and performance for ray tracing well ahead of even the TITAN Xp or GTX 1080 Ti, as those GPUs do not have Tensor cores on-board. Does this mean that all (or flagship) consumer graphics cards from NVIDIA will include Tensor cores to enable RTX performance? Obviously, NVIDIA wouldn’t confirm that, but to me it makes sense that we will see that in future generations. The scale of Tensor core integration might change based on price points, but if NVIDIA and Microsoft truly believe in the future of ray tracing to augment and significantly replace rasterization methods, then it will be necessary.

Though that is one example of hardware specific features being used for RTX on NVIDIA hardware, it’s not the only one that is on Volta. But NVIDIA wouldn’t share more.

The relationship between Microsoft DirectX Raytracing and NVIDIA RTX is a bit confusing, but it’s easier to think of RTX as the underlying brand for the ability to ray trace on NVIDIA GPUs. The DXR API is still the interface between the game developer and the hardware, but RTX is what gives NVIDIA the advantage over AMD and its Radeon graphics cards, at least according to NVIDIA.

DXR will still run on other GPUs from NVIDIA that aren’t utilizing the Volta architecture. Microsoft says that any board that can support DX12 Compute will be able to run the new API. But NVIDIA did point out that in its mind, even with a high-end SKU like the GTX 1080 Ti, the ray tracing performance will limit the ability to integrate ray tracing features and enhancements in real-time game engines in the immediate timeframe. That’s not to say it is impossible, or that some engine devs won’t spend the time to build something unique, but it is interesting to hear NVIDIA imply that only future products will benefit from ray tracing in games.

It’s also likely that we are months if not a year or more from seeing good integration of DXR in games at retail. And it is also possible that NVIDIA is downplaying the importance of DXR performance today if it happens to be slower than the Vega 64 in the upcoming Futuremark benchmark release.


Alongside the RTX announcement comes GameWorks Ray Tracing, a collection of turnkey modules based on DXR. GameWorks has its own reputation, and we aren't going to get into that here, but NVIDIA wants to think of this addition to it as a way to "turbo charge enablement" of ray tracing effects in games.

NVIDIA believes that developers are incredibly excited for the implementation of ray tracing into game engines, and that the demos being shown at GDC this week will blow us away. I am looking forward to seeing them and for getting the reactions of major game devs on the release of Microsoft’s new DXR API. The performance impact of ray tracing will still be a hindrance to larger scale implementations, but with DXR driving the direction with a unified standard, I still expect to see some games with revolutionary image quality by the end of the year. 

Source: NVIDIA

HTC announces VIVE Pro Pricing, Available now for Preorder

Subject: General Tech, Graphics Cards | March 19, 2018 - 12:09 PM |
Tagged: vive pro, steamvr, rift, Oculus, Lighthouse, htc

Today, HTC has provided what VR enthusiasts have been eagerly waiting for since the announcement of the upgraded VIVE Pro headset earlier in the year at CES: the pricing and availability of the new device.


Available for preorder today, the VIVE Pro will cost $799 for the headset-only upgrade. As we mentioned during the VIVE Pro announcement, this first upgrade kit is meant for existing VIVE users who will be reusing their original controllers and lighthouse trackers to get everything up and running.

The HMD-only kit, with its upgraded resolution and optics, is set to start shipping very soon on April 3 and can be preordered now on the HTC website.

Additionally, your VIVE Pro purchase (through June 3rd, 2018) will come with a free six-month subscription to HTC's VIVEPORT subscription game service, which gives you access to up to 5 titles per month for free (chosen from the VIVEPORT catalog of 400+ games).

There is still no word on the pricing and availability of the full VIVE Pro kit including the updated Lighthouse 2.0 trackers, but it seems likely that it will come later in the Summer after the upgrade kit has saturated the market of current VIVE owners.

As far as system requirements go, the HTC site doesn't list any difference between the standard VIVE and the VIVE Pro. One change, however, is the lack of an HDMI port on the new VIVE Pro link box, so you'll need a graphics card with an open DisplayPort 1.2 connector. 

Source: HTC
Subject: Storage
Manufacturer: CalDigit

CalDigit Tuff Rugged External Drive

There are a myriad of options when it comes to portable external storage. But if you value durability just as much as portability, those options quickly dry up. Combining a cheap 2.5-inch hard drive with an AmazonBasics enclosure is often just fine for an external storage solution that sits in your climate controlled office all day, but it's probably not the best choice for field use during your national park photography trip, your scuba diving expedition, or on-site construction management.

For situations like these where the elements become a factor and the chance of an accidental drop skyrockets, it's a good idea to invest in "ruggedized" equipment. Companies like Panasonic and Dell have long offered laptops custom-designed to withstand unusually harsh environments, and accessory makers have followed suit with ruggedized hard drives.

Today we're taking a look at one such ruggedized hard drive, the CalDigit Tuff. Released in 2017, the CalDigit Tuff is a 2.5-inch bus-powered external drive available in both HDD and SSD options. CalDigit loaned us the 2TB HDD model for testing.


Continue reading our review of the CalDigit Tuff rugged USB-C drive!

PNY Adds CS900 960GB SATA SSD To Budget SSD Series

Subject: General Tech, Storage | March 18, 2018 - 12:20 AM |
Tagged: ssd, sata 3, pny, 3d nand

PNY has added a new solid-state drive to its CS900 lineup doubling the capacity to 960GB. The SATA-based SSD is a 2.5" 7mm affair suitable for use in laptops and SFF systems as well as a budget option for desktops.


The CS900 960GB SSD uses 3D TLC NAND flash and offers ECC, end-to-end data protection, secure erase, and power saving features to protect data and battery life in mobile devices. Unfortunately, information on the controller and NAND flash manufacturer is not readily available though I suspect it uses a Phison controller like PNY's other drives.

The 960GB capacity model is rated for sequential reads of 535 MB/s and sequential writes of 515 MB/s. PNY rates the drive at 2 million hours MTBF and they cover it with a 3-year warranty.

We may have to wait for reviews (we know how Allyn loves to tear apart drives!) for more information on this drive especially where random read/write and latency percentile performance are concerned. The good news is that if the performance is there the budget price seems right with an MSRP of $249.99 and an Amazon sale price of $229.99 (just under 24 cents/GB) at time of writing. Not bad for nearly a terabyte of solid state storage (though if you don't need that much space you can alternatively find PCI-E based M.2 SSDs in this price range).

Source: PNY
Subject: Storage
Manufacturer: MyDigitalSSD

Introduction, Specifications and Packaging


When one thinks of an M.2 SSD, it is typically associated with either a SATA 6Gb/s link or, more recently, a PCIe 3.0 x4 link. The physical interface of M.2 was meant to accommodate future methods of connectivity, but it's easy to overlook the ability to revert back to something like a PCIe 3.0 x2 link. Why take a seemingly backward step on the interface of an SSD? Several reasons, actually. Halving the number of lanes makes for a simpler SSD controller design, which lowers cost. Power savings are also a factor, as driving a given twisted pair lane at PCIe 3.0 speeds draws measurable current from the host and therefore adds to the heat production of the SSD controller. We recently saw that a PCIe 3.0 x2 SSD can still turn in respectable performance despite the lower bandwidth interface, but how far can we get the price down when pairing that host link with some NAND flash?


Enter the MyDigitalSSD SBX series. Short for Super Boot eXpress, the aim of these parts is to offer a reasonably performant PCIe NVMe SSD at something closer to SATA SSD pricing.


  • Physical: M.2 2280 (single sided)
  • Controller: Phison E8 (PS5008-E8)
  • Capacities: 128GB, 256GB, 512GB, 1TB
  • PCIe 3.0 x2, M.2 2280
  • Sequential: Up to 1.6/1.3 GB/s (R/W)
  • Random: 240K+ / 180K+ IOPS (R/W)
  • Weight: 8g
  • Power: <5W
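Those rated numbers also show why the two-lane link is not much of a constraint for this class of drive. PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, or roughly 985 MB/s of usable bandwidth per lane in each direction before protocol overhead, so a quick back-of-the-envelope check looks like this:

    # Back-of-the-envelope PCIe link bandwidth vs. the SBX's rated sequential speeds.
    GT_PER_SEC = 8e9                    # PCIe 3.0: 8 GT/s per lane
    ENCODING = 128 / 130                # 128b/130b line encoding

    per_lane = GT_PER_SEC * ENCODING / 8            # ~0.985 GB/s per lane, per direction
    for lanes in (2, 4):
        print(f"x{lanes} link: ~{lanes * per_lane / 1e9:.2f} GB/s raw")

    # x2 comes to ~1.97 GB/s, comfortably above the rated 1.6 GB/s sequential read,
    # while x4 (~3.94 GB/s) is headroom a budget controller like this doesn't need.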


The MyDigitalDiscount guys keep things extremely simple with their SSD packaging, which is exactly how it should be. It doesn't take much to package and protect an M.2 SSD, and this does the job just fine. They also include a screwdriver and a screw, just in case you run into a laptop that came without one installed.


Read on for our full review of all capacities of the MyDigitalSSD SBX lineup!

Tobii and Qualcomm Announce Collaboration on Mobile VR Headsets with Eye-Tracking

Subject: General Tech | March 16, 2018 - 09:45 AM |
Tagged: xr, VR, Tobii, snapdragon 845, qualcomm, mobile, HMD, head mounted display, eye tracking, AR, Adreno 630

Tobii and Qualcomm's collaboration in the VR HMD (head-mounted display) space is a convergence of two recent stories, with Tobii's impressive showing of a prototype HMD at CES featuring their eye-tracking technology, and Qualcomm's unveiling last month of their updated mobile VR platform, featuring the new Snapdragon 845.


The Qualcomm Snapdragon 845 Mobile VR Reference Platform

What does this new collaboration mean for the VR industry? For now it means a new reference design and dev kit with the latest tech from Tobii and Qualcomm:

"As a result of their collaboration, Tobii and Qualcomm are creating a full reference design and development kit for the Qualcomm Snapdragon 845 Mobile VR Platform, which includes Tobii's EyeCore eye tracking algorithms and hardware design. Tobii will license its eye tracking technologies and system and collaborate with HMD manufacturers on the optical solution for the reference design."

The press release announcing this collaboration recaps the benefits of Tobii eye tracking in a mobile VR/AR device, which include:

  • Foveated Rendering: VR/AR devices become aware of where you are looking and can direct high-definition graphics processing power to that exact spot in real time. This enables higher definition displays, more efficient devices, longer battery life and increased mobility.
  • Interpupillary Distance: Devices automatically orient images to align with your pupils. This enables devices to adapt to the individual user, helping to increase the visual quality of virtual and augmented reality experiences.
  • Hand-Eye Coordination: By using your eyes in harmony with your hands and associated controllers, truly natural interaction and immersion, not possible without the use of gaze, is realized.
  • Interactive Eye Contact: Devices can accurately track your gaze in real time, enabling content creators to express one of the most fundamental dimensions of human interaction – eye contact. VR technologies hold the promise of enabling a new and immersive medium for social interaction. The addition of true eye contact to virtual reality helps deliver that promise.


Tobii's prototype eye-tracking HMD

For its part, Qualcomm's Snapdragon 845-powered VR mobile platform promises greater portability of a better VR experience, with expanded freedom on top of the improved graphics horsepower from the new Adreno 630 GPU in the Snapdragon 845. This portability includes 6DoF (six degrees of freedom) tracking using outward-facing cameras on the headset to identify location within a room, eliminating the need for external room sensors.

"Together, 6DoF and SLAM deliver Roomscale - the ability to track the body and location within a room so you can freely walk around your XR environment without cables or separate room sensors – the first on a mobile standalone device. Much of this is processed on the new dedicated Qualcomm Hexagon Digital Signal Processor (DSP) and Adreno Graphics Processing Unit within the Snapdragon 845. Qualcomm Technologies’ reference designs have supported some of the first wave of standalone VR devices from VR ecosystem leaders like Google Daydream, Oculus and Vive."

It is up to developers, and consumer interest in VR moving forward, to see what this collaboration will produce. To editorialize briefly, from first-hand experience I can vouch for the positive impact of eye-tracking with an HMD, and if future products live up to the promise of a portable, high-performance VR experience (with a more natural feel from less rapid head movement) a new generation of VR enthusiasts could be born.

Source: PR Newswire

PCPer Mailbag #35 - 3/16/2018

Subject: Editorial | March 16, 2018 - 09:00 AM |
Tagged: video, Ryan Shrout, pcper mailbag

It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!

On today's show:

00:50 - Rumble gaming chairs?
02:18 - SSD for VMs?
04:08 - Playing old games in a virtual machine?
05:54 - Socketed GPUs?
09:08 - PC middlemen?
11:02 - NVIDIA tech demos?
13:10 - Raven Ridge motherboard features?
14:45 - x8 and x16 PCIe SSDs?
17:50 - Ryzen 2800X and Radeon RX 600?
19:53 - GeForce Partner Program?
21:14 - Goo goo g'joob?

Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!

Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!

Source: YouTube

Intel promises 2018 processors with hardware mitigation for Spectre and Meltdown

Subject: Processors | March 15, 2018 - 10:29 AM |
Tagged: spectre, meltdown, Intel, cascade lake, cannon lake

In continuing follow up from the spectacle that surrounded the Meltdown and Spectre security vulnerabilities released in January, Intel announced that it has provided patches and updates that address 100% of the products it has launched in the last 5 years. The company also revealed its plan for updated chip designs that will address both the security and performance concerns surrounding the vulnerabilities.

Intel hopes that by releasing new chips to address the security and performance questions quickly it will cement its position as the leader in the enterprise compute space. Customers like Amazon, Microsoft, and Google that run the world’s largest data centers are looking for improved products to make up for the performance loss and assurances moving forward that a similar situation won’t impact their bottom line.


For current products, patches provide mitigations for the security flaws in the form of operating system updates (for Windows, Linux) and what are called microcode updates, small-scale firmware that provides instruction processing updates for a processor. Distributed by Intel OEMs (system vendors and component providers) as well as Microsoft, the patches have seemingly negated the risks for consumer and enterprise customer data, but with a questionable impact on performance.

The mitigations cause the processors to operate differently than originally designed and will cause performance slowdowns on some workloads. These performance degradations are the source of the handful of class-action lawsuits hanging over Intel’s head and are a potential sore spot for its relationship with partners. Details on the performance gaps from the security mitigations have been sparse from Intel, with only small updates posted on corporate blogs. And because the problem has been so widespread, covering the entire Intel product line of the last 10 years, researchers are struggling to keep up.

The new chips that Intel is promising will address both security and performance considerations in silicon rather than software, and will be available in 2018. For the data center this is the Cascade Lake server processor, and for the consumer and business markets this is known as Cannon Lake. Both will include what Intel is calling “virtual fences” between user and operating system privilege levels and will create a significant additional obstacle for potential vulnerabilities.

The chips will also lay the groundwork for future security improvements, providing a method to more easily update the security of the processors through patching.

By moving the security mitigations from software (both operating system and firmware) into silicon, Intel is reducing the performance impact that Spectre and Meltdown cause on select computing tasks. Assurances that future generations of parts won’t suffer from a performance hit are good news for Intel and its customer base, but I don’t think currently afflicted customers will be satisfied with the assertion that they need to buy updated Intel chips to avoid the performance penalty. It will be interesting to see how, if at all, the legal disputes are affected.


The speed at which Intel is releasing updated chips to the market is an impressive engineering feat, and indicates a top-level directive to get this fixed as quickly as possible. In the span of just 12 months (from Intel’s apparent notification of the security vulnerability to the expected release of this new hardware) the company will have integrated fairly significant architectural changes. While this may have been a costly move for the company, it is a drop in the bucket compared to the potential risks of lowered consumer trust or partner migration to competitive AMD processors.

For its part, AMD has had its own security issues pop up this week from a research firm called CTS Labs. While there are extenuating circumstances that cloud the release of the information, AMD does now have a template for how to quickly and effectively address a hardware-level security problem, if it exists.

The full content of Intel's posted story on the subject is included below:

Hardware-based Protection Coming to Data Center and PC Products Later this Year

By Brian Krzanich

In addressing the vulnerabilities reported by Google Project Zero earlier this year, Intel and the technology industry have faced a significant challenge. Thousands of people across the industry have worked tirelessly to make sure we delivered on our collective priority: protecting customers and their data. I am humbled and thankful for the commitment and effort shown by so many people around the globe. And, I am reassured that when the need is great, companies – and even competitors – will work together to address that need.

But there is still work to do. The security landscape is constantly evolving and we know that there will always be new threats. This was the impetus for the Security-First Pledge I penned in January. Intel has a long history of focusing on security, and now, more than ever, we are committed to the principles I outlined in that pledge: customer-first urgency, transparent and timely communications, and ongoing security assurance.

Today, I want to provide several updates that show continued progress to fulfill that pledge. First, we have now released microcode updates for 100 percent of Intel products launched in the past five years that require protection against the side-channel method vulnerabilities discovered by Google. As part of this, I want to recognize and express my appreciation to all of the industry partners who worked closely with us to develop and test these updates, and make sure they were ready for production.

With these updates now available, I encourage everyone to make sure they are always keeping their systems up-to-date. It’s one of the easiest ways to stay protected. I also want to take the opportunity to share more details of what we are doing at the hardware level to protect against these vulnerabilities in the future. This was something I committed to during our most recent earnings call.

While Variant 1 will continue to be addressed via software mitigations, we are making changes to our hardware design to further address the other two. We have redesigned parts of the processor to introduce new levels of protection through partitioning that will protect against both Variants 2 and 3. Think of this partitioning as additional “protective walls” between applications and user privilege levels to create an obstacle for bad actors.

These changes will begin with our next-generation Intel® Xeon® Scalable processors (code-named Cascade Lake) as well as 8th Generation Intel® Core™ processors expected to ship in the second half of 2018. As we bring these new products to market, ensuring that they deliver the performance improvements people expect from us is critical. Our goal is to offer not only the best performance, but also the best secure performance.

But again, our work is not done. This is not a singular event; it is a long-term commitment. One that we take very seriously. Customer-first urgency, transparent and timely communications, and ongoing security assurance. This is our pledge and it’s what you can count on from me, and from all of Intel.

Source: Intel

Podcast #491 - Intel Optane 800P, UltraWide Monitors, and more!

Subject: General Tech | March 15, 2018 - 09:07 AM |
Tagged: video, ultrawide, podcast, Optane, Intel, Huawei, GeForce Partner Program, FreeSync2, cts labs, caldigit

PC Perspective Podcast #491 - 03/14/18

Join us this week for discussion on Intel Optane 800P, UltraWide Monitors, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano

Peanut Gallery: Alex Lustenberg, Jim Tanous

Program length: 1:34:30

Podcast topics of discussion:
  1. Week in Review:
  2. News items of interest:
  3. Picks of the Week:
    1. 1:25:45 Ryan: Basta ZMover
  4. Closing/outro

Logitech Announces G560 Speakers and G513 Keyboard with LIGHTSYNC

Subject: General Tech | March 15, 2018 - 08:00 AM |
Tagged: speakers, romer-g, mechanical, logitech, keyboard, key switches

Logitech has announced a pair of devices featuring the company’s “LIGHTSYNC” technology,  with the new G560 gaming speaker and G513 gaming keyboard. What is LIGHTSYNC? According to Logitech, the technology synchronizes audio and RGB lighting with gameplay, to create a more immersive experience. We have seen a similar concept with the Philips Ambilight TVs, which project lighting onto a rear wall, synchronized with on-screen content to enhance immersion.


How is this new lighting tech implemented with gaming peripherals, exactly? We begin with the G560, which offers plenty of power with a 120W (240W peak) amplifier, and 3D audio effects courtesy of DTS:X Ultra 1.0. As to the LIGHTSYNC implementation, Logitech explains:

“Powered by advanced LIGHTSYNC technology, the Logitech G560 synchronizes brilliant RGB lighting and powerful audio in real time to match on-screen gameplay action. Light and animation effects can be customized across approximately 16.8 million colors, with four lighting zones.”


The G560 offers USB and 3.5mm connectivity, as well as Bluetooth, and Logitech's "Easy-Switch" technology allows for switching between any four connected devices. The included LIGHTSYNC technology works with both video and music.


Next we have the G513 Mechanical Gaming Keyboard, which combines Romer-G switches (your choice of either Romer-G Tactile or Linear), and support for LIGHTSYNC. The Logitech G513 offers a brushed aluminum top case and USB passthrough port, and includes an optional palm rest.

As to pricing, the Logitech G560 PC Gaming Speaker will carry an MSRP of $199.99, with a $149.99 MSRP for the G513 Mechanical Gaming Keyboard. Availability is set for April 2018.

Source: Logitech
Subject: Mobile
Manufacturer: Huawei

A Snappy Budget Tablet

Huawei has been gaining steam. Even though they’re not yet a household name in the United States, they’ve been a major player in Eastern markets with global ambitions. Today we’re looking at the MediaPad M3 Lite, a budget tablet with the kind of snappy performance and better-than-expected features that should make entry-level tablet buyers take notice.


  • MSRP: $247.93
  • Size: 213.3mm (H) x 123.3mm (W) x 7.5mm (D)
  • Color: White, Gold, Space Gray
  • Display: 1920 x 1200 IPS
  • CPU: Qualcomm MSM8940, Octa-core
  • Operating System: Android 7.0/EMUI 5.1
  • Memory (RAM + storage): 3GB + 16GB (tested), 3GB + 32GB, 4GB + 64GB
  • Network: LTE Cat 4, Wi-Fi 802.11ac (2.4 GHz & 5 GHz)
  • GPS: GPS, A-GPS, GLONASS, and BDS
  • Connectivity: USB 2.0 high-speed; supported features: charging, USB OTG, USB tethering, and MTP/PTP
  • Sensors: Gravity sensor, ambient light sensor, compass, gyroscope (CPN-L09 only; not supported on CPN-W09)
  • Camera: Rear: 8 MP with autofocus; Front: 8 MP with fixed focus
  • Audio: 2 speakers + 2 SmartPA, Super Wide Sound (SWS) 3.0 sound effects, Harman Kardon tuning and certification
  • Video: File formats: .3gp, .mp4, .webm, .mkv, .ts, .3g2, .flv, and .m4v
  • Battery: 6600 mAh
  • In the Box: Charger, USB cable, SIM eject tool, quick start guide, warranty card


The tablet arrives well-packed inside a small but sturdy box. I’ve got to say, I love the copper-on-white look they’ve gone with and wish they’d applied it to the tablet itself, which is white and silver. Inside the box is the tablet, a charging brick with USB cable, a SIM eject tool, and a warranty card. It’s a bit sparse, but at this price point it’s perfectly fine.


The tablet looks remarkably similar to the Samsung Galaxy Tab 4, only missing the touch controls on either side of the Home button and shifting the branding to the upper left. This isn’t a bad thing by any means but the resemblance is definitely striking. One notable difference is that the Home button isn’t actually a button at all but a touch sensor that doubles as the fingerprint sensor. 


The MediaPad M3 Lite comes in at 7.5mm, or just under 0.3”, thick. Virtually all of the name brand tablets I researched prior to this review are within 0.05” of each other, so Huawei’s offering is in line with what we would expect, if ever so slightly thinner.

Continue reading our review of the Huawei MediaPad M3 Lite 10!

Subject: Editorial
Manufacturer: AMD

Much Ado About Nothing?

We live in a world seemingly fueled by explosive headlines. This morning we were welcomed with a proclamation that AMD has 13 newly discovered security flaws in their latest Ryzen/Zen chips that could potentially be showstoppers for the architecture and for AMD’s hopes of regaining lost market share in the mobile, desktop, and enterprise markets. CTS-Labs released a report along with a website and videos explaining what these vulnerabilities are and how they can affect AMD and its processors.


This is all of course very scary. It was not all that long ago that we found out about the Spectre/Meltdown threats that seemingly are more dangerous to Intel than to its competitor. Spectre/Meltdown can be exploited by code that will compromise a machine without having elevated privileges. Parts of Spectre/Meltdown were fixed by firmware updates and OS changes which had either no effect on the machine in terms of performance, or incurred upwards of 20% to 30% performance hits in certain workloads requiring heavy I/O usage. Intel is planning a hardware fix for these vulnerabilities later on this year with new products. Current products have firmware updates available to them and Microsoft has already implemented a fix in software. Older CPUs and platforms (back to at least 4th Generation Core) have fixes, but they were rolled out a bit slower. So the prospect of a new exploit affecting the latest AMD processors is something that causes fear in users, CTOs, and investors alike.

CTS-Labs has detailed four major vulnerabilities, naming them and providing fun little symbols for each: Ryzenfall, Fallout, Masterkey, and Chimera. The first three affect the CPU directly. Unlike Spectre/Meltdown, these vulnerabilities require elevated administrative privileges to be run. These are secondary exploits that require either physical access to the machine or logging on with elevated admin privileges. Chimera affects the chipset, which was designed by ASMedia; the exploit is installed via a signed driver. In a secured system where the attacker has no administrative access, these exploits are no threat. If a system has been previously compromised or physically accessed (e.g. forcing a firmware update via USB and flashback functionality), then these vulnerabilities are there to be taken advantage of.


In every CPU it makes, AMD utilizes a “Secure Processor”. This is simply a licensed ARM Cortex-A5 that runs the internal secure OS/firmware; it is the same type of core that underpins ARM’s “TrustZone” security product. In theory someone could compromise a server, install these exploits, and then remove the primary exploit so that on the surface it looks like the machine is operating as usual. The attackers would still have low-level access to the machine in question, but it would be much harder to root them out.

Continue reading our thoughts on the AMD security concerns.

Subject: Displays
Manufacturer: Acer

When PC monitors made the mainstream transition to widescreen aspect ratios in the mid-2000s, many manufacturers opted for resolutions at a 16:10 ratio. My first widescreen displays were a pair of Dell monitors with a 1920x1200 resolution and, as time and technology marched forward, I moved to larger 2560x1600 monitors.

I grew to rely on and appreciate the extra vertical resolution that 16:10 displays offer, but as the production and development of "widescreen" PC monitors matured, it naturally began to merge with the television industry, which had long since settled on a 16:9 aspect ratio. This led to the introduction of PC displays with native resolutions of 1920x1080 and 2560x1440, keeping things simple for activities such as media playback but robbing consumers of pixels in terms of vertical resolution.

I was well-accustomed to my 16:10 monitors when the 16:9 aspect ratio took over the market, and while I initially thought that the 120 or 160 missing rows of pixels wouldn't be missed, I was unfortunately mistaken. Those seemingly insignificant pixels turned out to make a noticeable difference in terms of on-screen productivity real estate, and my 1080p and 1440p displays have always felt cramped as a result.

I was therefore sad to see that the relatively new ultrawide monitor market continued the trend of limited vertical resolutions. Most ultrawides feature a 21:9 aspect ratio with resolutions of 2560x1080 or 3440x1440. While this gives users extra resolution on the sides, it maintains the same limited height options of those ubiquitous 1080p and 1440p displays. The ultrawide form factor is fantastic for movies and games, but while some find them perfectly acceptable for productivity, I still felt cramped.

Thankfully, a new breed of ultrawide monitors is here to save the day. In the second half of 2017, display manufacturers such as Dell, Acer, and LG launched 38-inch ultrawide monitors with a 3840x1600 resolution. Just like how the early ultrawides "stretched" a 1080p or 1440p monitor, the 38-inch versions do the same for my beloved 2560x1600 displays.

The Acer XR382CQK

I've had the opportunity to test one of these new "taller" displays thanks to a review loan from Acer of the XR382CQK, a curved 37.5-inch behemoth. It shares the same glorious 3840x1600 resolution as others in its class, but it also offers some unique features, including a 75Hz refresh rate, USB-C input, and AMD FreeSync support.


Based on my time with the XR382CQK, my hopes for those extra 160 rows of resolution were fulfilled. The height of the display area felt great for tasks like video editing in Premiere and referencing multiple side-by-side documents and websites, and the gaming experience was just as satisfying. And with its 38-inch size, the display is quite usable at 100 percent scaling.


There's also an unexpected benefit for video content that I hadn't originally considered. I was so focused on regaining that missing vertical resolution that I initially failed to appreciate the jump in horizontal resolution from 3440px to 3840px. This is the same horizontal resolution as the consumer UHD standard, which means that 4K movies in a 21:9 or similar aspect ratio will be viewable in their full size with a 1:1 pixel ratio.
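The raw numbers back that up; a quick calculation (pure arithmetic on the resolutions discussed in this article):

    # Pixel counts and aspect ratios for the resolutions discussed above.
    resolutions = {
        "2560 x 1080 (early ultrawide)":  (2560, 1080),
        "3440 x 1440 (common ultrawide)": (3440, 1440),
        "3840 x 1600 (this class)":       (3840, 1600),
        "3840 x 2160 (consumer UHD)":     (3840, 2160),
    }
    for name, (w, h) in resolutions.items():
        print(f"{name}: {w * h / 1e6:.2f} MP, aspect ~{w / h:.2f}:1")

    # 3840 x 1600 works out to 6.14 MP at 2.40:1 -- about 24% more pixels than 3440 x 1440,
    # with the full 3840-pixel UHD width, so 2.40:1 "scope" video can map 1:1 across the panel.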

Continue reading our look at 38-inch 3840x1600 ultrawide monitors!