The tides are turning. Over the last few years, the technology industry sang the praises of virtual reality and made bold predictions about it. Over the past year, however, the mood has begun to shift. While VR remains prohibitively expensive and still wanting for the kind of experiences gamers crave, augmented reality is becoming the head-mounted hope for mainstream adoption.
Today, we’re taking a look at one of the first major consumer AR products, Lenovo's Star Wars: Jedi Challenges. The set marries exciting technology with exciting IP, but is it enough to justify the $199 MSRP?
MSRP: $199.99 ($169.99 on Amazon as of this writing)
Lightsaber Controller
- Dimensions: 315.5mm x 47.2mm
- Weight: 275g
- Buttons: Power, Activation Matrix, Control Button
- Battery: Micro-USB Rechargeable
Lenovo Mirage AR Headset
- Dimensions: 209.2mm x 83.4mm x 154.8mm
- Weight: 477g
- Buttons: Select, Cancel, Menu
- Camera: Dual motion tracking cameras
- Battery: Micro-USB Rechargeable
Tracking Beacon
- Dimensions: 94.1mm x 76.7mm
- Weight: 117g
- Buttons: Power/color switch
- AA batteries (x2) required
- Connection: Bluetooth connection to phone
- Languages: English, German, Japanese, French, Spanish
The set comes in a large box that doubles as a storage container when the headset isn’t in use. Everything is nicely packaged, especially the lightsaber, which rests in a foam cut-out just under the top half of the box. The unboxing experience is suitably premium for a product like this.
The attention to detail on the lightsaber is impressive. It’s a loving recreation of Luke’s lightsaber from A New Hope. The top illuminates white or blue to indicate when it’s paired with your phone. In-game, pressing the side buttons causes the blade to rise up with the iconic sound effect; if you’re a Star Wars fan, it’s beyond neat.
Introduction and Motherboard Layout
For the launch of Intel H370 chipset motherboards, GIGABYTE chose their AORUS brand to lead the charge. The AORUS branding differentiates the enthusiast- and gamer-friendly products from other GIGABYTE product lines, similar to how ASUS uses the ROG branding to differentiate their high-performance product line. The H370 AORUS Gaming 3 WIFI is among GIGABYTE's initial release boards offering support for the latest Intel consumer chipset and processor lines. Built around the Intel H370 chipset, the board supports the Intel LGA1151 Coffee Lake processor line and dual-channel DDR4 memory running at speeds up to 2667MHz. The H370 AORUS Gaming 3 WIFI can be found at retail with an MSRP of $139.99.
The H370 AORUS Gaming 3 WIFI motherboard features a black PCB with black and chrome-colored heat sinks covering all the necessary board components. The AORUS series logos are emblazoned on the chipset heat sink and the rear panel cover. Further, a large rendering of the logo is silk-screened in the upper left quadrant of the board. The ATX form factor provides more than enough surface area to house the integrated features, as well as giving the board compatibility with most available consumer enclosures.
The board's back is completely free of components, posing no problems with case mounting or mounting the CPU backplate.
GIGABYTE designed the H370 AORUS Gaming 3 WIFI motherboard with a 10-phase digital power system in an 8+2 configuration. The CPU VRMs are passively cooled by dual aluminum heat sinks above and to the upper right of the CPU socket.
Introduction and Case Exterior
The Meshify C - TG from Fractal Design is a high-airflow ATX case design with some added style from its unique angled front panel. Throw in a tempered glass side panel and a pair of pre-installed Dynamic X2 GP-12 120 mm fans and the $89.99 price tag looks pretty good - but how did it perform? We'll find out.
Having reviewed a few Fractal Design cases in the past three years I have come to expect a few things from their enclosures: solid construction, intelligent internal layouts, and excellent cable management. As to style, their cases are generally understated, and the Meshify's black color scheme with a tinted glass side certainly fits the bill - though the angled front mesh design catches the light and does add some visual interest.
More than a single enclosure, Meshify is now a dedicated line from Fractal Design, with a new Meshify C Mini for mATX/mITX motherboards, as well as variants of this Meshify C including a model with a solid side panel (the standard Meshify C) and one with dark-tinted glass (Meshify C - Dark TG). Regardless of which model you might be considering, they share a common design focused on high airflow (with a full complement of filters), flexible storage options, and maximizing component space within their compact dimensions.
OK, call me crazy (you wouldn’t be the first) but this is something I’ve wanted to try for years, and I bet I’m not the only one. Each time a new power supply comes across the lab bench with ever increasing output capacities, I find myself thinking, “I could weld with this beast.” Well the AX1600i pushed me over the edge and I decided to give it a go; what could possibly go wrong?
133.3 Amps on the +12V outputs!
The Corsair AX1600i Digital power supply can deliver up to 133 Amps on the combined +12V rails, more than enough amperage for welding. There are dozens of PC power supplies on the market today that can deliver 100 Amps or more on the +12V output, but the AX1600i has another feature that might help make this project a success: the ability to manually set current limits on the +12V outputs. Because the AX1600i is a digital power supply whose +12V current limits can be set through the Corsair Link data acquisition and control software, I might be able to add the ability to select a desired amperage to weld with. Yes!
Just because the AX1600i “can” deliver 133A doesn’t mean I want that much current available for welding. I typically only use that much power when I’m welding heavy steel pieces using ¼” rod. For this experiment I would like to start out at a much lower amperage, and I’m hoping the Corsair Link software will provide that ability.
Stick Welding with a PC Power Supply!
My first thought was to try to adapt a TIG (Tungsten Inert Gas) welder for use with the AX1600i. I figured using a TIG torch (a Tungsten electrode shrouded with Argon gas instead of a flux-coated rod) might give better control, especially at the lower voltages and currents where I plan to start testing. TIG welders are commonly used to weld small stainless steel parts and sheet metal. But then I remembered the TIG welder power supply has a high-voltage pulse built in to initiate the plasma arc. Without that extra kick-start, it might be difficult to strike an arc without damaging the fine pointed tip of the Tungsten electrode. So I decided to just go with a conventional stick welding setup. The fact that PC power supplies put out DC voltage is an advantage over the more common AC buzz-box arc welders, offering better arc stability and higher-quality welds.
Obviously, trying to convert a PC power supply into an arc welding power supply will require a few modifications. Here is a quick list of the main challenges I think we will have to overcome.
• Higher capacity fan for better cooling
• Terminate all the PSU’s +12V cables into welding leads
• Disable the Short Circuit protection feature
• Implement a way to select the desired current output
• Strike and maintain a stable arc with only 12 volts
Bloody Gaming is no newcomer to the world of PC gaming peripherals. As a subsidiary of A4Tech, they’re one of the few peripheral manufacturers to own their own assembly lines. Controlling their own manufacturing allows them to take risks and attempt new approaches the competition may not. Coming from a rich heritage of innovation at A4Tech, it comes as no surprise that Bloody has consistently sought to push the boundaries of the technology we use to game.
At the same time, the brand has taken a uniquely aggressive approach, from name to design. Today, we’re looking at the company’s next-generation keyboard, the B975. With this release, we find a more restrained design coupled with the freshly redesigned Light Strike 3 optical switches and full RGB backlighting.
But is it enough for Bloody to challenge the heavy hitters like Logitech, Razer, and Corsair? Let’s find out.
Announced at Intel's Developer Forum in 2012, and launched later that year, the Next Unit of Computing (NUC) project was initially a bit confusing to the enthusiast PC press. In a market that appeared to be discarding traditional desktops in favor of notebooks, it seemed a bit odd to launch a product that still depended on a monitor, mouse, and keyboard, yet didn't provide any more computing power.
Despite this criticism, the NUC lineup has rapidly expanded over the years, seeing success in areas such as digital signage and enterprise environments. However, the enthusiast PC market has mostly eluded the lure of the NUC.
Intel's Skylake-based Skull Canyon NUC was the company's first attempt to cater to the enthusiast market, straying slightly from the traditional 4-in x 4-in form factor and adopting their best-ever integrated graphics solution in the Iris Pro. Additionally, the ability to connect external GPUs via Thunderbolt 3 meant Skull Canyon offered more of a focus on high-end PC graphics.
However, Skull Canyon mostly fell on deaf ears among hardcore PC users, and it seemed that Intel lacked the proper solution to make a "gaming-focused" NUC device—until now.
Announced at CES 2018, the lengthily named 8th Gen Intel® Core™ processors With Radeon™ RX Vega M Graphics (henceforth referred to as the code name, Kaby Lake-G) marks a new direction for Intel. By partnering with one of the leaders in high-end PC graphics, AMD, Intel can now pair their processors with graphics capable of playing modern games at high resolutions and frame rates.
The first product to launch using the new Kaby Lake-G family of processors is Intel's own NUC, the NUC8i7HVK (Hades Canyon). Will the marriage of Intel and AMD finally provide a NUC capable of at least moderate gaming? Let's dig a bit deeper and find out.
Introduction and Technical Specifications
Noctua is a well-respected manufacturer in the highly competitive CPU cooler space, offering products optimized for high efficiency and low noise. Their latest release for AMD Ryzen processors offers good stock performance at minimal noise levels. The cooler's minimalistic dimensions also ensure broad compatibility with AM4-based systems. Unlike other members of the Noctua cooler line, the NH-L9a-AM4 uses a proprietary mounting system rather than the standard SecuFirm2™ mounting mechanism. With an MSRP of $39.99, the NH-L9a-AM4 comes at a premium price for its performance goals.
The NH-L9a-AM4 CPU cooler is a single-radiator cooler in a horizontal orientation with a single included fan. The radiator's horizontal orientation gives the cooler a lower height than coolers with traditional vertical radiators while maintaining equivalent cooling performance. In typical Noctua fashion, the NH-L9a-AM4 combines a copper base plate and heat pipes with aluminum cooling fins for an optimal hybrid cooling solution. The base plate and heat pipes are nickel-plated for looks and to prevent corrosion.
Since its introduction in early 2015, the modern iteration of the Dell XPS 13 has been one of the most influential computers in recent history. Emblematic of the return of desirable Windows-based notebooks to the premium market, the XPS 13 has done what only a few OEMs have been able to do: inspire knockoffs. Now, the market is filled with similar designs, including ultrathin bezels (and some even copying the compromised webcam placement), at similar price points.
Even though it has been regarded as one of the best PC notebooks for its entire tenure, it was clear for a while that Dell needed to move its flagship notebook forward, and here it is: the redesigned XPS 13 9370 for 2018.
From a quick glance, the 2018 XPS 13 is quite similar to the outgoing 9360 model from last year. Apart from the new, radical Alpine White and Rose Gold color scheme of our particular review unit, you would be hard-pressed to pick it out as the new model in public. However, once you start to dig in, the changes become quite evident.
While the new XPS 13 maintains the same physical footprint as the previous iterations, it loses a significant amount of thickness. Still retaining the wedge shape, although much less exaggerated now, the XPS 13 9370 measures only 0.46" at its thickest point, compared to 0.6" on the previous design. While tenths of inches may not seem like a huge difference, this amounts to a 23% reduction in thickness, which is noticeable for a highly portable item like a notebook.
Introduction and Features
Corsair is a well-respected name in the PC industry and they continue to offer a complete line of products for enthusiasts, gamers, and professionals alike. Today we are taking a detailed look at Corsair’s latest flagship power supply, the AX1600i Digital ATX power supply unit. This is the most technologically advanced power supply we have reviewed to date. Over time, we often grow numb to marketing terms like “most technologically advanced”, “state-of-the-art”, “ultra-stable”, “super-high efficiency”, etc., but in the case of the AX1600i Digital PSU, we have seen these claims come to life before our eyes.
1,600 Watts: 133.3 Amps on the +12V outputs!
The AX1600i Digital power supply is capable of delivering up to 1,600 watts of continuous DC power (133.3 Amps on the +12V rails) and is 80 Plus Titanium certified for super-high efficiency. If that’s not impressive enough, the PSU can do it while operating on 115 VAC mains and with an ambient temperature up to 50°C (internal case temperature). This beast was made for multiple power-hungry graphic adapters and overclocked CPUs.
The AX1600i is a digital power supply, which provides two distinct advantages. First, it incorporates Digital Signal Processing (DSP) on both the primary and secondary sides, which allows the PSU to deliver extremely tight voltage regulation over a wide range of loads. And second, the AX1600i features the digital Corsair Link, which enables the PSU to be connected to the PC’s motherboard (via USB) for real-time monitoring (efficiency, voltage regulation, and power usage) and control (over-current protection and fan speed profiles).
Quiet operation with a semi-fanless mode (zero-rpm fan mode up to ~40% load) might not be at the top of your feature list when shopping for a 1,600 watt PSU, but the AX1600i is up to the challenge.
Corsair AX1600i Digital ATX PSU Key Features:
• Digital Signal Processor (DSP) for extremely clean and efficient power
• Corsair Link Interface for monitoring and adjusting performance
• 1,600 watts continuous power output (50°C)
• Dedicated single +12V rail (133.3A) with user-configurable virtual rails
• 80 Plus Titanium certified, delivering up to 94% efficiency
• Ultra-low noise 140mm Fluid Dynamic Bearing (FDB) fan
• Silent, Zero RPM mode up to ~40% load (~640W)
• Self-test switch to verify power supply functionality
• Premium components (GaN transistors and all Japanese made capacitors)
• Fully modular cable system
• Conforms to ATX12V v2.4 and EPS 2.92 standards
• Universal AC input (100-240V) with Active PFC
• Safety Protections: OCP, OVP, UVP, SCP, OTP, and OPP
• Dimensions: 150mm (W) x 86mm (H) x 200mm (L)
• 10-Year warranty and legendary Corsair customer service
• $449.99 USD
It's all fun and games until something something AI.
Microsoft announced the Windows Machine Learning (WinML) API about two weeks ago, but they did so in a sort-of abstract context. This week, alongside the 2018 Game Developers Conference, they are grounding it in a practical application: video games!
Specifically, the API provides the mechanisms for game developers to run inference on the target machine. The trained models it runs against are stored in the Open Neural Network Exchange (ONNX) format, backed by Microsoft, Facebook, and Amazon. As the initial announcement suggested, it can be used for any application, not just games, but… you know. If you want to get a technology off the ground, and it requires a high-end GPU, then video game enthusiasts are good lead users. When run in a DirectX application, WinML kernels are queued on the DirectX 12 compute queue.
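To give a sense of the shape of the API, here is a minimal, hypothetical C++/WinRT sketch of loading an ONNX model and running inference through WinML. The model path, input name, and tensor shape are placeholders of my own, and the code assumes the Windows.AI.MachineLearning surface as it later shipped in the Windows SDK rather than the preview namespace shown at GDC, so treat it as an illustration rather than the exact API demoed.

```cpp
// Hypothetical sketch: loading an ONNX model and evaluating it with WinML via
// C++/WinRT. The model path, input name, and shape below are placeholders.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.AI.MachineLearning.h>
#include <vector>

using namespace winrt;
using namespace winrt::Windows::AI::MachineLearning;

int main()
{
    init_apartment();

    // Load the ONNX model and create a session on a DirectX device so that
    // evaluation is queued on the GPU alongside the application's D3D12 work.
    LearningModel model = LearningModel::LoadFromFilePath(L"upscaler.onnx");
    LearningModelSession session(model, LearningModelDevice(LearningModelDeviceKind::DirectX));

    // Bind an input tensor; the name and shape depend entirely on the model.
    std::vector<int64_t> shape{ 1, 3, 256, 256 };
    std::vector<float> pixels(1 * 3 * 256 * 256, 0.0f);
    TensorFloat input = TensorFloat::CreateFromArray(shape, pixels);

    LearningModelBinding binding(session);
    binding.Bind(L"input", input); // "input" is a hypothetical tensor name

    // Evaluate and read back the outputs (a map of output names to tensors).
    auto result = session.Evaluate(binding, L"frame0");
    auto outputs = result.Outputs();
}
```

Pointing the session at a DirectX device rather than the CPU is what puts the evaluation on the same GPU the game is rendering with.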
We’ve discussed the concept before. When you’re rendering a video game, simulating an accurate scenario isn’t your goal – the goal is to look like you are. The direct way of looking like you’re doing something is to do it. The problem is that some effects are too slow (or, sometimes, too complicated) to correctly simulate. In these cases, it might be viable to make a deep-learning AI hallucinate a convincing result, even though no actual simulation took place.
Fluid dynamics, global illumination, and up-scaling are three examples.
Previously mentioned SIGGRAPH demo of fluid simulation without fluid simulation...
... just a trained AI hallucinating a scene based on input parameters.
Another place where AI could be useful is… well… AI. One way of making game AI is to give it some set of data from the game environment, often including information that a player in its position would not be able to know, and have it run against a branching logic tree. Deep learning, on the other hand, can train itself on billions of examples of good and bad play, and produce results based on input parameters. While the two methods do not sound that different, moving from logic being designed to logic being assembled from an abstract good/bad dataset somewhat abstracts away the potential for assumptions and programmer error. Of course, it abstracts that potential for error into the training dataset, but that’s a whole other discussion.
The third area that AI could be useful is when you’re creating the game itself.
There’s a lot of grunt and grind work when developing a video game. Licensing prefab solutions (or commissioning someone to do a one-off asset for you) helps ease this burden, but that gets expensive in terms of both time and money. If some of those assets could be created by giving parameters to a deep-learning AI, then those are assets that you would not need to make, allowing you to focus on other assets and how they all fit together.
These are three of the use cases that Microsoft is aiming WinML at.
Sure, these are smooth curves of large details, but the antialiasing pattern looks almost perfect.
For instance, Microsoft is pointing to an NVIDIA demo where they up-sample a photo of a car, once with bilinear filtering and once with a machine learning algorithm (although not WinML-based). The bilinear algorithm behaves exactly as someone who has used Photoshop would expect. The machine learning algorithm, however, was able to identify the objects that the image intended to represent, and it drew the edges that it thought made sense.
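For context, here is a minimal sketch of my own (not code from the demo) of what the bilinear path computes: every output pixel is just a weighted blend of the four nearest source pixels, which is why edges blur instead of being "re-drawn."

```cpp
// My own minimal sketch of bilinear up-sampling (not from NVIDIA's demo).
#include <algorithm>
#include <cmath>
#include <vector>

// src is a single-channel image stored row-major with dimensions w x h.
float SampleBilinear(const std::vector<float>& src, int w, int h, float x, float y)
{
    int x0 = std::clamp(static_cast<int>(std::floor(x)), 0, w - 1);
    int y0 = std::clamp(static_cast<int>(std::floor(y)), 0, h - 1);
    int x1 = std::min(x0 + 1, w - 1);
    int y1 = std::min(y0 + 1, h - 1);
    float fx = x - static_cast<float>(x0);
    float fy = y - static_cast<float>(y0);

    // Blend horizontally along the top and bottom rows, then blend vertically.
    float top    = src[y0 * w + x0] * (1.0f - fx) + src[y0 * w + x1] * fx;
    float bottom = src[y1 * w + x0] * (1.0f - fx) + src[y1 * w + x1] * fx;
    return top * (1.0f - fy) + bottom * fy;
}
```

A machine learning up-scaler, by contrast, infers what the edges should look like from what it learned during training rather than averaging neighbors.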
Like their DirectX Raytracing (DXR) announcement, Microsoft plans to have PIX support WinML “on Day 1”. As for partners? They are currently working with Unity Technologies to provide WinML support in Unity’s ML-Agents plug-in. That’s all the game industry partners they have announced at the moment, though. It’ll be interesting to see who jumps in and who doesn’t over the next couple of years.
O Rayly? Ya Rayly. No Ray!
Microsoft has just announced a raytracing extension to DirectX 12, called DirectX Raytracing (DXR), at the 2018 Game Developers Conference in San Francisco.
The goal is not to completely replace rasterization… at least not yet. Raytracing will mostly be used for effects that require supplementary datasets, such as reflections, ambient occlusion, and refraction. Rasterization, the typical way that 3D geometry gets drawn on a 2D display, converts triangle coordinates into screen coordinates, and then a point-in-triangle test runs across every sample. This will likely occur once per AA sample (minus pixels that the triangle can’t possibly cover -- such as a pixel outside of the triangle's bounding box -- but that's just optimization).
For rasterization, each triangle is laid on a 2D grid corresponding to the draw surface.
If any sample is in the triangle, the pixel shader is run.
This example shows the rotated grid MSAA case.
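As a rough illustration of that coverage test (my own sketch, not how any particular GPU implements it), the per-sample check reduces to three edge-function evaluations:

```cpp
// Rough sketch of the per-sample coverage test described above (illustrative
// only). A sample is covered when it sits on the same side of all three
// triangle edges, tested with 2D edge functions.
struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (a->b) and (a->p);
// the sign tells which side of edge a->b the point p falls on.
static float EdgeFunction(const Vec2& a, const Vec2& b, const Vec2& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// True if the sample position is inside triangle (v0, v1, v2),
// assuming counter-clockwise winding in screen space.
bool SampleCovered(const Vec2& v0, const Vec2& v1, const Vec2& v2, const Vec2& s)
{
    return EdgeFunction(v0, v1, s) >= 0.0f &&
           EdgeFunction(v1, v2, s) >= 0.0f &&
           EdgeFunction(v2, v0, s) >= 0.0f;
}
```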
A program, called a pixel shader, is then run with some set of data that the GPU could gather on every valid pixel in the triangle. This set of data typically includes things like world coordinate, screen coordinate, texture coordinates, nearby vertices, and so forth. This lacks a lot of information, especially things that are not visible to the camera. The application is free to provide other sources of data for the shader to crawl… but what?
- Cubemaps are useful for reflections, but they don’t necessarily match the scene.
- Voxels are useful for lighting, as seen with NVIDIA’s VXGI and VXAO.
This is where DirectX Raytracing comes in. There are quite a few components to it, but it’s basically a new pipeline that handles how rays are cast into the environment. After being queued, it starts out with a ray-generation stage, and then, depending on what happens to the ray in the scene, there are closest-hit, any-hit, and miss shaders. Ray generation allows the developer to set up how the rays are cast; rays are cast by calling an HLSL intrinsic instruction, TraceRay (which is a clever way of invoking them, by the way). This function takes an origin and a direction, so you could choose to, for example, cast rays only in the direction of lights if your algorithm were to, for instance, approximate partially occluded soft shadows from a non-point light. (There are better algorithms to do that, but it's just the first example that came off the top of my head.) The closest-hit, any-hit, and miss shaders execute at the point where the traced ray ends.
To connect this with current technology, imagine that ray-generation is like a vertex shader in rasterization, where it sets up the triangle to be rasterized, leading to pixel shaders being called.
Even more interesting – the closest-hit, any-hit, and miss shaders can call TraceRay themselves, which is used for multi-bounce and other recursive algorithms (see: figure above). The obvious use case might be reflections, which is the headline of the GDC talk, but they want it to be as general as possible, aligning with the evolution of GPUs. Looking at NVIDIA’s VXAO implementation, a raytracing algorithm also seems like a natural fit there.
Speaking of data structures, Microsoft also detailed what they call the acceleration structure. It is composed of two levels. The top level contains per-object metadata, like the object's transformation and whatever other data the developer wants to add to it. The bottom level contains the geometry. The briefing states, “essentially vertex and index buffers,” so we asked for clarification. DXR requires that triangle geometry be specified as vertex positions in either 32-bit float3 or 16-bit float3 values. There is also a stride property, so developers can tweak data alignment and reuse their rasterization vertex buffer, as long as it's HLSL float3, either 16-bit or 32-bit.
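As a hypothetical sketch of what that bottom-level geometry description looks like in code, using the raytracing structures as they later shipped in the Windows SDK's d3d12.h (the buffer address, counts, and stride are placeholders of my own), note the explicit float3 vertex format and stride fields mentioned above:

```cpp
// Hypothetical sketch of a bottom-level geometry description for DXR.
#include <d3d12.h>

D3D12_RAYTRACING_GEOMETRY_DESC DescribeTriangleGeometry(
    D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer, UINT vertexCount, UINT64 strideInBytes)
{
    D3D12_RAYTRACING_GEOMETRY_DESC geometry = {};
    geometry.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geometry.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;

    // Positions must be float3 values, either 32-bit or 16-bit per component.
    geometry.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geometry.Triangles.VertexBuffer.StartAddress  = vertexBuffer;
    geometry.Triangles.VertexBuffer.StrideInBytes = strideInBytes; // lets a raster VB be reused
    geometry.Triangles.VertexCount = vertexCount;

    // No index buffer in this minimal, non-indexed example.
    geometry.Triangles.IndexFormat = DXGI_FORMAT_UNKNOWN;
    geometry.Triangles.IndexBuffer = 0;
    geometry.Triangles.IndexCount  = 0;
    return geometry;
}
```

The stride field is what lets a developer point StartAddress at an existing rasterization vertex buffer and simply skip past the non-position attributes.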
As for the tools to develop this in…
Microsoft announced PIX back in January 2017. This is a debugging and performance analyzer for 64-bit, DirectX 12 applications. Microsoft will upgrade it to support DXR as soon as the API is released (specifically, “Day 1”). This includes the API calls, the raytracing pipeline resources, the acceleration structure, and so forth. As usual, you can expect Microsoft to support their APIs with quite decent – not perfect, but decent – documentation and tools. They do it well, and they want to make sure it’s available when the API is.
Example of DXR via EA's in-development SEED engine.
In short, raytracing is here, but it’s not taking over rasterization. It doesn’t need to. Microsoft is just giving game developers another, standardized mechanism to gather supplementary data for their games. Several game engines have already announced support for this technology, including the usual suspects of anything top-tier game technology:
- Frostbite (EA/DICE)
- SEED (EA)
- 3DMark (Futuremark)
- Unreal Engine 4 (Epic Games)
- Unity Engine (Unity Technologies)
They also said, “and several others we can’t disclose yet”, so this list is not even complete. But, yeah, if you have Frostbite, Unreal Engine, and Unity, then you have a sizeable market as it is. There is always a question about how much each of these engines will support the technology. Currently, raytracing is not portable outside of DirectX 12, because it’s literally being announced today, and each of these engines intends to support more than just Windows 10 and Xbox.
Still, we finally have a standard for raytracing, which should drive vendors to optimize in a specific direction. From there, it's just a matter of someone taking the risk to actually use the technology for a cool work of art.
If you want to read more, check out Ryan's post about the also-announced RTX, NVIDIA's raytracing technology.
CalDigit Tuff Rugged External Drive
There are a myriad of options when it comes to portable external storage. But if you value durability just as much as portability, those options quickly dry up. Combining a cheap 2.5-inch hard drive with an AmazonBasics enclosure is often just fine for an external storage solution that sits in your climate controlled office all day, but it's probably not the best choice for field use during your national park photography trip, your scuba diving expedition, or on-site construction management.
For situations like these where the elements become a factor and the chance of an accidental drop skyrockets, it's a good idea to invest in "ruggedized" equipment. Companies like Panasonic and Dell have long offered laptops custom-designed to withstand unusually harsh environments, and accessory makers have followed suit with ruggedized hard drives.
Today we're taking a look at one such ruggedized hard drive, the CalDigit Tuff. Released in 2017, the CalDigit Tuff is a 2.5-inch bus-powered external drive available in both HDD and SSD options. CalDigit loaned us the 2TB HDD model for testing.
Introduction, Specifications and Packaging
When we think of an M.2 SSD, we typically associate it with either a SATA 6Gb/s or, more recently, a PCIe 3.0 x4 link. The physical M.2 interface was meant to accommodate future methods of connectivity, but it's easy to overlook the ability to step back to something like a PCIe 3.0 x2 link. Why take a seemingly backward step on the interface of an SSD? Several reasons, actually. Halving the number of lanes makes for a simpler SSD controller design, which lowers cost. Power savings are also a factor, as driving a given twisted-pair lane at PCIe 3.0 speeds draws measurable current from the host and adds to the heat production of the SSD controller. We recently saw that a PCIe 3.0 x2 SSD can still turn in respectable performance despite the lower-bandwidth interface, but how far can we get the price down when pairing that host link with some NAND flash?
Enter the MyDigitalSSD SBX series. Short for Super Boot eXpress, the aim of these parts is to offer a reasonably performant PCIe NVMe SSD at something closer to SATA SSD pricing.
- Physical: M.2 2280 (single sided)
- Controller: Phison E8 (PS5008-E8)
- Capacities: 128GB, 256GB, 512GB, 1TB
- PCIe 3.0 x2, M.2 2280
- Sequential: Up to 1.6/1.3 GB/s (R/W)
- Random: 240K+ / 180K+ IOPS (R/W)
- Weight: 8g
- Power: <5W
The MyDigitalDiscount guys keep things extremely simple with their SSD packaging, which is exactly how it should be. It doesn't take much to package and protect an M.2 SSD, and this does the job just fine. They also include a screwdriver and a screw, just in case you run into a laptop that came without one installed.
A Snappy Budget Tablet
Huawei has been gaining steam. Even though they’re not yet a household name in the United States, they’ve been a major player in Eastern markets and have global ambitions. Today we’re looking at the MediaPad M3 Lite, a budget tablet with the kind of snappy performance and better-than-expected features that should make entry-level tablet buyers take notice.
- MSRP: $247.93
- Size: 213.3mm (H) x 123.3 mm (W) x 7.5mm (D)
- Color: White, Gold, Space Gray
- Display: 1920 x 1200 IPS
- CPU: Qualcomm MSM8940, Octa-core
- Operating System: Android 7.0/EMUI5.1
- Memory (RAM + storage): 3 GB + 16 GB (tested), 3 GB + 32 GB, 4 GB + 64 GB
- Network: LTE CAT4/Wi-Fi 11ac 2.4 GHz & 5 GHz
- GPS: Supports GPS, A-GPS, GLONASS, and BDS
- Connectivity: USB 2.0 high-speed; supports charging, USB OTG, USB tethering, and MTP/PTP
- Sensors: Gravity sensor, ambient light sensor, compass, gyroscope (CPN-L09 only; not supported on CPN-W09)
- Camera: 8 MP rear with autofocus; 8 MP front with fixed focus
- Audio: 2 speakers + 2 SmartPA, Super Wide Sound (SWS) 3.0 effects, Harman Kardon tuning and certification
- Video file formats: .3gp, .mp4, .webm, .mkv, .ts, .3g2, .flv, and .m4v
- Battery: 6600 mAh
- In the Box: Charger, USB Cable, Eject tool, Quick start guide, Warranty card
The tablet arrives well-packed inside a small but sturdy box. I’ve got to say, I love the copper-on-white look they’ve gone with and wish they’d applied it to the tablet itself, which is white and silver. Inside the box are the tablet, a charging brick with USB cable, a SIM eject tool, and a warranty card. It’s a bit sparse, but at this price point that's perfectly fine.
The tablet looks remarkably similar to the Samsung Galaxy Tab 4, only missing the touch controls on either side of the Home button and shifting the branding to the upper left. This isn’t a bad thing by any means but the resemblance is definitely striking. One notable difference is that the Home button isn’t actually a button at all but a touch sensor that doubles as the fingerprint sensor.
The MediaPad M3 Lite comes in at 7.5mm, or just under 0.3”, thick. Virtually all of the name brand tablets I researched prior to this review are within 0.05” of each other, so Huawei’s offering is in line with what we would expect, if ever so slightly thinner.
Much Ado About Nothing?
We live in a world seemingly fueled by explosive headlines. This morning we were welcomed with a proclamation that AMD has 13 newly discovered security flaws in their latest Ryzen/Zen chips that could potentially be showstoppers for the architecture and for AMD’s hopes of regaining lost market share in the mobile, desktop, and enterprise markets. CTS-Labs released a report along with a website and videos explaining what these vulnerabilities are and how they can affect AMD and its processors.
This is all of course very scary. It was not all that long ago that we found out about the Spectre/Meltdown threats, which seemingly are more dangerous to Intel than to its competitor. Spectre/Meltdown can be exploited by code that will compromise a machine without having elevated privileges. Parts of Spectre/Meltdown were fixed by firmware updates and OS changes, which had either no effect on performance or incurred upwards of 20% to 30% performance hits in certain workloads requiring heavy I/O usage. Intel is planning a hardware fix for these vulnerabilities later this year with new products. Current products have firmware updates available to them and Microsoft has already implemented a fix in software. Older CPUs and platforms (back to at least 4th Generation Core) have fixes, but they were rolled out a bit slower. So a new exploit affecting the latest AMD processors is something that causes concern for users, CTOs, and investors alike.
CTS-Labs have detailed four major vulnerabilities, naming them and providing fun little logos for each: Ryzenfall, Fallout, Masterkey, and Chimera. The first three affect the CPU directly. Unlike Spectre/Meltdown, these vulnerabilities require elevated administrative privileges to be run. These are secondary exploits that require either physical access to the machine or logging on with enhanced admin privileges. Chimera affects the chipset designed by ASMedia; the exploit is installed via a signed driver. In a secured system where the attacker has no administrative access, these exploits are no threat. If a system has been previously compromised or physically accessed (e.g., forcing a firmware update via USB flashback functionality), then these vulnerabilities are there to be taken advantage of.
In every CPU it makes, AMD includes a “Secure Processor”. This is simply a licensed ARM Cortex-A5 that runs the internal secure OS/firmware, the same core that underpins ARM’s TrustZone security technology. In theory someone could compromise a server, install these exploits, and then remove the primary exploit so that on the surface it looks like the machine is operating as usual. The attackers would still have low-level access to the machine in question, but it would be much harder to root them out.
When PC monitors made the mainstream transition to widescreen aspect ratios in the mid-2000s, many manufacturers opted for resolutions at a 16:10 ratio. My first widescreen displays were a pair of Dell monitors with a 1920x1200 resolution and, as time and technology marched forward, I moved to larger 2560x1600 monitors.
I grew to rely on and appreciate the extra vertical resolution that 16:10 displays offer, but as the production and development of "widescreen" PC monitors matured, it naturally began to merge with the television industry, which had long since settled on a 16:9 aspect ratio. This led to the introduction of PC displays with native resolutions of 1920x1080 and 2560x1440, keeping things simple for activities such as media playback but robbing consumers of pixels in terms of vertical resolution.
I was well-accustomed to my 16:10 monitors when the 16:9 aspect ratio took over the market, and while I initially thought that the 120 or 160 missing rows of pixels wouldn't be missed, I was unfortunately mistaken. Those seemingly insignificant pixels turned out to make a noticeable difference in terms of on-screen productivity real estate, and my 1080p and 1440p displays have always felt cramped as a result.
I was therefore sad to see that the relatively new ultrawide monitor market continued the trend of limited vertical resolutions. Most ultrawides feature a 21:9 aspect ratio with resolutions of 2560x1080 or 3440x1440. While this gives users extra resolution on the sides, it maintains the same limited height options of those ubiquitous 1080p and 1440p displays. The ultrawide form factor is fantastic for movies and games, but while some find them perfectly acceptable for productivity, I still felt cramped.
Thankfully, a new breed of ultrawide monitors is here to save the day. In the second half of 2017, display manufacturers such as Dell, Acer, and LG launched 38-inch ultrawide monitors with a 3840x1600 resolution. Just like how the early ultrawides "stretched" a 1080p or 1440p monitor, the 38-inch versions do the same for my beloved 2560x1600 displays.
The Acer XR382CQK
I've had the opportunity to test one of these new "taller" displays thanks to a review loan from Acer of the XR382CQK, a curved 37.5-inch behemoth. It shares the same glorious 3840x1600 resolution as others in its class, but it also offers some unique features, including a 75Hz refresh rate, USB-C input, and AMD FreeSync support.
Based on my time with the XR382CQK, my hopes for those extra 160 rows of resolution were fulfilled. The height of the display area felt great for tasks like video editing in Premiere and referencing multiple side-by-side documents and websites, and the gaming experience was just as satisfying. And with its 38-inch size, the display is quite usable at 100 percent scaling.
There's also an unexpected benefit for video content that I hadn't originally considered. I was so focused on regaining that missing vertical resolution that I initially failed to appreciate the jump in horizontal resolution from 3440px to 3840px. This is the same horizontal resolution as the consumer UHD standard, which means that 4K movies in a 21:9 or similar aspect ratio will be viewable in their full size with a 1:1 pixel ratio.
One of the promises of moving to interfaces like USB 3.1 Gen 2 and Thunderbolt 3 on notebooks is the idea of the "one cable future." For the most part, I think we are starting to see some of those benefits. It's nice that with USB Power Delivery, users aren't tied into buying chargers directly from their notebook manufacturer or hunting for oddball third-party chargers with their exact barrel connector. I also find it to be a great feature when laptops have USB-C charging ports on opposing sides of the notebook, allowing me greater flexibility to plug in a charger without putting additional strain on the cable.
For years, the end-game for mobile versatility has been a powerful thin-and-light notebook which you can connect to a dock at home and use as a desktop PC. With more powerful notebook processors like Intel's quad-core 8th generation parts coming out, we are beginning to reach a point where we have the processing power; the next step is having a quality dock to plug these notebooks into.
While USB-C can support DisplayPort, Power Delivery, and 10 Gbit/s transfer speeds in its highest-end configuration, this would still be a bit lacking for power users. Thunderbolt 3, which offers the same display and power delivery capabilities but with 40 Gbit/s data transfer, is a more suitable option.
Today, we are taking a look at the CalDigit Thunderbolt Station 3 Plus, a Thunderbolt 3-enabled device that provides a plethora of connectivity options for your notebook.
Introduction, Specifications and Packaging
Intel has wanted 3D XPoint to go 'mainstream' for some time now. Their last big mainstream part, the X25-M, launched 10 years ago. It was available in relatively small capacities of 80GB and 160GB, but it brought about incredible performance at a time when most other early SSDs were mediocre at best. The X25-M brought NAND flash memory to the masses, and now 10 years later we have another vehicle which hopes to bring 3D XPoint to the masses - the Intel Optane SSD 800P:
Originally dubbed 'Brighton Beach', the 800P comes in at capacities smaller than its decade-old counterpart - only 58GB and 118GB. The 'odd' capacities are due to Intel playing it extra safe with additional ECC and some space to hold metadata related to wear leveling. Even though 3D XPoint media has great endurance that runs circles around NAND flash, it can still wear out, and therefore the media must still be managed similarly to NAND. 3D XPoint can be written in place, meaning far less juggling of data while writing, allowing for far greater performance consistency across the board. Consistency and low latency are the strongest traits of Optane, to the point where Intel was bold enough to launch an NVMe part with half of the typical PCIe 3.0 x4 link available in most modern SSDs. For Intel, the 800P is more about being nimble than having straight line speed. Those after higher throughputs will have to opt for the SSD 900P, a device that draws more power and requires a desktop form factor.
- Capacities: 58GB, 118GB
- PCIe 3.0 x2, M.2 2280
- Sequential: Up to 1200/600 MB/s (R/W)
- Random: 250K+ / 140K+ IOPS (R/W) (QD4)
- Latency (average sequential): 6.75us / 18us (R/W) (TYP)
- Power: 3.75W Active, 8mW L1.2 Sleep
Specs are essentially what we would expect from an Optane Memory type device. Capacities of 58GB and 118GB are welcome additions over the prior 16GB and 32GB Optane Memory parts, but the 120GB capacity point is still extremely cramped for those who would typically desire such a high performing / low latency device. We had 120GB SSDs back in 2009, after all, and nowadays we have 20GB Windows installs and 50GB game downloads.
Before moving on, I need to call out Intel on their latency specification here. To put it bluntly, sequential transfer latency is a crap spec. Nobody cares about the latency of a sequential transfer, especially for a product which touts its responsiveness - something determined by *random* access latency - and the 6.75us figure above would translate to roughly 150,000 QD1 IOPS (the 800P is fast, but it's not *that* fast). Most storage devices/media will internally 'read ahead' so that sequential latencies at the interface are as low as possible, increasing sequential throughput. Sequential latency is simply the inverse of throughput, meaning any SSD with a higher sequential throughput than the 800P should beat it on this particular spec. To drive the point home further, consider that a HDD's average sequential latency can beat the random read latency of a top-tier NVMe SSD like the 960 PRO. It's just a bad way to spec a storage device, and it won't do Intel any favors here if competing products start sharing this same method of rating latency in the future.
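For reference, the back-of-envelope arithmetic behind that figure (my own calculation, not Intel's): one access completing every 6.75 microseconds at a queue depth of 1 would work out to

$$\frac{1\,\text{s}}{6.75\,\mu\text{s}} = \frac{1{,}000{,}000\,\mu\text{s}}{6.75\,\mu\text{s}} \approx 148{,}000\ \text{IOPS},$$

a random-read rate that neither the 800P nor any SSD of its era approaches at QD1, which is exactly why the spec only makes sense read as the inverse of sequential throughput.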
Our samples came in white/brown box packaging, but I did snag a couple of photos of what should be the retail box this past CES:
Don't Call It SPIR of the Moment
Vulkan 1.0 released a little over two years ago. The announcement, with conformant drivers, conformance tests, tools, and a patch for The Talos Principle, made for a successful launch for the Khronos Group. Of course, games weren’t magically three times faster or anything like that, but it got the API out there; it also redrew the line between game and graphics driver.
The Khronos Group repeats this “hard launch” with Vulkan 1.1.
First, the specifications for both Vulkan 1.1 and SPIR-V 1.3 have been published. We will get into the details of those two standards later. Second, a suite of conformance tests has also been included with this release, which helps prevent an implementation bug from becoming an implied API that software relies upon ad infinitum. Third, several developer tools have been released, mostly by LunarG, into the open-source ecosystem.
Fourth – conformant drivers. The following companies have Vulkan 1.1-certified drivers:
There are two new additions to the API:
The first is Protected Content. This allows developers to restrict access to rendering resources (DRM). Moving on!
The second is Subgroup Operations. We mentioned that they were added to SPIR-V back in 2016 when Microsoft announced HLSL Shader Model 6.0, and some of the instructions were available as OpenGL extensions. They are now a part of the core Vulkan 1.1 specification. This allows the individual threads of a GPU in a warp or wavefront to work together on specific instructions.
Shader compilers can use these intrinsics to speed up operations such as:
- Finding the min/max of a series of numbers
- Shuffling and/or copying values between lanes of a group
- Adding several numbers together
- Multiplying several numbers together
- Evaluating whether any, all, or which lanes of a group evaluate true
In other words, shader compilers can do more optimizations, which boosts the speed of several algorithms and should translate to higher performance when shader-limited. It also means that DirectX titles using Shader Model 6.0 should be able to compile into their Vulkan equivalents when using the latter API.
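As a rough sketch of how an application discovers this support at run-time using the standard Vulkan 1.1 C API (the instance and physical-device setup around it is assumed):

```cpp
// Rough sketch: querying Vulkan 1.1 subgroup capabilities with the C API.
// The VkPhysicalDevice is assumed to come from the usual instance setup.
#include <vulkan/vulkan.h>

VkPhysicalDeviceSubgroupProperties QuerySubgroupSupport(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceSubgroupProperties subgroup{};
    subgroup.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SUBGROUP_PROPERTIES;

    VkPhysicalDeviceProperties2 props{};
    props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props.pNext = &subgroup;

    // Core in Vulkan 1.1: chains the subgroup struct onto the properties query.
    vkGetPhysicalDeviceProperties2(gpu, &props);

    // subgroup.subgroupSize is the warp/wavefront width (e.g. 32 or 64).
    // subgroup.supportedOperations is a bitmask of VK_SUBGROUP_FEATURE_* bits
    // (BASIC, VOTE, ARITHMETIC, BALLOT, SHUFFLE, and so on).
    return subgroup;
}
```

The supportedOperations bitmask reports which of the categories listed above (arithmetic, shuffle, ballot, vote) the device and driver actually expose.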
This leads us to SPIR-V 1.3. (We’ll circle back to Vulkan later.) SPIR-V is the intermediate shader language that Vulkan relies upon, a successor to the earlier LLVM-based SPIR. SPIR-V is the form your shader code takes when it is handed to the GPU driver – Vulkan just deals with how to get this code onto the silicon as efficiently as possible. In a video game, this would be whatever code the developer chose to represent lighting, animation, particle physics, and almost anything else done on the GPU.
The Khronos Group is promoting an ecosystem in which SPIR-V can be generated from GLSL, OpenCL C, or even HLSL. In other words, the developer will not need to rewrite their DirectX shaders to operate on Vulkan. This isn’t particularly new – Unity has done this sort of HLSL-to-SPIR-V conversion ever since it added Vulkan support – but it’s good to mention that it’s a promoted workflow. OpenCL C will also be useful for developers who want to move existing OpenCL code into Vulkan on platforms where the latter is available but the former rarely is, such as Android.
Speaking of which, that’s exactly what Google, Codeplay, and Adobe are doing. Adobe wrote a lot of OpenCL C code for their Creative Cloud applications, and they want to move it elsewhere. This ended up being a case study for an OpenCL to Vulkan run-time API translation layer and the Clspv OpenCL C to SPIR-V compiler. The latter is open source, and the former might become open source in the future.
Now back to Vulkan.
The other major change with this new version is the absorption of several extensions into the core 1.1 specification.
The first is Multiview, which allows multiple projections to be rendered at the same time, as seen in the GTX 1080 launch. This can be used for rendering VR, stereoscopic 3D, cube maps, and curved displays without extra draw calls.
The second is device groups, which allows multiple GPUs to work together.
The third allows data to be shared between APIs and even whole applications. The Khronos Group specifically mentions that the SteamVR SDK uses this.
The fourth is 16-bit data types. While most GPUs operate on 32-bit values, it might be beneficial to pack data into 16-bit values in memory for algorithms that are limited by bandwidth. It also helps Vulkan be used in non-graphics workloads.
We already discussed HLSL support, but that’s an extension that’s now core.
The sixth extension is YCbCr support, which is required by several video codecs.
The last thing that I would like to mention is the Public Vulkan Ecosystem Forum. The Khronos Group has regularly mentioned that they want to get the open-source community more involved in reporting issues and collaborating on solutions. In this case, they are working on a forum where both members and non-members will collaborate, as well as the usual GitHub issues tab and so forth.
You can check out the details here.
Introduction and First Impressions
Launching today, Corsair’s new Carbide Series 275R case is a budget-friendly option that still offers plenty of understated style with clean lines and the option of a tempered glass side panel. Corsair sent us a unit to check out, so we have a day-one review to share. How does it compete against recent cases we’ve looked at? Find out here!
The Carbide 275R is a compact mid-tower design that still accommodates standard ATX motherboards, large CPU coolers (up to 170 mm tall), and long graphics cards, and it includes a pair of Corsair’s SP120 fans for intake/exhaust. The price tag? $69.99 for the version with an acrylic side, and $79.99 for the version with a tempered glass side panel (as reviewed). Let’s dive in, beginning with a rundown of the basic specs.