
Google Daydream Standalone VR Headset Powered by Snapdragon 835

Subject: Graphics Cards, Mobile | May 17, 2017 - 02:30 PM |
Tagged: snapdragon 835, snapdragon, qualcomm, google io 2017, google, daydream

During the Google I/O keynote, Google and Qualcomm announced a partnership to create a reference design for a standalone Daydream VR headset using Snapdragon 835, enabling the ecosystem of partners to have deliverable hardware in consumers’ hands by the end of 2017. The timeline is aggressive, impressively so, thanks in large part to the previous work Qualcomm had done with the Snapdragon-based VR reference design we first saw in September 2016. At the time, the Qualcomm platform was powered by the Snapdragon 820. Since then, Qualcomm has updated the design to integrate the Snapdragon 835 processor and platform, improving performance and efficiency along the way.

Google has now taken the reference platform and made some modifications to integrate Daydream support, and will offer it to partners to showcase what a standalone, untethered VR solution can do. Even though Google Daydream has been shipping in the form of slot-in phones with a “dummy” headset, integrating the whole package into a dedicated device offers several advantages.

First, I expect the standalone units to have better performance than the phones used as a slot-in solution. With the ability to tune the device to higher thermal limits, Qualcomm and Google will be able to ramp up the clocks on the GPU and SoC for optimal performance. And because there is more room for a larger battery in the headset design, there should be an advantage in battery life along with the increase in performance.

sd835vr.jpg

The Qualcomm Snapdragon 835 VR Reference Device

It is also likely that the device will have better thermal properties than those using high-end smartphones today. In other words, with more space there should be more area for cooling, and thus the unit shouldn’t be as warm on the consumer’s face.

I would assume as well that the standalone units will have improved hardware over the smartphone iterations. That means better gyros, cameras, sensors, etc. that could lead to improved capability for the hardware in this form factor. Better hardware, tighter and more focused integration, and better software support should mean lower latency and better VR gaming across the board, assuming everything is implemented as it should be.

The only major change that Google has made to this reference platform is the move away from Qualcomm’s 6DOF technology (6 degrees of freedom, allowing you to move in real space and have all necessary tracking done on the headset itself) to what Google calls WorldSense. Based on the Google Project Tango technology, this is the one area I have questions about going forward. I have used three different Tango-enabled devices thus far in long-term personal testing and can say that while the possibilities for it were astounding, the implementations had been…slow. For VR, that 100% cannot be the case. I don’t yet know how different its integration is from what Qualcomm had done previously, but hopefully Google will leverage the work Qualcomm has already done with its platform.

Google is claiming that consumers will have hardware based on this reference design in 2017 but no pricing has been shared with me yet. I wouldn’t expect it to be inexpensive though – we are talking about all the hardware that goes into a flagship smartphone plus a little extra for the VR goodness. We’ll see how aggressive Google wants its partners to be and if it is willing to absorb any of the upfront costs with subsidy.

Let me know if this is the direction you hope to see VR move – away from tethered PC-based solutions and into the world of standalone units.

Source: Qualcomm

Meet the GT 1030

Subject: Graphics Cards | May 17, 2017 - 01:55 PM |
Tagged: nvidia, msi, gt 1030, gigabyte, evga, zotac

The GT 1030 quietly launched from a variety of vendors late yesterday amidst the tsunami of AMD announcements.  The low-profile card is advertised as offering twice the performance of the iGPU found on Intel Core i5 processors and in many cases is passively cooled.  From the pricing of the cards available now, expect to pay around $75 to $85 for this new card.

evga.jpg

EVGA announced a giveaway of several GT 1030s at the same time as they released the model names.  The card which is currently available retails for $75, is clocked at 1290 MHz base and 1544 MHz boost, and has 384 CUDA cores.  The 2GB of GDDR5 is clocked a hair over 6GHz and runs on a 64-bit bus, providing a memory bandwidth of 48.06 GB/s.  Two of their three models offer HDMI + DVI-D out; the third has a pair of DVI-D connectors.
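That bandwidth figure falls straight out of the effective data rate and bus width. Here is a quick sanity check (note that the ~6008 MT/s effective rate is inferred from the quoted 48.06 GB/s, not taken from an EVGA spec sheet):

```cpp
#include <cstdio>

int main() {
    // GDDR5 "a hair over 6GHz": ~6008 MT/s effective (inferred, see above)
    double effective_mtps = 6008.0;  // mega-transfers per second
    double bus_width_bits = 64.0;    // GT 1030 memory bus

    // bandwidth = transfers per second * bytes moved per transfer
    double gb_per_s = effective_mtps * 1e6 * (bus_width_bits / 8.0) / 1e9;
    std::printf("Memory bandwidth: %.2f GB/s\n", gb_per_s);  // ~48.06 GB/s
}
```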

zt-p10300a-10l_image5_0.jpg

Zotac's offering provides slightly lower clocks, with a base of 1227MHz and a boost of 1468MHz; the VRAM remains unchanged at 6GHz.  It pairs HDMI 2.0b with a DVI port and comes with a low-profile bracket if needed for an SFF build.

msi.png

MSI went all out and released a half dozen models, two of which you can see above.  The GT 1030 AERO ITX 2G OC is actively cooled, which allows it to reach a 1265MHz base and 1518MHz boost clock.  The passively cooled GT 1030 2GH LP OCV1 runs at the same frequency and fits in a single slot externally; however, you will need to leave space inside the system as the heatsink takes up an additional slot internally.  Both are fully compatible with the Afterburner overclocking utility and its features, such as the Predator gameplay recording tool.

gb pair.png

Last but not least are a pair from Gigabyte, the GT 1030 Low Profile 2G and Silent Low Profile 2G cards.  Both cards offer two modes: in OC Mode the base clock is 1252MHz and the boost clock 1506MHz, while in Gaming Mode you will run at 1227MHz base and 1468MHz boost.

Source: EVGA

Dating Intel and AMD in 2017, we're going out for chips

Subject: General Tech | May 17, 2017 - 12:30 PM |
Tagged: Intel, amd, rumour, release dates, ryzen, skylake-x, kaby lake x, Threadripper, X399, coffee lake

DigiTimes has posted an article covering the probable launch dates of AMD's new CPUs and GPUs, as well as Intel's reaction to the release.  Not all of these dates are confirmed, but it is worth noting as these rumours are often close to those eventually announced.  Naples will be the first, with the server chips launching at the end of June, but that is just the start.  July is the big month for AMD, with the lower end Ryzen 3 chips hitting the market as well as the newly announced 16-core Threadrippers and the X399 chipset.  That will also be the month we see the Vega Frontier Edition graphics cards arrive.

Intel's Basin Falls platform (Skylake-X and Kaby Lake-X, along with the associated X299 chipset) is still scheduled for a Computex reveal and a late June or early August release.  Coffee Lake is getting pushed ahead, however; its launch has been moved up to late August instead of the beginning of next year.

Even with Intel's counters, AMD's balance sheet is likely to be looking better and better as the year goes on, which is great news for everyone ... except perhaps Intel and NVIDIA.

Vega FE Slide.png

"Demand for AMD's Ryzen 7- and Ryzen 5-series CPU products has continued rising, which may allow the chipmaker to narrow its losses to below US$50 million for the second quarter of 2017. With Intel also rumored to pay licensing fees to AMD for its GPUs, some market watchers believe AMD may turn profitable in the second quarter or in the third."

Here is some more Tech News from around the web:

Tech Talk

 

Source: DigiTimes
Author:
Manufacturer: AMD

Is it time to buy that new GPU?

Testing commissioned by AMD. This means that AMD paid us for our time, but had no say in the results or presentation of them.

Earlier this week Bethesda and Arkane Studios released Prey, a first-person shooter that is a re-imagining of the 2006 game of the same name. Fans of System Shock will find a lot to love about this new title and I have found myself enamored with the game…in the name of science of course.

Prey-2017-05-06-15-52-04-16.jpg

While I was doing my due diligence, performing some preliminary testing to see if we would utilize Prey for graphics testing going forward, AMD approached me to discuss this exact title. With the release of the Radeon RX 580 in April, one of the key storylines is that the card offers a reasonably priced upgrade path for users of 2+ year old hardware. With that upgrade you should see some substantial performance improvements and, as I will show you here, the new Prey is a perfect example of that.

Targeting the Radeon R9 380, a graphics card that was originally released back in May of 2015, the RX 580 offers substantially better performance at a very similar launch price. The same is true for the GeForce GTX 960: launched in January of 2015, it is slightly longer in the tooth. AMD’s data shows that 80% of the users on Steam are running R9 380X or slower graphics cards and that only 10% of them upgraded in 2016. Considering the great GPUs that were available then (including the RX 480 and the GTX 10-series), it seems more and more likely that we are going to hit an upgrade inflection point in the market.

slides-5.jpg

A simple experiment was set up: does the new Radeon RX 580 offer a worthwhile upgrade path for the many users of R9 380 or GTX 960 classes of graphics cards (or older)?

|                  | Radeon RX 580           | Radeon R9 380 | GeForce GTX 960 |
|------------------|-------------------------|---------------|-----------------|
| GPU              | Polaris 20              | Tonga Pro     | GM206           |
| GPU Cores        | 2304                    | 1792          | 1024            |
| Rated Clock      | 1340 MHz                | 918 MHz       | 1127 MHz        |
| Memory           | 4GB / 8GB               | 4GB           | 2GB / 4GB       |
| Memory Interface | 256-bit                 | 256-bit       | 128-bit         |
| TDP              | 185 watts               | 190 watts     | 120 watts       |
| MSRP (at launch) | $199 (4GB) / $239 (8GB) | $219          | $199            |

Continue reading our look at the Radeon RX 580 in Prey!

AMD Compares 1x 32-Core EPYC to 2x 12-Core Xeon E5s

Subject: Processors | May 17, 2017 - 04:05 AM |
Tagged: amd, EPYC, 32 core, 64 thread, Intel, Broadwell-E, xeon

AMD has formally announced their EPYC CPUs. While Sebastian covered the product specifications, AMD has also released performance claims against a pair of Intel’s Broadwell-E Xeons. Intel’s E5-2650 v4 processors have an MSRP of around $1,170 USD each, but we don’t know how that price will compare to AMD’s offering. At first glance, pitting thirty-two cores against two twelve-core chips seems a bit unfair, although it could end up being a very fair comparison if the prices align.

amd-2017-epyc-ubuntucompile.jpg

Image Credit: Patrick Moorhead

Patrick Moorhead, who was at the event, tweeted out photos of a benchmark where Ubuntu was compiled with GCC. It looks like EPYC completed the run in just 33.7s while the dual Broadwell-E system took 37.2s (making AMD’s part ~9.5% faster). While this advantage, again, stems from having a third more cores, its value depends on how much AMD is going to charge you for them versus Intel’s current pricing structure.
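To be precise about how that percentage falls out of the two compile times (the ~9.5% reading is time saved; the throughput gain is slightly higher):

```cpp
#include <cstdio>

int main() {
    double epyc_s = 33.7;  // 1x 32-core EPYC compile time
    double xeon_s = 37.2;  // 2x 12-core Xeon E5 compile time

    double time_saved = (xeon_s - epyc_s) / xeon_s * 100.0;  // ~9.4%
    double throughput = (xeon_s / epyc_s - 1.0) * 100.0;     // ~10.4%
    std::printf("time saved: %.1f%%  throughput gain: %.1f%%\n",
                time_saved, throughput);
}
```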

amd-2017-epyc-threads.jpg

Image Credit: Patrick Moorhead

This one chip also has 128 PCIe lanes, rather than Intel’s 80 total lanes spread across two chips.

AMD Announces Radeon Vega Frontier Edition Graphics Cards

Subject: Graphics Cards | May 16, 2017 - 07:39 PM |
Tagged: Vega, reference, radeon, graphics card, gpu, Frontier Edition, amd

AMD has revealed their concept of a premium reference GPU for the upcoming Radeon Vega launch, with the "Frontier Edition" of the new graphics cards.

Vega FE Slide.png

"Today, AMD announced its brand-new Radeon Vega Frontier Edition, the world’s most powerful solution for machine learning and advanced visualization aimed to empower the next generation of data scientists and visualization professionals -- the digital pioneers forging new paths in their fields. Designed to handle the most demanding design, rendering, and machine intelligence workloads, this powerful new graphics card excels in:

  • Machine learning. Together with AMD’s ROCm open software platform, Radeon Vega Frontier Edition enables developers to tap into the power of Vega for machine learning algorithm development. Frontier Edition delivers more than 50 percent more performance than today’s most powerful machine learning GPUs.
  • Advanced visualization. Radeon Vega Frontier Edition provides the performance required to drive increasingly large and complex models for real-time visualization, physically-based rendering and virtual reality through the design phase as well as rendering phase of product development.
  • VR workloads. Radeon Vega Frontier Edition is ideal for VR content creation supporting AMD’s LiquidVR technology to deliver the gripping content, advanced visual comfort and compatibility needed for next-generation VR experiences.
  • Revolutionized game design workflows. Radeon Vega Frontier Edition simplifies and accelerates game creation by providing a single GPU optimized for every stage of a game developer’s workflow, from asset production to playtesting and performance optimization."

Vega FE.jpg

From the image provided on the official product page it appears that there will be both liquid-cooled (the gold card in the background) and air-cooled variants of these "Frontier Edition" cards, which AMD states will arrive with 16GB of HBM2 and offer 1.5x the FP32 performance and 3x the FP16 performance of the Fury X.

From AMD:

Radeon Vega Frontier Edition

  • Compute units: 64
  • Single precision compute performance (FP32): ~13 TFLOPS
  • Half precision compute performance (FP16): ~25 TFLOPS
  • Pixel Fillrate: ~90 Gpixels/sec
  • Memory capacity: 16 GB of High Bandwidth Cache
  • Memory bandwidth: ~480 GB/sec
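Those figures hang together: with 64 compute units at 64 ALUs each (an assumption based on prior GCN parts, not an AMD disclosure here), the quoted FP32 number implies a clock near 1.6 GHz, and the FP16 number reflects double-rate packed math:

```cpp
#include <cstdio>

int main() {
    double fp32_tflops = 13.0;     // quoted ~13 TFLOPS FP32
    double alus = 64.0 * 64.0;     // 64 CUs x 64 ALUs each (assumed, per GCN)
    double flops_per_clock = 2.0;  // one fused multiply-add = 2 FLOPs

    // implied clock = total FLOPS / (ALUs * FLOPs per ALU per clock)
    double clock_ghz = fp32_tflops * 1e12 / (alus * flops_per_clock) / 1e9;
    std::printf("implied clock: ~%.2f GHz\n", clock_ghz);       // ~1.59 GHz
    std::printf("FP16 at 2x packed rate: ~%.0f TFLOPS\n",
                fp32_tflops * 2.0);                             // quoted ~25
}
```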

The availability of the Radeon Vega Frontier Edition was announced as "late June", so we should not have too long to wait for further details, including pricing.

Source: AMD

AMD's 16-Core Ryzen Threadripper CPUs Coming This Summer

Subject: Processors | May 16, 2017 - 07:22 PM |
Tagged: Zen, Threadripper, ryzen, processor, HEDT, cpu, amd

AMD revealed their entry into high-end desktop (HEDT) with the upcoming Ryzen "Threadripper" CPUs, which will feature up to 16 cores and 32 threads.

Threadripper 2.png

Little information was revealed along with the announcement, other than to announce availability as "summer 2017", though rumors and leaks surrounding Threadripper have been seen on the internet (naturally) leading up to today's announcement, including this one from Wccftech. Not only will Threadripper (allegedly) offer quad-channel memory support and 44 PCI Express lanes, but the chips are also rumored to be released in a massive 4094-pin package (same as "Naples" aka EPYC) that most assuredly will not fit into the AM4 socket.

WCCFTECH Chart 2.png

Image credit: Wccftech

These Threadripper CPUs follow the lead of Intel's HEDT parts on X99, which are essentially re-appropriated Xeons with higher clock speeds and some feature differences such as a lack of ECC memory support. It remains to be seen what exactly will separate the enthusiast AMD platform from the EPYC datacenter platform, though the rumored base clock speeds are much higher with Threadripper.

Source: AMD

AMD Announces EPYC: A Massive 32-Core Datacenter SoC

Subject: Processors | May 16, 2017 - 06:49 PM |
Tagged: Zen, server, ryzen, processor, EPYC, datacenter, cpu, amd, 64 thread, 32 core

AMD has announced their new datacenter CPU built on the Zen architecture, which the company is calling EPYC. And epic they are, as these server processors will be offered with up to 32 cores and 64 threads, 8 memory channels, and 128 PCI Express lanes per CPU.

Epyc_1.jpg

Some of the details about the "Naples" server processors (now EPYC) were revealed by AMD back in March, when the upcoming server chips were previewed:

"Naples" features:

  • A highly scalable, 32-core System on Chip (SoC) design, with support for two high-performance threads per core
  • Industry-leading memory bandwidth, with 8 channels of memory per "Naples" device. In a 2-socket server, support for up to 32 DIMMs of DDR4 on 16 memory channels, delivering up to 4 terabytes of total memory capacity.
  • The processor is a complete SoC with fully integrated, high-speed I/O supporting 128 lanes of PCIe, negating the need for a separate chip-set
  • A highly-optimized cache structure for high-performance, energy efficient compute
  • AMD Infinity Fabric coherent interconnect for two "Naples" CPUs in a 2-socket system
  • Dedicated security hardware 

EPYC Screen.png

Compared to Ryzen (or should it be RYZEN?), EPYC offers a huge jump in core count and available performance - though AMD's other CPU announcement (Threadripper) bridges the gap between the desktop and datacenter offerings with an HEDT product. This also brings AMD's CPU offerings to parity with the Intel product stack across desktop, high-end desktop, and server CPUs.

epycpackage.jpg

EPYC is a large processor. (Image credit: The Tech Report)

While specifications were not offered, there have been leaks (of course) to help fill in the blanks. Wccftech offers these specs for EPYC (on the left):

Wccftech Chart.png

(Image credit: Wccftech)

We await further information from AMD about the EPYC launch.

Source: AMD
Subject: General Tech
Manufacturer: The Khronos Group

It Started with an OpenCL 2.2 Press Release

Update (May 18 @ 4pm EDT): A few commenters across the internet believed that the statements from The Khronos Group were inaccurately worded, so I emailed them yet again. The OpenCL working group has released yet another statement:

OpenCL is announcing that their strategic direction is to support CL style computing on an extended version of the Vulkan API. The Vulkan group is agreeing to advise on the extensions.

In other words, this article was and is accurate. The Khronos Group are converging OpenCL and Vulkan into a single API: Vulkan. There was no misinterpretation.

Original post below

Earlier today, we published a news post about the finalized specifications for OpenCL 2.2 and SPIR-V 1.2. This was announced through a press release that also contained an odd little statement at the end of the third paragraph.

We are also working to converge with, and leverage, the Khronos Vulkan API — merging advanced graphics and compute into a single API.

khronos-vulkan-logo.png

This statement seems to suggest that OpenCL and Vulkan are expected to merge into a single API for compute and graphics at some point in the future. This seemed like a huge announcement to bury that deep in the press blast, so I emailed The Khronos Group for confirmation (and any further statements). As it turns out, this interpretation is correct, and they provided a more explicit statement:

The OpenCL working group has taken the decision to converge its roadmap with Vulkan, and use Vulkan as the basis for the next generation of explicit compute APIs – this also provides the opportunity for the OpenCL roadmap to merge graphics and compute.

This statement adds a new claim: The Khronos Group plans to merge OpenCL into Vulkan, specifically, at some point in the future. Moving in this direction, from OpenCL to Vulkan, makes sense for a handful of reasons, which I will highlight in my analysis below.

Going Vulkan to Live Long and Prosper?

The first reason for merging OpenCL into Vulkan, from my perspective, is that Apple, who originally created OpenCL, still owns the trademarks (and some other rights) to it. The Khronos Group licenses these bits of IP from Apple. Vulkan, based on AMD’s donation of the Mantle API, should be easier to manage from the legal side of things.

khronos-2016-vulkan-why.png

The second reason for going in that direction is the actual structure of the APIs. When Mantle was announced, it looked a lot like an API that wrapped OpenCL with a graphics-specific layer. Also, Vulkan isn’t specifically limited to GPUs in its implementation.

Aside: When you create a device queue, you can query the driver to see what type of device it identifies as by reading its VkPhysicalDeviceType. Currently, as of Vulkan 1.0.49, the options are Other, Integrated GPU, Discrete GPU, Virtual GPU, and CPU. While this is just a clue, to make it easier to select a device for a given task, and isn’t useful to determine what the device is capable of, it should illustrate that other devices, like FPGAs, could support some subset of the API. It’s just up to the developer to check for features before they’re used, and target it at the devices they expect.
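As a concrete illustration, here is a minimal sketch using the standard Vulkan C API (error handling and instance extensions trimmed for brevity) that enumerates devices and prints the type each one reports:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

static const char* TypeName(VkPhysicalDeviceType t) {
    switch (t) {
        case VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU: return "Integrated GPU";
        case VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU:   return "Discrete GPU";
        case VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU:    return "Virtual GPU";
        case VK_PHYSICAL_DEVICE_TYPE_CPU:            return "CPU";
        default:                                     return "Other";
    }
}

int main() {
    VkInstanceCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) return 1;

    // Ask for the count first, then fetch the handles.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice d : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(d, &props);
        std::printf("%s: %s\n", props.deviceName, TypeName(props.deviceType));
    }
    vkDestroyInstance(instance, nullptr);
}
```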

If you were to go in the other direction, you would need to wedge graphics tasks into OpenCL. You would be creating Vulkan all over again. From my perspective, pushing OpenCL into Vulkan seems like the path of least resistance.

The third reason (that I can think of) is probably marketing. DirectX 12 isn’t attempting to seduce FPGA developers. Telling a game studio to program their engine on a new, souped-up OpenCL might make them break out in a cold sweat, even if both parties know that it’s an evolution of Vulkan with cross-pollination from OpenCL. OpenCL developers, on the other hand, are probably using the API because they need it, and are less likely to be shaken off.

What OpenCL Could Give Vulkan (and Vice Versa)

From the very outset, OpenCL and Vulkan were occupying similar spaces, but there are some things that OpenCL does “better”. The most obvious, and previously mentioned, element is that OpenCL supports a wide range of compute devices, such as FPGAs. That’s not the limit of what Vulkan can borrow, though it could make for an interesting landscape if FPGAs become commonplace in the coming years and decades.

khronos-SYCL_Color_Mar14_154_75.png

Personally, I wonder how SYCL could affect game engine development. This standard attempts to guide GPU- (and other device-) accelerated code into a single-source, C++ model. For over a decade, Tim Sweeney of Epic Games has talked about writing engines like he did back in the software-rendering era, but without giving up the ridiculous performance (and efficiency) provided by GPUs.

Long-time readers of PC Perspective might remember that I was investigating GPU-accelerated software rendering in WebCL (via Nokia’s implementation). The thought was that I could concede the raster performance of modern GPUs and make up for it with added control, the ability to explicitly target secondary compute devices, and the ability to run in a web browser. This took place in 2013, before AMD announced Mantle and browser vendors expressed a clear disinterest in exposing OpenCL through JavaScript. Seeing the idea was about to be crushed, I pulled out the GPU-accelerated audio ideas into a more-focused project, but that part of my history is irrelevant to this post.

The reason for bringing up this anecdote is because, if OpenCL is moving into Vulkan, and SYCL is still being developed, then it seems likely that SYCL will eventually port into Vulkan. If this is the case, then future game engines can gain benefits that I was striving toward without giving up access to fixed-function features, like hardware rasterization. If Vulkan comes to web browsers some day, it would literally prune off every advantage I was hoping to capture, and it would do so with a better implementation.

microsoft-2015-directx12-logo.jpg

More importantly, SYCL is something that Microsoft cannot provide with today’s DirectX.

Admittedly, it’s hard to think of something that OpenCL can acquire from Vulkan, besides just a lot more interest from potential developers. Vulkan was already somewhat of a subset of OpenCL that had graphics tasks (cleanly) integrated over top of it. On the other hand, OpenCL has been struggling to acquire mainstream support, so that could, in fact, be Vulkan’s greatest gift.

The Khronos Group has not provided a timeline for this change. It’s just a roadmap declaration.

Lian Li's New PC-T70 Test Bench

Subject: Cases and Cooling | May 16, 2017 - 06:05 PM |
Tagged: Test Bench, T70-1, PC-T70, open air case, Lian Li, acrylic

Lian Li has designed an open air case with an optional acrylic enclosure to help simulate normal case environs or to protect your components if you build a system you want to showcase.  The PC-T70 is primarily designed as a test bench, so you can set up an E-ATX, ATX, or Micro-ATX/ITX motherboard and easily swap out components while benchmarking hardware or software.  The problem with test benches is one of temperature; most of us set up our systems in enclosed cases, and the temperatures experienced will be different than in a case fully exposed to any wafting breeze.  Lian Li has overcome this with their optional T70-1, a set of acrylic side pieces and a top with mounts for fans or radiators, which allow you to simulate a closed case environment when you are reporting on running temperatures.

Capture.PNG

There is another use for this case which might tempt a different set of users.  The case fully exposes your components, which makes this a great base to build an impressive mod on, or simply to show off all of those RGB LEDs you paid good money for.  The acrylic case ensures that your system cannot be permanently killed by a passing feline, as well as providing mounting points for an impressive watercooling setup.  You can check out the full PR below the specs and video.

spectac.PNG

New PC-T70 Test Bench Simulates Any Case Environment
Lian Li’s New Modular Bench Transforms for Both Closed-Air and Open-Air Testing

May 16, 2017, Keelung, Taiwan - Lian-Li Industrial Co. Ltd is eager to announce the PC-T70 test bench. After productive collaboration, taking feedback from high-end PC hardware reviewers, Lian Li sought to create a test bench that could both provide unhindered access for enthusiasts who want to rapidly swap hardware, and those who like to use their test benches as a workstation. Lian Li’s latest test bench is its most flexible yet – a sleek, minimal platform for easy hardware swapping, with an optional kit that encloses the bench with radiator mounts and an acrylic cover.

Unobstructed Design for Hardware Swapping
After taking feedback from PC hardware reviewers, Lian Li realized that simplicity was key. The PC-T70 has completely free access, with zero barriers hindering the installation of motherboards and other hardware. Users can even remove the back frame for expansion slots and IO cover if they so choose. Six open pass-throughs are positioned around the motherboard tray to route cables down to the PSU and drive mounts on the floor panel.

Simulate Closed-Air Case Environments for Advanced Testing
With the T70-1 upgrade kit, users can add side panels to the open bench, each mounting two 120mm or 140mm fans or a 240mm or 280mm radiator with removable mesh dust filters. It also includes a back panel, mounting an additional 120mm or 140mm exhaust fan and an acrylic canopy secured by magnetic strips to fully enclose the motherboard compartment, simulating a closed-air environment more representative of regular users – a valuable advantage for hardware reviewers. Every panel is modular and easily taken down, so users can rapidly cycle between closed and open-air setups.

A Bench Built for All Form Factors
The PC-T70 mounts E-ATX, ATX, Micro ATX, and mini ITX motherboards, with eight expansion slots to mount VGA cards as long as 330mm. While enclosed, its CPU cooler clearance is limited to 180mm. The floor panel mounts ATX PSUs as long as 330mm and as many as five 2.5” and one 3.5” drives or one 2.5” and two 3.5” storage drives. Users can also use the floor panel to mount a 360mm radiator, reservoirs, and pumps for custom water cooling loops.

Price and Availability
The PC-T70, including the T70-1 option kit, is now available at Newegg for $189.99.
Also available in white.

Source: Lian Li

CORSAIR Launches T1 RACE Gaming Chair

Subject: General Tech | May 16, 2017 - 01:58 PM |
Tagged: gaming chair, corsair, T1 RACE

Corsair have jumped into the gaming chair market, a product category we did not see much of until it recently took off in a big way.  The T1 RACE is made of PU leather, also known as bicast leather, so the shiny finish should last quite a while, though the feel will not be quite the same as a true leather chair, nor will the price be as astronomical.  Depending on the type of polyurethane leather they used, this product might be vegan.  You can choose between yellow, white, blue or red trim to highlight your chair, or if you prefer you can forego the colours for a purely black chair.  It can recline 90° to 180° if you need a moment to lie back, the arm rests can be adjusted for height, width, position and angle, and neck and lumbar PU leather pillows are included.

Check out Corsair's page here or the PR just below.

unnamed.jpg

FREMONT, CA – May 16th, 2017 - CORSAIR®, a world leader in enthusiast memory, PC components and high-performance gaming hardware today announced the launch of its first gaming chair, the T1 RACE. Inspired by racing, crafted for comfort and built to last, the T1 RACE joins CORSAIR’s award-winning range of mice, keyboards, headsets and mousepads to complete the ultimate gaming experience. Built using a solid steel skeleton and dense foam cushions, the T1 RACE has the strength to ensure a lifetime of sturdiness, while its 4D-movement armrests raise, lower, shift and swivel to put gamers in the most comfortable position every time. Styled to turn heads and finished with immaculate attention to detail, the T1 RACE is the gaming chair your desk deserves.

Upholstered in luxurious PU leather on seating surfaces and available in five different colors, T1 RACE lets you choose your seat to match your style, in either Yellow, White, Blue, Red or Black trim, finished with automotive color-matched stitching and base accents. Nylon caster wheels, often an optional upgrade on office and gaming chairs, are included with T1 RACE as standard, ensuring stability and smooth movement on any surface.

T1 RACE’s sculpted race-seat design and included neck and lumbar PU leather pillows provide adjustable support for day-long gaming sessions, while its 4D-movement armrests effortlessly adjust in height, width, position and angle to put your arms precisely where they need to be. A steel construction Class 4 gas lift provides reliable height adjustment, while the seat itself tilts up to 10° and can recline anywhere between 90° and 180°, lying completely flat for when you need to take a break from the action. Finishing the T1 RACE’s attention to detail, the CORSAIR logo is tastefully embroidered into the rear of the chair, and lightly embossed into the headrest for maximum comfort.

Source: Corsair

Pot, meet kettle. Is it worse to hoard exploits or patches?

Subject: General Tech | May 16, 2017 - 01:27 PM |
Tagged: security, microsoft

Microsoft and the NSA have each been blaming the other for the ability of WannaCrypt to utilize a vulnerability in SMBv1 to spread.  Microsoft considers the cause of this particular outbreak to be the NSA's decision not to share the vulnerabilities which its EternalBlue tool utilizes with Microsoft and various other security companies.  Conversely, while Microsoft developed patches to address this vulnerability for versions of Windows including WinXP, Server 2003, and Windows 8 RT back in March, it did not release the patches for legacy OSes until the outbreak was well underway.

Perhaps the most compelling proof of blame is the number of systems which should not have been vulnerable but were hit because the available patches were never installed.

Three problems have left us in the pickle we find ourselves in this week: the NSA wanting to hoard vulnerabilities so it can exploit them for espionage; Microsoft ending support of older products because, as a business, it does not find it profitable to support products a decade or more after release; and users not taking advantage of available updates.  On the plus side, this outbreak does have people patching, so we have that going for us.

fingerpointing.jpg

"Speaking of hoarding, though, it's emerged Microsoft was itself stockpiling software – critical security patches for months."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Register

Khronos Group Publishes Finalized OpenCL 2.2 & SPIR-V 1.2

Subject: General Tech | May 16, 2017 - 09:00 AM |
Tagged: spir-v, opencl, Khronos

Aligning with the start of the International Workshop on OpenCL (IWOCL) 2017 in Toronto, Ontario, Canada, The Khronos Group has published the finalized specifications for OpenCL 2.2 and SPIR-V 1.2. The headlining feature for this release is the OpenCL C++ kernel language, which SPIR-V 1.2 fully supports. Kernels are the portions of code that execute on compute devices, such as GPUs, FPGAs, supercomputers, multi-core CPUs, and so forth.

OpenCL_Logo.png

The OpenCL C++ kernel language is a subset of the C++14 standard, bringing many of its benefits to these less-general devices. Classes help data and code to be more tightly integrated. Templates help define logic in a general way for whatever data type implements whatever it requires, which is useful for things like custom containers. Lambda expressions make it easy to write one-off methods, rather than forcing the developer to name something that will only be used once, like comparing two data types for a special sort in one specific spot of code.
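To make those features concrete, here is plain C++14 showing the constructs in question: a template that works for any suitable element type, and a lambda used as a one-off sort comparator. (This is host-style code for illustration; kernel-side source adds address-space and work-item details not shown here.)

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// A template defines the logic once for any element type that supports
// operator+=, the kind of generic "container" code the kernel language allows.
template <typename T>
T sum(const std::vector<T>& values) {
    T total{};
    for (const T& v : values) total += v;
    return total;
}

int main() {
    std::vector<float> data{3.0f, 1.0f, 2.0f};

    // A lambda expresses a one-off comparison inline, rather than forcing
    // a named function for a sort used in exactly one spot of code.
    std::sort(data.begin(), data.end(),
              [](float a, float b) { return a > b; });  // descending

    std::printf("largest=%.1f sum=%.1f\n", data.front(), sum(data));
}
```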

Exposing these features to the OpenCL device also enables The Khronos Group to further the SYCL standard, which aims for “single-source” OpenCL development. Having the code that executes on OpenCL-compatible devices contain roughly the same features as the host code is kind-of necessary to let them be written together, rather than exist as two pools.
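That “single-source” goal is easiest to see in a minimal SYCL sketch (SYCL 1.2-style API; the kernel lambda and the host code that schedules it live in the same C++ file):

```cpp
#include <CL/sycl.hpp>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);

    cl::sycl::queue q;  // grabs a default device (GPU, CPU, ...)
    {
        // The buffer adopts the host vector for the scope below.
        cl::sycl::buffer<float, 1> buf(data.data(),
                                       cl::sycl::range<1>(data.size()));
        q.submit([&](cl::sycl::handler& cgh) {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>(cgh);
            // The kernel is just a C++ lambda in the same source file.
            cgh.parallel_for<class scale_kernel>(
                cl::sycl::range<1>(data.size()),
                [=](cl::sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    }  // buffer destructor waits for the kernel and copies results back
}
```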

The final OpenCL 2.2 and SPIR-V 1.2 specs are available now, and on GitHub for the first time.

Author:
Subject: General Tech
Manufacturer: YouTube TV

YouTube Tries Everything

Back in March, Google-owned YouTube announced a new live TV streaming service called YouTube TV to compete with the likes of Sling, DirecTV Now, PlayStation Vue, and upcoming offerings from Hulu, Amazon, and others. All these services deliver curated bundles of channels aimed at cord cutters, running over the top of customers’ internet-only connections as replacements for, or additions to, cable television subscriptions.  YouTube TV is the latest entrant to this market, with the service currently available in only seven test markets, but it is off to a good start with a decent selection of content and features, including both broadcast and cable channels, on demand media, and live and DVR viewing options. A responsive user interface and a generous number of family sharing options (six account logins and three simultaneous streams) will need to be balanced against the requirement to watch ads (even on some DVR’ed shows) and the $35 per month cost.

Get YouTube TV 1.jpg

YouTube TV was launched in 5 cities with more on the way. Fortunately, I live close enough to Chicago to be in-market and could test out Google’s streaming TV service. While not a full review, the following are my first impressions of YouTube TV.

Setup / Sign Up

YouTube TV is available with a one month free trial, after which you will be charged $35 a month. Sign up is a simple affair and can be started by going to tv.youtube.com or clicking the YouTube TV link from the “hamburger” menu on YouTube. If you are on a mobile device, YouTube TV uses a separate app from the default YouTube app and weighs in at 9.11 MB for the Android version. The sign up process is very simple. After verifying your location, the following screens show you the channels available in your market and give you the option of adding Showtime ($11) and/or Fox Soccer ($15) for additional monthly fees. After that, you are prompted for a payment method, which can be the one already linked to your Google account and used for app purchases and other subscriptions. As for the free trial, I was not charged anything and there was no hold on my account for the $35. I like that Google makes it easy to see exactly how many days you have left on your trial and when you will be charged if you do not cancel. Further, the cancel link is not buried away and is intuitively found by clicking your account photo in the upper right > Personal > Membership. Google is doing things right here. After signup, a tour is offered to show you the various features, but you can skip this if you want to get right to it.

In my specific market, I have the following channels. When I first started testing, some of the channels were not available and were just added today. I hope to see more networks added, and if Google can manage that, YouTube TV and its $35/month price are going to shape up to be a great deal.

  • ABC 7, CBS 2, Fox 32, NBC 5, ESPN, CSN, CSN Plus, FS1, CW, USA, FX, Free Form, NBC SN, ESPN 2, FS2, Disney, E!, Bravo, Oxygen, BTN, SEC ESPN Network, ESPN News, CBS Sports, FXX, Syfy, Disney Junior, Disney XD, MSNBC, Fox News, CNBC, Fox Business, National Geographic, FXM, Sprout, Universal, Nat Geo Wild, Chiller, NBC Golf, YouTube Red Originals
  • Plus: AMC, BBC America, IFC, Sundance TV, We TV, Telemundo, and NBC Universal (just added).
  • Optional Add-Ons: Showtime and Fox Soccer.

I tested YouTube TV out on my Windows PCs and an Android phone. You can also watch YouTube TV on iOS devices, and on your TV using Android TVs and Chromecasts (at the time of writing, Google will send you a free Chromecast after your first month; see here for a full list of supported devices). There are currently no Roku or Apple TV apps.

Get YouTube TV_full list.jpg

Each YouTube TV account can share out the subscription to 6 total logins where each household member gets their own login and DVR library. Up to three people can be streaming TV at the same time. While out and about, I noticed that YouTube TV required me to turn on location services in order to use the app. Looking further into it, the YouTube TV FAQ states that you will need to verify your location in order to stream live TV and will only be able to stream live TV if you are physically in the markets where YouTube TV has launched. You can watch your DVR shows anywhere in the US. However, if you are traveling internationally you will not be able to use YouTube TV at all (I’m not sure if VPNs will get around this or if YouTube TV blocks this like Netflix does). Users will need to login from their home market at least once every 3 months to keep their account active and able to stream content (every month for MLB content).

YouTube TV verifying location in Chrome (left) and on the Android app (right).

On one hand, I can understand this was probably necessary in order for YouTube TV to negotiate a licensing deal, and their terms do seem pretty fair. I will have to do more testing on this as I wasn’t able to stream from the DVR without turning on location services on my Android – I can chalk this up to growing pains though and it may already be fixed.

Features & First Impressions

YouTube TV has an interface that is perhaps best described as a slimmed down YouTube that takes cues from Netflix (things like the horizontal scrolling of shows in categories). The main interface is broken down into three sections: Library, Home, and Live with the first screen you see when logging in being Home. You navigate by scrolling and clicking, and by pulling the menus up from the bottom while streaming TV like YouTube.

YouTube TV Home.jpg

Continue reading for my first impressions of YouTube TV!

Author:
Subject: Processors
Manufacturer: Various

Application Profiling Tells the Story

It should come as no surprise to anyone that has been paying attention the last two months that the latest AMD Ryzen processors and architecture are getting a lot of attention. Ryzen 7 launched with a $499 part that bested the Intel $1000 CPU at heavily threaded applications, and Ryzen 5 launched with great value as well, positioning a 6-core/12-thread CPU against quad-core parts from the competition. But part of the story that permeated through both the Ryzen 7 and the Ryzen 5 processor launches was the situation surrounding gaming performance, in particular 1080p gaming, and the surprising delta that we see in some games.

Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then-current stance on the results, backing up our claims on the scheduler and presenting a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update occur courtesy of Ashes of the Singularity: Escalation, where an engine update to utilize more threads resulted in as much as a 31% increase in average frame rate.

ping-amd.png

As a part of that dissection of the Windows 10 scheduler story, we also discovered interesting data about the CCX construction and how the two modules on the 1800X communicated. The result was significantly longer thread-to-thread latencies than we had seen in any platform before, and it was because of the fabric implementation that AMD integrated with the Zen architecture.

This has led me down another rabbit hole recently, wondering if we could further compartmentalize the gaming performance of the Ryzen processors using memory latency. As I showed in my Ryzen 5 review, memory frequency and throughput directly correlate with gaming performance improvements, on the order of 14% in some cases. But what about looking solely at memory latency?
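For readers wondering how memory latency is isolated from bandwidth in the first place, the standard trick is a dependent pointer chase, where each load must finish before the next can start. A minimal sketch of the idea (not the actual tooling used for our testing):

```cpp
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    // ~128 MB of 8-byte indices, far larger than any CPU cache.
    const size_t n = 1 << 24;
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), size_t{0});

    // Sattolo's algorithm: one big cycle, so the chase visits every
    // entry instead of getting trapped in a short, cache-resident loop.
    std::mt19937_64 rng{42};
    for (size_t i = n - 1; i > 0; --i) {
        std::uniform_int_distribution<size_t> pick(0, i - 1);
        std::swap(next[i], next[pick(rng)]);
    }

    // Each iteration's address depends on the previous load, so the
    // average time per step approximates main memory latency.
    const size_t iters = 1 << 25;
    size_t idx = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < iters; ++i) idx = next[idx];
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
    std::printf("~%.1f ns per dependent load (end index %zu)\n", ns, idx);
}
```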

Continue reading our analysis of memory latency, 1080p gaming, and how it impacts Ryzen!!

IBM Model M? Pah, get an Underwood based keyboard if you want to impress!

Subject: General Tech | May 15, 2017 - 01:55 PM |
Tagged: input, AZiO, MK Retro

Who needs a mechanical keyboard inspired by something from a mere three decades ago when you can purchase one that looks like a manual typewriter you would see in a museum exhibit?  The AZiO MK Retro's raised circular keys use AZiO's own OARMY Olive switches, which NikKTech postulates were sourced from Kailh.  If you are desperate for a unique looking keyboard without any sign of RGB-itis, then feast your eyes below and follow that link to the review.

azio_mk_retroa.jpg

"With the MK Retro typewriter mechanical keyboard AZiO takes us for a trip down memory lane and although it leaves us asking for me we do feel they're on the right track."

Here is some more Tech News from around the web:

Tech Talk

Source: NikKTech

The internet is whipping out some Core-i9 tales

Subject: General Tech | May 15, 2017 - 12:36 PM |
Tagged: rumour, Intel, Core i9

A two-part rumour is circulating the internet this morning, involving new processors and a new naming convention.  The leak that The Inquirer posted about this morning reveals six new Intel processors: two Kaby Lake-X processors with four cores, running at a base clock of 4GHz or 4.3GHz depending on the model, with TDPs of 112W.  More interesting are the new Skylake-X processors, which are referred to as Core i9 models, running from the i9-7800X @ 3.6GHz base to the i9-7920X, which runs at an unspecified speed.  All will have four times the L2 cache of the current i7-7700K, with Turbo Boost 2.0 to increase the frequency of several cores at once as well as Turbo Boost Max 3.0 for single-threaded workloads.

It will be interesting to see if the Core i7 family continues as an upper middle class of processors with the i9 family taking over its current standing, or if the new processors will be priced like high-end Xeons.

Capture.PNG

"The slide, which an Anandtech forum member claims is an internal Intel document, provides details of four new Skylake-X processors and two Kaby Lake-X CPUs. The Skylake-X processors are described as Core i9, and if the leak is genuinely - and that's a fairly big if - the new Core i9s will replace Core i7s as Intel's top-of the-pile PC chipset range."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer

Inno3D Introduces a Single Slot GTX 1050 Ti Graphics Card

Subject: Graphics Cards | May 13, 2017 - 11:46 PM |
Tagged: SFF, pascal, nvidia, Inno3D, GP107

Hong Kong based Inno3D recently introduced a single slot graphics card using NVIDIA’s mid-range GTX 1050 Ti GPU. The aptly named Inno3D GeForce GTX 1050 Ti (1-Slot Edition) combines the reference-clocked Pascal GPU, 4GB of GDDR5 memory, and a shrouded single-fan cooler clad in red and black.

Inno3D GeForce GTX 1050 Ti 1 Slot Edition.png

Around back, the card offers three display outputs: HDMI 2.0, DisplayPort 1.4, and DVI-D. The single slot cooler is a bit of an odd design, with a thin axial fan (rather than a centrifugal type) that sits over a fake plastic fin array. Note that these fins do not actually cool anything; in fact, the PCB of the card does not even extend out to where the fan is. Presumably the fins are there primarily for aesthetics and secondarily to channel a bit of the air the fan pulls down. Air is pulled in and pushed over the actual GPU heatsink (under the shroud) and out the vent holes next to the display connectors. Air is circulated through the case and is not actually exhausted like traditional dual slot (and some single slot) designs. I am curious how the choice of fan and vents will affect cooling performance.

Overclocking is going to be limited on this card, which comes out-of-the-box clocked at NVIDIA reference speeds of 1290 MHz base and 1392 MHz boost for the GPU’s 768 cores and 7 GT/s for the 4GB of GDDR5 memory. The card measures 211 mm (~8.3”) long and should fit in just about any case. Since it pulls all of its power from the slot, it might be a good option for the slim towers OEMs like to use these days, to get a bit of gaming out of a retail PC.

Inno3D is not yet talking availability or pricing, but looking at their existing lineup I would expect an MSRP around $150.

Source: Tech Report

A dock for your Samsung S8? That would be DeX

Subject: Mobile | May 12, 2017 - 04:33 PM |
Tagged: Samsung, DeX, galaxy s8, galaxy s8+

Move over Surface, the new Galaxies are getting a docking station too.  The DeX is a charging port with talent, adding USB-A 2.0, ethernet, HDMI, and a USB-C charging port to your phone's capabilities.  Plug the dock into a monitor and you will be presented with a limited Android system which supports various Samsung apps, as well as Microsoft Office apps, Gmail and YouTube; The Inquirer tested out a few others for compatibility in their review.  There is also a virtual desktop app that will let you take over a desktop computer, according to the page on Samsung.  Gaming is not particularly good unless you utilize the workaround The Inq discovered: pick up a USB-C to HDMI adapter and bypass the DeX entirely, rather than relying on the native screen mirroring that occurs with the DeX software.

DeX5-540x334.jpg

"In a nutshell, DeX is a dock for the Galaxy S8 and Galaxy S8+ that outputs a desktop experience from your phone to a big screen. It acts as little more than a portal, relying entirely on your phone's processing power to generate the experience, doing so via HDMI, making it compatible with most TVs and monitors."

Here are some more Mobile articles from around the web:

More Mobile Articles

 

Source: The Inquirer

Spring component refresh on your mind?

Subject: Systems | May 12, 2017 - 03:05 PM |
Tagged: system guide

2017 has been a good year for system guides, as we finally have new hardware with a compelling reason to upgrade.  AMD now has new processors, and the refreshed Polaris cards may tempt those who have a GPU several generations out of date.  NVIDIA released a graphics card which will tempt those who want the best, and the SSD market continues to grow exponentially.

The Tech Report have updated their build recommendations for May and you can check out their new builds right here.  The recommendations span budgets from around $500 to $5000 so almost everyone is included.

ab350-gaming3.jpg

"AMD's Ryzen 5 CPUs are shaking up the midrange CPU market, and we're here to help builders navigate this unfamiliar terrain with the latest edition of our System Guide. We also account for the introduction of AMD's Radeon RX 500-series graphics cards."

Here are some more Systems articles from around the web:

Systems