
Add an ARM to your cortex

Subject: General Tech | May 18, 2017 - 12:29 PM |
Tagged: cyborgs, arm

Researchers at the Center for Sensorimotor Neural Engineering are working on a way you can truly have an SoC on the brain, partnering with ARM to develop chips which can be implanted in the brain.  The goal is not to grant you a neural interface nor add a couple of petabytes to your long-term memory, but to help treat people suffering from paralysis due to stroke or other damage to the brain.  There is the small problem of heat: brain tissue will be much more susceptible to damage from implanted devices than an organ in the torso, where a pacemaker has space in which to dissipate excess heat.  We are still a long way off, but you can read up on the current state of the research by following the links at The Inquirer.


"CHIP GIANT ARM is teaming up with US researchers on a project develop human brain implants aimed at helping paralysed patients as well as stroke and Alzheimer's patients."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Register

Podcast #450 - AMD Ryzen, AMD EPYC, AMD Threadripper, AMD Vega, and more non-AMD news!

Subject: Editorial | May 18, 2017 - 11:46 AM |
Tagged: youtube tv, western digital, video, Vega, Threadripper, spir-v, ryzen, podcast, opencl, Google VR, EPYC, Core i9, battletech, amd

PC Perspective Podcast #450 - 05/18/17

Join us for AMD Announcements, Core i9 leaks, OpenCL updates, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano

Peanut Gallery: Alex Lustenberg

Program length: 1:20:36

Podcast topics of discussion:

  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
    1. Ryan: Gigabit LTE please hurry
    2. Allyn: TriboTEX (nanotech engine oil additive)
  4. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

 

 


AMD Teases Ryzen Mobile APUs with Zen CPU Cores and On-Die Vega Graphics

Subject: Processors | May 18, 2017 - 01:01 AM |
Tagged: Zen, Vega, ryzen mobile, ryzen, raven ridge, APU, amd

AMD teased its upcoming Zen-based APUs aimed at mobile devices during its Financial Analyst Day where the company revealed the "Raven Ridge" parts will be aptly known as Ryzen Mobile. The Tech Report managed to acquire a couple slides which confirm some of the broader specifications and reveal how they stack up to AMD's latest Bristol Ridge A-Series APUs – at least as far as AMD's internal testing is concerned (which is to say not independently verified yet so take with a grain of salt).


Ryzen Mobile appears to be the new consumer-facing brand name for what has so far been code named "Raven Ridge". These parts will use a Zen-based CPU, Vega GPU, and integrated chipset. Thanks to the slides, it is now confirmed that the Vega-based graphics processor will be on-die. What has not been confirmed is whether the chipset will be on die or on package, as well as exact specifications on CPU core counts, GPU Compute Units, cache, memory support, and I/O like PCI-E lanes (you know, all the good stuff! heh). Note that rumors so far point towards Raven Ridge / Ryzen Mobile utilizing a single 4-core (8-thread) CCX, per-core L2, 8MB shared L3 cache, and a Vega-based GPU with 1024 cores. HBM2 has also been rumored for a while but we will have to wait for more leaks and/or an official announcement to know for sure if these Ryzen Mobile parts aimed for the second half of 2017 will have that (hopefully!).

With that said, according to AMD, Ryzen Mobile will offer up to 50% better CPU performance, 40% better GPU performance, and will use up to 50% less power than the previous 7th generation (Excavator-based) A-Series APUs (e.g. FX 9830P and A12-9730P). Those are some pretty bold claims, but still within the realm of possibility. Zen and Vega are both much more efficient architectures and AMD is also benefiting from a smaller process node (TSMC 28nm vs Samsung / GlobalFoundries 14nm FinFET). I do wonder how high the APUs will be able to clock on the CPU side of things with 4 GHz seeming to be the wall for most Zen-based Summit Ridge chips, so most of the CPU performance improvement claims will have to come from architecture changes rather than increases in clockspeeds (the highest clocked A-Series Bristol Ridge ran at up to 3.7 GHz and I would expect Raven Ridge to be around that, maybe the flagship part turbo-ing a bit more). Raven Ridge will benefit from the shared L3 cache and, more importantly, twice as many threads (4 vs 8) and this may be where AMD is primarily getting that 50% more CPU performance number from. On the graphics side of things, it looks like Bristol Ridge with its R7 graphics (GCN 3 (Tonga/Fiji on the Desktop)) had up to 512 cores. Again, taking the rumors into account which say that Raven Ridge will have a 1024 core Vega GPU, this may be where AMD is getting the large performance increase from (the core increase as well as newer architecture). On the other hand, the 40% number could suggest Ryzen Mobile will not have twice the GPU cores. I would guess that 1024 might be possible, but running at lower clocks and that is where the discrepancy is. I will admit I am a bit skeptical about the 1024 (16 CU) number though because that is a huge jump... I guess we will see though!

Further, I am curious if Ryzen Mobile will use HBC (high bandwidth cache) and if HBM2 does turn out to be utilized how that will play into the HBC and whether or not we will finally see the fruits of AMD's HSA labors! I think we will see most systems use DDR4, but certainly some SKUs could use HBM2 and that would definitely open up a lot of performance possibilities on mobile!

There is still a lot that we do not know, but Ryzen Mobile is coming and AMD is making big promises that I hope it delivers on. The company is aiming the new chips at a wide swath of the mobile market, from budget laptops and tablets to convertibles, and even has its sights set on premium thin-and-lights. The mobile space is one where AMD has struggled to get design wins even when it had good parts for that type of system. AMD will really need to push and hit Ryzen Mobile out of the park to make inroads into the laptop, tablet, and ultrabook markets!

AMD plans to launch the consumer version of Ryzen Mobile in the second half of this year (presumably with systems featuring the new APUs out in time for the holidays if not for the back-to-school end of summer rush). The commercial SKUs (which I think refer to the Ryzen equivalent of AMD's Pro series APUs. Update: Mobile Ryzen Pro) will follow in the first half of 2018.

What are your thoughts on Ryzen Mobile and the alleged performance and power characteristics? Do you think the rumors are looking more or less correct?


Source: Tech Report

Western Digital Launches 10TB Red and Red Pro

Subject: Storage | May 17, 2017 - 09:57 PM |
Tagged: western digital, wdc, WD, Red Pro, red, NAS, helium, HelioSeal, hdd, Hard Drive, 10TB

Western Digital increased the capacity of their Red and Red Pro NAS hard disk lines to 10TB. Having acquired the HelioSeal technology, which enables helium-filled, hermetically sealed drives of even higher capacities, through the HGST acquisition, WD previously used that tech to expand the Red lines to 8TB (our review of those here). HelioSeal has certainly proven itself, as over 15 million such units have shipped so far.


We knew it was just a matter of time before we saw a 10TB Red and Red Pro, as it has been some time since the HGST He10 launched, and Western Digital's own 10TB Gold (datacenter) drive has been shipping for a while now.

  • Red 10TB:        $494
  • Red Pro 10TB: $533

The MSRPs look a bit high based on the lower cost/GB of the 8TB model, but given some time on the market and volume shipping, these should come down to parity with the lesser capacities.

Press blast appears after the break.

Good news Battletech fans, Paradox will publish Harebrained Schemes new game

Subject: General Tech | May 17, 2017 - 02:56 PM |
Tagged: battletech, paradox, gaming, Kickstarter

The Kickstarter for the new turn-based Battletech game was wildly successful, with 41,733 backers pledging $2,785,537, and now we have even more good news.  Paradox Interactive, they of the continual updates and add-ons to published games, have agreed to publish the new Battletech game.  Not only does this ensure solid support for players after release, it could also mean we see a long lineup of expansions; Paradox just added another major expansion to EU4 four years after its release.  For backers there is even more news: the closed beta will kick off in June and there is a new video of multiplayer gameplay you can watch.

"The long life of these internally developed games is a core part of Paradox’s business model, but the company is also expanding as a publisher. That includes not only third-party originals like Battletech, but ports of existing titles such as Prison Architect on tablet."

Here is some more Tech News from around the web:

Gaming

Google Daydream Standalone VR Headset Powered by Snapdragon 835

Subject: Graphics Cards, Mobile | May 17, 2017 - 02:30 PM |
Tagged: snapdragon 835, snapdragon, qualcomm, google io 2017, google, daydream

During the Google I/O keynote, Google and Qualcomm announced a partnership to create a reference design for a standalone Daydream VR headset using Snapdragon 835 to enable the ecosystem of partners to have deliverable hardware in consumers’ hands by the end of 2017. The timeline is aggressive, impressively so, thanks in large part to the previous work Qualcomm had done with the Snapdragon-based VR reference design we first saw in September 2016. At the time the Qualcomm platform was powered by the Snapdragon 820. Since then, Qualcomm has updated the design to integrate the Snapdragon 835 processor and platform, improving performance and efficiency along the way.

Google has now taken the reference platform and made some modifications to integrate Daydream support and will offer it to partners to showcase what a standalone, untethered VR solution can do. Even though Google Daydream has been shipping in the form of slot-in phones with a “dummy” headset, integrating the whole package into a dedicated device offers several advantages.

First, I expect the standalone units to have better performance than the phones used as a slot-in solution. With the ability to tune the device to higher thermal limits, Qualcomm and Google will be able to ramp up the clocks on the GPU and SoC to get optimal performance. And, because there is more room for a larger battery on the headset design, there should be an advantage in battery life along with the increase in performance.


The Qualcomm Snapdragon 835 VR Reference Device

It is also likely that the device will have better thermal properties than those using high-end smartphones today. In other words, with more space, there should be more area for cooling and thus the unit shouldn’t be as warm on the consumer’s face.

I would assume as well that the standalone units will have improved hardware over the smartphone iterations. That means better gyros, cameras, sensors, etc. that could lead to improved capability for the hardware in this form. Better hardware, tighter and more focused integration, and better software support should mean lower latency and better VR gaming across the board, assuming everything is implemented as it should be.

The only major change that Google has made to this reference platform is the move away from Qualcomm’s 6DOF technology (6 degrees of freedom, allowing you to move in real space and have all necessary tracking done on the headset itself) and to what Google calls WorldSense. Based on the Google Project Tango technology, this is the one area I have questions about going forward. I have used three different Tango-enabled devices thus far with long-term personal testing and can say that while the possibilities for it were astounding, the implementations had been…slow. For VR that 100% cannot be the case. I don’t yet know how different its integration is from what Qualcomm had done previously, but hopefully Google will leverage the work Qualcomm has already done with its platform.

Google is claiming that consumers will have hardware based on this reference design in 2017 but no pricing has been shared with me yet. I wouldn’t expect it to be inexpensive though – we are talking about all the hardware that goes into a flagship smartphone plus a little extra for the VR goodness. We’ll see how aggressive Google wants its partners to be and if it is willing to absorb any of the upfront costs with subsidy.

Let me know if this is the direction you hope to see VR move – away from tethered PC-based solutions and into the world of standalone units.

Source: Qualcomm

Meet the GT 1030

Subject: Graphics Cards | May 17, 2017 - 01:55 PM |
Tagged: nvidia, msi, gt 1030, gigabyte, evga, zotac

The GT 1030 quietly launched from a variety of vendors late yesterday amidst the tsunami of AMD announcements.  The low profile card is advertised as offering twice the performance of the iGPU found on Intel Core i5 processors and in many cases is passively cooled.  From the pricing of the cards available now, expect to pay around $75 to $85 for this new card.


EVGA announced a giveaway of several GT 1030s at the same time as they released the model names.  The card which is currently available retails for $75 and is clocked at 1290MHz base, 1544MHz boost, and has 384 CUDA cores.  The 2GB of GDDR5 is clocked a hair over 6GHz and runs on a 64-bit bus, providing a memory bandwidth of 48.06 GB/s.  Two of their three models offer HDMI + DVI-D out, while the third has a pair of DVI-D connectors.
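
For anyone curious where that 48.06 GB/s figure comes from, it is just the effective memory data rate multiplied by the bus width in bytes; the quick C++ sketch below works it through. The 6008 MT/s effective rate is an assumption back-calculated from the quoted bandwidth ("a hair over 6GHz"), not a figure taken from EVGA's spec sheet.

```cpp
#include <cstdio>

int main() {
    // Assumed effective GDDR5 data rate; 6008 MT/s is a back-calculation from
    // the quoted 48.06 GB/s, not an official specification.
    const double effective_mts  = 6008.0;  // million transfers per second
    const double bus_width_bits = 64.0;    // GT 1030 memory bus width

    // bandwidth (GB/s) = transfers per second * bytes per transfer
    const double gb_per_s = effective_mts * 1e6 * (bus_width_bits / 8.0) / 1e9;
    std::printf("Memory bandwidth: %.2f GB/s\n", gb_per_s);  // ~48.06 GB/s
    return 0;
}
```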


Zotac's offering provides slightly lower clocks, a base of 1227MHz and a boost of 1468MHz; however, the VRAM remains unchanged at 6GHz.  It pairs HDMI 2.0b with a DVI port and comes with a low profile bracket if needed for an SFF build.


MSI went all out and released a half dozen models, two of which you can see above.  The GT 1030 AERO ITX 2G OC is actively cooled, which allows it to reach a 1265MHz base and 1518MHz boost clock.  The passively cooled GT 1030 2GH LP OCV1 runs at the same frequencies and fits in a single slot externally; however, you will need to leave space inside the system as the heatsink takes up an additional slot internally.  Both are fully compatible with the Afterburner Overclocking Utility and its features such as the Predator gameplay recording tool.


Last but not least are a pair from Gigabyte, the GT 1030 Low Profile 2G and Silent Low Profile 2G cards.  The cards both offer two modes: in OC Mode the base clock is 1252MHz and the boost clock 1506MHz, while in Gaming Mode you will run at 1227MHz base and 1468MHz boost.

Source: EVGA

Dating Intel and AMD in 2017, we're going out for chips

Subject: General Tech | May 17, 2017 - 12:30 PM |
Tagged: Intel, amd, rumour, release dates, ryzen, skylake-x, kaby lake x, Threadripper, X399, coffee lake

DigiTimes has posted an article covering the probable launch dates of AMD's new CPUs and GPUs as well as Intel's reaction to the release.  Not all of these dates are confirmed but it is worth noting as these rumours are often close to those eventually announced.  Naples will be the first, with the server chips launching at the end of June, but that is just the start. July is the big month for AMD, with the lower end Ryzen 3 chips hitting the market as well as the newly announced 16 core Threadrippers and the X399 chipset.  That will also be the month we see the Vega Frontier Edition graphics cards arrive.

Intel's Basin Falls platform, Skylake-X and Kaby Lake-X along with the associated X299 chipset, is still scheduled for a Computex reveal and a late June or early August release.  Coffee Lake is getting pushed ahead, however; its launch has been moved up to late August instead of the beginning of next year.

Even with Intel's counters, AMD's balance sheet is likely to be looking better and better as the year goes on which is great news for everyone ... except perhaps Intel and NVIDIA.


"Demand for AMD's Ryzen 7- and Ryzen 5-series CPU products has continued rising, which may allow the chipmaker to narrow its losses to below US$50 million for the second quarter of 2017. With Intel also rumored to pay licensing fees to AMD for its GPUs, some market watchers believe AMD may turn profitable in the second quarter or in the third."

Here is some more Tech News from around the web:

Tech Talk

 

Source: DigiTimes
Manufacturer: AMD

Is it time to buy that new GPU?

Testing commissioned by AMD. This means that AMD paid us for our time, but had no say in the results or presentation of them.

Earlier this week Bethesda and Arkane Studios released Prey, a first-person shooter that is a re-imagining of the 2006 game of the same name. Fans of System Shock will find a lot to love about this new title and I have found myself enamored with the game…in the name of science of course.


While doing my due diligence and performing some preliminary testing to see if we would utilize Prey for graphics testing going forward, AMD approached me to discuss this exact title. With the release of the Radeon RX 580 in April, one of the key storylines is that the card offers a reasonably priced upgrade path for users of 2+ year old hardware. With that upgrade you should see some substantial performance improvements and as I will show you here, the new Prey is a perfect example of that.

Targeting the Radeon R9 380, a graphics card that was originally released back in May of 2015, the RX 580 offers substantially better performance at a very similar launch price. The same is true for the GeForce GTX 960: launched in January of 2015, it is slightly longer in the tooth. AMD’s data shows that 80% of the users on Steam are running on R9 380X or slower graphics cards and that only 10% of them upgraded in 2016. Considering the great GPUs that were available then (including the RX 480 and the GTX 10-series), it seems more and more likely that we are going to hit an upgrade inflection point in the market.


A simple experiment was set up: does the new Radeon RX 580 offer a worthwhile upgrade path for the many users of R9 380- or GTX 960-class graphics cards (or older)?

                   Radeon RX 580              Radeon R9 380    GeForce GTX 960
GPU                Polaris 20                 Tonga Pro        GM206
GPU Cores          2304                       1792             1024
Rated Clock        1340 MHz                   918 MHz          1127 MHz
Memory             4GB / 8GB                  4GB              2GB / 4GB
Memory Interface   256-bit                    256-bit          128-bit
TDP                185 watts                  190 watts        120 watts
MSRP (at launch)   $199 (4GB) / $239 (8GB)    $219             $199

Continue reading our look at the Radeon RX 580 in Prey!

AMD Compares 1x 32-Core EPYC to 2x 12-Core Xeon E5s

Subject: Processors | May 17, 2017 - 04:05 AM |
Tagged: amd, EPYC, 32 core, 64 thread, Intel, Broadwell-E, xeon

AMD has formally announced their EPYC CPUs. While Sebastian covered the product specifications, AMD has also released performance claims against a pair of Intel’s Broadwell-E Xeons. While Intel’s E5-2650 v4 processors have an MSRP of around $1170 USD each, we don’t know how that price will compare to AMD’s offering. At first glance, pitting thirty-two cores against two twelve-core chips seems a bit unfair, although it could end up being a very fair comparison if the prices align.


Image Credit: Patrick Moorhead

Patrick Moorhead, who was at the event, tweeted out photos of a benchmark where Ubuntu was compiled with GCC. It looks like EPYC completed in just 33.7s while the dual Broadwell-E system took 37.2s (making AMD’s part ~9.5% faster). While this, again, stems from having a third more cores, the value depends on how much AMD is going to charge you for them versus Intel’s current pricing structure.


Image Credit: Patrick Moorhead

This one chip also has 128 PCIe lanes, rather than Intel’s 80 total lanes spread across two chips.

AMD Announces Radeon Vega Frontier Edition Graphics Cards

Subject: Graphics Cards | May 16, 2017 - 07:39 PM |
Tagged: Vega, reference, radeon, graphics card, gpu, Frontier Edition, amd

AMD has revealed their concept of a premium reference GPU for the upcoming Radeon Vega launch, with the "Frontier Edition" of the new graphics cards.


"Today, AMD announced its brand-new Radeon Vega Frontier Edition, the world’s most powerful solution for machine learning and advanced visualization aimed to empower the next generation of data scientists and visualization professionals -- the digital pioneers forging new paths in their fields. Designed to handle the most demanding design, rendering, and machine intelligence workloads, this powerful new graphics card excels in:

  • Machine learning. Together with AMD’s ROCm open software platform, Radeon Vega Frontier Edition enables developers to tap into the power of Vega for machine learning algorithm development. Frontier Edition delivers more than 50 percent more performance than today’s most powerful machine learning GPUs.
  • Advanced visualization. Radeon Vega Frontier Edition provides the performance required to drive increasingly large and complex models for real-time visualization, physically-based rendering and virtual reality through the design phase as well as rendering phase of product development.
  • VR workloads. Radeon Vega Frontier Edition is ideal for VR content creation supporting AMD’s LiquidVR technology to deliver the gripping content, advanced visual comfort and compatibility needed for next-generation VR experiences.
  • Revolutionized game design workflows. Radeon Vega Frontier Edition simplifies and accelerates game creation by providing a single GPU optimized for every stage of a game developer’s workflow, from asset production to playtesting and performance optimization."


From the image provided on the official product page it appears that there will be both liquid-cooled (the gold card in the background) and air-cooled variants of these "Frontier Edition" cards, which AMD states will arrive with 16GB of HBM2 and offer 1.5x the FP32 performance and 3x the FP16 performance of the Fury X.

From AMD:

Radeon Vega Frontier Edition

  • Compute units: 64
  • Single precision compute performance (FP32): ~13 TFLOPS
  • Half precision compute performance (FP16): ~25 TFLOPS
  • Pixel Fillrate: ~90 Gpixels/sec
  • Memory capacity: 16 GBs of High Bandwidth Cache
  • Memory bandwidth: ~480 GBs/sec

The availability of the Radeon Vega Frontier Edition was announced as "late June", so we should not have too long to wait for further details, including pricing.

Source: AMD

AMD's 16-Core Ryzen Threadripper CPUs Coming This Summer

Subject: Processors | May 16, 2017 - 07:22 PM |
Tagged: Zen, Threadripper, ryzen, processor, HEDT, cpu, amd

AMD revealed their entry into high-end desktop (HEDT) with the upcoming Ryzen "Threadripper" CPUs, which will feature up to 16 cores and 32 threads.

Threadripper 2.png

Little information was revealed along with the announcement, other than to announce availability as "summer 2017", though rumors and leaks surrounding Threadripper have been seen on the internet (naturally) leading up to today's announcement, including this one from Wccftech. Not only will Threadripper (allegedly) offer quad-channel memory support and 44 PCI Express lanes, but the chips are also rumored to be released in a massive 4094-pin package (same as "Naples" aka EPYC) that most assuredly will not fit into the AM4 socket.


Image credit: Wccftech

These Threadripper CPUs follow the lead of Intel's HEDT parts on X99, which are essentially re-appropriated Xeons with higher clock speeds and some feature differences such as a lack of ECC memory support. It remains to be seen what exactly will separate the enthusiast AMD platform from the EPYC datacenter platform, though the rumored base clock speeds are much higher with Threadripper.

Source: AMD

AMD Announces EPYC: A Massive 32-Core Datacenter SoC

Subject: Processors | May 16, 2017 - 06:49 PM |
Tagged: Zen, server, ryzen, processor, EPYC, datacenter, cpu, amd, 64 thread, 32 core

AMD has announced their new datacenter CPU built on the Zen architecture, which the company is calling EPYC. And epic they are, as these server processors will be offered with up to 32 cores and 64 threads, 8 memory channels, and 128 PCI Express lanes per CPU.


Some of the details about the upcoming "Naples" server processors (now EPYC) were revealed by AMD back in March, when the upcoming server chips were previewed:

"Naples" features:

  • A highly scalable, 32-core System on Chip (SoC) design, with support for two high-performance threads per core
  • Industry-leading memory bandwidth, with 8-channels of memory per "Naples" device. In a 2-socket server, support for up to 32 DIMMS of DDR4 on 16 memory channels, delivering up to 4 terabytes of total memory capacity.
  • The processor is a complete SoC with fully integrated, high-speed I/O supporting 128 lanes of PCIe, negating the need for a separate chip-set
  • A highly-optimized cache structure for high-performance, energy efficient compute
  • AMD Infinity Fabric coherent interconnect for two "Naples" CPUs in a 2-socket system
  • Dedicated security hardware 


Compared to Ryzen (or should it be RYZEN?), EPYC offers a huge jump in core count and available performance - though AMD's other CPU announcement (Threadripper) bridges the gap between the desktop and datacenter offerings with an HEDT product. This also serves to bring AMD's CPU offerings to parity with the Intel product stack with desktop/high performance desktop/server CPUs.


EPYC is a large processor. (Image credit: The Tech Report)

While specifications were not offered, there have been leaks (of course) to help fill in the blanks. Wccftech offers these specs for EPYC (on the left):


(Image credit: Wccftech)

We await further information from AMD about the EPYC launch.

Source: AMD
Subject: General Tech
Manufacturer: The Khronos Group

It Started with an OpenCL 2.2 Press Release

Update (May 18 @ 4pm EDT): A few commenters across the internet believe that the statements from The Khronos Group were inaccurately worded, so I emailed them yet again. The OpenCL working group has released yet another statement:

OpenCL is announcing that their strategic direction is to support CL style computing on an extended version of the Vulkan API. The Vulkan group is agreeing to advise on the extensions.

In other words, this article was and is accurate. The Khronos Group are converging OpenCL and Vulkan into a single API: Vulkan. There was no misinterpretation.

Original post below

Earlier today, we published a news post about the finalized specifications for OpenCL 2.2 and SPIR-V 1.2. This was announced through a press release that also contained an odd little statement at the end of the third paragraph.

We are also working to converge with, and leverage, the Khronos Vulkan API — merging advanced graphics and compute into a single API.


This statement seems to suggest that OpenCL and Vulkan are expecting to merge into a single API for compute and graphics at some point in the future. This seemed like a huge announcement to bury that deep into the press blast, so I emailed The Khronos Group for confirmation (and any further statements). As it turns out, this interpretation is correct, and they provided a more explicit statement:

The OpenCL working group has taken the decision to converge its roadmap with Vulkan, and use Vulkan as the basis for the next generation of explicit compute APIs – this also provides the opportunity for the OpenCL roadmap to merge graphics and compute.

This statement adds a new claim: The Khronos Group plans to merge OpenCL into Vulkan, specifically, at some point in the future. Making the move in this direction, from OpenCL to Vulkan, makes sense for a handful of reasons, which I will highlight in my analysis, below.

Going Vulkan to Live Long and Prosper?

The first reason for merging OpenCL into Vulkan, from my perspective, is that Apple, who originally created OpenCL, still owns the trademarks (and some other rights) to it. The Khronos Group licenses these bits of IP from Apple. Vulkan, based on AMD’s donation of the Mantle API, should be easier to manage from the legal side of things.


The second reason for going in that direction is the actual structure of the APIs. When Mantle was announced, it looked a lot like an API that wrapped OpenCL with a graphics-specific layer. Also, Vulkan isn’t specifically limited to GPUs in its implementation.

Aside: When you create a device queue, you can query the driver to see what type of device it identifies as by reading its VkPhysicalDeviceType. Currently, as of Vulkan 1.0.49, the options are Other, Integrated GPU, Discrete GPU, Virtual GPU, and CPU. While this is just a clue, to make it easier to select a device for a given task, and isn’t useful to determine what the device is capable of, it should illustrate that other devices, like FPGAs, could support some subset of the API. It’s just up to the developer to check for features before they’re used, and target it at the devices they expect.
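
To make that aside a bit more concrete, here is a minimal C++ sketch (error handling mostly omitted, and it assumes a Vulkan loader and driver are installed) that enumerates the physical devices and prints the VkPhysicalDeviceType reported for each; note that nothing in it is GPU-specific.

```cpp
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    // Ask how many physical devices the driver exposes, then fetch them.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        // deviceType is only a hint for device selection, as noted above.
        const char* type =
            props.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU ? "Integrated GPU" :
            props.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU   ? "Discrete GPU"   :
            props.deviceType == VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU    ? "Virtual GPU"    :
            props.deviceType == VK_PHYSICAL_DEVICE_TYPE_CPU            ? "CPU"            : "Other";
        std::printf("%s: %s\n", props.deviceName, type);
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```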

If you were to go in the other direction, you would need to wedge graphics tasks into OpenCL. You would be creating Vulkan all over again. From my perspective, pushing OpenCL into Vulkan seems like the path of least resistance.

The third reason (that I can think of) is probably marketing. DirectX 12 isn’t attempting to seduce FPGA developers. Telling a game studio to program their engine on a new, souped-up OpenCL might make them break out in a cold sweat, even if both parties know that it’s an evolution of Vulkan with cross-pollination from OpenCL. OpenCL developers, on the other hand, are probably using the API because they need it, and are less likely to be shaken off.

What OpenCL Could Give Vulkan (and Vice Versa)

From the very outset, OpenCL and Vulkan were occupying similar spaces, but there are some things that OpenCL does “better”. The most obvious, and previously mentioned, element is that OpenCL supports a wide range of compute devices, such as FPGAs. That’s not the limit of what Vulkan can borrow, although it could make for an interesting landscape if FPGAs become commonplace in the coming years and decades.


Personally, I wonder how SYCL could affect game engine development. This standard attempts to guide GPU- (and other device-) accelerated code into a single-source, C++ model. For over a decade, Tim Sweeney of Epic Games has talked about writing engines like he did back in the software-rendering era, but without giving up the ridiculous performance (and efficiency) provided by GPUs.

Long-time readers of PC Perspective might remember that I was investigating GPU-accelerated software rendering in WebCL (via Nokia’s implementation). The thought was that I could concede the raster performance of modern GPUs and make up for it with added control, the ability to explicitly target secondary compute devices, and the ability to run in a web browser. This took place in 2013, before AMD announced Mantle and browser vendors expressed a clear disinterest in exposing OpenCL through JavaScript. Seeing the idea was about to be crushed, I pulled out the GPU-accelerated audio ideas into a more-focused project, but that part of my history is irrelevant to this post.

The reason for bringing up this anecdote is because, if OpenCL is moving into Vulkan, and SYCL is still being developed, then it seems likely that SYCL will eventually port into Vulkan. If this is the case, then future game engines can gain benefits that I was striving toward without giving up access to fixed-function features, like hardware rasterization. If Vulkan comes to web browsers some day, it would literally prune off every advantage I was hoping to capture, and it would do so with a better implementation.


More importantly, SYCL is something that Microsoft cannot provide with today’s DirectX.

Admittedly, it’s hard to think of something that OpenCL can acquire from Vulkan, besides just a lot more interest from potential developers. Vulkan was already somewhat of a subset of OpenCL that had graphics tasks (cleanly) integrated over top of it. On the other hand, OpenCL has been struggling to acquire mainstream support, so that could, in fact, be Vulkan’s greatest gift.

The Khronos Group has not provided a timeline for this change. It’s just a roadmap declaration.

Lian Li's New PC-T70 Test Bench

Subject: Cases and Cooling | May 16, 2017 - 06:05 PM |
Tagged: Test Bench, T70-1, PC-T70, open air case, Lian Li, acrylic

Lian Li has designed an open air case with an optional acrylic enclosure to help simulate normal case environs or to protect your components if you build a system you want to showcase.  The PC-T70 is primarily designed as a test bench so you can set up an E-ATX, ATX, or Micro-ATX/ITX motherboard and easily swap out components while benchmarking hardware or software.  The problem with test benches is one of temperature; most of us set up our systems in enclosed cases and the temperatures experienced will be different than in a case fully exposed to any wafting breeze.  Lian Li has overcome this with their optional T70-1, a set of acrylic side pieces and top with mounts for fans or radiators which allow you to simulate a closed case environment when you are reporting on running temperatures.


There is another use for this case which might tempt a different set of users.   The case fully exposes your components which makes this a great base to build an impressive mod on, or simply to show off all of those RGB LEDs you paid good money for.  The acrylic case ensures that your system cannot be permanently killed by a passing feline as well as providing mounting points for an impressive watercooling setup.  You can check out the full PR below the specs and video.


New PC-T70 Test Bench Simulates Any Case Environment
Lian Li’s New Modular Bench Transforms for Both Closed-Air and Open-Air Testing

May 16, 2017, Keelung, Taiwan - Lian-Li Industrial Co. Ltd is eager to announce the PC-T70 test bench. After productive collaboration, taking feedback from high-end PC hardware reviewers, Lian Li sought to create a test bench that could both provide unhindered access for enthusiasts who want to rapidly swap hardware, and those who like to use their test benches as a workstation. Lian Li’s latest test bench is its most flexible yet – a sleek, minimal platform for easy hardware swapping, with an optional kit that encloses the bench with radiator mounts and an acrylic cover.

Unobstructed Design for Hardware Swapping
After taking feedback from PC hardware reviewers, Lian Li realized that simplicity was key. The PC-T70 has completely free access, with zero barriers hindering the installation of motherboards and other hardware. Users can even remove the back frame for expansion slots and IO cover if they so choose. Six open pass-throughs are positioned around the motherboard tray to route cables down to the PSU and drive mounts on the floor panel.

Simulate Closed-Air Case Environments for Advanced Testing
With the T70-1 upgrade kit, users can add side panels to the open bench, each mounting two 120mm or 140mm fans or a 240mm or 280mm radiator with removable mesh dust filters. It also includes a back panel, mounting an additional 120mm or 140mm exhaust fan and an acrylic canopy secured by magnetic strips to fully enclose the motherboard compartment, simulating a closed-air environment more representative of regular users – a valuable advantage for hardware reviewers. Every panel is modular and easily taken down, so users can rapidly cycle between closed and open-air setups.

A Bench Built for All Form Factors
The PC-T70 mounts E-ATX, ATX, Micro ATX, and mini ITX motherboards, with eight expansion slots to mount VGA cards as long as 330mm. While enclosed, its CPU cooler clearance is limited to 180mm. The floor panel mounts ATX PSUs as long as 330mm and as many as five 2.5” and one 3.5” drives or one 2.5” and two 3.5” storage drives. Users can also use the floor panel to mount a 360mm radiator, reservoirs, and pumps for custom water cooling loops.

Price and Availability
The PC-T70, including the T70-1 option kit is now available at Newegg for $189.99.
Also available in white.

Source: Lian Li

CORSAIR Launches T1 RACE Gaming Chair

Subject: General Tech | May 16, 2017 - 01:58 PM |
Tagged: gaming chair, corsair, T1 RACE

Corsair have jumped into the gaming chair market, a product category we did not see much of until it recently took off in a big way.  The T1 RACE is made of PU leather, also known as bicast leather, so the shiny finish should last quite a while, though the feel will not be quite the same as a true leather chair, nor will the price be as astronomical.  Depending on the type of polyurethane leather they used, this product might be vegan.  You can choose between yellow, white, blue or red trim to highlight your chair, or if you prefer you can forego the colours for a purely black chair.  It can recline 90° to 180° if you need a moment to lie back, the arm rests can be adjusted for height, width, position and angle, and neck and lumbar PU leather pillows are included.

Check out Corsair's page here or the PR just below.


FREMONT, CA – May 16th, 2017 - CORSAIR®, a world leader in enthusiast memory, PC components and high-performance gaming hardware today announced the launch of its first gaming chair, the T1 RACE. Inspired by racing, crafted for comfort and built to last, the T1 Race joins CORSAIR’s award-winning range of mice, keyboards, headsets and mousepads to complete the ultimate gaming experience. Built using a solid steel skeleton and dense foam cushions, the T1 RACE has the strength to ensure a lifetime of sturdiness, while its 4D-movement armrests raise, lower, shift and swivel to put gamers in the most comfortable position every time. Styled to turn heads and finished with immaculate attention to detail, the T1 RACE is the gaming chair your desk deserves.

Upholstered in luxurious PU leather on seating surfaces and available in five different colors, T1 RACE lets you choose your seat to match your style, in either Yellow, White, Blue, Red or Black trim, finished with automotive color-matched stitching and base accents. Nylon caster wheels, often an optional upgrade on office and gaming chairs, are included with T1 RACE as standard, ensuring stability and smooth movement on any surface.

T1 RACE’s sculpted race-seat design and included neck and lumbar PU leather pillows provide adjustable support for day-long gaming sessions, while its 4D-movement armrests effortlessly adjust in height, width, position and angle to put your arms precisely where they need to be. A steel construction Class 4 gas lift provides reliable height adjustment, while the seat itself tilts up to 10° and can recline anywhere between 90° and 180°, lying completely flat for when you need to take a break from the action. Finishing the T1 RACE’s attention to detail, the CORSAIR logo is tastefully embroidered into the rear of the chair, and lightly embossed into the headrest for maximum comfort.

Source: Corsair

Pot, meet kettle. Is it worse to hoard exploits or patches?

Subject: General Tech | May 16, 2017 - 01:27 PM |
Tagged: security, microsoft

Microsoft and the NSA have each been blaming the other for the ability of WannaCrypt to utilize a vulnerability in SMBv1 to spread.  Microsoft considers the cause of this particular outbreak to be the NSA's decision not to share the vulnerabilities its EternalBlue tool utilizes with Microsoft and various other security companies.  Conversely, the fact is that while Microsoft developed patches to address this vulnerability for versions of Windows including WinXP, Server 2003, and Windows 8 RT back in March, they did not release the patches for those legacy OSes until the outbreak was well underway.

Perhaps the most compelling proof of blame is the number of systems which should not have been vulnerable but were hit due to the fact that the available patches were never installed. 

These three problems have left us in the pickle we find ourselves in this week: the NSA wanting to hoard vulnerabilities so it can exploit them for espionage, Microsoft ending support of older products because it is a business and does not find it profitable to support products a decade or more after release, and users not taking advantage of available updates.  On the plus side this outbreak does have people patching, so we have that going for us.


"Speaking of hoarding, though, it's emerged Microsoft was itself stockpiling software – critical security patches for months."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Register

Khronos Group Published Finalized OpenCL 2.2 & SPIR-V 1.2

Subject: General Tech | May 16, 2017 - 09:00 AM |
Tagged: spir-v, opencl, Khronos

Aligning with the start of the International Workshop on OpenCL (IWOCL) 2017 in Toronto, Ontario, Canada, The Khronos Group has published the finalized specifications for OpenCL 2.2 and SPIR-V 1.2. The headlining feature for this release is the OpenCL C++ kernel language, which SPIR-V 1.2 fully supports. Kernels are the portions of code that execute on compute devices, such as GPUs, FPGAs, supercomputers, multi-core CPUs, and so forth.


The OpenCL C++ kernel language is a subset of the C++14 standard, bringing many of its benefits to these less-general devices. Classes help data and code to be more tightly integrated. Templates help define logic in a general way for whatever data type implements whatever it requires, which is useful for things like custom containers. Lambda expressions make it easy to write one-off methods, rather than forcing the developer to name something that will only be used once, like comparing two data types for a special sort in one specific spot of code.

Exposing these features to the OpenCL device also enables The Khronos Group to further the SYCL standard, which aims for “single-source” OpenCL development. Having the code that executes on OpenCL-compatible devices contain roughly the same features as the host code is kind-of necessary to let them be written together, rather than exist as two pools.
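
As a rough idea of what “single-source” looks like in practice, the sketch below uses SYCL 1.2-style naming to keep a trivial device kernel (an invented vector-scaling example) in the same C++ file as the host code that launches it; treat the specifics as illustrative rather than a statement of where the standard will end up.

```cpp
#include <CL/sycl.hpp>
#include <vector>
#include <cstdio>

int main() {
    std::vector<float> data(1024, 1.0f);
    {
        cl::sycl::queue q;  // picks a default OpenCL device
        cl::sycl::buffer<float, 1> buf(data.data(), cl::sycl::range<1>(data.size()));

        // Host code and device code live in one translation unit; the lambda
        // passed to parallel_for is the "kernel" that runs on the device.
        q.submit([&](cl::sycl::handler& cgh) {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>(cgh);
            cgh.parallel_for<class scale_kernel>(
                cl::sycl::range<1>(data.size()),
                [=](cl::sycl::id<1> i) { acc[i] = acc[i] * 2.0f; });
        });
    }   // buffer destruction copies results back into 'data'

    std::printf("data[0] = %f\n", data[0]);  // prints 2.0
    return 0;
}
```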

The final OpenCL 2.2 and SPIR-V 1.2 specs are available now, and on GitHub for the first time.

Subject: General Tech
Manufacturer: YouTube TV

YouTube Tries Everything

Back in March, Google-owned YouTube announced a new live TV streaming service called YouTube TV to compete with the likes of Sling, DirecTV Now, PlayStation Vue, and upcoming offerings from Hulu, Amazon, and others. All these services aim to deliver curated bundles of channels aimed at cord cutters that run over the top of customers’ internet-only connections as replacements for, or in addition to, cable television subscriptions.  YouTube TV is the latest entrant to this market, with the service currently only available in seven test markets, but it is off to a good start with a decent selection of content and features including both broadcast and cable channels, on demand media, and live and DVR viewing options. A responsive user interface and generous number of family sharing options (six account logins and three simultaneous streams) will need to be balanced by the requirement to watch ads (even on some DVR’ed shows) and the $35 per month cost.


YouTube TV was launched in 5 cities with more on the way. Fortunately, I am lucky enough to live close enough to Chicago to be in-market and could test out Google’s streaming TV service. While not a full review, the following are my first impressions of YouTube TV.

Setup / Sign Up

YouTube TV is available with a one month free trial, after which you will be charged $35 a month. Sign up is a simple affair and can be started by going to tv.youtube.com or clicking the YouTube TV link from the “hamburger” menu on YouTube. If you are on a mobile device, YouTube TV uses a separate app from the default YouTube app and weighs in at 9.11 MB for the Android version. The sign up process is very simple. After verifying your location, the following screens show you the channels available in your market and give you the option of adding Showtime ($11) and/or Fox Soccer ($15) for additional monthly fees. After that, you are prompted for a payment method that can be the one already linked to your Google account and used for app purchases and other subscriptions. As far as the free trial, I was not charged anything and there was no hold on my account for the $35. I like that Google makes it easy to see exactly how many days you have left on your trial and when you will be charged if you do not cancel. Further, the cancel link is not buried away and is intuitively found by clicking your account photo in the upper right > Personal > Membership. Google is doing things right here. After signup, a tour is offered to show you the various features, but you can skip this if you want to get right to it.

In my specific market, I have the following channels. When I first started testing, some of the channels were not available and were just added today. I hope to see more networks added, and if Google can manage that, YouTube TV and its $35/month price are going to shape up to be a great deal.

  • ABC 7, CBS 2, Fox 32, NBC 5, ESPN, CSN, CSN Plus, FS1, CW, USA, FX, Free Form, NBC SN, ESPN 2, FS2, Disney, E!, Bravo, Oxygen, BTN, SEC ESPN Network, ESPN News, CBS Sports, FXX, Syfy, Disney Junior, Disney XD, MSNBC, Fox News, CNBC, Fox Business, National Geographic, FXM, Sprout, Universal, Nat Geo Wild, Chiller, NBC Golf, YouTube Red Originals
  • Plus: AMC, BBC America, IFC, Sundance TV, We TV, Telemundo, and NBC Universal (just added).
  • Optional Add-Ons: Showtime and Fox Soccer.

I tested YouTube TV out on my Windows PCs and an Android phone. You can also watch YouTube TV on iOS devices, and on your TV using Android TVs and Chromecasts (at the time of writing, Google will send you a free Chromecast after your first month). (See here for a full list of supported devices.) There are currently no Roku or Apple TV apps.


Each YouTube TV account can share out the subscription to 6 total logins where each household member gets their own login and DVR library. Up to three people can be streaming TV at the same time. While out and about, I noticed that YouTube TV required me to turn on location services in order to use the app. Looking further into it, the YouTube TV FAQ states that you will need to verify your location in order to stream live TV and will only be able to stream live TV if you are physically in the markets where YouTube TV has launched. You can watch your DVR shows anywhere in the US. However, if you are traveling internationally you will not be able to use YouTube TV at all (I’m not sure if VPNs will get around this or if YouTube TV blocks this like Netflix does). Users will need to login from their home market at least once every 3 months to keep their account active and able to stream content (every month for MLB content).

YouTube TV verifying location in Chrome (left) and on the Android app (right).

On one hand, I can understand this was probably necessary in order for YouTube TV to negotiate a licensing deal, and their terms do seem pretty fair. I will have to do more testing on this as I wasn’t able to stream from the DVR without turning on location services on my Android – I can chalk this up to growing pains though and it may already be fixed.

Features & First Impressions

YouTube TV has an interface that is perhaps best described as a slimmed down YouTube that takes cues from Netflix (things like the horizontal scrolling of shows in categories). The main interface is broken down into three sections: Library, Home, and Live with the first screen you see when logging in being Home. You navigate by scrolling and clicking, and by pulling the menus up from the bottom while streaming TV like YouTube.


Continue reading for my first impressions of YouTube TV!

Subject: Processors
Manufacturer: Various

Application Profiling Tells the Story

It should come as no surprise to anyone that has been paying attention the last two months that the latest AMD Ryzen processors and architecture are getting a lot of attention. Ryzen 7 launched with a $499 part that bested the Intel $1000 CPU at heavily threaded applications and Ryzen 5 launched with great value as well, positioning a 6-core/12-thread CPU against quad-core parts from the competition. But part of the story that permeated through both the Ryzen 7 and the Ryzen 5 processor launches was the situation surrounding gaming performance, in particular 1080p gaming, and the surprising delta  that we see in some games.

Our team has done quite a bit of research and testing on this topic. This included a detailed look at the first asserted reason for the performance gap, the Windows 10 scheduler. Our summary there was that the scheduler was working as expected and that minimal difference was seen when moving between different power modes. We also talked directly with AMD to find out its then-current stance on the results, backing up our claims on the scheduler, and presented a better outlook for gaming going forward. When AMD wanted to test a new custom Windows 10 power profile to help improve performance in some cases, we took part in that too. In late March we saw the first gaming performance update occur courtesy of Ashes of the Singularity: Escalation, where an engine update to utilize more threads resulted in as much as a 31% increase in average frame rate.


As a part of that dissection of the Windows 10 scheduler story, we also discovered interesting data about the CCX construction and how the two modules on the 1800X communicated. The result was significantly longer thread to thread latencies than we had seen in any platform before and it was because of the fabric implementation that AMD integrated with the Zen architecture.

This has led me down another hole recently, wondering if we could further compartmentalize the gaming performance of the Ryzen processors using memory latency. As I showed in my Ryzen 5 review, memory frequency and throughput directly correlate to gaming performance improvements, on the order of 14% in some cases. But what about looking at memory latency alone?

Continue reading our analysis of memory latency, 1080p gaming, and how it impacts Ryzen!!