Intel brings Iris Pro Graphics to Broadwell in LGA Sockets

Subject: Processors | March 19, 2014 - 08:00 PM |
Tagged: LGA, iris pro, Intel, gdc 14, GDC, Broadwell

We have great news for you this evening!  The demise of the LGA processor socket for Intel desktop users has been greatly exaggerated.  During a press session at GDC we learned that not only will Intel be offering LGA-based processors for Broadwell upon its release (which we did not get more details on), but that there will also be an unlocked SKU with Iris Pro graphics.

broadwell.jpg

Iris Pro, in its current version, is a high-performance version of Intel's processor graphics that includes 128MB of embedded DRAM (eDRAM).  When we first heard that Iris Pro was not coming to the desktop market in an LGA1150 SKU, we were confused and bitter, but it seems that Intel was listening to feedback.  Broadwell will bring with it the first socketed version of Iris Pro graphics!

It's also nice to know that the rumors surrounding Intel's removal of the socket option for DIY builders were incorrect, or that the plans were possibly reversed due to the reaction. The enthusiast lives on!!

UPDATE: Intel has just confirmed that the upcoming socketed Broadwell CPUs will be compatible with 9-series motherboards that will be released later this spring. This should offer a nice upgrade path for users going into 2015.

Intel Confirms Haswell-E, 8-core Extreme Edition with DDR4 Memory

Subject: Processors | March 19, 2014 - 08:00 PM |
Tagged: X99, Intel, Haswell-E, gdc 14, GDC, ddr4

While talking with press at GDC in San Francisco today, Intel is pulling out all the stops to assure enthusiasts and gamers that they haven't been forgotten!  Since the initial release of the first Extreme Edition processor in 2003 (Pentium 4), Intel has moved from roughly 178 million transistors to over 1.8 BILLION (Ivy Bridge-E). Today Intel officially confirms that Haswell-E is coming!

haswelle.jpg

Details are light, but we know now that this latest incarnation of the Extreme Edition processor will be an 8-core design, running on a new Intel X99 chipset and will be the first to support DDR4 memory technology.  I think most of us are going to be very curious about the changes, both in pricing and performance, that the new memory technology will bring to the table for enthusiast and workstation users.

Timing is only listed as the second half of 2014, so we are going to be (impatiently) waiting along with you for more details.

Based only on leaks that we found last week, the X99 chipset and Haswell-E will continue to offer 40 lanes of PCI Express but increase the number of SATA 6G ports from two to ten (!!) and USB 3.0 ports to six.

GDC 14: Intel Ready Mode offers low power, always connected desktops

Subject: Processors, Systems | March 19, 2014 - 08:00 PM |
Tagged: ready mode, Intel, gdc 14, GDC

Intel Ready Mode is a new technology that looks to offer some of the features of connected standby for desktop and all-in-one PCs while using new power states of the Haswell architecture to keep power consumption incredibly low.  By combining a 4th Generation Core processor from Intel, a properly implemented motherboard and platform, and new Intel or OEM software, users can access the data on their system or push data to their system without "waking up" the machine.

readymode1.jpg

This feature is partially enabled by the C7 state added to the Haswell architecture with the 4th Generation Core processors but could require motherboard and platform providers to update implementations to properly support the incredibly low idle power consumption.  

To be clear, this is not a desktop implementation of Microsoft InstantGo (Connected Standby) but rather a unique and more flexible implementation.  While InstantGo only works on Windows 8 and with Metro applications, Intel Ready Mode will work with Windows 7 and Windows 8 and actually keeps the machine awake and active, just at a very low power level.  This allows users not only to make sure their software is always up to date and ready when they want to use the PC, but also enables access to the PC from a remote location - all while in this low power state.

How low?  Well, Intel has a note on its slide mentioning that Fujitsu launched a feature called Low Power Active Mode in 2013 that was able to hit 5 watts when following the Intel guidelines. You can essentially consider this an incredibly low power "awake" state for Intel PCs.

readymode2.jpg
 

Intel offers up some suggested usage models for Ready Mode and I will be interested to see which OEMs integrate support for this technology and whether DIY users will be able to take advantage of it as well. Lenovo, ASUS, Acer, ECS, HP and Fujitsu are supporting it this year.

GDC 14: CRYENGINE To Support Mantle, AMD Gets Another Endorsement

Subject: General Tech, Shows and Expos | March 19, 2014 - 05:00 PM |
Tagged: Mantle, gdc 14, GDC, crytek, CRYENGINE

While I do not have too many details otherwise, Crytek and AMD have announced that mainline CRYENGINE will support the Mantle graphics API. CRYENGINE, by Crytek, now sits alongside Frostbite, by DICE, and Nitrous, by Oxide Games, as engines which support that alternative to DirectX and OpenGL. This comes little more than a week after Crytek's announcement of native Linux support for its popular engine.

Crysis2.jpg

The tape has separate draw calls!

Crytek has been "evaluating" the API for quite some time now, showing interest back at the AMD Developer Summit. Since then, they have apparently made a clear decision on it. It is also not the first time that CRYENGINE has been publicly introduced to Mantle, with Chris Roberts' Star Citizen, also powered by the 4th Generation CRYENGINE, having announced support for the graphics API. Of course, there is a large gap between having a licensee do the legwork to include an API and having the engine developer provide you with supported builds (that would be like saying Unreal Engine 3 supports the original Wii).

Hopefully we will learn more as GDC continues.

Editor's (Ryan) Take:

As the week at GDC has gone on, AMD continues to push forward with Mantle and calls Crytek's implementation of the low level API "a huge endorsement" of the company's direction and vision for the future. Many, including myself, have considered that the pending announcement of DX12 would be a major setback for Mantle, but AMD claims that view is "short sighted" and that as more developers come into the Mantle ecosystem it is proof AMD is doing the "right thing."

Here at GDC, AMD told us they have expanded the number of beta Mantle members dramatically with plenty more applications (dozens) in waiting.  Obviously this could put a lot of strain on AMD for Mantle support and maintenance but representatives assure us that the major work of building out documentation and development tools is nearly 100% behind them.

mantle.jpg

If stories like this one over at SemiAccurate are true, and Microsoft's DirectX 12 really will be nearly identical to AMD's Mantle, then it makes sense that developers serious about new gaming engines can get a leg up on projects by learning Mantle today. Applying that knowledge to the DX12 API upon its release could speed up development and improve implementation efficiency. From what I am hearing from the few developers willing to even mention DX12, Mantle is much further along in its release (late beta) than DX12 is (early alpha).

AMD was indeed talking with Microsoft and sharing the development of Mantle "every step of the way," and AMD has stated on several occasions that there were two possible outcomes for Mantle: it either becomes, or inspires, a new industry standard in game development. Even if DX12 is more or less a carbon copy of Mantle, forcing NVIDIA to implement that API style with DX12's release, AMD could potentially have the advantage in gaming performance and support between now and Microsoft's DirectX release.  That could be as much as a full calendar year away, from reports we are getting at GDC.

Ray Tracing is back? That's Wizard!

Subject: General Tech, Shows and Expos | March 19, 2014 - 01:20 PM |
Tagged: Imagination Technologies, gdc 14, wizard, ray tracing

The Tech Report visited Imagination Technologies' booth at GDC where they were showing off a new processor, the Wizard GPU.  It is based on the PowerVR Series6XT Rogue graphics processor and is specifically designed to accelerate ray tracing, a topic we haven't heard much about lately.  They describe the performance as capable of processing 300 million rays and 100 million dynamic triangles per second, which translates to 7 to 10 rays per pixel at 720p and 30Hz, or 3 to 5 rays per pixel at 1080p and 30Hz.  That is not bad, though Imagination Technologies estimates movies are rendered at a rate of 16 to 32 rays per pixel, so it may be a while before we see a Ray Tracing slider under Advanced Graphics Options.
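For a quick sanity check on those numbers (back-of-the-envelope, assuming every ray is spent on a displayed pixel): 300 million rays per second divided by 1280 x 720 pixels x 30 frames per second works out to roughly 10.9 rays per pixel at 720p, and the same budget divided by 1920 x 1080 x 30 comes to about 4.8 rays per pixel at 1080p - right at the top of the ranges Imagination quotes.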

4_PowerVR Ray Tracing - hybrid rendering (4).jpg

"When we visited Imagination Technologies at CES, they were showing off some intriguing hardware that augments their GPUs in order to accelerate ray-traced rendering. Ray tracing is a well-known and high-quality form of rendering that relies on the physical simulation of light rays bouncing around in a scene. Although it's been used in movies and in static scene creation, ray tracing has generally been too computationally intensive to be practical for real-time graphics and gaming. However, Imagination Tech is looking to bring ray-tracing to real-time graphics—in the mobile GPU space, no less—with its new family of Wizard GPUs."

Here is some more Tech News from around the web:

Tech Talk

GDC 14: WebCL 1.0 Specification is Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:03 AM |
Tagged: WebCL, gdc 14, GDC

The Khronos Group has just ratified the WebCL 1.0 standard. The API is expected to provide a massive performance boost to web applications that are dominated by expensive functions which can be offloaded to parallel processors, such as GPUs and multi-core CPUs. The specification also allows WebCL to communicate and share buffers with WebGL through an extension.

WebCL_300_167_75.png

Frequent readers of the site might remember that I have a particular interest in WebCL. Based on OpenCL, it allows web apps to obtain a list of every available compute device and target it for workloads. I have personally executed tasks on an NVIDIA GeForce GTX 670 discrete GPU and other jobs on my Intel HD 4000 iGPU, at the same time, using the WebCL prototype from Tomi Aarnio of Nokia Research. The same is true for users with multiple discrete GPUs installed in their system (even if they are not compatible with CrossFire or SLI, or are from different vendors altogether). This could be very useful for physics, AI, lighting, and other game middleware packages.
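WebCL is essentially a thin JavaScript binding over the OpenCL host API, so the device enumeration described above maps almost one-to-one onto the native calls. Purely as a rough illustration (this sketch uses the OpenCL C API from C++, not the WebCL JavaScript interface itself), the equivalent native code looks something like this:

// Hypothetical sketch of the device enumeration WebCL exposes to JavaScript,
// written against the native OpenCL host API for illustration only.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);              // count platforms
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        cl_uint num_devices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices);
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, num_devices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char name[256] = {};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("Compute device: %s\n", name);         // e.g. a dGPU and an iGPU both show up
        }
    }
    return 0;
}

Each device in that list can be given its own context and queue, which is how a discrete GPU and an integrated GPU can chew on separate workloads at the same time.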

Still, browser adoption might be rocky for quite some time. Google, Mozilla, and Opera Software were each involved in the working draft. This leaves both Apple and Microsoft notably absent. Even then, I am not sure how much interest exists within Google, Mozilla, and Opera to take it from a specification to a working feature in their browsers. Some individuals have expressed more faith in WebGL compute shaders than WebCL.

Of course, that can change with just a single "killer app", library, or middleware.

I do expect some resistance from the platform holders, however. Even Google has been pushing back on OpenCL support in Android, in favor of its own "Renderscript" abstraction. The performance of a graphics processor is also significant leverage for native apps. There is little, otherwise, that cannot be accomplished with Web standards except a web browser itself (and there are even some non-serious projects for that). If Microsoft can support WebGL, however, there is always hope.

The specification is available at the Khronos website.

Source: Khronos

GDC 14: EGL 1.5 Specification Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM |
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL

The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security, interoperability between APIs, and support for many operating systems, including Android and 64-bit Linux.

khronos-EGL_500_123_75.png

The headline item on the list of changes is the promotion of EGLImage objects from the realm of extensions into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, designed primarily for WebGL applications, since any arbitrary website can become one. Further down the list is the EGLSync object, which allows closer cooperation between OpenGL (and OpenGL ES) and OpenCL; the GPU may not need CPU involvement when scheduling between tasks on both APIs.
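To give a sense of what that looks like in practice, here is a minimal, hypothetical C++ sketch of the two newly-core entry points, eglCreateImage and eglCreateSync. It assumes an already-initialized EGL 1.5 display, a current context, and a GL texture already wrapped as an EGLClientBuffer by the caller:

// Sketch only: error handling omitted, and the EGLClientBuffer wrapping of the
// GL texture is assumed to have been done by the caller.
#include <EGL/egl.h>

void share_and_sync(EGLDisplay dpy, EGLContext ctx, EGLClientBuffer gl_texture)
{
    // Wrap a client-API resource (here a GL texture) in an EGLImage so another
    // context or API can sample from the same memory - no extension suffix needed.
    EGLImage image = eglCreateImage(dpy, ctx, EGL_GL_TEXTURE_2D, gl_texture, nullptr);

    // Create a fence sync object; a consumer can wait on it instead of
    // stalling the CPU with a full glFinish().
    EGLSync fence = eglCreateSync(dpy, EGL_SYNC_FENCE, nullptr);
    eglClientWaitSync(dpy, fence, 0, EGL_FOREVER);

    eglDestroySync(dpy, fence);
    eglDestroyImage(dpy, image);
}

Prior to 1.5 both of these existed only as the eglCreateImageKHR and eglCreateSyncKHR extensions, which is exactly the reliability problem the core promotion solves.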

During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.

The EGL 1.5 spec is available at the Khronos website.

Source: Khronos

GDC 14: SYCL 1.2 Provisional Spec Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:01 AM |
Tagged: SYCL, opencl, gdc 14, GDC

To gather community feedback, the provisional specification for SYCL 1.2 has been released by The Khronos Group. SYCL builds upon OpenCL with the C++11 standard. This technology is built on another Khronos specification, SPIR, which allows OpenCL C to be mapped onto LLVM and its hundreds of compatible languages (and Khronos is careful to note that they intend for anyone to be able to make their own compatible alternative language).

khronos-SYCL_Color_Mar14_154_75.png

In short, SPIR allows many languages that can compile into LLVM to take advantage of OpenCL. SYCL is the specification for creating C++11 libraries and compilers on top of SPIR.
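As a rough illustration of the single-source C++11 model (a sketch based on the provisional spec, using names from the cl::sycl namespace that may still change before ratification), the kernel and the host code that launches it can live in the same file:

// Provisional SYCL 1.2 sketch: doubles every element of a vector on an OpenCL device.
#include <CL/sycl.hpp>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);

    {
        cl::sycl::queue q;                                    // picks a default OpenCL device
        cl::sycl::buffer<float, 1> buf(data.data(),
                                       cl::sycl::range<1>(data.size()));

        q.submit([&](cl::sycl::handler &cgh) {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>(cgh);
            // The lambda below is the kernel; a SYCL device compiler lowers it
            // to SPIR so the OpenCL runtime can execute it.
            cgh.parallel_for<class scale>(cl::sycl::range<1>(data.size()),
                                          [=](cl::sycl::id<1> i) {
                acc[i] = acc[i] * 2.0f;
            });
        });
    }   // the buffer's destructor copies results back into 'data'

    return 0;
}

The point of the exercise is that there is no separate OpenCL C kernel string: plain C++11 code is captured by the runtime and compiled for the device through SPIR.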

As stated earlier, Khronos wants anyone to make their own compatible language:

While SYCL is one possible solution for developers, the OpenCL group encourages innovation in programming models for heterogeneous systems, either by building on top of the SPIR™ low-level intermediate representation, leveraging C++ programming techniques through SYCL, using the open source CLU libraries for prototyping, or by developing their own techniques.

SYCL 1.2 targets OpenCL 1.2, and Khronos intends to develop it alongside OpenCL. Future releases are expected to support the latest OpenCL 2.0 specification and keep up with future developments.

The SYCL 1.2 provisional spec is available at the Khronos website.

Source: Khronos

OCZ Partners With AMD at GDC to Showcase Solid-State Supercharged Systems for Gaming Professionals

Subject: General Tech, Shows and Expos | March 18, 2014 - 12:38 PM |
Tagged: gdc 14, amd, ocz, Vector 150

If you make it to the Game Developers Conference this year, make sure to pay a visit to the AMD booth where you can get a look at OCZ's Vector 150 drives in action.  They aim to show that these drives are not only good for the gamer, but good for the game designer as well.

unnamed.jpg

OCZ Vector 150 SSDs on Display at AMD Booth #1024, March 17-21 in San Francisco, CA

gdc.jpg

SAN JOSE, CA - March 17, 2014 - OCZ Storage Solutions - a Toshiba Group Company and leading provider of high-performance solid state drives (SSDs) for computing devices and systems, today announced its partnership with AMD to showcase the power of high performance technology at the Game Developer Conference (GDC) March 17-21 at the Moscone Center in San Francisco, CA. AMD's demo systems will feature best-in-class Vector 150 Series solid state drives demonstrating how developers can enhance productivity and efficiency in their work.

"We are excited to partner with AMD for the upcoming Game Developers Conference to support the fast growing interactive game development industry," said Alex Mei, CMO for OCZ Storage Solutions. "OCZ is dedicated to delivering premium solid state storage solutions that are not only a useful tool for developers, but also meet the unique demands of enthusiasts and gamers on all levels."

"Our presence at the 2014 Game Developer Conference will feature a number of high-performance gaming systems running 24/7 in harsh conditions," said Darren McPhee, director of product marketing, Graphics Business Unit, AMD. "We knew that OCZ Vector SSDs were uniquely ready to meet the reliability requirements of our gaming installations. Between the high performance graphics of AMD Radeon™ GPUs and the fast load times of OCZ Vector SSDs, visitors to AMD's booth in the South Hall are in for a great gaming experience!"

GDC is the world's largest game industry event, attracting over 23,000 professionals including programmers, artists, producers, designers, audio professionals, business decision-makers, and other digital gaming industry authorities. OCZ's premium Vector 150 Series, designed for workstation users along with bleeding-edge enthusiasts, will be in AMD systems that promote improved CPU and GPU performance, enhanced rendering, speed, and overall system performance. Professional developer applications demand peak transfer speeds and ultra-high performance; OCZ SSDs offer 100 times faster access to data, quicker boot ups, faster file transfers, and a more responsive computing experience than hard drives.

GDC enables OCZ to team up with valued industry partners like AMD to reaffirm the Company's commitment to the gaming segment, and promote the use of flash storage for both developers and the gamers themselves.

GDC 14: OpenGL ES 3.1 Spec Released by Khronos Group

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM |
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC

Today, day one of Game Developers Conference 2014, the Khronos Group has officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices for applications developed in OpenGL ES, especially if the developer does not want to add OpenCL.

The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add new features, as available, to their existing apps. On the device side, the new functionality is expected to arrive as a driver update in the majority of cases.

opengl-es-logo.png

OpenGL ES, which stands for OpenGL for Embedded Systems but is rarely branded as such, delivers what Khronos considers the most important features of the graphics library to the majority of devices. The Khronos Group has been working toward merging ES with the "full" graphics library over time. The last release, OpenGL ES 3.0, was focused on becoming a direct subset of OpenGL 4.3. This release expands the feature space it occupies.

OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0, although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, and this announcement puts us a step closer.

The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the result after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a single program before they are used. I inquired about the possibility that individual compute devices could be targeted (for devices with two GPUs) and possibly load balanced, in a manner similar to WebCL, but no confirmation or denial was provided (although he did mention that it would be interesting for apps that fall somewhere in the middle of OpenGL ES and OpenCL).
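To make the compute shader addition concrete, here is a minimal, hypothetical C++ sketch that compiles a small ES 3.1 compute shader, binds a shader storage buffer, and dispatches it. It assumes an OpenGL ES 3.1 context is already current, that the buffer length is a multiple of the workgroup size, and it omits error checking:

// Sketch of the headline ES 3.1 feature: a compute shader dispatched from the host.
#include <GLES3/gl31.h>

static const char *kComputeSrc = R"(#version 310 es
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Data { float values[]; };
void main() {
    uint i = gl_GlobalInvocationID.x;
    values[i] = values[i] * 2.0;
})";

GLuint buildAndDispatch(GLuint ssbo, GLuint elementCount)
{
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &kComputeSrc, nullptr);
    glCompileShader(shader);

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);

    glUseProgram(program);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    glDispatchCompute(elementCount / 64, 1, 1);               // one workgroup per 64 elements
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);           // make the writes visible to later draws
    return program;
}

The same storage buffer could then feed an indirect draw, which is the "draw without CPU intervention" scenario described above.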

The OpenGL ES 3.1 spec is available at the Khronos website.

Source: Khronos