Subject: General Tech, Systems, Shows and Expos | June 3, 2014 - 04:46 AM | Scott Michaud
Tagged: ROG, gaming pc, computex 2014, computex, asus
Gaming PCs are often misunderstood. Many of our viewers will probably build their own from a personal selection of parts, but if you would rather have someone else handle it, an oft-dismissed option is going through a system builder. If you find an option that fits your budget and has the performance you desire, then it is perfectly acceptable to buy it.
ASUS has just announced two offerings, branded Republic of Gamers (ROG), for you to consider.
The ROG G20 Gaming Desktop can be customized with options which range up to an Intel Core i7 with an NVIDIA GeForce GTX 780. It is designed to be quiet, with expected noise at around 23-25 dBA (it is unclear whether this is measured at idle or under load). While it has two fans, it also uses "natural convection" cooling: heated air rises out of the chassis, drawing in cool air to take its place over the components.
Yup, the PC cools itself with the air motion caused by its own heat.
After customizations, the ROG G20 Gaming Desktop is expected to retail for $800-$1700, depending on the options selected, and be available to North Americans in late Q3.
The other PC is the ROG GR8 Gaming Desktop. This device will include an Intel Core i7 and an NVIDIA GeForce GTX 750 Ti. Despite its compact form factor, a side panel allows user access to RAM and storage. It has Gigabit Ethernet and built-in 802.11ac wireless and, alongside its HDMI outputs, it also includes DisplayPort.
ASUS does not currently have an expected price range, but it will also be available to North Americans in Q3.
Subject: General Tech, Displays, Shows and Expos | June 2, 2014 - 07:27 PM | Scott Michaud
Tagged: XB280HK, g-sync, computex 2014, computex, acer, 4k
Speaking of G-Sync monitors, Acer has announced a 4K monitor ahead of Computex with NVIDIA's adaptive refresh rate technology. While Acer never explicitly says that it is 60Hz, I believe that it is. It also seems to be based on a TN panel. Being G-Sync, it connects over DisplayPort 1.2 and also includes four USB 3.0 ports. It does not, however, seem to support AMD's competing FreeSync, which was integrated into VESA's DisplayPort 1.2a standard.
We do not currently have an image of the monitor.
4K could be where we really start seeing benefits for G-Sync. At that resolution, it is very difficult to build a system, regardless of how many GPUs are inside it, which can play the most modern games without dipping below 60 FPS. Once you miss your 16.67 millisecond window, your game starts hitching between frames displayed for 33.33ms and frames displayed for 16.67ms, when each frame is supposed to be presented at a constant rate.
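To make the hitching concrete, here is a toy model (my own arithmetic, nothing from NVIDIA or Acer) of how long a frame stays on screen when it misses the 16.67ms window:

```python
# Toy model of frame pacing on a fixed 60Hz display versus G-Sync.
import math

REFRESH_MS = 1000.0 / 60.0  # one 60Hz refresh window, ~16.67ms

def display_time_fixed(render_ms):
    """How long a frame stays on screen with v-sync on a fixed 60Hz panel:
    it is held for a whole number of refresh intervals."""
    intervals = max(1, math.ceil(render_ms / REFRESH_MS))
    return intervals * REFRESH_MS

def display_time_gsync(render_ms):
    """With G-Sync, the panel refreshes when the frame is ready (it still
    cannot refresh faster than its maximum rate)."""
    return max(render_ms, REFRESH_MS)

# A frame that takes 20ms to render just misses the 16.67ms window:
print(round(display_time_fixed(20.0), 2))  # 33.33 -- held two intervals
print(round(display_time_gsync(20.0), 2))  # 20.0  -- shown as soon as ready
```

That two-intervals-instead-of-one jump is the hitch; variable refresh simply removes the rounding.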
As for pricing and availability: Q2 (early May to end of July). That's all we know.
For more Computex 2014 coverage, please check out our feed!
Subject: General Tech, Displays, Shows and Expos | June 2, 2014 - 06:28 PM | Scott Michaud
Tagged: g-sync, displays, display, computex 2014, computex, asus, 2560x1440, 144hz, 1440p
NVIDIA's G-Sync allows video cards to time the refresh rate of monitors. This is an advantage because the GPU knows when a frame is actually ready to be displayed to the user. The initial batch of announcements were all 1080p monitors, which are the least likely to dip down into the 30-60Hz gap where G-Sync is most noticeable.
Today at Computex, ASUS has announced a 27", 2560x1440, 144Hz G-Sync display. This higher resolution is starting to reach the point where faster graphics cards struggle to maintain 60 FPS. Not only that, but it is one of the first 1440p panels you can buy that officially supports high (over 100Hz) refresh rates. Others exist, but "rare" is an understatement.
Its response rate is 1ms (GTG) which, unfortunately, suggests a TN panel. This might be a deal-breaker for some, but if you are looking for a G-Sync, 1440p, and high refresh rate panel, then it might be an acceptable compromise.
The ASUS PG278Q will be available in Q2, which ASUS seems to define as the beginning of May to the end of July, for $799 USD. Unfortunately for AMD fans, the panel does not seem to support FreeSync, recently added to DisplayPort 1.2a. FreeSync, of course, is the competitor to G-Sync that AMD proposed to the VESA standards body.
For more Computex 2014 coverage, please check out our feed!
Subject: General Tech, Cases and Cooling, Shows and Expos | June 2, 2014 - 11:01 AM | Scott Michaud
Tagged: computex 2014, computex
Cherry MX RGB key switches have been teased since December but had not yet made it into a product. They generated interest by integrating red, green, and blue LEDs that, together, are capable of glowing any one of 16 million colors. Each key can even glow its own color and brightness independently, allowing users to color certain zones, with animation if desired. Corsair has a year of exclusivity on this switch for their line of keyboards which, until now, they had done nothing with.
Today, Corsair announced that MX RGB switches will be available in four models:
- K70 RGB Red, available in late July ($169.99 MSRP)
- K70 RGB Blue, available in late August ($169.99 MSRP)
- K70 RGB Brown, available in late August ($169.99 MSRP)
- K95 RGB Red, available in late August ($189.99 MSRP)
If you want Cherry MX Blue or Brown switches, you will be looking at the K70, because the K95 RGB will only be available with Cherry MX RGB Red. Of course, that could change in future announcements but, even still, the main difference is the 18 macro keys. Honestly, though I have had several keyboards which offer these, I have never used mine. Then again, I also do not play MMOs or MOBAs, so judge for yourself whether the extra keys are deal-breakers.
Corsair Vengeance K95 RGB
As usual, Corsair puts a lot of thought into their keyboards. Each one is based on an NKRO matrix which provides "100% anti-ghosting" (rant: more precisely, the keyboard is built well enough that it physically cannot ghost to require anti-ghosting). Even their first generation design aced my grueling test, where I spam the equivalent of several hundred words per minute as input and compare it to what the keyboard believes.
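That "physically cannot ghost" point deserves a quick illustration. On a cheap, diode-less scan matrix, current can flow backwards through pressed switches, so pressing three keys that form three corners of a rectangle in the matrix makes the fourth corner read as pressed. NKRO designs add a diode per switch so that cannot happen. A small model of the diode-less failure (my own sketch, not Corsair's firmware):

```python
# Model of matrix ghosting: with no per-key diodes, every row and column
# joined by pressed switches becomes one electrical group, and any
# row/column pair within a group reads as "down" -- including phantoms.

def matrix_scan_no_diodes(pressed):
    """Keys the controller reads as down on a diode-less matrix (model)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for r, c in pressed:  # each pressed switch joins its row and column
        parent[find(("row", r))] = find(("col", c))

    rows = {r for r, _ in pressed}
    cols = {c for _, c in pressed}
    return {(r, c) for r in rows for c in cols
            if find(("row", r)) == find(("col", c))}

# Three corners of a rectangle pressed -> the fourth reads down too.
print(matrix_scan_no_diodes({(0, 0), (0, 1), (1, 0)}))
```

With a diode behind every switch, the reverse current path disappears and the scan reads exactly the pressed set, which is what "100% anti-ghosting" amounts to.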
Corsair Vengeance K70 RGB
Also announced, the M65 RGB Gaming Mouse. RGB LED lighting on a mouse is not as novel, but it will match your keyboard. That will be available late August for $69.99 MSRP.
All devices will come with a two (2) year warranty, which definitely gives confidence to someone considering peripherals at this price point.
For more Computex 2014 coverage, please check out our feed!
Subject: General Tech, Storage, Shows and Expos | June 2, 2014 - 11:01 AM | Scott Michaud
Tagged: usb 3.0, thumb drive, ssd, flash drive, corsair, computex 2014, computex
The Flash Voyager GTX is Corsair's attempt at an SSD over USB 3.0. Differentiating itself from a standard USB flash drive, the Voyager GTX includes TRIM support, S.M.A.R.T. monitoring, and interfaces through USB Attached SCSI. It also comes in two SSD-sized capacities: 128GB ($119.99) and 256GB ($199.99). These drives are rated at 450MB/s read and 350MB/s write.
This pricing structure puts the Voyager GTX against the Samsung 840 Pro, which is an interesting comparison to make. Both drives are backed by a five (5) year warranty and, while the 840 Pro has higher read bandwidth, the write speeds are fairly comparable. IOPS and write endurance are not listed for the Corsair Flash Voyager GTX but, even if they are marginally behind, this drive has the advantage of USB portability.
Benchmarking should be interesting for this. I would be curious if this could lead to portable OS installations and abrupt boosts to Steam library sizes, both with SSD-like speeds.
The Corsair Flash Voyager GTX USB 3.0 drives will be available in July.
For more Computex 2014 coverage, please check out our feed!
Subject: General Tech, Shows and Expos | April 8, 2014 - 11:07 PM | Scott Michaud
Tagged: thunderbolt, NAB 14, NAB, Elgato
Hmm, this is more Thunderbolt than I think we heard all year. Is there, like, a video production event going on right now? No matter, because news is news (and so are product announcements). The Elgato Thunderbolt Dock connects to Thunderbolt, go figure, and provides three USB 3.0 ports, one Gigabit Ethernet port, one HDMI 1.4 output, one 3.5mm headphone jack, and one 3.5mm microphone jack. It also has a second Thunderbolt port to daisy chain with other devices, which is a common trait of Thunderbolt devices. It will retail for $229.95.
Yup, it is a Thunderbolt accessory.
Why does it seem like every Mac user in commercials has a studio apartment???
It makes sense to see devices like this. Thunderbolt is really an extension of PCIe, which allows anything that was once an add-in board to be connected externally, albeit with significantly reduced bandwidth compared to PCIe 3.0 x16. This looks very clean and tidy, and much more desirable than crawling under the desk and swapping wires and thumb drives in the darkness behind your PC.
I would like to see some benchmarks on this device, however. Clearly, the sum of these outputs should be higher than the bandwidth allowed by Thunderbolt (especially if daisy-chaining another Thunderbolt device). I wonder how efficient it will be at keeping high quality signals when several devices are connected and running simultaneously.
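A back-of-the-envelope sum shows why. Using the interfaces' nominal signaling rates (my own rough figures, not Elgato's specifications), the dock's outputs can collectively ask for more than a first-generation Thunderbolt link's 10 Gbps:

```python
# Nominal peak rates of the dock's ports versus one 10 Gbps Thunderbolt
# link. These are interface signaling rates, not measured throughput,
# and the HDMI figure is the rough effective video data rate.

PORTS_GBPS = {
    "USB 3.0 x3": 3 * 5.0,   # 5 Gbps per port, nominal
    "Gigabit Ethernet": 1.0,
    "HDMI 1.4": 8.16,        # approximate max video data rate
}

THUNDERBOLT_GBPS = 10.0

total = sum(PORTS_GBPS.values())
print(total)                      # 24.16
print(total > THUNDERBOLT_GBPS)  # True: the ports can oversubscribe the link
```

In practice no workload saturates every port at once, but it explains why benchmarks with several devices active would be interesting.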
The Elgato Thunderbolt Dock is available now for computers with a Thunderbolt port and either Mac OS X 10.9 or Windows 8.1. I guess we Windows 7 fans need to get used to the dust bunnies behind our PCs for a little longer...
Subject: General Tech, Graphics Cards, Processors, Shows and Expos | April 8, 2014 - 07:43 PM | Scott Michaud
Tagged: Intel, NAB, NAB 14, iris pro, Adobe, premiere pro, Adobe CC
When Adobe started to GPU-accelerate their applications beyond OpenGL, they began with NVIDIA and its CUDA platform. Later, they started to integrate OpenCL support and bring AMD into the fold. At first, it was limited to a couple of Apple laptops, but it has since expanded to include several GPUs on both OSX and Windows. Since then, Adobe has switched to a subscription-based release system and has published updates on a more rapid schedule. The next update of Adobe Premiere Pro CC will bring OpenCL support to Intel Iris Pro iGPUs.
Of course, they specifically mentioned Adobe Premiere Pro CC, which suggests that support for Photoshop CC users might come later. The press release does suggest that the update will affect both Mac and Windows versions of Adobe Premiere Pro CC, however, so at least the platforms will not be divided. Well, that is, if you can find a Windows machine with Iris Pro graphics. They do exist...
A release date has not been announced for this software upgrade.
Subject: General Tech, Networking, Systems, Shows and Expos | April 8, 2014 - 07:26 PM | Scott Michaud
Tagged: NAB, NAB 14, Thunderbolt 2, thunderbolt
Video professionals are still interested in Thunderbolt, probably in much the same way as FireWire needed to be pried from their cold, dead hands. It is a very high-bandwidth connector, useful for sending and receiving 4K video. Also, it was originally exclusive to Apple, so you can guess which industries were early adopters. Intel has focused their Thunderbolt announcements on the National Association of Broadcasters (NAB) show. This year, Thunderbolt Networking will be available for Windows via a driver. This will allow any combination of Macs and Windows PCs to be paired together over a 10 Gigabit network.
Of course, this is not going to be something that you can plug into a router. This is a point-to-point network for sharing files between two devices... really fast. Perhaps one use case would be a workstation with a Mac and a Windows PC on a KVM switch. If both are connected with Thunderbolt 2, they could share the same storage pool.
While this feature already exists on Apple devices, the PC driver will be available... "soon".
Subject: General Tech, Systems, Shows and Expos | April 8, 2014 - 05:11 AM | Scott Michaud
Tagged: BUILD 2014, microsoft, windows, winRT
A few days ago, I reported on the news from BUILD 2014 that Windows would see the return of the Start Menu and windowed apps. These features, which are not included with today's Windows 8.1 Update 1, will come in a later version. While I found these interface changes interesting, I reiterated that the user interface was not my concern: Windows Store certification was. I did leave room for a little hope, however, because Microsoft scheduled an announcement of changes. It was focused on enterprise customers, so I did not hold my breath.
And some things did change... but not enough for the non-enterprise user.
Microsoft is still hanging on to the curation of apps, except for "domain-joined" x86 Enterprise and x86 Pro PCs; RT devices and "not domain-joined" computers will only allow sideloaded apps with a key. This certificate (key) is not free for everyone. Of course, this does not have anything to do with native x86 applications. Thankfully, the prospect of WinRT APIs eventually replacing Win32, completely, seems less likely now. It could still be possible if Windows Store has a major surge in popularity but, as it stands right now, Microsoft seems to be spending less effort containing x86 for an eventual lobotomy.
If it does happen, it would be a concern for a variety of reasons:
- Governments, foreign or domestic, could pressure Microsoft to ban encryption software.
- Internet Explorer's Trident would face no competition pushing it to adopt new web standards.
- You could not create an app for just a friend or family member (unless it is a web app in IE).
- When you build censorship, the crazies will come with demands to abuse it.
So I am still concerned about the future of Windows. I am still not willing to believe that Microsoft will support x86-exclusive applications until the end of time. If that happens, and sideloading is not publicly available, and web standards are forced into stagnation by a lack of alternative web browsers, then I can see bad times ahead. I will not really feel comfortable until Microsoft makes a definitive pledge to let users control what goes on their devices, even if Microsoft (or people with some form of authority over them) dislikes it.
But I know that many disagree with me. What are your thoughts? Comment away!
Subject: General Tech, Shows and Expos | April 4, 2014 - 07:42 AM | Scott Michaud
Tagged: BUILD 2014, microsoft, .net
.NET has been very popular since its initial release. I have seen it used frequently in applications, particularly when a simple form-like interface is required. It is easy to develop for and accessible from several languages, such as C++, C#, and VB.NET. Enterprise application developers were particularly interested in it, especially with its managed security.
The framework drove an open source movement to create its own version, Mono, spearheaded by Novell. Some time later, the company Xamarin was formed from the original Mono development team, and it maintains the project to this day. In fact, Miguel de Icaza was at Build 2014 discussing the initiative. He seems content with Microsoft's new Roslyn compiler and the working relationship between the two companies as a whole.
WinJS is released under the very permissive Apache 2.0 license. Other code, such as the Windows Phone Toolkit, is released under other licenses, such as the Microsoft Public License (Ms-PL). Pay attention to any given project's license; it would not be wise to assume. Still, it sounds like a good step.
Subject: General Tech, Shows and Expos | April 3, 2014 - 01:53 AM | Scott Michaud
Tagged: BUILD 2014, microsoft, windows, start menu
Microsoft had numerous announcements during their Build 2014 opening keynote, which makes sense as they needed to fill the three hours that they assigned for it. In this post, I will focus on the upcoming changes to the Windows desktop experience. Two, albeit related, features were highlighted: the ability to run Modern Apps in a desktop window, and the corresponding return of the Start Menu.
I must say, the way that they grafted Start Screen tiles onto the Start Menu is pretty slick. The Start Menu, since Windows Vista, has felt awkward with its split between recently used applications, common shortcuts broken out on the right, and an expanded "All Programs" submenu handle on the bottom. It is functional, and it works perfectly fine, but something just felt weird about it. This looks a lot cleaner, in my opinion, especially since its width varies according to how many applications are pinned.
Of course, my major complaint with Windows 8.x has nothing to do with the interface. There has not been any discussion around sideloading applications to get around Windows Store certification requirements. This is a major concern for browser vendors and should be one for many others: from hobbyists who might want to share their creations with one or two friends or family members, rather than everyone in an entire Windows Store region, to citizens of countries whose governments might pressure Microsoft to ban encryption or security applications.
That said, there is a session tomorrow called "Deploying and Managing Enterprise Apps", discussing changes to app sideloading in Windows 8.1. Enterprise users are already allowed sideloading certificates from Microsoft. Maybe it will be expanded? I am not holding my breath.
Keep an eye out, because there should be a lot of news over the next couple of days.
Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 05:45 AM | Scott Michaud
Tagged: gdc 14, GDC, GCN, amd
While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by sometimes thousands of compute cores. Video drivers are complex packages of software, and one of their many tasks is converting your scripts, known as shaders, into machine code for the hardware. If this machine code is efficient, it could mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.
Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.
AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and vectors are mostly treated as collections of scalars. Tricks which attempt to combine instructions into vectors, such as using dot products, can simply put irrelevant restrictions on the compiler and optimizer... as it breaks those vector operations back down into the very same component-by-component ops that you thought you were avoiding.
Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
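The vectors-are-just-scalars point is easy to sketch. On GCN, a float4 dot product is not one instruction; it lowers to a chain of per-component multiply-adds, which is why hand-packed dot-product tricks buy nothing. A tiny model of that lowering (plain Python standing in for shader code, not real GPU assembly):

```python
# dot(a, b) as the shader author writes it, and as a scalar architecture
# actually executes it: one multiply, then a chain of multiply-adds.

def dot4(a, b):
    """What the HLSL/GLSL author writes: dot(a, b) on two float4s."""
    return sum(x * y for x, y in zip(a, b))

def dot4_lowered(a, b):
    """What a scalar GPU ISA runs: a chain of multiply-add style ops."""
    acc = a[0] * b[0]
    acc = a[1] * b[1] + acc   # each line: roughly one fused multiply-add
    acc = a[2] * b[2] + acc
    acc = a[3] * b[3] + acc
    return acc

a, b = (1.0, 2.0, 3.0, 4.0), (5.0, 6.0, 7.0, 8.0)
print(dot4(a, b), dot4_lowered(a, b))  # 70.0 70.0
```

Same result either way, which is exactly the point: packing work into vectors only constrains the optimizer, while the multiply-add chain is where fused multiply-add earns its "awesome".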
I know I learned.
As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially alongside DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and, in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will mostly be talking about a single architecture - that is a lot of people talking about very few things.
This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.
Subject: General Tech, Shows and Expos | March 22, 2014 - 05:41 AM | Scott Michaud
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd
So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions, because OpenGL is all about its extensions, a handful of known tricks can reduce driver overhead up to ten-fold and increase performance up to fifteen-fold. The talk is very graphics developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 suggest.
The 130-slide presentation is broken into a few sections, with each GPU vendor getting a decent chunk of time. On occasion, they mention which implementation fares better with a given function call. The main point that they wanted to drive home (they clearly repeated the slide three times, with three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library in the wrong way.
The page also hosts a keynote from the recent Steam Dev Days.
That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to just isolating the legacy and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they are.
Either way, the performance figures do not lie.
Subject: General Tech, Displays, Shows and Expos | March 22, 2014 - 05:04 AM | Scott Michaud
Tagged: oculus rift, Oculus, gdc 14, GDC
Last month, we published a news piece stating that Oculus Rift production had been suspended because "certain components" were unavailable. At the time, the company said it was looking for alternate suppliers but did not know how long that would take. The speculation was that the company was simply readying a new version and did not want to cannibalize its sales.
This week, they announced a new version which is available for pre-order and expected to ship in July.
DK2, as it is called, integrates a pair of 960x1080 OLED displays (correction, March 22nd @ 3:15pm: it is technically a single 1080p display that is divided per eye) for higher resolution and lower persistence. Citing Valve's VR research, they claim that low persistence will reduce the motion blur caused by your eye blending neighboring frames together. In this design, the panel flashes the image for a short period before going black, and does this at a high enough rate to keep your eye fed with light. The higher resolution also reduces the "screen door effect" complained about with the first release. Like their "Crystal Cove" prototype, it also uses an external camera to reduce latency in detecting your movement. All of these should combine to produce less motion sickness.
I would expect that VR has a long road ahead of it before it becomes a commercial product for the general population, though. There are many legitimate concerns about leaving your users trapped in a sensory deprivation apparatus when Kinect could not even go a couple of days without someone pretending to play volleyball and wrecking their TV with ceiling fan fragments. Still, this company seems to be doing it intelligently: keep afloat on developers and lead users as you work through your prototypes. It is cool, even if it will get significantly better, and people will support its research while getting the best at the time.
DK2 is available for pre-order for $350 and is expected to ship in July.
Subject: General Tech, Shows and Expos | March 20, 2014 - 12:15 AM | Scott Michaud
Tagged: unreal engine 4, gdc 14, GDC, epic games
Game developers, from indie to gigantic, can now access Unreal Engine 4 with a $19/month subscription (plus 5% of revenue from resulting sales). This is a much different model from UDK, which was free to develop with, using precompiled builds, until commercial release, at which point an upfront fee and a 25% royalty are applied. For Unreal Engine 4, however, this $19 monthly fee also gives you full C++ source code access (which I have wondered about since the announcement that UnrealScript no longer exists).
Of course, the Unreal Engine 3-based UDK is still available (and just recently updated).
This is definitely interesting and, I believe, a response to publishers doubling down on developing their own engines. EA has basically sworn off engines outside of their own Frostbite and Ignite technologies. Ubisoft has only announced or released three games based on Unreal Engine since 2011; Activision has announced or released seven in that time, three of which were in that first year. Epic Games has always been very friendly to smaller developers and, with the rise of the internet, it is becoming much easier for indie developers to release content through Steam or even their own website. These developers now have a "AAA" engine, which I think almost anyone would agree that Unreal Engine 4 is, with an affordable license (and full source access).
Speaking of full source access, licensees can access the engine at Epic's GitHub. While a top-five publisher might hesitate to share fixes and patches, the army of smaller developers might share and share-alike. This could lead to Unreal Engine 4 acquiring its own features rapidly. Epic highlights their Oculus VR, Linux and Steam OS, and native HTML5 initiatives but, given community support, there could be pushes into unofficial support for Mantle, TrueAudio, or other technologies. Who knows?
A sister announcement, albeit a much smaller one, is that Unreal Engine 4 is now part of NVIDIA's GameWorks initiative. This integrates various NVIDIA SDKs, such as PhysX, into the engine. The press release quote from Tim Sweeney is as follows:
Epic developed Unreal Engine 4 on NVIDIA hardware, and it looks and runs best on GeForce.
Another brief mention is that Unreal Engine 4 will have expanded support for Android.
Subject: General Tech, Shows and Expos | March 19, 2014 - 09:00 PM | Scott Michaud
Tagged: Mantle, gdc 14, GDC, crytek, CRYENGINE
While I do not have too many details otherwise, Crytek and AMD have announced that mainline CRYENGINE will support the Mantle graphics API. CRYENGINE, by Crytek, now sits alongside Frostbite, by DICE, and Nitrous, by Oxide Games, as engines which support that alternative to DirectX and OpenGL. This comes little more than a week after Crytek's announcement of native Linux support for their popular engine.
The tape has separate draw calls!
Crytek has been "evaluating" the API for quite some time now, showing interest back at the AMD Developer Summit. Since then, they have apparently made a clear decision on it. It is also not the first time that CRYENGINE has been publicly introduced to Mantle: Chris Roberts' Star Citizen, also powered by the 4th generation CRYENGINE, had already announced support for the graphics API. Of course, there is a large gap between having a licensee do the legwork to include an API and having the engine developer provide you with supported builds (that would be like saying Unreal Engine 3 supports the original Wii).
Hopefully we will learn more as GDC continues.
Editor's (Ryan) Take:
As the week at GDC has gone on, AMD continues to push forward with Mantle and calls Crytek's implementation of the low-level API "a huge endorsement" of the company's direction and vision for the future. Many, including myself, have considered that the pending announcement of DX12 would be a major setback for Mantle, but AMD claims that view is "short-sighted" and that, as more developers come into the Mantle ecosystem, it is proof AMD is doing the "right thing."
Here at GDC, AMD told us they have expanded the number of beta Mantle members dramatically with plenty more applications (dozens) in waiting. Obviously this could put a lot of strain on AMD for Mantle support and maintenance but representatives assure us that the major work of building out documentation and development tools is nearly 100% behind them.
If stories like this one over at SemiAccurate are true, and Microsoft's DirectX 12 will be nearly identical to AMD's Mantle, then it makes sense that developers serious about new gaming engines can get a leg up on projects by learning Mantle today. Applying that knowledge to the DX12 API upon its release could speed up development and improve implementation efficiency. From what I am hearing from the few developers willing to even mention DX12, Mantle is much further along in its release (late beta) than DX12 is (early alpha).
AMD was indeed talking with Microsoft and sharing the development of Mantle "every step of the way," and AMD has stated on several occasions that there were two possible outcomes for Mantle: it either becomes, or inspires, a new industry standard in game development. Even if DX12 is more or less a carbon copy of Mantle, forcing NVIDIA to implement that API style with DX12's release, AMD could potentially have the advantage in gaming performance and support between now and Microsoft's DirectX release. That could be as much as a full calendar year, from reports we are getting at GDC.
Subject: General Tech, Shows and Expos | March 19, 2014 - 05:20 PM | Jeremy Hellstrom
Tagged: Imagination Technologies, gdc 14, wizard, ray tracing
The Tech Report visited Imagination Technologies' booth at GDC, where they were showing off a new processor, the Wizard GPU. It is based on the PowerVR Series6XT Rogue graphics processor and is specifically designed to accelerate ray tracing, a topic we haven't heard much about lately. They describe the performance as capable of processing 300 million rays and 100 million dynamic triangles per second, which translates to 7 to 10 rays per pixel at 720p and 30Hz, or 3 to 5 rays per pixel at 1080p and 30Hz. That is not bad, though Imagination Technologies estimates that movies render at a rate of 16 to 32 rays per pixel, so it may be a while before we see a Ray Tracing slider under Advanced Graphics Options.
"When we visited Imagination Technologies at CES, they were showing off some intriguing hardware that augments their GPUs in order to accelerate ray-traced rendering. Ray tracing is a well-known and high-quality form of rendering that relies on the physical simulation of light rays bouncing around in a scene. Although it's been used in movies and in static scene creation, ray tracing has generally been too computationally intensive to be practical for real-time graphics and gaming. However, Imagination Tech is looking to bring ray-tracing to real-time graphics—in the mobile GPU space, no less—with its new family of Wizard GPUs."
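The rays-per-pixel figures above fall straight out of the quoted 300-million-rays-per-second rate (my arithmetic, treating the rate as a theoretical peak):

```python
# Spreading 300 million rays per second across every pixel of every frame.

RAYS_PER_SEC = 300e6

def rays_per_pixel(width, height, fps):
    return RAYS_PER_SEC / (width * height * fps)

print(round(rays_per_pixel(1280, 720, 30), 1))   # 10.9 at 720p30
print(round(rays_per_pixel(1920, 1080, 30), 1))  # 4.8 at 1080p30
```

The quoted 7-to-10 and 3-to-5 ranges sit just under these theoretical peaks, which is what you would expect once overhead eats some of the ray budget.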
Here is some more Tech News from around the web:
- MoOx contacts make p-type transistor @ Nanotechweb
- Surrender your crypto keys or you're off to chokey, says Australia @ The Register
- Win XP holdouts storm eBay and licence brokers, hiss: Give us all your Windows 7 @ The Register
- Ubuntu Now Runs Well On The MacBook Air, Beats OS X In Graphics @ Phoronix
- Hidden 'Windigo' UNIX ZOMBIES are EVERYWHERE @ The Register
- Xbox boss Marc Whitten leaves Microsoft for Sonos as PS4 leads console sales @ The Inquirer
- Big Brother China Censors WeChat... Again @ TechARP
- Ergotech Freedom Quad 1-over-3 Desk Stand Review @ Techgage
- 10 Old Sprint Phones Can Now Get Totally Free Voice, Texts, and Data @ Gizmodo
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 01:03 PM | Scott Michaud
Tagged: WebCL, gdc 14, GDC
The Khronos Group has just ratified the WebCL 1.0 standard. The API is expected to provide a massive performance boost to web applications that are dominated by expensive functions which can be offloaded to parallel processors, such as GPUs and multi-core CPUs. The specification also allows WebCL to communicate and share buffers with WebGL through an extension.
Frequent readers of the site might remember that I have a particular interest in WebCL. Based on OpenCL, it allows web apps to obtain a list of every available compute device and target any of them for workloads. I have personally executed tasks on an NVIDIA GeForce GTX 670 discrete GPU and other jobs on my Intel HD 4000 iGPU, at the same time, using the WebCL prototype from Tomi Aarnio of Nokia Research. The same is true for users with multiple discrete GPUs installed in their system (even if they are not compatible with CrossFire or SLI, or are from different vendors altogether). This could be very useful for physics, AI, lighting, and other game middleware packages.
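Device enumeration is the first step in that workflow. Here is a minimal sketch, assuming a WebCL implementation (such as the Nokia Research prototype) that exposes the spec's `webcl` root object; the helper function and its parameter are mine, so it can be handed `window.webcl` on a real page:

```javascript
// Collect every compute device (CPU, discrete GPU, iGPU, accelerator)
// the WebCL implementation can see, across all platforms (vendors/drivers).
function listComputeDevices(webcl) {
  const devices = [];
  for (const platform of webcl.getPlatforms()) {
    // DEVICE_TYPE_ALL matches CPUs, discrete GPUs, and iGPUs alike.
    for (const device of platform.getDevices(webcl.DEVICE_TYPE_ALL)) {
      devices.push(device);
    }
  }
  return devices;
}
```

Each returned device could then get its own context and command queue, which is how independent kernels end up running on, say, a discrete GPU and an iGPU at the same time.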
Still, browser adoption might be rocky for quite some time. Google, Mozilla, and Opera Software were each involved in the working draft. This leaves both Apple and Microsoft notably absent. Even then, I am not sure how much interest exists within Google, Mozilla, and Opera to take it from a specification to a working feature in their browsers. Some individuals have expressed more faith in WebGL compute shaders than WebCL.
Of course, that can change with just a single "killer app", library, or middleware.
I do expect some resistance from the platform holders, however. Even Google has been pushing back on OpenCL support in Android in favor of its own RenderScript abstraction. The performance of a graphics processor is also significant leverage for native apps: there is little else that cannot be accomplished with Web standards, except a web browser itself (and there are even some non-serious projects for that). If Microsoft can support WebGL, however, there is always hope.
The specification is available at the Khronos website.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 01:02 PM | Scott Michaud
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL
The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security, interoperability between APIs, and support for many operating systems, including Android and 64-bit Linux.
The headline change is that EGLImage objects move from the realm of extensions into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily for the benefit of WebGL, since any arbitrary website can become a WebGL application. Further down the list is the EGLSync object, which allows closer cooperation between OpenGL (and OpenGL ES) and OpenCL; the GPU may not need CPU involvement when scheduling between tasks on the two APIs.
During the call, the representative also mentioned that developers have asked Khronos to bring EGL back to Windows. While that has not happened yet, it is now an announced target.
The EGL 1.5 spec is available at the Khronos website.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 01:01 PM | Scott Michaud
Tagged: SYCL, opencl, gdc 14, GDC
To gather community feedback, The Khronos Group has released the provisional specification for SYCL 1.2. SYCL builds C++11 support on top of OpenCL. The technology relies on another Khronos standard, SPIR, which allows the OpenCL C programming language to be mapped onto LLVM and its hundreds of compatible languages (and Khronos is careful to note that it intends for anyone to be able to make their own compatible alternative language).
In short, SPIR allows many languages which can compile into LLVM to take advantage of OpenCL. SYCL is the specification for creating C++11 libraries and compilers through SPIR.
As stated earlier, Khronos wants anyone to make their own compatible language:
While SYCL is one possible solution for developers, the OpenCL group encourages innovation in programming models for heterogeneous systems, either by building on top of the SPIR™ low-level intermediate representation, leveraging C++ programming techniques through SYCL, using the open source CLU libraries for prototyping, or by developing their own techniques.
SYCL 1.2 supports OpenCL 1.2, and Khronos intends to develop it alongside OpenCL. Future releases are expected to support the latest OpenCL 2.0 specification and keep pace with future developments.
The SYCL 1.2 provisional spec is available at the Khronos website.