hUMA's new trick: standardized task dispatch packets
Subject: General Tech | October 23, 2013 - 01:53 PM | Jeremy Hellstrom
Tagged: amd, hUMA
hUMA's currently well-known trick, a shared memory space which both the CPU and GPU can access without penalty, is only the first of its revealed optimizations. The Register today covers another way in which this new architecture gives the CPU and GPU equal treatment: standardized task queues and dispatch packets, which avoid dealing with a kernel-level driver to assign tasks. With hUMA the GPU is able to schedule tasks for the CPU directly. That would allow any application designed to hUMA standards to have its various tasks assigned to the proper processor without extra coding. This not only makes apps cheaper and quicker to design but would let all hUMA apps take advantage of the specialized abilities of both the CPU and GPU at no cost.
"The upcoming chips will utilise a technique AMD calls Heterogeneous Queuing (hQ). This new approach puts the GPU on an equal footing with the CPU: no longer will the graphics engine have to wait for the central processor to tell it what to do."
Here is some more Tech News from around the web:
- iPad Air vs iPad 4 specs comparison @ The Inquirer
- Firefox's Blocked-By-Default Java Isn't Going Down Well @ Slashdot
- Microsoft's Surface Pro 2 is harder to repair than an iPad @ The Inquirer
- ASUS talks Rampage IV Black Edition, next-gen video cards and cooling technology @ Hardware.info
- D-Link hole-prober finds 'backdoor' in Chinese wireless routers @ The Register
- be quiet! WorldWide Joint Giveaway - Win one Power Zone 1000W PSU, one Shadow Rock 2 CPU Cooler and two Silent Wings 140mm Fans