Subject: General Tech | October 23, 2013 - 01:53 PM | Jeremy Hellstrom
Tagged: amd, hUMA
hUMA's currently best-known trick, a shared memory space which both the CPU and GPU can access without penalty, is only the first of its revealed optimizations. The Register reports today on another way this new architecture gives the CPU and GPU equal treatment: standardized task queues and dispatch packets that avoid going through a kernel-level driver to assign tasks. With hUMA, the GPU is able to schedule tasks for the CPU directly. Any application designed to hUMA standards could have its various tasks assigned to the proper processor without needing extra coding. This not only makes apps cheaper and quicker to develop, but would let all hUMA apps take advantage of the specialized abilities of both the CPU and GPU at no cost.
"The upcoming chips will utilise a technique AMD calls Heterogeneous Queuing (hQ). This new approach puts the GPU on an equal footing with the CPU: no longer will the graphics engine have to wait for the central processor to tell it what to do."
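The interesting mechanical bit is that those dispatch packets live in user-mode queues that either processor can write to, so no kernel driver sits between "the GPU wants CPU work done" and that work actually being queued. As a rough conceptual sketch only (the class and field names here are illustrative, not AMD's actual packet format):

```python
from collections import deque

# Illustrative stand-in for a self-describing dispatch packet: with hQ,
# both the CPU and the GPU enqueue these directly, with no kernel-level
# driver in the path.
class DispatchPacket:
    def __init__(self, target, kernel, args):
        self.target = target    # "cpu" or "gpu": which processor should run it
        self.kernel = kernel    # callable standing in for compiled kernel code
        self.args = args

class UserModeQueue:
    """A shared queue that either processor can push to or pop from."""
    def __init__(self):
        self._q = deque()

    def enqueue(self, packet):
        # The GPU can schedule CPU work here directly, and vice versa.
        self._q.append(packet)

    def dispatch_all(self):
        results = []
        while self._q:
            pkt = self._q.popleft()
            results.append((pkt.target, pkt.kernel(*pkt.args)))
        return results

# The "GPU" queues a task for the "CPU" without any driver mediation:
queue = UserModeQueue()
queue.enqueue(DispatchPacket("cpu", lambda x: x * 2, (21,)))
print(queue.dispatch_all())  # → [('cpu', 42)]
```

The point of the sketch is the shape of the design, not the details: because the packet format is standardized, whichever processor pops the packet knows how to run it, which is what lets a hUMA application farm tasks to the proper processor without extra coding.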
Here is some more Tech News from around the web:
- iPad Air vs iPad 4 specs comparison @ The Inquirer
- Firefox's Blocked-By-Default Java Isn't Going Down Well @ Slashdot
- Microsoft's Surface Pro 2 is harder to repair than an iPad @ The Inquirer
- ASUS talks Rampage IV Black Edition, next-gen video cards and cooling technology @ Hardware.info
- D-Link hole-prober finds 'backdoor' in Chinese wireless routers @ The Register
- be quiet! WorldWide Joint Giveaway - Win one Power Zone 1000W PSU, one Shadow Rock 2 CPU Cooler and two Silent Wings 140mm Fans
The Really Good Times are Over
We really do not realize how good we had it. Sure, we could apply that to budget surpluses and the time before the rise of global terrorism, but in this case I am talking about the predictable advancement of graphics due to both design expertise and improvements in process technology. Moore's law has been exceptionally kind to graphics. Looking back and plotting the course of these graphics companies, we see that they have actually outstripped Moore in terms of transistor density from generation to generation. Most of this is due to better tools and the expertise gained in what is still a fairly new endeavor compared to CPUs (the first true 3D accelerators were released in the 1993/94 timeframe).
The complexity of a modern 3D chip is truly mind-boggling. To get a good idea of where we came from, we must look back at the first generations of products that we could actually purchase. The original 3Dfx Voodoo Graphics consisted of a raster chip and a texture chip, each containing approximately 1 million transistors (give or take), made on the then-available .5 micron process (we shall call it 500 nm from here on out to give a sense of perspective against modern process technology). The chips were clocked between 47 and 50 MHz (though they could often be pushed to 57 MHz by going into the init file and adding "SET SST_GRXCLK=57"… btw, SST stood for Sellers/Smith/Tarolli, the founders of 3Dfx). Revolutionary at the time, this graphics card could push out 47 to 50 megapixels per second, carried 4 MB of VRAM, and was released in the beginning of 1996.
My first 3D graphics card was the Orchid Righteous 3D. Voodoo Graphics was really the first successful consumer 3D graphics card. Yes, there were others before it, but Voodoo Graphics had the largest impact of them all.
In 1998 3Dfx released the Voodoo 2, and it was a significant jump in complexity from the original. These chips were fabricated on a 350 nm process. There were three chips on each card: one raster chip and two texture chips. At the top end of the product stack were the 12 MB cards. The raster chip had 4 MB of VRAM available to it, while each texture chip had 4 MB of VRAM for texture storage. Not only did this product double the performance of the Voodoo Graphics, it was able to run in single-card configurations at 800x600 (compared to the 640x480 maximum of the Voodoo Graphics). This was around the same time that NVIDIA started to become a very aggressive competitor with the Riva TNT and ATI was about to ship the Rage 128.
Subject: Graphics Cards | October 18, 2013 - 07:55 PM | Ryan Shrout
Tagged: video, tim sweeney, nvidia, Mantle, john carmack, johan andersson, g-sync, amd
If you weren't on our live stream from the NVIDIA "The Way It's Meant to be Played" tech day this afternoon, you missed a hell of an event. After the announcement of NVIDIA G-Sync variable refresh rate monitor technology, NVIDIA's Tony Tamasi brought one of the most intriguing panels of developers on stage to talk.
John Carmack, Tim Sweeney and Johan Andersson talked for over an hour, taking questions from the audience and even getting into debates amongst themselves in some instances. Topics included NVIDIA G-Sync of course, AMD's Mantle low-level API, the hurdles facing PC gaming and the direction each luminary is currently taking for future development.
If you are a PC enthusiast or gamer you are definitely going to want to listen and watch the video below!
Subject: General Tech, Graphics Cards | October 17, 2013 - 04:37 PM | Scott Michaud
Tagged: radeon, R9 290X, amd
The NDA on AMD R9 290X benchmarks has not yet lifted, but AMD was in Montreal to provide two previews: BioShock Infinite and Tomb Raider, both at 4K (3840 x 2160). Keep in mind, these scores are provided by AMD and definitely do not represent results from our monitor-capture solution. Expect more detailed results from us later, once we do some Frame Rating.
The test machine used in both setups contains:
- Intel Core i7-3960X at 3.3 GHz
- MSI X79A-GD65
- 16GB of DDR3-1600
- Windows 7 SP1 64-bit
- NVIDIA GeForce GTX 780 (331.40 drivers) / AMD Radeon R9 290X (13.11 beta drivers)
The R9 290X is configured in its "Quiet Mode" during both benchmarks. This is particularly interesting to me, as I was unaware of such a feature (it has been a while since I last used a desktop AMD/ATi card). I would assume this is a fan and power profile that keeps noise levels as silent as possible for some period of time. A quick Google search suggests this feature is new with the Radeon Rx200-series cards.
BioShock Infinite is quite demanding at 4K with ultra quality settings. Both cards maintain an average framerate above 30FPS.
AMD R9 290X "Quiet Mode": 44.25 FPS
NVIDIA GeForce GTX 780: 37.67 FPS
(Update 1: 4:44pm EST) AMD confirmed TressFX is disabled in these benchmark scores. It is, however, enabled if you are present in Montreal to see the booth. (end of update 1)
Tomb Raider is also a little harsh at this resolution. Unfortunately, the results are ambiguous as to whether TressFX was enabled throughout the benchmarks. The summary explicitly claims TressFX is enabled, while the string of settings contains "Tressfx=off". Clearly, one of the two entries is a typo. We are currently trying to get clarification. In the meantime:
AMD R9 290X "Quiet Mode": 40.2 FPS
NVIDIA GeForce GTX 780: 34.5 FPS
Notice that neither of these results is compared to a GeForce Titan. Recent leaks suggest a retail price for AMD's flagship card in the low-$700 range. The GeForce GTX 780, on the other hand, resides at the $650-700 USD price point.
It seems pretty clear, to me, that cost drove this comparison rather than performance.
Subject: General Tech, Graphics Cards | October 17, 2013 - 03:46 PM | Scott Michaud
Tagged: amd, radeon
In summary, "We don't know yet".
We do know of a story posted by Fudzilla which cited Roy Taylor, VP of Global Channel Sales, as a source confirming the reintroduction of Never Settle for the new "Rx200" Radeon cards. Adding credibility, Roy Taylor retweeted the story via his official account. This tweet is still there as I write this post.
The Tech Report, after publishing the story, was contacted by Robert Hallock of AMD Gaming and Graphics. The official word, now, is that AMD does not have any announcements regarding bundles for new products. He is also quoted as saying, "We continue to consider Never Settle bundles as a core component of the AMD Gaming Evolved program and intend to do them again in the future".
So I (personally) see promise that we will get a new Never Settle bundle. For the moment, AMD is officially silent on the matter. We also do not know (and, it is possible, neither does AMD at this time) which games will be included, how many a user can claim, or whether there will even be a choice at all.
Subject: General Tech | October 15, 2013 - 01:30 PM | Jeremy Hellstrom
Tagged: amd, linux, catalyst, 3.12 kernel
Phoronix had a very pleasant surprise over the weekend, one which was apparently also a complete surprise to the AMD Linux driver team: vastly improved GPU performance on the 3.12 kernel. In tests on ten cards, ranging from the elderly HD 3850 to the HD 6950, all showed at least some improvement, and in some cases increases of 50%. Keep your eyes tuned for updates as Phoronix works with AMD's Linux team to determine where these performance increases originated, and gets its hands on new hardware for more testing. Check out the complete review here.
"Over the weekend I released benchmarks showing the Linux 3.12 kernel bringing big AMD Radeon performance improvements. Those benchmarks of a Radeon HD 4000 series GPU showed the Linux 3.12 kernel bringing major performance improvements over Linux 3.11 and prior. Some games improved with just low double-digit gains while other Linux games were nearly 90% faster! Interestingly, the AMD Radeon Linux developers were even surprised by these findings. After carrying out additional tests throughout the weekend, I can confirm these truly incredible performance improvements on other hardware. In this article are results from ten different AMD Radeon graphics cards."
Here is some more Tech News from around the web:
- Imagination eyes Intel and ARM with Warrior P5600 CPU based on MIPS architecture @ The Inquirer
- Best Buy: Bring us your cowering, unwanted Microsoft Surface masses @ The Register
- Blackberry desperately reassures customers in an open letter @ The Inquirer
- Windows 8.1: Six Things Microsoft Got Right and Others That Are Still Missing @ TechSpot
- MS Word deserves DEATH says Brit SciFi author Charles Stross @ The Register
- Toshiba Set To Change SSD Landscape with Upper Tier SSDs at Rock Bottom Pricing @ SSD Review
- Devolo dLAN LiveCam Starter Kit @ NikKTech
- Control panel backdoor found in D-Link home routers @ The Register
- Ooma Office: VoIP Small Office Phone System Review @ TechwareLabs
- iPad 5 release date, rumours, price and features @ The Inquirer
- L-Com Gigabit Ethernet Desktop Switches @ TechwareLabs
- October 2013 Rumours @ Western Digital @ TechARP
Subject: Graphics Cards | October 14, 2013 - 08:52 PM | Ryan Shrout
Tagged: xbox one, microsoft, Mantle, dx11, amd
Microsoft posted a new blog on its Windows site that discusses some of the new features of the latest DirectX on Windows 8.1 and the upcoming Xbox One. Of particular interest was a line that confirms what I have said all along about the much-hyped AMD Mantle low-level API: it is not compatible with Xbox One.
We are very excited that with the launch of Xbox One, we can now bring the latest generation of Direct3D 11 to console. The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality. Other graphics APIs such as OpenGL and AMD’s Mantle are not available on Xbox One.
What does this mean for AMD? Nothing really changes except some of the common online discussion about how easy it would now be for developers to convert games built for the console to the AMD-specific Mantle API. AMD claims that Mantle offers a significant performance advantage over DirectX and OpenGL by giving developers that choose to implement support for it closer access to the hardware without much of the software overhead found in other APIs.
This is what Mantle does. It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software. This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX. This is a significant boost in overall efficiency. Before everyone gets too excited, we will not see a 9x improvement in overall performance with every application. A single HD 7790 running in Mantle is not going to power 3 x 1080P monitors in Eyefinity faster than a HD 7970 or GTX 780 (in Surround) running in DirectX. Mantle shifts the bottleneck elsewhere.
I still believe that AMD Mantle could bring interesting benefits to the AMD Radeon graphics cards on the PC but I think this official statement from Microsoft will dampen some of the over excitement.
Also worth noting is this comment about the DX11 implementation on the Xbox One:
With Xbox One we have also made significant enhancements to the implementation of Direct3D 11, especially in the area of runtime overhead. The result is a very streamlined, “close to metal” level of runtime performance. In conjunction with the third generation PIX performance tool for Xbox One, developers can use Direct3D 11 to unlock the full performance potential of the console.
So while Windows and the upcoming Xbox One will share an API there will still be performance advantages for games on the console thanks to the nature of a static hardware configuration.
Subject: General Tech, Graphics Cards | October 11, 2013 - 04:47 PM | Scott Michaud
Tagged: amd, R9 290X
When you deal with the web, almost nothing is hidden. The browsers (and some extensions) have access to just about everything on screen and off. Anyone browsing a site or app can inspect the contents and even modify it. That last part could be key.
TechPowerUp got their hands on a screenshot of developer tools inspecting the Newegg website. A few elements are hidden on the right-hand side within the "Coming Soon" container. One element, with an id of "singleFinalPrice", is set to "visibility: hidden;" with a price as its contents. In the TechPowerUp screenshot, this price is listed as "729.99" USD.
Of course, good journalism means confirming this yourself. As of the time I checked, the value was listed as "9999.99" USD. This means one of two things: either Newegg changed the value to a placeholder after the leak was discovered, or the source of TechPowerUp's screenshot used those same developer tools to modify the content. It is actually a remarkably easy thing to do... here I change the value to 99 cents by right-clicking on the element and modifying the HTML.
No, the R9 290X is not 99 cents.
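Reading such a hidden value out of the markup is just as easy as editing it. Here is a minimal sketch using Python's standard html.parser, run against invented markup that mirrors the structure described above (the real page is obviously far larger, but the "singleFinalPrice" id is the one from the screenshot):

```python
from html.parser import HTMLParser

# Invented markup mirroring the structure described above: a price sitting
# inside a hidden element with id "singleFinalPrice".
PAGE = ('<div class="coming-soon">'
        '<span id="singleFinalPrice" style="visibility: hidden;">9999.99</span>'
        '</div>')

class HiddenPriceFinder(HTMLParser):
    """Capture the text content of the element with id 'singleFinalPrice'."""
    def __init__(self):
        super().__init__()
        self._capture = False
        self.price = None

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opening tag.
        if dict(attrs).get("id") == "singleFinalPrice":
            self._capture = True

    def handle_data(self, data):
        if self._capture:
            self.price = data
            self._capture = False

finder = HiddenPriceFinder()
finder.feed(PAGE)
print(finder.price)  # → 9999.99
```

Which is the point: anything a browser renders (or hides) is plainly visible in the page source, so a screenshot of developer tools is proof of very little on its own.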
So I would take these screenshots with a grain of salt while we only have access to one source. That said, $729.99 does sound like a reasonable price point for AMD to release a Titan competitor at. Of course, that is exactly what a hoaxer would want.
But, as it stands right now, I would not get your hopes up. An MSRP of ~$699-$749 USD sounds legitimate but we do not have even a second source, at the moment, to confirm that. Still, this might be something our readers would like to know.
Subject: General Tech | October 11, 2013 - 03:19 PM | Jeremy Hellstrom
Tagged: amd, Mantle, gaming, valve
The Tech Report has been mulling over the upcoming release of SteamOS and AMD's Mantle, and they see some problems that could come about because of them. Fragmentation has always been a problem for PCs, be it that the hardware between systems never matches or the wide variety of APIs and game engines on the software side. It can be daunting to begin developing a game and determine whether optimizing for AMD, NVIDIA or Intel is worth considering, as well as to choose between Direct3D and OpenGL, or try to make both work. Mantle is now another choice; BF4 will receive a natively Mantle version shortly after the initial version of the game launches. Valve has also hinted that several AAA titles will be released on SteamOS, not necessarily Windows or Linux. What effect could this have on PC gaming as these new choices arrive at the same time the next-generation consoles are released? Read on and see.
"Valve's SteamOS and AMD's Mantle API have the potential to do great things for PC gaming. However, they also threaten to fragment the platform at a critical time, when next-gen consoles are about to reduce the PC's performance and image quality lead by a long shot."
Here is some more Tech News from around the web:
- Windows 8.1 won't save Windows 8 @ The Inquirer
- The legacy IE survivor's guide: Firefox, Chrome... more IE? @ The Register
- Microsoft wants to 'move beyond' the Cookie Monster @ The Register
- Will BlackBerry be cherrypicked, or bought by its daddies? @ The Register
- Technology Before Its Time: 9 Products That Were Too Early to Market @ TechSpot
- D-Link DIR-868L Wireless AC1750 Dual-Band Cloud Router Review @ Legit Reviews
- Happy 10th b-day, Patch Tuesday: TWO critical IE 0-day bugs, did you say? @ The Register
- ASUS AOOC 2013 Finals Moscow Report @ techPowerUp
- Beginners Guides: Repairing a Cracked / Broken Notebook LCD Screen @ PCSTATS
- ASUS RT-AC66U AC1750 Wireless 802.11AC Router Review @MissingRemote
Subject: General Tech, Graphics Cards, Systems | October 10, 2013 - 06:59 PM | Scott Michaud
Tagged: amd, nvidia, Intel, Steam Machine
This should be little-to-no surprise for the viewers of our podcast, as this story was discussed there, but Valve has confirmed AMD and Intel graphics are compatible with Steam Machines. Doug Lombardi of Valve commented by email to, apparently, multiple sources including Forbes and Maximum PC.
Last week, we posted some technical specs of our first wave of Steam Machine prototypes. Although the graphics hardware that we've selected for the first wave of prototypes is a variety of NVIDIA cards, that is not an indication that Steam Machines are NVIDIA-only. In 2014, there will be Steam Machines commercially available with graphics hardware made by AMD, NVIDIA, and Intel. Valve has worked closely together with all three of these companies on optimizing their hardware for SteamOS, and will continue to do so into the foreseeable future.
Ryan and the rest of the podcast crew found the whole situation "odd". They could not understand why AMD referred the press to Doug Lombardi rather than circulate a canned statement from him. It was also strange that NVIDIA had an exclusive on the beta program when AMD hardware will be commercially available in 2014.
As I said in the initial post: for what seems to be a deliberately non-committal approach to hardware specs, why limit the first wave to a single graphics provider?