AMD cards get an unexpected boost from the Linux 3.12 kernel

Subject: General Tech | October 15, 2013 - 01:30 PM |
Tagged: amd, linux, catalyst, 3.12 kernel

Phoronix had a very pleasant surprise over the weekend, one which was apparently also a complete surprise to the AMD Linux driver team: vastly improved GPU performance on the 3.12 kernel. In tests on ten cards ranging from the elderly HD 3850 to the HD 6950, every card showed at least some improvement, in some cases gains of 50% or more. Keep your eyes peeled for updates as Phoronix works with AMD's Linux team to track down where these performance increases originated, and as they get their hands on newer hardware for more testing. Check out the complete review here.

AMD-Catalyst-Linux.jpg

"Over the weekend I released benchmarks showing the Linux 3.12 kernel bringing big AMD Radeon performance improvements. Those benchmarks of a Radeon HD 4000 series GPU showed the Linux 3.12 kernel bringing major performance improvements over Linux 3.11 and prior. Some games improved with just low double-digit gains while other Linux games were nearly 90% faster! Interestingly, the AMD Radeon Linux developers were even surprised by these findings. After carrying out additional tests throughout the weekend, I can confirm these truly incredible performance improvements on other hardware. In this article are results from ten different AMD Radeon graphics cards."

Here is some more Tech News from around the web:

Tech Talk

Source: Phoronix

Microsoft Confirms AMD Mantle Not Compatible with Xbox One

Subject: Graphics Cards | October 14, 2013 - 08:52 PM |
Tagged: xbox one, microsoft, Mantle, dx11, amd

Microsoft posted a new blog on its Windows site that discusses some of the new features of the latest DirectX on Windows 8.1 and the upcoming Xbox One. Of particular interest was a line that confirms what I have said all along about the much-hyped AMD Mantle low-level API: it is not compatible with Xbox One.

We are very excited that with the launch of Xbox One, we can now bring the latest generation of Direct3D 11 to console. The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality. Other graphics APIs such as OpenGL and AMD’s Mantle are not available on Xbox One.

mantle.jpg

What does this mean for AMD? Nothing really changes, except for some of the common online discussion about how easy it would now be for developers to convert games built for the console to the AMD-specific Mantle API. AMD claims that Mantle offers a significant performance advantage over DirectX and OpenGL by giving developers who choose to implement support for it closer access to the hardware, without much of the software overhead found in other APIs.

Josh summed it up in a recent editorial.

This is what Mantle does.  It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.  This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX.  This is a significant boost in overall efficiency.  Before everyone gets too excited, we will not see a 9x improvement in overall performance with every application.  A single HD 7790 running in Mantle is not going to power 3 x 1080P monitors in Eyefinity faster than a HD 7970 or GTX 780 (in Surround) running in DirectX.  Mantle shifts the bottleneck elsewhere.

I still believe that AMD Mantle could bring interesting benefits to AMD Radeon graphics cards on the PC, but I think this official statement from Microsoft will dampen some of the overexcitement.
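A quick toy calculation helps show why a large draw-call multiplier does not translate directly into frame rate, as Josh's quote above warns. The sketch below models a frame where CPU submission and GPU rendering overlap; all of the numbers are illustrative assumptions, not measurements or AMD figures.

```typescript
// Toy model of why a 9x draw-call speedup doesn't mean 9x FPS.
// All numbers here are illustrative assumptions, not AMD's figures.
const cpuSubmitMs = 20; // per-frame CPU time issuing draw calls (a CPU-bound case)
const gpuRenderMs = 10; // per-frame GPU rendering time

// With CPU and GPU work overlapped, frame time is bounded by the slower side.
const beforeMs = Math.max(cpuSubmitMs, gpuRenderMs);    // 20 ms -> 50 FPS, CPU-bound
const afterMs = Math.max(cpuSubmitMs / 9, gpuRenderMs); // 10 ms -> 100 FPS, now GPU-bound

console.log(`before: ${beforeMs} ms/frame, after: ${afterMs} ms/frame`);
```

In this made-up scenario a ninefold cut in submission cost only doubles the frame rate, because the GPU becomes the new bottleneck; exactly the shift Josh describes.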

Also worth noting is this comment about the DX11 implementation on the Xbox One:

With Xbox One we have also made significant enhancements to the implementation of Direct3D 11, especially in the area of runtime overhead. The result is a very streamlined, “close to metal” level of runtime performance. In conjunction with the third generation PIX performance tool for Xbox One, developers can use Direct3D 11 to unlock the full performance potential of the console.

So while Windows and the upcoming Xbox One will share an API, there will still be performance advantages for games on the console thanks to the nature of a static hardware configuration.

Source: Microsoft

Rumor: Web Developer Tools Leaked the R9 290X Price?

Subject: General Tech, Graphics Cards | October 11, 2013 - 04:47 PM |
Tagged: amd, R9 290X

When you deal with the web, almost nothing is hidden. Browsers (and some extensions) have access to just about everything on screen and off, and anyone browsing a site or app can inspect its contents and even modify them. That last part could be key.

amd-r9-290x-newegg01.jpg

TechPowerUp got their hands on a screenshot of developer tools inspecting the Newegg website. A few elements are hidden on the right-hand side within the "Coming Soon" container. One element, with the id "singleFinalPrice", is set to "visibility: hidden;" and has a price as its contents. In the TechPowerUp screenshot, this price is listed as "729.99" USD.

amd-r9-290x-newegg02.png

Of course, good journalism means confirming this yourself. As of the time I checked, this value is listed as "9999.99" USD. This means one of two things: either Newegg changed the value to a placeholder after the leak was discovered, or the source of TechPowerUp's screenshot used those same developer tools to modify the content. It is actually a remarkably easy thing to do... here I change the value to 99 cents by right clicking on the element and modifying the HTML.

amd-r9-290x-newegg03.png

No, the R9 290X is not 99 cents.
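For anyone curious just how little effort this takes, here is a minimal sketch of the same stunt performed from the browser's developer console rather than the HTML inspector. The "singleFinalPrice" id comes from the screenshots above; the rest is ordinary DOM scripting that works on any page.

```typescript
// Minimal sketch: faking a listed price from the browser console.
// The id "singleFinalPrice" is taken from the Newegg screenshots;
// everything else is standard DOM scripting anyone can run.
const price = document.getElementById("singleFinalPrice");
if (price) {
  price.textContent = "0.99";          // rewrite the hidden price
  price.style.visibility = "visible";  // undo the "visibility: hidden;" rule
}
```

This is exactly why a lone screenshot of developer tools proves nothing: the evidence and the tool for forging it are one and the same.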

So I would take these screenshots with a grain of salt while we only have access to one source. That said, $729.99 does sound like a reasonable price point for AMD to release a Titan competitor at. Of course, that is exactly what a hoaxer would want.

But, as it stands right now, I would not get my hopes up. An MSRP of ~$699-$749 USD sounds legitimate, but at the moment we do not have even a second source to confirm it. Still, this might be something our readers would like to know.

Source: TechPowerUp

Thoughts on the unintended consequences of Mantle and SteamOS

Subject: General Tech | October 11, 2013 - 03:19 PM |
Tagged: amd, Mantle, gaming, valve

The Tech Report has been thinking about the upcoming releases of SteamOS and AMD's Mantle, and they see some problems that could come about because of them. Fragmentation has always been a problem for the PC, whether it is hardware that never matches between systems or the wide variety of APIs and game engines on the software side. It can be daunting to begin developing a game and to determine whether optimizing for AMD, NVIDIA, or Intel is worth considering, on top of the choice between Direct3D and OpenGL, or trying to make both work. Mantle is now yet another choice; BF4 will actually get a natively Mantle version shortly after the first version of the game launches. Valve has also hinted that several AAA titles will be released on SteamOS, and not necessarily on Windows or other Linux distributions. What effect could these new choices have on PC gaming, arriving as they do at the same time as the next-generation consoles? Read on and see.

too-many-choices.png

"Valve's SteamOS and AMD's Mantle API have the potential to do great things for PC gaming. However, they also threaten to fragment the platform at a critical time, when next-gen consoles are about to reduce the PC's performance and image quality lead by a long shot."

Here is some more Tech News from around the web:

Tech Talk

Valve Confirms Steam Machines are not NVIDIA Exclusive

Subject: General Tech, Graphics Cards, Systems | October 10, 2013 - 06:59 PM |
Tagged: amd, nvidia, Intel, Steam Machine

This should be little to no surprise to viewers of our podcast, as this story was discussed there, but Valve has confirmed that AMD and Intel graphics are compatible with Steam Machines. Doug Lombardi of Valve commented by email to what appears to be multiple outlets, including Forbes and Maximum PC.

steam-os-machines.png

Last week, we posted some technical specs of our first wave of Steam Machine prototypes. Although the graphics hardware that we've selected for the first wave of prototypes is a variety of NVIDIA cards, that is not an indication that Steam Machines are NVIDIA-only. In 2014, there will be Steam Machines commercially available with graphics hardware made by AMD, NVIDIA, and Intel. Valve has worked closely together with all three of these companies on optimizing their hardware for SteamOS, and will continue to do so into the foreseeable future.

Ryan and the rest of the podcast crew found the whole situation "odd". They could not understand why AMD referred the press to Doug Lombardi rather than circulating a canned statement from him. It was also strange that NVIDIA had an exclusive on the beta program when AMD-based machines will be commercially available in 2014.

As I said in the initial post: for what seems to be a deliberate non-commitment to any specific hardware spec, why limit the prototypes to a single graphics vendor?

Source: Maximum PC

New improved pricing from NVIDIA, plus a mysterious new GTX 760

Subject: General Tech | October 9, 2013 - 01:05 PM |
Tagged: amd, nvidia, price cuts, gtx 760

DigiTimes has broken the news that NVIDIA will be cutting prices on many of its cards in reaction to AMD's new GPU family. Currently the lowest-priced GTX 660 is $150 after MIR, and a GTX 650 Ti Boost can be had for $110. We don't have any information on how the GTX 760 will be updated; faster clocks are likely, but we can hope for something a little more adventurous. The GTX 760 sells for $250 right now, but you should hold off to see what the new model offers and what it does to the price of the current one.

14-121-775-TS.jpg

"Nvidia has offered price cuts for several of its graphics cards including the GTX 660 and GTX 650Ti Boost and will soon release an upgraded GTX 760, targeting AMD's Radeon R9 280X, according to sources from graphics card makers."

Here is some more Tech News from around the web:

Tech Talk

Source: DigiTimes

Verizon Cloud Powered By AMD Seamicro SM15000 Servers

Subject: General Tech | October 9, 2013 - 01:59 AM |
Tagged: verizon, sm15000, seamicro, enterprise, cloud, amd

Verizon is launching its new public cloud later this year and offering both compute and storage to enterprise customers. The Verizon Cloud Compute and Cloud Storage products will be hosted on AMD SM15000 servers in a multi-rack cluster built by Verizon-owned Terremark.

The new cloud service is powered by AMD SM15000 servers, which are 10U units with multiple microservers interconnected by SeaMicro's Freedom Fabric technology. Verizon is aiming the new cloud service at business users, from SMBs to Fortune 500 companies. Specifically, Verizon is hoping to entice IT departments to offload internal company applications and services to the Verizon Cloud Compute and Cloud Storage offerings. Using the SeaMicro-built (now AMD-owned) servers, Verizon has a high-density, efficient infrastructure that can allegedly provision and deploy virtual machines with finely tuned specifications, with performance and reliability backed by enterprise-level SLAs, all while remaining compliant with PCI and DoD data security standards.

Verizon Cloud_SM15000 Servers in Racks.jpg

Verizon will be launching Cloud Compute and Cloud Storage as a public beta in Q4 of this year. Further, the company will be taking on beta customers later this month.

The AMD SM15000 is a high-performance, high-density server, and an interesting product for cloud services thanks to its networking interconnects and power-efficient compute cards. Verizon and AMD have been working together on the cloud platform for the better part of two years, using servers that were first launched to the public a little more than a year ago.

Verizon Cloud_SM15000 Storage.jpg

The SM15000 is a 10U server that is actually made up of multiple compute cards. AMD and SeaMicro also pack the server with a low-latency, high-bandwidth networking fabric that connects the microservers to each other, multiple power supplies, and the ability to attach a shared pool of storage that every compute card can access. Each compute card uses a small, cut-down motherboard with a processor, RAM, and networking I/O. The processors can be AMD Opteron, Intel Xeon, or Intel Atom chips, with the future possibility of an AMD APU-based server (which is the configuration option I am most interested in). In this case, Verizon appears to be using AMD Opteron chips, which means each compute card has a single eight-core AMD Opteron CPU clocked at up to 2.8GHz, 64GB of system memory, and a 10Gbps networking link.

Verizon Cloud_AMD SM15000 Servers_Cloud Computing.jpg

In total, each SM15000 server is powered by 512 CPU cores, up to 4TB of RAM, ten 1,100W PSUs, and support for more than 5PB of shared storage. Considering Verizon is using multiple racks filled with SM15000 servers, there is a lot of hardware on offer to support a multitude of mission critical applications backed by SLAs (service level agreements, which are basically guarantees of uptime and/or performance).
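Those chassis totals are easy to sanity-check against the per-card specs above. The quick arithmetic below assumes the eight-core Opteron compute cards Verizon appears to be using; it is a back-of-the-envelope check, not an official AMD breakdown.

```typescript
// Back-of-the-envelope check of the SM15000 chassis totals, assuming
// the eight-core Opteron compute cards described above.
const coresPerCard = 8;   // eight-core Opteron per compute card
const ramPerCardGB = 64;  // 64 GB of system memory per card
const totalCores = 512;   // quoted total per SM15000 chassis

const cardsPerChassis = totalCores / coresPerCard;          // 512 / 8 = 64 cards
const totalRamTB = (cardsPerChassis * ramPerCardGB) / 1024; // 64 * 64 GB = 4 TB

console.log(`${cardsPerChassis} compute cards, ${totalRamTB} TB of RAM per chassis`);
```

The 64-card figure lines up neatly with the quoted maximum of 4TB of RAM.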

I'm looking forward to seeing what sorts of things customers end up doing with the Verizon Cloud and how the SeaMicro-built servers hold up once the service is fully ramped up and utilized.

You can find more information on the SM15000 servers in my article on their initial debut. Also, the full press release on the Verizon Cloud is below.

Source: AMD

C'mon man! Do you really think your technically inclined customers aren't going to catch on?

Subject: General Tech | October 8, 2013 - 07:59 PM |
Tagged: amd, dirty pool, linux, open source

Rebranding existing hardware with fancy new model numbers is nothing new to either AMD or NVIDIA. Review sites catch on immediately, and while we like to see optimizations applied to mature chips, we much prefer brand new hardware. When you release the Q-1200000 as the X-2200000, the best you will get from review sites is a recommendation to go with the model that has the lower price, not the higher model number. Most enthusiasts have caught on to the fact that they are the same product; we do not like it, but we have come to accept it as common business practice. Certain unintentional consequences of a design we can forgive, as long as you admit the issue and work to rectify it; only the intentional limitations are at issue in this post.

This is where the problem comes in: it seems that this intentional misleading of customers has created a mindset in which it is believed to be OK to intentionally impose performance limitations on products. Somehow companies have convinced themselves that a customer base which routinely tears apart hardware, uses scanners to see inside actual components, and writes its own OSes from scratch (or at least updates the kernel) will somehow not be able to discover these limitations. Thus we have yesterday's revelation that NVIDIA has artificially limited the number of screens usable in Linux to three; not because of performance or stability issues, but simply because more might provide Linux users with a better experience than Windows users get.

Apparently AMD is not to be outdone when it comes to this kind of dirty pool; in their case it is audio that is limited rather than video. If you are so uncouth as to use a DVI-to-HDMI adapter which did not come with your shiny new Radeon, then you are not allowed to have audio transferred over that HDMI cable on either Windows or Linux. There is a... shall we say Apple-like hardware check, which Phoronix reported on, that disables the audio output unless a specific EEPROM is detected on your adapter. NVIDIA doesn't sell monitors, nor is AMD really in the dongle business, but apparently both are willing to police the components you choose to use. The motive behind AMD's decision is even less clear than NVIDIA's, since as far as we know Monster Cable does not put the magic EEPROM in its adapters.

If your customers are as talented as your engineers, you might not want to listen to the salespeople who tell you that partnerships with other companies are worth more than the customers you antagonize by trying to pull a fast one on them. We will find out, and it will come back to haunt you. Unless, of course, the payoffs from your partnerships are worth more than what you make selling to customers, in which case you might as well just ignore us.

addamswheel7b1.jpg

"For some AMD Radeon graphics cards when using the Catalyst driver, the HDMI audio support isn't enabled unless using the simple DVI to HDMI adapter included with the graphics card itself... If you use another DVI-to-HDMI adapter, it won't work with Catalyst. AMD intentionally implemented checks within their closed-source driver to prevent other adapters from being used, even though they will work just fine."

Here is some more Tech News from around the web:

Tech Talk

Source: Phoronix

Hello again Tahiti

Subject: Graphics Cards | October 8, 2013 - 05:30 PM |
Tagged: amd, GCN, graphics core next, hd 7790, hd 7870 ghz edition, hd 7970 ghz edition, r7 260x, r9 270x, r9 280x, radeon, ASUS R9 280X DirectCU II TOP

AMD's rebranded cards have arrived, though with a few improvements to the GCN architecture that we already know so well. This particular release seems to be focused on price for performance, which is certainly not a bad thing in these uncertain times. The 7970 GHz Edition launched at $500, while the new R9 280X will arrive at $300; a rather significant price drop, and one which we hope doesn't damage AMD's bottom line too badly in the coming quarters. [H]ard|OCP chose the ASUS R9 280X DirectCU II TOP to test, with a custom PCB from ASUS and a mild overclock which helped it pull ahead of the 7970 GHz Edition. AMD has tended to lead off new graphics card families with the low-end and midrange models, so we have yet to see the top-of-the-line R9 290X in action.

Ryan's review, including frame pacing, can be found right here.

H_1381134778LXpy2pFUWk_1_6_l.jpg

"We evaluate the new ASUS R9 280X DirectCU II TOP video card and compare it to GeForce GTX 770 and Radeon HD 7970 GHz Edition. We will find out which video card provides the best value and performance in the $300 price segment. Does it provide better performance a than its "competition" in the ~$400 price range?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP
Author:
Manufacturer: AMD

The AMD Radeon R9 280X

Today marks the first step in a revamp of AMD's entire Radeon discrete graphics product stack. Between now and the end of 2013, AMD will completely cycle out the Radeon HD 7000 cards and replace them with a new branding scheme. The "HD" branding is on its way out, and that makes sense: consumers have moved on to UHD and WQXGA display standards, so HD is no longer extraordinary.

But I want to be very clear and upfront with you: today is not the day that you'll learn about the new Hawaii GPU that AMD promised would dominate the performance-per-dollar metrics for enthusiasts. The Radeon R9 290X will come a little bit down the road. Instead, today's review will look at three other Radeon products: the R9 280X, the R9 270X and the R7 260X. None of these products is really "new", though; they must instead be considered rebrands or repositionings.

There are some changes to discuss with each of these products, including clock speeds and, more importantly, pricing. Some are specific to a certain model; others are more universal (such as updated Eyefinity display support).

Let’s start with the R9 280X.

 

AMD Radeon R9 280X – Tahiti aging gracefully

The AMD Radeon R9 280X is built from the exact same ASIC (chip) that powers the previous Radeon HD 7970 GHz Edition, with a few modest changes. At reference rates the core clock speed of the R9 280X is actually about 50 MHz lower than the Radeon HD 7970 GHz Edition's: the R9 280X GPU hits a 1.0 GHz rate while the previous model reached 1.05 GHz. Not much of a change, but an interesting decision for sure.

Because of that speed difference, the R9 280X has a lower peak compute capability of 4.1 TFLOPS, compared to the 4.3 TFLOPS of the 7970 GHz Edition. The memory clock speed is the same (6.0 Gbps) and the board power is the same, with a typical peak of 250 watts.

280x-1.jpg

Everything else remains the same as you know it from the HD 7970 cards. There are 2048 stream processors in the Tahiti implementation of AMD's GCN (Graphics Core Next) architecture, along with 128 texture units and 32 ROPs, all fed by a 384-bit GDDR5 memory bus running at an effective 6.0 Gbps. Yep, still with a 3GB frame buffer.
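Both the compute and bandwidth figures follow directly from those specs. The sketch below applies the standard peak-throughput formulas (two FLOPs per stream processor per clock for fused multiply-add, and bus width times effective data rate for memory bandwidth); it is a sanity check of the published numbers, not AMD's own math.

```typescript
// Peak throughput sanity check for the R9 280X using standard formulas:
//   FLOPS     = stream processors * 2 ops/clock (FMA) * core clock
//   bandwidth = (bus width in bits / 8) * effective data rate
const streamProcessors = 2048;
const coreClockGHz = 1.0;  // R9 280X reference clock
const busWidthBits = 384;
const dataRateGbps = 6.0;

const peakTFLOPS = (streamProcessors * 2 * coreClockGHz) / 1000;
const bandwidthGBs = (busWidthBits / 8) * dataRateGbps;

console.log(`${peakTFLOPS.toFixed(1)} TFLOPS`); // ~4.1 TFLOPS, matching the spec sheet
console.log(`${bandwidthGBs} GB/s`);            // 288 GB/s of memory bandwidth
```

Run the same formula with the 7970 GHz Edition's 1.05 GHz clock instead and it produces the 4.3 TFLOPS quoted earlier.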

Continue reading our review of the AMD Radeon R9 280X, R9 270X and R7 260X!!!