Subject: Graphics Cards | December 31, 2013 - 03:13 PM | Jeremy Hellstrom
Tagged: amd, asus, ASUS ROG, MATRIX PLATINUM R9 280X
There is a lot to love about the ASUS ROG Matrix Platinum R9 280X, from its looks to its faster, quieter performance compared to the reference model. Still, [H]ard|OCP doesn't recommend running the fan at 100% except for brief moments when you need to dump heat quickly, as it is quite loud; 50-60% will keep the card below 90C and is not very noticeable. It overclocks very well: they hit 1270MHz core and 6.6GHz VRAM easily, saw performance improvements to match, and occasionally pushed past the GTX 770. The one major disappointment is the price, currently over 50% higher than the base 280X MSRP, so you might want to hold off for a while before purchasing this card.
"Today we take ASUS' elite R9 280X product, the ASUS ROG MATRIX PLATINUM R9 280X video card, and put its advanced power phasing and board components to the test as we harshly overclock it. We compare it to an ASUS GeForce GTX 770 DirectCU II. Then we will put it head to head against the overclocked SAPPHIRE TOXIC R9 280X."
Here are some more Graphics Card articles from around the web:
- Powercolor Radeon R9-290X Review @ Bjorn3D
- Gigabyte R9 270X Windforce OC @ Kitguru
- Sapphire R9 290X Tri-X OC @ Kitguru
- The $109 Console-killer GPU: AMD's Radeon R7 260 Graphics Card Reviewed @ Techgage
- AMD Catalyst 2013 Linux Graphics Driver Year-In-Review @ Phoronix
- Asus GTX 780 Ti DirectCU II OC @ Kitguru
- Zotac GeForce GTX 780 Ti AMP! Review @ Bjorn3D
Subject: General Tech | December 31, 2013 - 03:00 PM | Jeremy Hellstrom
Tagged: 2013, amd, nvidia, Intel, arm, qualcomm
2013 has been an incredible year, and looking at The Inquirer's retrospective on this year's releases, it is hard to believe they all took place in 12 short months. Haswell and Richland were the only two traditional CPU architecture updates for high-powered desktops, which stymied the enthusiasm of some gamers, but the real star of 2013 was low-powered silicon. ARM has always held strong in this market and celebrated several major releases, such as 28nm dual-core Cortex A15s and Qualcomm raising the bar on mobile graphics with the dual-core and quad-core Snapdragon 400 chips, but it lost market share to three newcomers to the low-power market. NVIDIA's quad-core Tegra 4 SoC arrived with decent performance and graphics improvements over the previous generation and enabled the release of the Shield, which has helped NVIDIA become more than a GPU company while also dipping its toes into the HPC market. AMD announced the G series of SoCs for industrial applications, with a TDP in the neighbourhood of 6W, as well as Temash, which will power next-generation tablets and hybrid mobile devices, but it was really Intel that shone brightest at the low end. Bay Trail has completely reversed the perception of Atom from a product that is not really good at anything to an impressive low-powered chip that provides strong performance for small mobile devices and might find itself a role in the server room as well. That only scratches the top layer of silicon; click over for more of the year in review.
"While Intel and AMD battled out their ongoing war, Nvidia took the stage to announce its latest Tegra 4 system on a chip (SoC), a quad-core chip with a significant graphics boost. The firm did its best to play down the fact that its Tegra 4 has the same CPU core count as its previous-generation Tegra 3, and instead it focused on GPU performance, an area where the Tegra 3 was starting to look dated against newer chips from rivals such as Samsung."
Here is some more Tech News from around the web:
- Goodbye, consoles @ The Tech Report
- Intel Releases A Boatload Of Haswell Documentation @ Phoronix
- Apple’s Newest Mac Pro Costs Less than DIY PC Build… Thanks to AMD @ Techgage
- Open-Source AMD Radeon Graphics Had A Wonderful 2013 @ Phoronix
- BlackBerry CEO John Chen: Y'know what, we'll go back to enterprise stuff @ The Register
- Joke no more: Comedy virty currency Dogecoin gets real in big Xmas heist @ The Register
- Rollei S-50 Wi-Fi Nitro Circus Live Limited Edition Action Camera @ NikKTech
- How To Fix Whatsapp Chat History Corruption @ Tech ARP
- KitGuru Annual Hardware Awards 2013
- Pittasoft BlackVue DR550GW-2CH Car Dashcam @ NikKTech
Subject: Graphics Cards | December 30, 2013 - 02:32 PM | Ryan Shrout
Tagged: amd, Mantle, hawaii, BF4, battlefield 4
If you have been following the mess that has been Battlefield 4 since its release, what with the crashing on both PCs and consoles, you know that EA and DICE have decided that fixing the broken game is the number one priority. Gee, thanks.
While they work on that, though, there is another casualty of development besides the pending DLC packs: AMD's Mantle version of the game. If you remember way back in September of 2013, along with the announcement of AMD's Hawaii GPUs, AMD and DICE promised a Mantle version of BF4 as a free update in December. If you are counting, that is just one day away from being late.
Today we got this official statement from AMD:
After much consideration, the decision was made to delay the Mantle patch for Battlefield 4. AMD continues to support DICE on the public introduction of Mantle, and we are tremendously excited about the coming release for Battlefield 4! We are now targeting a January release and will have more information to share in the New Year.
Well, it's not a surprise but it sure is a bummer. One of the killer new features for AMD's GPUs was supposed to be the ability to use this new low-level API to enhance performance for PC games. As Josh stated in our initial article on the subject, "It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software. This lowers memory and CPU usage, it decreases latency, and because there are fewer “moving parts” AMD claims that they can do 9x the draw calls with Mantle as compared to DirectX. This is a significant boost in overall efficiency."
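The "9x the draw calls" claim is easiest to picture as a fixed CPU time budget divided by per-call overhead. Here is a toy model of that arithmetic; the frame budget and overhead numbers are purely illustrative assumptions, not measured figures:

```python
# Toy model: draw calls per frame = CPU time budget / per-call overhead.
# All numbers below are illustrative assumptions, not measurements.
frame_budget_ms = 16.7             # one frame at ~60 FPS
d3d_call_overhead_ms = 0.01        # hypothetical per-call API/driver cost
mantle_call_overhead_ms = d3d_call_overhead_ms / 9  # AMD's claimed 9x reduction

d3d_calls = frame_budget_ms / d3d_call_overhead_ms
mantle_calls = frame_budget_ms / mantle_call_overhead_ms
print(f"DirectX: ~{d3d_calls:.0f} calls/frame, Mantle: ~{mantle_calls:.0f}")
```

Holding the frame budget fixed, a call that costs one ninth as much CPU time allows nine times as many calls, which is the form AMD's claim takes.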
It seems that buyers of AMD's R9 series of graphics cards will need to wait at least another month to see what the promise of Mantle is really all about. Will the wait be worth it?
Subject: General Tech, Graphics Cards | December 19, 2013 - 07:23 PM | Scott Michaud
Tagged: amd, firepro, SPECviewperf
SPECviewperf 12 is a benchmark for workstation components that attempts to measure performance expected for professional applications. It is basically synthetic but is designed to quantify how your system can handle Maya, for instance. AMD provided us with a press deck of some benchmarks they ran leading to many strong FirePro results in the entry to mid-range levels.
They did not include high-end results, which they justify with the quote, "[The] Vast majority of CAD and CAE users purchase entry and mid-range Professional graphics boards". That slide, itself, was titled "Focusing Where It Matters Most". I will accept that, but I assume they ran those benchmarks as well and wonder whether it would have been better to simply include them.
The cards AMD compared are:
- Quadro 410 ($105) vs FirePro V3900 ($105)
- Quadro K600 ($160) vs FirePro V4900 ($150)
- Quadro K2000 ($425) vs FirePro W5000 ($425)
- Quadro K4000 ($763) vs FirePro W7000 ($750)
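As a quick sanity check on how closely matched those pairings are, here is a small Python sketch (prices taken from the list above) computing the price gap in each tier:

```python
# Quadro vs FirePro prices from the pairings above.
pairs = {
    "Quadro 410 vs FirePro V3900": (105, 105),
    "Quadro K600 vs FirePro V4900": (160, 150),
    "Quadro K2000 vs FirePro W5000": (425, 425),
    "Quadro K4000 vs FirePro W7000": (763, 750),
}

for name, (quadro, firepro) in pairs.items():
    gap = abs(quadro - firepro)                  # dollar difference
    pct = 100.0 * gap / max(quadro, firepro)     # relative difference
    print(f"{name}: ${gap} apart ({pct:.1f}%)")
```

The largest gap is around 6% (the K600 vs V4900 pair), so the pairings really are about as equally priced as possible.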
In each of the pairings, matched as equally in price as possible, AMD held a decent lead throughout the eight tests included in SPECviewperf 12. You could see the performance gap leveling off as prices began to rise, however.
Obviously a single benchmark suite should be just one data point when comparing two products. Still, these are pretty healthy performance numbers.
The First Custom R9 290X
It has been a crazy launch for the AMD Radeon R9 series of graphics cards. When we first reviewed both the R9 290X and the R9 290, we came away very impressed with the GPU and the performance it provided. Our reviews of both products resulted in awards of the Gold class. The 290X represented a new class of single-GPU performance, while the R9 290 nearly matched it at a crazy $399 price tag.
But there were issues. Big, glaring issues. Clock speeds varied enormously depending on the game: we saw a GPU rated at "up to 1000 MHz" running at 899 MHz in Skyrim and 821 MHz in Bioshock Infinite. Those are significant deltas in clock rate, and they matched the deltas in performance almost perfectly. The speeds also changed based on whether the graphics card was "hot" or "cold" - had it been warmed up and active for 10 minutes prior to testing? If so, performance was measurably lower than with a "cold" GPU that had just started.
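To put numbers on those clock deltas, a quick back-of-the-envelope calculation using the figures above:

```python
# Rated boost clock vs observed clocks from the games mentioned above.
rated_mhz = 1000  # "up to 1000 MHz"
observed = {"Skyrim": 899, "Bioshock Infinite": 821}

for game, mhz in observed.items():
    deficit = 100.0 * (rated_mhz - mhz) / rated_mhz
    print(f"{game}: {mhz} MHz, {deficit:.1f}% below the rated clock")
```

That works out to roughly a 10% deficit in Skyrim and nearly 18% in Bioshock Infinite, which lines up with the performance deltas we measured.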
That issue was not necessarily a deal killer; rather, it just made us rethink how we test GPUs. The much bigger deal was that many people were seeing lower performance on retail-purchased cards than on the reference cards sent to the press for reviews. In our testing in November, the retail card we purchased, which used the exact same cooler as the reference model, ran 6.5% slower than we expected.
The obvious hope was that retail cards with custom PCBs and coolers from AMD partners would be released and somehow fix this whole dilemma. Today we see if that was correct.
Subject: General Tech | December 18, 2013 - 01:21 PM | Jeremy Hellstrom
Tagged: amd, 2014, beema, Kabini, FS1B
DigiTimes has put together an overview of AMD's plans to take back market share over the coming year, though of course AMD is neither confirming nor denying the accuracy of the report. First up is the 28nm Kaveri family in January, with availability planned to follow quickly. Beema, which will be based on the Puma+ architecture, should arrive in the summer, but there is also a Kabini-based series for the new FS1B socket, which will get a limited release in some areas. FS1B will be used for upcoming Sempron and Athlon models designed for low-power usage scenarios, though don't expect to see AM3+ or FM2 disappear any time soon. You will have to wait until 2015 before Carrizo and Nolan make an appearance.
"AMD has been enhancing the marketing of its processors in DIY markets and aims to increase its global DIY market share from about 30% currently to 40%, and to reach a DIY market share above 45% in China in particular, at the end of 2014, according to Taiwan-based motherboard makers."
Here is some more Tech News from around the web:
- NVIDIA Optimus On Ubuntu 13.10 Linux vs. Windows 8.1 @ Phoronix
- Microsoft admits: We WON'T pick the next Steve Ballmer this year @ The Register
- Drawers full of different chargers? The IEC has a one-plug-to-rule-them-all @ The Register
- Bogus Firefox add-on FORCES WITLESS USERS to join vuln-hunting party @ The Register
- Samsung, TSMC to share Apple 14/16nm chip orders @ DigiTimes
- Half of IT pros plan to use Windows XP after support ends in 2014 @ The Inquirer
- Porsche Proves MPAA Wrong By Letting You Download a Car @ [H]ard|OCP
Subject: General Tech, Processors | December 14, 2013 - 01:55 AM | Scott Michaud
Tagged: opteron, arm, amd
The ARMv8 architecture extends the hardware platform to 64-bit. This increase is mostly useful for addressing massive amounts of memory but can also have other performance benefits. I think many of us remember the excitement prior to x86-64 and the subsequent let-down when we realized that, for most applications, typical vector extensions kept up in performance, especially considering the compatibility issues of the day. It needed to happen, but it was a hard sell until... it was just ubiquitous.
AMD has not kept it secret that they are developing 64-bit ARM processors for data centers but, until this week, further details were scarce. Under the codename "Seattle," these processors will be available in four- and eight-core versions. The Opteron branding will expand beyond x86 to include these new processors. The pitch to enterprises is simple: want both ARM and x86? Why bother with two vendors?
Seattle will also support up to 128GB of ECC memory and 10 Gigabit Ethernet for dense, but power efficient, compute clusters. It will be manufactured on the 28nm process.
The majority of AMD's blog post proclaimed its commitment to software support, and it is definitely true that they hold a very high status in both the Linux Foundation and the Apache Software Foundation. ARMv8 has been supported in Linux since kernel 3.7.
Seattle is expected to launch in the second half of 2014.
Subject: Graphics Cards | December 12, 2013 - 05:20 PM | Ryan Shrout
Tagged: video, amd, radeon, hawaii, r9 290, R9 290X, bitcoin, litecoin, mining
If you already listened to this week's PC Perspective Podcast, then feel free to disregard this post. For the rest of you - subscribe to our damned weekly podcast already, would you?!?
In any event, I thought it might be interesting to extract this six-minute discussion from last night's live-streamed podcast about how the emergence of Litecoin mining operations is driving up GPU prices, particularly for the compute-capable R9 290 and R9 290X Hawaii-based cards from AMD.
Check out these prices currently on Amazon!
- Radeon R9 290X - $725+
- Radeon R9 290 - $499+
- Radeon R9 280X - $429+
- GeForce GTX 770 - $409+
- GeForce GTX 780 - $509+
- GeForce GTX 780 Ti - $699+
The price of the GTX 770 is a bit higher than it should be, while the GTX 780 and GTX 780 Ti are priced in the same range they have been for the last month or so. The same cannot be said for the AMD cards listed here - the R9 280X is selling for $130 more than its expected MSRP at a minimum, and you'll see quite a few going for much higher on Amazon, eBay (thanks TR) and elsewhere. The Radeon R9 290 has an MSRP of $399 from AMD, but the lowest price we found on Amazon was $499; everything on Newegg.com shows the same price, but sold out. The R9 290X is even more obnoxiously priced, when you can find one.
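Measured against MSRP, the markups look like this. The R9 290's $399 MSRP is stated above; the R9 280X MSRP is inferred from the "$130 over MSRP" figure (i.e., $429 - $130 = $299):

```python
# Street price vs MSRP for the AMD cards discussed above.
# R9 280X MSRP inferred from the "$130 over MSRP" figure; R9 290 MSRP stated.
cards = {
    "Radeon R9 280X": (429, 299),
    "Radeon R9 290": (499, 399),
}

for name, (street, msrp) in cards.items():
    markup = 100.0 * (street - msrp) / msrp
    print(f"{name}: ${street} vs ${msrp} MSRP (+{markup:.0f}%)")
```

That is roughly a 43% premium on the R9 280X and 25% on the R9 290 at the lowest prices we could find - before you even get to the R9 290X.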
Do you have any thoughts on this? Do you think Litecoin mining is really causing these price inflations and what does that mean for AMD, NVIDIA and the gamer?
Subject: General Tech | December 12, 2013 - 01:35 AM | Ken Addison
Tagged: z87, xfire, video, shield, R9 290X, podcast, pcper, nvidia, litecoin, grid, frame rating, eyefinity, crossfire, amd
PC Perspective Podcast #280 - 12/12/2013
Join us this week as we discuss the NVIDIA GRID Beta, R9 290X Custom Coolers, 2TB SSDs and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano and Scott Michaud
Subject: General Tech, Graphics Cards | December 11, 2013 - 05:58 PM | Scott Michaud
Tagged: frame pacing, frame rating, amd, southern islands, 4k, eyefinity, crossfire, microstutter
The frame pacing issue has been covered on our website for almost a year now. It stems from the original "microstutter" problem, which dates back more than a year before we could quantify it. We use the term "Frame Rating" to denote the testing methodology we now use for our GPU tests.
AMD fared worse in these tests than NVIDIA (although even NVIDIA had some problems in certain configurations). AMD has dedicated a lot of man-hours to the problem, resulting in driver updates for certain scenarios. Crossfire while utilizing Eyefinity or 4K MST was one area they did not focus on, however. The issue has been addressed in Hawaii, and AMD asserted that previous cards would get a software fix soon.
The good news is that we have just received word from AMD that they plan on releasing a beta driver for Southern Islands and earlier GPUs (AMD believes it should work for anything that's not "legacy"). As usual, until it ships anything could change, but it looks good for now.
The beta "frame pacing" driver addressing Crossfire with 4K and Eyefinity, for supported HD-series and Southern Islands-based Rx cards, is expected to be public sometime in January.