Subject: General Tech | October 9, 2013 - 01:05 PM | Jeremy Hellstrom
Tagged: amd, nvidia, price cuts, gtx 760
DigiTimes has broken the news that NVIDIA will be cutting prices on many of its cards in reaction to AMD's new GPU family. Currently the lowest priced GTX 660 is $150 after MIR and a GTX 650 Ti Boost can be had for $110. We don't have any information on how they will be updating the GTX 760; faster clocks are likely, but we can hope for something a little more adventurous. The GTX 760 can be had for $250 right now, but you should hold off to see what the new model offers and what it does to the price of the current model.
"Nvidia has offered price cuts for several of its graphics cards including the GTX 660 and GTX 650Ti Boost and will soon release an upgraded GTX 760, targeting AMD's Radeon R9 280X, according to sources from graphics card makers."
Here is some more Tech News from around the web:
- Selling phones? Nonsense, BlackBerry is all about THE CLOUD @ The Register
- Pokewithastick, an Arduino programmable web-logger/server @ Hack a Day
- Android adware that MUST NOT BE NAMED threatens MILLIONS @ The Register
- Dangerous VBulletin Exploit In the Wild @ Slashdot
- Mad Catz is taking pre-orders for the Mojo games console @ The Inquirer
- Sony Alpha NEX-5T Review @ TechReviewSource
Subject: General Tech | October 9, 2013 - 01:59 AM | Tim Verry
Tagged: verizon, sm15000, seamicro, enterprise, cloud, amd
Verizon is launching its new public cloud later this year and offering both compute and storage to enterprise customers. The Verizon Cloud Compute and Cloud Storage products will be hosted on AMD SM15000 servers in a multi-rack cluster built by Verizon-owned Terremark.
The new cloud service is powered by AMD SM15000 servers, which are 10U units housing multiple microservers interconnected by SeaMicro's Freedom Fabric technology. Verizon is aiming the new cloud service at business users, from SMBs to Fortune 500 companies. Specifically, Verizon is hoping to entice IT departments to offload internal company applications and services to the Verizon Cloud Compute and Cloud Storage offerings. Using the SeaMicro-built (SeaMicro is now owned by AMD) servers, Verizon has a high density, efficient infrastructure that can allegedly provision and deploy virtual machines with finely tuned specifications, with performance and reliability backed by enterprise-level SLAs, while remaining compliant with PCI and DoD standards for data security.
Verizon will be launching Cloud Compute and Cloud Storage as a public beta in Q4 of this year. Further, the company will be taking on beta customers later this month.
The AMD SM15000 is a high performance, high density server, and is an interesting product for cloud services thanks to the networking interconnects and power efficient compute cards. Verizon and AMD have been working together for the better part of two years on the cloud platform using the servers which were first launched to the public a little more than a year ago.
The SM15000 is a 10U server that is actually made up of multiple compute cards. AMD and SeaMicro also pack the server with a low latency, high bandwidth networking fabric that connects the compute cards to each other, multiple power supplies, and the ability to connect to a shared pool of storage that each compute card can access. Each compute card uses a small, cut-down motherboard with processor, RAM, and networking IO. The processors can be AMD Opteron, Intel Xeon, or Intel Atom, with the future possibility of an AMD APU-based server (the configuration option that I am most interested in). In this case, Verizon appears to be using AMD Opteron chips, which means each compute card has a single eight core AMD Opteron CPU clocked at up to 2.8GHz, 64GB of system memory, and a 10Gbps networking link.
In total, each SM15000 server is powered by 512 CPU cores, up to 4TB of RAM, ten 1,100W PSUs, and support for more than 5PB of shared storage. Considering Verizon is using multiple racks filled with SM15000 servers, there is a lot of hardware on offer to support a multitude of mission critical applications backed by SLAs (service level agreements, which are basically guarantees of uptime and/or performance).
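The per-server totals quoted above follow directly from the per-card specs. A quick sanity-check sketch, assuming a fully populated chassis of 64 compute cards (a count implied by the per-card and per-server figures rather than stated directly in the article):

```python
# Sanity-check of the SM15000 totals, assuming 64 compute cards
# per chassis (an assumption; the article gives only per-card
# and per-server numbers).
cards = 64
cores_per_card = 8        # one eight-core Opteron per card
ram_per_card_gb = 64      # 64GB of system memory per card

total_cores = cards * cores_per_card            # 512 CPU cores
total_ram_tb = cards * ram_per_card_gb / 1024   # 4.0 TB of RAM

print(total_cores, total_ram_tb)  # 512 4.0
```

With those assumptions, the 512-core and 4TB figures line up exactly.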
I'm looking forward to seeing what sorts of things customers end up doing with the Verizon Cloud and how the SeaMicro-built servers hold up once the service is fully ramped up and utilized.
You can find more information on the SM15000 servers in my article on their initial debut. Also, the full press release on the Verizon Cloud is below.
Subject: General Tech | October 8, 2013 - 07:59 PM | Jeremy Hellstrom
Tagged: amd, dirty pool, linux, open source
Rebranding existing hardware with fancy new model numbers is nothing new to either AMD or NVIDIA. Review sites catch on immediately, and while we like to see optimizations applied to mature chips, we much prefer brand new hardware. When you release the Q-1200000 as the X-2200000, the best you will get from review sites is a recommendation to go with the model that has the lower price, not the higher model number. Most enthusiasts have caught on to the fact that they are the same product; we do not like it, but we have come to accept it as common business practice. Unintentional consequences of a design we can forgive, as long as you admit the issue and work to rectify it; only intentional limitations are at issue in this post.
This is where the problem comes in: this intentional misleading of customers seems to have created a mindset in which it is considered OK to intentionally impose performance limitations on products. Somehow companies have convinced themselves that a customer base who routinely tears apart hardware, uses scanners to see inside actual components, and writes their own OSes from scratch (or at least updates the kernel) will somehow not discover these limitations. Thus we have yesterday's revelation that NVIDIA has artificially limited the number of screens usable in Linux to three; not because of performance or stability issues but simply because more might provide Linux users with a better experience than Windows users.
Apparently AMD is not to be outdone when it comes to this kind of dirty pool; in their case it is audio that is limited rather than video. If you are so uncouth as to use a DVI to HDMI adapter which did not come with your shiny new Radeon, then you are not allowed to have audio transferred over that HDMI cable on either Windows or Linux. There is a, shall we say, Apple-like hardware check, which Phoronix reported on, that will disable the audio output unless a specific EEPROM on your adapter is detected. NVIDIA doesn't sell monitors, nor is AMD really in the dongle business, but apparently they are willing to police the components you choose to use. The motive behind AMD's decision is not as clear as NVIDIA's, for as far as we know Monster Cable does not have the magic EEPROM in their adapters.
If your customers are as talented as your engineers, you might not want to listen to the salespeople who tell you that partnerships with other companies matter more than the goodwill you lose by trying to pull a fast one on your customers. We will find out, and it will come back to haunt you. Unless, of course, the payoffs from your partnerships are worth more than what you make selling to customers, in which case you might as well just ignore us.
"For some AMD Radeon graphics cards when using the Catalyst driver, the HDMI audio support isn't enabled unless using the simple DVI to HDMI adapter included with the graphics card itself... If you use another DVI-to-HDMI adapter, it won't work with Catalyst. AMD intentionally implemented checks within their closed-source driver to prevent other adapters from being used, even though they will work just fine."
Here is some more Tech News from around the web:
- The TR Podcast 143: Steamy news from Hawaii
- 4GB DDR3 contract prices to rise by double digit percentage in October @ DigiTimes
- Microsoft unveils enterprise cloud solutions @ DigiTimes
- AMD's SeaMicro: 'We're the mystery vendor behind Verizon's cloud' @ The Register
- Intel HD Sandy/Ivy Bridge Linux Graphics Performance On Ubuntu 13.10 @ Phoronix
- Club3D Radeon R9 280X royalKing 3GB Graphics Card Giveaway @ eTeknix
Subject: Graphics Cards | October 8, 2013 - 05:30 PM | Jeremy Hellstrom
Tagged: amd, GCN, graphics core next, hd 7790, hd 7870 ghz edition, hd 7970 ghz edition, r7 260x, r9 270x, r9 280x, radeon, ASUS R9 280X DirectCU II TOP
AMD's rebranded cards have arrived, though with a few improvements to the GCN architecture that we already know so well. This particular release seems to be focused on price for performance, which is certainly not a bad thing in these uncertain times. The 7970 GHz Edition launched at $500, while the new R9 280X will arrive at $300; a rather significant price drop, and one which we hope doesn't damage AMD's bottom line too badly in the coming quarters. [H]ard|OCP chose the ASUS R9 280X DirectCU II TOP to test, with a custom PCB from ASUS and a mild overclock which helped it pull ahead of the 7970 GHz Edition. AMD has tended towards leading off new graphics card families with the low and midrange models, and we have yet to see the top of the line R9 290X in action.
Ryan's review, including frame pacing, can be found right here.
"We evaluate the new ASUS R9 280X DirectCU II TOP video card and compare it to GeForce GTX 770 and Radeon HD 7970 GHz Edition. We will find out which video card provides the best value and performance in the $300 price segment. Does it provide better performance a than its "competition" in the ~$400 price range?"
Here are some more Graphics Card articles from around the web:
- AMD's Radeon R7 260X @ The Tech Report
- AMD's Radeon R9 280X and 270X @ The Tech Report
- AMD Radeon R9 270X & R7 260X Review @ Neoseeker
- AMD Radeon R7 260X 2GB @ eTeknix
- AMD Radeon R9 270X 2GB @ eTeknix
- AMD Radeon R7 260X, R9 270X and R9 280X @ Hardware.info
- Sapphire AMD Radeon R9 280X Vapor-X OC 3GB @ eTeknix
- Radeon R9 270X and R7 260X @ TechSpot
- AMD Radeon R9 270X & R7 260X @ Legion Hardware
- AMD Radeon R9 270X & R7 260X Review @ Hardware Canucks
- AMD Radeon R9 280X 3GB Review @ Hardware Canucks
- Sapphire R9 280X Vapor X @ Kitguru
- AMD R7 260X @ Kitguru
- AMD R9 270X @ Kitguru
The AMD Radeon R9 280X
Today marks the first step in an introduction of an entire AMD Radeon discrete graphics product stack revamp. Between now and the end of 2013, AMD will completely cycle out Radeon HD 7000 cards and replace them with a new branding scheme. The "HD" branding is on its way out and it makes sense. Consumers have moved on to UHD and WQXGA display standards; HD is no longer extraordinary.
But I want to be very clear and upfront with you: today is not the day that you’ll learn about the new Hawaii GPU that AMD promised would dominate the performance per dollar metrics for enthusiasts. The Radeon R9 290X will be a little bit down the road. Instead, today’s review will look at three other Radeon products: the R9 280X, the R9 270X and the R7 260X. None of these products are really “new”, though, and instead must be considered rebrands or repositionings.
There are some changes to discuss with each of these products, including clock speeds and more importantly, pricing. Some are specific to a certain model, others are more universal (such as updated Eyefinity display support).
Let’s start with the R9 280X.
AMD Radeon R9 280X – Tahiti aging gracefully
The AMD Radeon R9 280X is built from the exact same ASIC (chip) that powers the previous Radeon HD 7970 GHz Edition, with a few modest changes. The core clock speed of the R9 280X is actually a little bit lower at reference rates than the Radeon HD 7970 GHz Edition, by about 50 MHz. The R9 280X GPU will hit a 1.0 GHz rate while the previous model was reaching 1.05 GHz; not much of a change, but an interesting decision for sure.
Because of that speed difference the R9 280X has a lower peak compute capability of 4.1 TFLOPS compared to the 4.3 TFLOPS of the 7970 GHz. The memory clock speed is the same (6.0 Gbps) and the board power is the same, with a typical peak of 250 watts.
Everything else remains the same as you know it on the HD 7970 cards. There are 2048 stream processors in the Tahiti version of AMD's GCN (Graphics Core Next) architecture, 128 texture units, and 32 ROPs, all fed by a 384-bit GDDR5 memory bus running at 6.0 Gbps. Yep, still with a 3GB frame buffer.
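The compute and bandwidth figures in the last few paragraphs can be reproduced from the specs listed here. A quick sketch, assuming the standard 2 FLOPs per stream processor per clock (one fused multiply-add) used for GCN peak-rate math:

```python
# Peak single-precision compute: stream processors x 2 FLOPs (FMA) x clock.
sps = 2048
flops_280x = sps * 2 * 1.00e9   # R9 280X at 1.0 GHz       -> ~4.1 TFLOPS
flops_7970 = sps * 2 * 1.05e9   # HD 7970 GHz Ed. at 1.05  -> ~4.3 TFLOPS

# Peak memory bandwidth: bus width in bytes x effective data rate.
bandwidth = (384 / 8) * 6.0e9   # 384-bit bus at 6.0 Gbps  -> 288 GB/s

print(flops_280x / 1e12, flops_7970 / 1e12, bandwidth / 1e9)
```

The 50 MHz clock difference is exactly where the 4.1 vs. 4.3 TFLOPS gap comes from; memory bandwidth is unchanged between the two cards.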
Subject: General Tech, Graphics Cards | October 2, 2013 - 09:03 PM | Scott Michaud
Tagged: amd, linux
Last week, NVIDIA published documentation for Nouveau to heal wounds with the open source community. AMD had a better reputation and intends to maintain it. On Tuesday, Alex Deucher published 9 PDF documents, 1178 pages of register and acceleration documentation along with 18 pages of HDA GPU audio programming details, compared to the 42 pages NVIDIA published.
Sure, a page to page comparison is meaningless, but it is clear AMD did not want to be outdone. This is especially true when you consider that some of these documents date back to early 2009. Still, reactionary or not, the open source community should accept the assistance with open arms... and open x86s?
I should note that these documents do not cover Volcanic Islands; they are for everything between Evergreen and Sea Islands.
Subject: General Tech | September 26, 2013 - 02:41 PM | Ken Addison
Tagged: video, valve, SteamOS, Steam Box, steam, razer, R9 290X, R9, R7, podcast, Naga, corsair, amd
PC Perspective Podcast #270 - 09/26/2013
Join us this week as we discuss AMD's new GPU lineup, SteamOS, the Steam Box, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Jeremy Hellstrom, Josh Walrath, Allyn Malventano, and Morry Teitelman
Week in Review:
0:43:15 Happy 10th anniversary Hammer!
Hardware/Software Picks of the Week:
1-888-38-PCPER or email@example.com
Subject: General Tech, Graphics Cards | September 26, 2013 - 03:35 AM | Scott Michaud
Tagged: frame rating, frame pacing, amd
Scott Wasson of The Tech Report published an interview with Raja Koduri, head of Graphics Hardware and Software Development at AMD, just a few hours ago. Part of the interview discussed the frame pacing issues that we, as well as The Tech Report, have been covering over the last year. In short, the news seems good for owners of Radeon graphics cards, future and even current.
The "Hawaii" powered Radeon R9 290 and R9 290X graphics cards are expected to handle CrossFire pacing acceptably at launch. Clearly, if there is ever a time to fix the problem, it would be in new hardware. Still, this is good news for interested customers; if all goes to plan, you are likely going to have a good experience out of the box.
Current owners of GCN-based video cards, along with potential buyers of the R9 280X and the lower upcoming cards, will apparently need to wait for AMD to release a driver to fix these issues. However, this driver is not far off: Koduri, though it is unclear whether he was speaking on or off the record, intends for an autumn release. This driver is expected to cover frame pacing issues for CrossFire, Eyefinity, and 4K.
Koduri does believe the CrossFire issues were unfortunate and expresses a desire to fix the issue for his customers.
Keep checking PC Perspective for more information as it comes out!
Editor's Note: I just spoke with Raja Koduri as well and he basically reiterated everything that Scott noted in his story on The Tech Report as well. The upcoming 290X will have frame pacing at Eyefinity and 4K resolution at launch while the cards below that in the R9 series, and users of Radeon HD 7000 cards (and likely beyond) will need some more time before the driver is ready. I'll be able to talk quite a bit more about the changes to BOTH architectures very shortly so stay tuned for that.
AMD is up to some interesting things. Today at AMD's tech day, we discovered a veritable cornucopia of information. Some of it was pretty interesting (audio), some was discussed ad nauseam (audio, audio, and more audio), and one thing in particular was quite shocking. Mantle was the final, big subject that AMD was willing to discuss. Many assumed that the R9 290X would be the primary focus of this talk, but in fact it was very much an aside that was not discussed at any length. AMD basically said, "Yes, the card exists, and it has some new features that we are not going to really go over at this time." Mantle, as a technology, is at the same time a logical step as well as an unforeseen one. So what does Mantle mean for users?
Looking back through the mists of time, when dinosaurs roamed the earth, the individual 3D chip makers all implemented low level APIs that allowed programmers to get closer to the silicon than other APIs such as Direct3D and OpenGL would allow. This was a very efficient way of doing things in terms of graphics performance. It was an inefficient way to do things for a developer writing code for multiple APIs. Microsoft and the Khronos Group had solutions in Direct3D and OpenGL that allowed programmers to develop against a single high level API (comparatively) simply. Developers could write code targeting D3D/OpenGL, and the graphics chip manufacturers would write drivers that interfaced with Direct3D/OpenGL, which then went through a hardware abstraction layer to communicate with the hardware. The onus was then on the graphics vendors to create solid, high performance drivers that would work well with DirectX or OpenGL, so the game developer would not have to code directly for a multitude of current and older graphics cards.
Subject: General Tech, Graphics Cards, Shows and Expos | September 25, 2013 - 05:23 PM | Scott Michaud
Tagged: radeon, R9 290X, R9, R7, GPU14, amd
The next generation of AMD graphics processors are being announced this afternoon. They carefully mentioned this event is not a launch. We do not yet know, although I hope we will learn today, when you can give them your money.
When you can, you will have five products to choose from:
- R7 250
- R7 260X
- R9 270X
- R9 280X
- R9 290X
AMD only provides 3DMark Fire Strike scores for performance. I assume they are using the final score, and not the "graphics score", although they were unclear.
The R7 250 is the low end card of the group with 1GB of GDDR5. Performance, according to 3DMark scores (>2000 on Fire Strike), is expected to be about two-thirds of what an NVIDIA GeForce GTX 650 Ti can deliver. Then again, that card retails for ~$130 USD. The R7 250 has an expected retail value of less than $89 USD. This is a pretty decent offering which can probably play Battlefield 3 at 1080p if you play with the graphics quality settings somewhere around "medium". This is just my estimate, of course.
The R7 260X is the next level up. The RAM has been doubled over the R7 250 to 2GB of GDDR5, and its 3DMark score almost doubled too (>3700 on Fire Strike). This puts it almost smack dab atop the Radeon HD 6970. The R7 260X is about $20-30 USD cheaper than the HD 6970, with an expected retail price of $139. A good price cut while keeping up to date on architecture.
The R9 270X is the low end of the high end parts. With 2GB of GDDR5 and a 3DMark Fire Strike score of >5500, this is aimed at the GeForce GTX 670. The R9 270X will retail for ~$199 USD, which is about $120 USD cheaper than NVIDIA's offering.
The R9 280X should be pretty close to the 7970 GHz Edition. It will be about $90 cheaper, with an expected retail value of $299. It also has a bump in frame buffer over the lower-tier R9 270X, containing 3GB of GDDR5.
Not a lot is known about the top end R9 290X, except that it will be the first gaming GPU to cross 5 TeraFLOPS of compute performance. For comparison, the GeForce Titan has a theoretical maximum of 4.5 TeraFLOPS.
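That Titan figure can be reproduced with the usual peak-compute formula. A hedged sketch, assuming the GeForce Titan's published 2688 CUDA cores and 837 MHz base clock (those two numbers are from NVIDIA's spec sheet, not from this article):

```python
# Peak single-precision compute = cores x 2 FLOPs (FMA) x clock.
# The 2688-core / 837 MHz Titan figures are assumptions from
# NVIDIA's published specs, not from the article above.
titan_tflops = 2688 * 2 * 0.837e9 / 1e12
print(round(titan_tflops, 1))  # ~4.5, matching the figure quoted
```

By the same formula, crossing 5 TFLOPS implies the R9 290X packs meaningfully more shader throughput than any single-GPU card before it.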
If you are interested in the R9 290X and Battlefield 4, you will be able to pre-order a limited edition package containing both products. Pre-orders open "from select partners" October 3rd. For how much? Who knows.
We will keep you informed as we are informed. Also, the announcement is still going on, so tune in!
Get notified when we go live!