AMD Planning APU13 Developer Summit In San Jose, California

Subject: General Tech | May 1, 2013 - 07:08 AM |
Tagged: hUMA, hsa, apu13, APU, amd, AFDS

AMD announced its third annual Developer Summit last week. Dubbed “APU13,” the upcoming summit is the AMD equivalent to NVIDIA’s GTC and is an annual event that brings together industry analysts, researchers, programmers, academics, and software/hardware companies pursuing heterogeneous computing technologies.

In previous years, the AMD Developer Summit has been the launchpad for C++ AMP and the HSA Foundation. This year’s summit will continue that push toward heterogeneous computing while also looking back over the past year and providing updates on how far the various HSA member companies have progressed toward their goal of standards-based heterogeneous computing.

AMD Logo.png

In addition to keynote speeches from AMD and some of its partners, expect a great many presentations and workshops from researchers and programmers who are working on new programming models and hardware solutions that use CPUs and GPUs efficiently. More information on hUMA is one likely topic, for example. Discussion of upcoming hardware, process nodes, and products may also be on the table insofar as it relates to the HSA theme. Considering the summit is called “APU13,” I also expect AMD to reveal additional details on the company’s Kaveri APU as well as offer a look at its future product road map.

AMD is currently asking for presentation proposals from researchers in a number of HSA and technology-related fields including heterogeneous computing, cloud computing, web technologies, programming languages, gaming and graphics technologies, and software security. The lineup of presenters for the summit is still being worked out, and proposal papers will be accepted until May 10th with the winners being notified over the summer.

In all, AMD’s APU13 should be an exciting and intellectually stimulating event. Last year’s AMD Fusion Developer Summit (AFDS) was an interesting and fun event to cover, and I hope that APU13 will keep up the momentum and interest in heterogeneous computing that AFDS started.

Source: AMD
Author:
Manufacturer: Various

Our 4K Testing Methods

You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office.  Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160.  For those unfamiliar with the upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays.  Oh, and this TV only cost us $1300.

seiki5.jpg

In that short preview we validated that both NVIDIA and AMD current generation graphics cards can output to this TV at 3840x2160 over an HDMI cable.  You might be surprised to find that HDMI 1.4 can support 4K resolutions, but it can do so only at 30 Hz (4K TVs that accept a 60 Hz input most likely won't arrive until 2014), half the 60 Hz refresh rate of most TVs and monitors.  That doesn't mean we are limited to 30 FPS of performance though, far from it.  As you'll see in our testing on the coming pages, we were able to push out much higher frame rates using some very high end graphics solutions.

I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the monitor itself to others.  Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome.  The only downside I have found so far in my time with the TV as a gaming monitor involves the 30 Hz refresh rate with Vsync disabled.  Because the panel refreshes half as often as a 60 Hz panel, all else being equal, twice as many "frames" of the game are pushed to the monitor during each refresh cycle.  This means that the horizontal tearing associated with disabling Vsync will likely be more apparent than it would be otherwise.
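To put rough numbers on that, here is a minimal back-of-the-envelope sketch in C. The 80 FPS figure is an arbitrary placeholder rather than a measured result, and the "tear lines" estimate simply assumes each extra frame delivered within a refresh interval shows up as roughly one tear:

```c
#include <stdio.h>

/* Sketch: at a fixed game frame rate, a slower panel receives more distinct
 * frames per refresh interval, and with Vsync off each extra frame can
 * appear as a tear line. All numbers are illustrative assumptions. */
int main(void) {
    const double frame_rate = 80.0;                /* assumed game output, FPS  */
    const double refresh_rates[] = { 60.0, 30.0 }; /* 60 Hz monitor vs. this TV */

    for (int i = 0; i < 2; ++i) {
        double frames_per_refresh = frame_rate / refresh_rates[i];
        double tears = frames_per_refresh > 1.0 ? frames_per_refresh - 1.0 : 0.0;
        printf("%2.0f Hz panel: %.2f frames per refresh -> up to ~%.1f tear lines\n",
               refresh_rates[i], frames_per_refresh, tears);
    }
    return 0;
}
```

At the same frame rate the 30 Hz panel sees twice as many frames per refresh as a 60 Hz panel would, which is why tearing is more noticeable here with Vsync off.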

4ksizes.png

Image from Digital Trends

I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.

Continue reading our results from testing 4K 3840x2160 gaming on high end graphics cards!!

AMD Releases FX CPU Refreshes

Subject: Processors | April 30, 2013 - 02:04 PM |
Tagged: amd, FX, vishera, bulldozer, FX-6350, FX-4350, FX-6300, FX-4300, 32 nm, SOI, Beloved


Today AMD released two new processors that address the AM3+ market.  The FX-6350 and FX-4350 are refreshes of the hex and quad core lineups, respectively.  The FX-8350 is still the fastest of the breed, and there is no update for that particular model yet.  This is not necessarily a bad thing, but there are those of us who are still awaiting the arrival of the rumored “Centurion”.

These parts are 125 watt TDP units, which are up from their 95 watt predecessors.  The FX-6350 runs at 3.9 GHz with a 4.2 GHz boost clock.  This is up 300 MHz stock and 100 MHz boost from the previous 95 watt FX-6300.  The FX-4350 runs at 3.9 GHz with a 4.3 GHz boost clock.  This is 100 MHz stock and 300 MHz boost above that of the FX-4300.  What is of greater interest here is that the L3 cache goes from 4 MB on the 4300 to 8 MB on the 4350.  This little fact looks to be the reason why the FX-4350 is now a 125 watt TDP part.

fx_logo.jpg

It has been some two years since AMD started shipping 32 nm PD-SOI/HKMG products, and it certainly seems as though spinning off GLOBALFOUNDRIES has essentially ended AMD’s old practice of rolling new features into a process node over its lifetime.  As many may remember, AMD was somewhat famous for injecting new process technology into current nodes to improve performance, yields, and power characteristics in “baby steps” fashion rather than leaving a node as-is and making one huge jump to the next.  Vishera has been out for some 7 months now, and we have not really seen any major improvement in terms of performance or power characteristics.  I am sure that yields and bins have improved, but the bottom line is that this is only a minor refresh, and AMD raised TDPs to 125 watts for these particular parts.

The FX-6350 is again a three module part containing six cores.  Each module features 2 MB of L2 cache for a total of 6 MB of L2, and the entire chip features 8 MB of L3 cache.  The FX-4350 is a two module chip with four cores.  The modules again feature the same 2 MB of L2 cache, for a total of 4 MB on the chip, along with the above-mentioned 8 MB of L3 cache, double what the FX-4300 featured.

Perhaps soon we will see updates on FM2 with the Richland series of desktop processors, but this refresh is all AMD has at the moment.  These are nice upgrades to the line.  The FX-6350 does cost the same as the FX-6300, but the thinking behind that is that the 6300 is more “energy efficient”.  We have seen in the past that AMD (and Intel for that matter) puts a premium on lower wattage parts in a lineup.  The FX-4350 is $10 more expensive than the 4300.  It looks as though the FX-6350 is in stock at multiple outlets, but the 4350 has yet to show up.

These will fit in any modern AM3+ motherboard with the latest BIOS installed.  While not an incredibly exciting release from AMD, it at least shows that they continue to address their primary markets.  AMD is in a very interesting place, and it looks like Rory Read is busy getting the house in order.  Now we just have to see if they can pare back their cost structure enough to make the company more financially stable.  Indications are good so far, but AMD has a long way to go.  But hey, at least according to AMD the FX series is beloved!

Source: AMD

hUMA has come with a weapon to slay the memory latency dragon

Subject: General Tech | April 30, 2013 - 01:23 PM |
Tagged: Steamroller, piledriver, Kaveri, Kabini, hUMA, hsa, GCN, bulldozer, APU, amd

AMD may have united the GPU and CPU into the APU, but one hurdle had remained until now: the non-uniformity of memory access between the two processors.  Today we learned about one of the first successful HSA projects, called Heterogeneous Uniform Memory Access, aka hUMA, which will appear in the upcoming Kaveri chip family.  This new technology will allow the on-die CPU and GPU to access the same memory pool, both physical and virtual, and any data passed between the two processors will remain coherent.  As The Tech Report mentions in their overview, hUMA will not provide as much of a benefit to discrete GPUs; while they will be able to share address space, the widely differing clock speeds of GDDR5 and DDR3 prevent unification to the level of an APU.

Make sure to read Josh's take as well so you can keep up with him on the Podcast.

huma_02.jpg

"At the Fusion Developer Summit last June, AMD CTO Mark Papermaster teased Kaveri, AMD's next-generation APU due later this year. Among other things, Papermaster revealed that Kaveri will be based on the Steamroller architecture and that it will be the first AMD APU with fully shared memory.

Last week, AMD shed some more light on Kaveri's uniform memory architecture, which now has a snazzy marketing name: heterogeneous uniform memory access, or hUMA for short."

Here is some more Tech News from around the web:

Tech Talk

Author:
Subject: Processors
Manufacturer: AMD

heterogeneous Uniform Memory Access


Several years back we first heard about AMD’s plans to create a uniform memory architecture that would allow the CPU to share address space with the GPU.  The promise here is a very efficient architecture that provides excellent performance in a mixed environment of serial and parallel programming loads.  When GPU computing came on the scene it was full of great promise.  The idea of a heavily parallel processing unit that could accelerate both integer and floating point workloads looked like a potential gold mine for a wide variety of applications.  Alas, judging by the results so far, the technology has not lived up to those expectations.  There are many problems with combining serial and parallel workloads between CPUs and GPUs, and a lot of them come down to basic programming issues and the communication of data between two separate memory pools.

huma_01.jpg

CPUs and GPUs do not share a common memory pool.  Instead of simply using pointers to tell each unit where data is stored in memory, the current implementation of GPU computing requires the CPU to copy the contents of that address into the GPU’s standalone memory pool.  This is time consuming and wastes cycles.  It also increases programming complexity, since code has to be written around these transfers.  Typically only very advanced programmers with a lot of expertise in this area can structure their operations effectively around such limitations.  The lack of unified memory between CPU and GPU has hindered the adoption of the technology for a lot of applications that could otherwise use the massively parallel processing capabilities of a GPU.
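To make that copy-based model concrete, below is a minimal host-side sketch written against the standard OpenCL C API, the usual route to AMD GPUs today. It is only an illustration of the round trip described above: the buffer size and contents are arbitrary, error handling is omitted, and the kernel launch is left out entirely. Under a shared, coherent address space of the kind hUMA promises, the CPU could simply hand the GPU a pointer and the explicit write/read copies below would not be needed.

```c
#include <CL/cl.h>
#include <stdlib.h>

int main(void) {
    const size_t N = 1024;
    float *host_data = malloc(N * sizeof(float));
    for (size_t i = 0; i < N; ++i) host_data[i] = (float)i;

    /* Boilerplate: grab the first GPU device and set up a context/queue. */
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* The GPU has its own memory pool, so the host cannot just pass a
     * pointer: it must allocate a separate buffer object... */
    cl_mem dev_buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                    N * sizeof(float), NULL, &err);

    /* ...explicitly copy the data into GPU memory... */
    clEnqueueWriteBuffer(queue, dev_buf, CL_TRUE, 0, N * sizeof(float),
                         host_data, 0, NULL, NULL);

    /* (a kernel would be built and launched against dev_buf here) */

    /* ...and copy the results back before the CPU can see them. */
    clEnqueueReadBuffer(queue, dev_buf, CL_TRUE, 0, N * sizeof(float),
                        host_data, 0, NULL, NULL);

    clReleaseMemObject(dev_buf);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    free(host_data);
    return 0;
}
```

The two enqueue calls are exactly the copies a unified memory architecture aims to eliminate, along with the duplicated allocation they require.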

The idea of GPU compute has been around for a (comparatively) long time.  I still remember getting very excited about the idea of using a high end video card alongside a card like the old GeForce 6600 GT acting as a coprocessor to handle heavy math operations and PhysX.  That particular plan never quite came to fruition, but the idea was planted years before the actual introduction of modern DX9/10/11 hardware.  It seems as if this step with hUMA could finally provide the impetus for a wide range of applications that actively utilize the GPU portion of an APU.

Click here to continue reading about AMD's hUMA architecture.

MSI's FM2-A85XA-G65, an inexpensive start to a great HTPC

Subject: Motherboards | April 26, 2013 - 03:32 PM |
Tagged: msi, FM2-A85XA-G65, socket fm2, amd

Compared to most current Intel boards, MSI's FM2-A85XA-G65 is very clean looking, especially around the CPU socket.  This particular board benefits from LucidLogix's Virtu MVP in addition to the hybrid Crossfire present on FM2 boards, and it will handle proper Crossfire as well since it has a pair of PCIe x16 slots.  You could build an agile HTPC with this motherboard, with 8 channel sound available and D-Sub, DVI-D, HDMI and DisplayPort outputs all present.  The board is currently available for $110; a full review can be found over at X-bit Labs.

xb_board1_big.jpg

"MSI mainboards look great, have very convenient layout and use only high-quality components. These boards are energy-efficient, fast and work well with default settings. However, there are quite a few things in their BIOS that could use some extra work and the boards do not keep the power-saving technologies up and running during overclocking. Everything we have just said is true for the main hero of our today’s review."

Here are some more Motherboard articles from around the web:

Motherboards

Source: X-bit Labs

Podcast #248 - AMD HD 7990, CrossFire Frame Rating improvements, 4K TVs and more!

Subject: General Tech | April 25, 2013 - 02:13 PM |
Tagged: video, Xe, seiki, raidr, podcast, nvidia, Never Settle, hd 7990, GA-Z77N-WiFi, frame rating, crossfire, amd, 4k

PC Perspective Podcast #248 - 04/25/2013

Join us this week as we discuss AMD HD 7990, CrossFire Frame Rating improvements, 4K TVs and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Program length: 1:16:34

  1. 0:01:20 Update on Indiegogo: You guys rock!
  2. Week in Review:
  3. News items of interest:
    1. Ryan: Seiki 4K TV - more support from enthusiasts! and wet puppies
    2. Jeremy: This is not news people, NFC is a feature but if you are paranoid you can check with this app
    3. Allyn: Put your bits on an ioSafe. Put your 'papers' here.
    4. Tim: BT Sync, it's in public alpha now so go grab it!
  4. 1-888-38-PCPER or podcast@pcper.com
  5. Closing/outro


Author:
Subject: Processors
Manufacturer: AMD

Jaguar Hits the Embedded Space


It has long been known that AMD has simply not had a lot of luck going head to head against Intel in the processor market.  Some years back they worked on differentiating themselves, and in so doing have been able to stay afloat through hard times.  The acquisitions that AMD has made in the past decade are starting to make a difference in the company, especially now that the PC market that they have relied upon for revenue and growth opportunities is suddenly contracting.  This of course puts a cramp in AMD’s style, but with better than expected results in their previous quarter, things are not nearly as dim as some would expect.

Q1 was still pretty harsh for AMD, but they maintained their market share in both processors and graphics chips.  One area that looks to get a boost is embedded processors.  AMD has offered embedded processors for some time, but with the way the market is heading they look to really ramp up their offerings to fit a variety of applications and SKUs.  The last generation of G-series processors was based upon the Bobcat/Brazos platform.  This two chip design (APU and media hub) came in a variety of wattages with good performance from both the CPU and GPU portions.  While the setup looked pretty good on paper, it was not widely implemented because of the added complexity of a two chip design and its thermals relative to performance.

soc_arch.jpg

AMD looks to address these problems with one of its first true SOC designs.  The latest G-series SOCs are based upon the brand new Jaguar core from AMD.  Jaguar is the successor to the successful Bobcat core, a low power core that shipped in dual core parts with integrated DX11/VLIW5 based graphics.  Jaguar improves CPU performance over Bobcat by 6% to 13% when clocked identically, and because it is manufactured on a smaller process node it is able to do so without using as much power.  Jaguar can come in both dual core and quad core packages.  The graphics portion is based on the latest GCN architecture.

Read the rest of the AMD G-Series release by clicking here!

XFX Announces Malta Dual-GPU Radeon HD 7990

Subject: Graphics Cards | April 24, 2013 - 10:14 PM |
Tagged: xfx, malta, hd 7990, GCN, dual gpu, amd

Now that AMD’s dual-GPU Malta graphics card is official, cards from Add-In Board (AIB) partners are starting to roll in. One of the first to be announced is the XFX Radeon HD 7990. The XFX card is based on the reference AMD design, which includes two Radeon HD 7970 GPUs in a Crossfire configuration.

The two GPUs can boost up to 1GHz clock speeds and feature a total of 4096 stream processors, 256 texture units, 64 ROPs, and 8.6 billion transistors. The card also includes 3GB of GDDR5 memory per GPU running off a 384-bit bus. It supports AMD’s Eyefinity technology and offers up one DL-DVI and four mini-DisplayPort video outputs.

XFX Radeon HD 7990.jpg

The XFX HD 7990 uses the reference AMD heatsink as well, which includes a massive aluminum fin stack with five copper heatpipes that run the length of the heatsink and directly touch the two 7970 GPUs. Three shrouded fans, in turn, keep the heatsink cool.

The dual-GPU monster is eligible for AMD’s Never Settle bundle which includes eight free games. With purchase of the HD 7990 (from any eligible AIB), you get free key codes for the following games:

  • Bioshock Infinite
  • Crysis 3
  • Deus Ex: Human Revolution
  • Far Cry 3
  • Far Cry 3: Blood Dragon
  • Hitman: Absolution
  • Sleeping Dogs
  • Tomb Raider

The XFX press release further assures gamers that the card can, in fact, play Crysis 3 at maximum settings at a resolution of 3840 x 2160. The company did not mention pricing, however.

For those interested in AMD’s new Malta GPU, check out our review as well as how the card performs when paired with a prototype AMD driver that seeks to address some of the frame rating issues exhibited by AMD's Crossfire multi-GPU solution.

Source: XFX

PowerColor Launches HD 7990 V2 Based On Official AMD Malta GPU

Subject: Graphics Cards | April 24, 2013 - 07:09 PM |
Tagged: amd, powercolor, hd 7990, malta, dual gpu, crossfire

PowerColor (a TUL Corporation brand) has launched its dual-GPU Radeon HD 7990 V2 graphics card, and this time the card is based on the (recently reviewed) official dual-GPU AMD “Malta” design announced at the Game Developers Conference (GDC). The new HD 7990 V2 features two AMD HD 7970 GPUs in a Crossfire configuration. That means the Malta-based card offers a total of 4096 stream processors and a rated 8.2 TFLOPS of peak performance.
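As a quick sanity check on that peak figure, the conventional arithmetic is shaders × FLOPs per clock × clock speed, where two floating point operations per stream processor per clock (one fused multiply-add) is the usual assumption for GCN rather than something stated in the press release. A tiny sketch:

```c
#include <stdio.h>

/* Peak single-precision throughput = shaders * FLOPs-per-clock * clock.
 * Two FLOPs per shader per clock (one FMA) is the usual GCN assumption. */
int main(void) {
    const double shaders         = 4096.0; /* total stream processors      */
    const double flops_per_clock = 2.0;    /* assumed: one FMA per clock   */
    const double boost_clock_ghz = 1.0;    /* rated boost clock in GHz     */

    double tflops = shaders * flops_per_clock * boost_clock_ghz / 1000.0;
    printf("Peak: %.3f TFLOPS\n", tflops); /* ~8.192, quoted as 8.2 TFLOPS */
    return 0;
}
```

At the 950MHz base clock the same math works out to roughly 7.8 TFLOPS, so the 8.2 TFLOPS rating clearly assumes the 1GHz boost clock.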

PowerColor HD 7990 V2 Malta.jpg

The PowerColor HD 7990 V2 joins the company’s existing Devil 13 and HD 7990 graphics cards. The new card sports a triple-fan shrouded heatsink that is somewhat tamer-looking than the custom Devil 13 cooler. Other hardware includes 3GB of GDDR5 RAM per GPU clocked at 1500MHz and running on a 384-bit bus (again, per GPU) for a total of 6GB. Both GPUs have clock speeds of 950MHz base and up to 1GHz boost.

PowerColor HD 7990 V2 Malta GPU.jpg

The new card has a single DL-DVI and four mini-DisplayPort video outputs. PowerColor is touting the card’s Eyefinity prowess as well as its ZeroCore support for reducing power usage when idle. The board has a TDP of 750W and is powered by two PCI-E power connections. In all, the HD 7990 V2 graphics card measures 305 x 110 x 38mm. While PowerColor has not released pricing or availability, expect the card to be available soon at around the same price as (or a bit lower than) its existing (custom) HD 7990.

The full press release can be found here.

Source: PowerColor