Manufacturer: AMD

Frame Pacing for CrossFire

When the Radeon HD 7990 launched in April of this year, we had some not-so-great things to say about it.  The HD 7990 depends on CrossFire technology to function, and because our months of testing with our Frame Rating methodology had found quite a few problems with AMD's CrossFire technology, the HD 7990 "had a hard time justifying its $1000 price tag."  Right at launch, AMD gave us a taste of a new driver that they were hoping would fix the frame pacing and frame time variance issues seen in CrossFire, and it looked positive.  The problem was that the driver wouldn't be available until summer.

As I said then: "But until that driver is perfected, is bug free and is presented to buyers as a made-for-primetime solution, I just cannot recommend an investment this large on the Radeon HD 7990."

Today could be a very big day for AMD - the release of the promised driver update that enables frame pacing on Radeon HD 7000-series CrossFire configurations, including the Radeon HD 7990 graphics card with its pair of Tahiti GPUs. 

It's not perfect yet and there are some things to keep an eye on.  For example, this fix will not address Eyefinity configurations, which include multi-panel solutions and the new 4K 60 Hz displays that require a tiled display configuration.  We also found some issues with CrossFire configurations of more than two GPUs that we'll address on a later page.


New Driver Details

Starting with 13.8 and moving forward, AMD plans to have the frame pacing fix integrated into all future drivers.  The software team has implemented a software-based frame pacing algorithm that monitors how long each GPU takes to render a frame and how long each frame is displayed on the screen, then inserts delays into the present calls when necessary to prevent two frames from being rendered too close together.  This balances or "paces" the frame output to the screen without lowering the overall frame rate.  The driver monitors this constantly in real-time and minor changes are made on a regular basis to keep the GPUs in check. 
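
AMD hasn't shared the actual code, of course, but the idea is simple enough to sketch.  The snippet below is purely illustrative - the class, the names, and the smoothing constant are my own guesses at the general shape, not AMD's implementation:

    // Illustrative sketch only: AMD has not published its frame pacing code,
    // so this class, its names, and the smoothing constant are assumptions.
    #include <chrono>
    #include <thread>

    using Clock = std::chrono::steady_clock;

    class FramePacer {
        Clock::time_point last_present_ = Clock::now();
        double avg_frame_ms_ = 16.7;               // running frame time estimate
        static constexpr double kSmoothing = 0.1;  // weight of the newest sample

    public:
        // Called right before each present: if this frame arrives well ahead
        // of the recent average, sleep just long enough to even out delivery.
        void pace() {
            auto now = Clock::now();
            double elapsed_ms =
                std::chrono::duration<double, std::milli>(now - last_present_).count();

            if (elapsed_ms < avg_frame_ms_) {
                std::this_thread::sleep_for(std::chrono::duration<double, std::milli>(
                    avg_frame_ms_ - elapsed_ms));
                elapsed_ms = avg_frame_ms_;        // frame was paced to the average
            }

            // Update the average so pacing adapts without capping the frame rate.
            avg_frame_ms_ = (1.0 - kSmoothing) * avg_frame_ms_ + kSmoothing * elapsed_ms;
            last_present_ = Clock::now();
        }
    };

Because a delay is only inserted ahead of frames that would have arrived early, the long-run average frame rate is essentially untouched; only the spacing between frames changes.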

7990card.JPG

As you would expect, this algorithm is completely game engine independent, and games should be oblivious to all that is going on (other than the feedback from present calls, etc.). 

This fix is generic, meaning it is not tied to any specific game and doesn't require per-game profiles, as CrossFire itself sometimes does.  The current implementation will work with DX10 and DX11 based titles only, with DX9 support being added in a later release.  AMD claims this was simply a development time issue, and since most modern GPU-bound titles are DX10/11 based they focused on that area first.  In phase 2 of the frame pacing implementation AMD will add DX9 and OpenGL support.  AMD wouldn't give me a timeline for that, though, so we'll have to see how much internal pressure AMD keeps up to get the job done.

Continue reading our story of the new AMD Catalyst 13.8 beta driver with frame pacing support!!

AMD Catalyst for Windows 8.1 Release Preview

Subject: General Tech, Graphics Cards | June 26, 2013 - 03:05 PM |
Tagged: Windows 8.1, radeon, amd

You should be extremely cautious about upgrading to the Windows 8.1 Release Preview. Each of your apps, and all of your desktop software, must be reinstalled when the final code is released later this year; it is a detour to a dead end.

AMD-Catalyst.jpg

If curiosity overwhelms reason, and your graphics card was made by AMD within the last few years, you will at least have a driver available.

It would be a good idea to refer to the AMD article to ensure that your specific model is supported. The driver covers many graphics cards from the Radeon, APU, and FirePro product categories. Many models are certified against Windows Display Driver Model version 1.3 (WDDM 1.3), although some pre-Graphics Core Next models (as far as I can tell) are left behind on WDDM 1.2, which was introduced with Windows 8.

WDDM 1.3, new to Windows 8.1, allows for a few new developer features:

  • Enumerating GPU engine capabilities
    • A DirectX interface to query card capabilities
    • Helps schedule work, especially in "Linked Display Adapter" (LDA, think CrossFire) configurations.
  • Using cross-adapter resources in a hybrid system
    • For systems with both discrete and embedded GPUs, such as an APU and a Radeon Card
    • Allows for automatic loading of both GPUs simultaneously for appropriate applications
    • Cool, but I've already loaded separate OpenCL kernels simultaneously on both a GTX 670 and an Intel HD 4000 in Windows 7 (see the sketch after this list). Admittedly, it would be nice if it were officially supported functionality.
  • Choice in YUV format ranges, studio or extended, for Microsoft Media Foundation (MMF)
    • Formerly, MMF video processing assumed 16-235 black-white, which professional studios use.
    • Webcams and point-and-shoot cameras use 0-255 (a full byte), which is now processed properly.
  • Wireless Display (Miracast)
    • Attach your PC wirelessly to a Miracast display adapter attached to a TV by HDMI, or whatever.
  • Multiplane overlay support
    • Allows the GPU to perform complicated compositing, such as video over a website.
    • If it's the same as proposed for Linux, it will also allow translucency.
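
As a side note on that hybrid GPU point: running compute on both a discrete and an integrated GPU is already doable with plain OpenCL today.  Here is a minimal sketch (error handling omitted, array sizes arbitrary) of the enumeration step; everything here is standard OpenCL host API rather than anything WDDM 1.3 specific:

    // Minimal OpenCL host sketch: list all devices across all platforms,
    // e.g. a discrete GeForce and an integrated Intel GPU side by side.
    #include <cstdio>
    #include <CL/cl.h>

    int main() {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);
        if (num_platforms > 8) num_platforms = 8;

        for (cl_uint p = 0; p < num_platforms; ++p) {
            cl_device_id devices[8];
            cl_uint num_devices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);
            if (num_devices > 8) num_devices = 8;

            for (cl_uint d = 0; d < num_devices; ++d) {
                char name[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
                std::printf("platform %u, device %u: %s\n", p, d, name);
                // Each device can get its own context and command queue, so
                // kernels can run on both GPUs at the same time.
            }
        }
        return 0;
    }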

AMD's advertised enhancements for Windows 8.1 are:

  • Wireless Display
    • Already covered, a part of WDDM 1.3.
  • 48 Hz Dynamic Refresh rates for Video Playback
    • Not a clue, unless it is part of an upcoming HFR format for consumers.
  • Aggressive V-sync interrupt optimization
    • Again, not a clue, but it sounds like something to be Frame Rated?
  • Skype/Lync video conferencing acceleration
    • ... just when we move to a dual-machine Skype broadcasting setup...
  • DX 11.1 feature: Tiled Resources
    • Some sources claim DirectX 11.2???
    • Will render the details most apparent to a player at higher quality.

If you own Windows 8, you can check out 8.1 by downloading it from the Windows Store... if you dare. By tomorrow, Microsoft will provide an ISO so that users who want to fresh-install to a, hopefully unimportant, machine can create install media.

The drivers, along with (again) the list of supported cards, are available at AMD.

Source: AMD

Frame Rating: AMD plans driver release to address frame pacing for July 31st

Subject: Graphics Cards | June 20, 2013 - 01:05 PM |
Tagged: radeon, nvidia, geforce, frame rating, fcat, crossfire, amd

Well, the date has been set.  AMD publicly stated on its @AMDRadeon Twitter account that a new version of the prototype driver we originally previewed with the release of the Radeon HD 7990 in April will be released to the public on July 31st - all for a problem that many in the industry didn't think existed.  


Since that April release AMD has been very quiet about its driver changes and has actually declined to send me updated prototypes over the spring.  Either they have it figured out or they are worried they haven't - but it looks like we'll find out at the end of next month, and I feel pretty confident that the team will be able to address the issues we brought to light.

For those of you that might have missed the discussion, our series of Frame Rating stories will tell you all about the issues with frame pacing and stutter in regards to AMD's CrossFire multi-GPU technology. 
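
If you missed those stories, the metric at the heart of them is simpler than the FCAT capture hardware behind it.  A toy example, with invented frame times, of what "frame time variance" boils down to:

    // Toy example with made-up numbers: Frame Rating captures real frame
    // times with FCAT hardware, but the stutter metric reduces to looking
    // at how much consecutive frame times swing.
    #include <cstdio>
    #include <cmath>
    #include <vector>

    int main() {
        // Alternating short/long frames: a classic multi-GPU pacing pattern.
        std::vector<double> frame_ms = {16.5, 16.9, 3.1, 30.2, 16.7, 2.8, 31.0};

        for (size_t i = 1; i < frame_ms.size(); ++i) {
            double swing = std::fabs(frame_ms[i] - frame_ms[i - 1]);
            // Large swings read as stutter even when the average FPS looks fine.
            if (swing > 8.0)
                std::printf("frame %zu: %.1f ms after %.1f ms (swing %.1f ms)\n",
                            i, frame_ms[i], frame_ms[i - 1], swing);
        }
        return 0;
    }

Those numbers average out to a healthy-looking ~60 FPS; it is the alternating short/long pattern, not the average, that reads as stutter on screen - and that is exactly what frame pacing is meant to smooth out.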

AMD gave the media a prototype driver in April to test with the Radeon HD 7990, a card that depends on CrossFire to work correctly, and the improvements were pretty drastic.

BF3_2560x1440_PLOT_0.png

So what can we expect on July 31st?  A driver that will give users the option to enable or disable the frame pacing technology they are developing - though I am still of the mindset that disabling it is never advantageous.  More to come in the next 30 days!

Source: Twitter

Rumor: AMD Gets Exclusive Optimization for all Frostbite 3 Games

Subject: Graphics Cards | June 18, 2013 - 12:39 PM |
Tagged: radeon, nvidia, geforce, frostbite 3, ea, dice, amd

UPDATE #3

The original source article at IGN.com has been updated with some new information.  Now they are saying the agreement between AMD and EA is "non-exclusive and gamers using other components will be supported." 

The quote from an EA rep reads as follows:

"DICE has a partnership with AMD specifically for Battlefield 4 on PC to showcase and optimize the game for AMD hardware," an EA spokesperson said. "This does not exclude DICE from working with other partners to ensure players have a great experience across a wide set of PCs for all their titles."

END UPDATE #3

This could be a huge deal for NVIDIA and AMD in the coming months - according to a story at IGN.com, AMD has entered into an agreement with EA that will give it exclusive rights to optimization for all games based on the Frostbite 3 engine.  That includes Battlefield 4, Mirror's Edge 2, Need for Speed Rivals and many more games due out this year and in 2014.  Here is the quote that is getting my attention:

Starting with the release of Battlefield 4, all current and future titles using the Frostbite 3 engine — Need for Speed Rivals, Mirror's Edge 2, etc. — will ship optimized exclusively for AMD GPUs and CPUs. While Nvidia-based systems will be supported, the company won't be able to develop and distribute updated drivers until after each game is released.

bf4.jpg

Battlefield 4 will be exclusively optimized for AMD hardware.

This is huge news for AMD, as the Frostbite 3 engine will be used for all EA published games going forward with the exception of sports titles.  The three mentioned above are huge, but this also includes Star Wars Battlefront, Dragon Age and even the next Mass Effect, so I can't emphasize enough how big of a win this could be for AMD's marketing and developer relations teams. 

I am particularly interested in this line as well:

While Nvidia-based systems will be supported, the company won't be able to develop and distribute updated drivers until after each game is released.

The world of PC optimizations and partnerships has been around for a long time so this isn't a huge surprise for anyone that follows PC gaming.  What is bothersome to me is that EA and AMD are rumored to have agreed that NVIDIA won't get access to the game as it is being developed - something that is CRUCIAL for day-of driver releases and performance tweaks for GeForce card owners.  In most cases, both AMD and NVIDIA developer relations teams get early access to game builds for PC titles in order to validate compatibility and to improve performance of these games for the public release.  Without these builds, NVIDIA would be at a big disadvantage.  This is exactly what happened with the recent Tomb Raider release.

UPDATE

AMD called me to reiterate their stance that competition does not automatically mean cutting out the other guy.  In the Tomb Raider story linked above, Neil Robison, AMD's Senior Director of Consumer and Graphics Alliances, states quite plainly: "The thing that angers me the most is when I see a request to debilitate a game. I understand winning, I get that, and I understand aggressive companies, I get that. Why would you ever want to introduce a feature on purpose that would make a game not good for half the gaming audience?"

So what do we take away from that statement, made in a story published in March, and today's rumor?  We have to take AMD at its word until we see solid evidence otherwise, or enough cases of this occurring that I feel like I am being duped.  AMD wants us all to know that they are playing the game the "right way" - a stance that just happens to run counter to this rumor. 

END UPDATE

tombraider.jpg

NVIDIA had performance and compatibility issues with Tomb Raider upon release.

The irony in all of this is that AMD has been accusing NVIDIA of doing this exact thing for years - though without any public statements from developers, publishers or NVIDIA.  When Batman: Arkham Asylum launched, AMD basically said that NVIDIA had locked them out of supporting antialiasing.  In 2008, Assassin's Creed dropped DX 10.1 support supposedly because NVIDIA, which didn't have support for it in GeForce cards at the time, asked them to.  There were even claims that NVIDIA was disabling cores for PhysX CPU support to help prop up GeForce sales.  At the time, AMD PR spun all of this as the worst possible thing a company could do in the name of gamers, that it was bad for the industry, etc.  But times change as opportunity changes.

The cold truth is that this is why AMD decided to take the chance that NVIDIA was allegedly unwilling to take: pursuing the console design wins that are often characterized as "bad business."  If settling for razor thin margins on the consoles is the risk, the reward that AMD is hoping for is exactly this: benefits in other markets thanks to better relationships with game developers.

ps4controller.jpg

Will the advantage be with AMD thanks to PS4 and Xbox One hardware?

At E3 I spoke in-depth with both NVIDIA and AMD executives about this debate and, as you might expect, both have very different opinions about what is going to transpire in the next 12-24 months.  AMD views this advantage (being in the consoles) as the big bet that is going to pay off in the more profitable PC space.  NVIDIA thinks that AMD still doesn't have what it takes to truly support developers in the long run, and that it doesn't have the engineers to innovate on the technology side.  In my view, having Radeon-based processors in the Xbox One and PlayStation 4 (as well as the Wii U, I guess) gives AMD a head start but won't win them the race for the hearts and minds of PC gamers. There is still a lot of work to be done for that.

Before this story broke I was planning on outlining another editorial on this subject and it looks like it just got promoted to a top priority.  There appear to be a lot of proverbial shoes left to drop in this battle, but it definitely needs more research and discussion. 

batmanaa.jpg

Remember the issues with Batman: Arkham Asylum?  I do.

I asked both NVIDIA and AMD for feedback on this story but only AMD has replied thus far.  Robert Hallock, PR manager for gaming and graphics in the Graphics Business Unit at AMD, sent me this:

It makes sense that game developers would focus on AMD hardware with AMD hardware being the backbone of the next console generation. At this time, though, our relationship with EA is exclusively focused on Battlefield 4 and its hardware optimizations for AMD CPUs, GPUs and APUs.

Not much there, but he is also not denying the original report from IGN.  It might just be too early for a more official statement.  I will update this story with information from NVIDIA if I hear anything else.

What do YOU think about this announcement though?  Is this good news for AMD and bad news for NVIDIA?  Is it good or bad for the gamer and, in particular, the PC gamer?  Your input will help guide our upcoming continued talks with NVIDIA and AMD on the subject. 

UPDATE #2

Just so we all have some clarification on this and on the potential for validity of the rumor, this is where I sourced the story from this afternoon:

taylorquote.png

END UPDATE #2

Source: IGN

AMD wants you to know there is a Radeon HD 7970 GHz Edition for $419

Subject: Graphics Cards | June 7, 2013 - 11:33 AM |
Tagged: amd, radeon, hd 7970 ghz edition, HD 7970, never settle

AMD just passed me a note that I found to be very interesting.  In an obvious response to the release of the NVIDIA GeForce GTX 770, which offers the GK104 GPU (previously found only in the GTX 680) at a lower price of $399, AMD wants you to know that at least ONE Radeon HD 7970 GHz Edition card is priced lower than the others.

sapphire7970ghz.png

The Sapphire Vapor-X HD 7970 GHz Edition is currently listed on Newegg.com for $419, a cool $30 less than the other HD 7970 GHz Edition cards.  This is not a card-wide price drop to $419, though.  AMD had this to say:

In late May I noted that we would be working with our partners to improve channel supply of the AMD Radeon™ HD 7970 GHz Edition to North American resellers like Newegg.com. Today I’m mailing to let you know that this process has begun to bear fruit, with the Sapphire Vapor-X HD 7970 GHz Edition now listing for the AMD SEP of $419 US. Of course, this GPU is also eligible for the Never Settle Reloaded AND Level Up programs!

Improving supply is an ongoing process, of course, but we’re pleased with the initial results of our efforts and hope you might pass word to your readers if you get a chance.

This "ongoing process" might mean that we'll see other partners' card sell for this lower price but it also might not.  In AMD's defense, our testing proves that in single GPU configurations, the Radeon HD 7970 GHz Edition does very well compared to the GTX 770, especially at higher resolutions.

I did ask AMD some more questions regarding what other partners think about a competitor getting unique treatment from AMD to offer this lower priced unit, but I haven't received an answer yet.  I'll update here when I do!

For today though, if you are looking for a Radeon HD 7970 GHz Edition that also comes with the AMD Never Settle game bundle (Crysis 3, Bioshock Infinite, Far Cry 3: Blood Dragon and Tomb Raider), it's hard to go wrong with that $419 option.

Source: Newegg.com
Subject: Processors
Manufacturer: AMD

The Architectural Deep Dive

AMD officially unveiled their brand new Bobcat architecture to the world at CES 2011.  This was a very important release for AMD in the low power market.  Even though netbooks were a dying breed at that time, AMD saw a good uptick in sales due to the combination of price, performance, and power consumption of the new Brazos platform.  AMD was of the opinion that a single CPU design would not be able to span the power consumption spectrum of CPUs at the time, so Bobcat was designed to fill the space from 1 watt to 25 watts.  Bobcat never was able to get down to that 1 watt point, but the Z-60 was a 4.5 watt part with two cores and the full 80 Radeon cores.

jag_01.jpg

The Bobcat architecture was produced on TSMC’s 40 nm process.  AMD eschewed the 32 nm HKMG/SOI process that was being utilized for the upcoming Llano and Bulldozer parts.  In hindsight, this was a good idea.  Yields took a while to improve on GLOBALFOUNDRIES' new process, while the mature 40 nm process at TSMC was running at full speed.  AMD was able to provide the market in fairly short order with good quantities of Bobcat based APUs.  The product more than paid for itself, and while not exactly a runaway success that garnered many points of marketshare from Intel, it helped to provide AMD with some stability in the market.  Furthermore, it provided a very good foundation for AMD when it comes to low power parts that are feature rich and offer competitive performance.

The original Brazos update did not happen; instead AMD introduced Brazos 2.0, a more process-improvement-oriented product which featured slightly higher speeds but remained in the same TDP range.  The uptake of this product was limited, and obviously it was a minor refresh meant to buoy purchases of the aging product.  Competition was coming from low power Ivy Bridge based chips, as well as AMD’s new Trinity products, which could reach TDPs of 17 watts.  Brazos and Brazos 2.0 did find a home in low powered, but full sized, notebooks that were very inexpensive.  Even heavily Intel-leaning manufacturers like Toshiba released Brazos based products in the sub-$500 market.  The combination of good CPU performance and above average GPU performance made this a strong product in this particular market.  It was so power efficient that only small batteries were typically needed, thereby further lowering the cost.

All things must pass, and Brazos is no exception.  Intel has a slew of 22 nm parts that are encroaching on the sub-15 watt territory, ARM partners have quite a few products that are getting pretty decent in terms of overall performance, and the graphics on all of these parts are seeing some significant upgrades.  The 40 nm based Bobcat products are no longer competitive with what the market has to offer.  So at this time we are finally seeing the first Jaguar based products.  Jaguar is not a revolutionary product, but it improves on nearly every aspect of performance and power usage as compared to Bobcat. 

Continue reading our analysis of the new Jaguar and GCN architecture!!

AMD to erupt Volcanic Islands GPUs as early as Q4 2013?

Subject: Editorial, General Tech, Graphics Cards, Processors | May 8, 2013 - 06:32 PM |
Tagged: Volcanic Islands, radeon, ps4, amd

So the Southern Islands might not be entirely stable throughout 2013 as we originally reported; seismic activity being analyzed suggests the eruption of a new GPU micro-architecture as early as Q4. These Volcanic Islands, as they have been codenamed, should explode onto the scene opposing NVIDIA's GeForce GTX 700-series products.

It is times like these where GPGPU-based seismic computation becomes useful.

The rumor is based upon a source which leaked a fragment of a slide outlining the processor in block diagram form and specifications of its alleged flagship chip, "Hawaii". Of primary note, Volcanic Islands is rumored to be organized with both Serial Processing Modules (SPMs) and a Parallel Compute Module (PCM).

Radeon9000.jpg

So apparently a discrete GPU can have serial processing units embedded on it now.

Heterogeneous Systems Architecture (HSA) is a set of initiatives to bridge the gap between massively parallel workloads and branching logic tasks. We usually make reference to this in terms of APUs and bringing parallel-optimized hardware to the CPU. In this case, we are discussing it in terms of bringing serial processing to the discrete GPU. According to the diagram, the chip would contain 8 processor modules, each with two processing cores and an FPU, for a total of 16 cores. There does not seem to be any definite indication of whether these cores would be based upon AMD's license to produce x86 processors or its other license to produce ARM processors. Unlike an APU, this design is heavily skewed towards parallel computation rather than a relatively even balance between CPU, GPU, and chipset features.

Now of course, why would they do that? Graphics processors can do branching logic but it tends to sharply cut performance. With an architecture such as this, a programmer might be able to more efficiently switch between parallel and branching logic tasks without doing an expensive switch across the motherboard and PCIe bus between devices. Josh Walrath suggested a server containing these as essentially add-in card computers. For gamers, this might help out with workloads such as AI which is awkwardly split between branching logic and massively parallel visibility and path-finding tasks. Josh seems skeptical about this until HSA becomes further adopted, however.

Still, there is a reason why they are implementing this now. I wonder, if the SPMs are based upon simple x86 cores, how the PS4 will influence PC gaming. Technically, a Volcanic Islands GPU would be an oversized PS4 on an add-in card. This could give AMD an edge, particularly in games ported to the PC from the PlayStation.

This chip, Hawaii, is rumored to have the following specifications:

  • 4096 stream processors
  • 16 serial processor cores on 8 modules
  • 4 geometry engines
  • 256 TMUs
  • 64 ROPs
  • 512-bit GDDR5 memory interface (the same GDDR5 memory type as the PS4, but on a wider bus)
  • 20 nm Gate-Last silicon fab process
    • Unclear if TSMC or "Common Platform" (IBM/Samsung/GLOBALFOUNDRIES)

Softpedia is also reporting on this leak. Their addition claims that the GPU will be designed on a 20 nm gate-last fabrication process. While gate-last is generally considered not worth the extra effort in production, Fully Depleted Silicon On Insulator (FD-SOI) is apparently "amazing" on gate-last at 28 nm and smaller nodes. This could mean that AMD is eyeing that technology and making this design with the intent of switching to an FD-SOI process, without the large redesign that an initially easier gate-first production would require.

Well that is a lot to process... so I will leave you with an open question for our viewers: what do you think AMD has planned with this architecture, and what do you like and/or dislike about what your speculation would mean?

Source: TechPowerUp

AMD Unveils New Gamer Memory: MOAR RAMDISK!

Subject: Memory | May 7, 2013 - 09:01 PM |
Tagged: radeon ramdisk, radeon, memory, amd, 4GB, 2133, 1.65v


AMD makes memory!  Ok, they likely contract out the memory.  Then they brand it!  Then they throw in some software to make RAMDisks out of all that memory that you are not using.  Let us face it: AMD is not doing anything particularly new here with memory.  It is very much a commodity market that is completely saturated with quality parts from multiple manufacturers.

So why is AMD doing it?  Well, I guess part of it is simply brand recognition, and potentially another source of income to help pad the bottom line.  They will not sell these parts at a loss, and they will have buyers among the diehard AMD fans.  Tim covered the previous release of AMD memory pretty well, and he looked at the performance results of the free RAMDisk software that AMD bundled with the DIMMs.  It does exactly what it is supposed to, but of course it sets aside a portion of memory.  When dealing with upwards of 16 GB of memory in a desktop computer, sacrificing half of that is really not that big a deal unless heavy duty image and video editing are required.

amd_mem_01.jpg

*Tomb Raider not included with Radeon Memory.  Radeon RAMDisk instead!

Today AMD is announcing a new memory product and a new bundled version of the RAMDisk software.  The top end SKU is now the AMD Radeon RG2133 DDR-3 module.  It comes in packages of up to 4 x 4GB DIMMs and carries a CAS latency of 10 at a reasonable 1.65v.  These modules are programmed with both the Intel based XMP and the AMD based AMP (MP stands for Memory Profiles… if that wasn’t entirely obvious).  The modules themselves are reasonable in terms of size (they will fit in any board, even with larger heatsinks on the CPU).  AMD claims that they are all high quality parts, which again is not entirely surprising since I do not know of anyone who advertises that their DIMMs feature only the most mediocre memory modules available.

amd_mem_02.jpg

Faster memory is faster, water is wet, and Ken still needs a girlfriend.

AMD goes on to claim that faster memory does improve overall system performance.  Furthermore AMD has revealed that UV light is in fact a cancer causing agent, Cocoa Puffs will turn any milk brown, and passing gas in church will rarely be commented upon (unless it is truly rank or you start calling yourself “Legion”).  Many graphs were presented that essentially showed that an overclocked APU with this memory will outperform a non-overclocked APU with DDR-3 1600 units.  Truly eye opening, to say the least.

amd_mem_03.jpg

How much RAMDisk can any one man take?  AMD wants to know!

The one big piece of the pie that we have yet to talk about is the enhanced version of Radeon RAMDisk (is Farva naming these things?).  This particular version can carve out up to 64 GB of memory for a RAMDisk!  I can tell you this now, me and my 8 GB of installed memory will get a LOT of mileage out of this one!  I can only imagine the product meeting.  “Hey, I’ve got a great idea!  We can give them up to 64 GB of RAMDisk!”  While another person replies, “How do you propose getting people above 64 GB, much less 32 GB of memory on a consumer level product…?”  After much hand wringing and mumbling someone comes up with, “I know!  They can span it across two motherboards!  That way they have to buy an extra motherboard AND a CPU!  Think of our attach rate!”  And there was much rejoicing.

amd_mem_04.jpg

Inconceivable!!!

So yes, more memory that goes faster is better.  Radeon RAMDisk is not just a comic superhero, it can improve overall system performance.  Combine the two and we have AMD Radeon Memory RG2133 with 64 GB of RAMDisk.  Considering that the top SKU will feature 4 x 4GB DIMMS, a user only needs to buy four kits and four motherboards and processors to get a 64GB RAMDisk.  Better throw in another CPU and motherboard so a user can at least have 16GB of memory available as, you know, memory.

Update and Clarification

Perhaps my tone was a bit too sarcastic, but I am just not seeing the value here.  Apparently (and I was not given this info beforehand) the 4 x 4 GB kits with the 64 GB RAMDisk will retail at $155.  Taking a quick look at Newegg I see that a user can buy quite a few different 2 x 8 GB 2133 kits anywhere from $139 to $145 with similar or better latencies/voltages.  Around $155, users will get better latencies and voltages down to 1.5v.  For 4 x 4GB kits we again see prices start at the $139 mark, but there are a significant number of other kits with, again, better voltages and latencies from $144 through $155.

Users can also get the free version of the Radeon RAMDisk that will utilize up to 4GB of space.  There are multiple other software kits for not a whole lot of money (less than $10) that will provide up to 16 GB of RAMDisk.  I just find the whole bundle to be merely comparable to what is currently out there.  Offering a 64 GB RAMDisk for use with 16 GB of total system memory just seems really silly.  The only way that could possibly be interesting would be if you could allocate 8 GB of that onto RAM and the other 56 GB onto a fast SSD.  I do not believe that to be the case with this software, but I would love to be proved wrong.

Source: AMD
Manufacturer: Various

Our 4K Testing Methods

You may have recently seen a story and video on PC Perspective about a new TV that made its way into the office.  Of particular interest is the fact that the SEIKI SE50UY04 50-in TV is a 4K television; it has a native resolution of 3840x2160.  For those that are unfamiliar with the upcoming TV and display standards, 3840x2160 is exactly four times the resolution of current 1080p TVs and displays.  Oh, and this TV only cost us $1300.

seiki5.jpg

In that short preview we validated that both NVIDIA and AMD current generation graphics cards can output to this TV at 3840x2160 using an HDMI cable.  You might be surprised to find that HDMI 1.4 can support 4K resolutions, but it can do so only at 30 Hz (60 Hz 4K TVs most likely won't be available until 2014), half the 60 Hz refresh rate of most TVs and monitors.  That doesn't mean we are limited to 30 FPS of performance though, far from it.  As you'll see in our testing on the coming pages, we were able to push out much higher frame rates using some very high end graphics solutions.
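
That 30 Hz ceiling is simple arithmetic on the link's capacity.  A quick back-of-the-envelope check, counting active pixels only (blanking intervals push the real requirement a bit higher) and assuming the commonly cited ~8.16 Gbps of video payload for HDMI 1.4:

    // Rough bandwidth check for 4K over HDMI 1.4 (active pixels only;
    // blanking intervals push the real requirement somewhat higher).
    #include <cstdio>

    int main() {
        const double pixels_per_frame = 3840.0 * 2160.0;
        const double bits_per_pixel   = 24.0;   // 8-bit RGB
        const double hdmi14_gbps      = 8.16;   // approx. HDMI 1.4 video payload

        const double gbps_30 = pixels_per_frame * bits_per_pixel * 30.0 / 1e9;
        const double gbps_60 = pixels_per_frame * bits_per_pixel * 60.0 / 1e9;

        std::printf("4K at 30 Hz needs ~%.1f Gbps, 4K at 60 Hz needs ~%.1f Gbps,\n"
                    "and HDMI 1.4 carries ~%.2f Gbps: 30 Hz fits, 60 Hz does not.\n",
                    gbps_30, gbps_60, hdmi14_gbps);
        return 0;
    }

Running the numbers gives roughly 6 Gbps for 4K30 and roughly 12 Gbps for 4K60, which is why 60 Hz 4K has to wait for newer links or tiled-display tricks.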

I should point out that I am not a TV reviewer and I don't claim to be one, so I'll leave the technical merits of the monitor itself to others.  Instead I will only report on my experiences with it while using Windows and playing games - it's pretty freaking awesome.  The only downside I have found in my time with the TV as a gaming monitor thus far is the combination of the 30 Hz refresh rate and disabled Vsync.  Because you are seeing fewer screen refreshes over the same amount of time than you would with a 60 Hz panel, all else being equal, you are getting twice as many "frames" of the game pushed to the monitor in each refresh cycle.  At 120 FPS, for example, a 30 Hz panel shows slices of roughly four different frames per refresh where a 60 Hz panel would show two.  This means that the horizontal tearing associated with disabling Vsync will likely be more apparent than it would be otherwise. 

4ksizes.png

Image from Digital Trends

I would likely recommend enabling Vsync for a tear-free experience on this TV once you are happy with performance levels, but obviously for our testing we wanted to keep it off to gauge performance of these graphics cards.

Continue reading our results from testing 4K 3840x2160 gaming on high end graphics cards!!

Forget the ARES II, here's a reference 7990

Subject: General Tech | April 24, 2013 - 12:51 PM |
Tagged: ARES II, amd, radeon, hd 7990, malta, tahiti

We've seen tests of dual 7970s in CrossFire simulating a 7990, and ASUS released the ARES II, which was the closest thing we had until today's release of the reference HD 7990.  There are many reviews to choose from when looking at this new flagship card.  From a pure performance perspective there is [H]ard|OCP's, which did not come out well for AMD's new card.  If you are more interested in our new Frame Rating process then there are two reviews to read: one that deals with the 7990 on the publicly available driver, and, perhaps more interesting, one using a prototype driver provided to Ryan that is intended to fix CrossFire stuttering on single displays, though not for Eyefinity.

slide6_0.jpg

"Today marks the launch of AMD's Radeon HD 7990. The Radeon HD 7990 is a dual-GPU video card that has its two GPUs down on a single PCB that uses CrossFire to operate the two Radeon HD 7970 GPUs. We will test this video card in the latest games, comparing it to GeForce GTX 680 SLI and Radeon HD 7970 GHz Edition CrossFire. "

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP