For those who dare to taunt the honey badger

Subject: General Tech | November 26, 2014 - 02:42 PM |
Tagged: far cry 4, amd, nvidia, gaming

Far Cry 4 uses the same engine as the previous game, Dunia Engine 2, albeit updated and modified for the new features GPUs can handle, especially NVIDIA's GameWorks effects.  That gives you some idea of how your system will handle the game, but for a definitive look at performance check out this review at [H]ard|OCP.  For their testing they used the GeForce 344.75 WHQL driver on the GTX 980 and 970 and the Catalyst 14.11.2 Beta for the R9 290X and 290.  On the Ultra preset at 1440p the performance differences between the AMD and NVIDIA cards were negligible, but once they started testing the new features, such as the enhanced god rays and AA options, there were some significant differences worth reading up on.  It is worth noting that even two GTX 980s in SLI at 3600x1920 cannot handle 8x MSAA; thankfully SMAA is supported in the game.
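For a rough sense of why 8x MSAA gets so punishing at resolutions like that, a back-of-envelope estimate of just the multisampled color and depth targets is instructive. Here is a minimal sketch, assuming a single RGBA8 color buffer plus a 32-bit depth/stencil buffer (real engines allocate many more render targets, so treat this as a lower bound rather than the game's actual memory footprint):

```python
# Back-of-envelope estimate of multisampled render-target memory.
# Assumption: one RGBA8 color target (4 bytes/sample) plus one
# 32-bit depth/stencil target (4 bytes/sample) = 8 bytes/sample.

def msaa_targets_mib(width, height, samples, bytes_per_sample=8):
    """Approximate MiB for multisampled color + depth buffers."""
    return width * height * samples * bytes_per_sample / 2**20

for samples in (1, 4, 8):
    mib = msaa_targets_mib(3600, 1920, samples)
    print(f"3600x1920 @ {samples}x MSAA: ~{mib:,.0f} MiB")
```

At 8x, the sample storage alone approaches half a gigabyte per GPU before textures, geometry, or any GameWorks effects, which helps explain why a post-process technique like SMAA is so much more practical at high resolutions.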


"Far Cry 4 is here, and we take an early look at how current video cards stack up in performance, and which quality settings are graphically demanding. We will also look at some image quality comparisons and talk about the state of this game at launch. Will it measure up to Far Cry 3 in terms of graphic fidelity?"

Source: [H]ard|OCP

AMD Announces Carrizo and Carrizo-L SoCs

Subject: Processors | November 20, 2014 - 01:31 PM |
Tagged: amd, APU, carrizo, Carrizo-L, Kaveri, Excavator, Steamroller, SoC, Intel, mobile

AMD has certainly gone about doing things in a slightly different manner than we are used to.  Today they announced their two latest APUs, which will begin shipping in the first half of 2015.  These APUs are up and running at AMD and are being validated as we speak.  AMD did not release many details on these products, but what we do know is pretty interesting.

Carrizo is based on the latest iteration of AMD’s CPU technology.  Excavator is the codename for these latest CPU cores, and they promise to be smaller and more efficient than the previous Steamroller cores which power the latest Kaveri-based APUs.  Carrizo-L is the lower-power variant, which will be based on the Puma+ core.  The current Beema APU is based on the Puma architecture.


Roadmaps show that the Carrizo APUs will be 28 nm products, presumably fabricated by GLOBALFOUNDRIES.  Many were hoping that AMD would make the jump to 20 nm with this generation of products, but that does not seem to be the case.  This is not surprising due to the limitations of that particular process when dealing with large designs that require a lot of current.  AMD will likely be pushing for 16 nm FinFET for the generation of products after Carrizo.

The big Carrizo supposedly has a next-generation GCN unit.  My guess here is that it will use the same design as we saw with the R9 285, a next-generation part with improved efficiency.  AMD did not say how many GCN cores will be present in Carrizo, but the count will be very similar to what we see now with Kaveri.  Carrizo-L will use the same GCN units as the previous-generation Beema-based products.


I believe AMD has spent a lot more time hand tuning Excavator instead of relying on heavily automated place and route.  This should allow them to retain much of the performance of the part while cutting down on die size dramatically.  Some rumors that I have seen point to each Excavator module being 40% smaller than Steamroller.  I am not entirely sure they have achieved that type of improvement, but more hand layout does typically mean greater efficiency and less waste.  The downside to hand layout is that it is extremely time and manpower intensive; Intel can afford to do it across entire designs, while AMD has traditionally had to rely more on automated place and route.

Carrizo will be the first HSA 1.0 compliant SoC.  It is in fact a true SoC, as it integrates the southbridge functions that had previously been handled by external chips like the A88X that supports the current Kaveri desktop APUs.  Carrizo and Carrizo-L will also share the same infrastructure, meaning the motherboards these APUs are soldered onto are interchangeable.  A single motherboard design from a partner OEM will be able to address multiple markets, with products ranging from 4 watts TDP up to 35 watts.

Finally, both APUs feature the security processor that gives them access to ARM TrustZone technology.  This is a very small ARM core that handles the secure boot partition and services security requests.  This puts AMD on par with Intel and its secure computing solution (vPro).


These products will be aimed only at the mobile market.  So far AMD has not announced Carrizo for the desktop, but when they do I would imagine it will hit a max TDP of around 65 watts.  AMD claims that Carrizo is one of its biggest jumps yet in terms of power efficiency.  A lot of different pieces of technology have come together in this product to make it more competitive with Intel and its process advantage.  Time will tell if that is the case, but for now AMD is staying relevant and pushing its product releases out more consistently on time.

Source: AMD

Samsung Announces First FreeSync UHD Monitors

Subject: Displays | November 20, 2014 - 10:50 AM |
Tagged: TN, Samsung, nvidia, monitor, ips, g-sync, freesync, amd

We have been teased for the past few months about when we would see the first implementations of AMD’s FreeSync technology, but now we finally have some concrete news about who will actually be producing these products.

Samsung has announced that they will be introducing the world’s first FreeSync-enabled Ultra HD monitors.  The first models to include this feature will be the updated UD590 and the new UE850, which will be introduced to the market in March of 2015.  The current UD590 is a 28” unit with a 3840x2160 resolution and support for up to 1 billion colors.  This looks to be one of those advanced TN panels that sell from $500 to $900, depending on the model.


AMD had promised journalists some hands-on time by the end of this year, with shipping products in the first half of next year.  It seems that Samsung is the first to jump on the bandwagon, and we would imagine that others will follow with the technology.  In theory FreeSync offers many of the same benefits as NVIDIA’s G-SYNC, but it does not require the same level of dedicated hardware.  I can imagine that we will be seeing some interesting comparisons next year, with shipping hardware showing how FreeSync stacks up to G-SYNC.

Joe Chan, Vice President of Samsung Electronics Southeast Asia Headquarters commented, “We are very pleased to adopt AMD FreeSync technology to our 2015 Samsung Electronics Visual Display division’s UHD monitor roadmap, which fully supports open standards.  With this technology, we believe users including gamers will be able to enjoy their videos and games to be played with smoother frame display without stuttering or tearing on their monitors.”

Source: Samsung

CS:GO and TF2 on Linux and Radeon

Subject: General Tech | November 12, 2014 - 05:10 PM |
Tagged: linux, amd, radeon, CS:GO, tf2

With the new driver from AMD and a long list of cards to test, from an R9 290 all the way back to an HD 4650, Phoronix has put together a rather definitive look at the current performance you can expect from CS:GO and TF2.  CS:GO was tested at 2560x1600 and showed many performance changes from the previous driver, including some great news for 290 owners.  TF2 was tested at the same resolution, and many of the GPUs were capable of providing 60 FPS or higher, again with the 290 taking the lead.  Phoronix also tested the efficiency of these cards, detailing the number of frames per second per watt used; this may not be pertinent to many users, but it does offer an interesting look at the efficiency of the GPUs.  If you are gaming on a Radeon on Linux, now is a good time to upgrade your drivers and associated programs.
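Frames per second per watt is about as simple as efficiency metrics get: average frame rate divided by average power draw while gaming. A trivial sketch of the calculation (the card names and numbers below are made-up placeholders, not figures from the Phoronix article):

```python
# FPS-per-watt: average frame rate divided by average power draw.
# The entries below are illustrative placeholders, not Phoronix data.

results = {
    "card_a": {"avg_fps": 118.0, "avg_watts": 240.0},
    "card_b": {"avg_fps": 64.0, "avg_watts": 95.0},
}

for card, r in results.items():
    print(f"{card}: {r['avg_fps'] / r['avg_watts']:.2f} FPS/W")
```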


"The latest massive set of Linux test data we have to share with Linux gamers and enthusiasts is a look at Counter-Strike: Global Offensive and Team Fortress 2 when using the very newest open-source Radeon graphics driver code. The very latest open-source Radeon driver code tested with these popular Valve Linux games were the Linux 3.18 Git kernel, Mesa 10.4-devel, LLVM 3.6 SVN, and xf86-video-ati 7.5.99."

Source: Phoronix

Podcast #325 - Samsung 850 Pro Roundup, MSI's GTX 760 ITX, 8GB R9 290X and more!

Subject: General Tech | November 6, 2014 - 02:26 PM |
Tagged: video, Samsung, R9 290X, podcast, nvidia, cherry mx brown, msi titan, msi, hawaii, gtx 760 itx, assassin's creed, amd, 8gb, 850 PRO

PC Perspective Podcast #325 - 11/06/2014

Join us this week as we discuss our Samsung 850 Pro Roundup, MSI's GTX 760 ITX, 8GB R9 290X and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

AMD Gaming Evolved Goes Beyond Earth with Hawaii

Subject: General Tech, Graphics Cards | November 6, 2014 - 12:00 AM |
Tagged: radeon, r9 295x2, R9 290X, r9 290, R9, hawaii, civilization, beyond earth, amd

Why settle for space, when you can go Beyond Earth too (but only if you go to Hawaii)!


The Never Settle promotion launched itself into space a couple of months ago, but AMD isn't settling for that. If you purchase a Hawaii-based graphics card (R9 290, R9 290X, or R9 295X2) then you will get a free copy of Civilization: Beyond Earth on top of the choice of three games (or game packs) from the Never Settle Space Gold Reward tier. Beyond Earth makes a lot of sense of course, because it is a new game that is also one of the most comprehensive implementations of Mantle yet.


To be eligible, the purchase needs to be made on or after November 6th (which is today). Make sure that what you're buying is a "qualifying purchase" from "participating retailers", because that is a lot of value to miss in a moment of carelessness.

AMD has not specified an end date for this promotion.

Source: AMD

8 GB Variants of the R9 290X Coming This Month

Subject: General Tech, Graphics Cards | November 5, 2014 - 12:56 PM |
Tagged: radeon, R9 290X, R9, amd, 8gb

With the current range of AMD’s R9 290X cards sitting at 4 GB of memory, listings for an 8 GB version have appeared at an online retailer. As far back as March, Sapphire was rumored to be building an 8 GB variant. Those rumors were supposedly quashed last month by AMD and Sapphire; however, AMD has since confirmed the existence of the new additions to the series. Pre-orders have appeared online and are said to be shipping out this month.


Image Credit: Overclockers UK

With 8 GB of GDDR5 memory and price tags between $480 and $520, these new additions, as expected, do not come cheap. Compared to the 4 GB versions of the R9 290X line, which run about $160 less according to the online retailer, is it worth upgrading at this stage? For people using a single 1080p monitor, the answer is likely no. For those with multi-screen setups, or those with pockets deep enough to own a 4K display, however, the benefits may begin to justify the premium. Even at 4K, though, a single 8 GB R9 290X may not provide the best experience; a CrossFire setup, being less limited by GPU speed, would benefit more from the 8 GB bump.

AMD’s 8 GB R9 290X cards are currently available for pre-order: a reference version for £299.99 + VAT (~$480) and a Vapor-X version for £324.99 + VAT (~$520). They are slated to ship later this month.

Battle of the low cost SoCs, Sempron versus Celeron

Subject: Processors | November 3, 2014 - 02:38 PM |
Tagged: Sempron 2650, low cost, Intel, Celeron J1800, asus AM1M-A, ASRock D1800M, amd

For a mere $60 you can get the ASRock D1800M motherboard with a Celeron J1800 installed, or for about $8 more you can get a socketed Sempron 2650 and a compatible motherboard. After that it is merely a matter of adding a PSU, RAM, and storage, and you have a working machine for very little cost. Those were the systems Hardware Secrets tested to see which low-cost, low-powered system makes more sense to purchase for light browsing and media consumption. As you would expect, the roughly 1 GHz clock advantage that the Celeron enjoys pushed its performance above the Sempron in every test but 3DMark, but what is interesting is that the performance gap was nowhere near as large a percentage difference as the clock speed. While it is clear that the Celeron runs cooler, quieter, and faster, the fact that the AMD solution is socketed might sway some buyers' decisions. Check out the full review if you are interested in working machines that cost less than $200 to assemble.


"Both AMD and Intel recently released new families of low cost, low TDP desktop CPUs. AMD launched the AM1 platform with Sempron and Athlon "Kabini" processors, while Intel released the "Bay Trail-D" Celeron and Pentium CPUs, recognizable by the use of the letter "J" on the model naming. Among the lowest-end models of each family are, respectively, the AMD Sempron 2650, and the Intel Celeron J1800. Let's compare the performance of those CPUs and discover which one is the best buy in the low-end market segment."

Podcast #324 - Civilization: Beyond Earth, Consoles Performance Issues, Samsung SSD updates and more

Subject: General Tech | October 30, 2014 - 02:10 PM |
Tagged: xbox one, video, steiger dynamics, ps4, podcast, nvidia, Mantle, LIVA, Intel, ECS, Broadwell-E, amd, Alienware 13

PC Perspective Podcast #324 - 10/30/2014

Join us this week as we discuss Civilization: Beyond Earth Performance, Consoles Performance Issues, Samsung SSD updates and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Sony PS4 and Microsoft Xbox One Already Hitting a Performance Wall

Subject: General Tech, Graphics Cards | October 27, 2014 - 04:50 PM |
Tagged: xbox one, sony, ps4, playstation 4, microsoft, amd

A couple of weeks back, a developer on Ubisoft's Assassin's Creed Unity was quoted as saying that the team had decided to run both the Xbox One and PlayStation 4 variants of the game at 1600x900 resolution "to avoid all the debates and stuff." Of course, the Internet exploded in a collection of theories about why that would be the case: were they paid off by Microsoft?

For those of us that focus more on the world of PC gaming, however, an email sent to the weekly podcast the following week from an anonymous (but seemingly reliable) developer on the Unity team raised even more interesting material. In the email, while addressing other issues such as the value of pixel counts and the stunning visuals of the game, the developer asserted that we may have already peaked on the graphical compute capability of these two new gaming consoles. Here is a portion of the information:

The PS4 couldn’t do 1080p 30fps for our game, whatever people, or Sony and Microsoft say. ...With all the concessions from Microsoft, backing out of CPU reservations not once, but twice, you’re looking at about a 1-2 FPS difference between the two consoles.

What's hard is not getting the game to render but getting everything else in the game at the same level of performance we designed from the start for the graphics. By the amount of content and NPCs in the game, from someone who witnessed a lot of optimizations for games from Ubisoft in the past, this is crazily optimized for such a young generation of consoles. This is really about to define the next-generation unlike any other game beforehand.

We are bound from the CPU because of AI. Around 50% of the CPU is used for the pre-packaged rendering parts..


So, if we take this anonymous developer's information as true, and this whole story is based on that assumption, then we have learned some interesting things.

  1. The PS4, the more graphically powerful of the two very similarly designed consoles, was not able to maintain a 30 FPS target when rendering at 1920x1080 resolution with Assassin's Creed Unity.
  2. The Xbox One (after giving developers access to more compute cycles previously reserved for Kinect) is within 1-2 FPS of the PS4.
  3. The Ubisoft team sees Unity as being "crazily optimized" for the architecture and consoles even as we just now approach the one-year anniversary of their release.
  4. Half of the CPU compute time is being used to help the rendering engine by unpacking pre-baked lighting models for the global illumination implementation, leaving the AI and everything else limited to the remaining 50% of CPU performance.

It would appear that, just as many in the media declared when the specifications for the new consoles were announced, the hardware inside the PlayStation 4 and Xbox One undershoots the needs of game developers who truly want to build "next-generation" games. If, as this developer states, we are less than a year into the life cycle of hardware that was planned for an 8-10 year window and we have already reached its performance limits, that's a bad sign for game developers who really want to create exciting gaming worlds. Keep in mind that this time around the hardware isn't custom-built cores or a Cell architecture - we are talking about very basic x86 cores and traditional GPU hardware that ALL software developers are intimately familiar with. It does not surprise me one bit that the more advanced development teams have already hit peak performance.


If the PS4, the slightly more powerful console of the pair, is unable to render reliably at 1080p with a 30 FPS target, then unless the Ubisoft team is completely off its rocker in terms of development capability, the advancement of gaming on consoles would appear to be somewhat limited. Remember the specifications for these two consoles:

                 PlayStation 4          Xbox One
Processor        8-core Jaguar APU      8-core Jaguar APU
Motherboard      Custom                 Custom
Memory           8GB GDDR5              8GB DDR3
Graphics Card    1152 Stream Unit APU   768 Stream Unit APU
Peak Compute     1,840 GFLOPS           1,310 GFLOPS

The custom-built parts from AMD both feature an 8-core Jaguar x86 architecture and either 768 or 1152 stream processors. The Jaguar CPU cores aren't high-performance parts: single-threaded performance of Jaguar trails Intel's Silvermont/Bay Trail designs by as much as 25%, and Bay Trail is powering lots of super low-cost tablets today, including the $179 ECS LIVA palm-sized mini-PC we reviewed this week. The 1152/768 stream processors in the GPU portion of the AMD APU provide some punch, but a Radeon HD 7790 (now called the R7 260X), released in March of 2013, provides more performance than the PS4, and the Radeon R7 250X is faster than what resides in the Xbox One.
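Incidentally, the peak compute figures in the table above fall straight out of those stream processor counts: each GCN stream processor can retire one fused multiply-add (two floating-point operations) per clock. A quick sketch of the arithmetic, using the commonly reported GPU clocks of roughly 800 MHz for the PS4 and 853 MHz for the Xbox One (widely cited figures, not official vendor disclosures):

```python
# Peak single-precision throughput for a GCN GPU:
#   stream processors x 2 FLOPs per clock (one FMA) x clock (GHz).
# Clocks below are the commonly reported ~800 MHz (PS4) and
# ~853 MHz (Xbox One), not official vendor numbers.

def peak_gflops(stream_processors, clock_ghz, flops_per_clock=2):
    return stream_processors * flops_per_clock * clock_ghz

print(f"PS4:      {peak_gflops(1152, 0.800):,.0f} GFLOPS")  # ~1,843
print(f"Xbox One: {peak_gflops(768, 0.853):,.0f} GFLOPS")   # ~1,310
```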


If you were to ask me today what kind of performance would be required from AMD's current GPU lineup for a steady 1080p gaming experience on the PC, I would probably tell you the R9 280, a card you can buy today for around $180. From NVIDIA, I would likely pick a GTX 760 (around $200).

Also note that if the developer is using 50% of the CPU resources for rendering computation and the remaining 50% isn't able to hold up its duties on AI, etc., we likely have hit performance walls on the x86 cores as well.

Even if this developer quote is 100% correct, that doesn't mean the current generation of consoles is completely doomed. Microsoft has already stated that DirectX 12, focused on the performance efficiency of current-generation hardware, will be coming to the Xbox One, and that could mean additional performance gains for developers. The PS4 will likely have access to OpenGL Next, which is due in the future. And of course, it's also possible that this developer is just wrong and there is plenty of headroom left in the hardware for games to take advantage of.


But honestly, based on my experience with these GPU and CPU cores, I don't think that's the case. If you look at screenshots of Assassin's Creed Unity and then look at the minimum and recommended specifications for the game on the PC, there is an enormous discrepancy. Are the developers just writing lazy code and not truly optimizing for the hardware? It seems unlikely that a company the size of Ubisoft would choose this route on purpose, creating a console game that runs in a less-than-ideal state while also struggling on the PC version. Remember, there is almost no "porting" going on here: the Xbox One and PlayStation 4 now share the same architecture as the PC.

Of course, we might just be treading through known waters. I know we are a bit biased, and so is our reader base, but I am curious: do you think MS and Sony have put themselves in a hole with their shortsighted hardware selections?

UPDATE: It would appear that a lot of readers and commenters take our editorial on the state of the PS4 and XB1 as a direct attack on AMD and its APU design. That isn't really the case - regardless of whose hardware is inside the consoles, had Microsoft and Sony still targeted the same performance levels, we would be in the exact same situation. An Intel + NVIDIA hardware combination could just as easily have been built to the same peak theoretical compute levels and would have hit the same performance wall just as quickly. MS and Sony could have prevented this by using higher-performance hardware, selling the consoles at a loss out of the gate, and preparing each platform properly for the next 7-10 years. And again, the console manufacturers could have done that with higher-end AMD hardware, Intel hardware, or NVIDIA hardware. The state of the console performance war is truly hardware agnostic.