AMD AM1 Retested on 60 Watt Power Supply

Subject: Editorial | April 23, 2014 - 09:51 PM |
Tagged: TDP, Athlon 5350, Asus AM1I-A, amd, AM1

If I had one regret about my AM1 review that posted a few weeks ago, it was that I used a pretty hefty (relatively speaking) 500 watt power supply for a part that is listed at a 25 watt TDP.  Power supplies really do not hit their rated efficiency numbers until they are loaded to somewhere around 50% of their capacity.  Even the most efficient 500 watt power supply is going to inflate the consumption numbers of these diminutive parts that we are currently testing.

am1p_01.jpg

Keep it simple... keep it efficient.

Ryan had sent along a 60 watt notebook power supply with an ATX cable adapter at around the same time as I started testing the AMD Athlon 5350 and Asus AM1I-A.  I was somewhat roped into running that previously mentioned 500 watt power supply for comparative reasons: I was also testing a 100 watt TDP A10-6790 APU on a pretty loaded Gigabyte A88X based ITX motherboard, and that combination would likely have fried the 60 watt (12 V x 5 A) notebook power supply under load.

Now that I had a little extra time on my hands, I was able to finally get around to seeing exactly how efficient this little number could get.  I swapped the old WD Green 1 TB drive for a new Samsung 840 EVO 500 GB SSD.  I removed the BD-ROM drive completely from the equation as well.  Neither of those parts uses a lot of wattage, but I am pushing this combination to go as low as I possibly can.

power-idle.png

power-load.png

The results are pretty interesting.  At idle we see the 60 watt supply (sans spinning drive and BD-ROM) hitting 12 watts as measured at the wall.  The 500 watt power supply and those extra pieces added another 11 watts of draw.  At load we see a similar gap, though not nearly as dramatic as at idle: the 60 watt system draws 29 watts while the 500 watt system sits at 37 watts.
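To put those wall numbers in perspective, here is a rough back-of-the-envelope comparison of how far below the usual efficiency sweet spot each unit is operating.  The percentages are illustrative only: wall draw is not the same thing as DC load (it includes the supply's own conversion loss), and the 500 watt configuration still had the spinning drive and BD-ROM attached.

```python
# Rough illustration only: approximate how much of each supply's rated
# capacity these systems represent. Wall draw overstates the true DC load
# a bit, and the 500 W numbers include the hard drive and BD-ROM.
measurements = {
    "60 W adapter, idle":  (12, 60),
    "60 W adapter, load":  (29, 60),
    "500 W ATX PSU, idle": (23, 500),
    "500 W ATX PSU, load": (37, 500),
}

for label, (watts, capacity) in measurements.items():
    print(f"{label}: {watts} W wall draw, roughly {watts / capacity:.0%} of rated capacity")
```

A big ATX unit loafing along at a few percent of its rating is well outside the load range where its efficiency certification was measured, which is exactly why the little 60 watt brick looks so much better here.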

am1p_02.jpg

So how do you get from a 60 watt notebook power adapter to ATX standard? This is the brains behind the operation.

The numbers for both power supplies are good, but we do see a nice jump in efficiency from using the smaller unit and an SSD instead of a spinning drive.  Either way, the Athlon 5350 and AMD AM1 infrastructure sip power compared to most desktop processors.

Source: AMD

Post Tax Day Celebration! Win an EVGA Hadron Air and GeForce GTX 750!

Subject: Editorial, General Tech, Graphics Cards | April 16, 2014 - 07:01 PM |
Tagged: hadron air, hadron, gtx 750, giveaway, evga, contest

In these good old United States of America, April 15th is a trying day. Circled on most of our calendars is the final deadline for paying up your bounty to Uncle Sam so we can continue to have things like freeway systems and universal Internet access. 

But EVGA is here for us! Courtesy of our long-time sponsor, you can win a post-Tax Day prize pack that includes both an EVGA Hadron Air mini-ITX chassis (reviewed by us here) and an EVGA GeForce GTX 750 graphics card.

evgacontestapril.jpg

Nothing makes paying taxes better than free stuff that falls under the gift limit...

With these components under your belt you are well down the road to PC gaming bliss, upgrading your existing PC or starting a new one in a form factor you might not have otherwise imagined. 

Competing for these prizes is simple and open to anyone in the world, even if you don't suffer the same April 15th fear that we do. (I'm sure you have your own worries...)

  1. Fill out the form at the bottom of this post to give us your name and email address, in addition to the reasons you love April 15th! (Seriously, we need some good ideas for next year to keep our heads up!) Note that leaving a standard comment on this post does not count as an entry, though you are welcome to do that too.
     
  2. Stop by our Facebook page and give us a LIKE (I hate saying that), head over to our Twitter page and follow @pcper, and heck, why not check out our many videos and subscribe to our YouTube channel?
     
  3. Why not do the same for EVGA's Facebook and Twitter accounts?
     
  4. Wait patiently for April 30th, when we will draw the winners and update this news post with their names and tax documentation! (Okay, probably not that last part.)

A huge thanks goes out to our friends and supporters at EVGA for providing us with the hardware to hand out to you all. If it weren't for sponsors like this, PC Perspective just couldn't happen, so be sure to give them some thanks when you see them around the In-tar-webs!!

Good luck!

Source: EVGA

Ars Technica Estimates Steam Sales and Hours Played

Subject: Editorial, General Tech | April 16, 2014 - 01:56 AM |
Tagged: valve, steam

Valve does not release sales or hours played figures for any game on Steam and it is rare to find a publisher who will volunteer that information. That said, Steam user profiles list that information on a per-account basis. If someone, say Ars Technica, had access to sufficient server capacity, say an Amazon Web Services instance, and a reasonable understanding of statistics, then they could estimate.

Oh look, Ars Technica estimated by extrapolating from over 250,000 random accounts.

SteamHW.png

If interested, I would definitely look through the original editorial for all of its many findings. Here, if you let me (and you can't stop me even if you don't), I would like to add my own analysis on a specific topic. The Elder Scrolls V: Skyrim on the PC, according to VGChartz, sold 3.42 million copies at retail, worldwide. The thing is, Steamworks was required for every copy sold at retail or online. According to Ars Technica's estimates, 5.94 million copies were registered with Steam.

5.94 minus 3.42 leaves 2.52 million copies sold digitally, meaning more than 40 percent of PC sales were made through Steam and other digital distribution platforms. It also means that the PC was the game's second-best selling platform, ahead of the PS3 (5.43m) and behind the Xbox 360 (7.92m), not counting any digital sales on those platforms, of course. Despite its engine being based on DirectX 9, Skyrim is still a fairly demanding game, so that is a healthy install base of reasonably capable gaming PCs.
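For the curious, the general shape of that estimate, and the arithmetic above, fit in a few lines. The sample ownership count and total account figure below are placeholders for illustration, not Ars Technica's actual inputs.

```python
# Sketch of the sample-and-extrapolate approach (illustrative numbers only;
# the sample ownership count and total account count are NOT Ars Technica's).
def estimate_owners(owners_in_sample, sample_size, total_accounts):
    """Scale the ownership rate seen in a random sample up to the whole population."""
    return owners_in_sample / sample_size * total_accounts

# e.g. if 198 of 250,000 sampled profiles owned a game and Steam had ~75
# million accounts, the estimate would be roughly 59,400 owners.
print(round(estimate_owners(198, 250_000, 75_000_000)))

# The Skyrim arithmetic from the paragraph above:
steam_total = 5.94   # million copies registered with Steam (Ars estimate)
retail      = 3.42   # million boxed copies (VGChartz)
digital     = steam_total - retail
print(f"{digital:.2f} million digital copies, {digital / steam_total:.0%} of PC sales")
```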

Did you discover anything else on your own? Be sure to discuss it in our comments!

Source: Ars Technica
Subject: Editorial
Manufacturer: Microsoft

Taking it all the way to 12!

Microsoft has been developing DirectX for around 20 years now.  Back in the 90s, the hardware and software scene for gaming was chaotic, at best.  We had wonderful things like “SoundBlaster compatibility” and 3rd party graphics APIs such as Glide, S3G, PowerSGL, RRedline, and ATICIF.  OpenGL was aimed more towards professional applications, and it took John Carmack and id Software, through GLQuake in 1996, to start the ball moving in that particular direction.  There was a distinct need for standards across audio and 3D graphics that would de-fragment the industry for developers.  DirectX was introduced with Windows 95, but the popularity of Direct3D did not really take off until DirectX 3.0, which was released in late 1996.

dx_history.jpg

DirectX has had some notable successes, and some notable let downs, over the years.  DX6 provided a much needed boost in 3D graphics, while DX8 introduced the world to programmable shading.  DX9 was the most long-lived version, thanks to it being the basis for the Xbox 360 console with its extended lifespan.  DX11 added in a bunch of features and made programming much simpler, all the while improving performance over DX10.  The low points?  DX10 was pretty dismal due to the performance penalty on hardware that supported some of the advanced rendering techniques.  DirectX 7 was around a little more than a year before giving way to DX8.  DX1 and DX2?  Yeah, those were very unpopular and problematic, due to the myriad changes in a modern operating system (Win95) as compared to the DOS based world that game devs were used to.

Some four years ago, if NVIDIA's account is accurate, initial talks began on the development of DirectX 12.  DX11 was released in 2009 and has been an excellent foundation for PC games, but it is not perfect.  There is still a significant cost in potential performance due to a variety of factors, including a fairly inefficient hardware abstraction layer that relies on fast single-threaded CPU performance rather than leveraging the power of a modern multi-core/multi-thread unit.  The result is a limit on how many objects can be represented on screen, as well as on other operations that will bottleneck even the fastest CPU threads.
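A toy model makes that object-count ceiling concrete.  None of the numbers below come from Microsoft or NVIDIA; they are invented simply to show how a fixed per-draw-call cost on a single driver thread caps what a frame can contain.

```python
# Toy model of a CPU-side draw call budget (all costs are made up for
# illustration). A fixed per-call driver overhead on one thread limits how
# many objects can be submitted per frame, no matter how fast the GPU is.
frame_budget_ms       = 1000 / 60   # ~16.7 ms per frame at 60 FPS
other_cpu_work_ms     = 8.0         # game logic, physics, audio, etc. (assumed)
cost_per_draw_call_ms = 0.01        # hypothetical single-threaded driver cost

max_draw_calls = int((frame_budget_ms - other_cpu_work_ms) / cost_per_draw_call_ms)
print(f"Roughly {max_draw_calls} draw calls per frame before the CPU is the bottleneck")
```

Cutting that per-call overhead, or spreading submission across many threads, raises the ceiling without touching the GPU at all, which is the core pitch behind both Mantle and DirectX 12.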

Click here to read the rest of the article!

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by compute cores, sometimes thousands of them. Video drivers are complex packages of software, and one of their many tasks is converting your programs, known as shaders, into machine code for the hardware. If that machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.

amd-gcn-unit.jpg

Emil Persson of Avalanche Studios, probably best known for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its presence in both consoles and PCs, while bringing up older GPUs as examples. Yes, he has many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of that memory) rather than to "slots" (although drivers and APIs often pretend those relics still exist), vectors are mostly treated as collections of scalars, and so forth. Tricks that attempt to pack instructions into vectors, such as using dot products, can simply put irrelevant restrictions on the compiler and optimizer... which breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.
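As a rough CPU-side analogy (this is plain Python, not GCN assembly, and it is only meant to illustrate the idea): what looks like one vector operation to the programmer is executed by the hardware as a chain of scalar multiply-adds anyway, so packing scalars into a dot product by hand gains nothing.

```python
# CPU-side analogy only (not GCN ISA): a "vector" dot product is really just a
# sequence of scalar multiply-adds, each one an FMA candidate on real hardware.
def dot(a, b):
    acc = 0.0
    for x, y in zip(a, b):
        acc = x * y + acc   # one multiply-add per component
    return acc

print(dot([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]))  # 70.0
```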

Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over the execution speed of one individual op compared to another, at various precisions, and which ones to avoid (protip: integer divide). Also, fused multiply-add is awesome.

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side of the problem. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe it will be trends like this presentation that prove more significant... even if they stay behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - and that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

Subject: Editorial, Storage
Manufacturer: Intel

Introduction and Background

Introduction:

Back in 2010, Intel threw a bit of a press thing for a short list of analysts and reviewers out at their IMFT flash memory plant at Lehi, Utah. The theme and message of that event was to announce 25nm flash entering mass production. A few years have passed, and 25nm flash is fairly ubiquitous, with 20nm rapidly gaining as IMFT scales production even higher with the smaller process. Last week, Intel threw a similar event, but instead of showing off a die shrink or even announcing a new enthusiast SSD, they chose to take a step back and brief us on the various design, engineering, and validation testing of their flash storage product lines.

heisman-cropped.jpg

At the Lehi event, I did my best to make off with a 25nm wafer.

Many topics were covered at this new event at the Intel campus at Folsom, CA, and over the coming weeks we will be filling you in on many of them as we take the necessary time to digest the fire hose of intel (pun intended) that we received. Today I'm going to lay out one of the more impressive things I saw at the briefings, and that is the process Intel goes through to ensure their products are among the most solid and reliable in the industry.

Read on for more on how Intel tests their products!

Mozilla Dumps "Metro" Version of Firefox

Subject: Editorial, General Tech | March 16, 2014 - 03:27 AM |
Tagged: windows, mozilla, microsoft, Metro

If you use the Firefox browser on a PC, you are probably using its "Desktop" application. Mozilla also had a version for "Modern" Windows 8.x that could be used from the Start Screen. You probably did not use it, because fewer than 1000 people per day did. That is more than four orders of magnitude smaller than the number of users for the Desktop version's pre-release builds.

Yup, less than one-thousandth.

22-mozilla-2.jpg

Jonathan Nightingale, VP of Firefox, stated that Mozilla would not be willing to release the product without committing to its future development and support. There was not enough interest to take on that burden and it was not forecast to have a big uptake in adoption, either.

From what we can see, it's pretty flat.

The code will continue to exist in the organization's Mercurial repository. If "Modern" Windows gets a massive influx of interest, they could return to what they had. It should also be noted that there never was a version of Firefox for Windows RT. Microsoft will not allow third-party rendering engines as part of their Windows Store certification requirements (everything must be based on Trident, the core of Internet Explorer). That said, this is also true of iOS, and Firefox Junior exists within those limitations. It is not truly Firefox, little more than a re-skinned Safari (as permitted by Apple), but it exists. I have heard talk of a Firefox Junior for Windows RT, Internet Explorer reskinned by Mozilla, but not in any detail. The organization is very attached to its own technology because, if whoever made the underlying engine does not support new features or lags in JavaScript performance, a re-skin has nothing of its own to fall back on.

Paul Thurrott of WinSupersite does not blame Mozilla for killing "Metro" Firefox. He acknowledges that they gave it a shot and did not see enough pre-release interest to warrant a product. He places some of the blame on Microsoft for the limitations it places on browsers (especially on Windows RT). In my opinion, this is just a symptom of the larger problem of Windows post-7. Hopefully, Microsoft can correct these problems and do so in a way that benefits their users (and society as a whole).

Source: Mozilla

Valve's Direct3D to OpenGL Translator (Or Part of It)

Subject: Editorial, General Tech | March 11, 2014 - 10:15 PM |
Tagged: valve, opengl, DirectX

Late last night, Valve released source code from their "ToGL" translation layer. This bundle of code sits between "[a] limited subset of Direct3D 9.0c" and OpenGL, translating engines designed for the former into the latter. It was pulled out of the DOTA 2 source tree and published standalone... mostly. Basically, it is completely unsupported and probably will not even build without some other chunks of the Source engine.
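The general shape of a layer like this is easy to picture, even though Valve's real code is C++ woven into the Source engine. The toy sketch below is not based on ToGL at all (every name in it is invented); it only shows the pattern of accepting calls shaped like one API and re-expressing them in another.

```python
# Toy translation-layer pattern (all names invented, not from ToGL): expose a
# Direct3D 9-ish call on one side and emit OpenGL-ish calls on the other.
class FakeGLBackend:
    def clear_color(self, r, g, b, a):
        print(f"glClearColor({r:.2f}, {g:.2f}, {b:.2f}, {a:.2f})")
    def clear(self):
        print("glClear(GL_COLOR_BUFFER_BIT)")

class D3D9StyleDevice:
    """Accepts a D3D9-style Clear(colorARGB) and forwards it to the GL backend."""
    def __init__(self, backend):
        self.backend = backend
    def Clear(self, color_argb):
        a = (color_argb >> 24 & 0xFF) / 255
        r = (color_argb >> 16 & 0xFF) / 255
        g = (color_argb >> 8  & 0xFF) / 255
        b = (color_argb       & 0xFF) / 255
        self.backend.clear_color(r, g, b, a)
        self.backend.clear()

D3D9StyleDevice(FakeGLBackend()).Clear(0xFF336699)
```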

valve-dx-opengl.jpg

Still, Valve did not need to release this code, but they did. A lot of open-source projects work this way: someone dumps a starting blob of code and, if it is sufficient, the community pokes and prods it into a self-sustaining project. The real question is whether the code Valve provided is sufficient. As is often the case, time will tell. Either way, this is a good thing that other companies really should embrace: giving out your old code to further the collective. We are just not sure how good.

ToGL is available now at Valve's GitHub page under the permissive, non-copyleft MIT license.

Source: Valve GitHub

The Bigger They Are: The Titan They Fall? 48GB Install

Subject: Editorial, General Tech | February 25, 2014 - 03:43 PM |
Tagged: titanfall, ssd

UPDATE (Feb 26th): Our readers pointed out in the comments, although I have yet to test it, that you can change Origin's install-to directory before installing a game in order to put it on a separate hard drive from the rest. Not as easy as Steam's method, but it apparently works for games like this that you want somewhere else. I figured it would forget games left in the old directory, but apparently not.

Well, okay. Titanfall will require a significant amount of hard drive space when it is released in two weeks. Receiving the game digitally will push 21GB of content through your modem and unpack to 48GB. Apparently, the next generation has arrived.

titanfall.jpg

Honestly, I am not upset about this. Yes, it basically ignores customers who install their games to SSDs. Origin, at the moment, forces all games to be installed in a single directory (albeit one that can be located anywhere), unlike Steam, which allows games to be individually sent to multiple folders. It would be a good idea to keep those customers in mind... but not at the expense of the game itself. As always, both "high-end" and "unoptimized" titles have high minimum specifications; we decide which label applies by considering how effectively that performance is used.

That is something that we will need to find out when it launches on March 11th.

Source: PC Gamer

Kaveri loves fast memory

Subject: Editorial, General Tech | February 25, 2014 - 03:34 PM |
Tagged: ddr3, Kaveri, A10 7850K, amd, linux

You don't often see performance scaling as clean as what Phoronix saw when testing the effect of memory speed on AMD's A10-7850K.  Pick any result and you can clearly see a smooth increase in performance from DDR3-800 to DDR3-2400.  The only place the scaling tapers off is between DDR3-2133 and DDR3-2400, with some tests showing little to no gain between those two speeds.  Other tests do still show an improvement; for certain workloads on Linux the extra money is worth it, but in other cases you can save a few dollars and settle for the slightly cheaper DDR3-2133.  Check out the full review here.
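The scaling makes sense when you remember that Kaveri's integrated GPU shares system memory, so its peak theoretical bandwidth rises in lockstep with the memory clock. A quick back-of-the-envelope calculation (assuming the usual dual-channel, 64-bit-per-channel DDR3 configuration):

```python
# Peak theoretical DDR3 bandwidth: transfers per second x 8 bytes per 64-bit
# channel x number of channels. Assumes a standard dual-channel setup.
def ddr3_bandwidth_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

for speed in (800, 1600, 2133, 2400):
    print(f"DDR3-{speed}: {ddr3_bandwidth_gbs(speed):.1f} GB/s peak")
```

Going from DDR3-800 to DDR3-2400 triples the theoretical ceiling (12.8 GB/s to 38.4 GB/s), which is why the GPU-heavy results keep climbing right up to the fastest kit tested.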

image.php_.jpg

"Earlier in the week I published benchmarks showing AMD Kaveri's DDR3-800MHz through DDR3-2133MHz system memory performance. Those results showed this latest-generation AMD APU craving -- and being able to take advantage of -- high memory frequencies. Many were curious how DDR3-2400MHz would fair with Kaveri so here's some benchmarks as we test out Kingston's HyperX Beast 8GB DDR3-2400MHz memory kit."

Here are some more Memory articles from around the web:

Memory

Source: Phoronix