PC Perspective Hardware Workshop 2014 @ Quakecon 2014 in Dallas, TX

Subject: Editorial, General Tech, Shows and Expos | July 10, 2014 - 08:55 PM |
Tagged: workshop, video, streaming, quakecon, prizes, live, giveaways

It is that time of year again: another installment of the PC Perspective Hardware Workshop!  Once again we will be presenting on the main stage at Quakecon 2014 being held in Dallas, TX July 17-20th.

logo-1500px.jpg
 

Main Stage - Quakecon 2014

Saturday, July 19th, 12:00pm CT

Our thanks go out to the organizers of Quakecon for allowing us and our partners to put together a show that we are proud of every year.  We love giving back to the community of enthusiasts and gamers that drive us to do what we do!  Get ready for two hours of prizes, games and raffles - the chances that you'll walk out with something are pretty good. Really, they are pretty good!

Our primary partners at the event are the companies that threw in for our ability to host the workshop at Quakecon and for the hundreds of shirts we have ready to toss out!  Our thanks to NVIDIA, Seasonic and Logitech!!

nvidia_logo_small.png

seasonic-transparent.png

logitech-transparent.png

Live Streaming

If you can't make it to the workshop - don't worry!  You can still watch the workshop live on our live page as we stream it over one of several online services.  Just remember this URL: http://pcper.com/live and you will find your way!

 

PC Perspective LIVE Podcast and Meetup

We are planning on hosting any fans that want to watch us record our weekly PC Perspective Podcast (http://pcper.com/podcast) on Wednesday or Thursday evening in our meeting room at the Hilton Anatole.  I don't yet know exactly WHEN or WHERE it will be held, but I will update this page accordingly on Wednesday, July 16th once we have the details.  You might also consider following me on Twitter for updates on that status.

After the recording, we'll hop over to the hotel bar for a couple of drinks and hang out.  We have room for at least 50-60 people to join us in the room, but we'll still be recording if just ONE of you shows up.  :)

Prize List (will continue to grow!)

Continue reading to see the list of prizes for the workshop!!!

Battlefield Will Not Be Annualized Says Patrick Söderlund

Subject: Editorial, General Tech | June 17, 2014 - 07:54 PM |
Tagged: battlefield, medal of honor, ea

Last year, we got Battlefield 4. The year before? Medal of Honor: Warfighter. The year before? Battlefield 3. The year before? Medal of Honor (Reboot). We will not be getting a new Medal of Honor this year, because Danger Close was shut down in June 2013. Danger Close developed the two recent Medal of Honor titles and, as EA Los Angeles, many of the previous Medal of Honor titles and many RTS games (Command and Conquer, Red Alert, Lord of the Rings: The Battle for Middle-Earth).

battlefield-hardline.jpg

Many of their employees are now working at DICE LA.

So, in the year when a new Medal of Honor title would normally be released, we get Battlefield: Hardline instead. A person with decent pattern recognition might believe that Battlefield, or its spin-offs, will fill the gap left by Medal of Honor. Not so, according to Patrick Söderlund, Executive VP of EA Studios. This echoes E3, where both studios (DICE and Visceral) repeatedly claimed that Battlefield: Hardline was literally the product of a fluke encounter and pent-up excitement for cops and robbers.

Of course, they do not close the door on annualized Battlefield releases, either. They just say that it is not their plan to have that be "the way it's going to be forever and ever". Honestly, for all the hatred that annualized releases get, the problem is not the frequency. If EA can bring out a Battlefield title every year, and one that is consistently a good game, then more power to them. The problem is that, with an annual release cycle, it is hard to deliver success after success, especially when franchise fatigue is an opposing and, more importantly, ever-increasing force.

It is the hard, but lucrative road.

Source: PC Gamer

Why would SanDisk buy Fusion-io for $1.1 Billion?

Subject: Editorial, Storage | June 17, 2014 - 09:56 AM |
Tagged: sandisk, fusion-io, buyout

Fusion-io was once a behemoth of flash memory storage. Back when SSDs were having a hard time saturating SATA 3Gb/sec, Fusion-io was making fire-breathing PCIe SSDs full of SLC flash and pushing relatively insane IOPS and throughput figures. Their formula was innovative at the time: make the controller a very simple device, basically just a bridge from the PCIe bus to the flash memory, and do most of the actual work in the driver. This let Fusion-io SSDs leverage the CPU and memory of the host system to achieve very high performance.

iops (2010).jpg

Fusion-io ioDrive 160 creams the competition back in 2010.

Being the king of IOPS back in the early days of flash memory storage, Fusion-io was able to charge a premium for their products. In a 2010 review, I priced their 160GB SSD at about $40/GB. In the years since, flash memory (and therefore SSD) prices have steadily dropped while performance has climbed higher and higher, yet Fusion-io products have mostly remained static in price. All of this time, the various iterations of the ioDrive continued to bank on the original model of a simple controller with the bulk of the work taking place in the driver. This carries a few distinct disadvantages, in that the host system has to spend a relatively large amount of CPU and memory resources on handling the Fusion-io devices. While this enables higher performance, it leaves fewer resources available to actually do anything with the data. It also adds to the build cost of a system, as more CPU cores and memory must be thrown at the chassis handling the storage. In more demanding cases, extra systems would need to be added to the rack just to absorb the storage overhead on top of the other required workloads. Lastly, the hefty driver means Fusion-io devices are not bootable, despite early promises to the contrary. This isn't necessarily a deal breaker for enterprise use, but it does require system builders to add an additional storage device (from a different vendor) to handle OS duties.

iops (2014).png

In 2014, the other guys are making faster stuff. Note this chart is 4x the scale of the 2010 chart.

Let's fast forward to the present. Just over a week ago, Fusion-io announced their new 'Atomic' line of SSDs. The announcement seemed to fall flat, and did little to arrest the continuous decline of their stock price. I suspect this was because, despite new leadership, these new products are just another iteration of the same resource-consuming formula. Another reason for the lukewarm reception might have been the fact that Intel launched their P3700 series a few days prior. The P3700 is a native PCIe SSD that employs the new NVM Express communication standard. This open standard was developed specifically for flash memory communication, and it allows more direct access to flash in a manner that significantly reduces the overhead required to sustain high data throughput and very high IOs per second. NVMe is a very small driver stack with native support built into modern operating systems, and is basically the polar opposite of the model Fusion-io has relied on for years now.

NVMe.png

Intel's use of NVMe enables very efficient access to flash memory with minimal CPU overhead.

Fusion-io's announcement claimed that "The Atomic Series of ioMemory delivers the highest transaction rate per gigabyte for everything from read intensive workflows to mixed workloads." Let's see how that stacks up against the Intel P3700 - an SSD that launched the same week:



Model                   | Fusion-io PX600                        | Intel P3700
Capacity (TB)           | 1.0 / 1.3 / 2.6 / 5.2                  | 0.4 / 0.8 / 1.6 / 2.0
Interface / Flash type  | PCIe 2.0 x8 / 20nm MLC                 | PCIe 3.0 x4 / 20nm MLC
Read BW (GB/sec)        | 2.7 / 2.7 / 2.7 / 2.7                  | 2.7 / 2.8 / 2.8 / 2.8
Write BW (GB/sec)       | 1.5 / 1.7 / 2.2 / 2.1                  | 1.2 / 1.9 / 1.9 / 1.9
4K random read IOPS     | 196,000 / 235,000 / 330,000 / 276,000  | 450,000 / 460,000 / 450,000 / 450,000
Read transactions/GB    | 196 / 181 / 127 / 53                   | 1,125 / 575 / 281 / 225
4K random write IOPS    | 320,000 / 370,000 / 375,000 / 375,000  | 75,000 / 90,000 / 150,000 / 175,000
Write transactions/GB   | 320 / 285 / 144 / 72                   | 188 / 113 / 94 / 88
4K 70/30 R/W IOPS       | Unlisted                               | 150,000 / 200,000 / 240,000 / 250,000
Read latency            | 92 µs                                  | 20/115 µs
Write latency           | 15 µs                                  | 20/25 µs
Endurance (PBW)         | 12 / 16 / 32 / 64                      | 7.3 / 14.6 / 29.2 / 36.5
Endurance (PBW per TB)  | 12.0 / 12.3 / 12.3 / 12.3              | 18.3 / 18.3 / 18.3 / 18.3
Cost                    | Unlisted                               | $1,207 / $2,414 / $4,828 / $6,035
Cost/GB                 | Unlisted                               | $3.02 (all capacities)
Warranty                | 5 years                                | 5 years

(Values separated by slashes are listed per capacity, smallest to largest.)

Source: Fusion-io / Intel

We are comparing flagship to flagship (in a given form factor) here. Starting from the top, the Intel P3700 is available in generally smaller capacities than the Fusion-io PX600. Both use 20nm flash, but the P3700 uses half the data lanes at twice the throughput. Regarding Fusion-io's 'transaction rate per GB' point, well, it's mostly debunked by the Intel P3700, which has excellent random read performance all the way down to its smallest 400GB capacity point. The seemingly unreal write specs seen from the PX600 are, well, actually unreal. Flash memory writes take longer than reads, so the only logical explanation for the inversion we see here is that Fusion-io's driver is passing those random writes through RAM first. Writing to RAM might be quicker, but you can't sustain it indefinitely, and it consumes more host system resources in the process. Moving further down the chart, we see Intel coming in with a ~50% higher endurance rating when compared to the Fusion-io. The warranties may be of equal duration, but the Intel drive is (on paper / stated warranty) guaranteed to outlast the Fusion-io part when used in a heavy write environment.
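Those "transactions/GB" and "endurance per TB" rows are simply the headline specs divided by capacity, so they are easy to sanity check. Here is a quick Python sketch of my own (decimal units assumed, 1 TB = 1000 GB) that reproduces the figures for the smallest capacity of each drive:

```python
# Back-of-the-envelope check of the table's derived rows, using the vendors'
# headline specs and decimal units (1 TB = 1000 GB).
drives = {
    "Fusion-io PX600 1.0TB": {"cap_tb": 1.0, "read_iops": 196_000,
                              "write_iops": 320_000, "endurance_pbw": 12.0},
    "Intel P3700 400GB":     {"cap_tb": 0.4, "read_iops": 450_000,
                              "write_iops": 75_000, "endurance_pbw": 7.3},
}

for name, d in drives.items():
    cap_gb = d["cap_tb"] * 1000
    read_per_gb = d["read_iops"] / cap_gb                # "Read transactions/GB" row
    write_per_gb = d["write_iops"] / cap_gb              # "Write transactions/GB" row
    endurance_per_tb = d["endurance_pbw"] / d["cap_tb"]  # PBW of endurance per TB
    print(f"{name}: {read_per_gb:.0f} reads/GB, {write_per_gb:.0f} writes/GB, "
          f"{endurance_per_tb:.2f} PBW/TB")
# Fusion-io PX600 1.0TB: 196 reads/GB, 320 writes/GB, 12.00 PBW/TB
# Intel P3700 400GB: 1125 reads/GB, 188 writes/GB, 18.25 PBW/TB
```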

For pricing, Intel launched the P3700 at a competitive $3/GB. Pricing data for Fusion-io is not available, as they are behind a bit of a 'quote wall', and no pricing at all was included with the Atomic product launch press materials. Let's take a conservative guess and assume the new line is half the cost/GB of their previous long-standing flagship, the Octal. One vendor lists pricing directly at $124,995 for 10.24TB ($12.21/GB) and $99,995 for 5.12TB ($19.53/GB), both of which require minimum support contracts as an additional cost. Half of $12/GB is still more than twice the $3/GB figure from Intel.
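For the curious, the arithmetic behind that estimate works out as follows. This is a small Python sketch using only the vendor listings quoted above; the "half price" figure is just the conservative guess from this paragraph, not anything Fusion-io has published:

```python
# Napkin math behind the cost/GB comparison (decimal GB, prices as quoted).
octal_listings = {10.24: 124_995, 5.12: 99_995}   # capacity in TB -> listed price (USD)

for cap_tb, price in octal_listings.items():
    print(f"Octal {cap_tb}TB: ${price / (cap_tb * 1000):.2f}/GB")
# Octal 10.24TB: $12.21/GB
# Octal 5.12TB: $19.53/GB

# Conservative guess from above: the Atomic line lands at half the Octal's best cost/GB.
atomic_guess = (124_995 / 10_240) / 2    # about $6.10/GB
intel_p3700 = 3.02                       # Intel's launch cost per GB
print(f"Guessed Atomic cost/GB: ${atomic_guess:.2f} "
      f"({atomic_guess / intel_p3700:.2f}x Intel's ${intel_p3700}/GB)")
# Guessed Atomic cost/GB: $6.10 (2.02x Intel's $3.02/GB)
```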

My theory as to why SanDisk is going for Fusion-io?

  • A poor track record since the Fusion-io IPO has driven the stock price way down, making it prime for a buyout.
  • SanDisk is one of the few remaining flash memory companies that does not own its own high-end controller tech.
  • Fusion-io's recent product launch was overshadowed by a much larger company (Intel) launching a superior competing product at a lower cost/GB.

So yeah, the buyout seemed inevitable. The question that remains is what SanDisk will do with them once they've bought them. Merging the two will mean that Fusion-io can include 'in-house' flash and (hopefully) offer their products at a lower cost/GB, but that can only succeed if the SanDisk flash performs adequately. Assuming it does, there's still the issue of relatively high costs when compared to freshly competing products from Intel and others. Last but not least is the ioDrive driver model, which grows increasingly dated while the rest of the industry adopts NVMe.

AMD Restructures. Lisa Su Is Now COO.

Subject: Editorial, General Tech, Graphics Cards, Processors, Chipsets | June 13, 2014 - 06:45 PM |
Tagged: x86, restructure, gpu, arm, APU, amd

According to VR-Zone, AMD reworked their business last Thursday, sorting each of their projects into two divisions and moving some executives around. The company is now segmented into the "Enterprise, Embedded, and Semi-Custom Business Group" and the "Computing and Graphics Business Group". The company was previously divided into "Computing Solutions", which handled CPUs, APUs, chipsets, and so forth; "Graphics and Visual Solutions", which is best known for GPUs but also contains console royalties; and "All Other", which was... everything else.

amd-new2.png

Lisa Su, former general manager of global business, has moved up to Chief Operating Officer (COO), along with other changes.

This restructure makes sense for a couple of reasons. First, it pairs some unprofitable ventures with other, highly profitable ones. AMD's graphics division has been steadily adding profitability to the company while its CPU division has been mostly losing money. Secondly, "All Other" is about as nebulous as a name can get. Instead of having three unbalanced divisions, one of which makes no sense to someone glancing at AMD's quarterly earnings reports, they should now have two roughly equal segments.

At the very least, it should look better to an uninformed investor. Someone who does not know the company might look at the sheet and assume that, if AMD divested from everything except graphics, the company would be profitable. That is, if they did not know that console contracts came into the graphics division because the compute division had x86 APUs, and so forth. The new structure is aligned to customers, not products.

Source: VR-Zone

Podcast #304 - GeForce GTX TITAN Z, Core i7-4790K, Gigabyte Z97X-SOC Force and more!

Subject: Editorial | June 12, 2014 - 02:28 PM |
Tagged: Z97X-SOC Force, video, titan z, radeon, project tango, podcast, plextor, nvidia, Lightning, gtx titan z, gigabyte, geforce, E3 14, amd, 4790k, 290x

PC Perspective Podcast #304 - 06/12/2014

We have lots of reviews to talk about this week including the GeForce GTX TITAN Z, Core i7-4790K, Gigabyte Z97X-SOC Force, E3 News and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom and Allyn Malventano

Program length: 1:11:36

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Join the PC Perspective Team and Austin Evans LIVE on Tonight's Podcast!

Subject: Editorial | June 4, 2014 - 07:42 PM |
Tagged: video, pcper, live, austin evans

Tonight's live edition of the PC Perspective Podcast is going to have a special guest, the Internet's Austin Evans. You likely know of Austin through his wildly popular YouTube channel or maybe his dance moves.

But seriously, Austin Evans is a great guy with a lot of interesting input on technology. Stop by our live page at http://www.pcper.com/live at 10pm ET / 7pm PT for all the fun!

Make sure you don't miss it by signing up for our PC Perspective Live Mailing List!

pcperlive.png

Source: PCPer Live!

TrueCrypt Taken Offline Doesn't Pass My Smell Test

Subject: Editorial, General Tech | May 29, 2014 - 02:17 AM |
Tagged: TrueCrypt

It should not pass anyone's smell test, but it apparently does, according to tweets and other articles. Officially, the TrueCrypt website (which redirects to their SourceForge page) claims that, with the end of Windows XP support (??), the TrueCrypt development team wants users to stop using their software. Instead, they suggest a switch to BitLocker, Mac OS X's built-in encryption, or whatever random encryption suite comes up when you search your Linux distro's package manager (!?). Not only that, but several versions of Windows (such as 7 Home Premium) do not have access to BitLocker. Lastly, none of these is a good solution for users who want a single encrypted container across multiple OSes.

A new version (don't use it!!!) called TrueCrypt 7.2 was released and signed with their private encryption key.

TrueCrypt_Logo.png

The developers have not denied the end of support, or its full-of-crap reason. (Seriously, because Microsoft deprecated Windows XP almost two months ago, they pull support for a two-year-old version now?)

They have also not confirmed it. They have been missing since at least "the announcement" (or earlier if they were not the ones who made it). Going missing and unreachable, the day of your supposedly gigantic resignation announcement, does not support the validity of that announcement. 

To me, that is about as unconfirmed as you can get.

Still, people are believing the claims that TrueCrypt 7.1a is not secure. The version has been around since February 2012 and, beyond people looking at its source code, has passed a significant portion of a third-party audit. Even if you believe the website, it only says that TrueCrypt will not be updated for security. It does not say that TrueCrypt 7.1a is vulnerable to any known attack.

In other words, the version that has been good enough for over two years, and several known cases of government agencies being unable to penetrate it, is probably as secure today as it was last week.

"The final version", TrueCrypt 7.2, is a decrypt-only solution. It allows users to unencrypt existing vaults, although who knows what else it does, to move it to another solution. The source code changes have been published, and they do not seem shady so far, but since we cannot even verify that their private key has not leaked, I wouldn't trust it. A very deep compromise could make finding vulnerabilities very difficult.

So what is going on? Who knows. One possibility is that they were targeted for a very coordinated hack, one which completely owned them and their private key, performed by someone(s) who spent a significant amount of time modifying a fake 7.2 version. Another possibility is that they were legally gagged and forced to shut down operations, but they managed to negotiate a method for users to decrypt existing data with a neutered build.

One thing is for sure: if this is a GOG-style publicity stunt, I will flip a couple of tables.

We'll see. ┻━┻ ¯\_(ツ)_/¯ ┻━┻

Source: TrueCrypt

Mozilla Firefox to Implement Adobe DRM for Video

Subject: Editorial, General Tech | May 14, 2014 - 09:56 PM |
Tagged: ultraviolet, mozilla, DRM, Adobe Access, Adobe

Needless to say, DRM is a controversial topic and I am clearly against it. I do not blame Mozilla. The non-profit organization responsible for Firefox knew that they could not oppose Chrome, IE, and Safari while being a consumer software provider. I do not even blame Apple, Google, and Microsoft for their decisions, either. This problem is much bigger and it comes down to a total misunderstanding of basic mathematics (albeit at a ridiculously abstract and applied level).

22-mozilla-2.jpg

Simply put, piracy figures are meaningless. They are a measure of how many people use content without paying (assuming they are even accurate). You know what is more useful? Sales figures. Piracy figures are measurements, dependent variables, and so is revenue. Measurements cannot influence other measurements. Specifically, measurements cannot influence anything because they are, themselves, the result of influences. That is what "a measure" is.

Implementing DRM is not a measurement, however. It is a controllable action whose influence can be recorded. If you implement DRM and your sales go down, it hurt you. You may notice piracy figures decline. However, you should be too busy to care because you should be spending your time trying to undo the damage you did to your sales! Why are you looking at piracy figures when you're bleeding money?

I have yet to see a DRM implementation that correlated with an increase in sales. I have, however, seen some which correlate to a massive decrease.

The thing is, Netflix might know that, and I am pretty sure that some of the web browser companies know that. They do not necessarily want to implement DRM. What they want is content and, surprise, the people who are in charge of the content are definitely not enlightened to that logic. I am not even sure they realize that when content is pirated before its release date, the leak, by definition, did not come from end users.

But whatever. Technical companies, who want that content available on their products, are stuck finding a way to appease those content companies in a way that damages their users and shrinks their potential market the least. For Mozilla, this means keeping as much open as possible.

do-not-hurt-2.jpg

Since Mozilla does not have existing relationships with Hollywood, Adobe Access will be the component that actually handles the protected video. They are clear to note that this only applies to video. They believe their existing relationships in text, images, and games will prevent the disease from spreading. This is basically a plug-in architecture with a sandbox that is open source and as strict as possible.

This sandbox is intended to prevent a security vulnerability in the DRM module from granting access to the host system, to give Mozilla a way to rein in the DRM's performance if it hitches, and to stop the DRM from querying the machine for authentication. The last part is something they wanted to highlight, because it shows their effort to protect the privacy of their users. They also imply a method for users to opt out but did not go into specifics.

As an aside, Adobe will support their Access DRM software on Windows, Mac, and Linux. Mozilla is pushing hard for Android and Firefox OS, too. According to Adobe, Access DRM is certified for use with Ultraviolet content.

I accept Mozilla's decision to join everyone else but I am sad that it came to this. I can think of only two reasons for including DRM: for legal (felony) "protection" under the DMCA or to make content companies feel better while they slowly sink their own ships chasing after numbers which have nothing to do with profits or revenue.

Ultimately, though, they made a compromise. That is always how we stumble and fall down slippery slopes. I am disappointed but I cannot suggest a better option.

Source: Mozilla

Mozilla Makes Suggestions to the FCC about Net Neutrality

Subject: Editorial, General Tech | May 5, 2014 - 08:08 PM |
Tagged: mozilla, net neutrality

Recently, the FCC has been moving to give up on Net Neutrality. Mozilla, being dedicated to the free (as in speech) and open internet, has offered a simple compromise. Their proposal is that the FCC classify internet service providers (ISPs) as common carriers on the server side, forcing restrictions on them to prevent discrimination of traffic to customers, while allowing them to remain "information services" to consumers.

mozilla-fcc.png

In other words, force ISPs to allow services to have unrestricted access to consumers, without flipping unnecessary tables with content distribution (TV, etc.) services. Like all possibilities so far, it could have some consequences, however.

"Net Neutrality" is a hot issue lately. Simply put, the internet gives society an affordable method of sharing information. How much is "just information" is catching numerous industries off guard, including ones which Internet Service Providers (ISPs) participate in (such as TV and Movie distribution), and that leads to serious tensions.

On the one hand, these companies want to protect their existing business models. They want consumers to continue to select their cable and satellite TV packages, on-demand videos, and other services at controlled profit margins and without the stress and uncertainty of competing.

On the other hand, if the world changes, they want to be the winner in that new reality. Yikes.

mozilla-UP.jpg

A... bad... photograph of Mozilla's "UP" anti-datamining proposal.

Mozilla's proposal is very typical of them. They tend to propose compromises which divide an issue such that both sides get the majority of their needs. Another good example is "UP", or User Personalization, which tries to cut down on data mining by giving the browser a method to tell websites what they actually want to know (and letting the user tell the browser how much to share). The user would compromise, giving the amount of information they find acceptable, so the website would compromise and take only what it needs (rather than developing methods to grab anything and everything it can). It feels like a similar thing is happening here. This proposal gives users what they want, freedom to choose services without restriction, without tossing ISPs into "Title II" common carrier classification altogether.

Of course, this probably comes with a few caveats...

The first issue that pops into my mind is, "What is a service?". I see this causing problems for peer-to-peer applications (including BitTorrent Sync and Crashplan, excluding Crashplan Central). Neither endpoint would necessarily be classified as "a server", or at least might not convince a non-technical lawmaker that it is, and thus ISPs would not need to apply common carrier restrictions to them. This could be a serious issue for WebRTC. Even worse, companies like Google and Netflix would have no incentive to help fight those battles -- they're legally protected. It would have to be defined, very clearly, what makes "a server".

Every method will get messy for someone. Still, the discussion is being made.

Source: Mozilla

Post Tax Day Celebration! Win an EVGA Hadron Air and GeForce GTX 750!

Subject: Editorial, General Tech, Graphics Cards | April 30, 2014 - 10:05 AM |
Tagged: hadron air, hadron, gtx 750, giveaway, evga, contest

Congrats to our winner: Pierce H.! Check back soon for more contests and giveaways at PC Perspective!!

In these good old United States of America, April 15th is a trying day. Circled on most of our calendars is the final deadline for paying up your bounty to Uncle Sam so we can continue to have things like freeway systems and universal Internet access. 

But EVGA is here for us! Courtesy of our long time sponsor you can win a post-Tax Day prize pack that includes both an EVGA Hadron Air mini-ITX chassis (reviewed by us here) as well as an EVGA GeForce GTX 750 graphics card. 

evgacontestapril.jpg

Nothing makes paying taxes better than free stuff that falls under the gift limit...

With these components under your belt you are well down the road to PC gaming bliss, upgrading your existing PC or starting a new one in a form factor you might not have otherwise imagined. 

Competing for these prizes is simple and open to anyone in the world, even if you don't suffer the same April 15th fear that we do. (I'm sure you have your own worries...)

  1. Fill out the form at the bottom of this post to give us your name and email address, in addition to the reasons you love April 15th! (Seriously, we need some good ideas for next year to keep our heads up!) Note that a standard comment on this post does not count as an entry, though you are welcome to leave one too.
     
  2. Stop by our Facebook page and give us a LIKE (I hate saying that), head over to our Twitter page and follow @pcper, and heck, why not check out our many videos and subscribe to our YouTube channel?
     
  3. Why not do the same for EVGA's Facebook and Twitter accounts?
     
  4. Wait patiently for April 30th when we will draw and update this news post with the winner's name and tax documentation! (Okay, probably not that last part.)

A huge thanks goes out to our friends and supporters at EVGA for providing us with the hardware to hand out to you all. If it weren't for sponsors like this, PC Perspective just couldn't happen, so be sure to give them some thanks when you see them around the In-tar-webs!!

Good luck!

Source: EVGA

AMD AM1 Retested on 60 Watt Power Supply

Subject: Editorial | April 23, 2014 - 09:51 PM |
Tagged: TDP, Athlon 5350, Asus AM1I-A, amd, AM1

If I had one regret about my AM1 review that posted a few weeks ago, it was that I used a pretty hefty (relatively speaking) 500 watt power supply for a part that is listed at a 25 watt TDP.  Power supplies really do not hit their rated efficiency numbers until they are under a reasonable load, typically approaching 50%.  Even the most efficient 500 watt power supply is going to inflate the consumption numbers of the diminutive parts that we are currently testing.

am1p_01.jpg

Keep it simple... keep it efficient.

Ryan had sent along a 60 watt notebook power supply with an ATX cable adapter at around the same time as I started testing the AMD Athlon 5350 and Asus AM1I-A.  I was somewhat roped into running that previously mentioned 500 watt power supply for comparative reasons: I was also using a 100 watt TDP A10-6790 APU with a pretty loaded Gigabyte A88X based ITX motherboard.  That combination would have likely fried the 60 watt (12V x 5A) notebook power supply under load.

Now that I had a little extra time on my hands, I finally got around to seeing exactly how efficient this little number could be.  I swapped the old WD Green 1 TB drive for a new Samsung 840 EVO 500 GB SSD.  I removed the BD-ROM drive completely from the equation as well.  Neither of those parts uses a lot of wattage, but I am pushing this combination to go as low as I possibly can.

power-idle.png

power-load.png

The results are pretty interesting.  At idle we see the 60 watt supply (sans spinning drive and BD-ROM) hitting 12 watts as measured from the wall.  The 500 watt power supply and those extra pieces added another 11 watts of draw.  At load we see somewhat similar results, though not nearly as dramatic as at idle.  The 60 watt system is drawing 29 watts while the 500 watt system is at 37 watts.
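Put another way, here is the quick arithmetic on those wall readings (a small Python sketch of my own; it simply restates the numbers above):

```python
# Comparing the two configurations using the wall readings above.  Note the 500W
# setup also included the spinning drive and BD-ROM, so this is the whole delta.
idle_60w, load_60w = 12, 29            # watts: 60W adapter, SSD, no BD-ROM
idle_500w, load_500w = 12 + 11, 37     # watts: the 500W PSU setup drew 11W more at idle

for label, small, big in (("idle", idle_60w, idle_500w), ("load", load_60w, load_500w)):
    saved = big - small
    print(f"{label}: {small}W vs {big}W -> {saved}W saved ({saved / big:.0%} less)")
# idle: 12W vs 23W -> 11W saved (48% less)
# load: 29W vs 37W -> 8W saved (22% less)
```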

am1p_02.jpg

So how do you get from a 60 watt notebook power adapter to ATX standard? This is the brains behind the operation.

The numbers for both power supplies are good, but we do see a nice jump in efficiency from using the smaller unit and an SSD instead of a spinning drive.  Either way, the Athlon 5350 and AMD AM1 infrastructure sip power compared to most desktop processors.

Source: AMD

Ars Technica Estimates Steam Sales and Hours Played

Subject: Editorial, General Tech | April 16, 2014 - 01:56 AM |
Tagged: valve, steam

Valve does not release sales or hours-played figures for any game on Steam, and it is rare to find a publisher who will volunteer that information. That said, Steam user profiles list that information on a per-account basis. If someone, say Ars Technica, had access to sufficient server capacity, say an Amazon Web Services instance, and a reasonable understanding of statistics, then they could estimate those figures themselves.

Oh look, Ars Technica did just that, estimating by extrapolating from over 250,000 random accounts.

SteamHW.png

If you are interested, I would definitely look through the original editorial for all of its many findings. Here, if you let me (and you can't stop me even if you don't), I would like to add my own analysis on a specific topic. The Elder Scrolls V: Skyrim on the PC, according to VGChartz, sold 3.42 million copies at retail, worldwide. The thing is, Steamworks was required for every copy sold at retail or online. According to Ars Technica's estimates, 5.94 million copies were registered with Steam.

5.94 minus 3.42 is 2.52 million copies sold digitally, meaning more than 40% of PC sales were made through Steam and other digital distribution platforms. Also, this means that the PC was the game's second-best selling platform, ahead of the PS3 (5.43m) and behind the Xbox 360 (7.92m), minus any digital sales on those platforms if they exist, of course. Despite its engine being built on DirectX 9, it is still a fairly high-end game. That is a fairly healthy install base for decent gaming PCs.
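For anyone who wants to follow the arithmetic, here it is spelled out (a quick Python sketch based only on the VGChartz and Ars Technica figures quoted above):

```python
# Working through the Skyrim figures quoted above (all numbers in millions of copies).
steam_registered = 5.94                # Ars Technica's estimate of copies tied to Steam
pc_retail = 3.42                       # VGChartz worldwide PC retail sales
ps3_retail, x360_retail = 5.43, 7.92   # console retail sales; digital unknown

pc_digital = steam_registered - pc_retail
print(f"Digital PC copies: {pc_digital:.2f}M "
      f"({pc_digital / steam_registered:.0%} of all PC copies)")
# Digital PC copies: 2.52M (42% of all PC copies)

platforms = {"Xbox 360": x360_retail, "PC": steam_registered, "PS3": ps3_retail}
print(sorted(platforms.items(), key=lambda kv: kv[1], reverse=True))
# [('Xbox 360', 7.92), ('PC', 5.94), ('PS3', 5.43)]
```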

Did you discover anything else on your own? Be sure to discuss it in our comments!

Source: Ars Technica

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by sometimes thousands of compute cores. Video drivers are complex packages of software, and one of their many tasks is converting your scripts, known as shaders, into machine code for the hardware. If this machine code is efficient, it could mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.

amd-gcn-unit.jpg

Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its presence in both consoles and PCs, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like, with resources mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and with vectors mostly treated as collections of scalars, and so forth. Tricks which attempt to combine instructions together into vectors, such as using dot products, can just put irrelevant restrictions on the compiler and optimizer... because it breaks down those vector operations into the very same component-by-component ops that you thought you were avoiding.

Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over the execution speed of individual ops compared to one another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
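To illustrate the scalarization point, here is a rough conceptual model of my own (plain Python, not real GCN ISA): a float4 dot product does not map to one special vector instruction on GCN; the compiler ends up emitting a per-component multiply followed by per-component multiply-adds, which is why packing your math into dot products buys you nothing.

```python
# Conceptual model only: a "vector" dot product gets scalarized on GCN-style
# hardware into one multiply plus three multiply-adds (FMA-like ops) per lane.
def dot4_scalarized(a, b):
    acc = a[0] * b[0]              # roughly a v_mul_f32
    for i in (1, 2, 3):
        acc = a[i] * b[i] + acc    # roughly a v_mac_f32 (multiply-accumulate / FMA)
    return acc                     # four scalar ops; no special "vector" instruction

print(dot4_scalarized((1.0, 2.0, 3.0, 4.0), (4.0, 3.0, 2.0, 1.0)))   # 20.0
```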

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next generation consoles. Developers are learning how to make their shader code much more efficient on GCN and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

Mozilla Dumps "Metro" Version of Firefox

Subject: Editorial, General Tech | March 16, 2014 - 03:27 AM |
Tagged: windows, mozilla, microsoft, Metro

If you use the Firefox browser on a PC, you are probably using its "Desktop" application. They also had a version for "Modern" Windows 8.x that could be used from the Start Screen. You probably did not use it because fewer than 1000 people per day did. This is more than four orders of magnitude smaller than the number of users for Desktop's pre-release builds.

Yup, less than one-thousandth.

22-mozilla-2.jpg

Jonathan Nightingale, VP of Firefox, stated that Mozilla would not be willing to release the product without committing to its future development and support. There was not enough interest to take on that burden and it was not forecast to have a big uptake in adoption, either.

From what we can see, it's pretty flat.

The code will continue to exist in the organization's Mercurial repository. If "Modern" Windows gets a massive influx of interest, they could return to what they had. It should also be noted that there never was a version of Firefox for Windows RT. Microsoft will not allow third-party rendering engines as part of their Windows Store certification requirements (everything must be based on Trident, the core of Internet Explorer). That said, this is also true of iOS, and Firefox Junior exists within those limitations. It's not truly Firefox, little more than a re-skinned Safari (as permitted by Apple), but it exists. I have heard talk of Firefox Junior for Windows RT, Internet Explorer reskinned by Mozilla, but not in any detail. The organization is very attached to its own technology because, if whoever made the underlying engine does not support new features or lags in JavaScript performance, the re-skins have no way to work around it.

Paul Thurrott of WinSupersite does not blame Mozilla for killing "Metro" Firefox. He acknowledges that they gave it a shot and did not see enough pre-release interest to warrant a product. He places some of the blame on Microsoft for the limitations it places on browsers (especially on Windows RT). In my opinion, this is just a symptom of the larger problem of Windows post-7. Hopefully, Microsoft can correct these problems and do so in a way that benefits their users (and society as a whole).

Source: Mozilla

Valve's Direct3D to OpenGL Translator (Or Part of It)

Subject: Editorial, General Tech | March 11, 2014 - 10:15 PM |
Tagged: valve, opengl, DirectX

Late last night, Valve released source code from their "ToGL" transition layer. This bundle of code sits between "[a] limited subset of Direct3D 9.0c" and OpenGL, translating engines designed for the former into the latter. It was pulled out of the DOTA 2 source tree and published standalone... mostly. Basically, it is completely unsupported and probably will not even build without some other chunks of the Source engine.

valve-dx-opengl.jpg

Still, Valve did not need to release this code, but they did. The way a lot of open-source projects work is that someone dumps a starting blob and, if it is sufficient, the community pokes and prods it to mold it into a self-sustaining entity. The real question is whether the code that Valve provided is sufficient. As is often the case, time will tell. Either way, this is a good thing that other companies really should embrace: giving out your old code to further the collective. We are just not sure how good.

ToGL is available now at Valve's GitHub page under the permissive, non-copyleft MIT license.

Source: Valve GitHub

The Bigger They Are: The Titan They Fall? 48GB Install

Subject: Editorial, General Tech | February 25, 2014 - 03:43 PM |
Tagged: titanfall, ssd

UPDATE (Feb 26th): Our readers pointed out in the comments, although I have yet to test it, that you can change Origin's install-to directory before installing a game to keep it on a separate hard drive from the rest. Not as easy as Steam's method, but it apparently works for games like this that you want somewhere else. I figured it would forget games in the old directory, but apparently not.

Well, okay. Titanfall will require a significant amount of hard drive space when it is released in two weeks. Receiving the game digitally will push 21GB of content through your modem and unpack to 48GB. Apparently, the next generation has arrived.

titanfall.jpg

Honestly, I am not upset over this. Yes, this basically ignores customers who install their games to their SSDs. Origin, at the moment, forces all games to be installed in a single directory (albeit one that can be anywhere), unlike Steam, which allows games to be individually sent to multiple folders. It would be a good idea to keep those customers in mind... but not at the expense of the game itself. Like always, both "high-end" and "unoptimized" titles have high minimum specifications; we decide which one applies by considering how effectively the performance is used.

That is something that we will need to find out when it launches on March 11th.

Source: PC Gamer

Kaveri loves fast memory

Subject: Editorial, General Tech | February 25, 2014 - 03:34 PM |
Tagged: ddr3, Kaveri, A10 7850K, amd, linux

You don't often see performance scaling as clean as what Phoronix saw when testing the effect of memory speed on AMD's A10-7850K.  Pick any result and you can clearly see a smooth increase in performance from DDR3-800 to DDR3-2400.  The only place that increase tapers off slightly is between DDR3-2133 and 2400MHz, with some tests showing little to no gain between those two speeds.  Some tests do still show an improvement; for certain workloads on Linux the extra money is worth it, but in other cases you can save a few dollars and limit yourself to the slightly cheaper DDR3-2133.  Check out the full review here.

image.php_.jpg

"Earlier in the week I published benchmarks showing AMD Kaveri's DDR3-800MHz through DDR3-2133MHz system memory performance. Those results showed this latest-generation AMD APU craving -- and being able to take advantage of -- high memory frequencies. Many were curious how DDR3-2400MHz would fair with Kaveri so here's some benchmarks as we test out Kingston's HyperX Beast 8GB DDR3-2400MHz memory kit."


Source: Phoronix

Irrational Games Implodes with Controlled Demolition

Subject: Editorial, General Tech | February 19, 2014 - 06:15 PM |
Tagged: bioshock infinite

The team behind the original BioShock and BioShock: Infinite decided to call it quits. After seventeen years, depending on where you start counting, the company dissolved to form another, much smaller studio. Only about fifteen employees will transition to the new team. The rest are being provided financial support, given a bit of time to develop their portfolios, and can attend a recruitment day to be interviewed by other studios and publishers. They may also be offered employment elsewhere in Take-Two Interactive.

bioshock_infinite_sp.jpg

The studio formed by the handful of remaining employees will look to develop games based on narrative, which is definitely their strength. Each game will be distributed digitally and Take-Two will continue to be their parent company.

While any job loss is terrible, I am interested in the future project. BioShock: Infinite sold millions of copies, but I wonder if its size ultimately caused it harm. It was pretty and full of detail, at the expense of requiring a large team. The game had a story which respected your intelligence (you might not understand it, and that was okay), but I have little confidence that it was anywhere close to the team's original vision. From budget constraints to the religious beliefs of development staff, we already know about several aspects of the game that changed significantly. Even Elizabeth, according to earlier statements from Ken Levine, was on the bubble because of her AI's complexity. I can imagine how difficult it is to resist those changes when staring at man-hour budgets. I cannot, however, imagine BioShock: Infinite without Elizabeth. A smaller team might help them concentrate their effort where it matters and keep the artistic vision from becoming too diluted.

As for BioShock? The second part of the Burial at Sea DLC is said to wrap up the entire franchise. 2K will retain the license if they want to release sequels or spin-offs. I doubt Ken Levine will have anything more to do with it, however.

Oh PCMag, Console vs PC

Subject: Editorial, General Tech, Systems | February 12, 2014 - 10:45 PM |
Tagged: xbox, xbone, ps4, Playstation, pc gaming

PCMag, your source for Apple and gaming console coverage (I joke), wrote up an editorial about purchasing a gaming console. Honestly, they should have titled it, "How to Buy a Game Device" since they also cover the NVIDIA SHIELD and other options.

The entire Console vs PC debate bothers me, though. Neither side handles it well.

PS4-01.png

I will start by highlighting problems with the PC side, before you stop reading. Everyone says you can assemble your own gaming PC to save a little money. Yes, that is true, and it is unique to the platform. The problem is that the public perception then becomes, "You must assemble and maintain your own gaming PC".

No.

No. No. No.

Some people prefer the support system provided by the gaming consoles. If it bricks, which some of them do a lot, you can call up the manufacturer for a replacement in a few weeks. The same can absolutely be true for a gaming PC. There is nothing wrong with purchasing a computer from a system builder, ranging from Dell to Puget Systems.

The point of the gaming PC is that you do not need to. You can also deal with a small business. For Canadians, if you purchase all of your hardware through NCIX, you can add $50 to your order for them to ship your parts as a fully assembled PC, with Windows installed (if purchased). You also get a one-year warranty. The downside is that you lose the ability to pick and choose components from other retailers, and you cannot reuse your old parts. Unfortunately, I do not believe NCIX USA offers this. Some local stores may offer similar benefits, though; one around my area assembles systems for free.

The benefit of the PC is always choice. You can assemble it yourself (or with a friend). You can have a console-like experience with a system builder. You can also have something in between with a small business. It is your choice.

Most importantly, your choice of manufacturer does not restrict your choice in content.

5-depressing.png

As for the consoles, I cannot find a rock-solid argument that always falls in their favor. If you are thinking about purchasing one, the available content should sway your decision. Microsoft will be the place to get "Halo". Sony will be the place to get "The Last of Us". Nintendo will be the place to get "Mario". Your money should go where the content you want is. That, and wherever your friends play.

But, of course, then you are what made the content exclusive.

Note: Obviously the PC has issues with proprietary platforms, too. Unlike the consoles, it could also be a temporary issue. The PC business model does not depend upon Windows. If it remains a sufficient platform? Great. If not, we have multiple options which range from Linux/SteamOS to Web Standards for someone to develop a timeless classic on.

Source: PCMag

(Phoronix) Intel Haswell iGPU Linux Performance in a Slump?

Subject: Editorial, General Tech, Graphics Cards | January 22, 2014 - 02:12 AM |
Tagged: linux, intel hd graphics, haswell

Looking through this post by Phoronix, it would seem that Intel had a significant regression in performance on Ubuntu 14.04 with the Linux 3.13 kernel. In some tests, HD 4600 only achieves about half of the performance recorded on the HD 4000. I have not been following Linux iGPU drivers and it is probably a bit late to do any form of in-depth analysis... but yolo. I think the article actually made a pretty big mistake and came to the exact wrong conclusion.

Let's do this!

7-TuxGpu.png

According to the article, in Xonotic v0.7, Ivy Bridge's Intel HD 4000 scores 176.23 FPS at 1080p on low quality settings. When you compare this to Haswell's HD 4600 and its 124.45 FPS result, this seems bad. However, even though they claim this as a performance regression, they never actually post earlier (and supposedly faster) benchmarks.

So I dug one up.

Back in October, the same test was performed with the same hardware. The Intel HD 4600 was not faster back then; it was actually a touch slower, scoring 123.84 FPS, while the Intel HD 4000 managed only 102.68 FPS. Haswell did not regress between that time and Ubuntu 14.04 on Linux 3.13; Ivy Bridge received a 71.63% increase over the same period.
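Here is the same comparison done explicitly (a small Python sketch of my own over the published Xonotic 0.7 numbers at 1080p, low quality):

```python
# Recomputing the October-to-Ubuntu-14.04 deltas from the published Xonotic 0.7
# results (1080p, low quality settings).
results = {
    "HD 4600 (Haswell)":    {"october": 123.84, "ubuntu_14_04": 124.45},
    "HD 4000 (Ivy Bridge)": {"october": 102.68, "ubuntu_14_04": 176.23},
}

for gpu, fps in results.items():
    change = fps["ubuntu_14_04"] / fps["october"] - 1
    print(f"{gpu}: {fps['october']} -> {fps['ubuntu_14_04']} FPS ({change:+.2%})")
# HD 4600 (Haswell): 123.84 -> 124.45 FPS (+0.49%)
# HD 4000 (Ivy Bridge): 102.68 -> 176.23 FPS (+71.63%)
```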

Of course, there could have been a performance increase for Haswell between October and now that recently regressed... but I could not find those benchmarks. All I can see is that Haswell has been quite steady since October. Either way, that is a significant performance increase on Ivy Bridge since that snapshot in time, even if Haswell had a rise and fall that I was unaware of.

Source: Phoronix