Join the PC Perspective Team and Austin Evans LIVE on Tonight's Podcast!

Subject: Editorial | June 4, 2014 - 04:42 PM |
Tagged: video, pcper, live, austin evans

Tonight's live edition of the PC Perspective Podcast is going to have a special guest, the Internet's Austin Evans. You likely know of Austin through his wildly popular YouTube channel or maybe his dance moves.

But seriously, Austin Evans is a great guy with a lot of interesting input on technology. Stop by our live page at http://www.pcper.com/live at 10pm EST / 7pm PST for all the fun!

Make sure you don't miss it by signing up for our PC Perspective Live Mailing List!

pcperlive.png

Source: PCPer Live!

TrueCrypt Taken Offline Doesn't Pass My Smell Test

Subject: Editorial, General Tech | May 28, 2014 - 11:17 PM |
Tagged: TrueCrypt

It should not pass anyone's smell test, but it apparently does, according to tweets and other articles. Officially, the TrueCrypt website (which redirects to their SourceForge page) claims that, with the end of Windows XP support (??), the TrueCrypt development team wants users to stop using their software. Instead, they suggest a switch to BitLocker, Mac OS X's built-in encryption, or whatever random encryption suite comes up when you search your Linux distro's package manager (!?). Not only that, but several versions of Windows (such as 7 Home Premium) do not have access to BitLocker. Lastly, none of these are a good solution for users who want a single encrypted container that works across multiple OSes.

A new version (don't use it!!!) called TrueCrypt 7.2 was released and signed with the developers' private signing key.

TrueCrypt_Logo.png

The developers have not denied the end of support, nor its full-of-crap reason. (Seriously, because Microsoft retired Windows XP almost two months ago, they pull support for a two-year-old version now?)

They have also not confirmed it. They have been missing since at least "the announcement" (or earlier if they were not the ones who made it). Going missing and unreachable, the day of your supposedly gigantic resignation announcement, does not support the validity of that announcement. 

To me, that is about as unconfirmed as you can get.

Still, people are believing the claims that TrueCrypt 7.1a is not secure. The version has been around since February 2012 and, beyond people looking at its source code, has passed a significant portion of a third-party audit. Even if you believe the website, it only says that TrueCrypt will not be updated for security. It does not say that TrueCrypt 7.1a is vulnerable to any known attack.

In other words, the version that has been good enough for over two years, and that government agencies have been unable to penetrate in several known cases, is probably as secure today as it was last week.

"The final version", TrueCrypt 7.2, is a decrypt-only solution. It allows users to unencrypt existing vaults, although who knows what else it does, to move it to another solution. The source code changes have been published, and they do not seem shady so far, but since we cannot even verify that their private key has not leaked, I wouldn't trust it. A very deep compromise could make finding vulnerabilities very difficult.

So what is going on? Who knows. One possibility is that they were targeted for a very coordinated hack, one which completely owned them and their private key, performed by someone(s) who spent a significant amount of time modifying a fake 7.2 version. Another possibility is that they were legally gagged and forced to shut down operations, but they managed to negotiate a method for users to decrypt existing data with a neutered build.

One thing is for sure, if this is a GOG-style publicity stunt, I will flip a couple of tables.

We'll see. ┻━┻ ¯\_(ツ)_/¯ ┻━┻

Source: TrueCrypt
Manufacturer: Various

The AMD Argument

Earlier this week, a story was posted on a Forbes.com blog that dove into the idea of NVIDIA GameWorks and how it was doing a disservice not just to the latest Ubisoft title Watch_Dogs but to PC gamers in general. Using quotes from AMD directly, the author claims that NVIDIA is actively engaging in methods to prevent game developers from optimizing games for AMD graphics hardware. This is an incredibly bold statement and one that I hope AMD is not making lightly. Here is a quote from the story:

Gameworks represents a clear and present threat to gamers by deliberately crippling performance on AMD products (40% of the market) to widen the margin in favor of NVIDIA products. . . . Participation in the Gameworks program often precludes the developer from accepting AMD suggestions that would improve performance directly in the game code—the most desirable form of optimization.

The example cited in the Forbes story is the recently released Watch_Dogs title, which appears to show favoritism towards NVIDIA GPUs, with the performance of the GTX 770 ($369) coming close to the performance of a Radeon R9 290X ($549).

It's evident that Watch Dogs is optimized for Nvidia hardware but it's staggering just how un-optimized it is on AMD hardware.

watch_dogs_ss9_99866.jpg

Watch_Dogs is the latest GameWorks title released this week.

I decided to get in touch with AMD directly to see exactly what stance the company was attempting to take with these kinds of claims. No surprise, AMD was just as forward with me as they appeared to be in the Forbes story originally.

The AMD Stance

Central to AMD’s latest annoyance with the competition is the NVIDIA GameWorks program. First unveiled last October during a press event in Montreal, GameWorks combines several NVIDIA-built engine functions into libraries that game developers can access and utilize to build advanced features into their games. NVIDIA’s website claims that GameWorks is “easy to integrate into games” and includes tutorials and tools to help quickly generate content with the software set. Included in the GameWorks suite are tools like VisualFX, which offers rendering solutions such as HBAO+, TXAA, Depth of Field, FaceWorks, HairWorks and more. Physics tools include the obvious, like PhysX, while also adding clothing, destruction, particles and more.

Continue reading our editorial on the verbal battle between AMD and NVIDIA about the GameWorks program!!

Mozilla Firefox to Implement Adobe DRM for Video

Subject: Editorial, General Tech | May 14, 2014 - 06:56 PM |
Tagged: ultraviolet, mozilla, DRM, Adobe Access, Adobe

Needless to say, DRM is a controversial topic and I am clearly against it. I do not blame Mozilla. The non-profit organization responsible for Firefox knew that they could not oppose Chrome, IE, and Safari while being a consumer software provider. I do not even blame Apple, Google, and Microsoft for their decisions, either. This problem is much bigger and it comes down to a total misunderstanding of basic mathematics (albeit at a ridiculously abstract and applied level).

22-mozilla-2.jpg

Simply put, piracy figures are meaningless. They are a measure of how many people use content without paying (assuming they are even accurate). You know what is more useful? Sales figures. Piracy figures are measurements, dependent variables, and so is revenue. Measurements cannot influence other measurements. Specifically, measurements cannot influence anything because they are, themselves, the result of influences. That is what "a measure" is.

Implementing DRM is not a measurement, however. It is a controllable action whose influence can be recorded. If you implement DRM and your sales go down, it hurt you. You may notice piracy figures decline. However, you should be too busy to care because you should be spending your time trying to undo the damage you did to your sales! Why are you looking at piracy figures when you're bleeding money?

I have yet to see a DRM implementation that correlated with an increase in sales. I have, however, seen some that correlated with a massive decrease.

The thing is, Netflix might know that, and I am pretty sure that some of the web browser companies know that. They do not necessarily want to implement DRM. What they want is content and, surprise, the people who are in charge of the content are definitely not enlightened to that logic. I am not even sure they realize that content which gets pirated before its release date is, by definition, not being leaked by end users.

But whatever. Technical companies, who want that content available on their products, are stuck finding a way to appease those content companies in a way that damages their users and shrinks their potential market the least. For Mozilla, this means keeping as much open as possible.

do-not-hurt-2.jpg

Since Mozilla does not have existing relationships with Hollywood, Adobe Access will be the actual mechanism for displaying the video. They are careful to note that this only applies to video; they believe their existing relationships in text, images, and games will prevent the disease from spreading. This is basically a plug-in architecture with a sandbox that is open source and as strict as possible.

This sandbox is intended to prevent a security vulnerability from gaining access to the host system, provide a way to rein in the DRM module if its performance hitches, and stop the DRM from querying the machine for authentication. The last part is something they wanted to highlight, because it shows their effort to protect the privacy of their users. They also imply a method for users to opt out but did not go into specifics.
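To make that boundary concrete, here is a rough conceptual sketch in Python of what such a sandbox implies. This is emphatically not Mozilla's or Adobe's actual interface; the class and function names are invented for illustration. The point is that the DRM module only ever sees what the host hands it, including a pre-salted, per-site identifier instead of the raw hardware details it could use to fingerprint the machine.

    import hashlib
    import os

    class SandboxedCDM:
        """Toy stand-in for a DRM module running inside a sandbox: it only ever
        sees what the host hands it (encrypted frames, keys, an opaque per-site
        ID) and has no file, network, or hardware access of its own."""

        def __init__(self, opaque_site_id):
            self._site_id = opaque_site_id  # already salted and hashed by the host

        def decrypt_frame(self, encrypted_frame, key):
            # A real CDM performs actual decryption; XOR is just a placeholder here.
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(encrypted_frame))

    def make_opaque_id(site_origin, secret_salt):
        """Host (browser) side: derive a per-site identifier so the CDM cannot read
        serial numbers or MAC addresses, or correlate one user across sites."""
        return hashlib.sha256(secret_salt + site_origin.encode()).hexdigest()

    salt = os.urandom(32)  # kept by the browser, never exposed to the CDM
    cdm = SandboxedCDM(make_opaque_id("https://example-video-site.test", salt))
    print(cdm.decrypt_frame(b"\x01\x02\x03", key=b"\x10"))

Whether the real module behaves this politely is exactly what the open source sandbox is supposed to enforce from the outside.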

As an aside, Adobe will support their Access DRM software on Windows, Mac, and Linux. Mozilla is pushing hard for Android and Firefox OS, too. According to Adobe, Access DRM is certified for use with Ultraviolet content.

I accept Mozilla's decision to join everyone else but I am sad that it came to this. I can think of only two reasons for including DRM: for legal (felony) "protection" under the DMCA or to make content companies feel better while they slowly sink their own ships chasing after numbers which have nothing to do with profits or revenue.

Ultimately, though, they made a compromise. That is always how we stumble and fall down slippery slopes. I am disappointed but I cannot suggest a better option.

Source: Mozilla

Mozilla Makes Suggestions to the FCC about Net Neutrality

Subject: Editorial, General Tech | May 5, 2014 - 05:08 PM |
Tagged: mozilla, net neutrality

Recently, the FCC has been moving to give up on Net Neutrality. Mozilla, being dedicated to the free (as in speech) and open internet, has offered a simple compromise. Their proposal is that the FCC classify internet service providers (ISPs) as common carriers on the server side, imposing restrictions that prevent them from discriminating against traffic headed to customers, while allowing them to remain "information services" on the consumer side.

mozilla-fcc.png

In other words, force ISPs to allow services to have unrestricted access to consumers, without flipping unnecessary tables with content distribution (TV, etc.) services. Like all possibilities so far, it could have some consequences, however.

"Net Neutrality" is a hot issue lately. Simply put, the internet gives society an affordable method of sharing information. How much is "just information" is catching numerous industries off guard, including ones which Internet Service Providers (ISPs) participate in (such as TV and Movie distribution), and that leads to serious tensions.

On the one hand, these companies want to protect their existing business models. They want consumers to continue to select their cable and satellite TV packages, on-demand videos, and other services at controlled profit margins and without the stress and uncertainty of competing.

On the other hand, if the world changes, they want to be the winner in that new reality. Yikes.

mozilla-UP.jpg

A... bad... photograph of Mozilla's "UP" anti-datamining proposal.

Mozilla's proposal is very typical of them. They tend to propose compromises which divide an issue such that both sides get the majority of their needs. Another good example is "UP", or User Personalization, which tries to cut down on data mining by giving the browser a way to tell websites what they actually want to know (and letting the user tell the browser how much to share). The user would compromise, giving the amount of information they find acceptable, so the website would compromise and take only what it needs (rather than developing methods to grab anything and everything it can). It feels like a similar thing is happening here. This proposal gives users what they want, freedom to choose services without restriction, without tossing ISPs into "Title II" common carrier classification altogether.

Of course, this probably comes with a few caveats...

The first issue that pops into my mind is, "What is a service?". I see this causing problems for peer-to-peer applications (including BitTorrent Sync and Crashplan, excluding Crashplan Central). Neither endpoint would necessarily be classified as "a server", or at least it would be hard to convince a non-technical lawmaker that one is, and thus ISPs would not need to apply common carrier restrictions to that traffic. This could be a serious issue for WebRTC. Even worse, companies like Google and Netflix would have no incentive to help fight those battles -- they're legally protected. It would have to be defined, very clearly, what makes "a server".

Every approach will get messy for someone. Still, at least the discussion is happening.

Source: Mozilla

Post Tax Day Celebration! Win an EVGA Hadron Air and GeForce GTX 750!

Subject: Editorial, General Tech, Graphics Cards | April 30, 2014 - 07:05 AM |
Tagged: hadron air, hadron, gtx 750, giveaway, evga, contest

Congrats to our winner: Pierce H.! Check back soon for more contests and giveaways at PC Perspective!!

In these good old United States of America, April 15th is a trying day. Circled on most of our calendars is the final deadline for paying up your bounty to Uncle Sam so we can continue to have things like freeway systems and universal Internet access. 

But EVGA is here for us! Courtesy of our long-time sponsor, you can win a post-Tax Day prize pack that includes both an EVGA Hadron Air mini-ITX chassis (reviewed by us here) as well as an EVGA GeForce GTX 750 graphics card.

evgacontestapril.jpg

Nothing makes paying taxes better than free stuff that falls under the gift limit...

With these components under your belt you are well down the road to PC gaming bliss, upgrading your existing PC or starting a new one in a form factor you might not have otherwise imagined. 

Competing for these prizes is simple and open to anyone in the world, even if you don't suffer the same April 15th fear that we do. (I'm sure you have your own worries...)

  1. Fill out the form at the bottom of this post to give us your name and email address, in addition to the reasons you love April 15th! (Seriously, we need some good ideas for next year to keep our heads up!) Note that leaving a standard comment on this post does not count as an entry, though you are welcome to do that too.
     
  2. Stop by our Facebook page and give us a LIKE (I hate saying that), head over to our Twitter page and follow @pcper, and heck, why not check out our many videos and subscribe to our YouTube channel?
     
  3. Why not do the same for EVGA's Facebook and Twitter accounts?
     
  4. Wait patiently for April 30th when we will draw and update this news post with the winner's name and tax documentation! (Okay, probably not that last part.)

A huge thanks goes out to our friends and supporters at EVGA for providing us with the hardware to hand out to you all. If it weren't for sponsors like this, PC Perspective just couldn't happen, so be sure to give them some thanks when you see them around the In-tar-webs!!

Good luck!

Source: EVGA

AMD AM1 Retested on 60 Watt Power Supply

Subject: Editorial | April 23, 2014 - 06:51 PM |
Tagged: TDP, Athlon 5350, Asus AM1I-A, amd, AM1

If I had one regret about my AM1 review that posted a few weeks ago, it was that I used a pretty hefty (relatively speaking) 500 watt power supply for a part that is listed at a 25 watt TDP.  Power supplies really do not hit their rated efficiency numbers until they are running at a significant fraction (around half) of their rated load.  Even the most efficient 500 watt power supply is going to inflate the consumption numbers of the diminutive parts that we are currently testing.

am1p_01.jpg

Keep it simple... keep it efficient.

Ryan had sent along a 60 watt notebook power supply with an ATX cable adapter at around the same time as I started testing the AMD Athlon 5350 and Asus AM1I-A.  I was somewhat roped into running that previously mentioned 500 watt power supply for comparative reasons: I was also using a 100 watt TDP A10-6790 APU with a pretty loaded Gigabyte A88X based ITX motherboard, and that combination would have likely fried the 60 watt (12V x 5A) notebook power supply under load.

Now that I had a little extra time on my hands, I was able to finally get around to seeing exactly how efficient this little number could get.  I swapped the old WD Green 1 TB drive for a new Samsung 840 EVO 500 GB SSD.  I removed the BD-ROM drive completely from the equation as well.  Neither of those parts uses a lot of wattage, but I am pushing this combination to go as low as I possibly can.

power-idle.png

power-load.png

The results are pretty interesting.  At idle we see the 60 watt supply (sans spinning drive and BD-ROM) hitting 12 watts as measured from the wall.  The 500 watt power supply and those extra pieces added another 11 watts of draw.  At load we see somewhat similar numbers, though the gap is not nearly as dramatic as at idle: the 60 watt system is drawing 29 watts while the 500 watt system is at 37 watts.
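For the curious, the arithmetic behind those two graphs is simple enough to lay out. A quick sketch using only the wall readings quoted above (the 23 watt idle figure is the 12 watts plus the 11 watts the larger supply and extra drives added):

    # Wall readings (watts) from the two configurations discussed above.
    readings = {
        "60 W adapter, SSD only":      {"idle": 12, "load": 29},
        "500 W ATX PSU, HDD + BD-ROM": {"idle": 23, "load": 37},
    }

    for state in ("idle", "load"):
        small = readings["60 W adapter, SSD only"][state]
        big = readings["500 W ATX PSU, HDD + BD-ROM"][state]
        delta = big - small
        print(f"{state}: {small} W vs {big} W -> {delta} W saved "
              f"({delta / big:.0%} lower draw from the wall)")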

am1p_02.jpg

So how do you get from a 60 watt notebook power adapter to ATX standard? This is the brains behind the operation.

The numbers for both power supplies are good, but we do see a nice jump in efficiency from using the smaller unit and an SSD instead of a spinning drive.  Either way, the Athlon 5350 and AMD AM1 infrastructure sip power compared to most desktop processors.

Source: AMD

Ars Technica Estimates Steam Sales and Hours Played

Subject: Editorial, General Tech | April 15, 2014 - 10:56 PM |
Tagged: valve, steam

Valve does not release sales or hours played figures for any game on Steam and it is rare to find a publisher who will volunteer that information. That said, Steam user profiles list that information on a per-account basis. If someone, say Ars Technica, had access to sufficient server capacity, say an Amazon Web Services instance, and a reasonable understanding of statistics, then they could estimate.

Oh look, Ars Technica estimated by extrapolating from over 250,000 random accounts.
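The underlying technique is ordinary survey sampling: pull a random slice of the account-ID space, measure what fraction of those profiles own a given game, and scale back up by the total number of accounts, with the sample size setting your margin of error. A rough Python sketch of the idea, with made-up numbers standing in for Ars' actual crawl:

    import random

    # Made-up stand-ins: Ars pulled ~250,000 random public profiles from Steam's
    # account-ID space; here we just simulate a sample of the same size.
    TOTAL_ACCOUNTS = 100_000_000   # hypothetical size of the account space
    TRUE_OWNERSHIP_RATE = 0.06     # hypothetical: 6% of accounts own the game
    SAMPLE_SIZE = 250_000

    sample = [random.random() < TRUE_OWNERSHIP_RATE for _ in range(SAMPLE_SIZE)]
    p_hat = sum(sample) / SAMPLE_SIZE          # observed ownership fraction

    estimate = p_hat * TOTAL_ACCOUNTS
    # Standard error of a proportion; 1.96 standard errors is roughly a 95% interval.
    std_err = (p_hat * (1 - p_hat) / SAMPLE_SIZE) ** 0.5
    margin = 1.96 * std_err * TOTAL_ACCOUNTS

    print(f"estimated owners: {estimate:,.0f} +/- {margin:,.0f}")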

SteamHW.png

If interested, I would definitely look through the original editorial for all of its many findings. Here, if you let me (and you can't stop me even if you don't), I would like to add my own analysis on a specific topic. The Elder Scrolls V: Skyrim on the PC, according to VGChartz, sold 3.42 million copies at retail, worldwide. The thing is, Steamworks was required for every copy sold at retail or online. According to Ars Technica's estimates, 5.94 million copies were registered with Steam.

5.94 minus 3.42 is 2.52 million copies sold digitally, meaning over 40% of PC sales were made through Steam and other digital distribution platforms. Also, this means that the PC was the game's second-best selling platform, ahead of the PS3 (5.43m) and behind the Xbox 360 (7.92m), minus any digital sales on those platforms if they exist, of course. Despite its engine being programmed in DirectX 9, it is still a fairly high-end game. That is a fairly healthy install base for decent gaming PCs.

Did you discover anything else on your own? Be sure to discuss it in our comments!

Source: Ars Technica
Subject: Editorial
Manufacturer: Microsoft

Taking it all the way to 12!

Microsoft has been developing DirectX for around 20 years now.  Back in the 90s, the hardware and software scene for gaming was chaotic, at best.  We had wonderful things like “SoundBlaster compatibility” and 3rd party graphics APIs such as Glide, S3G, PowerSGL, RRedline, and ATICIF.  OpenGL was aimed more towards professional applications, and it took John Carmack and id, through GLQuake in 1996, to start the ball moving in that particular direction.  There was a distinct need to provide standards across audio and 3D graphics that would de-fragment the industry for developers.  DirectX was introduced with Windows 95, but the popularity of Direct3D did not really take off until DirectX 3.0, which was released in late 1996.

dx_history.jpg

DirectX has had some notable successes, and some notable letdowns, over the years.  DX6 provided a much needed boost in 3D graphics, while DX8 introduced the world to programmable shading.  DX9 was the most long-lived version, thanks to it being the basis for the Xbox 360 console with its extended lifespan.  DX11 added in a bunch of features and made programming much simpler, all the while improving performance over DX10.  The low points?  DX10 was pretty dismal due to the performance penalty on hardware that supported some of the advanced rendering techniques.  DirectX 7 was around for a little more than a year before giving way to DX8.  DX1 and DX2?  Yeah, those were very unpopular and problematic, due to the myriad changes in a modern operating system (Win95) as compared to the DOS-based world that game devs were used to.

Some four years ago, if we go by what NVIDIA has said, initial talks began about pursuing the development of DirectX 12.  DX11 was released in 2009 and has been an excellent foundation for PC games.  It is not perfect, though.  There is still a significant impact on potential performance due to a variety of factors, including a fairly inefficient hardware abstraction layer that relies more upon fast single-threaded performance from a CPU rather than leveraging the power of a modern multi-core/multi-thread unit.  This has the effect of limiting how many objects can be represented on screen, as well as constraining other operations that would bottleneck even the fastest CPU threads.
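A toy cost model (just arithmetic, nothing resembling the real API) illustrates why that matters: if every draw call submission burns a fixed slice of CPU time on a single thread, the frame budget caps how many objects you can push, and spreading submission across cores raises that cap almost linearly. The per-draw cost and scaling factor below are made up for the sake of the example:

    FRAME_BUDGET_MS = 16.7        # one frame at ~60 FPS
    COST_PER_DRAW_MS = 0.01       # hypothetical fixed CPU overhead per draw submission

    def max_draws_per_frame(threads, scaling=0.9):
        """Hypothetical model: submission overhead spread across `threads` cores
        with imperfect scaling; real numbers depend entirely on engine and driver."""
        effective = 1 + (threads - 1) * scaling
        return int(FRAME_BUDGET_MS * effective / COST_PER_DRAW_MS)

    for t in (1, 2, 4, 8):
        print(f"{t} submission thread(s): ~{max_draws_per_frame(t):,} draws per frame")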

Click here to read the rest of the article!

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 29, 2014 - 10:45 PM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by compute hardware, sometimes thousands of cores. Video drivers are complex packages of software, and one of their many tasks is converting your scripts, known as shaders, into machine code for the hardware. If this machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.

amd-gcn-unit.jpg

Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like, with resources mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), with vectors mostly treated as collections of scalars, and so forth. Tricks which attempt to combine instructions together into vectors, such as using dot products, can just put irrelevant restrictions on the compiler and optimizer... because the compiler breaks those vector operations back down into the very same component-by-component ops that you thought you were avoiding.

Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
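To put the scalar-decomposition point in something other than GPU assembly: on GCN a float4 dot product is not one magic vector instruction, it is a multiply followed by a chain of multiply-adds, which is what you get whether or not you spelled it as a vector op in your shader. A hand-waving Python sketch of that expansion (the exact instruction selection is, of course, up to the compiler):

    def fma(a, b, c):
        """Fused multiply-add: a single instruction (with one rounding step) on GCN.
        Python has no true fused op, so this only models the data flow."""
        return a * b + c

    def dot4(a, b):
        # What you wrote "as a vector" still ends up as one multiply followed by
        # a chain of multiply-adds on a scalar-per-lane architecture.
        acc = a[0] * b[0]
        acc = fma(a[1], b[1], acc)
        acc = fma(a[2], b[2], acc)
        acc = fma(a[3], b[3], acc)
        return acc

    print(dot4((1.0, 2.0, 3.0, 4.0), (5.0, 6.0, 7.0, 8.0)))  # 70.0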

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.