Subject: Editorial, General Tech | April 16, 2014 - 01:56 AM | Scott Michaud
Tagged: valve, steam
Valve does not release sales or hours-played figures for any game on Steam, and it is rare to find a publisher who will volunteer that information. That said, Steam user profiles list those figures on a per-account basis. If someone, say Ars Technica, had access to sufficient server capacity (say, an Amazon Web Services instance) and a reasonable understanding of statistics, then they could estimate them.
If you are interested, definitely look through the original editorial for all of its many findings. Here, if you let me (and you can't stop me even if you don't), I would like to add my own analysis on a specific topic. The Elder Scrolls V: Skyrim on the PC, according to VGChartz, sold 3.42 million copies at retail worldwide. The thing is, Steamworks was required for every copy sold at retail or online. According to Ars Technica's estimates, 5.94 million copies were registered with Steam.
5.94 minus 3.42 leaves 2.52 million copies sold digitally. More than 40 percent of PC sales were made through Steam and other digital distribution platforms. Also, this means that the PC was the game's second-best selling platform, ahead of the PS3 (5.43m) and behind the Xbox 360 (7.92m), minus any digital sales on those platforms, if they exist, of course. Despite its engine targeting DirectX 9, it is still a fairly high-end game. That is a fairly healthy install base for decent gaming PCs.
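As a sanity check, the back-of-the-envelope math works out like this (figures are the VGChartz and Ars Technica estimates cited above):

```python
# Estimated Skyrim PC copies, in millions, per the sources above.
steam_total = 5.94   # copies registered with Steam (Ars Technica estimate)
retail = 3.42        # retail copies worldwide (VGChartz)

digital = steam_total - retail
digital_share = digital / steam_total

print(f"Digital copies: {digital:.2f}M")       # 2.52M
print(f"Digital share:  {digital_share:.1%}")  # 42.4%
```

Which is how we get "more than 40 percent" rather than a third: 2.52 of 5.94 million copies.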
Did you discover anything else on your own? Be sure to discuss it in our comments!
Taking it all the way to 12!
Microsoft has been developing DirectX for around 20 years now. Back in the 90s, the hardware and software scene for gaming was chaotic at best. We had wonderful things like “SoundBlaster compatibility” and third-party graphics APIs such as Glide, S3G, PowerSGL, RRedline, and ATICIF. OpenGL was aimed more towards professional applications, and it took John Carmack and id Software, through GLQuake in early 1997, to start the ball moving in that particular direction. There was a distinct need for standards across audio and 3D graphics that would de-fragment the industry for developers. DirectX was introduced with Windows 95, but the popularity of Direct3D did not really take off until DirectX 3.0, released in late 1996.
DirectX has had some notable successes, and some notable letdowns, over the years. DX6 provided a much-needed boost in 3D graphics, while DX8 introduced the world to programmable shading. DX9 was the longest-lived version, thanks to it being the basis for the Xbox 360 console with its extended lifespan. DX11 added a bunch of features and made programming much simpler, all while improving performance over DX10. The low points? DX10 was pretty dismal, due to the performance penalty on hardware that supported some of its advanced rendering techniques. DirectX 7 was around for little more than a year before giving way to DX8. DX1 and DX2? Yeah, those were very unpopular and problematic, due to the myriad changes in a modern operating system (Win95) compared to the DOS-based world that game devs were used to.
Some four years ago, if going by what NVIDIA has said, talks began on the development of DirectX 12. DX11 was released in 2009 and has been an excellent foundation for PC games. It is not perfect, though. There is still a significant impact on potential performance due to a variety of factors, including a fairly inefficient hardware abstraction layer that relies on fast single-threaded CPU performance rather than leveraging the power of a modern multi-core/multi-thread unit. This limits how many objects can be represented on screen, as well as other operations that would bottleneck even the fastest CPU threads.
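To make that concrete, here is a toy model of why single-threaded submission caps object counts. The per-call cost and thread count are illustrative assumptions, not measured driver figures:

```python
# Toy model: draw calls that fit in one 60 FPS frame when
# submission is bottlenecked on a single CPU thread.
frame_budget_ms = 1000.0 / 60.0   # ~16.7 ms per frame

cost_per_call_us = 25.0           # assumed CPU cost per draw call
calls_single = (frame_budget_ms * 1000.0) / cost_per_call_us

# If an API let four threads build command lists in parallel
# (and it scaled perfectly), the ceiling would rise accordingly.
threads = 4
calls_multi = calls_single * threads

print(f"Single-threaded ceiling: {calls_single:.0f} draw calls/frame")
print(f"Four-thread ceiling:     {calls_multi:.0f} draw calls/frame")
```

Halve the assumed per-call overhead and the ceiling doubles again, which is exactly the kind of gain the low-overhead APIs are chasing.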
Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM | Scott Michaud
Tagged: gdc 14, GDC, GCN, amd
While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by, sometimes, thousands of compute cores. Video drivers are complex packages of software, and one of their many tasks is compiling your shader source into machine code for the hardware. If this machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.
Emil Persson of Avalanche Studios, probably known best for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its existence in both console and PC, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.
AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and vectors are mostly treated as collections of scalars, and so forth. Tricks which attempt to pack operations into vectors, such as using dot products, can simply put irrelevant restrictions on the compiler and optimizer, as it breaks those vector operations back down into the very same component-by-component ops that you thought you were avoiding.
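To illustrate that scalarization, here is the decomposition in Python rather than GPU code; it only shows what a float3 dot product lowers to on a scalar-per-lane architecture:

```python
# On GCN, a float3 dot product is not one "vector op": the compiler
# lowers it to per-component multiplies and adds (mul, mad, mad).
def dot3(a, b):
    # exactly the component-by-component ops the vector syntax hides
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

print(dot3((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))  # 32.0
```

Writing the vector form buys no extra hardware parallelism within a lane; it can only constrain what the optimizer is allowed to reorder.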
Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
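As a toy illustration of two of those points, sketched in Python rather than shader code (the function names are mine, not Persson's):

```python
# Divides are among the most expensive ALU ops on a GPU, so a
# division by a constant is better expressed as a multiply by a
# precomputed reciprocal. Python only illustrates the algebra.
def slow_normalize(x):
    return x / 255.0          # a divide on every invocation

INV_255 = 1.0 / 255.0         # hoisted: computed once
def fast_normalize(x):
    return x * INV_255        # multiply is far cheaper than divide

# Fused multiply-add: a*b + c in a single instruction, which is why
# shader math is often massaged into this shape.
def mad(a, b, c):
    return a * b + c

print(fast_normalize(128))
print(mad(2.0, 3.0, 1.0))     # 7.0
```

Shader compilers will often do the reciprocal substitution themselves for constants, but the talk's point stands: know what your source actually costs in hardware instructions.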
I know I learned.
As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant, even if behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture; that is a lot of people talking about very few things.
This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.
Introduction and Background
Back in 2010, Intel threw a bit of a press event for a short list of analysts and reviewers out at their IMFT flash memory plant in Lehi, Utah. The theme and message of that event was the announcement of 25nm flash entering mass production. A few years have passed, and 25nm flash is now fairly ubiquitous, with 20nm rapidly gaining as IMFT scales production even higher on the smaller process. Last week, Intel threw a similar event, but instead of showing off a die shrink or announcing a new enthusiast SSD, they chose to take a step back and brief us on the design, engineering, and validation testing behind their flash storage product lines.
At the Lehi event, I did my best to make off with a 25nm wafer.
Many topics were covered at this new event at the Intel campus in Folsom, CA, and over the coming weeks we will be filling you in on many of them as we take the time necessary to digest the fire hose of intel (pun intended) that we received. Today I'm going to lay out one of the more impressive things I saw at the briefings: the process Intel goes through to ensure their products are among the most solid and reliable in the industry.
Subject: Editorial, General Tech | March 16, 2014 - 03:27 AM | Scott Michaud
Tagged: windows, mozilla, microsoft, Metro
If you use the Firefox browser on a PC, you are probably using its "Desktop" application. Mozilla also had a version for "Modern" Windows 8.x that could be used from the Start Screen. You probably did not use it, because fewer than 1000 people per day did. That is orders of magnitude smaller than the number of users for Desktop's pre-release builds.
Yup, less than one-thousandth.
Jonathan Nightingale, VP of Firefox, stated that Mozilla would not be willing to release the product without committing to its future development and support. There was not enough interest to take on that burden and it was not forecast to have a big uptake in adoption, either.
From what we can see, it's pretty flat.
Paul Thurrott of WinSupersite does not blame Mozilla for killing "Metro" Firefox. He acknowledges that they gave it a shot and did not see enough pre-release interest to warrant a product. He places some of the blame on Microsoft for the limitations it places on browsers (especially on Windows RT). In my opinion, this is just a symptom of the larger problem of Windows post-7. Hopefully, Microsoft can correct these problems and do so in a way that benefits their users (and society as a whole).
Subject: Editorial, General Tech | March 11, 2014 - 10:15 PM | Scott Michaud
Tagged: valve, opengl, DirectX
Late last night, Valve released source code from their "ToGL" transition layer. This bundle of code sits between "[a] limited subset of Direct3D 9.0c" and OpenGL, translating engines designed for the former into the latter. It was pulled out of the DOTA 2 source tree and published standalone... mostly. Basically, it is completely unsupported and probably will not even build without some other chunks of the Source engine.
Still, Valve did not need to release this code, but they did. A lot of open-source projects work like this: someone dumps a starting blob and, if it is sufficient, the community pokes and prods it into a self-sustaining entity. The real question is whether the code Valve provided is sufficient. As is often the case, time will tell. Either way, this is a good thing, and one other companies really should embrace: giving out your old code to further the collective. We are just not sure how good.
ToGL is available now at Valve's GitHub page under the permissive, non-copyleft MIT license.
Subject: Editorial, General Tech | February 25, 2014 - 03:43 PM | Scott Michaud
Tagged: titanfall, ssd
UPDATE (Feb 26th): Our readers pointed out in the comments, although I have yet to test it, that you can change Origin's install directory before installing a game to keep some games on a separate hard drive from the rest. Not as easy as Steam's method, but it apparently works for games, like this one, that you want somewhere else. I figured Origin would forget games left in the old directory, but apparently not.
Well, okay. Titanfall will require a significant amount of hard drive space when it is released in two weeks. Receiving the game digitally will push 21GB of content through your modem, which unpacks to 48GB on disk. Apparently, the next generation has arrived.
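For the curious, the numbers work out roughly like this (the connection speeds are illustrative assumptions, not anyone's stated requirements):

```python
# Titanfall's digital footprint: 21 GB download, 48 GB installed.
download_gb, installed_gb = 21, 48
print(f"Unpack ratio: {installed_gb / download_gb:.1f}x")   # ~2.3x

# Rough download times at a few assumed connection speeds.
for mbps in (10, 50, 100):
    hours = download_gb * 8 * 1000 / mbps / 3600
    print(f"{mbps} Mbps: ~{hours:.1f} hours")
```

At a modest 10 Mbps connection that is the better part of five hours, which puts the "next generation has arrived" quip into perspective.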
Honestly, I am not upset by this. Yes, it basically ignores customers who install their games to SSDs. Origin, at the moment, forces all games to be installed in a single directory (albeit one that can be placed anywhere), unlike Steam, which allows games to be individually sent to multiple folders. It would be a good idea to keep those customers in mind... but not at the expense of the game itself. As always, both "high-end" and "unoptimized" titles have high minimum specifications; we decide which label applies by considering how effectively the performance is used.
That is something that we will need to find out when it launches on March 11th.
Subject: Editorial, General Tech | February 25, 2014 - 03:34 PM | Jeremy Hellstrom
Tagged: ddr3, Kaveri, A10 7850K, amd, linux
You don't often see performance scaling as clean as what Phoronix saw when testing the effect of memory speed on AMD's A10-7850K. Pick any result and you can clearly see a smooth increase in performance from DDR3-800 to DDR3-2400. The only place that increase declines is between DDR3-2133 and 2400MHz, with some tests showing little to no gain between those two speeds. Some tests do still show an improvement; for certain workloads on Linux the extra money is worth it, but in other cases you can save a few dollars and settle for the slightly cheaper DDR3-2133. Check out the full review here.
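The flattening at the top end also tracks the raw bandwidth math. A quick sketch using peak theoretical dual-channel DDR3 figures (real-world throughput is lower, and the speed pairs below are just sample steps):

```python
# Peak theoretical bandwidth for dual-channel DDR3:
# transfer rate (MT/s) x 8 bytes per channel x 2 channels.
def peak_gb_s(mts):
    return mts * 8 * 2 / 1000.0

for prev, cur in [(800, 1066), (1600, 1866), (2133, 2400)]:
    gain = peak_gb_s(cur) / peak_gb_s(prev) - 1
    print(f"DDR3-{prev} -> DDR3-{cur}: {gain:+.1%} peak bandwidth")
```

The jump from 2133 to 2400 is only about 12.5% more peak bandwidth, so diminishing returns in the benchmarks at that step are not surprising.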
"Earlier in the week I published benchmarks showing AMD Kaveri's DDR3-800MHz through DDR3-2133MHz system memory performance. Those results showed this latest-generation AMD APU craving -- and being able to take advantage of -- high memory frequencies. Many were curious how DDR3-2400MHz would fair with Kaveri so here's some benchmarks as we test out Kingston's HyperX Beast 8GB DDR3-2400MHz memory kit."
Here are some more Memory articles from around the web:
- DDR3-800MHz To DDR3-2133MHz Memory Testing With AMD's Kaveri @ Phoronix
- G.SKILL Ripjaws 8GB 2133MHz DDR3L SO-DIMM Memory Kit Review @ Legit Reviews
- Patriot Viper 3 16GB 2400MHz Memory Kit @ eTeknix
- Team Group Vulcan 8GB 2400MHz C10 Memory Kit @ eTeknix
- Patriot Viper 8GB DDR3-2133 C11 (Low Profile) Memory @ Funky Kit
- G.SKILL Ripjaws 1866MHz 8GB DDR3L SO-DIMM Memory Kit Review @ Legit Reviews
Subject: Editorial, General Tech | February 19, 2014 - 06:15 PM | Scott Michaud
Tagged: bioshock infinite
The team behind the original BioShock and BioShock: Infinite decided to call it quits. After seventeen years, depending on where you start counting, the company dissolved to form another, much smaller studio. Only about fifteen employees will transition to the new team. The rest are being provided financial support, given a bit of time to develop their portfolios, and can attend a recruitment day to be interviewed by other studios and publishers. They may also be offered employment elsewhere within Take-Two Interactive.
The studio formed by the handful of remaining employees will look to develop games based on narrative, which is definitely their strength. Each game will be distributed digitally and Take-Two will continue to be their parent company.
While any job loss is terrible, I am interested in the future project. BioShock: Infinite sold millions of copies, but I wonder if its size ultimately caused it harm. It was pretty and full of detail, at the expense of requiring a large team. The game had a story which respected your intelligence; you might not understand it, and that was okay. Still, I have little confidence that it was anywhere close to the team's original vision. From budget constraints to the religious beliefs of development staff, we already know about several aspects of the game that changed significantly. Even Elizabeth, according to earlier statements from Ken Levine, was on the bubble because of the complexity of her AI. I can imagine how difficult it is to resist those changes when staring at man-hour budgets. I cannot, however, imagine BioShock: Infinite without Elizabeth. A smaller team might help them concentrate their effort where it matters and keep the artistic vision from becoming too diluted.
As for BioShock? The second part of the Burial at Sea DLC is said to wrap up the entire franchise. 2K will retain the license if they want to release sequels or spin-offs. I doubt Ken Levine will have anything more to do with it, however.
It wouldn’t be February if we didn’t hear the Q4 FY14 earnings from NVIDIA! NVIDIA does have a slightly odd way of labeling their quarters, but in the end it is all semantics. They are not, in fact, living in the future, though I bet their product managers wish they could peer into the actual Q4 2014. No, the whole FY14 thing traces back to when they made their IPO and how they started reporting. To us mere mortals, Q4 FY14 actually represents Q4 2013. Clear as mud? Lord love the Securities and Exchange Commission and their rules.
The past quarter was a pretty good one for NVIDIA. They came away with $1.144 billion in gross revenue and a GAAP net income of $147 million. This beat the Street’s estimate by a pretty large margin, and in response, NVIDIA’s stock rose in after-hours trading. This has certainly been a trying year for NVIDIA and the PC market in general, but they seem to have come out on top.
NVIDIA beat estimates primarily on the strength of the PC graphics division. Many were focusing on the apparent decline of the PC market and assumed that NVIDIA would be dragged down by lower shipments. On the contrary, it seems as though the gaming market and add-in sales on the PC helped to solidify NVIDIA’s quarter. We can look at a number of factors that likely contributed to this uptick for NVIDIA.