Things are about to get...complicated
Earlier this week, the team behind Ashes of the Singularity released an updated version of its early access game, expanding its features and capabilities. With support for DirectX 11 and DirectX 12, plus the addition of multi-GPU support, the game featured a benchmark mode that got quite a lot of attention. We saw stories based on that software posted by Anandtech, Guru3D and ExtremeTech, all of which had varying views on the advantages of one GPU or another.
That isn’t the focus of my editorial here today, though.
Shortly after the initial release, a discussion began around results from the Guru3D story that measured frame time consistency and smoothness with FCAT, a capture-based testing methodology much like the Frame Rating process we have here at PC Perspective. In a post on ExtremeTech, Joel Hruska claims that the results and conclusion from Guru3D are wrong because the FCAT capture method assumes that the captured output matches what the user actually experiences. Maybe everyone is wrong?
First a bit of background: I have been working with Oxide and the Ashes of the Singularity benchmark for a couple of weeks, hoping to get a story that I was happy with and felt was complete, before having to head out the door to Barcelona for the Mobile World Congress. That didn’t happen – such is life with an 8-month-old. But, in my time with the benchmark, I found a couple of things that were very interesting, even concerning, that I was working through with the developers.
FCAT overlay as part of the Ashes benchmark
First, the initial implementation of the FCAT overlay, which Oxide should be PRAISED for including since we don’t have, and likely won’t have, a universal DX12 variant, was implemented incorrectly, with duplicated color swatches that made the results from capture-based testing inaccurate. I don’t know if Guru3D used that version for its FCAT testing, but I was able to get some updated EXEs of the game from the developer in order to get the overlay working correctly. Once that was corrected, I found yet another problem: an issue of frame presentation order on NVIDIA GPUs that likely has to do with asynchronous shaders. Whether that issue is on the NVIDIA driver side or the game engine side is still being investigated by Oxide, but it’s interesting to note that this problem couldn’t have been found without a proper FCAT implementation.
With all of that under the bridge, I set out to benchmark this latest version of Ashes and DX12 to measure performance across a range of AMD and NVIDIA hardware. The data showed some abnormalities, though. Some results just didn’t make sense in the context of what I was seeing in the game and what the overlay results were indicating. It appeared that Vsync (vertical sync) was working differently than I had seen with any other game on the PC.
For the NVIDIA platform, tested using a GTX 980 Ti, the game seemingly randomly starts up with Vsync on or off, with no clear indicator of what was causing it, despite the in-game settings being set how I wanted them. But the Frame Rating capture data was still working as I expected – just because Vsync is enabled doesn’t mean you can’t look at the results in capture formats. I have written stories on what Vsync-enabled captured data looks like and what it means as far back as April 2013. Obviously, to get the best and most relevant data from Frame Rating, setting vertical sync off is ideal. Running into more frustration than answers, I moved over to an AMD platform.
Subject: General Tech | February 27, 2016 - 12:48 AM | Scott Michaud
Tagged: windows 10, microsoft
I haven't seen these first hand, but several tech blogs are reporting that Microsoft has begun advertising on the Windows 10 lock screen. In this case, a full screen image from Rise of the Tomb Raider appears to be overlaid with a few links, like we used to see on the Bing homepage (except that this seems to launch the Windows Store app).
Image Credit: David McGavern via Twitter
One key thing to note, though, is that Microsoft allows you to disable these. If you go to Settings -> Personalization -> Lock screen, you can flip “Get fun facts, tips, tricks, and more on your lock screen” to off. It also doesn't appear to be targeted based on your personal information, although that is difficult to tell with a sample size of 1. In that case, however, where the ad is located would be fairly irrelevant.
Personally, I'm not too upset. Microsoft allows an easy, safe opt-out, although the option could be better labeled. Regardless of the number of people who block the ads, the part that matters is how intrusive they are. If the setting continuously reverts, or it moves to Group Policy, the registry, or worse, then it could be a problem. (Or, of course, if it sacrifices usability, performance, or security.)
The last part is an interesting note. I've read a few comments that are concerned about it being an attack vector for malware. They seem to assume it must be, but that's not necessarily the case (relative to other theoretical attacks, like Microsoft.com itself getting hacked). It looks like everything is served directly from Microsoft, and the functionality is severely limited. It could be done right, but yes, it's possible that they could be tricked in the future into providing a malicious link (just like they could be tricked into hosting a malicious app at the Windows Store itself). That mostly depends on the volume and type of ads they plan to integrate into the OS, and where.
Windows Store brings me to my personal concern -- antitrust.
Governments have been more permissive about this issue than they were two decades ago. Back then, the integration of web browsers and media applications was cause for litigation. Now, companies like Apple are able to ship OSes that disallow third-party browsers (beyond just reskins of Safari). (Update: Feb 26th @ 8:48pm - There was a bit of confusion. I am referring to iOS, specifically. Mac OSX allows full third-party web browsers. The example was referring to how iOS is legally treated compared to how Windows XP/Vista/7 was.) Again, Microsoft needed to put a browser ballot in Windows XP, Vista, and 7, yet, when Windows RT launched, they completely banned any web browser that didn't use Internet Explorer's Trident engine. Think about that shift in direction for a moment.
I wouldn't say Windows Store holds a dominant position in the digital distribution market, but it definitely has unique exposure within the OS. This could become a growing concern, especially if Microsoft progresses with their initiatives. At the same time, several other platform owners are doing the same thing (pretty much all of them). Should we place boundaries on this behavior? If so, what and how?
Subject: Mobile | February 26, 2016 - 05:04 PM | Sebastian Peak
Tagged: windows phone, Project Astoria, microsoft, developers, build 2015, Android
A smartphone is nothing without a large selection of quality apps these days, and to that end it seemed Microsoft was going to follow the BlackBerry OS 10 method (injecting life into a platform barren of software by borrowing Android apps) when they announced the "Windows Bridge for Android" last year.
(Image credit: Microsoft)
Blackberry accomplished this by adding the Amazon app store to OS 10, which gave BB users at least some of the apps an Android user has access to via Google Play. BlackBerry also provided devs tools to help them convert Android apps to run on the BB 10 OS platform, but the market share of BB OS 10 just isn’t high enough to justify many doing this.
Microsoft appeared to be headed in this direction when they introduced Project Astoria at last year’s Build conference, which was going to enable devs to bring Android apps over to the Windows mobile OS. Well, that’s over. In an update published yesterday by Kevin Gallo, Microsoft’s Director of Windows Developer Platform, this news was spun positively (of course).
“We also announced the Windows Bridge for Android (project “Astoria”) at Build last year, and some of you have asked about its status. We received a lot of feedback that having two Bridge technologies to bring code from mobile operating systems to Windows was unnecessary, and the choice between them could be confusing. We have carefully considered this feedback and decided that we would focus our efforts on the Windows Bridge for iOS and make it the single Bridge option for bringing mobile code to all Windows 10 devices, including Xbox and PCs. For those developers who spent time investigating the Android Bridge, we strongly encourage you to take a look at the iOS Bridge and Xamarin as great solutions.”
To editorialize here a bit, I will add that I own a Lumia smartphone, and in my experience Windows Phone is an innovative and extremely efficient mobile OS. However, the lack of quality apps (and the non-existent updates for those that do exist) is too great a barrier to use a Windows Phone as my primary device. It’s telling that BlackBerry's latest smartphone, the Priv, runs Android, as BlackBerry has effectively given up trying to compete with their own OS.
BlackBerry Priv, which runs the Android 5 OS (image credit: BlackBerry)
Microsoft seems unwilling to do this, which isn't surprising, as they are a software company first and foremost. But as a hardware company they have struggled with portable devices, as we saw with the ill-fated Kin smartphone and, of course, the Zune music player. Android is the only realistic option if you want to compete with iOS on a smartphone, but Microsoft hasn't given up on its own OS just yet. As much as I like the tiled interface, I think it's time to say goodbye to this iteration of Windows Mobile.
Subject: General Tech | February 24, 2016 - 09:41 PM | Scott Michaud
Tagged: microsoft, xamarin, Qt, .net, mono
Microsoft has purchased Xamarin, who currently maintain the Mono project.
This requires a little background. The .NET Framework was announced in 2000, and it quickly became one of the most popular structures to write native applications, especially simple ones. Apart from ASP.NET, which is designed for servers, support extended back to Windows 98, but it really defined applications throughout the Windows XP era. If you ever downloaded utilities that were mostly checkboxes and text elements, they were probably developed in .NET and programmed in C#.
Today, Qt and web technologies are very popular choices for new applications, but .NET is keeping up.
The Mono project brought the .NET framework, along with its managed languages such as C#, to Linux, Mac, and also Windows because why not. Android and iOS versions exist from Xamarin, under the names Xamarin.iOS and Xamarin.Android, but those are proprietary. Now that Microsoft has purchased Xamarin, it would seem that they now control the .NET-derived implementations on Android and iOS. The Mono project itself, as it exists for Linux, Mac, and Windows, is under open licenses, so (apart from Microsoft's patents that were around since day one) the framework could always be forked if the community dislikes the way it is developing. To visualize the scenario, think of when LibreOffice split from OpenOffice a little while after Oracle purchased Sun.
If they do split, however, it would likely be without iOS and Android components.
Subject: General Tech | February 17, 2016 - 11:47 PM | Scott Michaud
Tagged: microsoft, windows 10, pc gaming, DirectX 12
Last week, Microsoft announced that Quantum Break would arrive on the PC. At the same time, they listed the system requirements, which included the requirement of Windows 10. It will only be available on Windows 10 (outside of Xbox One). They also mentioned that the game would require DirectX 12, which made the issue more interesting. It wasn't just that Microsoft was pushing their OS with first-party software; they were using an API that is only available in Windows 10, one that had the potential to make a better video game.
Then they announced that it would only be available on Windows Store, which swings the pendulum back in the other direction. Oh well, it was nice while it lasted.
In all seriousness, we'll probably see games begin to deprecate DirectX 11 once DirectX 12 (or Vulkan) becomes ubiquitous. These new APIs significantly change how content is designed and submitted to the GPU(s), and do so in ways that seem difficult to scale back. Granted, I've talked to game developers and I've yet to have my suspicions validated, but it seems like the real benefit of the APIs will be when art and content can be created differently -- more objects, simpler objects, potentially splitting materials that are modified into separate instances, and so forth.
Quantum Break will come out on April 5th, along with a few other DX12-based titles.
Subject: General Tech | February 10, 2016 - 08:52 PM | Scott Michaud
Tagged: microsoft, windows 10
Since the release of Windows 10, Microsoft has been pushing for updates to just happen. They want users to receive each of them, because then it's harder for malware authors to take advantage of known vulnerabilities and it's also easier for Microsoft to update Windows (because it would have fewer permutations of patch levels). These updates would arrive with the useless name “Cumulative update for Windows 10 (some version)” and no further information, besides a list of changed files without any context.
Now with slightly less blindness...
Microsoft now has a page that lists the general improvements as a bullet-point list. It's not an extensive list of changes, and most of them are related to security and privacy, but that is to be expected now that Windows has moved to a build paradigm. They are broken down by build level, though, which lets you see everything that happened to 10240 since it launched separately from everything that happened to 10586 since it was published.
This is positive. Microsoft should have been doing this all along, and I hope they continue indefinitely.
Subject: General Tech | February 10, 2016 - 08:03 PM | Scott Michaud
Tagged: microsoft, windows 10
The Windows 10 preview program had two settings: Fast and Slow. A third one has been added, called Release Preview, although it sounds a bit different from the other two. According to the blog post, which is supposed to be about a new build of Windows 10 Mobile, Release Preview will grant early access to updates on the current branch of Windows 10. They also call it “updates” instead of “builds”. Fast and Slow, as they have existed, provide builds for the next branch of Windows 10 at the time.
When the public was on the July release, the Insider program provided builds that ended up in the November update. When Windows 10 1511 was released, Insiders received builds on the “Redstone” branch, to be released at some point in the future. That is, apparently, not how the Release Preview ring will work. Its users will receive 1511 updates early. They might receive the final Redstone build before general availability, but that is just speculation.
This might take some pressure off of Slow, which, during the Threshold 2 timeframe, only received a single build, 10565, outside of the final 10586 build that was released to the public. Slow ended up being little more than a release candidate ring for the upcoming branch. If they push that role to Release Preview, for the build that the public will see, then Slow might receive a few more steps on the upcoming branch, especially now that Fast is receiving more frequent updates. After all, users who are only interested in one or two builds per branch will probably be satisfied with pre-release updates and the final build early (again, if they release the final builds early on Release Preview, which is speculation).
Or Microsoft might just want a few more testers for Windows Update patches. Who knows?
Subject: General Tech | February 5, 2016 - 10:06 PM | Jeremy Hellstrom
Tagged: onedrive, microsoft, cloud storage
Remember the good old days when OneDrive moved from offering you 1TB of storage to an unlimited amount? That did not last too long; they changed their minds and dropped the paid service back to 1TB and the free version from 15GB to 5GB, with a chance to grandfather in the additional storage if you followed up with them.
A viewer recently encountered this for the first time, so it seems appropriate to remind everyone about the change. If you have the paid service and are storing over 1TB, you may have already heard from Microsoft; if not, consider this your warning that you had better trim down the amount of data you store on OneDrive, as the changes are going to happen in the latter half of this year. The same goes for free users who have 15GB, or 30GB if you opted into the camera roll service: get the amount of data you have stored on OneDrive under 5GB or risk losing files you would rather keep. The standalone 100GB and 200GB plans will be reduced to 50GB, though the price will remain at $1.99 per month.
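To make the arithmetic concrete, here is a small sketch of the quota changes described above. The quota numbers come from the announcement as reported here; the helper function itself is hypothetical, for illustration only, and not any official Microsoft tool:

```python
# Hypothetical helper: how much data does a OneDrive user need to delete
# under the new quotas? Quota figures are from the announcement above;
# this is an illustration, not an official tool.

NEW_QUOTAS_GB = {
    "free": 5,             # down from 15 GB (or 30 GB with the camera roll bonus)
    "standalone_100": 50,  # 100 GB plan reduced to 50 GB
    "standalone_200": 50,  # 200 GB plan reduced to 50 GB
    "office365": 1024,     # paid Office 365 storage drops back to 1 TB
}

def excess_gb(plan: str, stored_gb: float) -> float:
    """Return how many GB must be trimmed to fit the new quota (0 if none)."""
    return max(0.0, stored_gb - NEW_QUOTAS_GB[plan])

print(excess_gb("free", 12.0))          # 7.0 -> trim 7 GB before the deadline
print(excess_gb("standalone_200", 40))  # 0.0 -> already under the new cap
```

Free users near the old 15GB or 30GB ceilings have the most trimming to do, proportionally, since the new cap is a third (or a sixth) of what they had.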
The whole situation is reminiscent of a teacher in a classroom full of kids choosing to punish the entire class for the actions of a few individuals; in this case the tiny percentage which exceeded 75TB of usage. Make sure to clean up your OneDrive as soon as possible, this is not something you want to wait until the last minute to do.
"If you are using more than 5 GB of free storage, you will continue to have access to all files for at least 12 months after these changes go into effect in early 2016. In addition, you can redeem a free one-year Office 365 Personal subscription (credit card required), which includes 1 TB of OneDrive storage."
Here is some more Tech News from around the web:
- Intel Says Chips To Become Slower But More Energy Efficient @ Slashdot
- The USB Type-C Cable That Will Break Your Computer @ Hack a Day
- Mysterious 'Code 53' error is borking iPhones beyond repair @ The Inquirer
- Two Outstanding All-in-One Linux Servers @ Linux.com
- iOS flaw lets hackers thwart lock screen passcode on iPhones and iPads @ The Inquirer
- Ubuntu 6.06 To Ubuntu 16.04 LTS Performance Benchmarks: 10 Years Of Linux Performance @ Phoronix
- New AI chip from MIT gives Skynet a tenfold speed boost @ The Register
- Pebble punts out new firmware to watch you as you sleep @ The Register
- AUO starts shipping bezel-less Ultra HD TV panels in 1Q16 @ DigiTimes
- A Bot That Drives Robocallers Insane @ Slashdot
Subject: General Tech | February 4, 2016 - 06:18 PM | Tim Verry
Tagged: open source, microsoft, machine learning, deep neural network, deep learning, cntk, azure
Microsoft has been using deep neural networks for a while now to power the speech recognition technologies bundled into Windows and Skype, where they identify and follow commands and translate speech, respectively. This technology is part of Microsoft's Computational Network Toolkit. Last April, the company made this toolkit available to academic researchers on CodePlex, and it is now opening it up even more by moving the project to GitHub and placing it under an open source license.
Led by chief speech and computer scientist Xuedong Huang, a team of Microsoft researchers built the Computational Network Toolkit (CNTK) to power all of their speech related projects. CNTK is a deep neural network toolkit for machine learning that is built to be fast and scalable across multiple systems and, more importantly, multiple GPUs, which excel at these kinds of parallel processing workloads and algorithms. Microsoft heavily focused on scalability with CNTK, and according to the company's own benchmarks (which is to say, take them with a healthy dose of salt), while the major competing neural network toolkits offer similar performance running on a single GPU, when adding more than one graphics card CNTK is vastly more efficient, with almost four times the performance of Google's TensorFlow and a bit more than 1.5 times that of Torch 7 and Caffe. Where CNTK gets a bit deep-learning crazy is its ability to scale beyond a single system and easily tap into Microsoft's Azure GPU Lab to access numerous GPUs in their remote datacenters -- though it's not free, you don't need to purchase, store, and power the hardware locally, and you can ramp the GPU count up and down based on how much muscle you need. The example Microsoft provided showed two similarly spec'd four-GPU Linux systems running on Azure cloud hosting getting a 75% performance increase over a single four-GPU system. Microsoft claims that "CNTK can easily scale beyond 8 GPUs across multiple machines with superior distributed system performance."
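As a rough sanity check on those scaling claims, a quoted 75% gain from doubling the GPU count works out to about 87.5% of perfect linear scaling. The figures below come from Microsoft's own example as quoted above, so treat the result as illustrative rather than independently verified:

```python
# Scaling math for the quoted CNTK Azure example (figures taken from
# Microsoft's own claims above; illustrative, not independently verified).
baseline_gpus = 4   # one four-GPU Linux system
scaled_gpus = 8     # two similarly spec'd four-GPU systems
speedup = 1.75      # the quoted "75% increase" over the single system

# Doubling the hardware would ideally double throughput (2.0x).
ideal = scaled_gpus / baseline_gpus
efficiency = speedup / ideal  # fraction of perfect linear scaling achieved

print(f"ideal speedup: {ideal:.2f}x")
print(f"quoted speedup: {speedup:.2f}x")
print(f"scaling efficiency: {efficiency:.1%}")
```

By comparison, a toolkit that gained nothing from the second machine would sit at 50% efficiency, so holding 87.5% across a network link is the substance of Microsoft's distributed-performance claim.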
Using GPU-based Azure machines, Microsoft was able to increase the performance of Cortana's speech recognition tenfold compared to the local systems they were previously using.
It is always cool to see GPU compute in practice, and now that CNTK is available to everyone, I expect to see a lot of new uses for the toolkit beyond speech recognition. Moving to an open source license is certainly good PR, but I think it was actually done more for Microsoft's own benefit than for users', which isn't necessarily a bad thing, since both get to benefit from it. I am really interested to see what researchers are able to do with a deep neural network toolkit that reportedly offers so much performance thanks to GPUs, and I'm curious what new kinds of machine learning opportunities the extra speed will enable.
If you are interested, you can check out CNTK on GitHub!
That Depends on Whether They Need One
Ars Technica UK published an editorial called, Hey Valve: What's the point of Steam OS? The article does not actually pose the question in its text -- it mostly rants about technical problems with a Zotac review unit -- but the headline is interesting nonetheless.
Here's my view of the situation.
The Death of Media Center May Have Been...
There are two parts to this story, and both center around Windows 8. The first was addressed in an editorial that I wrote last May, titled The Death of Media Center & What Might Have Been. Microsoft wanted to expand the PC platform into the living room. Beyond the obvious support for movies, TV, and DVR, they also pushed PC gaming in a few subtle ways. The Games for Windows certification required games to be launchable from Media Center and to support Xbox 360 peripherals, which pressured game developers to make PC games comfortable to play on a couch. They also created Tray and Play, an optional feature that allows PC games to be played from the disk while they install in the background. Back in 2007, before Steam and other digital distribution services really took off, this eliminated install time, which was a major user experience problem with PC gaming (and a major hurdle for TV-connected PCs).
It also had a few nasty implications. Games for Windows Live tried to eliminate modding by requiring all content to be certified (or severely limiting the tools as seen in Halo 2 Vista). Microsoft was scared about the content that users could put into their games, especially since Hot Coffee (despite being locked, first-party content) occurred less than two years earlier. You could also argue that they were attempting to condition PC users to accept paid DLC.
Regardless of whether it would have been positive or negative for the PC industry, the Media Center initiative launched with Windows Vista, which is another way of saying “exploded on the launch pad, leaving no survivors.” Windows 7 cleared the wreckage with a new team, who aimed for the stars with Windows 8. They ignored the potential of the living room PC, preferring devices and services (i.e., Xbox) over an ecosystem provided by various OEMs.
If you look at the goals of Steam OS, they align pretty well with the original, Vista-era ambitions. Valve hopes to create a platform that hardware vendors could compete on. Devices, big or small, expensive or cheap, could fill all of the various needs that users have in the living room. Unfortunately, unlike Microsoft, they cannot be (natively) compatible with the catalog of Windows software.
This may seem like Valve is running toward a cliff, but keep reading.
What If Steam OS Competed with Windows Store?
Windows 8 did more than just abandon the vision of Windows Media Center. Driven by the popularity of the iOS App Store, Microsoft saw a way to end the public perception that Windows is hopelessly insecure. With the Windows Store, all software needs to be reviewed and certified by Microsoft. Software based on the Win32 API, which is all software for Windows 7 and earlier, was only allowed within the “Desktop App,” which was a second-class citizen and could be removed at any point.
This potential made the PC software industry collectively crap themselves. Mozilla was particularly freaked out, because Windows Store demanded (at the time) that all web browsers become reskins of Internet Explorer. This meant that Firefox would not be able to implement any new web standards on Windows, because it could only present what Internet Explorer (Trident) draws. Mozilla's mission is to develop a strong, standards-based web browser that forces all others to interoperate or die.
Remember: “This website is best viewed with Internet Explorer”?
Executives from several PC gaming companies, including Valve, Blizzard, and Mojang, spoke out against Windows 8 at the time (along with browser vendors and so forth). Steam OS could be viewed as a fire escape for Valve if Microsoft decided to try its luck and kill, or further deprecate, Win32 support. In the meantime, Windows PCs could stream to it until Linux gained a sufficient catalog of software.
Image Credit: Wikipedia
This is where Steam OS gets interesting. Its software library cannot compete against Windows with its full catalog of Win32 applications, at least not for a long time. On the other hand, if Microsoft continues to support Win32 as a first-class citizen, and returns to the level of openness with software vendors that they had in the Windows XP era, then Valve doesn't really have a reason to care about Steam OS as anything more than a hobby anyway. Likewise, if doomsday happens and something like Windows RT ends up being the future of Windows, as many feared, then Steam OS wouldn't need to compete against Windows; its only competition from Microsoft would be Windows Store apps and first-party software.
I would say that Valve might even have a better chance than Microsoft in that case.