Subject: General Tech, Systems | March 8, 2017 - 12:20 PM | Jeremy Hellstrom
Tagged: qualcomm, OCP, microsoft, falkor, centriq 2400, azure, arm, 10nm
Last December Qualcomm announced plans to launch their Centriq 2400 series of platforms for data centres, demonstrating Apache Spark and Hadoop on Linux as well as a Java demo. They announced a 48-core design based on ARMv8 and fabbed on Samsung's 10nm process, which will compete against Intel's current offerings for the server room.
Today marks the official release of the Qualcomm Falkor CPU and Centriq 2400 series of products, as well as the announcement of a partnership with Microsoft which may see these products offered to Azure customers. Microsoft has successfully configured a version of Windows Server to run on these new chips, which is rather big news for customers looking for low-power hosting solutions running a familiar OS. The Centriq 2400 family is compliant with Microsoft's Project Olympus architecture, which the Open Compute Project Foundation uses to offer standardized building blocks from which you can design a data centre from scratch or expand an existing one.
Enough of the background; we are here for the specifications of the new platform and what can be loaded onto a Centriq 2400. The reference motherboard supports SoCs of up to 48 cores, with both single and dual socket designs announced. Each SoC can support up to six channels of DDR4 in either single or dual channel configurations, with a maximum of 768GB installed. Falkor will offer 32 lanes of PCIe 3.0, eight SATA ports, and a GbE ethernet port as well as USB and a standard 50Gb/s NIC. NVMe is supported; one design offers 20 NVMe drives with a PCIe x16 slot, but you can design the platform to match your requirements. Unfortunately they did not discuss performance during their call, nor any suggested usage scenarios. We expect to hear more about that during the 2017 Open Compute Project US Summit, which starts today.
The submission of the design to the Open Compute Project ensures a focus on compatibility and modularity and allows a wide variety of designs to be requested and networked together. If you have a need for HPC performance you can request a board with an HPC GPU such as a FirePro or Tesla, or even drop in your own optimized FPGA. Instead of opting for an impressive but expensive NVMe storage solution, you can modify the design to accommodate 16 SATA HDDs for affordable storage.
Qualcomm have already announced Windows 10 support on their Snapdragon processors, but the fact that Microsoft are internally running Windows Server on an ARMv8-based processor is much more impressive. Intel and AMD have long reigned in the server room and have rightfully shrugged off the many occasions on which companies have announced ARM-based servers that would offer more power-efficient alternatives. Intel have made huge advances in creating low-power chips for the server room, and AMD's recently announced Naples shows their intention to hold their market share as well.
If the submission to the OCP succeeds then we may see the first mainstream ARM-based servers appear on the market. Even if the Windows Server instances remain internal to Microsoft, the Centriq series will support Red Hat, CentOS, and Canonical's Ubuntu as well as both the GCC and LLVM compilers.
ARM may finally have reached the server market after all these years and it will be interesting to see how they fare. AMD and Intel have both had to vastly reduce the power consumption of their chips and embrace a diametrically opposite design philosophy; instead of a small number of powerful chips, servers of the future will consist of arrays of less powerful chips working in tandem. ARM has had to do the opposite: they are the uncontested rulers of low-power chips but have had to change their designs to increase processing capability in order to produce an effective product for the server room.
Could Qualcomm successfully enter the server room, or will their ARMs not have the necessary reach?
Subject: General Tech | October 6, 2016 - 11:37 PM | Tim Verry
Tagged: supercomputer, microsoft, deep neural network, azure, artificial intelligence, ai
Microsoft recently announced it would be reorganizing 5,000 employees as it focuses its efforts on artificial intelligence with a new AI and Research Group. The Redmond giant is pulling computer scientists and engineers from Microsoft Research, the Information Platform, Bing, and Cortana groups, and the Ambient Computing and Robotics teams. Led by 20-year Microsoft veteran Harry Shum (who has worked in both research and engineering roles at Microsoft), the new AI team promises to "democratize AI" and be a leader in the field with intelligent products and services.
It seems that "democratizing AI" is less about free artificial intelligence and more about making the technology accessible to everyone. The AI and Research Group plans to develop artificial intelligence to the point where it changes how humans interact with their computers (read: Cortana 2.0). Services and commands will be conversational rather than rigid, new applications will come baked with AI, such as office suites that can proofread and photo editors that can suggest optimal edits, and new vision, speech, and machine analytics APIs will let other developers harness the technology in their own applications.
Further, Microsoft wants to build the world's fastest AI supercomputer using its Azure cloud computing service. The Azure-powered AI will be available to everyone for their applications and research needs (for a price, of course!). Microsoft certainly has the money, brain power, and computing power to throw at the problem, and this may be one of the major areas where looking to "the cloud" for a company's computing needs is a smart move, as the up-front capital needed for the hardware, engineers, and support staff to do something like this in-house would be extremely prohibitive. It remains to be seen whether Microsoft will beat its competitors at being first, but it is certainly staking its claim and does not want to be left out completely.
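That up-front-capital argument boils down to simple break-even arithmetic. Here is a minimal Python sketch; every figure below is entirely hypothetical and chosen only to illustrate the trade-off, not taken from Microsoft or anyone else:

```python
# Back-of-the-envelope break-even between buying GPU hardware outright and
# renting equivalent capacity in the cloud. All figures are hypothetical.

CAPEX = 250_000.0           # purchase price of an in-house GPU cluster
OPEX_PER_MONTH = 5_000.0    # power, cooling, support staff for that cluster
CLOUD_PER_MONTH = 18_000.0  # renting comparable capacity

def breakeven_months(capex: float, opex: float, cloud: float) -> float:
    """Months of continuous use before owning becomes cheaper than renting."""
    return capex / (cloud - opex)

# With these made-up numbers, ownership pays off after roughly 19 months of
# sustained use; anything shorter or burstier favours the cloud.
print(f"{breakeven_months(CAPEX, OPEX_PER_MONTH, CLOUD_PER_MONTH):.1f} months")
```

The point is that the cloud only loses on cost when utilization is high and sustained, which is rarely the case for a company dabbling in AI research.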
“Microsoft has been working in artificial intelligence since the beginning of Microsoft Research, and yet we’ve only begun to scratch the surface of what’s possible,” said Shum, executive vice president of the Microsoft AI and Research Group. “Today’s move signifies Microsoft’s commitment to deploying intelligent technology and democratizing AI in a way that changes our lives and the world around us for the better. We will significantly expand our efforts to empower people and organizations to achieve more with our tools, our software and services, and our powerful, global-scale cloud computing capabilities.”
Interestingly, this announcement comes shortly after the news that industry giants Amazon, Facebook, Google-backed DeepMind, IBM, and Microsoft founded the not-for-profit Partnership on AI, an organization that will collaborate and research best practices on AI development and exploitation (and hopefully how to teach them not to turn on us, heh).
I am looking forward to the future of AI and the technologies it will enable!
Subject: General Tech | February 4, 2016 - 01:18 PM | Tim Verry
Tagged: open source, microsoft, machine learning, deep neural network, deep learning, cntk, azure
Microsoft has been using deep neural networks for a while now to power the speech recognition technologies bundled into Windows and Skype, identifying and following commands in the former and translating speech in the latter. This technology is part of Microsoft's Computational Network Toolkit. Last April, the company made this toolkit available to academic researchers on CodePlex, and it is now opening it up even more by moving the project to GitHub and placing it under an open source license.
Led by chief speech and computer scientist Xuedong Huang, a team of Microsoft researchers built the Computational Network Toolkit (CNTK) to power all their speech-related projects. The CNTK is a deep neural network toolkit for machine learning that is built to be fast and scalable across multiple systems and, more importantly, multiple GPUs, which excel at these kinds of parallel processing workloads and algorithms. Microsoft heavily focused on scalability with CNTK, and according to the company's own benchmarks (which are, of course, to be taken with a healthy dose of salt), the major competing neural network toolkits offer similar performance running on a single GPU, but when adding more than one graphics card CNTK is vastly more efficient, delivering almost four times the performance of Google's TensorFlow and a bit more than 1.5 times that of Torch 7 and Caffe. Where CNTK gets a bit deep-learning crazy is its ability to scale beyond a single system and easily tap into Microsoft's Azure GPU Lab to access numerous GPUs in remote datacenters -- though it's not free, you don't need to purchase, store, and power the hardware locally, and you can ramp the number up and down based on how much GPU muscle you need. The example Microsoft provided showed two similarly spec'd four-GPU Linux systems running on Azure cloud hosting getting close to twice the performance of a single four-GPU system (a 75% increase). Microsoft claims that "CNTK can easily scale beyond 8 GPUs across multiple machines with superior distributed system performance."
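The multi-GPU scaling CNTK advertises rests on data-parallel training: each GPU computes gradients on its own slice of the minibatch, and the averaged gradients match what a single device would have computed on the full batch. A minimal NumPy sketch of that idea, with CPU array shards standing in for GPUs (the linear model and the data below are made up purely for illustration, not anything from CNTK itself):

```python
import numpy as np

# Data-parallel gradient averaging: split one minibatch across four
# "devices", compute each shard's gradient independently, then average.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))   # minibatch of 64 examples, 8 features
y = rng.normal(size=(64, 1))   # regression targets
w = rng.normal(size=(8, 1))    # weights of a toy linear model

def gradient(Xs, ys, w):
    """Mean-squared-error gradient for the linear model on one data shard."""
    return 2 * Xs.T @ (Xs @ w - ys) / len(Xs)

# Full-batch gradient, as computed on a single device.
g_full = gradient(X, y, w)

# The same minibatch split into four equal shards, one per "device".
shards = zip(np.split(X, 4), np.split(y, 4))
g_avg = np.mean([gradient(Xs, ys, w) for Xs, ys in shards], axis=0)

# Averaging the shard gradients recovers the full-batch gradient, which is
# why adding devices can (ideally) scale throughput linearly.
assert np.allclose(g_full, g_avg)
```

In practice the scaling is sublinear because the averaged gradients must be exchanged over an interconnect every step; that communication cost is exactly where Microsoft claims CNTK beats the competition.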
Using GPU-based Azure machines, Microsoft was able to increase the performance of Cortana's speech recognition by ten times compared to the local systems they were previously using.
It is always cool to see GPU compute in practice, and now that CNTK is available to everyone, I expect to see a lot of new uses for the toolkit beyond speech recognition. Moving to an open source license is certainly good PR, but I think it was actually done more for Microsoft's own benefit than for users', which isn't necessarily a bad thing since both get to benefit from it. I am really interested to see what researchers are able to do with a deep neural network toolkit that reportedly offers so much performance thanks to GPUs, and I'm curious what new kinds of machine learning opportunities the extra speed will enable.
If you are interested, you can check out CNTK on GitHub!
Subject: General Tech | September 22, 2015 - 01:06 PM | Jeremy Hellstrom
Tagged: azure, microsoft, linux
It is a strange new world we find ourselves in, where part of Microsoft's Azure infrastructure will be built on Linux. Azure Cloud Switch will allow software-defined networking to be used on Azure for those who are brave enough to dabble in SDN. Microsoft will be incorporating the Open Compute Project-developed Switch Abstraction Interface, based on Linux; as The Register points out, this is likely due to a lack of similar functionality in Windows software. In this particular case Microsoft will not be reinventing the wheel but will wisely focus on improving the functionality of Azure and Azure-based products such as Office 365, which they have developed in house. The 'cloud' is a strange place and it just got a little bit stranger.
"Redmond's revealed that it's built something called Azure Cloud Switch (ACS), describing it as “a cross-platform modular operating system for data center networking built on Linux” and “our foray into building our own software for running network devices like switches.”"
Here is some more Tech News from around the web:
- Office 2016 for Windows 10 arrives with cloud-first sway, and Sway @ The Inquirer
- Shattered Skype slowly staggers to its feet after 15-HOUR outage outrage @ The Register
- Microsoft starts to fix Start Menu in new Windows 10 preview @ The Register
- Mapin: Candy Crush Trojan horse threat hits Android @ The Inquirer
- Get to Know the Elementary OS Freya Firewall Tool @ Linux.com
- Design and Print a Passive Speaker for Your Phone @ MAKE:Blog
- 5 Fantastic Tabletop Gaming Props You Can Print @ MAKE:Blog
- Samsung announces first customer-facing M2 SSD drive and it's wicked-fast @ The Inquirer
- Rikomagic V5 4k Android TV Stick Review @ NikKTech
- Netgear Powerline 1200 PLP1200 Adapter Set Review @ NikKTech
- Apple iPhones, iPads BRICKED by iOS 9's 'slide-to-upgrade' bug @ The Register
- iOS9 Review @ Hardware Secrets
Subject: General Tech | November 14, 2013 - 01:23 PM | Jeremy Hellstrom
Tagged: running gag, microsoft, azure, cloud, office 365
Microsoft's Azure and its applications such as Office 365 are quickly gaining a reputation, and it is not a very good one. On November 11th Azure suffered an outage on some of its services across the entire planet, and last night saw the Lync and email servers die. That doesn't seem to have stopped companies from adopting the service, though perhaps that is more a decision being made by beancounters than by people who understand what is meant by "that is not a lot of 9s". Since email is considered by most users to be the absolute most critical business service there are going to be a lot of complaints; at least you won't hear them until after Microsoft gets onmicrosoft.com working again. The Register will post more on this as they receive confirmation, but for now the hypothesis is that it was a DNS issue.
"Numerous other sub-domains of onmicrosoft.com were also affected, we've verified, and the issue appeared to be briefly widespread. It was initially feared a DNS cockup was to blame, but we're still investigating."
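For anyone actually counting the 9s: availability percentages translate directly into permitted downtime per year, and the conversion is simple enough to do in a few lines of Python. A quick illustrative calculation (the function and constant names are mine, not any vendor's SLA language):

```python
# Annual downtime permitted by an availability SLA of n "nines".
# 99.9% ("three nines") allows roughly 8.77 hours of downtime per year;
# a 15-hour outage blows through that budget in a single incident.

HOURS_PER_YEAR = 365.25 * 24  # 8766 hours, averaging over leap years

def downtime_hours(nines: int) -> float:
    """Hours of permitted downtime per year for an availability of n nines."""
    availability = 1 - 10 ** -nines
    return HOURS_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {downtime_hours(n):.2f} hours/year")
```

Five nines works out to about five minutes a year, which is the standard big customers expect and exactly why outages like this one sting.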
Here is some more Tech News from around the web:
- Mantle to power 15 Frostbite games; DICE calls for multi-vendor support @ The Tech Report
- AMD reveals 2014 APU roadmap for tablets, convertibles @ The Tech Report
- AMD’s Project Discovery sneak peek @ Kitguru
- Nokia Lumia 1520 specs, release date, price and where to buy @ The Inquirer
- The TRUTH about mystery Trojan found in SPAAACE @ The Register
- Red Hat announces it has cooked up Fedora 20 'Heisenbug' Beta @ The Inquirer
- ASUS RT-AC56U Gigabit Router @ LanOC Reviews
Subject: General Tech | November 11, 2013 - 03:28 PM | Jeremy Hellstrom
Tagged: microsoft, azure, red dog, cloud
The Register had a chance to conduct a brief interview with Windows Azure general manager Mike Neil about what caused the recent global Azure failure. It began with an update pushed to the Red Dog front-end software, which customers interface with and which communicates with load balancers for resource scheduling; the update started to break the ability of some admins to move VMs from staging to production. While the problems were limited and intermittent, they were occurring in all regions of the globe, which did not speak well of the system's partitioning. Microsoft has realized that Red Dog is a single point of failure and will be working to change that for the future, and also discussed some of the other underlying technologies here.
"Windows Azure suffered a global meltdown at the end of October that caused us to question whether Microsoft had effectively partitioned off bits of the cloud from one another. Now we have some answers."
Here is some more Tech News from around the web:
- AMD Lands Open-Source "Hawaii" GPU Driver Code @ Phoronix
- Windows, Office zero-day vuln must wait for next Patch Tuesday, says MS @ The Register
- International Space Station Infected With Malware Carried By Russian Astronauts @ Slashdot
- BlizzCon 2013 Coverage @ Legit Reviews
- Xbox One price, release date and availability @ The Inquirer
- $5 Smartphone Projector @ MAKE:Blog
- Group test: 13 printers and all-in-ones @ Hardware.info
- TteSPORTS "Which Gamer Are You?" Giveaway @ eTeknix
- Sandberg Worldwide Joint Giveaway @ NikKTech
- Win Phanteks Enthoo Primo and more with KitGuru
Subject: General Tech | October 31, 2013 - 12:53 PM | Jeremy Hellstrom
Tagged: microsoft, azure, office 365, fail
For the better part of yesterday a good portion of Microsoft's Azure was down across the globe, with no geographic location left unaffected. Azure is not only Microsoft's cloud storage service but also handles authentication for Office 365 and hosts the Exchange servers used by the new office suite. Thankfully it was not a complete outage, but the scope of the problem is quite worrisome: Microsoft has always claimed that Azure is partitioned geographically to prevent these types of global outages, yet their FTP service also failed during this outage, lending credence to concerns about a lack of partitioning and the possibility of cascading failures. A failure of this magnitude on a business-critical service is alarming, but it did allow The Register to give us a new term, "Blue Sky of Death".
"Microsoft's Windows Azure cloud was hit by a worldwide partial compute outage today, calling into question how effectively Redmond has partitioned its service.
The problems emerged at 2.35AM UTC, and were still ongoing as of 10.20PM UTC the same day, according to the company's service dashboard."
Here is some more Tech News from around the web:
- Hackaday Interview with Amal Graafstra, Creator of xNT Implant Chip @ Hack a Day
- WD slips bullet between teeth, gets ready to hand $706 MEELLION to Seagate @ The Register
- Intel announces first commercial availability of 4G LTE modem; introduces module for 4G connected tablets and ultrabooks @ DigiTimes
- Use Your Smartphone as a Microscope for Less Than $10 @ Hack a Day
- Zetta Z12 Intelligent Security Camcorder @ NikKTech
Subject: General Tech | October 1, 2013 - 01:14 PM | Jeremy Hellstrom
Tagged: microsoft, azure, cloud, DoD, secure
Microsoft just picked up a big win in their battle against IBM and Amazon for a share of the cloud now that the US Government has certified them as secure. This is their first such certification, which opens up a very large market for them and will make them more attractive to private firms as well. While most salespeople will tell you that the only thing that matters about the cloud is high availability, IT departments are far more concerned about security. High availability is assumed; if that is the only sales pitch a cloud provider gives you then you should probably stay away from them, as your clients will be much happier knowing their proprietary data is secure and available as opposed to just available. Slashdot commenters await you.
"Microsoft's cloud storage platform Azure received their first government certification yesterday, less than 24 hours before the official shutdown. The certification, which grants Azure 'Provisional Authority to Operate,' should make it easier for Microsoft to compete with rivals like IBM and Amazon Web Services for government contracts. The certification signifies that the Department of Defense, Homeland Security, and US General Services Administration have all deemed Azure safe from external hackers. Government cloud contracts are a lucrative market, as seen by Amazon's recent tussle with IBM over a $600M contract for a private CIA cloud."
Here is some more Tech News from around the web:
- A Closer Look at AMD's Mantle API @ Hardware Canucks
- Interview with AMD's Matt Skynner about Mantle and new Radeon cards @ Hardware.info
- BlackBerry ripped itself apart wooing CIOs AND iPhone fanbois - insiders @ The Register
- iPhone and iPad users discover an iMessage bug in iOS 7 @ The Inquirer
Subject: General Tech | May 2, 2013 - 05:07 AM | Tim Verry
Tagged: windows, thin client, remote desktop, mohoro, microsoft, cloud computing, azure
Microsoft may be working on its own cloud-based desktop service according to sources speaking with ZDNet’s Mary Jo Foley. The rumored service codenamed “Mohoro” would build the Windows desktop SaaS (Software as a Service) solution on top of the company’s Windows Azure cloud computing platform. With Mohoro, Microsoft would provide Azure virtual machines running the Windows operating system. Users would then be able to remote into the desktop on any Internet connected computer or mobile device (with remote desktop support) and get access to their own desktop and applications.
The Windows desktop... coming soon to a cloud near you?
Windows Azure users can already run virtual machines with Linux or Windows OSes, but in the case of Windows, Microsoft only allows server versions to be run. Licensing restrictions prevent users from loading consumer operating systems such as Windows XP, 7, or 8 onto the virtual machines. The rumored Mohoro service would apparently relax those restrictions and allow businesses or consumers to deploy client operating systems on Azure VMs. It would basically take the hardware enterprises currently run themselves and move it to "the cloud" behind a Microsoft-run subscription service.
It is an interesting idea that I could see universities and businesses looking into. The Azure platform is actually pretty good, from what little testing I've done on it. However, I think that for many consumers a local install is preferable. Although syncing applications and files can be a pain if you have multiple machines, you retain control of your data and are not bound to an always-on Internet connection to access that data and run applications. Further, latency issues and bandwidth caps on home Internet connections make a paid-for Azure desktop less appealing to home users. I think Microsoft would have a hard enough time selling users a subscription for a local, traditional Windows installation, much less a subscription for an OS that requires an always-on Internet connection just to use their computer.
Would you use an Azure-powered desktop as your main OS?