Subject: General Tech | June 8, 2015 - 12:35 AM | Tim Verry
Tagged: windows, remote management, powershell, openssh, mac os x, linux
Citing both leadership and corporate culture changes within Microsoft, the PowerShell team – led by Group Software Engineering Manager Angel Calvo – excitedly announced support for OpenSSH earlier this week. Specifically, the team (finally, after the third such attempt) got the go-ahead from Microsoft's leadership, and plans are underway to natively support OpenSSH in PowerShell as well as to contribute to the OpenSSH project on behalf of Microsoft.
Details are scarce, but this is great news for system administrators and a nice extra feature for enthusiasts who like to dabble in those "other" operating systems (which is to say, pretty much every OS except Windows) and remotely access them over a secure SSH connection to perform maintenance or transfer files.
Currently, Windows users need third party tools such as PuTTY (and PSCP) and Cygwin (not pictured) for SSH client and server support.
Until now, users have had to rely on third party tools such as PuTTY, FileZilla, and Cygwin, among others, for their SSH, SCP, and SFTP needs. Accessing Linux machines using PuTTY is fairly straightforward, but going the other direction – setting things up so that you can access a Windows machine from a Linux machine over SSH – could certainly be made easier and more stable. Native OpenSSH support would mean both client and server functionality built into Windows, covering the SSH, SFTP, and SCP protocols.
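For readers who have not used these tools, the client-side workflow that native OpenSSH would bring to Windows looks roughly like this (a minimal sketch; the hostname, username, and file paths are placeholders, and the remote commands are shown commented out since they need a real server to talk to):

```shell
# Generate an ed25519 key pair for passwordless logins.
# (-N "" sets an empty passphrase -- fine for a demo, not for production.)
ssh-keygen -t ed25519 -f ./demo_key -N "" -q

# Open an interactive shell on a remote Linux machine
# (placeholder host; this would prompt to verify the host key):
# ssh -i ./demo_key admin@linuxbox.example.com

# Copy a local file to the remote machine over SCP:
# scp -i ./demo_key report.txt admin@linuxbox.example.com:/tmp/

# Start an interactive SFTP session for file transfers:
# sftp -i ./demo_key admin@linuxbox.example.com
```

Today the same workflow on Windows goes through Cygwin or PuTTY's equivalents (plink, pscp, psftp); native OpenSSH would make these commands first-class citizens alongside PowerShell.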
From the MSDN blog and this Twitter exchange, OpenSSH in Windows PowerShell is still in its infancy. It will not be launching with the rest of Windows 10 on July 29th, but with the level of customer interest hopefully pushing the refreshed Microsoft to make this a priority, we may see it within the next year or two – and certainly before Windows 11!
Are you ready to get your native SSH on using PowerShell, or will you be sticking with your current third party implementations?
Subject: Graphics Cards | February 28, 2013 - 02:42 AM | Josh Walrath
Tagged: workstations, virtualization, Teradici, remote management, R5000, pitcairn, PCoIP, firepro, amd
A few days back AMD released one of its latest FirePro workstation graphics cards. For most users out there this will be received with a bit of a shrug. This release is a bit different, though, and it reflects a change of direction in the PC market. The original PC freed users from mainframes and made computing affordable for most people. Today we are seemingly heading back to the mainframe/thin client setup of yore, but with hardware and connectivity that obviously was not present in the late 70s. The FirePro R5000 is hoping to redefine remote graphics.
Today’s corporate environment is chaotic when it comes to IT systems. Malware, poor user decisions, and variability in software and hardware configurations are a constant headache for IT workers. A big push is to make computing more centralized in the company, with easy oversight from IT workers. Servers with multiple remote users can be updated and upgraded far more easily than individual PCs scattered around the offices. This is good for a lot of basic users, but it does not address the performance needs of power users who typically run traditional workstations.
AMD hopes to change that thinking with the R5000. This is a Pitcairn based product (7800 series on the desktop) that is built to workstation standards. It also features a secret weapon: the Teradici TERA2240 host processor. Teradici is a leader in PCoIP technology. PCoIP is simply “PC over IP”. Instead of a traditional remote session, which limits performance and desktop space, Teradici developed PCoIP to more adequately send large amounts of pixel data over a network. The user is essentially able to leverage the power of a modern GPU rather than rely on the more software-based rendering of typical remote sessions. The user connects directly over IP from a thin client, available from a variety of OEMs.
The advantage here is that the GPU is again used to its full potential, which is key for those doing heavy video editing, 3D visualization, and CADD type workloads. The R5000 can support resolutions up to 2560x1600 on up to two displays, or 1920x1200 on four displays, at upwards of 60 fps in applications. The TERA2240 essentially encodes the output and streams it over IP; the thin client decodes the stream and displays the results. This promises very low latency over smaller networks, and very manageable latency over large or wide area networks.
The downside here is that only one client at a time can connect to the card. The card cannot be virtualized in a way that lets multiple users share the resources of the GPU. The card CAN run in a virtualized environment, but it is again limited to one client per card. Multiple cards can be placed in each server, with each card assigned to its own VM. While this makes hardware management a bit easier, it is still an expensive solution on a per-user basis. Efficiency may be regained in environments where shift work takes place, or in a setting such as a university where these cards are housed in high-powered servers away from classrooms, so that cooling and noise are not issues impeding learning.