Subject: General Tech | March 5, 2013 - 01:02 PM | Jeremy Hellstrom
Tagged: virtualization, shadow defender, light virtualization
Light Virtualization is essentially a sandbox tool for those who do not have the equipment available to set up a full virtual machine server. It allows you to create a virtualization buffer or partition on your system drive which will not save any changes made to the system unless you specify otherwise, perfect for testing software or patches as well as for ensuring almost any malware infection will not survive a reboot. Shadow Defender is light virtualization software that has been around for a while, though development stalled for roughly two years until a major update arrived very recently. This update encompasses many of the recent changes to hardware, adding TRIM support for SSDs and even support for Windows 8. While it won't stop an infection from hitting your PC, as long as you do not save any of the changes made to the virtualized portion of your drive, any rootkit or other such malware will not survive a reboot. Take a look at how to use the software and how effective it is over at Tweaktown.
"Shadow Defender (or SD for short as it is known among its dedicated fans) has been enjoying a great reputation among Light Virtualization fans during the last few years. There has been a barren period of two and a half years where development was interrupted, leaving v18.104.22.1685 (which was released back in February 2010) as the last known good version."
Here is some more Tech News from around the web:
- Need an army of killer zombies? Yours for just $25 per 1,000 PCs @ The Register
- NetApp could use Microsoft to beat off VMware's virtual tool @ The Register
- ASRock introduces Haswell Z87/H87/B85 motherboards @ Hardware.info
- USB 2.0 vs. USB 3.0 Flash Drives On Linux @ Phoronix
- TP-Link TL-WDR4900 review: TP-Link's best router yet @ Hardware.info
Subject: Graphics Cards | February 27, 2013 - 09:42 PM | Josh Walrath
Tagged: workstations, virtualization, Teradici, remote management, R5000, pitcairn, PCoIP, firepro, amd
A few days back AMD released one of their latest FirePro workstation graphics cards. For most users out there this will be received with a bit of a shrug. This release is a bit different though, and it reflects a change in direction in the PC market. The original PC freed users from mainframes and made computing affordable for most people. Today we are seemingly heading back to the mainframe/thin client setup of yore, but with hardware and connectivity that obviously was not present in the late 70s. The FirePro R5000 is hoping to redefine remote graphics.
Today’s corporate environment is chaotic when it comes to IT systems. The combination of malware, poor user decisions, and variability in software and hardware configurations is a constant headache for IT workers. A big push is to make computing more centralized in the company with easy oversight from IT workers. Servers with multiple remote users can be updated and upgraded more easily than going to individual PCs around the offices to do the same work. This is good for a lot of basic users, but it does not address the performance needs of power users who typically run traditional workstations.
AMD hopes to change that thinking with the R5000. This is a Pitcairn based product (7800 series on the desktop) that is built to workstation standards. It also features a secret weapon: the Teradici TERA2240 host processor. Teradici is a leader in PCoIP technology. PCoIP is simply “PC over IP”. Instead of a traditional remote host, which limits performance and desktop space, Teradici developed PCoIP to more adequately send large amounts of pixel data over a network. The user is essentially able to leverage the power of a modern GPU rather than rely on the more software-based rendering of typical remote sessions. Users connect directly over IP from thin clients offered by a variety of OEMs.
The advantage here is that the GPU is again used to its full potential, which is key for those doing heavy video editing work, 3D visualization, and CADD-type workloads. The latest R5000 can support resolutions up to 2560x1600 on up to two displays, or 1920x1200 on four displays. It supports upwards of 60 fps in applications. The TERA2240 essentially encodes the output and streams it over IP; the thin client decodes the stream and displays the results. This promises very low latency over smaller networks, and very manageable latency over large or wide area networks.
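The encode/stream/decode pipeline can be sketched in miniature. The toy functions below are my own illustration, not Teradici's protocol: real PCoIP uses the TERA2240's hardware codec, whereas this sketch stands in zlib and a length-prefixed TCP-style stream just to show the host-encodes/client-decodes split.

```python
import socket
import zlib

def send_frame(sock: socket.socket, frame: bytes) -> None:
    """Host side: compress a rendered frame and ship it over IP."""
    payload = zlib.compress(frame)  # stand-in for the hardware encoder
    # Length-prefix the payload so the client knows how much to read.
    sock.sendall(len(payload).to_bytes(4, "big") + payload)

def receive_frame(sock: socket.socket) -> bytes:
    """Thin-client side: read one full frame and decode it for display."""
    size = int.from_bytes(sock.recv(4), "big")
    payload = b""
    while len(payload) < size:
        payload += sock.recv(size - len(payload))
    return zlib.decompress(payload)  # stand-in for the client decoder
```

A pair of connected sockets is enough to exercise the round trip: whatever pixel buffer goes into `send_frame` comes back out of `receive_frame` intact on the other end.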
The downside here is that only one client at a time can connect to the card; it cannot be virtualized in a way that lets multiple users share the resources of the GPU. The card CAN run in a virtualized environment, but it is again limited to one client per card. Multiple cards can be placed in each server, with each card assigned to its own VM. While this makes management of the hardware a bit easier, it is still an expensive solution on a per-user basis. Efficiency may be regained in an environment where shift work takes place, or in a setting such as a university, where these cards are housed in high-powered servers away from classrooms so that cooling and noise are not issues impeding learning.
Subject: General Tech | November 2, 2012 - 12:37 PM | Jeremy Hellstrom
Tagged: HyperV, microsoft, windows server, WS2012, vmware, virtualization
Today is launch day for WS2012, the first new Microsoft server OS in many years, and it brings with it a host of changes. While advertisers talk about 'the Cloud', server admins are into virtualization, and that is a big part of the update to WS2012, with Hyper-V arriving as an integral feature of the new server OS and highlighting Microsoft's next target. They've demonstrated the robustness of their virtualization implementation by running Bing and TechNet on Hyper-V, something very important for a relatively new piece of enterprise software to accomplish. The specifications are impressive, from the number of CPUs and amount of addressable memory which can be granted to a VM, to the virtual Layer-2 switches which can be created. Of course it is not all virtuality; a new ReFS file system and built-in file deduplication make this a much more impressive upgrade than the previous jump from Server 2003 to Server 2008. Read more and catch some movies at The Register.
"In 1985, Commodore held the UK launch of the Amiga 1000 at the World of Commodore Show at the Novotel in Hammersmith. Twenty-seven years later, Microsoft used the same venue to host the Technical Launch of Windows Server 2012."
Here is some more Tech News from around the web:
- My six days with Windows 8 @ The Tech Report
- Intro to the Paravirtualization Spectrum With Xen (Part 2) @ Linux.com
- Flash slash at OCZ: New CEO cuts nearly 200 jobs, 150 products @ The Register
- Apple patches security flaws in Safari @ The Inquirer
- Elpida says acquisition by Micron approved by Tokyo court @ DigiTimes
- 3D Printing vs Patents and Gun Controls @ Benchmark Reviews
- D-Link DIR-865L @ Hardware.info
- Touring Microsoft, Sony and Apple Stores on Windows 8’s Launch Day @ TechSpot
- Rapture Game Studios @ LanOC Reviews
- Bjorn3D/Kingston – Get Ready For The Holidays
Subject: General Tech | September 11, 2012 - 10:41 AM | Tim Verry
Tagged: virtualization, radeon, onlive, gaming, cloud gaming, ciinow, amd
In the wake of OnLive going bankrupt and selling itself to new investors, a new cloud gaming company has emerged called CiiNOW. The company was founded in 2010 and now has 24 employees. It has managed to raise more than $13 million USD, and with a new investment from chip designer AMD, CiiNOW is ready to go public with its software. Interestingly, instead of starting its own cloud gaming service, CiiNOW is positioning itself as a middleware company by selling its virtualization and gaming software to other companies. Those business customers would then use CiiNOW's software to start their own cloud gaming services.
In the deal with AMD, CiiNOW will recommend AMD Radeon graphics cards to customers and will support them on its software platform. According to CiiNOW, its virtualized platform is able to run on any data center or cloud computing platform's hardware. While OnLive generally required specialized servers where the graphics card was dedicated to providing games to one (or a small number of) user(s), CiiNOW claims to be able to provide up to eight 720p HD streams per server blade, and up to 272 HD streams per traditional server rack. On the user side of things, CiiNOW has stated that gamers would need at least a Mbps internet connection in order to play the streamed games effectively. Company CEO Ron Haberman was quoted by Venture Beat as stating the following:
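As a quick sanity check on those density claims, the two figures quoted above imply a particular blade count per rack. The arithmetic is trivial; note the blade count is my own inference from the numbers, not something CiiNOW has stated:

```python
# CiiNOW's claimed stream density (figures from the article above)
streams_per_blade = 8      # 720p HD streams per server blade
streams_per_rack = 272     # HD streams per traditional server rack

# Implied blades per rack, assuming every blade is fully loaded
blades_per_rack = streams_per_rack / streams_per_blade
print(f"Implied blades per rack: {blades_per_rack:.0f}")  # → 34
```

Thirty-four blades per rack is plausible for dense half-width blades, which suggests the per-rack number is simply the per-blade figure scaled up rather than a separate measurement.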
“One of the big issues with cloud gaming is that no one likes to talk about costs, we are more economical because we virtualize any hardware that fits underneath our software.”
While the company has not gone into details about how the virtualization software works on off-the-shelf servers, it claims an extremely scalable solution that can support rapidly growing numbers of end users without dramatically increasing hardware costs. It's impossible to say how well cloud gaming services based on this technology will work without more details or a hands-on, but it is nice to see someone else take up the mantle from OnLive, especially with competitor Gaikai being bought out by Sony. CiiNOW wants its technology to be used to deliver AAA titles to gamers over the Internet, so I'm interested in how they are going to pull that off using varying hardware with CiiNOW's software layer running on top (specifically, the performance they will be able to get out of the hardware and how it will be sliced up between clients/gamers).
The company has said that games will not need to be ported to work on the virtualized software; only a DRM-free copy from the publisher needs to be provided to load a game onto the platform. Further, cloud gaming providers using CiiNOW's software will be able to support game pads and other controllers for interacting with the streamed games. CiiNOW does not list specific latency numbers on its site, but claims that it is using a low-latency H.264 video stream to send the gameplay down to users. It remains to be seen whether or not it will be able to match or exceed NVIDIA's GRID technology in that respect, however.
There are still a lot of questions about how CiiNOW's software will work, and whether it will advance cloud gaming in general. Fortunately, you should be able to get some answers soon as the company's software is now available to the public, and we should start to see some new cloud gaming providers popping up based on the virtualization technology. Reportedly, the company has completed several trial runs in Europe and has potential customers in the US, Korea, and Australia. CiiNow claims that it could take around two months from when a customer orders equipment before its cloud gaming service can go live, so the first fruits of CiiNow's labor might emerge by the end of this year.
There is a preview of a cloud gaming service up on CiiNOW's website, but no partners with plans to launch gaming services have been publicly announced yet.
In the video below, CiiNOW CEO Ron Haberman introduces the company's new cloud gaming platform.
Continue reading for my speculation and brief thoughts on cloud gaming. Feel free to join the comment discussion (no registration required).
Subject: General Tech | August 28, 2012 - 01:45 PM | Jeremy Hellstrom
Tagged: virtualization, Seoul, seamicro, opteron, Delhi, amd, Abu Dhabi
AMD recently compared the cost per virtual machine of servers with two eight-core 2.9GHz Xeon E5-2690 processors and 256GB per node against servers with two 16-core 2.7GHz Opteron 6284SE processors and only 128GB per node. Hyper-Threading was enabled on the Intel machines, so each box presented 64 threads to the VMmark test you can see below. AMD is hoping to highlight the difference in pricing: while their servers may perform about 25% slower than an Intel-based server, they cost 30% less to purchase, which in racks costing $10,000 or more adds up to some significant savings. The Register also talks about the future of AMD's servers; we know that the Abu Dhabi Opteron 6300s for two- and four-socket machines, the Seoul Opteron 4300s for two- and single-socket machines, and the Delhi Opteron 3300s for single-socket boxes will arrive staggered throughout this year and next, offering some new hope for AMD's processing power. They also touch on SeaMicro and the interconnect technology AMD purchased, which could see next-generation Opterons working with FirePro cards to really start to offer something new from AMD and a big jump in performance compared to their current server offerings.
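AMD's value pitch boils down to simple arithmetic: give up roughly a quarter of the performance, save nearly a third of the price. A quick sketch using only the percentages from the paragraph above (the 1.00 baseline is just a normalization, not a real price or benchmark score):

```python
# Relative value math behind AMD's pitch (percentages from the article;
# the Intel baseline of 1.00 is a normalization, not an actual figure)
intel_perf, intel_cost = 1.00, 1.00       # Xeon E5-2690 box as baseline
amd_perf = intel_perf * (1 - 0.25)        # ~25% slower
amd_cost = intel_cost * (1 - 0.30)        # ~30% cheaper

# Performance per dollar, relative to the Intel box
amd_value = amd_perf / amd_cost
print(f"AMD performance per dollar vs Intel: {amd_value:.2f}x")  # → 1.07x
```

In other words, on these numbers the Opteron box delivers roughly 7% more performance per dollar, which is the margin AMD is leaning on, assuming the workload scales across VMs rather than demanding peak single-box speed.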
"It is not as much fun to be in the server part of Advanced Micro Devices these days, with Intel surging in the server racket and expanding out to switching and storage with its Xeon processors and Intel more or less counting the substantial innovations that AMD's engineers crafted for the Opterons a decade ago. The good news if you like a good fight is that there is a whole new management and engineering team at AMD now, and they not only understand that AMD has to do some serious innovating, but they are itching for the fight."
Here is some more Tech News from around the web:
- Disable Java NOW, users told, as 0-day exploit hits web @ The Register
- Kingston launches enterprise SSDNow E100 drives @ The Inquirer
- AMD launches four TFLOPS Firepro S9000 accelerator @ The Inquirer
- Soft robots given veins that let them change their stripes @ Hack a Day
- Follow Our Live Coverage from LinuxCon and CloudOpen @ Linux.com
- Win The New iPad! @ eTeknix
Subject: Processors | June 8, 2012 - 03:51 PM | Jeremy Hellstrom
Tagged: ubuntu, linux, Intel, Ivy Bridge, compiler, virtualization
Phoronix have been very busy lately, getting their heads around the functionality of Ivy Bridge on Linux, and as these processors are much more compatible than their predecessors, that has resulted in a lot of testing. The majority of the testing focused on the performance of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64 on an i7-3770K using a wide variety of programs and benchmarks. Their initial findings favoured GCC over all other compilers, as it generally took the top spot, with LLVM having issues with some of their tests. They then started to play around with the instruction sets the processor was allowed to use; by disabling some of the new features they could emulate how the Ivy Bridge processor would perform if it were from a previous generation of chips, which is useful for judging the improvement in raw processing power. They finished up by testing its virtualization performance, comparing bare metal against the Kernel-based Virtual Machine (KVM) and Oracle VM VirtualBox. You can see how they compared right here.
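A stripped-down version of that kind of compiler shoot-out is easy to script yourself. The sketch below is my own illustration, not part of Phoronix's test suite: the tiny C test program and the two-compiler list are placeholders, and it simply times a compile-and-run cycle for whichever compilers are installed.

```python
import shutil
import subprocess
import time

# Placeholder micro-benchmark source, not a real Phoronix test
SOURCE = "bench.c"
with open(SOURCE, "w") as f:
    f.write("""
#include <stdio.h>
int main(void) {
    long sum = 0;
    for (long i = 0; i < 100000000; i++) sum += i;
    printf("%ld\\n", sum);
    return 0;
}
""")

# Time a full compile-and-run cycle with each compiler on the system
for cc in ("gcc", "clang"):
    if shutil.which(cc) is None:
        continue  # skip compilers that are not installed
    start = time.perf_counter()
    subprocess.run([cc, "-O2", SOURCE, "-o", "bench"], check=True)
    subprocess.run(["./bench"], check=True, capture_output=True)
    print(f"{cc}: {time.perf_counter() - start:.2f}s")
```

A real harness like Phoronix's would separate compile time from run time, repeat each measurement several times, and use far larger workloads, but the shape of the comparison is the same.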
"From an Intel Core i7 3770K "Ivy Bridge" system here is an 11-way compiler comparison to look at the performance of these popular code compilers on the latest-generation Intel hardware. Among the compilers being compared on Intel's Ivy Bridge platform are multiple releases of GCC, LLVM/Clang, DragonEgg, PathScale EKOPath, and Open64."
Here are some more Processor articles from around the web:
- Intel's ultrabook-bound Core i5-3427U processor @ The Tech Report
- Intel Core i5 3470 Review: HD 2500 Graphics Tested @ AnandTech
- Comparing Ivy Bridge vs. Sandy Bridge @ TechReviewSource
- EE Bookshelf: ARM Cortex M Architecture Overview @ Adafruit
- The Workstation & Server CPU Comparison Guide @ TechARP
- The Bulldozer Aftermath: Delving Even Deeper @ AnandTech
- AMD E-Series APU "Brazos 2.0" @ Bjorn3D
- AMD A8-3870K Black Edition & Hybrid Crossfire @ OC3D
- AMD A4 3400 APU @ Kitguru
Subject: Processors | April 3, 2012 - 12:43 PM | Jeremy Hellstrom
Tagged: virtualization, ubuntu 12.04, Sandy Bridge E, Intel, FX 8150, Core i7 3960X, bulldozer, amd
Phoronix is taking the latest Ubuntu release and testing the performance of AMD's FX-8150 against Intel's Core i7-3960X to see their relative performance in a virtual environment. Both machines had issues: Xen had critical problems which prevented it from running on the Bulldozer and ASUS motherboard system, while the Sandy Bridge chip had issues with VirtualBox. The testing was not so much a comparison of the performance difference between the two chips as a test of how efficiently these processors run tasks when virtualized. As both chips averaged 90%+ of bare-metal performance when virtualized, you can see that both architectures have come a long way in this particular usage.
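The "percentage of base performance" metric here is simply the virtualized benchmark score divided by the bare-metal score on the same hardware. A minimal sketch with illustrative numbers (not Phoronix's actual results):

```python
# Virtualization efficiency: virtualized score as a share of bare metal.
# Scores below are illustrative stand-ins, not Phoronix's measurements.
bare_metal_score = 100.0   # benchmark score on the host OS directly
kvm_score = 92.5           # same benchmark inside a KVM guest

efficiency = kvm_score / bare_metal_score * 100
print(f"KVM retains {efficiency:.1f}% of bare-metal performance")
```

When that ratio averages above 90% across a benchmark suite, as it did for both the FX and the Core i7 here, the overhead of the hypervisor has largely stopped being the bottleneck for CPU-bound work.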
Also, keep your eyes out for a CPU review from Josh which should be arriving soon.
"With the upcoming availability of Ubuntu 12.04 "Precise Pangolin" being a Long-Term Support (LTS) release that will be quickly making its way into many enterprise environments, here's a look at the virtualization performance of this popular Linux distribution. In particular, being looked at is the Linux virtualization performance of KVM, Xen, and Oracle VirtualBox compared to bare metal when using Intel Sandy Bridge Extreme and AMD Bulldozer hardware."
Here are some more Processor articles from around the web:
- Xeon E5 2600: Interview with Intel IT's Ajay Chandramouly @ TechSpot
- Intel To Launch Ivy Bridge Desktop Processors This Week @ TechARP
- Mobile CPU Comparison Guide @ TechARP
- Intel Ivy Bridge Overclocking with the Core i7 3770K and Core i5 3570K CPUs @ Tweaktown
- AMD A8-3870 FM1 CPU @ Rbmods
Subject: General Tech, Mobile | January 11, 2012 - 03:48 AM | Tim Verry
Tagged: virtualization, tegra, Lucidlogix, gpu, gaming, game, embedded, CES2012, CES
Earlier today Lucid (LucidLogix), the company behind quite a few GPU virtualization technologies, announced yet another piece of GPU virtualization software. This time, however, instead of wrangling as much performance as possible from multi-GPU beasts, the technology, codenamed "XLR8", is aimed at the mobile market of tablets, smartphones, and laptops with integrated graphics. Such products are powered by the integrated GPUs in AMD's APUs and Intel's Sandy Bridge and Ivy Bridge processors, and by the GPUs in mobile SoCs (systems on a chip) like Nvidia's Tegra and ARM's Mali graphics processors. XLR8 uses "unique CPU multithreading" to feed the mobile GPUs as efficiently as possible.
According to Lucid, many PC graphics issues are magnified when it comes to embedded GPUs, including visual tearing, pipeline inefficiencies, power management, and artifacting. Offir Remez, president of Lucid, further stated that most of the big, popular PC games have playability issues on mobile platforms and on computers with integrated graphics. "If it's got a GPU, we can improve the end user experience."
The company further explained that the XLR8 technology works by disabling unnecessary and redundant processes, in addition to its "unique multithreading", to improve system (gaming) responsiveness by up to 200 percent. The XLR8 software monitors battery drain and power draw while shutting down background processes to increase CPU frame generation and minimize redundant GPU rendering work.
If this sounds a lot like marketing speak, it certainly does. On the other hand, Lucid has been able to push some useful virtualization technology onto desktops, so maybe mobile platforms are just the next step for the company. Lucid is currently demonstrating the XLR8 software in private at CES, and it is being tested by hardware partners. Mobile SoCs are getting faster and more powerful, and on battery-powered devices there is always room for efficiency improvements. Once reviewers manage to get their hands on some actual hardware and XLR8 is past the concept/testing stage, you can bet that people will have a better understanding of what exactly XLR8 is capable of.
PC Perspective's CES 2012 coverage is sponsored by MSI Computer.
Follow all of our coverage of the show at http://pcper.com/ces!