Subject: Editorial, General Tech | June 20, 2011 - 03:24 AM | Tim Verry
Tagged: simulator, networking, Internet, cyber warfare
Our world hosts numerous physical acts of aggression every day, and until a few years ago those acts remained in the (relatively) easily comprehensible physical world. However, the millions of connected servers and clients that overlay the world's nations have rapidly become host to what is known as “cyber warfare”: subversion of and attacks against another people or nation through electronic means, whether by targeting its people or its electronic and Internet-based infrastructure.
While physical acts of aggression are comparatively easy to examine, gather evidence on, and attribute to the responsible parties, attacks on the Internet are generally the exact opposite. Thanks to the anonymity of the Internet, it is much more difficult to determine the originator of an attack. Further, there is an unsettled ethical debate over whether a physical response, in the form of military action, is appropriate retaliation for an online attack.
It seems the Pentagon is seeking answers to the questions of attack attribution and appropriate retaliation through an Internet simulator dubbed the National Cyber Range. According to Computerworld, two competing designs for the simulator are being constructed under DARPA grants: one by Lockheed Martin for $30.8 million USD and one by the Johns Hopkins University Applied Physics Laboratory for $24.7 million USD.
The National Cyber Range is designed to mimic human behavior in response to various DEFCON and INFOCON (Information Operations Condition) levels. It will allow the Pentagon and authorized parties to study the effectiveness of war plan execution as it simulates offensive and defensive actions at the scale of nation-backed cyber warfare. Once DARPA has chosen the final National Cyber Range design from the two competing projects (by Johns Hopkins and Lockheed Martin), the government intends to build a toolkit that would let it easily transfer and conduct cyber warfare testing from any facility.
Image courtesy Kurtis Scaletta via Flickr Creative Commons.
Subject: General Tech, Mobile | June 19, 2011 - 12:22 AM | Scott Michaud
Tagged: tablet, sony, S2, S1
We are going to see quite a few Android-based tablets come out in the next few months as the floodgates open for tablet makers. We have been reporting on strong rumors pointing to Amazon stepping into the tablet space this fall to extend its Kindle portfolio. Amazon is generally very successful when it decides to enter a market, yet that did not deter Sony from preparing to dive into the tablet space as well. Sony is preparing to launch a 9-inch tablet and a dual-screen 5.5-inch tablet in the autumn, and has released a video ad campaign to build hype for the event.
This “Two Will” Pass
As you can tell from watching the video, it says little about the products except that they slide really quickly, absolutely love someone, cast ominous shadows, and can kill action figures with lightbulb mind bullets. Sony did mention that this is just the first episode of five, so it is possible that the later videos will be more informative. However, if you just want to see what an Echochrome 2-esque city has to do with Android tablets, be sure to watch the next four commercials.
We apologize for the lack of a podcast, but you don't often get a chance to see your city humiliate itself.
Subject: General Tech | June 17, 2011 - 06:36 PM | Jeremy Hellstrom
Tagged: friday, PC Perspective Forums
Before your weekly tour through the PC Perspective Forums, it would behoove you to check out the bottom of some of our front page stories. If you click on the comments link, or just scroll down after clicking on an article, you will notice it is possible to have a discussion about that article right there on the front page with other readers and with the creator of the review or news post. It is easy and you don't even need to sign up, though we would prefer that you do, as membership at PC Perspective does have its privileges. If you want to remain anonymous or unverified then certainly do, though we do require you to fill in something in the email address box and complete a captcha, as there are still spammers out there on the internets.
For really in-depth advice and opinions, though, you should head off to the PC Perspective Forums. We can't give you in-depth instructions on using Microsoft's debugging tools in the comments section, but it is a piece of cake (slightly old cake, but still) to provide step-by-step instructions with pictures in the Forum itself. Some things just shouldn't be on the front page, but in the forums you will find kind souls ready to help your wetware as well as your hardware. Sometimes you will even find independent reviews in the Forums.
In the Cases'n'Cooling Forum, a new case mod has appeared; the Blood Ice HAF 922 is worth a look whether you are into case modding or just want to see an impressive tool set and workbench. In the Storage Forum a member is having an unpleasant time with a recent SSD upgrade as is someone in the Linux Forum.
The live watchers are already upset with us and the rest will just be receiving the bad news, but for the first time in quite a while we failed to provide our dedicated viewers a fresh PC Perspective Podcast on Wednesday. With Ryan in Seattle schmoozing with AMD and a pending riot of Vancouver residents, we decided to call it off. Perhaps next week we shall torture you with a double length episode?
Subject: General Tech | June 17, 2011 - 02:37 PM | Scott Michaud
We have long been battling online menaces looking to make money off the grief of others. It used to be simple for an attack to succeed: release virus; ???; profit. Now that worms are much less common, the focus has shifted from invading a person's computer to tricking the person into letting the attacker in, or attacking the service they are accessing. Now, what was once a far-fetched joke by a popular comic strip is true: people are being contacted at home and told to infect their own computers.
Your call is VERY important to us.
The story for security has always been the same: be careful what you do, keep your attack surface as small as possible, and limit the damage in the event of a breach. You need to be aware, regardless of what platform you use, that you are only as safe as your own vigilance. If someone is pressuring you to do something quickly, they are likely playing on your complacency by distracting you with a sense of urgency. The disappointing part is that in the heat of the moment even someone aware of these attacks could still be susceptible to them, because social engineering is simply very effective.
All of the above said, the silver lining to this whole problem is that the attackers are getting substantially more desperate, which means it is only a matter of time before the pool of attackers shrinks due to lack of profitability. The problem will never go away, but as the difficulty steadily increases for the attackers (which it is, otherwise they would not be so inventive) the draw of the money will become much less enticing.
Subject: General Tech | June 17, 2011 - 02:24 PM | Jeremy Hellstrom
Tagged: TSMC, southern islands, northern islands, llano, global foundries, arm, amd, 40nm, 32nm, 28nm
Back in April there was a kerfuffle in the news about a deal penned between AMD, Global Foundries and TSMC. It is not worth repeating completely as you can follow the story via the previous link; suffice it to say that it did not indicate problems with the relationship between AMD and Global Foundries.
The previous post was specifically about 40nm and 32nm process chips; today, however, we hear from DigiTimes that TSMC has scored a deal with AMD for the 28nm Southern Islands GPUs, of which we have seen much recently, as well as the 28nm Krishna and Wichita APUs. The 40nm Northern Islands GPUs will also continue to be produced by TSMC. That leaves a lot of production capacity free at Global Foundries to work on ARM processors.
"AMD reportedly has completed the tape-out of its next-generation GPU, codenamed Southern Islands, on Taiwan Semiconductor Manufacturing Company's (TSMC) 28nm process with High-k Metal Gate (HKMG) technology, according to a Chinese-language Commercial Times report. The chip is expected to enter mass production at the end of 2011.
TSMC will also be AMD's major foundry partner for the 28nm Krishna and Wichita accelerated processing units (APUs), with volume production set to begin in the first half of 2012, the report said.
TSMC reportedly contract manufactures the Ontario, Zacate and Desna APUs for AMD as well as the Northern Island family of GPUs. All of these use the foundry's 40nm process technology.
TSMC was quoted as saying in previous reports that it had begun equipment move-in for the phase one facility of a new 12-inch fab (Fab 15) with volume production of 28nm technology products slated for the fourth quarter of 2011. The foundry previously said it would begin moving equipment into the facility in June, with volume production expected to kick off in the first quarter of 2012."
Here is some more Tech News from around the web:
- ARM acquires Obsidian Software @ The Inquirer
- Mozilla pushes out final Firefox 5 test build @ The Register
- Sega Hacked @ XSReviews
- Tablets of 2011: What to Look For - June Update @ TechSpot
- A few thoughts on Ultrabooks @ The Tech Report
- Disabling Windows Pagefile & Hibernation to Reclaim SSD Space @ Techgage
- Overclockers Benchmarking Party II: Where the Bell Tolls!
- Post Computex 2011 - Part 2 @ Bjorn3D
Subject: General Tech, Graphics Cards, Mobile | June 17, 2011 - 04:35 AM | Scott Michaud
Tagged: webgl, microsoft
WebGL: Heaven or Hell?
(Image from MrDoob WebGL demo; contains Lucy model from Stanford 3D repository)
WebGL is an API very similar to OpenGL ES 2.0, the API used for OpenGL features in embedded systems, particularly smartphones. The goal of WebGL is to provide a lightweight, CSS-obeying system for websites that require advanced 3D graphics, shaders, or even general-purpose calculations performed on the shader units of the client's GPU. Mozilla and Google currently support it in their public browsers, with Opera and Apple shipping support in the near future. Microsoft has stated that allowing third-party websites that level of access to the hardware is dangerous, as security vulnerabilities that formerly needed to be exploited locally can now be exploited from the web browser. This is an area of expertise Microsoft knows all too well from its past attempts at active(x)ly adding scripting functionality to the web browser, which evolved into a decade-long game of whack-a-mole for security holes.
But skeptics of Microsoft's position could easily point out that it has singled out the one standard based on OpenGL, a competitor to its still-cherished DirectX. Regardless of Microsoft's motives, the statement seems to put to rest the question of whether Microsoft will be working toward implementing WebGL in any release of Internet Explorer currently in development.
Do you think Microsoft is warning its competitors about its past ActiveX woes, or is this more politically motivated? Comment below (registration not required.)
Subject: General Tech, Storage | June 16, 2011 - 03:02 PM | Scott Michaud
Tagged: ssd, Intel, enterprise
Intel is currently in the process of releasing its 2011 lineup of solid state drives. A lot of news and products came out regarding the consumer 300-series and enthusiast 500-series lines; however, things have been pretty silent regarding the enterprise 700-series products. That changed recently with the release of specifications, brought to light by AnandTech's coverage of the German hardware website ComputerBase.de.
And how does it compare to OCZ?
Intel will be releasing two enterprise SSDs: the SATA 3Gb/s based 710 SSD, codenamed Lyndonville, and the PCI Express 2.0 based 720 SSD, codenamed Ramsdale. The SATA-based 710 will feature 25nm MLC-HET flash at capacities of 100, 200, and 300 GB, with read/write speeds of 270/210 MB/s, 35,000/3,300 read/write IOPS at 4KB, and a 64MB cache. The PCIe-based 720 will feature 34nm SLC flash at capacities of 200 and 400 GB, and will be substantially faster than the 710, with read/write speeds of 2200/1800 MB/s, 180,000/56,000 read/write IOPS at 4KB, and a 512MB cache. On the security front, the 710 will support 128-bit AES encryption while the 720 will support 256-bit AES.
While there has been no hint of pricing for these drives, Intel is still expected to make a second quarter release date for the SATA-based 710. If you are looking for a PCI Express SSD you will need to be a bit more patient, as the 720 is not expected until the fourth quarter. It will be interesting to see how the Intel vs. OCZ fight for dominance in the PCIe-based SSD space plays out in 2012.
Subject: General Tech | June 16, 2011 - 12:57 PM | Jeremy Hellstrom
Tagged: amd, Intel, nvidia
In some sort of bizarre voyeuristic hardware love/hate triangle, AMD, Intel and NVIDIA are all semi-intertwined and being observed by Microsoft. Speaking with The Inquirer, Leslie Sobon, VP of product and platform marketing at AMD, stated that there was no chance Intel would attempt to purchase NVIDIA the way AMD did ATI. AMD's purchase was less about the rights to the Radeon series than about taking possession of the intellectual property ATI had built over a decade of creating GPUs; that IP led directly to the APUs AMD has recently released, which will likely become its main product line. Intel already has a working architecture that combines GPU and CPU and doesn't need to purchase another company's IP to develop that type of product.
There is another possible reason for purchasing NVIDIA, though, which has very little to do with its discrete graphics card IP and everything to do with Tegra and Fermi, two specialized products that Intel so far has no answer for. A vastly improved and shrunken Atom might be able to push Tegra off of mobile platforms, and perhaps specialized Sandy Bridge CPUs could accelerate computation the way Fermi products do, but so far there are no solid leads, only speculation.
If you learn more from your failures than your successes then Intel knows a lot about graphics.
"CHIP DESIGNER AMD believes that it is on a divergent path from Intel thanks to its accelerated processor unit (APU) and that Intel buying Nvidia "would never happen"."
Here is some more Tech News from around the web:
- Find Out if Your Passwords Were Leaked by LulzSec Right Here @ Gizmodo
- Adobe patches critical bugs in Flash and Reader @ The Register
- Umi, we hardly knew ye: contemplating the fate of the videophone in 2011 @ Ars Technica
- 'A SHARK attacked my ROBOT', gasps ex-Sun exec @ The Register
- We’ve got a real bone to pick with this mouse @ Hack a Day
- Fun Quotes from the AFDS Media Roundtable @ SemiAccurate
Subject: General Tech, Shows and Expos | June 15, 2011 - 09:14 PM | Scott Michaud
Tagged: opencl, amd, AFDS
If you develop applications that require more performance than a CPU alone can provide, you are probably having a gleeful week. Today Microsoft announced its competitor to OpenCL, and we have a large write-up about that aspect of the keynote address. If you are currently an OpenCL developer you are not left out, however, as AMD has announced new tools designed to make your life easier too.
General Purpose GPU utilities: Because BINK won't satisfy this crowd.
(Logo trademark Apple Inc.)
AMD’s spectrum of enhanced tools includes:
- gDEBugger: An OpenCL and OpenGL debugger, profiler, and memory analyzer released as a plugin for Visual Studio.
- Parallel Path Analyzer (PPA): A tool designed to profile data transfers and kernel execution across your system.
- Global Memory for Accelerators (GMAC) API: Lets developers use multiple devices without needing to manage multiple data buffers in both the CPU and the GPU.
- Task Manager API: A framework to manage scheduling kernels across devices.
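To give a sense of what these tools operate on, here is a rough sketch of the OpenCL pattern they target: a kernel plus the host-side copy and launch calls that a profiler like PPA would attribute time to and a debugger like gDEBugger would step through. The kernel and all names below are invented for illustration (they are not from AMD's announcement), and building it requires a vendor OpenCL SDK:

```cpp
#include <CL/cl.h>  // OpenCL host API; ships with vendor SDKs such as AMD APP

// Device-side code: a simple SAXPY kernel, written in OpenCL C and
// handed to the runtime as a string for compilation at run time.
const char* kernelSource =
    "__kernel void saxpy(__global float* y,                              \n"
    "                    __global const float* x,                        \n"
    "                    const float a)                                  \n"
    "{                                                                   \n"
    "    size_t i = get_global_id(0);  /* one work-item per element */   \n"
    "    y[i] = a * x[i] + y[i];                                         \n"
    "}                                                                   \n";

// Host-side fragment: the transfer and launch a profiler would time.
// Assumes 'queue', 'kernel', 'bufX', and 'bufY' were created earlier
// with the usual clCreateCommandQueue / clCreateKernel / clCreateBuffer.
void launchSaxpy(cl_command_queue queue, cl_kernel kernel,
                 cl_mem bufX, cl_mem bufY, const float* x, size_t n)
{
    float a = 2.0f;
    // CPU -> GPU copy: exactly the kind of transfer GMAC aims to hide.
    clEnqueueWriteBuffer(queue, bufX, CL_TRUE, 0,
                         n * sizeof(float), x, 0, NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &bufY);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &bufX);
    clSetKernelArg(kernel, 2, sizeof(float), &a);
    // Launch n work-items in one dimension; scheduling kernels like this
    // across multiple devices is what the Task Manager API addresses.
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
}
```

The buffer boilerplate here is precisely what GMAC is meant to remove: with a single shared allocation visible to both CPU and GPU, the explicit clEnqueueWriteBuffer round-trips disappear.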
These tools and utilities should make software development easier and encourage more developers to take a risk on the new technology. The GPU has already proven itself worthy of more and more important tasks, and it is only a matter of time before it is ubiquitous enough to be considered as essential a component as the CPU itself. As an ironic aside, that should spur the adoption of PC gaming, given how many people would then have sufficient hardware.
Subject: Editorial, General Tech, Shows and Expos | June 15, 2011 - 05:58 PM | Ryan Shrout
Tagged: programming, microsoft, fusion, c++, amp, AFDS
During this morning's keynote at the AMD Fusion Developer Summit, Microsoft's Herb Sutter went on stage to discuss the problems and solutions involved around programming and developing for multi-processing systems and heterogeneous computing systems in particular. While the problems are definitely something we have discussed before at PC Perspective, the new solution that was showcased was significant.
C++ AMP (accelerated massive parallelism) was announced as a new extension to Visual Studio and the C++ programming language to help developers take advantage of the highly parallel and heterogeneous computing environments of today and the future. The new programming model uses C++ syntax and will be available in the next version of Visual Studio with "bits of it coming later this year." Sorry, no hard release date was given when probed.
Perhaps just as significant is the fact that Microsoft announced the C++ AMP standard would be an open specification, and it will allow other compilers to integrate support for it. Unlike C#, then, C++ AMP has a chance to become a new dominant standard in the programming world as the need for parallel computing expands. While OpenCL was previously the only option that promised developers easy utilization of ALL the computing power in a computing device, C++ AMP gives them another option with the full weight of Microsoft behind it.
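For flavor, code written against the announced model looks roughly like the snippet below. This is a sketch pieced together from the syntax shown at the keynote (array_view, parallel_for_each, and the restrict(amp) lambda clause); since the extension only ships with a future Visual Studio release, it will not compile on today's toolchains:

```cpp
#include <amp.h>     // C++ AMP header, coming in a future Visual Studio
#include <vector>
using namespace concurrency;

// Element-wise vector add, offloaded to whatever accelerator
// (GPU, or a CPU fallback) the runtime selects.
void add_arrays(std::vector<float>& a, const std::vector<float>& b)
{
    array_view<float, 1> av((int)a.size(), a);        // wraps host memory;
    array_view<const float, 1> bv((int)b.size(), b);  // runtime manages copies

    // The lambda runs once per index; restrict(amp) marks it as code
    // that must be compilable for the accelerator.
    parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
        av[i] += bv[i];
    });

    av.synchronize();  // copy results back into the host vector
}
```

The appeal of the design is that this is still ordinary C++: the same source can fall back to the CPU when no capable GPU is present, which is how a single executable could scale across such different hardware in the demo.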
To demonstrate the capability of C++ AMP, Microsoft showed a rigid body simulation program that ran on multiple computers and devices from a single executable file and was able to scale in performance from 3 GFLOPS on the x86 cores of Llano, to 650 GFLOPS on the combined APU, to 830 GFLOPS with a pair of discrete Radeon HD 5800 GPUs added. The same executable file ran on an AMD E-series APU powered tablet at 16 GFLOPS with 16,000 particles. This is the promise of heterogeneous programming languages and is the gateway necessary for consumers and businesses to truly take advantage of the processors that AMD (and other companies) are building today.
If you want programs other than video transcoding apps to really push the promise of heterogeneous computing, then the announcement of C++ AMP is very, very big news.