Manufacturer: Microsoft

If Microsoft were left to their own devices...

Microsoft's Financial Analyst Meeting 2013 set the stage, literally, for Steve Ballmer's last annual keynote to investors. The speech promoted Microsoft, its potential, and its unique position in the industry. He proclaimed, firmly, their desire to be a devices and services company.

The explanation, however, does not befit either industry.


Ballmer noted, early in the keynote, how Bing is the only notable competitor to Google Search. He wanted to make it clear, to investors, that Microsoft needs to remain in the search business to challenge Google. The implication is that Microsoft can fill the cracks where Google does not, or even cannot, and establish a business from that foothold. I agree. Proprietary products (which, by the way, are not inherently bad), such as Google Search, require one or more rivals to fill the overlooked or under-served niches. A legitimate business can be established from that basis.

It is the following, similar, statement which troubles me.

Ballmer later mentioned, in the same vein, how Microsoft is among the few making fundamental operating system investments. Like search, the implication is that operating systems are proprietary products which must compete against one another. This, albeit subtly, does not match their vision as a devices and services company. The point of a proprietary platform is to own the ecosystem, from end to end, and to derive your value from that control. The product is not a device; the product is not a service; the product is a platform. This makes sense to them because, from birth, they were a company which sold platforms.

A platform as a product is not a device, nor is it a service.

Keep reading to see what Microsoft... probably still cannot.

Subject: Editorial
Manufacturer: AMD

Retiring the Workhorses

There is an inevitable shift coming.  Honestly, this has been quite obvious for some time, but it has just taken AMD a bit longer to get here than many have expected.  Some years back we saw AMD release their new motto, “The Future is Fusion”.  While many thought it somewhat interesting and trite, it actually foreshadowed the massive shift from monolithic CPU cores to their APUs.  Right now AMD’s APUs are doing “ok” in desktops and are gaining traction in mobile applications.  What most people do not realize is that AMD will be going all APU all the time in the very near future.


We can look over the past few years and see that AMD has been headed in this direction for some time, but they simply have not had all the materials in place to make this dramatic shift.  To get a better understanding of where AMD is heading, how they plan to address multiple markets, and what kind of pressures they are under, we have to look at the two major non-APU markets that AMD is currently hanging onto by a thread.  In some ways, timing has been against AMD, not to mention available process technologies.

Click here to read the entire editorial!


Subject: Editorial
Manufacturer: Quakecon

The Densest 2.5 Hours Imaginable

John Carmack again kicked off this year's Quakecon with an extended technical discussion about nearly every topic bouncing around his head.  These speeches are somewhat legendary for the depth of discussion on what are often esoteric topics, but they typically expose some very important sea changes in the industry, both in terms of hardware and software.  John was a bit more organized and succinct this year by keeping things in check with some 300 lines of discussion that he thought would be interesting for us.
Next Generation Consoles
John cut to the chase and started off the discussion about the upcoming generation of consoles.  John was both happy and sad that we are moving to a new generation of products.  He feels that they really have a good handle on the optimizations of the previous generation of consoles, allowing them to extract every ounce of performance and create some interesting content.  The advantages of a new generation of consoles are very obvious, and that is particularly exciting for John.
The two major consoles are very, very similar.  There are of course differences between the two, but the basis for the two is very much the same.  As we well know, the two consoles feature APUs designed by AMD and share a lot of similarities.  The Sony hardware is a bit more robust and has more memory bandwidth, but when all is said and done, the similarities outweigh the differences by a large margin.  John mentioned that this was very good for AMD, as they are still in second place in performance with their current architectures compared to Intel and its world class process technology.
Some years back there was a thought that Intel would in fact take over the next generation of consoles.  Larrabee was an interesting architecture in that it melded x86 CPUs with robust vector units in a high speed fabric on a chip.  With their prowess in process technology, this seemed a logical move for the console makers.  Time has passed, and Intel did not execute on Larrabee as many had expected.  While the technology has been implemented in the current Xeon Phi product, it has never hit the consumer world.
Tagged: steam, origin, ea

What do they want Origin to be?

GamesIndustry International conducted an interview with EA's Executive Vice President, Andrew Wilson, during this year's Electronic Entertainment Expo (E3 2013). Wilson was on the team which originally designed Origin, before marketing decided to write off all the DOS-era nostalgia they once held with PC gamers by recycling an old web address.

The service, itself, has also changed since the original project.

"Over the years ... there've been some permutations of that vision that have manifested as part of Origin," Wilson said. "I think what we've done is taken a step back and said 'Wow, we've actually done some really cool things with Origin.' It is by no means perfect, but we've done some pretty cool things. As you say, the plumbing is there. What can we do now to really think about Origin in the next generation?"

Fans of Sim City, who faithfully pre-ordered, will likely argue that Origin does not have enough sewage treatment at the end of their plumbing and the out-flow defecated all over their experience. A good service can be built atop the foundations of Origin; but, I have little confidence in their ability to realize that potential.

Wilson, on the other hand, believes they now "get it".

One assertion deals with customers who purchase more than one game. He argues that multiple update and online services are required and that is a barrier for users who desire a second, third, or hundredth purchase thereafter. The belief is that Origin can create a single experience for users and remove the barrier that inhibits those further purchases. In practice, Origin ends up being a bigger hurdle than any single game's service. It casts bad faith over their entire library and fails to justify itself: games, such as Sim City, update on their own, and old titles still have their online services taken offline.

What it comes down to is lack of focus. Wilson believes development of Origin was too focused on the transaction, and that led to bad faith, presumably because customers would smell the disingenuous salesman. Good Old Games (GOG), on the other hand, successfully focused on the transaction. The difference? GOG gets out of your way immediately after the transaction, leaving you with just the game and the bonus pack-ins you ordered, not DRM and a redundant social network.

Steam is heavily focused as a service and that is where EA desires Origin to be. The problem? Valve has set a high bar for EA to contend with. Steam has built customer faith consistently, albeit not perfectly, over its life with its user-centric vision. Not only would EA need to be substantially better than Steam, it is fighting with a severe handicap from their history of shutting down gaming servers and threatening to delicense merchandise if their customers upset them.

A successful Origin will need to carefully consider what it wants to be and strive to win at that goal. While that is possible, EA still seems content to handicap themselves and then refuse to own the results of their decisions.

Manufacturer: Adobe

OpenCL Support in a Meaningful Way

Adobe has had OpenCL support since last year. You would never benefit from its inclusion unless you ran one of two AMD mobility chips under Mac OS X Lion, but it was there. Creative Cloud, predictably, furthers this trend with additional GPGPU support for applications like Photoshop and Premiere Pro.

This leads to some interesting points:

  • How OpenCL is changing the landscape between Intel and AMD
  • What GPU support is curiously absent from Adobe CC for one reason or another
  • Which GPUs are supported despite not... existing, officially.


This should be very big news for our readers who do production work, whether professionally or as a hobby. If not, how about a little information about certain GPUs that are designed to compete with the GeForce 700-series?

Read on for our thoughts, after the break.

Manufacturer: Intel

A new era for computing? Or just a bit of catching up?

Early Tuesday, at 2am for viewers in eastern North America, Intel delivered their Computex 2013 keynote to officially kick off Haswell. Unlike ASUS the night prior, Intel did not announce a barrage of new products; the purpose was to promote future technologies and the new products of their OEM and ODM partners. In all, there was a pretty wide variety of discussed topics.


Intel carried on with the computational era analogy: the '80s were dominated by mainframes, the '90s were predominantly client-server, and the 2000s brought the internet to the forefront. While true, they did not explicitly mention how each era never actually died but rather just bled through: we still use mainframes, especially with cloud infrastructure; we still use client-server; and just about no one would argue that the internet has been displaced, despite its struggle against semi-native apps.

Intel believes that we are currently in the two-in-one era, by which they probably mean "multiple-in-one" due to devices such as the ASUS Transformer Book Trio. They created a tagline, almost a mantra, illustrating their vision:

"It's a laptop when you need it; it's a tablet when you want it."

But before elaborating, they wanted to discuss their position in the mobile market. They believe they are becoming a major player, with key design wins and performance that beats some incumbent systems on a chip (SoCs). The upcoming Silvermont architecture aims to fill the gaps below Haswell, driving smartphones and tablets and stretching upward to include entry-level notebooks and all-in-one PCs. The architecture promises up to three times the performance of the previous generation, or equivalent performance at roughly a fifth of the power.


Ryan discussed Silvermont last month; be sure to give his thoughts a browse for more depth.

Also, click to read on after the break for my thoughts on the Intel keynote.

Subject: Editorial

Good effort goes a long way

The wait has been long and anxious for Heart of the Swarm, the expansion to 2010's StarCraft 2: Wings of Liberty. Blizzard originally hinted at a very rapid release schedule which did not exactly come to fruition. The nearly three years of development time for Heart of the Swarm is longer than a single studio spends on a full Call of Duty title, although one could make a very credible argument that a Blizzard expansion requires more effort to create than said complete Call of Duty title.

But as Duke Nukem Forever demonstrated, a long time in development does not guarantee a fully baked product coming out the other end.

Blizzard games have always been highly entertaining albeit without deep artistic substance; their games are not first on the list for a university literature syllabus. But, there is a lot of room in life for engaging entertainment. In terms of the PC, Blizzard has always been one of the leading developers for the platform; they know how to deliver an exceptional PC experience if they choose to.

Watch the video and read on to find out if they did!

Subject: Editorial

Taking a Fresh Look at GLOBALFOUNDRIES

It has been a while since we last talked about GLOBALFOUNDRIES, and it is high time to do so.  So why the long wait between updates?  Well, I think the long and short of it is a lack of execution from their stated roadmaps from around 2009 on.  When GF first came on the scene they had a very aggressive roadmap about where their process technology would be and how it would be implemented.  I believe that GF first mentioned a working 28 nm process in an early 2011 timeframe.  There was a lot of excitement in some corners as people expected next generation GPUs to be available around then using that process node.


Fab 1 is the facility where all 32 nm SOI and most 28 nm HKMG are produced.

Obviously GF did not get that particular process up and running as expected.  In fact, they had some real issues getting 32 nm SOI running in a timely manner.  Llano was the first product GF produced on that particular node, as well as plenty of test wafers of Bulldozer parts.  Both were delayed from when they were initially expected to hit, and both had fabrication issues.  Time and money can fix most things when it comes to process technology, and eventually GF was able to solve what issues they had on their end.  32 nm SOI/HKMG is producing like gangbusters.  AMD has improved their designs on their end to make things a bit easier at GF as well.

While shoring up the 32 nm process was of extreme importance to GF, it seemingly took resources away from further developing 28 nm and below processes.  While work was still being done on these processes, the roadmap was far too aggressive for what they were able to accomplish.  The hits just kept coming though.  AMD cut back on 32 nm orders, which had a financial impact on both companies.  It was cheaper for AMD to renegotiate the contract and take a penalty rather than order chips that it simply could not sell.  GF then had lots of line space open on 32 nm SOI (Dresden) that could not be filled.  AMD then voided another contract, suffering a larger penalty, by opting to potentially utilize a second source for 28 nm HKMG production of their CPUs and APUs.  AMD obviously was very uncomfortable about where GF was with their 28 nm process.

During all of this time GF was working to get their Luther Forest FAB 8 up and running.  Building a new FAB is no small task.  This is a multi-billion dollar endeavor and any new FAB design will have complications.  Happily for GF, the development of this FAB has gone along seemingly according to plan.  The FAB has achieved every major milestone in construction and deployment.  Still, the risks involved with a FAB that could reach around $8 billion+ are immense.

2012 was not exactly the year that GF expected, or hoped for.  It was tough on them and their partners.  They also had more expenses, such as acquiring Chartered back in 2009 and then acquiring the rather significant stake that AMD had in the company in the first place.  During this time ATIC has been pumping money into GF to keep it afloat and to sustain its aspirations of being a major player in the fabrication industry.

Continue reading our editorial on the status of GLOBALFOUNDRIES going into 2013 and beyond!!

Subject: Editorial, Storage
Manufacturer: Various
Tagged: tlc, ssd, slc, mlc, endurance

Taking an Accurate Look at SSD Write Endurance

Last year, I posted a rebuttal to a paper describing the future of flash memory as ‘bleak’. The paper went through great (and convoluted) lengths to paint a tragic picture of flash memory endurance moving forward. Yesterday a newer paper hit Slashdot, this one doing just the opposite, and going so far as to assume production flash memory handling up to 1 million erase cycles. You’d think that since I’m constantly pushing flash memory as a viable, reliable, and super-fast successor to Hard Disks (aka 'Spinning Rust'), that I’d just sit back on this one and let it fly. After all, it helps make my argument! Well, I can’t, because if there are errors published on a topic so important to me, it’s in the interest of journalistic integrity that I must now post an equal and opposite rebuttal to this one – even if it works against my case.

First I’m going to invite you to read through the paper in question. After doing so, I’m now going to pick it apart. Unfortunately I’m crunched for time today, so I’m going to reduce my dissertation into the form of some simple bulleted points:

  • Max data write speed did not take into account 8b/10b encoding, meaning 6Gb/sec = 600MB/sec, not 750MB/sec.
  • The flash *page* size (8KB) and block sizes (2MB) chosen more closely resemble that of MLC parts (not SLC – see below for why this is important).
  • The paper makes no reference to Write Amplification.

Perhaps the most glaring and significant issue is that all of the formulas, while correct, fail to consider the most important factor when dealing with flash memory writes – Write Amplification.

Before getting into it, I'll reference the excellent graphic that Anand put in his SSD Relapse piece:


SSD controllers combine smaller writes into larger ones in an attempt to speed up the effective write speed. This falls flat once all flash blocks have been written to at least once. From that point forward, the SSD must play musical chairs with the data on each and every small write. In a bad case, a single 4KB write turns into a 2MB write. For that example, Write Amplification would be a factor of 500, meaning the flash memory is cycled at 500x the rate calculated in the paper. Sure that’s an extreme example, but the point is that without referencing amplification at all, it is assumed to be a factor of 1, which would only be the case if you were only writing 2MB blocks of data to the SSD. This is almost never the case, regardless of Operating System.
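To put rough numbers to those corrections, here is a quick back-of-the-envelope sketch in Python (my own, not from either paper): it applies the 8b/10b overhead from the bullet list above and then shows how an assumed write amplification factor multiplies flash wear. The drive capacity, rated P/E cycles, and the "everyday" amplification factor are illustrative placeholders, not measured values.

```python
# Back-of-the-envelope SSD write math (illustrative numbers, not from the paper).

# 1) Usable SATA bandwidth with 8b/10b encoding:
#    a 6 Gb/s link carries 8 data bits for every 10 bits on the wire.
line_rate_gbps = 6
usable_mb_per_s = line_rate_gbps * 1000 * (8 / 10) / 8   # = 600 MB/s, not 750 MB/s
print(f"Max host write rate: {usable_mb_per_s:.0f} MB/s")

# 2) Write amplification: the extreme case above turns a 4KB host write into a
#    2MB flash block write (2048 KB / 4 KB = 512x, which the text rounds to ~500).
worst_case_wa = (2 * 1024) / 4
print(f"Worst-case write amplification: {worst_case_wa:.0f}x")

# 3) Effect on endurance for a hypothetical drive (placeholder values, not measured):
capacity_gb = 256        # assumed drive capacity
rated_pe_cycles = 3000   # assumed MLC program/erase rating
everyday_wa = 10         # assumed everyday write amplification factor

host_tb_before_wearout = capacity_gb * rated_pe_cycles / everyday_wa / 1024
print(f"Host data writable before wear-out at WA={everyday_wa}: ~{host_tb_before_wearout:.0f} TB")
```

However the inputs are tweaked, the takeaway is the same: any endurance formula that silently assumes an amplification factor of 1 overstates real-world endurance by whatever the true factor happens to be.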

After posters on Slashdot called out the author on his assumptions of rated P/E cycles, he went back and added two links to justify his figures. The problem is that the first link points to a 2005 data sheet for 90nm SLC flash. Samsung’s 90nm flash was 1Gb per die (128MB). The packages were available with up to 4 dies each, and scaling up to a typical 16-chip SSD, that only gives you an 8GB SSD. Not very practical. That’s not to say 100k is an inaccurate figure for SLC endurance. It’s just a really bad reference to use is all. Here's a better one from the Flash Memory Summit a couple of years back:


The second link was a 2008 PR blast from Micron, based on their proposed pushing of the 34nm process to its limits. “One Million Write Cycles” was nothing more than a tag line for an achievement accomplished in a lab under ideal conditions. That figure was never reached in anything you could actually buy in a SATA SSD. A better reference would be from that same presentation at the Summit:


This shows larger process nodes hitting even beyond 1 million cycles (given sufficient additional error bits used for error correction), but remember it has to be something that is available and in a usable capacity to be practical for real world use, and that’s just not the case for the flash in the above chart.

At the end of the day, manufacturers must balance cost, capacity, and longevity. This forces a push towards smaller processes (for more capacity per cost), with the limit being how much endurance they are willing to give up in the process. In the end they choose based on what the customer needs. Enterprise use leans towards SLC or eMLC, as they are willing to spend more for the gain in endurance. Typical PC users get standard MLC and now even TLC, which are *good enough* for that application. It's worth noting that most SSD failures are not due to burning out all of the available flash P/E cycles. The vast majority are due to infant mortality failures of the controller or even due to buggy firmware. I've never written enough to any single consumer SSD (in normal operation) to wear out all of the flash. The closest I've come to a flash-related failure was when an ioDrive failed during testing because excessive heat caused a solder pad to lift on one of the flash chips.

All of this said, I’d love to see a revisit to the author’s well-structured paper – only based on the corrected assumptions I’ve outlined above. *That* is the type of paper I would reference when attempting to make *accurate* arguments for SSD endurance.

Manufacturer: PC Perspective

Windows RT: Runtime? Or Get Up and Run Time?

Update #1, 10/26/2012: Apparently it does not take long to see the first tremors of certification woes. A Windows developer by the name of Jeffrey Harmon allegedly wrestled with Microsoft certification support 6 times over 2 months because his app did not meet minimum standards. He was not given clear and specific reasons why -- apparently little more than a copy/paste of the regulations he failed to meet. Kind of what to expect from a closed platform... right? Imagine if some nonsensical terms become mandated, or other problems crop up.

Also, Microsoft has just said they will allow PEGI 18 games which would have received an ESRB M rating. Of course their regulations can and will change further over time... the point is the difference between a store refusing to carry a title and banishing it from the whole platform, even for limited sharing. That uproars are necessary at all, especially so early on and so frequently, should be a red flag for censorship to come. It could be over artistically-intentioned nudity or sexual themes. It could even be about something other than sex, language, and violence entirely.


Last month, I suggested that the transition to Windows RT bears the same hurdles as transitioning to Linux. Many obstacles blocking our path, like Adobe and PC gaming, are considering Linux; the rest have good reason to follow.

This month we receive Windows RT and Microsoft’s attempt to shackle us to it: Windows 8.


To be clear: Microsoft has large incentives to banish the legacy of Windows. The way Windows 8 is structured reduces it to a benign tumorous growth atop Windows RT. The applications we love and the openness we adore are confined to an app.

I will explain how you should hate this -- after I explain why and support it with evidence.

Microsoft is currently in the rare state of sharp and aggressive focus on a vision. Do not misrepresent this as greed: it is not. Microsoft must face countless jokes about security and stability. Microsoft designed Windows with strong slants towards convenience over security.

That ideology faded early into the life of Windows XP. How Windows operates is fundamentally different. Windows machines are quite secure, architecturally. Con-artists are getting desperate. Recent attacks are almost exclusively based on fear and deception of the user. Common examples are fake anti-virus software or fraudulent call center phone calls. We all win when attackers get innovative: survival of the fittest implies death of the weakest.

Continue reading why we think the Windows you Love is gone...

Manufacturer: PC Perspective

And Why the Industry Misses the Point


I am going to take a somewhat unpopular stance: I really like stereoscopic 3D. I also expect to change your mind and get you excited about stereoscopic 3D too - unless of course a circumstance such as monovision interferes with your ability to see 3D at all. I expect to succeed where the industry has failed simply because I will not ignore the benefits of 3D in my explanation.

Firstly - we see a crisp image when our brain is more clearly able to make out objects in a scene.

We typically have two major methods of increasing the crispness of an image: we either increase the resolution or we increase the contrast of the picture. As resolution increases we receive a finer grid of positional information to place and contain the objects in the scene. As contrast increases we receive a wider difference between the brightest points and the darkest points from a scene which prevents objects from blending together in a mess of grey.

We are also able to experience depth information by comparing the parallax effect across both of our eyes. We are able to encapsulate each object into a 3D volume and position each capsule at a more clearly defined distance. Encapsulated objects appear crisper because we can more clearly see them as sharply defined independent objects.
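As a rough illustration of how that parallax cue encodes distance, here is a minimal Python sketch (mine, not from the article) of the standard stereo relationship, where depth is inversely proportional to the disparity between the left-eye and right-eye views; the focal length and eye separation values are just placeholder assumptions.

```python
# Minimal stereo parallax sketch (illustrative values, not from the article).
# Standard pinhole stereo relationship: depth Z = f * B / d

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance to an object given the pixel disparity between the two views."""
    return focal_length_px * baseline_m / disparity_px

focal_length_px = 1000.0   # assumed focal length of the virtual camera, in pixels
baseline_m = 0.065         # roughly the spacing between human eyes (~65 mm)

for disparity_px in (50, 20, 10, 5):
    z = depth_from_disparity(focal_length_px, baseline_m, disparity_px)
    print(f"disparity {disparity_px:>2} px -> depth {z:5.2f} m")

# Nearby objects produce large disparities and are easy to 'encapsulate' as
# separate volumes; distant objects converge toward zero disparity and flatten out.
```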

Be careful with this stereoscopic 3D image. To see the 3D effect you must slowly cross your eyes until the two images align in the center. This should only be attempted by adults with fully developed eyes and without prior medical conditions. Also, sit a comfortable distance away so you do not need to cross your eyes too far inward, and rest your eyes whenever they feel strained. In short - do not pull an eye muscle or something; use common sense. Also, move your mouse cursor far away from the image, as it will break your focusing lock, and click on the image to make it full sized.


Again, be careful when crossing your eyes to see stereoscopic 3D and relax them when you are done.

The above image is a scene from Unreal Tournament 3 laid out in a cross-eyed 3D format. If you are safely able to experience the 3D image then I would like you to pay careful attention to how crisp the 3D image appears. Compare this level of crispness to either the left or right eye image by itself.

Which has the crisper picture quality?

That is basically why 3D is awesome: it makes your picture quality appear substantially better by giving your brain more information about the object. This effect can also play with how the brain perceives the world you present it: similar to how HDR tonal mapping plays with exposure ranges we cannot see and infrared photography plays with colors we cannot see to modify the photograph - which we can see - for surreal effects.

So what goes terribly wrong? Read on to the article to find out.

Manufacturer: PC Perspective
Tagged: windows 8, linux, bsd

Or: the countdown to a fresh Start.

Over time – and not necessarily much of it – usage of a platform can become a marriage. I trusted Windows, née MS-DOS, with guardianship over all of my precious applications which depend upon it. Chances are you too have trusted Microsoft or a similar proprietary platform holder to provide a household for your content.

It is time for a custody hearing.

These are the reasons why I still use Windows – and who could profit as home wreckers.

Windows 8 -- keep your rings. You are not ready for commitment.

1st Reason – Games


The most obvious leading topic.

Computer games have been dominated by Windows for quite some time now. When you find a PC game at retail or online you will find either a Windows trademark or the occasional half-eaten fruit somewhere on the page or packaging.

One of the leading reasons for the success of the PC platform is the culture of backwards compatibility. Though the platform has been rumored dead ad infinitum it still exists – surrounded by a wasteland of old deprecated consoles. I still play games from past decades on their original platform.

Check in after the break to find out why I still use Windows.

Manufacturer: PC Perspective

I say let the world go to hell

… but I should always have my tea. (Notes From Underground, 1864)

You can praise video games as art to justify their impact on your life – but do you really consider them art?


Best before the servers are taken down, because you're probably not playing it after.

Art allows the author to express their humanity and permits the user to consider that perspective. We become cultured when we experiment with, and to some extent understand, difficult problems of human nature. Ideas are transmitted about topics which we cannot otherwise understand. We are affected positively as humans in society when these issues are raised in a safe medium.

Video games, unlike most other mediums, encourage the user to coat the creation with their own expressions. The player can influence the content through their dialogue and decision-tree choices. The player can accomplish challenges in their own unique way and talk about it over the water cooler. The player can also embed their own content as a direct form of expression. The medium will also mature as we further learn how to leverage interactivity to open a dialogue for these artistic topics in completely new ways and not necessarily in a single direction.

Consciously or otherwise – users will express themselves.

With all of the potential for art that the medium allows it is a shame that – time and time again – the industry and its users neuter its artistic capabilities in the name of greed, simplicity, or merely fear.

Care to guess where I am headed? Buckle in.

Subject: Editorial, Systems
Manufacturer: General
Tagged: silent, legacy

The Premise

Most IT workers or computer enthusiasts tend to ‘accumulate’ computer and electronics gear over time. Over the years it is easy to end up with piles of old and outdated computer parts, components and electronics–whether it’s an old Pentium machine that your work was throwing out, RAM chips you no longer needed after your last upgrade, or an old CRT monitor that your cousin wasn’t sure what to do with. Tossing the accumulated hardware out with the next trash pickup doesn’t even enter the equation, because there’s that slight possibility you might need it someday.

I myself have one (or two, and maybe half an attic…) closet full of old stuff ranging from my old Commodore 64/1541 Floppy disk drive with Zork 5.25” floppies, to a set of four 30 pin 1 MB/70ns SIMM chips that cost $100 each as upgrades to my first 486 DX2/50 MHz Compudyne PC back in 1989. (Yes, you read that right, $100 for 1 MB of memory.) No matter if you have it all crammed into one closet or spread all over your house, you likely have a collection of gear dating back to the days of punch cards, single button joysticks, and InvisiClues guides.


Occasionally I’ll look into my own closet and lament all the ‘wasted’ technology that resides there. I’m convinced much of the hardware still has some sparks of life left. As a result, I am always looking for a reason to revive some of it from the dead. Since they’ve already been bought and paid for, it feels almost blasphemous to the technology gods not to do something with the hardware. In some cases, it might not be worth the effort (Windows Vista on an old Micron Transport Trek2 PII-300 laptop doesn’t end well for anyone). In other cases, you can build something fun or useful using parts that you have sitting around and are waiting for a new lease on life.

Continue reading our look at building a legacy PC with existing hardware you might already have!!

Subject: Editorial
Manufacturer: iD

3+ Hours of discussion later...


QuakeCon always begins with several hours of John Carmack talking about very technical things.  The nominally two hour keynote typically runs into the three to four hour range, and it was no different this time.  John certainly has the gift of gab when it comes to his projects, but unlike others his gab is chock full of useful information, often quite beyond the understanding of those in the audience.



The first topic of discussion was that of last year’s Rage launch.  John was quite apologetic about how it went, especially in terms of PC support.  For a good portion of users out there, it simply would not work due to driver issues on the AMD side.  The lessons they learned from Rage were tremendous.  iD simply cannot afford to release only two games in a decade.  Rage took some six plus years of development.  Consider that Doom 3 was released in 2004, and we did not see Rage until Fall 2011.  The technology in Rage is a big step up due to the use of iD Tech 5, and the art assets of the title are very impressive.

iD also made some big mistakes in how they marketed the title.  Many people were assuming that it would be a title more in line with Bethesda’s Fallout 3, with a lot of RPG type missions and storyline.  Instead of the 80 hour title that one would expect, it was a 10+ hour action title.  So marketing needs to create a better representation of what the game entails.  They also need to stay a bit more focused on what they will be delivering, and be able to do so in a timely manner.

Read the rest of John's Keynote by clicking here.

Manufacturer: SiliconDust

An HTPC Perspective on home theater PC technology

We conducted a reader survey a few weeks ago, and one of the tech topics that received a surprising amount of interest was HTPC coverage. You, our awesome readers, wanted to know more about the hardware and software behind them. I’ll admit that I was ardent about the prospect of talking HTPCs with you. As a relatively new entrant to that area of tech myself, I was excited to cover it and give you more coverage of a topic you wanted to see more of!

Today we won't be talking about home theater PCs in the sense of a computer in the living room AV rack (Ryan covered that earlier this week), but rather a related technology that makes the HTPC possible: the CableCARD-equipped TV tuner.

I will forewarn you that this article is quite a bit more informal than my usual writings, especially if you only follow my PC Perspective postings. In the future, it may not be that way, but I wanted to give some backstory and some personal thoughts on the matter to illustrate how I got into rolling my own DVR and why I’m excited about it (mainly: it saves money and is very flexible).


Despite my previous attempts to “cut the cord” and use only Internet-based services for television, my girlfriend and I slowly but surely made our way back to cable TV. For about a year we survived on Netflix, Hulu, and the various networks’ streaming videos on their respective websites, but as the delays between a show’s airing and its web streaming availability increased, and Netflix Instant Streaming started losing content, the price of cable started to look increasingly acceptable.

She was probably the first one to feel the effects of a lack of new content – especially with a newfound love for a rather odd show called True Blood. It was at some point thereafter, once she had caught up with as many seasons of various shows as Netflix offered, that she broke down and ordered U-Verse. U-Verse is an interesting setup of television delivery using internet protocol (IPTV). While we did have some issues at first with the Residential Gateway and signal levels, it was eventually sorted out and it was an okay setup. It offered a lot of channels – with many in HD. In the end though, after the promotional period was up, it got very expensive to stay subscribed. Also, because it was IPTV, it was not as flexible as traditional cable when it came to adding extra televisions and DVR functionality. Further, the image quality of the HD streams, while much better than SD, was not up to par with the cable and satellite feeds I’ve seen.

Having been with Comcast for Internet for about three years now, I’ve been fairly happy with it. One day I saw a promotion for currently subscribed customers for TV + Blast internet for $80, which was only about $20 more than I was paying each month for the Performance tier. After a week of hell, I decided to sign up for it. Only, I did not want to rent a Comcast box, so I went searching for alternatives.

Enter the elusive and never advertised CableCARD

It was during this search that I learned a great deal about CableCARDs and the really cool things that they enabled. Thanks to the FCC, cable television providers in the United States have to give their customers an option other than renting a cable box for a monthly fee – customers have to be able to bring their own equipment if they wish (they can still charge you for the CableCARD, but at a reduced rate, and not all cable companies charge a fee for them). But what is a CableCARD? In short, it is a small card that resembles a PCMCIA expansion card – a connector that can commonly be found in older laptops (think Windows XP-era). It is to be paired with a CableCARD tuner and acts as the key to decrypt the encrypted television stations in your particular subscriber package. They are added much like a customer-owned modem is, by giving the cable company some numbers on the bottom of the card that act as a unique identifier. The cable company then connects that particular card to your account and sends it a profile of what channels you are allowed to tune into.


There are some drawbacks, however. Mainly that On Demand does not work with most CableCARDs. Do note that this is actually not a CableCARD hardware issue, but a support issue on the cable company side. You could, at least in theory, get a CableCARD and tuner that could tune in On Demand content, but right now that functionality seems to be limited to some Tivos and the rental cable boxes (paradoxically, some of those are actually CableCARD-equipped). It’s an unfortunate situation, but here’s hoping that it is supported in the future. Also, if you do jump into the world of CableCARDs, it is likely that you will find yourself in a situation where you know more about them than the cable installer, as cable companies do not advertise them and only a small number of employees are trained on them. Don’t be too hard on the cable tech, though; it's primarily because cable companies would rather rent you an (expensive) box, and a very small number of people actually know about the technology and need a tech to support it. I was lucky enough to get one of the “CableCARD guys” on my first install, but I’ve also gotten techs that have never seen one before, and it made for an interesting conversation piece as they diagnosed signal levels for the cable modem (heh). Basically, patience is key when activating your CableCARD. If you opt for a self-install, I highly recommend asking around forums like DSLReports for the specific number(s) to call to reach the tier 2 techs at your provider who are familiar with CableCARDs. Even then, you may run into issues. For example, something went wrong with activation on the server side at Comcast, so it took a couple of hours for them to essentially unlock all of my HD channels during my install.

Continue reading to find out why I'm so excited about CableCARDs and home theater PCs!

Subject: Editorial

Introduction, Hardware To Look For


Every year the college I graduated from, Beloit College, publishes its not-that-famous “mindset list.” It’s a collection of one-liners, such as “Clint Eastwood is better known as a director than as Dirty Harry,” meant to humorously remind professors that the experiences of their generation are not the same as the generation about to show up in their classrooms.

I’ve sometimes felt a need for a similar reminder among gamers. Arcade classics like Pac-Man and DOS legends such as Prince Of Persia are often cited in conversations of old-school gaming, yet many gamers (including myself) never enjoyed the experience of playing these titles when they first hit store shelves.

I enjoyed a different generation of classics. My original copy of Interstate ’76 is nestled in a binder of old CDs. A boxed copy of Mechwarrior 2 sits on my book shelf. I have Baldur’s Gate, Sid Meier’s Alpha Centauri, Total Annihilation 2, Starcraft, SimCity 2000, The Elder Scrolls: Daggerfall and Age of Empires II, to name a few. These were my formative gaming experiences. Some have always been with me – others, lost or destroyed, have been re-acquired from thrift stores for a few bucks each.

Yet I can’t play most of these games without buying them again (via a service like Good Old Games) or resorting to virtualization. The reliability of Windows’ compatibility mode is spotty to say the least.

Even if a game does run on my Windows 7 PC, something is missing. The old controllers of yesterday usually don’t agree with – or can’t physically connect to – my modern desktop. The graphics, designed for the CRT era, often don’t translate well to a high-resolution LCD. Random bugs and errors can occur, stopping the games in their tracks.

I’ve finally decided that there is only one solution. If you want to run games from the 1990s and enjoy them properly, you should also have hardware that can play them as originally intended. That means putting together a legacy gaming system.

This is something that I think anyone should be able to do without spending more than $150. But can you, and if so, is it worth your time?

Subject: Editorial
Manufacturer: AMD

Less Risk, Faster Product Development and Introduction

There have been quite a few articles lately about the upcoming Bulldozer refresh from AMD, but a lot of the information that they have posted is not new.  I have put together a few things that seem to have escaped a lot of these articles, and shine a light on what I consider the most important aspects of these upcoming releases.  The positive thing that most of these articles have achieved is increasing interest in AMD’s upcoming products, and what they might do for that company and the industry in general.


The original FX-8150 hopefully will only be a slightly embarrassing memory for AMD come Q3/Q4 of this year.

The current Bulldozer architecture that powers the AMD FX series of processors is not exactly an optimal solution.  It works, and seems to do fine, but it does not surpass the performance of the previous generation Phenom II X6 series of chips in any meaningful way.  Let us not mention how it compares to Intel’s Sandy Bridge and Ivy Bridge products.  It is not that the design is inherently flawed or bad, but rather that it was a unique avenue of thought that was not completely optimized.  The train of thought is that AMD seems to have given up on the high single threaded performance that Intel has excelled at for some time.  Instead they are going for good single threaded performance, and outstanding multi-threaded performance.  To achieve this they had to rethink how to essentially make the processor as wide as possible, keep the die size and TDP down to reasonable sizes, and still achieve a decent amount of performance in single threaded applications.

Bulldozer was meant to address this idea, and its success is debatable.  The processor works, it shows up as an eight logical core processor, and it seems to scale well with multi-threading.  The problem, as stated before, is that it does not perform like a next generation part.  In fact, it is often compared to Intel’s Prescott, which was a larger chip on a smaller process than the previous Northwood processor, but did not outperform the earlier part in any meaningful way (except in heat production).  The difference in this case is that, unlike Prescott, which evolved from the Northwood lineage, Bulldozer is an entirely new architecture.  AMD has radically changed the way it designs processors.  Taking some lessons from the graphics arm of the company and their successful Radeon brand, AMD is applying that train of thought to processors.

Continue reading our thoughts on AMD, Vishera, and Beyond!!

Subject: Editorial
Manufacturer: Google


Search engine giant Google took the wraps off its long rumored cloud storage service called Google Drive this week. The service has been rumored for years, but is (finally) official. In the interim, several competing services have emerged and even managed to grab significant shares of the market. Therefore, it will be interesting to see how Google’s service will stack up. In this article, we’ll be taking Google Drive on a test drive from installation to usage to see if it is a worthy competitor to other popular storage services—and whether it is worth switching to!

How we test

In order to test the service, I installed the Google desktop application (we’ll be taking a look at the mobile app soon) and uploaded a variety of media file types including documents, music, photos, and videos in numerous formats. The test system in question is an Intel Core i7 860 based system with 8GB of RAM and a wired Ethernet connection to the LAN. The cable ISP I used offers approximately two to three Mbps uploads (real world speeds; 4 Mbps promised) for those interested.



Google’s cloud service was officially unveiled on Tuesday, but the company is still rolling out activations for people’s accounts (my Google Drive account activated yesterday [April 27, 2012], for example). And it now represents the new single storage bucket for all your Google needs (Picasa, Gmail, Docs, App Inventor, etc.; although people can grandfather themselves into the cheaper Picasa online storage).

Old Picasa Storage vs New Google Drive Storage Plans

Storage Tier (old/new)    Old Plan Pricing (per year)    New Plan Pricing (per year)
20 GB / 25 GB             $5                             $29.88
80 GB / 100 GB            $20                            $59.88
200 GB                    $50                            $119.88
400 GB                    $100                           $239.88
1 TB                      $256                           $599.88
2 TB                      $512                           $1,199.88
4 TB                      $1,024                         $2,399.88
8 TB                      $2,048                         $4,799.88
16 TB                     $4,096                         $9,599.88

(Picasa Plans were so much cheaper–hold onto them if you're able to!)

The way Google Drive works is much like that of Dropbox wherein a single folder is synced between Google’s servers and the user’s local machine (though sub-folders are okay to use and the equivalent of "labels" on the Google side). The storage in question is available in several tiers, though the tier that most people will be interested in is the free one. On that front, Google Drive offers 5GB of synced storage, 10GB of Gmail storage, and 1GB of Picasa Web Albums photo backup space. Beyond that, Google is offering nine paid tiers from an additional 25GB of "Drive and Picasa" storage (and 25GB of Gmail email storage) for $2.49 a month to 16TB of Drive and Picasa Web Albums storage with 25GB of Gmail email storage for $799.99 a month. The chart below details all the storage tiers available.

Google Drive Storage Plans
Storage Tiers    Drive/Picasa Storage    Gmail Storage    Price (per month)
Free             5GB / 1GB               10GB             $0 (free)
25GB             25GB (shared)           25GB             $2.49
100GB            100GB (shared)          25GB             $4.99
200GB            200GB (shared)          25GB             $9.99
400GB            400GB (shared)          25GB             $19.99
1TB              1TB (shared)            25GB             $49.99
2TB              2TB (shared)            25GB             $99.99
4TB              4TB (shared)            25GB             $199.99
8TB              8TB (shared)            25GB             $399.99
16TB             16TB (shared)           25GB             $799.99

1024MB = 1GB, 1024GB = 1TB

The above storage numbers do not include the 5GB of free drive storage that is also applied to any paid tiers.  The free 1GB of Picasa storage does not carry over to the paid tiers.

Even better, Google has not been stingy with their free storage. They continue to allow users to upload as many photos as they want to Google+ (they are resized to a max of 2048x2048 pixels though). Also, Google Documents stored in the Docs format continue to not count towards the storage quota. Videos uploaded to Google+ under 15 minutes in length are also free from storage limitations. As far as Picasa Web Albums (which also includes photos uploaded to blogger blogs) goes, any images under 2048x2048 and videos under 15 minutes in length do not count towards the storage quota either. If you exceed the storage limit, Google will still allow you to access all of your files, but you will not be able to create any new files until you delete enough files to get below the storage quota. The one exception to that rule is the “storage quota free” file types mentioned above–Google will still let you create/upload those. For Gmail storage, Google allows you to receive and store as much email as you want up to the quota. After you reach the quota, any new email will hard bounce and you will not be able to receive new messages.

In that same vein, Google’s paid tiers are not the cheapest but are still fairly economical. They are less expensive per GB than Dropbox, for example, but are more expensive than Microsoft’s new SkyDrive tiers. One issue that many users face with online storage services is the file size limit placed on individual files. While Dropbox places no limits (other than overall storage quota) on individual file size, many other services do. Google offers a compromise to users in the form of a 10GB per-file size limit. While you won’t be backing up VirtualBox hard drives or drive image backups to Google, they’ll let you back up anything else (within reason).
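Since the comparison above really comes down to price per gigabyte, here is a small Python sketch (mine, not Google's math) that turns the paid-tier chart earlier in the article into a per-GB monthly and yearly cost; competing services could be compared by plugging in their own size and price pairs.

```python
# Cost-per-GB sketch built from the Google Drive paid-tier chart above.
# Tier sizes are in GB and prices in USD per month; another service could be
# compared by adding its own (size, price) entries.

paid_tiers_gb_to_usd_month = {
    25: 2.49,
    100: 4.99,
    200: 9.99,
    400: 19.99,
    1024: 49.99,     # 1TB
    2048: 99.99,     # 2TB
    4096: 199.99,    # 4TB
    8192: 399.99,    # 8TB
    16384: 799.99,   # 16TB
}

for size_gb, monthly in sorted(paid_tiers_gb_to_usd_month.items()):
    per_gb_month = monthly / size_gb
    print(f"{size_gb:>6} GB tier: ${monthly:>7.2f}/mo = "
          f"${per_gb_month:.4f}/GB/mo (${per_gb_month * 12:.3f}/GB/yr)")
```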

Continue reading for the full Google Drive preview.

Manufacturer: PC Perspective
Tagged: Malware

Infectious fear is infectious

PCMag and others have released articles based on a blog post from Sophos. The original post discussed how frequently malware designed for Windows is found on Mac computers. What these articles mostly demonstrate is that we really need to understand security: what it is, and why it matters. The largest threats to security are complacency and misunderstanding; users need to grasp the problem rather than have it buried under weak analogies and illusions of software crutches.

Your data and computational ability can be very valuable to people looking to exploit it.

The point of security is not to avoid malware, nor is it to remove it if you failed to avoid it. Those actions are absolutely necessary components of security -- do those things -- but they are not the goal of security. The goal of security is to retain control of what is yours. At the same time, be a good neighbor and make it easier for others to do the same with what is theirs.

Your responsibility extends far beyond just keeping a current antivirus subscription.


The problem goes far beyond throwing stones...

The distinction is subtle.

Your operating system is irrelevant. You could run Windows, Mac, Android, iOS, the ‘nixes, or whatever else. Every useful operating system has vulnerabilities and runs vulnerable applications. The user is also very often tricked into loading untrusted code, either directly or by delivering it within data to a vulnerable application.

Blindly fearing malware -- such as what would happen if someone were to draw parallels to Chlamydia -- does not help you to understand it. There are reasons why malware exists; there are certain things which malware is capable of; and there are certain things which malware is not.

The single biggest threat to security is complacency. Your information is valuable and you are responsible for preventing it from being exploited.  The addition of a computer does not change the fundamental problem. Use the same caution on your computer and mobile devices as you should on the phone or in person. You would not leave your credit card information unmonitored on a park bench.

Read on to understand what malware is and what it could do.