Making Adam Jensen look his best

Subject: General Tech | September 14, 2011 - 06:10 PM |
Tagged: gaming, deus ex 3

[H]ard|OCP has been spending a lot of time in the world of Deus Ex, testing how effectively a variety of GPUs render the game and its various effects.  NVIDIA and AMD take two different approaches to the new graphical features in Deus Ex, so this is not just a look at performance but also a look at image quality.  Three cards from each vendor were tested, in single monitor setups as well as multi-monitor scenarios.  See how your card stacks up in the review.

H_deusec.jpg

"Deus Ex: Human Revolution landed a few weeks ago, bringing a worthy addition to one of the most admired PC gaming properties of all time. We've given it a thorough going over, and have lots to share. We test six of the hottest video cards around to show you what this game can really do, along with an in-depth look at image quality."

Here is some more Tech News from around the web:

Gaming

Source: [H]ard|OCP

Bloggers and techies descend on the IDF

Subject: General Tech | September 14, 2011 - 05:36 PM |
Tagged: Intel, idf, idf 2011

Ryan wasn't the only one madly recording the Intel Developer Forum keynote address by Mooly Eden; The Tech Report was also there.  Drop by the live blog they created here, complete with pictures from a different angle than Ryan's and, in some cases, different content. There is even a hacker ninja!

TR_haswell.jpg

"Our own Scott Wasson and Geoff Gasior live blogged Mooly Eden's keynote (complete with pictures) at the Intel Developer Forum this morning. The keynote centered on Intel's mobile endeavors, including Windows 8 tablets and Ivy Bridge-powered ultrabooks. Eden also gave a sneak preview of Intel's next-gen Haswell processors, which will succeed Ivy Bridge in 2013."

Here is some more Tech News from around the web:

Tech Talk


IDF 2011: New Ivy Bridge Details from Mooly Eden Keynote

Subject: Editorial, General Tech, Processors, Shows and Expos | September 14, 2011 - 05:25 PM |
Tagged: mooly eden, Ivy Bridge, idf 2011, idf

Today is day 2 at the Intel Developer Forum, and with the first keynote out of the way we can share a few details about Ivy Bridge that we didn't know before.  First, the transistor count is 1.48 billion - a hefty jump over Sandy Bridge, which had fewer than 1 billion.

m05.jpg

There was also mention of a new power management feature that allows interrupts from other hardware devices to be routed to cores other than Core0, which had always handled them in the past. This means an interrupt can go to a core that is already awake and doing work, so a sleeping core isn't woken unless necessary.
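
To picture the policy, here is a purely illustrative Python sketch; the core-state model and the route_interrupt helper are my own invention, not Intel's implementation:

```python
# Illustrative model of interrupt routing: prefer a core that is
# already awake over waking a sleeping one. All names are invented.

def route_interrupt(core_states, default_core=0):
    """core_states maps core id -> True if awake, False if sleeping."""
    # Old behavior: everything lands on Core0, waking it if needed.
    # New behavior: pick any core that is already awake...
    for core, awake in sorted(core_states.items()):
        if awake:
            return core
    # ...and only fall back to waking the default core if none are.
    return default_core

print(route_interrupt({0: False, 1: True, 2: False, 3: False}))   # -> 1
print(route_interrupt({0: False, 1: False, 2: False, 3: False}))  # -> 0
```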

We also saw the Ivy Bridge processor running the HAWX 2 benchmark, now with support for DX11.

m12.jpg

If you look at the die image at the top of this post, you will also notice that more of the die appears to be assigned to graphics than was allocated on Sandy Bridge.  Remember that on AMD's Llano about 50% of the die is dedicated to stream processors; it would appear that by adding support for DX11, nearly doubling performance and including required support for things like DirectCompute, Intel was forced to follow suit to some degree. 

Mooly laughed at the press taking pictures of the die, as he had purposely modified the image, hiding or distorting some details to prevent precise measurements.  Still, it looks like about 33% of the new Ivy Bridge processor is dedicated to graphics and media.  This is good news for consumers, but potentially very bad news for the discrete GPU market in notebooks and low end PCs.

Finally, Mooly Eden ended with a brief look at future Ultrabooks that will be based on the Ivy Bridge processor.

m13.jpg

If you thought the current generation of Ultrabooks was sexy (as I do) then you will really like what is coming up next.

Source: PCPer

Windows 8 Developer Preview Build Sees Public Release At BUILD Conference

Subject: General Tech | September 14, 2011 - 05:04 PM |
Tagged: windows, windows 8, Metro, developer preview, microsoft

While some folks may be disappointed that Microsoft's first public beta was not released this week at their BUILD conference, we did get the next best thing: Microsoft released a developer preview build for 32 and 64 bit systems yesterday. The download page went live at 11 PM Eastern Time and hosts three versions of the Windows 8 build available to the public - despite the name, an MSDN subscription is not required.  The download page does hint that MSDN subscribers can access additional downloads, however.

The three available downloads include a disk image (.iso) with developer tools, a 64 bit Windows 8 disk image, and a 32 bit Windows 8 disk image.  Of the three versions, the last two will be most applicable to the public and enthusiast users.

Windows 8 Start Screen.PNG

The Windows 8 Start screen

The Developer Preview with applications for software development work weighs in at a hefty 4.8 GB .iso and features a 64 bit copy of Windows 8, the Windows Metro SDK for applications, Microsoft's Visual Studio 11 Express, Microsoft's Expression Blend 5, and 28 Metro style applications.  Because of the hefty download, you will need a dual layer DVD or USB drive if you plan on installing it on bare metal (single layer DVDs need not apply, in other words).

The next largest download is the 64 bit Windows 8 Developer Preview build, which drops the development software and features only the 64 bit Windows 8 operating system and Metro style applications.  This download weighs in at an easier to manage 3.6 GB .iso disk image.  The minimum system requirements for both 64 bit builds include a 1 GHz or faster x64 CPU, at least 2 GB of RAM, 20 GB of hard drive space for installation, a DirectX 9 capable graphics card with WDDM 1.0 driver support, and a touch screen to utilize the touch functions.

The final download is a 32 bit version of Windows 8 with Metro style apps, suited for older computers with less than 4 GB of memory or lacking 64 bit capable hardware.  At 2.8 GB, this disk image is the smallest of the bunch.  The minimum system requirements for this build are a 1 GHz or faster x86 processor, 1 GB of RAM, 16 GB of available hard drive space for installation, a DirectX 9 graphics card with WDDM 1.0 or higher driver support, and (I am embarrassed Microsoft believes this needs to be listed) a touch screen in order to take advantage of the touch screen functionality of the OS.

All three builds are English language and are available here for your downloading pleasure.  Note that if you do choose to install the Windows 8 download on bare metal, you will need to wipe out your current installation, and a clean reinstall of your old operating system will be required to restore your system; it would be prudent to at the very least make sure everything important is backed up before attempting the installation.  For those less adventurous, a free virtualization program might be in order.  Keeping in mind that performance will be impacted by running it as a virtual machine, VirtualBox seems to handle Windows 8 very well using the Windows 7 64 bit settings after allocating 4 GB of RAM and the maximum amount of video memory.  VMware and other paid solutions should also handle the operating system well enough, using tweaked Windows 7 presets, to give you an idea of Microsoft's vision for the operating system.
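
If you would rather script that VirtualBox setup than click through the GUI, here is a minimal sketch driving VirtualBox's VBoxManage command line from Python; the VM name is a placeholder, and you would still need to create a virtual disk and attach the downloaded .iso yourself:

```python
# Sketch: create a VirtualBox VM with the settings described above
# (Windows 7 64-bit profile, 4 GB of RAM, maximum video memory).
# Assumes VirtualBox is installed; the VM name is a placeholder.
import subprocess

VM_NAME = "Win8-DevPreview"

def vbox(*args):
    """Run a VBoxManage command, raising if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", VM_NAME, "--ostype", "Windows7_64", "--register")
vbox("modifyvm", VM_NAME, "--memory", "4096",  # 4 GB of RAM
     "--vram", "128")                          # VirtualBox's maximum video memory
```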

What features of the Windows 8 developer preview would you like to see tested out?  After you've had a chance to check the operating system out for yourselves, let us know what you think of Windows 8 in the comments!

Source: Microsoft

IDF 2011: ASUS UX21 Ultrabook Still Sexy, I Still Want It

Subject: General Tech, Processors, Mobile | September 14, 2011 - 03:48 PM |
Tagged: idf, idf 2011, asus, ultrabook, ux21

Yes, I realize the ASUS UX21 was first shown at Computex in June, but this was my first chance to get my hands on it, and I have to say that after using it for just a few minutes and comparing it to the aging Lenovo X201 I am typing this on, I am in love with the form factor.

ux21-01.jpg

I don't have anything else to report yet - no performance metrics, no real-world testing - but I couldn't pass up posting these few pictures of it.  Enjoy!

ux21-02.jpg

ux21-03.jpg

ux21-04.jpg

ux21-05.jpg

Source: PCPer

IDF 2011: Lucid HyperFormance Technology Improves Game Responsiveness

Subject: General Tech, Graphics Cards, Motherboards | September 14, 2011 - 06:12 AM |
Tagged: virtu, mvp, lucid, idf 2011, idf, hyperformance, hydra

Lucid has a history of introducing new software and hardware technologies with the potential to dramatically affect the PC gaming environment.  The first product was Hydra, shown in 2008, which promised the ability to use multiple GPUs from different generations and even different vendors on the same rendering task.  Next up was Lucid Virtu, a software solution that allowed Sandy Bridge processor customers to take advantage of the integrated graphics features while also using a discrete graphics card.  Lucid later added support for AMD platforms and also showcased Virtual Vsync earlier this year in an attempt to improve the gaming experience. 

mvp04.jpg

That is a nice history lesson, but what is Lucid discussing this time around?  The technology is called "HyperFormance" (yes, like "High-Performance") and is included in a new version of the Virtu software called Virtu MVP.  I'll let the Lucid press release describe the goals of the technology:

HyperFormance, found in the new model Virtu Universal MVP, boosts gaming responsiveness performance by intelligently reducing redundant rendering tasks in the flow between the CPU, GPU and the display. 3D games put the greatest demands on both the CPU and GPU. And as the race for higher performance on the PC and now in notebooks never ends, both CPUs and GPUs keep gaining performance.

First, a warning: this software might seem simple, but the task it attempts is very complex and I have not had enough time to really dive into it deeply.  Expect an updated and more in-depth evaluation soon.  There are a couple of key phrases to pay attention to, though, including the idea of boosting "gaming responsiveness performance" by removing "redundant rendering tasks".  Boosting responsiveness pertains to how the game FEELS to the gamer and should be evident in things like mouse movement responsiveness and the stability of the on-screen image (lack of tearing).  Lucid's new software technology attempts to improve the speed at which a game responds to your actions not by increasing the frame rate but by decreasing the amount of time between your mouse movement (or keyboard input, etc.) and what appears on the screen as a result of that action. 

How they do that is actually very complex and revolves around the Lucid software's ability to detect rendering tasks by intercepting calls between the game engine and DirectX, not around dropping or removing whole frames.  Because Lucid Virtu can detect individual tasks, it can attempt to learn which are repeated (or mostly repeated) from previous frames and tell the GPU not to render that data.  This gives the GPU a "near zero" render time on the current frame and pushes the next frame through the system, to the frame buffer and out to the screen, sooner. 
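
As a cartoon of the idea - invented Python with made-up task names, nothing like Lucid's actual driver code - compare each frame's intercepted task list against the previous one and skip GPU submission when they match:

```python
# Toy model of skipping redundant frames: if a frame's intercepted
# render tasks match the previous frame's, report a near-zero render
# time instead of submitting the work to the GPU. Entirely invented.

def process_frames(frames, render_cost_ms=16.7):
    """frames is a list of task tuples; returns per-frame 'render' times in ms."""
    times, prev = [], None
    for tasks in frames:
        if tasks == prev:              # redundant: same work as last frame
            times.append(0.1)          # "near zero" frame time, no GPU work
        else:
            times.append(render_cost_ms)
        prev = tasks
    return times

frames = [("sky", "level", "hud"), ("sky", "level", "hud"), ("sky", "level2", "hud")]
print(process_frames(frames))  # [16.7, 0.1, 16.7]
```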

To think of it another way, imagine a monitor running at 60 Hz but playing a game at 120 FPS or so.  With Vsync turned off, at any given time you might have two to four or more frames being rendered and shown on the screen.  The amount of each frame displayed will differ based on the frame rate, and the result is usually an image with some amount of visual tearing; you might have the top 35% of the screen as Frame1, the middle 10% as Frame2 and the bottom 55% as Frame3.  The HyperFormance software then decides whether the frame that would take up that middle 10%, Frame2, consists of redundant tasks and can be mostly removed from the rendering pipeline.  To replace it, the Lucid engine simply shows 65% of Frame3. 
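
Plugging the article's percentages into a toy illustration (the band bookkeeping here is entirely made up, just to make the arithmetic concrete):

```python
# Toy illustration of the tearing example above: screen bands as
# (frame, percent-of-screen) pairs, before and after HyperFormance
# drops the redundant Frame2 band and hands its share to Frame3.

before = [("Frame1", 35), ("Frame2", 10), ("Frame3", 55)]

def drop_band(bands, victim, absorber):
    """Remove victim's band and give its screen share to absorber."""
    freed = sum(pct for name, pct in bands if name == victim)
    return [(name, pct + freed if name == absorber else pct)
            for name, pct in bands if name != victim]

print(drop_band(before, "Frame2", "Frame3"))
# [('Frame1', 35), ('Frame3', 65)]
```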

mvp05.png

The result is an output that is more "up to date" with your movements and what is going on in the game engine and in "game time".  Like I said, it is a very complex task but one that I personally find very interesting and am looking forward to spending more time visualizing and explaining to readers.

Interestingly, this first implementation of HyperFormance does require the use of a multi-GPU system: the integrated GPU on Sandy Bridge or Llano along with the discrete card.  Lucid is working on a version that can do the same thing on a single GPU but that application is further out.

mvp01.png

Frame rate without HyperFormance 

There is a side effect, though, that I feel could hurt Lucid: the effective frame rates of games with HyperFormance enabled are much higher than without the software running.  Of course, the GPU isn't actually rendering more data and graphics than it did before; because HyperFormance reports the skipped frames at near-zero frame times, benchmarking applications and the games themselves *think* the game is running much faster than it is.  This is a drawback to the current way games are tested.  Many gamers might at first be fooled into thinking their game is running at higher frame rates - it isn't - and some might see the result as Lucid attempting to cheat - it isn't that either.  It is just a byproduct of the process Lucid is trying to make work for gamers' benefit.

mvp03.png

Frame rate with HyperFormance
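
To see why the reported numbers balloon, here is a quick back-of-envelope with invented figures: average FPS is derived from frame times, so mixing near-zero "frame times" in with real ones inflates the average even though the GPU draws no faster.

```python
# Why reported FPS balloons: average FPS = frames / total time, so
# near-zero "frame times" for skipped frames inflate the number even
# though the same real frames are rendered. All figures are invented.

real = [16.7] * 60     # 60 genuinely rendered frames (~60 FPS)
skipped = [0.1] * 60   # 60 skipped frames reported at near-zero time

fps_without = len(real) / (sum(real) / 1000.0)
fps_with = (len(real) + len(skipped)) / ((sum(real) + sum(skipped)) / 1000.0)
print(round(fps_without), round(fps_with))  # ~60 vs ~119
```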

Instead, Lucid is attempting to showcase the frame rate "increase" as a responsiveness increase, or some kind of metric that indicates how much faster and more reactive to the user the game actually feels.  It might be a start, but claiming 200% responsiveness likely isn't accurate; I think they need to spend some time with serious gamers and find a way to quantify the added benefits the HyperFormance application offers, if any. 

There is a LOT more to say about this application and what it means to PC gaming but for now, that is where we'll leave it.  Expect more in the coming weeks!

Source: PCPer

IDF 2011: Other Foundries Falling Further Behind Intel Technology

Subject: General Tech, Processors | September 13, 2011 - 10:07 PM |
Tagged: TSMC, idf 2011, idf, GLOBALFOUNDRIES

While I was learning about the intricacies of the Intel tri-gate 22nm process technology at the Intel Developer Forum, Senior Intel Fellow Mark Bohr surprised me a bit by discussing the competition in the foundry market.  Bohr mentioned the performance advantages and competitive edge that the new 22nm technology offers, but also made a point of noting that other companies like TSMC, GlobalFoundries, Samsung and IBM are behind, and falling further behind as we speak.

22nm18.jpg

When Intel introduced strained silicon in 2003, it took the competition until 2007 to implement it.  The High-K Metal Gate technology that Intel brought to market in 2007 wasn't introduced into AMD's product line until 2011.  Finally, with tri-gate coming in 2011, GlobalFoundries is talking about implementing it in the 2015 time frame.

Obviously those are some long delays, but more important to note is that the gap between Intel and the field's implementations has been getting longer: three years for strained silicon, three and a half for high-K and up to four years for tri-gate.  Of course, we could all be surprised to see tri-gate come from a competitor earlier, but if this schedule holds, it could mean an increasing advantage for Intel's products over AMD's and eventually over ARM's. 

This also discounts the occasional advantage that AMD had over Intel in the past like being the first to integrate copper interconnects (on the first Athlon) and the first to develop a Silicon-on-Insulator product (starting with the 130nm process); though Intel never actually adopted SOI. 

Source: PCPer

Intel & McAfee submerging their DeepSAFE deep into the Core

Subject: General Tech | September 13, 2011 - 09:05 PM |
Tagged: mcafee, Intel, idf 2011, idf

As the Intel Developer Forum commences, we finally learn a little about what Intel is attempting to do with the acquisition of McAfee, among other tidbits. Malware is one of the banes of computing existence: information is valuable, security is hard, and most people understand neither. Antimalware software remains a line of defense between you and infection in the event that your first three lines of defense (patching known security vulnerabilities in software; limiting inbound connections and permissions; and common sense) fail you. While no antimalware software is anywhere near perfect, Intel believes that pushing protection a little deeper into the hardware will do a little more to prevent previously unknown exploits.

IDF-McAfee.jpg

Great Norton’s Ghost!

According to McAfee’s website, DeepSAFE is a platform that lets security software see more of what is going on in the hardware, beyond the operating system itself. They are being very cagey about what technology is being utilized, both on their site and in their FAQ (pdf), which causes two problems: firstly, we do not know exactly which processors support or will support DeepSAFE; secondly, we do not know exactly what is being done. While this is more detail than we had previously, there are still more than enough holes to fill before we know what this technology is truly capable of.

Source: McAfee

IDF 2011: Intel Shows a PC Running on Solar Power

Subject: General Tech, Processors | September 13, 2011 - 05:22 PM |
Tagged: solar power, solar cell, idf 2011, idf

While on stage during today's opening keynote at the Intel Developer Forum, Intel CEO Paul Otellini showed off a prototype processor running completely on a very small solar cell.

keynote06.jpg

Paul on the left, Windows 7 in the center, prototype ultra-low power CPU on the right

Running Windows 7 and a small animated GIF of a cat wearing headphones, the unannounced CPU was powered only by a small solar panel with a UV light pointed at it.  Though Intel didn't give us specific voltage or power consumption numbers, they did say that it was running "close to the threshold of the transistors".  Assuming we are talking about the same or similar 22nm tri-gate transistors used in Haswell, we found this:

threshold.png

My mostly uneducated guess, then, is that they were able to run Windows 7 and this animation on a processor at somewhere around 0.1-0.2v; an impressive feat that would do wonders for standby time and all-day computing models.  This is exactly what Intel's engineers have been targeting with their transistor and CPU designs over the last couple of years, as it will allow Haswell to scale from desktop performance levels all the way down to the smartphone market on a single architecture.
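
For rough scale, the textbook CMOS dynamic power relation (P ≈ C·V²·f) suggests why near-threshold operation is such a big deal; here is a quick back-of-envelope in Python, with every number a guess on my part:

```python
# Back-of-envelope: dynamic CMOS power scales roughly as P ~ C * V^2 * f.
# Compare a nominal ~1.0 V core to a guessed ~0.15 V near-threshold run,
# holding capacitance and frequency equal (generous, since near-threshold
# parts also clock far lower). All numbers here are my own guesses.

v_nominal, v_ntv = 1.0, 0.15
ratio = (v_nominal / v_ntv) ** 2
print(f"~{ratio:.0f}x lower dynamic power from voltage alone")  # ~44x
```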

Keep in mind only the CPU was being powered by the solar cell; the rest of the components, including the hard drive, motherboard, etc., were powered by a standard power supply.

keynote07.jpg

You can see the solar panel and UV light on the right hand side of this photo.  Interestingly, when the presenter moved his hand between the light source and the panel, the system locked up, proving that it was indeed being powered by it. 

Source: PCPer

IDF 2011: Intel Haswell Architecture Offers 20x Lower Standby Power

Subject: General Tech, Processors | September 13, 2011 - 05:05 PM |
Tagged: tri-gate, sandy bridge, Ivy Bridge, idf 2011, idf, haswell

The first keynote of the Intel Developer Forum is complete, and it started with Paul Otellini discussing Intel's high level direction for the future.  One of the more interesting points was not about Ivy Bridge, which we will all see very soon, but about Haswell, Intel's next microarchitecture, meant to replace the Sandy Bridge designs sometime in late 2012 or early 2013.  Expected to focus on having 8 processing cores, much improved graphics and the new AVX2 extension set, Haswell will also be built on the 3D tri-gate transistors announced over the summer.

Otellini described Haswell's performance in terms of two important metrics.  First, it will use 30% less power than Sandy Bridge at the same performance levels.  This is a significant step and could be the result of higher IPC as well as better efficiency thanks to the 22nm process technology.

keynote05.jpg

Where Haswell really excels is apparently in standby: as a platform it could use as much as 20x less power than current hardware.  Obviously Intel's engineers have focused on power consumption more than performance, and the results are beginning to show.  The goals are simple but seemingly impossible to realize: REAL all-day power and more than 10 days of standby time.

Source: PCPer