John Carmack Interview: Question and Topic suggestions?

Subject: Editorial, Graphics Cards, Processors | August 4, 2011 - 11:15 AM |
Tagged: nvidia, john carmack, interview, carmack, amd

A couple of years back we talked on the phone with John Carmack during the period of excitement about ray tracing and game engines.  That interview is still one of our most read articles on PC Perspective, as he always has interesting topics and information to share.  While we are hosting the PC Perspective Hardware Workshop on Saturday at Quakecon 2011, we have also scheduled some time to sit down with John again and pick his brain on hardware and technology.

carmack1.jpg

If you had a chance to ask John Carmack questions about hardware and technology, either the current state of each or what he sees coming in the future, what would you ask?  Let us know in our comments section below!! (No registration required to comment.)

Source: PCPer

Bumpday 8/3/2011: AMD stole my gigahurts – give it back!

Subject: Editorial, General Tech | August 3, 2011 - 09:24 PM |
Tagged: bumpday

Just recently we looked at a Tom’s Hardware review of CPU architectures since about 2005. The raw performance of each CPU was not covered in the review, but that was not the purpose of the article; the question investigated was whether there has been real innovation in the architectures themselves or whether companies were simply ramping up clock rates and adding more cores to get their performance. Implied in the article’s findings was the extent to which Intel was relying on higher clock rates just to stay comparable to AMD at the time, and even whether they managed that is debatable. At some point AMD decided to change tactics and stop ranking their processors by clock rate due to the huge disparity between Intel’s performance and their own at any given clock. This drew some flak in the forums but ended up sticking, as even Intel eventually dropped the gigahertz moniker.

Bumpday2.png

I owned a Core 2 Duo E6600 MHz! It’s so fast they needed to count in hex!

Scott, not me but another Scott, accused AMD back in 2001 of confusing users about the actual clock rate of their products. That post was crushed by video gaming’s most popular astrophysicist: yes, exactly. That didn’t stop the debate about whether that was an ethical thing to do, whether Intel’s ethics are any better, or whether they’re hypocrites. Regardless, the soapbox was eventually put away and everyone went back to their lives.

BUMP

Source: PCPer Forums

Yes, Netburst really was that bad: CPU architectures tested

Subject: Editorial, General Tech, Processors | August 3, 2011 - 02:11 AM |
Tagged: Netburst, architecture

It is common knowledge that computing power consistently improves over time as dies shrink to smaller processes, clock rates increase, and processors do more and more things in parallel. One thing that people might not consider: how fast is the actual architecture itself? Think of the problem of computing in terms of a factory. You can increase the speed of the conveyor belt and you can add more assembly lines, but just how fast are the workers? There are many ways to increase the efficiency of a CPU: from tweaking the most common instructions, or adding new instruction sets to allow the task itself to be simplified; to playing with the pipeline size for a proper balance between constantly feeding the CPU upcoming instructions and needing to dump and reload the pipe when it goes the wrong way down an IF/ELSE statement. Tom’s Hardware wondered just that and tested a variety of processors released since 2005, modified so they could use only one core clocked at 3 GHz. Can you guess which architecture failed the most miserably?

intel4004-2.jpg

Pfft, who says you ONLY need a calculator?

(Image from Intel)

The Netburst architecture was designed to reach very high clock rates at the expense of heat -- and performance. At the time, the race between Intel and its competitors was clock rate: the higher the clock the better it was for marketers, despite a 1.3 GHz Athlon wrecking a 3.2 GHz Celeron in actual performance. If you are in the mood for a little chuckle, this marketing strategy was destroyed when AMD decided to name their processors “Athlon XP 3200+” and so forth rather than by their actual clock rate. One of the major reasons that Netburst was so terrible was branch prediction. Pipelining is a method of loading multiple instructions into a processor to keep it constantly working; branch prediction is a strategy for speeding that up. When the processor reaches a conditional jump from one chunk of code to another, such as “if this is true do that, otherwise do this,” it does not know for sure what will come next. Branch prediction says “I think I’ll go down this branch” and loads the pipeline assuming that is true; if the guess is wrong, the pipeline must be dumped and the mistake corrected. One way that Netburst sustained its high clock rates was a ridiculously long pipeline, 2-4x longer than in the first generation of Core 2 parts which replaced it; unfortunately, the Pentium 4’s branch prediction was terrible, leaving the processor perpetually dumping and refilling its pipeline.
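To make the cost of a mispredicted branch concrete, here is a small C++ sketch of our own (not from the Tom’s Hardware piece) that sums every array element passing a condition. With random data the IF is a coin flip and the pipeline gets flushed constantly; sort the same data first and the predictor guesses right nearly every time, so the identical loop typically runs several times faster.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

// Sum all elements >= 128 and report how long the loop took.
static long long timed_sum(const std::vector<int>& data, const char* label) {
    auto t0 = std::chrono::steady_clock::now();
    long long sum = 0;
    for (int x : data)
        if (x >= 128)        // the conditional branch the predictor must guess
            sum += x;
    auto t1 = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
    std::printf("%s: sum=%lld in %lld us\n", label, sum,
                static_cast<long long>(us.count()));
    return sum;
}

int main() {
    std::mt19937 rng(42);
    std::vector<int> data(1 << 22);          // ~4 million values in [0, 255]
    for (int& x : data) x = rng() % 256;

    timed_sum(data, "random");               // branch outcome is unpredictable
    std::sort(data.begin(), data.end());
    timed_sum(data, "sorted");               // branch outcome is highly predictable
}
```

Compile with moderate optimization (e.g. -O2); note that some modern compilers will convert this branch into a branchless form and shrink the gap, which is itself another way around poor prediction.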

Toms-conclusion.png

The sum of all tests... at least time-based ones.

(Image from Tom's Hardware)

Now that we have excavated Intel’s skeletons to air them out, it is time to bury them again and look at the more recent results. On the AMD side of things it looks as though there has not been much innovation in per-clock efficiency; AMD is only now getting within range of the architectural efficiency Intel had back in 2007 with the first introduction of Core 2. Obviously efficiency per core per clock means little in the real world on its own, as it tells you neither the raw performance of a part nor how power efficient it is. Still, it is interesting to see how big a leap Intel made when it abandoned its turkey of an architecture known as Netburst and modeled the future around the Pentium 3 and Pentium M architectures. Lastly, despite the lead, it is interesting to note exactly how much work went into the Sandy Bridge architecture. Intel, despite an already large lead and a focus outside of the x86 mindset, still tightened up their x86 architecture by a very visible margin. It might not be as dramatic as the abandonment of the Pentium 4, but it is laudable in its own right.

Happy SysAdmin Appreciation Day

Subject: Editorial | July 29, 2011 - 11:20 PM |
Tagged: friday

Heading into the PC Perspective Forums today will net you a wide variety of tips, from new cables to new cable modems, as well as unexpected shutdowns and even someone new to the game.  Head to the Motherboard Forum and you can learn all about BioStar's new TZ68A+ and in the Graphics Card Forum you'll find a mix of proud owners of new multimonitor rigs as well as those who are a little less proud of their recent purchase. 

If you own an Intel SSD and are worried about the 8MB bug that can be triggered by an unexpected power loss, you should keep an eye on this thread, which will show you a temporary fix and will be updated as Intel works on a permanent one.  Catch a reminder of the various types of RAM that have been used over the years in the Memory Forum or remind yourself of this old experiment with Windows. If that is not your cup of tea, then get some advice on setting up an AHCI HDD in Linux with our knowledgeable penguin flavoured Frogs.

The Lightning Round is hopping as always, with one thread drawing ever closer to the 25K mark, though the BOINC team still has a larger number to tell you about.  The Trading Post has had a few new items go up as well for anyone wanting to buy or trade.  As a last stop you can check out the first PC Perspective Podcast broadcast from the new Brick TWiT house, same old guys but new digs and new technology.

 

Just Delivered Exclusive: PNY XLR8 Liquid Cooled GTX 580 Combo

Subject: Editorial, Graphics Cards | July 29, 2011 - 02:27 PM |
Tagged: water cooling, pny, liquid cooler, GTX 580, geforce

Just Delivered is a section of PC Perspective where we share some of the goodies that pass through our labs that may or may not see a review, but are pretty cool nonetheless.

Today is a good day to be working at PC Perspective - the goods just keep hitting the door!  After taking a quick look at a new MSI motherboard we also have the world's first look at the upcoming PNY XLR8 Liquid Cooled GTX 580 + CPU cooler combo!

pny01.jpg

You know how self-contained water cooling for processors is all the rage these days?  (And why not, we love it!)  Well, NVIDIA and PNY teamed up to create a liquid cooled GPU - the GTX 580, of course - and offer two options for it: one that cools the GPU only and another that includes an inline CPU water block as well.

pny02.jpg

We literally have the first two production units from PNY in-house and are going through the installation process for them as I type this.  The GTX 580s support SLI (if you want to go that route) and look much like a reference GTX 580 in terms of their external design.  The insides are quite different though:

pny05.jpg

Asetek provides a GPU water block that is mounted on the PCB, while the fan runs at a much lower speed than normal since it is basically only there to keep the memory temperatures under control.

pny04.jpg

Our units include the CPU water block portion as well, which DOES add to the complexity of the installation (and the packaging), but I think we are going to find this to be a very efficient - and quiet - way to cool almost your entire rig.

Did I mention we are going to be giving BOTH OF THEM AWAY at our Hardware Workshop next weekend at Quakecon 2011?  Well now I did.  These are valued at $650 each!  Just another reason why you need to be in attendance, don't you think?

Source: PNY

Just Delivered: MSI Z68A-GD65 G3 Motherboard with PCI Express 3.0 Support

Subject: Editorial, Motherboards | July 29, 2011 - 02:03 PM |
Tagged: z68, pcie 3.0, msi

Just Delivered is a section of PC Perspective where we share some of the goodies that pass through our labs that may or may not see a review, but are pretty cool nonetheless.

As we gear up for the PC Perspective Hardware Workshop at Quakecon 2011 next weekend, August 6th, we are starting to get in some very interesting products.  The coolest part?  All of this is going to be GIVEN AWAY to attendees!! 

MSI is supplying us with a pair of new motherboards for our system build contest that will be held during the workshop - the fastest person to get a system up and running will win some killer prizes.  Even better, these are some of the FIRST Z68A-GD65 G3 boards in the US - the very same ones we saw at Computex in June sporting the world's first PCIe 3.0 implementation.

msi01.jpg

msi02.jpg

Sporting an LGA1155 socket and the new Z68 chipset, the board gives you all the features associated with the platform, including SSD caching and integrated graphics support.

msi03.jpg

msi04.jpg

The classic features from MSI continue to exist here with the Military Class II components as well as the always well-received OC Genie button.

msi05.jpg

It sports a total of four USB 3.0 ports, HDMI, DVI, and VGA outputs, and a lot more.

One thing to note: this motherboard will ONLY support PCIe 3.0 speeds once the Ivy Bridge processors are released later this year, so unless you have some unreleased hardware (and please do share!) you aren't going to see the advantages of this tech quite yet.

Still, future proofing is good news!! 

Thanks to MSI for these boards, and if you are coming to our workshop be prepared for your chance to win one before the rest of the world gets its hands on them!

Source: MSI

Bumpday 7/27/2011: Yo dawg, I heard you like bumps

Subject: Editorial, General Tech | July 27, 2011 - 09:26 PM |
Tagged: bumpday, DOSSHELL

This week (actually today) Jeremy went back in time, dragged the old DOSSHELL out of the ’80s and early ’90s, and recounted Microsoft’s rise as a software platform company. The personal computer caught on quickly, with DOSSHELL giving way to Windows, then Windows 95, and so forth up to the present. And while Jeremy has fond memories of Wing Commander, I just cannot help but see his Kilrathi and raise him a Privateer.

Bumpday2.png

… so I installed a bump in your bump so you can bump while you bump.

Just ten days before Halloween 2003 the fifth stepson of Newton had an important report to write for his history class, so we think. Xzibit then proclaimed that Microsoft pimped DOS Auto. Wait, what is this? Did Jim put the bump in my bumping bumpday bump? (Who put the RAM in the eighty-eighty-six slot?) But yes it is true, it is amazing to see how far we, especially the old farts, have come.

BUMP

Source: PCPer Forums

Developer Watch: CUVI 0.5 released

Subject: Editorial, General Tech, Graphics Cards | July 26, 2011 - 08:39 PM |
Tagged: gpgpu, Developer Watch, CUVI

Code that can be easily parallelized into many threads has been streaming over to the GPU, with many applications and helper libraries taking advantage primarily of CUDA and OpenCL. For developers who wish to utilize the GPU more but are unsure where to start, there are more and more function libraries available that let them at least partially embrace their video cards. OpenCV is a library of image manipulation functions that primarily runs on the CPU, although GPU support through CUDA is an ongoing effort. CUVIlib, which has just launched its 0.5 release, is a competitor to OpenCV with a strong focus on GPU utilization, performance, and ease of implementation. And while OpenCV is licensed under BSD, about as permissive a license as can be offered, CUVI is distributed under a proprietary EULA.

Benchmark KLT - CUVILib from TunaCode on Vimeo

Benchmark KLT - OpenCV from TunaCode on Vimeo.

The little plus signs show the features the computer is tracking. CUVI (top; 33 fps), OpenCV (bottom; 2.5 fps)

(Video from CUVIlib)

Despite CUVI’s proprietary, non-free-for-commercial-use nature, they advertise large speedups for certain algorithms. For their Kanade-Lucas-Tomasi (KLT) feature tracker, compared with OpenCV’s implementation, they report a three-fold increase in performance with just a GeForce 9800GT installed and 8-13x faster results when using a high-end compute card such as the Tesla C2050. Their feature page includes footage of two 720p high definition videos undergoing the KLT algorithm, with the OpenCV CPU method chugging along at 2.5 fps contrasted with CUVI’s GPU-accelerated 33 fps. Whether you would prefer to side with OpenCV’s ongoing GPU advancements or pay CUVIlib to cover the areas where OpenCV falls short of your needs is up to you, but either future will likely involve the GPU.
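For the curious, the CPU path being benchmarked above is something you can try yourself, since OpenCV exposes the pieces of a KLT tracker directly. Below is a minimal sketch of one; the input file name and the tracking parameters are our own hypothetical choices, and the API names follow recent OpenCV releases rather than any code from TunaCode.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap("input_720p.mp4");        // assumed test clip
    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> prevPts;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (prevGray.empty() || prevPts.size() < 100) {
            // (Re)seed with strong corners - the "good features to track" of KLT.
            cv::goodFeaturesToTrack(gray, prevPts, 500, 0.01, 10);
        } else {
            std::vector<cv::Point2f> nextPts;
            std::vector<uchar> status;
            std::vector<float> err;
            // Pyramidal Lucas-Kanade optical flow: estimate where each feature moved.
            cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);
            prevPts.clear();
            for (size_t i = 0; i < nextPts.size(); ++i) {
                if (!status[i]) continue;            // feature lost this frame
                cv::Point p(cvRound(nextPts[i].x), cvRound(nextPts[i].y));
                // Draw the little plus signs seen in the benchmark videos.
                cv::line(frame, {p.x - 3, p.y}, {p.x + 3, p.y}, {0, 255, 0});
                cv::line(frame, {p.x, p.y - 3}, {p.x, p.y + 3}, {0, 255, 0});
                prevPts.push_back(nextPts[i]);
            }
        }
        cv::imshow("KLT", frame);
        if (cv::waitKey(1) == 27) break;             // Esc to quit
        prevGray = gray.clone();
    }
    return 0;
}
```

CUVI’s pitch is essentially this same loop with the expensive flow computation moved onto the GPU.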

Source: CUVIlib

Apple is da bomb! Vulnerability found in battery circuitry

Subject: Editorial, General Tech | July 25, 2011 - 10:24 PM |
Tagged: Malware, apple

Okay, so the title is more joke than anything else, but security researcher Charlie “Safari Charlie” Miller has discovered a vulnerability in Apple devices, sort of. This exploit, which appears to be less an actual security flaw than an over-permissive design, allows an attacker to gain access to your battery controller using one of two static, company-wide passwords. Charlie has discovered many exploits on the OSX and iOS platforms over the past several years. One of the most high profile attacks he discovered involved a data-execution vulnerability in the iPhone’s SMS handling: under certain conditions your iPhone could confuse inbound text messages for code and run it with high permissions.

applebattery.jpg

Malware assaults and battery charges.

(Image from Apple, modified)

So what does the ability to write to a laptop’s battery firmware mean? Firstly, remember the old advice of “Get a virus? Reinstall your OS!”? Assuming you actually can perform a clean install without ridiculous hacking (thanks, Lion), the battery controller can simply re-infect you if the attacker knows an exploit for your version of OSX. But how does the attacker know your current version of OSX? If you are installing from an optical disk they just need a Snow Leopard RTM exploit; unless of course you extract Lion from the Mac App Store and clean install from it – assuming the attacker does not know a Lion exploit, or did not simply infect the reinstall media if you created it on the infected computer. True, malware is about money, so it is highly unlikely that an attacker would go after that narrow a slice of Mac users (already a narrow enough market to begin with), but the security risk is there if for some reason you are a tempting enough target to spear-phish. Your only truly secure option is removing the battery while performing the OHHHHHHHH.

You know, while working (very temporarily) on the Queen's University Solar Vehicle project I was told that Lithium cells smell like sweet apples when they rupture. I have never experienced it but if true I find it delightfully ironic.

While all of that would require knowledge of other exploits in your operating system, there is a more direct problem. If for some reason someone would like to damage your Apple devices, they could use this flaw to simply break your batteries. Charlie has bricked nine batteries in his testing but has not even attempted to see whether a battery could be over-charged into exploding. While it is possible to force the battery controller to create the proper conditions for an explosion, there are other, physical, safeguards in place. Then again, batteries have exploded in the past, often making for highly entertaining YouTube videos and highly unentertaining FOX News clips.

Source: Forbes

Video Perspective: AMD Steady Video Technology on AMD A-Series APUs

Subject: Editorial, Graphics Cards | July 25, 2011 - 02:23 PM |
Tagged: amd, APU, llano, steady video, a8-3850, video

In our continuing coverage of AMD's Llano-based A-Series APUs, we have another short video that discusses and evaluates the performance of AMD's Steady Video technology, publicly released to the world with this month's 11.6 driver revision.  Steady Video, as we described it in our initial AMD Llano A8-3850 review, is:

Using a heterogeneous computing model AMD's driver will have the ability to stabilize "bouncy" video that is usually associated with consumer cameras and unsteady hands.

Basically, AMD is on the warpath to show you that your GPU can be used for more than just gaming and video transcoding.  If the APU and heterogeneous computing are to thrive, unique and useful applications of the GPU cores found in Llano, Trinity and beyond must be realized.  Real-time video filtering and stabilization with Steady Video is one such example, and it is exclusive to AMD GPUs and APUs.

As you can see, there are no benchmarks in that video - no numbers we can really quote or reference to tell you "how much" better the corrected videos are compared to the originals.  The examples we gave you there were NOT filtered or selected because they show off the technology better or worse than any others; instead we used the feature on exactly what AMD says it is for - amateur video taken without tripods, etc.

And since this feature works not only on AMD A-Series APUs but also on recent Radeon GPUs, I encourage you all to give it a shot and let us know what you think in our comments below - do you find the feature useful and effective?  Would you leave the option enabled full time or just turn it on when you encounter a particularly bouncy video?
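For a sense of what a stabilizer like this has to do, here is a deliberately simple sketch of the general idea - this is NOT AMD's implementation, just a toy that estimates global frame-to-frame translation with OpenCV's phase correlation and shifts each frame to cancel the jitter. The file name and damping factor are our own hypothetical choices.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("shaky_clip.mp4");   // assumed input video
    cv::Mat frame, gray, prevGray;
    cv::Point2d offset(0.0, 0.0);             // accumulated unwanted drift

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        gray.convertTo(gray, CV_32F);          // phaseCorrelate wants float input
        if (!prevGray.empty()) {
            // Estimate the global translation between consecutive frames.
            cv::Point2d shift = cv::phaseCorrelate(prevGray, gray);
            offset += shift;
            offset *= 0.9;   // damping: jitter is cancelled, slow pans leak through
            cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, -offset.x,
                                                   0, 1, -offset.y);
            cv::Mat stabilized;
            cv::warpAffine(frame, stabilized, M, frame.size());
            cv::imshow("stabilized", stabilized);
            if (cv::waitKey(1) == 27) break;   // Esc to quit
        }
        prevGray = gray.clone();
    }
    return 0;
}
```

AMD's version obviously does far more (local motion analysis, GPU acceleration inside the driver), but the core problem - measure the unwanted motion, then counteract it - is the same.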

If you haven't seen our previous Video Perspectives focusing on AMD A-Series of APUs, you can catch them here: