How We Review Laptops At PC Perspective
Cooling, Portability, Software, Performance, Verdict
Here we deal with the temperature of a laptop when it is operating and the sound made by the system fan.
We used to report internal laptop temperatures, but this no longer seems worthwhile: overheating no longer appears to be an issue, even in systems with high-end GPUs. We are, however, going to start reporting external laptop temperatures. These are a concern to most users, as a laptop with a hot exterior can become difficult to use.
Fan sound remains a subjective test for now, and for the most part, it’s not that interesting. Most laptops are reasonably quiet at idle. Load sound is reported, and we also pay attention to how the fan sounds - is it whiny? Wheezy? Is there a metallic quality? Does it frequently switch speeds? The perfect fan will only produce the noise of air flowing and will maintain the same speed for long periods of time.
This is where we address a system’s size and weight. These are definite, physical measurements, so there’s not much room for opinion here. We will sometimes comment on how well a laptop fits into a bag, however. Those with smoother and slimmer exteriors do tend to slip in and out more easily.
We also address battery life in this section. The default power saving mode is used, but the display is adjusted to 70% brightness (or the closest approximation thereof) and all display dimming and sleep features are turned off. Wi-Fi remains on.
The Battery Eater standard test is a demanding benchmark that uses a 3D rendering of a battery to load both the CPU and GPU. It represents a worst-case scenario. You might receive this result if you tried to game on your laptop without having it plugged in.
Battery Eater Reader’s Test scrolls through a text document constantly. As such, it’s a low-load situation, similar to light web browsing and document editing. This might be how you use your laptop while at a coffee shop or on a plane. This is a (realistic) best-case scenario - you’d only stretch the battery further by turning down display brightness, turning off wireless or just not using the laptop at all.
We used to include a real-world test, which was my subjective testing while browsing the web and editing documents (often done while writing the review of the laptop I was testing). However, this does not produce a tightly controlled workload. For 2012 we’ll instead be using the Peacekeeper battery bench, which simply loops the Peacekeeper benchmark. It should serve as a good compromise between the extremes represented by the Battery Eater tests.
We always note any annoying bloatware that comes on a system, as well as any beneficial software that might be installed. We don’t uninstall any software that comes with a system before we do benchmarks, but we do update drivers as necessary.
Here’s where we dive into the real nitty-gritty of a review. We’ve changed a lot of the benchmarks for 2012, so I won’t try to compare every one to last year. Another change is the grouping - our old groups no longer made much sense with the new benchmarks.
Our first group consists of CPU benchmarks: SiSoft Sandra Processor Arithmetic/Multimedia, 7-Zip and Peacekeeper. All of these do a good job of tasking every one of a processor’s threads except for Peacekeeper, the browser benchmark. But that is what makes it interesting - it shows us what can happen to performance when you’re running a program poorly optimized for multiple threads, and there are still an awful lot of those around today.
Second are the general real-world benchmarks. We start with the use of Windows Live Movie Maker to save a video clip to 1080p. Next we run a batch photo editing test using a popular freeware program called BatchBlitz, and then we deal with system boot / resume times. These benchmarks aren’t really benchmarks - they’re real-world tasks that are being timed. They counter-balance the synthetic tests, which are highly optimized, to help us better understand how a laptop is going to perform in real-world use.
Next we have the hard drive tests, HD Tune and ATTO. With HD Tune we report the average transfer speed, access time and burst rate. With ATTO we report read/write speeds of 4KB, 256KB and 4MB files. Both benchmarks serve the same obvious goal - to test hard drive performance. This is an area we haven’t paid much attention to in the past, but it is important. The addition of these benchmarks will help us form a more well-rounded impression.
Our 3D benchmarks are 3DMark 06 and 3DMark 11. We use both because some GPUs appear to do relatively better in one than the other, and also because Intel HD graphics still does not support DirectX 11 (that’s going to change later in 2012). Though most desktops will ease through 3DMark 06 these days, laptops still find it a challenge unless they’re specifically built for gaming.
Speaking of which, our game benchmarks include Dawn of War 2: Retribution, Just Cause 2 and Battlefield 3. We feel this selection of games varies in the stress it places on the system. DoW2:R is playable on most laptops - Battlefield 3 is playable on only a few. The settings we use are as follows:
- DoW2:R - Medium detail presets. Standard built-in benchmark used.
- Just Cause 2 - All settings at lowest possible, decals on, soft particles on, high-res shadows off, SSAO off, point-specular lighting off. Built-in Concrete Jungle benchmark used.
- Battlefield 3 - Medium detail presets. FRAPS used to record data from three 180-second runs of the second single-player mission.
All of these game benchmarks are performed at 1366x768, or at 1080p when the display supports it.
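For the FRAPS-based test, the three runs are reduced to a single reported figure. A sketch of that reduction (the frame counts below are invented for illustration, not actual results):

```python
def average_fps(runs):
    """Combine per-run frame data into one overall frames-per-second figure.

    Each run is a (total_frames, seconds) pair. Dividing total frames by
    total time weights runs by their length, rather than naively averaging
    the per-run FPS values.
    """
    total_frames = sum(frames for frames, _ in runs)
    total_seconds = sum(seconds for _, seconds in runs)
    return total_frames / total_seconds

# Hypothetical frame counts from three 180-second runs:
runs = [(5400, 180), (5940, 180), (5580, 180)]  # 30, 33 and 31 FPS per run
print(round(average_fps(runs), 1))  # prints 31.3
```

With equal-length runs, as here, this is the same as the mean of the per-run FPS values; the weighted form only matters if run lengths differ.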
And that’s the whole kit. One benchmark that’s noticeably missing is PCMark 7. This benchmark is no longer part of our review process because we believe it too heavily favors solid-state drives. It also seems to generate unusually high computation results in most tests that involve dual-core Intel Core i5 processors. These skew the overall score and make the benchmark more confusing than enlightening.
We don’t use a star or score system at PC Perspective. Attempting to quantify a product in this way seems like a good idea at first, but ultimately doesn't provide much clarification. What makes one laptop .5 points better than another? If no product is perfect, when should you award five stars?
Instead we have three awards - Silver, Gold and Editor’s Choice. Of course, it is possible for a product to receive no award if it has nothing by which it can be recommended.
The Silver Award is given to products that have strengths and an obvious appeal to certain users, but also have some flaws that could seriously turn off others. The conclusion will let you know who we think will like the laptop.
Products with a number of strong points that make them worthy of a general recommendation receive the Gold Award. A laptop that receives this is noteworthy for several reasons and also has few downsides. Products with this award are not best-in-class, but should be given strong consideration.
Our highest award, the Editor’s Choice, is only given to laptops that rise above their peers and provide a remarkable combination of traits. They may have a flaw or two, but those issues are overshadowed by numerous strengths.
Such an award system is, like any rating system, subjective. We encourage you to read the full review text in order to understand why we’ve reached our verdict. You might stumble on a detail that will make or break the laptop for you.