Introduction, Product Specifications And Line-Up
Earlier this year I penned an editorial about ultrabooks. It wasn’t all that nice. I pointed out that they are slow, that they require design sacrifices that not everyone will enjoy and that ultraportables often provide a better experience at the same price or lower.
Since then I’ve also discovered, through various reviews, that ultrabooks so far have not shown any battery life advantage over ultraportables. The advantage of a low-voltage processor is consistently negated by the smaller batteries squeezed into Intel’s thin form-factor.
I’m not on the bandwagon. This, however, should not come as a surprise. It’s exceedingly rare for a company, even one of Intel’s size, to knock a new product out of the park on its first try. The models released so far were decent products in some ways, but they were also the hardware equivalent of a beta. Intel and laptop manufacturers are now responding to what they’ve discovered.
This brings us to Ivy Bridge. As I noted in my Ivy Bridge for mobile review, Intel’s architectural update seems to be more exciting for laptops than for desktops. The Core i7-3720QM we received in our Ivy Bridge reference laptop was a beast, easily beating every mobile processor we’d previously benchmarked and also posting surprisingly good results in gaming tests. Despite this, battery life seemed to at least hold steady.
XFX Throws Its Hat into the Midrange Ring
Who is this XFX? This is a brand that I have not dealt with in a long time. In fact, the last time I had an XFX card was some five years ago, and it was in the form of the GeForce 8800 GTX XXX Edition. This was a pretty awesome card for the time, and it seemed to last forever in terms of performance and features in the new DX 10 world that was 2007/2008. This was a heavily overclocked card, and it would get really loud during gaming sessions. I can honestly say though that this particular card was trouble-free and well built.
XFX has not always had a great reputation though, and the company has gone through some very interesting twists and turns over the years. XFX is a subsidiary of Pine Technologies. Initially XFX dealt strictly with NVIDIA based products, but a few years back when the graphics market became really tight, NVIDIA dropped several manufacturers and focused their attention on the bigger partners. Among the victims of this tightening were BFG Technologies and XFX. Unlike BFG, XFX was able to negotiate successfully with AMD to transition their product lineup to Radeon products. Since then XFX has been very aggressive in pursuing unique designs based on these AMD products. While previous generation designs did not step far from the reference products, this latest generation is a big step forward for XFX.
Introduction, Design, User Interface
When Ivy Bridge was released Ryan did a deep-dive and desktop review while I worked on a review of the mobile processor. My mobile review was based on a reference laptop known as the ASUS N56VM. Although considered a “reference platform,” the laptop is really a production product and successor to the outgoing ASUS N55. We held off on a full review to provide coverage of the new G75, but now it’s time to revisit the N56.
This is an important product for ASUS. The 15.6” laptop remains a sales leader and the N56 will likely be the company’s flagship in this arena for the coming year. This means it won’t be a high-volume model, but it will serve as a “halo product” – an example of what ASUS is capable of. If the company follows its usual modus operandi we’ll see this same chassis used as the basis for a number of variations at different price points with different hardware.
As you may remember from our Ivy Bridge for mobile review, the model we received is equipped with a Core i7-3720QM processor. It’s hard to say if this is a mid-range quad given the limited number of Ivy Bridge products available so far, but it probably will end up in that role. What about the rest of the system? Well, take a look.
Introduction and Features
Antec has one of the largest selections of PC power supplies on the market today and their new HCP-1000 Platinum power supply features 1000W of continuous output power and is 80 Plus Platinum certified. The High Current Pro Platinum is the first power supply in a new series that will replace three existing lines, the TruePower Quattro, High Current Pro (80 Plus Gold), and Antec’s Signature series. The High Current Pro Platinum series will be the new top class of maximum efficiency within Antec’s range of power supplies with modular cabling.
The HCP-1000 Platinum is based on a brand new platform co-developed with Antec’s partner Delta Electronics, and it combines several new technological developments and features aimed at making it the very best power supply possible. The HCP-1000 Platinum features fully modular cables with six PCI-E connectors, NVIDIA SLI-Ready certification, ErP Lot 6:2013 compliance, and a 7-year warranty, and it is being introduced with an MSRP of $269.90 USD.
Here is what Antec has to say about their new HCP-1000 PSU:
“Antec's High Current Pro Platinum series is the pinnacle of power supplies. High Current Pro Platinum is fully modular with a revolutionary 20+8-pin MBU socket for the needs of tomorrow. By using a PSU that is 80 PLUS® PLATINUM & ErP Lot 6: 2013 certified, operating up to 94% efficient, you can reduce your electricity bill by up to 25% when compared to many other power supplies. HCP Platinum's innovative 16-pin sockets create a new level of flexibility by doubling the modular connectivity, supporting two different 8-pins connectors and even future connectors of 10, 12, 14 or 16-pins. Backed by a 7 year warranty and lifetime global 24/7 support, the HCP-1000 Platinum embodies everything a power supply can accomplish today.”
Antec High Current Pro Platinum 1000W PSU Key Features:
• 1000W continuous power output at 50°C
• 80 Plus Platinum Certified (up to 94% efficient)
• Four High Current +12V rails with high maximum load
• 100% +12V output for maximum CPU and GPU support
• Quiet 135mm double ball bearing fan
• Thermal Manager – advanced low voltage fan controller
• All Japanese brand, heavy duty capacitors
• PhaseWave Design server-class, full-bridge LLC topology
• NVIDIA SLI-Ready certified (six PCI-E connectors)
• Active PFC with Universal AC line input
• ErP Lot 6:2013 Compliant
• Fully modular sleeved cables
• Protection: OCP, OVP, UVP, SCP, OPP, OTP, SIP, NLO and BOP
• Antec AQ7 7-year warranty and lifetime global 24/7 support
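Antec's claim of trimming your electricity bill hinges on the efficiency gap between this unit and whatever it replaces. A minimal sketch of that arithmetic (the 75% baseline efficiency and the 500W load are illustrative assumptions, not Antec figures):

```python
# Rough sketch: AC power drawn from the wall for a given DC load at
# different PSU efficiencies, and the relative savings between them.

def wall_draw_watts(dc_load_w, efficiency):
    """AC power pulled from the wall to deliver a given DC load."""
    return dc_load_w / efficiency

load = 500.0                                 # hypothetical DC load in watts
platinum = wall_draw_watts(load, 0.94)       # ~94% efficient (80 Plus Platinum peak)
older = wall_draw_watts(load, 0.75)          # assumed older, non-certified unit

savings = 1 - platinum / older
print(f"Wall draw: {platinum:.0f} W vs {older:.0f} W -> {savings:.0%} less AC power")
```

At these assumed numbers the gap is about 20%, which shows how a "up to 25%" marketing figure depends heavily on how inefficient the comparison unit is.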
When the Fermi architecture was first discussed in September of 2009 at the NVIDIA GPU Technology Conference it marked an interesting turn for the company. Not only was NVIDIA releasing details about a GPU that wasn’t going to be available to consumers for another six months, it was also signaling that it was no longer building GPUs strictly for gaming – HPC and GPGPU were a defining target of all the company’s resources going forward.
Kepler on the other hand seemed to go back in the other direction with a consumer graphics release in March of this year without discussion of the Tesla / Quadro side of the picture. While the company liked to tout that Kepler was built for gamers I think you’ll find that with the information NVIDIA released today, Kepler was still very much designed to be an HPC powerhouse. More than likely NVIDIA’s release schedules were altered by the very successful launch of AMD’s Tahiti graphics cards under the HD 7900 brand. As a result, gamers got access to GK104 before NVIDIA’s flagship professional conference and the announcement of GK110 – a 7.1 billion transistor GPU aimed squarely at parallel computing workloads.
With the Fermi design NVIDIA took a gamble and changed directions, betting that it could develop a microprocessor primarily intended for the professional markets while still appealing to the gaming markets that have sustained the company for the majority of its existence. While the GTX 480 flagship consumer card, and to some degree the GTX 580, suffered from heat and efficiency drawbacks in gaming workloads compared to AMD GPUs, the GTX 680 based on Kepler GK104 improved on them greatly. NVIDIA still designed Kepler for high-performance computing, with a focus this time on power efficiency as well as raw performance, but we haven’t seen the true king of this product line until today.
GK110 Die Shot
Built on TSMC’s 28nm process technology, GK110 is an absolutely MASSIVE chip comprising 7.1 billion transistors, and though NVIDIA hasn’t given us a die size, it is likely coming close to the reticle limit of 550 square millimeters. NVIDIA is proud to call this chip the most ‘architecturally complex’ microprocessor ever built; impressive as that is, it also means there is potential for issues when producing a chip of this size. This GPU will be able to offer more than 1 TFlop of double precision computing power at greater than 80% efficiency and three times the performance per watt of Fermi designs.
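Those two double-precision figures together imply a theoretical peak rate, which we can back out using only the article's own numbers; a quick sketch:

```python
# If GK110 sustains just over 1 TFLOPS of double precision at better than
# 80% of peak (e.g. in a DGEMM-style workload), the implied theoretical
# peak is sustained / efficiency.

sustained_tflops = 1.0   # ">1 TFlop" double precision (article figure)
efficiency = 0.80        # ">80% efficiency" (article figure)

peak_tflops = sustained_tflops / efficiency
print(f"Implied peak: ~{peak_tflops:.2f} TFLOPS double precision")
```

That puts GK110's theoretical DP peak at roughly 1.25 TFLOPS or more, several times what any Fermi-based Tesla could manage.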
NVIDIA puts its head in the clouds
Today at the 2012 NVIDIA GPU Technology Conference (GTC), NVIDIA took the wraps off a new cloud gaming technology that promises to reduce latency and improve the quality of streaming gaming using the power of NVIDIA GPUs. Dubbed GeForce GRID, NVIDIA is offering the technology to online services like Gaikai and OTOY.
The goal of GRID is to bring the promise of "console quality" gaming to every device a user has. The term "console quality" is kind of important here as NVIDIA is trying desperately to not upset all the PC gamers that purchase high-margin GeForce products. The goal of GRID is pretty simple though and should be seen as an evolution of the online streaming gaming that we have covered in the past–like OnLive. Being able to play high quality games on your TV, your computer, your tablet or even your phone, without the need for high-performance and power-hungry graphics processors, is what many believe the future of gaming is all about.
GRID starts with the Kepler GPU - what NVIDIA is now dubbing the first "cloud GPU" - that has the capability to virtualize graphics processing while being power efficient. The inclusion of a hardware fixed-function video encoder is important as well as it will aid in the process of compressing images that are delivered over the Internet by the streaming gaming service.
This diagram shows us how the Kepler GPU handles and accelerates the processing required for online gaming services. On the server side, the necessary process for an image to find its way to the user is more than just a simple render to a frame buffer. In current cloud gaming scenarios the frame buffer would have to be copied to the main system memory, compressed on the CPU and then sent via the network connection. With NVIDIA's GRID technology that capture and compression happens on the GPU memory and thus can be on its way to the gamer faster.
The results are H.264 streams that are compressed quickly and efficiently to be sent out over the network and return to the end user on whatever device they are using.
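The capture-path difference described above can be illustrated with a toy latency model. Every number here is an illustrative assumption for the sketch, not an NVIDIA figure:

```python
# Toy per-frame latency model contrasting the two capture paths:
# conventional (frame buffer copied to system RAM, encoded on the CPU)
# vs GRID (captured and encoded in GPU memory by the fixed-function encoder).

def cpu_path_ms(frame_mb, pcie_gbps=8.0, cpu_encode_ms=20.0):
    # Mb / (Gb/s) yields milliseconds, since 1 Gb/s = 1 Mb/ms.
    copy_ms = frame_mb * 8 / pcie_gbps
    return copy_ms + cpu_encode_ms

def grid_path_ms(gpu_encode_ms=5.0):
    # No round trip through system RAM before the stream hits the NIC.
    return gpu_encode_ms

frame_mb = 8.0  # a 1080p 32-bit frame buffer is roughly 8 MB
print(f"CPU path:  ~{cpu_path_ms(frame_mb):.1f} ms per frame")
print(f"GRID path: ~{grid_path_ms():.1f} ms per frame")
```

Even with generous assumptions for the conventional path, skipping the copy and the CPU encode shaves a meaningful slice off each frame's round-trip budget.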
AMD’s position is not enviable. Though they’re the only large competitor to Intel in the market for x86 processors, the company is dwarfed by the Giant of Santa Clara. As a resident of Portland, I can’t forget this fact. Intel offices are strewn across the landscape of the western suburbs, most of them at least four times larger than any office I’ve worked at.
Despite the long odds, AMD is set on this course for now and has no choice but to soldier on. And so we have today’s reference platform, a laptop powered by AMD’s latest mobile processor, codenamed Trinity. These processors, like the older Llano models, will be sold as the AMD A-Series. This might lead you to think that it’s simply another minor update, but that’s not the case.
Llano was released around the same time as Bulldozer, but it did not use Bulldozer cores. Instead it used yet another update of Stars, which is a mobile incarnation of Phenom II, which was of course an improvement upon the original Phenom. The “new” Llano APU in fact was equipped with some rather old processor cores. This showed in the performance of the mobile Llano products. They simply could not keep up with Sandy Bridge’s more modern cores.
Bulldozer isn’t coming to mobile with Trinity, either. Instead we’re receiving Piledriver. AMD has effectively skipped the first iteration of its new Bulldozer architecture and moved straight on to the second. Piledriver includes the third generation of AMD’s Turbo Core and promises “up to 29%” better processor performance than last year’s Llano-based A-Series.
That’s a significant improvement, should it turn out to be correct. Is it true, and will it be enough to catch up to Intel?
Search engine giant Google took the wraps off its long rumored cloud storage service called Google Drive this week. The service has been rumored for years, but is (finally) official. In the interim, several competing services have emerged and even managed to grab significant shares of the market. Therefore, it will be interesting to see how Google’s service will stack up. In this article, we’ll be taking Google Drive on a test drive from installation to usage to see if it is a worthy competitor to other popular storage services—and whether it is worth switching to!
How we test
In order to test the service, I installed the Google desktop application (we’ll be taking a look at the mobile app soon) and uploaded a variety of media file types including documents, music, photos, and videos in numerous formats. The test system in question is an Intel i7 860 based system with 8GB of RAM and a wired Ethernet connection to the LAN. The cable ISP I used offers approximately two to three Mbps uploads (real world speeds, 4 Mbps promised) for those interested.
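For a sense of what those upload speeds mean for sync times, here is a quick sketch. The file sizes are arbitrary examples; 2.5 Mbps is the midpoint of the real-world figure above:

```python
# Minutes needed to push a file to the cloud at a given upstream rate.
# megabytes * 8 = megabits; divide by megabits/sec, then by 60 for minutes.

def upload_minutes(file_mb, mbps=2.5):
    """Minutes to upload a file at a given upstream rate in Mbps."""
    return file_mb * 8 / mbps / 60

for name, size_mb in [("MP3 album", 100), ("photo batch", 500), ("home video", 2000)]:
    print(f"{name} ({size_mb} MB): ~{upload_minutes(size_mb):.0f} min")
```

A 2 GB home video at these speeds takes the better part of two hours, which is worth keeping in mind before dumping a media library into any sync folder.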
Google’s cloud service was officially unveiled on Tuesday, but the company is still rolling out activations for people’s accounts (my Google Drive account activated yesterday [April 27, 2012], for example). It now represents the new single storage bucket for all your Google needs (Picasa, Gmail, Docs, App Inventor, etc.), although existing users can grandfather themselves into the cheaper Picasa online storage.
Old Picasa Storage vs New Google Drive Storage Plans
| Storage Tier (old/new) | Old Plan Pricing (per year) | New Plan Pricing (per year) |
|---|---|---|
| 20 GB / 25 GB | $5 | $29.88 |
| 80 GB / 100 GB | $20 | $59.88 |
(Picasa Plans were so much cheaper–hold onto them if you're able to!)
The way Google Drive works is much like that of Dropbox wherein a single folder is synced between Google’s servers and the user’s local machine (though sub-folders are okay to use and the equivalent of "labels" on the Google side). The storage in question is available in several tiers, though the tier that most people will be interested in is the free one. On that front, Google Drive offers 5GB of synced storage, 10GB of Gmail storage, and 1GB of Picasa Web Albums photo backup space. Beyond that, Google is offering nine paid tiers from an additional 25GB of "Drive and Picasa" storage (and 25GB of Gmail email storage) for $2.49 a month to 16TB of Drive and Picasa Web Albums storage with 25GB of Gmail email storage for $799.99 a month. The chart below details all the storage tiers available.
(Chart not reproduced here: it listed each storage tier’s Drive/Picasa storage, Gmail storage, and price per month. Note that 1024MB = 1GB and 1024GB = 1TB.)
The above storage numbers do not include the 5GB of free drive storage that is also applied to any paid tiers. The free 1GB of Picasa storage does not carry over to the paid tiers.
Even better, Google has not been stingy with their free storage. They continue to allow users to upload as many photos as they want to Google+ (they are resized to a max of 2048x2048 pixels though). Also, Google Documents stored in the Docs format continue to not count towards the storage quota. Videos uploaded to Google+ under 15 minutes in length are also free from storage limitations. As far as Picasa Web Albums (which also includes photos uploaded to blogger blogs) goes, any images under 2048x2048 and videos under 15 minutes in length do not count towards the storage quota either. If you exceed the storage limit, Google will still allow you to access all of your files, but you will not be able to create any new files until you delete enough files to get below the storage quota. The one exception to that rule is the “storage quota free” file types mentioned above–Google will still let you create/upload those. For Gmail storage, Google allows you to receive and store as much email as you want up to the quota. After you reach the quota, any new email will hard bounce and you will not be able to receive new messages.
In that same vein, Google’s paid tiers are not the cheapest but are still fairly economical. They are less expensive per GB than Dropbox, for example, but are more expensive than Microsoft’s new SkyDrive tiers. One issue that many users face with online storage services is the file size limit placed on individual files. While Dropbox places no limits (other than overall storage quota) on individual file size, many other services do. Google offers a compromise to users in the form of a 10GB per-file size limit. While you won’t be backing up Virtualbox hard drives or drive image backups to Google, they’ll let you backup anything else (within reason).
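Using only the tier prices quoted earlier, the per-gigabyte cost of Google's cheapest and largest paid tiers works out as follows (a quick sketch; the Gmail bonus storage is excluded):

```python
# Per-gigabyte monthly cost of the two Google Drive tiers cited above.

tiers = {
    "25 GB": (25, 2.49),              # smallest paid tier: $2.49/month
    "16 TB": (16 * 1024, 799.99),     # largest tier; 1024 GB = 1 TB, per Google
}

for name, (gb, dollars) in tiers.items():
    print(f"{name}: ${dollars / gb:.4f} per GB per month")
```

The big tier works out to roughly half the per-GB price of the small one, so the pricing rewards bulk, though few users will ever need 16TB of cloud storage.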
If the netbook was a shooting star, the nettop was an asteroid that never quite entered our atmosphere. Instead it flew silently by, noted by NASA, written about in a handful of articles, and now forgotten.
That doesn’t mean it has ceased to exist, however. It’s still out there, floating in space - and it occasionally swings back around for an encore. So we have the Lenovo IdeaCentre Q180.
Of course, simply advertising a small computer as - well, a small computer - isn’t particularly sexy. The Q180 is instead being sold not just as a general-purpose PC but also as a media center (with optional Blu-ray, not found on our review unit). There’s no doubting the demand for this, but so far attempts to build PC-based media center computers have not done well - even Boxee, with its custom Linux-based operating system, was fussy. Can the Q180 succeed where others have stumbled? Let’s start with the specs.
It’s been a while since we tested anything Atom. Since our last look at this line of processors, Intel has updated to the code-name Cedar Trail processors, allowing for higher clock speeds. The 2.13 GHz dual-core Atom D2700 looks quite robust on paper. But this is still the same old architecture, so per-clock performance doesn’t come close to Intel’s Pentium and Core processors.
Also included is AMD’s Radeon HD 6450A, a version of the HD 6450 built for small systems that don’t have room for a typical PCIe graphics card. This makes up for the fact that all Atom processors still use the hopelessly outdated Intel Media Accelerator graphics, which is entirely unsuitable for HD video.
GK104 takes a step down
While the graphics power found in the new GeForce GTX 690, the GeForce GTX 680 and even the Radeon HD 7970 is incredibly impressive, if we are really honest with ourselves the real meat of the GPU market shops at price points much lower than $999. Today's not-so-well-kept-secret release of the GeForce GTX 670 attempts to bring the price of entry to the NVIDIA Kepler architecture down to a more attainable level while also resetting the performance per dollar metrics of the GPU world once again.
The GeForce GTX 670 is in fact a very close cousin to the GeForce GTX 680 with only a single SMX unit disabled and a more compelling $399 price tag.
The GTX 670 GPU - Nearly as fast as the GTX 680
The secret is out - GK104 finds its way onto a third graphics card in just two months - but in this iteration the hardware has been reduced slightly.
The GTX 670 block diagram we hacked together above is really just a GTX 680 diagram with a single SMX unit disabled. While the GTX 680 sported a total of 1536 CUDA cores broken up into eight 192 core SMX units, the new GTX 670 will include 1344 cores. This will also drop the texture units to 112 (from 128 on the GTX 680) though the ROP count stays at 32 thanks to the continued use of a 256-bit memory interface.
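The cut-down arithmetic above is easy to verify, assuming GK104's per-SMX figures of 192 CUDA cores and 16 texture units (the latter implied by 128 texture units spread across 8 SMXes):

```python
# Unit counts for GK104-based cards as a function of enabled SMX units.

CORES_PER_SMX = 192
TEX_PER_SMX = 16

def gk104_units(smx_count):
    """CUDA core and texture unit totals for a GK104 with N active SMXes."""
    return smx_count * CORES_PER_SMX, smx_count * TEX_PER_SMX

gtx680_cores, gtx680_tex = gk104_units(8)  # full GK104
gtx670_cores, gtx670_tex = gk104_units(7)  # one SMX disabled

print(f"GTX 680: {gtx680_cores} cores, {gtx680_tex} texture units")
print(f"GTX 670: {gtx670_cores} cores, {gtx670_tex} texture units")
# The 32 ROPs are tied to the 256-bit memory interface, not the SMXes,
# which is why they survive the cut intact.
```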
Infectious fear is infectious
PCMag and others have released articles based on a blog post from Sophos. The original post discussed how frequently malware designed for Windows is found on Mac computers. What these articles mostly demonstrate is that we really need to understand security: what it is, and why it matters. The largest threats to security are complacency and misunderstanding; users need to grasp the problem rather than have it buried under weak analogies and illusions of software crutches.
Your data and computational ability can be very valuable to people looking to exploit it.
The point of security is not to avoid malware, nor is it to remove it if you failed to avoid it. Those actions are absolutely necessary components of security -- do those things -- but they are not the goal of security. The goal of security is to retain control of what is yours. At the same time, be a good neighbor and make it easier for others to do the same with what is theirs.
Your responsibility extends far beyond just keeping a current antivirus subscription.
The problem goes far beyond throwing stones...
The distinction is subtle.
Your operating system is irrelevant. You could run Windows, Mac, Android, iOS, the ‘nixes, or whatever else. Every useful operating system has vulnerabilities and runs vulnerable applications. The user is also very often tricked into loading untrusted code, either directly or by delivering it within data to a vulnerable application.
Blindly fearing malware -- such as what would happen if someone were to draw parallels to Chlamydia -- does not help you to understand it. There are reasons why malware exists; there are certain things which malware is capable of; and there are certain things which it is not.
The single biggest threat to security is complacency. Your information is valuable and you are responsible for preventing it from being exploited. The addition of a computer does not change the fundamental problem. Use the same caution on your computer and mobile devices as you would on the phone or in person. You would not leave your credit card information unmonitored on a park bench.
Introduction, Design, User Interface
Intel has decided to lead its introduction of Ivy Bridge for mobile with its most powerful quad-core parts. Many of these processors will end up in mainstream laptops, but they’re also great for gaming laptops. In our first look at Ivy Bridge we saw that it holds up well when paired with its own Intel HD 4000 graphics – if you keep the resolution around 1366x768. Much beyond that and the IGP just can’t hang.
Gamers will still want a beefy discrete GPU, and that’s what the G75 offers. Inside this beast you’ll find an Nvidia GeForce GTX 670M. Those who were following our Kepler coverage will remember that this is not based on Nvidia’s newest architecture but is instead a re-work of an older Fermi chip. That may seem a bit disappointing, and it is – but the performance of Nvidia’s older mobile chips was never lackluster.
So, this new laptop is packing a spanking-new Core i7-3720QM as well as Nvidia’s new GTX 670M. That’s an impressive combination, and ASUS has wisely backed it up with a well-rounded set of performance components.
GTX 690 Specifications
On Thursday May the 3rd at 10am PDT / 1pm EDT, stop by the PC Perspective Live page for an NVIDIA and PC Perspective hosted event surrounding the GeForce GTX 690 graphics card. Ryan Shrout and Tom Petersen will be on hand to talk about the technology, the performance characteristics as well as answer questions from the community from the chat room, twitter, etc. Be sure to catch it all at http://pcper.com/live
Okay, so it's not a surprise to you at all, or if it is, you haven't been paying attention. Today is the first on-sale date and review release for the new NVIDIA GeForce GTX 690 4GB dual-GPU Kepler graphics card that we first announced in late April. This is the dream card for any PC gamer out there: it combines a pair of GTX 680 GK104 GPUs on a single PCB, running them in a single-card SLI configuration, and it is easily the fastest single card we have ever tested. It is also the most expensive reference card we have ever seen, with a hefty $999 price tag.
So how does it perform? How about efficiency and power consumption - does the GTX 690 suffer the same problems the GTX 590 did? Can AMD hope to compete with a dual-GPU HD 7990 card in the future? All that and more in our review!
Kepler Architecture Overview
For those of you that may have missed the boat on the GTX 680 launch, the first card to use NVIDIA's new Kepler GPU architecture, you should definitely head over and read my review and analysis of that before heading into the deep-dive on the GTX 690 here today.
Kepler is a 3.54 billion transistor GPU with 1536 CUDA cores / stream processors, and even in a single-GPU configuration it is able to produce some impressive PC gaming performance results. The new SMX-based design has some modest differences from Fermi, the most dramatic of which is the removal of the "hot clock" - the design that ran the shaders at twice the clock speed of the rest of the GPU. Now the entire chip runs at one speed, higher than 1 GHz on the GTX 680.
Each SMX on Kepler now includes 192 CUDA cores as opposed to the 32 cores found in each SM on Fermi - a change that has increased efficiency and performance per watt quite dramatically.
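Dropping the hot clock still comes out well ahead on paper. A back-of-envelope peak-FLOPS comparison, assuming one fused multiply-add (2 FLOPs) per core per clock and the commonly cited reference clocks for each card:

```python
# Peak single-precision throughput: cores x shader clock x 2 FLOPs.

def peak_gflops(cores, shader_mhz):
    """Theoretical peak single-precision GFLOPS."""
    return cores * shader_mhz * 2 / 1000

fermi_gtx580 = peak_gflops(512, 1544)    # 772 MHz core, 2x hot clock
kepler_gtx680 = peak_gflops(1536, 1006)  # single ~1 GHz clock domain

print(f"GTX 580 (Fermi):  ~{fermi_gtx580:.0f} GFLOPS")
print(f"GTX 680 (Kepler): ~{kepler_gtx680:.0f} GFLOPS")
```

Tripling the core count at roughly two-thirds the shader clock still nearly doubles peak throughput, and it does so at a far friendlier power budget, which is the efficiency win Kepler was chasing.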
As I said above, there are a lot more details on these changes in our GeForce GTX 680 review.
The GeForce GTX 690 Specifications
Many of the details surrounding the GTX 690 have already been revealed by NVIDIA's CEO Jen-Hsun Huang during a GeForce LAN event in China last week. The card is going to be fast, expensive and is built out of components and materials we haven't seen any graphics card utilize before.
Despite the high performance level of the card, the GTX 690 isn't much heavier or much longer than the reference GTX 680 card. We'll go over the details surrounding the materials, cooler and output configuration on the next page, but let's take some time first to look at and debate the performance specifications.
Introduction and Features
SilverStone was one of the first PC power supply manufacturers to design and market a fanless power supply for silent operation. While many of their competitors’ fanless products have come and gone, SilverStone continues to build on their reputation and late last year released the SST-ST50NF 500W fanless power supply, the latest addition to the Nightjar series. We are a little late to the party in reviewing the ST50NF, but after talking with the good folks at SilverStone it appears the wait was worth it, as they have continued to tweak the design in recent months to improve AC ripple suppression on the DC outputs.
Here is what SilverStone has to say about the Nightjar 500W fanless power supply: “The fanless Nightjar series power supplies are long favorites for professionals and enthusiasts alike that require a noiseless power solution with no moving parts. With increasing power demands required from modern computers, SilverStone engineers have once again created another fanless power supply with leading output level in ST50NF. With 500W of continuous rating, near 80Plus Silver efficiency, ±3% voltage regulation, single +12V rail, multiple PCI-E connectors, and full host of safety features, the ST50NF is a great choice for mission-critical systems that need to operate in noiseless or dusty environments.”
SilverStone Nightjar 500W Fanless PSU Key Features:
• Fanless thermal solution, 0 dBA acoustics
• 500W continuous power output
• 80 PLUS Bronze certified with 84%~88% efficiency at 20%~100% load
• Compliance with ATX 12V v2.3 and EPS 12V specifications
• Strict ±3% voltage regulation
• PCI-E 8-pin and PCI-E 6-pin connectors
• Powerful class-leading single +12V rail (38A)
• Aluminum construction
• Server-level components
• Universal AC input (100~250V) with Active PFC
Editor’s Note: Fanless PC power supplies occupy a niche market and are targeted towards users who want a silent power supply for use in noise-sensitive areas or who need a power supply that can survive in a dusty/dirty environment that might choke and kill a conventional fan cooled PSU. Fanless power supplies rely on convection cooling and still require airflow in and around the power supply chassis to carry away the waste heat. So while the power supply itself may not have a fan, the computer enclosure must still have some means of creating airflow to keep the CPU, GPU and PSU cool. The last thing you want to do is put a fanless PSU in a closed enclosure without any fans or airflow!
Introduction, Low-Power Computing Was Never Enjoyable
It was nearly five years ago that ASUS announced the first Eee PC model at Computex. That October the first production version of what would come to be called a netbook, the ASUS Eee PC 4G, was released. The press latched on to the little Eee PC, making it the new darling of the computer industry. It was small, it was inexpensive, and it was unlike anything on the market.
Even so, the original Eee PC was a bit of a dead end. It used an Intel Celeron processor that was not suited for the application. It consumed too much power and took up a significant portion of the netbook’s production cost. If Intel’s Celeron had remained the only option for netbooks they probably would not have made the leap from press darling to mainstream consumer device.
It turned out that Intel (perhaps unintentionally) had the solution – Atom. Originally built with hopes that it might power “mobile Internet devices”, it proved to be the netbook’s savior. It allowed vendors to squeeze out cheap netbooks with Windows and a proper hard drive.
At the time, Atom and the netbook seemed promising. Sales were great – consumers loved the cute, pint-sized, affordable computers. In 2009 netbook sales jumped by over 160% quarter-over-quarter while laptops staggered along with single-digit growth. The buzz quickly jumped to other products, spawning nettops, media centers and low-power all-in-one-PCs. There seemed to be nothing an Atom powered computer could not do.
Fast forward. Earlier this year, PC World ran an article asking if netbooks are dead. U.S. sales peaked in the first quarter of 2010 and have been nose-diving since then, and while some interest remains in other markets, only central Europe and Latin America have held steady. It appears the star that burned brightest has indeed burned the quickest.
Background and Internals
A little over two weeks back, Intel briefed me on their new SSD 910 Series PCIe SSD. Since that day I've been patiently awaiting its arrival, which happened just a few short hours ago. I've burned the midnight oil for the sake of getting some greater details out there. Before we get into the goods, here's a quick recap of the specs for the 800 (or 400) GB model:
- PCIe 2.0 x8 LSI Falcon 2008 SAS HBA driving 4 (or 2) Hitachi Ultrastar SAS controllers, each in turn driving 200GB of IMFT 25nm High Endurance Technology flash memory, all on a triple stacked half-height PCB.
- 400GB model yields (r/w) 1GB/s / 750MB/s sequential and 90,000 / 38,000 4k IOPS.
- 800GB model yields (r/w) 2GB/s / 1GB/s sequential and 180,000 / 75,000 4k IOPS.
- 800GB 'performance mode' (r/w) 2GB/s / 1.5GB/s sequential and 180,000 / 75,000 4k IOPS.
"Performance Mode" is a feature that can be enabled through the Intel Data Center Tool Software. This feature is only possible on the 800GB model, but not for the reason you might think. The 400GB model is *always* in Performance Mode, since it can go full speed without drawing greater than the standard PCIe 25W power specification. The 800GB model has twice the components to drive yet it stays below the 25W limit so long as it is in its Default Mode. Switching the 800GB model to Performance Mode increases that draw to 38W (the initial press briefing stated 28W, which appears to have been a typo). Note that this increased draw is only seen during writes.
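The spec scaling above falls straight out of the module count (each SAS controller drives one 200GB NAND module, per the layout described earlier); a quick sketch:

```python
# Per-module throughput for the SSD 910: the 800GB card has four 200GB
# modules, the 400GB card two.

def per_module(total_mb_s, modules):
    """Throughput contributed by each NAND module."""
    return total_mb_s / modules

# Sequential reads scale cleanly: both capacities hit 500 MB/s per module.
read_800 = per_module(2000, 4)
read_400 = per_module(1000, 2)
print(f"Per-module sequential read: {read_800:.0f} MB/s")

# Writes are the power-limited side: 4 x 375 MB/s = 1.5 GB/s needs
# Performance Mode's 38W, while Default Mode caps the 800GB card at 1 GB/s.
write_perf = per_module(1500, 4)
print(f"Per-module write (performance mode): {write_perf:.0f} MB/s")
```

This also explains why the 400GB model is "always" in Performance Mode: with half the modules, its full-speed draw never approaches the 25W PCIe limit.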
OK, now on to the goodies:
Get Out the Microscope
AMD announced their Q1 2012 earnings last week, which turned out better than the preliminary numbers suggested. The bad news is that they posted a net loss of $590 million. That sounds pretty bad considering that their gross revenue was $1.59 billion, but there is more to the story than meets the eye. Of course, there are thoughts of “those spendthrift executives are burying AMD again”, but this is not the case. The loss lies squarely with the GLOBALFOUNDRIES equity and wafer agreements, which have been thoroughly retooled.
To get a good idea of where AMD stands in Q1, and for the rest of this year, we need to see how all these numbers actually get sorted out. Gross revenue is down 6% from the quarter before, which is expected due to seasonal pressures. This is right in line with Intel’s seasonal downturn, and in some ways AMD was affected slightly less than their larger competitor. They are down around 2% from last year’s quarter, and part of that can be attributed to the hard drive shortage that continued to affect the quarter.
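To put those percentages in context, we can work backwards from the Q1 2012 figure to the implied prior quarters (a rough sketch from the percentages quoted above, not AMD's reported line items):

```python
# Implied revenue for the comparison quarters, working back from
# Q1 2012's $1.59B and the quoted declines. These are estimates
# derived from the percentages in the text, not AMD's filings.
q1_2012 = 1.59e9    # gross revenue, USD
qoq_drop = 0.06     # down 6% from Q4 2011
yoy_drop = 0.02     # down ~2% from Q1 2011

q4_2011 = q1_2012 / (1 - qoq_drop)
q1_2011 = q1_2012 / (1 - yoy_drop)
print(f"Implied Q4 2011 revenue: ${q4_2011 / 1e9:.2f}B")
print(f"Implied Q1 2011 revenue: ${q1_2011 / 1e9:.2f}B")
```

That works out to roughly $1.69B in Q4 2011 and $1.62B a year ago – a seasonal dip of around $100 million, which makes the $590 million loss look far more like a one-time accounting event than an operational collapse.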
An update to a great architecture
This article will focus on the new Ivy Bridge, 3rd Generation Core Processor, from a desktop perspective. If you are curious about the performance and features of the Ivy Bridge mobile processors, be sure to check out our Core i7-3720QM ASUS N56VM review here!
One of the great things about the way Intel works as a company is that we get very few surprises on an annual basis in terms of the technology they release. Events like the Intel Developer Forum reveal architectural details months, and often years, ahead of the actual product, so developers, OEMs and the press are able to learn about them over a longer period of time. As you might imagine, that results in both a much better understanding of the new processor in question and a much less hurried one. If only GPU cycles would follow the same path...
Because of this long-tail release of a CPU, we already know quite a bit about Ivy Bridge, the new 22nm processor architecture from Intel, branded as the 3rd Generation Intel Core Processor Family. Ivy Bridge is a "tick" that brings a completely new process technology node, as we have seen over the last several years, but this CPU does more than simply shrink from 32nm to 22nm. Both the x86 and the processor graphics portions of the die see changes, though the majority fall on the GPU side.
Ivy Bridge Architecture
In previous tick-tock scenarios the "tick" results in a jump in process technology (45nm to 32nm, etc) with very little else being done. This isn't just to keep things organized in the slides above; it also keeps Intel's engineers focused on one job at a time - either a new microprocessor architecture OR a new process node, but not both.
For the x86 portion of Ivy Bridge this plan stays intact. The architecture is mostly unchanged from the currently available Sandy Bridge processors, including the continuation of a 2-chip platform solution and integrated graphics, memory controller, display engine, PCI Express and LLC alongside the IA cores.
Introduction, Overview, What is New With Ivy Bridge
This article will focus on the new Ivy Bridge, 3rd Generation Core Processor, from a mobile perspective. If you are curious about the performance and features of the Ivy Bridge desktop processors, be sure to check out our desktop Core i7-3770K review here.
It would be an understatement to say that Intel’s had a good streak over, say, the last five years. If life were commented on by the announcer from Unreal Tournament, Intel’s product releases would now be followed by the scream of “M-M-M-MONSTER KILLLLLLLL!” This is particularly true in the mobile market. Atom aside, Intel’s processors have repeatedly defeated AMD and its own preceding products.
Many companies in this position might feel it’s time to take a breather, but Intel has reached this point precisely because it doesn’t. The “tick-tock” strategy of constant improvement has made the company and its products stronger than ever before. Even the Pentium-powered Intel of the mid-90s seems weak compared to today’s juggernaut.
And so we come to the launch of Ivy Bridge. This is not a new architecture but instead an update of Sandy Bridge – however, that does not mean the under-the-hood revisions aren’t substantial. There’s a lot to talk about.
The reference system provided for our review is an ASUS N56VM, but this is not a full review of the laptop. That will be published later, after we’ve had more time to look at the laptop itself. Our focus today is on the new Intel hardware inside.
Let’s get to it.
Introduction, Design, User Interface
Dell has long tried to enter the high-end luxury laptop market. These attempts have always been met with mixed results. While Dell’s thick, powerful and relatively affordable XPS laptops are a good pick for people needing a desktop replacement, they don’t cause the thinness-obsessed media to salivate.
Enter the Dell XPS 15z. It’d be easy to think that it’s a MacBook Pro clone considering its similar pricing and silver exterior, but reality is simpler than that. This is just an XPS 15 that has been slimmed down. Like the standard XPS laptops, the 15z follows a form-balanced-by-function approach that is common among all of Dell’s laptops.
Slimming the chassis has forced the use of some less powerful components, but our review unit still arrived with some impressive hardware. Let’s have a look.