ioSafe: Introduction and Internals
Cloud storage is all the talk these days, and our own Tim Verry has been hard at work detailing as much of it as he can keep up with. While all of us at PCPer currently use cloud based solutions for many of the day-to-day goings on, it's not for everyone, and it tends to not be for very large chunks of data, either. Sometimes local storage is just the way to go – especially when you want to be the one in absolute control of the reliability and integrity of your data.
The general rule for proper backups is to have your local copy, a local backup (RAID is *not* a backup), and an additional off-site backup to cover things like theft, fires or floods. So let's say you simply have too much sensitive data for your internet connection to support bulk transferring to an off-site / cloud storage location. Perhaps cloud storage for that much space is simply cost prohibitive, or your data is sensitive enough that – despite encryption – you don't want it leaving your network and/or premises? Perhaps you're just stubborn and want only one backup of your data? I think I might have the answer you've been looking for – behold the ioSafe SoloPRO:
What is this thing, you may ask? On the inside it's a 1, 2, 3, or even 4TB 3.5" hard drive. On the outside it's a very durable and solid steel enclosure. The hard drive is wrapped in a thermally conductive yet water resistant 'HydroSafe' foil that provides water resistance rated at a 10 ft depth for 3 days with no data loss. The bonus, however, is not the water resistance - that feature is present primarily to battle the side effects of something much more drastic - the ioSafe is fire-proof. That feature comes from what sits between the steel casing and the shrink wrapped hard drive - something ioSafe calls DataCast (pictured below):
I'm going to break from my normal warranty voiding and show a photo from this past Storage Visions conference at the Consumer Electronics Show, where an ioSafe was already cracked open for our viewing pleasure:
What does $399 buy these days?
I think it is pretty safe to say that MSI makes some pretty nice stuff when it comes to video cards. Their previous generation of the HD 6000 and GTX 500 series of cards were quite popular, and we reviewed more than a handful here. That generation of cards really seemed to stake MSI’s reputation as one of the top video card vendors in the industry in terms of quality, features, and cooling innovation. Now we are moving onto a new generation of cards from both AMD and NVIDIA, and the challenges of keeping up MSI’s reputation seem to have increased.
The competition has become much more aggressive as of late. Asus has some unique solutions, and companies such as XFX have stepped up their designs to challenge the best of the industry. MSI now finds itself in a much more crowded space of upgraded cooler designs, robust feature sets, and pricing that reflects the larger selection of products that fit such niches. The question here is whether MSI's design methodology for non-reference cards is up to the challenge.
Previously I was able to review the R7970 Lightning from MSI, and it was an impressive card. I had some initial teething problems with that particular model, but a BIOS flash later and some elbow grease allowed it to work as advertised. Today I am looking at the R7950 TwinFrozr3GD5/OC. This card looks to feature a reference PCB combined with a Twin Frozr III cooling solution. I was not entirely sure what to expect with this card, since the Lightning was such a challenge at first.
Introduction and Features
Courtesy of ASUS
ASUS continues to optimize their hardware for the overclocking and PC gaming crowds, but they are also catering to a niche audience looking for ultra stable and durable PC components. ASUS's Sabertooth X79 motherboard is one of their latest products to bear the TUF series label and sport customized hardware and thermal components as well as a desert camo color scheme to complete the military look. This $329 motherboard comes with a five-year warranty, digital power management system, rugged chokes, solid capacitors, and MOSFETs that have been certified through third party, military-grade testing.
Courtesy of ASUS
The Sabertooth X79 also comes with a host of other features to improve SSD caching and give users quad GPU support for CrossfireX and SLI graphics card configurations. This board also includes a unique UEFI BIOS and natively supports 2TB hard drives with 64-bit operating systems. The USB BIOS "Flashback" feature also helps new users update their motherboard BIOS without entering the BIOS. ASUS states that users can use any USB storage device with the latest BIOS, push the BIOS button located on the back I/O panel for three seconds, and the board will automatically update the BIOS using standby power. Very cool!
Courtesy of ASUS
The back I/O panel on the Sabertooth X79 is no slouch either, as it gives users a healthy amount of USB 2.0, USB 3.0, and eSATA 6Gb/s ports for greater performance and expandability options. They also added a small fan over the back I/O panel as part of their "TUF Thermal Armor" feature that will help cool and exhaust heat from the motherboard out the back of the chassis. Let's move on to the rest of the Sabertooth X79's features where we will get our first out-of-the-box look at this motherboard.
Inside and Out
When you are a little fish in the great big pond of PC builders, you need to do something to stand out from the rest. The people behind DV Nation apparently were well aware of that when entering the system vendor business and offering up SSDs to every single system configuration. Through a new system they are offering, provocatively named the "RAMRod PC", DV Nation provides a pre-built system that has some very unique components and configuration settings.
Built around the Antec Three Hundred Two chassis, the first glance at the RAMRod doesn't really indicate anything special is going on under the hood. But let's take a quick look at the specs:
- Intel Core i7-3820 @ 4.4 GHz
- 64GB DDR3-1600 Memory from G.Skill
- Radeon HD 6990 4GB
- 2x Seagate Momentus XT 750GB Hybrid HDD in RAID-0
- OCZ RevoDrive 3 X2 480GB PCIE SSD
- RAMCache: SuperSpeed Supercache 8GB on PCIE SSD, 8GB on Momentus
- RAMDisk: 42GB ROMEX Primo rated at 8000 MB/s
- Cost: $5,400
Obviously there is a LOT of storage work going on in the RAMRod and the purpose of the rig is to be the fastest pre-configured storage available anywhere. If you are looking for a cheaper version of this system you can get a base model with 16GB of memory, 10GB RAMDisk, 2GB RAMCache, 240GB PCIe SSD, a single standard hard drive and even a GTX 680 for $2999.
Let's take a quick walk around the rest of the system before diving into the benchmarks!
Introduction, Design, User Interface
In August of 2011 I reviewed the Acer AC700-1099, one of two Chromebooks available in the North American market. The review was almost entirely negative. The hardware wasn’t great and the operating system was a bit of a mess–capable of only the most basic tasks.
Since then, the small surge of hype that surrounded the Chrome OS release has receded. You could be mistaken for thinking Google has abandoned it, but they haven’t. In typical Google fashion it has been slowly, quietly improved. Performance tweaks have allegedly improved web browsing, a proper file manager has been added and Google has just launched Google Drive, its cloud storage service.
Such enhancements could address a lot of the concerns I had with the Acer rendition. Do they? That’s what we’re here to find out. Let’s start with the basics - what’s inside?
The hardware inside the Samsung Series 5 is nearly identical to what was inside the Acer AC700-1099 that we reviewed late last year. We’re talking an Atom processor that must rely on its own IGP, two gigabytes of RAM and a tiny–but quick–16GB solid state drive.
While the equipment is the same, the pricing has changed. When we reviewed the Acer Chromebook it was $349.99. That has been slashed to $279.99. The Series 5, which used to be priced at $429, is now sold for just $299.
Introduction, Product Specifications And Line-Up
Earlier this year I penned an editorial about ultrabooks. It wasn’t all that nice. I pointed out that they are slow, that they require design sacrifices that not everyone will enjoy and that ultraportables often provide a better experience at the same price or lower.
Since then I’ve also discovered, through various reviews, that ultrabooks so far have not shown any battery life advantage over ultraportables. The advantage of a low-voltage processor is consistently negated by the smaller batteries squeezed into Intel’s thin form-factor.
I’m not on the bandwagon. This, however, should not come as a surprise. It’s exceedingly rare for a company, even of Intel’s size, to knock a new product out of the park on its first try. The models released so far were decent products in some ways, but they were also the hardware equivalent of a beta. Intel and laptop manufacturers are now responding to what they’ve discovered.
This brings us to Ivy Bridge. As I noted in my Ivy Bridge for mobile review, Intel’s architectural update seems to be more exciting for laptops than for desktops. The Core i7-3720QM we received in our Ivy Bridge reference laptop was a beast, easily defeating all previous mobile processors in our benchmarks and also posting surprisingly good results in gaming tests. Despite this, battery life seemed to at least remain the same.
XFX Throws into the Midrange Ring
Who is this XFX? This is a brand that I have not dealt with in a long time. In fact, the last time I had an XFX card was some five years ago, and it was in the form of the GeForce 8800 GTX XXX Edition. This was a pretty awesome card for the time, and it seemed to last forever in terms of performance and features in the new DX 10 world that was 2007/2008. This was a heavily overclocked card, and it would get really loud during gaming sessions. I can honestly say though that this particular card was trouble-free and well built.
XFX has not always had a great reputation though, and the company has gone through some very interesting twists and turns over the years. XFX is a subsidiary of Pine Technologies. Initially XFX dealt strictly with NVIDIA based products, but a few years back when the graphics market became really tight, NVIDIA dropped several manufacturers and focused their attention on the bigger partners. Among the victims of this tightening were BFG Technologies and XFX. Unlike BFG, XFX was able to negotiate successfully with AMD to transition their product lineup to Radeon products. Since then XFX has been very aggressive in pursuing unique designs based on these AMD products. While previous generation designs did not step far from the reference products, this latest generation is a big step forward for XFX.
Introduction, Design, User Interface
When Ivy Bridge was released Ryan did a deep-dive and desktop review while I worked on a review of the mobile processor. My mobile review was based on a reference laptop known as the ASUS N56VM. Although considered a “reference platform,” the laptop is really a production product and successor to the outgoing ASUS N55. We held off on a full review to provide coverage of the new G75, but now it’s time to revisit the N56.
This is an important product for ASUS. The 15.6” laptop remains a sales leader and the N56 will likely be the company’s flagship in this arena for the coming year. This means it won’t be a high-volume model, but it will serve as a “halo product” – an example of what ASUS is capable of. If the company follows its usual modus operandi we’ll see this same chassis used as the basis for a number of variations at different price points with different hardware.
As you may remember from our Ivy Bridge for mobile review, the model we received is equipped with a Core i7-3720QM processor. It’s hard to say if this is a mid-range quad given the limited number of Ivy Bridge products available so far, but it probably will end up in that role. What about the rest of the system? Well, take a look.
Introduction and Features
Antec has one of the largest selections of PC power supplies on the market today and their new HCP-1000 Platinum power supply features 1000W of continuous output power and is 80 Plus Platinum certified. The High Current Pro Platinum is the first power supply in a new series that will replace three existing lines, the TruePower Quattro, High Current Pro (80 Plus Gold), and Antec’s Signature series. The High Current Pro Platinum series will be the new top class of maximum efficiency within Antec’s range of power supplies with modular cabling.
The HCP-1000 Platinum is based on a brand new platform, co-developed with Antec’s partner Delta Electronics, and combines several new technological developments and features to provide unmatched performance and be the very best power supply possible. The HCP-1000 Platinum incorporates all modular cables with six PCI-E connectors, NVIDIA SLI-Ready certification, ErP Lot 6:2013 compliance, and a 7-year warranty, and it is being introduced with an MSRP of $269.90 USD.
Here is what Antec has to say about their new HCP-1000 PSU:
“Antec's High Current Pro Platinum series is the pinnacle of power supplies. High Current Pro Platinum is fully modular with a revolutionary 20+8-pin MBU socket for the needs of tomorrow. By using a PSU that is 80 PLUS® PLATINUM & ErP Lot 6: 2013 certified, operating up to 94% efficient, you can reduce your electricity bill by up to 25% when compared to many other power supplies. HCP Platinum's innovative 16-pin sockets create a new level of flexibility by doubling the modular connectivity, supporting two different 8-pins connectors and even future connectors of 10, 12, 14 or 16-pins. Backed by a 7 year warranty and lifetime global 24/7 support, the HCP-1000 Platinum embodies everything a power supply can accomplish today.”
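To put the efficiency claim in perspective, here is a rough sketch of what a 94% rating means at the wall. The 94% figure comes from the certification above; the ~87% comparison point is my own assumption for illustration, not an Antec spec:

```python
# Rough wall-draw estimate for a PSU at a given efficiency.
# The 94% figure is from the 80 Plus Platinum rating above;
# the 87% comparison unit is an assumption for illustration.

def wall_draw(dc_load_w, efficiency):
    """AC power drawn from the wall for a given DC load."""
    return dc_load_w / efficiency

load = 1000  # watts of DC output
platinum = wall_draw(load, 0.94)
other = wall_draw(load, 0.87)

print(f"94% efficient: {platinum:.0f} W at the wall")
print(f"87% efficient: {other:.0f} W at the wall")
print(f"Difference:    {other - platinum:.0f} W at full load")
```

At a full 1000W load that works out to roughly 86W less drawn from the wall, which is where marketing claims about lower electricity bills come from.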
Antec High Current Pro Platinum 1000W PSU Key Features:
• 1000W continuous power output at 50°C
• 80 Plus Platinum Certified (up to 94% efficient)
• Four High Current +12V rails with high maximum load
• 100% +12V output for maximum CPU and GPU support
• Quiet 135mm double ball bearing fan
• Thermal Manager – advanced low voltage fan controller
• All Japanese brand, heavy duty capacitors
• PhaseWave Design server-class, full-bridge LLC topology
• NVIDIA SLI-Ready certified (six PCI-E connectors)
• Active PFC with Universal AC line input
• ErP Lot 6:2013 Compliant
• Fully modular sleeved cables
• Protection: OCP, OVP, UVP, SCP, OPP, OTP, SIP, NLO and BOP
• Antec AQ7 7-year warranty and lifetime global 24/7 support
When the Fermi architecture was first discussed in September of 2009 at the NVIDIA GPU Technology Conference it marked an interesting turn for the company. Not only was NVIDIA releasing details about a GPU that wasn’t going to be available to consumers for another six months, but also that NVIDIA was building GPUs not strictly for gaming anymore – HPC and GPGPU were a defining target of all the company’s resources going forward.
Kepler on the other hand seemed to go back in the other direction with a consumer graphics release in March of this year without discussion of the Tesla / Quadro side of the picture. While the company liked to tout that Kepler was built for gamers I think you’ll find that with the information NVIDIA released today, Kepler was still very much designed to be an HPC powerhouse. More than likely NVIDIA’s release schedules were altered by the very successful launch of AMD’s Tahiti graphics cards under the HD 7900 brand. As a result, gamers got access to GK104 before NVIDIA’s flagship professional conference and the announcement of GK110 – a 7.1 billion transistor GPU aimed squarely at parallel computing workloads.
With the Fermi design, NVIDIA took a gamble and changed direction with its GPU design, betting that it could develop a microprocessor primarily intended for the professional markets while still appealing to the gaming markets that have sustained it for the majority of the company’s existence. While the GTX 480 flagship consumer card, and to some degree the GTX 580, had overheating and efficiency drawbacks for gaming workloads compared to AMD GPUs, the GTX 680 based on Kepler GK104 has improved on them greatly. NVIDIA has still designed Kepler for high-performance computing, though, with a focus this time on power efficiency as well as performance – but we hadn’t seen the true king of this product line until today.
GK110 Die Shot
Built on the 28nm process technology from TSMC, GK110 is an absolutely MASSIVE chip comprising 7.1 billion transistors, and though NVIDIA hasn’t given us a die size, it is likely coming close to the reticle limit of 550 square millimeters. NVIDIA is proud to call this chip the most ‘architecturally complex’ microprocessor ever built, and while impressive, that means there is potential for some issues when it comes to producing a chip of this size. This GPU will be able to offer more than 1 TFlop of double precision computing power with greater than 80% efficiency and 3x the performance per watt of Fermi designs.
NVIDIA puts its head in the clouds
Today at the 2012 NVIDIA GPU Technology Conference (GTC), NVIDIA took the wraps off a new cloud gaming technology that promises to reduce latency and improve the quality of streaming gaming using the power of NVIDIA GPUs. Dubbed GeForce GRID, NVIDIA is offering the technology to online services like Gaikai and OTOY.
The goal of GRID is to bring the promise of "console quality" gaming to every device a user has. The term "console quality" is kind of important here as NVIDIA is trying desperately to not upset all the PC gamers that purchase high-margin GeForce products. The goal of GRID is pretty simple though and should be seen as an evolution of the online streaming gaming that we have covered in the past–like OnLive. Being able to play high quality games on your TV, your computer, your tablet or even your phone without the need for high-performance and power hungry graphics processors through streaming services is what many believe the future of gaming is all about.
GRID starts with the Kepler GPU - what NVIDIA is now dubbing the first "cloud GPU" - that has the capability to virtualize graphics processing while being power efficient. The inclusion of a hardware fixed-function video encoder is important as well as it will aid in the process of compressing images that are delivered over the Internet by the streaming gaming service.
This diagram shows us how the Kepler GPU handles and accelerates the processing required for online gaming services. On the server side, the necessary process for an image to find its way to the user is more than just a simple render to a frame buffer. In current cloud gaming scenarios the frame buffer would have to be copied to the main system memory, compressed on the CPU and then sent via the network connection. With NVIDIA's GRID technology that capture and compression happens on the GPU memory and thus can be on its way to the gamer faster.
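To make the difference concrete, here is a toy latency model of the two paths. Every per-stage number below is invented purely for illustration; NVIDIA has not published per-stage timings, and only the structure of the pipeline comes from their description:

```python
# Toy latency model of the two capture paths described above.
# All millisecond figures are made up for illustration; only the
# pipeline structure (which stages exist) is from the article.

cpu_path = {
    "render": 16,          # render one frame on the GPU
    "copy_to_sysmem": 5,   # framebuffer copied to system RAM
    "cpu_encode": 15,      # software H.264 encode on the CPU
    "network_send": 30,    # delivery over the Internet
}

grid_path = {
    "render": 16,
    "gpu_encode": 4,       # fixed-function encoder reads GPU memory directly
    "network_send": 30,
}

print("Conventional path:", sum(cpu_path.values()), "ms")
print("GRID path:        ", sum(grid_path.values()), "ms")
```

The point the diagram makes is structural: removing the copy and the CPU encode stages shortens the chain regardless of the exact numbers.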
The results are H.264 streams that are compressed quickly and efficiently to be sent out over the network and return to the end user on whatever device they are using.
AMD’s position is not enviable. Though they’re the only large competitor to Intel in the market for x86 processors, the company is dwarfed by the Giant of Santa Clara. As a resident of Portland, I can’t forget this fact. Intel offices are strewn across the landscape of the western suburbs, most of them at least four times larger than any office I’ve worked at.
Despite the long odds, AMD is set on this course for now and has no choice but to soldier on. And so we have today’s reference platform, a laptop powered by AMD’s latest mobile processor, codenamed Trinity. These processors, like the older Llano models, will be sold as the AMD A-Series. This might lead you to think that it’s simply another minor update, but that’s not the case.
Llano was released around the same time as Bulldozer, but it did not use Bulldozer cores. Instead it used yet another update of Stars, which is a mobile incarnation of Phenom II, which was of course an improvement upon the original Phenom. The “new” Llano APU in fact was equipped with some rather old processor cores. This showed in the performance of the mobile Llano products. They simply could not keep up with Sandy Bridge’s more modern cores.
Bulldozer isn’t coming to mobile with Trinity, either. Instead we’re receiving Piledriver. AMD has effectively skipped the first iteration of its new Bulldozer architecture and moved straight on to the second. Piledriver includes the third generation of AMD’s Turbo Core and promises “up to 29%” better processor performance than last year’s Llano-based A-Series.
That’s a significant improvement, should it turn out to be correct. Is it true, and will it be enough to catch up to Intel?
Search engine giant Google took the wraps off its long rumored cloud storage service called Google Drive this week. The service has been rumored for years, but is (finally) official. In the interim, several competing services have emerged and even managed to grab significant shares of the market. Therefore, it will be interesting to see how Google’s service will stack up. In this article, we’ll be taking Google Drive on a test drive from installation to usage to see if it is a worthy competitor to other popular storage services—and whether it is worth switching to!
How we test
In order to test the service, I installed the Google desktop application (we’ll be taking a look at the mobile app soon) and uploaded a variety of media file types including documents, music, photos, and videos in numerous formats. The test system in question is an Intel i7 860 based system with 8GB of RAM and a wired Ethernet connection to the LAN. The cable ISP I used offers approximately two to three Mbps uploads (real world speeds; 4 Mbps promised) for those interested.
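For a sense of what those upload speeds mean in practice, a quick back-of-the-envelope calculation (the file sizes below are arbitrary examples, not the actual test set):

```python
# Minutes to upload a file at the real-world speeds quoted above
# (2-3 Mbps). File sizes are arbitrary examples.

def upload_minutes(size_mb, mbps):
    """Minutes to upload size_mb megabytes at mbps megabits/second."""
    return (size_mb * 8) / mbps / 60

for size in (100, 700, 4096):  # a photo batch, a video, a large archive
    fast, slow = upload_minutes(size, 3), upload_minutes(size, 2)
    print(f"{size:>5} MB: {fast:.0f} to {slow:.0f} minutes")
```

A 4GB archive lands somewhere between three and four and a half hours, which is exactly the bulk-transfer pain point cloud storage skeptics point to.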
Google’s cloud service was officially unveiled on Tuesday, but the company is still rolling out activations for people’s accounts (my Google Drive account activated yesterday [April 27, 2012], for example). And it now represents the new single storage bucket for all your Google needs (Picasa, Gmail, Docs, App Inventor, etc.; although people can grandfather themselves into the cheaper Picasa online storage).
Old Picasa Storage vs New Google Drive Storage Plans
| Storage Tier (old/new) | Old Plan Pricing (per year) | New Plan Pricing (per year) |
|---|---|---|
| 20 GB/25 GB | $5 | $29.88 |
| 80 GB/100 GB | $20 | $59.88 |
(Picasa Plans were so much cheaper–hold onto them if you're able to!)
The way Google Drive works is much like that of Dropbox wherein a single folder is synced between Google’s servers and the user’s local machine (though sub-folders are okay to use and the equivalent of "labels" on the Google side). The storage in question is available in several tiers, though the tier that most people will be interested in is the free one. On that front, Google Drive offers 5GB of synced storage, 10GB of Gmail storage, and 1GB of Picasa Web Albums photo backup space. Beyond that, Google is offering nine paid tiers from an additional 25GB of "Drive and Picasa" storage (and 25GB of Gmail email storage) for $2.49 a month to 16TB of Drive and Picasa Web Albums storage with 25GB of Gmail email storage for $799.99 a month. The chart below details all the storage tiers available.
| Storage Tiers | Drive/Picasa Storage | Gmail Storage | Price (per month) |
|---|---|---|---|
1024MB = 1GB, 1024GB = 1TB
The above storage numbers do not include the 5GB of free drive storage that is also applied to any paid tiers. The free 1GB of Picasa storage does not carry over to the paid tiers.
Even better, Google has not been stingy with their free storage. They continue to allow users to upload as many photos as they want to Google+ (they are resized to a max of 2048x2048 pixels though). Also, Google Documents stored in the Docs format continue to not count towards the storage quota. Videos uploaded to Google+ under 15 minutes in length are also free from storage limitations. As far as Picasa Web Albums (which also includes photos uploaded to blogger blogs) goes, any images under 2048x2048 and videos under 15 minutes in length do not count towards the storage quota either. If you exceed the storage limit, Google will still allow you to access all of your files, but you will not be able to create any new files until you delete enough files to get below the storage quota. The one exception to that rule is the “storage quota free” file types mentioned above–Google will still let you create/upload those. For Gmail storage, Google allows you to receive and store as much email as you want up to the quota. After you reach the quota, any new email will hard bounce and you will not be able to receive new messages.
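The quota rules above can be boiled down to a small predicate. This is just a paraphrase of the article's description; the function name and arguments are mine, not Google's actual API:

```python
# Does an uploaded item count against the storage quota?
# A paraphrase of the rules described above; not Google's API.

def counts_toward_quota(kind, width=0, height=0, minutes=0,
                        is_docs_format=False):
    if is_docs_format:
        return False                       # native Docs files are always free
    if kind == "image" and width <= 2048 and height <= 2048:
        return False                       # small images are free
    if kind == "video" and minutes < 15:
        return False                       # short videos are free
    return True

print(counts_toward_quota("image", 1920, 1080))   # a typical photo: free
print(counts_toward_quota("video", minutes=30))   # a long video: counts
```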
In that same vein, Google’s paid tiers are not the cheapest but are still fairly economical. They are less expensive per GB than Dropbox, for example, but are more expensive than Microsoft’s new Skydrive tiers. One issue that many users face with online storage services is the file size limit placed on individual files. While Dropbox places no limits (other than overall storage quota) on individual file size, many other services do. Google offers a compromise to users in the form of 10GB per file size limits. While you won’t be backing up Virtualbox hard drives or drive image backups to Google, they’ll let you backup anything else (within reason).
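The per-GB economics are easy to check from the two tiers named earlier (tier sizes and prices are from the article; figures for Dropbox or SkyDrive would be needed for a full comparison):

```python
# Monthly cost per gigabyte for two Google Drive tiers named above.

tiers = {
    "25 GB": (25, 2.49),
    "16 TB": (16 * 1024, 799.99),
}

for name, (gb, price) in tiers.items():
    print(f"{name}: ${price / gb:.3f}/GB per month")
```

The smallest paid tier works out to roughly $0.10/GB per month, while the 16TB tier drops below $0.05/GB, so the bulk tiers are actually the better per-gigabyte deal.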
If the netbook was a shooting star, the nettop was an asteroid that never quite entered our atmosphere. Instead it flew silently by, noted by NASA, written about in a handful of articles, and now forgotten.
That doesn’t mean it has ceased to exist, however. It’s still out there, floating in space - and it occasionally swings back around for an encore. So we have the Lenovo IdeaCentre Q180.
Of course, simply advertising a small computer as - well, a small computer - isn’t particularly sexy. The Q180 is instead being sold not just as a general-purpose computer but also as a media center (with optional Blu-ray, not found on our review unit). There’s no doubting the demand for this, but so far, attempts to make PC-based media center computers have not done well - even Boxee, with its custom Linux-based operating system, was fussy. Can the Q180 succeed where others have stumbled? Let’s start with the specs.
It’s been a while since we tested anything Atom. Since our last look at this line of processors, Intel has updated to the code-name Cedar Trail processors, allowing for higher clock speeds. The 2.13 GHz dual-core Atom D2700 looks quite robust on paper. But this is still the same old architecture, so per-clock performance doesn’t come close to Intel’s Pentium and Core processors.
Also included is AMD’s Radeon HD 6450A, a version of the HD 6450 built for small systems that don’t have room for a typical PCIe graphics card. This makes up for the fact that all Atom processors still use hopelessly outdated Intel Media Accelerator graphics, which are entirely unsuitable for HD video.
GK104 takes a step down
While the graphics power found in the new GeForce GTX 690, the GeForce GTX 680 and even the Radeon HD 7970 is incredibly impressive, if we are really honest with ourselves the real meat of the GPU market shops at price points much lower than $999. Today's not-so-well-kept-secret release of the GeForce GTX 670 attempts to bring the price of entry to the NVIDIA Kepler architecture down to a more attainable level while also resetting the performance-per-dollar metrics of the GPU world once again.
The GeForce GTX 670 is in fact a very close cousin to the GeForce GTX 680 with only a single SMX unit disabled and a more compelling $399 price tag.
The GTX 670 GPU - Nearly as fast as the GTX 680
The secret is out - GK104 finds its way onto a third graphics card in just two months - but in this iteration the hardware has been reduced slightly.
The GTX 670 block diagram we hacked together above is really just a GTX 680 diagram with a single SMX unit disabled. While the GTX 680 sported a total of 1536 CUDA cores broken up into eight 192 core SMX units, the new GTX 670 will include 1344 cores. This will also drop the texture units to 112 (from 128 on the GTX 680) though the ROP count stays at 32 thanks to the continued use of a 256-bit memory interface.
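The unit counts fall straight out of the SMX math; a quick sanity check using only figures from the paragraph above:

```python
# GTX 670 unit counts derived from disabling one SMX on GK104.
# All per-unit figures are from the article.

cores_per_smx = 192
tex_per_smx = 16           # 128 texture units / 8 SMX on the GTX 680

gtx680_smx = 8
gtx670_smx = gtx680_smx - 1

print("GTX 670 CUDA cores:   ", gtx670_smx * cores_per_smx)  # 1344
print("GTX 670 texture units:", gtx670_smx * tex_per_smx)    # 112
```

The ROPs are untouched because they hang off the memory interface, not the SMX units, which is why the count stays at 32 with the 256-bit bus intact.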
Infectious fear is infectious
PCMag and others have released articles based on a blog post from Sophos. The original post discussed how frequently malware designed for Windows is found on Mac computers. What these articles mostly demonstrate is that we really need to understand security: what it is, and why it matters. The largest threats to security are complacency and misunderstanding; users need to grasp the problem rather than have it buried under weak analogies and illusions of software crutches.
Your data and computational ability can be very valuable to people looking to exploit it.
The point of security is not to avoid malware, nor is it to remove it if you failed to avoid it. Those actions are absolutely necessary components of security -- do those things -- but they are not the goal of security. The goal of security is to retain control of what is yours. At the same time, be a good neighbor and make it easier for others to do the same with what is theirs.
Your responsibility extends far beyond just keeping a current antivirus subscription.
The problem goes far beyond throwing stones...
The distinction is subtle.
Your operating system is irrelevant. You could run Windows, Mac, Android, iOS, the ‘nixes, or whatever else. Every useful operating system has vulnerabilities and runs vulnerable applications. The user is also very often tricked into loading untrusted code, either directly or by delivering it within data to a vulnerable application.
Blindly fearing malware -- such as what would happen if someone were to draw parallels to Chlamydia -- does not help you to understand it. There are reasons why malware exists; there are certain things which malware is capable of; and there are certain things which malware is not.
The single biggest threat to security is complacency. Your information is valuable and you are responsible for preventing it from being exploited. The addition of a computer does not change the fundamental problem. Use the same caution on your computer and mobile devices as you would on the phone or in person. You would not leave your credit card information unmonitored on a park bench.
Introduction, Design, User Interface
Intel has decided to lead its introduction of Ivy Bridge for mobile with its most powerful quad-core parts. Many of these processors will end up in mainstream laptops, but they’re also great for gaming laptops. In our first look at Ivy Bridge we saw that it holds up well when paired with its own Intel HD 4000 graphics – if you keep the resolution around 1366x768. Ask much more than that and the IGP just can’t hang.
Gamers will still want a beefy discrete GPU, and that’s what the G75 offers. Inside this beast you’ll find an Nvidia GeForce GTX 670M. Those who were reading our Kepler coverage will remember that this is not based on Nvidia’s newest architecture but is instead a re-work of an older Fermi chip. That may seem a bit disappointing, and it is – but the performance of Nvidia’s older mobile chips wasn’t lackluster.
So, this new laptop is packing a spanking-new Core i7-3720QM as well as Nvidia’s new GTX 670M. That’s an impressive combination, and ASUS has wisely backed it up with a well-rounded set of performance components.
GTX 690 Specifications
On Thursday May the 3rd at 10am PDT / 1pm EDT, stop by the PC Perspective Live page for an NVIDIA and PC Perspective hosted event surrounding the GeForce GTX 690 graphics card. Ryan Shrout and Tom Petersen will be on hand to talk about the technology, the performance characteristics as well as answer questions from the community from the chat room, twitter, etc. Be sure to catch it all at http://pcper.com/live
Okay, so it's not a surprise to you at all, or if it is, you haven't been paying attention. Today is the first on-sale date and review release for the new NVIDIA GeForce GTX 690 4GB dual-GPU Kepler graphics card that we first announced in late April. This is the dream card for any PC gamer out there: it combines a pair of GTX 680 GK104 GPUs on a single PCB, running them in a single-card SLI configuration, and it is easily the fastest single card we have ever tested. It is also the most expensive reference card we have ever seen, with a hefty $999 price tag.
So how does it perform? How about efficiency and power consumption - does the GTX 690 suffer the same problems the GTX 590 did? Can AMD hope to compete with a dual-GPU HD 7990 card in the future? All that and more in our review!
Kepler Architecture Overview
For those of you that may have missed the boat on the GTX 680 launch, the first card to use NVIDIA's new Kepler GPU architecture, you should definitely head over and read my review and analysis of that before heading into the deep-dive on the GTX 690 here today.
Kepler is a 3.54 billion transistor GPU with 1536 CUDA cores / stream processors, and even in a single-GPU configuration it is able to produce some impressive PC gaming performance results. The new SMX-based design has some modest differences from Fermi, the most dramatic of which is the removal of the "hot clock" - the factor that ran the shaders at twice the clock speed of the rest of the GPU. Now the entire chip runs at one speed, higher than 1 GHz on the GTX 680.
Each SMX on Kepler now includes 192 CUDA cores as opposed to the 32 cores found in each SM on Fermi - a change that has increased efficiency and performance per watt quite dramatically.
As I said above, there are a lot more details on the changes in our GeForce GTX 680 review.
The GeForce GTX 690 Specifications
Many of the details surrounding the GTX 690 have already been revealed by NVIDIA's CEO Jen-Hsun Huang during a GeForce LAN event in China last week. The card is going to be fast, expensive and is built out of components and materials we haven't seen any graphics card utilize before.
Despite the high performance level of the card, the GTX 690 isn't much heavier or much longer than the reference GTX 680. We'll go over the details surrounding the materials, cooler and output configuration on the next page, but let's take some time first to look at and debate the performance specifications.
Introduction and Features
SilverStone was one of the first PC power supply manufacturers to design and market a fanless power supply for silent operation. While many of their competitors’ fanless products have come and gone, SilverStone continues to build on their reputation, and late last year released the SST-ST50NF 500W fanless power supply, the latest addition to the Nightjar series. We are a little late to the party in reviewing the ST50NF, but after talking with the good folks at SilverStone it appears the wait was worth it, as they have continued to tweak the design in recent months to improve AC ripple suppression on the DC outputs.
Here is what SilverStone has to say about the Nightjar 500W fanless power supply: The fanless Nightjar series power supplies are long-time favorites of professionals and enthusiasts alike who require a noiseless power solution with no moving parts. With the increasing power demands of modern computers, SilverStone engineers have once again created another fanless power supply with a leading output level in the ST50NF. With a 500W continuous rating, near 80 Plus Silver efficiency, ±3% voltage regulation, a single +12V rail, multiple PCI-E connectors, and a full host of safety features, the ST50NF is a great choice for mission-critical systems that need to operate in noiseless or dusty environments.
SilverStone Nightjar 500W Fanless PSU Key Features:
• Fanless thermal solution, 0 dBA acoustics
• 500W continuous power output
• 80 PLUS Bronze certified with 84%~88% efficiency at 20%~100% load
• Compliance with ATX 12V v2.3 and EPS 12V Specifications
• Strict ±3% voltage regulation
• PCI-E 8-pin and PCI-E 6-pin connectors
• Powerful class-leading single +12V rail (38A)
• Aluminum construction
• Server-level components
• Universal AC input (100~250V) with Active PFC
Editor’s Note: Fanless PC power supplies occupy a niche market and are targeted towards users who want a silent power supply for use in noise-sensitive areas or who need a power supply that can survive in a dusty/dirty environment that might choke and kill a conventional fan cooled PSU. Fanless power supplies rely on convection cooling and still require airflow in and around the power supply chassis to carry away the waste heat. So while the power supply itself may not have a fan, the computer enclosure must still have some means of creating airflow to keep the CPU, GPU and PSU cool. The last thing you want to do is put a fanless PSU in a closed enclosure without any fans or airflow!
Introduction, Low-Power Computing Was Never Enjoyable
It was nearly five years ago that ASUS announced the first Eee PC model at Computex. That October, the first production version of what would come to be called a netbook, the ASUS Eee PC 4G, was released. The press latched on to the little Eee PC, making it the new darling of the computer industry. It was small, it was inexpensive, and it was unlike anything on the market.
Even so, the original Eee PC was a bit of a dead end. It used an Intel Celeron processor that was not well suited to the application: it consumed too much power and accounted for a significant portion of the netbook’s production cost. If Intel’s Celeron had remained the only option for netbooks, they probably would not have made the leap from press darling to mainstream consumer device.
It turned out that Intel (perhaps unintentionally) had the solution – Atom. Originally built in the hope that it might power “mobile Internet devices,” it proved to be the netbook’s savior, allowing vendors to squeeze out cheap netbooks with Windows and a proper hard drive.
At the time, Atom and the netbook seemed promising. Sales were great – consumers loved the cute, pint-sized, affordable computers. In 2009 netbook sales jumped by over 160% quarter-over-quarter while laptops staggered along with single-digit growth. The buzz quickly jumped to other products, spawning nettops, media centers and low-power all-in-one PCs. There seemed to be nothing an Atom-powered computer could not do.
Fast forward. Earlier this year, PC World ran an article asking if netbooks are dead. U.S. sales peaked in the first quarter of 2010 and have been nose-diving since then, and while some interest remains in the other markets, only central Europe and Latin America have held steady. It appears the star that burned brightest has indeed burned the quickest.