Subject: Systems
Manufacturer: PC Perspective
Tagged: quad-core, gpu, gaming, cpu

Introduction and Test Hardware

logos.jpg

The PC gaming world has become divided between two distinct types of games: those designed and programmed specifically for the PC, and console ports. Unfortunately for PC gamers, far too many titles are simply ported over (or at least optimized for consoles first) these days, and while PC users can usually enjoy higher detail levels and unlocked frame rates, there is now the issue of processor core count to consider. This may seem artificial, but in recent months quite a few games have been released that require at least a quad-core CPU to even run (without modifying the game).

One possible explanation for this is current console hardware: PS4 and Xbox One systems are based on multi-core AMD APUs (the 8-core AMD "Jaguar"). While a quad-core (or higher) processor might not be technically required to run current games on PCs, the fact that those cores exist on consoles might help to explain a quad-core CPU as a minimum spec. This trend could simply be the result of current x86 console hardware, as development of console versions of games is often prioritized (and porting has become common for the PC versions). So it is that popular dual-core processors like the $69 Intel Pentium Anniversary Edition (G3258) are suddenly less viable for a future-proofed gaming build. While hacking these games might make dual-core CPUs work, and might be the only way to get such a game to even load when the CPU is checked at launch, this is obviously far from ideal.

4790K_box.jpg

Is this much CPU really necessary?

Rather than rail against this quad-core trend and question its necessity, I decided instead to see just how much of a difference the processor alone might make with some game benchmarks. This quickly escalated into more and more system configurations as I accumulated parts, eventually arriving at 36 different configurations at various price points. Yeah, I said 36. (Remember that Budget Gaming Shootout article from last year? It's bigger than that!) Some of the charts that follow are really long (you've been warned), and there’s a lot of information to parse here. I wanted this to be as fair as possible, so there is a theme to the component selection. I started with three processors each (low, mid, and high price) from AMD and Intel, and then three graphics cards (again, low, mid, and high price) from AMD and NVIDIA.

Here’s the component rundown with current pricing*:

Processors tested:

Graphics cards tested:

  • AMD Radeon R7 260X (ASUS 2GB OC) - $137.24
  • AMD Radeon R9 280 (Sapphire Dual-X) - $169.99
  • AMD Radeon R9 290X (MSI Lightning) - $399
  • NVIDIA GeForce GTX 750 Ti (OEM) - $149.99
  • NVIDIA GeForce GTX 770 (OEM) - $235
  • NVIDIA GeForce GTX 980 (ASUS STRIX) - $519

*These prices were current as of 6/29/15 and, of course, fluctuate.

Continue reading our Quad-Core Gaming Roundup: How Much CPU Do You Really Need?

Subject: Storage
Manufacturer: Samsung

Introduction, Specifications and Packaging

Introduction:

Where are all the 2TB SSDs? It's a question we've been hearing since SSDs started to go mainstream seven years ago. While we have seen a few come along on the enterprise side as far back as 2011, those were prohibitively large, expensive, and out of reach for most consumers. Part of the problem initially was one of packaging: flash dies simply did not have sufficient capacity (and could not be stacked in sufficient quantities) to reach 2TB in a consumer-friendly form factor. We have been getting close lately, with many consumer-focused 2.5" SATA products reaching 1TB, but things stagnated there for a bit. Samsung launched their 850 EVO and Pro in capacities up to 1TB with plenty of additional space inside the 2.5" housing, so it stood to reason that packaging was no longer the limit. Why, then, did they keep waiting?

The first answer is one of market demand. When SSDs were pushing $1/GB, the thought of a 2TB SSD was great right up to the point where you did the math and realized it would cost more than a typical enthusiast-grade PC. That was just a tough pill to swallow, and market projections showed it would take more work to produce and market the additional SKU than it would make back in profits.
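To put that in rough numbers, here is a back-of-the-envelope sketch: the $1/GB figure comes from the paragraph above, while the PC build cost is simply an assumed point of comparison.

```python
# Back-of-the-envelope: a 2TB SSD in the ~$1/GB era vs. an assumed enthusiast PC budget.
price_per_gb = 1.00            # USD/GB, roughly where consumer SSDs sat a few years ago
capacity_gb = 2000             # a marketing "2TB" drive (decimal gigabytes)
enthusiast_pc_budget = 1500    # USD, assumed build cost used only for comparison

ssd_cost = price_per_gb * capacity_gb
print(f"2TB at $1/GB: ${ssd_cost:,.0f}")
print(f"Costs more than the whole PC? {ssd_cost > enthusiast_pc_budget}")
```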

The second answer is one of horsepower. No, this isn't so much a car analogy as it is simple physics. 1TB SSDs had previously been pushing the limits of controller capability: addressing that much flash and RAM, handling Flash Translation Layer lookups, and juggling garbage collection and other housekeeping duties. This means that doubling a given SSD model's capacity is not as simple as doubling the amount of flash attached to the controller - that controller must be able to effectively handle twice the load.
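To make that scaling problem a bit more concrete, here is a rough sketch of how a page-level Flash Translation Layer's mapping table grows with capacity. The 4KB page and 4-byte entry sizes are generic ballpark values for illustration only, not the actual parameters of Samsung's controllers.

```python
# Rough sizing of a page-level FTL mapping table: one entry per flash page.
# 4KB pages and 4-byte entries are generic ballpark values, not Samsung's actual design.
def ftl_table_bytes(capacity_bytes, page_size=4096, entry_size=4):
    return (capacity_bytes // page_size) * entry_size

TB = 10**12
for tb in (1, 2):
    mib = ftl_table_bytes(tb * TB) / 2**20
    print(f"{tb}TB of flash -> roughly {mib:,.0f} MiB of mapping table to track")
```

Double the flash and the lookup structures (and the DRAM and controller work behind them) roughly double as well, which is the point being made above.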

With all of that said, it looks like we can finally stop asking for those 2TB consumer SSDs, because Samsung has decided to be the first to push into this space:

150705-191310.jpg

Today we will take a look at the freshly launched 2TB versions of the Samsung 850 EVO and 850 Pro. We will put these through the same tests performed on the smaller capacity models. Our hope is to verify that the changes Samsung made to the controller are sufficient to keep performance scaling with capacity, or at least on par with the 1TB and smaller models of the same product lines.

Read on for the full review!

Introduction and Technical Specifications

Introduction

In our previous article here, we demonstrated how to mod the EVGA GTX 970 SC ACX 2.0 video card to get higher performance and significantly lower running temps. Now we decided to take two of these custom modded EVGA GTX 970 cards to see how well they perform in an SLI configuration. ASUS was kind enough to supply us with one of their newly introduced ROG Enthusiast SLI Bridges for our experiments.

ASUS ROG Enthusiast SLI Bridge

02-rog-3way-adapter.jpg

Courtesy of ASUS

03-rog-adapter-profile.jpg

Courtesy of ASUS

For the purposes of running the two EVGA GTX 970 SC ACX 2.0 video cards in SLI, we chose to use the 3-way variant of ASUS' ROG Enthusiast SLI Bridge so that we could run the tests with full x16 bandwidth across both cards (with the cards in PCIe 3.0 x16 slots 1 and 3 in our test board). This customized SLI adapter features an illuminated red ROG logo embedded in its brushed aluminum upper surface. The adapter supports 2-way and 3-way SLI in a variety of board configurations.
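For reference, the theoretical bandwidth of the PCIe 3.0 x16 slots feeding each card works out as follows. This is standard PCIe arithmetic, not an ASUS-specific figure; the SLI bridge itself carries a separate link rather than PCIe traffic.

```python
# Theoretical per-direction PCIe bandwidth: lanes * transfer rate * encoding efficiency.
def pcie_gbps(lanes, gt_per_s, encoding):
    return lanes * gt_per_s * encoding  # payload gigabits per second

gen3_x16 = pcie_gbps(16, 8.0, 128 / 130)   # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
gen3_x8  = pcie_gbps(8, 8.0, 128 / 130)    # for comparison: an x8 slot
print(f"PCIe 3.0 x16: ~{gen3_x16:.0f} Gb/s (~{gen3_x16 / 8:.2f} GB/s) per direction")
print(f"PCIe 3.0 x8:  ~{gen3_x8:.0f} Gb/s (~{gen3_x8 / 8:.2f} GB/s) per direction")
```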

04-all-adapters.jpg

Courtesy of ASUS

ASUS offers their ROG Enthusiast SLI Bridge in three sizes covering various 2-way, 3-way, and 4-way SLI configurations. All bridges feature a brushed-aluminum top cap with an embedded glowing ROG logo.

Continue reading our article on Modding the EVGA GTX 970 SC Graphics Card!

05-sli-config.jpg

Courtesy of ASUS

The smallest bridge supports 2-way SLI configurations with either a two or three slot separation. The middle-sized bridge supports up to a 3-way SLI configuration with a two slot separation required between each card. The largest bridge supports up to a 4-way SLI configuration, also requiring a two slot separation between each card used.

Technical Specifications (taken from the ASUS website)

Dimensions:
  • 2-WAY: 97 x 43 x 21 mm (L x W x H)
  • 3-WAY: 108 x 53 x 21 mm (L x W x H)
  • 4-WAY: 140 x 53 x 21 mm (L x W x H)
Weight:
  • 2-WAY: 70 g
  • 3-WAY: 91 g
  • 4-WAY: 123 g
Compatible GPU set-ups:
  • 2-WAY: 2-WAY-S & 2-WAY-M
  • 3-WAY: 2-WAY-L & 3-WAY
  • 4-WAY: 4-WAY
Contents:
  • 2-WAY: 1 x optional power cable & 2 PCBs included for varying configurations
  • 3-WAY: 1 x optional power cable
  • 4-WAY: 1 x optional power cable

Continue reading our story!!

Tick Tock Tick Tock Tick Tock Tock

A few websites have been re-reporting on a leak from BenchLife.info about Kaby Lake, which is supposedly a second 14nm redesign (“Tock”) to be injected between Skylake and Cannonlake.

UPDATE (July 2nd, 3:20pm ET): It has been pointed out that many hoaxes have come out of the same source, and that I should be clearer in my disclaimer. This is an unconfirmed, relatively easy-to-fake leak that does not have a second, independent source. I reported on it because (apart from being interesting enough) some details were visible in the images but not highlighted in the leak itself, such as "GT0" and the lack of Iris Pro on -K parts. That suggests the leaker got the images from somewhere but didn't notice those details, which implies either that the original source was fed a hoax by an anonymous party who seeded it to a single media outlet, or that it was an actual leak.

Either way, enjoy my analysis but realize that this is a single, unconfirmed source who allegedly published hoaxes in the past.

intel-2015-kaby-lake-leak-01.png

Image Credit: BenchLife.info

If true, this would be a major shift in both Intel's current roadmap as well as how they justify their research strategies. It also includes a rough stack of product categories, from 4.5W up to 91W TDPs, including their planned integrated graphics configurations. This leads to a pair of interesting stories:

How Kaby Lake could affect Intel's processors going forward. Since 2006, Intel has only budgeted a single CPU architecture redesign for any given fabrication process node. Taking two attempts on the 14nm process buys time for 10nm to become viable, but it could also give them more time to build up a better library of circuit elements, allowing them to assemble better processors in the future.

What type of user will be given Iris Pro? And will graphics-free options be available below the Enthusiast class? Among Intel's current lineup, the high-end mainstream processors tend to have GT2-class graphics, such as the Intel HD 4600. Enthusiast architectures, such as Haswell-E, cannot be used without discrete graphics -- the extra die space goes to more cores, I/O lanes, or other features. As we will discuss later, Broadwell took a step toward changing the availability of Iris Pro in the high-end mainstream, but it doesn't seem like Kaby Lake will make any more progress. Also, if I am interpreting the table correctly, Kaby Lake might bring iGPU-less CPUs to LGA 1151.

Keeping Your Core Regular

To the first point, Intel has been on a steady tick-tock cycle since the Pentium 4 architecture reached the 65nm process node, which was a “tick”. The “tock” came with the Conroe/Merom architecture that was branded “Core 2”. This new architecture was a severe departure from the high-clock, relatively low-IPC design that NetBurst was built around, and it almost instantly changed the processor landscape from AMD dominance to a runaway Intel lead.

intel-tick-tock.png

After 65nm and Core 2 started the cycle, every new architecture alternated between shrinking the existing architecture to smaller transistors (tick) and creating a new design on the same fabrication process (tock). Even though Intel has been steadily increasing their R&D budget over time, which is now in the range of $10 to $12 billion USD each year, creating smaller, more intricate designs with new process nodes has been getting harder. For comparison, AMD's total revenue (not just profits) for 2014 was $5.51 billion USD.

Read on to see more about what Kaby Lake could mean for Intel and us.

Author:
Manufacturer: AMD

Retail cards still suffer from the issue

In our review of AMD's latest flagship graphics card, the Radeon R9 Fury X, I noticed and commented on the unique sound the card produced during our testing. A high-pitched whine, emanating from the pump of the self-contained water cooler designed by Cooler Master, was obvious from the moment our test system was powered on and remained constant during use. I talked with a couple of other reviewers about the issue before the launch of the card, and it seemed that I wasn't alone. Looking around at other reviews of the Fury X, most make specific mention of this squeal.

09.jpg

Noise from graphics cards comes in many forms. The most obvious and common is the noise from on-board fans and the air they move. Less frequent, but distinct, is the sound of inductor coil whine. Fan noise spikes when the GPU gets hot, forcing the fans to spin faster and move more air across the heatsink to keep everything running cool. Coil whine changes pitch based on the frame rate (and the frequency of power delivery on the card) and can be alleviated by using higher quality components on the board itself.

But the sound of our Fury X was unique: it was caused by the pump itself and it was constant. The noise it produced did not change as the load on the GPU varied. It was also 'pitchy' - a whine that seemed to pierce through other sounds in the office. A close analog might be the sound of an older CRT TV or monitor left powered on with no input.

During our review process, AMD told us the issue had been fixed. In an email sent to the media just prior to the Fury X launch, an AMD rep stated:

In regards to the “pump whine”, AMD received feedback that during open bench testing some cards emit a mild “whining” noise.  This is normal for most high speed liquid cooling pumps; Usually the end user cannot hear the noise as the pumps are installed in the chassis, and the radiator fan is louder than the pump.  Since the AMD Radeon™ R9 Fury X radiator fan is near silent, this pump noise is more noticeable.  
 
The issue is limited to a very small batch of initial production samples and we have worked with the manufacturer to improve the acoustic profile of the pump.  This problem has been resolved and a fix added to production parts and is not an issue.

I would disagree that this is "normal" but, even so, taking AMD at its word, I wrote that we heard the noise but also that AMD claimed to have addressed it. Other reviewers noted the same comment from AMD, saying the issue was fixed. But very quickly after launch, users began posting videos on YouTube and forums with the same (or worse) sounds and noise. We had already started bringing in a pair of additional Fury X retail cards from Newegg for some performance testing, so it seemed like a logical next step to test these retail cards for pump noise as well.

First, let's get the bad news out of the way: both of the retail AMD Radeon R9 Fury X cards that arrived in our offices exhibit 'worse' noise, in the form of both whining and buzzing, compared to our review sample. In this write up, I'll attempt to showcase the noise profile of the three Fury X cards in our possession, as well as how they compare to the Radeon R9 295X2 (another water cooled card) and the GeForce GTX 980 Ti reference design - added for comparison.

Continue reading our look into the pump noise of the AMD Fury X Graphics Card!

Introduction, Specifications, and Packaging

Lexar is Micron’s brand covering SD cards, microSD cards, USB flash drives, and card readers. Their card readers are known for reaching the upper end of the various speed grades, typically allowing transfers (for capable SD cards) much faster than a typical built-in laptop or desktop SD card reader can manage. Today we will take a look at Lexar’s ‘Professional Workflow’ line of flash memory connectivity options.

150121-164114.jpg

This is essentially a four-bay hub device that can accept various card readers or other types of devices (a USB flash storage device as opposed to just a reader, for example). The available readers range from SD to CF to Professional Grade CFast cards capable of over 500 MB/sec.

We will be looking at the following items today:

  • Professional Workflow HR2
    • Four-bay Thunderbolt™ 2/USB 3.0 reader and storage drive hub
  • Professional Workflow UR1
    • Three-slot microSDHC™/microSDXC™ UHS-I USB 3.0 reader
  • Professional Workflow SR1
    • SDHC™/SDXC™ UHS-I USB 3.0 reader
  • Professional Workflow CFR1
    • CompactFlash® USB 3.0 reader
  • Professional Workflow DD256
    • 256GB USB 3.0 Storage Drive

Note that since these items were sampled to us, Lexar has begun shipping a newer version of the SR1. The SR2 is an SDHC™/SDXC™ UHS-II USB 3.0 reader. Since we had no UHS-II SD cards available to test, this difference would not impact any of our speed results. There is also an HR1 model with only USB 3.0 support and no Thunderbolt, coming in at a significantly lower cost than the HR2 (more on that later).

Continue reading for our review of all of the above!

Subject: Displays
Manufacturer: ASUS

Introduction, Specifications, and Packaging

AMD fans have been patiently waiting for a proper FreeSync display. The first round of displays using the Adaptive Sync variable refresh rate technology arrived with an ineffective or otherwise disabled overdrive feature, resulting in less than optimal pixel response times and overall visual quality, especially when operating in variable refresh rate modes. Meanwhile, G-Sync users had properly functioning overdrive, as well as a recently introduced 1440P IPS panel from Acer. The FreeSync camp was overdue for a 1440P IPS display superior to that first round of releases, hopefully with those overdrive issues corrected. Well, it appears that ASUS, makers of the ROG Swift, have just rectified the situation with a panel we can finally recommend to AMD users:

DSC02594.jpg

Before we get into the full review, here is a sampling of our recent display reviews from both sides of the camp:

  • ASUS PG278Q 27in TN 1440P 144Hz G-Sync
  • Acer XB270H 27in TN 1080P 144Hz G-Sync
  • Acer XB280HK 28in TN 4K 60Hz G-Sync
  • Acer XB270HU 27in IPS 1440P 144Hz G-Sync
  • LG 34UM67 34in IPS 25x18 21:9 48-75Hz FreeSync
  • BenQ XL2730Z 27in TN 1440P 40-144Hz FreeSync
  • Acer XG270HU 27in TN 1440P 40-144Hz FreeSync
  • ASUS MG279Q 27in IPS 1440P 144Hz FreeSync(35-90Hz) < You are here

The reason for there being no minimum rating on the G-Sync panels above is explained in our article 'Dissecting G-Sync and FreeSync - How the Technologies Differ', though the short version is that G-Sync can effectively remain in VRR down to <1 FPS regardless of the hardware minimum of the display panel itself.
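As a purely conceptual sketch of why there is no practical floor: when the game's frame rate drops below the panel's minimum refresh, the previous frame can simply be redrawn at an integer multiple so the panel never has to leave its variable refresh window. The code below borrows the MG279Q's 35-90 Hz FreeSync range only as example numbers; it is not NVIDIA's actual module logic, just the general idea.

```python
# Conceptual sketch of low-framerate handling on a variable refresh display:
# if new frames arrive slower than the panel's minimum refresh allows,
# redraw the previous frame at an integer multiple so the panel stays
# inside its variable refresh window. Example window: 35-90 Hz.
def refresh_for(game_fps, panel_min_hz=35, panel_max_hz=90):
    if game_fps >= panel_max_hz:
        return panel_max_hz, 1          # above the window: cap at max refresh
    if game_fps >= panel_min_hz:
        return game_fps, 1              # inside the window: refresh tracks the game
    repeats = 2
    while game_fps * repeats < panel_min_hz:
        repeats += 1
    return game_fps * repeats, repeats  # below the window: show each frame 'repeats' times

for fps in (144, 60, 30, 10, 1):
    hz, repeats = refresh_for(fps)
    print(f"{fps:>3} FPS -> panel refreshes at {hz} Hz, each frame shown {repeats}x")
```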

Continue reading as we will look at this new ASUS MG279Q 27" 144Hz 1440P IPS FreeSync display!

Subject: Mobile
Manufacturer: Various

Business Model Based on Partnerships

Alexandru Voica works for Imagination Technologies. His background includes research in computer graphics at the School of Advanced Studies Sant'Anna in Pisa and a brief stint as a CPU engineer, working on several high-profile 32-bit processors used in many mobile and embedded devices today. You can follow Alex on Twitter @alexvoica.

Some months ago my colleague Rys Sommefeldt wrote an article offering his (deeply) technical perspective on how a chip gets made, from R&D to manufacturing. While his bildungsroman production covers a lot of the engineering details behind silicon production, it is light on the business side of things; and that is a good thing, because it gives me the opportunity to steal some of his spotlight!

This article will give you a breakdown of the IP licensing model, describing the major players and the relationships between them. It is not designed to be a complete guide by any means, and some parts might already sound familiar, but I hope it serves as a useful overview for anyone who is new to product manufacturing in general.

The diagram below offers an analysis of the main categories of companies involved in the semiconductor food chain. Although I’m going to attempt to paint a broad picture, I will mainly offer examples based on the ecosystem formed around Imagination (since that is what I know best).

01.jpg

A simplified view of the manufacturing chain

Let’s work our way from left to right.

IP vendors

Traditionally, these are the companies that design and sell silicon IP. ARM and Imagination Technologies are perhaps the most renowned for their sub-brands: Cortex CPU + Mali GPU and MIPS CPU + PowerVR GPU, respectively.

Given the rapid evolution of the semiconductor market, such companies continue to evolve their business models beyond point solutions, becoming one-stop shops that offer a wide variety of IP cores and platforms, comprising CPUs, graphics, video, connectivity, cloud software, and more.

Continue reading The IP licensing business model. A love story. on PC Perspective!!

Manufacturer: Seasonic

Introduction and Features

2-SnowSilent-Banner.jpg

It’s always a happy day at the PC Perspective Test Lab when the delivery truck drops off a new Seasonic power supply for evaluation! Seasonic is a well-known and highly respected OEM that produces some of the best PC power supplies on the market today. In addition to building power supplies for many big-name companies who re-brand the units with their own name, Seasonic also sells a full line of power supplies under the Seasonic name. Their new Snow Silent Series now includes two models, the original 1050W and the new 750W version we have up for review.

The Snow Silent Series power supplies feature a stylish white exterior along with top level Seasonic build quality. The Snow Silent-750 is a next generation XP-Series (XP2S) power supply that comes with fully modular cables and a 120mm cooling fan with Fluid Dynamic Bearings. It features an upgraded S3FC Hybrid Fan Control Circuit that provides fanless operation up to ~50% load. The Snow Silent-750 is designed to provide high efficiency (80 Plus Platinum certified) and tight voltage regulation with minimal AC ripple.

3-SS-750-side.jpg

Seasonic Snow Silent 750W PSU Key Features:

•    High efficiency, 80 Plus Platinum certified
•    Fully Modular Cable design with flat ribbon-style cables
•    Seasonic Patented DC Connector Panel with integrated VRMs
•    Upgraded Hybrid Silent Fan Control (S3FC: Fanless, Silent and Cooling)
•    120mm Fan with Fluid Dynamic Bearings (FDB)
•    Ultra-tight voltage regulation (+2% and -0% +12V rail)
•    Supports multi-GPU technologies (four PCI-E 6+2 pin connectors)
•    High reliability 105°C Japanese made electrolytic capacitors
•    Active PFC (0.99 PF typical) with Universal AC input
•    Dual sided PCB layout with dual copper bars
•    Energy Star and ErP Lot 6 2013 compliance
•    7-Year manufacturer's warranty worldwide

Please continue reading our Seasonic Snow Silent-750 power supply review!

Subject: Systems
Manufacturer: Zotac

Introduction and First Impressions

The Zotac ZBOX CI321 nano is a mini PC kit in the vein of the Intel NUC, and this version features a completely fanless design with built-in wireless for silent integration into just about any location. So is it fast enough to be an HTPC or desktop productivity machine? We will find out here.

zbox_main.jpg

I have reviewed a couple of mini-PCs in the past few months, most recently the ECS LIVA X back in January. Though the LIVA X was not really fast enough to be used as a primary device, it was small and inexpensive enough to be a viable product depending on a user’s needs. One attractive aspect of the LIVA designs, and of any of the low-power computers introduced recently, is their passive cooling. This has unfortunately meant the integration of some pretty low-performance CPUs to stay within thermal (and cost) limits, but that is beginning to change. The ZBOX nano we’re looking at today continues the recent trend of incorporating slightly higher-performance parts, as its Intel Celeron processor (the 2961Y) is based on Haswell rather than the Atom cores at the heart of so many of these small systems.

Another parallel to the Intel NUC is the requirement to bring your own memory and storage; the ZBOX CI321 nano accepts a pair of DDR3 SODIMMs and 2.5” storage drives. The Intel Celeron 2961Y processor supports dual-channel DDR3L-1600, which allows for much higher memory bandwidth than many other mini-PCs, and the storage controller supports SATA 6.0 Gbps, which allows for higher performance than the eMMC storage found in a lot of mini-PCs, depending on the drive you choose to install. Of course your mileage will vary depending on the components selected to complete the build, but it shouldn’t be difficult to build a reasonably fast system.
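As a rough illustration of what dual-channel DDR3L-1600 means in theoretical bandwidth terms (standard DDR3 arithmetic, not a Zotac-supplied figure):

```python
# Peak theoretical DRAM bandwidth: transfer rate (MT/s) * bus width * channels.
def dram_bw_gb_s(mt_per_s, bus_width_bits=64, channels=1):
    return mt_per_s * (bus_width_bits // 8) * channels / 1000   # decimal GB/s

print(f"DDR3L-1600, single channel: {dram_bw_gb_s(1600):.1f} GB/s")
print(f"DDR3L-1600, dual channel:   {dram_bw_gb_s(1600, channels=2):.1f} GB/s")
```

Real-world throughput lands below these peaks, but populating both SODIMM slots is the easy way to avoid leaving half of it on the table.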

zbox_back_angle.jpg

Continue reading our review of the Zotac ZBOX CI321 nano!!

Author:
Manufacturer: AMD
Tagged: 4GB, amd, Fiji, Fury, fury x, hbm, R9, radeon

A fury unlike any other...

Officially unveiled by AMD during E3 last week, the brand new Radeon R9 Fury X is finally here, and we are ready to show you our review. Very few times has a product launch meant more to a company, and to its industry, than the Fury X does this summer. AMD has been lagging behind in the highest tiers of the graphics card market for a full generation, depending on the 2-year-old Hawaii GPU to hold its own against a continuous barrage of products from NVIDIA. The R9 290X, despite using more power, was able to keep up through the GTX 700-series days, but the release of NVIDIA's Maxwell architecture forced AMD to move the R9 200-series parts into the sub-$350 field, well below the selling prices of NVIDIA's top cards.

IMG_2628.JPG

The AMD Fury X hopes to change that with a price tag of $650 and a host of new features and performance capabilities. It aims to once again put AMD's Radeon line in the same discussion with enthusiasts as the GeForce series.

The Fury X is built on the new AMD Fiji GPU, an evolutionary part based on AMD's GCN (Graphics Core Next) architecture. The design adds a lot of compute horsepower (4,096 stream processors) and is also the first consumer product to integrate HBM (High Bandwidth Memory), complete with a 4096-bit memory bus!
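To put that 4096-bit bus in perspective, the standard bandwidth arithmetic looks like this. First-generation HBM runs its interface at 500 MHz with a double data rate, which is where the widely quoted 512 GB/s figure comes from; the GDDR5 line is a 512-bit card at 5 Gbps effective (roughly an R9 290X), included only for comparison.

```python
# Peak memory bandwidth = bus width (bits) * effective per-pin data rate / 8 bits per byte.
def mem_bw_gb_s(bus_width_bits, clock_mhz, pumps=2):
    gbps_per_pin = clock_mhz * pumps / 1000       # effective Gb/s per pin
    return bus_width_bits * gbps_per_pin / 8      # GB/s

hbm = mem_bw_gb_s(4096, 500)                      # first-gen HBM: 500 MHz, double data rate
gddr5 = mem_bw_gb_s(512, 1250, pumps=4)           # 512-bit GDDR5 at 5 Gbps effective, for comparison
print(f"4096-bit HBM:  {hbm:.0f} GB/s")
print(f"512-bit GDDR5: {gddr5:.0f} GB/s")
```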

Of course the question is: what does this mean for you, the gamer? Is it time to start making a place in your PC for the Fury X? Let's find out.

Continue reading our review of the new AMD Radeon R9 Fury X Graphics Card!!

Manufacturer: Inateck

One hub to rule them all!

Inateck sent along a small group of connectivity devices for us to evaluate. One such item was their HB7003 7 port USB 3.0 hub:

DSC02348.jpg

This is a fairly standard powered USB hub with one exception - high-speed charging. Thanks to an included 36W power adapter and support for the Battery Charging Specification 1.2, the HB7003 can charge devices at up to 1.5 Amps at 5 Volts. This is not to be confused with 'Quick Charge', which uses a newer specification and more specialized hardware.

Specifications:

  • L/W/H: 6.06" x 1.97" x 0.83"
  • Ports: 7
  • Speed: USB 3.0 5Gbps (backwards compatible with USB 2.0 and 1.1)
  • Windows Vista / OSX 10.8.4 and newer supported without drivers

Packaging:

DSC02346.jpg

Densely packed brown box. Exactly how such a product should be packaged.

DSC02347.jpg

Power adapter (~6 foot cord), ~4.5 foot USB 3.0 cord, instruction manual, and the hub itself.

Charging:

Some quick charging tests revealed that the HB7003 had no issue exceeding 1.0 Amp charging rates, but it fell slightly short of a full 1.5A rate because the output voltage dipped a little below 5V. Some voltage droop is common with this sort of device, but it did have an effect: in one example, an iPad Air drew 1.3A (13% short of the full 1.5A). Not a bad charging rate all things considered, but if you are expecting a fast charge of something like an iPad, its dedicated 2.1A charger is obviously the better way to go.
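A quick check of those numbers (simple arithmetic on the figures quoted above, assuming a nominal 5V despite the slight droop):

```python
# Checking the quoted charging figures: measured vs. rated current, and rough power.
rated_amps = 1.5      # BC 1.2 ceiling quoted above
measured_amps = 1.3   # what the iPad Air actually drew
volts = 5.0           # nominal; the measured rail sagged slightly below this

shortfall = 1 - measured_amps / rated_amps
print(f"Shortfall vs. the 1.5A ceiling: {shortfall:.0%}")                 # ~13%
print(f"Delivered power: ~{measured_amps * volts:.1f} W "
      f"(vs. {rated_amps * volts:.1f} W at the full rate)")
```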

Performance and Usability:

DSC02519.jpg

As you can see above, even though the port layout sits on a horizontal plane, Inateck has spaced the ports well enough that most devices should be able to sit side by side. Some wider devices may take up an extra port, but with seven to work with, the majority of users should still have enough available ports even if one or two devices overlap an adjacent one. In the above configuration, we had no issue saturating the throughput of each connected device. I also stepped up to a Samsung T1 portable SSD, which likewise negotiated at the expected USB 3.0 speeds.

Pricing and Availability

Inateck sells these direct from their Amazon store (link above).

Conclusion:

Pros:

  • Clean design 7-port USB 3.0 hub.
  • Port spacing sufficient for most devices without interference.
  • 1.5A per port charging.
  • Low cost.

Cons:

  • 'Wall wart' power adapter may block additional power strip outlets.

At just $35, the Inateck HB7003 is a good quality 7-port USB 3.0 hub. All ports can charge devices at up to 1.5A while connecting them to the host at data rates up to 5 Gbps. The only gripe I had was that the hub was a bit on the lightweight side, and as a result it slid around easily on the desk when the attached cords were disturbed, though some travelers might see the light weight as a bonus. Overall this is a simple, no-frills USB 3.0 hub that gets the job done nicely.

Author:
Subject: Processors, Mobile
Manufacturer: Qualcomm

Qualcomm’s GPU History

Despite its market dominance, Qualcomm may be one of the least known contenders in the battle for the mobile space. While players like Apple, Samsung, and even NVIDIA are often cited as the most exciting and most revolutionary, none come close to the sheer sales, breadth of technology, and market share that Qualcomm occupies. Brands like Krait and Snapdragon have helped push the company into the top 3 semiconductor companies in the world, following only Intel and Samsung.

In July 1985, seven industry veterans came together in the den of Dr. Irwin Jacobs’ San Diego home to discuss an idea. They wanted to build “Quality Communications” (thus the name Qualcomm) and outlined a plan that evolved into one of the telecommunications industry’s great start-up success stories.

Though Qualcomm sold its own handset business to Kyocera in 1999, many of today’s most popular mobile devices are powered by Qualcomm’s Snapdragon mobile chipsets with integrated CPU, GPU, DSP, multimedia CODECs, power management, baseband logic and more. In fact, a typical Qualcomm “chipset” encompasses up to 20 different chips with different functions beyond just the main application processor. If you are an owner of a Galaxy Note 4, Motorola Droid Turbo, Nexus 6, or Samsung Galaxy S5, then you are most likely a user of one of Qualcomm’s Snapdragon chipsets.

qc-2.jpg

Qualcomm’s GPU History

Before 2006, the mobile GPU as we know it today was largely unnecessary. Feature phones and “dumb” phones still made up the vast majority of the market, with smartphones and mobile tablets in the early stages of development. At that point all the visual data presented on screen, whether on a small monochrome display or a color PDA, was drawn by a software renderer running on traditional CPU cores.

But by 2007, the first fixed-function, OpenGL ES 1.0 class of GPUs started shipping in mobile devices. These dedicated graphics processors were originally focused on drawing and updating the user interface on smartphones and personal data devices. Eventually these graphics units were used for what would be considered the most basic gaming tasks.

Continue reading Qualcomm History and its GPU (R)evolution.

Author:
Manufacturer: Sapphire

The new Radeon R9 300-series

The new AMD Radeon R9 and R7 300-series of graphics cards are coming into the world with a rocky start. We have seen rumors and speculation about what GPUs are going to be included, what changes would be made and what prices these would be shipping at for what seems like months, and in truth it has been months. AMD's Radeon R9 290 and R9 290X based on the new Hawaii GPU launched nearly 2 years ago, while the rest of the 200-series lineup was mostly a transition of existing products in the HD 7000-family. The lone exception was the Radeon R9 285, a card based on a mysterious new GPU called Tonga that showed up late to the game to fill a gap in the performance and pricing window for AMD.

AMD's R9 300-series, and the R7 300-series in particular, follows a very similar path. The R9 390 and R9 390X are still based on the Hawaii architecture. Tahiti is finally retired and put to pasture, though Tonga lives on as the Radeon R9 380. Below that you have the Radeon R7 370 and 360, the former based on the aging GCN 1.0 Curacao GPU and the latter based on Bonaire. On the surface it's easy to refer to these cards with the dreaded "R-word"...rebrands. And though that seems to be the case, there are some interesting performance changes, at least at the high end of this stack, that warrant discussion.

DSC02524.jpg

And of course, AMD partners like Sapphire are using this familiarity with the GPU and its properties as an opportunity to release new product stacks. In this case Sapphire is launching the new Nitro brand for a series of cards aimed at what it considers the most common type of gamer: one that is cost conscious and craves performance over everything else.

The result is a stack of GPUs with prices ranging from about $110 up to ~$400 that target the "gamer" group of GPU buyers without the added price tag that some other lines include. Obviously it seems a little crazy to be talking about a line of graphics cards built for gamers (aren't they all??), but the emphasis is on building a fast card that is cool and quiet without the additional cost of overly glamorous coolers, LEDs, or DIP switches.

Today I am taking a look at the new Sapphire Nitro R9 390 8GB card, but before we dive head first into that card and its performance, let's first go over the changes to the R9-level of AMD's product stack.

Continue reading our review of the Sapphire Nitro R9 390 8GB Graphics Card!!

Author:
Manufacturer: AMD

Fiji: A Big and Necessary Jump

Fiji has been one of the worst-kept secrets in a while. The chip has been talked about, written about, and rumored about seemingly for ages. It promises to take on NVIDIA at the high end through multiple design decisions aimed at giving it a tremendous leap in performance and efficiency compared to previous GCN architectures. NVIDIA released their Maxwell based products last year and added to that lineup this year with the Titan X and the GTX 980 Ti. These are the parts that Fiji aims to compete with.

fiji_01.jpg

The first product that Fiji will power is the R9 Fury X with integrated water cooling.

AMD has not been standing still, but their R&D budgets have taken a hit as of late. The workforce has also been pared down to the bare minimum (or so I hope) while still being able to design, market, and sell products to the industry. This has affected their ability to produce as many new chips as NVIDIA has in the past year. Cut-backs are likely not the entire story, but they have certainly contributed.

The plan at AMD seems to be to focus on very important products and technologies, and then migrate those technologies to new products and lines when it makes the most sense. Last year we saw the introduction of “Tonga”, the first major redesign after the release of the GCN 1.1 based Hawaii chip that powers the R9 290 and R9 390 series. Tonga delivered double the tessellation performance over Hawaii, improved overall architectural efficiency, and allowed AMD to replace the older Tahiti and Pitcairn chips with an updated part featuring XDMA and TrueAudio support. Tonga was a necessary building block on the way to a chip like Fiji.

Click here to read about the AMD Fiji GPU!

Subject: Storage
Manufacturer: Inateck

You love the dock!

Today we'll take a quick look at the Inateck FD2002 USB 3.0 to dual SATA dock:

DSC02361.jpg

This is a UASP-capable dock that should provide near-full SATA 6Gb/sec throughput to each of two connected SATA SSDs or HDDs. This particular dock has no RAID capability, exchanging it instead for an offline cloning/duplication mode. While the FD2002 uses ASMedia silicon to perform these tasks, similar limitations are inherent in competing hardware from JMicron, which comes with a similar either/or choice of RAID or cloning capability. Regardless, Inateck made the logical choice with the FD2002, as hot-swap docks are not the best candidates for hardware RAID.

ASM1153E ASM1091R FD2002.jpg

The pair of ASMedia chips moving data within the FD2002. The ASM1153E on the left couples the USB 3.0 link to the ASM1091R, which multiplexes to the pair of SATA ports and apparently adds cloning functionality.

Continue reading for our review of the Inateck FD2002 USB 3.0 Dual SATA Dock!

Introduction and Technical Specifications

Introduction

The measure of a true modder is not in how powerful he or she can make a system by throwing money at it, but in how well he or she can innovate to make components run better with what is on hand. Some make artistic statements with truly awe-inspiring cases, while others take the Dremel and clamps to their beloved video cards in an attempt to eke out that last bit of performance. This article serves the latter of the two. Don't get me wrong, the card will look nice once we're done with it, but the point here is to re-use components on hand where possible, minimizing cost while maximizing the performance (and acoustic) benefits.

EVGA GTX 970 SC Graphics Card

02-evga970sc-full.jpg

Courtesy of EVGA

We started with an EVGA GTX 970 SC card with 4GB of RAM, bundled with the new revision of EVGA's ACX cooler, ACX 2.0. This card is well built and carries a slight factory overclock out of the box. The ACX 2.0 cooler is a redesigned version of the original cooler included with the card, offering better cooling potential, with fans that do not spin up for active cooling until the GPU block temperature exceeds 60C.

03-evga970sc-card-profile.jpg

Courtesy of EVGA

WATERCOOL HeatKiller GPU-X3 Core GPU Waterblock

04-gpu-x3-core-top.jpg

Courtesy of WATERCOOL

For water cooling the EVGA GTX 970 SC GPU, we decided to use the WATERCOOL HeatKiller GPU-X3 Core water block. This block features a POM body with a copper core for superior heat transfer from the GPU to the liquid medium. The HeatKiller GPU-X3 Core is a GPU-only cooler, meaning that the memory and integrated VRM circuitry are not actively cooled by the block. The decision to use a GPU-only block rather than a full-cover block was twofold: availability and cost. I had a few of these on hand, making it an easy decision cost-wise.

Continue reading our article on Modding the EVGA GTX 970 SC Graphics Card!

Subject: Mobile
Manufacturer: ASUS

Introduction and Specifications

The ASUS Zenfone 2 is a 5.5-inch smartphone with a premium look and the specs to match. But the real story here is that it sells for just $199 or $299 unlocked, making it a tempting alternative to contract phones without the concessions often made with budget devices, at least on paper. Let's take a closer look to see how the new Zenfone 2 stacks up! (Note: a second sample unit was provided by Gearbest.com.)

zenfone2_cover.jpg

When I first heard about the Zenfone 2 from ASUS I was eager to check it out given its mix of solid specs, nice appearance, and a startlingly low price. ASUS has created something that has the potential to transcend the disruptive nature of a phone like the Moto E, itself a $149 alternative to contract phones that we reviewed recently. With its premium specs to go along with very low unlocked pricing, the Zenfone 2 could be more than just a bargain device, and if it performs well it could make some serious waves in the smartphone industry.

The Zenfone 2 also features a 5.5-inch IPS LCD screen with 1920x1080 resolution (in line with an iPhone 6 Plus or the OnePlus One), and beyond the internal hardware ASUS has created a phone that looks every bit the part of a premium device that one would expect to cost hundreds more. In fact, without spoiling anything up front, I will say that the context of price won't be necessary to judge the merit of the Zenfone 2; it stands on its own as a smartphone, and not simply a budget phone.

zenfone2_light.jpg

The big question is going to be how the Zenfone 2 compares to existing phones, and with its quad-core Intel Atom SoC this is something of an unknown. Intel has been making a push to enter the U.S. smartphone market (with earlier products more widely available in Europe), and the Zenfone 2 marks an important milestone for both Intel and ASUS in this regard. The Z3580 SoC powering my review unit certainly sounds fast on paper, with its 4 cores clocked up to 2.33 GHz, no less than 4 GB of RAM on board, and a solid 64 GB of onboard storage as well.

Continue reading our review of the ASUS Zenfone 2 smartphone!!

Subject: Mobile
Manufacturer: ASUS

Introduction and Design

P4110149.jpg

With the introduction of the ASUS G751JT-CH71, we’ve now got our first look at the newest ROG notebook design revision.  The celebrated design language remains the same, and the machine’s lineage is immediately discernible.  However, unlike the $2,000 G750JX-DB71 unit we reviewed a year and a half ago, this particular G751JT configuration is 25% less expensive at just $1,500.  So first off, what’s changed on the inside?

specs.png

(Editor's Note: This is NOT the recent G-Sync version of the ASUS G751 notebook that was announced at Computex. This is the previously released version, one that I am told will continue to sell for the foreseeable future and one that will come at a lower overall price than the G-Sync enabled model. Expect a review on the G-Sync derivative very soon!)

Quite a lot, as it turns out.  For starters, we’ve moved all the way from the 700M series to the 900M series—a leap which clearly ought to pay off in spades in terms of GPU performance.  The CPU and RAM remain virtually equivalent, while the battery has migrated from external to internal and enjoyed a 100 mAh bump in the process (from 5900 to 6000 mAh).  So what’s with the lower price then?  Well, apart from the age difference, it’s the storage: the G750JX featured both a 1 TB storage drive and a 256 GB SSD, while the G751JT-CH71 drops the SSD.  That’s a small sacrifice in our book, especially when an SSD is so easily added thereafter.  By the way, if you’d rather simply have ASUS handle that part of the equation for you, you can score a virtually equivalent configuration (chipset and design evolutions notwithstanding) in the G751JT-DH72 for $1750—still $250 less than the G750JX we reviewed.

Continue reading our review of the ASUS G751JT!!!

Author:
Subject: Editorial
Manufacturer: Codemasters

Digging in a Little Deeper into the DiRT

Over the past few weeks I have had the chance to play the early access "DiRT Rally" title from Codemasters. This is a much more simulation-based title that is currently PC-only, which is a big switch for Codemasters and how they usually release their premier racing offerings. I was able to get a hold of Paul Coleman from Codemasters and set up a written interview with him. Paul's answers will be in italics.

Who are you, what do you do at Codemasters, and what do you do in your spare time away from the virtual wheel?

paul_coleman.jpg

Hi my name is Paul Coleman and I am the Chief Games Designer on DiRT Rally. I’m responsible for making sure that the game is the most authentic representation of the sport it can be, I’m essentially representing the player in the studio. In my spare time I enjoy going on road trips with my family in our 1M Coupe. I’ve been co-driving in real world rally events for the last three years and I’ve used that experience to write and voice the co-driver calls in game.

If there is one area where DiRT has really excelled, it is keeping the frame rate consistent across multiple environments. Many games, especially those using cutting-edge rendering techniques, often have dramatic frame rate drops at times. How do you get around this while still creating a very impressive looking game?

The engine that DiRT Rally has been built on has been constantly iterated on over the years and we have always been looking at ways of improving the look of the game while maintaining decent performance. That together with the fact that we work closely with GPU manufacturers on each project ensures that we stay current. We also have very strict performance monitoring systems that have come from optimising games for console. These systems have proved very useful when building DiRT Rally even though the game is exclusively on PC.

dr_01.jpg

How do you balance out different controller use cases? While many hard-core racers use a wheel, I have seen very competitive racing from people using handheld controllers as well as keyboards. Do you handicap or help those particular implementations so as not to make things overly frustrating for those users? I ask because of the difference in precision between a gamepad and a wheel that can rotate 900 degrees.

Again this comes back to the fact that we have traditionally developed for console where the primary input device is a handheld controller. This is an area that other sims don’t usually have to worry about but for us it was second nature. There are systems that we have that add a layer between the handheld controller or keyboard and the game which help those guys but the wheel is without a doubt the best way to experience DiRT Rally as it is a direct input.
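To illustrate the kind of "layer" Paul is describing in general terms, here is a generic sketch of input shaping between a short-travel gamepad stick and a simulation's steering: a deadzone, a nonlinear response curve for fine control near center, and speed-sensitive saturation. This is purely illustrative and is not Codemasters' implementation; every value in it is an assumption.

```python
# Generic example of an input-shaping layer between a gamepad stick (-1..1)
# and a simulation's steering angle. Not DiRT Rally's code -- just the common idea.
def steering_angle(stick, speed_kmh, max_lock_deg=45.0, deadzone=0.08, curve=2.2):
    # Remove the deadzone and renormalize the remaining travel
    mag = max(abs(stick) - deadzone, 0.0) / (1.0 - deadzone)
    # Nonlinear response: small inputs give fine corrections near center
    shaped = mag ** curve
    # Speed-sensitive saturation: less available lock at higher speeds
    speed_scale = 1.0 / (1.0 + speed_kmh / 120.0)
    return (1 if stick >= 0 else -1) * shaped * max_lock_deg * speed_scale

for stick in (0.05, 0.3, 0.7, 1.0):
    print(f"stick {stick:+.2f} at 100 km/h -> {steering_angle(stick, 100):+5.1f} deg")
```

A 900-degree wheel needs none of this shaping, since its position maps directly onto the virtual steering rack, which is why Paul calls it the most direct way to experience the game.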

Continue reading the entire DiRT Rally Interview here!