A quick look at performance results
Late last week, EA and DICE released the long-awaited patch for Battlefield 4 that enables support for the Mantle renderer. This new API technology was introduced by AMD back in September. Unfortunately, AMD's Catalyst 14.1 beta driver wasn't quite ready for the patch's release. I wrote a short article that previewed the new driver's features, its expected performance with the Mantle version of BF4, and commentary about the current state of Mantle. You should definitely read that as a primer before continuing if you haven't yet.
Today, after just a few short hours with a usable driver, I have only limited results. Still, I know that you, our readers, clamor for ANY information on the topic. I thought I would share what we have thus far.
As I mentioned in the previous story, the Mantle version of Battlefield 4 has the biggest potential to show advantages in situations where the game is more CPU limited. AMD calls this the "low hanging fruit" for this early release of Mantle and claims that further optimizations will come, especially for GPU-bound scenarios. That dependency on CPU limitations puts some non-standard requirements on our ability to showcase Mantle's performance capabilities.
For example, in the BF4 single player campaign, the level of the game, and even the section of that level, can show drastic swings in Mantle's capabilities. Multiplayer matches show more consistent CPU utilization (and thus could be improved by Mantle), though testing those levels in a repeatable, semi-scientific method is much more difficult. And, as you'll see in our early results, I even found a couple of instances in which the Mantle API version of BF4 ran a smidge slower than the DX11 version.
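A toy model makes the CPU-bound versus GPU-bound distinction concrete. The numbers below are invented for illustration, not measurements: frame rate is limited by whichever side of the pipeline is slower, so shaving CPU overhead only raises the frame rate when the CPU side dominates.

```python
# Toy frame-time model (illustrative numbers, not measurements):
# the frame rate is limited by whichever side of the pipeline is slower.

def fps(cpu_ms, gpu_ms):
    """Frames per second when CPU and GPU work overlap each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# CPU-bound case: cutting CPU overhead raises the frame rate.
print(round(fps(cpu_ms=20.0, gpu_ms=10.0)))   # 50 fps under a heavier API path
print(round(fps(cpu_ms=12.0, gpu_ms=10.0)))   # 83 fps with reduced API overhead

# GPU-bound case: the same CPU savings change almost nothing.
print(round(fps(cpu_ms=20.0, gpu_ms=25.0)))   # 40 fps
print(round(fps(cpu_ms=12.0, gpu_ms=25.0)))   # still 40 fps
```

This is exactly why the single player campaign, with its swings between heavy-scripting sections and quiet GPU-heavy vistas, produces such inconsistent Mantle gains.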
For our testing, we assembled two systems that differed in CPU performance in order to simulate the range of processors installed in consumers' PCs. Our standard GPU test bed includes a Core i7-3960X Sandy Bridge-E processor specifically to remove the CPU as a bottleneck, and it has been included here today. We added a system based on the AMD A10-7850K Kaveri APU, which presents a more processor-limited (especially per-thread) system overall and should help showcase Mantle's benefits more easily.
A troubled launch to be sure
AMD has released some important new drivers with drastic feature additions over the past year. Remember back in August of 2013 when Frame Pacing was first revealed? Today’s Catalyst 14.1 beta release actually completes the goals that AMD set for itself in early 2013 regarding (nearly) complete Frame Pacing technology integration for non-XDMA GPUs, while also adding support for Mantle and HSA capability.
Frame Pacing Phase 2 and HSA Support
When AMD released the first frame pacing capable beta driver in August of 2013, it added support to existing GCN designs (HD 7000-series and a few older generations) at resolutions of 2560x1600 and below. While that definitely addressed a lot of the market, the fact was that CrossFire users were also amongst the most likely to have Eyefinity (3+ monitors spanned for gaming) or even 4K displays (quickly dropping in price). Neither of those advanced display options was supported with any Catalyst frame pacing technology.
That changes today, as Phase 2 of the AMD Frame Pacing feature has finally been implemented for products that do not feature the XDMA technology (found in Hawaii GPUs, for example). That includes HD 7000-series GPUs, the R9 280X and 270X cards, as well as older generation products and Dual Graphics hardware combinations such as the new Kaveri APU and R7 250. In fact, I have already tested Kaveri with the R7 250, and you can read about the scaling and experience improvements right here. That means that users of the HD 7970, R9 280X, etc., as well as those of you with HD 7990 dual-GPU cards, will finally be able to utilize the power of both GPUs in your system with 4K displays and Eyefinity configurations!
This is finally fixed!!
As of this writing I haven’t had time to do more testing (other than the Dual Graphics article linked above) to demonstrate the potential benefits of this Phase 2 update, but we’ll be targeting it later in the week. For now, it appears that you’ll be able to get essentially the same performance and pacing capabilities on the Tahiti-based GPUs as you can with Hawaii (R9 290X and R9 290).
Catalyst 14.1 beta was also slated to be the first public driver to add support for HSA technology, allowing owners of the new Kaveri APU to take advantage of appropriately enabled applications like LibreOffice and a handful of Adobe apps. However, AMD has since let us know that this feature DID NOT make it into the public release of Catalyst 14.1.
The First Mantle Ready Driver (sort of)
A technology that has been in development for more than two years according to AMD, the newly released Catalyst 14.1 beta driver is the first to enable support for the revolutionary new Mantle API for PC gaming. Essentially, Mantle is AMD’s attempt at creating a custom API to replace DirectX and OpenGL in order to more directly target the GPU hardware in your PC, specifically AMD's GCN (Graphics Core Next) designs.
Mantle runs at a lower level than DX or OGL, accessing the hardware resources of the graphics chip more directly, and with that access it can better utilize the hardware in your system, both CPU and GPU. In fact, the primary benefit of Mantle will be seen in the form of less API overhead and fewer bottlenecks, such as real-time shader compiling and code translation.
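One way to picture the "less API overhead" claim is that per-draw-call CPU cost is a fixed tax that accumulates across a frame. The overhead figures below are invented purely for illustration; the point is that thinning the per-call cost matters more the more draw calls a frame issues.

```python
# Illustrative only: CPU milliseconds spent submitting one frame's draw
# calls, comparing a higher-overhead API path to a thinner one.

def submit_cost_ms(draw_calls, overhead_us_per_call):
    """Total CPU submission cost for a frame, in milliseconds."""
    return draw_calls * overhead_us_per_call / 1000.0

heavy = submit_cost_ms(10_000, overhead_us_per_call=2.0)  # thicker API path
light = submit_cost_ms(10_000, overhead_us_per_call=0.5)  # thinner API path

print(heavy, light)  # 20.0 5.0 -- 15 ms of CPU time per frame reclaimed
```

That reclaimed CPU time is only a frame-rate win when the CPU was the bottleneck to begin with, which matches where AMD says the early Mantle gains will appear.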
If you are interested in the meat of what makes Mantle tick and why it was so interesting to us when it was first announced in September of 2013, you should check out our first deep-dive article written by Josh. In it you’ll get our opinion on why Mantle matters and why it has the potential for drastically changing the way the PC is thought of in the gaming ecosystem.
Hybrid CrossFire that actually works
The road to redemption for AMD and its driver team has been a tough one. Since we first started to reveal the significant issues with AMD's CrossFire technology back in January of 2013, the Catalyst driver team has been hard at work on a fix, though I will freely admit it took longer to convince them that the issue was real than I would have liked. We saw the first steps of the fix in August of 2013 with the release of the Catalyst 13.8 beta driver. It supported DX11 and DX10 games at resolutions of 2560x1600 and under (no Eyefinity support) but was obviously still less than perfect.
In October, with the release of AMD's latest Hawaii GPU, the company took another step by reorganizing the internal architecture of CrossFire at the chip level with XDMA. The result was frame pacing that worked on the R9 290X and R9 290 at all resolutions, including Eyefinity, though it still left out older DX9 titles.
One thing that had not been addressed, at least not until today, was the set of issues surrounding AMD's Hybrid CrossFire technology, now known as Dual Graphics. This is the ability of an AMD APU with integrated Radeon graphics to pair with a low-cost discrete GPU to improve graphics performance and gaming experiences. Recently, Tom's Hardware discovered that Dual Graphics suffered from the exact same scaling issues as standard CrossFire: frame rates in FRAPS looked good, but the actual perceived frame rate was much lower.
A little while ago a new driver made its way into my hands under the name of Catalyst 13.35 Beta X, a driver that promised to enable Dual Graphics frame pacing with Kaveri and R7 graphics cards. As you'll see in the coming pages, the fix definitely is working. And, as I learned after doing some more probing, the 13.35 driver is actually a much more important release than it at first seemed. Not only is Kaveri-based Dual Graphics frame pacing enabled, but Richland and Trinity are included as well. And even better, this driver will apparently fix resolutions higher than 2560x1600 in desktop graphics as well - something you can be sure we are checking on this week!
Just as we saw with the first implementation of Frame Pacing in the Catalyst Control Center, with the 13.35 Beta we are using today you'll find a new set of options in the Gaming section to enable or disable Frame Pacing. The default setting is On, which makes me smile inside every time I see it.
The hardware we are using is the same basic setup from my initial review of the AMD Kaveri A8-7600 APU. That includes the A8-7600 APU, an ASRock A88X mini-ITX motherboard, 16GB of DDR3-2133 memory and a Samsung 840 Pro SSD. Of course, for our testing this time we needed a discrete card to enable Dual Graphics, and we chose the MSI R7 250 OC Edition with 2GB of DDR3 memory. This card will run you an additional $89 or so on Amazon.com. You could use either the DDR3 or GDDR5 versions of the R7 250 as well as the R7 240, but in our talks with AMD they seemed to think the R7 250 DDR3 was the sweet spot for the CrossFire implementation.
Both the R7 250 and the A8-7600 actually share the same shader count of 384 stream processors, or 6 Compute Units based on the new nomenclature that AMD is creating. However, the MSI card is clocked at 1100 MHz while the GPU portion of the A8-7600 APU runs at only 720 MHz.
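The shader and clock figures above imply a straightforward raw-throughput gap between the two GCN parts. A quick sanity check, using the article's numbers and GCN's 64 shaders per Compute Unit:

```python
# GCN nomenclature check: shader count = compute units x 64 lanes per CU.
SHADERS_PER_CU = 64

cus = 6
shaders = cus * SHADERS_PER_CU
print(shaders)  # 384, matching both the R7 250 and the A8-7600's GPU

# Same shader count, different clocks -> peak throughput scales with clock.
msi_mhz, apu_mhz = 1100, 720
print(round(msi_mhz / apu_mhz, 2))  # ~1.53x peak shader throughput for the card
```

Real-world Dual Graphics scaling will land well below that peak ratio, of course, since memory bandwidth and frame pacing behavior matter as much as raw shader math.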
So the question is, has AMD truly fixed the issues with frame pacing with Dual Graphics configurations, once again making the budget gamer feature something worth recommending? Let's find out!
Introduction and Features
Corsair's new CS Series Modular PSUs include four models: the CS450M, CS550M, CS650M and CS750M. All of the power supplies in the CS Series feature modular cables, high efficiency (80 Plus Gold certified) and quiet operation. In addition, Corsair continues to offer a full line of high quality power supplies, memory components, cases, cooling components, SSDs and accessories for the PC market.
Here is what Corsair has to say about their CS Series Modular PSUs: “The CS Modular Series is designed for basic and midrange PCs, but offers features and performance traditionally reserved for higher-end models. 80 Plus Gold efficiency and a thermally controlled fan ensure quiet operation and lower energy use, and the modular, detachable cable set makes installations and upgrades faster and better looking.”
“80 Plus Gold rated efficiency saves you money on your power bill and produces less heat than less efficient power supplies. The flat black modular cables allow you to enjoy fast, neat builds. And, like all Corsair power supplies, CS Series Modular is built with high-quality components and is guaranteed to deliver clean, stable, continuous power.”
Corsair CS Series Modular PSU Key Features: (from the Corsair website)
Introduction, Specifications and Packaging
As of yesterday, the OCZ we all knew was officially acquired by Toshiba. They are now referred to as OCZ Storage Solutions, acting as a wholly owned subsidiary of Toshiba Group:
This deal has been in the works for a while now, and while some suspected OCZ might be going under, they have continued to release new drives. The acquisition is more beneficial to OCZ than you might think, in that they now have much better access to Toshiba flash memory. Further, they can likely purchase it at better costs than are available to those outside of the new parent company's umbrella.
Today is no different, and OCZ is ringing in the pairing with a new product launch:
Let's jump right into the specs:
OCZ also provided a comparison against prior models:
This new model, just like the Vector 150, sports Toshiba 19nm flash. It uses a slightly newer version of the Barefoot 3 controller, but with a lower endurance spec and a shorter warranty period.
Introduction and Technical Specifications
Courtesy of ECS
The Z87H3-A3X is ECS' latest release in their L337 Gaming board line. Similar to the A2X Extreme board, ECS designed the Z87H3-A3X with a liberal amount of gold, from the gold plating on its capacitors to the gold tint on its integrated heat sinks. This board is a steal at its $119.99 MSRP with its Intel Z87 chipset and performance-oriented components.
ECS powers the Z87H3-A3X motherboard with a 6-phase digital power regulation system to ensure consistent power delivery to the CPU under all operating circumstances. The Z87H3-A3X includes the following integrated features: six SATA 6Gb/s ports and one eSATA port; an Intel GigE NIC; two PCI-Express x16 slots for up to dual-card support; four PCI-Express x1 slots; and USB 2.0 and 3.0 port support. The 80P button configures what information displays on the diagnostic display once the board has successfully initialized.
As part of the L337 Gaming series, the Z87H3-A3X includes the Durathon power delivery solution and an innovative cooling solution. The board's heat sinks are designed to enhance airflow over key components to aid in cooling, while the Durathon system incorporates enhanced power modules to maximize performance and minimize failure potential.
Lenovo introduces a unique form factor
Lenovo isn't a company that seems interested in slowing down. Just when you think the world of notebooks is getting boring, it releases products like the ThinkPad Tablet 2 and the Yoga 2 Pro. Today we are looking at another innovative product from Lenovo: the Yoga Tablet 8 and Yoga Tablet 10. While the tablets share the Yoga branding seen in recent convertible notebooks, these are NOT Windows-based PCs - something that I fear might confuse some consumers.
Instead, this tablet pair is based on Android (4.2.2 at this point), which brings with it several advantages. First, the battery life is impressive, particularly with the 8-in version, which clocked in at more than 17 hours in our web browsing test! Second, the form factor of these units is truly unique and not only allows for larger batteries but also a more comfortable in-the-hand feeling than I have had with any other tablet.
Check out the video overview below!
You can pick up the 8-in version of the Lenovo Yoga Tablet for just $199 while the 10.1-in model starts at $274.
The Lenovo Yoga Tablet is available in both 8-in and 10.1-in sizes, and the hardware is mostly identical between the two units, including screen resolution (1280x800) and SoC hardware (MediaTek quad-core Cortex-A7). The larger model does get an 8000 mAh battery (over the 6000 mAh in the 8-in), but that isn't enough to counterbalance the power draw of the larger screen.
The 1280x800 resolution is a bit lower than I would like but is perfectly acceptable on the 8-in version of the Yoga Tablet. On the 10-in model though the pixels are just too big and image quality suffers. These are currently running Android 4.2.2 which is fine, but hopefully we'll see some updates from Lenovo to more current Android versions.
A Hard Decision
Welcome to our second annual (only chumps say first annual... crap) Best Hardware of the Year awards. This is where we argue the order of candidates in several categories on the podcast and, some time later, compile the results into an article. The majority of these select the best hardware of its grouping but some look at the more general trends of our industry.
As an aside, Google Monocle will win Best Hardware Ever 2014, 2015, and 2017. It will fail to be the best of all time for 2016, however.
If you would like to see the discussion as it unfolded then you should definitely watch Episode 282, recorded January 2nd, 2014. You do not even need to navigate away because we left it tantalizingly embedded below this paragraph. You know you want to enrich the next two hours of your life. Click it. Click it a few times if you have click-to-enable plugins active in your browser. You can stop clicking when you see the polygons dance. You will know it when you see it.
The categories were arranged as follows:
- Best Graphics Card of 2013
- Best CPU of 2013
- Best Storage of 2013
- Best Case of 2013
- Best Motherboard of 2013
- Best Price Drop of 2013
- Best Mobile Device of 2013
- Best Trend of 2013
- Worst Trend of 2013
Each of the winners will be given our "Editor's Choice" award regardless of the actual badge it earned in any review we conducted of it. This is because the product is the choice of our editors for this year even if it is not an "Editor's Choice". It may not even have been reviewed by us at all.
Also, the criteria for winning each category are left as vague as possible for maximum interpretation.
A Refreshing Change
Refreshes are bad, right? I guess that depends on who you talk to. In the case of AMD, it is not a bad thing. For people who live for cutting edge technology in the 3D graphics world, it is not pretty. Unfortunately for those people, reality has reared its ugly head. Process technology is slowing down, but product cycles keep moving along at a healthy pace. This essentially necessitates minor refreshes for both AMD and NVIDIA when it comes to their product stack. NVIDIA has taken the Kepler architecture to the latest GTX 700 series of cards. AMD has done the same thing with the GCN architecture, but has radically changed the nomenclature of the products.
Gone are the days of the Radeon HD 7000 series. Instead AMD has renamed their GCN based product stack with the Rx 2xx series. The products we are reviewing here are the R9 280X and the R9 270X. These products were formerly known as the HD 7970 and HD 7870 respectively. These products differ in clock speeds slightly from the previous versions, but the differences are fairly minimal. What is different are the prices for these products. The R9 280X retails at $299 while the R9 270X comes in at $199.
Asus has taken these cards and applied their latest DirectCU II technology to them. These improvements relate to design, component choices, and cooling. These are all significant upgrades from the reference designs, especially when it comes to the cooling aspects. It is good to see such a progression in design, but it is not entirely surprising given that the first HD 7000 series debuted in January 2012.
The AMD Kaveri Architecture
Kaveri: AMD’s New Flagship Processor
How big is Kaveri? We already know the die size of it, but what kind of impact will it have on the marketplace? Has AMD chosen the right path by focusing on power consumption and HSA? Starting out an article with three questions in a row is a questionable tactic for any writer, but these are the things that first come to mind when considering a product the likes of Kaveri. I am hoping we can answer a few of these questions by the end of this article, but alas it seems as though the market will have the final say as to how successful this new architecture is.
AMD has been pursuing the “Future is Fusion” line for several years, but it can be argued that Kaveri is truly the first “Fusion” product that completes the overall vision for where AMD wants to go. The previous several generations of APUs were initially not all that integrated in a functional sense, but the complexity and completeness of that integration has been improved upon with each iteration. Kaveri takes this integration to the next step, and one which fulfills the promise of a truly heterogeneous computing solution. While AMD has the hardware available, we have yet to see if the software companies are willing to leverage the compute power afforded by a robust and programmable graphics unit powered by AMD’s GCN architecture.
(Editor's Note: The following two pages were written by our own Josh Walrath, discussing the technology and architecture of AMD Kaveri. Testing and performance analysis by Ryan Shrout starts on page 3.)
The first step in understanding Kaveri is taking a look at the process technology that AMD is using for this particular product. Since AMD divested itself of their manufacturing arm, they have had to rely on GLOBALFOUNDRIES to produce nearly all of their current CPUs and APUs. Bulldozer, Piledriver, Llano, Trinity, and Richland based parts were all produced on GF’s 32 nm PD-SOI process. The lower power APUs such as Brazos and Kabini have been produced by TSMC on their 40 nm and 28 nm processes respectively.
Kaveri will take a slightly different approach here. It will be produced by GLOBALFOUNDRIES, but it will forego the SOI and utilize a bulk silicon process. 28 nm HKMG is very common around the industry, but few pure play foundries were willing to tailor their process to the direct needs of AMD and the Kaveri product. GF was able to do such a thing. APUs are a different kind of animal when it comes to fabrication, primarily because the two disparate units require different characteristics to perform at the highest efficiency. As such, compromises had to be made.
Introduction and Design
We’re always on the hunt for good docking stations, and sometimes it can be difficult to locate one when you aren’t afforded the luxury of a dedicated docking port. Fortunately, with the advent of USB 3.0 and the greatly improved bandwidth that comes along with it, the options have become considerably more robust.
Today, we’ll take a look at StarTech’s USB3SDOCKHDV, more specifically labeled the Universal USB 3.0 Laptop Docking Station - Dual Video HDMI DVI VGA with Audio and Ethernet (whew). This docking station carries an MSRP of $155, with a street price of around $125 at resellers such as Amazon.com, and sits well above other StarTech options (such as the $100 USBVGADOCK2, which offers just one video output—VGA—10/100 Ethernet, and four USB 2.0 ports).
The big selling points of the USB3SDOCKHDV are its addition of three USB 3.0 ports and Gigabit Ethernet—but most enticingly, its purported ability to provide three total screens simultaneously (including the connected laptop’s LCD) by way of dual HD video output. This video output can be achieved by way of either HDMI + DVI-D or HDMI + VGA combinations (but not by VGA + DVI-D). We’ll be interested to see how well this functionality works, as well as what sort of toll it takes on the CPU of the connected machine.
Continue reading our review of the StarTech USB3SDOCKHDV USB 3.0 Docking Station!!!
The stars are aligned
One of the most frequent questions we get at PC Perspective is some derivative of "is now the time to buy or should I wait?" If you listen to the PC Perspective Podcast or This Week in Computer Hardware you'll know that I usually err on the side of purchasing now. Why should you hold yourself back from the enjoyment of technology unless something DRAMATIC is just over the horizon?
This week I got another such email that prompted me to do some thinking. After just returning from CES 2014 in Las Vegas, I think it's fair to say that we didn't hear anything concrete about upcoming SSD plans that would really be considered monumental. Sure, we saw plenty of PCIe SSDs as well as some M.2 options, but little for PC enthusiasts or even users looking to replace the hard drives in their PlayStation 4. Our team thinks that now is about as good a time to buy an SSD as you will get.
And while you are always going to see price drops on commodity goods like flash storage, the prices on some of our favorite SSDs are at a low that we haven't witnessed without the rebates and flash deals of Black Friday / Cyber Monday. Let's take a look at a few:
Note: It should go without saying that all of these price discussions are as of this writing and could change...
Samsung 840 EVO 1TB SSD (Red: Amazon, Yellow: Newegg) - Graph courtesy HoverHound
The flagship of the Samsung 840 EVO series, also the personal favorite of Allyn and most of the rest of the PC Perspective team, is near its all-time low in price at just $529 for the 1TB capacity. That is a cost per GB of just $0.529; no rebates, no gimmicks.
Samsung 840 EVO 500GB SSD (Red: Amazon, Yellow: Newegg) - Graph courtesy HoverHound
Likely the most popular of the EVO series is the 500GB model, currently selling on Amazon for $309, or $0.618/GB. Obviously that is a higher mark than the 1TB hits, but as you'll see in our tables below, in general, the higher the capacity you purchase, the better the value per GB you are going to find.
There are other capacities of the Samsung 840 EVO, starting at 120GB, going to 250GB, and even a 750GB model; all are included in the pricing table below. Depending on your budget and your need for the best perceived value, you can make a decision on your own.
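The cost-per-gigabyte figures quoted above are simple division, but a small helper makes the capacity-versus-value point explicit. Prices are the ones quoted in this article, as of this writing:

```python
# Price-per-GB at the article's quoted prices; larger capacities
# generally land at a better cost per gigabyte.

def dollars_per_gb(price, capacity_gb):
    """Cost per gigabyte, rounded to tenths of a cent."""
    return round(price / capacity_gb, 3)

evo_1tb = dollars_per_gb(529.0, 1000)  # $0.529/GB
evo_500 = dollars_per_gb(309.0, 500)   # $0.618/GB
print(evo_1tb, evo_500)

assert evo_1tb < evo_500  # the 1TB model is the better value per GB
```

The same calculation applied to the rest of the capacity ladder is what produces the pricing tables below.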
Let's not forget the other options on the market; Samsung may be the strongest player today but companies like Intel, OCZ and Corsair continue to have a strong presence. The second best selling series of SSD during the holidays was the Intel 530 series of drives that utilize the LSI SandForce SF2281 controller. How do they stack up price-wise?
DisplayPort to Save the Day?
During an impromptu meeting with AMD this week, the company's Corporate Vice President for Visual Computing, Raja Koduri, presented me with an interesting demonstration of a technology that allowed the refresh rate of a display on a Toshiba notebook to perfectly match with the render rate of the game demo being shown. The result was an image that was smooth and with no tearing effects. If that sounds familiar, it should. NVIDIA's G-Sync was announced in November of last year and does just that for desktop systems and PC gamers.
Since that November unveiling, I knew that AMD would need to respond in some way. The company had basically been silent since learning of NVIDIA's release but that changed for me today and the information discussed is quite extraordinary. AMD is jokingly calling the technology demonstration "FreeSync".
Variable refresh rates as discussed by NVIDIA.
During the demonstration, AMD's Koduri had two identical systems side by side, each based on a Kabini APU. Both were running a basic graphics demo of a rotating windmill. One was a standard software configuration while the other had a modified driver that communicated with the panel to enable variable refresh rates. As you likely know from our various discussions about variable refresh rates and G-Sync technology from NVIDIA, this setup results in a much better gaming experience, as it produces smoother animation on the screen without the horizontal tearing associated with v-sync disabled.
Obviously AMD wasn't using the same controller module that NVIDIA is using on its current G-Sync displays, several of which were announced this week at CES. Instead, the internal connection on the Toshiba notebook was the key factor: Embedded DisplayPort (eDP) apparently has a feature to support variable refresh rates on LCD panels. This feature was included for power savings on mobile and integrated devices, as refreshing the screen without new content can waste valuable battery resources. But, for performance and gaming considerations, this feature can be used to enable a variable refresh rate meant to smooth out game play, as AMD's Koduri said.
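A rough sketch of why a variable refresh helps: on a fixed 60 Hz panel with v-sync on, a frame that just misses a refresh boundary waits a full interval before it's shown (or, with v-sync off, tears), while a variable-refresh panel simply redraws the moment the frame is ready. The frame times below are illustrative, not measured:

```python
# Time until a finished frame hits the screen: fixed 60 Hz scan-out
# with v-sync vs. a variable refresh that redraws when the frame is ready.
import math

REFRESH_MS = 1000.0 / 60.0  # fixed 60 Hz scan-out interval (~16.67 ms)

def vsync_display_time(render_ms):
    """With v-sync on, a frame waits for the next fixed refresh boundary."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

def variable_display_time(render_ms):
    """With variable refresh, the panel redraws as soon as the frame is done."""
    return render_ms

# An 18 ms frame just misses the 16.67 ms boundary and waits a full interval.
print(round(vsync_display_time(18.0), 1))     # 33.3 ms on the fixed panel
print(round(variable_display_time(18.0), 1))  # 18.0 ms with variable refresh
```

That quantized jump between 16.7 ms and 33.3 ms display times is exactly the stutter a variable refresh eliminates.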
Once known as Logan, now known as K1
NVIDIA has bet big on Tegra. Since the introduction of the SoC's first iteration, that much was clear. With the industry push to mobile computing and the decreased importance of the classic PC design, developing and gaining traction with a mobile processor was not only an expansion of the company’s portfolio but a critical shift in the mindset of a graphics giant.
The problem thus far is that while NVIDIA continues to enjoy success in the markets of workstation and consumer discrete graphics, the Tegra line of silicon-on-chip processors has faltered. Design wins have been tough to come by. Other companies with feet already firmly planted on this side of the hardware fence continue to innovate and seal deals with customers. Qualcomm is the dominant player for mobile processors with Samsung, MediaTek, and others all fighting for the same customers NVIDIA needs. While press conferences and releases have been all smiles and sunshine since day one, the truth is that Tegra hasn’t grown at the rate NVIDIA had hoped.
Solid products based on NVIDIA Tegra processors have been released. The first Google Nexus 7 used the Tegra 3 processor, and was considered the best Android tablet on the market by most, until it was succeeded by the 2013 iteration of the Nexus 7 this year. Tegra 4 slipped backwards, though – the NVIDIA SHIELD mobile gaming device was the answer for a company eager to show the market they built compelling and relevant hardware. It has only partially succeeded in that task.
With today’s announcement of the Tegra K1, previously known as Logan or Tegra 5, NVIDIA hopes to once again spark a fire under partners and developers, showing them that NVIDIA’s dominance in the graphics fields of the PC has clear benefits to the mobile segment as well. During a meeting with NVIDIA about Tegra K1, Dan Vivoli, Senior VP of marketing and a 16-year employee, equated the release of the K1 to the original GeForce GPU. That is a lofty ambition and puts a lot of pressure on the entire Tegra team, not to mention the K1 product itself, to live up to.
Tegra K1 Overview
What we previously knew as Logan or Tegra 5 (and it was actually called Tegra 5 until just a couple of days ago) is now being released as the Tegra K1. The ‘K’ designation indicates the graphics architecture that powers the SoC, in this case Kepler. Also, it’s the first one. So, K1.
The CPU cores of the Tegra K1 look very familiar: four ARM Cortex-A15 “r3” cores and 2MB of L2 cache, with a fifth A15 core used for lower power situations. This 4+1 design is the same one introduced with the Tegra 4 processor last year and allows NVIDIA to implement its own unique style of “big.LITTLE” design. Some slight modifications to the cores improve performance and efficiency with Tegra K1, but not by much – the main CPU is very similar to the Tegra 4's.
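A simplified picture of how a 4+1 arrangement behaves (the threshold and scheduling logic below are invented for illustration, not NVIDIA's actual governor): light workloads run on the low-power companion core alone, and work migrates to the four main A15 cores only under heavier load, with the two clusters never active at once.

```python
# Simplified 4+1 cluster selection (illustrative threshold, not NVIDIA's
# real governor): either the companion core runs alone, or the four main
# A15 cores take over -- the clusters are never active simultaneously.

def active_cluster(load_pct):
    """Pick which cluster a 4+1 design would power on for a given load."""
    return "companion" if load_pct < 20 else "main x4"

print(active_cluster(5))   # companion (idle / background tasks)
print(active_cluster(80))  # main x4   (demanding, multithreaded work)
```

The appeal of the scheme is that background and idle work never wakes the power-hungry main cluster at all, which is where mobile SoCs spend most of their time.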
NVIDIA also unveiled late last night another version of the Tegra K1 that replaces the quad A15 cores with two of the company's custom-designed Denver CPU cores. Project Denver, announced in early 2011, is NVIDIA's attempt at building its own core design based on the ARMv8 64-bit ISA. This puts that iteration of Tegra K1 on the same level as Apple's A7 and Qualcomm's Krait processors. When these are finally available in the wild, it will be incredibly intriguing to see how well NVIDIA's architects did in the first true CPU design from the GPU giant.
Follow all of our coverage of the show at http://pcper.com/ces!
Introduction and Unboxing
We've been covering NVIDIA's new G-Sync tech for quite some time now, and displays so equipped are finally shipping. With all of the excitement going on, I became increasingly interested in the technology, especially since I'm one of those guys who is extremely sensitive to input lag and the inevitable image tearing that results from vsync-off gaming. Increased discussion on our weekly podcast, coupled with the inherent difficulty of demonstrating the effects without seeing G-Sync in action in person, led me to pick up my own ASUS VG248QE panel for the purpose of this evaluation and review. We've generated plenty of other content revolving around the G-Sync tech itself, so let's get straight into what we're after today - evaluating the out-of-box experience of the G-Sync installation kit.
All items are well packed and protected.
Included are installation instructions, a hard plastic spudger for opening the panel, a couple of stickers, and all necessary hardware bits to make the conversion.
Introduction and Technical Specifications
Courtesy of GIGABYTE
The GIGABYTE G1.Sniper 5 motherboard is among GIGABYTE's flagship boards supporting the fourth generation of the Intel Core processor line through the integrated Z87 chipset. The board offers support for the newest generation of Intel LGA1150-based processors with all the integrated features and port support you've come to expect from a high-end board. At an MSRP of $419.99, the G1.Sniper 5's premium price is matched only by its premium and expansive feature set.
Courtesy of GIGABYTE
Courtesy of GIGABYTE
GIGABYTE packed the G1.Sniper 5 full of premium features to ensure its viability as a top-rated contender. The board features the Ultra Durable 5 Plus power technology and the Amp-Up Audio technology. Ultra Durable 5 Plus brings several high-end power components into the board's design: International Rectifier (IR) manufactured PowIRstage™ ICs and PWM controllers, Nippon Chemi-con manufactured Black Solid capacitors with a 10k-hour operational rating at 105°C, 15-micron gold plating on the CPU socket pins, and two 0.070mm copper layers embedded into the PCB for optimal heat dissipation. GIGABYTE's Amp-Up Audio technology integrates an op-amp socket into the board's audio PCB, giving the user the ability to customize their audio listening experience. Additionally, the G1.Sniper 5 has the following integrated features: 10 SATA 6Gb/s ports; dual GigE NICs - an Intel NIC and a Qualcomm Killer NIC; four PCI-Express x16 slots for up to quad-card NVIDIA SLI or AMD CrossFire support; three PCI-Express x1 slots; on-board power, reset, and BIOS reset buttons; switch BIOS and Dual-BIOS switches; a 2-digit diagnostic LED display; integrated voltage measurement points; and USB 2.0 and 3.0 port support.
Courtesy of GIGABYTE
Small form factor cases and the push toward Mini-ITX designs gained dramatic momentum during 2013 as the smaller PC once again became a popular trend. Though Shuttle, a company that hardly exists in the form it did in 2004, was the first PC hardware company to really drive home the idea of an SFF system design, many other players have since released compelling products, helping to strengthen the SFF as one of the unique possibilities for enthusiast PCs.
Even better, though a Mini-ITX platform once meant limited options for hardware and performance, with companies like ASUS, EVGA, BitFenix, and others in the mix, building an incredibly fast and powerful gaming machine from small hardware is not only easy but can be done at a lower price than you might expect.
One entry that found its way to our offices this December comes from Silverstone in the form of the Raven Z RVZ01 case. This case includes unique features and capabilities, including the ability to support nearly any high-end graphics card on the market (dual slot or single) and space for larger heatsinks and even liquid coolers, along with a home theater friendly look and style. Oh, and it's almost the same design that Valve used for its beta Steam Machines as well. (Update: It turns out the Steam Machine is actually a fair bit smaller than the Silverstone RVZ01.)
Sapphire Triple Fan Hawaii
It was mid-December when the very first custom-cooled AMD Radeon R9 290X card hit our offices in the form of the ASUS R9 290X DirectCU II. It was cooler, quieter, and faster than the reference model; that is a combination that is hard to pass up (if you could actually find one for sale). More and more of these custom models, in both R9 290 and R9 290X flavors, are filtering their way into PC Perspective. Next on the chopping block is the Sapphire Tri-X model of the R9 290X.
Sapphire's triple fan cooler already made quite an impression on me when we tested a version of it in the R9 280X retail round-up from October. It kept the GPU cool, but it was also the loudest of the retail cards tested at the time. For the R9 290X model, Sapphire has made some tweaks to the fan speeds and the design of the cooler which make it a better overall solution, as you will soon see.
The key goal for any custom-cooled AMD R9 290/290X card is to beat AMD's reference cooler on performance and noise while avoiding the variable clock rates that plagued the reference design. Does Sapphire meet these goals?
The Sapphire R9 290X Tri-X 4GB
While the ASUS DirectCU II card was taller and more menacing than the reference design, the Sapphire Tri-X cooler is longer and sleeker than the competition thus far. The bright yellow and black color scheme is both attractive and unique, though it does lack the LED lighting that the 280X model showcased.
Sapphire has overclocked this model slightly, to 1040 MHz on the GPU clock, which puts it in good company.
| | AMD Radeon R9 290X | ASUS R9 290X DirectCU II | Sapphire R9 290X Tri-X |
|---|---|---|---|
| Rated Clock | 1000 MHz | 1050 MHz | 1040 MHz |
| Memory Clock | 5000 MHz | 5400 MHz | 5200 MHz |
| TDP | ~300 watts | ~300 watts | ~300 watts |
| Peak Compute | 5.6 TFLOPS | 5.6+ TFLOPS | 5.6+ TFLOPS |
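The peak compute figures in the table above follow directly from shader count and clock speed: single-precision FLOPS is the number of stream processors times the clock times two (one fused multiply-add per shader per cycle). A quick sketch of that arithmetic, assuming the R9 290X's 2816 stream processors (a figure not stated in the article itself):

```python
# Theoretical peak single-precision compute for a GPU:
#   FLOPS = stream processors x clock (Hz) x 2 ops per cycle (fused multiply-add)
# Assumes 2816 stream processors, the published count for the R9 290X.

def peak_tflops(shaders: int, clock_mhz: float) -> float:
    """Return theoretical peak single-precision TFLOPS."""
    return shaders * clock_mhz * 1e6 * 2 / 1e12

for name, mhz in [("Reference", 1000), ("ASUS DirectCU II", 1050), ("Sapphire Tri-X", 1040)]:
    print(f"{name}: {peak_tflops(2816, mhz):.2f} TFLOPS")
```

At the reference 1000 MHz this works out to roughly 5.63 TFLOPS, which matches the rounded 5.6 TFLOPS figure above; the factory-overclocked cards land slightly higher.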
There are three fans on the Tri-X design, as the name would imply, but all three are the same size, unlike the R9 280X design with its smaller central fan.
Introduction and Technical Specifications
Courtesy of Phanteks
A relative newcomer in the enthusiast space, Phanteks has won hearts and minds with its high-performance, innovatively designed cooling solutions. The PH-TC12DX cooler features a massive dual-radiator tower actively cooled by two 120mm fans, with a copper, nickel-plated CPU base plate. The cooler comes packaged with support for all current Intel and AMD CPU socket offerings. To properly gauge the PH-TC12DX's performance, we put it up against several similarly classed air- and water-based cooling solutions. At a retail price of $54.99, the Phanteks PH-TC12DX offers solid performance without breaking the bank.
Courtesy of Phanteks
Courtesy of Phanteks
Courtesy of Phanteks
The Phanteks PH-TC12DX cooler consists of a single tower radiator with four U-shaped heat pipes intersecting its cooling fins. The cooler uses nickel-plated copper heat pipes to transfer heat from the copper CPU base plate to the fins of the aluminum radiator for optimal heat transmission and dispersal. The tower is sandwiched between two high-airflow 120mm fans for heat dispersion from the radiator. Phanteks went to great lengths to give the PH-TC12DX a sleek appearance, from the black coloration of the radiator to the branded top plate that hides the heat pipe termination points.
Courtesy of Phanteks
Phanteks includes everything you need to get the cooler up and running in your system: mounting kits supporting both Intel and AMD-based systems, dual PH-F120HP 120mm fans, fan mounting kits, a sleeved dual-ended fan power cable, and Phanteks PH-NDC thermal paste.
Introduction and Features
We have been reviewing Seasonic power supplies for over ten years here at PC Perspective, and they have never failed to impress us. Seasonic is also one of the few companies that actually builds its own power supplies (along with supplying units to numerous other big-name brands). Seasonic has built a stellar reputation for producing some of the best PC power supplies on the market today. In its ongoing pursuit to continuously improve its products, Seasonic has recently introduced the S12G Series, which includes four models: the S12G-450, S12G-550, S12G-650, and the S12G-750 that we will be taking a detailed look at in this review. Here are a few of the highlights offered by the new S12G Series power supplies.
• 80Plus Gold certification
• Standard all in one, flat black cabling
• High +12V Output
• Smart and Silent Fan Control (S2FC)
• S12G-750/650: PCI-E 8P/6P x 4, SATA x 10, 4P Molex x 4, FDD x 1
• S12G-550/450: PCI-E 8P/6P x 2, SATA x 8, 4P Molex x 3, FDD x 1
• Worldwide 5-Year Warranty
The S12G Series is targeted toward gamers and PC enthusiasts who want solid performance at a user-friendly price. To accomplish this, Seasonic has designed the S12G Series on the same basic platform as many of its premium products but has forgone a few features, like modular cables and fanless operation, for price-conscious consumers. Retail prices currently range from $79.99 USD for the S12G-450 to $109.99 USD for the S12G-750 (newegg.com, November 2013).
Here is what Seasonic has to say about the new S12G Series: “The S12G Series is the newest addition to Seasonic’s families of award winning retail products, representing the latest innovation of our engineering team. To meet the demands of users who are looking for reliable 80Plus Gold performance for gaming and overall usage, the S12G Series is designed to support Intel’s Haswell processors, features more SATA cables and is an affordable solution for a wide range of applications.”
Seasonic S12G Series Key Features:
(Courtesy of Seasonic)