Introduction and First Impressions
NZXT has created a stylish mid-tower enclosure with their S340 chassis, and made it an especially attractive option with a retail price of just $69.99. Can this new case contend in a crowded market? We will find out here!
With several interesting designs under their belt, NZXT is no surprise when it comes to nice-looking enclosures. I looked at their H440 Razer Edition recently, and the H440 it was based on is a popular mid-tower enclosure with good looks and performance. This new S340 is very similar to the H440 but on a slightly reduced scale, offering a more open internal layout at the cost of hard drive storage space. This is a move that won’t work for everyone, but as I mentioned in the recent SilverStone Raven RV05 review, being limited to a pair of hard drives and a few SSDs is a fair tradeoff for a gaming or productivity setup.
On the subject of storage, like the aforementioned H440 and RV05, this S340 enclosure is another example of an optical-bay-free design. There are no hidden slim ODD bays here; for any optical data needs a user will have to turn to an external solution. I personally like an open layout and don’t use 5.25” bays at all anymore, and the added room in the S340 provides nearly unlimited space for long GPUs while staying clean thanks to a clever approach to cable routing.
Introduction, Specifications and Packaging
Today we're taking a quick look at a pair of drive enclosures sent to us by ICY DOCK.
To the left is the ToughArmor MB996SP-6SB, a 5.25" bay hot swap chassis capable of mounting six 2.5" SATA devices. To the right is the ICYBento MB559U3S-1S, a UASP external 3.5" HDD enclosure connectable by either USB 3.0 or eSATA.
We did note that the spec sheet and manual list SATA-to-Molex power adapters, but we found no such adapters in the box. We may have received old stock, as the website appears more up to date than the paper manual we received.
**Update:** ICY DOCK reached out and let me know that all shipping boxes of this part should come with a pair of Molex-to-SATA power cables. Our sample came from their techs, who must have forgotten to put those cables back into our box.
Both items were well packaged with no shipping damage noted.
Introduction and Technical Specifications
Courtesy of Corsair
Working in concert with GIGABYTE, Corsair developed the Dominator Platinum DDR4-3400 16GB kit to pair up perfectly with the X99-SOC Champion motherboard. The DDR4 modules feature orange anodized heat spreaders that exactly match the SOC Champion's color scheme, as well as two Dominator Vengeance Platinum memory coolers with integrated orange LEDs. The memory modules are built with hand-screened ICs to meet the rigorous quality demands necessary for achieving the rated speeds.
Courtesy of Corsair
The modules included in the Dominator Platinum DDR4-3400 16GB kit tout many design innovations that enable them to maintain their rated speed, such as the latest version of Corsair's Dominator DHX aluminum heat spreader, which directly cools the specially designed PCB and the hand-sorted ICs used in module construction. The modules are optimized for use with Intel® Core™ “Haswell-E” CPUs and the Intel X99 platform, and include support for the latest version of Intel XMP (Extreme Memory Profile), XMP 2.0.
Introduction, Specifications and Packaging
Editor's note: We are hosting a live stream event with our friends at Intel's SSD group today to discuss the new SSD 750 Series launch and to give away a couple of the 400GB units as well! Be sure to stop by to ask questions, learn about the technology and have a chance to win some hardware!
Intel has a habit of overlapping their enterprise and consumer product lines. Their initial X25-M was marketed to both consumers and enterprise, with heavier workloads reserved for the X25-E. Their SSD 320 Series was also spec'd for both consumer and enterprise usage, and their most recent SSD 730 Series was actually an overclocked version of their SSD DC S3500. Clearly this is an established trend for Intel, so when they dominated flash memory performance with the SSD DC P3700 launch last year, pretty much everyone following these sorts of things eagerly anticipated a consumer release.
While they were hard to find outside of enterprise supply chains, some dedicated users picked up that enterprise part for their enthusiast systems, but many were disappointed as the P3700's enterprise hardware and firmware conflicted with many consumer motherboards' BIOS, rendering it unbootable for some and causing address space conflicts for others. In short, the P3700 was a great product that simply did not function properly with most consumer motherboards. All anyone could do was wait for Intel to spin a consumer product from this enterprise part, and that day is today:
This is the add-in card version of the new Intel SSD 750 Series that brings NVMe technology and insane performance levels to consumers at a cost that is more affordable than you might think.
As with the enterprise variant, Intel chose to launch the SSD 750 Series in the familiar HHHL PCIe x4 form factor as well as in a 2.5" SFF-8639 package. The 2.5" model contains the exact same set of components, just rearranged into a smaller device.
Despite being 2.5", this is not a SATA device. While the connector may look similar, it is *very* different:
As you can see above, SFF-8639 further extends the familiar SATA power and data connections, which had already been extended a few times to add additional SAS data lines. The new spec adds a complete row of pins on the back side of the connector to support four lanes of PCIe. This means the SFF variant of the SSD 750 will perform identically to the PCIe half-height card version. Since SFF-8639 was born as an enterprise spec, one question remains - how do you connect it to a consumer desktop motherboard? Well, desktop motherboards are coming with M.2 ports that can support up to PCIe 3.0 x4, so all you really need is a simple way to get from point A to point B:
Pictured above (left) is the ASUS 'Hyper Kit' adapter PCB, which was sampled to us with their new Sabertooth X99 motherboard just for testing these new 2.5" devices. The connector you see at the right may look familiar, as it is an internal Mini-SAS HD (SFF-8643) cable commonly used with high end SAS RAID cards. Intel is basically borrowing the physical spec, but rewiring those four SAS lanes over to the PCIe pins of the SFF-8639 connector at the other end of the cable.
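For context on the bandwidth at stake here, PCIe 3.0 runs each lane at 8 GT/s with 128b/130b line encoding. A quick back-of-the-envelope calculation (standard PCIe 3.0 figures, nothing Intel-specific) shows why four lanes leave SATA's ~600 MB/s far behind:

```python
# Rough PCIe 3.0 x4 bandwidth estimate using standard spec figures.
GT_PER_S = 8e9          # 8 GT/s per lane (PCIe 3.0 signaling rate)
ENCODING = 128 / 130    # 128b/130b line encoding overhead
LANES = 4

bytes_per_s_per_lane = GT_PER_S * ENCODING / 8   # bits -> bytes
total_gb_per_s = bytes_per_s_per_lane * LANES / 1e9

print(f"~{total_gb_per_s:.2f} GB/s raw for a x4 link")   # ~3.94 GB/s
```

Real-world throughput lands a bit lower once protocol overhead is accounted for, but it is still more than six times what a single SATA 6Gb/s port can deliver.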
Process Technology Overview
We have been very spoiled throughout the years. We likely did not realize exactly how spoiled we were until it became obvious that the rate of process technology advances had hit a virtual brick wall. Every 18 to 24 months we were treated to a new, faster, more efficient process node opened up to fabless semiconductor firms, and a new generation of products that would blow our hair back. Now we are at a virtual standstill when it comes to new process nodes from the pure-play foundries.
Few expected the 28 nm node to live nearly as long as it has. Some of the first cracks in the façade actually came from Intel. Their 22 nm Tri-Gate (FinFET) process took a little bit longer to get off the ground than expected. We also noticed some interesting electrical features from the products developed on that process. Intel skewed away from higher clockspeeds and focused on efficiency and architectural improvements rather than staying at generally acceptable TDPs and leapfrogging the competition by clockspeed alone. Overclockers noticed that the newer parts did not reach the same clockspeed heights as previous products such as the 32 nm based Sandy Bridge processors. Whether this decision was intentional from Intel or not is debatable, but my gut feeling here is that they responded to the technical limitations of their 22 nm process. Yields and bins likely dictated the max clockspeeds attained on these new products. So instead of vaulting over AMD’s products, they just slowly started walking away from them.
Samsung is one of the first pure-play foundries to offer a working sub-20 nm FinFET product line. (Photo courtesy of ExtremeTech)
When 28 nm was released, the plans on the books were to transition to 20 nm products based on planar transistors, thereby bypassing the added expense of developing FinFETs. It was widely expected that FinFETs would not be required to address the needs of the market. Sadly, that did not turn out to be the case. There are many other factors as to why 20 nm planar parts are not common, but the limitations of that particular process node have made it a relatively niche process appropriate for smaller, low power ASICs (like the latest Apple SoCs). The Apple A8 is rumored to be around 90 mm², which is a far cry from the traditional midrange GPU that runs from 250 mm² to 400+ mm².
The essential difficulty of the 20 nm planar node appears to be a lack of power scaling to match the increased transistor density. TSMC and others have successfully packed more transistors into every square mm as compared to 28 nm, but the electrical characteristics did not scale proportionally. Yes, there are improvements per transistor, but when designers pack all of those transistors into a large design, power and voltage issues start to arise: driving more transistors at a given voltage takes more power, which leads to more heat and a higher TDP. The GPU guys probably looked at this and figured out that while they could achieve a higher transistor density and a wider design, they would have to downclock the entire GPU to hit reasonable TDP levels. Adding these concerns to the yields and bins of a new process, the advantages of going to 20 nm would be slim to none at the end of the day.
Features and Specifications
If you are looking for the biggest, baddest, power supply on the planet, then we have an exclusive review for you today. But you better have a truck, a couple of strong friends, and very deep pockets!
Miller Electric has been manufacturing large, industrial-grade power supplies since 1929. They are one of the few power supply manufacturers who actually design and build their own products, right here in the USA. Miller’s The Power of Blue series of high-capacity power supplies includes models that go all the way up to an astounding 10,500 watts!
While we were not able to obtain a review sample of Miller’s flagship 10.5kW unit, we do have an exclusive review of the Miller XMT 300 PC, which can deliver up to 4,500 watts of pure DC power. Talk about having some extra reserve capacity… wow!
The Miller XMT 300 PC is an external power supply that is about twice the size of a typical mid-tower case and is designed to normally sit on the floor. It features a unique single, high-capacity +12V rail capable of delivering up to 375A (4,500W). A power distribution module mounts inside the PC where the normal ATX power supply would go and breaks down the incoming +12V to the minor rails +3.3V, +5V, etc., along with providing a standard set of cables and connectors. This allows one XMT 300 PC to power multiple PCs at the same time; up to twenty computers.
Miller XMT 300 PC Key Features:
• Monstrous, single rail +12V output (up to 375A peak)
• External main power unit sits on the floor
• Can support multiple PCs at the same time
• #6AWG copper cables for minimal voltage drop
• Automatic fan speed control for optimal cooling and minimal noise
• High efficiency operation (up to 87%)
• Active Power Factor Correction
• 1-Phase or 3-Phase line voltage
• 3-Year warranty
Miller XMT 300 PC Specifications:
Introduction, Specifications and Packaging
Following the same pattern that Samsung led with the 840 Pro and 840 EVO, history has repeated itself with the 850 Pro and 850 EVO. With the 850 EVO launching late last year and being quite successful, it was only a matter of time before Samsung expanded past the 2.5" form factor for this popular SSD. Today is that day:
Today we will be looking at the MSATA and M.2 form factors. To clarify, the M.2 units are still using a SATA controller and connection, and must therefore be installed in a system capable of linking SATA lanes to its M.2 port. As both products are SATA, the DRAM cache based RAPID mode included with their Magician value added software is also available for these models. We won't be using RAPID for this review, but we did take a look at it in a prior article.
Given that the 850 EVO uses VNAND - a vastly different technology from the planar NAND used in the 840 EVO - we suspect it is not subject to the same flash cell drift related issues (hopefully to be corrected soon) seen in the 840 EVO. Only time will tell for sure on that front, but we have not seen any of those issues present in 850 EVO models since their launch.
Cross sectional view of Samsung's 32-layer VNAND. Photo by TechInsights.
Samsung sampled us the M.2 SATA in 120GB and 500GB, and the MSATA in 120GB and 1TB. Since both are SATA-based, these are only physical packaging differences. The die counts are the same as the 2.5" desktop counterparts. While the pair of 120GB models should be essentially identical, we'll throw both in with the results to validate the slight differences in stated specs below.
A monitor for those that like it long
It takes a lot to really impress someone who sits in front of dual 2560x1600 30-in IPS screens all day, but the LG 34UM95 did just that. With a 34-in diagonal 3440x1440 panel forming a 21:9 aspect ratio, built on LG IPS technology for flawless viewing angles, this monitor creates a work and gaming experience that is basically unmatched in today's market. Whether you need to open up a half-dozen Excel or Word documents, keep an eye on your Twitter feed while looking at a dozen browser windows, or run games at near Eyefinity/Surround levels without bezels, the LG 34UM95 is a perfect option.
Originally priced north of $1200, the 34UM95 and many in LG's 21:9 lineup have dropped in price considerably, giving them more avenues into users' homes. There are obvious gaming advantages to the 34-in display compared to a pair of 1920x1080 panels (no bezel, 20% more pixels) but if you have a pair of 2560x1440 screens you are going to be giving up a bit. Some games might not handle 21:9 resolutions well either, just as we continue to see Eyefinity/Surround unsupported occasionally.
Productivity users will immediately see an improvement, both those of us inundated with spreadsheets, web pages and text documents as well as the more creative types with Adobe Premiere timelines. I know that Ken would definitely have approved of us keeping this monitor here at the office for his use.
Check out the video above for more thoughts on the LG 34UM95!
It's more than just a branding issue
As a part of my look at the first wave of AMD FreeSync monitors hitting the market, I wrote an analysis of how the competing technologies of FreeSync and G-Sync differ from one another. It was a complex topic that I tried to state in as succinct a fashion as possible given the time constraints and that the article subject was on FreeSync specifically. I'm going to include a portion of that discussion here, to recap:
First, we need to look inside the VRR window, the zone in which the monitor and AMD claim that variable refresh should be working without tears and without stutter. On the LG 34UM67, for example, that range is 48-75 Hz, so frame rates between 48 FPS and 75 FPS should be smooth. Next we want to look above the window, at frame rates above the 75 Hz maximum refresh rate of the window. Finally, and maybe most importantly, we need to look below the window, at frame rates under the minimum rated variable refresh target - in this example, 48 FPS.
AMD FreeSync offers more flexibility for the gamer than G-Sync around this VRR window. Both above and below the variable refresh area, AMD allows gamers to continue to select a VSync enabled or disabled setting. That setting is handled just as it is today whenever your game frame rate extends outside the VRR window. So, for our 34UM67 monitor example, if your game is capable of rendering at a frame rate of 85 FPS then you will either see tearing on your screen (if you have VSync disabled) or you will get a static frame rate of 75 FPS, matching the top refresh rate of the panel itself. If your game is rendering at 40 FPS, lower than the minimum of the VRR window, then you will again see either tearing (with VSync off) or the potential for stutter and hitching (with VSync on).
But what happens with this FreeSync monitor and a theoretical G-Sync monitor below the window? AMD’s implementation means that you get the option of disabling or enabling VSync. For the 34UM67, as soon as your game frame rate drops under 48 FPS you will either see tearing on your screen or you will begin to see hints of stutter and judder as the typical (and previously mentioned) VSync concerns again crop up. At lower frame rates (below the window) these artifacts will actually impact your gaming experience much more dramatically than at higher frame rates (above the window).
G-Sync treats this “below the window” scenario very differently. Rather than reverting to VSync on or off, the module in the G-Sync display is responsible for auto-refreshing the screen if the frame rate dips below the minimum refresh of the panel, which would otherwise be affected by flicker. So, in a 30-144 Hz G-Sync monitor, we have measured that when the frame rate actually gets to 29 FPS, the display is refreshing at 58 Hz, each frame being “drawn” one extra time to avoid pixel flicker while still maintaining tear-free and stutter-free animation. If the frame rate dips to 25 FPS, then the screen draws at 50 Hz. If the frame rate drops to something more extreme like 14 FPS, we actually see the module quadruple-drawing the frame, taking the refresh rate back to 56 Hz. It’s a clever trick that keeps the VRR goals intact and prevents a degradation of the gaming experience. But this method requires a local frame buffer and logic on the display controller to work - hence the current implementation in a G-Sync module.
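The redraw behavior we measured can be modeled with a simple rule. To be clear, this is a sketch inferred from our scope measurements, not NVIDIA's actual algorithm, and the ~48 Hz comfort floor is our assumption: multiply the frame rate by the smallest integer that lifts the effective refresh back above that floor without exceeding the panel maximum.

```python
def gsync_refresh(fps, panel_min=30, panel_max=144, target_floor=48):
    """Model of below-window G-Sync behavior: redraw each frame enough
    times to keep the effective refresh rate comfortably high.
    target_floor is an assumption inferred from our measurements."""
    if fps >= panel_min:
        return fps  # inside the VRR window: refresh matches render rate
    mult = 1
    while fps * mult < target_floor and fps * (mult + 1) <= panel_max:
        mult += 1   # draw each frame one more time
    return fps * mult

# Matches the measurements described above:
print(gsync_refresh(29))  # 58 Hz (doubled)
print(gsync_refresh(25))  # 50 Hz (doubled)
print(gsync_refresh(14))  # 56 Hz (quadrupled)
```

The important property is that the panel never has to hold a frame longer than its minimum refresh interval allows, so pixel flicker never appears, yet every new frame from the GPU can still be shown the moment it is ready.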
As you can see, the topic is complicated. So Allyn and I (and an aging analog oscilloscope) decided to take it upon ourselves to try and understand and teach the implementation differences with the help of some science. The video below is where the heart of this story is focused, though I have some visual aids embedded after it.
Still not clear on what this means for frame rates and refresh rates on current FreeSync and G-Sync monitors? Maybe this will help.
Our first DX12 Performance Results
Late last week, Microsoft approached me to see if I would be interested in working with them and with Futuremark on the release of the new 3DMark API Overhead Feature Test. Of course I jumped at the chance, with DirectX 12 being one of the hottest discussion topics among gamers, PC enthusiasts and developers in recent history. Microsoft set us up with the latest iteration of 3DMark and the latest DX12-ready drivers from AMD, NVIDIA and Intel. From there, off we went.
First we need to discuss exactly what the 3DMark API Overhead Feature Test is (and also what it is not). The feature test will be a part of the next revision of 3DMark, which will likely ship in time with the full Windows 10 release. Futuremark claims that it is the "world's first independent" test that allows you to compare the performance of three different APIs: DX12, DX11 and even Mantle.
It was almost one year ago that Microsoft officially unveiled the plans for DirectX 12: a move to a more efficient API that can better utilize the CPU and platform capabilities of future, and most importantly current, systems. Josh wrote up a solid editorial on what we believe DX12 means for the future of gaming, and in particular for PC gaming, that you should check out if you want more background on the direction DX12 has set.
One of DX12's keys to becoming more efficient is the ability for developers to get closer to the metal - a phrase indicating that game and engine coders can access more of the system's power (CPU and GPU) without having their hands held by the API itself. The most direct benefit of this, as we saw with AMD's Mantle implementation over the past couple of years, is an increase in the number of draw calls that a given hardware system can sustain in a game engine.
Introduction, Specifications and Packaging
OCZ has been on a fairly steady release track since their acquisition by Toshiba. Having previously pared down their product lines, taking a minimalist approach, the other half of that cycle has taken place with releases like the OCZ AMD Radeon R7. Today we see another addition to OCZ's lineup in the form of a newer Vector - the Vector 180 Series:
Today we will run all three available capacities (240GB, 480GB, and 960GB) through our standard round of testing. I've thrown in an R7 as a point of comparison, as well as a handful of the competition.
Here are the specs from OCZ's slide presentation, included here as it gives a good spec comparison across OCZ's SATA product range.
Standard packaging here. 3.5" adapter bracket and Acronis 2013 cloning software product key included.
The perfect laptop; it is every manufacturer’s goal. Obviously no one has gotten there yet (or we would have all stopped writing reviews of them). At CES this past January, we got our first glimpse of a new flagship Ultrabook from Dell: the XPS 13. It got immediate attention for some of the physical characteristics it included, like an ultra-thin bezel and a 13-in screen in the body of a typical 11-in laptop, all while being built in a sleek thin and light design. It’s not a gaming machine, despite what you might remember from the XPS line, but the Intel Core-series Broadwell-U processor keeps performance speedy in standard computing tasks.
As a frequent traveler who tends to err on the side of thin and light designs, as opposed to high performance notebooks with discrete graphics, I find the Dell XPS 13 immediately compelling on a personal level as well. I have long been known as a fan of what Lenovo builds for this space, trusting my work machine requirements to the ThinkPad line for years and years. Dell’s new XPS 13 is a strong contender to take away that top spot for me, and perhaps force me down the path of an upgrade of my own. So you might consider this review my personal thesis on the viability of said change.
The Dell XPS 13 Specifications
First, make sure as you hunt around the web for information on the XPS 13 that you are focusing on the new 2015 model. Much like we see from Apple, Dell reuses model names and that can cause confusion unless you know what specifications to look for or exactly what sub-model you need. Trust me, the new XPS 13 is much better than anything that existed before.
Introduction and Features
EVGA has just announced the arrival of two new GS power supplies in their popular SuperNOVA line, the 550GS and 650GS. Both power supplies are 80 Plus Gold certified and feature all modular cables, high-quality Japanese brand capacitors, and a super quiet 120mm cooling fan (with the ability to operate in silent, fan-less mode at low power levels). The 550GS and 650GS are housed in a compact chassis (150mm deep) and are backed by a 5-year warranty. These new GS units are manufactured by Seasonic and will start selling for $89.99 (550GS) and $99.99 (650GS) this spring.
EVGA was founded in 1999 with headquarters in Brea, California. They continue to specialize in producing NVIDIA based graphics adapters and Intel based motherboards and keep expanding their PC power supply product line, which currently includes twenty-two models ranging from the high-end 1,600W SuperNOVA T2 to the budget minded EVGA 400W power supply.
In this review we will be taking a detailed look at both the EVGA SuperNOVA 550GS and 650GS power supplies. It’s nice when we receive two slightly different units in the same product series to look for consistency during testing.
Here is what EVGA has to say about the new SuperNOVA GS Gold PSUs: “The EVGA GS power supply lineup has arrived. These power supplies offer superior performance, at extremely low noise levels making these the perfect power supplies for a low noise environment. These power supplies also feature ECO mode meaning that the fan will run only when necessary, further reducing noise level and power consumption.
The EVGA SuperNOVA 550GS/650GS power supply raises the bar with 550W/650W of continuous power delivery and 90% (115VAC) / 92% (220VAC-240VAC) efficiency. A fully modular design with compact dimensions reduces case clutter and 100% Japanese capacitors ensures that only the absolute best components are used. This gives you the best stability, reliability, overclockability and unparalleled control. The EVGA SuperNOVA 550GS/650GS is the ultimate tool to eliminate all system bottlenecks and achieve unrivaled performance."
EVGA SuperNOVA 550 GS and 650 GS Gold PSU Key Features:
• Fully modular cables to reduce clutter and improve airflow
• 80PLUS Gold certified, with up to 90% (115VAC) / 92% (240VAC) efficiency
• Tight voltage regulation (2%), stable power with low AC ripple and noise
• Highest quality Japanese brand capacitors ensure long-term reliability
• Quiet 120mm Teflon Nano-Steel bearing cooling fan
• ECO Intelligent Thermal Control allows silent, fan-less operation at low power
• NVIDIA SLI & AMD Crossfire Ready
• Compliance with ErP Lot 6 2013 Requirement
• Active Power Factor correction (0.99) with Universal AC input
• Complete Protections: OVP, UVP, OPP, and SCP
• Compact chassis only 150mm (5.9”) deep
• 5-Year warranty
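The efficiency figures in the list above translate directly into wall draw: a unit delivering its full DC load at 90% efficiency pulls load/0.90 from the outlet, with the remainder dissipated as heat inside the PSU. A quick sketch (the full-load figures are just illustrative):

```python
def wall_draw(dc_load_w, efficiency):
    """AC power drawn from the outlet for a given DC load."""
    return dc_load_w / efficiency

# Both GS units at full load and their rated 90% (115VAC) efficiency:
for load in (550, 650):
    ac = wall_draw(load, 0.90)
    print(f"{load} W DC @ 90% -> {ac:.0f} W AC, {ac - load:.0f} W lost as heat")
```

This is also why higher efficiency lets the ECO mode fan stay off longer at light loads: less waste heat means less cooling required.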
Introduction and First Impressions
The Fortress FT05 is the fifth iteration of SilverStone's Fortress series of enclosures and, like the latest Raven case, it omits 5.25" bays entirely to reduce its overall size. We've seen this before, as the FT03 completely removed optical support, but this enclosure is related far more closely to the current Raven than to any of its predecessors.
Introduction: The Heart of a Raven
If you're familiar with SilverStone's product lineup you'll know about the Fortress and Raven enclosures which both currently feature an unusual 90° motherboard orientation. This layout places I/O on the top of the case, and helps expel warm air straight up. The Fortress was originally a more conventional design with a standard motherboard layout, but SilverStone switched this to mirror the Raven series with the second version, the FT02. However, just as the Raven series diverged from the original design language and layout of the RV01 with later versions, the Fortress series has undergone some radical changes since its introduction. With this fifth version of the Fortress SilverStone has converged the two enclosure lines, and the FT05 is essentially a more businesslike version of the Raven RV05 - though the design's more conventional exterior also contains noise-dampening material which helps to further differentiate the two enclosures.
Just as the current Raven owes much of its design to an earlier version (in that case the RV01), this new Fortress is a return to the design of the FT02. That earlier Fortress was a large (and quite expensive) case that combined great expandability with excellent cooling, taking the RV01's 90° layout and opening up the interior into an expansive, easy-to-manage space. A considerable amount of the second gen's interior was devoted to storage, and the front of the case was dominated by 5.25" drive bays.
The second-generation Fortress FT02 interior
What is FreeSync?
FreeSync: what began as merely a term for AMD’s plans to counter NVIDIA’s launch of G-Sync (and a mocking play on NVIDIA’s trade name) has finally come to fruition, keeping the name - and the attitude. As we have discussed, AMD’s Mantle API was crucial to pushing the industry in the necessary direction of lower level APIs, and NVIDIA’s G-Sync deserves the same credit for recognizing and imparting the necessity of a move to variable refresh display technology. Variable refresh displays can fundamentally change the way PC gaming looks and feels when they are built correctly and implemented with care, and we have seen that time and time again with many different G-Sync enabled monitors in our offices. It might finally be time to make the same claims about FreeSync.
But what exactly is FreeSync? AMD has been discussing it since CES in early 2014, claiming that they would bypass the idea of a custom module that a monitor needs in order to support VRR, and instead go the route of open standards using a modification to DisplayPort 1.2a from VESA. FreeSync is based on Adaptive-Sync, an optional portion of the DP standard that enables a variable refresh rate by extending the vBlank timings of a display, and it also provides a way of updating EDID (display ID information) to communicate these capabilities to the graphics card. FreeSync itself is simply the AMD brand for this implementation, combining monitors with correctly implemented drivers and GPUs that support the variable refresh technology.
A set of three new FreeSync monitors from Acer, LG and BenQ.
Fundamentally, FreeSync works in a very similar fashion to G-Sync, utilizing the idea of the vBlank timings of a monitor to change how and when it updates the screen. The vBlank signal is what tells the monitor to begin drawing the next frame, representing the end of the current data set and marking the beginning of a new one. By varying the length of time this vBlank signal is set to, you can force the monitor to wait any amount of time necessary, allowing the GPU to end the vBlank instance exactly when a new frame is done drawing. The result is a variable refresh rate monitor, one that is in tune with the GPU render rate, rather than opposed to it. Why is that important? I wrote in great detail about this previously, and it still applies in this case:
The idea of G-Sync (and FreeSync) is pretty easy to understand, though the implementation method can get a bit more hairy. G-Sync (and FreeSync) introduces a variable refresh rate to a monitor, allowing the display to refresh at a wide range of rates rather than at fixed intervals. More importantly, rather than the monitor dictating what rate this refresh occurs at, in a properly configured G-Sync (and FreeSync) setup the graphics card now tells the monitor when to refresh. This allows a monitor to match its refresh rate to the draw rate of the game being played (frames per second), and that simple change drastically improves the gaming experience for several reasons.
Gamers today are likely to be very familiar with V-Sync, short for vertical sync, which is an option in your graphics card’s control panel and in your game options menu. When enabled, it forces the monitor to draw a new image on the screen at a fixed interval. In theory this would work well, and the image would be presented to the gamer without artifacts. The problem is that games played and rendered in real time rarely hold a specific frame rate. With only a couple of exceptions, game frame rates fluctuate based on the activity happening on the screen: a rush of enemies, a changed camera angle, an explosion or falling building. Instantaneous frame rates can vary drastically, from 30, to 60, to 90 FPS, and V-Sync forces the displayed rate to set fractions of the monitor's refresh rate, which causes problems.
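Those "set fractions" fall out of a little arithmetic: with V-Sync on, a frame that misses a refresh deadline waits for the next one, so each frame occupies a whole number of refresh intervals and the displayed rate snaps to refresh/n. A minimal sketch of that quantization:

```python
import math

def vsync_displayed_fps(render_fps, refresh_hz=60):
    """With V-Sync on, each frame is held for a whole number of refresh
    intervals, so the displayed rate snaps down to refresh_hz / n."""
    frame_time = 1.0 / render_fps
    refresh_interval = 1.0 / refresh_hz
    # number of refresh intervals each frame occupies (at least one)
    intervals = math.ceil(frame_time / refresh_interval - 1e-9)
    return refresh_hz / intervals

print(vsync_displayed_fps(50))  # 30.0 -> a 50 FPS game displays at 30 FPS
print(vsync_displayed_fps(90))  # 60.0 -> capped at the refresh rate
print(vsync_displayed_fps(25))  # 20.0
```

On a 60 Hz panel the only steady options are 60, 30, 20, 15 FPS and so on, which is exactly why a game hovering just under 60 FPS feels like it suddenly dropped to 30.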
With the release of the GeForce GTX 980 back in September of 2014, NVIDIA took the lead in performance with single GPU graphics cards. The GTX 980 and GTX 970 were both impressive options. The GTX 970 offered better performance than the R9 290 as did the GTX 980 compared to the R9 290X; on top of that, both did so while running at lower power consumption and while including new features like DX12 feature level support, HDMI 2.0 and MFAA (multi-frame antialiasing). Because of those factors, the GTX 980 and GTX 970 were fantastic sellers, helping to push NVIDIA’s market share over 75% as of the 4th quarter of 2014.
But in the back of our minds, and in the minds of many NVIDIA fans, we knew that the company had another GPU it was holding on to: the bigger, badder version of Maxwell. The only question was WHEN the company would release it and sell us a new flagship GeForce card. In most instances, this decision is based on the competitive landscape, such as when AMD might finally update its Radeon R9 290X Hawaii family of products with the rumored R9 390X. Perhaps NVIDIA is tired of waiting, or maybe the strategy is to launch before the Fiji GPUs make their debut. Either way, NVIDIA officially took the wraps off of the new GeForce GTX TITAN X at the Game Developers Conference two weeks ago.
At the session hosted by Epic Games' Tim Sweeney, NVIDIA CEO Jen-Hsun Huang arrived just as Tim lamented the need for more GPU horsepower for Epic's UE4 content. In his hands he had the first TITAN X, and he revealed only a couple of specifications: the card would have 12GB of memory and it would be based on a GPU with 8 billion transistors.
Since that day, you have likely seen picture after picture, rumor after rumor, about specifications, pricing and performance. Wait no longer: the GeForce GTX TITAN X is here. With a $999 price tag and a GPU with 3072 CUDA cores, we clearly have a new king of the court.
Introduction and First Impressions
The Lian Li PC-Q33 is a mini-ITX enclosure with a cube-like appearance and a hinged construction that makes it easy to access the components within.
When a builder is contemplating a mini-ITX system, the primary driver is going to be size. It's incredible that we've reached the point where we can have a powerful single-GPU system with minimal (if any) tradeoffs from the tiny mITX form factor, but the components need to be housed in an appropriately small enclosure or the entire purpose is defeated. However, working within small enclosures is often more difficult, unless the enclosure has been specifically designed to account for this. Certainly no slouch in the design department, Lian Li is no stranger to small, lightweight mini-ITX designs like this one. The NCASE M1 (a personal favorite) was manufactured by the company, after all, and in some ways the PC-Q33 is reminiscent of that design, in build quality and materials if nothing else. The Q33 features aluminum construction and is very light, and while compact, the design of the enclosure allows for effortless component installation. The secret? A hinged design that allows the front of the enclosure to swing down, providing full access to the interior.
This approach to accessibility with a small enclosure is a welcome one, especially considering the price of the PC-Q33, which retails for $95 on Newegg and can be found for around $105 on Amazon as well. This is still a high cost for many considering a small build, and it enters the premium price range for an enclosure, but remember that the Q33 features aluminum construction, which typically carries a considerably higher cost than steel and plastic. Of course, if the case is frustrating to use or has poor thermals, then the materials used are meaningless, so in this review we'll look at the build process and thermal results with the Q33 to see if it's a good value. My initial impression is that the price is actually low, but that's coming from someone who looks at a lot of cases and develops a familiarity with the average retail prices in each category.
Introduction, Specs and Packaging
We're getting back into USB device roundup testing. To kick it off, Patriot passed along a couple of USB samples for review. First up is the Supersonic Phoenix 256GB:
- Read speed: Up to 260MB/s
- Write speed: Up to 170MB/s
- Compact and lightweight
- Stylish 3D design
- USB Powered
- SuperSpeed USB 3.0
- Compatible with Windows 8, Windows 7, Windows Vista, Windows XP, Windows 2000, Windows ME, Linux 2.4 and later, Mac OS 9, OS X and later
Next up is their Supersonic Rage 2:
- Up to 400MB/s Read; Up to 300MB/s Write
- Durable design extends the life of your drive
- Rubber coated housing protects from drops, spills, daily abuse
- Retractable design protects USB connector when drive not in use
- LED Light Indicator
- Compatible with Windows® 8, Windows® 8.1, Windows® 7, Windows Vista®, Windows XP®, Windows 2000®, Windows® ME, Linux 2.4 and later, Mac® OS 9, OS X and later
The Phoenix comes well packaged with a necessary USB 3.0 cable:
The Rage 2 comes in very simple packaging:
Introduction and Technical Specifications
Courtesy of ASUS
The X99-A is the base-level board in ASUS' Intel X99 line of motherboards. Don't let the term "base level offering" throw you off, though; ASUS put their best foot forward in designing this beauty. The board features full support for all Intel LGA2011-3 based processors paired with DDR4 memory operating in up to a quad-channel configuration. Priced at a competitive $274.99, the X99-A gives the more feature-packed (and vastly more expensive) boards a run for their money.
Courtesy of ASUS
Courtesy of ASUS
Just because the X99-A motherboard is designed to be the "entry-level" model of ASUS' X99 product line does not mean that they skimped on its design or features. The X99-A features the enhanced OC Socket and an 8+4 phase digital power system similar to that featured on its more costly siblings, centered around the Extreme Engine Digi+ IV solution. Extreme Engine Digi+ IV combines ASUS' custom designed Digi+ EPU chipset, IR (International Rectifier) sourced MOSFETs, high-quality chokes, and 10k Black Metallic capacitors for unrivaled power delivery capabilities. The board is further augmented by the integration of ASUS' Crystal Sound 2 audio subsystem for superior audio reproduction.
Introduction and Specifications
Had you asked me just a few years ago whether 6-inch phones would be not only a viable option but a dominant force in the mobile computing market, I would have likely rolled my eyes. At that time phones were small, tablets were big, and phablets were laughed at. Today, no one is laughing at the Galaxy Note 4, the latest iteration in Samsung's created space of larger-than-you-probably-thought-you-wanted smartphones. Nearly all consumers are amazed by the size of the screen and the real estate this class of phone provides, but some are instantly put off by the way the phone feels in the hand: it can come off as foreign, cumbersome, and unusable.
In my time with the new Galaxy Note 4, my first extended-use experience with a phone of this magnitude, I have come to see the many positive traits that a larger phone can offer. There are some trade-offs of course, including the pocket/purse viability debate. One thing beyond question is that a large phone means a big screen, one that can display a large amount of data, whether that be on a website or in a note-taking application. The extra screen real estate can instantly improve your productivity. To that end, Samsung also provides a multitasking framework that lets you run multiple programs in a side-by-side view, similar to what the original version of Windows 8 did. It might seem unnecessary for an Android device, but as soon as you find the situation where you need it, going back to a device without it can feel archaic.
A larger phone also means that there is more room for faster hardware, a larger camera sensor, and a bigger battery. Samsung even includes an active stylus called the S-Pen in the body of the device – something that few other modern tablets/phablets/phones feature.