
Remember when competition wasn't a bad word?

Subject: Editorial | October 2, 2015 - 12:41 PM |
Tagged: google, chromecast, AT&T, apple tv, amd, amazon

There is more discouraging news out of AMD, as another 5% of their roughly 10,000-strong workforce will be let go by the end of 2016.  That move will hurt their bottom line before the end of this year, with $42 million in severance, benefit payouts and other restructuring costs, but should save around $60-70 million by the end of next year.  This comes on top of the 8% cut to the workforce earlier this year and shows just how deep AMD needs to cut to stay alive; unfortunately, reducing costs is never as effective as raising revenue.  Before you laugh, point fingers or otherwise disparage AMD, consider for a moment a world in which Intel has absolutely no competition selling high-powered desktop and laptop parts.  Do you really think the already slow product refreshes would speed up, or that prices would remain the same?

Consider the case of AT&T, which has claimed numerous times that it provides its customers with the best broadband service it is capable of, at the lowest price it can sustain.  Yet if you live in a city blessed with Google Fiber, AT&T somehow manages to charge $40/month less than it does in a city whose only supposed competition comes from Comcast or Time Warner Cable.  Interesting how the presence of Google in a market has an effect that the other two supposed competitors do not.

There is of course another way to deal with the competition, and both Amazon and Apple have that one down pat.  Apple removed the iFixit app, which showed you the insides of your phone and had the temerity to actually show you possible ways to fix hardware issues yourself.  Today Amazon started kicking both Apple TV and Chromecast devices off of its online store: as of today no new items can be added to the virtual inventory, and as of the 29th of this month anything not sold will disappear.  Apparently not enough people are choosing Amazon's Prime Video streaming, and so instead of making the service compatible with Apple's or Google's products, Amazon has opted to attempt to prevent, or at least hinder, the sale of those products.

The topics of competition, liquidity and other market forces are far too complex to be dealt with in a short post such as this, but it is worth asking yourself: do you as a customer feel like competition is still working in your favour?


"AMD has unveiled a belt-tightening plan that the struggling chipmaker hopes will get its finances back on track to profitability."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

NVIDIA Announces GeForce NOW Streaming Gaming Service

Subject: General Tech | September 30, 2015 - 09:00 AM |

In a continued evolution of the streaming gaming product previously known as GRID, NVIDIA is taking the wraps off of the final, consumer-ready version of the technology, now called GeForce NOW. This streaming gaming service brings games from the cloud to NVIDIA SHIELD devices at up to 1920x1080 resolution and 60 FPS for fluid gameplay. This is an announcement we have been expecting for a very long time, with NVIDIA having teased GeForce NOW in the form of the GRID private and public betas.


GeForce NOW, which shares a similar goal to services like PlayStation Now and OnLive, plans to stand out through a few key points.

  1. 1080p 60 FPS Support – Supporting higher resolutions and frame rates than any other service, GeForce NOW could deliver a better streaming gaming experience than anything else on the market.
  2. Affordability – At a price of $7.99 per month, NVIDIA believes the combination of included free games and purchase-and-play titles offers a great package for a minimal monthly cost.
  3. Speed of Access – NVIDIA claims that GeForce NOW can start up new games as much as 2x faster than PlayStation Now, with titles like The Witcher 3 loading up and streaming in as little as 30 seconds.
  4. Global – GeForce NOW will be available in North America, the European Union, Western Europe, Western Russia, and Japan.


Before we talk about the games list, let’s first discuss some of the technical requirements for GeForce NOW. The first, and most important, requirement is a SHIELD device. GeForce NOW will only work with the SHIELD Android TV device or SHIELD Tablet. That will definitely limit the audience for the streaming service, and I am very curious if and when NVIDIA will decide to open this technology and capability to general PC users or other Android/Apple devices. Part of the SHIELD requirement is definitely to promote its own brand, but it might also help gate access to GeForce NOW as the technology ramps up in capacity, etc.

Other than the host device, you’ll also need a speedy broadband connection. The minimum requirement is 12 Mbps, though you will need 20 Mbps of downstream bandwidth for 720p60 support and 50 Mbps for 1080p60. In terms of latency, a ping time of 60 ms to the nearest NVIDIA server location is required, and 40 ms is recommended for the best experience.
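As a rough illustration of how those thresholds combine, here is a minimal sketch; the numbers are the requirements quoted above, and the function name is purely illustrative:

```python
# Minimal sketch of the GeForce NOW connection tiers described above.
# Thresholds: 12 Mbps minimum, 20 Mbps for 720p60, 50 Mbps for 1080p60,
# 60 ms ping required, 40 ms recommended.

def geforce_now_tier(downstream_mbps: float, ping_ms: float) -> str:
    if downstream_mbps < 12 or ping_ms > 60:
        return "below minimum requirements"
    if downstream_mbps >= 50:
        tier = "1080p60"
    elif downstream_mbps >= 20:
        tier = "720p60"
    else:
        tier = "baseline streaming"
    if ping_ms > 40:
        tier += " (above the recommended 40 ms ping)"
    return tier

print(geforce_now_tier(55, 35))  # 1080p60
print(geforce_now_tier(25, 50))  # 720p60 (above the recommended 40 ms ping)
```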

All the GeForce NOW servers are based on NVIDIA Kepler GPUs, which is what enables the platform to offer up impressive resolutions and image quality settings for a streaming service. Bandwidth and latency are still a concern, of course, but we’ll touch on that aspect of the service when we have more time with it this week or next.


Finally, let’s talk about the game library. There are roughly 60 games in the included library, all of which can be streamed without limit as part of your $7.99 monthly membership fee. NVIDIA says more games will be added as the service continues.

Continue reading our overview of the new NVIDIA GeForce NOW game streaming service!!

Source: NVIDIA
Manufacturer: NVIDIA

GPU Enthusiasts Are Throwing a FET

NVIDIA is rumored to launch Pascal in early (~April-ish) 2016, although some are skeptical that it will even appear before the summer. The design was finalized months ago, and unconfirmed shipping information claims that chips are being stockpiled, which is typical when preparing to launch a product. It is expected to compete against AMD's rumored Arctic Islands architecture, which will, according to its also rumored numbers, be very similar to Pascal.

This architecture is a big one for several reasons.


Image Credit: WCCFTech

First, it will jump two full process nodes. Current desktop GPUs are manufactured at 28nm, which was first introduced with the GeForce GTX 680 all the way back in early 2012, but Pascal will be manufactured on TSMC's 16nm FinFET+ technology. Smaller features have several advantages, but a huge one for GPUs is the ability to fit more complex circuitry in the same die area. This means that you can include more copies of elements, such as shader cores, and do more in fixed-function hardware, like video encode and decode.

That said, we got a lot more life out of 28nm than we really should have. Chips like GM200 and Fiji are huge, relatively power-hungry, and complex, which is a terrible combination to manufacture when yields are low. I asked Josh Walrath, who is our go-to for analysis of fab processes, and he believes that FinFET+ is probably even more complicated today than 28nm was in the 2012 timeframe, when it launched for GPUs.

It's two full steps forward from where we started, but we've been tiptoeing since then.


Image Credit: WCCFTech

Second, Pascal will introduce HBM 2.0 to NVIDIA hardware. HBM 1.0 was introduced with AMD's Radeon Fury X, and it helped in numerous ways -- from smaller card size to a triple-digit percentage increase in memory bandwidth. The 980 Ti can talk to its memory at about 300GB/s, while Pascal is rumored to push that to 1TB/s. Capacity won't be sacrificed, either. The top-end card is expected to contain 16GB of global memory, which is twice what any console has. This means less streaming, higher resolution textures, and probably even left-over scratch space for the GPU to generate content in with compute shaders. Also, according to AMD, HBM is an easier architecture to communicate with than GDDR, which should mean a savings in die space that could be used for other things.
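For a rough sense of where those bandwidth numbers come from, here is a back-of-the-envelope sketch; the GDDR5 figures are the GTX 980 Ti's 384-bit bus at 7 Gbps, while the HBM2 side assumes four 1024-bit stacks at 2 Gbps per pin, which is an assumption rather than a confirmed Pascal spec:

```python
# Back-of-the-envelope memory bandwidth comparison.
# bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gbps) / 8

gddr5_gbs = 384 * 7 / 8        # GTX 980 Ti: 384-bit bus, 7 Gbps GDDR5 -> 336 GB/s
hbm2_gbs = 4 * 1024 * 2 / 8    # assumed 4 HBM2 stacks, 1024-bit each, 2 Gbps/pin -> 1024 GB/s

increase = (hbm2_gbs / gddr5_gbs - 1) * 100
print(f"{gddr5_gbs:.0f} GB/s -> {hbm2_gbs:.0f} GB/s (+{increase:.0f}%)")  # roughly a 205% jump
```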

Third, the architecture includes native support for three levels of floating point precision. Maxwell, due to how limited 28nm was, saved on complexity by reducing 64-bit IEEE 754 floating point performance to 1/32nd of the 32-bit rate, because FP64 values are rarely used in video games. This saved transistors, but it was a huge, order-of-magnitude step back from the 1/3rd ratio found on the Kepler-based GK110. While it probably won't be back to the 1/2 ratio that was found in Fermi, Pascal should be much better suited for GPU compute.


Image Credit: WCCFTech

Mixed precision could help video games too, though. Remember how I said it supports three levels? The third one is 16-bit, which is half the width of the format commonly used in video games. Sometimes, that is sufficient. If so, Pascal is said to do these calculations at twice the rate of 32-bit. We'll need to see whether enough games (and other applications) are willing to drop down in precision to justify the die space that these dedicated circuits require, but it should double the performance of anything that does.
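To put those ratios side by side, here is a hedged sketch of theoretical throughput at each precision; the 2x FP16 and 1/32 versus 1/3 FP64 ratios come from the discussion above, while the FP32 baseline is just a placeholder, not a Pascal specification:

```python
# Illustrative throughput at the three floating point precisions discussed above.
# The FP32 baseline is a hypothetical placeholder, not a confirmed Pascal number.

fp32_tflops = 10.0                   # placeholder FP32 rate

fp16_tflops = fp32_tflops * 2        # rumored 2x rate for half precision
fp64_at_1_32 = fp32_tflops / 32      # Maxwell-style 1/32 ratio
fp64_at_1_3 = fp32_tflops / 3        # GK110-style 1/3 ratio

print(f"FP16: {fp16_tflops:.1f} TFLOPS, FP32: {fp32_tflops:.1f} TFLOPS")
print(f"FP64: {fp64_at_1_32:.2f} TFLOPS at 1/32 vs {fp64_at_1_3:.2f} TFLOPS at 1/3")
```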

So basically, this generation should provide the massive jump in performance that enthusiasts have been waiting for. Memory bandwidth and the amount of circuitry that can be printed into the die are two of the major bottlenecks for most modern games and GPU-accelerated software. We'll need to wait for benchmarks to see how the theoretical maps to the practical, but it's a good sign.

Apple Dual Sources A9 SOCs with TSMC and Samsung: Some Extra Thoughts

Subject: Processors | September 30, 2015 - 09:55 PM |
Tagged: TSMC, Samsung, FinFET, apple, A9, 16 nm, 14 nm

So the other day the nice folks over at Chipworks got word that Apple was in fact sourcing their A9 SOC at both TSMC and Samsung.  This is really interesting news on multiple fronts.  From the information gleaned the two parts are the APL0898 (Samsung fabbed) and the APL1022 (TSMC).

These process technologies have been in the news quite a bit.  As we well know, it has been hard for any foundry not named Intel to get below 28 nm in an effective way.  Even Intel has had some pretty hefty issues with its march to sub-32 nm parts, but it has the resources and financial ability to push through a lot of these hurdles.  One of the bigger problems that affected the foundries was the decision to push FinFETs back further than initially planned: the idea was to hit 22/20 nm with planar transistors and save FinFET technology for 16/14 nm.


The Chipworks graphic that explains the differences between Samsung's and TSMC's A9 products.

There were many reasons why this did not work effectively for the majority of products that the foundries were looking to service with a 22/20 nm planar process.  Yes, there were many parts fabricated on these nodes, but none of them were the higher power, higher performance parts that typically garner headlines.  No CPUs, no GPUs, and only a handful of lower power SOCs (most notably Apple's A8, which was around 89 mm² and consumed 5 to 10 watts at maximum).  The node just did not scale power very effectively.  It provided a smaller die size, but it did not significantly increase power efficiency and switching performance as compared to 28 nm high performance nodes.

The information Chipworks has provided also verifies that Samsung's 14 nm FF process is more size optimized than TSMC's 16 nm FF.  There was originally some talk about both nodes being very similar in overall transistor size and density, but Samsung has a slightly tighter design.  Neither of them is smaller than Intel's latest 14 nm process, which is going into its second generation.  Intel still has a significant performance and size advantage over everyone else in the field.  Going back to size, we see the Samsung chip is around 96 mm² while the TSMC chip is 104.5 mm².  This is not a huge difference, but it does show that the Samsung process is a little tighter and can squeeze more transistors per square mm than TSMC's.
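Assuming the two A9 dies implement essentially the same design, the area figures alone give a rough sense of the density gap:

```python
# Rough density comparison of the two A9 dies, assuming both carry
# essentially the same transistor count (reasonable, but not confirmed).

samsung_mm2 = 96.0    # APL0898, Samsung 14 nm FF
tsmc_mm2 = 104.5      # APL1022, TSMC 16 nm FF

print(f"TSMC die is about {(tsmc_mm2 / samsung_mm2 - 1) * 100:.1f}% larger")           # ~8.9%
print(f"Samsung packs roughly {tsmc_mm2 / samsung_mm2:.2f}x the transistors per mm²")  # ~1.09x
```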

In terms of actual power consumption and clock scaling we have nothing to go on here.  Both chips are represented in the 6S and 6S+.  Testing so far has not shown significant differences between the two SOCs.  In theory one could be performing better than the other, but in reality we have not tested these chips at a low enough level to discern any major performance or power differences.  My gut feeling here is that Samsung's process is more mature and running slightly better than TSMC's, but the differences are going to be minimal at best.

The next piece of info that we can glean from this is that there just isn't enough line space for all of the chip companies who want to fabricate their parts with either Samsung or TSMC.  From a chip standpoint, a lot of work has to be done to port a design to two different process nodes.  While 14 and 16 nm are similar in overall size and in their use of FinFETs, the standard cells and design libraries from Samsung and TSMC are going to be very different.  It is not a simple thing to port over a design; a lot of work has to be done in the design stage to make a chip work with both nodes.  I can tell you that there is no way both chips are identical in layout.  This is not a "dumb port" where they just adjust the optics with the same masks and magically make the chips work right off the bat.  Each fab needs its own mask sets, its own design verification, and its own yield troubleshooting through metal layer changes.

In the end this means that there simply was not enough space at either TSMC or Samsung to handle the demand that Apple was expecting.  Because Apple has deep pockets, it contracted both TSMC and Samsung to produce two very similar, but still different, parts.  Apple also likely outbid other major chip firms and locked down much of the wafer capacity that Samsung and TSMC have available, much to their dismay.  I have no idea what is going on in the background with the likes of NVIDIA and AMD when it comes to line space for manufacturing their next generation parts.  At least for AMD, it seems that their partnership with GLOBALFOUNDRIES and its version of 14 nm FF is having a hard time taking off.  Eventually more production space will be made available and yields and bins will improve.  Apple will stop taking up so much space and we can get other products rolling off the line.  In the meantime, enjoy that cutting edge iPhone 6S/+ with the latest 14/16 nm FF chips.

Source: Chipworks

Microsoft Surface Book 2-in-1 with Skylake and NVIDIA Discrete GPU Announced

Subject: Mobile | October 6, 2015 - 02:38 PM |
Tagged: video, surface book, surface, Skylake, nvidia, microsoft, Intel, geforce

Along with the announcement of the new Surface Pro 4, Microsoft surprised many with the release of the new Surface Book 2-in-1 convertible laptop. Sharing much of the same DNA as the Surface tablet line, the Surface Book adopts a more traditional notebook design while still adding enough to the formula to produce a unique product.


The pivotal part of the design (no pun intended) is the new hinge, a "dynamic fulcrum" design that looks great and also (supposedly) will be incredibly strong. The screen / tablet attachment mechanism is called Muscle Wire and promises secure attachment as well as ease of release with a single button.

An interesting aspect of the fulcrum design is that, when closed, the Surface Book screen and keyboard do not actually touch near the hinge. Instead you have a small gap in this area. I'm curious how this will play out in real-world usage - it creates a natural angle for using the screen in its tablet form, but it may also find itself "catching" coins, pens and other things between the two sections.


The 13.5-in screen has a 3000 x 2000 resolution (3:2 aspect ratio obviously) with a 267 PPI pixel density. Just like the Surface Pro 4, it has a 10-point touch capability and uses the exclusive PixelSense display technology for improved image quality.
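Those display figures check out with a bit of simple geometry, as this quick sketch shows:

```python
# Quick sanity check of the Surface Book's quoted 267 PPI pixel density.
from math import hypot

width_px, height_px = 3000, 2000
diagonal_inches = 13.5

ppi = hypot(width_px, height_px) / diagonal_inches
print(f"{ppi:.0f} PPI")                                # ~267, matching Microsoft's figure
print(f"aspect ratio {width_px / height_px:.2f}:1")    # 1.50, i.e. 3:2
```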

While most of the hardware is included in the tablet portion of the device, the keyboard dock includes some surprises of its own. You get a set of two USB 3.0 ports, a full size SD card slot and a proprietary SurfaceConnect port for an add-on dock. But most interestingly you'll find an optional discrete GPU from NVIDIA, an as-yet-undiscovered GeForce GPU with 1GB (??) of memory. I have sent inquiries to Microsoft and NVIDIA for details on the GPU, but haven't heard back yet. We think it is a 30 watt GeForce GPU of some kind (by looking at the power adapter differences) but I'm more interested in how the GPU changes both battery life and performance.

UPDATE: Just got official word from NVIDIA on the GPU, but unfortunately it doesn't tell us much.

The new GPU is a Maxwell based GPU with GDDR5 memory. It was designed to deliver the best performance in ultra-thin form factors such as the Surface Book keyboard dock. Given its unique implementation and design in the keyboard module, it cannot be compared to a traditional 900M series GPU. Contact Microsoft for performance information.


Keyboard and touchpad performance looks to be impressive as well, with a full glass trackpad integration, backlit keyboard design and "class leading" key switch throw distance.

The Surface Book is powered by Intel Skylake processors, available in both Core i5 and Core i7 options, but does not offer Core m-based or Iris graphics options. Instead, the only integrated GPU offered is the Intel HD 520.


Microsoft promises "up to" 12 hours of battery life on the Surface Book, though that claim was made with the Core i5 / 256GB / 8GB configuration option; no discrete GPU included. 


Pricing on the Surface Book starts at $1499 but can reach as high as $2699 with the maximum performance and storage capacity options. 

Source: Microsoft

Report: AMD's Dual-GPU Fiji XT Card Might Be Coming Soon

Subject: Graphics Cards | October 5, 2015 - 02:33 AM |
Tagged: rumor, report, radeon, graphics cards, Gemini, fury x, fiji xt, dual-GPU, amd

The AMD R9 Fury X, Fury, and Nano have all been released, but a dual-GPU Fiji XT card could be on the way soon according to a new report.


Back in June at AMD's E3 event we were shown Project Quantum, AMD's concept for a powerful dual-GPU system in a very small form factor. It was speculated that the system was actually housing an unreleased dual-GPU graphics card, which would have made sense given the very small size of the system (and the mini-ITX motherboard therein). Now a report from WCCFtech is pointing to a manifest that just might describe a shipment of this new dual-GPU card, and the code-name is Gemini.


"Gemini is the code-name AMD has previously used in the past for dual GPU variants and surprisingly, the manifest also contains another phrase: ‘Tobermory’. Now this could simply be a reference to the port that the card shipped from...or it could be the actual codename of the card, with Gemini just being the class itself."

The manifest also indicates a cooler from Cooler Master, the maker of the liquid cooling solution for the Fury X. As the Fury X has had its share of criticism for pump whine issues, it would be interesting to see how a dual-GPU cooling solution fares in that department, though we could be seeing an entirely new generation of the pump as well. Of course, speculation on an unreleased product like this could be incorrect, and verifiable hard details aren't available yet. Still, if the dual-GPU card is based on a pair of full Fiji XT cores, the specs could be very impressive to say the least:

  • Core: Fiji XT x2
  • Stream Processors: 8192
  • GCN Compute Units: 128
  • ROPs: 128
  • TMUs: 512
  • Memory: 8 GB (4GB per GPU)
  • Memory Interface: 4096-bit x2
  • Memory Bandwidth: 1024 GB/s

In addition to the specifics above, the report also discussed the possibility of 17.2 TFLOPS of performance based on 2x the performance of the Fury X, which would make the Gemini product one of the most powerful single-card GPU solutions in the world. The card seems close enough to the final stage that we should expect to hear something official soon, but for now it's fun to speculate - unless of course the speculation concerns a high initial retail price, and unfortunately something at or above $1000 is quite likely. We shall see.
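That 17.2 TFLOPS figure is simply double the Fury X's theoretical single-precision rate, as this sketch shows; the 1050 MHz clock is the Fury X's stock speed, and a shipping dual-GPU card could well clock lower:

```python
# Where the rumored 17.2 TFLOPS comes from: two full Fiji XT GPUs at Fury X clocks.
# Actual Gemini clocks are unconfirmed and could be lower.

stream_processors = 4096   # per Fiji XT GPU
clock_ghz = 1.05           # Fury X reference clock
flops_per_clock = 2        # one fused multiply-add counts as 2 FLOPs

single_gpu = stream_processors * flops_per_clock * clock_ghz / 1000
print(f"{single_gpu:.1f} TFLOPS per GPU, {single_gpu * 2:.1f} TFLOPS for the pair")  # 8.6 / 17.2
```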

Source: WCCFtech

Google's Pixel C Is A Powerful Convertible Tablet Running Android 6.0

Subject: Mobile | October 2, 2015 - 04:09 PM |
Tagged: Tegra X1, tablet, pixel, nvidia, google, android 6.0, Android

During its latest keynote event, Google unveiled the Pixel C, a powerful tablet with optional keyboard that uses NVIDIA’s Tegra X1 SoC and runs the Android 6.0 “Marshmallow” operating system.

The Pixel C was designed by the team behind the Chromebook Pixel. Pixel C features an anodized aluminum body that looks (and reportedly feels) smooth with clean lines and rounded corners. The tablet itself is 7mm thick and weighs approximately one pound. The front of the Pixel C is dominated by a 10.2” display with a resolution of 2560 x 1800 (308 PPI, 500 nits brightness), wide sRGB color gamut, and 1:√2 aspect ratio (which Google likened to the size and aspect ratio of an A4 sheet of paper). A 2MP front camera sits above the display while four microphones sit along the bottom edge and a single USB Type-C port and two stereo speakers sit on the sides of the tablet. Around back, there is an 8MP rear camera and a bar of LED lights that will light up to indicate the battery charge level after double tapping it.


The keyboard is an important part of the Pixel C, and Google has given it special attention to make it part of the package. The keyboard attaches to the tablet using self-aligning magnets that are powerful enough to keep the display attached while holding it upside down and shaking it (not that you'd want to do that, mind you). It can be attached to the bottom of the tablet for storage and used like a slate or you can attach the tablet to the back of the keyboard and lift the built-in hinge to use the Pixel C in laptop mode (the hinge can hold the display at anywhere from 100 to 135-degrees). The internal keyboard battery is good for two months of use, and can be simply recharged by closing the Pixel C like a laptop and allowing it to inductively charge from the tablet portion. The keyboard is around 2mm thick and is nearly full size at 18.85mm pitch and the chiclet keys have a 1.4mm travel that is similar to that of the Chromebook Pixel. There is no track pad, but it does offer a padded palm rest which is nice to see.


Internally, the Pixel C is powered by the NVIDIA Tegra X1 SoC, 3GB of RAM, and 32GB or 64GB of storage (depending on model). The 20nm Tegra X1 consists of four ARM Cortex A57 and four Cortex A53 CPU cores paired with a 256-core Maxwell GPU. The Pixel C is a major design win for NVIDIA, and the built in GPU will be great for gaming on the go.

The Pixel C will be available in December ("in time for the holidays") for $499 for the base 32 GB model, $599 for the 64 GB model, and $149 for the keyboard.

First impressions, such as this hands-on by Engadget, seem to be very positive, describing sturdy yet sleek hardware that is comfortable to type on. While the hardware looks more than up to the task, the operating system of choice is a concern for me. Android is not the most productivity- and multitasking-friendly software. There are some versions of Android that enable multiple windows or side-by-side apps, but it has always felt rather clunky and limited in its usefulness. With that said, Computerworld's JR Raphael seems hopeful. He points out that the Pixel C is, in Batman fashion, not the hardware Android wants, but the hardware that Android needs (to move forward), and that it is primed for a future of Android that is more friendly to such productive endeavors. Development versions of Android 6.0 included support for multiple apps running simultaneously side-by-side, and while that feature will not make the initial production code cut, it does show that it is something Google is pursuing and may enable at some point. The Pixel C has an excellent aspect ratio to take advantage of app splitting, with the ability to display four windows that each keep the same aspect ratio.

I am not sure how well received the Pixel C will be by business users who have several convertible tablet options running Windows and Chrome OS. It certainly gives the iPad-and-keyboard combination a run for its money and is a premium alternative to devices like the Asus Transformers.

What do you think about the Pixel C, and in particular, it running Android?

Even if I end up being less-than-productive using it, I think I'd still want the sleek-looking hardware as a second machine, heh.

Source: Google

Android to iPhone Day 6: Battery Life and Home Screens

Subject: General Tech, Mobile | October 1, 2015 - 02:45 PM |
Tagged: iphone 6s, iphone, ios, google, apple, Android

PC Perspective’s Android to iPhone series explores the opinions, views and experiences of the site’s Editor in Chief, Ryan Shrout, as he moves from the Android smartphone ecosystem to the world of the iPhone and iOS. Because he has been entrenched in the Android smartphone market for 7+ years, the editorial series is less a review of the new iPhone 6s than an exploration of how the current smartphone market compares to each side's expectations.


It probably won’t come as a shock to the millions of iPhone users around the globe, but the more days I keep the 6s in my pocket, the more accepting I am becoming of the platform. The phone has been fast and reliable – I have yet to come across any instability or application crashes despite my incessant installation of new apps. And while I think it’s fair to say that even new Android-based phones feel snappy out of the box, the iPhone is just about a week in without me ever thinking about performance – which is exactly what you want from a device like this.

There are some features I had on my Droid Turbo that are missing from the iPhone 6s, and I wish I could add them through settings or third-party applications. I fell in love with the ability to do a double wrist rotation with the Droid as a shortcut to opening the camera. It helped me capture quite a few photos when I only had one hand free, without having to unlock the phone, find an icon, etc. The best the iPhone has is a “drag up from the bottom” motion from the lock screen, but I find myself taking several thumb swipes before successfully activating it when using only one hand. Trying to use the home button to access the lock screen, and thus the camera shortcut, is actually hindered because the Touch ID feature is TOO FAST, taking me to a home screen (that may not have the camera app icon on it) where I need to navigate around.

I have been a user of the Pebble Time since it was released earlier this year and I really enjoy the extended battery life (measured in days not hours) when compared to Android Wear devices or the Apple Watch. However, the capabilities of the Pebble Time are more limited with the iPhone 6s than they are with Android – I can no longer use voice dictation to reply to text messages or emails and the ability to reply with easy templates (yes, no, I’ll be there soon, etc.) is no longer available. Apple does not allow the same level of access to the necessary APIs as Android does and thus my Time has effectively become a read-only device.


Finally, my concern about missing widgets continues to stir within me; it is something that I think the iPhone 6s could benefit from greatly. I also don’t understand the inability to arrange the icons on the home screens in an arbitrary fashion. Apple will not let me move icons to the bottom of the page without first filling up every other spot on the screen – there can be no empty spaces!! So while my organizational style would like to have a group of three icons in the bottom right hand corner of the screen with some empty space around it, Apple doesn’t allow me to do that. If I want those icons in that location I need to fill up every empty space on the screen to do so. Very odd.

Continue reading my latest update on my Android to iPhone journey!!

Subject: General Tech
Manufacturer: Supermicro

Choosing the Right Platform

Despite what most people may think, our personal workstations here at the PC Perspective offices aren’t exactly built from cutting-edge hardware. Just as in every other production environment, we place a real premium on stability with the machines that we write, photo edit, and in this case, video edit on.

The current video editing workstation for the PC Perspective offices is quite old when you look at the generations upon generations of hardware we have reviewed in the years since it was built. In fact, it has hardly been touched since early 2011. Built around the then-$1000 Intel Core i7-990X, 24GB of DDR3, a Fermi-based NVIDIA Quadro 5000, and a single 240GB SandForce 2-based SSD, this machine has edited a lot of 1080p video for us with few problems.


However, after starting to explore the Panasonic GH4 and 4K video a few months ago, the age of this machine became quite apparent. Real-time playback of high bit rate 4K content was choppy at best, and scrubbing through the timeline was next to impossible. Transcoding to a lower resolution mezzanine file, or turning down the playback quality in Premiere Pro, worked to some extent, but made it harder to judge the visual quality we had gained by moving to 4K. It was clear that we were going to need a new workstation sooner rather than later.

The main question was what platform to build upon. My initial thought was to build around the 8-core Intel Core i7-5960X and the X99 platform. The main applications we use, Adobe Premiere Pro and its associated Media Encoder, are very multithreaded. However, going from six cores with the i7-990X to eight cores with the i7-5960X, with only a modest improvement in IPC, didn’t seem like a big enough gain, nor very future proof.


Luckily, we had a pair of Xeon E5-2680v2’s around from another testbed that had been replaced. These processors each provide 10 cores (Hyperthreading enabled for a resulting 20 threads each) at a base frequency of 2.8GHz, with the ability to boost up to 3.6GHz. By going with two of these processors in a dual CPU configuration, we will be significantly increasing our compute power and hopefully providing some degree of future proofing. Plus, we already use the slightly higher clocked Xeon E5-2690v2’s in our streaming server, so we have some experience with a very similar setup.
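As a rough sketch of the added parallelism (base clocks and core counts from the parts mentioned above; this deliberately ignores IPC differences between the architectures):

```python
# Rough parallelism comparison between the old and new workstation CPUs.
# This ignores IPC and turbo behavior, so treat it as a sketch, not a benchmark.

old_cores, old_threads, old_base_ghz = 6, 12, 3.46   # Core i7-990X
sockets, cores_per_socket, base_ghz = 2, 10, 2.8     # 2x Xeon E5-2680 v2

new_cores = sockets * cores_per_socket
new_threads = new_cores * 2                          # Hyper-Threading

print(f"threads: {old_threads} -> {new_threads}")                                      # 12 -> 40
print(f"base core-GHz: {old_cores * old_base_ghz:.1f} -> {new_cores * base_ghz:.1f}")  # ~20.8 -> 56.0
```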

Continue reading an overview of our 2015 Editing Workstation Upgrade!!

Subject: General Tech
Manufacturer: NVIDIA

Setup, Game Selection

Yesterday NVIDIA officially announced the new GeForce NOW streaming game service, the conclusion to the years-long beta and development process known as NVIDIA GRID. As I detailed in my story yesterday about the reveal, GeForce NOW is a $7.99/mo. subscription service that will offer on-demand, cloud-streamed games to NVIDIA SHIELD devices, including a library of 60 games for that $7.99/mo. fee in addition to 7 titles in the “purchase and play” category. NVIDIA claims several advantages make GeForce NOW a step above other streaming gaming services such as PlayStation Now and OnLive, including load times, resolution and frame rate, combined local PC and streaming game support, and more.


I have been able to use and play with the GeForce NOW service on our SHIELD Android TV device in the office for the last few days and I thought I would quickly go over my initial thoughts and impressions up to this point.

Setup and Availability

If you have an NVIDIA SHIELD Android TV (or a SHIELD Tablet) then the setup and getting started process couldn’t be any simpler for new users. An OS update is pushed that changes the GRID application on your home screen to GeForce NOW and you can sign in using your existing Google account on your Android device, making payment and subscription simple to manage. Once inside the application you can easily browse through the included streaming games or look through the smaller list of purchasable games and buy them if you so choose.


Playing a game is as simple as selecting a title from the grid list and hitting play.

Game Selection

Let’s talk about that game selection first. For $7.99/mo. you get access to 60 titles for unlimited streaming. I have included a full list below, originally posted in our story yesterday, for reference.

Continue reading my initial thoughts and an early review of GeForce NOW!!

Manufacturer: PC Perspective

New Components, New Approach


After 20 or so enclosure reviews over the past year and a half, and some pretty inconsistent test hardware along the way, I decided to adopt a standardized test bench for all reviews going forward. Makes sense, right? It turns out choosing the best components for a cases and cooling test system was a lot more difficult than I expected going in, as special consideration had to be given to everything from form factor to noise and heat levels.

Along with the new components I will also be changing the approach to future reviews by expanding the scope of CPU cooler testing. After some debate as to the type of CPU cooler to employ I decided that a better test of an enclosure would be to use both closed-loop liquid and air cooling for every review, and provide thermal and noise results for each. For CPU cooler reviews themselves I'll be adding a "real-world" load result to the charts to offer a more realistic scenario, running a standard desktop application (in this case a video encoder) in addition to the torture-test result using Prime95.

But what about this new build? It isn't completely done but here's a quick look at the components I ended up with so far along with the rationale for each selection.

CPU – Intel Core i5-6600K ($249, Amazon.com)


The introduction of Intel’s 6th generation Skylake processors provided the perfect excuse for an upgrade after using an AMD FX-6300 system for the last couple of enclosure reviews. After toying with the idea of the new i7-6700K, and immediately realizing it was likely overkill and (more importantly) completely unavailable for purchase at the time, I went with the more "reasonable" option in the i5. There has long been a debate as to the need for Hyper-Threading in gaming (though this may be changing with the introduction of DX12), but in any case this is still a very powerful processor, and when stressed it should produce a challenging enough thermal load to adequately test both CPU coolers and enclosures going forward.

GPU – XFX Double Dissipation Radeon R9 290X ($347, Amazon.com)


This was by far the most difficult selection. I don’t think of my own use when choosing a card for a test system like this, as it must meet a set of criteria to be a good fit for enclosure benchmarks. If I choose a card that runs very cool and with minimal noise, GPU benchmarks will be far less significant as the card won’t adequately challenge the design and thermal characteristics of the enclosure. There are certainly options that run at greater temperatures and higher noise (a reference R9 290X for example), but I didn’t want a blower-style cooler with the GPU. Why? More and more GPUs are released with some sort of large multi-fan design rather than a blower, and for enclosure testing I want to know how the case handles the extra warm air.

Noise was an important consideration, as levels from an enclosure of course vary based on the installed components. With noise measurements a GPU cooler that has very low output at idle (or zero, as some recent cooler designs permit) will allow system idle levels to fall more on case fans and airflow than a GPU that might drown them out. (This would also allow a better benchmark of CPU cooler noise - particularly with self-contained liquid coolers and audible pump noise.) And while I wanted very quiet performance at idle, at load there must be sufficient noise to measure the performance of the enclosure in this regard, though of course nothing will truly tax a design quite like a loud blower. I hope I've found a good balance here.

Continue reading our look at the cases and cooling test system build!

Samsung 840 EVO mSATA Gets Long Awaited EXT43B6Q Firmware, Fixes Read Speed Issue

Subject: Storage | October 1, 2015 - 05:42 PM |
Tagged: Samsung, firmware, 840 evo, msata

It took them a while to get it right, but Samsung did manage to fix their read degradation issue in many of their TLC equipped 840 Series SSDs. I say many because there were some models left out when firmware EXT0DB6Q was rolled out via Magician 4.6. The big exception was the mSATA variant of the 840 EVO, which was essentially the same SSD just in a more compact form. This omission was rather confusing as the previous update was applicable to both the 2.5" and mSATA form factors simultaneously.


The Magician 4.7 release notes included a bullet for Advanced Performance Optimization support on the 840 EVO mSATA model, but it took Samsung some time to push out the firmware update that enabled this feature. We know from our previous testing that the Advanced Performance Optimization feature was included with other changes that enabled reads from 'stale' data at full speed, compensating for the natural drift of the flash cell voltages that represent the stored data.


Now that the firmware has been made available (it came out early this week but was initially throttled), I was able to apply it to our 840 EVO 1TB mSATA sample without issue, and could perform the Advanced Performance Optimization and observe the expected effects. However, my sample had recently been used for some testing and did not have data old enough to show a solid improvement with the firmware applied *and before* running the Optimization. Luckily, an Overclock.net forum member was able to perform just that test on his 840 EVO 500GB mSATA model:


Kudos to that member for being keen enough to re-run his test just after the update.


It looks like the only consumer 840 TLC model left to fix is the original 840 SSD (not 840 EVO, just 840). This was the initial model launched that was pure TLC flash with no SLC TurboWrite cache capability. We hope to see this model patched in the near future. There were also some enterprise units that used the same planar 19nm TLC flash, but I fear Samsung may not be updating those as most workloads seen by those drives would constantly refresh the flash and not give it a chance to become stale and suffer from slowing read speeds. The newer and faster V-NAND equipped models (850 / 950 Series) have never been susceptible to this issue.

Source: Samsung

Thermaltake Releases Core P5 Wall-Mountable Case

Subject: Cases and Cooling | October 5, 2015 - 09:01 AM |
Tagged: wall mount, thermaltake

Personally, I would like to see at least an option for plexiglass on the perimeter. I feel like some might want a bit of protection from things like sneezes, or rogue squirt-gun blasts. The “case” is basically a plate with a clear acrylic pane in front of it. It can stand upright, be rotated horizontally, or even be screwed into a wall if you want to show off a custom liquid cooling loop or something.


Interestingly, Thermaltake is providing “3D Printing Accessory Files”. I somehow doubt that this will be the CAD files required to lasercut your own Core P5 case, but it's designed to allow makers to create their own accessories for it. As such, this sounds more like guides and schematics, but I cannot say for sure because I haven't tried it... and they're not available yet.

The Thermaltake Core P5 will be available soon for an MSRP of $169.99, although it's already at a sale price of $149.99. This could be just a pre-order discount, or a sign of its typical price point. We don't know.

Source: Thermaltake

We interrupt the Apple coverage for a look at a high end Android

Subject: Mobile | October 1, 2015 - 07:13 PM |
Tagged: nexus 6p, google, Android

The Nexus 6P is new from Google and is designed to compete directly against the new Apple devices, with an all-aluminium body and a new USB Type-C connection for charging.  One of the benefits of the new USB is charging speed, which Google claims will give you 7 hours of usage off of a 10 minute charge.  They have also added a 12.3MP camera with a sensor featuring 1.55μm pixels, though in the style of the Nokia Lumia 1520, that camera does project beyond the casing.  The 5.7in 2560x1440 AMOLED screen is covered in Gorilla Glass 4, and the phone is powered by a 2GHz Qualcomm Snapdragon 810 v2.1 octa-core processor, which may display that chip's tendency to get a little warm during use.  The Inquirer has not had a chance to fully review the Nexus 6P but you can catch their preview right here.


"THE NEXUS 6P is the first truly premium Android device from Google. Last year's Nexus 6 divided opinion with its bulky design and lacklustre features, but the firm is hoping that its successor, with the premium case and next-gen specs, will finally fill the void for those after a stock Android device."

Here are some more Mobile articles from around the web:



Source: The Inquirer

4K performance when you can spend at least $1.3K

Subject: Graphics Cards | October 6, 2015 - 02:40 PM |
Tagged: 4k, gtx titan x, fury x, GTX 980 Ti, crossfire, sli

[H]ard|OCP shows off just what you can achieve when you spend over $1000 on graphics cards and have a 4K monitor in their latest review.  In Project Cars you can expect never to see less than 40fps with everything cranked to maximum, and if you invested in Titan Xs you can even enable DS2X anti-aliasing for double the resolution before downsampling.  The Witcher 3 is a bit more challenging, and no card is up for HairWorks without a noticeable hit to performance.  Far Cry 4 still refuses to believe in CrossFire, and as far as NVIDIA performance goes, if you want to see soft shadows you are going to have to invest in a pair of Titan Xs.  Check out the full review to see what the best of the current market is capable of.


"The ultimate 4K battle is about to begin, AMD Radeon R9 Fury X CrossFire, NVIDIA GeForce GTX 980 Ti SLI, and NVIDIA GeForce GTX TITAN X SLI will compete for the best gameplay experience at 4K resolution. Find out what $1300 to $2000 worth of GPU backbone will buy you. And find out if Fiji really can 4K."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Snapdragon 820 Features Qualcomm's New X12 Modem: Fastest LTE To Date

Subject: Mobile | September 30, 2015 - 02:33 PM |
Tagged: X12 Modem, SoC, snapdragon 820, qualcomm, phones, mu-mimo, mobile, LTE, cell phones

The upcoming Snapdragon 820 is shaping up to be a formidable SoC after the disappointing response to the previous flagship, the Snapdragon 810, which ended up in far fewer devices than expected for reasons still shrouded in mystery and speculation. One of the biggest aspects of the upcoming 820 is Qualcomm’s new X12 modem, which will provide the most advanced LTE connectivity seen to date when the SoC launches. The X12 features Cat 12 LTE downlink speeds of up to 600 Mbps and Cat 13 uplink speeds of up to 150 Mbps.

LTE connectivity isn’t the only new thing here; as we see from this slide, there is also tri-band Wi-Fi supporting 2x2 MU-MIMO.


“This is the first publicly announced processor for use in mobile devices to support LTE Category 12 in the downlink and Category 13 in the uplink, providing up to 33 percent and 200 percent improvement over its predecessor’s download and upload speeds, respectively.”
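Those percentages line up with the rates of the Snapdragon 810's X10 modem (LTE Cat 9: 450 Mbps down, 50 Mbps up), assuming that is the predecessor being referenced; a quick check:

```python
# Sanity check of the quoted "33 percent and 200 percent" improvements,
# assuming the predecessor is the X10 modem at LTE Cat 9 rates.

x10_down, x10_up = 450, 50     # assumed predecessor rates, Mbps
x12_down, x12_up = 600, 150    # X12: Cat 12 downlink, Cat 13 uplink, Mbps

print(f"downlink: +{(x12_down / x10_down - 1) * 100:.0f}%")  # ~33%
print(f"uplink:   +{(x12_up / x10_up - 1) * 100:.0f}%")      # 200%
```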

The specifications for this new modem are densely packed:

  • Cat 12 (up to 600 Mbps) in the downlink
  • Cat 13 (up to 150 Mbps) in the uplink
  • Up to 4x4 MIMO on one downlink LTE carrier
  • 2x2 MU-MIMO (802.11ac)
  • Multi-gigabit 802.11ad
  • LTE-U and LTE+Wi-Fi Link Aggregation (LWA)
  • Next Gen HD Voice and Video calling over LTE and Wi-Fi
  • Call Continuity across Wi-Fi, LTE, 3G, and 2G
  • RF front end innovations
  • Advanced Closed Loop Antenna Tuner
  • Qualcomm RF360™ front end solution with CA
  • Wi-Fi/LTE antenna sharing

Rumored phones that could end up running the Snapdragon 820 with this X12 modem include the Samsung Galaxy S7 and around 30 other devices, though final word is of course pending on shipping hardware.

Source: Qualcomm

StarCraft II v3.0 Gets Another New UI

Subject: General Tech | October 3, 2015 - 11:04 PM |
Tagged: Starcraft II, legacy of the void, blizzard

Third time's the charm, unless they plan another release at some point.

The StarCraft II interface isn't perfect. Even though it is interesting and visually appealing, some tasks are unnecessarily difficult and space is not used in the most efficient way. To see what I mean, try to revert the multiplayer mode to Wings of Liberty, or, worse, find your Character Code. Blizzard released a new UI with Heart of the Swarm back in 2013, and they're doing a new one for the release of Legacy of the Void on November 10th. Note that my two examples probably won't be fixed in this update; they are just examples of UX issues.

While the update aligns with the new expansion, Blizzard will patch the UI for all content levels, including the free Starter Edition. This honestly makes sense, because it's easier to patch a title when all variations share a common core. Then again, not every company patches five-year-old titles like Blizzard does, so the back-catalog support is appreciated.


The most heartwarming change for fans, if pointless otherwise, is in the campaign selection screen. As the StarCraft II trilogy will be completed with Legacy of the Void, the interface aligns them as three episodes in the same style as the original StarCraft did.

On the functional side, the interface has been made more compact (which I alluded to earlier). This was driven by the new chat design, which is bigger yet less disruptive than it was in Heart of the Swarm. The column of buttons on the side is now a top bar, which expands down for sub-menu items.


While there are several things that I don't mention, a final note for this post is that Arcade will now focus on open lobbies. Players can look for the specific game they want, but the initial screen will show lobbies that are waiting to fill. The hope seems to be that players will spend less time waiting for a game. This raises two questions. First, Arcade games tend to have a steep learning curve, so I wonder if this feature will slump off after people try a few rounds and realize that they should stick with a handful of games. Second, I wonder what this means for player numbers in general -- this sounds like a feature that is added during player declines, which Blizzard seems to hint is not occurring.

I'm not sure when the update will land, but it will probably be around the launch of Legacy of the Void on November 10th.

Source: Blizzard

Ars Technica Reviews Android 6.0 (Marshmallow)

Subject: Mobile | October 6, 2015 - 07:01 AM |
Tagged: google, android 6.0, Android

Android 6.0 was launched yesterday, and Ars Technica has, so far, been the only outlet to give it a formal review. That said, it is a twelve-page review with a table of contents -- so that totally counts for five or so.


The main complaint that the reviewer has is the operating system's inability to be directly updated. There is a large chain of rubber stamps between Google's engineers and the world at large. Carriers and phone manufacturers can delay (or not even attempt to certify) patches for their many handsets. It is not like Windows, where Microsoft controls the centralized update service. In the beginning, this wasn't too big of an issue as updates were typically for features. Sucker, buy a new phone if you want WebGL.

Now it's about security. Granted, it has always been about security, even on the iPhone; we just care more now. If you think about it, every time a phone gets jailbroken, a method exists to steal admin privileges away from Apple and give them to... the user. Some were fairly sophisticated processes involving USB tethering to PCs, while others involved browsing to a malicious website with a payload that the user (but not Apple) wanted to install. Hence why no one cared: the security was being exploited by the user, for the user. It was only a matter of time before either the companies sufficiently crushed the bugs, or the holes started to look tasty to the wolves.

And Google is getting bit.

Otherwise, Ars Technica mostly praised the OS. Be sure to read their review to get a full sense of their opinion. As far as I can tell, they only tested it on the Nexus 5.

Source: Ars Technica

A good contender for powering your system, the CoolerMaster V750W

Subject: Cases and Cooling | October 2, 2015 - 03:29 PM |
Tagged: PSU, modular psu, coolermaster, V750W, 80+ gold

Cooler Master's V750 PSU is fully modular and comes with a nice selection of cabling, including four PCIe 6+2 connectors and eight SATA power connectors. At 150x140x86mm (5.9x5.5x3.4") it also takes up less space than many PSUs, though not little enough to fit in a truly SFF case. A single 12V rail can provide 744W at 62A, which is enough to power more than one mid-to-high-range GPU, and Bjorn3D's testing shows that it can maintain its 80+ Gold rating under load.  The five year warranty is also a good reason to pick up this PSU, assuming you are not in the market for something in the kilowatt range.
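That 744W number is just the rail's voltage multiplied by its current limit, as a quick check shows:

```python
# Quick check of the V750's quoted 12V rail capacity.
rail_volts = 12.0
rail_amps = 62.0

watts = rail_volts * rail_amps
print(f"{watts:.0f} W, or {watts / 750 * 100:.0f}% of the unit's 750 W label")  # 744 W, ~99%
```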


"One available option soon to be available on the market for <179$, and our specimen of review today, is the CoolerMaster V750. CoolerMaster has partnered with Seasonic to produce the high quality compact “V” series PSUs which made a huge statement for CoolerMaster and told the world they were ready to push some serious power."

Here are some more Cases & Cooling reviews from around the web:



Source: Bjorn3D

Steam "Store Within a Store" at GameStop, GAME UK, and EB

Subject: Systems | October 5, 2015 - 07:39 PM |
Tagged: valve, steam os, steam machines, steam, pc gaming

According to SteamDB, Valve has struck deals with GameStop, GAME UK, and EB Canada to create “store within a store” areas in North American and UK locations. The article does not clarify how many stores will receive this treatment. It does note that the Steam Controller, Steam Link, and even Steam Machines will be sold from these outlets, which will give physical presence to Valve's console platform alongside the existing ones.


The thing about Valve is that, when they go silent, you can't tell whether they have reconsidered their position or are just waiting for the right time to announce. They have been fairly vocal about Steam accessories, but there has been pretty much radio silence on the machines themselves for the better part of a year. There was basically nothing at CES 2015 after a big push the prior year. The talk shifted to Steam Link, which was obviously part of their original intention but, due to the simultaneous lack of Steam Machine promotion, feels more like a replacement than an addition.

But, as said, that's tricky logic to use with Valve.

As a final note, I am curious about what the transaction entailed. From what I hear, purchasing retail space is pricey and difficult, but some retailers donate space for certain products and initiatives that they find intrinsic value in. Valve probably has a lot of money, but they don't have Microsoft levels of cash. Whether Valve paid for the space, or the retailers donated it, is a question that leads to two very different, but equally interesting, follow-ups. Hopefully we'll learn more, but we probably won't.

Source: SteamDB