
Microsoft Buys Havok from Intel

Subject: General Tech | October 5, 2015 - 07:01 AM |
Tagged: physics, microsoft, Intel, Havok

Microsoft has just purchased Havok from Intel for an undisclosed price. This group develops one of the leading physics engines for video games and other software. It was used in every Halo title since Halo 2, including Halo Wars, and a fork of it drives the physics for Valve's Source Engine. It has been around since 2000, but didn't really take off until Max Payne 2 in 2003.

And the natural follow-up question for just about everything is “why?”


Hopefully this isn't bad taste...
Photo Credit: Havok via Game Developer Magazine (June 2013)

There are good reasons, though. First, Microsoft has been in the video game middleware and API business for decades. DirectX is the obvious example, but they have also created software like Games for Windows Live and Microsoft Gaming Zone. Better software drives sales for platforms, and developers can always use help accomplishing that.

Another reason could be Azure. Microsoft wants to bring cloud services to online titles; offloading tasks that are insensitive to latency allows developers to lower system requirements or do more with what they have (which is especially true when consoles flatten huge install bases to a handful of specifications). If they plan to go forward with services that run on Azure or Xbox Live, then it would make sense to have middleware that's as drop-in as possible. Creating a physics engine from scratch is a bit of a hassle, but so is convincing existing engines to adopt your services.

It would be better to just buy a solution that everyone is already using. That realistically means Havok; the main alternatives are Bullet, an open-source engine that is rarely used outside of other open-source projects, and PhysX, which is owned by NVIDIA (and probably won't leave their grip until their fingers are frigid and lifeless).

That's about all we know, though. The deal doesn't have a close date, value, or official purpose. Intel hasn't commented on the deal, only Microsoft has.

Source: Microsoft

Report: AMD's Dual-GPU Fiji XT Card Might Be Coming Soon

Subject: Graphics Cards | October 5, 2015 - 02:33 AM |
Tagged: rumor, report, radeon, graphics cards, Gemini, fury x, fiji xt, dual-GPU, amd

The AMD R9 Fury X, Fury, and Nano have all been released, but a dual-GPU Fiji XT card could be on the way soon according to a new report.


Back in June at AMD's E3 event we were shown Project Quantum, AMD's concept for a powerful dual-GPU system in a very small form factor. It was speculated that the system was actually housing an unreleased dual-GPU graphics card, which would have made sense given the very small size of the system (and the mini-ITX motherboard therein). Now a report from WCCFtech points to a shipping manifest that just might describe this new dual-GPU card, and the code-name is Gemini.


"Gemini is the code-name AMD has previously used in the past for dual GPU variants and surprisingly, the manifest also contains another phrase: ‘Tobermory’. Now this could simply be a reference to the port that the card shipped from...or it could be the actual codename of the card, with Gemini just being the class itself."

The manifest also indicates a Cooler Master cooler for the card; Cooler Master makes the liquid cooling solution for the Fury X. As the Fury X has had its share of criticism for pump whine issues, it would be interesting to see how a dual-GPU cooling solution fares in that department, though we could be seeing an entirely new generation of the pump as well. Of course, speculation on an unreleased product like this could be incorrect, and verifiable hard details aren't available yet. Still, if the dual-GPU card is based on a pair of full Fiji XT cores, the specs could be very impressive to say the least:

  • Core: Fiji XT x2
  • Stream Processors: 8192
  • GCN Compute Units: 128
  • ROPs: 128
  • TMUs: 512
  • Memory: 8 GB (4 GB per GPU)
  • Memory Interface: 4096-bit x2
  • Memory Bandwidth: 1024 GB/s

In addition to the specifics above, the report also discussed the possibility of 17.2 TFLOPS of performance (twice that of the Fury X), which would make Gemini one of the most powerful single-card GPU solutions in the world. The card seems close enough to its final stage that we should expect to hear something official soon, but for now it's fun to speculate - unless, of course, the speculation concerns the initial retail price, where something at or above $1000 is unfortunately quite likely. We shall see.
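For those curious where the 17.2 TFLOPS and 1024 GB/s figures come from, here is a quick back-of-the-envelope check in Python (a sketch only; the ~1.05 GHz clock is borrowed from the Fury X and is an assumption for Gemini):

```python
# Rough theoretical numbers for a dual Fiji XT "Gemini" card.
shaders_per_gpu = 4096      # Fiji XT stream processors per GPU
gpus = 2
clock_ghz = 1.05            # Fury X clock; Gemini's actual clock is unconfirmed

# FP32 rate: 2 FLOPs (one fused multiply-add) per shader per clock
tflops = 2 * shaders_per_gpu * gpus * clock_ghz / 1000
print(f"Theoretical FP32: {tflops:.1f} TFLOPS")             # ~17.2 TFLOPS

# HBM bandwidth: 4096-bit interface per GPU at 1 Gbps effective per pin
bandwidth = 4096 * 1.0 / 8 * gpus
print(f"Aggregate memory bandwidth: {bandwidth:.0f} GB/s")  # 1024 GB/s
```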

Source: WCCFtech
Manufacturer: NVIDIA

GPU Enthusiasts Are Throwing a FET

NVIDIA is rumored to launch Pascal in early (~April-ish) 2016, although some are skeptical that it will even appear before the summer. The design was finalized months ago, and unconfirmed shipping information claims that chips are being stockpiled, which is typical when preparing to launch a product. It is expected to compete against AMD's rumored Arctic Islands architecture, which, according to equally rumored numbers, will be very similar to Pascal.

This architecture is a big one for several reasons.


Image Credit: WCCFTech

First, it will jump two full process nodes. Current desktop GPUs are manufactured at 28nm, which was first introduced with the GeForce GTX 680 all the way back in early 2012, but Pascal will be manufactured on TSMC's 16nm FinFET+ technology. Smaller features have several advantages, but a huge one for GPUs is the ability to fit more complex circuitry in the same die area. This means that you can include more copies of elements, such as shader cores, and do more in fixed-function hardware, like video encode and decode.
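As a rough illustration of why a two-node jump matters: ideal area scaling goes with the square of the feature size, though real-world density gains are smaller (TSMC's 16nm FinFET+, for instance, reuses 20nm-class metal pitches), so treat this as an upper bound rather than a prediction:

```python
# Idealized transistor-density scaling between nodes: area ~ (feature size)^2
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Naive density multiplier when moving from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

print(f"{ideal_density_gain(28, 16):.2f}x")  # ~3.06x in theory; reality is lower
```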

That said, we got a lot more life out of 28nm than we really should have. Chips like GM200 and Fiji are huge, relatively power-hungry, and complex, which is a terrible combination to manufacture when yields are low. I asked Josh Walrath, who is our go-to for analysis of fab processes, and he believes that FinFET+ is probably even more complicated today than 28nm was in the 2012 timeframe, when it launched for GPUs.

It's two full steps forward from where we started, but we've been tiptoeing since then.


Image Credit: WCCFTech

Second, Pascal will introduce HBM 2.0 to NVIDIA hardware. HBM 1.0 was introduced with AMD's Radeon Fury X, and it helped in numerous ways -- from smaller card size to a triple-digit percentage increase in memory bandwidth. The 980 Ti can talk to its memory at about 336 GB/s, while Pascal is rumored to push that to 1 TB/s. Capacity won't be sacrificed, either. The top-end card is expected to contain 16GB of global memory, which is twice what any console has. This means less streaming, higher-resolution textures, and probably even left-over scratch space for the GPU to generate content with compute shaders. Also, according to AMD, HBM is an easier architecture to communicate with than GDDR, which should mean a savings in die space that could be used for other things.
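The bandwidth gap falls straight out of the interface widths and per-pin data rates; here is a minimal sketch (the 2 Gbps HBM2 rate is the rumored figure, not a confirmed spec):

```python
# Memory bandwidth (GB/s) = bus width (bits) x effective data rate (Gbps) / 8
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(384, 7.0))   # GTX 980 Ti, GDDR5:      336 GB/s
print(bandwidth_gb_s(4096, 1.0))  # Fury X, HBM1:           512 GB/s
print(bandwidth_gb_s(4096, 2.0))  # Pascal (rumored), HBM2: 1024 GB/s (~1 TB/s)
```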

Third, the architecture includes native support for three levels of floating point precision. Maxwell, due to how limited 28nm was, saved on complexity by reducing 64-bit IEEE 754 floating point performance to 1/32nd the rate of 32-bit numbers, because FP64 values are rarely used in video games. This saved transistors, but was a huge, order-of-magnitude step back from the 1/3rd ratio found on the Kepler-based GK110. While it probably won't be back to the 1/2 ratio that was found in Fermi, Pascal should be much better suited for GPU compute.


Image Credit: WCCFTech

Mixed precision could help video games too, though. Remember how I said it supports three levels? The third one is 16-bit, which is half the width of the 32-bit format that is commonly used in video games. Sometimes, that is sufficient. If so, Pascal is said to do these calculations at twice the rate of 32-bit. We'll need to see whether enough games (and other applications) are willing to drop down in precision to justify the die space that these dedicated circuits require, but it should double the performance of anything that does.
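To put the precision tiers in concrete terms, here is a minimal sketch of the throughput ratios discussed above (the Pascal FP16 figure is the rumor, and its FP64 ratio is left open since it has not been confirmed):

```python
# ALU throughput per precision, expressed as a multiple of the FP32 rate.
throughput_vs_fp32 = {
    "Fermi":            {"fp64": 1 / 2},
    "Kepler GK110":     {"fp64": 1 / 3},
    "Maxwell GM200":    {"fp64": 1 / 32},
    "Pascal (rumored)": {"fp64": None, "fp16": 2.0},  # FP64 ratio unconfirmed
}

def scaled_rate(fp32_tflops: float, ratio: float) -> float:
    """Scale a card's FP32 throughput to another precision."""
    return fp32_tflops * ratio

# Example: a hypothetical 10 TFLOPS FP32 Pascal part at FP16 (assumed 2x rate)
print(scaled_rate(10.0, throughput_vs_fp32["Pascal (rumored)"]["fp16"]))  # 20.0
```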

So basically, this generation should provide the massive jump in performance that enthusiasts have been waiting for. Memory bandwidth and the amount of circuitry that can fit on the die are two major bottlenecks for most modern games and GPU-accelerated software. We'll need to wait for benchmarks to see how the theoretical maps to the practical, but it's a good sign.

StarCraft II v3.0 Gets Another New UI

Subject: General Tech | October 3, 2015 - 11:04 PM |
Tagged: Starcraft II, legacy of the void, blizzard

Third time's the charm, unless they plan another release at some point.

The StarCraft II interface isn't perfect. Even though it is interesting and visually appealing, some tasks are unnecessarily difficult and space is not used in the most efficient way. To see what I mean, try to revert the multiplayer mode to Wings of Liberty, or, worse, find your Character Code. Blizzard released a new UI with Heart of the Swarm back in 2013, and they're doing a new one for the release of Legacy of the Void on November 10th. Note that my two examples probably won't be fixed in this update; they are just examples of UX issues.

While the update aligns with the new expansion, Blizzard will patch the UI for all content levels, including the free Starter Edition. This honestly makes sense, because it's easier to patch a title when all variations share a common core. Then again, not every company patches five-year-old titles like Blizzard does, so the back-catalog support is appreciated.


The most heartwarming change for fans, if pointless otherwise, is in the campaign selection screen. As the StarCraft II trilogy will be completed with Legacy of the Void, the interface aligns them as three episodes in the same style as the original StarCraft did.

On the functional side, the interface has been made more compact (which I alluded to earlier). This was driven by the new chat design, which is bigger yet less disruptive than it was in Heart of the Swarm. The column of buttons on the side is now a top bar, which expands down for sub-menu items.


While there are several things that I haven't mentioned, a final note for this post is that Arcade will now focus on open lobbies. Players can look for the specific game they want, but the initial screen will show lobbies that are waiting to fill. The hope seems to be that players will spend less time waiting for a game to start. This raises two questions. First, Arcade games tend to have a steep learning curve, so I wonder if use of this feature will slump off after people try a few rounds and realize that they should stick with a handful of games. Second, I wonder what this means for player numbers in general -- this sounds like a feature that gets added during player declines, which Blizzard seems to hint is not occurring.

I'm not sure when the update will land, but it will probably be around the launch of Legacy of the Void on November 10th.

Source: Blizzard

Dell UltraSharp UP3216Q: 31.5-inch 4K Monitor with 99.5% Adobe RGB

Subject: Displays | October 3, 2015 - 09:12 PM |
Tagged: UP3216Q, ultrasharp, UHD, monitor, ips, HDMI 2.0, display, dell, calibration, Adobe RGB, 4k

While it has not officially launched in the U.S. just yet, on Thursday Tom's Hardware reported news of a trio of upcoming UltraSharp monitors from Dell, and I was able to locate the largest of them - the UP3216Q - on Dell's Bermuda site.


For anyone looking for a 4K display for photo or video editing (or any other color critical work) the new Dell UltraSharp UP3216Q looks like a great - and likely very pricey - option. Just how much are we talking? The existing 31.5-inch 4K UP3214Q carries a $1999 MSRP (though it sells for $1879 on Dell's site). For this kind of money there are probably those who will never consider a 16:9 option (or ever give up their 16:10 30-inch displays), but the specifications of this new UP3216Q are impressive:

  • Diagonal Viewing Size: 31.5 inch
  • Aspect Ratio: Widescreen (16:9)
  • Panel Type, Surface: In-Plane Switching
  • Optimal resolution: 3840 x 2160 @ 60Hz
  • Active Display Area (H x V): 273,996 sq-mm (424.7 sq-inches)
  • Contrast Ratio: 1000 to 1 (typical), 2 Million to 1 (dynamic)
  • Brightness: 300 cd/m2 (typical)
  • Response Time: 6 ms (fast mode, GTG)
  • Viewing Angle: 178° vertical / 178° horizontal
  • Adjustability: Tilt, Swivel, Height Adjust
  • Color Support: 1.07 billion colors
  • Pixel Pitch: 0.182 mm
  • Backlight Technology: LED light bar system
  • Display Screen Coating: Anti-Glare with 3H hardness
  • Connectivity: DP, mDP, HDMI (MHL), 4 x USB3 with one charging port, 1 x USB3 upstream, Media Card Reader

With the 60 Hz 4K (UHD) IPS panel offering full sRGB and 99.5% Adobe RGB coverage, and factory color calibration promising a deltaE of less than 2, the UP3216Q sounds pretty much ready to go out of the box. However, for those inclined to strive for a more perfect calibration, Dell is offering an X-Rite i1Display Pro colorimeter as an optional accessory, along with their own Dell UltraSharp Color Calibration Solution software.
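For context on that deltaE number, here is a minimal sketch of the classic CIE76 color-difference calculation; Dell doesn't specify which deltaE formula they use (deltaE 2000 is also common), so treat CIE76 as an illustrative assumption:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB values.
    A deltaE around 2 or below is generally hard to spot with the naked eye."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Example: a measured patch (L*, a*, b*) vs. its target value
print(delta_e_76((52.0, 28.5, -14.0), (51.2, 29.1, -13.4)))  # ~1.17
```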

A couple of points of interest with this monitor: while it offers DisplayPort and mini-DP inputs, it also supports 4K at 60 Hz via HDMI 2.0. Color support is also listed as 1.07 billion colors, but it's not specified whether this indicates a native 10-bit panel or 10-bit color processing on an 8-bit panel - though at a $2k price point it would probably be safe to assume a 10-bit panel. Lastly, in keeping with the UltraSharp branding, the monitor will also carry Dell's Premium Panel Guarantee and 3-Year Advanced Exchange Service warranty.
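On the 1.07 billion colors point, the arithmetic only tells us the pipeline ends up at 10 bits per channel; it can't distinguish a native 10-bit panel from an 8-bit panel with FRC dithering, which is exactly the ambiguity noted above (a quick sketch):

```python
def color_count(bits_per_channel: int) -> int:
    """Total displayable colors for a three-channel RGB pipeline."""
    return (2 ** bits_per_channel) ** 3

print(color_count(8))   # 16,777,216    (~16.7 million, plain 8-bit panel)
print(color_count(10))  # 1,073,741,824 (~1.07 billion, 10-bit native or 8-bit + FRC)
```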

Source: Dell

Google's Pixel C Is A Powerful Convertible Tablet Running Android 6.0

Subject: Mobile | October 2, 2015 - 04:09 PM |
Tagged: Tegra X1, tablet, pixel, nvidia, google, android 6.0, Android

During its latest keynote event, Google unveiled the Pixel C, a powerful tablet with optional keyboard that uses NVIDIA’s Tegra X1 SoC and runs the Android 6.0 “Marshmallow” operating system.

The Pixel C was designed by the team behind the Chromebook Pixel. Pixel C features an anodized aluminum body that looks (and reportedly feels) smooth with clean lines and rounded corners. The tablet itself is 7mm thick and weighs approximately one pound. The front of the Pixel C is dominated by a 10.2” display with a resolution of 2560 x 1800 (308 PPI, 500 nits brightness), wide sRGB color gamut, and 1:√2 aspect ratio (which Google likened to the size and aspect ratio of an A4 sheet of paper). A 2MP front camera sits above the display while four microphones sit along the bottom edge and a single USB Type-C port and two stereo speakers sit on the sides of the tablet. Around back, there is an 8MP rear camera and a bar of LED lights that will light up to indicate the battery charge level after double tapping it.
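The 1:√2 ratio is the same proportion used by ISO 216 paper sizes like A4, which is why Google made the comparison; a quick check of the quoted resolution (a sketch):

```python
import math

width, height = 2560, 1800
print(width / height)   # ~1.422
print(math.sqrt(2))     # ~1.414, the ISO 216 (A4) aspect ratio
print(297 / 210)        # ~1.414, A4 dimensions in mm

# A 1:sqrt(2) rectangle split down the middle yields two halves with the same
# proportions, which is handy for the side-by-side app ideas discussed later.
print(height / (width / 2))  # ~1.406, each half is still roughly 1:sqrt(2)
```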

The Google Pixel C, powered by NVIDIA's Tegra X1.

The keyboard is an important part of the Pixel C, and Google has given it special attention to make it part of the package. The keyboard attaches to the tablet using self-aligning magnets that are powerful enough to keep the display attached while holding it upside down and shaking it (not that you'd want to do that, mind you). It can be attached to the bottom of the tablet for storage and used like a slate or you can attach the tablet to the back of the keyboard and lift the built-in hinge to use the Pixel C in laptop mode (the hinge can hold the display at anywhere from 100 to 135-degrees). The internal keyboard battery is good for two months of use, and can be simply recharged by closing the Pixel C like a laptop and allowing it to inductively charge from the tablet portion. The keyboard is around 2mm thick and is nearly full size at 18.85mm pitch and the chiclet keys have a 1.4mm travel that is similar to that of the Chromebook Pixel. There is no track pad, but it does offer a padded palm rest which is nice to see.

The Google Pixel C with its keyboard attached.

Internally, the Pixel C is powered by the NVIDIA Tegra X1 SoC, 3GB of RAM, and 32GB or 64GB of storage (depending on model). The 20nm Tegra X1 consists of four ARM Cortex A57 and four Cortex A53 CPU cores paired with a 256-core Maxwell GPU. The Pixel C is a major design win for NVIDIA, and the built in GPU will be great for gaming on the go.

The Pixel C will be available in December ("in time for the holidays") for $499 for the base 32 GB model, $599 for the 64 GB model, and $149 for the keyboard.

First impressions, such as this hands-on by Engadget, seem to be very positive, describing sturdy yet sleek hardware that is comfortable to type on. While the hardware looks more than up to the task, the operating system of choice is a concern for me. Android is not the most productivity- and multitasking-friendly software. There are some versions of Android that enable multiple windows or side-by-side apps, but it has always felt rather clunky and limited in its usefulness. With that said, Computerworld's JR Raphael seems hopeful. He points out that the Pixel C is, in Batman fashion, not the hardware Android wants, but the hardware that Android needs (to move forward), and that it is primed for a future version of Android that is friendlier to such productive endeavors. Development versions of Android 6.0 included support for multiple apps running side-by-side, and while that feature will not make the initial production code cut, it does show that it is something Google is pursuing and may enable at some point. The Pixel C's aspect ratio is well suited to app splitting, with the ability to display four windows that each keep the same aspect ratio.

I am not sure how well received the Pixel C will be by business users who have several convertible tablet options running Windows and Chrome OS. It certainly gives the iPad-and-keyboard combination a run for its money and is a premium alternative to devices like the Asus Transformers.

What do you think about the Pixel C, and in particular, it running Android?

Even if I end up being less-than-productive using it, I think I'd still want the sleek-looking hardware as a second machine, heh.

Source: Google

A good contender for powering your system, the CoolerMaster V750W

Subject: Cases and Cooling | October 2, 2015 - 03:29 PM |
Tagged: PSU, modular psu, coolermaster, V750W, 80+ gold

Cooler Master's V750W PSU is fully modular and comes with a nice selection of cabling, including four PCIe 6+2 connectors and eight SATA power connectors. At 150x140x86mm (5.9x5.5x3.4") it also takes up less space than many PSUs, though not enough to fit in a truly SFF case. A single 12V rail can provide 744W at 62A, which is enough to power more than one mid to high range GPU, and Bjorn3D's testing shows that it can maintain its 80+ GOLD rating under load. The five year warranty is also a good reason to pick up this PSU, assuming you are not in the market for something in the kilowatt range.


"One available option soon to be available on the market for <179$, and our specimen of review today, is the CoolerMaster V750. CoolerMaster has partnered with Seasonic to produce the high quality compact “V” series PSUs which made a huge statement for CoolerMaster and told the world they were ready to push some serious power."

Here are some more Cases & Cooling reviews from around the web:



Source: Bjorn3D
Subject: Motherboards
Manufacturer: GIGABYTE

Introduction and Technical Specifications



Courtesy of GIGABYTE

As the flagship board in their Intel Z170-based G1 Gaming Series product line, GIGABYTE integrated all the premium features you could ever want into their Z170X-Gaming G1 motherboard. The board features a black PCB with red and white accents spread throughout its surface, making for a very appealing aesthetic. GIGABYTE chose to integrate plastic shields covering the rear panel assembly, the VRM heat sinks, the audio components, and the chipset and SATA ports. Built around the Intel Z170 chipset, the motherboard supports the latest Intel LGA1151 Skylake processor line as well as dual channel DDR4 memory. Offered at a premium MSRP of $499, the Z170X-Gaming G1 is priced to appeal to premium enthusiast users.


Courtesy of GIGABYTE


Courtesy of GIGABYTE

GIGABYTE over-engineered the Z170X-Gaming G1 to take anything you could think of throwing at it, with a massive 22-phase digital power delivery system featuring 4th gen IR digital controllers and 3rd gen IR PowerIRStage ICs, as well as Durable Black solid capacitors rated at 10k operational hours. GIGABYTE integrated the following features into the Z170X-Gaming G1 board: four SATA 3 ports; three SATA-Express ports; two M.2 PCIe x4 capable ports; dual Qualcomm® Atheros Killer E2400 NICs; a Killer™ Wireless-AC 1535 802.11ac WiFi controller; four PCI-Express x16 slots; three PCI-Express x1 slots; a 2-digit diagnostic LED display; on-board power, reset, CMOS clear, ECO, and CPU Overclock buttons; Dual-BIOS and active BIOS switches; an audio gain control switch; Sound Blaster Core 3D audio; a removable audio OP-AMP; integrated voltage measurement points; a Q-Flash Plus BIOS updater; an integrated HDMI video port; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.

Continue reading our review of the GIGABYTE Z170X-Gaming G1 motherboard!

Remember when competition wasn't a bad word?

Subject: Editorial | October 2, 2015 - 12:41 PM |
Tagged: google, chromecast, AT&T, apple tv, amd, amazon

There is more discouraging news out of AMD as another 5% of their roughly 10,000-person workforce, around 500 employees, will be let go by the end of 2016.  That move will hurt their bottom line before the end of this year, with $42 million in severance, benefit payouts, and other restructuring costs, but should save around $60-70 million by the end of next year.  This is on top of the 8% cut to their workforce which occurred earlier this year, and shows just how deep AMD needs to cut to stay alive; unfortunately, reducing costs is not as effective as raising revenue.  Before you laugh, point fingers, or otherwise disparage AMD, consider for a moment a world in which Intel has absolutely no competition selling high powered desktop and laptop parts.  Do you really think the already slow product refreshes would speed up or prices remain the same?

Consider the case of AT&T, who have claimed numerous times that they provide the best broadband service to their customers that they are capable of and at the lowest price they can sustain.  It seems that if you live in a city which has been blessed with Google Fibre somehow AT&T is able to afford to charge $40/month less than in a city which only has the supposed competition of Comcast or Time Warner Cable.  Interesting how the presence of Google in a market has an effect that the other two supposed competitors do not.

There is of course another way to deal with the competition and both Amazon and Apple have that one down pat.  Apple removed the iFixit app that showed you the insides of your phone and had the temerity to actually show you possible ways to fix hardware issues.  Today Amazon have started to kick both Apple TV and Chromecast devices off of their online store.  As of today no new items can be added to the virtual inventory and as of the 29th of this month anything not sold will disappear.  Apparently not enough people are choosing Amazon's Prime Video streaming and so instead of making the service compatible with Apple or Google's products, Amazon has opted to attempt to prevent, or at least hinder, the sale of those products.

The topics of competition, liquidity and other market forces are far too complex to be dealt with in a short post such as this but it is worth asking yourself; do you as a customer feel like competition is still working in your favour?

The Hand

"AMD has unveiled a belt-tightening plan that the struggling chipmaker hopes will get its finances back on track to profitability."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

New Lightweight LG Gram Notebooks Hit US Market

Subject: Mobile | October 2, 2015 - 02:02 AM |
Tagged: LG, ultrathin, Broadwell, ips display

Earlier this week, LG revealed three new notebooks under its Gram series that are set to compete with Apple’s MacBook Air (The Verge has a photo comparison of the two) and various Ultrabooks from other manufacturers (e.g. Lenovo and Asus). The new series includes one 13-inch and two 14-inch laptops that weigh in at 2.16 pounds and are 0.5” thick. The LG Gram with 13” display is the smallest of the bunch at 11.9” x 8.4” x 0.5” and the chassis is constructed of magnesium and polycarbonate (plastic). Meanwhile, the two notebooks with the 14” display measure 12.8” x 8.94” x 0.5” and feature a body made from a combination of carbon-magnesium and lithium-magnesium alloys. The difference in materials accounts for the larger notebooks hitting the same weight target (2.16 lbs).


The 14-inch LG Gram 14 (gram-14Z950-A.AA4GU1) notebook.

LG is packing every Gram notebook with a 1080p IPS display (13.3 or 14 inches), dual mics, a 1.3 MP webcam, six row island-style keyboard, and a spacious track pad. External IO includes two USB 3.0 ports, HDMI output, micro SD card slot, and a micro USB port that (along with the included dongle) supports the 10/100 Ethernet network connection.

The base Gram 13-inch comes in Snow White while both Gram 14-inch notebooks are clad in Champagne Gold.


The LG Gram 13 Broadwell-powered laptop (gram-13Z950-A.AA3WU1).

Internally, LG has opted to go with Intel’s Broadwell processor and its built-in HD 5500 GPU. The LG Gram 13 uses the Intel Core i5-5200U (2 cores, 4 threads at 2.2-2.7GHz). The 14-inch models can be configured with an Intel i5 or an Intel Core i7-5500U which is a dual core (with HyperThreading for four threads) processor clocked at 2.4 GHz that can boost to 3.0 GHz. Additional specifications include 8GB of DDR3L memory, a solid state drive (128 GB on the Gram 13, up to 256 GB on the Gram 14), Intel 802.11ac Wi-Fi, and rated battery life of up to 7.5 hours (which is not great, but not too bad).

The Gram 13 starts at $900. Moving up to the base 14” model will cost $1,000. Finally, the top-end Core i7-powered Gram 14 has an MSRP of $1,400.

The Gram series is LG’s first major thin-and-light entry into the US market, and while there are some compromises made to achieve the portability, the price points are competitive. Interestingly, LG is positioning these notebooks as MacBook Air competitors, offering a larger yet lighter machine. It is not actually the lightest notebook on the market, however. Below is a brief weight comparison with some of the recent thin-and-lights the Gram is going up against:

  • 12” Apple MacBook: 2.03 lbs
  • 11” Apple MacBook Air: 2.38 lbs
  • 13” Apple MacBook Air: 2.96 lbs
  • 13.3" ASUS Zenbook UX305FA (Core M): 2.65 lbs
  • 13.3" ASUS Zenbook UX301LA (Core i7): 3.08 lbs
  • 13.3” LaVie Z: 1.87 lbs
  • 13.3” LaVie Z 360: 2.04 lbs
  • 12.2" Samsung ATIV Book 9: 2.09 lbs

We will have to wait for reviews to see how the build quality stacks up, especially for the 14-inch models using the lithium-magnesium bodies which, while light, may not be the sturdiest flex-wise. If they can hold up to the stress of daily commuting, the retail pricing is far from exorbitant and, if you can live with the compromises, fairly attractive.

Source: LG

We interrupt the Apple coverage for a look at a high end Android

Subject: Mobile | October 1, 2015 - 07:13 PM |
Tagged: nexus 6p, google, Android

The Nexus 6P is new from Google and is designed to compete directly against the new Apple devices, with an all aluminium body and a new USB Type-C connection for charging.  One of the benefits of the new USB is charging speed, which Google claims will give you 7 hours of usage off of a 10 minute charge.  They have also added a 12.3MP camera with large 1.55μm pixels, though in the style of the Nokia Lumia 1520, that camera does project beyond the casing.  The 5.7in 2560x1440 AMOLED screen is made of Gorilla Glass 4, and the phone is powered by a 2GHz Qualcomm Snapdragon 810 v2.1 octa-core processor, which may display that chip's tendency to get a little warm during use.  The Inquirer has not had a chance to fully review the Nexus 6P but you can catch their preview right here.


"THE NEXUS 6P is the first truly premium Android device from Google. Last year's Nexus 6 divided opinion with its bulky design and lacklustre features, but the firm is hoping that its successor, with the premium case and next-gen specs, will finally fill the void for those after a stock Android device."

Here are some more Mobile articles from around the web:



Source: The Inquirer

Samsung 840 EVO mSATA Gets Long Awaited EXT43B6Q Firmware, Fixes Read Speed Issue

Subject: Storage | October 1, 2015 - 05:42 PM |
Tagged: Samsung, firmware, 840 evo, msata

It took them a while to get it right, but Samsung did manage to fix the read degradation issue in many of their TLC-equipped 840 Series SSDs. I say many because some models were left out when firmware EXT0DB6Q was rolled out via Magician 4.6. The big exception was the mSATA variant of the 840 EVO, which is essentially the same SSD in a more compact form. This omission was rather confusing, as the previous update was applicable to both the 2.5" and mSATA form factors simultaneously.


The Magician 4.7 release notes included a bullet for Advanced Performance Optimization support on the 840 EVO mSATA model, but it took Samsung some time to push out the firmware update that actually enabled it. We know from our previous testing that the Advanced Performance Optimization feature came alongside other changes that enable reads of 'stale' data at full speed, compensating for the natural drift of the flash cell voltages representing the stored data.


Now that the firmware has been made available (it came out early this week but was initially throttled), I was able to apply it to our 840 EVO 1TB mSATA sample without issue, and could perform the Advanced Performance Optimization and observe the expected effects. However, my sample was recently used for some testing and did not have data old enough to show a solid improvement with the firmware applied *and before* running the Optimization. Luckily, an Overclock.net forum member was able to perform just that test on his 840 EVO 500GB mSATA model:

Palorim12's results, as posted on Overclock.net.

Kudos to that member for being keen enough to re-run his test just after the update.


It looks like the only consumer 840 TLC model left to fix is the original 840 SSD (not 840 EVO, just 840). This was the initial model launched that was pure TLC flash with no SLC TurboWrite cache capability. We hope to see this model patched in the near future. There were also some enterprise units that used the same planar 19nm TLC flash, but I fear Samsung may not be updating those as most workloads seen by those drives would constantly refresh the flash and not give it a chance to become stale and suffer from slowing read speeds. The newer and faster V-NAND equipped models (850 / 950 Series) have never been susceptible to this issue.

Source: Samsung

Android to iPhone Day 6: Battery Life and Home Screens

Subject: General Tech, Mobile | October 1, 2015 - 02:45 PM |
Tagged: iphone 6s, iphone, ios, google, apple, Android

PC Perspective’s Android to iPhone series explores the opinions, views, and experiences of the site’s Editor in Chief, Ryan Shrout, as he moves from the Android smartphone ecosystem to the world of the iPhone and iOS. Having been entrenched in the Android smartphone market for 7+ years, the editorial series is less a review of the new iPhone 6s than an exploration of how the current smartphone market compares to each side’s expectations.

Full Story Listing:


Day 4

It probably won’t come as a shock to the millions of iPhone users around the globe, but the more days I keep the 6s in my pocket, the more accepting I am becoming of the platform. The phone has been fast and reliable – I have yet to come across any instability or application crashes despite my incessant installation of new apps. And while I think it’s fair to say that even new Android-based phones feel snappy to user interactions out of the box, the iPhone is just about a week in without me ever thinking about performance – which is exactly what you want from a device like this.

There are some quirks and features missing from the iPhone 6s that I had on my Droid Turbo that I wish I could implement in settings or through third-party applications. I fell in love with the ability to do a double wrist rotation with the Droid as a shortcut to opening up the camera. It helped me capture quite a few photos when I only had access to a single hand and without having to unlock the phone, find an icon, etc. The best the iPhone has is a “drag up from the bottom” motion from the lock screen but I find myself taking several thumb swipes on it before successfully activating it when only using one hand. Trying to use the home button to access the lock screen, and thus the camera shortcut, is actually hindered because the Touch ID feature is TOO FAST, taking me to a home screen (that may not have the camera app icon on it) where I need to navigate around.

I have been a user of the Pebble Time since it was released earlier this year and I really enjoy the extended battery life (measured in days not hours) when compared to Android Wear devices or the Apple Watch. However, the capabilities of the Pebble Time are more limited with the iPhone 6s than they are with Android – I can no longer use voice dictation to reply to text messages or emails and the ability to reply with easy templates (yes, no, I’ll be there soon, etc.) is no longer available. Apple does not allow the same level of access to the necessary APIs as Android does and thus my Time has effectively become a read-only device.


Finally, my concern about missing widgets continues to stir within me; it is something that I think the iPhone 6s could benefit from greatly. I also don’t understand the inability to arrange the icons on the home screens in an arbitrary fashion. Apple will not let me move icons to the bottom of the page without first filling up every other spot on the screen – there can be no empty spaces!! So while my organizational style would like to have a group of three icons in the bottom right hand corner of the screen with some empty space around it, Apple doesn’t allow me to do that. If I want those icons in that location I need to fill up every empty space on the screen to do so. Very odd.

Continue reading my latest update on my Android to iPhone journey!!

Podcast #369 - Fable Legends DX12 Benchmark, Apple A9 SoC, Intel P3608 SSD, and more!

Subject: General Tech | October 1, 2015 - 02:17 PM |
Tagged: podcast, video, fable legends, dx12, apple, A9, TSMC, Samsung, 14nm, 16nm, Intel, P3608, NVMe, logitech, g410, TKL, nvidia, geforce now, qualcomm, snapdragon 820

PC Perspective Podcast #369 - 10/01/2015

Join us this week as we discuss the Fable Legends DX12 Benchmark, Apple A9 SoC, Intel P3608 SSD, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, and Allyn Malventano

Program length: 1:42:35

  1. Week in Review:
  2. 0:54:10 This episode of PC Perspective is brought to you by…Zumper, the quick and easy way to find your next apartment or home rental. To get started and to find your new home go to http://zumper.com/PCP
  3. News item of interest:
  4. Hardware/Software Picks of the Week:
  5. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

The headphones of choice on Aiur; Corsair's VOID

Subject: General Tech | October 1, 2015 - 01:43 PM |
Tagged: corsair, VOID Wireless, gaming headset, 7.1 headset

On paper these headphones are impressive: wireless performance out to 40' with 16 hours per charge, a frequency response of 20Hz to 20kHz from the 50mm drivers, and 7.1 surround sound.  There have been many previous software-emulated 7.1 directional gaming headsets which have disappointed users, but in this case Benchmark Reviews quite liked the performance of the VOID while gaming and listening to music.  The noise cancelling microphone is dubbed an “InfoMic” as it has LED lights which can be illuminated in different ways depending on your preferences and even the game you happen to be playing.  You can also sync the lights with other Corsair RGB devices using the Cue software if you are so inclined.  Check out the full review right here.


"In the world of computer peripherals and hardware, most of us are well aware of Corsair’s existence. This is an organization that has well-earned reputation for producing quality components; components that are going to be high-performing, intelligently designed, and very likely to provide its owners with years of service."

Here is some more Tech News from around the web:

Audio Corner


You thought Stagefright was just taking a bow? Surprise! It's an encore.

Subject: General Tech | October 1, 2015 - 12:44 PM |
Tagged: stagefright, security, Android

Assuming you have a carrier with a sense of responsibility and a reasonably modern phone, the chances are you are patched against the original Stagefright vulnerability.  This is not the case for the recently reported vulnerabilities dubbed Stagefright 2.0.  It has been confirmed that opening a specially and nefariously modified MP3 or MP4 file in Stagefright on Android 5.0+ can trigger remote code execution via libstagefright.  If you are on an older version then the vulnerability lies in libutils and can be used for the same purpose: gaining access to the data stored on your device.  From the security company reports that The Register has linked, it sounds like we can expect many repeat performances, as the Stagefright library was poorly written and contains many mistakes; worse is the fact that it is not sandboxed in any way and has significantly higher access than an application for playing media files should ever have.


"Joshua Drake from the security outfit Zimperium zLabs introduced us to StageFright earlier this summer, and he is now back with a similar warning and a brace of problems, according to a post on the Kaspersky Threatpost news site."

Here is some more Tech News from around the web:

Tech Talk


Source: The Register
Subject: General Tech
Manufacturer: NVIDIA

Setup, Game Selection

Yesterday NVIDIA officially announced the new GeForce NOW streaming game service, the conclusion of the years-long beta and development process known as NVIDIA GRID. As I detailed in my story yesterday about the reveal, GeForce NOW is a $7.99/mo. subscription service that will offer on-demand, cloud-streamed games to NVIDIA SHIELD devices, including a library of 60 games for that $7.99/mo. fee in addition to 7 titles in the “purchase and play” category. There are several advantages that NVIDIA claims make GeForce NOW a step above other streaming gaming services such as PlayStation Now and OnLive, including load times, resolution and frame rate, combined local PC and streaming game support, and more.


I have been able to use and play with the GeForce NOW service on our SHIELD Android TV device in the office for the last few days and I thought I would quickly go over my initial thoughts and impressions up to this point.

Setup and Availability

If you have an NVIDIA SHIELD Android TV (or a SHIELD Tablet) then the setup and getting started process couldn’t be any simpler for new users. An OS update is pushed that changes the GRID application on your home screen to GeForce NOW and you can sign in using your existing Google account on your Android device, making payment and subscription simple to manage. Once inside the application you can easily browse through the included streaming games or look through the smaller list of purchasable games and buy them if you so choose.


Playing a game is as simple as selecting a title from the grid list and hitting play.

Game Selection

Let’s talk about that game selection first. For $7.99/mo. you get access to 60 titles for unlimited streaming. I have included a full list below, originally posted in our story yesterday, for reference.

Continue reading my initial thoughts and an early review of GeForce NOW!!

Apple Dual Sources A9 SOCs with TSMC and Samsung: Some Extra Thoughts

Subject: Processors | September 30, 2015 - 09:55 PM |
Tagged: TSMC, Samsung, FinFET, apple, A9, 16 nm, 14 nm

So the other day the nice folks over at Chipworks got word that Apple was in fact sourcing their A9 SOC from both TSMC and Samsung.  This is really interesting news on multiple fronts.  From the information gleaned, the two parts are the APL0898 (Samsung fabbed) and the APL1022 (TSMC).

These process technologies have been in the news quite a bit.  As we well know, it has been a hard time for any foundry to go under 28 nm in an effective way if your name is not Intel.  Even Intel has had some pretty hefty issues with their march to sub 32 nm parts, but they have the resources and financial ability to push through a lot of these hurdles.  One of the bigger problems that affected the foundries was the idea that they could push back FinFETs beyond what they were initially planning.  The idea was to hit 22/20 nm and use planar transistors and push development back to 16/14 nm for FinFET technology.


The Chipworks graphic that explains the differences between Samsung's and TSMC's A9 products.

There were many reasons why this did not work in an effective way for the majority of products that the foundries were looking to service with a 22/20 nm planar process.  Yes, there were many parts that were fabricated using these nodes, but none of them were higher power/higher performance parts that typically garner headlines.  No CPUs, no GPUs, and only a handful of lower power SOCs (most notably Apple's A8, which was around 89 mm square and consumed around 5 to 10 watts at maximum).  The node just did not scale power very effectively.  It provided a smaller die size, but it did not increase power efficiency and switching performance significantly as compared to 28 nm high performance nodes.

The information Chipworks has provided also verifies that Samsung's 14 nm FF process is more size optimized than TSMC's 16 nm FF.  There was originally some talk about both nodes being very similar in overall transistor size and density, but Samsung has a slightly tighter design.  Neither of them is smaller than Intel's latest 14 nm process, which is going into its second generation form.  Intel still has a significant performance and size advantage over everyone else in the field.  Going back to size, we see the Samsung chip is around 96 mm square while the TSMC chip is 104.5 mm square.  This is not huge, but it does show that the Samsung process is a little tighter and can squeeze more transistors per square mm than TSMC.
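A quick comparison of those reported die areas, using Chipworks' published numbers (a sketch):

```python
samsung_mm2 = 96.0    # APL0898, Samsung 14 nm FinFET
tsmc_mm2 = 104.5      # APL1022, TSMC 16 nm FinFET

print(f"TSMC die is {(tsmc_mm2 / samsung_mm2 - 1) * 100:.1f}% larger")       # ~8.9%
print(f"Samsung fits the design in {samsung_mm2 / tsmc_mm2 * 100:.0f}% of "
      f"the TSMC area")                                                      # ~92%
```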

In terms of actual power consumption and clock scaling we have nothing to go on here.  Both chips are represented in the 6S and 6S+.  Testing has not shown significant differences between the two SOCs so far.  In theory one could be performing better than the other, but in reality we have not tested these chips at a low enough level to discern any major performance or power issue.  My gut feeling here is that Samsung's process is more mature and running slightly better than TSMC's, but the differences are going to be minimal at best.

The next piece of info that we can glean from this is that there just isn't enough line space for all of the chip companies who want to fabricate their parts with either Samsung or TSMC.  From a chip standpoint, a lot of work has to be done to port a design to two different process nodes.  While 14 and 16 nm are similar in overall size and their usage of FinFETs, the standard cells and design libraries for Samsung and TSMC are going to be very different.  It is not a simple thing to port over a design; a lot of work has to be done in the design stage to make a chip work with both nodes.  I can tell you that there is no way that both chips are identical in layout.  It is not going to be a "dumb port" where they just adjust the optics with the same masks and magically make these chips work right off the bat.  Different mask sets for each fab, verification of both designs, and troubleshooting the yields through metal layer changes will be needed for each manufacturer.

In the end this means that there simply was not enough capacity at either TSMC or Samsung to handle the demand that Apple was expecting.  Because Apple has deep pockets, they contracted both TSMC and Samsung to produce two very similar, but still different, parts.  Apple also likely outbid other major chip firms and locked down whatever wafer capacity Samsung and TSMC had available, much to their dismay.  I have no idea what is going on in the background with the likes of NVIDIA and AMD when it comes to line space for manufacturing their next generation parts.  At least for AMD, it seems that their partnership with GLOBALFOUNDRIES and its version of 14 nm FF is having a hard time taking off.  Eventually more space will be made in production, and yields and bins will improve.  Apple will stop taking up so much space and we can get other products rolling off the line.  In the meantime, enjoy that cutting edge iPhone 6S/+ with the latest 14/16 nm FF chips.

Source: Chipworks

This mouse goes to 12000! The ROCCAT Nyth

Subject: General Tech | September 30, 2015 - 04:05 PM |
Tagged: roccat, Nyth, gaming mouse, input

That is no typo, the Twin-Tech Laser Sensor R1 on the Nyth really does go all the way up to 12000 DPI, and it also has an adjustable lift-off distance.  There are also 18 buttons, and with the shift key function each of them can be assigned a second function as well.  The Swarm software used to program the mouse is rather impressive: not only can you assign profiles to games, you can also program a light show into your mouse if you so desire.  It will set you back $120, but if the price tag does not scare you off you can see how it performs in MadShrimps' review.


"ROCCAT Nyth is like a breath of fresh air in the already crowded gaming mice market which sports quite a modular design with replaceable right side panel, no less than four different sets of buttons, a smooth durable plastic texture, catchy LED light effects and a comfortable shape for lengthy gaming sessions."

Here is some more Tech News from around the web:

Tech Talk

Source: Mad Shrimps

Snapdragon 820 Features Qualcomm's New X12 Modem: Fastest LTE To Date

Subject: Mobile | September 30, 2015 - 02:33 PM |
Tagged: X12 Modem, SoC, snapdragon 820, qualcomm, phones, mu-mimo, mobile, LTE, cell phones

The upcoming Snapdragon 820 is shaping up to be a formidable SoC after the disappointing response to the previous flagship, the Snapdragon 810, which ended up in far fewer devices than expected for reasons still shrouded in mystery and speculation. One of the biggest aspects of the upcoming 820 is Qualcomm’s new X12 modem, which will provide the most advanced LTE connectivity seen to date when the SoC launches. The X12 features LTE Category 12 on the downlink for speeds of up to 600 Mbps, and Category 13 on the uplink for up to 150 Mbps.

LTE connectivity isn’t the only new thing here, as we see from this slide there is also tri-band Wi-Fi supporting 2x2 MU-MIMO.


“This is the first publicly announced processor for use in mobile devices to support LTE Category 12 in the downlink and Category 13 in the uplink, providing up to 33 percent and 200 percent improvement over its predecessor’s download and upload speeds, respectively.”
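Those percentages line up if the baseline is the Snapdragon 810's X10 modem at LTE Cat 9 speeds (450 Mbps down / 50 Mbps up); that baseline is my assumption rather than something Qualcomm states in the quote, but the math checks out (a sketch):

```python
# Assumed predecessor: X10 modem (Snapdragon 810), ~450/50 Mbps down/up
x10_down, x10_up = 450, 50     # Mbps
x12_down, x12_up = 600, 150    # Mbps, LTE Cat 12 down / Cat 13 up

print(f"Downlink improvement: +{(x12_down / x10_down - 1) * 100:.0f}%")  # +33%
print(f"Uplink improvement:   +{(x12_up / x10_up - 1) * 100:.0f}%")      # +200%
```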

The specifications for this new modem are densely packed:

  • Cat 12 (up to 600 Mbps) in the downlink
  • Cat 13 (up to 150 Mbps) in the uplink
  • Up to 4x4 MIMO on one downlink LTE carrier
  • 2x2 MU-MIMO (802.11ac)
  • Multi-gigabit 802.11ad
  • LTE-U and LTE+Wi-Fi Link Aggregation (LWA)
  • Next Gen HD Voice and Video calling over LTE and Wi-Fi
  • Call Continuity across Wi-Fi, LTE, 3G, and 2G
  • RF front end innovations
  • Advanced Closed Loop Antenna Tuner
  • Qualcomm RF360™ front end solution with CA
  • Wi-Fi/LTE antenna sharing

Rumored phones that could end up running the Snapdragon 820 with this X12 modem include the Samsung Galaxy S7 and around 30 other devices, though final word is of course pending on shipping hardware.

Source: Qualcomm