Introduction and First Impressions
The CRYORIG C7 is a compact air cooler for Intel and AMD processors, designed to fit anywhere a stock solution will. The cooler stands just 47 mm tall with a footprint close in size to an Intel stock cooler, yet CRYORIG claims this ultra-compact design will still outperform the stock solution.
An attractive design, the C7 is further sweetened by a $29.99 retail price, which places it in a favorable position in the compact CPU cooler market. Designs like these are rarely useful for enthusiasts, but there is certainly a need for good aftermarket options when overclocking isn't a consideration. There was a time when the stock Intel cooler was sufficient for many basic builds, and for some that may still be the case. But if you've spent a little more to get higher performance, a better heatsink can certainly help; and if you're an enthusiast, the stock cooler was never adequate anyway (even before Intel stopped shipping it with K-series CPUs).
In this review we'll find out if this small cooler can deliver on its performance promise, and see just how much noise it might make in the process.
Subject: General Tech | March 17, 2016 - 11:07 PM | Ken Addison
Tagged: podcast, video, amd, XConnect, gdc 2016, Vega, Polaris, navi, razer blade, Sulon Q, Oculus, vive, raja koduri, GTX 1080, msi, vortex, Intel, skulltrail, nuc
PC Perspective Podcast #391 - 03/17/2016
Join us this week as we discuss AMD's news from GDC, the MSI Vortex, and Q&A!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:28:26
News items of interest:
Hardware/Software Picks of the Week
Jeremy: QLEDs are real!
Subject: Shows and Expos | March 16, 2016 - 09:00 PM | Jeremy Hellstrom
Tagged: skulltrail, Skull Canyon, nuc, Intel, GDC
No, we are not talking about the motherboard from 2008 which was going to compete with AMD's QuadFX platform and worked out just as well. We are talking about a brand new Skull Canyon NUC powered by an i7-6770HQ with Iris Pro 580 graphics and up to 32GB of DDR4-2133. The NUC6i7KYK will also be the first system we have seen with a fully capable USB Type-C port: it will offer Thunderbolt 3, USB 3.1 and DisplayPort 1.2 connectivity. Not all simultaneously, but the flexibility is nothing less than impressive. It will also sport a full-size HDMI 2.0 port and a Mini DisplayPort 1.2 output, so you can still send video while using the Type-C port for data transfer. The port will also support external graphics card enclosures if you plan on using this as a gaming machine.
The internal storage subsystem is equally impressive: dual M.2 slots will give you great performance, and while the SD card slot is not as quick, it is still a handy feature. Connectivity is supplied by Intel Dual Band Wireless-AC 8260 (802.11ac) and Bluetooth 4.2, and an infrared sensor will let you use your favourite remote control if you set up the Skull Canyon NUC as a media server. All of these features fit in a device less than 0.7 litres in size, with your choice of two covers and support for your own if you desire to personalize your system. The price is not unreasonable: the MSRP for a barebones system is $650, and one with 16GB of memory, a 256GB SSD and Windows 10 should retail for about $1000. You can expect to see these for sale on Newegg in April and shipping in May.
Subject: Motherboards | March 16, 2016 - 02:43 PM | Jeremy Hellstrom
Tagged: Intel, gigabyte, X99P-SLI, LGA2011-v3
X99-based systems do not come cheap; with some boards costing well over $300 and very few under $200, the X99P-SLI could be considered mid-range. The board doesn't skimp on much at this price either: an M.2 slot, a pair of USB 3.1 ports, OP-AMP based onboard sound, a conveniently placed header for USB 3.0 on your front panel and, yes, a single SATA Express (SEx) port. Hardware Canucks breaks down how the PCIe slots are shared, along with many of the board's other features, in their review, which you should check out; the board was determined to be a Dam Good Value.
"The X99 platform may not be known for affordability but Gigabyte's new X99P-SLI aims to change that opinion with USB 3.1, M.2, great overclocking, quad GPU support and more for less than $250."
Here are some more Motherboard articles from around the web:
- Maximus VIII Formula LGA 1151 @ [H]ard|OCP
- ASRock E3V5 WS @ Kitguru
- ASRock Fatal1ty Z170 Gaming-ITX/AC @ Modders-Inc
- Supermicro X11SAE Workstation @ eTeknix
- ASUS B150 PRO GAMING/AURA @ eTeknix
- Gigabyte 990FX-Gaming @ Kitguru
Things are about to get...complicated
Earlier this week, the team behind Ashes of the Singularity released an updated version of its early access game, expanding its features and capabilities. With support for DirectX 11 and DirectX 12, and the addition of multiple graphics card support, the game featured a benchmark mode that got quite a lot of attention. We saw stories based on that software posted by Anandtech, Guru3D and ExtremeTech, all of which had varying views on the advantages of one GPU or another.
That isn’t the focus of my editorial here today, though.
Shortly after the initial release, a discussion began around results from the Guru3D story that measured frame time consistency and smoothness with FCAT, a capture-based testing methodology much like the Frame Rating process we have here at PC Perspective. In a post on ExtremeTech, Joel Hruska claims that the results and conclusion from Guru3D are wrong because the FCAT capture method assumes the captured output matches what the user actually experiences. Maybe everyone is wrong?
First a bit of background: I have been working with Oxide and the Ashes of the Singularity benchmark for a couple of weeks, hoping to get a story that I was happy with and felt was complete, before having to head out the door to Barcelona for the Mobile World Congress. That didn’t happen – such is life with an 8-month old. But, in my time with the benchmark, I found a couple of things that were very interesting, even concerning, that I was working through with the developers.
FCAT overlay as part of the Ashes benchmark
First, the initial implementation of the FCAT overlay, which Oxide should be PRAISED for including since we don't have, and likely won't have, a universal DX12 variant of it, was implemented incorrectly, with duplicated color swatches that made the results from capture-based testing inaccurate. I don't know if Guru3D used that version to do its FCAT testing, but I was able to get some updated EXEs of the game from the developer in order to get the overlay working correctly. Once that was corrected, I found yet another problem: an issue of frame presentation order on NVIDIA GPUs that likely has to do with asynchronous shaders. Whether that issue is on the NVIDIA driver side or the game engine side is still being investigated by Oxide, but it's interesting to note that this problem couldn't have been found without a proper FCAT implementation.
With all of that water under the bridge, I set out to benchmark this latest version of Ashes under DX12 to measure performance across a range of AMD and NVIDIA hardware. The data showed some abnormalities, though. Some results just didn't make sense in the context of what I was seeing in the game and what the overlay results were indicating. It appeared that Vsync (vertical sync) was behaving differently than in any other game I had seen on the PC.
On the NVIDIA platform, tested using a GTX 980 Ti, the game seemingly randomly started up with Vsync on or off, with no clear indication of what was causing it, despite the in-game settings being set how I wanted them. But the Frame Rating capture data was still usable – just because Vsync is enabled doesn't mean you can't analyze the results from captured data. I have written stories on what Vsync-enabled captured data looks like and what it means as far back as April 2013. Obviously, to get the best and most relevant data from Frame Rating, setting vertical sync off is ideal. Running into more frustration than answers, I moved over to an AMD platform.
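For readers curious how capture-based analysis works under the hood, here is a minimal sketch of the idea: the overlay paints each rendered frame with the next color in a known sequence, and the analysis groups captured scanlines by color to see how much of each game frame actually made it to the display. The four-color palette and scanline counts below are illustrative assumptions of mine, not FCAT's actual format.

```python
# Sketch of how capture-based tools like FCAT infer frame delivery from a
# per-frame color overlay. The repeating 4-color sequence and the 1080-line
# capture are illustrative assumptions, not FCAT's real palette or format.

OVERLAY_SEQUENCE = ["red", "green", "blue", "white"]  # advances each game frame

def frame_fractions(scanline_colors):
    """Group contiguous scanlines by overlay color and return, for each
    game frame visible in this captured display frame, the fraction of
    the screen it occupied."""
    bands = []
    for color in scanline_colors:
        if bands and bands[-1][0] == color:
            bands[-1][1] += 1
        else:
            bands.append([color, 1])
    total = len(scanline_colors)
    return [(color, count / total) for color, count in bands]

# A captured 1080-line display frame where one game frame covered the top
# 40% and the next covered the bottom 60% (i.e. a mid-frame tear):
capture = ["red"] * 432 + ["green"] * 648
print(frame_fractions(capture))  # [('red', 0.4), ('green', 0.6)]
```

Note what goes wrong if the overlay duplicates a color across two consecutive frames, as in the broken Ashes implementation: the two bands merge into one and a whole frame becomes invisible to the analysis, which is exactly why the duplicated swatches made capture results inaccurate.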
Subject: Systems, Mobile | February 25, 2016 - 11:42 AM | Ryan Shrout
Tagged: MWC, MWC 2016, Huawei, matebook, Intel, core m, Skylake, 2-in-1
Huawei is getting into the PC business with the MateBook 2-in-1, built in the same vein as the Microsoft Surface Pro 4. Can they make a splash with impressive hardware and Intel Core m processors?
Subject: General Tech | February 19, 2016 - 01:34 PM | Jeremy Hellstrom
Tagged: Intel, delay, 10nm
Today Intel insisted that the rumours of a further delay in their scheduled move to a 10nm process are greatly exaggerated. They had originally hoped to make this move in the latter half of this year, but difficulties in the design process moved that target into 2017. They have assured The Inquirer and others that the speculation, based on information in a job vacancy posting, is inaccurate and that they still plan on releasing processors built on a 10nm node by the end of next year. You can still expect Kaby Lake before the end of the year, and Intel also claims to have found promising techniques to shrink their processors below 10nm in the future.
"INTEL HAS moved to quash speculation that its first 10nm chips could be pushed back even further than the second half of 2017, after already delaying them from this year."
Here is some more Tech News from around the web:
- 519070 or blank: The PINs that can pwn 80k online security cams @ The Register
- 6 Excellent Lightweight Linuxes for x86 and ARM @ Linux.com
- Samsung launches 14nm SoC for mid-range smartphones @ DigiTimes
- PC sales aren't doing so great – but good God, you're buying mountains of Nvidia graphics cards @ The Register
- Your anger is our energy, says Microsoft as it fixes Surface @ The Register
- Under-fire Apple backs down, crafts new iOS to kill security safeguard @ The Register
- iPhone 5SE price, release date, specs and rumours @ The Inquirer
- Firefox 2.0 for iOS adds 3D Touch and better password management @ The Inquirer
- Original 1977 Star Wars 35mm Print Has Been Restored and Released Online @ Slashdot
- Ventev Chargesync Alloy Cable @ TechwareLabs
- A Wireless Router That Means Business: Synology RT1900ac Review @ Techgage
Subject: Storage | February 14, 2016 - 02:51 PM | Allyn Malventano
Tagged: vnand, ssd, Samsung, nand, micron, Intel, imft, 768Gb, 512GB, 3d nand, 384Gb, 32 Layer, 256GB
You may have seen a wave of Micron 3D NAND news posts these past few days, and while many are repeating the 11-month-old news with talk of 10TB (2.5") and 3.5TB (M.2) form factor SSDs, I'm here to dive into the bigger implications of what the upcoming (and future) generations of Intel / Micron flash will mean for SSD performance and pricing.
Remember that with the way these capacity increases are going, the only way to get a high performance and high capacity SSD on the cheap in the future will be to actually buy those higher capacity models. With such a large per-die capacity, smaller SSDs (like 128GB / 256GB) will suffer significantly slower write speeds. Taking this upcoming Micron flash as an example, a 128GB SSD will contain only four flash memory dies, and as I wrote about back in 2014, such an SSD would likely see HDD-level sequential write speeds of 160MB/sec. Other SSD manufacturers already recognize this issue and are taking steps to correct it. At Storage Visions 2016, Samsung briefed me on the upcoming SSD 750 Series, which will use planar 16nm NAND to produce 120GB and 250GB capacities. The smaller die capacities of these models will enable respectable write performance, and will also allow Samsung to discontinue the 120GB 850 EVO as that line transitions to higher capacity 48-layer VNAND. Getting back to this Micron announcement, we have some new info that bears analysis, and it pertains to the now-announced page and block sizes:
256Gb MLC: 16KB Page / 16MB Block / 1024 Pages per Block
384Gb TLC: 16KB Page / 24MB Block / 1536 Pages per Block
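Those pages-per-block figures follow directly from the page and block sizes, and the four-die / 160MB/sec point from earlier works out the same way. Note that the 40MB/sec per-die program rate below is my own assumption, chosen to match the figure quoted above, not a Micron spec:

```python
# Checking the arithmetic above: pages per block = block size / page size,
# and dies per drive = drive capacity / per-die capacity.
KB, MB = 1024, 1024 * 1024

mlc_pages = (16 * MB) // (16 * KB)   # 256Gb MLC: 16MB block / 16KB page
tlc_pages = (24 * MB) // (16 * KB)   # 384Gb TLC: 24MB block / 16KB page
print(mlc_pages, tlc_pages)          # 1024 1536

dies = (128 * 8) // 256              # 128GB drive built from 256Gb (32GB) dies
seq_write = dies * 40                # MB/s, assuming ~40MB/s program rate per die
print(dies, seq_write)               # 4 160
```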
To understand what these numbers mean, using the MLC line above, imagine a 16MB CD-RW (Block) that can hold 1024 individual 16KB 'sessions' (Pages). Each 16KB Page can be added individually over time, and just as files on a CD-RW could be modified by writing a new copy in the remaining space, flash can do so by writing a new Page and ignoring the out-of-date copy. Where the rub comes in is when that CD-RW (Block) is completely full. The process at this point is actually very similar, in that the Block must be completely emptied before the erase command (which wipes the entire Block) is issued. The data has to go somewhere, which typically means writing to empty Blocks elsewhere on the SSD (and in worst-case scenarios, those too may need clearing before that is possible), and this moving and erasing takes time for the die to accomplish. Just as wiping a CD-RW took much longer than writing a single file to it, erasing a Block typically takes 3-4x as long as programming a Page.
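The CD-RW analogy above can be sketched as a toy model. The tiny 4-page block here is purely illustrative (real blocks hold 1024+ pages), but the mechanics are the same: updates append a fresh copy and invalidate the old one, and erasing first requires relocating every still-valid page:

```python
# Toy model of the CD-RW analogy: pages accumulate in a block, an update
# appends a new copy and marks the old one stale, and erasing the block
# means first copying out everything still valid. Sizes are illustrative.

class Block:
    def __init__(self, pages_per_block=1024):
        self.capacity = pages_per_block
        self.pages = []            # list of [logical_id, valid] entries

    def write(self, logical_id):
        if len(self.pages) >= self.capacity:
            raise RuntimeError("block full: must be erased before reuse")
        # Flash can't overwrite in place; invalidate any older copy instead.
        for entry in self.pages:
            if entry[0] == logical_id:
                entry[1] = False
        self.pages.append([logical_id, True])

    def erase(self):
        # Valid pages must be relocated before the whole block is wiped.
        relocated = [lid for lid, valid in self.pages if valid]
        self.pages = []
        return relocated

blk = Block(pages_per_block=4)
for lid in [1, 2, 1, 3]:       # logical page 1 is written, then updated
    blk.write(lid)
moved = blk.erase()
print(moved)  # [2, 1, 3] -- the stale first copy of page 1 is simply dropped
```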
With that explained, what's significant here are the growing Page and Block sizes of this higher capacity flash. Modern OS file systems have a minimum bulk access size of 4KB, and Windows versions since Vista align their partitions by rounding up to the next 1MB increment from the start of the disk. These changes are what enabled HDDs to transition to Advanced Format, which made data storage more efficient by bringing the increment up from the 512 Byte sector to 4KB. While most storage devices still use 512B addressing, it is assumed that 4KB should be the minimum random access seen most of the time. Wrapping this all together, the Page size (minimum read or write) is 16KB for this new flash, which is 4x the accepted 4KB minimum OS transfer size. This means that power users heavy on their page file, running VMs, or performing any other random-write-heavy operations over time will see an amplified wear effect on this flash. The additional shuffling of data that must take place for each 4KB write translates to lower host random write speeds when compared to lower capacity flash with smaller Page sizes closer to that 4KB figure.
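As a rough sketch of that 4KB-vs-16KB mismatch (my own back-of-envelope math, deliberately ignoring the write coalescing a real controller's cache would perform):

```python
# Back-of-envelope: a 4KB host write into flash with 16KB pages still programs
# a whole page, so page-level write amplification is (pages * page size) / host
# data. Real controllers coalesce writes in cache; this is the worst case.
PAGE_KB = 16

def page_write_amplification(host_write_kb):
    pages = -(-host_write_kb // PAGE_KB)       # ceiling division
    return pages * PAGE_KB / host_write_kb

print(page_write_amplification(4))    # 4.0 -- worst case for 4KB random writes
print(page_write_amplification(16))   # 1.0 -- page-aligned writes are "free"
```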
A rendition of 3D IMFT Floating Gate flash, with inset pulling back some of the tunnel oxide layer to show the location of the floating gate. Pic courtesy Schiltron.
Fortunately for Micron, their choice to carry Floating Gate technology into their 3D flash has netted them some impressive endurance benefits over competing Charge Trap Flash. One such benefit is a claimed 30,000 P/E (Program / Erase) cycle endurance rating. Planar NAND had dropped to the 3,000 range at its smallest shrinks, mainly because the channel had become so small that it could only store a handful of electrons, amplifying the (negative) effects of electron leakage. Even back in the 50nm days, MLC ran at ~10,000 cycle endurance, so 30,000 is no small feat here. The key is that using the same Floating Gate tech that was so good at controlling leakage in planar NAND, on a new 3D channel that can store far more electrons, enables excellent endurance that may actually exceed that of Samsung's Charge Trap Flash equipped 3D VNAND. This should effectively negate the endurance hit from the larger Page sizes discussed above, but the potential small random write performance hit still stands, with a possible remedy being to crank up the Over-Provisioning of SSDs (AKA throwing flash at the problem). Higher OP means fewer active Pages per Block and a reduction in the data shuffling forced by smaller writes.
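A heavily simplified model of why extra OP reduces that shuffling: with more spare flash, the average block picked for an erase holds fewer still-valid pages, so less data must be copied out per erase. The steady-state assumption below (a block's valid fraction roughly equals the user-to-physical capacity ratio) is my own simplification for illustration, not vendor data:

```python
# Simplified over-provisioning model: valid_fraction ~= user / physical
# capacity at steady state, and every erase must relocate that fraction
# of the block's pages. Illustrative only; real garbage collectors pick
# the least-valid blocks and do better than this.

def pages_relocated_per_erase(op_percent, pages_per_block=1024):
    physical = 1.0 + op_percent / 100.0   # physical flash per unit of user data
    valid_fraction = 1.0 / physical       # avg share of a block still valid
    return valid_fraction * pages_per_block

for op in (7, 28, 50):
    print(op, round(pages_relocated_per_erase(op)))
```

Even in this crude model, going from a consumer-typical ~7% OP to 28% drops the relocation work per erase noticeably, which is the "throwing flash at the problem" trade-off in action.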
A 25nm flash memory die. Note the support logic (CMOS) along the upper left edge.
One final thing helping out Micron here is that their Floating Gate design also enables a shift of 75% of the CMOS circuitry to a layer *underneath* the flash storage array. This logic is typically part of what you see 'off to the side' of a flash memory die. Layering CMOS logic in such a way is likely thanks to Intel's partnership and CPU development knowledge. Moving this support circuitry to the bottom layer of the die makes for less area per die dedicated to non-storage, more dies per wafer, and ultimately lower cost per chip/GB.
Samsung's Charge Trap Flash, shown in both planar and 3D VNAND forms.
One final thing before we go. If we know anything about how the Intel / Micron duo functions, it is that once they get that freight train rolling, it leads to relatively rapid advances. In this case, the changeover to 3D has taken them a while to perfect, but once production gains steam, we can expect to see some *big* advances. Since Samsung launched their 3D VNAND, their gains have been mostly iterative in nature (24, 32, and most recently 48 layers). I'm not yet at liberty to say how the second generation of IMFT 3D NAND will achieve it, but I can say that it appears the next iteration after this 32-layer 256Gb (MLC) / 384Gb (TLC) per die will *double* to 512Gb / 768Gb (you are free to do the math on what that means for layer count). Remember back in the day when Intel launched new SSDs at a fraction of the cost/GB of the previous generation? That might just be happening again within the next year or two.
Subject: General Tech | February 12, 2016 - 12:28 PM | Jeremy Hellstrom
Tagged: Intel, wind river, telecoms
The next dream of telecoms providers is network function virtualization, the ability to virtualize customers' hardware instead of shipping them a device. The example given to The Register was DVRs: instead of shipping a cable box with recording capability to the customer, the DVR would be virtualized on the telco's internal infrastructure. You could sign up for a DVR VM, point your smart TV at the appropriate IP address and plug in a USB disk for local storage.
The problem has been the hardware available to the telco; the routers simply did not have the power to provide a consistent internet or cable connection, let alone add virtual devices to their systems. At the upcoming MWC, Wind River will be showing off Titanium Servers for virtualizing customer premises equipment, with enough processing power and VM optimizations that these types of services could be supported.
"Intel is starting to deliver on its vision of x86-powered modem/routers in the home, as its Wind River subsidiary releases a server dedicated to delivery of functions to virtual customer premises equipment (CPE)."
Here is some more Tech News from around the web:
- Gmail growls with more bad message flags to phoil phishers @ The Register
- Qualcomm outs Snapdragon X16 LTE modem with 1Gbps download speed support @ The Inquirer
- ARM pumps fist as profits soar, warns of weaker hand in 2016 @ The Register
- Microsoft axes ‘dozens’ more from former Nokia phone biz @ The Register
- Pwn2Own 2016 Won't Attack Firefox (Because It's Too Easy) @ Slashdot
- MWC 2016: What to expect from Samsung, Huawei, LG and more @ The Inquirer
- An Introduction to SELinux @ Linux.com
- Windows 10 Media Treasure Hunt @ Tech ARP
- Pimp my desk: Gadgets to make your work life easier @ The Inquirer
Subject: Processors | February 6, 2016 - 09:00 PM | Scott Michaud
Tagged: Skylake, overclocking, asrock, Intel, gskill
I recently came across a post at PC Gamer that looked at the extreme overclocking leaderboard of the Skylake-based Intel Core i7-6700K. Obviously, these competitions will probably never end as long as higher numbers are possible on parts that are interesting for one reason or another. Skylake is the new chip on the liquid nitrogen block. It cannot reach frequencies as high as its predecessors, but teams still compete to get as high as possible on that specific SKU.
The current world record for a single-threaded Intel Core i7-6700K overclock is 7.02566 GHz, achieved with a reported voltage of 4.032V. For comparison, the i7-6700K typically runs at around 1.3V under load. This record was apparently set about a month ago, on January 11th.
This is obviously a huge increase: about three times the voltage for an extra ~3 GHz. For comparison, the current world record across all known CPUs is held by the AMD FX-8370 at 8.72278 GHz. Many Pentium 4-era processors make up the top 15 places as well, as those parts were designed for high clock rates with relatively low IPC.
The rest of the system used G.SKILL Ripjaws 4 DDR4 RAM, an ASRock Z170M OC Formula motherboard, and an Antec 1300W power supply. It used an NVIDIA GeForce GT 630 GPU, which offloaded graphics from the integrated chip but otherwise interfered as little as possible. They also used Windows XP, because why not, I guess? I assume that it does the least amount of work to boot, allowing a quicker verification, but that is only a guess.