Subject: Graphics Cards, Processors, Mobile | August 12, 2015 - 07:30 AM | Ryan Shrout
Tagged: snapdragon 820, snapdragon, siggraph 2015, Siggraph, qualcomm, adreno 530, adreno
Despite the success of the Snapdragon 805 and even the 808, Qualcomm’s flagship Snapdragon 810 SoC had a tumultuous lifespan. Rumors and stories about the chip’s inability to run in phone form factors without overheating and/or draining battery life were rampant, despite the company’s insistence that the problem was fixed with a very quick second revision of the part. Very few devices used the 810; instead we saw most of the flagship smartphones use the slightly cut-back SD 808 or the SD 805.
Today at Siggraph, Qualcomm begins the reveal of its new flagship SoC, the Snapdragon 820. As the event coinciding with the launch is a graphics-specific show, Qualcomm is focusing on a high-level overview of the graphics portion of the Snapdragon 820: the updated Adreno 5xx architecture and associated designs, and a new camera image signal processor (ISP) aimed at improving the quality of photos and video recording on our mobile devices.
A modern SoC from Qualcomm features many different processors working in tandem to shape the user experience on the device. While the only details we are getting today focus on the Adreno 530 GPU and Spectra ISP, other segments like wireless connectivity, video processing, and digital signal processing are important parts of the computing story. And we are well aware that Qualcomm is readying its own 64-bit processor architecture for the Kryo CPU rather than implementing the off-the-shelf ARM cores used in the 810.
We also know that Qualcomm is targeting a “leading edge” FinFET process technology for SD 820 and, though we haven’t been able to confirm anything, it looks very likely that this chip will be built on the same Samsung 14nm line that also built the Exynos 7420.
But over half of the processing on the upcoming Snapdragon 820 will focus on visual processing: from graphics to gaming to UI animations to image capture and video output, this chip’s die will be dominated by high-performance visuals.
Qualcomm’s list of target goals for SD 820 visuals reads as you would expect: wanting perfection in every area. Wouldn’t we all love a phone or tablet that takes perfect photos every time, always focusing on the right things (or everything), with exceptional low-light performance? Though a lesser-known problem for consumers, having accurate color reproduction from capture, through processing, and on to the display would be a big advantage. And of course, we all want graphics performance that impresses and a user interface that is smooth and reliable while enabling new experiences that we haven’t even thought of in the mobile form factor. Qualcomm thinks that Snapdragon 820 will be able to deliver on all of that.
Subject: Processors | August 11, 2015 - 06:39 PM | Jeremy Hellstrom
Tagged: skylake-u, Intel
Fanless Tech just posted slides of Skylake-U, the ultraportable version of Skylake, all of which have an impressively low TDP of 15W that can be configured down to 10W or, in some cases, all the way to 7.5W. As with previous generations, all are BGA parts, which means you will not be able to upgrade them, nor are you likely to see them in desktops; not necessarily a bad thing for this segment of the mobile market, but certainly worth noting.
There will be two i7 models and two i5s along with a single i3 version; the top models, the Core i7-6600U and Core i5-6300U, sport a slightly increased frequency and support for vPro. Those two models, along with the i7-6500U and i5-6200U, will have Intel HD Graphics 520 with frequencies of 300/1050 MHz for the i7s and 300/1000 MHz for the i5 and i3 chips.
Along with the Core models will come a single Pentium chip, the 4405U, and a pair of Celerons, the 3955U and 3855U. They will have HD 510 graphics with clocks of 300/950 MHz, or 300/900 MHz for the Celerons, and you will see slight reductions in the PCIe and storage subsystems on the 4405U and 3855U. The naming scheme is less confusing than some previous generations, a boon for those with family or friends looking for a new laptop who are perhaps not quite as obsessed with processors as we are.
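To keep the lineup straight, here is the graphics configuration above summarized as a small lookup table (a sketch based only on the leaked slides; the single i3 model's exact name isn't listed, so it is labeled generically):

```python
# Skylake-U integrated graphics, per the leaked slides:
# SKU -> (GPU, base clock MHz, max clock MHz)
skylake_u_gfx = {
    "Core i7-6600U": ("HD 520", 300, 1050),
    "Core i7-6500U": ("HD 520", 300, 1050),
    "Core i5-6300U": ("HD 520", 300, 1000),
    "Core i5-6200U": ("HD 520", 300, 1000),
    "Core i3 (U)":   ("HD 520", 300, 1000),  # single i3 model, name not given
    "Pentium 4405U": ("HD 510", 300, 950),
    "Celeron 3955U": ("HD 510", 300, 900),
    "Celeron 3855U": ("HD 510", 300, 900),
}

def gfx_summary(sku):
    """Format one SKU's graphics configuration for quick comparison."""
    gpu, base, boost = skylake_u_gfx[sku]
    return f"{sku}: {gpu} @ {base}-{boost} MHz"

print(gfx_summary("Core i7-6600U"))  # Core i7-6600U: HD 520 @ 300-1050 MHz
```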
Subject: Graphics Cards, Processors, Mobile, Shows and Expos | August 10, 2015 - 09:01 AM | Scott Michaud
Tagged: vulkan, spir, siggraph 2015, Siggraph, opengl sc, OpenGL ES, opengl, opencl, Khronos
When the Khronos Group announced Vulkan at GDC, they mentioned that the API is coming this year, and that this date was intended to under-promise and over-deliver. Recently, fans were hoping that it would be published at SIGGRAPH, which officially began yesterday. Unfortunately, Vulkan was not released. It does hold a significant chunk of the news, however. Also, it's not like DirectX 12 is holding a commanding lead at the moment: its headers have only been public for a few months, and the code samples are less than two weeks old.
The organization made announcements for six products today: OpenGL, OpenGL ES, OpenGL SC, OpenCL, SPIR, and, as mentioned, Vulkan. They wanted to make their commitment to all of their standards clear. Vulkan is urgent, but some developers will still want the framework of OpenGL: bind what you need to the context, then issue a draw and, if you do it wrong, the driver will often clean up the mess for you anyway. The briefing was structured to make it evident that OpenGL is still on their mind, which is likely why they made sure three OpenGL logos greeted me in their slide deck as early as possible. They are also taking and closely examining feedback about who wants to use Vulkan or OpenGL, and why.
As for Vulkan, confirmed platforms have been announced. Vendors have committed to drivers on Windows 7, 8, 10, Linux, including Steam OS, and Tizen (OSX and iOS are absent, though). Beyond all of that, Google will accept Vulkan on Android. This is a big deal, as Google, despite its open nature, has been avoiding several Khronos Group standards. For instance, Nexus phones and tablets do not have OpenCL drivers, although Google isn't stopping third parties from rolling it into their devices, like Samsung and NVIDIA. Direct support of Vulkan should help cross-platform development as well as, and more importantly, target the multi-core, relatively slow threaded processors of those devices. This could even be of significant use for web browsers, especially in sites with a lot of simple 2D effects. Google is also contributing support from their drawElements Quality Program (dEQP), which is a conformance test suite that they bought back in 2014. They are going to expand it to Vulkan, so that developers will have more consistency between devices -- a big win for Android.
While we're not done with Vulkan, one of the biggest announcements is OpenGL ES 3.2, and it fits here nicely. At around the time that OpenGL ES 3.1 brought compute shaders to the embedded platform, Google launched the Android Extension Pack (AEP). This absorbed OpenGL ES 3.1 and added tessellation, geometry shaders, and ASTC texture compression to it. It also created more tension between Google and cross-platform developers, who felt like Google was trying to pull developers away from the Khronos Group. Today, OpenGL ES 3.2 was announced, and it includes each of the AEP features plus a few more (like “enhanced” blending). Better yet, Google will support it directly.
Next up are the desktop standards, before we finish with a resurrected embedded standard.
OpenGL has a few new extensions added. One interesting one is the ability to assign locations to multi-samples within a pixel. There is a whole list of sub-pixel layouts, such as rotated grid and Poisson disc. Apparently this extension allows developers to choose it, as certain algorithms work better or worse for certain geometries and structures. There were probably vendor-specific extensions for a while, but now it's a ratified one. Another extension allows “streamlined sparse textures”, which helps manage data where the number of unpopulated entries outweighs the number of populated ones.
OpenCL 2.0 was given a refresh, too. It contains a few bug fixes and clarifications that will help it be adopted. C++ headers were also released, although I cannot comment much on it. I do not know the state that OpenCL 2.0 was in before now.
And this is when we make our way back to Vulkan.
SPIR-V, the intermediate representation that runs on the GPU (or other offload device, including the other cores of a CPU) in OpenCL and Vulkan, is seeing a lot of community support. Projects are under way to allow developers to write GPU code in several interesting languages: Python, .NET (C#), Rust, Haskell, and many more. The slide lists nine that the Khronos Group knows about, but those four are pretty interesting. Again, this means that you can write code in the aforementioned languages and have it run directly on a GPU. Curiously missing is HLSL, and the President of Khronos Group agreed that it would be a useful language. The ability to cross-compile HLSL into SPIR-V would mean that shader code written for DirectX 9, 10, 11, and 12 could be compiled for Vulkan. He expects that it won't take long for a project to start, and one might already be happening somewhere beyond his ability to Google for it. Regardless, those who are afraid to program in the C-like GLSL and HLSL shading languages might find C# and Python to be a bit more their speed, and SPIR-V seems to be making that happen.
As mentioned, we'll end on something completely different.
For several years, the OpenGL SC has been on hiatus. This group defines standards for graphics (and soon GPU compute) in “safety critical” applications. For the longest time, this meant aircraft. The dozens of planes (which I assume meant dozens of models of planes) that adopted this technology were fine with a fixed-function pipeline. It has been about ten years since OpenGL SC 1.0 launched, which was based on OpenGL ES 1.0. SC 2.0 is planned to launch in 2016, which will be based on the much more modern OpenGL ES 2 and ES 3 APIs that allow pixel and vertex shaders. The Khronos Group is asking for participation to direct SC 2.0, as well as a future graphics and compute API that is potentially based on Vulkan.
The devices that this platform intends to target are: aircraft (again), automobiles, drones, and robots. There are a lot of ways that GPUs can help these devices, but they need a good API to certify against. It needs to withstand more than an Ouya, because crashes could be much more literal.
Subject: Processors | August 8, 2015 - 05:55 PM | Scott Michaud
Tagged: Skylake, Intel, delid, CPU die, cpu, Core i7-6700K
PC Watch, a Japanese computer hardware website, acquired at least one Skylake i7-6700K and removed the heatspreader. With access to the bare die, they took some photos and tested a few thermal compound replacements, which quantifies how good (or bad) Intel's default thermal grease is. As evidenced by the launch of Ivy Bridge and, later, Devil's Canyon, the choice of thermal interface between the die and the lid can make a fairly large difference in temperatures and overclocking.
Image Credit: PC Watch
They chose the vice method for the same reason that Morry chose it in his i7-4770K delid article last year. This basically uses a slight amount of torque and external pressure or shock to pop the lid off the processor. Despite how it looks, this is considered less traumatic than using a razor blade to cut the seal, because human hands are not the most precise instruments and a slight miss could damage the PCB. PC Watch apparently needed to use a wrench to get enough torque on the vice, which is transferred to the processor as pressure.
Image Credit: PC Watch
Of course, Intel could always offer enthusiasts a choice of thermal compounds before they put the lid on, which would be safest. How about that, Intel?
Image Credit: PC Watch
With the lid off, PC Watch mentioned that the thermal compound seems to be roughly the same as Devil's Canyon's, which is quite good. They also noticed that the PCB is significantly thinner than Haswell's, dropping from about 1.1mm to about 0.8mm. For some benchmarks, they tested the chip with the stock interface, an aftermarket compound called Prolimatech PK-3, and a liquid metal alloy called Coollaboratory Liquid Pro.
Image Credit: PC Watch
At 4.0 GHz, PK-3 dropped the temperature by about 4 degrees Celsius, while Liquid Metal knocked it down 16 degrees. At 4.6 GHz, PK-3 continued to give a delta of about 4 degrees, while Liquid Metal widened its gap to 20 degrees. It reduced an 88 C temperature to 68 C!
Image Credit: PC Watch
There are obviously limitations to how practical this is. If you were concerned about thermal wear on your die, you probably wouldn't forcibly pry its heatspreader off the PCB in the first place. That would be like performing surgery on yourself to remove your own appendix, which wasn't inflamed, just in case. Also, from an overclocking standpoint, heat doesn't scale linearly with frequency. Twenty degrees is a huge gap, but even a hundred MHz could eat it up, depending on your die.
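That last point is worth unpacking: the textbook CMOS dynamic power model says heat output scales linearly with frequency but with the square of voltage, and higher clocks usually demand more voltage. A minimal sketch (the clock and voltage figures below are hypothetical, not measurements from PC Watch):

```python
def dynamic_power_ratio(f_old, f_new, v_old, v_new):
    """Classic CMOS dynamic power model: P is proportional to C * V^2 * f."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Hypothetical example: a 4.6 -> 4.7 GHz bump that needs 1.30 -> 1.36 V
ratio = dynamic_power_ratio(4.6, 4.7, 1.30, 1.36)
print(f"~{(ratio - 1) * 100:.0f}% more heat for ~2% more clock")  # ~12% more heat for ~2% more clock
```

This is why a 20-degree improvement can vanish behind a single 100 MHz step: the voltage bump that step requires multiplies the heat output, not just adds to it.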
It's still interesting for those who try, though.
Subject: Processors | August 5, 2015 - 03:20 PM | Jeremy Hellstrom
Tagged: sunrise point, Skylake, Intel, ddr4, Core i7-6700K, core i7, 6700k, 14nm
By now you have read through Ryan's review of the new i7-6700K and the ASUS Z170-A as well as the related videos and testing; if not, we will wait for you to flog yourself in punishment and finish reading the source material. Now that you are ready, take a look at what some of the other sites thought about the new Skylake chip and Sunrise Point chipset. For instance, [H]ard|OCP managed to beat Ryan's best overclock, hitting 4.7GHz/3600MHz at 1.32v vCore with some toasty but acceptable CPU temperatures. The full review is worth reading, and if some of the rumours going around are true you should take [H]'s advice: if you think you want one, buy it now.
"Today we finally get to share with you our Intel Skylake experiences. As we like to, we are going to focus on Instructions Per Clock / IPC and overclocking this new CPU architecture. We hope to give our readers a definitive answer to whether or not it is time to make the jump to a new desktop PC platform."
Here are some more Processor articles from around the web:
- Intel's Core i7-6700K 'Skylake' @ The Tech Report
- Asus' Z170-A motherboard @ The Tech Report
- Intel Core i7-6700K & i5-6600K Skylake CPU @ Kitguru
- Asus Maximus VIII Hero @ Kitguru
- A Preview Of Intel’s First Skylake Processors & Z170 Chipset @ Techgage
- Intel Core I7 6700K Review, Skylake is Falling! @ Bjorn3d
- Intel 6th Generation Core i7 6700K Review @ OCC
Subject: Processors | August 3, 2015 - 10:58 AM | Sebastian Peak
Tagged: Skylake, leak, Intel, i7-6700K, Core i7-6700K
Leaked photos of what appear to be the full retail box versions of the upcoming Intel Core i7-6700K and i5-6600K "Skylake" unlocked CPUs have appeared on imgur, making the release of these processors feel ever closer.
Is this really the new box graphic for the unlocked i7?
While the authenticity of these photos can't be verified through any official channel, they certainly do look real. We have heard of Skylake leaks - a.k.a. Skyleaks - for a while now, and the rumors point to an August release for these new LGA 1151 chips (sorry LGA 1150 motherboard owners!).
Looks real. But we do live in a Photoshop world...
We only have about four weeks to wait at the most if an August release is, in fact, imminent. If not, I blame Jeremy for getting our hopes up with terms like Skyleak™. I encourage you to direct all angry correspondence to his inbox.
These boxes are very colorful (or colourful, if you will)
Chart taken from WCCFTech
The pricing of the top i7 part at $316 would be a welcome reduction from the current $339 retail price of the i7-4790K. Whether the 6700K can beat out that Devil's Canyon part remains to be seen. Doubtless we will have benchmarks and complete coverage once Intel makes any official release of these parts.
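Quick arithmetic on those two prices (using the figures above; actual street pricing will vary):

```python
i7_4790k = 339  # current Devil's Canyon retail price, per the chart
i7_6700k = 316  # rumored Skylake tray price
savings = i7_4790k - i7_6700k
print(f"${savings} cheaper, a {savings / i7_4790k * 100:.1f}% reduction")  # $23 cheaper, a 6.8% reduction
```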
Subject: Processors | July 31, 2015 - 03:37 PM | Jeremy Hellstrom
Tagged: iris pro, Broadwell, linux, i7-5775C
The graphics cores of new CPUs used to have problems on Linux at launch, but recently this has become much less of an issue. The newly released Iris Pro on the i7-5775C follows this trend, as you can see in the benchmarks at Phoronix. OpenGL performance is a tiny bit slower overall on Linux, apart from OpenArena, but not enough to ruin your gaming experience. With a new kernel on the horizon and a community working with the new GPU, you can expect the performance gap to narrow. Low-cost gaming on a Linux machine becomes more attractive every day.
"Resulting from the What Windows 10 vs. Linux Benchmarks Would You Like To See and The Phoronix Test Suite Is Running On Windows 10, here are our first benchmarks comparing the performance of Microsoft's newly released Windows 10 Pro x64 against Fedora 22 when looking at the Intel's OpenGL driver performance across platforms."
Here are some more Processor articles from around the web:
- Intel Core i7 5775C Review @ OCC
- Intel Core i7 5775C: Once Going, This Broadwell CPU Is Great On Linux @ Phoronix
- Intel "Broadwell" Core i7 5775C Review @HiTech Legion
- Comparing The Power/Performance Of A NetBurst Celeron & Pentium 4 To Broadwell's Core i7 5775C @ Phoronix
Subject: Processors | July 22, 2015 - 09:56 PM | Scott Michaud
Tagged: amd, APU, Godavari, a8, a8-7670k
AMD's Godavari architecture is the last one based on Bulldozer, which will hold the company's product stack over until their Zen architecture arrives in 2016. The A10-7870K was added a month ago with a 95W TDP at an MSRP of $137 USD. This involved a slight performance bump: +200 MHz at its base frequency and a +100 MHz higher Turbo than its predecessor under high load. More interestingly, it does this at the same TDP and with the same basic architecture.
Remember that these are AMD's benchmarks.
The refresh has been expanded to include the A8-7670K. Some sites have reported that this uses the Excavator architecture as seen in Carrizo, but this is not the case. It is based on Steamroller. This product has a base clock of 3.6 GHz with a Turbo of up to 3.9 GHz. This is a +300 MHz Base and +100 MHz Turbo increase over the previous A8-7650K. Again, this is with the same architecture and TDP. The GPU even received a bit of a bump, too. It is now clocked at 757 MHz versus the previous generation's 720 MHz with all else equal, as far as I can tell. This should lead to a 5.1% increase in GPU compute throughput.
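The quoted 5.1% figure follows directly from the clocks, since GPU compute throughput scales linearly with clock speed when the shader count is unchanged:

```python
old_clock = 720  # A8-7650K GPU clock, MHz
new_clock = 757  # A8-7670K GPU clock, MHz
gain = (new_clock - old_clock) / old_clock * 100
print(f"{gain:.1f}% higher GPU clock")  # 5.1% higher GPU clock
```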
The A8-7670K just recently launched for an MSRP of $117.99. This roughly $20 savings should place it in a nice position below the A10-7870K for mainstream users.
Subject: Processors | July 20, 2015 - 05:58 PM | Jeremy Hellstrom
Tagged: Intel, i7-5775C, LGA1150, Broadwell, crystalwell
To keep it interesting, and to drive tech reviewers even crazier, Intel has changed their naming scheme again, with C now designating an unlocked CPU as opposed to K on the new Broadwell models. Compared to the previous 4770K, the TDP is down to 65W from 84W, the L3 cache has shrunk from 8MB to 6MB, and both the base and Turbo clocks have dropped 200MHz. It does have the Iris Pro 6200 graphics core, finally available on an LGA chip. Modders Inc. took the opportunity to clock both the flagship Haswell and Broadwell chips to 4GHz to do a clock-for-clock comparison of the architectures. Check out the review right here.
"While it is important to recognize one's strengths and leverage it as an asset, accepting shortcomings and working on them is equally as important for the whole is greater than the sum of its parts."
Here are some more Processor articles from around the web:
- Intel Celeron N3050 Braswell Linux Performance @ Phoronix
- Intel Core i7-5775C @ Legion Hardware
- AMD vs. Intel Price Comparison Table – July/2015 @ Hardware Secrets
- Comparing Today's Modern CPUs To Intel's Socket 478 Celeron & Pentium 4 NetBurst CPUs @ Phoronix
- AMD A10-7870K Godavari: RadeonSI Gallium3D vs. Catalyst Linux Drivers @ Phoronix
- AMD A10-7870K Benchmarks On Ubuntu Linux @ Phoronix
Subject: Graphics Cards, Processors, Mobile | July 19, 2015 - 06:59 AM | Scott Michaud
Tagged: Zen, TSMC, Skylake, pascal, nvidia, Intel, Cannonlake, amd, 7nm, 16nm, 10nm
Getting smaller features allows a chip designer to create products that are faster, cheaper, and consume less power. Years ago, most of them had their own production facilities but that is getting rare. IBM has just finished selling its manufacturing off to GlobalFoundries, which was spun out of AMD when it divested from fabrication in 2009. Texas Instruments, on the other hand, decided that they would continue manufacturing but get out of the chip design business. Intel and Samsung are arguably the last two players with a strong commitment to both sides of the “let's make a chip” coin.
So where do these chip designers go? TSMC is the name that comes up most. Any given discrete GPU in the last several years has probably been produced there, along with several CPUs and SoCs from a variety of fabless semiconductor companies.
Several years ago, when the GeForce 600-series launched, TSMC's 28nm line led to shortages, which left GPUs out of stock for quite some time. Since then, 28nm has been the stable workhorse for countless high-performance products. Recent chips have been physically huge, thanks to how mature the process has become, yielding fewer defects. The designers are anxious to get onto smaller processes, though.
In a conference call at 2 AM (EDT) on Thursday, which is 2 PM in Taiwan, Mark Liu of TSMC announced that “the ramping of our 16 nanometer will be very steep, even steeper than our 20nm”. By that, they mean this year. Hopefully this translates to production that could be used for GPUs and CPUs early, as AMD needs it to launch their Zen CPU architecture in 2016, as early in that year as possible. Graphics cards have also been on that technology for over three years. It's time.
Also interesting is how TSMC believes that they can hit 10nm by the end of 2016. If so, this might put them ahead of Intel. That said, Intel was also confident that they could reach 10nm by the end of 2016, right until they announced Kaby Lake a few days ago. We will need to see if it pans out. If it does, competitors could actually beat Intel to the market at that feature size -- although that could end up being mobile SoCs and other integrated circuits that are uninteresting for the PC market.
Following the announcement from IBM Research, 7nm was also mentioned in TSMC's call. Apparently they expect to start qualifying it in Q1 2017. That does not provide an estimate for production but, if their 10nm schedule is both accurate and representative of 7nm, that would put production somewhere in 2018. Note that I just speculated on an if of an if of a speculation, so take that with a mine of salt. There is probably a very good reason that this date wasn't mentioned in the call.
Back to the 16nm discussion, what are you hoping for most? New GPUs from NVIDIA, new GPUs from AMD, a new generation of mobile SoCs, or the launch of AMD's new CPU architecture? This should make for a highly entertaining comments section on a Sunday morning, don't you agree?
Subject: Graphics Cards, Processors | July 7, 2015 - 08:00 AM | Scott Michaud
Tagged: earnings, amd
The projections for AMD's second fiscal quarter had revenue somewhere between flat and down 6%. The current estimate, as of July 6th, is actually below that entire range: AMD now expects revenue to be down 8% from the previous quarter, rather than the aforementioned 0 to 6%. This is attributed to weaker APU sales in OEM devices, though they also claim that channel sales are in line with projections.
This is disappointing news for fans of AMD, of course. The next two quarters will be more telling though. Q3 will count two of the launch months for Windows 10, which will likely include a bunch of new and interesting devices and aligns well with back to school season. We then get one more chance at a pleasant surprise in the fourth quarter and its holiday season, too. My intuition is that it won't be too much better than however Q3 ends up.
One extra note: AMD has also announced a “one-time charge” of $33 million USD related to a change in product roadmap. Rather than releasing designs at 20nm, they have scrapped those plans and will architect them for “the leading-edge FinFET node”. This might be a small expense compared to the benefit of jumping to a much smaller process technology. Intel is at 14nm and will likely be there for some time, and now AMD doesn't need to wait around at 20nm for that same duration.
Subject: Processors | June 26, 2015 - 12:32 PM | Sebastian Peak
Tagged: skylake-s, Skylake-K, Intel Skylake, cpu cooler
A report from Chinese-language site XFastest contains a slide reportedly showing Intel's cooling strategy for upcoming retail HEDT (High-end Desktop) Skylake "K" processors.
Typically, Intel CPUs (outside of the current high-end enthusiast segment on LGA2011) have been packaged with one of Intel's ubiquitous standard-performance air coolers, and the move to eliminate them from future unlocked "K" series SKUs makes sense. The slide indicates that a 135W solution will be recommended, even though the TDP of the processor is still in the 91-95W range. The additional headroom is certainly advisable, and arguably the stock cooler never should have been used with products like the 4770K and 4790K, which more than push its limits (and often reach 90 °C at load without overclocking, in my experience with these high-end chips).
Aftermarket cooling (with AIO liquid CPU coolers in particular) has been essential for maximizing the performance of an unlocked CPU all along, so this news shouldn't affect the appeal of these upcoming CPUs for those interested in the latest Intel offerings (though it won't help enhance your collection of unused stock heatsinks).
Subject: Graphics Cards, Processors, Mobile | June 4, 2015 - 04:58 PM | Scott Michaud
Tagged: amd, carrizo
My discussion of the Carrizo architecture went up a couple of days ago. The post did not include specific SKUs because we did not have those at the time. Now we do, and there will be three products: one A8-branded, one A10-branded, and one FX-branded.
All three will be quad-core parts that can range between 12W and 35W designs, although the A8 processor does not have a 35W mode listed in the AMD Dual Graphics table. The FX-8800P is an APU that has all eight GPU cores, while the A-series APUs have six. The A10-8700P and the A8-8600P are separated by a couple hundred megahertz in base and boost CPU clocks, and by 80 MHz in GPU clock.
Also, we have been given a table of AMD Radeon R5 and R7 M-series GPUs that can be paired with Carrizo in an AMD Dual Graphics setup. These GPUs are the R7 M365, R7 M360, R7 M350, R7 M340, R5 M335, and R5 M330. They cannot be paired with every Carrizo APU, and some pairings only work in certain power envelopes. Thankfully, this table should only be relevant to OEMs, because end-users are receiving pre-configured systems.
Pricing and availability will depend on OEMs, of course.
Subject: Processors, Shows and Expos | June 2, 2015 - 11:10 AM | Ryan Shrout
Tagged: Intel, computex 2015, computex, Broadwell
Earlier this morning you saw us post a story about MSI updating its line of 20 notebooks with new Broadwell processors. Though dual-core Broadwell has been available for Ultrabooks and 2-in-1s for some time already, today marks the release of the quad-core variations we have been waiting on for some time. Available for mobile designs, as well as marking the very first Iris Pro graphics implementation for desktop users, Broadwell quad-core parts look to be pretty impressive.
Today Intel gives the world a total of 10 new processors for content creators and enthusiasts. Two of these parts are 65 watt SKUs in LGA packaging for use by enthusiasts and DIY builders. The rest are BGA designs for all-in-one PCs and high-performance notebooks and include both 65 watt and 47 watt variants. Most use the new Iris Pro Graphics 6200 implementation.
For desktop users, we get the Core i7-5775C and the Core i5-5675C. The Core i7 model is a quad-core, HyperThreaded CPU with a base clock of 3.3 GHz and a max Turbo clock of 3.7 GHz. It's unlocked, so overclockers can mess around with it in the same way they do with Haswell. The Iris Pro Graphics 6200 can scale up to 1150 MHz, and rated DDR3L memory speeds are up to 1600 MHz. 6MB of L3 cache, a 65 watt TDP, and a tray price of $366 round out the information we have.
The Core i5-5675C does not include HyperThreading, has a clock speed range of 3.1 GHz to 3.6 GHz, and only sees the Iris Pro scale to 1100 MHz. It also drops from 6MB of L3 cache to 4MB. Pricing on this model will start at $276.
These two processors mark the first time we have seen Iris Pro graphics in a socketed form factor, something we have been asking Intel to offer for at least a couple of generations. They focused on 65 watt TDPs rather than anything higher mostly because of the target audience for these chips: if you are interested in the performance of integrated graphics then you likely are pushing a small form factor design or HTPC of some kind. If you have a Haswell-capable motherboard then you SHOULD be able to utilize one of these new processors though you'll want a Z97 board if you are going to try to overclock it.
From a performance standpoint, the Core i7-5775C will offer 2x the gaming performance, 35% faster video transcoding and 20% higher compute performance when compared to the previous top-end 65 watt Haswell part, the Core i7-4790S. That 4th generation part uses Intel HD Graphics 4600 that does not include the massive eDRAM that makes Iris Pro implementations so unique.
For mobile and AIO buyers, Intel has a whole host of new processors to offer. You'll likely find most of the 65 watt parts in all-in-one designs but you may see some mobile designs that go crazy and opt for them too. For the rest of the gaming notebook designs there are CPUs like the Core i7-5950HQ, a quad-core HyperThreaded part with a base clock of 2.9 GHz and max Turbo clock of 3.8 GHz inside a TDP of 47 watts. The Iris Pro Graphics 6200 will scale from 300 to 1150 MHz so GPU performance should basically be on par with the desktop 65-watt equivalent. Pricing is pretty steep though: starting at $623.
These new processors, especially the new 5950HQ, offer impressive compute and gaming performance.
Compared to the Core i7-5600U, already available and used in some SFF and mobile platforms, the Core i7-5950HQ is 2.5x faster in SPECint and nearly 2x faster in a video conversion benchmark. Clearly these machines are going to be potent desktop replacement options.
For mainstream gamers, the Iris Pro Graphics 6200 on 1920x1080 displays will see some impressive numbers. Players of League of Legends, Heroes of the Storm and WoW will see over 60 FPS at the settings listed in the slide above.
We are still waiting for our hardware to show up but we have both the LGA CPUs and notebooks using the BGA option en route. Expect testing from PC Perspective very soon!
Subject: Processors | June 2, 2015 - 08:40 AM | Sebastian Peak
Tagged: rumor, nuc, leak, Intel Skylake, core i5, core i3
A report from FanlessTech shows what appears to be a leaked slide indicating an upcoming Intel 6th-generation Skylake NUC.
The site claims that these new Intel NUCs will be coming out in Q3 for a 6th-generation Core i3 model and in Q4 for a 6th-gen Core i5 model. The new NUC will feature 15W TDP Skylake-U processors and 1866 MHz DDR4 memory, along with fast M.2 storage and an SDXC card reader.
True to their name, FanlessTech speculates about the possibility of a passively-cooled version of the NUC: “Out of the box, the Skylake NUC is actively cooled. But fanless cases from Akasa, HDPLEX, Streacom and cirrus7 are to be expected.”
Here are the reported specs of this NUC:
- Intel 6th Generation Core i3 / i5-6xxxU (15W TDP)
- Dual-channel DDR4 SODIMMs 1.2V, 1866 MHz (32GB max)
- Intel HD Graphics 6xxx
- 1 x mini HDMI 1.4a
- 1 x mini DisplayPort 1.2
- 2 x USB 3.0 ports on the back panel
- 2 x USB 3.0 ports on the front panel (1 x charging capable)
- 2 x Internal USB 2.0 via header
- Internal support for M.2 SSD card (22x42 or 22x80)
- Internal SATA3 support for 2.5" HDD/SSD (up to 9.5mm thickness)
- SDXC slot with UHS-I support on the side
- Intel 10/100/1000Mbps Network Connection
- Intel Wireless-AC xxxx M.2 soldered-down, wireless antennas
- IEEE 802.11ac, Bluetooth 4, Intel® Wireless Display
- Up to 7.1 surround audio via Mini HDMI and Mini DisplayPort
- Headphone/Microphone jack on the front panel
- Consumer Infrared sensor on the front panel
- 19V, 65W wall-mount AC-DC power adapter
No further information has been revealed about this alleged upcoming NUC, but we will probably know more soon.
Subject: Processors | May 28, 2015 - 03:44 PM | Scott Michaud
Tagged: Intel, Skylake, skylake-s, haswell, devil's canyon
For a while, it was unclear whether we would see Broadwell on the desktop. With the recently leaked benchmarks of the Intel Core i7-6700K, it seems all-but-certain that Intel will skip it and go straight to Skylake. Compared to Devil's Canyon, the Haswell-based Core i7-4790K, the Skylake-S Core i7-6700K has the same base clock (4.0 GHz) and same full-processor Turbo clock (4.2 GHz). Pretty much every improvement that you see is pure performance per clock (IPC).
Image Credit: CPU Monkey
In multi-threaded applications, the Core i7-6700K tends to get about a 9% increase while, when a single core is being loaded, it tends to get about a 4% increase. Part of this might be the slightly lower single-core Turbo clock, which is said to be 4.2 GHz instead of 4.4 GHz. There might also be some increased efficiency with HyperThreading or cache access -- I don't know -- but it would be interesting to see.
I should note that we know nothing about the GPU. In fact, CPU Monkey fails to list a GPU at all. Intel has expressed interest in bringing Iris Pro-class graphics to the high-end mainstream desktop processors. For someone who is interested in GPU compute, especially with Explicit Unlinked MultiAdapter in DirectX 12 upcoming, it would be nice to see GPUs be ubiquitous and always enabled. The chip is expected to have the new GT4e graphics with 72 execution units (EUs) and either 64 or 128MB of eDRAM. If clocks are equivalent, this could translate to well over a teraflop (~1.2 TFLOPs) of compute performance in addition to discrete graphics. In discrete graphics terms, that would be nearly equivalent to an NVIDIA GTX 560 Ti.
We are expecting to see the Core i7-6700K launch in Q3 of this year. We'll see.
Subject: Processors | May 27, 2015 - 09:45 PM | Scott Michaud
Tagged: xeon, Skylake, Intel, Cannonlake, avx-512
AVX-512 is an instruction set that expands the CPU's vector registers from 256 bits to 512 bits. It comes with a core specification, AVX-512 Foundation, and several extensions that can be added where it makes sense. For instance, AVX-512 Exponential and Reciprocal Instructions (ERI) help solve transcendental problems, which occur in geometry and are useful for GPU-style architectures. As such, it appears in Knights Landing but not anywhere else.
Image Credit: Bits and Chips
Today's rumor is that Skylake, the successor to Broadwell, will not include any AVX-512 support in its consumer parts. According to the lineup, Xeons based on Skylake will support AVX-512 Foundation, Conflict Detection Instructions, Vector Length Extensions, Byte and Word Instructions, and Double and Quadword Instructions. Fused Multiply and Add for 52-bit Integers and Vector Byte Manipulation Instructions will not arrive until Cannonlake shrinks everything down to 10nm.
The main advantage of larger registers is speed. When you can fit 512 bits of data in a single register and operate upon it at once, you are able to do several calculations together. AVX-512 has the capability to operate on sixteen 32-bit values at the same time, which is obviously sixteen times the compute performance compared with doing just one at a time... if all sixteen undergo the same operation. This is especially useful for games, media, and other vector-friendly workloads (such as scientific computing).
This also makes me question whether the entire Cannonlake product stack will support AVX-512. While vectorization is a cheap way to get performance for suitable workloads, it does take up a large amount of transistors (wider memory, extra instructions, etc.). Hopefully Intel will be able to afford the cost with the next die shrink.
Subject: Graphics Cards, Processors, Displays, Systems | May 15, 2015 - 03:02 PM | Scott Michaud
Tagged: Oculus, oculus vr, nvidia, amd, geforce, radeon, Intel, core i5
Today, Oculus has published a list of what they believe should drive their VR headset. The Oculus Rift will obviously run on lesser hardware. Their minimum specifications, published last month and focused on the Development Kit 2, did not even list a specific CPU or GPU -- just a DVI-D or HDMI output. They then went on to say that you really should use a graphics card that can handle your game at 1080p with at least 75 fps.
The current list is a little different:
- NVIDIA GeForce GTX 970 / AMD Radeon R9 290 (or higher)
- Intel Core i5-4590 (or higher)
- 8GB RAM (or higher)
- A compatible HDMI 1.3 output
- 2x USB 3.0 ports
- Windows 7 SP1 (or newer)
I am guessing that, unlike the previous list, Oculus has a clearer vision for a development target. They were a little unclear about whether this refers to the consumer version or the current needs of developers. In either case, it would likely serve as a guide for what they believe developers should target when the consumer version launches.
This post also coincides with the release of the Oculus PC SDK 0.6.0. This version pushes distortion rendering to the Oculus Server process, rather than the application. It also allows multiple canvases to be sent to the SDK, which means developers can render text and other noticeable content at full resolution, but scale back in places that the user is less likely to notice. They can also be updated at different frequencies, such as sleeping the HUD redraw unless a value changes.
The Oculus PC SDK (0.6.0) is now available at the Oculus Developer Center.
Subject: Processors | May 7, 2015 - 07:36 PM | Scott Michaud
Tagged: Intel, xeon, xeon e7 v3, xeon e7
On May 5th, Intel officially announced their new E7 v3 lineup of Xeon processors. This replaces the Xeon E7 v2 processors, which were based on Ivy Bridge-EX, with the newer Haswell-EX architecture. Interestingly, WCCFTech has Broadwell-EX listed next, even though the desktop is expected to mostly skip Broadwell and jump to Skylake in high-performance roles.
The largest model is the E7-8890 v3, which contains eighteen cores fed by a total of 45MB of L3 cache. Despite the high core count, the E7-8890 v3 has its base frequency set at 2.5 GHz to yield a TDP of 165W. The E7-8891 v3 (165W) and the E7-8893 v3 (140W) drop the core count to ten and four, but raise the base frequency to 2.8 GHz and 3.2 GHz, respectively. The E7-8880L v3 is a low power version, relatively speaking, which also contains eighteen cores that are clocked at 2.0 GHz. This drops its TDP to 115W while still maintaining 45 MB of L3 cache.
Image Credit: WCCFTech
The product stack trickles down from there, but not much further. Just twelve processors are listed in the Xeon E7 segment, which Intel points out in the WCCFTech slides is a significant reduction in SKUs. This suggests that they believe their previous line was too many options for enterprise customers. When dealing with prices in the range of $1,223 - $7,174 USD for bulk orders, it makes sense to offer a little choice to slightly up-sell potential buyers, but too many choices can defeat that purpose. Also, it was a bit humorous to see such an engineering-focused company highlight a reduction of SKUs with a bubble point like it was a technological feature. Not bad, actually quite good as I mentioned above, just a bit funny.
The Xeon E7 v3 is listed as now available, with SKUs ranging from $1,223 to $7,174 USD.
Subject: Processors | April 27, 2015 - 06:06 PM | Josh Walrath
Tagged: Zen, Steamroller, Kaveri, k12, Excavator, carrizo, bulldozer, amd
There are some pretty breathless analyses of a single leaked block diagram that is supposedly from AMD. This is one of the first indications of what the Zen architecture looks like from a CPU core standpoint. The block diagram is very simple, but it is in the same style as what we have seen from AMD. There are some labels, but this is almost a 50,000 foot view of the architecture rather than a slightly clearer 10,000 foot view.
There are a few things we know for sure about Zen. It is a clean sheet design that moves away from what AMD was pursuing with their Bulldozer family of cores. Zen gives up CMT for SMT support for handling more threads. The design has a cluster of four cores sharing 8 MB of L3 cache, with each core having access to 512 KB of L2 cache. There is a lot of optimism that AMD can buck the trend of falling more and more behind Intel every year with this particular design. Jim Keller is viewed very positively due to his work at AMD in the K7 through K8 days, as well as what he accomplished at Apple with their ARM-based offerings.
One of the first sites to pick up this diagram wrote quite a bit about what they saw. There was a lot of talk about, “right off the bat just by looking at the block diagram we can tell that Zen will have substantially higher single threaded performance compared to Excavator and the Bulldozer family.” There was the assumption that because it had two 256-bit FMACs, it could fuse them to execute a single 512-bit AVX operation.
These assumptions are pretty silly. This is a very simple block diagram that answers very few of the important questions about the architecture. Yes, it shows 6 int pipelines, but we don’t know how many are address generation vs. execution units. We don’t know how wide decode is. We don’t know latency to L2 cache, much less how L3 is connected and shared out. So just because we see more integer pipelines per core does not automatically mean, “Da, more is better, strong like tractor!” We don’t know what improvements or simplifications we will see in the schedulers. There is no mention of the front-end other than Fetch and Decode. How about Branch Prediction? What is the latency for the memory controller when addressing external memory?
Essentially, this looks like a simplified way of expressing to analysts that AMD is attempting to retain their per core integer performance while boosting floating point/AVX at a similar level. Other than that, there is very little that can be gleaned from this simple block diagram.
Other leaks that are interesting concerning Zen are the formats that we will see these products integrated into. One leak detailed a HPC aimed APU that features 16 Zen cores with 32 MB of L3 cache attached to a very large GPU. Another leak detailed a server level chip that will support 32 cores and will be seen in 2P systems. Zen certainly appears to be very flexible, and in ways it reminds me of a much beefier Jaguar type CPU. My gut feeling is that AMD will get closer to Intel than it has been in years, and perhaps they can catch Intel by surprise with a few extra features. The reality of the situation is that AMD is far behind and only now are we seeing pure-play foundries start to get even close to Intel in terms of process technology. AMD is very much at a disadvantage here.
Still, the company needs to release new, competitive products that will refill the company coffers. The previous quarter’s loss has dug into cash reserves, but AMD is still stable in terms of cash on hand and long term debt. 2015 will see new GPUs, an APU refresh, and the release of the new Carrizo parts. 2016 looks to be the make or break year with Zen and K12.
Edit 2015-04-28: Thanks to SH STON we have a new slide that has been leaked from the same deck as this one. It contains some interesting info in that AMD may be moving away from exclusive cache designs. Exclusive was a good idea when cache was small and expensive, as data was not replicated through each level of cache (L1 was not replicated in L2 and L2 was not replicated in L3). Intel has been using inclusive cache since forever, where data is replicated and simpler to handle. Now it looks like AMD is moving towards inclusive. This is not necessarily a bad thing, as the 512 KB of L2 can easily handle what looks to be 128 KB of L1 and the shared 8 MB of L3 cache can easily handle the 2 MB of L2 data. Here is the link to that slide.
The new slide in question.