Haswell-E has sprung a leak

Subject: Processors | August 26, 2014 - 01:32 PM |
Tagged: rumour, leak, Intel, Haswell-E, 5960X, 5930K, 5820K

Take it with a grain of salt, as always with leaks of this kind, but you will be interested to know that videocardz.com has what might be some inside information on Haswell-E pricing and model numbers.

Intel-HaswellE-E-VideoCardz_Com-Press-Deck-4-850x478.png

Intel i7 / X99 Haswell-E pricing:

  • Intel Core i7 5960X 8C/16HT – 40-lane PCI-Express support (x16 + x16 + x8) — $999
  • Intel Core i7 5930K 6C/12HT – 40-lane PCI-Express support (x16 + x16 + x8) — $583
  • Intel Core i7 5820K 6C/12HT – 28-lane PCI-Express support (x16 + x8 + x4) — $389

As you can see, there is a big jump between the affordable i7-5820K and the more expensive 5930K. For those who know they will stick with a single GPU, or two low to mid-range GPUs, the 5820K should be enough, but if you have any thoughts of upgrading or adding a number of PCIe SSDs then you might want to seriously consider saving up for the 5930K. Current generation GPUs and SSDs do not fully utilize PCIe 3.0 x16, but that is not likely to remain true for long, so if you want your system to have some longevity this is certainly something to think long and hard about. Core counts are up while frequencies are down: the eight-core 5960X has a base clock of 3GHz, a full gigahertz slower than the 4790K, but you can expect the monstrous 20MB cache and quad-channel DDR4-2133 to mitigate that somewhat. Also note the TDP; 140W is no laughing matter and will require some serious cooling.

Follow the link for a long deck of slides that reveal even more!

Intel-HaswellE-E-VideoCardz_Com-Press-Deck-5-850x478.png

Intel Haswell-E De-Lidded: Solder Is Its Thermal Interface

Subject: General Tech, Processors | August 24, 2014 - 03:33 AM |
Tagged: Intel, Haswell-E, Ivy Bridge-E, haswell, solder, thermal paste

Sorry for being about a month late to this news. Apparently, someone got their hands on an Intel Core i7-5960X and wanted to see its eight cores. Removing the lid, they found that it was soldered directly onto the die, rather than sitting on a layer of thermal paste. While Haswell-E will still need to contend with the limitations of 22nm, and how difficult it becomes to exceed various clock speed ceilings, a better ability to dump heat is always welcome.

Intel-5960X-delidded.jpg

Image Credit: OCDrift

While Devil's Canyon (Core i7-4790K) used better thermal paste, the method used with Haswell-E will be even better. I should note that Ivy Bridge-E, released last year, also used a form of solder under its lid and its overclocking results were still limited, so this is not an easy path to ultimate gigahertz. Even so, it is nice that Intel, at least on their enthusiast line, is spending that little bit extra to not introduce artificial barriers.

Source: OCDrift

X99 Manuals Leak: Core i7-5820K Has Reduced PCIe Lanes?

Subject: General Tech, Processors | August 23, 2014 - 01:38 AM |
Tagged: X99, Intel, Haswell-E

Haswell-E, with its X99 chipset, is expected to launch soon. This will bring a new spread of processors and motherboards to the high-end, enthusiast market. These are the processors that fans of Intel should buy if they have money, want all the RAM, and have a bunch of PCIe expansion cards to install.

Intel-logo.png

The Intel enthusiast platform typically has 40 PCIe lanes, while the mainstream platform has 16. For Haswell-E, the Core i7-5820K will be the exception. According to Gigabyte's X99 manual, the four, full-sized PCIe slots will have the following possible configurations:
 

Core i7-5930K (and above):

  First Slot (PCIe 1)   Second Slot (PCIe 4)   Third Slot (PCIe 2)   Fourth Slot (PCIe 3)
  16x                   Unused                 16x                   8x
  8x                    8x                     16x                   8x

Core i7-5820K:

  First Slot (PCIe 1)   Second Slot (PCIe 4)   Third Slot (PCIe 2)   Fourth Slot (PCIe 3)
  16x                   Unused                 8x                    4x
  8x                    8x                     8x                    4x

If you count the PCIe x1 slots, the table would refer to the first, third, fifth, and seventh slots.

To me, this is not too bad. You are able to use three GPUs with eight-lane bandwidth each and stick a four-lane PCIe SSD in the last slot. Considering that each lane is PCIe 3.0, it is similar to having three PCIe 2.0 x16 slots. While two-way and three-way SLI are supported on all CPUs, four-way SLI is only allowed with processors that provide forty lanes of PCIe 3.0.
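To sanity-check that comparison, here is a small back-of-the-envelope C++ snippet (using the standard published per-lane signaling rates and encoding overheads, nothing specific to this board) showing why eight lanes of PCIe 3.0 land within a rounding error of a PCIe 2.0 x16 slot:

```cpp
#include <cstdio>

int main() {
    // Usable bits per lane: raw transfer rate minus line-encoding overhead.
    // PCIe 2.0: 5 GT/s with 8b/10b encoding    -> 5 * 8/10    = 4.00 Gbit/s per lane
    // PCIe 3.0: 8 GT/s with 128b/130b encoding -> 8 * 128/130 ~= 7.88 Gbit/s per lane
    const double gen2_lane_gbit = 5.0 * 8.0 / 10.0;
    const double gen3_lane_gbit = 8.0 * 128.0 / 130.0;

    // Scale by lane count and convert bits to bytes.
    const double gen2_x16_GB = gen2_lane_gbit * 16.0 / 8.0;  // ~8.0 GB/s
    const double gen3_x8_GB  = gen3_lane_gbit *  8.0 / 8.0;  // ~7.9 GB/s

    std::printf("PCIe 2.0 x16: %.2f GB/s\n", gen2_x16_GB);
    std::printf("PCIe 3.0 x8:  %.2f GB/s\n", gen3_x8_GB);
    return 0;
}
```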

Gigabyte also provides three PCIe 2.0 x1 slots, which are not handled by the CPU and do not count against its available lanes.

Since I started to write up this news post, Gigabyte seems to have replaced their manual with a single, blank page. Thankfully, I had it cached long enough to finish my thoughts. Some sites claim that the manual failed to mention the 8-8-8 configuration and suggested that three-GPU configurations were impossible. That is not true; the manual refers to these situations, just not in the clearest of terms.

Haswell-E should launch soon, with most rumors pointing to the end of the month.

VIA's Rumored New "Isaiah II" Based x86 CPU Will Compete With Intel Bay Trail and AMD Kabini Chips

Subject: Processors | August 19, 2014 - 09:06 PM |
Tagged: VIA, isaiah II, centaur technologies, centaur

VIA subsidiary Centaur Technology is rumored to be launching a new x86 processor at the end of August based on the "Isaiah II" architecture. This upcoming chip is a 64-bit SoC aimed at the mobile and low power space. So far, the only known implementation is a quad core version clocked at up to 2.0 GHz with a 2MB L2 cache. Benchmarks of the quad core Isaiah II-based processor recently appeared online, and if the SiSoft Sandra results hold true VIA has a very competitive chip on its hands, one that outperforms Intel's Bay Trail Z3770 and holds its own against AMD's Jaguar-based Athlon 5350.

Centaur Technology.jpg

The SiSoft Sandra results below show the alleged Isaiah II quad core handily outmaneuvering Intel's Bay Trail SoC and trading wins with AMD's Athlon 5350. All three SoCs are quad core parts with integrated graphics solutions. The benchmarks were run on slightly different configurations, as they do not share a motherboard or chipset in common. In the case of the VIA chip, it was paired with a motherboard using the VIA VX11H chipset.

Processor                     VIA Isaiah II Quad Core   AMD Athlon 5350   Intel Atom Z3770
CPU Arithmetic                20.00 GOPS                22.66 GOPS        15.10 GOPS
CPU Multimedia                50.20 Mpix/s              47.56 Mpix/s      25.90 Mpix/s
Multicore Efficiency          3.10 GB/s                 4.00 GB/s         1.70 GB/s
Cryptography (HS)             1.50 GB/s                 1.48 GB/s         0.40 GB/s
PM Efficiency (ALU)           2.90 GIPS                 2.88 GIPS         2.50 GIPS
Financial Analysis (DP FP64)  3.00 kOPT/S               3.64 kOPT/S       1.50 kOPT/S

For comparison, the Atom Z3770 is a quad core clocked at 1.46 GHz (2.39 GHz max turbo) with 2MB L2 cache and Intel HD Graphics clocked at up to 667 MHz, supporting up to 4GB of 1066 MHz memory. Bay Trail is manufactured on a 22nm process and has a 2W SDP (Scenario Design Power). Further, the AMD "Kabini" Athlon 5350 features four Jaguar CPU cores clocked at 2.05 GHz, a 128-core GCN GPU clocked at 600 MHz, 2MB L2 cache, and support for 1600 MHz memory. AMD's Kabini SoC is a 28nm chip with a 25W TDP (Thermal Design Power). VIA's new chip allegedly supports modern instruction sets, including AVX 2.0, putting it on par with the AMD and Intel options.

Processor        VIA Isaiah II Quad Core   AMD Athlon 5350           Intel Atom Z3770
CPU              4 Cores @ 2.00 GHz        4 Cores @ 2.05 GHz        4 Cores @ 1.46 GHz (up to 2.39 GHz turbo)
GPU              ?                         128 GCN Cores @ 600 MHz   HD Graphics @ (up to) 667 MHz
Memory Support   ?                         1600 MHz                  1066 MHz
L2 Cache         2 MB                      2 MB                      2 MB
TDP / SDP        ?                         25W                       2W
Process Node     ?                         28nm                      22nm
Price            ?                         $55                       $37

The SiSoft Sandra benchmarks spotted by TechPowerUp suggest that the Centaur Technology-designed chip has potential. However, there are still several important unknowns at this point, mainly price and power usage. The GPU VIA is using in the processor is also still a mystery, though Scott suspects an S3 GPU is possible through a partnership with HTC.

The chip does seem to be offering up competitive performance, but pricing and power efficiency will play a major role in whether or not VIA gets any design wins with system OEMs. If I had to guess, the VIA chip will sit somewhere between the Intel and AMD offerings, with the need for a motherboard chipset pushing it towards AMD's higher TDP.

If VIA prices it correctly, we could see the company making a slight comeback in the x86 market with consumer-facing devices (particularly Windows 8.1 tablets). VIA has traditionally been known as the low power x86 licensee, and the expanding mobile market is the ideal place for such a chip. Its past endeavors have not been well received (mainly due to timing and volume production/availability issues with the Nano processors), but I hope that Centaur Technology and VIA are able to pull this one off as I had started to forget the company existed (heh).

Source: TechPowerUp

Intel and Microsoft Show DirectX 12 Demo and Benchmark

Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 13, 2014 - 09:55 PM |
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX

Along with GDC Europe and Gamescom, Siggraph 2014 is going on in Vancouver, BC, and Intel had a DirectX 12 demo at their booth. The scene, containing 50,000 asteroids, each in its own draw call, was developed on both Direct3D 11 and Direct3D 12 code paths and could apparently be switched between them while the demo was running. Intel claims to have measured both power and frame rate.

intel-dx12-LockedFPS.png

Variable power to hit a desired frame rate, DX11 and DX12.

The test system is a Surface Pro 3 with an Intel HD 4400 GPU. Doing a bit of digging, this would make it the i5-based Surface Pro 3. Removing another shovel-load of mystery, this would be the Intel Core i5-4300U with two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3MB of cache, and (of course) the Haswell architecture.

While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.

Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, for the same target frame rate. If power was not throttled, Intel's demo goes from 19 FPS all the way up to a playable 33 FPS.

Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.

intel-dx12-unlockedFPS-1.jpg

Maximum power in DirectX 11 mode.

For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time structuring art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?

That, of course, depends on how much performance improvement we will see from DirectX 12, compared to theoretical max efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one, double-sized workload, then it allows developers to, literally, perform whatever solution is most direct.

intel-dx12-unlockedFPS-2.jpg

Maximum power when switching to DirectX 12 mode.

If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, but DirectX 11 was 10,000x slower, then it could be less relevant because developers will still need to do their tricks in those situations. The closer it gets, the fewer occasions that strict optimization is necessary.

If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?
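To make the instancing question concrete, here is a minimal, hypothetical C++ sketch. The Renderer type is invented for illustration (it is not the D3D API or Intel's demo code) and simply counts submissions, contrasting the naive one-draw-per-asteroid path with the hand-batched instanced path:

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical renderer, invented for illustration -- not D3D11/12 itself.
// It only counts how many draw calls each approach submits to the API.
struct Renderer {
    uint64_t drawCalls = 0;
    void setTransform(const float* /*model4x4*/) {}
    void draw(uint32_t /*meshId*/) { ++drawCalls; }
    void drawInstanced(uint32_t /*meshId*/, uint32_t /*instanceCount*/,
                       const std::vector<float>& /*perInstanceTransforms*/) { ++drawCalls; }
};

int main() {
    const uint32_t kAsteroids = 50000;
    std::vector<std::array<float, 16>> transforms(kAsteroids);  // one matrix per asteroid

    // "Naive" path: one state change and one draw call per asteroid. Under D3D11
    // the per-call CPU cost dominates; D3D12 (and Mantle) aim to make this path cheap.
    Renderer naive;
    for (const auto& m : transforms) {
        naive.setTransform(m.data());
        naive.draw(/*meshId=*/0);
    }

    // Hand-optimized path: pack every transform into one buffer and issue a single
    // instanced draw -- the extra authoring work the article argues cheaper draw
    // calls would make unnecessary.
    Renderer batched;
    std::vector<float> packed(kAsteroids * 16);
    batched.drawInstanced(/*meshId=*/0, kAsteroids, packed);

    std::printf("naive: %llu draw calls, instanced: %llu draw call(s)\n",
                (unsigned long long)naive.drawCalls, (unsigned long long)batched.drawCalls);
    return 0;
}
```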

NVIDIA Reveals 64-bit Denver CPU Core Details, Headed to New Tegra K1 Powered Devices Later This Year

Subject: Processors | August 12, 2014 - 01:06 AM |
Tagged: tegra k1, project denver, nvidia, Denver, ARMv8, arm, Android, 64-bit

During GTC 2014 NVIDIA launched the Tegra K1, a new mobile SoC that contains a powerful Kepler-based GPU. Initial processors (and the resultant design wins such as the Acer Chromebook 13 and Xiaomi Mi Pad) utilized four ARM Cortex-A15 cores for the CPU side of things, but later this year NVIDIA is deploying a variant of the Tegra K1 SoC that switches out the four A15 cores for two custom (NVIDIA developed) Denver CPU cores.

Today at the Hot Chips conference, NVIDIA revealed most of the juicy details on those new custom cores announced in January which will be used in devices later this year.

The custom 64-bit Denver CPU cores use a 7-way superscalar design and run a custom instruction set. Denver is a wide but in-order architecture that allows up to seven operations per clock cycle. NVIDIA is using a custom ISA and on-the-fly binary translation to convert ARMv8 instructions to microcode before execution. A software layer and a 128MB cache enable the Dynamic Code Optimization technology, allowing the processor to examine and optimize the ARM code, convert it to the custom instruction set, and store the converted microcode of frequently used applications in that cache (which can be bypassed for infrequently processed code). Using the wider execution engine and Dynamic Code Optimization (which is transparent to ARM developers and does not require updated applications), NVIDIA touts the dual Denver core Tegra K1 as being at least as powerful as the quad and octo-core packing competition.
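As a purely conceptual illustration of that flow (not NVIDIA's actual implementation; the block granularity, hotness threshold, and data structures are all invented for this sketch), a dynamic translator with a hot-code cache might look something like this:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Conceptual model only: every detail here is invented for illustration.
struct MicrocodeBlock { std::vector<uint64_t> ops; };   // stand-in for native micro-ops

class DynamicCodeOptimizer {
public:
    // "Execute" the ARMv8 code block starting at address pc.
    void execute(uint64_t pc) {
        auto hit = cache.find(pc);
        if (hit != cache.end()) {                     // hot path: reuse the optimized translation
            runMicrocode(hit->second);
            return;
        }
        if (++runCount[pc] >= kHotThreshold) {
            cache[pc] = translateAndOptimize(pc);     // software layer re-optimizes the ARM code
            runMicrocode(cache[pc]);
        } else {
            decodeInHardware(pc);                     // cold code bypasses the optimizer entirely
        }
    }

private:
    static const uint32_t kHotThreshold = 16;            // invented value
    std::unordered_map<uint64_t, MicrocodeBlock> cache;   // models the 128MB region in system RAM
    std::unordered_map<uint64_t, uint32_t> runCount;

    MicrocodeBlock translateAndOptimize(uint64_t) { return MicrocodeBlock(); }
    void runMicrocode(const MicrocodeBlock&) {}
    void decodeInHardware(uint64_t) {}
};
```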

Further, NVIDIA has claimed that at peak throughput (and in specific situations where application code and DCO can take full advantage of the 7-way execution engine) the Denver-based mobile SoC handily outpaces Intel’s Bay Trail, Apple’s A7 Cyclone, and Qualcomm’s Krait 400 CPU cores. In the results of a synthetic benchmark test provided to The Tech Report, the Denver cores were even challenging Intel’s Haswell-based Celeron 2955U processor. Keeping in mind that these are NVIDIA-provided numbers and likely the best results one can expect, Denver is still quite a bit more capable than existing cores. (Note that the Haswell chips would likely pull much farther ahead when presented with applications that cannot be easily executed in-order with limited instruction parallelism.)

NVIDIA Denver CPU Core 64bit ARMv8 Tegra K1.png

NVIDIA is ratcheting up mobile CPU performance with its Denver cores, but it is also aiming for an efficient chip and has implemented several power saving tweaks. Beyond the decision to go with an in-order execution engine (with DCO hopefully mostly making up for that), the beefy Denver cores reportedly feature low latency power state transitions (e.g. between active and idle states), power gating, dynamic voltage, and dynamic clock scaling. The company claims that “Denver's performance will rival some mainstream PC-class CPUs at significantly reduced power consumption.” In real terms this should mean that the two Denver cores in place of the quad core A15 design in the Tegra K1 should not result in significantly lower battery life. The two K1 variants are said to be pin compatible such that OEMs and developers can easily bring upgraded models to market with the faster Denver cores.

NVIDIA Denver CPU cores in Tegra K1.png

For those curious, in the Tegra K1 the two Denver cores (clocked at up to 2.5GHz) share a 16-way L2 cache and each have a 128KB instruction and a 64KB data L1 cache to themselves. The 128MB Dynamic Code Optimization cache is held in system memory.

Denver is the first (custom) 64-bit ARM processor for Android (with Apple’s A7 being the first 64-bit smartphone chip), and NVIDIA is working on supporting the next generation Android OS known as Android L.

The dual Denver core Tegra K1 is coming later this year and I am excited to see how it performs. The current K1 chip already has a powerful fully CUDA compliant Kepler-based GPU which has enabled awesome projects such as computer vision and even prototype self-driving cars. With the new Kepler GPU and Denver CPU pairing, I’m looking forward to seeing how NVIDIA’s latest chip is put to work and the kinds of devices it enables.

Are you excited for the new Tegra K1 SoC with NVIDIA’s first fully custom cores?

Source: NVIDIA

Kaveri on Linux

Subject: Processors | August 11, 2014 - 03:40 PM |
Tagged: A10-7800, A6-7400K, linux, amd, ubuntu 14.04, Kaveri

Linux support for AMD's GPUs has not been progressing at the pace many users would like, though it is improving over time; their APUs are a somewhat different story.  Phoronix just tested the A10-7800 and A6-7400K on Ubuntu 14.04 with kernel 3.13 and the latest Catalyst 14.6 Beta.  This preview just covers raw performance; you can expect to see more published in the near future covering new features such as the configurable TDP that exists on these chips.  The tests show that the new 7800 can keep pace with the previous 7850K, and while the A6-7400K is certainly slower it will be able to handle a Linux machine with relatively light duties.  You can see the numbers here.

image.php_.jpg

"At the end of July AMD launched new Kaveri APU models: the A10-7800, A8-7600, and A6-7400K. AMD graciously sent over review samples on their A10-7800 and A6-7400K Kaveri APUs, which we've been benchmarking and have some of the initial Linux performance results to share today."

Here are some more Processor articles from around the web:

Processors

Source: Phoronix

How can you make your Pentium G3258 system cheaper? Run Ubuntu!

Subject: Processors | July 22, 2014 - 04:15 PM |
Tagged: linux, Pentium G3258, ubuntu 14.10

Phoronix tested out the 20th Anniversary Pentium CPU on Ubuntu 14.10 and right off the bat were impressed, as they managed a perfectly stable overclock of 4.4GHz on air.  Using Linux 3.16 and Mesa 10.2 they had no issues with the onboard GPU, though its performance lagged behind the faster GPUs present on the Haswell chips they tested against.  When they benchmarked the CPU, the lack of Advanced Vector Extensions and the fact that it is a dual core part showed in the results, but when you consider the difference in price between a G3258 and a 4770K it fares quite well.  Stay tuned for their next set of benchmarks, which will compare the G3258 to AMD's current offerings.

image.php_.jpg

"Up for review today on Phoronix is the Pentium G3258, the new processor Intel put out in celebration of their Pentium brand turning 20 years old. This new Pentium G3258 processor costs under $100 USD and comes unlocked for offering quite a bit overclocking potential while this Pentium CPU can be used by current Intel 8 and 9 Series Chipsets. Here's our first benchmarks of the Intel Pentium G3258 using Ubuntu Linux."

Here are some more Processor articles from around the web:

Processors

Source: Phoronix

Intel AVX-512 Expanded

Subject: General Tech, Graphics Cards, Processors | July 19, 2014 - 03:05 AM |
Tagged: Xeon Phi, xeon, Intel, avx-512, avx

It is difficult to know what is actually new information in this Intel blog post, but it is interesting nonetheless. Its topic is the AVX-512 extension to x86, designed for Xeon and Xeon Phi processors and co-processors. Basically, last year Intel announced "Foundation", the minimum support level for AVX-512, as well as the optional Conflict Detection, Exponential and Reciprocal, and Prefetch extensions. That earlier blog post was very much focused on Xeon Phi, but it acknowledged that the instructions would make their way to standard, CPU-like Xeons at around the same time.

Intel_Xeon_Phi_Family.jpg

This year's blog post brings in a bit more information, especially for common Xeons. While all AVX-512-supporting processors (and co-processors) will support "AVX-512 Foundation", the instruction set extensions are a bit more scattered.

 
                                          Xeon         Xeon Phi     Xeon Phi
                                          Processors   Processors   Coprocessors (AIBs)
Foundation Instructions                   Yes          Yes          Yes
Conflict Detection Instructions           Yes          Yes          Yes
Exponential and Reciprocal Instructions   No           Yes          Yes
Prefetch Instructions                     No           Yes          Yes
Byte and Word Instructions                Yes          No           No
Doubleword and Quadword Instructions      Yes          No           No
Vector Length Extensions                  Yes          No           No

Source: Intel AVX-512 Blog Post (and my understanding thereof).

So why do we care? Simply put: speed. Vectorization, the purpose of AVX-512, has similar benefits to multiple cores. It is not as flexible as having multiple, unique, independent cores, but it is easier to implement (and works just fine alongside multiple cores, too). For an example: imagine that you have to multiply two colors together. The direct way to do it is to multiply red with red, green with green, blue with blue, and alpha with alpha. AMD's 3DNow! and, later, Intel's SSE included instructions to multiply two four-component vectors together. This reduces four similar instructions into a single operation between wider registers.

Smart compilers (and programmers, although that is becoming less common as compilers are pretty good, especially when they are not fighting developers) are able to pack seemingly unrelated data together, too, if it undergoes similar instructions. AVX-512 allows sixteen 32-bit pieces of data to be worked on at the same time. If your pixel only has four single-precision RGBA values, but you are looping through 2 million pixels, do four pixels at a time (16 components).
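Here is a minimal sketch of that exact example, assuming a compiler and CPU with SSE and AVX-512F support (and buffers whose lengths are multiples of the vector width):

```cpp
#include <cstddef>
#include <immintrin.h>

// Component-wise color modulation. Assumes 'count' is a multiple of the vector
// width, and that the hardware/compiler support SSE and AVX-512F respectively.
void modulate_sse(const float* a, const float* b, float* out, std::size_t count) {
    for (std::size_t i = 0; i < count; i += 4) {       // one RGBA pixel per iteration
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_mul_ps(va, vb));    // r*r, g*g, b*b, a*a in one instruction
    }
}

void modulate_avx512(const float* a, const float* b, float* out, std::size_t count) {
    for (std::size_t i = 0; i < count; i += 16) {      // four RGBA pixels per iteration
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_mul_ps(va, vb));
    }
}
```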

For the record, I basically just described "SIMD" (single instruction, multiple data) as a whole.

This theory is part of how GPUs became so powerful at certain tasks. They are capable of pushing a lot of data because they can exploit similarities. If your task is full of similar problems, they can just churn through tonnes of data. CPUs have been doing these tricks, too, just without compromising what they do well.

Source: Intel

Intel Earnings Report (Q2 2014)

Subject: General Tech, Processors, Mobile | July 16, 2014 - 03:37 AM |
Tagged: quarterly results, quarterly earnings, quarterly, Intel, earnings

Another fiscal quarter brings another Intel earnings report. Once again, they are doing well for themselves as a whole but are struggling to gain a foothold in mobile. In three months, they sold 8.7 billion dollars in PC hardware, of which 3.7 billion was profit. Its mobile division, on the other hand, brought in 51 million USD in revenue, losing 1.1 billion dollars for their efforts. In all, the company is profitable -- by about 3.84 billion USD.

Intel-Swimming-in-Money.jpg

One interesting metric which Intel adds to their chart, and I have yet to notice another company listing this information so prominently, is their number of employees, compared between quarters. Last year, Intel employed about 106,000 people, which increased to 106,300 two quarters ago. Between two quarters ago and this last quarter, that number dropped by 1,400, to 104,900 employees, a loss of about 1.3% of their total workforce. There does not seem to be a stated reason for this decline (except for Richard Huddy; we know that he went to AMD).

Intel Process nodes_575px.png

Image Credit: Anandtech

As a final note, Anandtech, when reporting on this story, added a few historical trends near the end. One which caught my attention was the process technology vs. quarter graph, demonstrating their smallest transistor size over the last thirteen-and-a-bit years. We are still slowly approaching 0nm, following an exponential curve as it approaches its asymptote. The width of each step, however, is still fairly regular. It looks like it is getting slightly longer, but not drastically (minus the optical illusion caused by the smaller drops).

Source: Intel

The Third x86-based SoC Player: VIA & Centaur's Isaiah II

Subject: General Tech, Processors, Mobile | July 11, 2014 - 04:58 PM |
Tagged: x86, VIA, isaiah II, Intel, centaur, arm, amd

There might be a third x86-compatible processor manufacturer looking at the mobile market. Intel has been trying to make headway, including directly developing Android for the x86 architecture. The company also has a few design wins, mostly with Windows 8.1-based tablets but also the occasional Android-based model. Google is rumored to be preparing the "Nexus 8" tablet with one of Intel's Moorefield SoCs. AMD, the second-largest x86 processor manufacturer, is aiming their Mullins platform at tablets and two-in-ones, but cannot afford to play snowplow, at least not like Intel can.

via-centaur-countdown.jpg

VIA, through their Centaur Technology division, is expected to announce their own x86-based SoC, too. Called Isaiah II, it is rumored to be a quad core, 64-bit processor with a maximum clock rate of 2.0 GHz. Its GPU is currently unknown. VIA sold their stake in S3 Graphics to HTC back in 2011, which made HTC the majority shareholder of the GPU company. That said, HTC and VIA are very close companies. The chairwoman of HTC is the founder of VIA Technologies. The current President and CEO of VIA, who has been in that position since 1992, is her husband. I expect that the GPU architecture will be provided by S3, or will somehow be based on their technology. I could be wrong. Both companies will obviously do what they think is best.

It would make sense, though, especially if it benefits HTC with cheap but effective SoCs for Android and "full" Windows (not Windows RT) devices.

Or this announcement could be larger than it would appear. Three years ago, VIA filed for a patent describing a processor that can read both x86 and ARM machine language and translate it into its own, internal microinstructions. The Centaur Isaiah II could reasonably be based on that technology. If so, this processor would be able to support either version of Android. Or, now that Intel has built up the Android x86 code base, maybe VIA shelved that initiative (or just got the patent for legal reasons).

Android-x86.png

But what about Intel? Honestly, I see this being a benefit for the behemoth. Extra x86-based vendors will probably grow the overall market share, compared to ARM, by helping with software support. Even if the chip is compatible with both ARM and x86, what Intel needs right now is software, and they can only write so much of it themselves. It is possible that VIA, maker of the original netbook processor, could disrupt the PC market with both x86 and ARM compatibility, but I doubt it.

Centaur Technology, the relevant division of VIA, will make their announcement in less than 51 days.

Source: 3d Center

Fully Enabling the A10-7850K while Utilizing a Standalone GPU

Subject: Processors | July 9, 2014 - 05:42 PM |
Tagged: nvidia, msi, Luxmark, Lightning, hsa, GTX 580, GCN, APU, amd, A88X, A10-7850K

When I first read many of the initial AMD A10-7850K reviews, my primary question was how the APU would act if a different GPU was installed in the system, one that did not utilize the CrossFire X functionality that AMD talked about.  Typically, when a user installs a standalone graphics card on the AMD FM2/FM2+ platform, they disable the graphics portion of the APU.  They also have to uninstall the AMD Catalyst driver suite.  This leaves the APU as a CPU only, and all of that graphics silicon is left silent and dark.

apu_first.jpg

Who in their right mind would pair a high end graphics card with the A10-7850K? This guy!

Does this need to be the case?  Absolutely not!  The GCN based graphics unit on the latest Kaveri APUs is pretty powerful when used in GPGPU/OpenCL applications.  The 4 cores/2 modules and 8 GCN cores can push out around 856 GFlops when fully utilized.  We also must consider that the APU is the first fully compliant HSA (Heterogeneous System Architecture) chip, and it handles memory accesses much more efficiently than standalone GPUs.  The shared memory space with the CPU gets rid of a lot of the workarounds typically needed for GPGPU type applications.  It makes sense that users would want to leverage the performance potential of a fully functioning APU while upgrading their overall graphics performance with a higher end standalone GPU.

Getting this to work is very simple.  Assuming the user has been using the APU as their primary graphics controller, they should update to the latest Catalyst drivers.  If the user is going to use an AMD card, then it would behoove them to totally uninstall the Catalyst driver and re-install it only after the new card is installed.  After this is completed, restart the machine, go into the UEFI, and change the primary video boot device to PEG (PCI-Express Graphics) from the integrated unit.  Save the setting and shut down the machine.  Insert the new video card and attach the monitor cable(s) to it.  Boot the machine and either re-install the Catalyst suite if an AMD card is used, or install the latest NVIDIA drivers if that is the graphics choice.

Windows 7 and Windows 8 allow users to install multiple graphics drivers from different vendors.  In my case I utilized a last generation GTX 580 (the MSI N580GTX Lightning) along with the AMD A10 7850K.  These products coexist happily together on the MSI A88X-G45 Gaming motherboard.  The monitor is attached to the NVIDIA card and all games are routed through that since it is the primary graphics adapter.  Performance seems unaffected with both drivers active.

luxmark_setup.PNG

I find it interesting that the GPU portion of the APU is named "Spectre".  Who owns those 3dfx trademarks anymore?

When I load up Luxmark I see three entries: the APU (CPU and GPU portions), the GPU portion of the APU, and then the GTX 580.  Luxmark defaults to the GPUs.  We see these GPUs listed as “Spectre”, which is the GCN portion of the APU, and the NVIDIA GTX 580.  Spectre supports OpenCL 1.2 while the GTX 580 is an OpenCL 1.1 compliant part.
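For anyone curious how applications like Luxmark see this mixed setup, a minimal OpenCL host-side listing (standard OpenCL C API, assuming an SDK from either vendor is installed) is enough to enumerate every platform and device; on a system like this one it should report both Spectre and the GTX 580:

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    // Enumerate every installed OpenCL platform (AMD, NVIDIA, ...).
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char pname[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);

        // List every device (CPU and GPU) exposed by this platform.
        cl_uint numDevices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &numDevices);
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, numDevices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char dname[256] = {0}, ver[256] = {0};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_VERSION, sizeof(ver), ver, nullptr);
            std::printf("%s: %s (%s)\n", pname, dname, ver);  // e.g. "Spectre (OpenCL 1.2 ...)"
        }
    }
    return 0;
}
```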

With both GPUs active I can successfully run the Luxmark “Sala” test.  The two units perform better together than when they are run separately.  Adding in the CPU does increase the score, but not by very much (my guess here is that the APU is going to be very memory bandwidth bound in such a situation).  Below we can see the results of the different units separate and together.

luxmark_results_02.png

These results make me hopeful about the potential of AMD’s latest APU.  It can run side by side with a standalone card, and applications can leverage its performance.  Now all we need is more HSA-aware software.  More time and more testing are needed for setups such as this, and we need to see if HSA-enabled software really does get a boost from using the GPU portion of the APU as compared to pure CPU code or code that runs on the standalone GPU.

Personally, I find the idea of a heterogeneous solution such as this appealing.  The standalone graphics card handles the actual graphics work, the CPU handles the x86 code, and HSA software can fully utilize the graphics portion of the APU in a very efficient manner.  Unfortunately, we do not have hard numbers on the handful of HSA-aware applications out there, especially when used in conjunction with standalone graphics.  We know in theory that this can work (and should work), but until developers get out there and really optimize their code for such a solution, we simply do not know if having an APU will really net the user big gains as compared to something like an i7 4770 or 4790 running pure x86 code.

full_APU_GPU.PNG

In the meantime, at least we know that these products work together without issue.  The mixed mode OpenCL results make a nice case for improving overall performance in such a system.  I would imagine with more time and more effort from developers, we could see some really interesting implementations that will fully utilize a system such as this one.  Until then, happy experimenting!

Source: AMD

Celeron II: The Second Coming

Subject: Processors | July 8, 2014 - 07:23 PM |
Tagged: intel atom, Pentium G3258, overclocking

Technically it is an Anniversary Edition Pentium processor, but it reminds those of us who have been in the game a long time of the old Celeron Ds which cost very little and overclocked like mad!  The Pentium G3258 is well under $100, but the stock speed of 3.2GHz is only a recommendation as this processor is just begging to be overclocked.  The Tech Report coaxed it up to 4.8GHz on air cooling, 100MHz higher than the i7-4790K they tested.  That a processor costing about 20% of the price of the 4790K can almost match its performance in Crysis 3, without resorting to high end watercooling, should make any gamer on a budget sit up and take notice.  Sure, you lose the extra cores and other features of the flagship processor, but if you are primarily a gamer those are not your focus; you simply want the fastest processor you can get for a reasonable amount of money.  Stay tuned for more information about the Anniversary Edition Pentium as there are more benchmarks to be run!

test-rig.jpg

"This new Pentium is an unlocked dual-core CPU based on the latest 22-nm Haswell silicon. I ran out and picked one up as soon as they went on sale last week. The list price is only 72 bucks, but Micro Center had them on sale for $60. In other words, you can get a processor that will quite possibly run at clock speeds north of 4GHz—with all the per-clock throughput of Intel's very latest CPU core—for the price of a new Call of Shooty game.

Also, ours overclocks like a Swiss watchmaker on meth."

Here are some more Processor articles from around the web:

Processors

Intel's Knights Landing (Xeon Phi, 2015) Details

Subject: General Tech, Graphics Cards, Processors | July 2, 2014 - 03:55 AM |
Tagged: Intel, Xeon Phi, xeon, silvermont, 14nm

Anandtech has just published a large editorial detailing Intel's Knights Landing. Mostly, it is stuff that we already knew from previous announcements and leaks, such as one by VR-Zone from last November (which we reported on). Officially, few details were given back then, except that it would be available as either a PCIe-based add-in board or as a socketed, bootable, x86-compatible processor based on the Silvermont architecture. Its many cores, threads, and 512 bit registers are each pretty weak, compared to Haswell, for instance, but combine to about 3 TFLOPs of double precision performance.

itsbeautiful.png

Not enough graphs. Could use another 256...

The best way to imagine it is running a PC with a modern, Silvermont-based Atom processor -- only with up to 288 processors listed in your Task Manager (72 actual cores with quad HyperThreading).

The main limitation of GPUs (and similar coprocessors), however, is memory bandwidth. GDDR5 is often the main bottleneck of compute performance and just about the first thing to be optimized. To compensate, Intel is packaging up to 16GB of memory (stacked DRAM) on the chip itself. This RAM is based on "Hybrid Memory Cube" (HMC), developed by Micron Technology and supported by the Hybrid Memory Cube Consortium (HMCC). While the actual memory used in Knights Landing is derived from HMC, it uses a proprietary interface customized for Knights Landing. Its bandwidth is rated at around 500GB/s. For comparison, the NVIDIA GeForce GTX Titan Black has 336.4GB/s of memory bandwidth.

Intel and Micron have worked together in the past. In 2006, the two companies formed "IM Flash" to produce the NAND flash for Intel and Crucial SSDs. Crucial is Micron's consumer-facing brand.

intel-knights-landing.jpg

So the vision for Knights Landing seems to be the bridge between CPU-like architectures and GPU-like ones. For compute tasks, GPUs edge out CPUs by crunching through bundles of similar tasks at the same time, across many (hundreds of, thousands of) computing units. The difference with (at least socketed) Xeon Phi processors is that, unlike most GPUs, Intel does not rely upon APIs, such as OpenCL, and drivers to translate a handful of functions into bundles of GPU-specific machine language. Instead, especially if the Xeon Phi is your system's main processor, it will run standard, x86-based software. The software will just run slowly, unless it is capable of vectorizing itself and splitting across multiple threads. Obviously, OpenCL (and other APIs) would make this parallelization easy, by their host/kernel design, but it is apparently not required.
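As a rough illustration of what "vectorizing and splitting across threads" means in plain x86 code, here is a minimal OpenMP sketch (nothing Xeon Phi-specific is assumed; any OpenMP 4.0-capable compiler will do) that the toolchain can spread across both SIMD lanes and cores:

```cpp
#include <cstddef>
#include <vector>

// A trivially parallel loop: independent iterations let the compiler vectorize
// across wide registers while OpenMP splits the range across threads/cores.
// Compile with OpenMP enabled (e.g. -fopenmp); the same code simply scales
// further on a many-core, 512-bit-wide part.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    const std::ptrdiff_t n = static_cast<std::ptrdiff_t>(x.size());
    #pragma omp parallel for simd
    for (std::ptrdiff_t i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];
    }
}
```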

It is a cool way that Intel arrives at the same goal, based on their background. Especially when you mix-and-match Xeons and Xeon Phis on the same computer, it is a push toward heterogeneous computing -- with a lot of specialized threads backing up a handful of strong ones. I just wonder if providing a more-direct method of programming will really help developers finally adopt massively parallel coding practices.

I mean, without even considering GPU compute, how efficient is most software at splitting into even two threads? Four threads? Eight threads? Can this help drive heterogeneous development? Or will this product simply try to appeal to those who are already considering it?

Source: Intel

The renewed FX-9590, still up to 5GHz

Subject: Processors | June 23, 2014 - 04:05 PM |
Tagged: amd, fx 9590, vishera

Hardware Canucks have just let out AMD's secret: a new take on a Vishera processor, the FX-9590, which will come with a Cooler Master Seidon 120 AIO LCS, adding $40 to the original $320 price tag.  The base clock of the eight cores will still be 4.7GHz with a 5GHz boost, but with a TDP of 219W the watercooler should allow the boost clock to be maintained longer.  If you ever planned on overclocking the FX-9590 but never picked it up because of the challenge of cooling it, then here is your chance.

FX-9590-1234.jpg

"It all started with a tweet. AMD teased an unnamed new FX-series chip on Twitter and we've got the inside track. It's a refreshed 5GHz FX-9590 with an included water cooling unit."

Here are some more Processor articles from around the web:

Processors

Qualcomm Focuses on Android Gaming, Snapdragon Benefits to Gamers, Developers

Subject: Processors, Mobile | June 23, 2014 - 01:08 PM |
Tagged: snapdragon, qualcomm, gaming, Android, adreno

Today Qualcomm published a 22-page white paper that keys in on the company's focus on Android gaming and the benefits that Qualcomm SoCs offer. As the dominant SoC vendor in the Android ecosystem of smartphones, tablets, and handhelds (shipping more than 32% in Q2 of 2013), Qualcomm is able to offer a unique combination of solutions to both developers and gamers that push Android gaming into higher fidelity with more robust game play.

According to the white paper, Android gaming is the fastest growing segment of the gaming market, with a 30% compound annual growth rate from 2013 to 2015 as projected by Gartner. Mobile gaming experiences have drastically improved since Android was released in 2008, with developers like Epic Games, via the Unreal Engine, pushing visuals to near-console and near-PC quality.

qcgaming1.jpg

Qualcomm is taking a heterogeneous approach to address the requirements of gaming that include AI execution, physics simulation, animation, low latency input and high speed network connectivity in addition to high quality graphics and 3D rendering. Though not directly a part of the HSA standards still in development, the many specialized engines that Qualcomm has developed for its Snapdragon SoC processors including traditional CPUs, GPUs, DSPs, security and connectivity allow the company to create a solution that is built for Android gaming dominance.

qcgaming2.jpg

In the white paper Qualcomm dives into the advantages that the Krait CPU architecture offers for CPU-based tasks, as well as the power of the Adreno 4x series of GPUs, which offer both raw performance and the flexibility to support current and future gaming APIs. All of this is done within a single-digit wattage draw and a passive, fanless design, which points to the huge engineering and implementation undertaking that mobile gaming requires.

qcgaming3.jpg

For developers, the ability to target Snapdragon architectures with a single code path that can address a scalable product stack allows for the least amount of development time and the most return on investment possible. Qualcomm continues to support the development community with tools and assistance to bring out the peak performance of Krait and Adreno to get games running on lower power parts as well as the latest and upcoming generations of SoCs in flagship devices.

It is great to see Qualcomm focus on this aspect of the mobile market; the challenges it presents require strong dedication from these engineering teams. Being able to create compelling gaming experiences with high quality imagery while maintaining the required power envelope is a task that many other companies have struggled with.

Check out the new landing page over at Qualcomm if you are interested in more technical information, as well as direct access to the white paper detailing the work Qualcomm is putting into its Snapdragon line of SoCs for gamers.

Source: Qualcomm

AMD Restructures. Lisa Su Is Now COO.

Subject: Editorial, General Tech, Graphics Cards, Processors, Chipsets | June 13, 2014 - 06:45 PM |
Tagged: x86, restructure, gpu, arm, APU, amd

According to VR-Zone, AMD reworked their business last Thursday, sorting each of their projects into two divisions and moving some executives around. The company is now segmented into the "Enterprise, Embedded, and Semi-Custom Business Group" and the "Computing and Graphics Business Group". The company used to be divided between "Computing Solutions", which handled CPUs, APUs, chipsets, and so forth; "Graphics and Visual Solutions", which is best known for GPUs but also contains console royalties; and "All Other", which was... everything else.

amd-new2.png

Lisa Su, former general manager of global business, has moved up to Chief Operating Officer (COO), along with other changes.

This restructure makes sense for a couple of reasons. First, it pairs some unprofitable ventures with other, highly profitable ones. AMD's graphics division has been steadily adding profitability to the company while its CPU division has been mostly losing money. Secondly, "All Other" is about as nebulous as a name can get. Instead of having three unbalanced divisions, one of which makes no sense to someone glancing at AMD's quarterly earnings reports, they should now have two roughly equal segments.

At the very least, it should look better to an uninformed investor. Someone who does not know the company might look at the sheet and assume that, if AMD divested from everything except graphics, the company would be profitable -- if, you know, they did not know that console contracts came into the graphics division because the compute division had x86 APUs, and so forth. This setup is now more aligned to customers, not products.

Source: VR-Zone

Thinking of swapping Linux for Windows on your new Bay Trail NUC?

Subject: Processors | June 5, 2014 - 06:32 PM |
Tagged: baytrail, linux, N2820, ubuntu 14.04, Linux 3.13, Linux 3.15, mesa, nuc

It would seem that installing Linux on your brand new Bay Trail powered NUC will cost you a bit of performance.  The testing Phoronix has performed on the Intel NUC DN2820FYKH proves that it can handle running Linux without a hitch; however, you will find that your overall graphical performance will dip a bit.  Using Mesa 10.3 and both the current 3.13 kernel and the 3.15 development kernel, Phoronix saw a small delta in performance between Ubuntu 14.04 and Win 8.1 ... until they hit the OpenGL tests.  As there is still no full OpenGL 4.0+ support, some tests could not be run, and even in the tests that could be run there was a very large performance gap.  Do not let this worry you; as they point out in the article, there is a dedicated team working on full compliance and you can expect updated results in the near future.

image.php_.jpg

"A few days ago my benchmarking revealed Windows 8.1 is outperforming Ubuntu Linux with the latest Intel open-source graphics drivers on Haswell hardware. I have since conducted tests on the Celeron N2820 NUC, and sadly, the better OpenGL performance is found with Microsoft's operating system."

Here are some more Processor articles from around the web:

Processors

Source: Phoronix

Computex 2014: Cavium Introduces 48 Core ThunderX ARM Processors

Subject: Processors, Mobile | June 4, 2014 - 11:00 AM |
Tagged: computex, computex 2014, arm, cavium, thunderx

While much of the news coming from Computex was centered around PC hardware, many of ARM's partners are making waves as well. Take Cavium for example, introducing the ThunderX CN88XX family of processors. With a completely custom ARMv8 architectural core design, the ThunderX processors will range from 24 to 48 cores and are targeted at large volume servers and cloud infrastructure. 48 cores!

The ThunderX family will be the first SoC to scale up to 48 cores, and with a clock speed of 2.5 GHz and 16MB of L2 cache it should offer some truly impressive performance levels. Cavium also claims it is the first socket-coherent ARM processor, using the Cavium Coherent Processor Interconnect. I/O capacity stretches into the hundreds of gigabits, and quad-channel DDR3 and DDR4 memory at speeds up to 2.4 GHz keeps the processors fed with work.

thunderx.jpg

Source: Gigaom.com

Here is the breakdown on the ThunderX families.

ThunderX_CP: Up to 48 highly efficient cores along with integrated virtSOC, dual socket coherency, multiple 10/40 GbE and high memory bandwidth. This family is optimized for private and public cloud web servers, content delivery, web caching, search and social media workloads.

ThunderX_ST: Up to 48 highly efficient cores along with integrated virtSOC, multiple SATAv3 controllers, 10/40 GbE & PCIe Gen3 ports, high memory bandwidth, dual socket coherency, and scalable fabric for east-west as well as north-south traffic connectivity. This family includes hardware accelerators for data protection/integrity/security, efficient user-to-user data movement (RoCE), and compressed storage. This family is optimized for Hadoop, block & object storage, distributed file storage, and hot/warm/cold storage type workloads.

ThunderX_SC: Up to 48 highly efficient cores along with integrated virtSOC, 10/40 GbE connectivity, multiple PCIe Gen3 ports, high memory bandwidth, dual socket coherency, and scalable fabric for east-west as well as north-south traffic connectivity. The hardware accelerators include Cavium’s industry leading, 4th generation NITROX and TurboDPI technology with acceleration for IPSec, SSL, Anti-virus, Anti-malware, firewall and DPI. This family is optimized for Secure Web front-end, security appliances and Cloud RAN type workloads.

ThunderX_NT: Up to 48 highly efficient cores along with integrated virtSOC, 10/40/100 GbE connectivity, multiple PCIe Gen3 ports, high memory bandwidth, dual socket coherency, and scalable fabric with feature-rich capabilities for bandwidth provisioning, QoS, traffic shaping, and tunnel termination. The hardware accelerators include high packet throughput processing, network virtualization, and data monitoring. This family is optimized for media servers, scale-out embedded applications, and NFV type workloads.

We spoke with ARM earlier this year about its push into the server market, and it is partnerships like these that will begin the ramp up to widespread adoption of ARM-based server infrastructure. The ThunderX family will begin sampling in early Q4 2014 and production should be available by early 2015.

Richard Huddy Departs Intel, Rejoins AMD

Subject: Graphics Cards, Processors | June 3, 2014 - 02:10 PM |
Tagged: Intel, amd, richard huddy

Interesting news is crossing the ocean today as we learn that Richard Huddy, who has previously had stints at NVIDIA, ATI, AMD, and most recently Intel, is teaming up with AMD once again. Richard brings with him years of experience and innovation in the world of developer relations and graphics technology. He is often called "the Godfather" of DirectX, and with his return AMD wants to prove to the community it is taking PC gaming seriously.

richardhuddy.jpg

The official statement from AMD follows:

AMD is proud to announce the return of the well-respected authority in gaming, Richard Huddy. After three years away from AMD, Richard returns as AMD's Gaming Scientist in the Office of the CTO - he'll be serving as a senior advisor to key technology executives, like Mark Papermaster, Raja Koduri and Joe Macri. AMD is extremely excited to have such an industry visionary back. Having spent his professional career with companies like NVIDIA, Intel and ATI, and having led the worldwide ISV engineering team for over six years at AMD, Mr. Huddy has a truly unique perspective on the PC and Gaming industries.

Mr. Huddy rejoins AMD after a brief stint at Intel, where he had a major impact on their graphics roadmap.  During his career Richard has made enormous contributions to the industry, including the development of DirectX and a wide range of visual effects technologies.  Mr. Huddy’s contributions in gaming have been so significant that he was immortalized as ‘The Scientist’ in Max Payne (if you’re a gamer, you’ll see the resemblance immediately). 

Kitguru has a video from Richard Huddy explaining his reasoning for the move back to AMD.

Source: Kitguru.net

This move points AMD in a very interesting direction going forward. The creation of the Mantle API and the debate around AMD's developer relations programs are going to be hot topics as we move into the summer, and I am curious how quickly Huddy thinks he can have an impact.

I have it on good authority we will find out very soon.