Rumor: Intel Adds New Codecs with Kaby Lake-S iGPU

Subject: Processors | June 25, 2016 - 03:15 AM |
Tagged: Intel, kaby lake, iGPU, h.265, hevc, vp8, vp9, codec, codecs

Fudzilla isn't really talking about their sources, so it's difficult to gauge how confident we should be, but they claim to have information about the video codecs supported by Kaby Lake's iGPU. This update is supposed to include hardware support for HDR video, the Rec.2020 color gamut, and HDCP 2.2, because, if videos are pirated prior to their release date, the solution is clearly to punish your paying customers with restrictive, compatibility-breaking technology. Time-traveling pirates are the worst.

Intel-logo.png

According to their report, Kaby Lake-S will support VP8, VP9, HEVC 8b, and HEVC 10b, both encode and decode. However, they then go on to say that 10-bit VP9 and 10-bit HEVC (Main 10) will not get hardware encoding, only decoding. I'm not too knowledgeable about video codecs, but I don't see much benefit in a Main 10 pipeline that can only encode 8-bit content. Perhaps someone in our comments can clarify.

Source: Fudzilla

UCDavis Manufactures a 1000-Core CPU

Subject: Processors | June 22, 2016 - 02:00 AM |
Tagged: ucdavis

Update (June 22nd @ 12:36 AM): Errrr. Right. I accidentally referred to the CPU in terms of TFLOPs. That's incorrect -- it's not a floating-point processor. It should be trillions of operations per second (teraops). Whoops! Also, it has a die area of 64 sq. mm, compared to the 520 sq. mm of something like GF110.

So this is an interesting news post. Graduate students at UC Davis have designed and produced a thousand-core CPU at IBM's facilities. The processor is manufactured on IBM's 32nm process, which is quite old -- about half-way between NVIDIA's Fermi and Kepler if viewed from a GPU perspective. Its die area was not listed, but we've reached out to their press contact for more information. The chip can be clocked up to 1.78 GHz, yielding 1.78 teraops of theoretical performance.

These numbers tell us quite a bit.

ucdavis-2016-thousandcorecpu.jpg

The first thing that stands out to me is that the processor is clocked at 1.78 GHz, has 1000 cores, and is rated at 1.78 teraops. This is interesting because modern GPUs (note that this is not a GPU -- more on that later) are rated at twice the clock rate times the number of cores. The factor of two comes from fused multiply-add (FMA), a*b + c, which can be implemented as a single instruction and is widely used in real-world calculations. Two mathematical operations in a single instruction yield a theoretical max of 2 times clock times core count. Since this processor does not count the factor of two, it seems like its instruction set is massively reduced compared to commercial processors. If they even cut out FMA, what else did they remove from the instruction set? This would at least partially explain why the CPU has such a high theoretical throughput per transistor compared to, say, NVIDIA's GF110, which has a slightly lower TFLOP rating with about five times the transistor count -- and that's ignoring all of the complexity-saving tricks that GPUs play and this chip does not. Update (June 22nd @ 12:36 AM): Again, none of this makes sense, because it's not a floating-point processor.

"Big Fermi" uses 3 billion transistors to achieve 1.5 TFLOPs when operating on 32 pieces of data simultaneously (see below). This processor does 1.78 teraops with 0.621 billion transistors.

On the other hand, this chip is different from GPUs in that it doesn't use their complexity-saving tricks. GPUs save die space by tying multiple threads together and forcing them to behave in lockstep. On NVIDIA hardware, 32 threads are bound into a “warp”. On AMD, 64 make up a “wavefront”. On Intel's Xeon Phi, AVX-512 packs sixteen 32-bit values into a single vector and operates on them at once. GPUs use this architecture because, if you have a really big workload, chances are you have very related tasks; neighbouring pixels on a screen will be operating on the same material with slightly offset geometry, multiple vertices of the same object will be deformed by the same process, and so forth.

This processor, on the other hand, has a thousand cores that are independent. Again, this is wasteful for tasks that map easily to single-instruction-multiple-data (SIMD) architectures, but the reverse is also true: it isn't wasteful on highly parallel tasks that SIMD handles poorly. SIMD makes an assumption about your data and tries to optimize how it maps to the real world -- it's either a valid assumption, or it's not. If it isn't? A chip like this would have multi-fold performance benefits, op for op.
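To make the lockstep point concrete, here is a toy Python sketch. It is purely my own illustration, not a model of this chip or any real GPU: it shows how a divergent branch drops SIMD lane utilization while fully independent cores stay busy.

```python
# Toy model: a SIMD "warp" must execute both sides of a branch, masking off
# inactive lanes, while independent cores each run only the path they need.

def simd_lane_utilization(values, warp_size=32):
    """Fraction of lane-cycles doing useful work under lockstep execution."""
    useful = total = 0
    for i in range(0, len(values), warp_size):
        warp = values[i:i + warp_size]
        evens = sum(1 for v in warp if v % 2 == 0)  # lanes taking the "if" path
        odds = len(warp) - evens                    # lanes taking the "else" path
        passes = (1 if evens else 0) + (1 if odds else 0)
        total += passes * warp_size   # every pass occupies the whole warp
        useful += len(warp)           # each element does exactly one useful op
    return useful / total

data = list(range(1024))  # even/odd mix, so every warp diverges
print(f"SIMD lane utilization with divergence: {simd_lane_utilization(data):.0%}")
print("Independent cores: 100% (each core only runs the path its data needs)")
```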

Source: UCDavis

Rumor: AMD Plans 32-Core Opteron with 128 PCIe Lanes

Subject: Processors | June 16, 2016 - 03:18 AM |
Tagged: Zen, opteron, amd

We're beginning to see how the Zen architecture will affect AMD's entire product stack. This news refers to their Opteron line of CPUs, which are intended for servers and certain workstations. They tend to allow lots of memory, have lots of cores, and connect to a lot of I/O options and add-in boards at the same time.

amd-2016-e3-zenlogo.png

In this case, Zen-based Opterons will be available in two, four, sixteen, and thirty-two core options, with two threads per core (yielding four, eight, thirty-two, and sixty-four threads, respectively). TDPs will range between 35W and 180W. Intel's Xeon E7 v4 goes up to 165W for 24 cores (on Broadwell-EX), so AMD has a little more headroom to play with for those extra eight cores. That is obviously a lot of cores, and it should, again, be good for cloud applications that can be parallelized.

As for the I/O side of things, the rumored chip will have 128 PCIe 3.0 lanes. It's unclear whether that is per socket or per system. The wording sounds like it is per CPU, although much earlier rumors said it would have 64 PCIe lanes per socket with dual-socket boards available. It will also support sixteen 10-Gigabit Ethernet connections, which, again, is great for servers, especially with virtualization.

These are expected to launch in 2017. Fudzilla claims that “very late 2016” is possible, but also that it will launch after the high-end desktop parts, which are expected to be delayed until 2017.

Source: Fudzilla

AMD "Sneak Peek" at RX Series (RX 480, RX 470, RX 460)

Subject: Graphics Cards, Processors | June 13, 2016 - 07:51 PM |
Tagged: amd, Polaris, Zen, Summit Ridge, rx 480, rx 470, rx 460

AMD has just unveiled their entire RX line of graphics cards at E3 2016's PC Gaming Show. It was a fairly short segment, but it had a few interesting points in it. At the end, they also gave another teaser of Summit Ridge, which uses the Zen architecture.

amd-2016-e3-470460.png

First, Polaris. As we know, the RX 480 was going to bring >5 TFLOPs at a $199 price point. They elaborated that this will apply to the 4GB version, which likely means that another version with more VRAM will be available, and that implies 8GB. Beyond the RX 480, AMD has also announced the RX 470 and RX 460. Little is known about the 470, but they mentioned that the 460 will have a <75W TDP. This is interesting because the PCIe bus provides 75W of power. This implies that it will not require any external power, and thus could be a cheap and powerful (in terms of esports titles) addition to an existing desktop. This is an interesting way to use the power savings of the die shrink to 14nm!

amd-2016-e3-backpackpc.png

They also showed off a backpack VR rig. They didn't really elaborate, but it's here.

amd-2016-e3-summitdoom.png

As for Zen? AMD showed the new architecture running DOOM, and added the circle-with-Zen branding to a 3D model of a CPU. Zen will be coming first to the enthusiast category with (up to?) eight cores, two threads per core (16 threads total).

amd-2016-e3-zenlogo.png

The AMD Radeon RX 480 will launch on June 29th for $199 USD (4GB). None of the other products have a specific release date.

Source: AMD

James Reinders Leaving Intel and What It Means

Subject: Processors | June 8, 2016 - 12:17 PM |
Tagged: Xeon Phi, Intel, gpgpu

Intel's recent restructure had a much broader impact than I originally believed. Beyond the large number of employees who will lose their jobs, we're even seeing it affect other areas of the industry. Typically, ASUS releases their ZenFone line with x86 processors, which I assumed was based on big subsidies from Intel to push their instruction set into new product categories. This year, ASUS chose the ARM-based Qualcomm Snapdragon, which seemed to me like Intel decided to stop the bleeding.

reinders148x148.jpg

That brings us to today's news. After over 27 years at Intel, James Reinders accepted the company's early retirement offer, scheduled for his 10,001st day with the company, and will step down from his position as Intel's High Performance Computing Director. He worked on the Larrabee and Xeon Phi initiatives, and published several books on parallelism.

According to his letter, it sounds like his retirement offer was part of a company-wide package, not one targeting his division specifically. That would sort-of make sense, because Intel is focusing on cloud and IoT. Xeon Phi is an area where Intel is battling NVIDIA for high-performance servers, and I would expect that it has potential for cloud-based applications. Then again, as I say that, AWS only has a handful of GPU instances, and they are running fairly old hardware at that, so maybe the demand isn't there yet.

Video Perspective: Intel Giving Away 6950X + SSD 750 Systems at PAX Prime

Subject: Processors | June 7, 2016 - 07:29 PM |
Tagged: Intel, video, PAX, pax prime, i7-6950X, taser

Intel is partnering with 12 of their top system builders to build amazing PCs around the Core i7-6950X 10-core Extreme Edition processor and the SSD 750 Series drives. Intel will be raffling off 7 of these systems at PAX Prime in September. You can find out more details on the competition and how you can enter at http://inte.ly/rigchallenge. 

As for us, we got a taser.

Looking for a new CPU? You will be waiting until January at the earliest

Subject: Processors | June 7, 2016 - 06:45 PM |
Tagged: Zen, kaby lake, Intel, delayed, amd

Bad news, upgraders: neither AMD nor Intel will be launching their new CPUs until the beginning of next year. Both AMD's Zen and Intel's Kaby Lake have now been delayed instead of launching in Q4 and Q3 of this year, respectively. DigiTimes did not delve into the reasons behind the delay in AMD's 14nm GLOBALFOUNDRIES (and Samsung) sourced Zen, but unfortunately the reasons behind Intel's delay are all too clear. With large stockpiles of Skylake and Haswell processors, and systems based around them, sitting in the channel, AMD's delay creates an opportunity for Intel and retailers to move that stock. Once Kaby Lake arrives, those systems will no longer be attractive to consumers and their prices will plummet.

Here's hoping AMD's delay does not imply anything serious, though the lack of a new product release at a time which traditionally sees sales increase is certainly going to hurt their bottom line for 2016.

bad-news-everyone.jpg

"With the delays, the PC supply chain will not be able to begin mass production for the next-generation products until November or December and PC demand is also unlikely to pick up until the first quarter of 2017."

Source: DigiTimes

Intel Launches Xeon E7 v4 Processors

Subject: Processors | June 7, 2016 - 01:39 PM |
Tagged: xeon e7 v4, xeon e7, xeon, Intel, broadwell-ex, Broadwell

Yesterday, Intel launched eleven SKUs of Xeon processors that are based on Broadwell-EX. While I don't follow this product segment too closely, it's a bit surprising that Intel launched them so close to consumer-level Broadwell-E. Maybe I shouldn't be surprised, though.

intel-logo-cpu.jpg

These processors scale from four cores up to twenty-four of them, with HyperThreading. They are also available in cache sizes from 20MB up to 60MB. With Intel's Xeon naming scheme, the leading number immediately after the E7 in the product name denotes the number of CPUs that can be installed in a multi-socket system. The E7-8XXX line can be run in an eight-socket motherboard, while the E7-4XXX models are limited to four sockets per system. TDPs range between 115W and 165W, which is pretty high, but to be expected for a giant chip that runs at a fairly high frequency.

Intel Xeon E7 v4 launched on June 6th with listed prices between $1,223 and $7,174 per CPU.

Source: Intel

HSA 1.1 Released

Subject: Graphics Cards, Processors, Mobile | June 6, 2016 - 11:11 AM |
Tagged: hsa 1.1, hsa

The HSA Foundation released version 1.1 of their specification, which focuses on “multi-vendor” compatibility. In this case, multi-vendor doesn't refer to companies that refused to join the HSA Foundation, namely Intel and NVIDIA, but rather to multiple types of vendors. Rather than aligning with AMD's focus on CPU-GPU interactions, HSA 1.1 includes digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and other accelerators. I can see this being useful in several places, especially on mobile, where cameras, sound processors, CPU cores, and a GPU regularly share video buffers.

HSA Foundation_Logo.png

That said, the specification also mentions “more efficient interoperation with non-HSA compliant devices”. I'm not quite sure what that specifically refers to, but it could be important to keep an eye on for future details -- whether it is relevant for Intel and NVIDIA hardware (and so forth).

Charlie, down at SemiAccurate, notes that HSA 1.1 will run on all HSA 1.0-compliant hardware. This makes sense, but I can't see where this is explicitly mentioned in their press release. I'm guessing that Charlie was given some time on a conference call (or face-to-face) regarding this, but it's also possible that he may be mistaken. It's also possible that it is explicitly mentioned in the HSA Foundation's press blast and I just fail at reading comprehension.

If so, I'm sure that our comments will highlight my error.

Rounding up the i7-6950X reviews

Subject: Processors | June 3, 2016 - 08:55 PM |
Tagged: X99, video, Intel, i7-6950X, core i7, Core, Broadwell-E, Broadwell

You have seen our take on the impressively powerful and extremely expensive i7-6950X, but of course we were not the only ones to test out Intel's new top-of-the-line processor. Hardware Canucks focused on the difference between the ~$1700 i7-6950X and the ~$1100 i7-6900K. From synthetic benchmarks such as AIDA through gaming at 720p and 1080p, they tested the two processors against each other to see when it would make sense to spend the extra money on the new Broadwell-E chip. Check out what they thought of the chip overall, as well as the scenarios where they felt it would be fully utilized.

BROADWELL-E-6.png

"10 cores, 20 threads, over $1700; Intel's Broadwell-E i7-6950X delivers obscene performance at an eye-watering price. Then there's the i7-6900K which boasts all the same niceties in a more affordable package."

Computex 2016: Here It Is! Your Moment of Zen!

Subject: Processors | June 1, 2016 - 03:57 AM |
Tagged: Zen, computex 2016, computex, amd

At the end of the AMD Computex 2016 keynote, Lisa Su, President and CEO of the company, announced a few details about their upcoming Zen architecture. This will mark the end of the Bulldozer line of architectures that attempted to save die area by designing cores in pairs, eliminating what AMD projected to be redundancies as the world moved toward multi-core and GPU compute. Zen “starts from scratch” and targets where they now see desktop, server, laptop, and embedded devices heading.

amd-2016-zen-markets.png

They didn't really show a whole lot at the keynote. They presented an animation that was created and rendered on the new architecture. I mean, okay, but that's kind-of like reviewing a keyboard by saying that you used it to type the review. It's cool that you have sample silicon available to use internally, but all that tells us is that it physically works.

amd-2016-zen-die.png

That said, Lisa Su did give some hard numbers, which should be interesting for our readers. AMD claims that Zen has 40% higher IPC than their previous generation (which we assume is Excavator). It will be available for desktop with eight cores, two threads per core, on their new AM4 platform. It also taped out earlier this year, with wide sampling in Q3.
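As a rough sense of scale, a 40% IPC gain at the same clock is worth about as much as a 40% clock bump on the old core. The clock figures in the sketch below are placeholders of my own, since AMD quoted no frequencies.

```python
# Single-thread performance scales roughly with IPC x clock.
def perf(ipc, clock_ghz):
    return ipc * clock_ghz

excavator_ipc = 1.0   # normalize the previous generation to 1.0
zen_ipc = 1.4         # AMD's claimed +40%

# A hypothetical 3.0 GHz Zen core would roughly match a 4.2 GHz Excavator core.
print(f"{perf(zen_ipc, 3.0):.1f}")        # 4.2
print(f"{perf(excavator_ipc, 4.2):.1f}")  # 4.2
```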

amd-2016-zen-video.png

I'm curious how it will end up. The high-end CPU market is a bit... ripe for the picking these days. If AMD gets close to Intel in performance, and offers competitive prices and features alongside it, then it would make sense for enthusiast builds. We'll need to wait for benchmarks, but there seems to be low-hanging fruit.

Report: AMD Socket AM4 Compatible with Existing AM2/AM3 Coolers

Subject: Cases and Cooling, Processors | May 25, 2016 - 04:54 PM |
Tagged: Zen, socket AM3, cpu cooler, amd, AM4

Upgrading to the upcoming Zen processors won't require the purchase of a new cooler or adapter, according to a report from Computer Base (German language).

amd_wraith.jpg

The AMD Wraith Cooler (image credit: The Tech Report)

Answering a customer question on Facebook, a Thermalright representative responded (translated):

"For all AMD fans, we have good news. As we advance AMD has assured the new AM4 processors and motherboards are put on the usual base-fixing, which is standard for AM2. To follow all the Thermalright coolers are used on the Zen processors without additional accessories!"

This news is hardly surprising considering AMD has used the same format for some time, much as Intel's current CPUs still work with coolers designed for LGA 1156.

Rumor: Apple's A11 SoC Reaches Tapeout at TSMC 10nm

Subject: Processors, Mobile | May 9, 2016 - 05:42 PM |
Tagged: apple, a11, 10nm, TSMC

Before I begin, the report comes from DigiTimes and they cite anonymous sources for this story. As always, a grain of salt is required when dealing with this level of alleged leak.

apple.png

That out of the way, rumor has it that Apple's A11 SoC has been taped out on TSMC's 10nm process node. This is still a little ways away from production, however. From here, TSMC should provide samples of the now-finalized chip in Q1 2017, start production a few months later, and land in iOS devices somewhere in Q3/Q4. Knowing Apple, that will probably align with their usual release schedule -- around September.

DigiTimes also reports that Apple will likely make their split-production idea a recurring habit. Currently, the A9 processor is fabricated at TSMC and Samsung on two different process nodes (16nm for TSMC and 14nm for Samsung). They claim that two-thirds of A11 chips will come from TSMC.

Source: DigiTimes

Kaby Lake Benchmarks Might Have Been Leaked

Subject: Processors | May 9, 2016 - 04:51 PM |
Tagged: kaby lake, Intel

Fudzilla claims that they have a screenshot of SiSoft benchmarks belonging to the Intel Core i7-7700k. I should note that the image only mentions “Kabylake,” not any specific model number. It's possible that the branding will change this generation, and there's an infinitesimal chance that this is not the highest-level SKU of that specific chip, but it should be safe to assume that this is the 7700k, and that it will be branded as such. I'm just being over-cautious.

intel-2016-7700k-fudzilla.jpg

Image Credit: Fudzilla

In terms of specifications, Kaby Lake will be a quad-core processor that runs at 3.6 GHz base and 4.2 GHz turbo, backed by 8MB of L3 cache. The graphics processor has 24 CUs that can reach a clock of 1.15 GHz. If Intel hasn't changed the GPU architecture since Skylake, this equates to 192 FP32 processors and 442 GFLOPs. Apart from the lower CPU base clock, 3.6 GHz versus Skylake's 4.0 GHz, Kaby Lake seems to be identical to Skylake.
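For what it's worth, the GFLOPS figure above is easy to reproduce if you assume the 24 reported units are Skylake-style execution units with eight FP32 ALUs each; that layout is an assumption on my part, not something the leak spells out.

```python
eus = 24            # graphics units reported in the screenshot
fp32_per_eu = 8     # assumed Gen9-style layout: two 4-wide SIMD ALUs per EU
clock_ghz = 1.15
ops_per_fma = 2     # a fused multiply-add counts as two operations

shaders = eus * fp32_per_eu                 # 192 FP32 "processors"
gflops = shaders * ops_per_fma * clock_ghz  # 441.6, i.e. the ~442 GFLOPS above

print(shaders, round(gflops, 1))            # 192 441.6
```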

I was hoping to compare the benchmark results with Core i7-6700k, but I'm not sure which version of SiSoft they're using. The numbers don't seem to line up with our results (SiSoft 2013 SP3a) or the SiSoft 2015 benchmarks that I've found around the net (and even those 2015 benchmarks varied greatly). It might just be my lack of experience with CPU benchmarks, but I'd rather just present the data.

Source: Fudzilla

Speaking of Leaked Benchmarks: Broadwell-E

Subject: Processors | May 5, 2016 - 07:26 PM |
Tagged: Intel, Broadwell-E

NVIDIA is not the only one with leaked benchmarks this week -- it's Intel's turn!

Silicon Lottery, down at the Overclock.net forums, got their hands on the ten-core, twenty-thread Intel Core i7-6950X. Because Silicon Lottery is all about buying CPUs, testing how they overclock, and reselling them, it looks like each of these results is overclocked. The base clock is listed as 3.0 GHz, but the tests were performed at 4.0 GHz or higher.

intel-2016-siliconlottery-broadwelle-oc.png

Image Credit: Silicon Lottery via Overclock.net

They only had access to a single CPU, but they were able to get a “24/7” stable overclock at 4.3 GHz, pushed to 4.5 GHz for a benchmark or two. This could vary from part to part, as this all depends on microscopic errors that were made during manufacturing, and bigger chips have more surface area to run into them. These tiny imprecisions can require excess voltage to hit higher frequencies, causing a performance variation between parts. Too much, and the manufacturer will laser-cut under-performing cores, if possible, and sell it as a lesser part. That said, Silicon Lottery said that performance ran into a wall at some point, which sounds like an architectural limitation.

Broadwell-E is expected to launch at Computex.

Gaming at the low end, checking out the Athlon X4 880K

Subject: Processors | April 26, 2016 - 09:32 PM |
Tagged: Wraith, Godavari, GLOBALFOUNDRIES, FM2+, amd, X4 880K

Remember that FM2+ refresh which Josh informed you about back in March? The APUs have started arriving on test benches and can be benchmarked independently to see what this ~$100 processor and the Wraith cooler are capable of. Neoseeker compares the new 880K against the older FX-4350 in a long series of benchmarks, which show the 880K to be the better part in most cases. There are some interesting exceptions, in which the FX-4350's slightly higher frequency allows it to pull ahead by a small margin, so there are cases where the less expensive chip would make sense. Read the full review to see which chip makes more sense for you.

01x.jpg

"Today we take a look at the AMD Athlon X4 880K, a quad-core FM2+ processor with 4.0/4.2GHz base/Turbo clocks and unlocked multiplier priced at under $100 USD. It's designed for enthusiasts on a budget looking for the fastest multi-core Athlon processor yet without any integrated GPU to add to the cost. It even shares the 95W TDP of AMD's higher-end APUs for optimized power consumption that further leads to more overclocking headroom."

Source: Neoseeker

AMD Expands Wraith Air Cooler Lineup With More CPUs

Subject: Cases and Cooling, Processors | April 22, 2016 - 03:36 PM |
Tagged: Wraith, quiet computing, heatsink, cpu cooler, cpu, AMD Wraith, amd, air cooling

AMD has expanded the CPU lineup featuring their high-performance Wraith air cooling solution, with the quiet cooler now being offered with two more FX-series processors.

amd_wraith.jpg

Image credit: The Tech Report

"AMD has heard the feedback from reviewers and PC users everywhere: the near-silent, capable AMD Wraith Cooler is a resounding success. The question they keep asking is, 'When will the Wraith Cooler be available on more AMD Processors?'

We’re pleased to announce that the wait is over. The high-performance AMD FX 8350 and AMD FX 6350 processors now include a true premium thermal solution in the AMD Wraith Cooler, and each continues to deliver the most cores and the highest clock rates in its class."

8350-FX.jpg

The lineup featuring AMD's most powerful air solution now includes the following products:

  • AMD FX 8370
  • AMD FX 8350
  • AMD FX 6350
  • AMD A10-7890K

The Wraith cooler initially made its debut with the FX-8370 CPU, and was added to the new A10-7890K APU with the FM2+ refresh last month.

Source: AMD

AMD licenses server processor technology to Chinese companies

Subject: Processors | April 21, 2016 - 10:02 PM |
Tagged: amd, Zen, China, chinese, licensing

As part of its earnings release today, AMD announced that it has partnered with a combination of public and private Chinese companies to license its high-end server architecture and products. The Chinese company is called THATIC, Tianjin Haiguang Advanced Technology Investment Co. Ltd., and it will license x86 designs and SoC technology, providing all the tools needed to make a server platform including CPUs, interconnects, and controllers.

This move is important and intriguing in several ways. First, for AMD, this could be a step to get the company and its products some traction and growth after falling well behind Intel's Xeon platforms in the server space. Increasing the market share of AMD technology, in nearly any capacity, is a move the company needs to have any chance to return to profitability. The Chinese government, for its part, finally gets access to the x86 architecture, though not in the form of its own license.

amdhq.jpg

By licensing the x86 designs to THATIC, AMD could create an entire host of competitors for itself as well as for Intel, which won't help Intel's inroads into the Chinese market for enterprise tech. Intel does not license out x86 technology at all, deciding instead to keep it completely in-house in hopes of being the single provider of processors for devices from the cloud to the smartphone.

The first products built by THATIC will likely use the upcoming Zen architecture, due out in early 2017. AMD creates an interesting space for itself with this partnership - the company will sell its own Zen-based chips that could compete with the custom designs the Chinese organization builds. It's possible that a non-compete of sales based on region is part of the arrangement.

Out of the gate, AMD expects to make $293 million from the deal as part of the joint venture, and it will also make money from royalties. That's great news for a company that just posted another net loss for Q1 2016.

Source: Forbes

How Intel Job Cuts and Restructuring Affect Enthusiasts

Subject: Processors | April 21, 2016 - 06:44 PM |
Tagged: restructure, Intel

Earlier this week Intel announced a major restructuring that will result in the loss of 12,000 jobs over the next several weeks, an amount equal to approximately 11% of the company's workforce. I've been sitting on the news for a while, trying to decide what I could add to the hundreds of reports on it and honestly, I haven't come to any definitive conclusion. But here it goes.

It's obviously worth noting the humanitarian part of this announcement - 12,000 people will be losing their job. I feel for them and wish them luck finding employment quickly. It sucks to see anyone lose their job, and maybe more so with a company that is still so profitable and innovative.

intelofficelogo.jpg

The reasons for the restructuring are obviously complex, but the major concern is the shift in focus towards IoT (Internet of Things) and cloud infrastructure as the primary growth drivers. 

The data center and Internet of Things (IoT) businesses are Intel’s primary growth engines, with memory and field programmable gate arrays (FPGAs) accelerating these opportunities – fueling a virtuous cycle of growth for the company. These growth businesses delivered $2.2 billion in revenue growth last year, and made up 40 percent of revenue and the majority of operating profit, which largely offset the decline in the PC market segment.

That last line is the one that might be the most concerning for enthusiasts and builders that read PC Perspective. The decline of the PC market has been a constant hum in the back of our minds for the better part of 10 years. Everyone from graphics card vendors to motherboard manufacturers, and anyone else whose products depend on the consumer PC to be relevant, has been worried about what will happen as the PC continues in a southward spiral.

But it's important to point out that Intel has done this before, has taken the stance that the consumer PC is bad business. Remember the netbook craze and the rise of the Atom product line? When computers were "fast enough" for people to open up a browser and get to their email? At that point Intel had clearly pushed the enthusiast and high-performance computing market to the back burner. This also occurred when management pushed Intel into the mobile space, competing directly with the likes of Qualcomm in a market it didn't quite have the product portfolio to compete in.

Then something happened - PC gaming proved to be a growth segment after all. Intel started to realize that high-end components mattered, and it made attempts to recapture the market's mind share (as it never lost the market share). That is where the unlocked processors in notebooks and "anniversary edition" CPUs were born, in the labs of an Intel where gamers and enthusiasts mattered. Hell, the entire creation of the Devil's Canyon platform was predicated on the idea that the enthusiast community mattered.

slides01.jpg

I thought we were moving in the right direction. But it appears we have another setback. Intel is going to downplay the value and importance of the market that literally defines and decides what every other consumer buys. Enthusiasts are the trend setters, the educators and the influencers. When families and friends and co-workers ask for suggestions for new phones, tablets and notebooks, they ask us. 

Maybe Intel is just in another cycle, another loop about the fate of the PC and what it means. Did tablets and the iPad kill off the notebook? Did mobile games on your iPhone keep users from flocking to PC games? Has the PS4 or Xbox One destroyed the market for PC-based gaming and VR? No.

The potential worry now is that one of these times, as Intel feigns disinterest in the PC, it may stick.

Source: Intel

Sony plans PlayStation NEO with massive APU hardware upgrade

Subject: Graphics Cards, Processors | April 19, 2016 - 03:21 PM |
Tagged: sony, ps4, Playstation, neo, giant bomb, APU, amd

Based on a new report from Giant Bomb, Sony is set to release a new console this year with upgraded processing power and a focus on 4K capabilities, code-named NEO. We have been hearing for several weeks that both Microsoft and Sony were planning partial-generation upgrades, but it appears that specifics of Sony's update have started leaking out in greater detail, if you believe the reports.

Giant Bomb isn't known for tossing around speculation and tends to only report details it can safely confirm. Austin Walker says "multiple sources have confirmed for us details of the project, which is internally referred to as the NEO." 

ps4gpu.jpg

The current PlayStation 4 APU
Image source: iFixIt.com

There are plenty of interesting details in the story, including Sony's determination not to split the user base across multiple consoles: developers will be required to ship both a mode for the "base" PS4 and one for NEO. But most interesting to us is the possible hardware upgrade.

The NEO will feature a higher clock speed than the original PS4, an improved GPU, and higher bandwidth on the memory. The documents we've received note that the HDD in the NEO is the same as that in the original PlayStation 4, but it's not clear if that means in terms of capacity or connection speed.

...

Games running in NEO mode will be able to use the hardware upgrades (and an additional 512 MiB in the memory budget) to offer increased and more stable frame rate and higher visual fidelity, at least when those games run at 1080p on HDTVs. The NEO will also support 4K image output, but games themselves are not required to be 4K native.

Giant Bomb even has details on the architectural changes.

                     Shipping PS4                  PS4 "NEO"
CPU                  8 Jaguar Cores @ 1.6 GHz      8 Jaguar Cores @ 2.1 GHz
GPU                  AMD GCN, 18 CUs @ 800 MHz     AMD GCN+, 36 CUs @ 911 MHz
Stream Processors    1152 SPs (~HD 7870 equiv.)    2304 SPs (~R9 390 equiv.)
Memory               8GB GDDR5 @ 176 GB/s          8GB GDDR5 @ 218 GB/s

(We actually did a full video teardown of the PS4 on launch day!)

If the Compute Unit count is right from the GB report, then the PS4 NEO system will have 2,304 stream processors running at 911 MHz, giving it performance nearing that of a consumer Radeon R9 390 graphics card. The R9 390 has 2,560 SPs running at around 1.0 GHz, so while the NEO would be slower, it would be a substantial upgrade over the current PS4 hardware and the Xbox One. Memory bandwidth on NEO is still much lower than a desktop add-in card (218 GB/s vs 384 GB/s).
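Here is the quick peak-compute math behind that comparison, using GCN's two operations (one fused multiply-add) per stream processor per clock and the figures quoted above; the R9 390 clock is the rounded ~1.0 GHz mentioned in the text.

```python
def tflops(stream_processors, clock_ghz):
    # GCN: each stream processor can retire one FMA (two ops) per clock.
    return stream_processors * 2 * clock_ghz / 1000.0

print(f"PS4 (current): {tflops(1152, 0.800):.2f} TFLOPS")  # ~1.84
print(f"PS4 NEO:       {tflops(2304, 0.911):.2f} TFLOPS")  # ~4.20
print(f"Radeon R9 390: {tflops(2560, 1.000):.2f} TFLOPS")  # ~5.12
```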

DSC02539.jpg

Could Sony's NEO platform rival the R9 390?

If the NEO hardware is based on the Grenada / Hawaii GPU design, there are some interesting questions to ask. With the push into 4K that we expect with the upgraded PlayStation, it would be painful if the GPU didn't natively support HDMI 2.0 (4K @ 60 Hz). With the modularity of current semi-custom APU designs, it is likely that AMD could swap out the display controller on NEO for one that can support HDMI 2.0, even though no shipping consumer graphics card in the 300-series does so.

It is also POSSIBLE that NEO is based on the upcoming AMD Polaris GPU architecture, which supports HDR and HDMI 2.0 natively. That would be a much more impressive feat for both Sony and AMD, as we have yet to see Polaris released in any consumer GPU. Couple that with the variables of 14/16nm FinFET process production and you have a complicated production pipe that would need significant monitoring. It would potentially lower cost on the build side and lower power consumption for the NEO device, but I would be surprised if Sony wanted to take a chance on the first generation of tech from AMD / Samsung / Global Foundries.

However, if you look at recent rumors swirling about the June announcement of the Radeon R9 480 using the Polaris architecture, it is said to have 2,304 stream processors, perfectly matching the NEO specs above.

polaris-5.jpg

New features of the AMD Polaris architecture due this summer

There is a lot Sony and game developers could do with roughly twice the GPU compute capability on a console like NEO. This could make the PlayStation VR a much more comparable platform to the Oculus Rift and HTC Vive, though the necessity to work with the original PS4 platform might hinder the upgrade path.

The other obvious use is to upgrade the image quality and/or rendering resolution of current games and games in development, or simply to improve the frame rate, an area where many current-generation console titles seem to have been slipping.

In the documents we’ve received, Sony offers suggestions for reaching 4K/UltraHD resolutions for NEO mode game builds, but they're also giving developers a degree of freedom with how to approach this. 4K TV owners should expect the NEO to upscale games to fit the format, but one place Sony is unwilling to bend is on frame rate. Throughout the documents, Sony repeatedly reminds developers that the frame rate of games in NEO Mode must meet or exceed the frame rate of the game on the original PS4 system.

There is still plenty to read in the Giant Bomb report, and I suggest you head over and do so. If you thought the summer was going to be interesting solely because of new GPU releases from AMD and NVIDIA, it appears that Sony and Microsoft have their own agenda as well.

Source: Giant Bomb