Subject: Processors | April 26, 2016 - 05:32 PM | Jeremy Hellstrom
Tagged: Wraith, Godavari, GLOBALFOUNDRIES, FM2+, amd, X4 880K
Remember the FM2+ refresh Josh told you about back in March? The APUs have started arriving on test benches, so we can now see independent benchmarks of what this ~$100 processor and the Wraith cooler are capable of. Neoseeker compares the new 880K against the older FX-4350 in a long series of benchmarks, which show the 880K to be the better part in most cases. There are some interesting exceptions, in which the FX-4350's slightly higher frequency lets it pull ahead by a small margin, so there are still cases where the less expensive chip would make sense. Read the full review to see which chip makes more sense for you.
"Today we take a look at the AMD Athlon X4 880K, a quad-core FM2+ processor with 4.0/4.2GHz base/Turbo clocks and unlocked multiplier priced at under $100 USD. It's designed for enthusiasts on a budget looking for the fastest multi-core Athlon processor yet without any integrated GPU to add to the cost. It even shares the 95W TDP of AMD's higher-end APUs for optimized power consumption that further leads to more overclocking headroom."
Here are some more Processor articles from around the web:
- The AMD Athlon X4 880K Review @ Hardware Canucks
- A10-7870K vs. Core i3-6100 CPU @ Hardware Secrets
- £150 Gaming CPU: AMD FX 8370 (w/ Wraith) vs Intel Core i5-6400 @ Kitguru
Subject: Cases and Cooling, Processors | April 22, 2016 - 11:36 AM | Sebastian Peak
Tagged: Wraith, quiet computing, heatsink, cpu cooler, cpu, AMD Wraith, amd, air cooling
AMD has expanded the CPU lineup featuring their high-performance Wraith air cooling solution, with the quiet cooler now being offered with two more FX-series processors.
Image credit: The Tech Report
"AMD has heard the feedback from reviewers and PC users everywhere: the near-silent, capable AMD Wraith Cooler is a resounding success. The question they keep asking is, 'When will the Wraith Cooler be available on more AMD Processors?'
We’re pleased to announce that the wait is over. The high-performance AMD FX 8350 and AMD FX 6350 processors now include a true premium thermal solution in the AMD Wraith Cooler, and each continues to deliver the most cores and the highest clock rates in its class."
The lineup featuring AMD's most powerful air solution now includes the following products:
- AMD FX 8370
- AMD FX 8350
- AMD FX 6350
- AMD A10-7890K
The Wraith cooler initially made its debut with the FX-8370 CPU, and was added to the new A10-7890K APU with the FM2+ refresh last month.
Subject: Processors | April 21, 2016 - 06:02 PM | Ryan Shrout
Tagged: amd, Zen, China, chinese, licensing
As part of its earnings release today, it was announced that AMD has partnered with a combination of public and private Chinese companies to license its high-end server architecture and products. The Chinese company is called THATIC, Tianjin Haiguang Advanced Technology Investment Co. Ltd., and it will license x86 designs and SoC technology providing all the tools needed to make a server platform including CPUs, interconnects and controllers.
This move is important and intriguing in several ways. First, for AMD, this could be a step to get the company and its products some traction and growth after falling well behind Intel's Xeon platforms in the server space. Increasing the market share of AMD technology, in nearly any capacity, is a move the company needs to have any chance of returning to profitability. For the Chinese government, this finally means access to the x86 architecture, though not in the form of its own license.
By licensing the x86 designs to THATIC, AMD could create an entire host of competitors for itself as well as for Intel, which won't help Intel's in-roads into the Chinese markets for enterprise tech. Intel does not license out x86 technology at all, deciding instead to keep it completely in-house in hopes of being the single provider of processors for devices from the cloud to the smartphone.
The first products built by THATIC will likely use the upcoming Zen architecture, due out in early 2017. AMD creates an interesting space for itself with this partnership - the company will sell its own Zen-based chips that could compete with the custom designs the Chinese organization builds. It's possible that a non-compete of sales based on region is part of the arrangement.
Out of the gate, AMD expects to make $293 million from the joint venture, and it will also earn royalties going forward. That's great news for a company that just posted another net loss for Q1 2016.
Subject: Processors | April 21, 2016 - 02:44 PM | Ryan Shrout
Tagged: restructure, Intel
Earlier this week Intel announced a major restructuring that will result in the loss of 12,000 jobs over the next several weeks, an amount equal to approximately 11% of the company's workforce. I've been sitting on the news for a while, trying to decide what I could add to the hundreds of reports on it, and honestly, I haven't come to any definitive conclusion. But here goes.
It's obviously worth noting the human side of this announcement: 12,000 people will be losing their jobs. I feel for them and wish them luck finding new employment quickly. It sucks to see anyone lose a job, perhaps more so at a company that is still so profitable and innovative.
The reasons for the restructuring are obviously complex, but the major concern is the shift in focus towards IoT (Internet of Things) and cloud infrastructure as the primary growth drivers.
The data center and Internet of Things (IoT) businesses are Intel’s primary growth engines, with memory and field programmable gate arrays (FPGAs) accelerating these opportunities – fueling a virtuous cycle of growth for the company. These growth businesses delivered $2.2 billion in revenue growth last year, and made up 40 percent of revenue and the majority of operating profit, which largely offset the decline in the PC market segment.
That last line is the one that might be the most concerning for enthusiasts and builders that read PC Perspective. The decline of the PC market has been a constant hum in the back of our minds for the better part of 10 years. Everyone from graphics card vendors to motherboard manufacturers, and any other company whose products depend on the consumer PC to be relevant, has been worried about what will happen as the PC continues in a southward spiral.
But it's important to point out that Intel has done this before, taking the stance that the consumer PC is bad business. Remember the netbook craze and the rise of the Atom product line? When computers were "fast enough" for people to open up a browser and get to their email? At that point Intel had clearly pushed the enthusiast and high performance computing market to the back burner. The same thing happened when management pushed Intel into the mobile space, competing directly with the likes of Qualcomm in a market for which it didn't quite have the product portfolio.
Then something happened: PC gaming proved to be a growth segment after all. Intel started to realize that high end components mattered, and it made attempts to recapture the market's mind share (it never lost the market share). That is where the unlocked processors in notebooks and "anniversary edition" CPUs were born, in the labs of an Intel where gamers and enthusiasts mattered. Hell, the entire creation of the Devil's Canyon platform was predicated on the idea that the enthusiast community mattered.
I thought we were moving in the right direction. But it appears we have another setback. Intel is going to downplay the value and importance of the market that literally defines and decides what every other consumer buys. Enthusiasts are the trend setters, the educators and the influencers. When families and friends and co-workers ask for suggestions for new phones, tablets and notebooks, they ask us.
Maybe Intel is just in another cycle, another loop about the fate of the PC and what it means. Did tablets and the iPad kill off the notebook? Did mobile games on your iPhone keep users from flocking to PC games? Have the PS4 or Xbox One destroyed the market for PC-based gaming and VR? No.
The potential worry now is that one of these times, as Intel feigns disinterest in the PC, it may stick.
Subject: Graphics Cards, Processors | April 19, 2016 - 11:21 AM | Ryan Shrout
Tagged: sony, ps4, Playstation, neo, giant bomb, APU, amd
Based on a new report coming from Giant Bomb, Sony is set to release a new console this year with upgraded processing power and a focus on 4K capabilities, code named NEO. We have been hearing for several weeks that both Microsoft and Sony were planning partial generation upgrades but it appears that details for Sony's update have started leaking out in greater detail, if you believe the reports.
Giant Bomb isn't known for tossing around speculation and tends to only report details it can safely confirm. Austin Walker says "multiple sources have confirmed for us details of the project, which is internally referred to as the NEO."
The current PlayStation 4 APU
Image source: iFixIt.com
There are plenty of interesting details in the story, including Sony's determination to not split the user base with multiple consoles by forcing developers to have a mode for the "base" PS4 and one for NEO. But most interesting to us is the possible hardware upgrade.
The NEO will feature a higher clock speed than the original PS4, an improved GPU, and higher bandwidth on the memory. The documents we've received note that the HDD in the NEO is the same as that in the original PlayStation 4, but it's not clear if that means in terms of capacity or connection speed.
Games running in NEO mode will be able to use the hardware upgrades (and an additional 512 MiB in the memory budget) to offer increased and more stable frame rate and higher visual fidelity, at least when those games run at 1080p on HDTVs. The NEO will also support 4K image output, but games themselves are not required to be 4K native.
Giant Bomb even has details on the architectural changes.
|                   | Shipping PS4               | PS4 "NEO"                  |
|-------------------|----------------------------|----------------------------|
| CPU               | 8 Jaguar cores @ 1.6 GHz   | 8 Jaguar cores @ 2.1 GHz   |
| GPU               | AMD GCN, 18 CUs @ 800 MHz  | AMD GCN+, 36 CUs @ 911 MHz |
| Stream Processors | 1152 SPs (~HD 7870 equiv.) | 2304 SPs (~R9 390 equiv.)  |
| Memory            | 8 GB GDDR5 @ 176 GB/s      | 8 GB GDDR5 @ 218 GB/s      |
(We actually did a full video teardown of the PS4 on launch day!)
If the Compute Unit count is right from the GB report, then the PS4 NEO system will have 2,304 stream processors running at 911 MHz, giving it performance nearing that of a consumer Radeon R9 390 graphics card. The R9 390 has 2,560 SPs running at around 1.0 GHz, so while the NEO would be slower, it would be a substantial upgrade over the current PS4 hardware and the Xbox One. Memory bandwidth on NEO is still much lower than a desktop add-in card (218 GB/s vs 384 GB/s).
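Those comparisons are easy to sanity check: each GCN stream processor retires one fused multiply-add (2 single-precision FLOPs) per clock, so theoretical throughput is just 2 × SPs × clock. A quick sketch using the figures above (the R9 390 entry uses its desktop reference specs, and all of this assumes the rumored NEO numbers are accurate):

```python
# Theoretical single-precision throughput for GCN parts: each stream
# processor retires one fused multiply-add (2 FLOPs) per clock.
def gcn_tflops(stream_processors, clock_ghz):
    return 2 * stream_processors * clock_ghz / 1000.0

parts = {
    "PS4 (shipping)":    (1152, 0.800),
    "PS4 NEO (rumored)": (2304, 0.911),
    "Radeon R9 390":     (2560, 1.000),
}

for name, (sps, ghz) in parts.items():
    print(f"{name}: {gcn_tflops(sps, ghz):.2f} TFLOPS")
# PS4 (shipping): 1.84 TFLOPS
# PS4 NEO (rumored): 4.20 TFLOPS
# Radeon R9 390: 5.12 TFLOPS
```

That puts the rumored NEO at roughly 2.3x the shipping PS4's compute, but still about 18% shy of a stock R9 390, matching the "nearing that of" language above.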
Could Sony's NEO platform rival the R9 390?
If the NEO hardware is based on the Grenada/Hawaii GPU design, there are some interesting questions to ask. With the push into 4K that we expect with the upgraded PlayStation, it would be painful if the GPU didn't natively support HDMI 2.0 (4K @ 60 Hz). With the modularity of current semi-custom APU designs, it is likely that AMD could swap out the display controller on NEO for one that supports HDMI 2.0, even though no shipping consumer graphics card in the 300-series does so.
It is also possible that NEO is based on the upcoming AMD Polaris GPU architecture, which supports HDR and HDMI 2.0 natively. That would be a much more impressive feat for both Sony and AMD, as we have yet to see Polaris released in any consumer GPU. Couple that with the variables of 14/16nm FinFET process production and you have a complicated production pipeline that would need significant monitoring. It would potentially lower cost on the build side and lower power consumption for the NEO device, but I would be surprised if Sony wanted to take a chance on the first generation of tech from AMD / Samsung / Global Foundries.
However, if you look at recent rumors swirling about the June announcement of the Radeon R9 480 using the Polaris architecture, it is said to have 2,304 stream processors, perfectly matching the NEO specs above.
New features of the AMD Polaris architecture due this summer
There is a lot Sony and game developers could do with roughly twice the GPU compute capability on a console like NEO. This could make the PlayStation VR a much more comparable platform to the Oculus Rift and HTC Vive, though the necessity to work with the original PS4 platform might hinder the upgrade path.
The other obvious use is to upgrade the image quality and/or rendering resolution of current games and games in development, or simply to improve the frame rate, an area where many current generation console titles have been slipping.
In the documents we’ve received, Sony offers suggestions for reaching 4K/UltraHD resolutions for NEO mode game builds, but they're also giving developers a degree of freedom with how to approach this. 4K TV owners should expect the NEO to upscale games to fit the format, but one place Sony is unwilling to bend is on frame rate. Throughout the documents, Sony repeatedly reminds developers that the frame rate of games in NEO Mode must meet or exceed the frame rate of the game on the original PS4 system.
There is still plenty to read in the Giant Bomb report, and I suggest you head over and do so. If you thought the summer was going to be interesting solely because of new GPU releases from AMD and NVIDIA, it appears that Sony and Microsoft have their own agenda as well.
Subject: Processors | April 5, 2016 - 06:30 AM | Josh Walrath
Tagged: mobile, hp, GCN, envy, ddr4, carrizo, Bristol Ridge, APU, amd, AM4
Today AMD is “pre-announcing” their latest 7th generation APU. Codenamed “Bristol Ridge”, this new SoC is based on the Excavator architecture featured in the previous Carrizo series of products. AMD provided few specifics about what is new and different in Bristol Ridge compared to Carrizo, but they did offer some nice hints.
They were able to provide a die shot of the new Bristol Ridge APU and there are some interesting differences between it and the previous Carrizo. Unfortunately, there really are no changes that we can see from this shot. Those new functional units that you are tempted to speculate about? For some reason AMD decided to widen out the shot of this die. Those extra units around the border? They are the adjacent dies on the wafer. I was bamboozled at first, but happily Marc Sauter pointed it out to me. No new functional units for you!
This is the Carrizo shot. It is functionally identical to what we see with Bristol Ridge.
AMD appears to be using the same 28 nm HKMG process from GLOBALFOUNDRIES. This is not going to give AMD much of a jump on its own, but industry information suggests that GLOBALFOUNDRIES and others have put an impressive amount of work into several generations of 28 nm products. TSMC is on its third iteration of the node, with improved power and clock capabilities. GLOBALFOUNDRIES has continued to improve its particular process, and Bristol Ridge is likely the last APU that will be built on it.
All of the competing chips are rated at 15 watts TDP. Intel has the compute advantage, but AMD is cleaning up when it comes to graphics.
The company has also continued to improve upon their power gating and clocking technologies to keep TDPs low, yet performance high. AMD recently released the Godavari APUs to the market which exhibit better clocking and power characteristics from the previous Kaveri. Little was done on the actual design, rather it was improved process tech as well as better clock control algorithms that achieved these advances. It appears as though AMD has continued this trend with Bristol Ridge.
We likely are not seeing per-clock increases, but rather higher and longer sustained clockspeeds providing the performance boost between Carrizo and Bristol Ridge. In these benchmarks AMD is using 15 watt TDP products. These are mobile chips, and any power improvements will show up as significant gains in overall performance. Bristol Ridge is still a native quad core part with what looks to be an 8 CU GCN graphics unit.
Again with all three products at a 15 watt TDP we can see that AMD is squeezing every bit of performance it can with the 28 nm process and their Excavator based design.
The basic core and GPU design look relatively unchanged, but obviously there were a lot of tweaks applied to give the better performance at comparable TDPs.
AMD is announcing this along with the first product that will feature the APU: the HP Envy X360. This convertible offers some very nice features and looks to be one of the better implementations of AMD's latest APUs. Carrizo had some wins, but taking marketshare back from Intel in the mobile space has been tortuous at best. AMD obviously hopes that Bristol Ridge in the sub-35 watt range will keep the company in the fight in this important market. Perhaps one of the more interesting features is the option for a PCIe SSD. Hopefully AMD will send out a few samples so we can see what a more “premium” convertible can do with AMD silicon.
The HP Envy X360 convertible in all of its glory.
Bristol Ridge will be coming to the AM4 socket infrastructure in what appears to be a Computex timeframe. These parts will of course feature higher TDPs than what we are seeing here with the 15 watt unit that was tested. It seems at that time AMD will announce the full lineup from top to bottom and start seeding the market with AM4 boards that will eventually house the “Zen” CPUs that will show up in late 2016.
Subject: Processors | March 22, 2016 - 05:08 PM | Ryan Shrout
Tagged: Intel, tick tock, tick-tock, process technology, kaby lake
This should come as little surprise to readers who have followed news about Kaby Lake, Intel's extension of the Skylake architecture: the company has officially broken with nearly a decade of tick-tock processor design. With tick-tock, Intel would alternate in subsequent years between a new processor microarchitecture (Sandy Bridge, Ivy Bridge, etc.) and a new process technology (45nm, 32nm, 22nm, etc.). According to this story over at Fool.com, Intel is officially ending that pattern of production.
From the company's latest 10-K filing:
"We expect to lengthen the amount of time we will utilize our 14 [nanometer] and our next-generation 10 [nanometer] process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions."
The graphic above showcasing the change from tick-tock to the new model likely isn't to scale, and we may see more than three steps in each iteration along the way. Intel still believes that it has, and will continue to have, the best process technology in the world and that its processors will benefit.
Continuing further, the company indicates that "this competitive advantage will be extended in the future as the costs to build leading-edge fabrication facilities increase, and as fewer semiconductor companies will be able to leverage platform design and manufacturing."
Kaby Lake details leaking out...
As Scott pointed out in our discussions about this news, it might mean consumers will see advantages in longer socket compatibility going forward though I would still see this as a net-negative for technology. As process technology improvements slow down, either due to complexity or lack of competition in the market, we will see less innovation in key areas of performance and power consumption.
Subject: Processors | March 15, 2016 - 12:52 PM | Sebastian Peak
Tagged: TSMC, SoC, servers, process technology, low power, FinFET, datacenter, cpu, arm, 7nm, 7 nm FinFET
ARM and TSMC have announced a collaboration on 7 nm FinFET process technology for future SoCs. Under a multi-year agreement between the companies, products produced on this 7 nm FinFET process are intended to expand ARM’s reach “beyond mobile and into next-generation networks and data centers”.
TSMC Headquarters (Image credit: AndroidHeadlines)
So when can we expect to see 7nm SoCs on the market? The report from The Inquirer offers this quote from TSMC:
“A TSMC spokesperson told the INQUIRER in a statement: ‘Our 7nm technology development progress is on schedule. TSMC's 7nm technology development leverages our 10nm development very effectively. At the same time, 7nm offers a substantial density improvement, performance improvement and power reduction from 10nm’.”
Full press release after the break.
Clockspeed Jump and More!
On March 1st AMD announced the availability of two new processors as well as more information on the A10 7860 APU.
The two new units are the A10-7890K and the Athlon X4 880K. These are both Kaveri based parts, but of course the Athlon has the GPU portion disabled. Product refreshes for the past several years have followed a far different schedule than in the days of yore. Remember when the Phenom II series and the competing Core 2 series would get clockspeed updates yearly, if not every half year, with a slightly faster top-end performer to garner top dollar from consumers?
Things have changed, for better or worse. We have so far seen two clockspeed bumps for the Kaveri/Godavari based APUs. Kaveri was first introduced over two years ago with the A10-7850K and its lower end derivatives. The 7850K has a clockspeed that ranges from 3.7 GHz to a max of 4 GHz with boost. The GPU portion is clocked at 720 MHz. This is a 95 watt TDP part and one of the introductory products on GLOBALFOUNDRIES' 28 nm HKMG process.
Today the new top end A10-7890K is clocked at 4.1 GHz base to a 4.3 GHz max. The GPU receives a significant boost in performance with a clockspeed of 866 MHz. The combined CPU and GPU clockspeed increases push the total compute performance of the part past 1 TFLOPS. It features the same dual module/quad core Godavari design as well as the 8 GCN units. The interesting part here is that the APU does not exceed the 95 watt TDP that it shares with the older and slower 7850K. It is also a boost in performance over last year's refresh, the A10-7870K, which is clocked 200 MHz slower on the CPU portion but retains the 866 MHz GPU speed. This APU is fully unlocked, so a user can easily overclock both the CPU and GPU cores.
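The 1 TFLOPS figure roughly checks out if you count both halves of the APU. The GPU side is 2 FLOPs per GCN stream processor per clock; for the CPU side we assume 8 single-precision FLOPs per cycle per Steamroller core through the modules' shared FMAC pipes, which is our assumption rather than an AMD-published figure:

```python
# Rough check of the ">1 TFLOPS" claim for the A10-7890K.
# GPU: 8 GCN CUs x 64 SPs = 512 stream processors, 2 FLOPs/clock, 866 MHz.
# CPU: 4 Steamroller cores; we ASSUME 8 SP FLOPs/cycle/core via the
# modules' shared 128-bit FMAC pipes (an assumption, not an AMD number).
gpu_gflops = 2 * 8 * 64 * 0.866    # GPU at 866 MHz
cpu_gflops = 4 * 8 * 4.3           # CPU at 4.3 GHz max boost
total_gflops = gpu_gflops + cpu_gflops
print(f"GPU {gpu_gflops:.1f} + CPU {cpu_gflops:.1f} = {total_gflops:.1f} GFLOPS")
# GPU 886.8 + CPU 137.6 = 1024.4 GFLOPS
```

Under those assumptions the part lands just over the 1 TFLOPS mark, which is consistent with AMD's marketing claim.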
The Athlon X4 880K is still based on the Godavari family rather than the Carrizo update that the X4 845 uses. This part is clocked from 4.0 to 4.2 GHz and retains the 95 watt TDP rating of the previous Athlon X4 CPUs. Previously the X4 860K was the highest clocked unit at 3.7 to 4.0 GHz, so the 880K's 300 MHz gain in base clock is pretty significant, as is stretching the boost ceiling to 4.2 GHz. The Godavari modules retain their full amount of L2 cache, so the 880K has 4 MB available to it. These parts are very popular with budget enthusiasts and gaming builds, as they are extremely inexpensive, perform at an acceptable level, and come with free overclocking thrown in.
Subject: Graphics Cards, Processors | February 29, 2016 - 06:48 PM | Scott Michaud
Tagged: tesla motors, tesla, SoC, Peter Bannon, Jim Keller
When we found out that Jim Keller has joined Tesla, we were a bit confused. He is highly skilled in processor design, and he moved to a company that does not design processors. Kind of weird, right? There are two possibilities that leap to mind: either he wanted to try something new in life, and Elon Musk hired him for his general management skills, or Tesla wants to get more involved in the production of their SoCs, possibly even designing their own.
Now Peter Bannon, who was a colleague of Jim Keller at Apple, has been hired by Tesla Motors. Chances are the two of them were not independently struck by an urge for an abrupt career change that led them to the same company; that seems highly unlikely, to say the least. So it appears that Tesla Motors wants experienced chip designers in house. What for? We don't know. This is a lot of talent to hire just to look over the shoulders of NVIDIA and other SoC partners to make sure Tesla has an upper hand in negotiations. Jim Keller is at Tesla as their “Vice-President of Autopilot Hardware Engineering.” We don't know what Peter Bannon's title will be.
And then, if Tesla Motors does get into creating its own hardware, we wonder what it will do with it. The company has a history of open development and of releasing patents into the public domain. That said, SoC design is a highly patent-encumbered field, depending on what exactly they're doing, which we have no idea about.