Subject: Graphics Cards, Processors | October 8, 2014 - 05:54 PM | Scott Michaud
In an abrupt announcement, Rory Read has stepped down from his positions at AMD, leaving them to Dr. Lisa Su. Until today, Mr. Read served as president and Chief Executive Officer (CEO) of the x86 chip designer and Dr. Su as Chief Operating Officer (COO). Today, however, Dr. Su has become president and CEO, and Mr. Read will stay on for a couple of months as an adviser during the transition.
Josh Walrath, editor here at PC Perspective, tweeted that he was "Curious as to why Rory didn't stay on longer? He did some good things there [at AMD], but [it's] very much an unfinished job." I would have to agree. It feels like an odd time, hence the earlier use of the word "abrupt", to have a change in management. AMD restructured just four months ago, which was the occasion for Dr. Su to be promoted to COO. In fact, at least as far as I know, no one is slated to fill her former position as COO.
These points suggest that her takeover of the company had been planned for at least several months.
I have been told that timing is everything. I guess this rings true, but only if you truly know the circumstances around any action. Today’s announcement by AMD was odd in its timing, but it was not exactly unexpected. As Scott mentioned above, I was confused by this happening now. I had expected Rory to be in charge for at least another year, if not two. Rory had hinted that he was not planning on being at AMD forever, but was aiming to create a solid foundation for the company, shore up its finances, and instill a new culture. While the culture is turning due to pressure from the top as well as some pretty significant personnel cuts, AMD is not quite as nimble yet as they want to be.
Rory’s term has seen the return of seasoned veterans like Jim Keller and Raja Koduri. These guys are helping to turn the ship around after some fairly mediocre architectures on the CPU and GPU sides. While Raja had little to do with GCN, we are seeing some aggressive moves there in terms of features that are making their products much more competitive with NVIDIA. Keller has made some very significant changes to the overall roadmap on the CPU side, and I think we will see some very solid improvements in design and execution over the next two years.
Lisa Su was brought in by Rory shortly after he was named CEO. Lisa has a pretty significant background in semiconductors and has made a name for herself in her work with IBM and Freescale. Lisa attained all three of her degrees from MIT. This is not unheard of, but it is uncommon to stay in one academic setting when gaining advanced degrees. Having said that, MIT certainly is the top engineering and science school in the nation (if not the world). I’m sure people from RPI, GT, and Caltech might argue that, but it certainly is an impressive school to have on your resume.
Dr. Su has seemingly been groomed for this transition for quite some time now. She went from a VP to COO rather quickly, and is now shouldering the burden of being CEO. Lisa has been on quite a few of the quarterly conference calls, taking questions. She also serves on the Board of Directors at Analog Devices.
I think that Lisa will continue along the same path that Rory set out, but she will likely bring a few new wrinkles due to her experience with semiconductor design and R&D at IBM. We can only hope that this won’t become a Dirk Meyer 2.0 type situation where a successful engineer and CPU architect could not change the course of the company after the disastrous reign of Hector Ruiz. I do not think that this will be the case, as Rory did not leave the mess that Hector did. I also believe that Lisa has more business sense and acumen than Dirk did.
This change, at this time, has injected some instability into the market regarding AMD. Some weeks ago AMD was near its high for the year at around $4.66 per share. Right now it is hovering at $3.28. I had been questioning why the stock price was going down, and it seems that my question was answered: one way or another, rumors of Rory taking off reached investors’ ears, and we saw a rapid decline in share price. We have yet to see what Q3 earnings look like now that Rory has rather abruptly left his position, but people are pessimistic as to what will be announced after such a sudden departure.
Subject: Processors | September 30, 2014 - 06:02 PM | Josh Walrath
Tagged: arm, cortex, Cortex-A, cortex-m, 90 nm, 40 nm, 28 nm, 32 bit
Last week ARM announced the latest member of their Cortex-M series of embedded parts. The new Cortex-M7 design is a 32-bit processor designed to deliver good performance while keeping power consumption low. The M7 is a fully superscalar design with a six-stage pipeline. This product should not be confused with the Cortex-A series of products, as the M series is aimed directly at embedded markets.
This product is not meant for multimedia-rich applications, so it will not find its way into a modern smartphone. Instead, it is aimed at products like the latest generation of smartwatches. Industrial control applications, automotive computing, low-power and low-heat applications, and countless IoT (Internet of Things) products can utilize this architecture.
The designs are being offered on a variety of process nodes from 90 nm down to 28 nm. These choices are made by the licensee depending on the specifics of their application. In the most energy efficient state, ARM claims that these products can see multiple years of running non-stop on a small lithium battery.
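ARM's multi-year battery claim is easy to sanity-check with back-of-the-envelope arithmetic. The figures below (a 220 mAh coin cell and a ~10 µA average draw for a mostly-sleeping MCU) are my own illustrative assumptions, not ARM's numbers:

```python
# Rough battery-life estimate for a deeply duty-cycled Cortex-M class MCU.
# All input figures are illustrative assumptions, not vendor specifications.

def battery_life_years(capacity_mah: float, avg_current_ma: float) -> float:
    """Hours of runtime (capacity / average draw) converted to years,
    ignoring battery self-discharge."""
    hours = capacity_mah / avg_current_ma
    return hours / (24 * 365)

# A CR2032 coin cell is nominally ~220 mAh; assume the MCU sleeps most of
# the time and averages 10 microamps (0.01 mA) across sleep + brief wakeups.
years = battery_life_years(220, 0.01)
print(f"{years:.1f} years")  # roughly 2.5 years
```

Even with generous rounding, a low single-digit-microamp sleep current is what makes "multiple years on a small lithium battery" plausible.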
This obviously is not the most interesting ARM-based product that we have seen lately, but it addresses a very important market. What is perhaps most interesting about this release is not only the pretty dramatic increase in per-clock performance over the previous generation of the part, but also how robust the support is in terms of design tools, software ecosystem, and 3rd party support.
Cortex-M7 can also be utilized in areas where a more complex DSP has traditionally been used. In comparison to some common DSPs, the Cortex-M7 is competitive in terms of specialized workload performance. It also has the advantage of being much more flexible than a DSP in a general computing environment.
ARM just keeps on moving along with products that address many different computing markets. ARM’s high end Cortex-A series of parts powers the majority of smart phones and tablets while the Cortex-M series have sold in the billions addressing the embedded market. The Cortex-M7 is the latest member of that family and will find more than its fair share of products to be integrated into.
Subject: Graphics Cards, Processors | September 30, 2014 - 03:33 AM | Scott Michaud
Tagged: iris, Intel, core m, broadwell-y, broadwell-u, Broadwell
Intel's upcoming 14nm product line, Broadwell, is expected to have six categories of increasing performance. Broadwell-Y, later branded Core M, is part of the soldered BGA family at expected TDPs of 3.5 to 4.5W. Above this is Broadwell-U, which are also BGA packages, and thus require soldering by the system builder. VR-Zone China has a list of seemingly every 15W SKU in that category. 28W TDP "U" products are expected to be available in the following quarter, but are not listed.
Image Credit: VR-Zone
As for those 15W parts though, there are seventeen (17!) of them, ranging from Celeron to Core i7. While each product is dual-core, the ones that are Core i3 and up have Hyper-Threading, increasing the parallelism to four tasks simultaneously. In terms of cache, Celerons and Pentiums will have 2MB, Core i7s will have 4MB, and everything in between will have 3MB. Otherwise, the products vary on the clock frequency they were binned (bin-sorted) at, and the integrated graphics that they contain.
Image Credit: VR-Zone
These iGPUs range from "Intel HD Graphics" on the Celerons and Pentiums up to "Intel Iris Graphics 6100" on one Core i7, two Core i5s, and one Core i3. The rest pretty much alternate between Intel HD Graphics 5500 and Intel HD Graphics 6000. The maximum frequency of a given iGPU model can vary between SKUs, but only by about 100 MHz at most. The exact spread is below.
- Intel HD Graphics: 300 MHz base clock, 800 MHz at load.
- Intel HD Graphics 5500: 300 MHz base clock, 850-950 MHz at load (depending on SKU).
- Intel HD Graphics 6000: 300 MHz base clock, 1000 MHz at load.
- Intel Iris Graphics 6100: 300 MHz base clock, 1000-1100 MHz at load (depending on SKU).
Unfortunately, without the number of shader units to go along with the core clock, we cannot derive a FLOP value yet. This is a very important metric for increasing resolution and shader complexity, and it would provide a relatively fair way to compare the new parts against previous offerings at higher resolutions and quality settings, especially in DirectX 12 I would assume.
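Once the execution-unit count does leak, the theoretical figure falls out of simple arithmetic. As a sketch, I am assuming the same per-EU accounting as Haswell's Gen7.5 graphics (16 single-precision FLOPs per EU per clock: two SIMD-4 FPUs, with an FMA counting as two ops); the EU count below is a placeholder, not a confirmed Broadwell spec:

```python
# Theoretical single-precision throughput for an Intel iGPU.
# flops_per_eu_clock = 2 FPUs x SIMD-4 x 2 ops (FMA) = 16; this value is
# an assumption carried over from Haswell-era EUs, not a confirmed figure.

def igpu_gflops(eu_count: int, clock_mhz: int, flops_per_eu_clock: int = 16) -> float:
    """Peak GFLOPS = EUs x FLOPs-per-EU-per-clock x clock (MHz) / 1000."""
    return eu_count * flops_per_eu_clock * clock_mhz / 1000.0

# Hypothetical example: a 24-EU part at its 1000 MHz load clock.
print(igpu_gflops(24, 1000))  # 384.0 GFLOPS
```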
Image Credit: VR-Zone
Probably the most interesting part to me is that "Intel HD Graphics" without a number meant GT1 with Haswell. Starting with Broadwell, it has apparently been upgraded to GT2. As we can see from even the 4.5W Core M processors, Intel is taking graphics seriously. It is unclear whether their intention is to respect gaming's influence on device purchases, or if they believe that generalized GPU compute will be "a thing" very soon.
Subject: Graphics Cards, Processors, Mobile | September 29, 2014 - 01:53 AM | Scott Michaud
Tagged: apple, a8, a7, Imagination Technologies, PowerVR
First, Chipworks released a die shot of the new Apple A8 SoC (stored at archive.org). It is based on TSMC's 20nm fabrication process, for which Apple allegedly bought the entire capacity. From there, a bit of a debate arose regarding what each group of transistors represented. All sources claim that it is based around a dual-core CPU, but the GPU is a bit polarizing.
Image Credit: Chipworks via Ars Technica
Most sources, including Chipworks, Ars Technica, Anandtech, and so forth believe that it is a quad-core graphics processor from Imagination Technologies. Specifically, they expect that it is the GX6450 from the PowerVR Series 6XT. This is a narrow upgrade over the G6430 found in the Apple A7 processor, which is in line with the initial benchmarks that we saw (and not in line with the 50% GPU performance increase that Apple claims). For programmability, the GX6450 is equivalent to a DirectX 10-level feature set, unless it was extended by Apple, which I doubt.
Image Source: DailyTech
DailyTech has their own theory, suggesting that it is a GX6650 that is horizontally-aligned. From my observation, their "Cluster 2" and "Cluster 5" do not look identical at all to the other four, so I doubt their claims. I expect that they heard Apple's 50% claims, expected six GPU cores as the rumors originally indicated, and saw cores that were not there.
Which brings us back to the question of, "So what is the 50% increase in performance that Apple claims?" Unless they had a significant increase in clock rate, I still wonder if Apple is claiming that their increase in graphics performance will come from the Metal API even though it is not exclusive to new hardware.
But from everything we saw so far, it is just a handful of percent better.
Subject: General Tech, Processors, Mobile | September 27, 2014 - 02:38 PM | Scott Michaud
Tagged: Intel, spreadtrum, rda, Rockchip, SoC
A few months ago, Intel partnered with Rockchip to develop low-cost SoCs for Android. The companies would work together on a design that could be fabricated at TSMC. This time Intel is partnering with Tsinghua Unigroup Ltd. and, unlike Rockchip, also investing in them. The deal will be up to $1.5 billion USD in exchange for a 20% share (approximately) of a division of Tsinghua.
Image Credit: Wikipedia
Intel is hoping to use this partnership to develop mobile SoCs for smart (and "feature") phones, tablets, and other devices, and to gain a significant presence in the Chinese mobile market. Tsinghua acquired Spreadtrum Communications and RDA Microelectronics within the last two years. The "holding group" that owns these divisions is apparently the specific part of Tsinghua in which Intel is investing.
Spreadtrum will produce SoCs based on Intel's "Intel Architecture". This sounds like they are referring to the 32-bit IA-32, which means that Spreadtrum would be developing 32-bit SoCs, but it is possible that they could be talking about Intel 64. These products are expected for 2H'15.
Subject: Processors | September 25, 2014 - 02:56 PM | Jeremy Hellstrom
Tagged: linux, X99, core i7-5960x, Haswell-E
After the smoke cleared from their previous attempt at testing the i7-5960X CPU, Phoronix picked up a Gigabyte X99-UD4-CF and have now had a chance to test Haswell-E performance on Linux. The new processor is compared to over a dozen others on machines running Ubuntu and really showed up the competition in benchmarks that took advantage of its 8 cores. Single-threaded applications that depend on a higher clock speed proved to be a weakness, as the 4790K's higher frequency allowed it to outperform the new Haswell-E processor. Check out the very impressive results of Phoronix's testing right here.
"With the X99 burned-up motherboard problem of last week appearing to be behind us with no further issues when using a completely different X99 motherboard, here's the first extensive look at the Core i7 5960X Haswell-E processor running on Ubuntu Linux."
Here are some more Processor articles from around the web:
- Intel's Xeon E5-2687W v3 @ The Tech Report
- Intel Core i7-5960X Extreme @ Benchmark Reviews
- Intel Core i7-4790K and Core i5-4690K @ X-bit Labs
- Return of the Athlon: AMD Brings Kabini to the desktop @ Bjorn3d
- AMD FX8370E @ Kitguru
- AMD FX-8370E @ eTeknix
Subject: General Tech, Motherboards, Processors | September 20, 2014 - 06:51 PM | Scott Michaud
Tagged: xeon, Haswell-EP, ddr4, ddr3, Intel
Well this is interesting and, while not new, is news to me.
The upper-tier Haswell processors ushered DDR4 into enthusiast desktops and servers, but DIMMs are quite expensive and incompatible with the DDR3 sticks that your organization might have been stocking up on. Despite the memory controller being located on the processor, ASRock has a few motherboards which claim DDR3 support. ASRock, responding to Anandtech's inquiry, confirmed that this is not an error and that Intel will launch three SKUs, one eight-core, one ten-core, and one twelve-core, with a DDR3-supporting memory controller.
The three models are:
| | E5-2629 v3 | E5-2649 v3 | E5-2669 v3 |
| --- | --- | --- | --- |
| Cores (Threads) | 8 (16) | 10 (20) | 12 (24) |
| Clock Rate | 2.4 GHz | 2.3 GHz | 2.3 GHz |
The processors themselves might not be cheap or easily attainable, though. There are rumors that Intel will require customers to purchase at least a minimum quantity. It might not be worth buying these processors unless you have a significant server farm (or a similar situation).
Subject: General Tech, Processors, Mobile | September 12, 2014 - 01:30 PM | Scott Michaud
Tagged: apple, apple a8, SoC, iphone 6, iphone 6 plus
So one of the first benchmarks for Apple's A8 SoC has been published to Rightware, and it is not very different from its predecessor. The Apple A7 GPU of last year's iPhone 5S received a score of 20,253.80 on the Basemark X synthetic benchmark. The updated Apple A8 GPU, found on the iPhone 6, saw a 4.7% increase, to 21,204.26, on the same test.
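For reference, that 4.7% figure is just the relative change between the two published Basemark X scores:

```python
# Relative improvement between the two published Basemark X scores.
a7_score = 20253.80  # iPhone 5S (Apple A7)
a8_score = 21204.26  # iPhone 6 (Apple A8)

pct_gain = (a8_score - a7_score) / a7_score * 100
print(f"{pct_gain:.1f}%")  # 4.7%
```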
Again, this is a synthetic benchmark and not necessarily representative of real-world performance. It wouldn't surprise me, though, if the GPU is identical and the increase corresponds mostly to the increase in CPU performance. That said, it still does not explain the lack of improvement that we see despite Apple's switch to TSMC's 20nm process. Perhaps it matters more for power consumption and non-gaming performance? That does not align well with their 20% faster CPU and 50% faster GPU claims...
Speaking of gaming performance, iOS 8 introduces the Metal API, which is Apple's response to Mantle, DirectX 12, and OpenGL Next Initiative. Maybe that boost will give Apple a pass for a generation? Perhaps we will see the two GPUs (A7 and A8) start to diverge in the Metal API? We shall see when more benchmarks and reviews get published.
Subject: General Tech, Processors, Mobile | September 11, 2014 - 06:27 PM | Scott Michaud
Tagged: qualcomm, snapdragon 210, snapdragon, LTE, cheap tablet
The Snapdragon 210 was recently announced by Qualcomm to be an SoC for cheap, sub-$100 tablets and mobile phones. With it, the company aims to bring LTE connectivity to that market segment, including Dual SIM support. It will be manufactured on the 28nm process, with up to four ARM CPU cores and a Qualcomm Adreno 304 GPU.
According to Qualcomm, the SoC can decode 1080p video. It will also be able to manage cameras with up to 8 megapixels of resolution, including HDR, autofocus, auto white balance, and auto exposure. Let's be honest, you will not really get much more than that for a sub-$100 device.
The Snapdragon 210 has been given Quick Charge 2.0, normally reserved for the 400-line and up, to refill the battery quickly when connected to a Quick Charge 2.0-supporting charger (ex: the Motorola Turbo Charger). Quick Charge 1.0 worked by optimizing, through a specification, how energy was delivered to the battery. Quick Charge 2.0 does the same, just with up to 60 watts of power (!!). For reference, the USB standard defines 2.5W, which is 5V at 0.5A, although the specification is regularly extended to 5 or 10 watts.
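The power figures above are just volts times amps. A quick sketch, where the 20 V / 3 A profile for the 60 W ceiling is my assumption about how Quick Charge 2.0 reaches that number rather than a quote from the spec:

```python
# Electrical power delivered over a charging cable: P = V x I.
def watts(volts: float, amps: float) -> float:
    return volts * amps

print(watts(5.0, 0.5))   # 2.5  -> baseline USB 2.0 port (5 V at 0.5 A)
print(watts(5.0, 2.0))   # 10.0 -> common "extended" 5 V charging
print(watts(20.0, 3.0))  # 60.0 -> Quick Charge 2.0 ceiling (assumed 20 V / 3 A profile)
```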
Devices featuring the Snapdragon 210 are expected for the first half of 2015.
Core M 5Y70 Early Testing
During a press session today with Intel, I was able to get some early performance results on Broadwell-Y in the form of the upcoming Core M 5Y70 processor.
Testing was done on a reference design platform code-named Llama Mountain; at the heart of the system is the Broadwell-Y based dual-core CPU, the Core M 5Y70, which is due out later this year. Power consumption of this system is low enough that Intel has built it with a fanless design. As we posted last week, this processor has a base frequency of just 1.10 GHz but can boost as high as 2.6 GHz for extra performance when it's needed.
Before we dive into the actual results, you should keep in mind a couple of things. First, we weren't able to analyze the systems to check driver revisions, etc., so we are going on Intel's word that these are set up as you would expect to see them in the real world. Next, because of the disjointed nature of the tests we were able to run, the comparisons in our graphs aren't as comprehensive as I would like. Still, the results for the Core M 5Y70 are here should you want to compare them to any other scores you like.
First, let's take a look at old faithful: CineBench 11.5.
UPDATE: A previous version of this graph showed the TDP for the Intel Core M 5Y70 as 15 watts, not the 4.5 watts listed here now. The reasons are complicated. Even though the Intel Ark website lists the TDP of the Core M 5Y70 as 4.5 watts, Intel has publicly stated the processor will make very short "spikes" at 15 watts when in its highest Turbo Boost modes. It comes down to a discussion of semantics, really. The cooling capability of the tablet is only targeted at 4.5-6.0 watts, and those very short 15 watt spikes can be dissipated without the need for extra heatsink surface... because they are so short. SDP anyone? END UPDATE
With a score of 2.77, the Core M 5Y70 processor puts up an impressive fight against CPUs with much higher TDP settings. For example, Intel's own Pentium G3258 gets a score of 2.71 in CB11, and did so with a considerably higher thermal envelope. The Core i3-4330 scores 38% higher than the Core M 5Y70 but it requires a TDP 3.6-times larger to do so. Both of AMD's APUs in the 45 watt envelope fail to keep up with Core M.
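A crude performance-per-watt comparison puts that gap in perspective. The TDP figures here are my own assumptions taken from Intel's public spec pages (4.5 W sustained for the Core M 5Y70, 53 W for the Pentium G3258), so treat the ratio as a ballpark rather than a measured result:

```python
# Cinebench 11.5 score per watt of rated TDP.
# Scores are from the article; TDP values are assumed from Intel's spec pages.
core_m_5y70 = {"score": 2.77, "tdp_w": 4.5}
pentium_g3258 = {"score": 2.71, "tdp_w": 53.0}

def score_per_watt(cpu: dict) -> float:
    return cpu["score"] / cpu["tdp_w"]

ratio = score_per_watt(core_m_5y70) / score_per_watt(pentium_g3258)
print(f"{ratio:.1f}x")  # Core M delivers roughly 12x the score per rated TDP watt
```

Rated TDP is a coarse proxy for actual power draw, of course, especially given the 15 W turbo spikes discussed above, but the order-of-magnitude efficiency edge is hard to miss.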