Fighting for Relevance
AMD is still kicking. While the results of this past year have been forgettable, they have cleared some significant hurdles and look to be improving their position: cutting costs while extracting as much revenue as possible. There were plenty of ups and downs this past quarter, but compared to the rest of 2015 there were some solid steps forward here.
The company reported revenues of $958 million, down from $1.06 billion last quarter. The company also recorded a $103 million loss, but that is down significantly from the $197 million loss the quarter before. Q3 did include a $65 million write-down of unsold inventory. So while revenue fell, losses narrowed as well. The company is still bleeding, but it has enough cash on hand to survive for the next several quarters. On a non-GAAP basis, AMD reports a $79 million loss for this past quarter.
For the entire year AMD recorded $3.99 billion in revenue with a net loss of $660 million. This is down from FY 2014 revenues of $5.51 billion and a net loss of $403 million. AMD certainly is trending downwards year over year, but they are hoping to reverse that come 2H 2016.
Graphics continues to be solid for AMD as they increased their sales from last quarter, but are down year on year. Holiday sales were brisk, but with the high-end Fury series being the only truly new card this season, its impact was not as great as the company would have seen from a new mid-range series like the recently introduced R9 380X. The second half of 2016 will see the introduction of the Polaris based GPUs for both mobile and desktop applications. Until then, AMD will continue to provide the current 28 nm lineup of GPUs to the market. At this point we are under the assumption that AMD and NVIDIA are looking at the same timeframe for introducing their next generation parts due to process technology advances. AMD already has working samples on the Samsung/GLOBALFOUNDRIES 14 nm LPP (Low Power Plus) process, which they showed off at CES 2016.
Introduction and CPU Performance
We had a chance this week to go hands-on with the Snapdragon 820, the latest flagship SoC from Qualcomm, in a hardware session featuring prototype handsets powered by this new silicon. How did it perform? Read on to find out!
As you would expect from an all-new flagship part, the Snapdragon 820 offers improvements in virtually every category compared to their previous products. And with the 820 Qualcomm is emphasizing not only performance, but lower power consumption with claims of anywhere from 20% to 10x better efficiency across the components that make up this new SoC. And part of these power savings will undoubtedly come as the result of Qualcomm’s decision to move to a quad-core design with the 820, rather than the 8-core design of the 810.
So what exactly does comprise a high-end SoC like the Snapdragon 820? Ryan covered the launch in detail back in November (and we introduced aspects of the new SoC in a series of articles leading up to the launch). In brief, the Snapdragon 820 includes a custom quad-core CPU (Kryo), the Adreno 530 GPU, a new DSP (Hexagon 680), new ISP (Spectra), and a new LTE modem (X12). The previous flagship Snapdragon 810 used stock ARM cores (Cortex-A57, Cortex-A53) in a big.LITTLE configuration, but for various reasons Qualcomm has chosen not to introduce another 8-core SoC with this new product.
The four Kryo CPU cores found in the Snapdragon 820 can operate at speeds of up to 2.2 GHz, and since that is half the core count of the octo-core Snapdragon 810, the IPC (instructions per clock) of this new part will help determine how competitive the SD820's performance will be; but there’s a lot more to the story. This SoC design places equal emphasis on all of its components, and the strategy with the SD820 seems to be leveraging the advanced signal processing capability of the Hexagon 680 to offload work, allowing the CPU to run with greater efficiency and at lower power.
Subject: Processors, Mobile | November 12, 2015 - 09:30 AM | Sebastian Peak
Tagged: SoC, smartphone, Samsung Galaxy, Samsung, mobile, Exynos 8890, Exynos 8 Octa, Exynos 7420, Application Processor
Coming just a day after Qualcomm officially launched their Snapdragon 820 SoC, Samsung is today unveiling their latest flagship mobile part, the Exynos 8 Octa 8890.
The Exynos 8 Octa 8890 is built on Samsung’s 14 nm FinFET process like the previous Exynos 7 Octa 7420, and again uses a big.LITTLE configuration, though the big processing cores are a custom design this time around. The Exynos 7420 comprised four ARM Cortex A57 cores and four small Cortex A53 cores, and while the small cores in the 8890 are again ARM Cortex A53, the big cores feature Samsung’s “first custom designed CPU based on 64-bit ARMv8 architecture”.
“With Samsung’s own SCI (Samsung Coherent Interconnect) technology, which provides cache-coherency between big and small cores, the Exynos 8 Octa fully utilizes benefits of big.LITTLE structure for efficient usage of the eight cores. Additionally, Exynos 8 Octa is built on highly praised 14nm FinFET process. These all efforts for Exynos 8 Octa provide 30% more superb performance and 10% more power efficiency.”
Another big advancement for the Exynos 8 Octa is the integrated modem, which provides Category 12/13 LTE with download speeds (with carrier aggregation) of up to 600 Mbps, and uploads up to 150 Mbps. This might sound familiar, as it mirrors the LTE Release 12 specs of the new modem in the Snapdragon 820.
Graphics processing is handled by the Mali-T880 GPU, moving up from the Mali-T760 found in the Exynos 7 Octa. The T880 is “the highest performance and the most energy-efficient mobile GPU in the Mali family”, with up to 1.8x the performance of the T760 while being 40% more energy-efficient.
Samsung will be taking this new SoC into mass production later this year, and the chip is expected to be featured in the company’s upcoming flagship Galaxy phone.
Full PR after the break.
Subject: Mobile | September 30, 2015 - 02:33 PM | Sebastian Peak
Tagged: X12 Modem, SoC, snapdragon 820, qualcomm, phones, mu-mimo, mobile, LTE, cell phones
The upcoming Snapdragon 820 is shaping up to be a formidable SoC after the disappointing response to the previous flagship, the Snapdragon 810, which was in far fewer devices than expected for reasons still shrouded in mystery and speculation. One of the biggest aspects of the upcoming 820 is Qualcomm’s new X12 modem, which will provide the most advanced LTE connectivity seen to date when the SoC launches. The X12 features CAT 12 LTE downlink speeds for up to 600 Mbps, and CAT 13 on the uplink for up to 150 Mbps.
LTE connectivity isn’t the only new thing here, as we see from this slide there is also tri-band Wi-Fi supporting 2x2 MU-MIMO.
“This is the first publicly announced processor for use in mobile devices to support LTE Category 12 in the downlink and Category 13 in the uplink, providing up to 33 percent and 200 percent improvement over its predecessor’s download and upload speeds, respectively.”
The specifications for this new modem are densely packed:
- Cat 12 (up to 600 Mbps) in the downlink
- Cat 13 (up to 150 Mbps) in the uplink
- Up to 4x4 MIMO on one downlink LTE carrier
- 2x2 MU-MIMO (802.11ac)
- Multi-gigabit 802.11ad
- LTE-U and LTE+Wi-Fi Link Aggregation (LWA)
- Next Gen HD Voice and Video calling over LTE and Wi-Fi
- Call Continuity across Wi-Fi, LTE, 3G, and 2G
- RF front end innovations
- Advanced Closed Loop Antenna Tuner
- Qualcomm RF360™ front end solution with CA
- Wi-Fi/LTE antenna sharing
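The percentage claims in Qualcomm's quote above can be sanity-checked with simple arithmetic, assuming the baseline is the predecessor X10 modem's Cat 9 peaks of 450 Mbps down and 50 Mbps up (our assumption; Qualcomm's PR does not name the baseline figures):

```python
# Quick check of the "33 percent and 200 percent improvement" claim,
# taking the X10's assumed Cat 9 rates (450 Mbps down, 50 Mbps up) as baseline.

def pct_improvement(new_mbps: float, old_mbps: float) -> float:
    """Percentage gain of the new peak rate over the old one."""
    return (new_mbps / old_mbps - 1) * 100

print(f"Downlink: {pct_improvement(600, 450):.0f}%")  # ~33%
print(f"Uplink:   {pct_improvement(150, 50):.0f}%")   # 200%
```

Both numbers line up with the quoted claim, which supports the Cat 9 baseline assumption.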
Rumored phones that could end up running the Snapdragon 820 with this X12 modem include the Samsung Galaxy S7 and around 30 other devices, though final word is of course pending on shipping hardware.
Subject: Mobile | September 24, 2015 - 07:55 PM | Sebastian Peak
Tagged: usb, snapdragon 820, Quick Charge 3.0, Quick Charge, qualcomm, mobile, battery charger
Qualcomm has announced Quick Charge 3.0, the latest iteration of their fast battery charging technology. The new version is said to not only further improve battery charging times, but also better maintain battery health and reduce temperatures.
One of the biggest issues with fast battery charging is the premature failure of the battery cells; something my first Nexus 6 (which was replaced due to a bad battery) can attest to. The new 3.0 standard adds "Battery Saver Technology" (BST) which constantly varies the current delivery rate based on what the battery can safely accept, thus preventing damage to the cells. This new version of Quick Charge also claims to offer lower temps while charging, which could be partly the result of this variable current delivery.
The other change comes from "Intelligent Negotiation for Optimum Voltage" (INOV), which can vary the voltage delivery anywhere from 3.6V to 20V in 200mV increments depending on the device's negotiated connection. INOV will allow Quick Charge 3.0 to charge a full 2x faster than the original Quick Charge 1.0 (and 1.5x faster than QC 2.0), and 4x faster than standard USB charging, as it provides up to 60W to compatible devices.
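To illustrate what "3.6V to 20V in 200mV increments" means in practice, here is a minimal sketch of how a charger might snap a requested voltage onto that grid. This is purely illustrative; Qualcomm has not published the actual negotiation algorithm:

```python
# Illustrative sketch (not Qualcomm's implementation): clamp a requested
# voltage to the INOV range and snap it to the nearest 200 mV step.

STEP_V = 0.2           # 200 mV increment
V_MIN, V_MAX = 3.6, 20.0

def nearest_inov_step(target_v: float) -> float:
    """Return the closest supported voltage for a requested target."""
    clamped = min(max(target_v, V_MIN), V_MAX)
    steps = round((clamped - V_MIN) / STEP_V)
    return round(V_MIN + steps * STEP_V, 1)

print(nearest_inov_step(9.07))   # 9.0  (snapped down to the 9.0 V step)
print(nearest_inov_step(25.0))   # 20.0 (clamped to the maximum)
```

The fine 200 mV granularity is what lets INOV track the battery's optimum charge voltage much more closely than QC 2.0's handful of fixed levels.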
This new Quick Charge 3.0 technology will be available soon with devices featuring upcoming Qualcomm SoCs such as the Snapdragon 820.
Subject: Displays, Mobile | May 31, 2015 - 06:00 PM | Ryan Shrout
Tagged: nvidia, notebooks, msi, mobile, gsync, g-sync, asus
If you remember back to January of this year, Allyn and I posted an article that confirmed the existence of a mobile variant of G-Sync thanks to a leaked driver and an ASUS G751 notebook. Rumors and speculation floated around the Internet ether for a few days but we eventually got official word from NVIDIA that G-Sync for notebooks was a real thing and that it would launch "soon." Well, that day has finally arrived with the beginning of Computex.
G-Sync for notebooks has no clever branding, no "G-Sync Mobile" or anything like that, so discussing it will be a bit more difficult since the technologies are different. Going forward NVIDIA claims that any gaming notebook using NVIDIA GeForce GPUs will be a G-Sync notebook and will support all of the goodness that variable refresh rate gaming provides. This is fantastic news as notebook gaming is often at lower frame rates than you would find on a desktop PC because of lower powered hardware yet comparable (1080p, 1440p) resolution displays.
Of course, as we discovered in our first look at G-Sync for notebooks back in January, the much debated G-Sync module is not required and will not be present on notebooks featuring the variable refresh technology. So what gives? We went over some of this before, but it deserves to be detailed again.
NVIDIA uses the diagram above to demonstrate the headaches presented by the monitor and GPU communication path before G-Sync was released. You had three different components: the GPU, the monitor scaler, and the monitor panel, all of which needed to work together if VRR was going to become a high quality addition to the gaming ecosystem.
NVIDIA's answer was to take over all aspects of the pathway for pixels from the GPU to the eyeball, creating the G-Sync module and helping OEMs to hand pick the best panels that would work with VRR technology. This helped NVIDIA make sure it could do things to improve the user experience, such as implementing an algorithmic low-frame-rate, frame-doubling capability to maintain smooth and tear-free gaming at frame rates under the panel's physical limitations. It also allows them to tune the G-Sync module to the specific panel to help with ghosting and to implement variable overdrive logic.
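The low-frame-rate frame-doubling idea can be sketched in a few lines. This is our reading of the general technique, not NVIDIA's actual implementation: when the game's frame rate drops below the panel's minimum refresh, each rendered frame is scanned out multiple times so the panel's effective refresh stays in its supported range.

```python
# Sketch of VRR low-frame-rate compensation (illustrative, not NVIDIA's code):
# repeat each frame enough times to keep the panel refresh at or above its
# minimum supported rate.

PANEL_MIN_HZ = 30  # assumed minimum refresh for this hypothetical panel

def repeats_for(frame_rate_hz: float) -> int:
    """How many times each frame should be scanned out."""
    n = 1
    while frame_rate_hz * n < PANEL_MIN_HZ:
        n += 1
    return n

print(repeats_for(60))  # 1 -> panel refreshes at 60 Hz
print(repeats_for(24))  # 2 -> panel refreshes at 48 Hz
print(repeats_for(11))  # 3 -> panel refreshes at 33 Hz
```

The effect is that the display always receives a refresh inside its valid window, while the game still presents new frames exactly when they are ready.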
All of this is required because of the incredible amount of variability in the monitor and panel markets today.
But with notebooks, NVIDIA argues, there is no variability at all to deal with. The notebook OEM gets to handpick the panel and the GPU directly interfaces with the screen instead of passing through a scaler chip. (Note that some desktop monitors like the ever popular Dell 3007WFP did this as well.) There is no other piece of logic in the way attempting to enforce a fixed refresh rate. Because of that direct connection, the GPU is able to control the data passing between it and the display without any other logic working in the middle. This makes implementing VRR technology much simpler and helps with quality control because NVIDIA can validate the panels with the OEMs.
As I mentioned above, going forward, all new notebooks using GTX graphics will be G-Sync notebooks and that should solidify NVIDIA's dominance in the mobile gaming market. NVIDIA will be picking the panels, and tuning the driver for them specifically, to implement anti-ghosting technology (like what exists on the G-Sync module today) and low frame rate doubling. NVIDIA also claims that the world's first 75 Hz notebook panels will ship with GeForce GTX and will be G-Sync enabled this summer - something I am definitely looking forward to trying out myself.
Though it wasn't mentioned, I am hopeful that NVIDIA will continue to allow users the ability to disable V-Sync at frame rates above the maximum refresh of these notebook panels. With most of them limited to 60 Hz (but this applies to 75 Hz as well) the most demanding gamers are going to want that same promise of minimal latency.
At Computex we'll see a handful of models announced with G-Sync up and running. It should be no surprise of course to see the ASUS G751 with the GeForce GTX 980M GPU on this list as it was the model we used in our leaked driver testing back in January. MSI will also launch the GT72 G with a 1080p G-Sync ready display and GTX 980M/970M GPU option. Gigabyte will have a pair of notebooks: the Aorus X7 Pro-SYNC with GTX 970M SLI and a 1080p screen as well as the Aorus X5 with a pair of GTX 965M in SLI and a 3K resolution (2560x1440) screen.
This move is great for gamers and I am eager to see what the resulting experience is for users that pick up these machines. I have long been known as a proponent of variable refresh displays and getting access to that technology on your notebook is a victory for NVIDIA's team.
ARM Releases Cortex-A72 for Licensing
On February 3rd, ARM announced a slew of new designs, including the Cortex-A72. Few details were shared with us, but what we learned was that it could potentially redefine power and performance in the ARM ecosystem. Ryan was invited to London to participate in a deep dive of what ARM has done to improve its position against market behemoth Intel in the very competitive mobile space. Intel has a leg up on process technology with their 14nm Tri-Gate process, and they continue to work hard at making their x86 based processors more power efficient while still maintaining good performance. But there are certain drawbacks to using an ISA that is focused on high performance computing rather than being designed from scratch to provide good performance with excellent energy efficiency.
ARM has been on a pretty good roll with their Cortex A9, A7, A15, A17, A53, and A57 parts over the past several years. These designs have been utilized in a multitude of products and scenarios, with configurations that have scaled up to 16 cores. While each iteration has improved upon the previous, ARM is facing the specter of Intel’s latest generation, highly efficient x86 SoCs based on the 2nd gen 14nm Tri-Gate process. Several things have fallen into place for ARM to help them stay competitive, but we also cannot ignore the experience and design hours that have led to this product.
(Editor's Note: During my time with ARM last week it became very apparent that it is not standing still, not satisfied with its current status. With competition from Intel, Qualcomm and others ramping up over the next 12 months in both mobile and server markets, ARM will more than ever be dependent on the evolution of core design and GPU design to maintain advantages in performance and efficiency. As Josh will go into more detail here, the Cortex-A72 appears to be an incredibly impressive design, and all indications, and conversations I have had with people outside of ARM, suggest it will be an incredibly successful product.)
Cortex A72: Highest Performance ARM Cortex
ARM has been ubiquitous in mobile applications since it first started selling licenses for its designs in the 90s. They were found everywhere it seemed, but most people wouldn’t recognize the name ARM because these chips were fabricated and sold by licensees under their own names. Companies like TI, Qualcomm, Apple, DEC and others all licensed and adopted ARM technology in one form or another.
ARM’s importance grew dramatically as cellphones and smartphones increased in complexity. They also gained attention through multimedia devices such as the Microsoft Zune. What was once a fairly niche company with low performance, low power offerings became the 800 pound gorilla in the mobile market. Billions of chips are sold yearly based on ARM technology. To stay in that position ARM has worked aggressively on continually providing excellent power characteristics for their parts, but now they are really focusing on overall performance and capabilities to address not only the smartphone market, but also the higher performance computing and server spaces where they want a significant presence.
Subject: Graphics Cards, Mobile | February 19, 2015 - 03:58 PM | Ryan Shrout
Tagged: nvidia, notebooks, mobile, gpu
After a week or so of debate circling NVIDIA's decision to disable overclocking on mobility GPUs, we have word that the company has reconsidered and will be re-enabling the feature in next month's driver release:
As you know, we are constantly tuning and optimizing the performance of your GeForce PC.
We obsess over every possible optimization so that you can enjoy a perfectly stable machine that balances game, thermal, power, and acoustic performance.
Still, many of you enjoy pushing the system even further with overclocking.
Our recent driver update disabled overclocking on some GTX notebooks. We heard from many of you that you would like this feature enabled again. So, we will again be enabling overclocking in our upcoming driver release next month for those affected notebooks.
If you are eager to regain this capability right away, you can also revert back to 344.75.
Now, I don't want to brag here, but we did just rail against NVIDIA for this decision on last night's podcast...and then the reversal was posted on NVIDIA's forums just four hours ago... I'm not saying, but I'm just saying!
All kidding aside, this is great news! And NVIDIA desperately needs to be paying attention to what consumers are asking for in order to make up for some poor decisions made in the last several months. Now (or at least soon), you will be able to return to your mobile GPU overclocking!
Subject: General Tech | January 6, 2015 - 06:51 PM | Sebastian Peak
Tagged: mobile, headphone amplifier, DAP, DAC, ces 2015, CES, audiophile, audio
For the audio enthusiasts at CES this year Calyx Audio (Korean maker of audiophile-grade audio components) has a new prototype to show along with last year's Calyx M music player, and for an audiophile product the pricing is very aggressive.
Render of the Calyx PaT (dimensions in mm)
The PaT is a similar product in some ways to Calyx Audio's existing $199 USB DAC called the "Coffee", but this unit will be much smaller and will cost half as much at $99. And the reduction in price and size is only half of the story, as the PaT also works with mobile devices as an outboard DAC/headphone amp. Apple iPhones and iPads will be supported, and Android devices with USB audio-out support as well (probably via USB OTG).
The PaT supports up to 16-bit, 48kHz files (AIF, M4A, PCM, OGG, and MP3) and will also control track playback and volume via hardware control buttons on the unit. The PaT requires no external power or battery, taking what little juice it needs directly from the connection to your mobile device. As for amplification, in typical Calyx fashion even this miniature board is using a discrete class A/B headphone amplifier. Since the PaT relies only on the power passed through the USB connection, it is only capable of outputting 0.8 V, slightly lower by comparison than an iPhone 5, which outputs about 0.9 - 1.0 V.
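To put that 0.8 V output limit in perspective, here is a back-of-envelope power calculation (our own arithmetic, not from Calyx) using the standard relation P = V²/R for a few common headphone impedances:

```python
# Rough maximum power delivery from an 0.8 V RMS output into typical
# headphone loads, using P = V^2 / R. Illustrative figures only.

def power_mw(v_rms: float, impedance_ohms: float) -> float:
    """Power in milliwatts into a resistive load at a given RMS voltage."""
    return v_rms ** 2 / impedance_ohms * 1000

for z in (16, 32, 300):
    print(f"{z:>3} ohm load: {power_mw(0.8, z):.1f} mW")
```

Low-impedance earbuds get a healthy 20-40 mW, while a 300-ohm studio headphone sees only a couple of milliwatts, which is why a bus-powered dongle like this is best matched with sensitive, low-impedance headphones.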
The tiny prototype PaT in action
The PaT may be just a working board at this point, but the company has scheduled the release for February 2015, when the devices will be available in various colors of thin aluminum enclosures.
In the world of computer audio much more attention has been focused lately on advancements in sound, with special shielding and isolation on motherboards, special gold-plated USB ports for DACs, and customizable op-amps becoming a trend. While the market for dedicated sound cards isn't what it once was, high-end PCI-E and USB cards from Creative (Sound Blaster) and ASUS (Xonar) are still widely available. Most of these products are for desktop users, but there is a growing number of portable devices that allow mobile users to experience great sound, too. For myself, great sound means faithful reproduction of 2-channel music, and it's nice to see attention paid to that area without the added effects of digital signal processing (DSP). Calyx seems interested only in engineering products that play back music as close to the source as possible, and I can't argue with that!
The Calyx PaT is scheduled to launch in February for $99, but like most high-end audio components it will take a little research to track it down. The USA distributor of the Calyx brand has a website with product and contact information here.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Processors | November 20, 2014 - 01:31 PM | Josh Walrath
Tagged: amd, APU, carrizo, Carrizo-L, Kaveri, Excavator, Steamroller, SoC, Intel, mobile
AMD has certainly gone about doing things in a slightly different manner than we are used to. Today they announced their two latest APUs, which will begin shipping in the first half of 2015. These APUs are up and running at AMD and are being validated as we speak. AMD did not release many details on these products, but what we do know is pretty interesting.
Carrizo is based on the latest iteration of AMD’s CPU technology. Excavator is the codename for these latest CPU cores, and they promise to be smaller and more efficient than the previous Steamroller core which powers the latest Kaveri based APUs. Carrizo-L is the lower power variant which will be based on the Puma+ core. The current Beema APU is based on the Puma architecture.
Roadmaps show that the Carrizo APUs will be 28 nm products, presumably fabricated by GLOBALFOUNDRIES. Many were hoping that AMD would make the jump to 20 nm with this generation of products, but that does not seem to be the case. This is not surprising due to the limitations of that particular process when dealing with large designs that require a lot of current. AMD will likely be pushing for 16 nm FinFET for the generation of products after Carrizo.
The big Carrizo supposedly has a next generation GCN unit. My guess here is that it will use the same design as we saw with the R9 285. That particular product is a next generation unit with improved efficiency. AMD did not release how many GCN cores will be present in Carrizo, but it will be very similar to what we see now with Kaveri. Carrizo-L will use the same GCN units as the previous generation Beema based products.
I believe AMD has spent a lot more time hand tuning Excavator instead of relying on a lot of automated place and route. This should allow them to retain much of the performance of the part while cutting down dramatically on die area. Some rumors that I have seen point to each Excavator module being 40% smaller than Steamroller. I am not entirely sure they have achieved that type of improvement, but more hand layout does typically mean greater efficiency and less waste. The downside to hand layout is that it is extremely time and manpower intensive. Intel can afford this type of design while AMD has to rely more on automated place and route.
Carrizo will be the first HSA 1.0 compliant SoC. It is in fact an SoC, as it integrates the southbridge functions that previously had been handled by external chips like the A88X that supports the current Kaveri desktop APUs. Carrizo and Carrizo-L will also share the same infrastructure, meaning the motherboards these APUs will be soldered onto are interchangeable. One motherboard from a partner OEM will be able to address multiple markets, with products ranging from 4 watts TDP up to 35 watts.
Finally, both APUs feature the security processor that gives them access to ARM TrustZone technology. This is a very small ARM processor that handles the secure boot partition and security requests. This puts AMD on par with Intel and their secure computing solution (vPro).
These products will be aimed only at the mobile market. So far AMD has not announced Carrizo for the desktop market, but when they do I would imagine that it will hit a max TDP of around 65 watts. AMD claims that Carrizo is one of the biggest jumps for them in terms of power efficiency. A lot of different pieces of technology have all come together with this product to make them more competitive with Intel and their process advantage. Time will tell if this is the case, but for now AMD is staying relevant and pushing their product releases so that they are more consistently on time.