Subject: Processors | April 5, 2016 - 06:30 AM | Josh Walrath
Tagged: mobile, hp, GCN, envy, ddr4, carrizo, Bristol Ridge, APU, amd, AM4
Today AMD is “pre-announcing” its latest 7th generation APU. Codenamed “Bristol Ridge”, this new SOC is based on the Excavator architecture featured in the previous Carrizo series of products. AMD provided very few details about what is new and different in Bristol Ridge as compared to Carrizo, but they did offer a few hints.
They were able to provide a die shot of the new Bristol Ridge APU, and at first glance there appear to be some interesting differences between it and the previous Carrizo. Unfortunately, there really are no changes that we can see from this shot. Those new functional units that you are tempted to speculate about? For some reason AMD decided to widen the shot of this die, and those extra units around the border are simply the adjacent dies on the wafer. I was bamboozled at first, but happily Marc Sauter pointed it out to me. No new functional units for you!
This is the Carrizo shot. It is functionally identical to what we see with Bristol Ridge.
AMD appears to be using the same 28 nm HKMG process from GLOBALFOUNDRIES. This is not going to give AMD much of a jump, but from what we hear in the industry, GLOBALFOUNDRIES and others have put an impressive amount of work into several generations of 28 nm products. TSMC, for example, is on its third iteration of the node, with improved power and clock capabilities. GLOBALFOUNDRIES has continued to improve its particular process, and Bristol Ridge is likely to be the last APU built on that node.
All of the competing chips are rated at 15 watts TDP. Intel has the compute advantage, but AMD is cleaning up when it comes to graphics.
The company has also continued to improve upon their power gating and clocking technologies to keep TDPs low, yet performance high. AMD recently released the Godavari APUs, which exhibit better clocking and power characteristics than the previous Kaveri. Little was done to the actual design; rather, it was improved process tech and better clock control algorithms that achieved these advances. It appears as though AMD has continued this trend with Bristol Ridge.
We are likely not seeing per-clock increases; rather, higher and longer sustained clockspeeds are providing the performance boost we see between Carrizo and Bristol Ridge. In these benchmarks AMD is using 15 watt TDP products. These are mobile chips, and any power improvements will show up as significant gains in overall performance. Bristol Ridge is still a native quad core part with what looks to be an 8 compute unit GCN graphics portion.
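That clock-residency argument can be illustrated with toy numbers. The IPC value and residency profiles below are hypothetical, purely for illustration, and are not AMD's figures:

```python
# Hypothetical illustration: with per-clock throughput (IPC) unchanged,
# performance tracks the clock speed a chip can *sustain* inside its TDP,
# not its peak boost clock.
def sustained_performance(ipc, clock_profile):
    """Average work per second, given (clock_ghz, fraction_of_time) pairs."""
    return ipc * sum(clock * frac for clock, frac in clock_profile)

ipc = 1.0  # arbitrary units; assumed identical for both chips

# Made-up residency profiles for a 15 watt part (not AMD's numbers)
carrizo = [(3.0, 0.2), (2.1, 0.8)]  # boosts briefly, then drops to base clock
bristol = [(3.0, 0.5), (2.1, 0.5)]  # better power control sustains boost longer

print(sustained_performance(ipc, carrizo))  # 2.28
print(sustained_performance(ipc, bristol))  # 2.55, roughly 12% faster at the same TDP
```

Under these assumed numbers, an identical core design picks up a double-digit performance gain purely from spending more time at its boost clock.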
Again with all three products at a 15 watt TDP we can see that AMD is squeezing every bit of performance it can with the 28 nm process and their Excavator based design.
The basic core and GPU design look relatively unchanged, but obviously there were a lot of tweaks applied to give the better performance at comparable TDPs.
AMD is announcing this along with the first product that will feature the new APU: the HP Envy X360. This convertible tablet offers some very nice features and looks to be one of the better implementations yet of AMD's latest APUs. Carrizo had some wins, but taking market share back from Intel in the mobile space has been torturous at best. AMD obviously hopes that Bristol Ridge in the sub-35 watt range will keep the company fighting in this important market. Perhaps one of the more interesting features is the option for a PCIe SSD. Hopefully AMD will send out a few samples so we can see what a more “premium” convertible can do with AMD silicon.
The HP Envy X360 convertible in all of its glory.
Bristol Ridge will be coming to the AM4 socket infrastructure in what appears to be a Computex timeframe. These parts will of course feature higher TDPs than what we are seeing here with the 15 watt unit that was tested. It seems at that time AMD will announce the full lineup from top to bottom and start seeding the market with AM4 boards that will eventually house the “Zen” CPUs that will show up in late 2016.
Fighting for Relevance
AMD is still kicking. While the results of this past year have been forgettable, they have overcome some significant hurdles and look like they are improving their position in terms of cutting costs while extracting as much revenue as possible. There were plenty of ups and downs for this past quarter, but when compared to the rest of 2015 there were some solid steps forward here.
The company reported revenues of $958 million, which is down from $1.06 billion last quarter. The company also recorded a $103 million loss, but that is down significantly from the $197 million loss the quarter before. Q3 did have a $65 million write-down due to unsold inventory. Though the company made far less in revenues, they also shored up their losses. The company is still bleeding, but they still have plenty of cash on hand for the next several quarters to survive. When we talk about non-GAAP figures, AMD reports a $79 million loss for this past quarter.
For the entire year AMD recorded $3.99 billion in revenue with a net loss of $660 million. This is down from FY 2014 revenues of $5.51 billion and a net loss of $403 million. AMD certainly is trending downwards year over year, but they are hoping to reverse that come 2H 2016.
Graphics continues to be solid for AMD as they increased their sales from last quarter, though they are down year on year. Holiday sales were brisk, but with the high end Fury series as the only truly new cards this season, their impact was not as great as it would have been had the company fielded a fresh mid-range series beyond the recently introduced R9 380X. The second half of 2016 will see the introduction of the Polaris based GPUs for both mobile and desktop applications. Until then, AMD will continue to provide the current 28 nm lineup of GPUs to the market. At this point we are under the assumption that AMD and NVIDIA are looking at the same timeframe for introducing their next generation parts due to process technology advances. AMD already has working samples on Samsung's/GLOBALFOUNDRIES' 14 nm LPP (Low Power Plus) process, which they showed off at CES 2016.
Introduction and CPU Performance
We had a chance this week to go hands-on with the Snapdragon 820, the latest flagship SoC from Qualcomm, in a hardware session featuring prototype handsets powered by this new silicon. How did it perform? Read on to find out!
As you would expect from an all-new flagship part, the Snapdragon 820 offers improvements in virtually every category compared to their previous products. And with the 820 Qualcomm is emphasizing not only performance, but lower power consumption with claims of anywhere from 20% to 10x better efficiency across the components that make up this new SoC. And part of these power savings will undoubtedly come as the result of Qualcomm’s decision to move to a quad-core design with the 820, rather than the 8-core design of the 810.
So what exactly does comprise a high-end SoC like the Snapdragon 820? Ryan covered the launch in detail back in November (and we introduced aspects of the new SoC in a series of articles leading up to the launch). In brief, the Snapdragon 820 includes a custom quad-core CPU (Kryo), the Adreno 530 GPU, a new DSP (Hexagon 680), a new ISP (Spectra), and a new LTE modem (X12). The previous flagship Snapdragon 810 used stock ARM cores (Cortex-A57, Cortex-A53) in a big.LITTLE configuration, but for various reasons Qualcomm has chosen not to introduce another 8-core SoC with this new product.
The four Kryo CPU cores found in the Snapdragon 820 can operate at speeds of up to 2.2 GHz, and since this is half the core count of the octo-core Snapdragon 810, the IPC (instructions per clock) of the new part will help determine how competitive the SD820's performance will be; but there's a lot more to the story. This SoC design places equal emphasis on all of its components, and the strategy with the SD820 seems to be leveraging the advanced signal processing capability of the Hexagon 680 to offload work from the CPU, allowing it to run with greater efficiency and at lower power.
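A rough cores x clock x IPC model shows why IPC is the key question for a quad-core part replacing an octo-core one. The IPC figures below are assumptions chosen purely for illustration, not measured or Qualcomm-provided numbers:

```python
# Back-of-envelope throughput model: multithreaded performance scales
# roughly with cores x clock x IPC. All IPC values here are assumptions.
def throughput(cores, clock_ghz, ipc):
    return cores * clock_ghz * ipc

# Snapdragon 810: 4x Cortex-A57 @ ~2.0 GHz plus 4x Cortex-A53 @ ~1.5 GHz,
# with the A53s assumed to deliver half the A57's per-clock throughput.
sd810 = throughput(4, 2.0, 1.0) + throughput(4, 1.5, 0.5)

# IPC the four 2.2 GHz Kryo cores would need just to match that total:
required_ipc = sd810 / throughput(4, 2.2, 1.0)
print(round(required_ipc, 2))  # 1.25, i.e. ~25% higher IPC than an A57
```

Under these assumed numbers, Kryo needs a meaningful per-clock advantage over the A57 before the quad-core design breaks even on fully threaded workloads, which is why the offload story around the Hexagon 680 matters so much.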
Subject: Processors, Mobile | November 12, 2015 - 09:30 AM | Sebastian Peak
Tagged: SoC, smartphone, Samsung Galaxy, Samsung, mobile, Exynos 8890, Exynos 8 Octa, Exynos 7420, Application Processor
Coming just a day after Qualcomm officially launched their Snapdragon 820 SoC, Samsung is today unveiling their latest flagship mobile part, the Exynos 8 Octa 8890.
The Exynos 8 Octa 8890 is built on Samsung's 14 nm FinFET process like the previous Exynos 7 Octa 7420, and is again based on a big.LITTLE configuration, though the big processing cores are a custom design this time around. The Exynos 7420 comprised four big ARM Cortex A57 cores and four small Cortex A53 cores, and while the small cores in the 8890 are again ARM Cortex A53, the big cores feature Samsung's “first custom designed CPU based on 64-bit ARMv8 architecture”.
“With Samsung’s own SCI (Samsung Coherent Interconnect) technology, which provides cache-coherency between big and small cores, the Exynos 8 Octa fully utilizes benefits of big.LITTLE structure for efficient usage of the eight cores. Additionally, Exynos 8 Octa is built on highly praised 14nm FinFET process. These all efforts for Exynos 8 Octa provide 30% more superb performance and 10% more power efficiency.”
Another big advancement for the Exynos 8 Octa is the integrated modem, which provides Category 12/13 LTE with download speeds (with carrier aggregation) of up to 600 Mbps, and uploads up to 150 Mbps. This might sound familiar, as it mirrors the LTE Release 12 specs of the new modem in the Snapdragon 820.
Video processing is handled by the Mali-T880 GPU, moving up from the Mali-T760 found in the Exynos 7 Octa. The T880 is “the highest performance and the most energy-efficient mobile GPU in the Mali family”, with up to 1.8x the performance of the T760 while being 40% more energy-efficient.
Samsung will be taking this new SoC into mass production later this year, and the chip is expected to be featured in the company’s upcoming flagship Galaxy phone.
Full PR after the break.
Subject: Mobile | September 30, 2015 - 02:33 PM | Sebastian Peak
Tagged: X12 Modem, SoC, snapdragon 820, qualcomm, phones, mu-mimo, mobile, LTE, cell phones
The upcoming Snapdragon 820 is shaping up to be a formidable SoC after the disappointing response to the previous flagship, the Snapdragon 810, which was in far fewer devices than expected for reasons still shrouded in mystery and speculation. One of the biggest aspects of the upcoming 820 is Qualcomm’s new X12 modem, which will provide the most advanced LTE connectivity seen to date when the SoC launches. The X12 features CAT 12 LTE downlink speeds for up to 600 Mbps, and CAT 13 on the uplink for up to 150 Mbps.
LTE connectivity isn’t the only new thing here, as we see from this slide there is also tri-band Wi-Fi supporting 2x2 MU-MIMO.
“This is the first publicly announced processor for use in mobile devices to support LTE Category 12 in the downlink and Category 13 in the uplink, providing up to 33 percent and 200 percent improvement over its predecessor’s download and upload speeds, respectively.”
The specifications for this new modem are densely packed:
- Cat 12 (up to 600 Mbps) in the downlink
- Cat 13 (up to 150 Mbps) in the uplink
- Up to 4x4 MIMO on one downlink LTE carrier
- 2x2 MU-MIMO (802.11ac)
- Multi-gigabit 802.11ad
- LTE-U and LTE+Wi-Fi Link Aggregation (LWA)
- Next Gen HD Voice and Video calling over LTE and Wi-Fi
- Call Continuity across Wi-Fi, LTE, 3G, and 2G
- RF front end innovations
- Advanced Closed Loop Antenna Tuner
- Qualcomm RF360™ front end solution with CA
- Wi-Fi/LTE antenna sharing
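The percentage gains Qualcomm claims check out if the predecessor is assumed to be the Cat 9 X10 modem found in the Snapdragon 810 (450 Mbps down, 50 Mbps up; those baseline figures are our assumption, not stated in the PR):

```python
# Sanity check of the claimed "33 percent and 200 percent" improvements,
# assuming an X10 (LTE Cat 9) baseline of 450/50 Mbps.
x10 = {"down_mbps": 450, "up_mbps": 50}
x12 = {"down_mbps": 600, "up_mbps": 150}

down_gain = (x12["down_mbps"] / x10["down_mbps"] - 1) * 100
up_gain = (x12["up_mbps"] / x10["up_mbps"] - 1) * 100
print(f"downlink: +{down_gain:.0f}%, uplink: +{up_gain:.0f}%")
# downlink: +33%, uplink: +200%
```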
Rumored phones that could end up running the Snapdragon 820 with this X12 modem include the Samsung Galaxy S7 and around 30 other devices, though final word is of course pending on shipping hardware.
Subject: Mobile | September 24, 2015 - 07:55 PM | Sebastian Peak
Tagged: usb, snapdragon 820, Quick Charge 3.0, Quick Charge, qualcomm, mobile, battery charger
Qualcomm has announced Quick Charge 3.0, the latest iteration of their fast battery charging technology. The new version is said to not only further improve battery charging times, but also better maintain battery health and reduce temperatures.
One of the biggest issues with fast battery charging is the premature failure of the battery cells; something my first Nexus 6 (which was replaced due to a bad battery) can attest to. The new 3.0 standard adds "Battery Saver Technology" (BST) which constantly varies the current delivery rate based on what the battery can safely accept, thus preventing damage to the cells. This new version of Quick Charge also claims to offer lower temps while charging, which could be partly the result of this variable current delivery.
The other change comes from "Intelligent Negotiation for Optimum Voltage" (INOV), which can vary the voltage delivery anywhere from 3.6V to 20V in 200mV increments depending on the device's negotiated connection. INOV will allow Quick Charge 3.0 to charge a full 2x faster than the original Quick Charge 1.0 (it's 1.5x faster than QC 2.0), and 4x over standard USB charging as it provides up to 60W to compatible devices.
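The INOV voltage ladder described above is easy to sketch. Whether both endpoints are valid levels, and how the charger rounds a request to a level, are assumptions here (the policy below simply picks the nearest level at or below the request):

```python
# Hypothetical sketch of the INOV voltage ladder: 3.6 V to 20 V in 200 mV
# steps, with both endpoints assumed to be valid levels.
def inov_levels(v_min=3.6, v_max=20.0, step=0.2):
    n = round((v_max - v_min) / step) + 1
    return [round(v_min + i * step, 1) for i in range(n)]

levels = inov_levels()
print(len(levels))             # 83 discrete voltage levels
print(levels[0], levels[-1])   # 3.6 20.0

# Assumed negotiation policy: nearest level at or below the device's request.
def negotiate(requested_v, levels=levels):
    candidates = [v for v in levels if v <= requested_v]
    return candidates[-1] if candidates else None

print(negotiate(9.05))  # 9.0
```

The fine 200 mV granularity is the point: rather than jumping between a few fixed voltages as Quick Charge 2.0 did, the charger can sit very close to whatever the battery can safely accept at that moment.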
This new Quick Charge 3.0 technology will be available soon with devices featuring upcoming Qualcomm SoCs such as the Snapdragon 820.
Subject: Displays, Mobile | May 31, 2015 - 06:00 PM | Ryan Shrout
Tagged: nvidia, notebooks, msi, mobile, gsync, g-sync, asus
If you remember back to January of this year, Allyn and I posted an article that confirmed the existence of a mobile variant of G-Sync thanks to a leaked driver and an ASUS G751 notebook. Rumors and speculation floated around the Internet ether for a few days, but we eventually got official word from NVIDIA that G-Sync for notebooks was a real thing and that it would launch "soon." Well, that day has finally arrived with the beginning of Computex.
G-Sync for notebooks has no clever branding, no "G-Sync Mobile" or anything like that, so discussing it will be a bit more difficult since the desktop and notebook technologies are different. Going forward, NVIDIA claims that any gaming notebook using NVIDIA GeForce GPUs will be a G-Sync notebook and will support all of the goodness that variable refresh rate gaming provides. This is fantastic news, as notebook gaming often runs at lower frame rates than you would find on a desktop PC because of lower powered hardware driving displays of comparable (1080p, 1440p) resolution.
Of course, as we discovered in our first look at G-Sync for notebooks back in January, the much debated G-Sync module is not required and will not be present on notebooks featuring the variable refresh technology. So what gives? We went over some of this before, but it deserves to be detailed again.
NVIDIA uses the diagram above to demonstrate the headaches presented by the monitor and GPU communication path before G-Sync was released. You had three different components: the GPU, the monitor scaler, and the monitor panel, all of which needed to work together if VRR was going to become a high quality addition to the gaming ecosystem.
NVIDIA's answer was to take over all aspects of the pathway for pixels from the GPU to the eyeball, creating the G-Sync module and helping OEMs hand pick the best panels to work with VRR technology. This helped NVIDIA make sure it could do things to improve the user experience, such as implementing an algorithmic low-frame-rate, frame-doubling capability to maintain smooth and tear-free gaming at frame rates under the panel's physical limits. It also allows them to tune the G-Sync module to the specific panel to help with ghosting and to implement variable overdrive logic.
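The low-frame-rate frame-doubling idea can be sketched roughly as follows. This is a simplified model, not NVIDIA's actual algorithm, and the 30 Hz panel minimum is an assumed value:

```python
# Simplified model of low-frame-rate frame doubling: below the panel's
# minimum refresh, each frame is scanned out n times so the effective
# refresh rate stays inside the panel's supported VRR window.
def vrr_refresh(fps, panel_min=30, panel_max=75):
    """Return (multiplier, effective_refresh_hz); illustrative only."""
    if fps > panel_max:
        return 1, panel_max  # clamp to max refresh (or tear, with V-Sync off)
    multiplier = 1
    while fps * multiplier < panel_min:
        multiplier += 1
    return multiplier, fps * multiplier

print(vrr_refresh(45))  # (1, 45): within range, refresh tracks the frame rate
print(vrr_refresh(20))  # (2, 40): each frame scanned out twice
print(vrr_refresh(9))   # (4, 36): quadrupled to stay above the panel minimum
```

The player still sees a new frame only as often as the GPU renders one, but the panel never has to hold an image longer than it physically can, which is what keeps low-frame-rate gameplay free of flicker and tearing.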
All of this is required because of the incredible amount of variability in the monitor and panel markets today.
But with notebooks, NVIDIA argues, there is no variability at all to deal with. The notebook OEM gets to handpick the panel, and the GPU directly interfaces with the screen instead of passing through a scaler chip. (Note that some desktop monitors like the ever popular Dell 3007WFP did this as well.) There is no other piece of logic in the way attempting to enforce a fixed refresh rate. Because of that direct connection, the GPU is able to control the data passing between it and the display without any other logic working in the middle. This makes implementing VRR technology much simpler and helps with quality control because NVIDIA can validate the panels with the OEMs.
As I mentioned above, going forward, all new notebooks using GTX graphics will be G-Sync notebooks and that should solidify NVIDIA's dominance in the mobile gaming market. NVIDIA will be picking the panels, and tuning the driver for them specifically, to implement anti-ghosting technology (like what exists on the G-Sync module today) and low frame rate doubling. NVIDIA also claims that the world's first 75 Hz notebook panels will ship with GeForce GTX and will be G-Sync enabled this summer - something I am definitely looking forward to trying out myself.
Though it wasn't mentioned, I am hopeful that NVIDIA will continue to allow users the ability to disable V-Sync at frame rates above the maximum refresh of these notebook panels. With most of them limited to 60 Hz (but this applies to 75 Hz as well) the most demanding gamers are going to want that same promise of minimal latency.
At Computex we'll see a handful of models announced with G-Sync up and running. It should be no surprise of course to see the ASUS G751 with the GeForce GTX 980M GPU on this list as it was the model we used in our leaked driver testing back in January. MSI will also launch the GT72 G with a 1080p G-Sync ready display and GTX 980M/970M GPU option. Gigabyte will have a pair of notebooks: the Aorus X7 Pro-SYNC with GTX 970M SLI and a 1080p screen as well as the Aorus X5 with a pair of GTX 965M in SLI and a 3K resolution (2560x1440) screen.
This move is great for gamers and I am eager to see what the resulting experience is for users that pick up these machines. I have long been known as a proponent of variable refresh displays and getting access to that technology on your notebook is a victory for NVIDIA's team.
ARM Releases Cortex-A72 for Licensing
On February 3rd, ARM announced a slew of new designs, including the Cortex A72. Few details were shared with us, but what we learned suggests it could redefine power and performance in the ARM ecosystem. Ryan was invited to London to participate in a deep dive of what ARM has done to improve its position against market behemoth Intel in the very competitive mobile space. Intel has a leg up on process technology with their 14nm Tri-Gate process, and they continue to work hard at making their x86 based processors more power efficient while still maintaining good performance. There are certain drawbacks, however, to using an ISA focused on high performance computing rather than one designed from scratch to provide good performance with excellent energy efficiency.
ARM has been on a pretty good roll with their Cortex A9, A7, A15, A17, A53, and A57 parts over the past several years. These designs have been utilized in a multitude of products and scenarios, with configurations that have scaled up to 16 cores. While each iteration has improved upon the previous, ARM is facing the specter of Intel’s latest generation, highly efficient x86 SOCs based on the 2nd gen 14nm Tri-Gate process. Several things have fallen into place for ARM to help them stay competitive, but we also cannot ignore the experience and design hours that have led to this product.
(Editor's Note: During my time with ARM last week it became very apparent that the company is not standing still, nor is it satisfied with its current status. With competition from Intel, Qualcomm and others ramping up over the next 12 months in both the mobile and server markets, ARM will be more dependent than ever on the evolution of its core and GPU designs to maintain advantages in performance and efficiency. As Josh will detail here, the Cortex-A72 appears to be an incredibly impressive design, and all indications, along with the conversations I have had with others outside of ARM, suggest that it will be an incredibly successful product.)
Cortex A72: Highest Performance ARM Cortex
ARM has been ubiquitous in mobile applications since it first started selling licenses for its products in the 90s. Its designs were seemingly found everywhere, but most people wouldn't recognize the name ARM because these chips were fabricated and sold by licensees under their own names. Companies like TI, Qualcomm, Apple, DEC and others all licensed and adopted ARM technology in one form or another.
ARM’s importance grew dramatically with the introduction of increasingly complex cellphones and smartphones. The company also gained attention through multimedia devices such as the Microsoft Zune. What was once a fairly niche company with low performance, low power offerings became the 800 pound gorilla in the mobile market, with billions of ARM based chips sold yearly. To stay in that position ARM has worked aggressively to continually provide excellent power characteristics for its parts, but now it is really focusing on overall performance and capabilities to address not only the smartphone market, but also the higher performance computing and server spaces where it wants a significant presence.
Subject: Graphics Cards, Mobile | February 19, 2015 - 03:58 PM | Ryan Shrout
Tagged: nvidia, notebooks, mobile, gpu
After a week or so of debate surrounding NVIDIA's decision to disable overclocking on mobility GPUs, we have word that the company has reconsidered and will be re-enabling the feature in next month's driver release:
As you know, we are constantly tuning and optimizing the performance of your GeForce PC.
We obsess over every possible optimization so that you can enjoy a perfectly stable machine that balances game, thermal, power, and acoustic performance.
Still, many of you enjoy pushing the system even further with overclocking.
Our recent driver update disabled overclocking on some GTX notebooks. We heard from many of you that you would like this feature enabled again. So, we will again be enabling overclocking in our upcoming driver release next month for those affected notebooks.
If you are eager to regain this capability right away, you can also revert back to 344.75.
Now, I don't want to brag here, but we did just rail on NVIDIA for this decision on last night's podcast... and then the reversal was posted on NVIDIA's forums just four hours ago... I'm not saying, but I'm just saying!
All kidding aside, this is great news! And NVIDIA desperately needs to be paying attention to what consumers are asking for in order to make up for some poor decisions made in the last several months. Now (or at least soon), you will be able to return to your mobile GPU overclocking!
Subject: General Tech | January 6, 2015 - 06:51 PM | Sebastian Peak
Tagged: mobile, headphone amplifier, DAP, DAC, ces 2015, CES, audiophile, audio
For the audio enthusiasts at CES this year Calyx Audio (Korean maker of audiophile-grade audio components) has a new prototype to show along with last year's Calyx M music player, and for an audiophile product the pricing is very aggressive.
Render of the Calyx PaT (dimensions in mm)
The PaT is in some ways similar to Calyx Audio's existing $199 USB DAC, the "Coffee", but this unit will be much smaller and will cost half as much at $99. The reduction in price and size is only half of the story, as the PaT also works with mobile devices as an outboard DAC/headphone amp. Apple iPhones and iPads will be supported, as will Android devices with USB audio-out support (probably via USB OTG).
The PaT supports up to 16-bit, 48kHz files (AIF, M4A, PCM, OGG, and MP3) and will also control track playback and volume via hardware control buttons on the unit. The PaT requires no external power or battery, taking what little juice it needs directly from the connection to your mobile device. As for amplification, in typical Calyx fashion even this miniature board is using a discrete class A/B headphone amplifier. Since the PaT relies only on the power passed through the USB connection it is only capable of outputting 0.8 V, which by comparison is slightly lower than an iPhone 5 which outputs about 0.9 - 1.0 V.
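Since headphone power goes with the square of the output voltage (P = V²/R), the 0.8 V figure can be put in perspective with a quick calculation. The 32 ohm load below is an assumed typical headphone impedance, not a Calyx specification:

```python
# Back-of-envelope output power into a headphone load: P = V^2 / R.
# The 32 ohm impedance is an assumed typical value for illustration.
def power_mw(v_rms, load_ohms=32):
    return v_rms ** 2 / load_ohms * 1000

print(round(power_mw(0.8), 1))  # 20.0 mW from the PaT's 0.8 V
print(round(power_mw(0.9), 1))  # 25.3 mW at the iPhone's lower bound
```

In other words, the bus-powered PaT gives up roughly a fifth to a third of the iPhone's output power into easy loads, which is plenty for efficient earphones but worth noting for high-impedance headphones.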
The tiny prototype PaT in action
The PaT may be just a working board at this point, but the company has scheduled the release for February 2015, when the devices will be available in various colors of thin aluminum enclosures.
In the world of computer audio, much more attention has been focused lately on sound quality, with special shielding and isolation on motherboards, gold-plated USB ports for DACs, and customizable op-amps becoming a trend. While the market for dedicated sound cards isn't what it once was, high-end PCI-E and USB cards from Creative (Sound Blaster) and ASUS (Xonar) are still widely available. Most of these products are for desktop users, but there is a growing number of portable devices that allow mobile users to experience great sound, too. For myself, great sound means faithful reproduction of 2-channel music, and it's nice to see attention paid to that area without the added effects of digital signal processing (DSP). Calyx seems interested only in engineering products that play back music as close to the source as possible, and I can't argue with that!
The Calyx PaT is scheduled to launch in February for $99, but like most high-end audio components it will take a little research to track it down. The USA distributor of the Calyx brand has a website with product and contact information here.
Follow all of our coverage of the show at http://pcper.com/ces!