Subject: Mobile | September 30, 2015 - 02:33 PM | Sebastian Peak
Tagged: X12 Modem, SoC, snapdragon 820, qualcomm, phones, mu-mimo, mobile, LTE, cell phones
The upcoming Snapdragon 820 is shaping up to be a formidable SoC after the disappointing response to the previous flagship, the Snapdragon 810, which was in far fewer devices than expected for reasons still shrouded in mystery and speculation. One of the biggest aspects of the upcoming 820 is Qualcomm's new X12 modem, which will provide the most advanced LTE connectivity seen to date when the SoC launches. The X12 features Cat 12 LTE on the downlink for speeds of up to 600 Mbps, and Cat 13 on the uplink for up to 150 Mbps.
LTE connectivity isn’t the only new thing here; as we see from this slide, there is also tri-band Wi-Fi supporting 2x2 MU-MIMO.
“This is the first publicly announced processor for use in mobile devices to support LTE Category 12 in the downlink and Category 13 in the uplink, providing up to 33 percent and 200 percent improvement over its predecessor’s download and upload speeds, respectively.”
The specifications for this new modem are densely packed:
- Cat 12 (up to 600 Mbps) in the downlink
- Cat 13 (up to 150 Mbps) in the uplink
- Up to 4x4 MIMO on one downlink LTE carrier
- 2x2 MU-MIMO (802.11ac)
- Multi-gigabit 802.11ad
- LTE-U and LTE+Wi-Fi Link Aggregation (LWA)
- Next Gen HD Voice and Video calling over LTE and Wi-Fi
- Call Continuity across Wi-Fi, LTE, 3G, and 2G
- RF front end innovations
- Advanced Closed Loop Antenna Tuner
- Qualcomm RF360™ front end solution with CA
- Wi-Fi/LTE antenna sharing
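The quoted 33 percent and 200 percent gains check out if the predecessor's limits were roughly Cat 9 (450 Mbps down) and Cat 5 (50 Mbps up); those baseline figures are my inference from the quoted percentages, not numbers Qualcomm stated here. A quick sketch of the arithmetic:

```python
# Sanity check of the quoted speed improvements. The predecessor figures
# (450 Mbps down / 50 Mbps up) are assumptions inferred from the quoted
# percentages, not from the source.

def improvement_pct(new_mbps: float, old_mbps: float) -> float:
    """Percentage improvement of the new rate over the old one."""
    return (new_mbps / old_mbps - 1) * 100

downlink = improvement_pct(600, 450)  # X12 Cat 12 vs. assumed Cat 9
uplink = improvement_pct(150, 50)     # X12 Cat 13 vs. assumed Cat 5

print(f"Downlink improvement: {downlink:.0f}%")  # ~33%
print(f"Uplink improvement: {uplink:.0f}%")      # 200%
```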
Rumored phones that could end up running the Snapdragon 820 with this X12 modem include the Samsung Galaxy S7 and around 30 other devices, though final word is of course pending on shipping hardware.
Subject: Mobile | September 24, 2015 - 07:55 PM | Sebastian Peak
Tagged: usb, snapdragon 820, Quick Charge 3.0, Quick Charge, qualcomm, mobile, battery charger
Qualcomm has announced Quick Charge 3.0, the latest iteration of their fast battery charging technology. The new version is said to not only further improve battery charging times, but also better maintain battery health and reduce temperatures.
One of the biggest issues with fast battery charging is the premature failure of the battery cells, something my first Nexus 6 (which was replaced due to a bad battery) can attest to. The new 3.0 standard adds "Battery Saver Technology" (BST), which constantly varies the current delivery rate based on what the battery can safely accept, thus preventing damage to the cells. This new version of Quick Charge also claims to offer lower temperatures while charging, which could be partly the result of this variable current delivery.
The other change comes from "Intelligent Negotiation for Optimum Voltage" (INOV), which can vary the voltage delivery anywhere from 3.6V to 20V in 200mV increments depending on the device's negotiated connection. INOV will allow Quick Charge 3.0 to charge a full 2x faster than the original Quick Charge 1.0 (it's 1.5x faster than QC 2.0), and 4x over standard USB charging as it provides up to 60W to compatible devices.
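As a rough sketch of what that voltage grid looks like, the snippet below enumerates the 200mV steps between 3.6V and 20V; the 3A current figure is purely illustrative, chosen only to show how the 20V ceiling lines up with the quoted 60W, and Qualcomm's actual negotiation protocol is not public in this detail.

```python
# A sketch of the INOV voltage grid described above: 3.6 V to 20 V in
# 200 mV steps. The 3 A current figure is illustrative only.

def inov_voltages(v_min=3.6, v_max=20.0, step=0.2):
    """All voltage levels the charger could negotiate, in volts."""
    n = int(round((v_max - v_min) / step)) + 1
    return [round(v_min + i * step, 1) for i in range(n)]

levels = inov_voltages()
print(len(levels))            # 83 discrete levels
print(levels[0], levels[-1])  # 3.6 20.0

# At an illustrative 3 A, the 20 V ceiling reaches the quoted 60 W:
print(max(levels) * 3.0)      # 60.0 W
```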
This new Quick Charge 3.0 technology will be available soon with devices featuring upcoming Qualcomm SoCs such as the Snapdragon 820.
Subject: Displays, Mobile | May 31, 2015 - 06:00 PM | Ryan Shrout
Tagged: nvidia, notebooks, msi, mobile, gsync, g-sync, asus
If you remember back to January of this year, Allyn and I posted an article that confirmed the existence of a mobile variant of G-Sync thanks to a leaked driver and an ASUS G751 notebook. Rumors and speculation floated around the Internet ether for a few days, but we eventually got official word from NVIDIA that G-Sync for notebooks was a real thing and that it would launch "soon." Well, that day has finally arrived with the beginning of Computex.
G-Sync for notebooks has no clever branding, no "G-Sync Mobile" or anything like that, so discussing it will be a bit more difficult since the technologies are different. Going forward NVIDIA claims that any gaming notebook using NVIDIA GeForce GPUs will be a G-Sync notebook and will support all of the goodness that variable refresh rate gaming provides. This is fantastic news as notebook gaming is often at lower frame rates than you would find on a desktop PC because of lower powered hardware yet comparable (1080p, 1440p) resolution displays.
Of course, as we discovered in our first look at G-Sync for notebooks back in January, the much debated G-Sync module is not required and will not be present on notebooks featuring the variable refresh technology. So what gives? We went over some of this before, but it deserves to be detailed again.
NVIDIA uses the diagram above to demonstrate the headaches presented by the monitor and GPU communication path before G-Sync was released. You had three different components: the GPU, the monitor scaler, and the monitor panel, all of which needed to work together if VRR was going to become a high quality addition to the gaming ecosystem.
NVIDIA's answer was to take over all aspects of the pathway for pixels from the GPU to the eyeball, creating the G-Sync module and helping OEMs hand pick the best panels that would work with VRR technology. This helped NVIDIA make sure it could do things to improve the user experience, such as implementing an algorithmic low-frame-rate, frame-doubling capability to maintain smooth and tear-free gaming at frame rates under the panel's physical limitations. It also allows them to tune the G-Sync module to the specific panel to help with ghosting and to implement variable overdrive logic.
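To illustrate the frame-doubling idea, here is a minimal sketch: when the GPU's frame rate falls below the panel's minimum refresh rate, each frame is scanned out multiple times so the panel never holds an image longer than it physically can. NVIDIA's actual logic is proprietary, and the 30 Hz panel minimum here is an assumed figure.

```python
# Minimal sketch of low-frame-rate frame doubling: repeat each frame's
# scanout until the effective refresh rate is at or above the panel's
# physical minimum. The 30 Hz minimum is an illustrative assumption.

def scanouts_per_frame(frame_rate_hz: float, panel_min_hz: float = 30.0) -> int:
    """How many times a frame must be repeated to stay at or above panel_min_hz."""
    repeats = 1
    while frame_rate_hz * repeats < panel_min_hz:
        repeats += 1
    return repeats

print(scanouts_per_frame(45))  # 1 -> panel refreshes at 45 Hz
print(scanouts_per_frame(24))  # 2 -> panel refreshes at 48 Hz
print(scanouts_per_frame(12))  # 3 -> panel refreshes at 36 Hz
```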
All of this is required because of the incredible amount of variability in the monitor and panel markets today.
But with notebooks, NVIDIA argues, there is no variability at all to deal with. The notebook OEM gets to handpick the panel and the GPU directly interfaces with the screen instead of passing through a scaler chip. (Note that some desktop monitors like the ever popular Dell 3007WFP did this as well.) There is no other piece of logic in the way attempting to enforce a fixed refresh rate. Because of that direct connection, the GPU is able to control the data passing between it and the display without any other logic working in the middle. This makes implementing VRR technology much simpler and helps with quality control because NVIDIA can validate the panels with the OEMs.
As I mentioned above, going forward, all new notebooks using GTX graphics will be G-Sync notebooks and that should solidify NVIDIA's dominance in the mobile gaming market. NVIDIA will be picking the panels, and tuning the driver for them specifically, to implement anti-ghosting technology (like what exists on the G-Sync module today) and low frame rate doubling. NVIDIA also claims that the world's first 75 Hz notebook panels will ship with GeForce GTX and will be G-Sync enabled this summer - something I am definitely looking forward to trying out myself.
Though it wasn't mentioned, I am hopeful that NVIDIA will continue to allow users the ability to disable V-Sync at frame rates above the maximum refresh of these notebook panels. With most of them limited to 60 Hz (but this applies to 75 Hz as well) the most demanding gamers are going to want that same promise of minimal latency.
At Computex we'll see a handful of models announced with G-Sync up and running. It should be no surprise of course to see the ASUS G751 with the GeForce GTX 980M GPU on this list as it was the model we used in our leaked driver testing back in January. MSI will also launch the GT72 G with a 1080p G-Sync ready display and GTX 980M/970M GPU option. Gigabyte will have a pair of notebooks: the Aorus X7 Pro-SYNC with GTX 970M SLI and a 1080p screen as well as the Aorus X5 with a pair of GTX 965M in SLI and a 3K resolution (2560x1440) screen.
This move is great for gamers and I am eager to see what the resulting experience is for users that pick up these machines. I have long been known as a proponent of variable refresh displays and getting access to that technology on your notebook is a victory for NVIDIA's team.
ARM Releases Cortex-A72 for Licensing
On February 3rd, ARM announced a slew of new designs, including the Cortex-A72. Few details were shared with us at the time, but what we learned was that it could potentially redefine power and performance in the ARM ecosystem. Ryan was invited to London to participate in a deep dive of what ARM has done to improve its position against market behemoth Intel in the very competitive mobile space. Intel has a leg up in process technology with their 14nm Tri-Gate process, and they continue to work hard at making their x86 based processors more power efficient while still maintaining good performance. Still, there are certain drawbacks to using an ISA that is focused on high performance computing rather than one designed from scratch to provide good performance with excellent energy efficiency.
ARM has been on a pretty good roll with their Cortex A9, A7, A15, A17, A53, and A57 parts over the past several years. These designs have been utilized in a multitude of products and scenarios, with configurations that have scaled up to 16 cores. While each iteration has improved upon the previous, ARM is facing the specter of Intel’s latest generation, highly efficient x86 SOCs based on the 2nd gen 14nm Tri-Gate process. Several things have fallen into place for ARM to help them stay competitive, but we also cannot ignore the experience and design hours that have led to this product.
(Editor's Note: During my time with ARM last week it became very apparent that the company is not standing still, not satisfied with its current status. With competition from Intel, Qualcomm, and others ramping up over the next 12 months in both the mobile and server markets, ARM will more than ever be dependent on the evolution of its core and GPU designs to maintain advantages in performance and efficiency. As Josh will go into in more detail here, the Cortex-A72 appears to be an incredibly impressive design, and all indications, along with the conversations I have had with others outside of ARM, suggest that it will be an incredibly successful product.)
Cortex A72: Highest Performance ARM Cortex
ARM has been ubiquitous in mobile applications since it first started selling licenses for its designs in the 90s. Its chips were found everywhere, it seemed, but most people wouldn't recognize the ARM name because they were fabricated and sold by licensees under their own names. Companies like TI, Qualcomm, Apple, DEC, and others all licensed and adopted ARM technology in one form or another.
ARM’s importance grew dramatically with the introduction of increasingly complex cellphones and smartphones. They also gained attention through multimedia devices such as the Microsoft Zune. What was once a fairly niche company with low performance, low power offerings became the 800 pound gorilla of the mobile market, with billions of chips based on ARM technology sold yearly. To stay in that position ARM has worked aggressively on continually providing excellent power characteristics for their parts, but now they are really focusing on overall performance and capabilities to address not only the smartphone market, but also the higher performance computing and server spaces where they want a significant presence.
Subject: Graphics Cards, Mobile | February 19, 2015 - 03:58 PM | Ryan Shrout
Tagged: nvidia, notebooks, mobile, gpu
After a week or so of debate circling NVIDIA's decision to disable overclocking on mobility GPUs, we have word that the company has reconsidered and will be re-enabling the feature in next month's driver release:
As you know, we are constantly tuning and optimizing the performance of your GeForce PC.
We obsess over every possible optimization so that you can enjoy a perfectly stable machine that balances game, thermal, power, and acoustic performance.
Still, many of you enjoy pushing the system even further with overclocking.
Our recent driver update disabled overclocking on some GTX notebooks. We heard from many of you that you would like this feature enabled again. So, we will again be enabling overclocking in our upcoming driver release next month for those affected notebooks.
If you are eager to regain this capability right away, you can also revert back to 344.75.
Now, I don't want to brag here, but we did just rail against NVIDIA for this decision on last night's podcast...and then the decision was posted on NVIDIA's forums just four hours ago... I'm not saying, but I'm just saying!
All kidding aside, this is great news! And NVIDIA desperately needs to be paying attention to what consumers are asking for in order to make up for some poor decisions made in the last several months. Now (or at least soon), you will be able to return to your mobile GPU overclocking!
Subject: General Tech | January 6, 2015 - 06:51 PM | Sebastian Peak
Tagged: mobile, headphone amplifier, DAP, DAC, ces 2015, CES, audiophile, audio
For the audio enthusiasts at CES this year Calyx Audio (Korean maker of audiophile-grade audio components) has a new prototype to show along with last year's Calyx M music player, and for an audiophile product the pricing is very aggressive.
Render of the Calyx PaT (dimensions in mm)
The PaT is similar in some ways to Calyx Audio's existing $199 USB DAC, the "Coffee", but this unit will be much smaller and will cost half as much at $99. The reduction in price and size is only half of the story, as the PaT also works with mobile devices as an outboard DAC/headphone amp. Apple iPhones and iPads will be supported, as will Android devices with USB audio-out support (probably via USB OTG).
The PaT supports up to 16-bit, 48kHz files (AIF, M4A, PCM, OGG, and MP3) and will also control track playback and volume via hardware control buttons on the unit. The PaT requires no external power or battery, taking what little juice it needs directly from the connection to your mobile device. As for amplification, in typical Calyx fashion even this miniature board is using a discrete class A/B headphone amplifier. Since the PaT relies only on the power passed through the USB connection it is only capable of outputting 0.8 V, which by comparison is slightly lower than an iPhone 5 which outputs about 0.9 - 1.0 V.
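For a rough sense of what 0.8 V buys you, here is the basic P = V²/R math into some common headphone impedances, assuming the quoted figure is an RMS value (the source doesn't specify) and ignoring any current limits of the amplifier:

```python
# Rough output-power math for the quoted 0.8 V figure, assumed to be RMS.
# P = V^2 / R, reported in milliwatts for typical headphone impedances.

def power_mw(v_rms: float, impedance_ohms: float) -> float:
    """Power delivered into a resistive load, in milliwatts."""
    return (v_rms ** 2) / impedance_ohms * 1000

for z in (16, 32, 300):
    print(f"{z:>3} ohm load: {power_mw(0.8, z):.1f} mW")
# 16 ohms -> 40.0 mW, 32 ohms -> 20.0 mW, 300 ohms -> ~2.1 mW
```

Plenty for efficient in-ears and most portable headphones; high-impedance studio cans are clearly not the target here.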
The tiny prototype PaT in action
The PaT may be just a working board at this point, but the company has scheduled the release for February 2015, when the devices will be available in various colors of thin aluminum enclosures.
In the world of computer audio much more attention has been focused lately on advancements in sound, with special shielding and isolation on motherboards, gold-plated USB ports for DACs, and customizable op-amps becoming a trend. While the market for dedicated sound cards isn't what it once was, high-end PCI-E and USB cards from Creative (Sound Blaster) and ASUS (Xonar) are still widely available. Most of these products are for desktop users, but there is a growing number of portable devices that allow mobile users to experience great sound, too. For myself, great sound means faithful reproduction of 2-channel music, and it's nice to see attention paid to that area without the added effects of digital signal processing (DSP). Calyx seems interested only in engineering products that play back music as close to the source as possible, and I can't argue with that!
The Calyx PaT is scheduled to launch in February for $99, but like most high-end audio components it will take a little research to track it down. The USA distributor of the Calyx brand has a website with product and contact information here.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Processors | November 20, 2014 - 01:31 PM | Josh Walrath
Tagged: amd, APU, carrizo, Carrizo-L, Kaveri, Excavator, Steamroller, SoC, Intel, mobile
AMD has certainly gone about doing things in a slightly different manner than we are used to. Today they announced their two latest APUs, which will begin shipping in the first half of 2015. These APUs are up and running at AMD and are being validated as we speak. AMD did not release many details on these products, but what we do know is pretty interesting.
Carrizo is based on the latest iteration of AMD’s CPU technology. Excavator is the codename for these latest CPU cores, and they promise to be smaller and more efficient than the previous Steamroller core which powers the latest Kaveri based APUs. Carrizo-L is the lower power variant which will be based on the Puma+ core. The current Beema APU is based on the Puma architecture.
Roadmaps show that the Carrizo APUs will be 28 nm products, presumably fabricated by GLOBALFOUNDRIES. Many were hoping that AMD would make the jump to 20 nm with this generation of products, but that does not seem to be the case. This is not surprising due to the limitations of that particular process when dealing with large designs that require a lot of current. AMD will likely be pushing for 16 nm FinFET for the generation of products after Carrizo.
The big Carrizo supposedly has a next generation GCN unit. My guess here is that it will use the same design as we saw with the R9 285, a next generation part with improved efficiency. AMD did not say how many GCN cores will be present in Carrizo, but it will be very similar to what we see now with Kaveri. Carrizo-L will use the same GCN units as the previous generation Beema based products.
I believe AMD has spent a lot more time hand tuning Excavator instead of relying on a lot of automated place and route. This should allow them to retain much of the performance of the part, all the while cutting down on transistor count dramatically. Some rumors that I have seen point to each Excavator module being 40% smaller than Steamroller. I am not entirely sure they have achieved that type of improvement, but more hand layout does typically mean greater efficiency and less waste. The downside to hand layout is that it is extremely time and manpower intensive. Intel can afford this type of design while AMD has to rely more on automated place and route.
Carrizo will be the first HSA 1.0 compliant SOC. It is in fact an SOC as it integrates the southbridge functions that previously had been handled by external chips like the A88X that supports the current Kaveri desktop APUs. Carrizo and Carrizo-L will also share the same infrastructure. This means that motherboards that these APUs will be soldered onto are interchangeable. One motherboard from the partner OEMs will be able to address multiple markets that will see products range from 4 watts TDP up to 35 watts.
Finally, both APUs feature the security processor that allows them access to the ARM TrustZone technology. This is a very small ARM processor that handles the secure boot partition and handles the security requests. This puts AMD on par with Intel and their secure computing solution (vPro).
These products will be aimed only at the mobile market. So far AMD has not announced Carrizo for the desktop market, but when they do I would imagine that they will hit a max TDP of around 65 watts. AMD claims that Carrizo is one of the biggest jumps for them in terms of power efficiency. A lot of different pieces of technology have all come together with this product to make them more competitive with Intel and their process advantage. Time will tell if this is the case, but for now AMD is staying relevant and pushing their product releases so that they are more consistently on time.
Subject: General Tech, Processors, Mobile | November 19, 2014 - 07:36 PM | Scott Michaud
Tagged: x86, restructure, mobile, Intel
Last month, Josh wrote about Intel's Q3 earnings report. The company brought in $14.55 billion USD, of which they could keep $3.31 billion. Their PC group is responsible for $9 billion of that revenue and $4.12 billion of that profit, according to the Wall Street Journal. On the other hand, their mobile division is responsible for about $1 million – and it took over a billion to get that million. This has been the trend for quite some time now, as Intel pushes their square battering ram into the mobile and tablet round hole. Of course, these efforts could benefit the company as a whole, but they cannot show that in a quarterly, per-division report.
And so we hear rumors that Intel intends to combine their mobile and PC divisions, which Chuck Mulloy, an Intel spokesperson, later confirmed in the same article. The new division, allegedly called the “Client Computing” group in an internal email that was leaked to the Wall Street Journal, will handle the processors for mobile devices but, apparently, not the wireless modem chipsets; those will allegedly be moved to a “wireless platform research and development organization”.
At face value, this move should allow Intel to push for mobile even more aggressively, while simultaneously reducing the pressure from investors to give up and settle for x86 PCs. Despite some differences, this echoes a recent reorganization by AMD, where they paired up divisions that were doing well with divisions that were struggling, to make a few average divisions that were each treading water, at least on paper.
The reorganization is expected to complete by the end of Q1 2015, but that might not be a firm deadline.
One Small Step
While most articles surrounding the iPhone 6 and iPhone 6 Plus thus far have focused on user experience and larger screen sizes, our main questions about these new phones concern performance, and in particular the effect of Apple's transition to the 20nm process node for the A8 SoC. Naturally, I decided to put my personal iPhone 6 through our usual round of benchmarks.
First, let's start with 3DMark.
Comparing the 3DMark scores of the new Apple A8 to even the last generation A7 provides a smaller improvement than we are used to seeing generation-to-generation with Apple's custom ARM implementations. When you compare the A8 to something like the NVIDIA Tegra K1, which utilizes desktop-class GPU cores, the overall score blows Apple out of the water. Even taking a look at the CPU-bound physics score, the K1 is still a winner.
A 78% performance advantage in overall score when compared to the A8 shows just how much of a powerhouse NVIDIA has with the K1. (Though clearly power envelopes are another matter entirely.)
If we look at more CPU benchmarks, like the browser-based Google Octane and SunSpider tests, the A8 starts to shine more.
While the A8 edges out the A7 to be the best performing device and 54% faster than the K1 in SunSpider, the A8 and K1 are neck and neck in the Google Octane benchmark.
Moving back to a graphics heavy benchmark, GFXBench's Manhattan test, the Tegra K1 has a 75% performance advantage over the A8, though the A8 is itself 36% faster than the previous A7 silicon.
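All of these percentage comparisons reduce to simple score ratios. The scores in this sketch are hypothetical placeholders (the real numbers are in the charts above); the point is just the arithmetic behind a claim like "75% faster":

```python
# Percentage comparisons for higher-is-better benchmark scores.
# The scores below are hypothetical placeholders, not measured results.

def pct_faster(score_a: float, score_b: float) -> float:
    """How much faster A is than B, in percent (higher score is better)."""
    return (score_a / score_b - 1) * 100

k1, a8, a7 = 3500, 2000, 1470  # hypothetical GFXBench-style scores
print(f"K1 vs A8: {pct_faster(k1, a8):.0f}% faster")  # 75%
print(f"A8 vs A7: {pct_faster(a8, a7):.0f}% faster")  # ~36%
```

Note that for SunSpider, where lower times are better, the ratio has to be inverted before applying the same formula.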
These early results are certainly a disappointment compared to the usual generation-to-generation performance increase we see with Apple SoCs.
However, the other aspect to look at is power efficiency. With normal use I have noticed a substantial increase in battery life on my iPhone 6 over the last generation iPhone 5S. While this may be due to a small (about 1 Wh) increase in battery capacity, I think more can be credited to this being an overall more efficient device. Choices like sticking to a highly optimized dual core CPU design and quad core GPU, as well as the move to the 20nm process node, all contribute to increased battery life while surpassing the performance of the last generation Apple A7.
In that way, the A8 moves the bar forward for Apple and is a solid first attempt at using the 20nm silicon technology at TSMC. With further refined parts (like the expected A8X for the iPad revisions), Apple has strong potential to further surpass 28nm silicon in performance and efficiency.
Subject: General Tech, Cases and Cooling | September 17, 2014 - 06:57 PM | Scott Michaud
Tagged: windows, mobile, microsoft, keyboard, ios, Android
Let me share a story. There was a time, around the first Surface launch, that I worked in an electronics retail store (and for several years prior -- but I digress). At around that time, Microsoft was airing ads with people dancing around, snapping keyboards onto the Surface tablet with its magnetic click. One day, a customer came in looking for the keyboard from the TV spots for their iPad. I thought about it for a few seconds and realized how terrible Microsoft's branding actually was.
Without already knowing the existence of their Windows 8 and RT tablets, which the ads were supposed to convey, it really did look like an accessory for an iPad.
Doing Microsoft's job for them, I explained the Surface Pro and Surface RT tablets along with its keyboard-cover accessories. Eventually, I told them that it was a Microsoft product for their own tablet brand and would not see an iPad release. The company felt threatened by these mobile, touch devices and was directly competing with them.
So Microsoft is announcing a keyboard for Windows, Android, and iOS. Sure, it is very different from the Type and Touch Covers; for instance, it does not attach to these devices magnetically. Microsoft has also been known to develop hardware, software, and services for competing platforms. While it is not surprising that Microsoft keyboards would work on competing devices, it does feel weird for their keyboard to have features that are specialized for these competing platforms.
There are three things interesting about this keyboard: it has a built-in stand, it has special keys for Android and iOS that are not present in Windows, and it has a built-in rechargeable battery that lasts up to 6 months. The peripheral pairs wirelessly with all of these devices through Bluetooth.
The Microsoft Universal Mobile Keyboard is coming soon for $79.95 (MSRP).