Subject: General Tech | May 5, 2014 - 03:46 PM | Jeremy Hellstrom
Tagged: amd, arm, seattle
While you await Josh's take on this morning's announcements from AMD, you can get a brief preview at The Tech Report, which will likely be updating its information as the presentation progresses. You can read about the chip bearing the code name K12 here, though there is no in-depth information as of yet. You can also check out the stats on a server powered by ARM Cortex-A57 CPUs, also known as the Opteron A1100 or Seattle. Keep your eyes peeled for more information on our front page.
"At a press event just now, AMD offered an update on its "ambidextrous" strategy for CPUs and SoCs. There's lots of juicy detail here, but the big headline news is that the company is working on two new-from-scratch CPU core designs, one that's compatible with the 64-bit ARMv8 instruction set ISA and another that is an x86 replacement for Bulldozer and its descendants."
Here is some more Tech News from around the web:
- My first foray into password management @ The Tech Report
- Building A CO2 Laser In A Hardware Store @ Hack a Day
- ARM tests: Intel flops on Android compatibility, Windows power @ The Register
- '25,000 Windows Server 2003 boxes' must be upgraded A DAY to meet OS support death date @ The Register
- Asus RT-AC68U 802.11ac Dual-Band Wireless Router @ eTeknix
- Star Wars 1313 artwork shows the canceled game's environments @ Polygon
Subject: Processors, Mobile | April 30, 2014 - 07:06 PM | Ryan Shrout
Tagged: Intel, clover trail, Bay Trail, arm, Android
While we are still waiting for those mysterious Intel Bay Trail based Android tablets to find their way into our hands, we met with ARM today to discuss quite a few topics. One of them centered on the cost of binary translation - converting application code compiled for one architecture so that it can run on a different one. In this case, that means running native ARMv7 Android applications on an x86 platform like Intel's Bay Trail.
Based on results presented by ARM (so take everything here in that light), more than 50% of the top 250 applications in the Google Play Store require binary translation to run. Only 23-30% have been compiled natively for x86, 20-21% run through Dalvik, and the rest have more severe compatibility concerns. That paints a picture of the current state of Android apps and the environment in which Intel is working while attempting to release Android tablets this spring.
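The breakdown above boils down to how each app's APK ships its native libraries. A toy classifier makes the categories concrete - the app names, data, and thresholds here are invented for illustration and do not reflect ARM's actual methodology:

```python
# Toy illustration of how Android apps split into ARM's categories on
# an x86 device, based on the native ABIs an APK ships. Hypothetical
# data; not ARM's test method.

def classify_app(native_abis):
    """Classify an app by the native ABIs found in its APK.

    native_abis: set of ABI strings from the APK's lib/ directory,
    empty if the app is pure Dalvik bytecode.
    """
    if not native_abis:
        return "dalvik"              # runs in the VM, no translation needed
    if "x86" in native_abis:
        return "native_x86"          # ships an x86 build of its libraries
    if {"armeabi", "armeabi-v7a"} & native_abis:
        return "binary_translation"  # ARM-only libs must be translated
    return "incompatible"            # only unsupported ABIs present

# Hypothetical mini-catalog of apps and the ABIs they ship.
apps = {
    "game_a": {"armeabi-v7a"},
    "game_b": {"armeabi-v7a", "x86"},
    "reader": set(),
}

counts = {}
for name, abis in apps.items():
    category = classify_app(abis)
    counts[category] = counts.get(category, 0) + 1
```

Scaled up to the top 250 apps, a survey like this is presumably how ARM arrived at its percentages.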
Performance of these binary-translated applications will be lower than native, as you would expect, but to what degree? These results, again gathered by ARM, show a 20-40% performance drop in games like Riptide GP2 and Minecraft, while also increasing "jank" - a measure of smoothness and stutter caused by variance in frame times. These are applications that have native builds but were forced to run through binary translation as well. The implication is that we can now forecast the performance penalty for applications that lack a natively compiled version and must run in translation mode.
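Since "jank" comes up throughout these results, here is one rough way such a metric can be computed from a frame-time trace: count frames that take dramatically longer than the average. This is an illustrative sketch with an invented threshold, not the metric ARM actually used:

```python
# A rough sketch of one way to quantify "jank": count frames whose
# frame time far exceeds the running average. Threshold is invented;
# this is not ARM's measurement methodology.

def jank_count(frame_times_ms, threshold=1.5):
    """Count frames that took more than `threshold` times the mean frame time."""
    if not frame_times_ms:
        return 0
    mean = sum(frame_times_ms) / len(frame_times_ms)
    return sum(1 for t in frame_times_ms if t > threshold * mean)

# A smooth 60 FPS trace versus one with two long stutter frames.
smooth = [16.7] * 60
stutter = [16.7] * 58 + [50.0, 45.0]
```

The point of a variance-based metric is that two devices with identical average frame rates can still feel very different in hand.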
The result is lower battery life, as the CPU must draw more power to keep the experience close to nominal. While gaming on battery - which most people do with devices like the Galaxy Tab 3 used for testing - a 20-35% decrease in game time will hurt Intel's ability to stand up to the best ARM designs on the market.
Other downsides to binary translation include longer application load times, lower frame rates, and longer execution time. Of course, the Galaxy Tab 3 10.1 is based on Intel's Atom Z2560 SoC, a somewhat older Clover Trail+ design. That is the most modern Android platform currently available from Intel, as we are still awaiting Bay Trail units. This also explains why ARM did not do any direct performance comparisons to devices from its partners: all of these results compare Intel against itself in its two execution modes, native and translated.
Without a platform based on Bay Trail to look at and test, we of course have to treat the results ARM presented as a placeholder at best. It is possible that Silvermont's performance is high enough to make up for these binary translation headaches for as long as it takes natively compiled x86 apps to become more ubiquitous. And in fairness, we have seen many demonstrations directly from Intel that show the performance and power efficiency advantage going in the other direction - in Intel's favor. This kind of debate requires more in-person analysis soon, with hardware in our hands and a larger collection of popular applications.
More from our visit with ARM soon!
Subject: General Tech, Processors, Mobile | April 16, 2014 - 08:40 PM | Scott Michaud
Tagged: Intel, silvermont, arm, quarterly earnings, quarterly results
Sean Hollister at The Verge reported on Intel's recent quarterly report. The chosen headline focuses on the significant losses incurred by the Mobile and Communications Group, the division responsible for tablet SoCs and 3G/4G modems. Its revenue dropped 52% since last quarter, and its losses increased about 6%. Intel is still making plenty of money, with $12.291 billion USD in profits for 2013, but that is in spite of Mobile and Communications losing $3.148 billion over the same period.
Intel did have some wins, however. The Internet of Things Group is quite profitable, with $123 million USD of income from $482 million of revenue. They also had a better March quarter than the prior year, up a few hundred million in both revenue and profits. Also, Mobile and Communications should have a positive impact on the rest of the company. The Silvermont architecture, for instance, will eventually form the basis for 2015's Xeon Phi processors and co-processors.
It is concerning that Internet of Things has over twice the sales of Mobile, but I hesitate to make any judgments. From my position, it is very difficult to see whether or not this trend follows Intel's projections. We simply do not know whether the division fails, time and time again, to meet expectations, or whether Intel is intentionally being very aggressive to position itself better in the future. I would shrug off the latter but, obviously, the former would be a serious concern.
The best thing for us to do is to keep an eye on their upcoming roadmaps and compare them to early projections.
Subject: General Tech | April 10, 2014 - 01:55 PM | Tim Verry
Tagged: videocore iv, Raspberry Pi, bcm2835, arm
Although the Raspberry Pi's original purpose was as an educational tool, many enthusiasts have put the (mostly) open source hardware at the heart of home automation, robotics projects, and other embedded systems. In light of this success, the Raspberry Pi Foundation has unveiled the Raspberry Pi Compute Module, a miniaturized version of the Raspberry Pi sans IO ports that fits onto a single SO-DIMM module. The Compute Module houses the BCM2835 SoC, 512MB of RAM, and 4GB of flash memory, and can be paired with custom-designed PCBs.
The Raspberry Pi Compute Module. Note that the pinout is entirely different from a memory module's, so don't try plugging this into your laptop!
The Compute Module will initially be released along with an open source breakout board called the Compute Module IO Board. The IO Board is intended to be an example to get users started and to help them along the path of designing their own customized PCB. The IO Board has a SO-DIMM connector that the Compute Module plugs into. It further offers up two serial camera ports, two serial display ports, two banks of 2x30 GPIO pinouts, a micro USB port for power, one full-size USB port, and one HDMI output. The Raspberry Pi Foundation will be releasing full documentation and schematics for both the Compute Module and IO Board over the next few weeks.
Using the Compute Module and a custom PCB, the embedded system can be smaller and lighter than the traditional Raspberry Pi.
The Compute Module IO Board (left) with the Compute Module installed (right).
The Raspberry Pi Compute Module and IO Board will be available as a bundle (the "Compute Module Development Kit") from Element14 and RS in June. Shortly after the development kit launch, customers will be able to purchase the Compute Module itself for $30 each in batches of 100, or slightly more for smaller orders.
More information can be found on the Raspberry Pi blog. Here's hoping the industrial / embedded market successes will help fuel additional educational endeavours and new Raspberry Pi versions in the future.
Subject: Mobile | March 3, 2014 - 05:58 PM | Tim Verry
Tagged: Samsung, exynos 5, chromebook 2, Chromebook, chrome os, arm
Samsung is bringing a new Chromebook to market next month. Coming in 11-inch and 13-inch form factors, the new Samsung Chromebook 2 offers updated hardware and more than eight hours of battery life.
The Chromebook 2 will be available in 11.6” and 13.3” models. The smaller variant will come in white or black while the larger SKU is only available in gray. The lids use a soft touch plastic that resembles stitched leather like that found on some Samsung smartphones. The 11.6” is 0.66-inches thick and weighs 2.43 pounds. The 13.3” model is 0.65-inches thick and weighs 3.09 pounds. The 11.6” Chromebook 2 has a 1366x768 display while the 13.3” Chromebook uses a 1920 x 1080 resolution display.
Internally, the Chromebook 2 is powered by an unspecified Exynos 5 Octa SoC at either 1.9GHz (11.6”) or 2.1GHz (13.3”), 4GB of DDR3L memory, and 16GB internal SSD storage. Internal radios include 802.11ac Wi-Fi and Bluetooth 4.0. Samsung rates the battery life at 8 hours for the 11.6” Chromebook and 8.5 hours for the 13.3” Chromebook.
Beyond the wireless tech, I/O includes one USB 3.0 port, one USB 2.0 port, one HDMI, one headphone output, and one micro SD card slot. This port configuration is available on both Chromebook 2 sizes.
Samsung is launching its Chromebook 2 in April at $319.99 and $399.99 for the 11.6” and 13.3” respectively. This new Chromebook is coming to a competitive market that is increasingly packed with Bay Trail-powered Windows 8.1 notebooks (and tablets) that are getting cheaper, and Android tablets that are getting more features and more power thanks to new ARM-based SoCs. I'm interested to see which platform users gravitate towards. Is the cloud-connected Chrome OS good enough when paired with good battery life and a physical keyboard?
Are you looking forward to Samsung's new Chromebook 2?
Subject: Processors | February 26, 2014 - 11:46 PM | Tim Verry
Tagged: SoC, Samsung, exynos 5, big.little, arm, 28nm
Samsung recently announced two new 32-bit Exynos 5 processors with the eight core Exynos 5 Octa 5422 and six core Exynos 5 Hexa 5260. Both SoCs utilize a combination of ARM Cortex-A7 and Cortex-A15 CPU cores along with ARM's Mali graphics. Unlike the previous Exynos 5 chips, the upcoming processors utilize a big.LITTLE configuration variant called big.LITTLE MP that allows all CPU cores to be used simultaneously. Samsung continues to use a 28nm process node, and the SoCs should be available for use in smartphones and tablets immediately.
The Samsung Exynos 5 Octa 5422 offers up eight CPU cores and an ARM Mali T628 MP6 GPU. The CPU configuration consists of four Cortex-A15 cores clocked at 2.1GHz and four Cortex-A7 cores clocked at 1.5GHz. Devices using this chip will be able to tap up to all eight cores at the same time for demanding workloads, allowing the device to complete the computations and return to a lower-power or sleep state sooner. Devices using previous generation Exynos chips were faced with an either-or scenario when it came to using the A15 or A7 groups of cores, but the big.LITTLE MP configuration opens up new possibilities.
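The "finish sooner, sleep sooner" argument is sometimes called racing to idle, and some back-of-envelope arithmetic shows why burning more power briefly can still save energy overall. All the power and timing numbers below are invented for illustration, not Samsung's figures:

```python
# Back-of-envelope "race to idle" arithmetic behind firing up all eight
# cores at once: higher instantaneous power can still mean lower total
# energy if the chip finishes sooner and drops into a low-power state.
# All numbers are invented for illustration.

def energy_mj(active_power_mw, active_ms, idle_power_mw, idle_ms):
    """Total energy in millijoules over an active phase plus an idle phase."""
    return (active_power_mw * active_ms + idle_power_mw * idle_ms) / 1000.0

# Same 100 ms window: four cores busy the whole time, versus eight cores
# finishing the work in 60 ms and idling at 50 mW for the remaining 40 ms.
four_core = energy_mj(2000, 100, 50, 0)    # 200.0 mJ
eight_core = energy_mj(3000, 60, 50, 40)   # 182.0 mJ
```

Whether the eight-core case actually wins depends on how well the workload parallelizes; a task that cannot use the extra cores just pays the higher power for the same duration.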
While the Octa 5422 occupies the new high end for the lineup, the Exynos 5 Hexa 5260 is a new midrange chip that is the first six core Exynos product. This chip uses an as-yet-unnamed ARM Mali GPU along with six ARM cores. The configuration on this SoC is four low power Cortex-A7 cores clocked at 1.3GHz paired with two Cortex-A15 cores clocked at 1.7GHz. Devices can use all six cores at a time or more selectively. The Hexa 5260 offers up two higher powered cores for single threaded performance along with four power sipping cores for running background tasks and parallel workloads.
The new chips offer up access to more cores for more performance at the cost of higher power draw. While the additional cores may seem like overkill for checking email and surfing the web, the additional power can enable things like onboard voice recognition, machine vision, faster photo filtering and editing, and other parallel-friendly tasks. Notably, the GPU should be able to assist with some of this parallel processing, but GPGPU is still relatively new whereas developers have had much more time to familiarize themselves with and optimize applications for multiple CPU threads. Yes, the increasing number of cores lends itself well to marketing, but that does not preclude them from having real world performance benefits and application possibilities. As such, I'm interested to see what these chips can do and what developers are able to wring out of them.
Subject: Processors, Mobile | February 21, 2014 - 10:47 AM | Ryan Shrout
Tagged: wearables, wearable computing, quark, Intel, arm
In a post on the official ARM blog, the guns are blazing in the battle for wearable market mind share. Pretty much all currently available wearable computing devices use ARM-based processors, but that hasn't prevented Intel from touting its Quark platform as the best fit for wearables. There are still lots of questions about Quark when it comes to performance and power consumption, but ARM decided to focus its attack on heat.
From the blog post on ARM's website:
Intel’s Quark is an example that has a relatively low level of integration, but has still been positioned as a solution for wearables. Fine you may think, there are plenty of ARM powered communication chipsets it could be paired with, but a quick examination of the development board brings the applicability further into question. Quark runs at a rather surprising, and sizzling to the touch, 57°C. The one attribute it does offer is a cognitive awareness, not through any hardware integration suitable for the wearable market, but from the inbuilt thermal management hardware (complete with example code), which in the attached video you can see is being used to toggle a light switch once touched by a finger which, acting as a heat sync, drops the temperature below 50°C.
Along with this post is a YouTube video that shows this temperature testing taking place.
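The demo ARM describes is essentially simple threshold logic: the light toggles once the die temperature falls below 50°C. A minimal sketch of that kind of check - with invented sensor readings, and in no way Intel's actual thermal management sample code - looks like this:

```python
# Minimal sketch of the threshold logic the ARM demo describes: a light
# turns on once the measured temperature drops below 50 degrees C.
# Readings and names are invented; this is not Intel's sample code.

LIGHT_THRESHOLD_C = 50.0

def light_state(temp_c, threshold=LIGHT_THRESHOLD_C):
    """Return True (light on) while the sensor reads below the threshold."""
    return temp_c < threshold

# A finger acting as a heat sink gradually cools the package.
readings = [57.2, 55.0, 49.8, 48.5]
states = [light_state(t) for t in readings]
```

In a real wearable the interesting part would be hysteresis and sensor placement, but the demo's point is simpler: the chip idles hot enough that a finger touch measurably cools it.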
Of course, when looking at competitive analysis between companies, you should always take the results as tentative at best. There is likely to be some change between the Quark X1000 as integrated on the Arduino-compatible Galileo board and what would make it into a final production wearable device. Obviously this is something Intel is aware of as well, and the company knows what temperature means for devices that users will have such direct contact with.
The proof will be easy to see, either way, as we progress through 2014. Will device manufacturers integrate Quark into any final design wins, and what will the user experience of those units be like?
Still, it's always interesting to see marketing battles heat up between these types of computing giants.
Subject: General Tech | February 14, 2014 - 02:11 PM | Ken Addison
Tagged: video, r9 270x, r7 265, r7 260x, podcast, nvidia, fusion-io, arm, amd, A17
PC Perspective Podcast #287 - 02/14/2014
Join us this week as we discuss the release of the AMD R7 265, Coin Mining's effect on GPU Prices, NVIDIA Earnings and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
Week in Review:
0:28:45 NVIDIA Posts Solid Quarter
News items of interest:
0:56:15 In a Galaxy far far away?
Hardware/Software Picks of the Week:
Josh: $379 for next few days!
Subject: General Tech, Processors, Mobile | February 12, 2014 - 05:48 PM | Scott Michaud
Tagged: mediatek, arm, cortex, A17
Our Josh Walrath wrote up an editorial about the Cortex-A17 architecture less than two days ago. In it, he reports on ARM's announcement that "the IP" will ship in 2015. On the same calendar date, MediaTek announced their MT6595 SoC, integrating A17 and A7 cores, will be commercially available in 1H 2014 with devices in 2H 2014.
Of course, it is difficult to tell how ahead of schedule this is, depending on what ARM meant by shipping in 2015 and what MediaTek meant by devices based on the MT6595 platform in 2H 2014.
There are two key features about the A17: a 40% power reduction from A15 and its ability to integrate with A7 cores in a big.LITTLE structure. MediaTek goes a little further with "CorePilot", which schedules tasks across all eight cores (despite it being a grouping of two different architectures). This makes some amount of sense because it allows for four strong threads which can be augmented with four weaker threads. Especially for applications like web browsers, it is not uncommon to have a dominant main thread.
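The core idea behind a global scheduler like CorePilot can be sketched as a toy dispatcher: estimate each task's load and queue heavy work on the big (A17-class) cores while light background work lands on the LITTLE (A7-class) cores, with both clusters active at once. Task names, loads, and the threshold are all invented; MediaTek's real heuristics are far more involved:

```python
# Toy sketch of the big.LITTLE MP idea behind schedulers like MediaTek's
# CorePilot: heavy tasks run on big cores, light tasks on LITTLE cores,
# and all cores can be active simultaneously. Names, load estimates, and
# the threshold are invented for illustration.

def assign_tasks(tasks, big_cores=4, little_cores=4, heavy_load=0.5):
    """Split (name, load) tasks into big/LITTLE queues by estimated load (0.0-1.0)."""
    big, little = [], []
    for name, load in tasks:
        (big if load >= heavy_load else little).append(name)
    # Each cluster can run at most its core count concurrently.
    return {"big": big[:big_cores], "little": little[:little_cores]}

# A browser-like workload: one dominant main thread, helpers in the background.
tasks = [("ui_render", 0.9), ("js_main", 0.7),
         ("sync_mail", 0.1), ("audio_mix", 0.2)]
```

This mirrors the browser example in the text: the dominant main thread gets a strong core while background helpers sip power on the weaker ones.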
The SoC will also support LTE and HSPA+ mobile and 802.11ac wireless connections. It will not integrate the Mali-T720 GPU (DX11/OpenGL ES 3.0), instead using a PowerVR Series6 GPU (DX10/OpenGL ES 3.0, unless it is an unannounced design). MediaTek does not explain why it chose the one licensed GPU over the other.
MediaTek claims the MT6595 platform will be available in the first half of 2014 with devices coming in the second half.