Podcast #299 - ASUS Z97-Deluxe, NCASE M1 Case, AMD's custom ARM Designs and more!

Subject: General Tech | May 8, 2014 - 11:57 AM |
Tagged: podcast, video, asus, z97, Z97-Deluxe, ncase, m1, amd, seattle, arm, nvidia, Portal, shield

PC Perspective Podcast #299 - 05/08/2014

Join us this week as we discuss ASUS Z97-Deluxe, NCASE M1 Case, AMD's custom ARM Designs and more!

You can subscribe to the podcast through iTunes, and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, Allyn Malventano, and Morry Teitelman

Program length: 1:27:11
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week:
  4. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

 

AMD Shows Off ARM-Based Opteron A1100 Server Processor And Reference Motherboard

Subject: Processors | May 8, 2014 - 12:26 AM |
Tagged: TrustZone, server, seattle, PCI-E 3.0, opteron a1100, opteron, linux, Fedora, ddr4, ARMv8, arm, amd, 64-bit

AMD showed off its first ARM-based “Seattle” processor running on a reference platform motherboard at an event in San Francisco earlier this week. The new chip, which began sampling in March, is slated for general availability in Q4 2014. The “Seattle” processor will be officially labeled the AMD Opteron A1100.

During the press event, AMD demonstrated the Opteron A1100 running on a reference design motherboard (the Seattle Development Platform). The hardware was used to drive a LAMP software stack including an ARM-optimized version of Linux based on RHEL, Apache 2.4.6, MySQL 5.5.35, and PHP 5.4.16. The server was then used to host a WordPress blog that included streamable video.

AMD Seattle Development Platform Opteron A1100.jpg

Of course, the hardware itself is the new and interesting bit and thanks to the event we now have quite a few details to share.

The Opteron A1100 features eight ARM Cortex-A57 cores clocked at 2.0 GHz (or higher). AMD has further packed in an integrated memory controller, TrustZone security hardware, and floating point and NEON SIMD engines for media acceleration. Like a true SoC, the Opteron A1100 supports eight lanes of PCI-E 3.0, eight SATA III 6Gbps ports, and two 10GbE network connections.

The Seattle processor has a total of 4MB of L2 cache (each pair of cores shares 1MB of L2) and 8MB of L3 cache shared by all eight cores. The integrated memory controller supports DDR3 and DDR4 memory in SO-DIMM, unbuffered DIMM, and ECC registered DIMM (RDIMM) forms (only one type per motherboard), enabling the ARM-based platform to be used in a wide range of server environments (enterprise, SMB, home servers, and so on).

AMD has stated that the upcoming Opteron A1100 processor delivers between two and four times the performance of the existing Opteron X series (which uses four x86 Jaguar cores clocked at 1.9 GHz). The A1100 has a 25W TDP and is manufactured by GlobalFoundries. Despite the slight increase in TDP versus the Opteron X series (the Opteron X2150 is a 22W part), AMD claims the increased performance results in notable improvements in compute-per-watt performance.
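
As a rough back-of-the-envelope check on that compute-per-watt claim, the quick Python sketch below combines the figures quoted above (the 2x-4x performance range and the 22W/25W TDPs). The 1.0 performance baseline for the Opteron X2150 is an arbitrary reference point for illustration, not a measured number.

    # Back-of-the-envelope perf/watt comparison using the figures quoted above.
    # The 1.0 performance baseline for the Opteron X2150 is an arbitrary reference.
    x2150_perf, x2150_tdp = 1.0, 22.0    # four Jaguar cores @ 1.9 GHz, 22 W
    a1100_tdp = 25.0                     # eight Cortex-A57 cores @ 2.0 GHz, 25 W

    for speedup in (2.0, 4.0):           # AMD's claimed 2x to 4x range
        gain = (speedup * x2150_perf / a1100_tdp) / (x2150_perf / x2150_tdp)
        print(f"{speedup:.0f}x performance -> {gain:.2f}x compute per watt")
    # Prints 1.76x at the low end and 3.52x at the high end of AMD's claim.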

AMD Opteron Server Processor.png

AMD has engineered a reference motherboard, though partners will also be able to provide customized solutions. The combination of the reference motherboard and the ARM-based Opteron A1100 is known as the Seattle Development Platform. This reference motherboard features four registered DDR3 DIMM slots for up to 128GB of memory, eight SATA 6Gbps ports, support for standard ATX power supplies, and multiple PCI-E connectors that can be configured as a single PCI-E 3.0 x8 slot or two PCI-E 3.0 x4 slots.

The Opteron A1100 is an interesting move from AMD that will target low power servers. The ARM-based server chip has an uphill battle in challenging x86-64 in this space, but the SoC does have several advantages in terms of compute performance per watt and overall cost. AMD has taken the SoC elements (integrated IO, memory, and companion processor hardware) of the Opteron X series and its APUs in general, removed the graphics portion, and crammed in as many low power 64-bit ARM cores as possible. This configuration will have advantages over the Opteron X CPU+GPU APU when running applications that use multiple serial threads and can take advantage of large amounts of memory per node (up to 128GB). The A1100 should excel at serving up files and web pages or acting as a caching server where data can be held in memory for fast access.
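
To put the caching server point in concrete terms, the toy sketch below shows the kind of in-memory key-value store such a node would spend its time running. It is only an illustration of the workload, not anything AMD has shown, and the item limit is an arbitrary assumption.

    # Toy in-memory LRU cache illustrating the workload described above: hot objects
    # held in RAM so requests never have to touch a disk or a remote database.
    # The item limit is an arbitrary assumption for the example.
    from collections import OrderedDict

    class MemoryCache:
        def __init__(self, max_items=1_000_000):
            self.store = OrderedDict()
            self.max_items = max_items

        def get(self, key):
            if key not in self.store:
                return None                     # miss: caller fetches from backing store
            self.store.move_to_end(key)         # mark as most recently used
            return self.store[key]

        def put(self, key, value):
            self.store[key] = value
            self.store.move_to_end(key)
            if len(self.store) > self.max_items:
                self.store.popitem(last=False)  # evict the least recently used entry

    cache = MemoryCache()
    cache.put("/index.html", b"<html>...</html>")
    print(cache.get("/index.html") is not None)  # True: served straight from memory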

I am looking forward to the launch as the 64-bit ARM architecture makes its first major inroads into the server market. The benchmarks, and ultimately software stack support, will determine how well it is received and if it ends up being a successful product for AMD, but at the very least it keeps Intel on its toes and offers up an alternative and competitive option.

Source: Tech Report

ARM's CoreLink family feels right at home in the server room

Subject: General Tech | May 7, 2014 - 02:33 PM |
Tagged: arm, servers, CoreLink, CCN-508, CCN-504

ARM has a new chip on the block, the CCN-508. It is capable of combining up to eight 64-bit ARMv8 CPU clusters of four cores apiece, either all ARM Cortex-A53s or all Cortex-A57s, using ARM's AMBA 5 CHI interconnect technology. Those processors can then be attached to a wide variety of what ARM refers to as partners, including up to 24 other AMBA interconnects for other CPUs, DDR3 or DDR4 memory controllers, PCIe, SATA, and 10-40 gigabit Ethernet. So much for ARM just being a mobile processor designer; check out more at The Register.

ccn_508_small.jpg

"ARM has released more details about the innards of its cache-coherent on-chip networking scheme for use cases ranging from storage to servers to networking – specifically, its CCN-5xx microarchitecture family and its newest member, the muscular CoreLink CCN-508."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Building a SkyBridge in 64 bits, between ARM and x86

Subject: General Tech | May 6, 2014 - 02:31 PM |
Tagged: amd, arm, project skybridge, k12

The Register has put together an overview of what AMD discussed yesterday about the K12 processor and Project SkyBridge. The most impressive feat is Project SkyBridge: with the ARMv8 architecture license AMD now holds, it will be creating pin-compatible ARM and x86 SoCs, so you can choose which one to drop into your server and easily change your mind at any time in the future. The more traditional 64-bit x86 processors will use "Puma+" cores, while the ARM SoCs will use 64-bit Cortex-A57 cores and will not only be fully HSA compliant but also able to run Android. The article also delves into AMD's upcoming strategy to remain a valid contender in the silicon ring; read on to get a glimpse into Papermaster's brain.

building_blocks_small.jpg

"AMD has announced that it will create pin-compatible 64-bit x86 and ARM SoCs in an effort it's calling "Project SkyBridge", and that it has licensed the ARMv8 architecture and will design its own home-grown ARM-based processors."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

AMD is ARMed for ambidextrous computing

Subject: General Tech | May 5, 2014 - 03:46 PM |
Tagged: amd, arm, seattle

While you are awaiting Josh's take on the announcements from AMD this morning, you can get a brief tease at The Tech Report, which will also likely be updating its information as the presentation progresses. You can read about the chip bearing the code name K12 here, though there is no in-depth information as of yet. You can also check out the stats on a server powered by the ARM Cortex-A57-based CPU known as the Opteron A1100, or Seattle. Keep your eyes peeled for more information on our front page.

newarmcores.jpg

"At a press event just now, AMD offered an update on its "ambidextrous" strategy for CPUs and SoCs. There's lots of juicy detail here, but the big headline news is that the company is working on two new-from-scratch CPU core designs, one that's compatible with the 64-bit ARMv8 instruction set ISA and another that is an x86 replacement for Bulldozer and its descendants."

Here is some more Tech News from around the web:

Tech Talk

 

ARM Claims x86 Android Binary Translation on Intel SoC Hurting Efficiency

Subject: Processors, Mobile | April 30, 2014 - 07:06 PM |
Tagged: Intel, clover trail, Bay Trail, arm, Android

While we are still waiting for those mysterious Intel Bay Trail based Android tablets to find their way into our hands, we met with ARM today to discuss quite a few topics. One of them centered on the cost of binary translation: converting application code compiled for one architecture so that it can run on a different architecture. In this case, that means running native ARMv7 Android applications on an x86 platform like Bay Trail from Intel.

translate1.jpg

Based on results presented by ARM, so take everything here in that light, more than 50% of the top 250 applications in the Google Play Store require binary translation to run. 23-30% have been compiled to x86 natively, 20-21% run through Dalvik, and the rest have more severe compatibility concerns. That paints a picture of the current state of Android apps and the environment in which Intel is working while attempting to release Android tablets this spring.

translate2.jpg

Performance of these binary translated applications will be lower than it would be natively, as you would expect, but to what degree? These results, again gathered by ARM, show a 20-40% performance drop in games like Riptide GP2 and Minecraft while also increasing "jank," a measure of smoothness and stutter derived from variance in frame times. These are applications that exist in a native mode but were forced to run through binary conversion as well. The insinuation is that we can now forecast the performance penalty for applications that don't have a natively compiled version and are forced to run in translation mode.
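
Since "jank" is essentially frame time variability, here is a simple sketch of how such a metric can be computed from a list of captured frame times. The 60 FPS budget and the sample numbers are assumptions for illustration only; this is not ARM's actual measurement methodology.

    # One simple way to quantify "jank" from captured frame times (in milliseconds).
    # The 60 FPS budget and the sample data are illustrative assumptions, not ARM's
    # actual measurement methodology.
    from statistics import pstdev

    FRAME_BUDGET_MS = 1000.0 / 60.0          # ~16.7 ms per frame at 60 FPS

    def jank_stats(frame_times_ms):
        janky = [t for t in frame_times_ms if t > FRAME_BUDGET_MS]
        return {
            "avg_frame_ms": sum(frame_times_ms) / len(frame_times_ms),
            "frame_time_stdev_ms": pstdev(frame_times_ms),  # higher = less smooth
            "janky_frame_pct": 100.0 * len(janky) / len(frame_times_ms),
        }

    # A hypothetical capture: mostly 16 ms frames with a few long stalls.
    print(jank_stats([16, 16, 17, 16, 33, 16, 16, 48, 16, 17]))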

translate3.jpg

The result of all this is lower battery life, as the CPU has to draw more power to keep the experience close to nominal. While gaming on battery, which is how most people use devices like the Galaxy Tab 3 used in this testing, a 20-35% decrease in game time will hurt Intel's ability to stand up to the best ARM designs on the market.

Other downsides to this binary translation include longer load times for applications, lower frame rates and longer execution time. Of course, the Galaxy Tab 3 10.1 is based on Intel's Atom Z2560 SoC, a somewhat older Clover Trail+ design. That is the most modern currently available Android platform from Intel as we are still awaiting Bay Trail units. This also explains why ARM did not do any direct performance comparisons to any devices from its partners. All of these results were comparing Intel in its two execution modes: native and translated.

Without a platform based on Bay Trail to look at and test, we of course have to treat the results ARM presented as a placeholder at best. It is possible that Intel's performance with Silvermont is high enough to make up for these binary translation headaches for as long as it takes for x86 to become more ubiquitous. And in fairness, we have seen many demonstrations from Intel directly that show the advantage in performance and power efficiency going the other direction - in Intel's favor. This kind of debate requires more in-person analysis, with hardware in our hands and a larger collection of popular applications.

More from our visit with ARM soon!

The Health of Intel's Many Divisions...

Subject: General Tech, Processors, Mobile | April 16, 2014 - 08:40 PM |
Tagged: Intel, silvermont, arm, quarterly earnings, quarterly results

Sean Hollister at The Verge reported on Intel's recent quarterly report. Their chosen headline focuses on the significant losses incurred by the Mobile and Communications Group, the division responsible for tablet SoCs and 3G/4G modems. Its revenue dropped 52% since last quarter, and its losses increased about 6%. Intel is still making plenty of money, with $12.291 billion USD in profits for 2013, but that is in spite of Mobile and Communications losing $3.148 billion over the same time.

intel-computex-07.jpg

Intel did have some wins, however. The Internet of Things Group is quite profitable, with $123 million USD of income from $482 million of revenue. They also had a better March quarter than the prior year, up a few hundred million in both revenue and profits. Also, Mobile and Communications should have a positive impact on the rest of the company. The Silvermont architecture, for instance, will eventually form the basis for 2015's Xeon Phi processors and co-processors.

It is concerning that Internet of Things has over twice the sales of Mobile, but I hesitate to make any judgments. From my position, it is very difficult to see whether or not this trend follows Intel's projections. We simply do not know whether the division, time and time again, fails to meet expectations or whether Intel is intentionally being very aggressive to position itself better in the future. I would shrug off the latter but, obviously, the former would be a serious concern.

The best thing for us to do is to keep an eye on their upcoming roadmaps and compare them to early projections.

Source: The Verge

Raspberry Pi Compute Module Will Work With Custom PCBs

Subject: General Tech | April 10, 2014 - 01:55 PM |
Tagged: videocore iv, Raspberry Pi, bcm2835, arm

Although the Raspberry Pi's original purpose was as an educational tool, many enthusiasts have used the (mostly) open source hardware as the heart of home automation, robotics projects, and other embedded systems. In light of this success, the Raspberry Pi Foundation has unveiled the Raspberry Pi Compute Module, a miniaturized version of the Raspberry Pi sans IO ports that fits onto a single SO-DIMM module. The Compute Module houses the BCM2835 SoC, 512MB of RAM, and 4GB of flash memory and can be paired with custom designed PCBs.

Raspberry Pi Compute Module.png

The Raspberry Pi Compute Module. Note that the pinout is entirely different from a memory module's, so don't try plugging this into your laptop!

The Compute Module will initially be released along with an open source breakout board called the Compute Module IO Board. The IO Board is intended to be an example to get users started and to help them along the path of designing their own customized PCB. The IO Board has a SO-DIMM connector that the Compute Module plugs into. It further offers up two serial camera ports, two serial display ports, two banks of 2x30 GPIO pinouts, a micro USB port for power, one full-size USB port, and one HDMI output. The Raspberry Pi Foundation will be releasing full documentation and schematics for both the Compute Module and IO Board over the next few weeks.
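
For a sense of what driving a carrier board like this looks like in software, here is a minimal GPIO sketch using the common RPi.GPIO Python library, which targets the same BCM2835 SoC. The BCM pin number is an arbitrary assumption; the real pin depends on how your custom PCB routes the GPIO banks.

    # Minimal GPIO blink sketch for a BCM2835-based board such as the Compute Module.
    # The BCM pin number (4) is an arbitrary choice for illustration; the actual pin
    # depends on how the carrier/IO board routes the SoC's GPIO banks.
    import time
    import RPi.GPIO as GPIO

    LED_PIN = 4                      # BCM numbering, assumed wiring

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)

    try:
        for _ in range(10):
            GPIO.output(LED_PIN, GPIO.HIGH)
            time.sleep(0.5)
            GPIO.output(LED_PIN, GPIO.LOW)
            time.sleep(0.5)
    finally:
        GPIO.cleanup()               # release the pin on exit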

Using the Compute Module and a custom PCB, designers can build embedded systems that are smaller and lighter than the traditional Raspberry Pi.

Raspberry Pi Compute Module IO Board.jpeg

The Compute Module IO Board (left) with the Compute Module installed (right).

The Raspberry Pi Compute Module and IO Board will be available as a bundle (the "Compute Module Development Kit") from Element14 and RS in June. Shortly after the development kit launch, customers will be able to purchase the Compute Module itself for $30 each in batches of 100, or slightly more for smaller orders.

More information can be found on the Raspberry Pi blog. Here's hoping the industrial / embedded market successes will help fuel additional educational endeavours and new Raspberry Pi versions in the future.

Samsung Launching 11-Inch and 13-Inch Chromebook 2s

Subject: Mobile | March 3, 2014 - 05:58 PM |
Tagged: Samsung, exynos 5, chromebook 2, Chromebook, chrome os, arm

Samsung is bringing a new Chromebook to market next month. Coming in 11-inch and 13-inch form factors, the new Samsung Chromebook 2 offers updated hardware and more than eight hours of battery life.

The Chromebook 2 will be available in 11.6” and 13.3” models. The smaller variant will come in white or black while the larger SKU is only available in gray. The lids use a soft-touch plastic that resembles stitched leather, like that found on some Samsung smartphones. The 11.6” model is 0.66 inches thick and weighs 2.43 pounds. The 13.3” model is 0.65 inches thick and weighs 3.09 pounds. The 11.6” Chromebook 2 has a 1366x768 display while the 13.3” model uses a 1920x1080 display.
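
For context on how those panels compare in sharpness, here is a quick pixel density calculation from the quoted resolutions and diagonals; the PPI figures are derived numbers, not specifications Samsung has published.

    # Pixel density (PPI) derived from the quoted resolutions and screen diagonals.
    # These are calculated figures, not numbers Samsung has published.
    from math import hypot

    def ppi(width_px, height_px, diagonal_in):
        return hypot(width_px, height_px) / diagonal_in

    print(f'11.6" at 1366x768:  {ppi(1366, 768, 11.6):.0f} PPI')   # ~135 PPI
    print(f'13.3" at 1920x1080: {ppi(1920, 1080, 13.3):.0f} PPI')  # ~166 PPI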

Samsung Chromebook 2 11-Inch In Black.jpg

Internally, the Chromebook 2 is powered by an unspecified Exynos 5 Octa SoC at either 1.9GHz (11.6”) or 2.1GHz (13.3”), 4GB of DDR3L memory, and 16GB internal SSD storage. Internal radios include 802.11ac Wi-Fi and Bluetooth 4.0. Samsung rates the battery life at 8 hours for the 11.6” Chromebook and 8.5 hours for the 13.3” Chromebook.

Beyond the wireless tech, I/O includes one USB 3.0 port, one USB 2.0 port, one HDMI, one headphone output, and one micro SD card slot. This port configuration is available on both Chromebook 2 sizes.

Samsung Chromebook 2 13-Inch In Gray.jpg

Samsung is launching its Chromebook 2 in April at $319.99 and $399.99 for the 11.6” and 13.3” models, respectively. This new Chromebook is coming to a competitive market that is increasingly packed with Bay Trail-powered Windows 8.1 notebooks (and tablets) that are getting cheaper and Android tablets that are getting more powerful and more feature-rich thanks to new ARM-based SoCs. I'm interested to see which platform users gravitate towards: is the cloud-connected Chrome OS good enough when paired with good battery life and a physical keyboard?

Are you looking forward to Samsung's new Chromebook 2?

Source: Ars Technica

Samsung Releases 8-Core and 6-Core 32-Bit Exynos 5 SoCs

Subject: Processors | February 26, 2014 - 11:46 PM |
Tagged: SoC, Samsung, exynos 5, big.little, arm, 28nm

Samsung recently announced two new 32-bit Exynos 5 processors: the eight-core Exynos 5 Octa 5422 and the six-core Exynos 5 Hexa 5260. Both SoCs utilize a combination of ARM Cortex-A7 and Cortex-A15 CPU cores along with ARM's Mali graphics. Unlike the previous Exynos 5 chips, the upcoming processors utilize a big.LITTLE configuration variant called big.LITTLE MP that allows all CPU cores to be used simultaneously. Samsung continues to use a 28nm process node, and the SoCs should be available for use in smartphones and tablets immediately.

The Samsung Exynos 5 Octa 5422 offers up eight CPU cores and an ARM Mali T628 MP6 GPU. The CPU configuration consists of four Cortex-A15 cores clocked at 2.1GHz and four Cortex-A7 cores clocked at 1.5GHz. Devices using this chip will be able to tap all eight cores at the same time for demanding workloads, allowing the device to complete the computations and return to a lower-power or sleep state sooner. Devices using previous generation Exynos chips were faced with an either-or scenario when it came to using the A15 or A7 groups of cores, but the big.LITTLE MP configuration opens up new possibilities.
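
One practical way to see a big.LITTLE split from software on a Linux or Android device is to group the CPU cores by their maximum cpufreq, as in the sketch below. The sysfs paths are the standard Linux cpufreq interface, but whether a particular device exposes them (and keeps all cores online) is an assumption.

    # Group CPU cores by their maximum cpufreq to spot a big.LITTLE split on a
    # Linux/Android device. Assumes the standard sysfs cpufreq interface is exposed.
    import glob
    from collections import defaultdict

    clusters = defaultdict(list)
    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq")):
        cpu = path.split("/")[5]                  # e.g. "cpu4"
        with open(path) as f:
            max_khz = int(f.read().strip())
        clusters[max_khz].append(cpu)

    for max_khz, cpus in sorted(clusters.items()):
        print(f"{max_khz / 1_000_000:.1f} GHz cluster: {', '.join(cpus)}")
    # On an Exynos 5422-style part this would show a 1.5 GHz group (the Cortex-A7s)
    # and a 2.1 GHz group (the Cortex-A15s).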

Samsung Exynos 5 Hexa 5260.jpg

While the Octa 5422 occupies the new high end for the lineup, the Exynos 5 Hexa 5260 is a new midrange chip that is the first six core Exynos product. This chip uses an as-yet-unnamed ARM Mali GPU along with six ARM cores. The configuration on this SoC is four low power Cortex-A7 cores clocked at 1.3GHz paired with two Cortex-A15 cores clocked at 1.7GHz. Devices can use all six cores at a time or more selectively. The Hexa 5260 offers up two higher powered cores for single threaded performance along with four power sipping cores for running background tasks and parallel workloads.

The new chips offer up access to more cores for more performance at the cost of higher power draw. While the additional cores may seem like overkill for checking email and surfing the web, the additional power can enable things like onboard voice recognition, machine vision, faster photo filtering and editing, and other parallel-friendly tasks. Notably, the GPU should be able to assist with some of this parallel processing, but GPGPU is still relatively new whereas developers have had much more time to familiarize themselves with and optimize applications for multiple CPU threads. Yes, the increasing number of cores lends itself well to marketing, but that does not preclude them from having real world performance benefits and application possibilities. As such, I'm interested to see what these chips can do and what developers are able to wring out of them.

Source: Ars Technica