Subject: General Tech | June 7, 2013 - 03:18 PM | Jeremy Hellstrom
Tagged: arm, 64bit, servers
With Calxeda and Applied Micro showing off ARM64-based servers at Computex this year, in addition to the existing products from Marvell and Dell, DigiTimes' prediction that 64-bit ARM processors will quickly grow in popularity seems well founded. Not too long ago, many thought ARM was fooling itself if it believed it could take server market share from AMD and Intel, but the decision to develop server chips now looks like the right one. With low power usage winning out over raw processor overkill and modularity growing in importance, ARM seems poised to perform far beyond expectations. Expect to see a lot more news on ARM64 processors and products over the coming months.
"Although Intel platforms are still the mainstream in the server industry, since 64-bit products have a broader range of applications, and ARM has been aggressively promoting related products, sources from the server industry expect more 64-bit ARM-based products to appear in the market between the end of 2013 and the first quarter of 2014."
Here is some more Tech News from around the web:
- One Year After World IPv6 Launch — Are We There Yet? @ Slashdot
- The best and worst of Computex 2013 @ The Inquirer
- YES, Xbox One DOES need internet, DOES restrict game trading @ The Register
- Interview: Steve Jackson, role-playing game titan @ The Register
- Neteller vs Payoneer - Online Payment and Prepaid Cards @ FunkyKit
- How to Install Linux @ Linux.com
Cortex-A12 fills a gap
Starting off Computex with an interesting announcement, ARM is talking about a new Cortex-A12 core that aims to address a performance gap in the SoC ecosystem between the A9 and A15. To compete with Krait and Intel's Silvermont architecture due in late 2013, ARM definitely needed to bridge the gap in performance and efficiency between the A9 and A15.
Source: ARM. Top to bottom: Cortex-A15, A12, A9 die size estimate
Targeted at mid-range devices, which tend to be more cost (and thus die-size) limited, the Cortex-A12 will begin product sampling in late 2014, and you should see hardware for sale in early 2015.
Architecturally, the changes in the upcoming A12 core revolve around a move to a fully out-of-order, dual-issue design, including the integrated floating point units. The execution units are faster and the memory subsystem has been improved, but ARM wasn't ready to talk specifics with me yet; expect those later in the year.
ARM claims this results in a 40% performance gain for the Cortex-A12 over the Cortex-A9 as measured in SPECint. Because products won't even start sampling until late 2014, we have no way to verify that figure yet or to evaluate the efficiency claims. The lag between announcement and release will also give competitors like Intel, AMD, and even Qualcomm time to answer back with potentially earlier availability.
Subject: General Tech | May 29, 2013 - 05:20 PM | Tim Verry
Tagged: x11, weston, wayland, videocore iv, Raspberry Pi, linux, bcm2835, arm
The Raspberry Pi Foundation has been working with Collabora to fund development of a Wayland display server that is compatible with the Raspberry Pi and also allows the continued use of legacy X applications.
So far, operating systems that run on the Raspberry Pi have used X as the display server and window compositor. The Raspberry Pi Foundation wants to move to a window compositor that takes advantage of the Raspberry Pi's Hardware Video Scaler (HVS) and takes the burden of window composition off of the much slower ARM CPU. The Foundation has chosen Wayland as the display server for the task.
The Raspberry Pi Model A.
Taking advantage of the HVS and the OpenGL ES-compatible GPU will make the system feel much more responsive and allow for advanced effects (fading, Exposé-like window browsers, and the like) for those who like a little more bling with their OS.
The Wayland/Weston display server allows for GPU acceleration and window composition using the Pi's VideoCore IV GPU and HVS (which is independent of the hardware units that run OpenGL code). The display server will feed the entire set of windows, along with how they should be laid out on screen (stacking order, transparency, 2D transforms, etc.), to the HVS, which will hardware-accelerate the process and free the ARM CPU for other tasks.
According to the Raspberry Pi Foundation, the Raspberry Pi's HVS is fairly powerful for a mobile-class SoC with 500 Megapixel/s scaling throughput and 1 Gigapixel per second blending throughput.
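To put those figures in perspective, here is a quick back-of-the-envelope calculation (ours, not the Foundation's) of what 1 Gigapixel per second of blending throughput buys at 1080p:

```python
# Rough arithmetic on the quoted HVS blending throughput: how many
# full-screen 1080p layers could it composite per frame at 60 Hz?
BLEND_PIXELS_PER_SEC = 1_000_000_000  # quoted 1 Gigapixel/s blending rate
FRAME_PIXELS = 1920 * 1080            # pixels in one full 1080p frame
FPS = 60

layers_per_frame = BLEND_PIXELS_PER_SEC / (FRAME_PIXELS * FPS)
print(f"{layers_per_frame:.1f}")  # about 8 full-screen layers every frame
```

Roughly eight fully blended full-screen layers per 60Hz frame is plenty for a desktop compositor, which backs up the Foundation's claim that the HVS is unusually capable for a mobile-class SoC.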
In addition to GPU acceleration, Wayland will allow non-rectangular windows, fading and other effects, support for legacy X applications with Xwayland, and a scaled window browser.
The Raspberry Pi Foundation has been working with developers since late last year and is nearly ready to roll a technology preview into the next Raspbian operating system release. The developers are still working on improving performance and reducing memory usage. As a result, the new Wayland/Weston display server is not expected to become the new default in the various Raspberry Pi operating systems until late 2013 at the earliest.
This is a really nice project to see, especially since at least some of the development work going into supporting the ARM-based Raspberry Pi on Wayland will benefit other ARM devices and Wayland in general, which is becoming an increasingly popular choice in new Linux distributions and the best X alternative so far. Of course, this update will primarily benefit those Raspberry Pi users who run OSes with GUIs, as responsiveness should be a lot snappier!
If you simply can't wait until later this year, it is possible to install the technology preview (beta) of Wayland/Weston on the current version of Raspbian Linux by cloning the git project or installing a Raspbian package of Weston 1.0. Blogger Daniel Stone has all the details for installing the display server on your Pi under the section titled "sounds great; how do i get it?" in this post.
See a video of Wayland technology preview in action on the Raspberry Pi on the Raspberry Pi Foundation's blog.
Read more about the Raspberry Pi at PC Perspective.
Subject: General Tech | April 30, 2013 - 09:46 AM | Tim Verry
Tagged: ssd caching, operating system, linux, kernel 3.9, kernel, arm, 802.11ac
Linus Torvalds recently released a new version of the Linux kernel -- version 3.9 -- that advances the core of the GNU/Linux operating system with a number of new features. Among other tweaks, the new kernel rolls in new drivers, improves virtualization support, adds new hardware sleep modes, and tweaks file system and storage support.
The new kernel has added quite a few new experimental features, but developers/enthusiasts will no longer have to employ the CONFIG_EXPERIMENTAL flag when compiling the kernel in order to enable them. The kernel development team has decided to remove that option, enable the features by default, and merely tag those experimental features in the documentation. One of those experimental features is SSD caching, which allows a solid state drive to cache both reads and writes: frequently accessed data is kept on the faster solid state drive, while cached writes are flushed to the hard drive when the IO subsystem isn't being heavily utilized. The feature is not new to Linux distributions, but caching support has now moved into the kernel. Furthermore, the kernel is now RAID-aware when using the btrfs file system with RAID 5 or RAID 6.
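The read and write-back behavior described above can be sketched with a toy model (ours, not the kernel's actual caching code): reads populate and then hit the fast device, while writes are absorbed by the cache and only flushed to the slow disk later, when IO load is low.

```python
# Toy model of SSD write-back caching (illustrative only, not the
# kernel implementation): dicts stand in for the fast and slow devices.
class WriteBackCache:
    def __init__(self):
        self.ssd = {}       # block -> data held on the fast device
        self.dirty = set()  # blocks written but not yet on the hard drive
        self.disk = {}      # the slow backing device

    def read(self, block):
        if block in self.ssd:        # cache hit: no disk access needed
            return self.ssd[block]
        data = self.disk.get(block)  # cache miss: fetch and populate
        self.ssd[block] = data
        return data

    def write(self, block, data):
        self.ssd[block] = data       # absorb the write on the SSD
        self.dirty.add(block)        # remember it still owes the disk

    def flush(self):                 # called when the IO subsystem is idle
        for block in self.dirty:
            self.disk[block] = self.ssd[block]
        self.dirty.clear()

c = WriteBackCache()
c.write(7, b"data")
assert c.read(7) == b"data" and 7 not in c.disk  # write not yet on disk
c.flush()
assert c.disk[7] == b"data"                      # now persisted
```

The payoff is visible in the last four lines: the write is immediately readable from the cache, but the slow device is only touched when `flush()` runs.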
On the driver front, Linux kernel 3.9 adds support for Intel's upcoming 802.11ac Wi-Fi adapters, improved HD audio codecs, AMD's Oland (8500/8600) and Richland GPUs, and additional NVIDIA GPUs. The new kernel also rolls in a power-optimized driver for Intel's Haswell GPU and support for several more trackpads.
Kernel 3.9 also adds a new suspend/sleep mode. It will use more power than the traditional S3 (suspend to memory) sleep mode because components are not completely powered down (merely at their lowest sleep mode), but the system will be almost-instantly accessible upon exiting the new suspend mode as a result. According to H-Online, this "lightweight suspend" mode would be ideal for mobile devices or hardware used in network appliances. Also interesting is support for a KVM hypervisor on ARM Cortex A15 SoCs as well as some software tweaks to the kernel to improve web server workloads by allowing multiple networking sockets (and associated CPU processes) to listen on the same network port.
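That last tweak is exposed to applications through the SO_REUSEPORT socket option, new in Linux 3.9. A minimal sketch, assuming a Linux host and a Python build that exposes the option, of two sockets sharing one listening port:

```python
import socket

def make_listener(host, port):
    # SO_REUSEPORT (Linux 3.9+) lets several sockets bind the same
    # address/port; the kernel then load-balances incoming connections
    # across them, so each worker process can own its own socket
    # instead of contending on one shared accept queue.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind((host, port))
    s.listen(16)
    return s

a = make_listener("127.0.0.1", 0)     # port 0: kernel picks a free port
port = a.getsockname()[1]
b = make_listener("127.0.0.1", port)  # second socket on the SAME port
assert b.getsockname()[1] == port     # would raise EADDRINUSE without the option
a.close()
b.close()
```

In a real web server, each worker process would call something like `make_listener()` itself and then `accept()` on its private socket, which is the workload improvement the kernel change targets.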
In all, version 3.9 looks to be a worthy upgrade, and one that I hope Linux distro makers will opt for in upcoming releases. I think the new drivers and the SSD caching being rolled into the kernel are the most important features for desktop users, though the networking stack improvements also sound interesting.
For more details, Thorsten Leemhuis has written up an extensive article on the new kernel.
British chip design company ARM recently released an unaudited financial report with details on its Q1 2013 performance. The mobile SoC giant announced that 2.6 billion ARM-based chips shipped in the first quarter of this year, a 35% improvement over last year and further evidence that ARM still dominates the low-power mobile market.
In fact, the chip designer made $94.9 million licensing all of those ARM chips, a big chunk of the company’s total Q1 2013 revenue of $263.9 million. Revenue was up 26% versus the first quarter of the previous year (Q1 2012), which came in at $209.4 million. Further, ARM’s pre-tax profit was £89.4 million, or approximately $137 million USD.
ARM saw revenue from licensing and royalties increase year over year (YoY) by 24% and 33% respectively, which indicates that more companies are jumping into the mobile and embedded markets with ARM chips or licenses to make custom designs of their own. According to the report, the company sold five times more Mali GPUs, saw a 50% increase in ARM-powered embedded devices, and saw a 25% increase in ARM mobile devices year over year. ARM has also started moving ARMv8 (64-bit ARM) licenses: of the 22 licenses sold in Q1 2013, seven were for ARM’s Cortex-A50 series processors, along with a single ARMv8 architecture license (bringing the ARMv8 total to nine to date). In Q1 2013, ARM also sold three Mali GPU licenses, one of which was for the company’s high-end Skymir GPU.
In all, ARM had a good first quarter and is showing signs of increased growth. With ARMv8 on the horizon, I am interested to see the company’s numbers next year and how they compare year over year as ARM attempts to take over the server room in particular. The profits and revenue are modest compared to x86 giant Intel's Q1 2013 results, but are not bad at all for a company that doesn’t produce chips itself!
Subject: Systems | April 19, 2013 - 03:56 AM | Tim Verry
Tagged: X-Gene, servers, project moonshot, microserver, hp, arm, Applied Micro Circuits, 64-bit
A recent press release from AppliedMicro (Applied Micro Circuits Corporation) announced that the company’s X-Gene server on a chip technology would be used in an upcoming HP Project Moonshot server.
An HP Moonshot server (expect the X-Gene version to be at least slightly different).
The X-Gene is a 64-bit ARM SoC that combines ARM processing cores with networking and storage offload engines as well as a high-speed interconnect fabric. AppliedMicro designed the chip to enable ARM-powered servers that will reportedly lower the total cost of ownership of running web servers in a data center by cutting upfront hardware and ongoing electricity costs.
The X-Gene chips that will appear in HP’s Project Moonshot servers feature a SoC with eight AppliedMicro-designed 64-bit ARMv8 cores clocked at 2.4GHz, four ARM Cortex-A5 cores for running the Software Defined Network (SDN) controller, and support for storage IO, PCI-E IO, and integrated Ethernet (four 10Gb Ethernet links). The X-Gene chips sit on daughter cards that slot into a carrier board whose networking fabric connects all of the X-Gene cards (and the SoCs on those cards). Currently, servers using X-Gene SoCs require a hardware switch to connect all of the X-Gene cards in a rack; however, the next-generation 28nm X-Gene chips will eliminate the need for a rack-level hardware switch and will feature 100Gb networking links.
The X-Gene chips in HP Project Moonshot will use relatively little power compared to Xeon-based solutions. AppliedMicro has stated that the X-Gene chips will be at least twice as power efficient, but has not officially released power consumption numbers for the chips under load. However, the X-Gene SoCs will draw as little as 500mW at idle and 300mW in standby (sleep mode). The 64-bit, quad-issue, out-of-order chips are some of the most powerful ARM processors to date, though they will soon be joined by ARM’s own 64-bit design(s). I think the X-Gene chips are intriguing, and I am excited to see how well they fare in the data center running server applications. ARM has handily taken over the mobile space, but it is still relatively new to the server world. Even so, the 64-bit ARM chips from AppliedMicro (X-Gene) and others are the first step toward ARM becoming a viable option for servers.
According to AppliedMicro, HP Project Moonshot servers with X-Gene SoCs will be available later this year. You can find the press blast below.
Subject: General Tech | April 12, 2013 - 02:08 AM | Tim Verry
Tagged: SECO, nvidia, mini ITX, kepler, kayla, GTC 13, GTC, CUDA, arm
Last month, NVIDIA revealed its Kayla development platform, which combines a quad-core Tegra System on a Chip (SoC) with an NVIDIA Kepler GPU. Kayla will be out later this year, but that has not stopped other board makers from putting together their own solutions. One such solution, which began shipping earlier this week, is the mITX GPU Devkit from SECO.
The new mITX GPU Devkit is a hardware platform for developers to program CUDA applications for mobile devices, desktops, workstations, and HPC servers. It combines an NVIDIA Tegra 3 processor, 2GB of RAM, and 4GB of internal eMMC storage on a Qseven module mounted on a Mini-ITX form factor motherboard. Developers can then plug their own CUDA-capable graphics card into the single PCI-E 2.0 x16 slot (which actually runs at x4 speeds). Additional storage can be added via an internal SATA connection, and cameras can be hooked up using the CIC headers.
Rear IO on the mITX GPU Devkit includes:
- 1 x Gigabit Ethernet
- 3 x USB
- 1 x OTG port
- 1 x HDMI
- 1 x DisplayPort
- 3 x Analog audio
- 2 x Serial
- 1 x SD card slot
The SECO platform is proving to be popular for GPGPU in the server space, especially with systems like Pedraforca. The intention of using these types of platforms in servers is to save power by using a low-power ARM chip for inter-node communication and basic tasks while the real computing is done solely on the graphics cards. With Intel’s upcoming Haswell-based Xeon chips getting down to 13W TDPs, though, systems like this are going to be more difficult to justify. SECO is mostly positioning this platform as a development board, however. One use in that respect is to begin optimizing GPU-accelerated code for mobile devices. With future Tegra chips set to get CUDA-capable graphics, new software development and optimization of existing GPGPU code for smartphones and tablets will be increasingly important.
Either way, the SECO mITX GPU Devkit is available now for 349 EUR or approximately $360 (in both cases, before any taxes).
Subject: Processors | April 3, 2013 - 08:35 AM | Tim Verry
Tagged: mobile, Lenovo, electrical engineering, chip design, arm
According to a recent article in EE Times, Beijing-based PC OEM Lenovo may be entering the mobile chip design business. An anonymous source allegedly familiar with the matter has indicated that Lenovo will expand its integrated circuit design team to 100 engineers by the second half of this year. Further, Lenovo will reportedly task the newly expanded team with designing an ARM processor of its own, joining the ranks of Apple, Intel, NVIDIA, Qualcomm, Huawei, Samsung, and others.
It is unclear whether Lenovo simply intends to license an existing ARM core and graphics module or whether the design team expansion is merely the beginning of a growing division that will design a custom chip for its smartphones and Chromebooks to truly differentiate itself and take advantage of vertical integration.
Junko Yoshida, author of the EE Times article, notes that Lenovo was turned away by Samsung when it attempted to use the company's latest Exynos Octa processor. I think that might contribute to the desire for its own chip design team, but it may also be that the company believes it can compete in a serious way and set its lineup of smartphones apart from the crowd (as Apple has managed to do) as it pursues further Chinese market share and slowly moves its phones into the United States market.
Details are scarce, but it is at least an intriguing potential future for the company. It will be interesting to see whether Lenovo can make it work in this extremely competitive and expensive area.
Do you think Lenovo has what it takes to design its own mobile chip? Is it a good idea?
Subject: General Tech | April 2, 2013 - 05:57 PM | Jeremy Hellstrom
Tagged: arm, FinFET, 16nm, TSMC, Cortex-A57
While DigiTimes is reporting only a first tape-out, it is still very interesting to see TSMC hit 16nm process testing, and to do it with the 3D transistor technology we have come to know as FinFET. The chip in question was a 64-bit ARM Cortex-A57 created on this process; unfortunately, we did not get much information about what comprised the chip apart from the slide you can see below.
As can be inferred from the mention that it can run alongside big.LITTLE chips, it will not share that architecture, nor will it be confined to cellphones. This helps reinforce TSMC's reputation for keeping up with the latest fabrication trends, and another solid ARM contract will also keep the beancounters occupied. You can't expect to see these chips immediately, but this is a solid step toward a new process being mastered by TSMC.
"The achievement is the first milestone in the collaboration between ARM and TSMC to jointly optimize the 64-bit ARMv8 processor series on TSMC FinFET process technologies, the companies said. The pair has teamed up to produce Cortex-A57 processors and libraries to support early customer implementations on 16nm FinFET for ARM-based SoCs."
Here is some more Tech News from around the web:
- Wiping a Smartphone Still Leaves Data Behind @ Slashdot
- ARM processor competition to fire up @ DigiTimes
- Physicists bang the drum for quantum memory @ The Register
- Intel Haswell Socket H Heatsink Requirements and Overclocking Thoughts @ Tweaktown
- Killing Your Internet with Killer Ethernet @ Techgage
- Backdoors Found In Bitlocker, FileVault and TrueCrypt? @ TechARP
- Win ASRock FM2A85X Extreme 6 & Seasonic M12II-850 @ Kitguru
- Win Enermax Goodies From Insomnia i48 @ eTeknix
- NikKTech & Synology Joint Giveaway - One DiskStation DS213+ Up For Grabs
- The TR Podcast 131: News from GDC and FCAT attacks
- Dispatches from the Nexus @ The Tech Report
- AMD touts unified gaming strategy @ The Tech Report
- Intel gets serious about graphics for gaming @ The Tech Report
ARM is a company that no longer needs much of an introduction, though that was not always the case. ARM has certainly made a name for itself among PC, tablet, and handheld consumers. Its primary source of income is licensing CPU designs as well as its ISA. While names like Cortex-A9 and Cortex-A15 are fairly well known, fewer people know about the graphics IP that ARM also licenses. Mali is the product name of that graphics IP, and it encompasses an entire range of features and performance levels that can be licensed by third parties.
I was able to get a block of time with Nizar Romdhane, Head of the Mali Ecosystem at ARM, and ask a few questions about Mali, ARM’s plans to address the increasingly important mobile graphics market, and how the company will fare against competition from Imagination Technologies, Intel, AMD, NVIDIA, and Qualcomm.
We would like to thank Nizar for his time, as well as Phil Hughes for facilitating this interview. Stay tuned, as we expect to continue this series of interviews with other ARM employees in the near future.