Subject: Systems | April 19, 2013 - 03:56 AM | Tim Verry
Tagged: X-Gene, servers, project moonshot, microserver, hp, arm, Applied Micro Circuits, 64-bit
A recent press release from AppliedMicro (Applied Micro Circuits Corporation) announced that the company’s X-Gene server on a chip technology would be used in an upcoming HP Project Moonshot server.
An HP Moonshot server (expect the X-Gene version to be at least slightly different).
The X-Gene is a 64-bit ARM SoC that combines ARM processing cores with networking and storage offload engines as well as a high-speed interconnect networking fabric. AppliedMicro designed the chip to power ARM-based servers that will reportedly reduce the Total Cost of Ownership of running web servers in a data center by cutting upfront hardware and ongoing electrical costs.
The X-Gene chips that will appear in HP’s Project Moonshot servers feature a SoC with eight AppliedMicro-designed 64-bit ARMv8 cores clocked at 2.4GHz, four ARM Cortex A5 cores for running the Software Defined Network (SDN) controller, and support for storage IO, PCI-E IO, and integrated Ethernet (four 10Gb Ethernet links). The X-Gene chips sit on daughter cards that slot into a carrier board with a networking fabric connecting all of the X-Gene cards (and the SoCs on those cards). Currently, servers using X-Gene SoCs require a hardware switch to connect all of the X-Gene cards in a rack. However, the next-generation 28nm X-Gene chips will eliminate the need for a rack-level hardware switch and will feature 100Gb networking links.
The X-Gene chips in HP Project Moonshot will use relatively little power compared to Xeon-based solutions. AppliedMicro has stated that the X-Gene chips will be at least twice as power efficient, but it has not officially released power consumption numbers for the X-Gene chips under load. However, the X-Gene SoCs will use as little as 500mW at idle and 300mW in standby (sleep mode). The 64-bit, quad-issue, out-of-order execution chips are some of the most powerful ARM processors to date, though they will soon be joined by ARM’s own 64-bit design(s). I think the X-Gene chips are intriguing, and I am excited to see how well they fare in the data center environment running server applications. ARM has handily taken over the mobile space, but it is still relatively new in the server world. Even so, the 64-bit ARM chips by AppliedMicro (X-Gene) and others are the first step towards ARM becoming a viable option for servers.
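To get a feel for what those idle figures could mean at scale, here is a back-of-the-envelope sketch. Only the 500mW X-Gene idle figure comes from AppliedMicro; the Xeon idle draw, node count, and electricity price are illustrative assumptions, not vendor numbers.

```python
# Back-of-the-envelope idle-power comparison for a fleet of web nodes.
# The X-Gene idle figure (0.5 W) is from the article; the Xeon idle
# figure, node count, and electricity price are assumptions.
XGENE_IDLE_W = 0.5      # per X-Gene SoC at idle (stated above)
XEON_IDLE_W = 60.0      # assumed idle draw of a typical Xeon node
NODES = 500             # assumed deployment size
HOURS_PER_YEAR = 8760
KWH_PRICE_USD = 0.10    # assumed electricity price per kWh

def annual_idle_kwh(watts_per_node, nodes):
    """Energy consumed per year if every node idled continuously."""
    return watts_per_node * nodes * HOURS_PER_YEAR / 1000

xgene_kwh = annual_idle_kwh(XGENE_IDLE_W, NODES)
xeon_kwh = annual_idle_kwh(XEON_IDLE_W, NODES)
savings_usd = (xeon_kwh - xgene_kwh) * KWH_PRICE_USD
print(f"X-Gene: {xgene_kwh:.0f} kWh/yr, Xeon: {xeon_kwh:.0f} kWh/yr")
print(f"Idle-power savings: ${savings_usd:,.0f}/yr")
```

Even under these made-up Xeon numbers, the gap illustrates why "reducing ongoing electrical costs" is the core of the TCO pitch.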
According to AppliedMicro, HP Project Moonshot servers with X-Gene SoCs will be available later this year. You can find the press blast below.
Subject: General Tech | April 12, 2013 - 02:08 AM | Tim Verry
Tagged: SECO, nvidia, mini ITX, kepler, kayla, GTC 13, GTC, CUDA, arm
Last month, NVIDIA revealed its Kayla development platform, which combines a quad-core Tegra System on a Chip (SoC) with an NVIDIA Kepler GPU. Kayla will be out later this year, but that has not stopped other board makers from putting together their own solutions. One such solution, which began shipping earlier this week, is the mITX GPU Devkit from SECO.
The new mITX GPU Devkit is a hardware platform for developers to program CUDA applications for mobile devices, desktops, workstations, and HPC servers. It combines an NVIDIA Tegra 3 processor, 2GB of RAM, and 4GB of internal storage (eMMC) on a Qseven module paired with a Mini-ITX form factor motherboard. Developers can then plug their own CUDA-capable graphics card into the single PCI-E 2.0 x16 slot (which actually runs at x4 speeds). Additional storage can be added via an internal SATA connection, and cameras can be hooked up using the CIC headers.
Rear IO on the mITX GPU Devkit includes:
- 1 x Gigabit Ethernet
- 3 x USB
- 1 x OTG port
- 1 x HDMI
- 1 x Display Port
- 3 x Analog audio
- 2 x Serial
- 1 x SD card slot
The SECO platform is proving to be popular for GPGPU in the server space, especially with systems like Pedraforca. The intention of using these types of platforms in servers is to save power by using a low-power ARM chip for inter-node communication and basic tasks while the real computing is done solely on the graphics cards. With Intel’s upcoming Haswell-based Xeon chips getting down to 13W TDPs, though, systems like this are going to be more difficult to justify. SECO is mostly positioning this platform as a development board, however. One use in that respect is to begin optimizing GPU-accelerated code for mobile devices. With future Tegra chips set to gain CUDA-capable GPUs, new software development and optimization of existing GPGPU code for smartphones and tablets will be increasingly important.
Either way, the SECO mITX GPU Devkit is available now for 349 EUR or approximately $360 (in both cases, before any taxes).
Subject: Processors | April 3, 2013 - 08:35 AM | Tim Verry
Tagged: mobile, Lenovo, electrical engineering, chip design, arm
According to a recent article in the EE Times, Beijing-based PC OEM Lenovo may be entering the mobile chip design business. An anonymous source allegedly familiar with the matter has indicated that Lenovo will be expanding its integrated circuit design team to 100 engineers by the second half of this year. Further, Lenovo will reportedly task the newly expanded team with designing an ARM processor of its own to join the ranks of Apple, Intel, NVIDIA, Qualcomm, Huawei, Samsung, and others.
It is unclear whether Lenovo simply intends to license an existing ARM core and graphics module or if the design team expansion is merely the beginning of a growing division that will design a custom chip for its smartphones and Chromebooks to truly differentiate itself and take advantage of vertical integration.
Junko Yoshida of the EE Times notes that Lenovo was turned away by Samsung when it attempted to use the company's latest Exynos Octa processor. I think that might contribute to the desire to have its own chip design team, but it may also be that the company believes it can compete in a serious way and set its lineup of smartphones apart from the crowd (as Apple has managed to do) as it pursues further Chinese market share and slowly moves its phones into the United States market.
Details are scarce, but it is at least an intriguing potential future for the company. It will be interesting to see if Lenovo is able to make it work in this extremely competitive and expensive area.
Do you think Lenovo has what it takes to design its own mobile chip? Is it a good idea?
Subject: General Tech | April 2, 2013 - 05:57 PM | Jeremy Hellstrom
Tagged: arm, FinFET, 16nm, TSMC, Cortex-A57
While what DigiTimes is reporting on is only the first tape out, it is still very interesting to see TSMC hitting 16nm process testing, and doing it with the 3D transistor technology we have come to know as FinFET. The chip created using this process was a 64-bit ARM Cortex-A57; unfortunately, we did not get much information about what comprised the chip apart from the slide you can see below.
As can be inferred from the mention that it can run alongside big.LITTLE chips, it will not be of the same architecture, nor will it be confined to cellphones. This does help reinforce TSMC's position in the market for keeping up with the latest fabrication trends, and another solid ARM contract will also keep the beancounters occupied. You can't expect to see these chips immediately, but this is a solid step towards a new process being mastered by TSMC.
"The achievement is the first milestone in the collaboration between ARM and TSMC to jointly optimize the 64-bit ARMv8 processor series on TSMC FinFET process technologies, the companies said. The pair has teamed up to produce Cortex-A57 processors and libraries to support early customer implementations on 16nm FinFET for ARM-based SoCs."
Here is some more Tech News from around the web:
- Wiping a Smartphone Still Leaves Data Behind @ Slashdot
- ARM processor competition to fire up @ DigiTimes
- Physicists bang the drum for quantum memory @ The Register
- Intel Haswell Socket H Heatsink Requirements and Overclocking Thoughts @ Tweaktown
- Killing Your Internet with Killer Ethernet @ Techgage
- Backdoors Found In Bitlocker, FileVault and TrueCrypt? @ TechARP
- Win ASRock FM2A85X Extreme 6 & Seasonic M12II-850 @ Kitguru
- Win Enermax Goodies From Insomnia i48 @ eTeknix
- NikKTech & Synology Joint Giveaway - One DiskStation DS213+ Up For Grabs
- The TR Podcast 131: News from GDC and FCAT attacks
- Dispatches from the Nexus @ The Tech Report
- AMD touts unified gaming strategy @ The Tech Report
- Intel gets serious about graphics for gaming @ The Tech Report
ARM is a company that no longer needs much of an introduction, though that was not always the case. ARM has certainly made a name for itself among PC, tablet, and handheld consumers. Its primary source of income is licensing CPU designs as well as its ISA. While names like the Cortex A9 and Cortex A15 are fairly well known, not as many people know about the graphics IP that ARM also licenses. Mali is the product name of that graphics IP, and it encompasses an entire range of features and performance levels that can be licensed by third parties.
I was able to get a block of time with Nizar Romdhane, Head of the Mali Ecosystem at ARM, and ask a few questions about Mali, ARM’s plans to address the increasingly important mobile graphics market, and how the company will compete with Imagination Technologies, Intel, AMD, NVIDIA, and Qualcomm.
We would like to thank Nizar for his time, as well as Phil Hughes for facilitating this interview. Stay tuned as we are expecting to continue this series of interviews with other ARM employees in the near future.
Subject: Systems | March 25, 2013 - 01:14 PM | Jeremy Hellstrom
Tagged: arm, calxeda, Boston Viridis
Perhaps the most telling part of AnandTech's review of the Calxeda Boston Viridis server was the statement that "It's a Cluster, Not a Server," as that paints a different picture of the appliance in many techs' heads. When you first open the chassis, you are greeted by 24 2.5” SATA drive bays and a very non-standard-looking motherboard full of PCIe slots. Each slot can hold an EnergyCard, which consists of four quad-core ARM SoCs, each with one DIMM slot and four SATA ports. The theoretical limit is 4096 nodes, interconnected by physical, distributed layer-2 switches rather than virtualized switches that consume CPU cycles. Check out the results of AnandTech's virtual machine testing and a deeper look at the architecture of the cluster in the full article.
"ARM based servers hold the promise of extremely low power and excellent performance per Watt ratios. It's theoretically possible to place an incredible number of servers into a single rack; there are already implementations with as many as 1000 ARM servers in one rack (48 server nodes in a 2U chassis). What's more, all of those nodes consume less than 5KW combined (or around 5W per quad-core ARM node). But whenever a new technology is hyped, it's important to remain objective. The media loves to rave about new trends and people like reading about "some new thing"; however, at the end of the day the system administrator has to keep his IT services working and convince his boss to invest in new technologies."
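The density figures in the quote above can be sanity-checked with a quick sketch. The 48-nodes-per-2U and ~5W-per-node numbers come from the quote; the 42U rack height is an assumed standard full-height rack.

```python
# Sanity-check the quoted claim: ~1000 ARM server nodes per rack
# at under ~5 kW. The 42U rack height is an assumption; the node
# density and per-node power figures are from the quoted text.
NODES_PER_CHASSIS = 48   # quoted: 48 server nodes in a 2U chassis
CHASSIS_HEIGHT_U = 2
RACK_HEIGHT_U = 42       # assumed standard full-height rack
WATTS_PER_NODE = 5       # quoted: ~5 W per quad-core ARM node

chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS
rack_power_kw = nodes_per_rack * WATTS_PER_NODE / 1000
print(f"{nodes_per_rack} nodes per rack at ~{rack_power_kw} kW")
```

A fully packed 42U rack works out to 1008 nodes at roughly 5 kW, which lines up with the "as many as 1000 ARM servers in one rack" claim.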
Here are some more Systems articles from around the web:
- MESH Slayer 3770K OC System @ Kitguru
- Apple iMac 27 inch 2012 review: Core i7, GTX 680MX and Fusion Drive @ Hardware.info
- ARIA Gladiator Warbird 660 i5-3570K GTX660 SLI OC @ Kitguru
- Digital Storm Bolt Desktop Gaming PC @ Tweaktown
- Gateway One ZX4970G-UW308 Review @ TechReviewSource
Linaro Forms Linux Networking Group to Collaborate on Open Source Software for ARM Networking Hardware
Subject: General Tech | February 22, 2013 - 02:16 AM | Tim Verry
Tagged: oss, open source, networking, linux networking group, linux, linaro, arm
Linaro, a non-profit engineering group, announced a new collaborative organization called the Linux Networking Group at the Embedded Linux Conference in San Francisco this week. The new group will work on developing open source software to be used with ARM-based hardware in cloud, mobile, and networking industry sectors. Of course, being open source, the software for ARM SoCs will be used with Linux operating systems. One of the Linux Networking Group’s purposes is to develop a new “enhanced core Linux platform” for networking equipment, for example.
The new Linux Networking Group is currently comprised of the following organizations:
- Nokia Siemens Networks
- Texas Instruments
The new cooperative has announced four main goals for 2013:
- "Virtualization support with considerations for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
- Real-time operations and the Linux kernel optimizations for the control and data plane.
- Packet processing optimizations that maximize performance and minimize latency in data flows through the network.
- Dealing with legacy software and mixed-endian issues prevalent in the networking space."
Reportedly, Linaro will have an initial software release within the first half of this year. Further, the organization will follow up with monthly software updates to improve performance and add new features. More collaboration and the furthering of ARM-compatible open source software is always a good thing. It remains to be seen how useful the Linux Networking Group will be in pushing its ARM software goals, but here’s hoping it works out for the best.
The full press release can be found below.
Subject: General Tech | February 18, 2013 - 01:52 PM | Jeremy Hellstrom
Tagged: winRT, arm, x86 emulator
While a previous hack allowed you to run unsigned applications on WinRT devices, it would not survive a reboot and so needed to be reapplied. A programmer at XDA Developers has created a similar but improved tool which functions as a limited 32-bit x86 emulator on WinRT. Once you unlock your device and install the software, which is still in beta, you will be able to run a number of older games and simple applications. One thing it cannot do at this point is launch an x86 program from within an emulated x86 program, so some installers will not function if they rely on decompressing and launching a second program. Check out the latest version of the software and the FAQ by following the link from Hack a Day.
"It seems with a lot of black magic, [mamaich] over at the XDA Developers forum has a solution for us. He’s created a tool for running x86 Win32 apps on Windows RT. Basically, he’s created an x86 emulator for ARM devices that also passes Windows API calls to Windows RT."
Here is some more Tech News from around the web:
- Microsoft’s Office 2013 software licence can’t be transferred to another PC @ The Inquirer
- Ready or not: Microsoft preps early delivery of IE10 for Windows 7 @ The Register
- Interactive Tool Visualizes Tolkien's Works @ Slashdot
- NVIDIA Free-to-Play Reviewed @ OCC
- Canonical will release Ubuntu smartphone software on 21 February @ The Inquirer
- Light Virtualization and Instant Recovery Software: A great overall safety net for your computer @ Tweaktown
- Survey shows Americans treat mobile devices as best friends, says Citrix @ DigiTimes
Subject: General Tech | February 10, 2013 - 12:45 PM | Tim Verry
Tagged: SFF, Raspberry Pi, camera, arm
The Raspberry Pi Foundation has been working on offering a camera attachment for Raspberry Pi boards for some time now. The developers began with a 14MP sensor, but have since moved to a smaller (and cheaper) camera with a 5MP sensor. That particular model is nearly complete and should be available for purchase sometime this spring, according to the developers.
The Raspberry Pi camera will be $25, which aligns well with the recently released Model A Raspberry Pi computer (also $25). The PCB hosting the camera module measures 20 x 25 x 10mm, while the camera module itself measures 8.5 x 8.5 x 5mm. It connects to the Raspberry Pi board via a flat cable into the CSI port below the Ethernet jack.
The $25 camera is capable of capturing HD video as well as stills. It uses the Omnivision OV5647 sensor and a fixed-focus lens. The 5MP sensor can capture still photos at a resolution of 2592 x 1944 and video at up to 1080p. While the developers are still working out the kinks to ensure that the camera can do all of this, the sensor itself is capable of 1080p30, 720p60, and 640x480p90 video capture. The Raspberry Pi Foundation has stated that at least the 1080p30 capture mode is working.
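The quoted still resolution lines up with the sensor's 5MP rating, and a quick sketch also shows how much raw pixel throughput each listed video mode implies:

```python
# Verify that a 2592 x 1944 still works out to roughly 5 megapixels,
# and compare the raw per-second pixel throughput of the video modes
# listed above (width, height, frames per second).
STILL_W, STILL_H = 2592, 1944
megapixels = STILL_W * STILL_H / 1_000_000
print(f"Still capture: {megapixels:.2f} MP")

modes = {
    "1080p30": (1920, 1080, 30),
    "720p60": (1280, 720, 60),
    "640x480p90": (640, 480, 90),
}
for name, (w, h, fps) in modes.items():
    print(f"{name}: {w * h * fps / 1_000_000:.1f} Mpix/s")
```

Notably, 1080p30 pushes the most raw pixels per second of the three modes, which fits with it being the mode the Foundation got working first being the main focus.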
Interestingly, the Raspberry Pi ISP hardware can support two cameras, but the PCB only provides a single CSI connector (so no 3D image capture using two cameras). The Raspberry Pi Foundation is providing this little CSI camera as an alternative to USB cameras. While it is possible to use USB cameras with the Raspberry Pi, USB driver overhead and USB bandwidth issues specific to the Raspberry Pi limit the performance that you can get out of USB cameras. The $25 CSI camera add-on bypasses the USB interface in favor of the CSI port that feeds into the image processing parts of the ARM SoC.
The developers have not released an exact weight measurement, but have described it as being rather lightweight--making it ideal for use in drones, weather balloons, and other flying projects. For more information, the developers have set up a forum thread to answer questions and keep interested users updated on the project status.
Subject: General Tech | February 5, 2013 - 05:32 AM | Tim Verry
Tagged: Raspberry Pi, model a, cheap computer, arm
The Raspberry Pi Foundation has announced that its Model A computer is (finally) available for purchase in Europe. The Raspberry Pi Model A is the small computer that the foundation originally pitched as the low-cost $25 PC. The other computer is the Model B, which has been available for some time now. The Model A is a stripped-down version of the Model B covered previously. It features a single USB port and half the RAM of the latest Model B, at 256MB. Further, there is no Ethernet jack on the Model A, so users wanting Internet access will have to grab a USB NIC.
The Model A PC. Notice the lack of Ethernet support.
The Model A is powered by the same Broadcom BCM2835 chipset as the Model B. That includes an ARM1176JZFS processor clocked at 700MHz and a Videocore 4 GPU. The GPU is capable of hardware-accelerated H.264 video decoding at up to 1080p30 and 40Mbps. It is rated at 24 GFLOPS of general compute performance, and it supports the OpenGL ES 2.0 and OpenVG libraries.
Interestingly, the Model A was originally planned to have a mere 128MB of RAM, but with the update of the Model B to 512MB RAM, the Raspberry Pi Foundation was also able to include twice the RAM in the Model A while maintaining the $25 price point.
The underside of the Raspberry Pi Model A.
The Model A reportedly uses about a third of the power of the Model B, which makes it ideal for projects that run off of batteries or renewable energy sources--like solar. The Raspberry Pi Foundation suggests that the Model A will be useful in robotics and networking projects, for example.
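To put that power claim in battery-project terms, here is a rough runtime sketch. Only the "about a third of the Model B's power" ratio comes from the article; the Model B draw and the battery capacity are illustrative assumptions, not measured figures.

```python
# Rough battery-life estimate for a Model A project, using the article's
# "about a third of the Model B's power" claim. The Model B draw and the
# battery capacity are illustrative assumptions, not measured figures.
MODEL_B_DRAW_W = 3.5               # assumed typical Model B draw
MODEL_A_DRAW_W = MODEL_B_DRAW_W / 3  # the article's roughly-one-third claim
BATTERY_WH = 37.0                  # assumed: a 10,000 mAh pack at 3.7 V

hours_model_a = BATTERY_WH / MODEL_A_DRAW_W
hours_model_b = BATTERY_WH / MODEL_B_DRAW_W
print(f"Model A: ~{hours_model_a:.1f} h, Model B: ~{hours_model_b:.1f} h")
```

Under these assumptions the Model A roughly triples the runtime, which is exactly why it appeals for drones, weather balloons, and solar-powered builds.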
The Model A Raspberry Pi PC is currently available in Europe, but US availability is coming soon. It will cost $25, but you will also need at least an SD card for the operating system and a DC power source (like a cell phone wall charger with male micro USB connector). The promised $25 PC is finally here (at least for those on the other side of the pond). What will you be using it for?
Read more about the Raspberry Pi at PC Perspective.