BitScope Unveils Raspberry Pi Cluster With 2,880 CPU Cores For LANL HPC R&D

Subject: General Tech | November 30, 2017 - 12:48 AM |
Tagged: HPC, supercomputer, Raspberry Pi 3, cluster, research, LANL

The Raspberry Pi has been used to build cheap servers and small clusters before, but BitScope is taking the idea to the extreme with a professional enterprise solution. On display at SC17, the BitScope Raspberry Pi Cluster Module is a 6U rackable drawer that holds 144 Raspberry Pi 3 single board computers along with all of the power, networking, and air cooling needed to keep things running smoothly.

Each Cluster Module holds two and a half BitScope Blades, with each Blade holding up to 60 Raspberry Pi boards (or other SBCs such as the ODROID C2). Enthusiasts can already purchase their own Quattro Pi boards as well as the cluster plate to assemble small clusters of their own, though the 6U Cluster Module drawer does not appear to be for sale just yet. Specifically, each Cluster Module has room for 144 active nodes, six spare nodes, and one cluster manager node.

[Image: BitScope Raspberry Pi Cluster Module]

For reference, the Raspberry Pi 3 features the Broadcom BCM2837 SoC with four ARM Cortex-A53 cores at 1.2 GHz and a VideoCore IV GPU, paired with 1 GB of LPDDR2 memory at 900 MHz, 100 Mbps Ethernet, 802.11n Wi-Fi, and Bluetooth. The ODROID C2 has a quad-core Amlogic (Cortex-A53) SoC at 1.5 GHz, a Mali-450 GPU, 2 GB of DDR3 SDRAM, and Gigabit Ethernet. Interestingly, BitScope claims the Cluster Module uses a 10 Gigabit Ethernet SFP+ backbone, which will help with communication between Cluster Modules, but speeds between individual nodes will still be limited to gigabit at best (less in the real world, and in the Pi's case well below even its 100 Mbps port rating because the Ethernet controller is attached to the SoC over a shared USB 2.0 bus).

BitScope is currently building a platform for Los Alamos National Laboratory that will feature five Cluster Modules for a whopping 2,880 64-bit ARM cores, 720 GB of RAM, and a 10 GbE SFP+ fabric backbone. Fully expanded, a 42U server cabinet holds seven modules (1,008 active nodes / 4,032 active cores) and would consume up to 6 kW of power, though LANL expects its five-module setup to draw around 3,000 W on average.
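
Those headline numbers are easy to sanity check from the per-node specs. Here is a quick back-of-the-envelope sketch, assuming the published figures of 144 active nodes per module, four Cortex-A53 cores, and 1 GB of RAM per Pi 3:

```python
# Back-of-the-envelope totals for the BitScope/LANL build, assuming the
# published figures: 144 active nodes per module, 4 cores and 1 GB RAM per node.
ACTIVE_NODES_PER_MODULE = 144
CORES_PER_NODE = 4
RAM_GB_PER_NODE = 1

def totals(modules):
    nodes = modules * ACTIVE_NODES_PER_MODULE
    return nodes, nodes * CORES_PER_NODE, nodes * RAM_GB_PER_NODE

print(totals(5))  # LANL build:       (720 nodes, 2880 cores, 720 GB)
print(totals(7))  # full 42U cabinet: (1008 nodes, 4032 cores, 1008 GB)
```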

What are the New Mexico Consortium and LANL planning to do with all these cores? Well, playing Crysis would prove tough even if they could SLI all those GPUs, so instead they plan to use the Raspberry Pi-powered system to model much larger and prohibitively expensive supercomputers for R&D and software development. Building a relatively low-cost, low-power system means it can be kept powered on and made accessible to more people, including students, researchers, and programmers, who can use it to learn and to design software that runs as efficiently as possible on massive multi-core, multi-node systems. Getting software to scale out to hundreds or thousands of nodes is tricky, especially if you want all the nodes working on the same problem(s) at once. Keeping each node fed with data, communicating with its peers, and returning accurate results while keeping latency low and utilization high is a huge undertaking. LANL is hoping that the Raspberry Pi based system will be the perfect testing ground for software and techniques it can then use on big-gun supercomputers like Trinity, Titan, Summit (ORNL, slated for 2018), and other smaller HPC clusters.
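
To give a concrete flavor of the kind of pattern that has to stay efficient as node counts climb, here is a minimal scatter/compute/reduce sketch using mpi4py. It is purely illustrative (not BitScope or LANL code), and the array contents and chunk sizes are arbitrary placeholders:

```python
# Illustrative only: the classic scatter/compute/reduce pattern whose latency and
# utilization characteristics are exactly what a cluster like this lets you study.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # The whole (placeholder) problem starts on the root node...
    data = np.arange(size * 1000, dtype=np.float64)
    chunks = np.split(data, size)
else:
    chunks = None

local = comm.scatter(chunks, root=0)                    # ...each node gets its slice,
local_result = np.sum(local)                            # does its share of the work,
total = comm.reduce(local_result, op=MPI.SUM, root=0)   # and the results are combined.

if rank == 0:
    print(f"sum across {size} ranks: {total}")
```

When run across hundreds of ranks (for example, mpirun -n 720 on a full five-module build), keeping every rank busy while the scatter and reduce steps stay cheap is exactly the hard part this kind of testbed lets researchers practice on.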

It is cool to see how far the Raspberry Pi has come, and while I wish the GPU were more open so that researchers could more easily work with heterogeneous HPC coding rather than just the thousands of ARM cores, it is still impressive to see what is essentially a small supercomputer: a 1,008-node cluster for under $25,000!

I am interested to see how the researchers at Los Alamos put it to work and the eventual improvements to HPC and supercomputing software that come from this budget cluster project!

Source: BitScope

Microwave your RAM to make it faster?

Subject: General Tech | October 13, 2016 - 03:19 PM |
Tagged: terahertz, research, memory

You have probably heard recently of terahertz radiation being used to scan physical objects, be it the T-Rays at airports or the researchers at MIT reading books through their covers. There is more recent news of researchers utilizing the spectrum between 0.3 THz and 3 THz, this time pertaining to RAM cycles and the possibility of increasing the speed at which memory can flip between a 0 and a 1. In theory a terahertz electric field could flip bits 1,000 times faster than the electromagnetic process currently used in flash memory. This could also apply to the new prototype RAM technologies we have seen, such as MRAM, PRAM, or STT-RAM. This is still a long way off but a rather interesting read, especially if you can follow the links from The Inquirer to the Nature submission.

[Image: Electromagnetic spectrum]

"Using the prototypical antiferromagnet thulium orthoferrite (TmFeO3), we demonstrate that resonant terahertz pumping of electronic orbital transitions modifies the magnetic anisotropy for ordered Fe3+ spins and triggers large-amplitude coherent spin oscillations," the researchers helpfully explained."

Source: The Inquirer

NVIDIA Will Present Global Impact Award And $150,000 Grant To Researchers At GTC 2015

Subject: General Tech | April 8, 2014 - 05:03 PM |
Tagged: research, nvidia, GTC, gpgpu, global impact award

During the GPU Technology Conference last month, NVIDIA introduced a new annual grant called the Global Impact Award. The grant awards $150,000 to researchers using NVIDIA GPUs to tackle issues with worldwide impact such as disease research, drug design, medical imaging, genome mapping, urban planning, and other "complex social and scientific problems."

[Image: NVIDIA Global Impact Award]

NVIDIA will present the Global Impact Award to the winning researcher or non-profit institution at next year's GPU Technology Conference (GTC 2015). Individual researchers, universities, and non-profit research institutions that are using GPUs as a significant enabling technology in their research are eligible for the grant. Both third-party and self-nominations (.doc form) are accepted, with nominated candidates evaluated on several factors including the level of innovation, the social impact, and the current state of the research and its effectiveness in approaching the problem. Nominations are due by December 12, 2014, with the finalists to be announced by NVIDIA on March 13, 2015. NVIDIA will then reveal the winner of the $150,000 grant at GTC 2015 (April 28, 2015).

The researcher, university, or non-profit institution can be located anywhere in the world, and the grant money can be assigned to a department, an initiative, or a single project. The massively parallel nature of modern GPUs makes them ideal for many types of research with scalable workloads, and I think the Global Impact Award is a welcome incentive to encourage the use of GPGPU in applicable research projects. I am interested to see what the winner will do with the money and where the research leads.

More information on the Global Impact Award can be found on the NVIDIA website.

Source: NVIDIA

Good work if you can get in, Intel starts researching wetware-hardware interaction

Subject: General Tech | June 27, 2012 - 04:35 PM |
Tagged: Intel Science Technology Center, social, research, Intel

Intel has earmarked $15 million to be spent over the next five years researching how people interact with their machines. The focus will be on the social aspect rather than on hardware and software: discovering how people currently interact with their devices, from cell phones to servers, as well as investigating how they would like to interact with them. The Register believes this is an attempt to work on the next generation of patents and to avoid the fate of Xerox's PARC, which invented many of the communications technologies we take for granted today but never managed to capitalize on them successfully enough to survive in the market. Since Intel has the money to invest in research and a demonstrated ability to capitalize on its intellectual property, this expenditure makes sense and should help Intel remain at the top of the technological heap for quite a while. In the meantime, it sounds like a great project to be working on.

[Image: Intel Science Technology Center at UC Berkeley]

"The new Intel Science Technology Center is a $15m program funding five years of research into social and anthropological research into how people use technology. Rather than focus on how hardware and software are used, the new center will be looking at how human wetware interacts with the resulting data."

Source: The Register