Subject: General Tech | July 30, 2013 - 12:41 PM | Jeremy Hellstrom
Tagged: Intel, arm, low power, server, Avoton, rangeley
Intel envisions a sea change in the server room: servers, SANs, and racks of switches that have all been managed separately will become much more software-based as the ability to virtualize hardware becomes more prevalent. This is not to imply that the hardware will disappear, nor that Intel will go the way of IBM and get out of the chip business; neither is true. Instead, Intel is moving forward on the belief that the optimization of your virtualization software will be more important than specific hardware optimizations. It is great to have tiered storage, with expensive SSDs, solid SAS drives, and other longer-term, lower-availability media all working together, but there is little benefit if the software that allocates your data to those media doesn't do so properly.
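That allocation problem is easy to picture in code. Here is a minimal sketch of the kind of policy such tiering software has to get right, assuming a made-up two-tier promotion rule; none of these names or thresholds correspond to any real product's API.

```python
# Illustrative tiering policy: fast-but-scarce SSD capacity only pays off
# if the allocator actually keeps hot data on it. Names, capacities, and
# thresholds here are invented for the sake of the example.

from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    reads_last_hour: int

SSD_CAPACITY_BLOCKS = 2      # tiny on purpose, to force demotions
HOT_THRESHOLD = 100          # reads/hour before a block counts as "hot"

def assign_tiers(blocks):
    """Place the hottest blocks on SSD, everything else on SAS or archive."""
    ranked = sorted(blocks, key=lambda b: b.reads_last_hour, reverse=True)
    placement = {}
    for i, blk in enumerate(ranked):
        if i < SSD_CAPACITY_BLOCKS and blk.reads_last_hour >= HOT_THRESHOLD:
            placement[blk.block_id] = "ssd"
        elif blk.reads_last_hour > 0:
            placement[blk.block_id] = "sas"
        else:
            placement[blk.block_id] = "archive"
    return placement

if __name__ == "__main__":
    workload = [Block(1, 500), Block(2, 350), Block(3, 120), Block(4, 0)]
    print(assign_tiers(workload))
    # {1: 'ssd', 2: 'ssd', 3: 'sas', 4: 'archive'}
```

Get that ranking wrong and the expensive SSD tier sits full of cold data while hot blocks churn on slow media, which is exactly the failure the article is warning about.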
In this new server room the SoC could be king: modular designs that offer scalable processing power for any and all tasks, something ARM cut its teeth on and is now scaling up to become a major player in server room design. Intel is coming at this market segment from the other direction, as it has to trim power on its chips without crippling them the way we saw with the first 45nm Atom chips. To that end Intel is working on the new Silvermont architecture and the 22nm Avoton and Rangeley, which will be mature 64-bit chips, something ARM is still in the early days of. Check out more info on these two chips and their successors, along with a teaser on Broadwell, at The Tech Report.
"Last week, Intel hosted an event for press and analysts where it provided some updates on the state of its data center business. Then it proceeded to confound our expectations by demonstrating how it's gearing up for a protracted fight with ARM."
Here is some more Tech News from around the web:
- NVIDIA's Linux Driver On Ubuntu Is Very Competitive With Windows 8 @ Phoronix
- BeagleBone Black becomes a handheld classic gaming console @ Hack a Day
- Nvidia buys Portland Group for compiler smarts @ The Register
- KitGuru visit the Overclockers UK Store
- Funky Kit Interview with ASRock
NVIDIA Finally Gets Serious with Tegra
Tegra has had an interesting run of things. The original Tegra was utilized only by Microsoft in the Zune HD. Tegra 2 saw better adoption, but did not produce the design wins to propel NVIDIA to a leadership position in cell phones and tablets. Tegra 3 found a spot in Microsoft’s Surface, but that has turned out to be a far more bitter experience than expected. Tegra 4 has so far been integrated into a handful of products and is featured in NVIDIA’s upcoming Shield. It also hit production snags that pushed it to market later than expected.
I think the primary issue with the first three generations of products is pretty simple: there was a distinct lack of differentiation from the other ARM-based products around. Yes, NVIDIA brought its graphics prowess to the market, but never in a form that adequately distanced itself from the competition. Tegra 2 boasted GeForce-based graphics, but we did not find out until later that it comprised basically four pixel shaders and four vertex shaders, with more in common with the GeForce 7800/7900 series than with any of the modern unified architectures of the time. Tegra 3 boasted a big graphical boost, but it came in the form of doubling the pixel shader units and leaving the vertex units alone.
While NVIDIA had very strong developer relations and a leg up on the competition in terms of software support, it was never enough to propel Tegra beyond a handful of devices. NVIDIA is trying to rectify that with Tegra 4 and the 72 shader units it contains (still divided between pixel and vertex units). Tegra 4 is not perfect: it is late to market, and the GPU is not OpenGL ES 3.0 compliant. ARM, Imagination Technologies, and Qualcomm are offering new graphics processing units that are not only OpenGL ES 3.0 compliant but also offer OpenCL 1.1 support. Tegra 4 does not support OpenCL. In fact, it does not even support NVIDIA’s in-house CUDA. Ouch.
Jumping into a new market is not an easy thing, and invariably mistakes will be made. NVIDIA worked hard to build a solid foundation with these products, and certainly they had to learn to walk before they could run. Unfortunately, running effectively means earning design wins through outstanding features, performance, and power consumption, and NVIDIA was really only average in all of those areas. NVIDIA is hoping to change that, and its first salvo, a product whose features and support are a step above the competition, is what we are talking about today.
Subject: General Tech, Systems | July 14, 2013 - 11:51 PM | Tim Verry
Tagged: utilite, ubuntu, silent, SFF, linux, fanless, cortex-a9, compulab, arm, Android
CompuLab has announced a new fanless, small form factor PC called the Utilite. This new PC comes from the same company that engineered the MintBox, MintBox 2, and Fit PC series. The Utilite is a low-power desktop PC powered by a quad core ARM Cortex A9 processor and runs either Ubuntu Linux or Google’s Android operating system.
The upcoming Utilite measures 5.3” x 3.9” x 0.8” (135 x 100 x 21 mm) and consumes anywhere between 3W and 8W of power depending on the particular hardware configuration. It is designed to be a quiet desktop replacement with plenty of I/O.
CompuLab will provide single core, dual core, and quad core CPU SKUs. Specifically, the Utilite is powered by a Freescale i.MX6 ARM Cortex-A9 MPCore processor that is clocked at up to 1.2 GHz. Users will be able to further configure the system with up to 4GB of DDR3 1066 MHz memory and a 512GB mSATA SSD. Storage can be further expanded using microSDXC cards (maximum of 128GB per card). The GPU in the SoC is compatible with OpenGL ES 1.1 and 2.0 as well as OpenVG 1.1 and OpenCL EP. It is capable of hardware decoding of multi-stream 1080p video in a variety of common formats.
Wireless functionality includes an 802.11b/g/n Wi-Fi card and Bluetooth 3.0.
The Utilite has a dark gray case with silver front and rear bezels.
The front of the Utilite PC features the following I/O options in addition to the power button and indicator LEDs.
- 1 x USB OTG (Micro USB)
- 1 x RS232 (ultra mini serial connector)
- 1 x Micro SD card slot
- 2 x USB 2.0
- 2 x 3.5mm audio jacks (line in, line out)
The rear of the PC hosts:
- 1 x DC power input
- 1 x Wi-Fi antenna
- 1 x RS232 (ultra mini serial connector)
- 2 x USB 2.0
- 2 x Gigabit Ethernet RJ45 jacks
- 2 x HDMI video outputs
According to fanless PC guru FanlessTech, CompuLab will release the ARM-powered Utilite mini PC next month at a starting price of $99, with a variety of SKUs offering varying amounts of CPU cores, memory, and storage. The Utilite PC is a bit on the expensive side, but this is a system aimed at industrial and enterprise use as well as consumers, and Olivier from FanlessTech notes that the build quality should be on par with those markets' expectations.
Subject: General Tech | July 12, 2013 - 02:21 PM | Jeremy Hellstrom
The Tech Report recently attended a conference hosted by ARM in which the company described its business model and the various types of licenses it offers manufacturers. ARM is ahead of the curve in that it has fully abandoned being a hardware company, similar to what IBM did long ago and what AMD has attempted with GLOBALFOUNDRIES. ARM develops its new architectures in house, but instead of fabbing them it licenses the technology to anyone who wants to build the processors into their own products. This lets customers tailor a processor to meet their needs exactly. Take a peek at ARM's three licensing models and its overall business model in the full article.
"ARM summoned some of the world's most technically inclined journalists and analysts to a confab at its headquarters in Cambridge, England recently. In an obvious case of poor judgment, I was also allowed to attend. Happily, I now get the chance to tell you another side of the story of the next generation of low-power processors."
Here is some more Tech News from around the web:
- Electro-permanent magnets for quadcopters @ Hack a Day
- HP Keeps Installing Secret Backdoors In Enterprise Storage @ Slashdot
- ARM Steps Into Networking, Running Linux @ Linux.com
- Konami asks users to change passwords after 35,000 accounts were accessed @ The Inquirer
- ARMA games publisher Bohemia Interactive resets passwords after attack @ The Inquirer
- Ubuntu 13.10 to ship with Mir instead of X @ The Register
Subject: General Tech | June 19, 2013 - 09:51 PM | Josh Walrath
Tagged: Volta, nvidia, maxwell, licensing, kepler, Denver, Blogs, arm
Yesterday we all saw the blog post from NVIDIA stating that it will start licensing its IP to interested third parties. Obviously, there was a lot of discussion about this particular move: some were in favor, some were opposed, and others thought NVIDIA was now simply roadkill. I believe it is an interesting move, but we are not yet sure of the exact details or of the repercussions such a decision will have for NVIDIA.
The biggest bombshell of the entire post was that NVIDIA would be licensing out its latest architecture to interested clients. The Kepler architecture powers the very latest GTX 700 series of cards, and at the top end it is considered one of the fastest and most efficient architectures out there. Seemingly, there is a price for this, though. Time to dig a little deeper.
Kepler will be the first technology licensed to third-party manufacturers. We will not see full GPUs; the licensed designs will only be integrated into mobile products.
The very latest Tegra parts from NVIDIA do not feature the Kepler architecture for the graphics portion. Instead, the units featured in Tegra can almost be described as GeForce 7000 series parts. The computational units are split between pixel shaders and vertex shaders, and they support at most D3D feature level 9_3 and OpenGL ES 2.0. This is a far cry from a unified shader architecture with support for the latest D3D 11 and OpenGL ES 3.0 specifications. Other mobile SoCs feature the latest Mali and Adreno series of graphics units, which are unified and support DX11 and OpenGL ES 3.0.
So why exactly do the latest Tegras not share the Kepler architecture? Hard to say. It could be a variety of factors, including time to market, available engineering teams, and simulations dictating whether power and performance are better served by a less complex unit. Kepler is not simple. A Kepler unit occupying the same die space could potentially consume more power under any given workload, or conversely perform poorly within the same power envelope.
We can look at the desktop side of this argument for some kind of proof. At the top end, Kepler is a champ: the GTX 680/770 has outstanding performance and consumes far less power than the competition from AMD. Moving down a notch to the GTX 660 Ti/HD 7800 series of cards, we see much greater parity in performance and power consumption. Comparing the HD 7790 to the 650 Ti Boost, the Boost part performs slightly better but consumes significantly more power. Then we move down to the 650 and 650 Ti, parts that do not consume any more power than the competing AMD parts but also perform much more poorly. I know these are some pretty hefty generalizations, and the engineers at NVIDIA could very well port Kepler over to mobile applications without significant performance or power penalties. But so far, we have not seen this work.
Power, performance, and die area aside, there is another issue to factor in. NVIDIA just announced that it is doing this. We have no idea how long this effort has been going on, but it is very likely that it has only been worked on for the past six months. In that time NVIDIA needs to hammer out how it will license the technology, how much manpower it must provide licensees to get those parts up and running, and what kind of fees it will charge. There is a lot of work going on there, and this is not a simple undertaking.
So let us assume that some three months ago an interested partner such as Rockchip or Samsung came knocking at NVIDIA’s door. They work out the licensing agreements, which takes several months. Then we start to see the transfer of technology between the companies. Obviously Samsung and Rockchip are not going to apply this graphics architecture to currently shipping products; instead they will bundle it into a next-generation ARM-based design. These designs are not spun out overnight. For example, the 64-bit ARMv8 designs have been finalized for around a year, and we do not expect initial parts to ship until late 1H 2014. So any partner that decides to utilize NVIDIA’s Kepler architecture for such an application will not see the part released until 1H 2015 at the very earliest.
Shield is still based on a GPU possessing separate pixel and vertex shaders. DX11 and OpenGL ES 3.0? Nope!
If someone decides to license this technology from NVIDIA, it will not be of great concern to the company. The next generation of NVIDIA graphics will already be out by that time, and we could very well be approaching the iteration after that on the desktop side. NVIDIA plans to release a Kepler-based mobile unit in 2014 (Logan), which would be a full year ahead of any competing licensed product. In 2015 NVIDIA plans to release an ARM product based on the Denver CPU and Maxwell GPU. So we can easily see that NVIDIA will only be licensing out an older-generation product, and it will not face direct competition when it comes to GPUs. NVIDIA is obviously hoping that its GPU tech will still be a step ahead of ARM (Mali), Qualcomm (Adreno), and Imagination Technologies (PowerVR).
This is an easy and relatively pain-free way to test the waters that ARM, Imagination Technologies, and AMD are already treading. ARM only licenses IP and has shown the world that a company can not only succeed at that model but thrive. Imagination Tech used to produce its own chips, much like NVIDIA does, but it changed direction and continues to be profitable. AMD recently opened up about its semi-custom design group, which will design specific products for customers and license those designs out. I do not think this is a desperation move by NVIDIA, but it certainly is one that is probably a little late in coming. The mobile market is exploding, and we are approaching a time when nearly every electrically powered item will have some kind of logic in it; billions of chips a year will be sold. NVIDIA obviously wants a piece of that market. Even a small piece of “billions” is going to be significant to the bottom line.
Subject: General Tech | June 19, 2013 - 03:02 PM | Jeremy Hellstrom
Tagged: amd, Kyoto, berlin, seattle, warsaw, arm
DigiTimes named the four new families of server chips that AMD will use to keep its products in the server room. Kyoto, known as the Opteron X-series, is available now; it is based on Jaguar and offers GPU compute enhancements as well as increased CPU performance. The Seattle family will replace these CPUs in the near future and will represent a new era for AMD, as these chips will be clusters of ARM Cortex-A57 cores on AMD's Freedom Fabric. Berlin will be a true x86 AMD chip with the new Steamroller architecture, which will replace Piledriver and support HSA-compliant optimizations. Last is Warsaw, the most powerful chip, uniting 12 or 16 Piledriver cores in a package compatible with the Socket G34 currently used by the Opteron 6300 family, offering a simple drop-in upgrade.
"AMD has publicly disclosed its strategy and roadmap to recapture market share in enterprise and data center servers by unveiling products that address key technologies and meet the requirements of the fastest-growing data center and cloud computing workloads."
Here is some more Tech News from around the web:
- Nvidia stretches CUDA coding to ARM chips @ The Register
- Intel previews future 'Knights Landing' Xeon Phi x86 coprocessor with integrated memory @ The Register
- Fusion-io's founding CEO quits board @ The Register
- Apple issues Java patch for Mac OS X users fixing 40 critical vulnerabilities @ The Inquirer
- Flash flaw potentially makes every webcam or laptop a PEEPHOLE @ The Register
- The Linux Kernel As An Exquisitely Sensitive Stability Test For Overclocked Systems @ TechARP
- Samsung EX2F Camera Review - A Low-Light Advanced Point-And-Shoot For Any Photographer @ SSD Review
- Australian unis to test quantum-comms-over-fibre @ The Register
- Uros Goodspeed review: MiFi, but bigger @ Hardware.info
- Adding wireless charging to any phone @ Hack a Day
- Canon PowerShot N Review @ TechReviewSource
- E3 2013: Wrap Up Coverage @ Legit Reviews
Subject: General Tech | June 17, 2013 - 02:37 PM | Jeremy Hellstrom
Tagged: arm, clover trail, tegra 3
ARM might be in for more of a fight than we had thought if it wants to keep its market share in the next generation of cellphones, assuming of course that they are sold in North America. The Register posted about recent research contrasting performance and power efficiency across several mobile CPUs: the Lenovo K900 with a 2.0GHz Atom Z2580, a Samsung Nexus 10 with a dual-core 1.7GHz Cortex-A15, a Galaxy S4 phone running a "big.LITTLE" Exynos Octa with paired quad-core Cortex-A15 and Cortex-A7 clusters, and even an Asus Nexus 7 with an NVIDIA Tegra 3. Those devices give a good representation of current-generation technology, and it seems that while the performance of the top devices was very similar, Intel's new Atom did it drawing only 0.85A on average, about 62% of the 1.38A drawn by the next most frugal competitor. Atom seems to have finally found a market segment it can do very well in, as long as the price is right.
"The industry analysts at ABI Research pitted a Lenovo smartphone based on Intel's Atom-based Clover Trail+ platform against a quartet of ARM-based systems, and Chipzilla's system not only kept pace with the best of them, but did so using less power."
Here is some more Tech News from around the web:
- Optimized Binaries Provide Great Benefits For Intel Haswell @ Phoronix
- Samsung releases PCI-Express SSD for ultrabooks @ The Inquirer
- Intel 2014 Haswell-E to pack 8 cores, DDR4, X99 PCH and more @ VR-Zone
- Microsoft unleashes wave of Azure mobile updates @ The Register
- Critical Java SE update due Tuesday fixes 40 flaws @ The Register
- Blackberry 10.2 will support Android 4.2.2 Jelly Bean apps @ The Inquirer
- Android 5.0 Key Lime Pie to come in late October, also optimized for older phones @ VR-Zone
- Letting Bluetooth take the wires out of your headphones @ Hack a Day
- Adding WiFi to a kid’s tablet @ Hack a Day
- Intel bakes smaller, slower flash memory. Aah, now that's progress @ The Register
- TRENDnet AC1200 Dual Band Wireless USB Adapter (TEW-805UB) Review @ Madshrimps
- Computex 2013 Madshrimps Style @ Madshrimps
- AMD Today & Beyond Event @ SilverSpoon, Publika @ TechARP
- ModSynergy 10-Year Celebration Contest - USA and International Edition
Subject: General Tech | June 7, 2013 - 03:18 PM | Jeremy Hellstrom
Tagged: arm, 64bit, servers
With Calxeda and Applied Micro showing off ARM64-based servers at Computex this year, in addition to the existing products coming from Marvell and Dell, DigiTimes' prediction that 64-bit ARM processors will quickly grow in popularity seems well founded. It was not too long ago that many thought ARM was fooling itself if it believed it could take server space from AMD and Intel, but it looks like ARM was right to develop server chips. With low power usage becoming more attractive than processor overkill, and with modularity growing in importance, ARM seems poised to perform far beyond expectations. Expect to see a lot more news on ARM64 processors and products over the coming months.
"Although Intel platforms are still the mainstream in the server industry, since 64-bit products have a broader range of applications, and ARM has been aggressively promoting related products, sources from the server industry expect more 64-bit ARM-based products to appear in the market between the end of 2013 and the first quarter of 2014."
Here is some more Tech News from around the web:
- One Year After World IPv6 Launch — Are We There Yet? @ Slashdot
- The best and worst of Computex 2013 @ The Inquirer
- YES, Xbox One DOES need internet, DOES restrict game trading @ The Register
- Interview: Steve Jackson, role-playing game titan @ The Register
- Neteller vs Payoneer - Online Payment and Prepaid Cards @ FunkyKit
- How to Install Linux @ Linux.com
Cortex-A12 fills a gap
Starting off Computex with an interesting announcement, ARM is talking about a new Cortex-A12 core that will attempt to address a performance gap in the SoC ecosystem between the A9 and A15. In the battle to compete with Krait and Intel's Silvermont architecture due in late 2013, ARM definitely needed to fill the space between the A9's efficiency and the A15's performance.
Source: ARM. Top to bottom: Cortex-A15, A12, A9 die size estimate
Targeted at mid-range devices that tend to be more cost- (and thus die-size-) limited, the Cortex-A12 will begin product sampling in late 2014, and you should see hardware for sale in early 2015.
Architecturally, the changes for the upcoming A12 core revolve around a move to a fully out-of-order, dual-issue design, including the integrated floating point units. The execution units are faster and the memory subsystem has been improved, but ARM wasn't ready to talk specifics with me yet; expect those later in the year.
ARM claims this results in a 40% performance gain for the Cortex-A12 over the Cortex-A9, as tested in SPECint. Because the product won't even start sampling until late 2014, we have no way to verify this data yet or to evaluate the efficiency claims. That time lag between announcement and release will also give competitors like Intel, AMD, and even Qualcomm time to answer back, potentially with earlier availability.
Subject: General Tech | May 29, 2013 - 05:20 PM | Tim Verry
Tagged: x11, weston, wayland, videocore iv, Raspberry Pi, linux, bcm2835, arm
The Raspberry Pi Foundation has been working with Collabora to fund development of a Wayland display server that is compatible with the Raspberry Pi and also allows the continued use of legacy X applications.
So far, operating systems that run on the Raspberry Pi have used X as the display server and window compositor. The Raspberry Pi Foundation wants to move to a compositor that takes advantage of the Raspberry Pi's Hardware Video Scaler (HVS) and lifts the burden of window composition off of the comparatively slow ARM CPU. The Foundation has chosen Wayland as the display server for the task.
The Raspberry Pi Model A.
Taking advantage of the HVS and the OpenGL ES-compatible GPU will make the system feel much more responsive and allow for advanced effects (fading, Exposé-like window browsers, and so on) for those that like a little more bling with their OS.
The Wayland/Weston display server allows for GPU acceleration and window composition using the Pi's VideoCore IV GPU and HVS (which is independent of the hardware units that run OpenGL code). The display server feeds the entire set of windows, along with how they should be laid out on screen (stacking order, transparency, 2D transforms, etc.), to the HVS, which accelerates the composition in hardware and frees the ARM CPU for other tasks.
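To make that handoff concrete, here is a toy sketch of a compositor building a display list for a hardware blender. These structures are invented for explanation only; the real interface lives in the VideoCore firmware and the Weston backend, not in Python.

```python
# Illustrative model of the compositor-to-HVS handoff described above.
# The display server describes *what* goes where; the hardware does the
# per-pixel scaling and blending, so the ARM core never touches pixels.

from dataclasses import dataclass, field

@dataclass
class Window:
    surface_id: int
    x: int                  # 2D transform, reduced here to a translation
    y: int
    z_order: int            # stacking order, higher is closer to the viewer
    alpha: float = 1.0      # per-window transparency

@dataclass
class DisplayList:
    layers: list = field(default_factory=list)

def build_display_list(windows):
    """Sort windows bottom-to-top, the order a hardware blender composites in."""
    dl = DisplayList()
    for w in sorted(windows, key=lambda w: w.z_order):
        dl.layers.append(w)
    return dl

windows = [
    Window(surface_id=1, x=0,   y=0,  z_order=0),             # desktop
    Window(surface_id=2, x=40,  y=30, z_order=2, alpha=0.9),  # terminal
    Window(surface_id=3, x=200, y=90, z_order=1),             # browser
]
print([w.surface_id for w in build_display_list(windows).layers])  # [1, 3, 2]
```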
According to the Raspberry Pi Foundation, the Raspberry Pi's HVS is fairly powerful for a mobile-class SoC with 500 Megapixel/s scaling throughput and 1 Gigapixel per second blending throughput.
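Those numbers go a long way. A quick back-of-the-envelope calculation, using only the figures quoted above:

```python
# Back-of-the-envelope math on the HVS blending figure quoted above.
BLEND_THROUGHPUT = 1_000_000_000          # 1 Gigapixel/s, per the Foundation
pixels_per_1080p_frame = 1920 * 1080      # ~2.07 Mpixels
cost_per_layer_at_60hz = pixels_per_1080p_frame * 60   # ~124 Mpixels/s
print(BLEND_THROUGHPUT / cost_per_layer_at_60hz)        # ~8 full-screen layers
```

In other words, the HVS has enough blending throughput for roughly eight full-screen 1080p layers at 60Hz, plenty of headroom for a desktop's worth of windows.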
In addition to GPU acceleration, Wayland will allow non-rectangular windows, fading and other effects, support for legacy X applications with Xwayland, and a scaled window browser.
The Raspberry Pi Foundation has been working with developers since late last year and is nearly ready to roll a technology preview into the next Raspbian operating system release. The developers are still working on improving the performance and reducing memory usage. As a result, the new Wayland/Weston display server is not expected to become the new default in the various Raspberry Pi operating systems until late 2013 at the earliest.
This is a really nice project to see, especially since at least a small part of the development work going into supporting the ARM-based Raspberry Pi on Wayland will help other ARM devices, and Wayland in general, which is becoming an increasingly popular choice in new Linux distributions and the best X alternative so far. Of course, this update will primarily benefit those Raspberry Pi users that run OSes with GUIs, as the responsiveness should get a lot snappier!
If you simply can't wait until later this year, it is possible to install the technology preview (beta) of Wayland/Weston onto the current version of Raspbian Linux by cloning the git project or installing a Raspbian package of Weston 1.0. Blogger Daniel Stone has all the details for installing the display server onto your Pi under the section titled "sounds great; how do i get it?" in this post.
See a video of Wayland technology preview in action on the Raspberry Pi on the Raspberry Pi Foundation's blog.
Read more about the Raspberry Pi at PC Perspective.