Subject: Processors, Mobile, Shows and Expos | October 26, 2013 - 08:13 AM | Ryan Shrout
Tagged: techcon, iot, internet of things, arm
This year at the Santa Clara Convention Center, ARM will host TechCon, a gathering of partners, customers, and engineers with the goal of collaboration and connection. While I will be attending as an outside observer to see what this collection of innovators is creating, there will be sessions and tracks for chip designers, system implementation engineers, and software developers.
Areas of interest will include consumer products, enterprise products and of course, the Internet of Things, the latest terminology for a completely connected infrastructure of devices. ARM has designed tracks for interested parties in chip design, data security, mobile, networking, server, software and quite a few more.
Of direct interest to PC Perspective and our readers will be the continued release of information about the Cortex-A12, the upcoming mainstream processor core from ARM that will address the smartphone and tablet markets. We will also get some time with ARM engineers to talk about the market's coming migration to 64-bit. With Apple's A7 SoC bringing 64-bit and the ARMv8 architecture to market earlier this year, that migration is sure to be the most extensively discussed topic. If you have specific questions you'd like us to bring to the folks at ARM, as well as its partners, please leave me a note in the comments below and I'll be sure they are addressed!
I am also hearing some rumblings of a new ARM developed Mali graphics product that will increase efficiency and support newer graphics APIs as well.
Even if you cannot attend the event in Santa Clara, you should definitely pay attention to the news and products that are announced and shown at ARM TechCon, as they are going to be a critical part of the mobile ecosystem in the near, and distant, future. As a first-time attendee myself, I am incredibly excited about what we'll find and learn next week!
Subject: General Tech | September 18, 2013 - 09:49 AM | Jeremy Hellstrom
Tagged: amd, arm, Cortex-A57, servers, seattle
DigiTimes spoke with AMD's current server guru about the company's move from providing only x86/64-based server processors to including ARM cores in the Seattle chip family. These will be the first processors from AMD to use 64-bit Cortex-A57 cores, and AMD hopes to sell them to companies that depend on Hadoop or run web hosting services, workloads that benefit from scalability. As these will be true APUs as well, any application which can be accelerated by a GPU will also greatly benefit from the new design. It is AMD's hope to offer server customers a real choice of architecture for their server rooms, rather than just competing x86/64 chips.
"Commenting on AMD's decision to make ARM-based processors for servers, corporate vice president and general manager of AMD's server business, Suresh Gopalakrishnan, said that as more server applications will show up in the future, different architectures will provide different advantages to clients. Providing solutions based on market demand will be the major business strategy for AMD's server business, Gopalakrishnan noted."
Here is some more Tech News from around the web:
- Explaining the low level stuff you don’t know about ARM programming @ Hack a Day
- Nvidia announces the Tegra Note Android tablet prototype @ The Inquirer
- Microsoft relents: 'Go ahead, install Windows 8.1 on clean PCs' @ The Register
- IBM Bets Big Again on Linux: $1B for Linux on Power Systems @ Linux.com
- Windows Phone 8 is deemed secure by the US and Canadian governments @ The Inquirer
- Blackberry Z30 Phablet Announced @ Slashdot
Subject: General Tech | September 13, 2013 - 01:15 AM | Tim Verry
Tagged: solidrun, SFF, Freescale, cubox-i, arm
SolidRun Ltd. has come up with its own ARM-powered mini computer called the CuBox-i. The new PC measures 2” x 2” x 2” and has some respectable IO for its size. The CuBox-i comes in multiple flavors from $45 to $120. The cheapest version competes in many ways with the Raspberry Pi while the top-end device is more in line with Android development boards that tend to run in the hundreds of dollars.
There are actually four SKUs in the CuBox-i series.
The CuBox-i PCs are powered by a single-, dual-, or quad-core variant of a Freescale i.MX6 SoC clocked at up to 1.2 GHz. The SoC uses the ARMv7 instruction set and includes dedicated NEON media encode/decode hardware. The GPU included in the SoC supports OpenGL ES 2.0 on all models, and the GPU in the two higher-end models further supports the OpenCL 1.1 Embedded Profile. Memory is 512MB on the $45 CuBox-i1, 1GB on both CuBox-i2 systems, and 2GB of DDR3 on the CuBox-i4Pro. The mini PCs support 1080p video playback and are compatible with Android 4.2.2, XBMC, and various Linux distributions.
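For readers curious how software tells these SKUs apart at runtime, ARM Linux boards like these expose the core count and NEON support through /proc/cpuinfo. Below is a minimal sketch; the sample text is an illustration of a quad-core ARMv7 layout, not output captured from an actual CuBox-i:

```python
# Sketch: detect core count and NEON support on an ARM Linux board by
# parsing /proc/cpuinfo. SAMPLE is illustrative of a quad-core ARMv7
# (i.MX6-style) layout, NOT real CuBox-i output.
SAMPLE = """\
processor\t: 0
model name\t: ARMv7 Processor rev 10 (v7l)
Features\t: swp half thumb fastmult vfp edsp neon vfpv3
processor\t: 1
Features\t: swp half thumb fastmult vfp edsp neon vfpv3
processor\t: 2
Features\t: swp half thumb fastmult vfp edsp neon vfpv3
processor\t: 3
Features\t: swp half thumb fastmult vfp edsp neon vfpv3
"""

def parse_cpuinfo(text):
    """Return (core_count, has_neon) from cpuinfo-style text."""
    lines = text.splitlines()
    cores = sum(1 for line in lines if line.startswith("processor"))
    has_neon = any(
        line.startswith("Features") and "neon" in line.split(":", 1)[1].split()
        for line in lines
    )
    return cores, has_neon

if __name__ == "__main__":
    # On real hardware you would read open("/proc/cpuinfo").read() instead.
    cores, neon = parse_cpuinfo(SAMPLE)
    print(f"{cores} cores, NEON: {neon}")
```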
IO on the CuBox-i PCs includes two powered USB 2.0 ports, HDMI, Ethernet (Gigabit on the higher-end models, though limited to less than 470 Mbps internally), one eSATA 3Gbps port, an optical S/PDIF output, a microSD slot, microUSB (RS-232 adapter on higher-end models), and an infrared receiver. The two higher-priced models also include an infrared transmitter. The high-end systems also support 802.11b/g/n Wi-Fi, Bluetooth, and a hardware RTC (real-time clock) with backup battery.
The table above shows the breakdown of IO and internal hardware in the various SKUs. While the systems start at $45, it is the higher priced models that add some interesting features. It is always nice to see competition in the mini PC space. The CuBox-i series will be available in limited quantities later this year. Pre-order pricing breaks down as follows:
- CuBox-i1 for $45
- CuBox-i2 for $70
- CuBox-i2Ultra for $95
- CuBox-i4Pro for $120
Compared to the previously-announced CuBox Pro, the CuBox-i series is slightly cheaper, uses a faster SoC, and is available in multiple SKUs. For example, the top-end CuBox-i4Pro is a bit cheaper at $120 versus $160 for the CuBox Pro's original price. Naturally, the lower end CuBox-i's are even cheaper but also have less memory and IO.
Subject: General Tech | August 6, 2013 - 10:28 AM | Tim Verry
Tagged: asus, memo pad hd 7, eee memo pad, mediatek, mt8125, arm, Android 4.2.1
ASUS has released its own spin on a budget tablet with the new MeMo Pad HD 7. An updated model of the original MeMo Pad, the new 7” tablet runs Android 4.2.1 with newer hardware.
On the outside, the MeMo Pad HD 7 features a 1280x800 IPS display, 1.2 MP webcam, and 5 MP rear camera with auto focus. The top of the tablet hosts a micro USB port, microphone, and headphone jack. The MeMo Pad HD 7 measures 7.7” x 4.7” x 0.43” and weighs 0.67 pounds (303 grams). The MeMo Pad HD 7 comes in blue, green, pink, and white.
The ASUS MeMo Pad HD 7 is powered by a quad core MediaTek MT8125 SoC clocked at 1.2GHz, 1GB of RAM, 16GB of internal storage, and a 15Whr battery. Wireless radios include 802.11b/g/n Wi-Fi, Bluetooth 4.0, and Miracast wireless display support. The device also has two built in stereo speakers with Sonic Master audio technology.
Best of all, the budget tablet is available now with a price of $149. As an even more affordable alternative to the new Nexus 7, the ASUS MeMo Pad HD 7 looks to be a decent device.
Subject: Editorial, General Tech, Processors, Mobile | August 3, 2013 - 04:21 PM | Scott Michaud
Tagged: qualcomm, Intel, mediatek, arm
MediaTek, do you even lift?
According to a Taiwan Media Roundtable transcript, discovered by IT World, Qualcomm has no interest, at least at the moment, in developing an octo-core processor. MediaTek, their competitor, recently unveiled an eight core ARM System on a Chip (SoC) which can be fully utilized. Most other mobile SoCs with eight cores function as a fast quad-core and a slower, but more efficient, quad-core processor with the most appropriate chosen for the task.
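That "fast cluster plus efficient cluster" arrangement is ARM's big.LITTLE pairing, where the scheduler migrates work to whichever cluster suits the load. As a rough conceptual sketch only (the threshold and the selection rule below are invented for illustration, not MediaTek's or anyone's actual scheduler):

```python
# Illustrative sketch of big.LITTLE-style cluster selection: work runs on
# the low-power "LITTLE" cluster until its demand crosses a threshold,
# then migrates to the fast "big" cluster. The 0.6 threshold is made up.
def choose_cluster(load, threshold=0.6):
    """Pick a cluster for a task given its load as a fraction (0.0-1.0)."""
    return "big" if load > threshold else "LITTLE"

def schedule(loads):
    """Map a list of task loads to their assigned clusters."""
    return [choose_cluster(load) for load in loads]

if __name__ == "__main__":
    print(schedule([0.1, 0.3, 0.9, 0.7]))
```

MediaTek's pitch is that all eight cores can run simultaneously rather than one cluster at a time, which is exactly the design choice Qualcomm is dismissing above.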
Anand Chandrasekher of Qualcomm believes it is desperation.
So, I go back to what I said: it's not about cores. When you can't engineer a product that meets the consumers' expectations, maybe that’s when you resort to simply throwing cores together. That is the equivalent of throwing spaghetti against the wall and seeing what sticks. That's a dumb way to do it and I think our engineers aren't dumb.
The moderator, clearly amused by the reaction, requested a firm clarification that Qualcomm will not launch an octo-core product. A firm, but not entirely clear, response was given: "We don't do dumb things". Of course they would not commit to swearing off eight cores for all eternity; at some point they may find core count to be their bottleneck, but that is not the case for the moment. They will also not discuss whether bumping the clock rate is the best option or whether they should focus on graphics performance. He is simply assured that they are focused on the best experience for whatever scenario each product is designed to solve.
And he is assured that Intel, his former employer, still cannot catch them. As we have discussed in the past: Intel is a company that will spend tens of billions of dollars, year over year, to out-research you if they genuinely want to play in your market. Even with his experience at Intel, he continues to take them lightly.
We don't see any impact from any of Intel's claims on current or future products. I think the results from empirical testers on our products that are currently shipping in the marketplace is very clear, and across a range of reviewers from Anandtech to Engadget, Qualcomm Snapdragon devices are winning both on experience as well as battery life. What our competitors are claiming are empty promises and is not having an impact on us.
Qualcomm has a definite lead, at the moment, and may very well keep ahead through Bay Trail. AMD, too, kept a lead throughout the entire Athlon 64 generation and believed they could beat anything Intel could develop. They were complacent, much as Qualcomm sounds currently, and when Intel caught up AMD could not float above the sheer volume of money trying to drown them.
Then again, even if you are complacent, you may still be the best. Maybe Intel will never get a Conroe moment against ARM.
Subject: General Tech | July 30, 2013 - 09:41 AM | Jeremy Hellstrom
Tagged: Intel, arm, low power, server, Avoton, rangeley
Intel envisions a sea change in the server room, with servers, SANs, and racks of switches, all of which have been controlled separately, becoming much more software-based as the ability to virtualize hardware becomes more prevalent. This is not to imply that the hardware will disappear, or that Intel will go the way of IBM and get out of the chip business; neither is true. Instead, Intel is moving forward on the belief that the optimization of your virtualization software will be more important than specific hardware optimizations. It is great to have tiered storage with expensive SSDs, solid SAS drives, and other longer-term, lower-availability media all working together, but there is little benefit if the software which allocates your data to those media doesn't do so properly.
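The allocation problem Intel is describing can be pictured with a toy example: hot data belongs on the expensive, fast tier and cold data on cheap, slower media. A minimal sketch of the idea, where the tier names and access-count thresholds are invented purely for illustration:

```python
# Toy sketch of software-defined tiered storage: route each block of data
# to SSD, SAS, or archive media based on how often it is accessed.
# Tier names and thresholds are invented for illustration.
def place(access_count, hot=100, warm=10):
    """Pick a storage tier for a block with the given access count."""
    if access_count >= hot:
        return "ssd"
    if access_count >= warm:
        return "sas"
    return "archive"

def allocate(blocks):
    """blocks: dict of block_id -> access count; returns block_id -> tier."""
    return {block_id: place(count) for block_id, count in blocks.items()}

if __name__ == "__main__":
    print(allocate({"logs": 3, "index": 250, "reports": 40}))
```

Get this placement policy wrong and the expensive SSD tier sits idle while hot data grinds on slow disks, which is Intel's point about software mattering more than the individual pieces of hardware.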
In this new server room the SoC could be king: modular designs offering scalable processing power for any and all tasks, which is something ARM cut its teeth on and is now scaling up to become a major player in server room design. Intel is coming at this market segment from the other direction, as it has to trim power on its chips without crippling them like we saw in the first 45nm Atom chips. To that end it is working on the new Silvermont architecture and the 22nm Avoton and Rangeley, which will be mature 64-bit chips, an area where ARM is still in its early days. Check out more info on these two chips and their successors, along with a teaser on Broadwell, at The Tech Report.
"Last week, Intel hosted an event for press and analysts where it provided some updates on the state of its data center business. Then it proceeded to confound our expectations by demonstrating how it's gearing up for a protracted fight with ARM."
Here is some more Tech News from around the web:
- NVIDIA's Linux Driver On Ubuntu Is Very Competitive With Windows 8 @ Phoronix
- BeagleBone Black becomes a handheld classic gaming console @ Hack a Day
- Nvidia buys Portland Group for compiler smarts @ The Register
- KitGuru visit the Overclockers UK Store
- Funky Kit Interview with ASRock
NVIDIA Finally Gets Serious with Tegra
Tegra has had an interesting run of things. The original Tegra 1 was utilized only by Microsoft with Zune. Tegra 2 had a better adoption, but did not produce the design wins to propel NVIDIA to a leadership position in cell phones and tablets. Tegra 3 found a spot in Microsoft’s Surface, but that has turned out to be a far more bitter experience than expected. Tegra 4 so far has been integrated into a handful of products and is being featured in NVIDIA’s upcoming Shield product. It also hit some production snags that made it later to market than expected.
I think the primary issue with the first three generations of products is pretty simple: there was a distinct lack of differentiation from the other ARM-based products around. Yes, NVIDIA brought its graphics prowess to the market, but never in a form that adequately distanced itself from the competition. Tegra 2 boasted GeForce-based graphics, but we did not find out until later that it consisted of basically four pixel shaders and four vertex shaders that had more in common with the GeForce 7800/7900 series than with any of the modern unified architectures of the time. Tegra 3 boasted a big graphical boost, but it came in the form of doubling the pixel shader units and leaving the vertex units alone.
While NVIDIA had very strong developer relations and a leg up on the competition in terms of software support, it was never enough to propel Tegra beyond a handful of devices. NVIDIA is trying to rectify that with Tegra 4 and the 72 shader units that it contains (still divided between pixel and vertex units). Tegra 4 is not perfect in that it is late to market and the GPU is not OpenGL ES 3.0 compliant. ARM, Imagination Technologies, and Qualcomm are offering new graphics processing units that are not only OpenGL ES 3.0 compliant, but also offer OpenCL 1.1 support. Tegra 4 does not support OpenCL. In fact, it does not support NVIDIA’s in-house CUDA. Ouch.
Jumping into a new market is not an easy thing, and invariably mistakes will be made. NVIDIA worked hard to make a solid foundation with their products, and certainly they had to learn to walk before they could run. Unfortunately, running effectively entails having design wins due to outstanding features, performance, and power consumption. NVIDIA was really only average in all of those areas. NVIDIA is hoping to change that. Their first salvo into offering a product that offers features and support that is a step above the competition is what we are talking about today.
Subject: General Tech, Systems | July 14, 2013 - 08:51 PM | Tim Verry
Tagged: utilite, ubuntu, silent, SFF, linux, fanless, cortex-a9, compulab, arm, Android
CompuLab has announced a new fanless, small form factor PC called the Utilite. This new PC comes from the same company that engineered the MintBox, MintBox 2, and Fit PC series. The Utilite is a low-power desktop PC powered by a quad core ARM Cortex A9 processor and runs either Ubuntu Linux or Google’s Android operating system.
The upcoming Utilite measures 5.3” x 3.9” x 0.8”(135 x 100 x 21mm) and consumes anywhere between 3W and 8W of power depending on the particular hardware configuration. It is designed to be a quiet desktop replacement with plenty of IO.
CompuLab will provide single core, dual core, and quad core CPU SKUs. Specifically, the Utilite is powered by a Freescale i.MX6 ARM Cortex-A9 MPCore processor that is clocked at up to 1.2 GHz. Users will be able to further configure the system with up to 4GB of DDR3 1066 MHz memory and a 512GB mSATA SSD. Storage can be further expanded using Micro SD-XC cards (maximum of 128GB per card). The GPU in the SoC is compatible with OpenGL ES 1.1 and 2.0 as well as OpenVG 1.1 and OpenCL EP. It is capable of hardware decoding multi-stream 1080p video in a variety of common formats.
Wireless functionality includes an 802.11b/g/n Wi-Fi card and Bluetooth 3.0.
The Utilite has a dark gray case with silver front and rear bezels.
The front of the Utilite PC features the following IO options in addition to the power button and indicator LEDs.
- 1 x USB OTG (Micro USB)
- 1 x RS232 (ultra mini serial connector)
- 1 x Micro SD card slot
- 2 x USB 2.0
- 2 x 3.5mm audio jacks (line in, line out)
The rear of the PC hosts:
- 1 x DC power input
- 1 x Wi-Fi antenna
- 1 x RS232 (ultra mini serial connector)
- 2 x USB 2.0
- 2 x Gigabit Ethernet RJ45 jacks
- 2 x HDMI video outputs
According to fanless PC guru FanlessTech, CompuLab will be releasing the ARM-powered Utilite mini PC next month with a starting price of $99 and a variety of SKUs with varying amounts of CPU cores, memory, and storage. The Utilite PC is a bit on the expensive side, but this is a system aimed at industrial and enterprise use as well as consumers, and Olivier from FanlessTech notes that build quality should be on par with those markets.
Subject: General Tech | July 12, 2013 - 11:21 AM | Jeremy Hellstrom
The Tech Report recently attended a conference hosted by ARM in which the company described its business model and the various types of licenses it offers manufacturers. ARM is ahead of the curve in that it has fully abandoned being a hardware company, similar to what IBM did long ago and what AMD has attempted with GLOBALFOUNDRIES. ARM develops its new architectures in house, but instead of fabbing chips it licenses the technology to anyone who wants to build processors for their own products, which allows customers to customize the design so the processor meets their needs exactly. Take a peek at the three licensing models and ARM's overall business model in the full article.
"ARM summoned some of the world's most technically inclined journalists and analysts to a confab at its headquarters in Cambridge, England recently. In an obvious case of poor judgment, I was also allowed to attend. Happily, I now get the chance to tell you another side of the story of the next generation of low-power processors."
Here is some more Tech News from around the web:
- Electro-permanent magnets for quadcopters @ Hack a Day
- HP Keeps Installing Secret Backdoors In Enterprise Storage @ Slashdot
- ARM Steps Into Networking, Running Linux @ Linux.com
- Konami asks users to change passwords after 35,000 accounts were accessed @ The Inquirer
- ARMA games publisher Bohemia Interactive resets passwords after attack @ The Inquirer
- Ubuntu 13.10 to ship with Mir instead of X @ The Register
Subject: General Tech | June 19, 2013 - 06:51 PM | Josh Walrath
Tagged: Volta, nvidia, maxwell, licensing, kepler, Denver, Blogs, arm
Yesterday we all saw the blog piece from NVIDIA that stated that they were going to start licensing their IP to interested third parties. Obviously, there was a lot of discussion about this particular move. Some were in favor, some were opposed, and others yet thought that NVIDIA is now simply roadkill. I believe that it is an interesting move, but we are not yet sure of the exact details or the repercussions of such a decision on NVIDIA’s part.
The biggest bombshell of the entire post was that NVIDIA would be licensing out their latest architecture to interested clients. The Kepler architecture powers the very latest GTX 700 series of cards and at the top end it is considered one of the fastest and most efficient architectures out there. Seemingly, there is a price for this though. Time to dig a little deeper.
Kepler will be the first technology licensed to third-party manufacturers. We will not see full GPUs; the architecture will only be integrated into mobile products.
The very latest Tegra parts from NVIDIA do not feature the Kepler architecture for the graphics portion. Instead, the units featured in Tegra can almost be described as GeForce 7000 series parts. The computational units are split between pixel shaders and vertex shaders, and they support at most Direct3D feature level 9_3 and OpenGL ES 2.0. This is a far cry from a unified shader architecture with support for the latest D3D 11 and OpenGL ES 3.0 specifications. Other mobile SoCs feature the latest Mali and Adreno series of graphics units, which are unified and support DX11 and OpenGL ES 3.0.
So why exactly do the latest Tegras not share the Kepler architecture? Hard to say. It could be a variety of factors, including time to market, available engineering teams, and simulations which could dictate whether power and performance can be better served by a less complex unit. Kepler is not simple. A Kepler unit that occupies the same die space could potentially consume more power with any given workload, or conversely it could perform poorly given the same power envelope.
We can look at the desktop side of this argument for some kind of proof. At the top end Kepler is a champ. The GTX 680/770 has outstanding performance and consumes far less power than the competition from AMD. When we move down a notch to the GTX 660 Ti/HD 7800 series of cards, we see much greater parity in performance and power consumption. Comparing the HD 7790 to the 650 Ti Boost, the Boost part has slightly better performance but consumes significantly more power. Then we move down to the 650 and 650 Ti, and these parts do not consume any more power than the competing AMD parts, but they also perform much more poorly. I know these are some pretty hefty generalizations, and the engineers at NVIDIA could very effectively port Kepler over to mobile applications without significant performance or power penalties. But so far, we have not seen this work.
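The comparison above boils down to performance per watt at each product tier, which is the metric that decides whether an architecture scales down into mobile power envelopes. With hypothetical numbers (not measured figures for any of the cards named above), the arithmetic looks like this:

```python
# Illustrative performance-per-watt comparison. The frame rates and board
# power figures below are HYPOTHETICAL, not measurements of the GPUs
# discussed in the text; the point is the metric, not the values.
def perf_per_watt(fps, watts):
    """Average frame rate divided by board power, in fps per watt."""
    return fps / watts

CARDS = {
    "fast-but-hungry":   (60.0, 150.0),  # (fps, watts), made up
    "slower-but-frugal": (45.0, 90.0),   # (fps, watts), made up
}

if __name__ == "__main__":
    for name, (fps, watts) in CARDS.items():
        print(f"{name}: {perf_per_watt(fps, watts):.2f} fps/W")
```

In this toy example the slower card is actually the more efficient design, which mirrors the worry in the text: raw performance at the top end does not guarantee the best efficiency in a constrained power budget.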
Power, performance, and die area aside, there is also another issue to factor in. NVIDIA just announced that they are doing this. We have no idea how long this effort has been going on, but it is very likely that it has only been worked on for the past six months. In that time NVIDIA needs to hammer out how they are going to license the technology, how much manpower they must provide licensees to get those parts up and running, and what kind of fees they are going to charge. There is a lot of work going on there and this is not a simple undertaking.
So let us assume that some three months ago an interested partner such as Rockchip or Samsung comes knocking at NVIDIA's door. They work out the licensing agreements, and this takes several months. Then we start to see the transfer of technology between the companies. Obviously Samsung and Rockchip are not going to apply this graphics architecture to currently shipping products, but will instead bundle it in with a next-generation ARM-based design. These designs are not spun out overnight. For example, the 64-bit ARMv8 designs have been finalized for around a year, and we do not expect to see initial parts shipping until late 1H 2014. So any partner that decides to utilize NVIDIA's Kepler architecture for such an application will not see that part released until 1H 2015 at the very earliest.
Shield is still based on a GPU possessing separate pixel and vertex shaders. DX11 and OpenGL ES 3.0? Nope!
If someone decides to license this technology from NVIDIA, it will not be of great concern. The next generation of NVIDIA graphics will already be out by that time, and we could very well be approaching the next iteration for the desktop side. NVIDIA plans on releasing a Kepler based mobile unit in 2014 (Logan), which would be a full year in advance of any competing product. In 2015 NVIDIA is planning on releasing an ARM product based on the Denver CPU and Maxwell GPU. So we can easily see that NVIDIA will only be licensing out an older generation product so it will not face direct competition when it comes to GPUs. NVIDIA obviously is hoping that their GPU tech will still be a step ahead of that of ARM (Mali), Qualcomm (Adreno), and Imagination Technologies (PowerVR).
This is an easy and relatively pain-free way to test the waters that ARM, Imagination Technologies, and AMD are already treading. ARM only licenses IP, and has shown the world that it can not only succeed at it but thrive. Imagination Tech used to produce its own chips, much like NVIDIA does, but it changed direction and continues to be profitable. AMD recently opened up about its semi-custom design group, which will design specific products for customers and then license those designs out. I do not think this is a desperation move by NVIDIA, but it certainly is one that is probably a little late in coming. The mobile market is exploding, and we are approaching a time where nearly every electricity-based item will have some kind of logic included in it; billions of chips a year will be sold. NVIDIA obviously wants a piece of that market. Even a small piece of "billions" is going to be significant to the bottom line.