ARM Brings Out Marketing Guns - Says Intel Quark Too Hot for Wearables

Subject: Processors, Mobile | February 21, 2014 - 10:47 AM |
Tagged: wearables, wearable computing, quark, Intel, arm

In a post on the official ARM blog, the guns are blazing in the battle for wearable market mind share.  Pretty much all of the currently available wearable computing devices use ARM-based processors, but that hasn't prevented Intel from touting its Quark platform as the best platform for wearables.  There are still lots of questions about Quark's performance and power consumption, but ARM decided to focus its attack on heat.

From a blog post on ARM's website:

Intel’s Quark is an example that has a relatively low level of integration, but has still been positioned as a solution for wearables. Fine you may think, there are plenty of ARM powered communication chipsets it could be paired with, but a quick examination of the development board brings the applicability further into question. Quark runs at a rather surprising, and sizzling to the touch, 57°C. The one attribute it does offer is a cognitive awareness, not through any hardware integration suitable for the wearable market, but from the inbuilt thermal management hardware (complete with example code), which in the attached video you can see is being used to toggle a light switch once touched by a finger which, acting as a heat sink, drops the temperature below 50°C.

Along with this post is a YouTube video that shows this temperature testing taking place.
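The behavior ARM describes, toggling an output when the die temperature crosses a threshold, is easy to picture in code on a Linux-based board like Galileo. The snippet below is a minimal sketch only; the sysfs thermal zone and GPIO paths are assumptions for illustration and this is not the example code Intel ships with Quark.

```python
import time

# Assumed paths: a standard Linux sysfs thermal zone for the SoC and a
# hypothetical GPIO-backed LED. Real board paths may differ.
THERMAL_ZONE = "/sys/class/thermal/thermal_zone0/temp"  # millidegrees C
LED_PATH = "/sys/class/gpio/gpio3/value"
THRESHOLD_C = 50.0  # the figure ARM quotes in its post

def read_temp_c():
    """Read the SoC temperature in degrees Celsius."""
    with open(THERMAL_ZONE) as f:
        return int(f.read().strip()) / 1000.0

def set_led(on):
    """Drive the LED (the 'light switch' in ARM's demo) on or off."""
    with open(LED_PATH, "w") as f:
        f.write("1" if on else "0")

while True:
    # A finger on the package acts as a heat sink, pulling the reading
    # below the threshold and toggling the output.
    set_led(read_temp_c() < THRESHOLD_C)
    time.sleep(0.5)
```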

Of course, when looking at competitive analysis between companies you should always take the results as tentative at best.  There is likely to be some difference between the Arduino-compatible Galileo board's integration of the Quark X1000 and what would make it into a final production wearable device.  Obviously this is something Intel is aware of as well, and the company knows what temperature means for devices that users will be in such direct contact with.  

quark.jpg

The proof will be easy to see, either way, as we progress through 2014. Will device manufacturers integrate Quark into any final design wins, and what will the user experience of those units be like?  

Still, it's always interesting to see marketing battles heat up between these types of computing giants.

Source: ARM

Intel Roadmap Including Xeon E7 v2 Lineup

Subject: General Tech, Processors, Mobile | February 19, 2014 - 03:28 AM |
Tagged: Intel, SoC, atom, haswell, Haswell-E, Airmont, Ivy Bridge-EX

Every few months, we get another snapshot of Intel's product plans. This timeline has a rough placement for every segment, from the Internet of Things (IoT) product, Quark, up to the Xeon E7 v2. While it covers from now through December, it is not designed to be a strict schedule and might contain an error or two.

intel-2014-roadmap.jpg

Image Credit: VR-Zone

First up is Ivy Bridge-EX (Xeon E7 v2). PCMag has an interesting in-depth rundown on these parts, although some aspects are a little fuzzy. These 22nm chips range from 6 to 15 cores and can access up to 1.5TB of memory per socket. Intel also claims they will support up to four times the I/O bandwidth for disk and network transactions. Naturally, they have all the usual virtualization and other features that are useful for servers. Most support Turbo Boost and all but one have Hyper-Threading Technology.

Jumping back to the VR-Zone editorial, the timeline suggests that the Quark X1000 will launch in April. As far as I can tell, this is new information. Quark is Intel's ultra low-end SoC that is designed for adding intelligence to non-computing devices. One example given by Intel at CES was a smart baby bottle warmer.

The refresh of Haswell is also expected to happen in April.

Heading into the third quarter, we should see Haswell-E make an appearance for the enthusiast desktop and moderately high-end server. This should be the first time since Sandy Bridge-E (2011) that expensive PCs get a healthy boost to single-threaded performance, clock for clock. Ivy Bridge-E, while a welcome addition, was definitely aimed at reducing power consumption.

Ending the year should be the launch of Airmont at 14nm. The successor to Silvermont, Airmont will be the basis of Cherry Trail tablets and lower end PCs at the very end of the year. Moorefield, which is Airmont for smartphones, is not listed on this roadmap and should not surface until 2015.

Source: VR-Zone

MediaTek Follows ARM Cortex-A17 Unveil with MT6595

Subject: General Tech, Processors, Mobile | February 12, 2014 - 05:48 PM |
Tagged: mediatek, arm, cortex, A17

Our Josh Walrath wrote up an editorial about the Cortex-A17 architecture less than two days ago. In it, he reports on ARM's announcement that "the IP" will ship in 2015. On the same calendar date, MediaTek announced that its MT6595 SoC, integrating A17 and A7 cores, will be commercially available in 1H 2014 with devices following in 2H 2014.

arm_A17_diag_r.png

Of course, it is difficult to tell how ahead of schedule this is, depending on what ARM meant by shipping in 2015 and what MediaTek meant by devices based on the MT6595 platform in 2H 2014.

There are two key features of the A17: a 40% power reduction from the A15 and its ability to integrate with A7 cores in a big.LITTLE structure. MediaTek goes a little further with "CorePilot", which schedules tasks across all eight cores (despite the grouping spanning two different architectures). This makes some amount of sense because it allows for four strong threads which can be augmented with four weaker ones. Especially for applications like web browsers, it is not uncommon to have a dominant main thread.
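On a shipping big.LITTLE device the kernel scheduler (or a vendor layer like CorePilot) handles this placement automatically, but the idea can be illustrated from user space with CPU affinity. The sketch below is purely illustrative: it assumes the little A7 cluster is enumerated as CPUs 0-3 and the big A17 cluster as CPUs 4-7, which will vary by device, and CPython's GIL means it demonstrates core placement rather than real parallelism.

```python
import os
import threading

# Assumed core numbering; treat these sets as placeholders.
LITTLE_CORES = {0, 1, 2, 3}   # Cortex-A7 cluster
BIG_CORES = {4, 5, 6, 7}      # Cortex-A17 cluster

def heavy_main_work():
    # Pin the dominant thread (think a browser's main thread) to the
    # big cluster. With pid 0, sched_setaffinity applies to the
    # calling thread on Linux.
    os.sched_setaffinity(0, BIG_CORES)
    sum(i * i for i in range(10_000_000))  # stand-in for real work

def light_background_work():
    # Background tasks are fine on the low-power A7 cores.
    os.sched_setaffinity(0, LITTLE_CORES)
    sum(range(1_000_000))

threads = [threading.Thread(target=heavy_main_work)]
threads += [threading.Thread(target=light_background_work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```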

The SoC will also support LTE and HSPA+ mobile connectivity along with 802.11ac wireless. It will not integrate the Mali-T720 GPU (DX11/OpenGL ES 3.0), but will instead use a PowerVR Series6 GPU (DX10/OpenGL ES 3.0, unless it is an unannounced design). MediaTek does not explain why it chose the one licensed GPU over the other.

MediaTek claims the MT6595 platform will be available in the first half of 2014 with devices coming in the second half.

Source: MediaTek
Subject: Processors
Manufacturer: ARM

Cortex-A12 Optimized!

ARM is an interesting little company.  Years ago most people would have had no idea who you were talking about, but now there is a much greater appreciation for the company.  Their PR group is really starting to get the hang of getting the name out there.  One thing ARM does significantly differently from other companies is announce products far in advance of when they will actually see the light of day.  Today they are announcing the Cortex-A17 IP that will ship in 2015.
 
arm_01.jpg
 
ARM really does not have much of a choice in how it announces its technology, primarily because it relies on third parties to actually ship products.  ARM licenses its IP to the likes of Samsung, Qualcomm, TI, and NVIDIA, and then waits for them to build and ship product.  I suspect part of the reason for pre-announcing these bits of IP is to give partners a greater push to license that specific design, since end users and handset makers are already showing interest.  Whatever the case, it is interesting to see where ARM is heading with its technology.
 
The Cortex-A17 can be viewed as a supercharged version of the Cortex-A12 that adds features missing from that particular product.  The big advancement over the A12 is that the A17 can be used in a big.LITTLE configuration with Cortex-A7 IP.  The A17 is more power optimized as well, so it can enter a sleep state faster than the A12, and it features memory controller tweaks that improve performance while again lowering power consumption.
 
arm_02.jpg
 
In terms of overall performance it gets a pretty big boost compared to the very latest Cortex-A9r4 designs (such as the Tegra 4i).  Numbers bandied about by ARM show the A17 around 60% faster than the A9 and around 40% faster than the A12.  These numbers may or may not jibe with real-world experience due to differences in handset and tablet designs, but theoretically they look to be in the ballpark.  The A17 should be close in overall performance to A15-based SoCs.  A15s are shipping now, but they are not as power efficient as what ARM is promising with the A17.
 

NitroWare Tests AMD's Photoshop OpenCL Claims

Subject: General Tech, Graphics Cards, Processors | February 5, 2014 - 02:08 AM |
Tagged: photoshop, opencl, Adobe

Adobe recently enhanced Photoshop CC to accelerate certain filters via OpenCL. AMD contacted NitroWare with this information and claims of 11-fold performance increases with "Smart Sharpen" on Kaveri, specifically. The computer hardware site decided to test these claims on a Radeon HD 7850 using the test metrics AMD provided.

Sure enough, he noticed a 16-fold gain in performance. Without OpenCL, the filter's loading bar was on screen for over ten seconds; with it enabled, there was no bar.
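The kind of work Adobe is offloading maps naturally to OpenCL: a sharpen is a per-pixel convolution, so each pixel can be handled by its own work-item on the GPU. The PyOpenCL sketch below is a rough illustration of a basic 3x3 sharpen, not Adobe's Smart Sharpen implementation, and it assumes a single-channel float image for brevity.

```python
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void sharpen(__global const float *src, __global float *dst,
                      const int w, const int h) {
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1) {
        if (x < w && y < h) dst[y * w + x] = src[y * w + x];
        return;
    }
    // Simple 3x3 sharpen: 5*center minus the four direct neighbours.
    float acc = 5.0f * src[y * w + x]
              - src[(y - 1) * w + x] - src[(y + 1) * w + x]
              - src[y * w + (x - 1)] - src[y * w + (x + 1)];
    dst[y * w + x] = acc;
}
"""

h, w = 1080, 1920
img = np.random.rand(h, w).astype(np.float32)  # stand-in for image data
out = np.empty_like(img)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=img)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, img.nbytes)

prg = cl.Program(ctx, KERNEL_SRC).build()
# Launch one work-item per pixel; global size is (width, height).
prg.sharpen(queue, (w, h), None, src_buf, dst_buf, np.int32(w), np.int32(h))
cl.enqueue_copy(queue, out, dst_buf)
```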

Dominic from NitroWare is careful to note that an HD 7850 offers significantly higher performance than an APU (barring some weird scenario involving memory transfers or something). This might mark the beginning of Adobe's road to sensible heterogeneous computing outside of video transcoding. Of course, this will also be exciting for AMD. While they cannot keep up with Intel thread for thread, they are still a heavyweight in terms of total performance. With Photoshop, people might actually notice it.

Video Perspective: Free to Play Games on the A10-7850K vs. Intel Core i3 + GeForce GT 630

Subject: Graphics Cards, Processors | January 31, 2014 - 04:36 PM |
Tagged: 7850k, A10-7850K, amd, APU, gt 630, Intel, nvidia, video

As a follow-up to our first video posted earlier in the week, which looked at the A10-7850K and the GT 630 from NVIDIA in five standard games, this time we compare the A10-7850K APU against the same combination of Intel and NVIDIA hardware in five of 2013's top free-to-play games.

UPDATE: I've had some questions about WHICH of the GT 630 SKUs were used in this testing.  Our GT 630 was this EVGA model that is based on 96 CUDA cores and a 128-bit DDR3 memory interface.  You can see a comparison of the three current GT 630 options on NVIDIA's website here.

If you are looking for more information on AMD's Kaveri APUs you should check out my review of the A8-7600 part as well as our testing of Dual Graphics with the A8-7600 and a Radeon R7 250 card.

Video Perspective: 2013 Games on the A10-7850K vs. Intel Core i3 + GeForce GT 630

Subject: Graphics Cards, Processors | January 29, 2014 - 03:44 PM |
Tagged: video, nvidia, Intel, gt 630, APU, amd, A10-7850K, 7850k

The most interesting aspect of the new Kaveri-based APUs from AMD, in particular the A10-7850K part, is how much they improve mainstream gaming performance.  AMD has always stated that these APUs shake up the need for low-cost discrete graphics, and when we got the new APU in the office we ran a couple of quick tests to see how much validity there is to that claim.

In this short video we compare the A10-7850K APU against a combination of the Intel Core i3-4330 and GeForce GT 630 discrete graphics card in five of 2013's top PC releases.  I think you'll find the results pretty interesting.

UPDATE: I've had some questions about WHICH of the GT 630 SKUs were used in this testing.  Our GT 630 was this EVGA model that is based on 96 CUDA cores and a 128-bit DDR3 memory interface.  You can see a comparison of the three current GT 630 options on NVIDIA's website here.

If you are looking for more information on AMD's Kaveri APUs you should check out my review of the A8-7600 part as well as our testing of Dual Graphics with the A8-7600 and a Radeon R7 250 card.

Manufacturer: AMD

Hybrid CrossFire that actually works

The road to redemption for AMD and its driver team has been a tough one.  Since we first started to reveal the significant issues with AMD's CrossFire technology back in January of 2013, the Catalyst driver team has been hard at work on a fix, though I will freely admit it took longer to convince them that the issue was real than I would have liked.  We saw the first steps of the fix in August of 2013 with the release of the Catalyst 13.8 beta driver.  It supported DX11 and DX10 games at resolutions of 2560x1600 and under (no Eyefinity support) but was obviously still less than perfect.  

In October, with the release of AMD's latest Hawaii GPU, the company took another step by reorganizing the internal architecture of CrossFire at the chip level with XDMA.  The result was frame pacing that worked on the R9 290X and R9 290 at all resolutions, including Eyefinity, though it still left out older DX9 titles.  

One thing that had not been addressed, at least not until today, was the set of issues surrounding AMD's Hybrid CrossFire technology, now known as Dual Graphics.  This is the ability for an AMD APU with integrated Radeon graphics to pair with a low-cost discrete GPU to improve graphics performance and the gaming experience.  Recently, Tom's Hardware discovered that Dual Graphics suffered from the exact same scaling issues as standard CrossFire; frame rates in FRAPS looked good, but the actual perceived frame rate was much lower.
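The gap between what FRAPS reports and what the user sees comes down to frame time consistency: an average FPS number can hide runt or badly spaced frames that add nothing to perceived smoothness. The snippet below is a minimal sketch of that kind of analysis, assuming you already have a list of frame presentation timestamps (from a frametimes log or a capture-based tool); it is an illustration of the concept, not our Frame Rating pipeline.

```python
# Frame pacing sketch: compare average FPS with the frame time
# distribution. 'timestamps' holds presentation times in milliseconds.
timestamps = [0.0, 16.7, 18.0, 50.1, 51.0, 83.4, 84.9, 116.8]  # example data

frame_times = [b - a for a, b in zip(timestamps, timestamps[1:])]
avg_fps = 1000.0 * len(frame_times) / (timestamps[-1] - timestamps[0])

# Sort frame times to inspect the worst cases (95th percentile here).
worst = sorted(frame_times)
p95 = worst[int(0.95 * (len(worst) - 1))]

# Runt-style frames: delivered almost immediately after the previous
# frame, inflating FPS without adding visible smoothness.
runts = sum(1 for ft in frame_times if ft < 3.0)

print(f"average FPS: {avg_fps:.1f}")
print(f"95th percentile frame time: {p95:.1f} ms")
print(f"near-zero ('runt') frames: {runts} of {len(frame_times)}")
```

With the example data the average works out to roughly 60 FPS, yet nearly half the frames arrive within a couple of milliseconds of the previous one, so the effective frame rate is closer to 30 FPS; that is exactly the kind of discrepancy frame time analysis exposes and a simple FPS counter hides.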

drivers01.jpg

A little while ago a new driver made its way into my hands under the name of Catalyst 13.35 Beta X, a driver that promised to enable Dual Graphics frame pacing with Kaveri and R7 graphics cards.  As you'll see in the coming pages, the fix definitely is working.  And, as I learned after doing some more probing, the 13.35 driver is actually a much more important release than it at first seemed.  Not only is Kaveri-based Dual Graphics frame pacing enabled, but Richland and Trinity are included as well.  And even better, this driver will apparently fix resolutions higher than 2560x1600 in desktop graphics as well - something you can be sure we are checking on this week!

drivers02.jpg

Just as we saw with the first implementation of Frame Pacing in the Catalyst Control Center, with the 13.35 Beta we are using today you'll find a new set of options in the Gaming section to enable or disable Frame Pacing.  The default setting is On, which makes me smile inside every time I see it.

drivers03.jpg

The hardware we are using is the same basic setup from my initial review of the AMD Kaveri A8-7600 APU.  That includes the A8-7600 APU, an ASRock A88X mini-ITX motherboard, 16GB of DDR3-2133 memory and a Samsung 840 Pro SSD.  Of course, for our testing this time we needed a discrete card to enable Dual Graphics, and we chose the MSI R7 250 OC Edition with 2GB of DDR3 memory.  This card will run you an additional $89 or so on Amazon.com.  You could use either the DDR3 or GDDR5 versions of the R7 250 as well as the R7 240, but in our talks with AMD they seemed to think the R7 250 DDR3 was the sweet spot for this CrossFire implementation.

IMG_9457.JPG

Both the R7 250 and the A8-7600 actually share the same shader count of 384 stream processors, or 6 Compute Units in the new nomenclature AMD is creating.  However, the MSI card is clocked at 1100 MHz while the GPU portion of the A8-7600 APU runs at only 720 MHz. 
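Because the shader counts match, the clock speed difference is the whole story for raw throughput. A quick back-of-the-envelope comparison (peak single-precision rate = shader count x 2 FLOPs per clock x clock speed) is sketched below.

```python
# Peak single-precision throughput: shaders * 2 FLOPs/clock * clock (GHz).
shaders = 384

r7_250_gflops = shaders * 2 * 1.100   # MSI R7 250 OC at 1100 MHz
a8_7600_gflops = shaders * 2 * 0.720  # A8-7600 GPU at 720 MHz

print(f"R7 250:  {r7_250_gflops:.0f} GFLOPS")   # ~845 GFLOPS
print(f"A8-7600: {a8_7600_gflops:.0f} GFLOPS")  # ~553 GFLOPS
print(f"ratio:   {r7_250_gflops / a8_7600_gflops:.2f}x")  # ~1.53x
```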

So the question is, has AMD truly fixed the issues with frame pacing with Dual Graphics configurations, once again making the budget gamer feature something worth recommending?  Let's find out!

Continue reading our look at Dual Graphics Frame Pacing with the Catalyst 13.35 Beta Driver!!

(HCW) Kaveri Overclocked +1GHz CPU, +300 MHz GPU

Subject: General Tech, Processors | January 27, 2014 - 03:24 AM |
Tagged: overclocking, Kaveri, amd

HCW does quite a few overclocking reviews for both Intel and AMD processors. This time, Carl Nelson got hold of the high-end AMD A10-7850K and gave it a pretty healthy boost in frequencies. By the time he was done with it, the CPU was operating a whole gigahertz above stock alongside a 300 MHz boost to its integrated graphics.

hcw-kaveri-overclocking-performance.png

Image Credit: HCW

3DMark 2013 Fire Strike scores gained 27%.

Once again, they break down tests across a suite of different games with varying engines and add some OpenCL tests to round things out. In real-world applications the increase was not quite as dramatic as the one seen in 3DMark, but it was still significant. The overclock allowed certain games to jump from 720p to playable at 1080p. Apparently this silicon is a decent little overclocker.

Source: HCW

Four Processors Might Be Slated for AMD's AM1 Socket

Subject: General Tech, Processors | January 26, 2014 - 09:28 PM |
Tagged: AM1, Kabini, amd

The Chinese edition of VR-Zone published claims that AMD has up to four processors planned for AM1, the brand of socket designed for the upcoming Kabini APUs that we have discussed since the CES time frame. Three of the upcoming processors will be quad-core, with one dual-core for variety. Regardless of core count, all four processors are listed at 25 watts (TDP).

| Product | Cores | CPU Clock | GPU | L2 | TDP |
|---------|-------|-----------|---------|------|-----|
| A6-5350 | Quad | 2.05GHz | HD 8400 | 2MB | 25W |
| A4-5150 | Quad | 1.60GHz | HD 8400 | 2MB | 25W |
| E2-3850 | Quad | 1.30GHz | HD 8280 | 2MB | 25W |
| E1-2650 | Dual | 1.45GHz | HD 8240 | 1MB | 25W |

Kabini pairs Jaguar cores, for x86-based serial processing, with a GCN-based graphics processor supporting DirectX 11.1. Users planning to purchase Kabini for use with Windows 8.1 should expect to miss out on some or all of the benefits associated with DirectX 11.2 (along with everyone on Windows 8 and earlier). Little of value would be lost, however.

These products are expected to be positioned against Bay Trail-D which powers Intel's Pentium and Celeron lines. The currently available products from Intel are classified at 10W TDP and around 2 GHz.

Kaveri and socketed Kabini at CES 2014

AMD is pushing lower-clocked (and higher-TDP) products based on Jaguar against Intel's Silvermont. I am not sure how the two architectures compare, although I would expect the latter to win out clock-for-clock and watt-for-watt. Then again, cost and graphics performance could be significantly better with AMD. Ultimately, it will be up to the overall benchmarks (and pricing) to see how they actually stack up.

Source: VR-Zone