Subject: Processors | January 18, 2015 - 05:16 PM | Sebastian Peak
Tagged: SoC, rumor, processor, leak, iris pro, Intel, graphics, cpu, carrizo, APU, amd
A new report of leaked benchmarks paints a very interesting picture of the upcoming AMD Carrizo mobile APU.
Image credit: SiSoftware
Announced as strictly mobile parts, the Carrizo APUs are based on the next-generation Excavator core and feature what AMD is calling one of its biggest-ever jumps in efficiency. Now, allegedly leaked benchmarks are showing significant performance gains as well, with numbers that should extend the integrated-graphics dominance of AMD's APUs.
Image credit: WCCFtech
"The A10 7850K scores around 270 Mpix/s while Intel’s HD5200 Iris Pro scores a more modest 200 Mpix/s. Carriso scores here over 600 Mpix/s which suggests that Carrizo is more than twice as fast as Kaveri and three times faster than Iris Pro. To put this into perspective this is what an R7 265 graphics card scores, a card that offers the same graphics performance inside the Playstation 4."
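A quick back-of-the-envelope check of the quoted claims, using the alleged leaked SiSoftware scores (unconfirmed figures, so treat the output as illustrative only):

```python
# Rough sanity check of the leaked SiSoftware GPU scores (Mpix/s).
# These figures come from the alleged leak, not from AMD or Intel.
scores = {
    "A10-7850K (Kaveri)": 270,
    "HD 5200 Iris Pro": 200,
    "Carrizo (leaked)": 600,
}

carrizo = scores["Carrizo (leaked)"]
print(f"vs Kaveri:   {carrizo / scores['A10-7850K (Kaveri)']:.2f}x")  # 2.22x
print(f"vs Iris Pro: {carrizo / scores['HD 5200 Iris Pro']:.2f}x")    # 3.00x
```

Which lines up with the quote: more than twice Kaveri, and three times Iris Pro.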
While the idea of desktop APUs with greatly improved graphics and higher efficiency is tantalizing, AMD has made it clear that these will be mobile-only parts at launch. When asked by AnandTech, AMD had this to say about the possibility of a desktop variant:
“With regards to your specific question, we expect Carrizo will be seen in BGA form factor desktops designs from our OEM partners. The Carrizo project was focused on thermally constrained form factors, which is where you'll see the big differences in performance and other experiences that consumers value.”
The new mobile APU will be manufactured on the same 28 nm process as Kaveri, with power consumption ranging up to 35W for Carrizo and topping out at 15W for the ultra-mobile Carrizo-L parts.
Subject: Processors | January 15, 2015 - 03:41 PM | Jeremy Hellstrom
Tagged: Pentium G3258, overclock, Intel
You just don't see CPU overclocking guides much anymore; the process has become far easier over the years, as Intel and AMD both now sell unlocked CPUs that they expect you to overclock, and motherboard tools and UEFI interfaces do much of the heavy lifting for you. No longer are you doing calculations for frequency ratios or drawing on your CPU with conductive ink. Overclockers Club is revisiting that heyday with a guide on how to turn your $70 3.2 GHz Pentium G3258 into a more serious beast running well over 4 GHz. The overclocking steps are not difficult, but for those who do not have a background in overclocking CPUs, the verification testing steps the guide describes will be of great value. If you are already well versed in the ways of MemTest86 and Prime95, then perhaps it will be a nice reminder of the days of the Celeron and the huge frequency increases that family rewarded the patient overclocker with.
"To reach 4.7GHz was a cinch once I adjusted all the smaller voltage settings. Like all overclockers, it was a journey with many failures along the way. One day it would boot and run Prime95, and the next time Windows would not load. It took a while to sort it out by backing down to 4.5GHz and raising each setting until I settled on the below settings."
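The frequency-ratio arithmetic that modern UEFIs automate is simple: effective core clock = base clock (BCLK) × CPU multiplier. A minimal sketch (the 100 MHz BCLK and the multiplier values are typical for the G3258 platform, not figures taken from the guide):

```python
def core_clock_mhz(bclk_mhz, multiplier):
    """Effective CPU frequency is the base clock times the CPU multiplier."""
    return bclk_mhz * multiplier

# Stock G3258: 100 MHz BCLK x 32 multiplier = 3.2 GHz
print(core_clock_mhz(100, 32))  # 3200
# The 4.7 GHz overclock described above: 100 MHz x 47
print(core_clock_mhz(100, 47))  # 4700
```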
Here are some more Processor articles from around the web:
- Pentium J2900 CPU Review @ Hardware Secrets
- Athlon 5150 CPU Review @ Hardware Secrets
- AMD FX-9590 @ Benchmark Reviews
- AMD FX-8320E @ Benchmark Reviews
Subject: General Tech, Processors, Mobile, Shows and Expos | January 7, 2015 - 08:04 PM | Scott Michaud
Tagged: smartwatch, mt2601, mediatek, ces 2015, CES
When you start getting into the wearables market, even mobile SoCs can be somewhat big and power-hungry. As such, we are seeing more innovation in processors that serve these lower-power classes (or it could just be us paying more attention). The MediaTek MT2601 is one such device, combining a pair of ARM Cortex-A7 cores (1.2 GHz) with an ARM Mali-400 MP GPU (intended frequency unknown) on a package PCB that is less than 480 mm². (Edit @ 9:48 PM -- they seem to mean the SoC plus other chips, like the Bluetooth module.)
Of course, these chips are designed to be low cost, low power, and whatever performance can be squeezed out of those two requirements, so it might not be the most interesting SoC that we can talk about. Still, battery life has been a major hindrance to smart watches and other small, niche devices. It will be interesting to see new-generation devices that use these components.
Heck, if I had more time, I might even want to hack around with these directly.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Processors | January 5, 2015 - 07:30 PM | Josh Walrath
Tagged: SoC, low power, Intel, Cherry Trail, cell phones, ces 2015, CES, Bay Trail, 14 nm trigate, tablets
It wouldn’t be CES if there wasn’t an Intel release. Today the company is releasing its latest 14 nm SoC, Cherry Trail. Very little information has been released about this part, but it is the follow-up to the fairly successful Bay Trail, a second-generation 22 nm design that exhibited very good power and performance characteristics for the price. While Bay Trail was not as popular as Intel had hoped, it did land some impressive design wins in multiple market sectors.
The next generation process technology from Intel will improve power and performance for the Cherry Trail parts as compared to previous products. It will work in both Windows and Android environments. While Cherry Trail is x86, Intel has been working very closely with Google to get Android to work effectively and quickly with a non-ARM based ISA.
Intel is shipping these parts to their partners for integration into phones, tablets, and small form factor computers. We had previously seen Bay Trail parts integrated into low cost motherboards with the J1800 and J1900 SKUs from Intel. We can expect these products to be refreshed with the latest Cherry Trail products that are being released today.
There is very little information being provided by Intel about the nuts and bolts of the Cherry Trail products. Intel promises to release more information once their partners start announcing individual products. We know that these parts will have improved graphics performance and will exist in the same TDPs as previous Bay Trail products. Other than that, feeds and speeds are a big question for this latest generation part.
These products will be integrating Intel’s RealSense technology. Password-less security, gestures, and 3D camera recognition are all aspects of this technology. I am sure we will get more information on how this technology leverages the power of the CPU cores and GPU cores in the latest Cherry Trail SoCs.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Processors, Shows and Expos | January 5, 2015 - 10:00 AM | Scott Michaud
Tagged: iris graphics 6100, iris, Intel, hd graphics 6000, hd graphics 5500, ces 2015, CES, broadwell-u, Broadwell
When Intel launched Broadwell-Y in November, branded Core M by that point, they had a 4.5W processor that was just a little slower than a 15W Haswell Ultrabook CPU. That is a sizable gain in power efficiency, although these numbers are maximum draw and might not be exactly proportional to average power under load.
At CES, Intel has launched Broadwell-U, which takes this efficiency and scales it up to 15W and 28W SKUs. The idea is that the extra thermal headroom will scale up for extra CPU and GPU performance. These are all BGA-attached components, which means that these processors need to be physically soldered to the motherboards -- they are destined for OEMs.
As an example of Broadwell-U's increased performance, the Core M 5Y70 has a base frequency of 1.1 GHz that can boost to 2.6 GHz; the top-end Broadwell-U has a base clock of 3.1 GHz and boosts to 3.4 GHz. From Core i3 up to Core i7, regardless of TDP, each of these processors is dual-core with Hyper-Threading (4 threads total). There is also a single Pentium and two Celeron SKUs, which are dual-core without Hyper-Threading (2 threads total).
The GPU receives a large boost as well, particularly with the 28W SKUs receiving Iris Graphics 6100, although Iris Pro Graphics (6200 and 6300) do not yet make an appearance. If we knew the number of execution units and assumed the same instructions per clock as Iris Graphics 5100, we could calculate a theoretical FLOP figure, but that information has not been released. 48 execution units would make sense: twice Core M, and consistent with the official die shot that Intel doesn't actually identify by product number. That would give it about 845 GFLOPS of performance, or roughly an OEM NVIDIA GeForce GTX 460 (the retail GTX 460 cards were about 4% faster than the OEM ones).
It is also within 2% of Haswell's Iris 5100 theoretical GFLOPs, albeit with a 15% drop in clock rate.
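The arithmetic behind those estimates can be sketched as follows. Peak FP32 throughput is EUs × FLOPs per clock per EU × clock; the 48-EU count and the ~1.1 GHz Iris 6100 clock are our assumptions here, not published Intel specs, and the 16 FLOPs/clock/EU figure assumes the same EU configuration as Haswell (two 4-wide FMA pipes: 2 pipes × 4 lanes × 2 ops per FMA):

```python
# Theoretical peak FP32 throughput for an Intel Gen GPU:
# EUs x FLOPs-per-clock-per-EU x clock (GHz) = GFLOPS.
# EU counts and clocks are this article's assumptions, not Intel specs.
def peak_gflops(eus, clock_ghz, flops_per_clock=16):
    return eus * flops_per_clock * clock_ghz

iris_6100 = peak_gflops(48, 1.1)   # assumed Broadwell-U Iris 6100 config
iris_5100 = peak_gflops(40, 1.3)   # Haswell Iris 5100
print(f"{iris_6100:.1f}")          # 844.8
print(f"{iris_5100:.1f}")          # 832.0
print(f"{iris_6100 / iris_5100 - 1:+.1%}")  # ~ +1.5%, despite the lower clock
```

That recovers both figures in the text: about 845 GFLOPS, within 2% of Iris 5100 even with the roughly 15% clock-rate drop.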
From a features standpoint, the GPU is a definite step up. It has “Enhanced” hardware support for VP8, VP9, and H.265 (HEVC) video, plus 4K UltraHD output, wired or via Intel WiDi. Broadwell's iGPU was designed with DirectX 12 in mind and supports OpenCL 2.0 -- leaving NVIDIA behind in that regard, since AMD added that API in last month's Omega driver.
Intel is slightly behind in OpenGL support, however, claiming 4.3 compatibility while AMD is at 4.4 and NVIDIA is at 4.5. This could mean that these GPUs do not support “Efficient Multiple Object Binding”, “Sparse Texture Extension”, or “Direct State Access” (unless a future driver changes this). Then again, they could expose these features as extensions -- it is OpenGL, and extensions are its thing -- but they are clearly missing some part of the spec somewhere.
This leaves Broadwell-H and Broadwell-K, high performance BGA and socketed LGA respectively, to launch later in the year. These products will have significantly higher TDPs and stronger performance, at the expense of battery life (a non-issue for the desktop-bound -K parts) and heat.
Follow all of our coverage of the show at http://pcper.com/ces!
NVIDIA's Tegra X1
NVIDIA seems to like being on a one-year cycle with their latest Tegra products. Many years ago we were introduced to the Tegra 2, then the Tegra 3 a year later, and the Tegra 4 a year after that. NVIDIA did spice up their naming scheme to get away from the numbers (not to mention the potential stigma of how little impact those products actually made in the industry). Last year's entry was the Tegra K1, based on the Kepler graphics technology. These products were interesting for their use of the very latest, cutting-edge graphics technology in a mobile/low-power format. The 64-bit Tegra K1 variant used two “Denver” cores that were designed in-house by NVIDIA.
While technically interesting, the Tegra K1 series has made about the same impact as the previous versions. The Nexus 9 was the biggest win for NVIDIA with these parts, and we have heard of a smattering of automotive companies using the Tegra K1 in their applications. NVIDIA uses the Tegra K1 in their latest Shield tablet, but they do not typically release data regarding the number of products sold. The Tegra K1 looks to be the most successful product since the original Tegra 2, but the question of how well these parts actually sold looms over the entire brand.
So why the history lesson? Well, we have to see where NVIDIA has been to get a good idea of where they are heading next. Today, NVIDIA is introducing the latest Tegra product, and it is going in a slightly different direction than what many had expected.
The reference board with 4 GB of LPDDR4.
Subject: General Tech, Processors, Systems | December 23, 2014 - 04:07 AM | Scott Michaud
Tagged: x86, Nintendo, arm, amd
The tea leaves that WCCFTech have been reading are quite scattered, but they could be right. The weaker half is pulled from an interview between Shigeru Miyamoto and the Associated Press. At the very end, the creator of many Nintendo franchises states, “While we're busy working on software for the Wii U, we have production lines that are working on ideas for what the next system might be.”
Of course they do. That is not confirmation of a new console.
Original Mario Bros. Screenshot Credit: Giant Bomb (Modified)
A bit earlier, he also states, “I think that maybe when we release the next hardware system, you can look forward to seeing Mario take on a new role or in a new game.”
This, on the other hand, sounds a little bit like they are iterating on game design ideas that will shape the next console. From what I understand, this is how Nintendo tends to work – they apparently engineer hardware around concept use cases. It could also be a mistake.
The rumor's stronger half is a statement from Devinder Kumar, the CFO of AMD.
“I will say that one [design win] is x86 and [another] is ARM, and at least one will [be] beyond gaming, right,” said Devinder Kumar, chief financial officer of AMD, at the Raymond James Financial technology conference. “But that is about as much as you going to get out me today. From the standpoint [of being] fair to [customers], it is their product, and they launch it. They are going to announce it and then […] you will find out that it is AMD’s APU that is being used in those products.”
So AMD has secured design wins from two companies: one in gaming, the other in something else. Also, one design will be x86 and the other will be ARM-based. This could be an awkward coincidence but, at the same time, there are not too many gaming companies around.
Also, if it is Nintendo, which architecture would they choose? x86 is the common instruction set among the PC and the other two consoles, and it is easy to squeeze performance out of. On the other hand, Nintendo has been vocal about Apple and the mobile market, which could have them looking at ARM, especially if the system design is particularly unusual. Beyond that, AMD could have offered Nintendo an absolute steal of a deal in an effort to get a high-profile customer associated with their ARM initiative.
Or, again, this could all be coincidence.
Subject: General Tech, Processors | December 16, 2014 - 11:00 AM | Scott Michaud
Tagged: Intel, holiday, devil's canyon, 10 days of christmas
Are you still hunting for that perfect gift for the hardware and technology fan in your life? Or maybe you are looking for recommendations to give to your friends and family about what to buy for YOU? Or maybe you just want something new and cool to play with over the break? Welcome to PC Perspective's 10 Days of Christmas where we will suggest a new item each day for you to consider. Enjoy!
Today, we go from rusty gates (or rather cutting the bolts off of them with a Dremel) to tri-gates. Either way, you are probably looking for hardware that prides itself on variable speed. If you are looking to build or upgrade an upper-mainstream desktop PC, then the Intel Core i7-4790K is the last stop before Haswell-E.
The CPU, codenamed Devil's Canyon, was Intel's offering for mainstream gamers and non-Enthusiast (capital E) enthusiasts during their Haswell refresh. It is cooler than its 4770K predecessor due to an improved thermal interface under the processor lid. It is a deal this week because its price dropped down to $299.99, which is about $50 below Intel's list price.
If you are having trouble picking out a gift for a loved one, consider buying an Amazon.com gift card! Amazon has basically every product on the planet for your gift recipient to order and purchasing gift cards through these links directly sponsors and supports PC Perspective! And hey, if you were to buy gift cards for yourself to do your own Amazon-based Christmas shopping...that wouldn't exactly be a bad thing for us either! ;)
Did you miss any of our other PCPer 10 Days of Christmas posts?
Subject: Processors | November 21, 2014 - 04:08 PM | Sebastian Peak
Tagged: quad core, pentium, gaming, far cry 4, dual-core, dragon age inquisition, cpus, budget, athlon
A new report covering dual-core woes with Far Cry 4 paints a "bleak future" for budget gamers.
Image credit: Polygon
For a while now the dual-core Pentium processors have been a great option for budget gaming, with the Pentium G3220 and newer G3258 Anniversary Edition taking center stage in a number of budget gaming builds. Today, we may be nearing the end of the road for dual-core CPUs entirely as a couple of high-profile games now require a quad-core CPU.
Is the anniversary really...over?
Far Cry 4 won't even open with a dual-core CPU installed, and while the game will load when using dual-core CPUs with Hyper-Threading enabled (for 4 total "cores"), the performance isn't very good. PC World's article points to users "reporting that Far Cry 4 flat-out refuses to work with 'straight' dual-core PCs - chips that don’t use hyperthreading to 'fake' having additional cores." The article references a "black-screen 'failure to launch' bug" being reported by users with these dual-core chips.
This should come as good news for AMD, which has embraced quad-core designs throughout its lineup, including very affordable offerings in the budget space.
Image credit: AMD
AMD offers very good gaming performance with a part like the Athlon X4 760K, which matched the Pentium G3220 in our budget gaming shootout and was neck and neck with the Pentium in our $550 1080p gaming PC article back in April. And the Athlon 760K is now selling for just under $77, close to the current best-selling $70 Pentium.
Ubisoft has made no secret of their new game's hefty system requirements, with an Intel Core i5-750 or AMD Phenom II X4 955 listed as the minimum CPUs supported. Another high-profile new release, Dragon Age: Inquisition, also requires a quad core CPU and cannot be played on dual-core machines.
Image credit: Origin
Looks like the budget gaming landscape is changing. AMD’s position looks very good unless Intel chooses to challenge the under $80 price segment with some true quad-core parts (and their current 4-core CPUs start at more than twice that amount).
Subject: Processors | November 20, 2014 - 01:31 PM | Josh Walrath
Tagged: amd, APU, carrizo, Carrizo-L, Kaveri, Excavator, Steamroller, SoC, Intel, mobile
AMD has certainly gone about doing things in a slightly different manner than we are used to. Today they announced their two latest APUs which will begin shipping in the first half of 2015. These APUs are running at AMD and are being validated as we speak. AMD did not release many details on these products, but what we do know is pretty interesting.
Carrizo is based on the latest iteration of AMD’s CPU technology. Excavator is the codename for these latest CPU cores, and they promise to be smaller and more efficient than the previous Steamroller core which powers the latest Kaveri based APUs. Carrizo-L is the lower power variant which will be based on the Puma+ core. The current Beema APU is based on the Puma architecture.
Roadmaps show that the Carrizo APUs will be 28 nm products, presumably fabricated by GLOBALFOUNDRIES. Many were hoping that AMD would make the jump to 20 nm with this generation of products, but that does not seem to be the case. This is not surprising due to the limitations of that particular process when dealing with large designs that require a lot of current. AMD will likely be pushing for 16 nm FinFET for the generation of products after Carrizo.
The big Carrizo supposedly has a next-generation GCN unit. My guess here is that it will use the same design as we saw with the R9 285. That particular product is a next-generation unit with improved efficiency. AMD did not release how many GCN cores will be present in Carrizo, but it will be very similar to what we see now with Kaveri. Carrizo-L will use the same GCN units as the previous-generation Beema-based products.
I believe AMD has spent a lot more time hand tuning Excavator instead of relying on a lot of automated place and route. This should allow them to retain much of the performance of the part, all the while cutting down on transistor count dramatically. Some rumors that I have seen point to each Excavator module being 40% smaller than Steamroller. I am not entirely sure they have achieved that type of improvement, but more hand layout does typically mean greater efficiency and less waste. The downside to hand layout is that it is extremely time and manpower intensive. Intel can afford this type of design while AMD has to rely more on automated place and route.
Carrizo will be the first HSA 1.0 compliant SoC. It is in fact an SoC, as it integrates the southbridge functions that previously had been handled by external chips like the A88X that supports the current Kaveri desktop APUs. Carrizo and Carrizo-L will also share the same infrastructure. This means that the motherboards these APUs will be soldered onto are interchangeable. One motherboard from the partner OEMs will be able to address multiple markets, with products ranging from 4 watts TDP up to 35 watts.
Finally, both APUs feature the security processor that gives them access to ARM TrustZone technology. This is a very small ARM processor that handles the secure boot partition and security requests. This puts AMD on par with Intel and their secure computing solution (vPro).
These products will be aimed only at the mobile market. So far AMD has not announced Carrizo for the desktop, but when they do, I would imagine it will hit a max TDP of around 65 watts. AMD claims that Carrizo is one of its biggest jumps ever in terms of power efficiency. A lot of different pieces of technology have come together in this product to make it more competitive with Intel and their process advantage. Time will tell if this is the case, but for now AMD is staying relevant and pushing their product releases so that they are more consistently on time.