Subject: Graphics Cards | February 25, 2013 - 08:01 PM | Josh Walrath
Tagged: nvidia, tegra, tegra 4, Tegra 4i, pixel, vertex, PowerVR, mali, adreno, geforce
When Tegra 4 was introduced at CES there was precious little information about the setup of the integrated GPU. We all knew that it would be a much more powerful GPU, but we were not entirely sure how it was set up. Now NVIDIA has finally released a slew of whitepapers that deal with not only the GPU portion of Tegra 4, but also some of the low level features of the Cortex A15 processor. For this little number I am just going over the graphics portion.
This robust looking fellow is the Tegra 4. Note the four pixel "pipelines" that can output 4 pixels per clock.
The graphics units on the Tegra 4 and Tegra 4i are identical in overall architecture; the 4i simply has fewer units, arranged slightly differently. Tegra 4 is comprised of 72 units, 48 of which are pixel shaders. These pixel shaders are VLIW-based VEC4 units. The other 24 units are vertex shaders. The Tegra 4i is comprised of 60 units, 48 of which are pixel shaders and 12 are vertex shaders. We knew at CES that it was not a unified shader design, but we were still unsure of the overall makeup of the part. There are some very good reasons why NVIDIA went this route, as we will soon explore.
If NVIDIA were to transition to unified shaders, it would increase the overall complexity and power consumption of the part. Each shader unit would have to be able to handle both vertex and pixel workloads, which means more transistors are needed to handle it. Simpler shaders focused on either pixel or vertex operations are more efficient at what they do, both in terms of transistors used and power consumption. This is the same train of thought when using fixed function units vs. fully programmable. Yes, the programmability will give more flexibility, but the fixed function unit is again smaller, faster, and more efficient at its workload.
On the other hand here we have the Tegra 4i, which gives up half the pixel pipelines and vertex shaders, but keeps all 48 pixel shaders.
If there was one surprise here, it would be that the part is not completely OpenGL ES 3.0 compliant. It is lacking one major function required for certification: this particular part cannot render at FP32 precision. It has been quite a few years since we have heard of anything in the PC market that could not do FP32, but it is quite common to omit it in the power and transistor conscious mobile market. NVIDIA instead went with an FP20 partial precision setup, and they claim that for all intents and purposes it will not be noticeable to the human eye; colors will still be rendered properly and artifacts will be few and far between. Remember back in the day when NVIDIA supported FP16 and FP32 while chastising ATI for choosing FP24 with the Radeon 9700 Pro? Times have changed a bit. Going with FP20 is again a power and transistor saving decision. The part still supports DX9.3 and OpenGL ES 2.0, and it does in fact support quite a bit of the functionality required by OpenGL ES 3.0, but it is not fully compliant.
This will be an interesting decision to watch over the next few years. The latest Mali 600 series, PowerVR 6 series, and Adreno 300 series solutions all support OpenGL ES 3.0. Tegra 4 is the odd man out. While most developers have no plans to go to 3.0 anytime in the near future, it will eventually be implemented in software. When that point comes, then the Tegra 4 based devices will be left a bit behind. By then NVIDIA will have a fully compliant solution, but that is little comfort for those buying phones and tablets in the near future that will be saddled with non-compliance once applications hit.
The list of OpenGL ES 3.0 features that are actually present in Tegra 4; the lack of FP32 relegates it to 2.0-compliant status.
The core speed is increased to 672 MHz, well up from the 520 MHz in Tegra 3 (8 pixel and 4 vertex shaders). The GPU can output four pixels per clock, double that of Tegra 3. Once we consider the extra clock speed and pixel pipelines, the Tegra 4 increases pixel fillrate by 2.6x. Pixel and vertex shading will get a huge boost in performance due to the dramatic increase of units and clockspeed. Overall this is a very significant improvement over the previous generation of parts.
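The 2.6x figure falls straight out of the clock and pipeline numbers quoted above; a quick back-of-the-envelope check:

```python
# Fillrate comparison using the figures from the article:
# Tegra 3 runs its GPU at 520 MHz and outputs 2 pixels per clock,
# Tegra 4 runs at 672 MHz and outputs 4 pixels per clock.
tegra3_fill = 520e6 * 2  # pixels per second
tegra4_fill = 672e6 * 4

print(f"Tegra 4 fillrate gain: {tegra4_fill / tegra3_fill:.2f}x")  # ~2.58x
```

That works out to roughly 2.58x, which NVIDIA rounds to the 2.6x quoted in its materials.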
The Tegra 4 can output to a 4K display natively, and that is not the only new feature for this part. Here is a quick list:
2x/4x Multisample Antialiasing (MSAA)
24-bit Z (versus 20-bit Z in the Tegra 3 processor) and 8-bit Stencil
4K x 4K texture size incl. Non-Power of Two textures (versus 2K x 2K in the Tegra 3 processor) – for higher quality textures, and easier to port full resolution textures from console and PC games to Tegra 4 processor. Good for high resolution displays.
16:1 Depth (Z) Compression and 4:1 Color Compression (versus none in Tegra 3 processor) – this is lossless compression and is useful for reducing bandwidth to/from the frame buffer, and especially effective in antialiasing processing when processing multiple samples per pixel
Percentage Closer Filtering for Shadow Texture Mapping and Soft Shadows
Texture border color to eliminate coarse MIP-level bleeding
sRGB for Texture Filtering, Render Surfaces and MSAA down-filter
Note: CSAA is no longer supported in Tegra 4 processors
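To put the compression ratios in the list above into perspective, here is a rough sketch of best-case framebuffer traffic for a single 1080p frame. The resolution and byte-per-pixel figures are my own illustrative assumptions, and the quoted 16:1 and 4:1 ratios are maximums, so treat this as an upper bound rather than a real-world measurement:

```python
# Best-case framebuffer traffic for one 1920x1080 frame, using the lossless
# compression ratios quoted above (16:1 depth, 4:1 color). Actual savings
# depend on scene content; these ratios are best-case maximums.
width, height = 1920, 1080
pixels = width * height

color_bytes = pixels * 4  # 32-bit color
depth_bytes = pixels * 4  # 24-bit Z + 8-bit stencil packed into 32 bits

uncompressed = color_bytes + depth_bytes
compressed = color_bytes / 4 + depth_bytes / 16

print(f"uncompressed: {uncompressed / 1e6:.1f} MB/frame")
print(f"compressed (best case): {compressed / 1e6:.1f} MB/frame")
```

Even a fraction of that saving matters on a mobile SoC, and as the list notes, the win grows further under MSAA where multiple samples per pixel are in flight.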
This is a big generational jump, and now we only have to see how it performs against the other top end parts from Qualcomm, Samsung, and others utilizing IP from Imagination and ARM.
Subject: General Tech | February 13, 2013 - 12:40 PM | Jeremy Hellstrom
Tagged: Intel, tv, intel media, imagination, PowerVR, intel tv
Ryan spotted prototype Intel TV hardware at CES in the Imagination suite, and today Intel Media's Erik Huggers confirmed that Intel will be producing some sort of set top box or TV for sale in the near future. It will likely be built in partnership with Imagination and their PowerVR technology, and might possibly be tied to the recently released NUC, which would fit the definition of a set top box (and keep your cat nice and warm). The Inquirer did get confirmation that Intel will release hardware to compete with Apple and Google before the end of the year, but they would not specify exactly what that hardware would be. Intel plans to set itself apart from Netflix and other content streamers by offering live TV streams, which will probably not make it popular with established cable and satellite providers; with Intel's deep pockets and the possibility of personalized advertising, however, they could well steal customers away.
"CHIPMAKER Intel has confirmed that it is working on hardware to stream live and on-demand content to televisions in 2013."
Here is some more Tech News from around the web:
- Intel outs caching software for 910 series SSDs @ The Inquirer
- iFixit tears down the Surface Pro @ The Inquirer
- Apple iWatch could be used as smartphone ‘support’ tool @ Kitguru
- TECH LOVERS competition with Seasonic and Sapphire @ Kitguru
Subject: Systems | January 10, 2013 - 02:16 PM | Ryan Shrout
Tagged: CES, ces 2013, Intel, tv, intel media, imagination, PowerVR
While visiting with the folks at Imagination, responsible for the graphics system known as PowerVR found in many Apple and Samsung SoCs, we were shown a new, innovative way to watch TV. This new system used an impressively quick graphic overlay, the ability to preview other channels before changing to them and even the ability to browse content on your phone and "toss" it to your TV.
The software infrastructure is part of the iFeelSmart package, but the PowerVR team was demonstrating the performance and user experiences that its low power graphics system could provide for future applications. And guess what we saw was connected to the TV?
With all of the information filtering out on Intel's upcoming dive into the TV ecosystem, it shouldn't be a surprise to find hardware like this floating around. We aren't sure what kind of hardware Intel would actually end up using for the set top box expected later this year, but it is possible we are looking at an early development configuration right here.
PC Perspective's CES 2013 coverage is sponsored by AMD.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Processors, Mobile | September 27, 2012 - 12:26 PM | Tim Verry
Tagged: SoC, PowerVR, iphone, arm, apple, a6
Apple's latest smartphone was unveiled earlier this month, and just about every feature has been analyzed extensively by reviewers and expounded upon by Apple. However, the one aspect that remains a mystery is the ARM System on a Chip that is powering the iPhone 5. There has been a great deal of speculation, but officially Apple is not talking. The company has stated that the new processor is two times faster than its predecessor, but beyond that it will be up to reviewers to figure out what makes it tick.
After the press conference PC Perspective's Josh Walrath researched what few hints there were on the new A6 processor, and determined that there was a good chance it was an ARM Cortex A15-based design. Since then, however, some tidbits of information have come out that suggest otherwise. iOS developers discovered that the latest SDK suggests new functionality for the A6 processor, including some new instruction sets. That discovery lent credence to the A6 possibly being Cortex A15, but it did not prove it. Following that, AnandTech posted an article stating that it was not a licensed Cortex A15 design. Rather, the A6 was a custom Apple-developed chip that would, ideally, give users the same level of performance without needing significantly more power – and without waiting for a Cortex A15 chip to be manufactured.
Finally, thanks to the work of the enthusiasts over at Chipworks, we have physical proof that reveals details about Apple's A6 SoC. By stripping away the outer protective layers and placing the A6 die under a powerful microscope, they managed to get an 'up close and personal' look at the inside of the chip.
Despite the near-Jersey Shore (shudder) levels of drama between Apple and Samsung over the recent trade dress and patent infringement allegations, it seems that the two companies worked together to bring Apple's custom processor to market. The researchers determined that the A6 was based on Samsung's 32nm CMOS manufacturing process. It reads APL0589B01 on the inside, which suggests that it is of Apple's own design. Once the Chipworks team sliced open the processor further, they discovered proof that Apple really did craft a custom ARM processor.
In fact, Apple has created a chip with dual ARM CPU cores and three GPU cores (PowerVR). The CPU cores support the ARMv7s instruction set, and Apple has gone with a hand-drawn design. Rather than rely on automated tools to lay out the logic in the processor, Apple and the engineers acquired in its purchase of PA Semi have manually drawn out the processor by hand. This chip has likely been in the works for a couple of years now, and the 96.71 mm² die will offer up some notable performance improvements.
It seems that Apple has opted for an expensive custom chip rather than a licensed Cortex A15 design. That, combined with the hand-drawn layout, should give Apple a processor with better performance than its past designs without requiring significantly more power.
At a time when mobile SoC giant Texas Instruments is giving up on ARM chips for tablets and smartphones, and hand drawn designs are becoming increasingly rare (even AMD has given up), I have to give Apple props for going with a custom processor laid out by hand. I'm interested to see what the company is able to do with it and where they will go from here.
Chipworks and iFixIt also took a look at the LTE modem, Wi-Fi chip, audio amplifier, and other aspects of the iPhone 5's internals, and it is definitely worth a read for the impressive imagery alone.
Apple Produces the new A6 for the iPhone 5
Today is the day that the world gets introduced to the iPhone 5. I of course was very curious about what Apple would be bringing to market the year after the death of Steve Jobs. The excitement leading up to the iPhone announcement was somewhat muted as compared to years past, and a lot of that could be attributed to what has been happening in the Android market. Companies like Samsung and HTC have released new high end phones that are not only faster and more expansive than previous versions, but also work really well and are feature packed. While the iPhone 5 will be another success for Apple, those somewhat dispassionate about the cellphone market will likely just shrug and say to themselves, “It looks like Apple caught up for the year, but too bad they didn’t introduce anything really groundbreaking.”
If there was one area that many were anxiously awaiting, it was that of the SoC (system on a chip) that Apple would use for the iPhone 5. Speculation ranged from a fresh piece of silicon based on the A5X (faster clocks, smaller graphics portion) to a quad core monster running at high speeds but still sipping power. It seems that we actually got something in between. This is not a bad thing, but as we go forward we will likely see that the silicon again only matches what other manufacturers have been using since earlier this year.
Subject: General Tech | March 22, 2012 - 01:44 PM | Jeremy Hellstrom
Tagged: valleyview, shark bay, PowerVR, Ivy Bridge, haswell GT3, haswell, atom
Phoronix has been investigating the recently released open source driver code designed to power Intel's Haswell chips. The news is not as good as some had hoped; there were rumours that a Gen8 Haswell GT3 IGP would appear in Haswell, but according to the hardware IDs found in the code that is not going to be true. Instead you are looking at refined Gen7 Haswell GT1 and GT2 IGPs, which will be an improvement over Ivy Bridge but not a completely new chip. Check out the rest of the secrets revealed by the code here.
"While Intel's Ivy Bridge launch is imminent, and I'm still digging through information concerning today's Intel Valleyview code drop that brings Ivy Bridge graphics to their next-generation Atom as they do away with PowerVR graphics for their SoCs, more graphics driver code to enable Haswell support has landed this evening."
Here is some more Tech News from around the web:
- What's New in Linux 3.3? @ Linux.com
- Japan's once-proud semis learn size DOES matter @ The Register
- Globalfoundries ships 250,000 32nm wafers @ The Inquirer
- First-tier motherboard makers drop 7 series motherboard prices @ DigiTimes
- Intel Valley View: Atom SoC With Ivy Bridge Graphics @ Phoronix
- Weekly Giveaway #24: Thermaltake Chaser MK-1 and Saphira Mouse @ eTeknix
Subject: Mobile | December 16, 2011 - 06:00 AM | Tim Verry
Tagged: tegra, SoC, qualcomm, PowerVR, mobile, Android, adreno
Quite a few mobile device manufacturers are implementing graphics processors and image processors based on Imagination Technologies’ PowerVR technology. Popular licensees of Imagination Technologies PowerVR core patents include Intel, LG, Samsung, Sony, and Texas Instruments (a big one in terms of the number of PowerVR-based SoCs shipping in mobile phones).
Interestingly, Qualcomm is not currently licensing the graphics processor portfolio that many other mobile OEMs license. Rather, Qualcomm is licensing the PowerVR display patents. The intellectual property features the PowerVR de-interlacing cores and de-judder purposed FRC (Frame Rate Conversion) core. The de-interlacing core(s) can do either “motion adaptive (MA) or motion compensated (MC) de-interlacing” as well as a few other algorithms to deliver smooth graphics. Further, the FRC cores take 24 FPS (frames per second) source material and output it at either 120 Hz or 240 Hz while applying image processing to keep the video looking smooth to the eye. Taking a 24 FPS video and displaying it on an LCD screen that refreshes at 120 Hz by showing each of those 24 frames five times, while grabbing and extrapolating “extra” frames, involves a bit of math and algorithmic magic; a simplistic explanation can be read here.
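The basic cadence math is simple enough to sketch. This toy function (my own illustration, not part of any PowerVR API) just computes how many times each source frame would be shown under naive frame repetition; the whole point of the FRC core is that it interpolates new in-between frames instead of repeating, which is where the "algorithmic magic" comes in:

```python
def repeats_per_frame(source_fps: int, display_hz: int) -> int:
    """How many times each source frame is shown if frames are simply repeated."""
    if display_hz % source_fps != 0:
        # e.g. 24 fps on a 60 Hz panel needs 3:2 pulldown, not a fixed repeat count
        raise ValueError("non-integer cadence; requires pulldown or interpolation")
    return display_hz // source_fps

print(repeats_per_frame(24, 120))  # 5 repeats per frame
print(repeats_per_frame(24, 240))  # 10 repeats per frame
```

Naive repetition preserves the judder of the 24 FPS source; motion-compensated FRC replaces some of those repeats with synthesized intermediate frames to smooth out motion.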
It will be interesting to see how Qualcomm applies the image processing technology to their future SoCs (system on a chip) to entice manufacturers into going with them instead of competition like Texas Instruments or Nvidia’s Tegra chips. The Verge speculates that this Qualcomm and Imagination Technologies deal may be just the first step towards Qualcomm licensing more PowerVR tech, possibly including the GPU portfolio. Whether Qualcomm will ditch their Adreno GPUs remains to be seen. If I had to guess, the SoC maker will invest in more PowerVR IP, but they will not completely abandon their Adreno graphics. Rather, they will continue developing next generation Adreno graphics for use in their SoCs while also integrating the useful and superior aspects of PowerVR graphics and display technologies. Another option may be to develop and sell both platforms (possibly with one as high end competition to Tegra and the other competing against other low end, low power chips) to hedge their bets in the rapidly advancing mobile SoC industry, where what counts as the top tech changes quickly.
Subject: Processors, Chipsets, Mobile | May 9, 2011 - 09:07 PM | Tim Verry
Tagged: PowerVR, Intel, gpu, atom
In a surprising move, Intel plans to move away from using its own graphics processors with the next "full fat" Atom processors. Intel has traditionally favored its own graphics chipsets; however, VR-Zone reports that Intel has extended its licensing agreements with PowerVR to include certain GPU architectures.
These GPU licenses will allow Intel to implement a PowerVR SGX545 equivalent graphics core with its Cedarview Atom chips. While the PowerVR graphics core is no match for dedicated GPUs, or likely that found in Intel's own Sandy Bridge "HD 3000" series, the hardware will allow Atom powered systems to play video with ease thanks to hardware accelerated decoding of "MPEG-2, MPEG-4 part 2, VC1, WMV9 and the all-important H.264 codec." VR-Zone details the SGX545 GPU as being capable of "40 million triangles/s and 1Gpixels/s using a 64-bit bus" at the chip's original 200 MHz.
Intel plans to clock the mobile chips at 400 MHz and the desktop graphics cores at 640 MHz. The graphics cores will be capable of resolutions up to 1440x900 and support VGA, HDMI 1.3a and DisplayPort 1.1 connections for video output. VR-Zone also states that the SGX545 supports DirectX 10.1, which means that the net-top versions of Atom may be capable of running the Aero desktop smoothly.
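For a rough sense of what those clocks could mean, here is a quick estimate that scales VR-Zone's 200 MHz baseline figures up to Intel's planned clocks. The linear-with-clock scaling is my own assumption; real throughput would also depend on memory bandwidth and other bottlenecks:

```python
# Hypothetical SGX545 throughput at Intel's planned clocks, assuming (my
# assumption, not VR-Zone's) that throughput scales linearly with clock speed.
# Baseline per VR-Zone: 1 Gpixel/s and 40 Mtriangles/s at 200 MHz.
base_mhz, base_gpix, base_mtri = 200, 1.0, 40

for mhz in (400, 640):  # mobile and desktop clocks
    scale = mhz / base_mhz
    print(f"{mhz} MHz: ~{base_gpix * scale:.1f} Gpixel/s, ~{base_mtri * scale:.0f} Mtri/s")
```

Under that assumption, the 640 MHz desktop part would land around 3.2 Gpixel/s, still well short of contemporary discrete GPUs but a healthy step up for Atom.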
This integration by Intel of a GPU capable of hardware video acceleration will certainly make Nvidia's ION chipsets harder to justify for HTPC usage. ION chipsets will likely relinquish marketshare to cheaper stock Intel Atom platforms for basic home theater computers, but will remain viable in the more specific market that uses ION + Atom chips as light gaming platforms in the living room.