Subject: Editorial, General Tech, Graphics Cards, Processors | May 8, 2013 - 06:32 PM | Scott Michaud
Tagged: Volcanic Islands, radeon, ps4, amd
So the Southern Islands might not be entirely stable throughout 2013 as we originally reported; seismic activity being analyzed suggests the eruption of a new GPU micro-architecture as early as Q4. These Volcanic Islands, as they have been codenamed, should explode onto the scene opposing NVIDIA's GeForce GTX 700-series products.
It is times like these where GPGPU-based seismic computation becomes useful.
The rumor is based upon a source which leaked a fragment of a slide outlining the processor in block diagram form and specifications of its alleged flagship chip, "Hawaii". Of primary note, Volcanic Islands is rumored to be organized with both Serial Processing Modules (SPMs) and a Parallel Compute Module (PCM).
So apparently a discrete GPU can have serial processing units embedded on it now.
Heterogeneous System Architecture (HSA) is a set of initiatives to bridge the gap between massively parallel workloads and branching logic tasks. We usually make reference to this in terms of APUs, bringing parallel-optimized hardware to the CPU. In this case, we are discussing it in terms of bringing serial processing to the discrete GPU. According to the diagram, the chip would contain 8 processor modules, each with two processing cores and an FPU, for a total of 16 cores. There is no definite indication of whether these cores would be based upon AMD's license to produce x86 processors or its other license to produce ARM processors. Unlike an APU, this design is heavily skewed towards parallel computation rather than a relatively even balance between CPU, GPU, and chipset features.
Now of course, why would they do that? Graphics processors can do branching logic but it tends to sharply cut performance. With an architecture such as this, a programmer might be able to more efficiently switch between parallel and branching logic tasks without doing an expensive switch across the motherboard and PCIe bus between devices. Josh Walrath suggested a server containing these as essentially add-in card computers. For gamers, this might help out with workloads such as AI which is awkwardly split between branching logic and massively parallel visibility and path-finding tasks. Josh seems skeptical about this until HSA becomes further adopted, however.
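To make the branching penalty concrete, here is a toy Python model of SIMD lockstep execution. This is purely illustrative (not AMD's actual hardware behavior): lanes in a group that disagree on a branch force the group to step through both sides with inactive lanes masked off, so a single disagreeing lane makes the branch cost the sum of both paths.

```python
# Toy model of SIMD branch divergence: all lanes in a group execute in
# lockstep, so a divergent if/else runs both paths with masking, costing
# the sum of both paths instead of just the taken one.

def simd_branch_cost(conditions, cost_if_true, cost_if_false):
    """Cycles for one lane group executing an if/else under lockstep."""
    any_true = any(conditions)
    any_false = not all(conditions)
    cost = 0
    if any_true:
        cost += cost_if_true   # all lanes step through the taken path...
    if any_false:
        cost += cost_if_false  # ...and through the not-taken path too
    return cost

# Uniform branch: every lane agrees, so only one path is executed.
print(simd_branch_cost([True] * 64, 10, 30))             # 10
# Divergent branch: one disagreeing lane forces both paths.
print(simd_branch_cost([True] * 63 + [False], 10, 30))   # 40
```

A serial core sitting next to the parallel array would, in principle, take the branchy work instead of forcing the SIMD units through both paths.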
Still, there is a reason why they are implementing this now. I wonder, if the SPMs are based upon simple x86 cores, how the PS4 will influence PC gaming. Technically, a Volcanic Islands GPU would be an oversized PS4 on an add-in card. This could give AMD an edge, particularly in games ported to the PC from the PlayStation.
This chip, Hawaii, is rumored to have the following specifications:
- 4096 stream processors
- 16 serial processor cores on 8 modules
- 4 geometry engines
- 256 TMUs
- 64 ROPs
- 512-bit GDDR5 memory interface (GDDR5, as in the PS4)
- 20 nm Gate-Last silicon fab process
- Unclear if TSMC or "Common Platform" (IBM/Samsung/GLOBALFOUNDRIES)
Softpedia is also reporting on this leak, adding the claim that the GPU will be designed on a 20nm Gate-Last fabrication process. While gate-last is generally considered not worth the extra production effort, Fully Depleted Silicon On Insulator (FD-SOI) is apparently "amazing" on gate-last at 28nm and smaller nodes. This could mean that AMD is eyeing that technology and making this design with the intent of switching to an FD-SOI process later, without the large redesign that an initially easier gate-first production would require.
Well, that is a lot to process... so I will leave you with an open question for our readers: what do you think AMD has planned with this architecture, and what do you like or dislike about what your speculation would mean?
Subject: General Tech, Processors | May 6, 2013 - 11:34 AM | Jeremy Hellstrom
Tagged: silvermont, merrifield, Intel, Bay Trail, atom
The news today is all about shrinking the Atom, both in process size and power consumption. Indeed, The Tech Report heard talk of milliwatts and SoCs, which shows Intel's change of strategy for Atom: from small-footprint HTPCs to point-of-sale and other ultra-low-power applications. Hyper-Threading has been dropped and out-of-order processing has been brought in, which makes far more sense for the new niche Atom is destined for.
"Since their debut five years ago, Intel's Atom microprocessors have relied on the same basic CPU core. Next-gen Atoms will be based on the all-new Silvermont core, and we've taken a closer look at its underlying architecture."
Here is some more Tech News from around the web:
- AMD says HSA will cut latency bottleneck in GPU processing @ The Inquirer
- Redmond probes new IE 8 vulnerability @ The Register
- Not Like a Fine Wine: Windows Activation Still a Piece of Junk After All These Years @ Techgage
- Acer unveils new ultrabooks, notebooks and tablet @ DigiTimes
- Angering hippies and financing evil @ The Tech Report
- BlackBerry 10 passes US defence department tests @ The Register
- The TR Podcast 133: Iris graphics and the Radeon HD 7990
Subject: Processors | May 3, 2013 - 03:45 AM | Tim Verry
Tagged: z87, overclocking, Intel, haswell, core i7 4770k, 7ghz
OCaholic has spotted an interesting entry in the CPU-Z database. According to the site, an overclocker by the handle of “rtiueuiurei” has allegedly managed to push an engineering sample of Intel’s upcoming Haswell Core i7-4770K processor past 7GHz.
If the CPU-Z entry is accurate, the overclocker used a BCLK speed of 91.01 and a multiplier of 77 to achieve a CPU clockspeed of 7012.65MHz. The chip was overclocked on a Z87 motherboard along with a single 2GB G.Skill DDR3 RAM module. Even more surprising than the 7GHz clockspeed is the voltage that the overclocker used to get there: an astounding 2.56V according to CPU-Z.
From the information Intel provided at IDF Beijing, the new 22nm Haswell processors feature an integrated voltage regulator (IVR), and the CPU portion of the chip’s voltage is controlled by the Vccin value. Intel recommends a range of 1.8V to 2.3V for this value, with a maximum of 3V and a default of 1.8V. Therefore, the CPU-Z-reported number may actually be correct. On the other hand, it may also just be a bug in the software due to the unreleased nature of the Haswell chip.
Voltage questions aside, the frequency alone makes for an impressive overclock, and it seems that the upcoming chips will have decent overclocking potential!
Subject: Cases and Cooling, Processors | May 1, 2013 - 12:07 PM | Ryan Shrout
Tagged: power supply, Intel, idle, haswell, c7, c6
I came across an interesting news story posted by The Tech Report this morning that dives into possible problems between Intel's upcoming Haswell processors and currently available power supplies. Apparently, the new C6 and C7 idle power states that give the Haswell architecture its low-power benefits require the power supply to support a minimum load of 0.05 amps on the 12V2 rail. (That's just 50 milliamps!) Without that capability, the system can exhibit unstable behavior, and a quick look at the power supply selector on Intel's own website lists only a couple dozen units that support the feature.
This table from VR-Zone, the source of the information initially, shows the difference between the requirements for 3rd (Ivy Bridge) and 4th generation (Haswell) processors. The shift is an order of magnitude and is quite a dramatic change for PSU vendors. Users of Corsair power supplies will be glad to know that among those listed with support on the Intel website linked above were mostly Corsair units!
A potential side effect of this problem might be that motherboard vendors simply disable those sleep states by default. I don't imagine that will be a problem for PC builders anyway, since most desktop users aren't really worried about the extremely small differences in power consumption they offer. For mobile users and upcoming Haswell notebook designs, though, the increase in battery life is crucial, and Intel has surely been monitoring those power supplies closely.
I asked our in-house power supply guru, Lee Garbutt, who is responsible for all of the awesome power supply reviews on pcper.com, what he thought about this issue. He thinks the reason more power supplies don't support it already is for power efficiency concerns:
Most all PSUs have traditionally required "some load" on the various outputs to attain good voltage regulation and/or not shut down. Not very many PSUs are designed yet to operate with no load, especially on the critical +12V output. One of the reasons for this is efficiency. It's harder to design a PSU to operate correctly with a very low load AND to deliver high efficiency. It would be easy just to add some bleed resistance across the DC outputs to always have a minimal load to keep voltage regulation under control, but then that lowers efficiency.
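For a sense of scale, here is a back-of-envelope sketch of the bleed-resistor fix Lee describes, sized for the 0.05 A minimum load the article cites. The numbers follow directly from Ohm's law; the resistor approach itself is the simple workaround Lee mentions, not necessarily what any vendor actually ships.

```python
# Back-of-envelope: a bleed resistor sized to guarantee Haswell's 0.05 A
# minimum load on the +12V rail, and the power it permanently wastes.

RAIL_V = 12.0
MIN_LOAD_A = 0.05  # Intel's C6/C7 minimum-load requirement per the article

resistance_ohms = RAIL_V / MIN_LOAD_A  # Ohm's law: R = V / I
wasted_watts = RAIL_V * MIN_LOAD_A     # P = V * I, dissipated at all times

print(f"{resistance_ohms:.0f} ohms, {wasted_watts:.2f} W wasted")  # 240 ohms, 0.60 W
```

0.6 W is tiny in absolute terms, but it is dead weight against efficiency at idle, which is presumably why designers are reluctant to take the easy route.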
Subject: Processors | April 30, 2013 - 11:04 AM | Josh Walrath
Tagged: amd, FX, vishera, bulldozer, FX-6350, FX-4350, FX-6300, FX-4300, 32 nm, SOI, Beloved
Today AMD has released two new processors that address the AM3+ market. The FX-6350 and FX-4350 are two new refreshes of the quad and hex core lineup of processors. Currently the FX-8350 is still the fastest of the breed, and there is no update for that particular number yet. This is not necessarily a bad thing, but there are those of us who are still awaiting the arrival of the rumored “Centurion”.
These parts are 125 watt TDP units, which are up from their 95 watt predecessors. The FX-6350 runs at 3.9 GHz with a 4.2 GHz boost clock. This is up 300 MHz stock and 100 MHz boost from the previous 95 watt FX-6300. The FX-4350 runs at 3.9 GHz with a 4.3 GHz boost clock. This is 100 MHz stock and 300 MHz boost above that of the FX-4300. What is of greater interest here is that the L3 cache goes from 4 MB on the 4300 to 8 MB on the 4350. This little fact looks to be the reason why the FX-4350 is now a 125 watt TDP part.
It has been some two years since AMD started shipping 32 nm PD-SOI/HKMG products, and it certainly seems as though spinning off GLOBALFOUNDRIES has essentially stopped the practice of implementing new features into a process node over its lifetime. As many may remember, AMD was somewhat famous for injecting new process technology into current nodes to improve performance, yields, and power characteristics in "baby steps" fashion, instead of leaving a node as-is and making a huge jump with the next one. Vishera has been out for some 7 months now and we have not really seen any major improvement in performance or power characteristics. I am sure that yields and bins have improved, but the bottom line is that this is only a minor refresh, and AMD raised TDPs to 125 watts for these particular parts.
The FX-6350 is again a three module part containing six cores. Each module features 2 MB of L2 cache for a total of 6 MB L2 and the entire chip features 8 MB of L3 cache. The FX-4350 is a two module chip with four cores. The modules again feature the same 2 MB of L2 cache for a total of 4 MB active on the chip with the above mentioned 8 MB of L3 cache that is double what the FX-4300 featured.
Perhaps soon we will see updates on FM2 with the Richland series of desktop processors, but for now this refresh is all AMD has at the moment. These are nice upgrades to the line. The FX-6350 does cost the same as the FX-6300, but the thinking behind that is that the 6300 is more “energy efficient”. We have seen in the past that AMD (and Intel for that matter) does put a premium on lower wattage parts in a lineup. The FX-4350 is $10 more expensive than the 4300. It looks as though the FX-6350 is in stock at multiple outlets but the 4350 has yet to show up.
These will fit in any modern AM3+ motherboard with the latest BIOS installed. While not an incredibly exciting release from AMD, it at least shows that they continue to address their primary markets. AMD is in a very interesting place, and it looks like Rory Read is busy getting the house in order. Now we just have to see if they can curb their cost structure enough to make the company more financially stable. Indications are good so far, but AMD has a long way to go. But hey, at least according to AMD the FX series is beloved!
Subject: Processors | April 17, 2013 - 06:48 PM | Tim Verry
Tagged: overclocking, intel ivr, intel hd graphics, Intel, haswell, cpu
During the Intel Developer Forum in Beijing, China, the x86 chip giant revealed details about how overclocking will work on its upcoming Haswell processors. Enthusiasts will be pleased to know that the new chips do not appear to be any more restrictive than the existing Ivy Bridge processors as far as overclocking is concerned. Intel has even opened up the overclocking capabilities slightly by allowing additional BCLK tiers without putting aspects such as the PCI-E bus out of spec.
The new Haswell chips have an integrated voltage regulator, which supplies programmable voltage to the CPU, memory, and GPU portions of the chip. As far as overclocking the CPU itself, Intel has opened up Turbo Boost and is allowing enthusiasts to set an overclocked Turbo Boost clockspeed. Additionally, Intel is specifying available BCLK values of 100, 125, and 167MHz that do not put other systems out of spec (different ratios counterbalance the increased BCLK, which is important for keeping the PCI-E bus at around 100MHz). The chips will also feature unlocked core ratios all the way up to 80, in 100MHz increments. That would allow enthusiasts with a cherry-picked chip and outrageous cooling to clock the chip up to 8GHz without overclocking the BCLK value (though no chip is likely to reach that clockspeed, especially for everyday usage!).
Remember that the CPU clockspeed is determined by the BCLK value times a pre-set multiplier. Unlocked processors will allow enthusiasts to adjust the multiplier up or down as they please, while non-K edition chips will likely only permit lower multipliers, with higher-than-default multipliers locked out. Further, Intel will allow the adventurous to overclock the BCLK value above the pre-defined 100, 125, and 167MHz options, but the chip maker expects most chips will max out anywhere between five and seven percent higher than normal. PC Perspective’s Morry Teitelman speculates that slightly higher BCLK overclocks may be possible with a good chip and adequate cooling, however.
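The BCLK-times-multiplier arithmetic is easy to play with directly. A quick sketch of what the three sanctioned BCLK tiers yield across a few multipliers, plus the 7GHz CPU-Z entry mentioned above (note that the product of the reported 91.01MHz BCLK and 77x multiplier lands a few MHz below the CPU-Z readout, suggesting one of the reported values was rounded):

```python
# CPU clock = BCLK x multiplier. What Haswell's 100/125/167 MHz BCLK
# tiers yield across a few multipliers.

def cpu_clock_mhz(bclk_mhz, multiplier):
    return bclk_mhz * multiplier

for bclk in (100, 125, 167):
    for mult in (35, 40, 77, 80):
        print(f"BCLK {bclk} x {mult} = {cpu_clock_mhz(bclk, mult)} MHz")

# The record-chasing CPU-Z entry: ~91 MHz BCLK at a 77x multiplier.
print(cpu_clock_mhz(91.01, 77))  # 7007.77
```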
Similar to current-generation Ivy Bridge (and Sandy Bridge before that) processors, Intel will pack Haswell processors with its own HD Graphics pGPU. The new HD Graphics will be unlocked, and the graphics ratio will be able to scale up to a maximum of 60 in 50MHz steps, for a potential maximum of 3GHz. The processor graphics will also benefit from Intel’s IVR (programmable voltage) circuitry. The HD Graphics and CPU are fed voltage from the integrated voltage regulator (IVR), which is controlled by adjusting the Vccin value. The default is 1.8V, but it supports a recommended range of 1.8V to 2.3V with a maximum of 3V.
Finally, Intel is opening up the memory controller to further overclocking. Intel will allow enthusiasts to overclock the memory in either 200MHz or 266MHz increments, which allows for a maximum of either 2,000MHz or 2,666MHz respectively. The default voltage will depend on the particular RAM DIMMs you use, but can be controlled via the Vddq IVR setting.
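The memory increments can be sketched the same way. A small illustration, assuming the "266MHz" step is really the usual DDR3 granularity of 266 2/3 MHz (800/3), which would explain how ten steps land on 2,666MHz rather than 2,660MHz; that fractional step is my inference, not something Intel has stated:

```python
# Intel's two memory step sizes; "266" is modeled as 800/3 MHz, the
# common DDR3 frequency granularity (an assumption, see the lead-in).
STEPS_MHZ = {"200 MHz steps": 200.0, "266 MHz steps": 800.0 / 3}

def reachable_speeds(step_mhz, n_steps=10):
    """Memory speeds reachable in fixed increments (simplified model)."""
    return [int(step_mhz * n) for n in range(1, n_steps + 1)]

for label, step in STEPS_MHZ.items():
    print(label, reachable_speeds(step)[-3:])
# 200 MHz steps [1600, 1800, 2000]
# 266 MHz steps [2133, 2400, 2666]
```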
It remains to be seen how Intel will lock down the various processor SKUs, especially the non-K edition chips, but at least now we have an idea of how a fully-unlocked Haswell processor will overclock. On a positive note, it is similar to what we have become used to with Ivy Bridge, so similar overclocking strategies for getting the most out of processors should still apply with a bit of tweaking. I’m interested to see how the integration of the voltage regulation hardware will affect overclocking though. Hopefully it will live up to the promises of increased efficiency!
Are you gearing up for a Haswell overhaul of your system, and do you plan to overclock?
In addition to Intel's announcement of new Xeon processors, the company is launching three new Atom processor lineups for servers later this year. The lineups include the Intel Atom S12x9 family for storage applications, Rangeley processors for networking gear, and Avoton SoCs for low-power micro-servers.
The Intel Atom S12x9 family takes the existing S1200 processors and makes a few tweaks to optimize the SoCs for storage servers and other storage appliances. For reference, the Intel Atom S1200 series of processors feature sub-9W TDPs, 1MB of cache, and two physical CPU cores clocked at up to 2GHz. However, Intel did not list the individual S12x9 SKUs or specifications, so it is unknown if they will also be clocked at up to 2GHz. The new Atom S12x9 processors will feature 40 PCI-E 2.0 lanes (26 Root Port and 16 Non-Transparent Bridge) to provide ample bandwidth between I/O and processor. The SoCs also feature hardware RAID acceleration, Native Dual-Casting, and Asynchronous DRAM Self-Refresh. Native Dual-Casting allows data to be read from one source and written to two memory locations simultaneously while Asynchronous DRAM Self-Refresh protects data during a power failure.
The new chips are available now to customers and will be available in OEM systems later this year. Vendors that plan to release systems with the S12x9 processors include Accusys, MacroSAN, Qnap, and Qsan.
Intel is also introducing a new series of processors, codenamed Rangeley, intended to power future networking gear. The 22nm Atom SoC is slated to be available sometime in the second half of this year (2H'13). Intel is positioning the Rangeley processors at entry-level to mid-range routers, switches, and security appliances.
While S12x9 and Rangeley are targeted at specific tasks, the company is also releasing a general-purpose Atom processor codenamed Avoton. The Avoton SoCs are aimed at low-power micro-servers and are Intel's answer to ARM chips in the server room. Avoton is Intel's second-generation 64-bit Atom processor series. It uses the company's Silvermont architecture on a 22nm process. The major update with Avoton is the inclusion of an Ethernet controller built into the processor itself. According to Intel, building networking into the processor instead of onto a separate add-on board results in "significant improvements in performance per watt." These chips are currently being sampled to partners, and should be available in Avoton-powered servers later this year (2H'13).
This year is certainly shaping up to be an interesting year for Atom processors. I'm excited to see how the battle unfolds between the ARM and Atom-based solutions in the data center.
Subject: Processors | April 3, 2013 - 05:35 AM | Tim Verry
Tagged: mobile, Lenovo, electrical engineering, chip design, arm
According to a recent article in the EE Times, Beijing-based PC OEM Lenovo may be entering the mobile chip design business. An anonymous source allegedly familiar with the matter has indicated that Lenovo will be expanding its integrated circuit design team to 100 engineers by the second half of this year. Further, Lenovo will reportedly task the newly-expanded team with designing an ARM processor of its own, joining the ranks of Apple, Intel, NVIDIA, Qualcomm, Huawei, Samsung, and others.
It is unclear whether Lenovo simply intends to license an existing ARM core and graphics module, or if the design team expansion is merely the beginning of a growing division that will design a custom chip for its smartphones and Chromebooks to truly differentiate itself and take advantage of vertical integration.
Junko Yoshida of EE Times notes that Lenovo was turned away by Samsung when it attempted to use the company's latest Exynos Octa processor. I think that might contribute to the desire to have its own chip design team, but it may also be that the company believes it can compete in a serious way and set its lineup of smartphones apart from the crowd (as Apple has managed to do) as it pursues further Chinese market share and slowly moves its phones into the United States market.
Details are scarce, but it is at least an intriguing potential future for the company. It will be interesting to see if Lenovo is able to make it work in this extremely competitive and expensive area.
Do you think Lenovo has what it takes to design its own mobile chip? Is it a good idea?
Subject: Editorial, General Tech, Processors, Shows and Expos | March 20, 2013 - 03:26 PM | Scott Michaud
Tagged: windows rt, nvidia, GTC 2013
NVIDIA develops processors, but without an x86 license they are only able to power ARM-based operating systems. When it comes to Windows, that means Windows Phone or Windows RT. The latter segment of the market has seen disappointing sales according to multiple OEMs (sales Microsoft blames on the OEMs themselves), but the jolly green GPU company is not crying doomsday.
NVIDIA just skimming the Surface RT, they hope.
As reported by The Verge, NVIDIA CEO Jen-Hsun Huang was optimistic that Microsoft would eventually let Windows RT blossom. He noted how Microsoft very often "gets it right" at some point when they push an initiative. And it is true, Microsoft has a history of turning around perceived disasters across a variety of devices.
They also have a history of, as they call it, "knifing the baby."
I think there is a very real fear for some that Microsoft could consider Intel's latest offerings good enough to stop pursuing ARM. Of course, the more they pursue ARM, the more their business model will rely upon the-interface-formerly-known-as-Metro and likely all of its certification politics. As such, I think it is safe to say that I am watching the industry teeter on a fence with a bear on one side and a pack of rabid dogs on the other. On the one hand, Microsoft jumping back to Intel would allow them to perpetuate the desktop and all of the openness it provides. On the other hand, even if they stick with Intel they will likely just kill the desktop anyway, for the sake of user confusion and the security benefits of certification. We might just have fewer processor manufacturers when they do that.
So it could be that NVIDIA is confident that Microsoft will push Windows RT, or it could be that NVIDIA is pushing Microsoft to continue to develop Windows RT. Frankly, I do not know which would be better... or more accurately, worse.
Subject: Processors | March 12, 2013 - 11:52 AM | Jeremy Hellstrom
Tagged: VLIW4, trinity, Richland, piledriver, notebook, mobile, hd 8000, APU, amd, A10-5750
The differences between Richland and Trinity are not earth-shattering, but there are certainly some refinements implemented by AMD in the A10-5750. One very noticeable one is support for DDR3-1866, as well as better power management for both the CPU and GPU; with new temperature-balancing algorithms and measurements, the ability to balance the load properly has improved over Trinity. Many AMD users will be more interested in the GPU portion of the die than the CPU, as that is where AMD actually has a lead on Intel, and this particular chip contains the HD 8650G, with clocks of 720MHz boost and 533MHz base, increases from the previous generation of 35MHz and 37MHz respectively. You can read more about the other three models that will be released over at The Tech Report.
"AMD has formally introduced the first members of its Richland APU family. We have the goods on the chips and Richland's new power management tech, which combines temperature-based inputs with bottleneck-aware clock boosting."
Here are some more Processor articles from around the web:
- AMD Richland APU Preview: Trinity Gets a Facelift @ Hardware Canucks
- 2013 AMD Mobile APU (Richland) @ Bjorn3D
- Westmere-EP to Sandy Bridge-EP: The Scientist Potential Upgrade @ AnandTech
- AMD Phenom II X4 955, Phenom II X4 960T, Phenom II X6 1075T and Intel Pentium G2120, Core i3-3220, Core i5-3330 @ ixbt.com
- AMD FX-8350 @ iXBT Labs
- The new Opteron 6300: Finally Tested! @ AnandTech
- Intel Core i5-3570K vs. i7-3770K Ivy Bridge @ techPowerUp
Subject: Processors | February 20, 2013 - 06:35 PM | Josh Walrath
Tagged: Tegra 4i, tegra 4, tegra 3, Tegra 2, tegra, phoenix, nvidia, icera, i500
The NVIDIA Tegra 4 and Shield project were announced at this year’s CES, but there were other products in the pipeline that were just not quite ready to see the light of day at that time. While Tegra 4 is an impressive-looking part for mobile applications, it is not entirely appropriate for the majority of smartphones out there. Sure, the nebulous “superphone” category will utilize Tegra 4, but that is not a large part of the smartphone market. The two basic issues with Tegra 4 are that it pulls a bit more power at its rated clockspeeds than some manufacturers like, and it does not contain a built-in modem for communication needs.
The die shot of the Tegra 4i. A lot going on in this little guy.
NVIDIA bought up UK modem designer Icera to help create true all-in-one SOCs. Icera has a unique method of building its modems that the company says is not only more flexible than what others are offering, but also much more powerful. These modems skip a lot of the fixed-function units that most modems are made of, relying instead on high-speed general-purpose compute units and an interesting software stack to create smaller modems with greater flexibility when it comes to wireless standards. At CES NVIDIA showed off the first product of this acquisition, the i500. This is a standalone chip and is set to be offered alongside the Tegra 4 SOC.
Yesterday NVIDIA introduced the Tegra 4i, formerly codenamed “Grey”. This is a combined Tegra SOC with the Icera i500 modem. This is not exactly what we were expecting, but the results are actually quite exciting. Before I get too out of hand about the possibilities of the chip, I must make one thing perfectly clear. The chip itself will not be available until Q4 2013. It will be released in limited products with greater availability in Q1 2014. While NVIDIA is announcing this chip, end users will not get to use it until much later this year. I believe this issue is not so much that NVIDIA cannot produce the chips, but rather the design cycles of new and complex cell phones do not allow for rapid product development.
Tegra 4i really should not be confused with the slightly earlier Tegra 4. The 4i actually uses the 4th revision of the Cortex A9 processor rather than the Cortex A15 found in the Tegra 4. The A9 has been a mainstay of modern cell phone processors for some years now and offers a great deal of performance considering its die size and power consumption. The 4th revision improves the IPC of the A9 in a variety of ways (memory management, prefetch, buffers, etc.), so it will perform better than previous Cortex A9 solutions. Performance will not approach that of the much larger and more complex A15 cores, but it is a nice little boost from what we have previously seen.
The Tegra 4 features a 72-core GPU (NVIDIA has still declined to detail the specifics of its new mobile graphics technology; these ain't Kepler, though), while the 4i features a nearly identical unit with 60 cores. There is no word so far on what speeds these will run at or how performance really compares to the latest graphics products from ARM, Imagination, or Qualcomm.
The chip is made on TSMC’s 28 nm HPM process and features core speeds up to 2.3 GHz. We again have no information on whether that applies to all four cores at once or to turbo functionality on a single core. The design adopts the previous 4+1 core setup, with four high-speed cores and one power-saving core. Considering how small each core is (Cortex A9 or A15), it is not a waste of silicon compared to the potential power savings. The HPM process is the high-performance variant, rather than the LPM (low-power) process used for Tegra 4. My guess here is that the A9 cores are not going to pull all that much power anyway due to their simpler design compared to the A15. Hitting 2.3 GHz is also a factor in the process decision. Also consider that the +1 core is fabricated slightly differently than the other four to allow for slower transistor switching speed with much lower leakage.
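The 4+1 idea is easy to sketch in code. The following toy governor is purely illustrative (the threshold, policy, and names are made up; this is not NVIDIA's actual scheduler): light background loads stay on the single low-leakage companion core, and the fast quad only wakes as demand grows.

```python
import math

# Toy 4+1 scheduler: below a load threshold everything runs on the single
# low-leakage companion core; above it, work moves onto the fast quad.
# Threshold and policy are illustrative numbers, not NVIDIA's governor.

FAST_CORES = 4
COMPANION_THRESHOLD = 0.25  # fraction of one fast core (made-up figure)

def active_cores(load):
    """Which cores a hypothetical governor would power up for a given load."""
    if load <= COMPANION_THRESHOLD:
        return ["companion"]  # e.g. screen-off music playback, idle sync
    needed = min(FAST_CORES, max(1, math.ceil(load)))
    return [f"fast{i}" for i in range(needed)]

print(active_cores(0.1))  # ['companion']
print(active_cores(2.6))  # ['fast0', 'fast1', 'fast2']
```

The power win comes from the companion core's low-leakage fabrication: the same work done slowly on leaky fast cores would cost more energy at idle-ish loads.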
The die size looks to be in the 60 to 65 mm² range. This is not a whole lot larger than the original Tegra 2, which was around 50 mm². Consider that the Tegra 4i has three more cores, a larger and more capable GPU portion, and the integrated Icera i500 modem. The modem is a full Cat 3 LTE-capable unit (100 Mbps), so bandwidth should not be an issue for this phone. The chip has all of the features of the larger Tegra 4, such as the Computational Photography Architecture, Image Signal Processor, video engine, and the “optimized memory interface”. All of those neat things that NVIDIA showed off at CES will be included. The only other major feature that is not present is the ability to output 3200x2000 resolutions; this particular chip is limited to 1920x1200. Not a horrific tradeoff considering this will be a smartphone SOC with a max of 1080p resolution for the near future.
We expect to see Tegra 4 out in late Q2 in some devices, but not a lot. While Tegra 4 is certainly impressive, I would argue that Tegra 4i is the more marketable product with a larger chance of success. If it were available today, I would expect its market impact to be similar to what we saw with the original 28nm Krait SOCs from Qualcomm last year. There is simply a lot of good technology in this core. It is small, it has a built-in modem, and performance per mm² looks to be pretty tremendous. Power consumption will be appropriate for handhelds, and perhaps might turn out to be better than most current solutions built on 28 nm and 32 nm processes.
NVIDIA also developed the Phoenix Reference Phone which features the Tegra 4i. This is a rather robust looking unit with a 5” screen and 1080P resolution. It has front and rear facing cameras, USB and HDMI ports, and is only 8 mm thin. Just as with the original Tegra 3 it features the DirectTouch functionality which uses the +1 core to handle all touch inputs. This makes it more accurate and sensitive as compared to other solutions on the market.
Overall I am impressed with this product. It is a very nice balance of performance, features, and power consumption. As mentioned before, it will not be out until Q4 2013. This will obviously give the competition some time to hone their own products, and perhaps release something that will not only compete well with Tegra 4i in its price range, but exceed it in most ways. I am not entirely certain of this, but it is a potential danger. The potential is low, though, as the design cycles for complex and feature-packed cell phones are longer than 6 to 7 months. While NVIDIA has had some success in the SOC market, they have not had a true home run yet. Tegra 2 and Tegra 3 had their fair share of design wins, but did not ship in numbers anywhere near those of Qualcomm or Samsung. Perhaps Tegra 4i will be that breakthrough part for NVIDIA? Hard to say, but when we consider how aggressive this company is, how deep their developer relations are, and how feature-packed these products seem to be, I think NVIDIA will continue to gain traction and market share in the SOC market.
Subject: Processors | January 25, 2013 - 03:11 PM | Jeremy Hellstrom
Tagged: haswell, Intel, overclocking, speculation, BCLK
hardCOREware is engaging in a bit of informed speculation on how overclocking the upcoming Haswell chips will be accomplished. Now that Intel has relaxed the draconian lock down of frequencies and multipliers that they enforced for a few generations of chips, overclockers are once again getting excited about their new chips. They talk about the departure of the Front Side Bus and the four frequencies which overclockers have been using in modern generations and then share their research on why the inclusion of a GPU on the CPU might just make overclockers very happy.
"This is an overclocking preview of Intel’s upcoming Haswell platform. We have noticed that they have made an architectural change that may be a great benefit to overclockers. Check out our thoughts on the potential return of BCLK overclocking!"
Here are some more Processor articles from around the web:
- Intel Core i7-3960x vs. i7-3970x@Bjorn3D
- Intel Core i3-3220 v. Intel Core i3-3225 Review @ MissingRemote
- Desktop CPU Comparison Guide @ TechARP
- Testing Memory Speeds on AMD's A10-5800K Trinity APU @ Legit Reviews
- AMD A10 5700K APU @ Guru of 3D
Subject: Graphics Cards, Processors | January 23, 2013 - 11:42 AM | Ryan Shrout
Tagged: southern islands, sony, ps4, playstation 4, orbis, Kaveri, bulldozer, APU, amd
Earlier today a report on Kotaku.com revealed some details about the upcoming PlayStation console, code-named Orbis and sometimes simply called the PS4. Kotaku author Luke Plunkett got the information from a 90-page PDF detailing the development kit, so the information is likely quite accurate, if incomplete. It discusses a new controller and a completely new accounts system, but I was mostly interested in the hardware details given.
We'll begin with the specs. And before we go any further, know that these are current specs for a PS4 development kit, not the final retail console itself. So while the general gist of the things you see here may be similar to what makes it into the actual commercial hardware, there's every chance some—if not all of it—changes, if only slightly.
This is key to keep in mind because here are the specs listed on the report:
- 8GB of system memory
- 2.2GB of graphics memory
- 4 module (8 core) AMD Bulldozer CPU
- AMD "R10xx" based GPU
- 4x USB 3.0 ports and 2x Ethernet connections
- Blu-ray drive
- 160GB HDD
- HDMI and optical audio output
We are essentially talking about an AMD FX-series processor with a Southern Islands based discrete card, and I am nearly 100% sure that this will not match the configuration of the shipping system. Think about it: would a console designer really want a processor that can draw more than 100 watts inside its box, in addition to a discrete GPU? I doubt it.
Instead, let's go with the idea that this developer kit is simply meant to emulate some final specifications. More than likely we are looking at an APU solution that combines Bulldozer-derived or Steamroller cores with GCN-based GPU SIMD arrays. The most likely candidate is Kaveri, a 28nm product that meets both of those requirements. Josh recently discussed Kaveri's future in a post during CES that is worth checking out. AMD has told us several times that Kaveri should be able to hit the 1.0 TFLOPs level of performance, which, compared to current discrete GPUs, would put its graphics performance near that of an under-clocked Radeon HD 7770.
There is some room for doubt, though - Kaveri isn't supposed to be out until "late Q4," though it's possible that the PS4 will be its first customer. It is also possible that AMD is building a specific discrete GPU for the PS4 based on the GCN architecture that would be faster than the graphics performance expected from the Kaveri APU.
When speaking with our own Josh Walrath about this rumor, he tended to think that Sony and AMD would not use an APU but would instead combine a separate CPU and GPU on a single substrate, allowing for better yields than a monolithic APU part. To make up for the slower memory controller interface (on-substrate is not as fast as on-die), AMD might again utilize a backside cache, just like the one used on the Xbox 360 today. With process technology improvements, it's not unthinkable to see that jump to 30 or 40MB of cache.
With the debate over a 2013 or 2014 release still up in the air, there is still plenty of time for this to change, but we will likely know for sure after our next trip to Taipei.
Subject: General Tech, Processors, Mobile, Shows and Expos | January 7, 2013 - 02:05 PM | Scott Michaud
Tagged: CES, ces 2013, haswell, Intel
Oh certification, how I loathe thee.
At its CES 2013 keynote, Intel announced a few new requirements that OEMs must meet to manufacture Haswell-based ultrabooks. Intel clearly wants to push OEMs to utilize several of its more cherished features, and as such will not allow products to be released without them.
A fourth-generation ultrabook must contain the following features:
- Touch interaction support
- Intel WiDi support
- Installed antivirus and anti-malware software (Intel-owned McAfee will have an announcement soon)
These three certification requirements raise two major points of contention for me: non-Windows 8 operating systems, and Intel potentially strong-arming McAfee into your machine. By requiring touch support for Haswell-based ultrabooks, Intel effectively declares that Windows 7 and Linux will not be part of the picture.
That requirement could seem minor depending on what McAfee announces in the wake of Intel's requirement that antivirus and anti-malware software be pre-installed on ultrabooks. Windows 8 already ships with built-in protection in Windows Defender, so Intel might strong-arm vendors into using McAfee instead. It would not be a stretch to speculate that McAfee will have some deep attachment to the Haswell architecture. Unfortunately, we will need to wait until Intel makes its announcement.
Intel also claims that touch-enabled ultrabooks will hit the $599 price point very soon.
PC Perspective's CES 2013 coverage is sponsored by AMD.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Processors, Mobile, Shows and Expos | January 6, 2013 - 02:13 PM | Scott Michaud
Tagged: CES, ces 2013, vizio, amd
Why not open up CES-proper discussion with a tablet announcement?
AMD has begun its push into the tablet space, with Vizio being one of the first OEM partners to announce products at CES. Because AMD is one of the select few to still hold a proper x86 license, it is about your only option outside of Intel for a true Windows 8 tablet. Vizio took AMD up on that position.
The Vizio Tablet PC, seemingly a play on the original Android-based Vizio tablet with an added declaration that “I am a PC”, will run standard Windows 8 certified as Microsoft Signature. No bloatware will be included, which should help users reclaim the performance that 60-day antivirus trials and auto-launched demo notifications normally absorb.
On the technical side, the Tablet PC is loaded with 2 GB of RAM, an 11.6” full 1080p display, and a 1.0 GHz AMD Z-60 processor. 64 GB of solid-state storage is included, although Windows 8 has been known to claim a large portion of that. Readers of our site probably already have a primary computing device, but this might be worth watching as a secondary one. You do not have a whole lot of other options for Flash support or access to non-default browsers.
Subject: Processors | January 6, 2013 - 02:09 PM | Josh Walrath
Tagged: valleyview, low power, Intel, Bay Trail, atom
When the original Intel Atom hit the scene, it was a reasonably large success for Intel thanks to the massive explosion of netbooks. The original design was very simplistic, but fairly power efficient. The weak link of the original Atom was the 945 chipset graphics, which were not only underpowered but also based on a relatively power-hungry desktop chipset. The eventual competition from AMD featured a next-generation low-power core based on the Bobcat architecture, with a modern graphics core that was more than adequate for most scenarios.
Intel never stood still, but its advancement of the low-power cores was slow compared to the massive leaps and bounds we saw on the desktop and in servers from the original Core architecture in 2006 onward. These products typically lagged the desktop parts in terms of process node, but Intel continued to advance the cores little by little.
Fast forward a few years and we saw the eventual demise of the netbook and the massive uptake of mobile computing, primarily in the form of tablets and smartphones. Intel was late to the party compared to Qualcomm, Samsung, and NVIDIA, and a fire was lit under the Atom group as the competition far surpassed the company in ultra-mobile parts.
Happily for those of us paying attention, the 3D Center Forum has released some very interesting slides about the 22 nm generation of Atom products and the platforms they will be integrated into. Valleyview is the SoC while Bay Trail is the platform.
Valleyview is based on Intel’s 22 nm process and will be a next-generation Atom processor with a multitude of new features. It will be an SoC, no longer requiring a traditional southbridge, and will have improved graphics compared to the most recent Atom processors. While the SoC will feature USB 3.0, it will not embrace SATA 6G or PCI-E 3.0. The CPU will scale up to quad-core units that will be 50% to 100% faster than current parts. These new chips will also introduce a boost function (think desktop Turbo Boost) that can push frequencies to 2.7 GHz or higher.
Power is of course the primary concern, and these products will be offered from 3 watts and below (Bay Trail-T) up to 12 watts (Bay Trail-D). These products will not be competing with the Haswell parts, which are rumored to reach around 10 watts at the very lowest.
While Intel has been slow to react to the mobile push, they are starting to get that ball rolling. It will be very interesting to see if they can move fast enough to outrun and outwit the ARM-based competition, not to mention AMD’s latest 28 nm products due in the first half of 2013.
Subject: Graphics Cards, Networking, Motherboards, Cases and Cooling, Processors, Systems, Storage, Mobile, Shows and Expos | January 5, 2013 - 07:47 AM | Ryan Shrout
Tagged: CES, ces 2013, pcper
It's that time of year - the staff at PC Perspective is loaded up and either already here in Las Vegas, on their way to Las Vegas or studiously sitting at their desk at home - for the 2013 Consumer Electronics Show! I know you are on our site looking for all the latest computer hardware news from the show and we will have it. The best place to keep checking is our CES landing page at http://pcper.com/ces. The home page will work too.
We'll have stories covering companies like Intel, AMD, NVIDIA, ASUS, MSI, Gigabyte, Zotac, Sapphire, Galaxy, EVGA, Lucid, OCZ, Western Digital, Corsair, and many, many more that I don't feel like listing here. It all starts Sunday with CES Unveiled, followed by the NVIDIA press conference where they will announce...something.
Also, don't forget to subscribe to the PC Perspective Podcast, as we will be bringing you episodes wrapping up each day of the show. We will also try to stream them live on our PC Perspective Live! page, but times and bandwidth will vary.
Subject: General Tech, Processors | January 3, 2013 - 03:00 PM | Scott Michaud
Tagged: Intel, haswell, Ivy Bridge-E
Intel creates a bunch of roadmaps as part of its corporate slide decks, and much like its development cycles, they get leaked like clockwork.
Last quarter’s roadmap revealed Intel's intention to release the higher-end Ivy Bridge-E processors a whole quarter after dropping the non-enthusiast Ivy Bridge from retail. That leak ended speculation from the prior quarter about the fate of Ivy Bridge-E, with Haswell and Sandy Bridge-E pushing Ivy Bridge out of Intel’s second-quarter 2013 lineup. After all, would Intel push higher-end SKUs of obsolete components? Would they just skip to Haswell-E? Could Sandy Bridge-E be slowly eaten away by the Xeon and lower-end markets and left without a replacement? Apparently not.
I cannot Haswell-E'sburger.
The most obvious takeaway from this slide is that nothing changed; information was only added. Ivy Bridge-E is still on target to launch a little less than a year from now. What we were given are the expected SKU names of the Haswell parts.
From the i5 range up to (but not including) Sandy Bridge-E, we will have approximately five SKUs, ranging from the i5-4570 up to the i7-4770K. Room is still left for SKUs above the i7-4770K and the i5-4670K, although Intel does not show any direct intention to produce such chips. WCCF Tech believes, based on previous rumors, that Ivy Bridge-E will consist of four SKUs: i7-4930, i7-4960, i7-4970, and i7-4990.
I also cannot Haswell at all???
Intel’s lower-end roadmap was also leaked in the same post. Apparently Ivy Bridge has more legs in that price range, with Haswell delayed a quarter for Pentium and i3 processors. Haswell is completely absent at the Celeron price point, with the original Sandy Bridge sticking around for at least another year.
This is clearly not a panicked situation for Intel on the high end. Three leaked roadmaps in a row show, for all practical purposes, the same vision. I will be curious to see how performance compares between Ivy Bridge-E and the newer but lower-positioned Haswell; Ivy Bridge-E will clearly make more sense for RAM-intensive applications, but will certain applications be able to better utilize Haswell and its new architecture?
Who do you think will win in a fistfight, Ha’s well Ghul or Poison Ivy Bridge-E?
Subject: General Tech, Processors | December 28, 2012 - 01:25 PM | Scott Michaud
Due to Phoronix being particularly interesting lately, how would you like a little more open-source news?
GCC is one of the most important compilers for C/C++-based software due to its ubiquity, both in where it can run and in what it can compile for. Intel has a lot of experience developing compilers, to say the least, and creating a competing product does not stop Intel from contributing to the project.
Aww, looks like he wants a hug.
Intel created a set of C/C++ language extensions known as “Cilk Plus”, designed to help developers parallelize their code for multithreaded processors. Both the compiler and runtime portions of Cilk Plus have been made open source and were submitted for inclusion in GCC. Unfortunately, for reasons which are currently unclear, GCC completed development of version 4.8 without Cilk Plus.
Patches developed by Intel have been available since the summer, awaiting approval from GCC's official maintainers. Because the deadline passed without the completed code being merged, we will allegedly need to wait until at least 2014 -- maybe longer -- before Cilk Plus has another chance to be included in GCC.
Subject: Processors, Mobile | December 19, 2012 - 12:26 AM | Tim Verry
Tagged: wayne, tegra 4, SoC, nvidia, cortex a15, arm
Earlier this year, NVIDIA showed off a roadmap for its Tegra line of mobile system-on-a-chip (SoC) processors. The next-generation chip, Tegra 4, is codenamed Wayne and will be the successor to the Tegra 3.
Tegra 4 will use a 28nm manufacturing process and feature improvements to the CPU, GPU, and IO components. Thanks to a leaked slide that appeared on Chip Hell, we now have more details on Tegra 4.
The 28nm Tegra 4 SoC will keep the same 4+1 CPU design* as the Tegra 3, but it will use ARM Cortex A15 CPU cores instead of the Cortex A9 cores in the current generation. NVIDIA is also improving the GPU, and Tegra 4 will reportedly feature a 72-core GPU based on a new architecture. Unfortunately, we do not have specifics on how that GPU is set up architecturally, but the leaked slide indicates it will be as much as 6x faster than NVIDIA’s own Tegra 3. It will allegedly be fast enough to power displays at resolutions from 1080p @ 120Hz up to 4K (refresh rate unknown). Don’t expect to drive games at native 4K resolution; however, it should run a tablet OS fine. Interestingly, NVIDIA has included hardware to accelerate VP8 and H.264 video at up to 2560x1440.
Additionally, Tegra 4 will support dual-channel DDR3L memory, USB 3.0, and hardware-accelerated security options including HDCP, Secure Boot, and DRM, which may make it an attractive option for Windows RT tablets.
The leaked slide has revealed several interesting details on Tegra 4, but it has also raised some questions on the nitty-gritty details. Also, there is no mention of the dual core variant of Tegra 4 – codenamed Grey – that is said to include an integrated Icera 4G LTE cellular modem. Here’s hoping more details surface at CES next month!
* NVIDIA's name for a CPU that features four ARM CPU cores and one lower power ARM companion core.