Subject: Editorial, General Tech, Graphics Cards, Processors, Systems | May 21, 2013 - 05:26 PM | Scott Michaud
Tagged: xbox one, xbox
Almost exactly three months have passed since Sony announced the Playstation 4, and just three weeks remain until E3. Ahead of the event, Microsoft unveiled their new Xbox console: the Xbox One. Being so close to E3, they are saving the majority of game announcements until then. For now, the focus is the box itself as well as its non-gaming functionality.
First and foremost, the raw specifications:
- AMD APU (5 billion transistors, 8 core, on-die eSRAM)
- 8GB RAM
- 500GB Storage, Bluray reader
- USB 3.0, 802.11n, HDMI out, HDMI in
The hardware is a definite win for AMD. The Xbox One is based upon an APU which is quite comparable to what the PS4 will offer. Unlike previous generations, there will not be too much differentiation based on available performance; I would not expect to see much of a fork in terms of splitscreen and other performance-sensitive features.
A new version of the Kinect sensor will also ship with every unit, so developers can depend on its presence. Technically speaking, the camera is higher-resolution and wider-angle, and up to six skeletons can be tracked with joints that rotate rather than just hinge. Microsoft is also finally permitting developers to combine the Kinect with a standard controller so that, as they imagine it, a user could raise their controller to block with a shield. That is the hope, but remember that near the launch of the original Kinect, Microsoft filed a patent for sign language recognition: that has not happened yet. Who knows whether the device will be successfully integrated into gaming applications.
Of course, Microsoft is known most for system software, and the Xbox One runs three lightweight operating environments. The analogy is Windows 8: there, you have the Modern interface which runs WinRT applications, and you have the desktop environment which runs x86 applications.
The Xbox One borrows more than a little from this model.
The home screen for the console, which I am tempted to call the Start Screen, has a very familiar tiled interface. The tiles are not identical to Windows, but they are definitely consistent with it. This interface allows access to Internet Explorer and an assortment of apps, which can be snapped to the side of the screen much like Windows 8 Modern apps. I am expecting "a lot of crossover" (to say the least) between this and the Windows Store; I would not be surprised if it is basically the same API. This works both when viewing entertainment content and within a game.
These three operating systems run at the same time: the main operating system is essentially a Hyper-V-style hypervisor that runs the other two simultaneously in something like virtual machines. The environments can be layered with low latency because switching between them is just compositing their output in a different order.
Lastly, they made reference to Xbox Live, go figure. Microsoft is seriously increasing their server capacity and expects developers to utilize Azure infrastructure to offload "latency-insensitive" computation for games. While Microsoft promises that you can play games offline, this obviously does not apply to features (or whole games) which rely upon the back-end infrastructure.
And yes, I know you will all beat up on me if I do not mention the SimCity debacle. Maxis claimed that much of the game requires an online connection due to complicated server requirements; after a crack enabled offline functionality, it became clear that the game mostly runs fine on a local client. How much will the Xbox Live cloud service actually offload? Who knows, but that is at least their official word.
Now to tie up some loose ends. The Xbox One will not be backwards compatible with Xbox 360 games, although that is no surprise. Microsoft also says it will allow users to resell and lend games. That said, from what I have heard, games will be installed to the hard drive and will not require the disc afterward. Apart from concerns about how many installs a single 500GB drive can hold, rumor has it that once a game is installed, loading it elsewhere will require paying a fee to Microsoft (the rumor is even less clear on whether "elsewhere" means another account or another machine). In other words? Basically not a used game.
Well, that about does it. You can be sure we will add more as information comes forth. Comment away!
Subject: General Tech, Graphics Cards, Processors, Mobile | May 15, 2013 - 09:02 PM | Scott Michaud
Tagged: tegra 4, hp, tablets
Sentences containing the words "Hewlett-Packard" and "tablet" can end in a question mark, an exclamation mark, or, on occasion, a period. The gigantic multinational technology company tried to own a whole mobile operating system with its purchase of Palm, then abandoned those plans just as abruptly; the $99 liquidation of its $500 tablets was, go figure, so successful that they more or less did it twice. The operating system was open sourced, and at some point LG swooped in and bought it, minus patents, for use in Smart TVs.
So how about that Android?
The floodgates are open on Tegra 4, with HP announcing their SlateBook x2 hybrid tablet just a single day after NVIDIA's SHIELD moved out of the projects. The SlateBook x2 uses the Tegra 4 processor to power Android 4.2.2 Jelly Bean with the full Google experience, including the Google Play store. Alongside Google Play, the SlateBook and its Tegra 4 processor also get access to TegraZone and NVIDIA's mobile gaming ecosystem.
As for the device itself, it is a 10.1" Android tablet which can dock into a keyboard for extended battery life, I/O ports, and, well, a hardware keyboard. You can attach the tablet to a TV via HDMI, alongside the typical USB 2.0 port, combo audio jack, and a full-sized SD card slot; which half (tablet or dock) hosts any given port is anyone's guess, however. Wirelessly, you have 802.11a/b/g/n WiFi and some unspecified version of Bluetooth.
The raw specifications list follows:
NVIDIA Tegra 4 SoC
- ARM Cortex A15 quad core @ 1.8 GHz
- 72 "Core" GeForce GPU @ ~672MHz, 96 GFLOPS
- 2GB DDR3L RAM ("Starts at", maybe more upon customization?)
- 64GB eMMC SSD
- 1920x1200 10.1" touch-enabled IPS display
- HDMI output
- 1080p rear camera, 720p front camera with integrated microphone
- 802.11a/b/g/n + Bluetooth (4.0??)
- Combo audio jack, USB 2.0, SD Card reader
- Android 4.2.2 w/ Full Google and TegraZone experiences.
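The quoted 96 GFLOPS figure in the list above checks out if you assume the usual marketing convention of one fused multiply-add (two floating-point operations) per shader core per cycle; a quick sanity check:

```python
# Back-of-the-envelope check of the 96 GFLOPS figure for Tegra 4.
# Assumes one fused multiply-add (2 FLOPs) per "core" per cycle, which is
# the typical convention behind peak-GFLOPS marketing numbers.
cores = 72
clock_hz = 672e6
flops_per_core_per_cycle = 2  # one FMA counts as two floating-point ops

peak_gflops = cores * clock_hz * flops_per_core_per_cycle / 1e9
print(f"Peak: {peak_gflops:.1f} GFLOPS")  # ~96.8, matching the quoted 96
```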
If this excites you, then you only have to wait until some point in August; you will also, of course, need to wait until you save up about $479.99 plus tax and shipping.
Subject: General Tech, Graphics Cards, Processors, Systems, Mobile | May 14, 2013 - 03:54 PM | Scott Michaud
Tagged: haswell, nec
While we are not sure when it will be released, or whether it will be available in North America, we have found a Haswell laptop. Actually, NEC will release two products in this lineup: a high-end 1080p unit and a lower-end 1366x768 model. Unfortunately, the source article is in Japanese.
IPS displays have really wide viewing angles, even top and bottom.
NEC is known for their higher-end monitors; most people equate Dell's UltraSharp panels with professional photo and video production, but Dell's top-end offerings are often a tier below the best from companies like NEC and Eizo. The laptops we are discussing today both contain touch-enabled IPS panels with apparently double the contrast ratio of what NEC considers standard. While these may or may not be the tip-top NEC offerings, they should at least be decent screens.
Obviously the headliner for us is the introduction of Haswell. While we do not know exactly which part NEC decided to embed, we do know that they are relying upon it for graphics performance. With the aforementioned higher-end displays, it seems likely that NEC is aiming this device at the professional market. A price tag of 190,000 yen (just under $1900 USD) for the lower-end model and 200,000 yen (just under $2000 USD) for the higher-end model further suggests this target demographic.
Clearly a Japanese model.
The professional market does not exactly have huge graphics performance requirements, but to see NEC explicitly trust Intel for GPU performance is an interesting twist. Intel HD 4000 has been nibbling, to say the least, at discrete GPU market share in laptops. I would expect this laptop to contain one of the BGA-based Haswell parts, which are soldered onto the motherboard, for the added graphics performance.
As a final note, the higher-end model will also contain a draft-802.11ac antenna, which could push wireless throughput up to 867 megabits per second.
Of course I could not get away without publishing the raw specifications:
LL850/MS (Price: 200000 yen):
- Fourth-generation Intel Core processor with onboard video
- 8GB DDR3 RAM
- 1TB HDD w/ 32GB SSD caching
- BDXL (100-128GB BluRay disc) drive
- IEEE 802.11ac WiFi adapter, Bluetooth 4.0
- SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
- 1080p IPS display with touch support
- Office Home and Business 2013 preinstalled?
LL750/MS (Price: 190000 yen):
- Fourth-generation Intel Core processor with onboard video
- 8GB DDR3 RAM
- 1TB HDD (no SSD cache)
- (Optical disc support not mentioned)
- IEEE 802.11a/b/g/n WiFi adapter, Bluetooth 4.0
- SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
- 1366x768 (IPS?) touch-enabled display
Subject: Editorial, General Tech, Cases and Cooling, Processors | May 10, 2013 - 04:23 PM | Scott Michaud
Tagged: c6, c7, haswell, PSU, corsair
I cannot do it, captain! I don't have the "not enough power"!
We have been discussing the ultra-low idle power states of Haswell processors for a little over a week, and how they could be detrimental to certain power supplies. Power supply manufacturers never quite expected that a running system could draw as little as 0.05 amps (0.6W) on the 12V rail without being off. Since then, companies such as Enermax and Corsair have started to list power supplies which have been tested and are compliant with the new requirement.
| Series | Model | Haswell Ready? | Notes |
|--------|-------|----------------|-------|
| AXi | AX1200i | Yes | 100% Compatible with Haswell CPUs |
| AXi | AX860i | Yes | 100% Compatible with Haswell CPUs |
| AXi | AX760i | Yes | 100% Compatible with Haswell CPUs |
| AX | AX1200 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX860 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX850 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX760 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX750 | Yes | 100% Compatible with Haswell CPUs |
| AX | AX650 | Yes | 100% Compatible with Haswell CPUs |
| HX | HX1050 | Yes | 100% Compatible with Haswell CPUs |
| HX | HX850 | Yes | 100% Compatible with Haswell CPUs |
| HX | HX750 | Yes | 100% Compatible with Haswell CPUs |
| HX | HX650 | Yes | 100% Compatible with Haswell CPUs |
| TX-M | TX850M | Yes | 100% Compatible with Haswell CPUs |
| TX-M | TX750M | Yes | 100% Compatible with Haswell CPUs |
| TX-M | TX650M | Yes | 100% Compatible with Haswell CPUs |
| TX | TX850 | Yes | 100% Compatible with Haswell CPUs |
| TX | TX750 | Yes | 100% Compatible with Haswell CPUs |
| TX | TX650 | Yes | 100% Compatible with Haswell CPUs |
| GS | GS800 | Yes | 100% Compatible with Haswell CPUs |
| GS | GS700 | Yes | 100% Compatible with Haswell CPUs |
| GS | GS600 | Yes | 100% Compatible with Haswell CPUs |
| CX-M | CX750M | Yes | 100% Compatible with Haswell CPUs |
| CX-M | CX600M | TBD | Likely compatible — currently validating |
| CX-M | CX500M | TBD | Likely compatible — currently validating |
| CX-M | CX430M | TBD | Likely compatible — currently validating |
| CX | CX750 | Yes | 100% Compatible with Haswell CPUs |
| CX | CX600 | TBD | Likely compatible — currently validating |
| CX | CX500 | TBD | Likely compatible — currently validating |
| CX | CX430 | TBD | Likely compatible — currently validating |
| VS | VS650 | TBD | Likely compatible — currently validating |
| VS | VS550 | TBD | Likely compatible — currently validating |
| VS | VS450 | TBD | Likely compatible — currently validating |
| VS | VS350 | TBD | Likely compatible — currently validating |
Above is Corsair's slightly incomplete chart, copied from their website at 3:30 PM on May 10th, 2013; so far everything is coming up compatible. Their blog should be updated as new products are validated for the new C6 and C7 CPU sleep states.
The best part of this story is just how odd it is, given the race to arc-welding supplies (it's not a podcast, so you can't Bingo! hahaha!) we have experienced over the last several years. Simply put, some companies never thought that component manufacturers such as Intel would race to the bottom of power draw.
Subject: Editorial, General Tech, Graphics Cards, Processors | May 8, 2013 - 09:32 PM | Scott Michaud
Tagged: Volcanic Islands, radeon, ps4, amd
So the Southern Islands might not be entirely stable throughout 2013 as we originally reported; seismic activity being analyzed suggests the eruption of a new GPU micro-architecture as early as Q4. These Volcanic Islands, as they have been codenamed, should explode onto the scene opposing NVIDIA's GeForce GTX 700-series products.
It is times like these where GPGPU-based seismic computation becomes useful.
The rumor is based upon a source which leaked a fragment of a slide outlining the processor in block diagram form and specifications of its alleged flagship chip, "Hawaii". Of primary note, Volcanic Islands is rumored to be organized with both Serial Processing Modules (SPMs) and a Parallel Compute Module (PCM).
So apparently a discrete GPU can have serial processing units embedded on it now.
Heterogeneous System Architecture (HSA) is a set of initiatives to bridge the gap between massively parallel workloads and branching logic tasks. We usually discuss it in terms of APUs, which bring parallel-optimized hardware to the CPU; in this case, it is about bringing serial processing to the discrete GPU. According to the diagram, the chip would contain 8 processor modules, each with two processing cores and an FPU, for a total of 16 cores. There is no definite indication of whether these cores would be based upon AMD's license to produce x86 processors or their other license to produce ARM processors. Unlike an APU, this design is heavily skewed towards parallel computation rather than a relatively even balance between CPU, GPU, and chipset features.
Now of course, why would they do that? Graphics processors can do branching logic but it tends to sharply cut performance. With an architecture such as this, a programmer might be able to more efficiently switch between parallel and branching logic tasks without doing an expensive switch across the motherboard and PCIe bus between devices. Josh Walrath suggested a server containing these as essentially add-in card computers. For gamers, this might help out with workloads such as AI which is awkwardly split between branching logic and massively parallel visibility and path-finding tasks. Josh seems skeptical about this until HSA becomes further adopted, however.
Still, there is a reason why they are implementing this now. I wonder, if the SPMs are based upon simple x86 cores, how the PS4 will influence PC gaming. Technically, a Volcanic Islands GPU would be an oversized PS4 on an add-in card. This could give AMD an edge, particularly in games ported to the PC from the PlayStation.
This chip, Hawaii, is rumored to have the following specifications:
- 4096 stream processors
- 16 serial processor cores on 8 modules
- 4 geometry engines
- 256 TMUs
- 64 ROPs
- 512-bit GDDR5 memory interface (GDDR5, as in the PS4, though on a bus twice as wide)
- 20nm gate-last silicon fab process
- Unclear if TSMC or "Common Platform" (IBM/Samsung/GLOBALFOUNDRIES)
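For a sense of scale on the rumored specifications above, here is a rough peak-bandwidth estimate for the 512-bit interface. The per-pin data rate is an assumption on my part (the leak does not specify a memory speed; 5 Gbps was typical for GDDR5 cards of this era):

```python
# Rough peak-bandwidth estimate for the rumored 512-bit GDDR5 interface.
# The effective per-pin data rate is an assumed value, not from the leak.
bus_width_bits = 512
data_rate_gbps = 5.0  # assumed effective Gbps per pin

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"Peak bandwidth: {bandwidth_gbs:.0f} GB/s")  # 320 GB/s at 5 Gbps
```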
Softpedia is also reporting on this leak, adding the claim that the GPU will be designed on a 20nm gate-last fabrication process. While gate-last is generally considered not worth the extra production effort on its own, Fully Depleted Silicon On Insulator (FD-SOI) is apparently "amazing" on gate-last at 28nm and smaller nodes. This could mean that AMD is eyeing that technology, designing the chip now so it could later switch to an FD-SOI process without the large redesign that starting on the initially easier gate-first production would require.
Well that is a lot to process... so I will leave you with an open question for our viewers: what do you think AMD has planned with this architecture, and what do you like and/or dislike about what your speculation would mean?
Subject: General Tech, Processors | May 6, 2013 - 02:34 PM | Jeremy Hellstrom
Tagged: silvermont, merrifield, Intel, Bay Trail, atom
The news today is all about shrinking the Atom, in both process size and power consumption. Indeed, The Tech Report heard talk of milliwatts and SoCs, which shows Intel's change of strategy for Atom: from small-footprint HTPCs to point-of-sale systems and other ultra-low-power applications. Hyper-Threading has been dropped and out-of-order execution brought in, which makes far more sense for the new niche Atom is destined for.
"Since their debut five years ago, Intel's Atom microprocessors have relied on the same basic CPU core. Next-gen Atoms will be based on the all-new Silvermont core, and we've taken a closer look at its underlying architecture."
Here is some more Tech News from around the web:
- AMD says HSA will cut latency bottleneck in GPU processing @ The Inquirer
- Redmond probes new IE 8 vulnerability @ The Register
- Not Like a Fine Wine: Windows Activation Still a Piece of Junk After All These Years @ Techgage
- Acer unveils new ultrabooks, notebooks and tablet @ DigiTimes
- Angering hippies and financing evil @ The Tech Report
- BlackBerry 10 passes US defence department tests @ The Register
- The TR Podcast 133: Iris graphics and the Radeon HD 7990
Subject: Processors | May 3, 2013 - 06:45 AM | Tim Verry
Tagged: z87, overclocking, Intel, haswell, core i7 4770k, 7ghz
OCaholic has spotted an interesting entry in the CPU-Z database. According to the site, an overclocker by the handle of “rtiueuiurei” has allegedly managed to push an engineering sample of Intel’s upcoming Haswell Core i7-4770K processor past 7GHz.
If the CPU-Z entry is accurate, the overclocker used a BCLK speed of 91.01 and a multiplier of 77 to achieve a CPU clockspeed of 7012.65MHz. The chip was overclocked on a Z87 motherboard along with a single 2GB G.Skill DDR3 RAM module. Even more surprising than the 7GHz clockspeed is the voltage that the overclocker used to get there: an astounding 2.56V according to CPU-Z.
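As a quick sanity check of those figures (CPU clockspeed is simply BCLK times multiplier):

```python
# CPU clock = base clock (BCLK) x multiplier. Checking the reported figures:
bclk_mhz = 91.01      # BCLK as displayed by CPU-Z
multiplier = 77

clock_mhz = bclk_mhz * multiplier
print(f"{clock_mhz:.2f} MHz")  # 7007.77 -- close to the reported 7012.65

# Working backwards from the reported frequency gives the effective BCLK:
implied_bclk_mhz = 7012.65 / multiplier
print(f"{implied_bclk_mhz:.2f} MHz")  # ~91.07, suggesting CPU-Z's displayed
# BCLK is rounded/truncated relative to the actual base clock
```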
From the information Intel provided at IDF Beijing, the new 22nm Haswell processors feature an integrated voltage regulator (IVR), and the CPU portion of the chip's voltage is controlled by the Vccin value. Intel recommends a range of 1.8V to 2.3V for this value, with a maximum of 3V and a default of 1.8V. Therefore, the CPU-Z-reported number may actually be correct. On the other hand, it may also just be a software bug due to the unreleased nature of the Haswell chip.
Voltage questions aside, the frequency alone makes for an impressive overclock, and it seems that the upcoming chips will have decent overclocking potential!
Subject: Cases and Cooling, Processors | May 1, 2013 - 03:07 PM | Ryan Shrout
Tagged: power supply, Intel, idle, haswell, c7, c6
I came across an interesting news story posted by The Tech Report this morning that dives into potential problems between Intel's upcoming Haswell processors and currently available power supplies. Apparently, the new C6 and C7 idle power states that give the Haswell architecture its low-power benefits require the power supply to remain stable with a load as low as 0.05 amps on the 12V2 rail. (That's just 50 milliamps!) Without that capability, the system can exhibit unstable behavior, and a quick look at the power supply selector on Intel's own website lists only a couple dozen units that support the feature.
This table from VR-Zone, the original source of the information, shows the difference between the requirements for 3rd-generation (Ivy Bridge) and 4th-generation (Haswell) processors. The shift is an order of magnitude and quite a dramatic change for PSU vendors. Users of Corsair power supplies will be glad to know that most of the units listed with support on the Intel website linked above were Corsair units!
A potential side effect of this problem might be that motherboard vendors simply disable those sleep states by default. I don't imagine that will be a problem for PC builders anyway since most desktop users aren't really worried about the extremely small differences in power consumption they offer. For mobile users and upcoming Haswell notebook designs the increase in battery life is crucial though and Intel has surely been monitoring those power supplies closely.
I asked our in-house power supply guru, Lee Garbutt, who is responsible for all of the awesome power supply reviews on pcper.com, what he thought about this issue. He thinks the reason more power supplies don't support it already is for power efficiency concerns:
Most all PSUs have traditionally required "some load" on the various outputs to attain good voltage regulation and/or not shut down. Not many PSUs are yet designed to operate with no load, especially on the critical +12V output. One of the reasons for this is efficiency: it's harder to design a PSU to operate correctly with a very low load AND deliver high efficiency. It would be easy to just add some bleed resistance across the DC outputs to always have a minimal load and keep voltage regulation under control, but that lowers efficiency.
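To put illustrative numbers to the bleed-resistor approach Lee describes (these values are my own example, not from any specific PSU design):

```python
# Illustrative only: sizing a bleed resistor that would guarantee Haswell's
# 0.05 A minimum load on the +12 V rail, and what that costs in efficiency.
rail_v = 12.0
min_load_a = 0.05

r_ohms = rail_v / min_load_a    # Ohm's law: R = V / I
wasted_w = rail_v * min_load_a  # power burned in the resistor, continuously
print(f"R = {r_ohms:.0f} ohms, dissipating {wasted_w:.1f} W at all times")
# -> 240 ohms burning 0.6 W constantly: a permanent drain on exactly the
# light-load efficiency numbers (e.g. 80 Plus) that PSU vendors compete on
```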
Subject: Processors | April 30, 2013 - 02:04 PM | Josh Walrath
Tagged: amd, FX, vishera, bulldozer, FX-6350, FX-4350, FX-6300, FX-4300, 32 nm, SOI, Beloved
Today AMD has released two new processors that address the AM3+ market. The FX-6350 and FX-4350 are two new refreshes of the quad and hex core lineup of processors. Currently the FX-8350 is still the fastest of the breed, and there is no update for that particular number yet. This is not necessarily a bad thing, but there are those of us who are still awaiting the arrival of the rumored “Centurion”.
These parts are 125 watt TDP units, which are up from their 95 watt predecessors. The FX-6350 runs at 3.9 GHz with a 4.2 GHz boost clock. This is up 300 MHz stock and 100 MHz boost from the previous 95 watt FX-6300. The FX-4350 runs at 3.9 GHz with a 4.3 GHz boost clock. This is 100 MHz stock and 300 MHz boost above that of the FX-4300. What is of greater interest here is that the L3 cache goes from 4 MB on the 4300 to 8 MB on the 4350. This little fact looks to be the reason why the FX-4350 is now a 125 watt TDP part.
It has been some two years since AMD started shipping 32 nm PD-SOI/HKMG products to the market, and it certainly seems as though spinning off GLOBALFOUNDRIES has essentially stopped the push to implement new features into a process node throughout the years. As many may remember, AMD was somewhat famous for injecting new process technology into current nodes to improve performance, yields, and power characteristics in “baby steps” type fashion instead of leaving the node as is and making a huge jump with the next node. Vishera has been out for some 7 months now and we have not really seen any major improvement in regards to performance and power characteristics. I am sure that yields and bins have improved, but the bottom line is that this is only a minor refresh and AMD raised TDPs to 125 watts for these particular parts.
The FX-6350 is again a three module part containing six cores. Each module features 2 MB of L2 cache for a total of 6 MB L2 and the entire chip features 8 MB of L3 cache. The FX-4350 is a two module chip with four cores. The modules again feature the same 2 MB of L2 cache for a total of 4 MB active on the chip with the above mentioned 8 MB of L3 cache that is double what the FX-4300 featured.
Perhaps soon we will see updates on FM2 with the Richland series of desktop processors, but for now this refresh is all AMD has at the moment. These are nice upgrades to the line. The FX-6350 does cost the same as the FX-6300, but the thinking behind that is that the 6300 is more “energy efficient”. We have seen in the past that AMD (and Intel for that matter) does put a premium on lower wattage parts in a lineup. The FX-4350 is $10 more expensive than the 4300. It looks as though the FX-6350 is in stock at multiple outlets but the 4350 has yet to show up.
These will fit in any modern AM3+ motherboard with the latest BIOS installed. While not an incredibly exciting release from AMD, it at least shows that they continue to address their primary markets. AMD is in a very interesting place, and it looks like Rory Read is busy getting the house in order. Now we just have to see if they can curb their cost structure enough to make the company more financially stable. Indications are good so far, but AMD has a long way to go. But hey, at least according to AMD the FX series is beloved!
Subject: Processors | April 17, 2013 - 09:48 PM | Tim Verry
Tagged: overclocking, intel ivr, intel hd graphics, Intel, haswell, cpu
During the Intel Developer Forum in Beijing, China, the x86 chip giant revealed details about how overclocking will work on its upcoming Haswell processors. Enthusiasts will be pleased to know that the new chips do not appear to be any more restrictive than the existing Ivy Bridge processors as far as overclocking goes. Intel has even opened things up slightly by allowing additional BCLK tiers without putting subsystems such as the PCI-E bus out of spec.
The new Haswell chips have an integrated voltage regulator, which provides programmable voltages to the CPU, memory, and GPU portions of the chip. As far as overclocking the CPU itself, Intel has opened up Turbo Boost and is allowing enthusiasts to set an overclocked Turbo Boost clockspeed. Additionally, Intel is specifying available BCLK values of 100, 125, and 167MHz that do not put other systems out of spec (different ratios counterbalance the increased BCLK, which is important for keeping the PCI-E bus within ~100MHz). The chips will also feature unlocked core ratios all the way up to 80 in 100MHz increments. That would allow enthusiasts with a cherry-picked chip and outrageous cooling to clock the chip up to 8GHz without overclocking the BCLK value (though no chip is likely to reach that clockspeed, especially for everyday usage!).
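The BCLK-tier arithmetic can be sketched as follows; the divider values here are inferred from the 100/125/167MHz tiers rather than taken from Intel documentation:

```python
# Sketch of how Intel's BCLK "straps" keep the PCI-E clock near 100 MHz:
# each supported base clock pairs with a divider that cancels it back out.
# Divider values are inferred from the stated 100/125/167 MHz tiers.
straps = {100: 1.00, 125: 1.25, 167: 1.67}

for bclk, divider in straps.items():
    pcie_mhz = bclk / divider
    print(f"BCLK {bclk} MHz / {divider} -> PCI-E ~{pcie_mhz:.1f} MHz")
# Every tier lands at ~100 MHz, which is why these BCLK values do not push
# the PCI-E bus out of spec the way arbitrary BCLK overclocks would.
```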
Remember that the CPU clockspeed is determined by the BCLK value times a pre-set multiplier. Unlocked processors will allow enthusiasts to adjust the multiplier up or down as they please, while non-K edition chips will likely only permit lower multipliers, with higher-than-default multipliers locked out. Further, Intel will allow the adventurous to overclock the BCLK value beyond the pre-defined 100, 125, and 167MHz options, but the chip maker expects most chips to max out between five and seven percent higher than normal. PC Perspective's Morry Teitelman speculates that slightly higher BCLK overclocks may be possible with a good chip and adequate cooling, however.
Similar to the current-generation Ivy Bridge (and Sandy Bridge before it) processors, Intel will pack Haswell processors with its own HD Graphics pGPU. The new HD Graphics will be unlocked, and the graphics ratio will scale up to a maximum of 60 in 50MHz steps for a potential maximum of 3GHz. The processor graphics will also benefit from Intel's IVR (programmable voltage) circuitry. The HD Graphics and CPU are fed voltage from the integrated voltage regulator, which is controlled by adjusting the Vccin value. The default is 1.8V, with a recommended range of 1.8V to 2.3V and a maximum of 3V.
Finally, Intel is opening up the memory controller to further overclocking. Intel will allow enthusiasts to overclock the memory in either 200MHz or 266MHz increments, which allows for a maximum of either 2,000MHz or 2,666MHz respectively. The default voltage will depend on the particular RAM DIMMs you use, but can be controlled via the Vddq IVR setting.
It remains to be seen how Intel will lock down the various processor SKUs, especially the non-K edition chips, but at least now we have an idea of how a fully-unlocked Haswell processor will overclock. On a positive note, it is similar to what we have become used to with Ivy Bridge, so similar overclocking strategies for getting the most out of processors should still apply with a bit of tweaking. I’m interested to see how the integration of the voltage regulation hardware will affect overclocking though. Hopefully it will live up to the promises of increased efficiency!
Are you gearing up for a Haswell overhaul of your system, and do you plan to overclock?
In addition to Intel's announcement of new Xeon processors, the company is launching three new Atom-series processors for servers later this year. The new processor lineups include the Intel Atom S12x9 family for storage applications, Rangeley processors for networking gear, and Avoton SoCs for low-power micro-servers.
The Intel Atom S12x9 family takes the existing S1200 processors and makes a few tweaks to optimize the SoCs for storage servers and other storage appliances. For reference, the Intel Atom S1200 series of processors feature sub-9W TDPs, 1MB of cache, and two physical CPU cores clocked at up to 2GHz. However, Intel did not list the individual S12x9 SKUs or specifications, so it is unknown if they will also be clocked at up to 2GHz. The new Atom S12x9 processors will feature 40 PCI-E 2.0 lanes (26 Root Port and 16 Non-Transparent Bridge) to provide ample bandwidth between I/O and processor. The SoCs also feature hardware RAID acceleration, Native Dual-Casting, and Asynchronous DRAM Self-Refresh. Native Dual-Casting allows data to be read from one source and written to two memory locations simultaneously while Asynchronous DRAM Self-Refresh protects data during a power failure.
The new chips are available now to customers and will be available in OEM systems later this year. Vendors that plan to release systems with the S12x9 processors include Accusys, MacroSAN, Qnap, and Qsan.
Intel is also introducing a new series of processors, codenamed Rangeley, intended to power future networking gear. The 22nm Atom SoC is slated to be available sometime in the second half of this year (2H'13). Intel is positioning the Rangeley processors at entry-level to mid-range routers, switches, and security appliances.
While S12x9 and Rangeley are targeted at specific tasks, the company is also releasing a general-purpose Atom processor codenamed Avoton. The Avoton SoCs are aimed at low-power micro-servers and are Intel's answer to ARM chips in the server room. Avoton is Intel's second-generation 64-bit Atom processor series, using the company's Silvermont architecture on a 22nm process. The major update with Avoton is an Ethernet controller built into the processor itself. According to Intel, building networking into the processor instead of onto a separate add-on board results in "significant improvements in performance per watt." These chips are currently sampling to partners and should appear in Avoton-powered servers later this year (2H'13).
This year is certainly shaping up to be an interesting year for Atom processors. I'm excited to see how the battle unfolds between the ARM and Atom-based solutions in the data center.
Subject: Processors | April 3, 2013 - 08:35 AM | Tim Verry
Tagged: mobile, Lenovo, electrical engineering, chip design, arm
According to a recent article in the EE Times, Beijing-based PC OEM Lenovo may be entering the mobile chip design business. An anonymous source allegedly familiar with the matter has indicated that Lenovo will expand its integrated circuit design team to 100 engineers by the second half of this year. Further, Lenovo will reportedly task the newly expanded team with designing an ARM processor of its own, joining the ranks of Apple, Intel, NVIDIA, Qualcomm, Huawei, Samsung, and others.
It is unclear whether Lenovo simply intends to license an existing ARM core and graphics module or if the design team expansion is merely the beginning of a growing division that will design a custom chip for its smartphones and Chromebooks to truly differentiate itself and take advantage of vertical integration.
Junko Yoshida of EE Times notes that Lenovo was turned away by Samsung when it attempted to use the company's latest Exynos Octa processor. I think that might contribute to the desire to have its own chip design team, but it may also be that the company believes it can compete in a serious way and set its lineup of smartphones apart from the crowd (as Apple has managed to do) as it pursues further Chinese market share and slowly moves its phones into the United States market.
Details are scarce, but it is at least an intriguing potential future for the company. It will be interesting to see if Lenovo is able to make it work in this extremely competitive and expensive area.
Do you think Lenovo has what it takes to design its own mobile chip? Is it a good idea?
Subject: Editorial, General Tech, Processors, Shows and Expos | March 20, 2013 - 06:26 PM | Scott Michaud
Tagged: windows rt, nvidia, GTC 2013
NVIDIA develops processors, but without an x86 license they are only able to power ARM-based operating systems. When it comes to Windows, that means Windows Phone or Windows RT. The latter segment of the market has seen disappointing sales according to multiple OEMs, sales which Microsoft blames on the OEMs themselves, but the jolly green GPU company is not crying doomsday.
NVIDIA just skimming the Surface RT, they hope.
As reported by The Verge, NVIDIA CEO Jen-Hsun Huang was optimistic that Microsoft would eventually let Windows RT blossom. He noted how Microsoft very often "gets it right" at some point when they push an initiative. And it is true, Microsoft has a history of turning around perceived disasters across a variety of devices.
They also have a history of, as they call it, "knifing the baby."
I think there is a very real fear for some that Microsoft could consider Intel's latest offerings good enough to stop pursuing ARM. Of course, the more they pursue ARM, the more their business model will rely upon the-interface-formerly-known-as-Metro and likely all of its certification politics. As such, I think it is safe to say that I am watching the industry teeter on a fence with a bear on one side and a pack of rabid dogs on the other. On the one hand, Microsoft jumping back to Intel would allow them to perpetuate the desktop and all of the openness it provides. On the other hand, even if they stick with Intel they will likely just kill the desktop anyway, for the sake of reducing user confusion and for the security benefits of certification. We might just have fewer processor manufacturers when they do that.
So it could be that NVIDIA is confident that Microsoft will push Windows RT, or it could be that NVIDIA is pushing Microsoft to continue to develop Windows RT. Frankly, I do not know which would be better... or more accurately, worse.
Subject: Processors | March 12, 2013 - 02:52 PM | Jeremy Hellstrom
Tagged: VLIW4, trinity, Richland, piledriver, notebook, mobile, hd 8000, APU, amd, A10-5750
The differences between Richland and Trinity are not earth shattering, but there are certainly some refinements implemented by AMD in the A10-5750. One very noticeable one is support for DDR3-1866, along with better power management for both the CPU and GPU; with new temperature balancing algorithms and measurement, the ability to balance the load properly has improved over Trinity. Many AMD users will be more interested in the GPU portion of the die than the CPU, as that is where AMD actually has a lead on Intel. This particular chip contains the HD 8650G, clocked at 720MHz boost and 533MHz base, an increase over the previous generation of 35MHz and 37MHz respectively. You can read more about the other three models that will be released over at The Tech Report.
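As a quick sanity check on those numbers, the previous-generation Trinity GPU clocks can be recovered from the stated increases; a minimal sketch:

```python
# Recover the Trinity-era GPU clocks from the Richland HD 8650G
# clocks and the stated generational increases (35MHz boost, 37MHz base).
richland_boost_mhz = 720
richland_base_mhz = 533

boost_increase_mhz = 35
base_increase_mhz = 37

trinity_boost_mhz = richland_boost_mhz - boost_increase_mhz  # 685 MHz
trinity_base_mhz = richland_base_mhz - base_increase_mhz     # 496 MHz

print(trinity_boost_mhz, trinity_base_mhz)  # 685 496
```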
"AMD has formally introduced the first members of its Richland APU family. We have the goods on the chips and Richland's new power management tech, which combines temperature-based inputs with bottleneck-aware clock boosting."
Here are some more Processor articles from around the web:
- AMD Richland APU Preview: Trinity Gets a Facelift @ Hardware Canucks
- 2013 AMD Mobile APU (Richland) @ Bjorn3D
- Westmere-EP to Sandy Bridge-EP: The Scientist's Potential Upgrade @ AnandTech
- AMD Phenom II X4 955, Phenom II X4 960T, Phenom II X6 1075T and Intel Pentium G2120, Core i3-3220, Core i5-3330 @ ixbt.com
- AMD FX-8350 @ iXBT Labs
- The new Opteron 6300: Finally Tested! @ AnandTech
- Intel Core i5-3570K vs. i7-3770K Ivy Bridge @ techPowerUp
Subject: Processors | February 20, 2013 - 09:35 PM | Josh Walrath
Tagged: Tegra 4i, tegra 4, tegra 3, Tegra 2, tegra, phoenix, nvidia, icera, i500
The NVIDIA Tegra 4 and Shield project were announced at this year’s CES, but there were other products in the pipeline that were just not quite ready to see the light of day at that time. While Tegra 4 is an impressive looking part for mobile applications, it is not entirely appropriate for the majority of smart phones out there. Sure, the nebulous “Superphone” category will utilize Tegra 4, but that is not a large part of the smartphone market. The two basic issues with Tegra 4 are that it pulls a bit more power at the rated clockspeeds than some manufacturers like, and it does not contain a built-in modem for communication needs.
The die shot of the Tegra 4i. A lot going on in this little guy.
NVIDIA bought up UK modem designer Icera to help create true all-in-one SOCs. Icera takes a unique approach to building their modems, one they say is not only more flexible than what others are offering, but also much more powerful. These modems skip many of the fixed-function units that most modems are built from and instead rely on high speed general purpose compute units and an interesting software stack, creating smaller modems with greater flexibility when it comes to wireless standards. At CES NVIDIA showed off the first product of this acquisition, the i500. This is a standalone chip and is set to be offered alongside the Tegra 4 SOC.
Yesterday NVIDIA introduced the Tegra 4i, formerly codenamed “Grey”. It combines a Tegra SOC with the Icera i500 modem. This is not exactly what we were expecting, but the results are actually quite exciting. Before I get too out of hand about the possibilities of the chip, I must make one thing perfectly clear. The chip itself will not be available until Q4 2013. It will be released in limited products at first, with greater availability in Q1 2014. While NVIDIA is announcing this chip, end users will not get to use it until much later this year. I believe this is not so much that NVIDIA cannot produce the chips, but rather that the design cycles of new and complex cell phones do not allow for rapid product development.
Tegra 4i really should not be confused with the slightly earlier Tegra 4. The 4i actually uses the 4th revision of the Cortex A9 processor rather than the Cortex A15 in the Tegra 4. The A9 has been a mainstay of modern cell phone processors for some years now and offers a great deal of performance when considering die size and power consumption. The 4th revision improves IPC of the A9 in a variety of ways (memory management, prefetch, buffers, etc.), so it will perform better than previous Cortex A9 solutions. Performance will not approach that of the much larger and more complex A15 cores, but it is a nice little boost from what we have previously seen.
The Tegra 4 features a 72 core GPU (though NVIDIA has still declined to detail the specifics of their new mobile graphics technology- these ain’t Kepler though), while the 4i features a nearly identical unit featuring 60 cores. There is no word so far as to what speed these will be running at or how performance really compares to the latest graphics products from ARM, Imagination, or Qualcomm.
The chip is made on TSMC’s 28 nm HPM process and features core speeds up to 2.3 GHz. We again have no information on whether that will be all four cores at that speed or turbo functionality on one core. The design adopts the previous 4+1 core setup with four high speed cores and one power saving core. Considering how small each core is (Cortex A9 or A15), it is not a waste of silicon as compared to the potential power savings. The HPM process is the high-performance mobile variant, rather than the low-power process used for Tegra 4. My guess here is that the A9 cores are not going to pull all that much power anyway due to their simpler design as compared to A15. Hitting 2.3 GHz is also a factor in the process decision. Also consider that the +1 core is fabricated slightly differently from the other four to allow for slower transistor switching speed with much lower leakage.
The die size looks to be in the 60 to 65 mm squared range. This is not a whole lot larger than the original Tegra 2, which was around 50 mm squared. Consider that the Tegra 4i has three more cores, a larger and more capable GPU portion, and the integrated Icera i500 modem. The modem is a full Cat 3 LTE capable unit (100 Mbps), so bandwidth should not be an issue for phones built on it. The chip has all of the features of the larger Tegra 4, such as the Computational Photography Architecture, Image Signal Processor, video engine, and the “optimized memory interface”. All of those neat things that NVIDIA showed off at CES will be included. The only other major feature that is not present is the ability to output 3200x2000 resolutions. This particular chip is limited to 1920x1200. Not a horrific tradeoff considering this will be a smartphone SOC with a max of 1080p resolution for the near future.
We expect to see Tegra 4 out in late Q2 in some devices, but not a lot. While Tegra 4 is certainly impressive, I would argue that Tegra 4i is the more marketable product with a larger chance of success. If it were available today, I would expect its market impact to be similar to what we saw with the original 28nm Krait SOCs from Qualcomm last year. There is simply a lot of good technology in this core. It is small, it has a built-in modem, and performance per mm squared looks to be pretty tremendous. Power consumption will be appropriate for handhelds, and perhaps might turn out to be better than most current solutions built on 28 nm and 32 nm processes.
NVIDIA also developed the Phoenix Reference Phone which features the Tegra 4i. This is a rather robust looking unit with a 5” screen and 1080P resolution. It has front and rear facing cameras, USB and HDMI ports, and is only 8 mm thin. Just as with the original Tegra 3 it features the DirectTouch functionality which uses the +1 core to handle all touch inputs. This makes it more accurate and sensitive as compared to other solutions on the market.
Overall I am impressed with this product. It is a very nice balance of performance, features, and power consumption. As mentioned before, it will not be out until Q4 2013. This will obviously give the competition some time to hone their own products and perhaps release something that will not only compete well with Tegra 4i in its price range, but exceed it in most ways. I am not entirely certain of this, but it is a potential danger. The potential is low though, as the design cycles for complex and feature packed cell phones are longer than 6 to 7 months. While NVIDIA has had some success in the SOC market, they have not had a true home run yet. Tegra 2 and Tegra 3 had their fair share of design wins, but did not ship in numbers anywhere near those of Qualcomm or Samsung. Perhaps Tegra 4i will be that breakthrough part for NVIDIA? Hard to say, but when we consider how aggressive this company is, how deep their developer relations run, and how feature packed these products seem to be, I think NVIDIA will continue to gain traction and marketshare in the SOC market.
Subject: Processors | January 25, 2013 - 06:11 PM | Jeremy Hellstrom
Tagged: haswell, Intel, overclocking, speculation, BCLK
HardCOREware is engaging in a bit of informed speculation on how overclocking the upcoming Haswell chips will be accomplished. Now that Intel has relaxed the draconian lockdown of frequencies and multipliers that it enforced for a few generations of chips, overclockers are once again getting excited. The article covers the departure of the Front Side Bus and the four frequencies overclockers have been working with in recent generations, then shares research on why the inclusion of a GPU on the CPU might just make overclockers very happy.
"This is an overclocking preview of Intel’s upcoming Haswell platform. We have noticed that they have made an architectural change that may be a great benefit to overclockers. Check out our thoughts on the potential return of BCLK overclocking!"
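To ground the discussion: on these platforms the effective core frequency is the product of the base clock (BCLK) and a multiplier, so raising either knob raises the final clock. A minimal illustration (the specific figures are hypothetical examples, not Haswell specifications):

```python
def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    """Effective core frequency = base clock x multiplier."""
    return bclk_mhz * multiplier

# A stock-style configuration: 100 MHz BCLK with a 35x multiplier.
stock = core_clock_mhz(100.0, 35)      # 3500 MHz

# Multiplier overclocking raises only the CPU clock...
mult_oc = core_clock_mhz(100.0, 40)    # 4000 MHz

# ...while BCLK overclocking lifts every clock derived from the
# base clock, which is why it was historically harder to push far.
bclk_oc = core_clock_mhz(105.0, 35)    # 3675 MHz

print(stock, mult_oc, bclk_oc)
```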
Here are some more Processor articles from around the web:
- Intel Core i7-3960x vs. i7-3970x@Bjorn3D
- Intel Core i3-3220 v. Intel Core i3-3225 Review @ MissingRemote
- Desktop CPU Comparison Guide @ TechARP
- Testing Memory Speeds on AMD's A10-5800K Trinity APU @ Legit Reviews
- AMD A10 5700K APU @ Guru of 3D
Subject: Graphics Cards, Processors | January 23, 2013 - 02:42 PM | Ryan Shrout
Tagged: southern islands, sony, ps4, playstation 4, orbis, Kaveri, bulldozer, APU, amd
Earlier today a report from Kotaku.com posted some details about the upcoming PlayStation console, code-named Orbis and sometimes just called the PS4. Kotaku author Luke Plunkett got the information from a 90 page PDF that details the development kit, so the information is likely pretty accurate if incomplete. It discusses a new controller and a completely new accounts system, but I was mostly interested in the hardware details given.
We'll begin with the specs. And before we go any further, know that these are current specs for a PS4 development kit, not the final retail console itself. So while the general gist of the things you see here may be similar to what makes it into the actual commercial hardware, there's every chance some—if not all of it—changes, if only slightly.
This is key to keep in mind because here are the specs listed on the report:
- 8GB of system memory
- 2.2GB of graphics memory
- 4 module (8 core) AMD Bulldozer CPU
- AMD "R10xx" based GPU
- 4x USB 3.0 ports and 2x Ethernet connections
- Blu-ray drive
- 160GB HDD
- HDMI and optical audio output
We are essentially talking about an AMD FX-series processor with a Southern Islands based discrete card and I am nearly 100% sure that this will not match the configuration of the shipping system. Think about it - would a console developer really want to have a processor that can draw more than 100 watts inside its box in addition to a discrete GPU? I doubt it.
Instead, let's go with the idea that this developer kit is simply meant to emulate some final specifications. More than likely we are looking at an APU solution that combines Bulldozer or Steamroller cores along with GCN-based GPU SIMD arrays. The most likely candidate is Kaveri, a 28nm based product that meets both of those requirements. Josh recently discussed the future with Kaveri in a post during CES, worth checking out. AMD has told us several times that Kaveri should be able to hit the 1.0 TFLOPs level of performance, which, compared against current discrete GPUs, would put its graphics performance near that of an under-clocked Radeon HD 7770.
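That 1.0 TFLOPs figure lines up with simple peak-throughput math: a GCN part's single-precision peak is shader count x 2 FLOPs per cycle (fused multiply-add) x clock. A quick sketch using the Radeon HD 7770's 640 shaders; the ~780 MHz figure is my own assumption for what "under-clocked" would mean here:

```python
def peak_sp_tflops(shaders: int, clock_ghz: float) -> float:
    """Single-precision peak: shaders x 2 FLOPs/cycle (FMA) x clock (GHz)."""
    return shaders * 2 * clock_ghz / 1000.0

# Stock Radeon HD 7770: 640 shaders at 1.0 GHz.
stock = peak_sp_tflops(640, 1.0)         # 1.28 TFLOPs

# Under-clocked to roughly 780 MHz (assumed), the same part lands
# right around the 1.0 TFLOPs AMD has quoted for Kaveri.
underclocked = peak_sp_tflops(640, 0.78)  # ~1.0 TFLOPs

print(stock, underclocked)
```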
There is some room for doubt though - Kaveri isn't supposed to be out until "late Q4", though it's possible that the PS4 will be its first customer. It is also possible that AMD is making a specific discrete GPU for implementation on the PS4 based on the GCN architecture that would be faster than the graphics performance expected on the Kaveri APU.
When speaking with our own Josh Walrath on this rumor, he tended to think that Sony and AMD would not use an APU but would rather combine a separate CPU and GPU on a single substrate, allowing for better yields than a monolithic APU part. In order to make up for the slower memory controller interface (on substrate is not as fast as on-die), AMD might again utilize backside cache, just like the one used on the Xbox 360 today. With process technology improvements it's not unthinkable to see that jump to 30 or 40MB of cache.
With the debate of a 2013 or 2014 release still up in the air, there is plenty of time for this to change still but we will likely know for sure after our next trip to Taipei.
Subject: General Tech, Processors, Mobile, Shows and Expos | January 7, 2013 - 05:05 PM | Scott Michaud
Tagged: CES, ces 2013, haswell, Intel
Oh certification, how I loathe thee.
At the Intel CES 2013 keynote, Intel announced a few new requirements for OEMs to manufacture Haswell-based ultrabooks. Intel clearly wants to push OEMs to utilize several of their more cherished features and as such they will not allow products to be released without these features.
A fourth-generation ultrabook must contain the following features:
- Touch interaction support
- Intel WiDi support
- Installed antivirus and anti-malware software (Intel-owned McAfee will have an announcement soon)
These three certification requirements lead to two major points of contention for me: non-Windows 8 operating systems, and Intel potentially strong-arming McAfee onto your machine. When Intel requires touch support for Haswell-based ultrabooks, they effectively declare that Windows 7 and Linux will not be around.
That requirement could seem minor compared to whatever McAfee will soon announce. Windows 8 already ships with Windows Defender built in, so requiring additional antivirus and anti-malware software suggests Intel might strong-arm vendors into using McAfee. It would not be a stretch to speculate that McAfee will have some deep attachment to the Haswell architecture. Unfortunately we will need to wait until Intel makes their announcement.
Intel also claims that ultrabooks will have touch-based products at the $599 price point very soon.
PC Perspective's CES 2013 coverage is sponsored by AMD.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Processors, Mobile, Shows and Expos | January 6, 2013 - 05:13 PM | Scott Michaud
Tagged: CES, ces 2013, vizio, amd
Why not open up CES-proper discussion with a tablet announcement?
AMD has begun their push into the tablet space with Vizio being one of their first OEM partners to announce products at CES. Due to AMD being one of the select few to still maintain a proper x86 license, they are about your only option outside of Intel for a true Windows 8 tablet. Vizio took them up on that position.
The Vizio Tablet PC, seemingly a play on their original Android-based Vizio tablet with an added declaration that “I am a PC”, will run standard Windows 8 certified as Microsoft Signature. No bloatware will be included, which should help users reclaim the performance that 60-day antivirus trials and auto-launched demo notifications normally absorb.
On the technical side, the Tablet PC is loaded with 2 GB of RAM, an 11.6” full 1080p display, and a 1.0 GHz AMD Z60 processor. 64 GB of solid state storage is included, although Windows 8 has been known to stake claims to a large portion of that. Readers of our site probably already have a primary computing device, but this might be worth watching as a secondary one. You do not have a whole lot of other options for Flash support or access to non-default browsers.
PC Perspective's CES 2013 coverage is sponsored by AMD.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: Processors | January 6, 2013 - 05:09 PM | Josh Walrath
Tagged: valleyview, low power, Intel, Bay Trail, atom
When the original Intel Atom hit the scene, it was a reasonably large success for Intel with the massive explosion of netbooks. The original design was very simplistic, but was fairly power efficient. The weak link of the original Atom was the 945 chipset graphics that were not only underpowered, but were based on a relatively power hungry desktop chipset. The eventual competition from AMD featured a next generation low power core based on the Bobcat architecture which featured a modern graphics core that was more than adequate for most scenarios.
Intel never stood still, but their advancement of the low power cores was slow as compared to the massive leaps and bounds we saw from the original Core architecture in 2006 on the desktop and server markets. Typically these products lagged the desktop products in terms of process nodes, but they continued to advance these cores little by little.
Leap forward a few years and we saw the eventual demise of the netbook and the massive uptake of mobile computing, primarily in the form of tablets and smartphones. Intel was late to the party compared to Qualcomm, Samsung, and NVIDIA. A fire was lit under the Atom group at Intel, as the competition had far surpassed the company in ultra-mobile parts.
Happily for those of us paying attention, the 3D Center Forum has released some very interesting slides about the 22 nm generation of Atom products and the platforms they will be integrated into. Valleyview is the SoC while Bay Trail is the platform.
Valleyview is based on Intel’s 22 nm process and will be a next generation Atom processor with a multitude of new features. It will be a SoC, as it will no longer require a traditional southbridge. It will have improved graphics as compared to the most recent Atom processors. While the SoC will feature USB 3.0, it will not embrace SATA-6G or PCI-E 3.0. The CPU will go up to quad core units that will be 50% to 100% faster than current parts. These new chips will also introduce boost functionality (think desktop Turbo Boost) that will push frequencies to 2.7 GHz or greater.
Power is of course the primary concern, and these products will be offered from 3 watts and below (Bay Trail T) up to 12 watts (Bay Trail D). These products will not be competing with the Haswell parts, which are rumored to get down to around 10 watts at the very lowest.
While Intel has been slow to react to the mobile push, they are starting to get that ball rolling. It will be very interesting to see if they can move fast enough to outrun and outwit the ARM based competition, not to mention AMD’s latest 28 nm products that will be released in the first half of 2013.