You've met Haswell, but have you overclocked it?

Subject: Processors | June 3, 2013 - 06:53 PM |
Tagged: haswell, z87, overclocking

If you haven't read your fill about Haswell's architecture, cast your eyes on Ryan's full review for an in-depth look at the new design of Intel's Core processors.  If you have already done your homework and are now more interested in how well this new processor can overclock, then heading to [H]ard|OCP will satisfy your curiosity.  When testing for the best overclock, [H] utilized two different Z87 boards from ASUS to see what the processor could do, not just what the motherboard was capable of; in the end the results were similar.  They also included a quick guide for those wanting to apply an overclock without spending a lot of time in the BIOS.  Check it out here.


"Intel's clock keeps ticking and today lands on a "tock" in the development cycle. The new desktop Haswell processor represents a new microarchitecture built on the tried and true 22nm process technology that we have come to know and love with Intel's current Ivy Bridge microarchitecture. But what does Haswell mean for the computer enthusiast?"


Source: [H]ard|OCP

Samsung Galaxy Tab 3 10.1: Intel inside an Android?

Subject: General Tech, Graphics Cards, Processors, Mobile | June 3, 2013 - 03:00 AM |
Tagged: Intel, atom, Clover Trail+, SoC, Samsung, Galaxy Tab 3 10.1

While Reuters is being a bit cagey with their source, if true, Intel may have nabbed just about the highest-profile Android tablet design win possible. The still-unannounced Samsung Galaxy Tab 3 10.1 is expected to embed Intel's Clover Trail+ System on a Chip (SoC). Samsung may not be the largest contract available in the tablet market, but their previous tablets have each shipped millions of units; they are a good OEM partner to have.

Source: BGR India

Samsung is also known for releasing multiple versions of the same device for various regions and partners. The Galaxy Tab 10.1 and Galaxy Tab 2 10.1 did not have a variety of models with differing CPUs like, for instance, the Galaxy S4 phone did; the original "10.1" contained an NVIDIA Tegra 2 and the later "2 10.1" embedded a TI OMAP 4430 SoC. It is entirely possible that Intel won every variant of the Galaxy Tab 3 10.1, but it is also entirely possible that they did not.

Boy Genius Report India (BGR India) also claims more specific hardware based on a pair of listings at GLBenchmark. The product is registered under the name Santos10: GT-P5200 is the 3G version, and GT-P5210 is the Wi-Fi version.

These specifications are:

  • Intel Atom Z2560 800-933 MHz dual-core SoC (4 threads, 1600 MHz Turbo)
  • PowerVR SGX 544MP GPU (OpenGL ES 2.0)
  • 1280x800 display
  • Android 4.2.2

I am not entirely sure what Intel has to offer with Clover Trail+ besides, I would guess, reliable fabrication. Raw graphics performance is still about half of Apple's A6X GPU although, if the leaked resolution is true, it has substantially fewer pixels to push when not attached to an external display.
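To put that pixel gap in perspective, here is a back-of-the-envelope comparison. Note the assumption: we are measuring the A6X against its native home, the fourth-generation iPad's 2048x1536 Retina display, which the leak does not itself confirm.

```python
# Rough pixel-count comparison (assumes the A6X's reference display is
# the fourth-generation iPad's 2048x1536 Retina panel).
tab3_pixels = 1280 * 800    # leaked Galaxy Tab 3 10.1 resolution
ipad4_pixels = 2048 * 1536  # iPad 4 (A6X) resolution

print(tab3_pixels)                            # 1024000
print(ipad4_pixels)                           # 3145728
print(round(ipad4_pixels / tab3_pixels, 2))   # 3.07
```

If those numbers hold, the A6X is pushing roughly three times as many pixels, so "half the raw GPU performance" looks less painful in practice.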

Maybe Intel made it too cheap to refuse?

Source: Reuters

New Silvermont Atom Chips Will Use Pentium and Celeron Branding

Subject: Processors | June 2, 2013 - 11:32 PM |
Tagged: silvermont, pentium, Intel, haswell, celeron, atom, 22nm

In addition to the impending launch of Intel's desktop Haswell processors, the company is also working on new Atom-series chips based on its Silvermont architecture. Ryan Shrout wrote about the upcoming Atom architecture a few weeks ago, and you can read up on it here. In short, Atoms using the Silvermont architecture are 22nm SoCs with an out-of-order, dual-module quad-core design that comes with burstable clockspeeds and up to 2.5x the performance of chips using the previous-generation Saltwell architecture. Intel is promising up to a 50% IPC (instructions per clock) increase and up to 4.7x lower power versus previous-generation Atom CPUs.

A block diagram of Intel's upcoming Silvermont architecture.

With that said, over the weekend I read an interesting article over at PC World that hinted at these new Silvermont-based Atom processors taking up the Pentium and Celeron branding mantle. In speaking with Intel employee Kathy Gill, the site learned that Intel will use the Silvermont architecture in the code-named Bay Trail-M and Bay Trail-D processors for notebooks and desktops, respectively. The Bay Trail code name isn't new, but Intel's use of the Pentium and Celeron branding for these Atom chips is. For the past few generations, Intel has re-purposed lower-tier or lower-binned Core processors as Pentiums or Celerons by disabling features and/or clocking them lower. It seems that Intel finally believes its Atom lineup is good enough to serve those low-end desktop and notebook roles under the budget brand families.


Kathy Gill further stated that "we aren't ready to disclose additional details on Haswell plans at this time," which does not rule out Haswell-based Celeron and Pentium chips. It does not confirm them, either.

After a chat with PC Perspective's Josh Walrath on the issue, I'm not certain which direction Intel will take, but I do believe that Intel will at least favor the Atom chips for the Pentium and Celeron lines, because the company will see much better profit margins with the Silvermont-based chips than with Haswell-based ones. On the other hand, Intel would lose the ability to re-brand low-binned Core i3s as Pentium or Celeron CPUs. Going with both architectures would allow a mix of better profit margins and re-purposed chips that otherwise wouldn't make the cut, but it would also complicate matters and invite a good amount of brand confusion among consumers. (Admittedly, Intel probably has to artificially limit some number of chips to keep up with the volume of Pentiums and Celerons needed; it's difficult to say to what extent, though.)

Hopefully we will know more about Intel's Bay Trail CPUs and branding plans at Computex later this week.

What do you think of this move by Intel, and will the Silvermont-based Bay Trail chips be up to the task?

Source: PC World

AIDA64 Version 3.00 Released

Subject: Processors | June 2, 2013 - 10:43 PM |
Tagged: Kabini, haswell, FinalWire, aida64


Today, FinalWire Ltd. announced the release of version 3.00 of their diagnostic and benchmarking tool, AIDA64. This new version updates their Extreme Edition and Business Edition of the software.


Source: FinalWire

Xbox One announced, the games: not so much.

Subject: Editorial, General Tech, Graphics Cards, Processors, Systems | May 21, 2013 - 05:26 PM |
Tagged: xbox one, xbox


Almost exactly three months have passed since Sony announced the Playstation 4 and just three weeks remain until E3. Ahead of the event, Microsoft unveiled their new Xbox console: The Xbox One. Being so close to E3, they are saving the majority of games until that time. For now, it is the box itself as well as its non-gaming functionality.

First and foremost, the raw specifications:

  • AMD APU (5 billion transistors, 8 core, on-die eSRAM)
  • 8GB RAM
  • 500GB storage, Blu-ray reader
  • USB 3.0, 802.11n, HDMI out, HDMI in

The hardware is a definite win for AMD. The Xbox One is based upon an APU which is quite comparable to what the PS4 will offer. Unlike previous generations, there will not be too much differentiation based on available performance; I would not expect to see much of a fork in terms of splitscreen and other performance-sensitive features.


A new version of the Kinect sensor will also be present with all units, which developers can depend upon. Technically speaking, the camera is higher resolution and more wide-angle; up to six skeletons can be tracked, with joints able to rotate rather than just hinge. Microsoft is finally permitting developers to use the Kinect along with a standard controller to, as they imagine, allow a user to raise their controller to block with a shield. That is the hope, but near the launch of the original Kinect, Microsoft filed a patent for sign language recognition, which has yet to materialize. Who knows whether the device will be successfully integrated into gaming applications.

Of course, Microsoft is known most for system software, and the Xbox One runs three lightweight operating environments. In Windows 8, you have the Modern interface, which runs WinRT applications, and you have the desktop, which is x86 compatible.

The Xbox One borrows more than a little from this model.

The home screen for the console, which I am tempted to call the Start Screen, has a very familiar tiled interface. The tiles are not identical to Windows but they are definitely consistent. This interface allows for access to Internet Explorer and an assortment of apps. These apps can be pinned to the side of the screen, much like Windows 8 Modern apps. I am expecting "a lot of crossover" (to say the least) between this and the Windows Store; I would not be surprised if it is basically the same API. This works both when viewing entertainment content and within a game.


These three operating systems run at the same time. The main operating system is essentially a Hyper-V environment which runs the other two simultaneously in sort-of virtual machines. The environments can be layered with low latency, since all the system is doing is compositing them in a different order.

Lastly, they made reference to Xbox Live, go figure. Microsoft is seriously increasing their server capacity and expects developers to utilize Azure infrastructure to offload "latency-insensitive" computation for games. While Microsoft promises that you can play games offline, this obviously does not apply to features (or whole games) which rely upon the back-end infrastructure.


And yes, I know you will all beat up on me if I do not mention the SimCity debacle. Maxis claimed that much of the game required an online connection due to complicated server requirements; after a crack enabled offline functionality, it became clear that the game mostly operates fine on a local client. How much will the Xbox Live cloud service actually offload? Who knows, but that is at least their official word.

Now to tie up some loose ends. The Xbox One will not be backwards compatible with Xbox 360 games, although that is no surprise. Microsoft also says they are allowing users to resell and lend games. That said, from what I have heard, games will be installed and will not require the disc. Apart from concerns about how much you can fit on a single 500GB drive, rumor has it that once a game is installed, loading it elsewhere (the rumor is even more unclear about whether "elsewhere" counts accounts or machines) will require paying a fee to Microsoft. In other words? Basically not a used game.

Well, that about does it. You can be sure we will add more as information comes forth. Comment away!

Source: Xbox.com

HP SlateBook x2: Tegra 4 on Android 4.2.2 in August

Subject: General Tech, Graphics Cards, Processors, Mobile | May 15, 2013 - 09:02 PM |
Tagged: tegra 4, hp, tablets

Sentences containing the words "Hewlett-Packard" and "tablet" can end in a question mark, an exclamation mark, or, on occasion, a period. The gigantic multinational technology company tried to own a whole mobile operating system with its purchase of Palm and abandoned those plans just as abruptly; the $99 liquidation of its $500 tablets was so successful that, go figure, they to some extent did it twice. The operating system was open sourced, and at some point LG swooped in and bought it, minus patents, for use in smart TVs.

So how about that Android?


The floodgates are open on Tegra 4, with HP announcing their SlateBook x2 hybrid tablet just a single day after NVIDIA's SHIELD dropped the "Project" from its name. The SlateBook x2 uses the Tegra 4 processor to power Android 4.2.2 Jelly Bean along with the full Google experience, including the Google Play store. Along with Google Play, the SlateBook and its Tegra 4 processor are also admitted to TegraZone and NVIDIA's mobile gaming ecosystem.

As for the device itself, it is a 10.1" Android tablet which docks into a keyboard for extended battery life, I/O ports, and, well, a hardware keyboard. You can attach the tablet to a TV via HDMI, alongside the typical USB 2.0 port, combo audio jack, and a full-sized SD card slot; which half of the hybrid houses any given port is anyone's guess, however. Wirelessly, you have Wi-Fi a/b/g/n and some unspecified version of Bluetooth.


The raw specifications list follows:

  • NVIDIA Tegra 4 SoC
    • ARM Cortex A15 quad core @ 1.8 GHz
    • 72 "Core" GeForce GPU @ ~672MHz, 96 GFLOPS
  • 2GB DDR3L RAM ("Starts at", maybe more upon customization?)
  • 64GB eMMC SSD
  • 1920x1200 10.1" touch-enabled IPS display
  • HDMI output
  • 1080p rear camera, 720p front camera with integrated microphone
  • 802.11a/b/g/n + Bluetooth (4.0??)
  • Combo audio jack, USB 2.0, SD Card reader
  • Android 4.2.2 w/ Full Google and TegraZone experiences.
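The 96 GFLOPS figure in the list above roughly checks out against the core count and clock, under one assumption not stated in HP's spec sheet: each shader core retires one multiply-add (two FLOPs) per clock.

```python
# Sanity check on the Tegra 4 GPU throughput claim.
cores = 72                    # "72 Core" GeForce GPU
clock_hz = 672e6              # ~672 MHz from the spec list
flops_per_core_per_clock = 2  # assumption: one multiply-add per core per clock

gflops = cores * clock_hz * flops_per_core_per_clock / 1e9
print(gflops)  # 96.768
```

That lands within a hair of the quoted 96 GFLOPS, which suggests the marketing number is simply peak MAD throughput.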

If this excites you, then you only have to wait until some point in August; you will also, of course, need to wait until you save up about $479.99 plus tax and shipping.

Source: HP

Haswell Laptop specs! NEC LaVie L to launch in Japan

Subject: General Tech, Graphics Cards, Processors, Systems, Mobile | May 14, 2013 - 03:54 PM |
Tagged: haswell, nec

While we are not sure when it will be released or whether it will be available in North America, we have found a Haswell laptop. Actually, NEC will release two products in this lineup: a high-end 1080p unit and a lower-end 1366x768 model. Unfortunately, the article is in Japanese.


IPS displays have really wide viewing angles, even top and bottom.

NEC is known for their higher-end monitors; most people equate Dell's UltraSharp panels with professional photo and video production, but Dell's top-end offerings are often a tier below the best from companies like NEC and Eizo. The laptops we are discussing today both contain touch-enabled IPS panels with apparently double the contrast ratio of what NEC considers standard. While these may or may not be NEC's tip-top offerings, they should at least be decent screens.

Obviously the headliner for us is the introduction of Haswell. While we do not know exactly which part NEC decided to embed, we do know that they are relying upon it for their graphics performance. With the aforementioned higher-end displays, it seems likely that NEC is intending this device for the professional market. A price tag of 190,000 yen (just under $1900 USD) for the lower-end model and 200,000 yen (just under $2000 USD) for the higher-end one further suggests this is their target demographic.


Clearly a Japanese model.

The professional market does not exactly have huge graphics-performance requirements, but to explicitly see NEC trust Intel for GPU performance is an interesting twist. Intel HD 4000 has been nibbling, to say the least, at discrete GPU marketshare in laptops. I would expect this laptop to contain one of the BGA-based parts, which are soldered onto the motherboard, for the added graphics performance.

As a final note, the higher-end model will also contain a draft 802.11ac antenna. It is expected that network performance could be up to 867 megabits as a result.
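That 867-megabit figure is the standard 802.11ac marketing number, and it falls out of the PHY rate tables if we assume (the article does not say) a 2x2 antenna configuration on an 80 MHz channel:

```python
# Where "up to 867 megabits" comes from, assuming a 2x2 radio.
per_stream_mbps = 433.3  # 802.11ac, 80 MHz channel, 256-QAM 5/6, short guard interval
streams = 2              # assumption: two spatial streams

print(per_stream_mbps * streams)  # 866.6
```

866.6 Mbps rounds up to the "867 Mbps" on the spec sheet; real-world throughput will of course be well below the PHY rate.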

Of course I could not get away without publishing the raw specifications:

LL850/MS (Price: 200,000 yen):

  • Fourth-generation Intel Core processor with onboard video
  • 8GB DDR3 RAM
  • 1TB HDD w/ 32GB SSD caching
  • BDXL (100-128GB BluRay disc) drive
  • IEEE 802.11ac WiFi adapter, Bluetooth 4.0
  • SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
  • 1080p IPS display with touch support
  • Office Home and Business 2013 preinstalled?

LL750/MS (Price: 190,000 yen):

  • Fourth-generation Intel Core processor with onboard video
  • 8GB DDR3 RAM
  • 1TB HDD (no SSD cache)
  • (Optical disc support not mentioned)
  • IEEE 802.11a/b/g/n WiFi adapter, Bluetooth 4.0
  • SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
  • 1366x768 (IPS?) touch-enabled display

Corsair has, well, Haswell PSU support chart

Subject: Editorial, General Tech, Cases and Cooling, Processors | May 10, 2013 - 04:23 PM |
Tagged: c6, c7, haswell, PSU, corsair

I canna do it, Captain! I don't have enough power!

We have been discussing the ultra-low-power states of Haswell processors for a little over a week, and how they could be detrimental to certain power supplies. Power supply manufacturers never quite expected that a running system could draw as little as 0.05 amps (0.6W) on the 12V rail without being off. Since then, companies such as Enermax have started to list power supplies which have been tested and are compliant with the new power requirements.

PSU Series | Model | Haswell Compatibility | Comment
AXi | AX1200i | Yes | 100% Compatible with Haswell CPUs
AXi | AX860i | Yes | 100% Compatible with Haswell CPUs
AXi | AX760i | Yes | 100% Compatible with Haswell CPUs
AX | AX1200 | Yes | 100% Compatible with Haswell CPUs
AX | AX860 | Yes | 100% Compatible with Haswell CPUs
AX | AX850 | Yes | 100% Compatible with Haswell CPUs
AX | AX760 | Yes | 100% Compatible with Haswell CPUs
AX | AX750 | Yes | 100% Compatible with Haswell CPUs
AX | AX650 | Yes | 100% Compatible with Haswell CPUs
HX | HX1050 | Yes | 100% Compatible with Haswell CPUs
HX | HX850 | Yes | 100% Compatible with Haswell CPUs
HX | HX750 | Yes | 100% Compatible with Haswell CPUs
HX | HX650 | Yes | 100% Compatible with Haswell CPUs
TX-M | TX850M | Yes | 100% Compatible with Haswell CPUs
TX-M | TX750M | Yes | 100% Compatible with Haswell CPUs
TX-M | TX650M | Yes | 100% Compatible with Haswell CPUs
TX | TX850 | Yes | 100% Compatible with Haswell CPUs
TX | TX750 | Yes | 100% Compatible with Haswell CPUs
TX | TX650 | Yes | 100% Compatible with Haswell CPUs
GS | GS800 | Yes | 100% Compatible with Haswell CPUs
GS | GS700 | Yes | 100% Compatible with Haswell CPUs
GS | GS600 | Yes | 100% Compatible with Haswell CPUs
CX-M | CX750M | Yes | 100% Compatible with Haswell CPUs
CX-M | CX600M | TBD | Likely compatible; currently validating
CX-M | CX500M | TBD | Likely compatible; currently validating
CX-M | CX430M | TBD | Likely compatible; currently validating
CX | CX750 | Yes | 100% Compatible with Haswell CPUs
CX | CX600 | TBD | Likely compatible; currently validating
CX | CX500 | TBD | Likely compatible; currently validating
CX | CX430 | TBD | Likely compatible; currently validating
VS | VS650 | TBD | Likely compatible; currently validating
VS | VS550 | TBD | Likely compatible; currently validating
VS | VS450 | TBD | Likely compatible; currently validating
VS | VS350 | TBD | Likely compatible; currently validating

Above is Corsair's slightly incomplete chart as of the time it was copied from their website (3:30 PM on May 10th, 2013); so far everything is coming up good. Their blog should be updated as new products are validated for the new C6 and C7 CPU sleep states.

The best part of this story is just how odd it is, given the race to arc-welding-class supplies (it's not a podcast, so you can't yell Bingo!) we have been experiencing over the last several years. Simply put, some companies never thought that component manufacturers such as Intel would race to the bottom of power draw.

Source: Corsair

AMD to erupt Volcanic Islands GPUs as early as Q4 2013?

Subject: Editorial, General Tech, Graphics Cards, Processors | May 8, 2013 - 09:32 PM |
Tagged: Volcanic Islands, radeon, ps4, amd

So the Southern Islands might not be entirely stable throughout 2013 as we originally reported; seismic activity being analyzed suggests the eruption of a new GPU micro-architecture as early as Q4. These Volcanic Islands, as they have been codenamed, should explode onto the scene opposing NVIDIA's GeForce GTX 700-series products.

It is times like these where GPGPU-based seismic computation becomes useful.

The rumor is based upon a source which leaked a fragment of a slide outlining the processor in block-diagram form, along with specifications of its alleged flagship chip, "Hawaii". Of primary note, Volcanic Islands is rumored to be organized with both Serial Processing Modules (SPMs) and a Parallel Compute Module (PCM).


So apparently a discrete GPU can have serial processing units embedded on it now.

Heterogeneous Systems Architecture (HSA) is a set of initiatives to bridge the gap between massively parallel workloads and branching logic tasks. We usually make reference to this in terms of APUs, bringing parallel-optimized hardware to the CPU; in this case, we are discussing it in terms of bringing serial processing to the discrete GPU. According to the diagram, the chip would contain 8 processor modules, each with two processing cores and an FPU, for a total of 16 cores. There is no definite indication of whether these cores would be based upon AMD's license to produce x86 processors or its other license to produce ARM processors. Unlike an APU, this design is heavily skewed towards parallel computation rather than a relatively even balance between CPU, GPU, and chipset features.

Now of course, why would they do that? Graphics processors can do branching logic but it tends to sharply cut performance. With an architecture such as this, a programmer might be able to more efficiently switch between parallel and branching logic tasks without doing an expensive switch across the motherboard and PCIe bus between devices. Josh Walrath suggested a server containing these as essentially add-in card computers. For gamers, this might help out with workloads such as AI which is awkwardly split between branching logic and massively parallel visibility and path-finding tasks. Josh seems skeptical about this until HSA becomes further adopted, however.

Still, there is a reason why they are implementing this now. I wonder, if the SPMs are based upon simple x86 cores, how the PS4 will influence PC gaming. Technically, a Volcanic Islands GPU would be an oversized PS4 on an add-in card. This could give AMD an edge, particularly in games ported to the PC from the PlayStation.

This chip, Hawaii, is rumored to have the following specifications:

  • 4096 stream processors
  • 16 serial processor cores on 8 modules
  • 4 geometry engines
  • 256 TMUs
  • 64 ROPs
  • 512-bit GDDR5 memory interface (GDDR5, as in the PS4, albeit on a wider bus)
  • 20 nm Gate-Last silicon fab process
    • Unclear if TSMC or "Common Platform" (IBM/Samsung/GLOBALFOUNDRIES)

Softpedia is also reporting on this leak, adding the claim that the GPU will be designed on a 20nm gate-last fabrication process. While gate-last is generally considered not worth the extra production effort, Fully Depleted Silicon On Insulator (FD-SOI) is apparently "amazing" on gate-last at 28nm and smaller geometries. This could mean that AMD is eyeing that technology and making this design with the intent of switching to an FD-SOI process, without the large redesign that an initially easier gate-first production would require.

Well that is a lot to process... so I will leave you with an open question for our viewers: what do you think AMD has planned with this architecture, and what do you like and/or dislike about what your speculation would mean?

Source: TechPowerUp

Intel plans a new Atom every year, starting with Silvermont

Subject: General Tech, Processors | May 6, 2013 - 02:34 PM |
Tagged: silvermont, merrifield, Intel, Bay Trail, atom

The news today is all about shrinking the Atom, both in process size and power consumption.  Indeed, The Tech Report heard talk of milliwatts and SoCs, which shows Intel's change of strategy with Atom, from small-footprint HTPCs to point-of-sale and other ultra-low-power applications.  Hyper-Threading has been dropped and out-of-order processing has been brought in, which makes far more sense for the new niche Atom is destined for. 

Make sure to check out Ryan's report here as well.


"Since their debut five years ago, Intel's Atom microprocessors have relied on the same basic CPU core. Next-gen Atoms will be based on the all-new Silvermont core, and we've taken a closer look at its underlying architecture."


Overclocker Pushes An Intel Haswell Core i7-4770K CPU Beyond 7GHz

Subject: Processors | May 3, 2013 - 06:45 AM |
Tagged: z87, overclocking, Intel, haswell, core i7 4770k, 7ghz

OCaholic has spotted an interesting entry in the CPU-Z database. According to the site, an overclocker by the handle of “rtiueuiurei” has allegedly managed to push an engineering sample of Intel’s upcoming Haswell Core i7-4770K processor past 7GHz.


If the CPU-Z entry is accurate, the overclocker used a BCLK speed of 91.01 and a multiplier of 77 to achieve a CPU clockspeed of 7012.65MHz. The chip was overclocked on a Z87 motherboard along with a single 2GB G.Skill DDR3 RAM module. Even more surprising than the 7GHz clockspeed is the voltage that the overclocker used to get there: an astounding 2.56V according to CPU-Z.

From the information Intel provided at IDF Beijing, the new 22nm Haswell processors feature an integrated voltage regulator (IVR), and the voltage of the chip's CPU portion is controlled by the Vccin value. Intel recommends a range of 1.8V to 2.3V for this value, with a maximum of 3V and a default of 1.8V. Therefore, the CPU-Z-reported number may actually be correct. On the other hand, it may also just be a software bug due to the unreleased nature of the Haswell chip.

Voltage questions aside, the frequency alone makes for an impressive overclock, and it seems that the upcoming chips will have decent overclocking potential!

Source: OCaholic

Possible power supply issues for Intel Haswell CPUs

Subject: Cases and Cooling, Processors | May 1, 2013 - 03:07 PM |
Tagged: power supply, Intel, idle, haswell, c7, c6

I came across an interesting news story posted by The Tech Report this morning that dives into the possibility of problems between Intel's upcoming Haswell processors and currently available power supplies.  Apparently, the new C6 and C7 idle power states that give the Haswell architecture its low-power benefits mean a power supply must remain stable with a load of just 0.05 amps on the 12V2 rail.  (That's just 50 milliamps!)  Without that capability, the system can exhibit unstable behavior, and a quick look at the power supply selector on Intel's own website lists only a couple dozen units that support the feature. 


A table from VR-Zone, the original source of the information, shows the difference between the requirements for 3rd-generation (Ivy Bridge) and 4th-generation (Haswell) processors.  The shift is an order of magnitude and is quite a dramatic change for PSU vendors.  Users of Corsair power supplies will be glad to know that the units listed with support on the Intel website linked above were mostly Corsair models!

A potential side effect of this problem might be that motherboard vendors simply disable those sleep states by default.  I don't imagine that will be a problem for PC builders, since most desktop users aren't really worried about the extremely small differences in power consumption these states offer.  For mobile users and upcoming Haswell notebook designs, though, the increase in battery life is crucial, and Intel has surely been monitoring those power supplies closely. 

I asked our in-house power supply guru, Lee Garbutt, who is responsible for all of the awesome power supply reviews on pcper.com, what he thought about this issue.  He thinks the reason more power supplies don't support it already is for power efficiency concerns:

Most all PSUs have traditionally required "some load" on the various outputs to attain good voltage regulation and/or not shut down. Not very many PSUs are designed yet to operate with no load, especially on the critical +12V output. One of the reasons for this is efficiency. It's harder to design a PSU to operate correctly with a very low load AND to deliver high efficiency. It would be easy just to add some bleed resistance across the DC outputs to always have a minimal load to keep voltage regulation under control but then that lowers efficiency.
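Lee's bleed-resistor point is easy to put numbers on. This is just a sketch from the figures in the story (0.05 A on a 12 V rail); the resistor value is our own illustration, not anything Corsair or Lee specified:

```python
# The Haswell C6/C7 minimum load on the 12V2 rail, per the story above.
rail_v = 12.0
min_load_a = 0.05

# Power that minimum load represents:
print(round(rail_v * min_load_a, 2))  # 0.6  (watts)

# A bleed resistor sized so the rail always sees that load
# (Ohm's law: R = V / I) would burn those 0.6 W continuously,
# which is exactly the efficiency penalty Lee describes.
print(round(rail_v / min_load_a))     # 240  (ohms)
```

Six tenths of a watt sounds trivial, but it is drawn 24/7 and counts against the supply at every load point, which is why designers would rather redesign the regulation loop than bolt on a resistor.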

Source: Tech Report

AMD Releases FX CPU Refreshes

Subject: Processors | April 30, 2013 - 02:04 PM |
Tagged: amd, FX, vishera, bulldozer, FX-6350, FX-4350, FX-6300, FX-4300, 32 nm, SOI, Beloved

 

Today AMD released two new processors for the AM3+ market.  The FX-6350 and FX-4350 are refreshes of the hex- and quad-core processor lineups.  The FX-8350 is still the fastest of the breed, and there is no update for that particular model yet.  This is not necessarily a bad thing, but some of us are still awaiting the arrival of the rumored "Centurion".

These parts are 125 watt TDP units, up from their 95 watt predecessors.  The FX-6350 runs at 3.9 GHz with a 4.2 GHz boost clock, up 300 MHz stock and 100 MHz boost from the previous 95 watt FX-6300.  The FX-4350 runs at 3.9 GHz with a 4.3 GHz boost clock, 100 MHz stock and 300 MHz boost above the FX-4300.  Of greater interest is that the L3 cache goes from 4 MB on the 4300 to 8 MB on the 4350; this little fact looks to be the reason the FX-4350 is now a 125 watt TDP part.


It has been some two years since AMD started shipping 32 nm PD-SOI/HKMG products, and it certainly seems as though spinning off GLOBALFOUNDRIES has essentially stopped the push to implement new features into a process node over its lifetime.  As many may remember, AMD was somewhat famous for injecting new process technology into current nodes to improve performance, yields, and power characteristics in "baby steps" fashion, instead of leaving a node as-is and making a huge jump with the next one.  Vishera has been out for some seven months now, and we have not really seen any major improvement in performance or power characteristics.  I am sure that yields and bins have improved, but the bottom line is that this is only a minor refresh, and AMD raised TDPs to 125 watts for these particular parts.

The FX-6350 is again a three-module part containing six cores.  Each module features 2 MB of L2 cache, for a total of 6 MB of L2, and the entire chip features 8 MB of L3 cache.  The FX-4350 is a two-module chip with four cores.  Its modules feature the same 2 MB of L2 cache each, for a total of 4 MB, along with the above-mentioned 8 MB of L3 cache, double what the FX-4300 featured.

Perhaps soon we will see updates on FM2 with the Richland series of desktop processors, but for now this refresh is all AMD has.  These are nice upgrades to the line.  The FX-6350 costs the same as the FX-6300, the thinking being that the 6300 is more "energy efficient"; we have seen in the past that AMD (and Intel, for that matter) puts a premium on lower-wattage parts in a lineup.  The FX-4350 is $10 more expensive than the 4300.  It looks as though the FX-6350 is in stock at multiple outlets, but the 4350 has yet to show up.

These will fit in any modern AM3+ motherboard with the latest BIOS installed.  While not an incredibly exciting release from AMD, it at least shows that they continue to address their primary markets.  AMD is in a very interesting place, and it looks like Rory Read is busy getting the house in order.  Now we just have to see if they can curb their cost structure enough to make the company more financially stable.  Indications are good so far, but AMD has a long way to go.  But hey, at least according to AMD the FX series is beloved!

Source: AMD

Intel Talks Haswell Overclocking at IDF Beijing

Subject: Processors | April 17, 2013 - 09:48 PM |
Tagged: overclocking, intel ivr, intel hd graphics, Intel, haswell, cpu

During the Intel Developer Forum in Beijing, China, the x86 chip giant revealed details about how overclocking will work on its upcoming Haswell processors. Enthusiasts will be pleased to know that the new chips do not appear to be any more restrictive than the existing Ivy Bridge processors as far as overclocking goes. Intel has even opened up the overclocking capabilities slightly by allowing additional BCLK tiers without putting subsystems such as the PCI-E bus out of spec.

The new Haswell chips have an integrated voltage regulator, which supplies programmable voltages to the CPU, memory, and GPU portions of the chip. As far as overclocking the CPU itself, Intel has opened up Turbo Boost and is allowing enthusiasts to set an overclocked Turbo Boost clockspeed. Additionally, Intel is specifying available BCLK values of 100, 125, and 167 MHz that do not put other subsystems out of spec (different ratios counterbalance the increased BCLK, which is important for keeping the PCI-E bus near 100 MHz). The chips will also feature unlocked core ratios all the way up to 80, in 100 MHz increments. That would allow enthusiasts with a cherry-picked chip and outrageous cooling to clock the chip up to 8 GHz without overclocking the BCLK value (though no chip is likely to reach that clockspeed, especially for everyday usage!).

Remember that the CPU clockspeed is determined by the BCLK value times a pre-set multiplier. Unlocked processors will allow enthusiasts to adjust the multiplier up or down as they please, while non-K edition chips will likely only permit lower multipliers, with higher-than-default multipliers locked out. Further, Intel will allow the adventurous to overclock the BCLK value above the pre-defined 100, 125, and 167 MHz options, but the chip maker expects most chips to max out somewhere between five and seven percent above those values. PC Perspective’s Morry Teitelman speculates that slightly higher BCLK overclocks may be possible with a good chip and adequate cooling, however.
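The relationship is simple multiplication, and the strap ratios explain why a 125 or 167 MHz BCLK can leave PCI-E in spec. A minimal sketch of that arithmetic (the helper names are my own; the straps and the 80x ceiling come from the figures above):

```python
# Core frequency = BCLK x multiplier; each BCLK strap carries a divider
# that brings the PCI-E clock back down to roughly 100 MHz.
BCLK_STRAPS_MHZ = (100, 125, 167)
MAX_CORE_RATIO = 80  # unlocked ratio ceiling, in 100 MHz steps at stock BCLK

def core_clock_mhz(bclk_mhz, ratio):
    if ratio > MAX_CORE_RATIO:
        raise ValueError("ratio exceeds the 80x ceiling")
    return bclk_mhz * ratio

def pcie_clock_mhz(bclk_mhz, strap_mhz):
    # the strap's counterbalancing ratio keeps PCI-E near 100 MHz
    return bclk_mhz * 100.0 / strap_mhz

print(core_clock_mhz(125, 36))   # 4500 -> a 4.5 GHz overclock
print(core_clock_mhz(100, 80))   # 8000 -> the theoretical 8 GHz maximum
print(pcie_clock_mhz(125, 125))  # 100.0 -> PCI-E stays in spec
```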

Intel Logo.jpg

Similar to current-generation Ivy Bridge (and Sandy Bridge before it) processors, Intel will pack Haswell processors with its own on-die HD Graphics. The new HD Graphics will be unlocked, with a graphics ratio that can scale up to a maximum of 60 in 50 MHz steps for a potential maximum of 3 GHz. The processor graphics will also benefit from Intel’s IVR (programmable voltage) circuitry. The HD Graphics and CPU are fed voltage from the integrated voltage regulator (IVR), which is controlled by adjusting the Vccin value. The default is 1.8 V, but it supports a recommended range of 1.8 V to 2.3 V with a maximum of 3 V.
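The graphics ceiling follows the same ratio-times-step arithmetic as the CPU cores; a sketch under the figures above (the function name and clamping behavior are my own, for illustration):

```python
GPU_STEP_MHZ = 50   # graphics ratio moves in 50 MHz increments
MAX_GPU_RATIO = 60  # ratio ceiling for the unlocked HD Graphics

def gpu_clock_mhz(ratio):
    # clamp to the maximum ratio rather than reject, for illustration
    return min(ratio, MAX_GPU_RATIO) * GPU_STEP_MHZ

print(gpu_clock_mhz(60))  # 3000 -> the 3 GHz ceiling quoted above
```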

Finally, Intel is opening up the memory controller to further overclocking. Enthusiasts will be able to overclock the memory in either 200 MHz or 266 MHz increments, for a maximum of 2,000 MHz or 2,666 MHz respectively. The default voltage will depend on the particular RAM DIMMs you use, but it can be controlled via the Vddq IVR setting.
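Those increments are worth a note: the "266 MHz" step is presumably the familiar fractional DDR step of 266 2/3 MHz, which would explain how ten steps land on 2,666 MHz rather than 2,660. A sketch of that arithmetic (my own labels, not Intel's):

```python
from fractions import Fraction

STEP_200 = Fraction(200)     # 200 MHz memory increment
STEP_266 = Fraction(800, 3)  # "266 MHz" treated as 266 2/3 MHz (assumption)

def memory_mhz(steps, step):
    # effective memory speed after a given number of increments
    return steps * step

print(float(memory_mhz(10, STEP_200)))  # 2000.0
print(int(memory_mhz(10, STEP_266)))    # 2666
```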

It remains to be seen how Intel will lock down the various processor SKUs, especially the non-K edition chips, but at least now we have an idea of how a fully-unlocked Haswell processor will overclock. On a positive note, it is similar to what we have become used to with Ivy Bridge, so similar overclocking strategies for getting the most out of processors should still apply with a bit of tweaking. I’m interested to see how the integration of the voltage regulation hardware will affect overclocking though. Hopefully it will live up to the promises of increased efficiency!

Are you gearing up for a Haswell overhaul of your system, and do you plan to overclock?

Source: AnandTech

Intel Announces New Atom SoCs for Low Power Server, Networking, and Storage Hardware

Subject: Processors | April 11, 2013 - 08:45 PM |
Tagged: s12x9, Intel, idf, atom

In addition to Intel's announcement of new Xeon processors, the company is launching three new Atom-series processors for servers later this year. The new processor lineups include the Intel Atom S12x9 family for storage applications, Rangeley processors for networking gear, and Avoton SoCs for low-power micro-servers.

Intel Atom Inside Logo.jpg

The Intel Atom S12x9 family takes the existing S1200 processors and makes a few tweaks to optimize the SoCs for storage servers and other storage appliances. For reference, the Intel Atom S1200 series of processors feature sub-9W TDPs, 1MB of cache, and two physical CPU cores clocked at up to 2GHz. However, Intel did not list the individual S12x9 SKUs or specifications, so it is unknown if they will also be clocked at up to 2GHz. The new Atom S12x9 processors will feature 40 PCI-E 2.0 lanes (26 Root Port and 16 Non-Transparent Bridge) to provide ample bandwidth between I/O and processor. The SoCs also feature hardware RAID acceleration, Native Dual-Casting, and Asynchronous DRAM Self-Refresh. Native Dual-Casting allows data to be read from one source and written to two memory locations simultaneously while Asynchronous DRAM Self-Refresh protects data during a power failure.

The new chips are available now to customers and will be available in OEM systems later this year. Vendors that plan to release systems with the S12x9 processors include Accusys, MacroSAN, Qnap, and Qsan.

Intel is also introducing a new series of processors, codenamed Rangeley, intended to power future networking gear. The 22 nm Atom SoC is slated to be available sometime in the second half of this year (2H'13). Intel is positioning the Rangeley processors for entry-level to mid-range routers, switches, and security appliances.

While S12x9 and Rangeley are targeted at specific tasks, the company is also releasing a general purpose Atom processor codenamed Avoton. The Avoton SoCs are aimed at low power micro-servers, and are Intel's answer to ARM chips in the server room. Avoton is Intel's second generation 64-bit Atom processor series. It uses the company's Silvermont architecture on a 22nm process. The major update with Avoton is the inclusion of an Ethernet controller built into the processor itself. According to Intel, building networking into the processor instead of placing it on a separate add-on board results in "significant improvements in performance per watt." These chips are currently being sampled to partners, and should be available in Avoton-powered servers later this year (2H'13).

This year is certainly shaping up to be an interesting year for Atom processors. I'm excited to see how the battle unfolds between the ARM and Atom-based solutions in the data center.

Source: Intel (PDF)

Lenovo Allegedly Expanding Chip Design Team, Will Design Its Own Mobile Processors

Subject: Processors | April 3, 2013 - 08:35 AM |
Tagged: mobile, Lenovo, electrical engineering, chip design, arm

According to a recent article in the EE Times, Beijing-based PC OEM Lenovo may be entering the mobile chip design business. An anonymous source allegedly familiar with the matter has indicated that Lenovo will be expanding its integrated circuit design team to 100 engineers by the second half of this year. Further, Lenovo will reportedly task the newly-expanded team with designing an ARM processor of its own to join the ranks of Apple, Intel, NVIDIA, Qualcomm, Huawei, Samsung, and others.

It is unclear whether Lenovo simply intends to license an existing ARM core and graphics module or if the design team expansion is merely the beginning of a growing division that will design a custom chip for its smartphones and Chromebooks to truly differentiate itself and take advantage of vertical integration.

Junko Yoshida of EE Times notes that Lenovo was turned away by Samsung when it attempted to use the company's latest Exynos Octa processor. I think that might contribute to the desire to have its own chip design team, but it may also be that the company believes it can compete in a serious way and set its lineup of smartphones apart from the crowd (as Apple has managed to do) as it pursues further Chinese market share and slowly moves its phones into the United States market.

Details are scarce, but it is at least an intriguing potential future for the company. It will be interesting to see if Lenovo is able to make it work in this extremely competitive and expensive area.

Do you think Lenovo has what it takes to design its own mobile chip? Is it a good idea?

Source: EE Times

CEO Jen-Hsun Huang Sells Windows RT... A Little Bit.

Subject: Editorial, General Tech, Processors, Shows and Expos | March 20, 2013 - 06:26 PM |
Tagged: windows rt, nvidia, GTC 2013

NVIDIA develops processors, but without an x86 license they are only able to power ARM-based operating systems. When it comes to Windows, that means Windows Phone or Windows RT. The latter segment of the market has seen disappointing sales according to multiple OEMs, sales that Microsoft blames the OEMs for, but the jolly green GPU company is not crying doomsday.

surface-cover.jpg

NVIDIA just skimming the Surface RT, they hope.

As reported by The Verge, NVIDIA CEO Jen-Hsun Huang was optimistic that Microsoft would eventually let Windows RT blossom. He noted how Microsoft very often "gets it right" at some point when they push an initiative. And it is true, Microsoft has a history of turning around perceived disasters across a variety of devices.

They also have a history of, as they call it, "knifing the baby."

I think there is a very real fear for some that Microsoft could consider Intel's latest offerings good enough to stop pursuing ARM. Of course, the more they pursue ARM, the more their business model will rely upon the-interface-formerly-known-as-Metro and likely all of its certification politics. As such, I think it is safe to say that I am watching the industry teeter on a fence with a bear on one side and a pack of rabid dogs on the other. On the one hand, Microsoft jumping back to Intel would allow them to perpetuate the desktop and all of the openness it provides. On the other hand, even if they stick with Intel they likely will just kill the desktop anyway, in the name of reducing user confusion and for the security benefits of certification. We might just have fewer processor manufacturers when they do that.

So it could be that NVIDIA is confident that Microsoft will push Windows RT, or it could be that NVIDIA is pushing Microsoft to continue to develop Windows RT. Frankly, I do not know which would be better... or more accurately, worse.

Source: The Verge

Welcome Richland, another refined die from AMD

Subject: Processors | March 12, 2013 - 02:52 PM |
Tagged: VLIW4, trinity, Richland, piledriver, notebook, mobile, hd 8000, APU, amd, A10-5750

The differences between Richland and Trinity are not earth-shattering, but there are certainly some refinements implemented by AMD in the A10-5750.  One very noticeable change is support for DDR3-1866, along with better power management for both the CPU and GPU; with new temperature-balancing algorithms and measurements, the ability to balance load properly has improved over Trinity.  Many AMD users will be more interested in the GPU portion of the die than the CPU, as that is where AMD actually has a lead on Intel, and this particular chip contains the HD 8650G, with a 720 MHz boost clock and a 533 MHz base clock, increases of 35 MHz and 37 MHz respectively over the previous generation.  You can read more about the other three models that will be released over at The Tech Report.

Don't forget Josh either!

TR_dice.jpg

"AMD has formally introduced the first members of its Richland APU family. We have the goods on the chips and Richland's new power management tech, which combines temperature-based inputs with bottleneck-aware clock boosting."

Here are some more Processor articles from around the web:

Processors

NVIDIA Releases Tegra 4i: I Shall Name It... Mini-Me!

Subject: Processors | February 20, 2013 - 09:35 PM |
Tagged: Tegra 4i, tegra 4, tegra 3, Tegra 2, tegra, phoenix, nvidia, icera, i500


The NVIDIA Tegra 4 and Shield project were announced at this year’s CES, but there were other products in the pipeline that were just not quite ready to see the light of day at that time.  While Tegra 4 is an impressive looking part for mobile applications, it is not entirely appropriate for the majority of smart phones out there.  Sure, the nebulous “Superphone” category will utilize Tegra 4, but that is not a large part of the smartphone market.  The two basic issues with Tegra 4 are that it pulls a bit more power at its rated clockspeeds than some manufacturers like, and that it does not contain a built-in modem for communication needs.

Tegra 4i_die_shot.png

The die shot of the Tegra 4i.  A lot going on in this little guy.

NVIDIA bought up UK modem designer Icera to help create true all-in-one SOCs.  Icera has a unique method of building its modems that the company says is not only more flexible than what others are offering, but also much more powerful.  These modems skip many of the fixed-function units that most modems are built from, relying instead on high-speed general-purpose compute units and an interesting software stack to create smaller modems with greater flexibility across wireless standards.  At CES NVIDIA showed off the first product of this acquisition, the i500.  This is a standalone chip and is set to be offered alongside the Tegra 4 SOC.

Yesterday NVIDIA introduced the Tegra 4i, formerly codenamed “Grey”.  This is a Tegra SOC combined with the Icera i500 modem.  It is not exactly what we were expecting, but the results are actually quite exciting.  Before I get too out of hand about the possibilities of the chip, I must make one thing perfectly clear: the chip itself will not be available until Q4 2013.  It will be released in limited products at first, with greater availability in Q1 2014.  While NVIDIA is announcing this chip now, end users will not get to use it until much later this year.  I believe this is not so much that NVIDIA cannot produce the chips, but rather that the design cycles of new and complex cell phones do not allow for rapid product development.

NV_T4i_Feat.png

Tegra 4i really should not be confused with the slightly earlier Tegra 4.  The 4i actually uses the 4th revision of the Cortex A9 processor rather than the Cortex A15 in the Tegra 4.  The A9 has been a mainstay of modern cell phone processors for some years now and offers a great deal of performance considering its die size and power consumption.  The 4th revision improves IPC of the A9 in a variety of ways (memory management, prefetch, buffers, etc.), so it will perform better than previous Cortex A9 solutions.  Performance will not approach that of the much larger and more complex A15 cores, but it is a nice little boost from what we have previously seen.

The Tegra 4 features a 72-core GPU (though NVIDIA has still declined to detail the specifics of its new mobile graphics technology; these ain’t Kepler, though), while the 4i features a nearly identical unit with 60 cores.  There is no word so far on what speed these will run at or how performance really compares to the latest graphics products from ARM, Imagination, or Qualcomm.

The chip is made on TSMC’s 28 nm HPM process and features core speeds up to 2.3 GHz.  We again have no information on whether that applies to all four cores at once or only to single-core turbo.  The design adopts the previous 4+1 core setup, with four high-speed cores and one power-saving core.  Considering how small each core is (Cortex A9 or A15), it is not a waste of silicon compared to the potential power savings.  The HPM process is the high-performance variant, rather than the low-power variant used for Tegra 4.  My guess here is that the A9 cores will not pull all that much power anyway, thanks to their simpler design compared to the A15.  Hitting 2.3 GHz is also a factor in the process decision.  Also consider that the +1 core is fabricated slightly differently than the other four, trading transistor switching speed for much lower leakage.

NV_T4_Comp.png

The die size looks to be in the 60 to 65 mm² range.  This is not a whole lot larger than the original Tegra 2, which was around 50 mm².  Consider that the Tegra 4i has three more cores, a larger and more capable GPU portion, and the integrated Icera i500 modem.  The modem is a full Cat 3 LTE unit (100 Mbps), so bandwidth should not be an issue for this phone.  The chip has all of the features of the larger Tegra 4, such as the Computational Photography Architecture, Image Signal Processor, video engine, and the “optimized memory interface”.  All of those neat things that NVIDIA showed off at CES will be included.  The only other major feature that is not present is the ability to output 3200x2000 resolutions; this particular chip is limited to 1920x1200.  Not a horrific tradeoff, considering this will be a smartphone SOC with a max of 1080p resolution for the near future.

We expect to see Tegra 4 out in late Q2 in some devices, but not many.  While Tegra 4 is certainly impressive, I would argue that Tegra 4i is the more marketable product with a larger chance of success.  If it were available today, I would expect its market impact to be similar to what we saw with the original 28 nm Krait SOCs from Qualcomm last year.  There is simply a lot of good technology in this core.  It is small, it has a built-in modem, and performance per mm² looks to be pretty tremendous.  Power consumption should be appropriate for handhelds, and perhaps might turn out to be better than most current solutions built on 28 nm and 32 nm processes.

NV_Phoenix.png

NVIDIA also developed the Phoenix Reference Phone which features the Tegra 4i.  This is a rather robust looking unit with a 5” screen and 1080P resolution.  It has front and rear facing cameras, USB and HDMI ports, and is only 8 mm thin.  Just as with the original Tegra 3 it features the DirectTouch functionality which uses the +1 core to handle all touch inputs.  This makes it more accurate and sensitive as compared to other solutions on the market.

Overall I am impressed with this product.  It is a very nice balance of performance, features, and power consumption.  As mentioned before, it will not be out until Q4 2013.  This will obviously give the competition some time to hone their own products and perhaps release something that will not only compete well with Tegra 4i in its price range, but exceed it in most ways.  I am not entirely certain of this, but it is a potential danger.  The potential is low, though, as the design cycles for complex and feature-packed cell phones are longer than six to seven months.  While NVIDIA has had some success in the SOC market, they have not had a true home run yet.  Tegra 2 and Tegra 3 had their fair share of design wins, but did not ship in numbers anywhere near those of Qualcomm or Samsung.  Perhaps Tegra 4i will be that breakthrough part for NVIDIA?  Hard to say, but when we consider how aggressive this company is, how deep its developer relations run, and how feature-packed these products seem to be, I think NVIDIA will continue to gain traction and market share in the SOC business.

Source: NVIDIA

I can Haswell overclock?

Subject: Processors | January 25, 2013 - 06:11 PM |
Tagged: haswell, Intel, overclocking, speculation, BCLK

hardCOREware is engaging in a bit of informed speculation about how overclocking the upcoming Haswell chips will be accomplished.  Now that Intel has relaxed the draconian lockdown of frequencies and multipliers that it enforced for a few generations of chips, overclockers are once again getting excited about new silicon.  The article covers the departure of the front-side bus and the four frequencies overclockers have been working with in recent generations, then shares research on why the inclusion of a GPU on the CPU might just make overclockers very happy.

BIOS_02T.jpg

"This is an overclocking preview of Intel’s upcoming Haswell platform. We have noticed that they have made an architectural change that may be a great benefit to overclockers. Check out our thoughts on the potential return of BCLK overclocking!"

Here are some more Processor articles from around the web:

Processors

Source: hardCOREware