Author:
Subject: Processors
Manufacturer: AMD
Tagged: Zen, amd

Gunning for Broadwell-E

As I walked away from the St. Regis in downtown San Francisco tonight, I found myself wandering through the streets toward my hotel with something unique in tow: a smile. I was smiling, thinking about what AMD had just demonstrated and shown at its latest Zen processor reveal. The importance of this product launch cannot be overstated for a company struggling to regain a foothold in a market where it once held a definitive lead. It's been many years since I left a conference call, or a meeting, or a press conference feeling genuinely hopeful and enthusiastic about what AMD had shown me. Tonight I had that.

AMD's CEO Lisa Su and CTO Mark Papermaster took the stage down the street from the Intel Developer Forum to roll out a handful of new architectural details about the Zen architecture while also showing the first performance results comparing it to competing parts from Intel. The crowd in attendance, a mix of media and analysts, was impressed. The feeling was palpable in the room.

zenicon.jpg

It's late as I write this, and while there are some interesting architecture details to discuss, I think it is in everyone's best interest that we touch on them lightly for now and save the deep dive for when the Hot Chips information comes out early next week. What you really want to know is clear: can Zen make Intel work for it again? Can Zen make that $1700 price tag on the Broadwell-E 6950X seem even more ludicrous? Yes.

The Zen Architecture

Much of what was discussed about the Zen architecture is a re-release of information that has come out in recent months. This is a completely new, from-the-ground-up microarchitecture, not a revamp of the aging Bulldozer design. It integrates SMT (simultaneous multi-threading), a first for an AMD CPU, to make more efficient use of a longer pipeline. Intel has had HyperThreading for a long time now and AMD is finally joining the fold. A high-bandwidth, low-latency caching system is used to "feed the beast," as Papermaster put it, and 14nm process technology (starting at GlobalFoundries) gives efficiency and scaling a significant bump while enabling AMD to span notebooks, desktops, and servers with the same architecture.

zenpm-10.jpg

By far the most impressive claim from AMD thus far was that of a 40% increase in IPC over previous AMD designs. That’s a HUGE claim and is key to the success or failure of Zen. AMD proved to me today that the claims are real and that we will see the immediate impact of that architecture bump from day one.
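
For context on what that claim means in practice, here is a minimal back-of-the-envelope sketch: per-core throughput scales, to first order, with IPC multiplied by clock speed. The 1.00 baseline IPC below is simply the previous core normalized to itself (an assumption for illustration), while the 1.40 figure reflects AMD's claimed 40% uplift.

```python
# Back-of-the-envelope only: per-core throughput scales roughly with IPC x clock.
# The 1.00 baseline IPC is the previous design normalized to itself (an assumption
# for illustration); 1.40 is AMD's claimed +40% for Zen.
def relative_throughput(ipc, clock_ghz):
    return ipc * clock_ghz

previous_core = relative_throughput(ipc=1.00, clock_ghz=3.0)
zen           = relative_throughput(ipc=1.40, clock_ghz=3.0)
print(f"Zen vs. previous core at equal clocks: {zen / previous_core:.2f}x")  # ~1.40x
```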

zenpm-4.jpg

The press was told of a handful of high-level changes to the new architecture as well. Branch prediction gets a complete overhaul, and this marks the first AMD processor to have a micro-op cache. Wider execution width and broader instruction schedulers are integrated, all of which adds up to much higher instruction-level parallelism to improve single-threaded performance.

zenpm-6.jpg

Performance improvements aside, throughput and efficiency go up with Zen as well. AMD has integrated an 8MB L3 cache and improved prefetching for up to 5x the cache bandwidth available per core on the CPU. SMT keeps the pipeline full to prevent "bubbles" that introduce latency and lower efficiency, while region-specific power gating means that we'll see Zen in notebooks as well as enterprise servers in 2017. It truly is an impressive design from AMD.

zenfull-27.jpg

Summit Ridge, the enthusiast platform that will be the first product available with Zen, is based on the AM4 socket, and processors will go up to 8 cores and 16 threads. DDR4 memory support is included, along with PCI Express 3.0 and what AMD calls "next-gen" IO – I would expect a quick leap forward for AMD to catch up on things like NVMe and Thunderbolt.

The Real Deal – Zen Performance

As part of today's reveal, AMD is showing the first true comparison between Zen and Intel processors. Sure, AMD showed a Zen-powered system, paired with a Fury X, running the upcoming Deus Ex at 4K, but the really impressive results were shown when comparing Zen to a Broadwell-E platform.

zenfull-29.jpg

Using Blender to measure the performance of a rendering workload (rendering a Zen CPU mockup, of course), AMD ran an 8-core / 16-thread Zen processor at 3.0 GHz against an 8-core / 16-thread Broadwell-E processor at 3.0 GHz (likely a fixed-clock Core i7-6900K). The point of the demonstration was to showcase the IPC improvements of Zen, and it worked: the render completed on the Zen platform a second or two faster than it did on the Intel Broadwell-E system.
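
If you want to try a similar (far less controlled) experiment at home, a render like this can be timed from the command line. The sketch below assumes Blender is on your PATH and uses a placeholder scene file, since AMD has not released its actual demo file.

```python
# Hedged sketch: time one frame of a Blender render in background mode.
# "zen_demo.blend" is a hypothetical placeholder, not AMD's demo scene.
import subprocess
import time

start = time.perf_counter()
subprocess.run(
    ["blender", "-b", "zen_demo.blend", "-f", "1"],  # -b: headless, -f 1: render frame 1
    check=True,
)
print(f"Render completed in {time.perf_counter() - start:.1f} s")
```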

DSC01490.jpg

Not much to look at, but Zen on the left, Broadwell-E on the right...

Of course there are lots of caveats: we didn't set up the systems, I don't know for sure that GPUs weren't involved, we don't know the final clocks of the Zen processors releasing in early 2017, etc. But I took two things away from the demonstration that are very important.

  1. The IPC of Zen is on-par or better than Broadwell.
  2. Zen will scale higher than 3.0 GHz in 8-core configurations.

AMD obviously didn’t state what specific SKUs were going to launch with the Zen architecture, what clock speeds they would run at, or even what TDPs they were targeting. Instead we were left with a vague but understandable remark of “comparable TDPs to Broadwell-E”.

Pricing? Overclocking? We’ll just have to wait a bit longer for that kind of information.

Closing Thoughts

There is clearly a lot more for AMD to share about Zen, but the announcement and showcase made this week with the early prototype products have solidified for me the capability and promise of this new microarchitecture. As an industry, we have asked for, and needed, a competitor to Intel in the enthusiast CPU space – something we haven't legitimately had since the Athlon X2 days. Zen is what we have been pining for, what gamers and consumers have needed.

zenpm-11.jpg

AMD's processor stars might finally be aligning for a product that combines performance, efficiency, and scalability at the right time. I'm ready for it. Are you?

GlobalFoundries Will Allegedly Skip 10nm and Jump to Developing 7nm Process Technology In House (Updated)

Subject: Processors | August 20, 2016 - 03:06 PM |
Tagged: Semiconductor, lithography, GLOBALFOUNDRIES, global foundries, euv, 7nm, 10nm

UPDATE (August 22nd, 11:11pm ET): I reached out to GlobalFoundries over the weekend for a comment and the company had this to say:

"We would like to confirm that GF is transitioning directly from 14nm to 7nm. We consider 10nm as more a half node in scaling, due to its limited performance adder over 14nm for most applications. For most customers in most of the markets, 7nm appears to be a more favorable financial equation. It offers a much larger economic benefit, as well as performance and power advantages, that in most cases balances the design cost a customer would have to spend to move to the next node.

As you stated in your article, we will be leveraging our presence at SUNY Polytechnic in Albany, the talent and know-how gained from the acquisition of IBM Microelectronics, and the world-class R&D pipeline from the IBM Research Alliance—which last year produced the industry’s first 7nm test chip with working transistors."

An unexpected bit of news popped up today via TPU that alleges GlobalFoundries is not only developing 7nm technology (expected), but that the company will skip production of the 10nm node altogether in favor of jumping straight from the 14nm FinFET technology (which it licensed from Samsung) to 7nm manufacturing based on its own in house design process.

Reportedly, the move to 7nm would offer 60% smaller chips at three times the design cost of 14nm, which is to say that this would be both an expensive and an impressive endeavor. Aided by Extreme Ultraviolet (EUV) lithography, GlobalFoundries expects to be able to hit 7nm production sometime in 2020, with prototyping and limited use of EUV in the year or so leading up to it. The in-house process technology is likely thanks to the research being done at the APPC (Advanced Patterning and Productivity Center) in Albany, New York, along with the expertise of engineers and the design patents and technology (e.g. ASML NXE 3300 and 3300B EUV tools) acquired from IBM when it took over IBM Microelectronics. The APPC is reportedly working simultaneously on research and development of manufacturing methods (especially EUV, where extremely small wavelengths of ultraviolet light, 14nm and smaller, are used to etch patterns into silicon) and on supporting production of chips at GlobalFoundries' "Malta" fab in New York.

APPC in Albany NY.jpg

Advanced Patterning and Productivity Center in Albany, NY where Global Foundries, SUNY Poly, IBM Engineers, and other partners are forging a path to 7nm and beyond semiconductor manufacturing. Photo by Lori Van Buren for Times Union.

Intel's Custom Foundry Group will start pumping out ARM chips in early 2017 followed by Intel's own 10nm Cannon Lake processors in 2018, and Samsung will be offering up its own 10nm node as soon as next year. Meanwhile, TSMC has reportedly already taped out 10nm wafers, will begin production in late 2016/early 2017, and claims that it will hit 5nm by 2020. With its rivals all expecting production of 10nm chips as soon as Q1 2017, GlobalFoundries will be at a distinct disadvantage for a few years and will have only its 14nm FinFET (from Samsung) and possibly its own 14nm tech to offer until it gets 7nm production up and running (hopefully!).

Previously, GlobalFoundries has stated that:

“GLOBALFOUNDRIES is committed to an aggressive research roadmap that continually pushes the limits of semiconductor technology. With the recent acquisition of IBM Microelectronics, GLOBALFOUNDRIES has gained direct access to IBM’s continued investment in world-class semiconductor research and has significantly enhanced its ability to develop leading-edge technologies,” said Dr. Gary Patton, CTO and Senior Vice President of R&D at GLOBALFOUNDRIES. “Together with SUNY Poly, the new center will improve our capabilities and position us to advance our process geometries at 7nm and beyond.” 

If this news turns out to be correct, it is an interesting move and certainly a gamble. However, I think that it is a gamble that GlobalFoundries needs to take to be competitive. I am curious how this will affect AMD, though. While I had expected AMD to stick with 14nm for a while, especially for Zen/CPUs, will this mean that AMD will have to go to TSMC for its future GPUs, or will contract obligations (if any; I believe there is a minimum amount AMD needs to order from GlobalFoundries) mean that GPUs will remain at 14nm until GlobalFoundries can offer its own 7nm? I would guess that Vega will still be 14nm, but Navi in 2018/2019? I guess we will just have to wait and see!


Source: TechPowerUp
Author:
Subject: Processors
Manufacturer: AMD

Clean Sheet and New Focus

It is no secret that AMD has been struggling for some time.  The company has had success through the years, but the last decade has been somewhat bleak in terms of competitive advantages.  The company has certainly made an impact throughout the decades with their 486 products, K6, the original Athlon, and the industry-changing Athlon 64.  Since that time we have had a couple of bright spots, with the Phenom II being far more competitive than expected and the introduction of very solid graphics performance in their APUs.

Sadly for AMD, their investment in the "Bulldozer" architecture was misplaced for where the industry was heading.  While we certainly see far more software support for multi-threaded CPUs, IPC is still extremely important for most workloads.  The original Bulldozer was somewhat rushed to market and was not fully optimized, and while the "Piledriver" based Vishera products fixed many of these issues, we have not seen the non-APU products updated to the latest Steamroller and Excavator architectures.  The non-APU desktop market has been served for the past four years with 32nm PD-SOI based parts that utilize a rebranded chipset base that has not changed since 2010.

hc_03.png

Four years ago AMD decided to change course entirely with their desktop and server CPUs.  Instead of evolving the “Bulldozer” style architecture featuring CMT (Core Multi-Threading) they were going to do a clean sheet design that focused on efficiency, IPC, and scalability.  While Bulldozer certainly could scale the thread count fairly effectively, the overall performance targets and clockspeeds needed to compete with Intel were just not feasible considering the challenges of process technology.  AMD brought back Jim Keller to lead this effort, an industry veteran with a huge amount of experience across multiple architectures.  Zen was born.

 

Hot Chips 28

This year’s Hot Chips is the first deep dive that we have received about the features of the Zen architecture.  Mike Clark is taking us through all of the changes and advances that we can expect with the upcoming Zen products.

Zen is a clean sheet design that borrows very little from previous architectures.  This is not to say that concepts that worked well in previous architectures were not revisited and optimized, but the overall floorplan has changed dramatically from what we have seen in the past.  AMD did not stand still with their Bulldozer products, and the latest Excavator core does improve upon the power consumption and performance of the original.  This evolution was simply not enough considering market pressures and Intel’s steady improvement of their core architecture year upon year.  Zen was designed to significantly improve IPC and AMD claims that this product has a whopping 40% increase in IPC (instructions per clock) from the latest Excavator core.

hc_04.png

AMD also has focused on scaling the Zen architecture from low power envelopes up to server level TDPs.  The company looks to have pushed down the top end power envelope of Zen from the 125+ watts of Bulldozer/Vishera into the more acceptable 95 to 100 watt range.  This also has allowed them to scale Zen down to the 15 to 25 watt TDP levels without sacrificing performance or overall efficiency.  Most architectures have sweet spots where they tend to perform best.  Vishera, for example, could scale nicely from 95 to 220 watts, but the design did not translate well into sub-65 watt envelopes.  Excavator based "Carrizo" products, on the other hand, could scale from 15 watts to 65 watts without real problems, but became terribly inefficient above 65 watts with increased clockspeeds.  Zen looks to address these differences by being able to scale from sub-25 watt TDPs up to 95 or 100 watts.  In theory this should allow AMD to simplify their product stack by offering a common architecture across multiple platforms.

Click to continue reading about AMD's Zen architecture!

Now we know what happened to Josh's stream; does your camera do YUY2 encoding?

Subject: General Tech | August 19, 2016 - 01:06 PM |
Tagged: yuy2, windows 10, skype, microsoft, idiots

In their infinite wisdom, Microsoft has disabled MJPEG and H.264 encoding on USB webcams for Skype in their Adversary Update to Windows 10, leaving YUY2 encoding as your only choice.  The supposed reasoning behind this is to ensure that there is no duplication of encoding which could lead to poor performance; ironically, the result of this change is poor performance for the majority of users, such as Josh.  Supposedly a fix will be released some time in September, but for now the only option is to roll back your AU installation, assuming you are not already past the 10-day deadline.  You can thank Brad Sams over at Thurrott.com for getting to the bottom of the issue that has been plaguing Skype users, and you can pick up some more details in his post.
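
If you are curious what your own camera negotiates, a quick OpenCV sketch like the one below requests MJPG and reports the format the capture stack actually hands back; whether the request is honored depends on the driver and, after the update, on Microsoft's frame server. The camera index 0 is an assumption.

```python
# Sketch: request MJPG from a webcam and report the FOURCC actually in use.
# Requires opencv-python; camera index 0 is an assumption.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))

fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
# Unpack the 32-bit FOURCC code into its four characters.
codec = "".join(chr((fourcc >> (8 * i)) & 0xFF) for i in range(4))
print(f"Active pixel format: {codec}")  # e.g. 'MJPG' or 'YUY2'
cap.release()
```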

4520-max_headroom_31.jpg

"Microsoft made a significant change with the release of Windows 10 and support for webcams that is causing serious problems for not only consumers but also the enterprise. The problem is that after installing the update, Windows no longer allows USB webcams to use MJPEG or H264 encoded streams and is only allowing YUY2 encoding."

Here is some more Tech News from around the web:

Tech Talk

Source: Thurrott
Manufacturer: PC Perspective

Why Two 4GB GPUs Isn't Necessarily 8GB

We're trying something new here at PC Perspective. Some topics are fairly difficult to explain cleanly without accompanying images. We also like to go fairly deep into specific topics, so we're hoping that we can provide educational cartoons that explain these issues.

This pilot episode is about load-balancing and memory management in multi-GPU configurations. There seems to be a lot of confusion around what was (and was not) possible with DirectX 11 and OpenGL, and even more confusion about what DirectX 12, Mantle, and Vulkan allow developers to do. It highlights three different load-balancing algorithms, and even briefly mentions what LucidLogix was attempting to accomplish almost ten years ago.
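
As a taste of what the episode walks through, the simplest multi-GPU scheme is alternate-frame rendering, where whole frames are dealt out to GPUs in round-robin order. The sketch below is a bare-bones illustration of that idea only, not the video's transcript or any driver's actual code.

```python
# Bare-bones illustration of alternate-frame rendering (AFR) load balancing:
# whole frames are assigned to GPUs round-robin. Real drivers must also keep
# per-GPU copies of resources in sync and handle inter-frame dependencies.
def assign_frames_afr(frame_count, gpu_count):
    return {frame: frame % gpu_count for frame in range(frame_count)}

for frame, gpu in assign_frames_afr(frame_count=8, gpu_count=2).items():
    print(f"frame {frame} -> GPU {gpu}")
```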

pcper-2016-animationlogo-multiGPU.png

If you like it, and want to see more, please share and support us on Patreon. We're putting this out not knowing if it's popular enough to be sustainable. The best way to see more of this is to share!

Open the expanded article to see the transcript, below.

Use Bing in Edge for 30 hours a month and get ...

Subject: General Tech | August 22, 2016 - 01:26 PM |
Tagged: microsoft, microsoft rewards, windows 10, bing, edge

If you remember Bing Rewards then this will seem familiar; otherwise the gist of the deal is that if you browse on Edge and use Bing to search for 30 hours every month, you get a bribe similar to what credit card companies offer.  You can choose between Skype credit, ad-free Outlook or Amazon gift cards, perhaps for aspirin to ease your Bing-related headache, if such things seem worth your while.  The Inquirer points out that this is another reminder that Microsoft tracks all usage of Edge; otherwise they would not be able to verify the amount of Bing you used.

Then again, to carry on the credit card analogy ...

Bing-logo-2013-880x660.png

"Microsoft Rewards is a rebrand of Bing Rewards, the firm's desperate attempt to get people using the irritating default search engine, and sure enough the bribes for using Edge apply only if you use Bing too."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Inquirer
Author:
Subject: Editorial
Manufacturer: NVIDIA

NVIDIA Today?

It always feels a little odd when covering NVIDIA's quarterly earnings due to how they present their financial calendar.  No, we are not reporting from the future.  Yes, it can be confusing when comparing results and getting your dates mixed up.  Calendar quirks aside, NVIDIA did exceptionally well in a quarter that is typically the second weakest after Q1.

NVIDIA reported revenue of $1.43 billion.  This is a jump from an already strong Q1 where they took in $1.30 billion.  Compare this to the $1.027 billion of its competitor AMD, which provides CPUs as well as GPUs.  NVIDIA sold a lot of GPUs as well as other products.  Their primary money makers were the consumer GPUs and the professional and compute markets, where they have a virtual stranglehold at the moment.  The company's GAAP net income is a very respectable $253 million.

results.png

The release of the latest Pascal based GPUs was the primary mover of the gains for this latest quarter.  AMD has had a hard time competing with NVIDIA for marketshare.  The older Maxwell based chips performed well against the entire line of AMD offerings and typically did so with better power and heat characteristics.  Even though the GTX 970 was somewhat limited in its memory configuration as compared to the AMD products (3.5 GB + 0.5 GB vs. a full 4 GB implementation), it was a top seller in its class.  The same could be said for the products up and down the stack.

Pascal was released at the end of May, but the company had been shipping chips to its partners as well as creating the "Founder's Edition" models to its exacting specifications.  These were strong sellers from the end of May through the end of the quarter.  NVIDIA recently unveiled their latest Pascal based Quadro cards, but we do not know how much of an impact those have had on this quarter.  NVIDIA has also been shipping, in very limited quantities, the Tesla P100 based units to select customers and outfits.

Click to read more about NVIDIA's latest quarterly results!

AMD Announces TrueAudio Next

Subject: Graphics Cards | August 18, 2016 - 07:58 PM |
Tagged: amd, TrueAudio, trueaudio next

Using a GPU for audio makes a lot of sense. That said, the original TrueAudio was not really about that, and it didn't really take off. The API was only implemented in a handful of titles, and it required dedicated hardware that they have since removed from their latest architectures. It was not about using the extra horsepower of the GPU to simulate sound, although they did have ideas for “sound shaders” in the original TrueAudio.

amd-2016-true-audio-next.jpg

TrueAudio Next, on the other hand, is an SDK that is part of AMD's LiquidVR package. It is based around OpenCL; specifically, it uses AMD's open-source FireRays library to trace the ways that audio can move from source to receiver, including reflections. For high-frequency audio, treating sound as rays is a good approximation, and that range of frequencies is more useful for positional awareness in VR, anyway.

Basically, TrueAudio Next has very little to do with the original.

Interestingly, AMD is providing an interface for TrueAudio Next to reserve compute units, but optionally (and under NDA). This allows audio processing to be unhooked from the video frame rate, provided that the CPU can keep both fed with actual game data. Since audio is typically a secondary thread, it could be ready to send sound calls at any moment. Various existing portions of asynchronous compute could help with this, but allowing developers to wholly reserve a fraction of the GPU should remove the issue entirely. That said, when I was working on a similar project in WebCL, I was looking to the integrated GPU, because it's there and it's idle, so why not? I would assume that, in actual usage, CU reservation would only be enabled if an AMD GPU is the only device installed.
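
To make the "unhooked from the video frame rate" idea concrete, here is a conceptual sketch only (plain Python threads, not AMD's SDK or OpenCL): audio events are pushed onto a queue and drained by a dedicated worker running at its own cadence, which is the role the reserved compute units would play on the GPU.

```python
# Conceptual sketch only (not the TrueAudio Next API): audio work runs on its
# own worker at its own cadence, fed by a queue, so it is decoupled from the
# video frame rate. On real hardware the processing step would be OpenCL work
# dispatched to reserved compute units.
import queue
import threading
import time

sound_calls = queue.Queue()

def audio_worker():
    while True:
        event = sound_calls.get()
        if event is None:           # shutdown sentinel
            break
        time.sleep(0.001)           # stand-in for convolution / ray-traced propagation
        print(f"processed audio event: {event}")

worker = threading.Thread(target=audio_worker)
worker.start()

# The game thread can enqueue sound calls at any moment, regardless of frame rate.
for event in ("footstep", "gunshot", "ambient_loop"):
    sound_calls.put(event)
sound_calls.put(None)
worker.join()
```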

Anywho, if you're interested, then be sure to check out AMD's other post on it, too.

Source: AMD

NVIDIA Officially Announces GeForce GTX 1060 3GB Edition

Subject: Graphics Cards | August 18, 2016 - 02:28 PM |
Tagged: nvidia, gtx 1060 3gb, gtx 1060, graphics card, gpu, geforce, 1152 CUDA Cores

NVIDIA has officially announced the 3GB version of the GTX 1060 graphics card, and it indeed contains fewer CUDA cores than the 6GB version.

GTX1060.jpg

The GTX 1060 Founders Edition

The product page on NVIDIA.com now reflects the 3GB model, and board partners have begun announcing their versions. The MSRP of this 3GB version is set at $199, and availability of partner cards is expected in the next couple of weeks. The two versions will be designated only by their memory size, and no other capacities of either card are forthcoming.

                      GeForce GTX 1060 3GB   GeForce GTX 1060 6GB
Architecture          Pascal                 Pascal
CUDA Cores            1152                   1280
Base Clock            1506 MHz               1506 MHz
Boost Clock           1708 MHz               1708 MHz
Memory Speed          8 Gbps                 8 Gbps
Memory Configuration  3GB                    6GB
Memory Interface      192-bit                192-bit
Power Connector       6-pin                  6-pin
TDP                   120W                   120W

As you can see from the above table, the only specification that has changed is the CUDA core count, with base/boost clocks, memory speed and interface, and TDP identical. As to performance, NVIDIA says the 6GB version holds a 5% performance advantage over this lower-cost version, which at $199 is 20% less expensive than the previous GTX 1060 6GB.
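
Taken together, those two figures suggest the 3GB card is the better value on paper. Here is a quick back-of-the-envelope check, normalizing performance to the 6GB card and using the $249 and $199 MSRPs:

```python
# Back-of-the-envelope price/performance from NVIDIA's own figures:
# the 6GB card is ~5% faster, the 3GB card is $50 cheaper at MSRP.
perf_6gb, price_6gb = 1.00, 249.0
perf_3gb, price_3gb = 1.00 / 1.05, 199.0   # ~0.95 relative performance

perf_per_dollar_6gb = perf_6gb / price_6gb
perf_per_dollar_3gb = perf_3gb / price_3gb
print(f"GTX 1060 3GB: ~{perf_per_dollar_3gb / perf_per_dollar_6gb:.2f}x "
      f"the performance per dollar of the 6GB card")   # ~1.19x
```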

Source: NVIDIA
Author:
Subject: Editorial
Manufacturer: ARM

A Watershed Moment in Mobile

This past May I was invited to Austin to be briefed on the latest core innovations from ARM and their partners.  We were introduced to new CPU and GPU cores, as well as the surrounding technologies that provide the basis of a modern SOC in the ARM family.  We also were treated to more information about the process technologies that ARM would embrace with their Artisan and POP programs.  ARM is certainly far more aggressive now in their designs and partnerships than they have been in the past, or at least they are more willing to openly talk about them to the press.

arm_white.png

The big process news that ARM was able to share at this time was the design of 10nm parts using an upcoming TSMC process node.  This was fairly big news as TSMC was still introducing parts on their latest 16nm FF+ line.  NVIDIA had not even released their first 16FF+ parts to the world in early May.  Apple had dual sourced their 14/16 nm parts from Samsung and TSMC respectively, but these were based on LPE and FF lines (early nodes not yet optimized to LPP/FF+).  So the news that TSMC would have a working 10nm process in 2017 was important to many people.  2016 might be a year with some good performance and efficiency jumps, but it seems that 2017 would provide another big leap forward after years of seeming stagnation of pure play foundry technology at 28nm.

Yesterday we received a new announcement from ARM that shows an amazing shift in thought and industry inertia.  ARM is partnering with Intel to introduce select products on Intel’s upcoming 10nm foundry process.  This news is both surprising and expected.  It is surprising in that it happened as quickly as it did.  It is expected as Intel is facing a very different world than it had planned for 10 years ago.  We could argue that it is much different than they planned for 5 years ago.

Intel is the undisputed leader in process technologies and foundry practices.  They are the gold standard of developing new, cutting edge process nodes and implementing them on a vast scale.  This has served them well through the years as they could provide product to their customers seemingly on demand.  It also allowed them a leg up in technology when their designs may not have fit what the industry wanted or needed (Pentium 4, etc.).  It also allowed them to potentially compete in the mobile market with designs that were not entirely suited for ultra-low power.  x86 is a modern processor technology with decades of development behind it, but that development focused mainly on performance at higher TDP ranges.

intel.png

This past year Intel signaled their intent to move out of the sub 5 watt market and cede it to ARM and their partners.  Intel’s ultra mobile offerings just did not make an impact in an area that they were expected to.  For all of Intel’s advances in process technology, the base ARM architecture is just better suited to these power envelopes.  Instead of throwing good money after bad (in the form of development time, wafer starts, rebates) Intel has stepped away from this market.

This leaves Intel with a problem.  What to do with extra production capacity?  Running a fab is a very expensive endeavor.  If these megafabs are not producing chips 24/7, then the company is losing money.  This past year Intel has seen their fair share of layoffs and slowing down production/conversion of fabs.  The money spent on developing new, cutting edge process technologies cannot stop for the company if they want to keep their dominant position in the CPU industry.  Some years back they opened up their process products to select 3rd party companies to help fill in the gaps of production.  Right now Intel has far more production line space than they need for the current market demands.  Yes, there were delays in their latest Skylake based processors, but those were solved and Intel is full steam ahead.  Unfortunately, they do not seem to be keeping their fabs utilized at the level needed or desired.  The only real option seems to be opening up some fab space to more potential customers in a market that they are no longer competing directly in.

The Intel Custom Foundry Group is working with ARM to provide access to their 10nm HPM process node.  Initial production of these latest generation designs will commence in Q1 2017 with full scale production in Q4 2017.  We do not have exact information as to what cores will be used, but we can imagine that they will be Cortex-A73 and A53 parts in big.LITTLE designs.  Mali graphics will probably be the first to be offered on this advanced node as well due to the Artisan/POP program.  Initial customers have not been disclosed and we likely will not hear about them until early 2017.

This is a big step for Intel.  It is also a logical progression for them when we look over the changing market conditions of the past few years.  They were unable to adequately compete in the handheld/mobile market with their x86 designs, but they still wanted to profit off of this ever expanding area.  The logical way to monetize this market is to make the chips for those that are successfully competing here.  This will cut into Intel’s margins, but it should increase their overall revenue base if they are successful here.  There is no reason to believe that they won’t be.

Nehalem-Wafer_HR-crop-16x9-photo.jpg.rendition.intel_.web_.864.486.jpg

The last question we have is if the 10nm HPM node will be identical to what Intel will use for their next generation “Cannonlake” products.  My best guess is that the foundry process will be slightly different and will not provide some of the “secret sauce” that Intel will keep for themselves.  It will probably be a mobile focused process node that stresses efficiency rather than transistor switching speed.  I could be very wrong here, but I don’t believe that Intel will open up their process to everyone that comes to them hat in hand (AMD).

The partnership between ARM and Intel is a very interesting one that will benefit customers around the globe if it is handled correctly from both sides.  Intel has a “not invented here” culture that has both benefited it and caused it much grief.  Perhaps some flexibility on the foundry side will reap benefits of its own when dealing with very different designs than Intel is used to.  This is a titanic move from where Intel probably thought it would be when it first started to pursue the ultra-mobile market, but it is a move that shows the giant can still positively react to industry trends.

Subject: General Tech
Manufacturer: Audeze

Introduction, Specifications, and Design

More than an ordinary pair of headphones, the SINE headphones from Audeze feature planar magnetic drivers, and the option of direct connection to an Apple Lightning port for pure digital sound from the SINE's inline 24-bit DAC and headphone amp. So how does the "world’s first on-ear planar magnetic headphone" sound? We first had a chance to hear the SINE headphones at CES, and Audeze was kind enough to loan us a pair to test them out.

DSC_0755.jpg

"SINE headphones, with our planar magnetic technology, are the next step up in sound quality for many listeners. Instead of using ordinary dynamic drivers, our planar technology gives you a sound that’s punchy, dynamic, and detailed. In fact, it sounds like a much larger headphone! It’s lightweight, and folds flat for easy travelling. Once again, we’ve called upon our strategic partner Designworks, a BMW group subsidiary for the industrial design, and we manufacture SINE headphones in the USA at our Southern California factory."

Planar headphones certainly seem to be gaining traction in recent years. It was a pair from Audeze that I first was able to demo a couple of years ago (the LCD-3, if I recall correctly), and I remember thinking about how precise they sounded. Granted, I was listening via a high-end headphone amp and lossless digital source at a hi-fi audio shop, so I had no frame of reference for what my own, lower-end equipment at home could do. And while the SINE headphones are certainly very advanced and convenient as an all-in-one solution to high-end audio for iOS device owners, there’s more to the story.

One of the distinct advantages provided by the SINE headphones is the consistency of the experience they can provide across compatible devices. If you hear the SINE in a store (or on the floor of a tradeshow, as I did), you’re going to hear the same sound at home or on the go, provided you are using an Apple i-device. The Lightning connector provides the digital source for your audio, and the SINE’s built-in DAC and headphone amp create the analog signal that travels to the planar magnetic drivers in the headphones. In fact, if your own source material is of higher quality you can get even better sound than you might hear in a demo - and that’s the catch with headphones like this: source material matters.

DSC_0757.jpg

One of the problems with high-end components in general is their ability to reveal the limitations of other equipment in the chain. Looking past the need for quality amplification for a moment, think about the differences you’ll immediately hear from different music sources. Listen to a highly-compressed audio stream, and it can sound rather flat and lifeless. Listen to uncompressed music from your iTunes library, and you will appreciate the more detailed sound. But move up to 24-bit studio master recordings (with their greater dynamic range and significantly higher level of detail), and you’ll be transported into the world of high-res audio - provided you have the speakers, DAC, and headphone amp you need to truly appreciate the difference.
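
That "greater dynamic range" is easy to quantify: the theoretical dynamic range of ideal N-bit PCM audio is roughly 6.02·N + 1.76 dB, so the step from 16-bit to 24-bit is large on paper, even if rooms, amps, and ears rarely exploit all of it.

```python
# Theoretical dynamic range of ideal N-bit PCM audio: ~6.02*N + 1.76 dB.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

print(f"16-bit (CD quality):     {dynamic_range_db(16):.0f} dB")  # ~98 dB
print(f"24-bit (studio masters): {dynamic_range_db(24):.0f} dB")  # ~146 dB
```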

Continue reading our review of the Audeze SINE Planar Magnetic headphones!

Author:
Manufacturer: Seasonic

Introduction and Features


2-Banner-3.jpg

The new Seasonic PRIME 750W Titanium PSU is simply the best power supply we have tested to date. Sea Sonic Electronics Co., Ltd has been designing and building PC power supplies since 1981 and they are one of the most highly respected manufacturers on the planet. Not only do they market power supplies under their own Seasonic name but they are the OEM for many other big name brands.

Seasonic’s new PRIME lineup is being introduced with the Titanium Series, which currently includes three models: 850W, 750W, and 650W (with more to follow). Additional PRIME models with both Platinum and Gold efficiency certifications are expected later this year with models ranging from 850W up to 1200W. Wow – we are already looking forward to getting our hands on a couple of these!

3-Diagonal.jpg

The power supply we have in for review is the PRIME 750W Titanium. This unit comes with all modular cables and is certified to comply with the 80 Plus Titanium efficiency criteria, the highest level available. The power supply is designed to deliver extremely tight voltage regulation on the three primary rails (+3.3V, +5V and +12V) and provides superior AC ripple and noise suppression. Add in a super-quiet 135mm cooling fan with a Fluid Dynamic Bearing and a 10-year warranty, and you have the makings of an outstanding power supply.
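
For a sense of what the Titanium rating means at the wall, the sketch below applies the 80 Plus Titanium minimums as I understand them for 115V internal units (90/92/94/90% efficiency at 10/20/50/100% load); these are certification floors, not Seasonic's measured results.

```python
# Rough sketch: worst-case AC draw allowed by the 80 Plus Titanium thresholds
# (115V internal, non-redundant: 90/92/94/90% efficiency at 10/20/50/100% load).
# Certification minimums only, not Seasonic's measured numbers.
TITANIUM_MIN_EFFICIENCY = {0.10: 0.90, 0.20: 0.92, 0.50: 0.94, 1.00: 0.90}
CAPACITY_W = 750

for load_fraction, efficiency in sorted(TITANIUM_MIN_EFFICIENCY.items()):
    dc_out = CAPACITY_W * load_fraction
    ac_in = dc_out / efficiency
    print(f"{dc_out:5.0f} W out -> at most {ac_in:5.0f} W from the wall "
          f"({ac_in - dc_out:3.0f} W lost as heat)")
```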

4a-Front.jpg

Seasonic PRIME 750W Titanium PSU Key Features:

•    650W, 750W or 850W continuous DC output
•    Ultra-high efficiency, 80 PLUS Titanium certified
•    Micro-Tolerance Load Regulation (MTLR)
•    Top-quality 135mm Fluid Dynamic Bearing fan
•    Premium Hybrid Fan Control (allows fanless operation at low power)
•    Superior AC ripple and noise suppression (under 20 mV)
•    Extended Hold-up time (above 30 ms)
•    Fully modular cabling design
•    Multi-GPU technologies supported
•    Gold-plated high-current terminals
•    Protections: OPP, OVP, UVP, SCP, OCP and OTP
•    10-Year Manufacturer’s warranty
•    MSRP for the PRIME 750W Titanium is $179.90 USD

Please continue reading our review of the Seasonic PRIME 750W Titanium PSU!

ASUS Announces the ROG GX800 4K G-SYNC Gaming Laptop with GTX 1080 SLI

Subject: Mobile | August 19, 2016 - 03:46 PM |
Tagged: UHD, ROG, Republic of Gamers, notebook, laptop, GX800, GTX 1080, gaming, g-sync, asus, 4k

ASUS has announced perhaps the most impressively-equipped gaming laptop to date. Not only does the new ROG GX800 offer dual NVIDIA GeForce GTX 1080 graphics in SLI, but these cards are powering an 18-inch 4K display panel with NVIDIA G-SYNC.

gx800_1.jpg

Not enough? The system also offers liquid cooling (via the liquid-cooling dock) which allows for overclocking of the CPU, graphics, and memory.

gx800_2.jpg

"ROG GX800 is the world’s first 18-inch real 4K UHD gaming laptop to feature the latest NVIDIA GeForce GTX 1080 in 2-Way SLI. It gives gamers desktop-like gaming performance, silky-smooth gameplay and detailed 4K UHD gaming environments. The liquid-cooled ROG GX800 features the ROG-exclusive Hydro Overclocking System that allows for extreme overclocking of the processor, graphics card and DRAM. In 3DMark 11 and Fire Strike Ultra benchmark tests, a ROG GX800 equipped with the Hydro Overclocking System scored 76% higher than other gaming laptops in the market.

ROG GX800 comes with NVIDIA G-SYNC technology and has plug-and-play compatibility with leading VR headsets to allow gamers to enjoy truly immersive VR environments. It has the MechTAG (Mechanical Tactile Advanced Gaming) keyboard with mechanical switches and customizable RGB LED backlighting for each key."

gx800_3.jpg

Specifics on availability and pricing were not included in the announcement.

Source: ASUS

IDF 2016: G.Skill Shows Off Low Latency DDR4-3333MHz Memory

Subject: Memory | August 20, 2016 - 01:25 AM |
Tagged: X99, Samsung, ripjaws, overclocking, G.Skill, ddr4, Broadwell-E

Early this week at the Intel Developer Forum in San Francisco, California G.Skill showed off new low latency DDR4 memory modules for desktop and notebooks. The company launched two Trident series DDR4 3333 MHz kits and one Ripjaws branded DDR4 3333 MHz SO-DIMM. While these speeds are not close to the fastest we have seen from them, these modules offer much tighter timings. All of the new memory modules use Samsung 8Gb chips and will be available soon.

On the desktop side of things, G.Skill demonstrated a 128GB (8x16GB) DDR4-3333 kit with timings of 14-14-14-34 running on an Asus ROG Rampage V Edition 10 motherboard with an Intel Core i7 6800K processor. They also showed a 64GB (8x8GB) kit clocked at 3333 MHz with timings of 13-13-13-33 running on a system with the same i7 6800K and an Asus X99 Deluxe II motherboard.

128GB 3333MHz CL14 demo.JPG

G.Skill demonstrating 128GB DDR4-3333 memory kit at IDF 2016.

In addition to the desktop DIMMs, G.Skill showed a 32GB Ripjaws kit (2x16GB) clocked at 3333 MHz running on an Intel Skull Canyon NUC. The SO-DIMM had timings of 16-18-18-43 and ran at 1.35V.

Nowadays lower latency is not quite as important as it once was, but there is still a slight performance advantage to be had from tighter timings, and pure clockspeed is not the only important RAM metric. Overclocking can get you lower CAS latencies (sometimes at the cost of more voltage), but if you are not into that tedious process and are buying RAM anyway, you might as well go for the modules with the lowest latencies out of the box at the clockspeeds you are looking for. I am not sure how popular RAM overclocking is these days outside of benchmark runs and extreme overclockers, though, to be honest.
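
For reference, CAS latency is counted in memory clock cycles and DDR transfers twice per clock, so absolute first-word latency works out to 2000 × CL / (data rate in MT/s). The DDR4-2133 entry below is just a common baseline kit for comparison, not something G.Skill showed.

```python
# First-word latency in nanoseconds: CL cycles at the memory clock, which is
# half the DDR data rate, i.e. ns = 2000 * CL / (data rate in MT/s).
def cas_latency_ns(cl, data_rate_mts):
    return 2000.0 * cl / data_rate_mts

kits = [
    ("DDR4-3333 CL14 (G.Skill demo)", 14, 3333),
    ("DDR4-3333 CL13 (G.Skill demo)", 13, 3333),
    ("DDR4-2133 CL15 (baseline example)", 15, 2133),  # assumption: typical JEDEC-speed kit
]
for name, cl, rate in kits:
    print(f"{name}: {cas_latency_ns(cl, rate):.1f} ns")
```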

Technical Session OC System.JPG

Overclocking Innovation session at IDF 2016.

With regards to extreme overclocking, there was reportedly an "Overclocking Innovation" event at IDF where G.Skill and Asus overclocker Elmor achieved a new CPU overclocking record of 5,731.78 MHz on the i7 6950X running on a system with G.Skill memory and Asus motherboard. The company's DDR4 record of 5,189.2 MHz was not beaten at the event, G.Skill notes in its press release (heh).

Are RAM timings important to you when looking for memory? What are your thoughts on the ever increasing clocks of new DDR4 kits with how overclocking works on the newer processors/motherboards?

Source: G.Skill

AMD's 7870 rides again, checking out the new cooler on the A10-7870K

Subject: Processors | August 22, 2016 - 05:37 PM |
Tagged: amd, a10-7870K

Leaving aside the questionable naming, let's instead focus on the improved cooler on this ~$130 APU from AMD.  Neoseeker fired up the fun-sized, 125W-rated cooler on top of the A10-7870K and were pleasantly surprised at the lack of noise even under load.  Encouraged by the performance, they overclocked the chip by 500MHz to 4.4GHz and were rewarded with a stable and still very quiet system.  The review focuses more on the improvements the new cooler offers as opposed to the APU itself, which has not changed.  Check out the review if you are considering a lower cost system that only speaks when spoken to.

14.jpg

"In order to find out just how much better the 125W thermal solution will perform, I am going to test the A10-7870K APU mounted on a Gigabyte F2A88X-UP4 motherboard provided by AMD with a set of 16 GB (2 x 8) DDR3 RAM modules set at 2133 MHz speed. I will then run thermal and fan speed tests so a comparison of the results will provide a meaningful data set to compare the near-silent 125W cooler to an older model AMD cooling solution."

Here are some more Processor articles from around the web:

Processors

Source: Neoseeker

Cooler Master Introduces MasterLiquid Maker 92 AIO Liquid CPU Cooler

Subject: Cases and Cooling | August 17, 2016 - 11:43 AM |
Tagged: cooler master, MasterLiquid Maker 92, AIO, liquid cooler, self contained, convertible

Cooler Master has introduced an unusual all-in-one liquid CPU cooler with their new MasterLiquid Maker 92, a design which places all of the components together on top of the CPU block.

main.jpg

We've seen a similar idea from Corsair with the cooler first found in the Bulldog system, and later introduced separately as the H5 SF mini-ITX liquid cooler. Cooler Master's design uses a different arrangement, with push-pull 92mm fans sandwiching a radiator that rotates 90º to permit either a vertical or horizontal setup. The latter position allows for better low-profile chassis compatibility, and also adds airflow to motherboard components.

main_2.jpg

Specifications:

  • Model: MLZ-H92M-A26PK-R1
  • CPU Socket: Intel LGA 2011-v3 / 2011 / 1151 / 1150 / 1155 / 1156
  • Power Connector: SATA and 4-Pin
  • Radiator Material: Aluminum
  • Dimensions:
    • Vertical: 99.9 x 81.6 x 167.5 mm (3.9 x 3.2 x 6.6”)
    • Horizontal: 99.9 x 142 x 118.8 mm (3.9 x 5.6 x 4.7”)
  • Fan:
    • Dimension: Φ95 x 25.4 mm (3.7 x 1”)
    • Airflow: 49.7 CFM (max)
    • Air Pressure: 6.4 mmH2O (max)
    • Noise Level: 30 dBA (max)
  • Pump:
    • Noise Level: <12 dBA (max)
    • MTTF: 175,000 hours
    • L-10 Life: 50,000 hours
    • Rated Voltage: 12VDC
  • Warranty: 5 Years

design.jpg

Cooler Master is offering pre-orders on a first-come, first-served basis beginning August 30 from this page. Pricing is not listed.

HP's Latest Omen Desktop Puts a Gaming System in a Cube

Subject: Systems | August 17, 2016 - 04:25 PM |
Tagged: PC, Omen 900, Omen, hp, gaming, desktop, cube, computer

HP has introduced a new pre-built gaming desktop, and while the Omen series has existed for a while the new Omen offers a very different chassis design.

omen_01.jpg

This Omen isn't just cube-like, it's actually a cube (Image credit: HP)

Inside, the specifications look like those of a typical pre-built gaming rig, with processors up to an Intel Core i7 6700K and graphics options including AMD's Radeon RX 480 and the NVIDIA GeForce GTX 1080. Configurations on HP's online store start at $1799 for a version with a GTX 960, a 1TB spinning hard drive, and a single 8GB DIMM. (Curiously, though reported as the "Omen X", the current listing is for an "Omen 900".)

OMEN_X_Radeon_Performance.jpg

A look inside an AMD Crossfire configuration (Image credit: HP via The Verge)

HP is certainly no stranger to unusual chassis designs, as those who remember the Blackbird 002 (which Ryan stood on - and reviewed - here) and subsequent Firebird 803 systems will know. The Verge is reporting that HP will offer the chassis as a standalone product for $599, itself an unusual move for the company.

omen_2.jpg

(Image credit: HP)

The new Omen desktop goes on sale officially starting tomorrow.

Source: The Verge
Manufacturer: NVIDIA

Is Enterprise Ascending Outside of Consumer Viability?

So a couple of weeks have gone by since the Quadro P6000 (update: was announced) and the new Titan X launched. With them, we received a new chip: GP102. Since Fermi, NVIDIA has labeled their GPU designs with a G, followed by a single letter for the architecture (F, K, M, or P for Fermi, Kepler, Maxwell, and Pascal, respectively), which is then followed by a three digit number. The last digit is the most relevant one, however, as it separates designs by their intended size.

nvidia-2016-Quadro_P6000_7440.jpg

Typically, 0 corresponds to a ~550-600mm2 design, which is about as large a design as fabrication labs can create without error-prone techniques, like multiple exposures (update for clarity: trying to precisely overlap multiple designs to form a larger integrated circuit). 4 corresponds to ~300mm2, although GM204 was pretty large at 398mm2, which was likely to increase the core count while remaining on a 28nm process. Higher numbers, like 6 or 7, fill out the lower-end SKUs until NVIDIA essentially stops caring for that generation. So when we moved to Pascal, jumping two whole process nodes, NVIDIA looked at their wristwatches and said “about time to make another 300mm2 part, I guess?”

The GTX 1080 and the GTX 1070 (GP104, 314mm2) were born.

nvidia-2016-gtc-pascal-banner.png

NVIDIA already announced a 600mm2 part, though. The GP100 had 3840 CUDA cores, HBM2 memory, and an ideal ratio of 1:2:4 between FP64:FP32:FP16 performance. (A 64-bit chunk of memory can store one 64-bit value, two 32-bit values, or four 16-bit values, unless the register is attached to logic circuits that, while smaller, don't know how to operate on the data.) This increased ratio, even over Kepler's 1:6 FP64:FP32, is great for GPU compute, but wasted die area for today's (and tomorrow's) games. I'm predicting that it takes the wind out of Intel's sales, as Xeon Phi's 1:2 FP64:FP32 performance ratio is one of its major selling points, leading to its inclusion in many supercomputers.

Despite the HBM2 memory controller supposedly being actually smaller than GDDR5(X), NVIDIA could still save die space while still providing 3840 CUDA cores (despite disabling a few on Titan X). The trade-off is that FP64 and FP16 performance had to decrease dramatically, from 1:2 and 2:1 relative to FP32, all the way down to 1:32 and 1:64. This new design comes in at 471mm2, although it's $200 more expensive than what the 600mm2 products, GK110 and GM200, launched at. Smaller dies provide more products per wafer, and, better, the number of defective chips should be relatively constant.
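
The "more products per wafer" point can be ballparked with the standard gross-dies-per-wafer approximation (300mm wafers, ignoring yield and edge exclusion); the formula is a generic industry rule of thumb, not an NVIDIA figure.

```python
# Gross dies per wafer, standard approximation (ignores yield and edge exclusion):
# dies ~= pi*(d/2)^2 / A  -  pi*d / sqrt(2*A), for wafer diameter d and die area A.
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("~471 mm^2 (GP102)", 471), ("~600 mm^2 (GP100-class)", 600)]:
    print(f"{name}: ~{gross_dies_per_wafer(area):.0f} candidate dies per 300 mm wafer")
```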

Anyway, that aside, it puts NVIDIA in an interesting position. Splitting the xx0-class chip into xx0 and xx2 designs allows NVIDIA to lower the cost of their high-end gaming parts, although it cuts out hobbyists who buy a Titan for double-precision compute. More interestingly, it leaves around 150mm2 for AMD to sneak in a design that's FP32-centric, leaving them a potential performance crown.

nvidia-2016-pascal-volta-roadmap-extremetech.png

Image Credit: ExtremeTech

On the other hand, as fabrication node changes are becoming less frequent, it's possible that NVIDIA could be leaving itself room for Volta, too. Last month, it was rumored that NVIDIA would release two architectures at 16nm, in the same way that Maxwell shared 28nm with Kepler. In this case, Volta, on top of whatever other architectural advancements NVIDIA rolls into that design, can also grow a little in size. At that time, TSMC would have better yields, making a 600mm2 design less costly in terms of waste and recovery.

If this is the case, we could see the GPGPU folks receiving a new architecture once every second gaming (and professional graphics) architecture. That is, unless you are a hobbyist. If you are? I would need to be wrong, or NVIDIA would need to somehow bring their enterprise SKU into an affordable price point. The xx0 class seems to have been pushed up and out of viability for consumers.

Or, again, I could just be wrong.

That old chestnut again? Intel compares their current gen hardware against older NVIDIA kit

Subject: General Tech | August 17, 2016 - 12:41 PM |
Tagged: nvidia, Intel, HPC, Xeon Phi, maxwell, pascal, dirty pool

There is a spat going on between Intel and NVIDIA over the slide below, as you can read about over at Ars Technica.  It seems that Intel has reached into the industry's bag of dirty tricks and polished off an old standby: testing new hardware and software against older products from their competitors.  In this case it was high performance computing products which were compared, Intel's new Xeon Phi against NVIDIA's Maxwell, tested on an older version of the Caffe AlexNet benchmark.

NVIDIA points out that not only would they have done better than Intel if an up-to-date version of the benchmarking software had been used, but that the comparison should have been against their current architecture, Pascal.  This is not quite as bad as putting undocumented flags into compilers to reduce the performance of competitors' chips, or predatory discount programs, but it shows that the computer industry continues to have only a passing acquaintance with fair play and honest competition.

intel-xeon-phi-performance-claim.jpg

"At this juncture I should point out that juicing benchmarks is, rather sadly, par for the course. Whenever a chip maker provides its own performance figures, they are almost always tailored to the strength of a specific chip—or alternatively, structured in such a way as to exacerbate the weakness of a competitor's product."

Here is some more Tech News from around the web:

Tech Talk

Source: Ars Technica

Gigabyte BRIX Gaming UHD Combines 2.6L Chassis with Discrete GPU

Subject: Systems | August 17, 2016 - 04:37 PM |
Tagged: UHD, SFF, IDF 2016, idf, gigabyte, gaming, brix

While wandering around the exhibit area at this year’s Intel Developer Forum, I ran into our friends at Gigabyte showing off a brand new BRIX small form factor PC. The BRIX Gaming UHD takes the now-standard NUC/BRIX block shape and literally raises it up, extending the design vertically to allow for higher performance components and the added cooling capability to integrate them.

brixuhd06.jpg

The design of the BRIX Gaming UHD combines a brushed aluminum housing with a rubber base and bordering plastic sections to create a particularly stunning design that is both simple and interesting. Up top is a fan that pulls air through the entire chassis, running over the heatsink for the CPU and GPU. This is similar in function to the Mac Pro, though this is a much more compact device with a very different price point and performance target.

brixuhd08.jpg

Around the back you’ll find all the connections that the BRIX Gaming UHD supplies: three (!!) mini DisplayPort connections, a full size HDMI output, four USB 3.0 ports, a USB 3.1 connection, two wireless antennae ports, Gigabit Ethernet and audio input and output. That is a HUGE amount of connectivity options and is more than many consumer’s current large-scale desktops.

brixuhd07.jpg

The internals of the system are impressive and required some very custom design for cooling and layout.

brixuhd02.jpg

The discrete NVIDIA graphics chip (in this case the GTX 950) is in the left chamber while the Core i7-6500HQ Skylake processor is on the right side along with the memory slot and wireless card.

brixuhd04.jpg

Gigabyte measures the size of the BRIX Gaming UHD at 2.6 liters. Because of that compact space there is no room for hard drives: you get access to two M.2 2280 slots for storage instead. There are two SO-DIMM slots for DDR4 memory up to 2133 MHz, integrated 802.11ac support and support for quad displays.

Availability and pricing are still up in the air, though early reports were that the starting cost would be $1300. Gigabyte has since updated me and says that the BRIX Gaming UHD will be available in October and that an accurate MSRP has not been set. It would not surprise me if this model never actually saw the light of day and instead Gigabyte waited for NVIDIA’s next low powered Pascal based GPU, likely dubbed the GTX 1050. We’ll keep an eye on the BRIX Gaming UHD from Gigabyte to see what else transpires, but it seems the trend of small form factor PCs that sacrifice less in terms of true gaming potential continues.