Subject: Processors
Manufacturer: AMD

Clean Sheet and New Focus

It is no secret that AMD has been struggling for some time. The company has had success through the years, but the last decade has been somewhat bleak in terms of competitive advantages. AMD certainly made an impact throughout the decades with their 486 products, the K6, the original Athlon, and the industry-changing Athlon 64. Since that time we have had a couple of bright spots, with the Phenom II being far more competitive than expected and the introduction of very solid graphics performance in their APUs.

Sadly for AMD, their investment in the “Bulldozer” architecture was misplaced for where the industry was heading. While we certainly see far more software support for multi-threaded CPUs, IPC is still extremely important for most workloads. The original Bulldozer was somewhat rushed to market and was not fully optimized, and while the “Piledriver” based Vishera products fixed many of these issues, the non-APU products were never updated to the later Steamroller and Excavator architectures. The non-APU desktop market has been served for the past four years by 32nm PD-SOI based parts that sit on a rebranded chipset base that has not changed since 2010.

hc_03.png

Four years ago AMD decided to change course entirely with their desktop and server CPUs. Instead of evolving the “Bulldozer” style architecture featuring CMT (Clustered Multi-Threading), they were going to do a clean sheet design focused on efficiency, IPC, and scalability. While Bulldozer certainly could scale thread count fairly effectively, the overall performance targets and clockspeeds needed to compete with Intel were just not feasible considering the challenges of process technology. AMD brought back Jim Keller, an industry veteran with a huge amount of experience across multiple architectures, to lead this effort. Zen was born.

 

Hot Chips 28

This year’s Hot Chips is the first deep dive that we have received about the features of the Zen architecture.  Mike Clark is taking us through all of the changes and advances that we can expect with the upcoming Zen products.

Zen is a clean sheet design that borrows very little from previous architectures. This is not to say that concepts that worked well in previous architectures were not revisited and optimized, but the overall floorplan has changed dramatically from what we have seen in the past. AMD did not stand still with their Bulldozer products, and the latest Excavator core does improve upon the power consumption and performance of the original. That evolution was simply not enough considering market pressures and Intel's steady year-over-year improvement of their core architecture. Zen was designed to significantly improve IPC (instructions per clock), and AMD claims a whopping 40% increase over the latest Excavator core.
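As a rough illustration of what that claim means in practice (not AMD's own methodology), per-core throughput can be approximated as IPC multiplied by clock speed, so a 40% IPC gain at an equal frequency is roughly 40% more work done per core each second. A minimal sketch, with a made-up baseline IPC and clock purely to show the arithmetic:

```python
# Rough, illustrative model of per-core throughput: work per second ~ IPC * clock.
# The 40% figure is AMD's claim; the baseline IPC and clock below are made up
# purely to show the arithmetic, not measured values.
excavator_ipc = 1.0                 # hypothetical baseline instructions per clock
zen_ipc = excavator_ipc * 1.40      # AMD's claimed +40% IPC over Excavator
clock_ghz = 3.5                     # assume an equal clock speed for the comparison

excavator_throughput = excavator_ipc * clock_ghz * 1e9   # instructions per second
zen_throughput = zen_ipc * clock_ghz * 1e9

print(f"Relative gain at equal clocks: {zen_throughput / excavator_throughput - 1:.0%}")
```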

hc_04.png

AMD has also focused on scaling the Zen architecture from low power envelopes up to server level TDPs. The company looks to have pushed the top end power envelope of Zen down from the 125+ watts of Bulldozer/Vishera into the more acceptable 95 to 100 watt range. This has also allowed them to scale Zen down to the 15 to 25 watt TDP levels without sacrificing performance or overall efficiency. Most architectures have sweet spots where they tend to perform best. Vishera, for example, could scale nicely from 95 to 220 watts, but the design did not translate well into sub-65 watt envelopes. Excavator based “Carrizo” products, on the other hand, could scale from 15 watts to 65 watts without real problems, but became terribly inefficient above 65 watts as clockspeeds increased. Zen looks to address these differences by being able to scale from sub-25 watt TDPs up to 95 or 100 watts. In theory this should allow AMD to simplify their product stack by offering a common architecture across multiple platforms.

Click to continue reading about AMD's Zen architecture!

GlobalFoundries Will Allegedly Skip 10nm and Jump to Developing 7nm Process Technology In House (Updated)

Subject: Processors | August 20, 2016 - 03:06 PM |
Tagged: Semiconductor, lithography, GLOBALFOUNDRIES, global foundries, euv, 7nm, 10nm

UPDATE (August 22nd, 11:11pm ET): I reached out to GlobalFoundries over the weekend for a comment and the company had this to say:

"We would like to confirm that GF is transitioning directly from 14nm to 7nm. We consider 10nm as more a half node in scaling, due to its limited performance adder over 14nm for most applications. For most customers in most of the markets, 7nm appears to be a more favorable financial equation. It offers a much larger economic benefit, as well as performance and power advantages, that in most cases balances the design cost a customer would have to spend to move to the next node.

As you stated in your article, we will be leveraging our presence at SUNY Polytechnic in Albany, the talent and know-how gained from the acquisition of IBM Microelectronics, and the world-class R&D pipeline from the IBM Research Alliance—which last year produced the industry’s first 7nm test chip with working transistors."

An unexpected bit of news popped up today via TPU alleging that GlobalFoundries is not only developing 7nm technology (expected), but that the company will skip the 10nm node altogether, jumping straight from the 14nm FinFET technology it licensed from Samsung to 7nm manufacturing based on its own in-house process.

Reportedly, the move to 7nm would offer 60% smaller chips at three times the design cost of 14nm, which is to say that this would be both an expensive and impressive endeavor. Aided by Extreme Ultraviolet (EUV) lithography, GlobalFoundries expects to hit 7nm production sometime in 2020, with prototyping and small-scale use of EUV in the year or so leading up to it. The in-house process technology is likely thanks to the research being done at the APPC (Advanced Patterning and Productivity Center) in Albany, New York, along with the engineering expertise, design patents, and technology (e.g. ASML NXE 3300 and 3300B EUV tools) picked up when the company acquired IBM Microelectronics. The APPC is reportedly working simultaneously on research and development of manufacturing methods (especially EUV, where extremely short wavelengths of ultraviolet light, around 13.5nm, are used to pattern silicon) and on supporting production of chips at GlobalFoundries' "Malta" fab in New York.
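For a sense of scale on those reported figures, here is a hedged back-of-the-envelope calculation; the 14nm die area and design cost below are arbitrary placeholders, not GlobalFoundries numbers:

```python
# Back-of-the-envelope using the reported 14nm -> 7nm figures:
# ~60% smaller chips at ~3x the design cost. The 14nm die area and design
# cost are arbitrary placeholders, not GlobalFoundries numbers.
die_area_14nm_mm2 = 100.0
design_cost_14nm_usd = 50e6              # hypothetical $50M design cost at 14nm

die_area_7nm_mm2 = die_area_14nm_mm2 * (1 - 0.60)    # 60% smaller -> 40 mm^2
design_cost_7nm_usd = design_cost_14nm_usd * 3       # 3x the design cost

print(f"7nm die area:    {die_area_7nm_mm2:.0f} mm^2 (from {die_area_14nm_mm2:.0f} mm^2)")
print(f"7nm design cost: ${design_cost_7nm_usd / 1e6:.0f}M (from ${design_cost_14nm_usd / 1e6:.0f}M)")
```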

APPC in Albany NY.jpg

Advanced Patterning and Productivity Center in Albany, NY, where GlobalFoundries, SUNY Poly, IBM engineers, and other partners are forging a path to semiconductor manufacturing at 7nm and beyond. Photo by Lori Van Buren for the Times Union.

Intel's Custom Foundry Group will start pumping out ARM chips in early 2017, followed by Intel's own 10nm Cannon Lake processors in 2018, and Samsung will be offering up its own 10nm node as soon as next year. Meanwhile, TSMC has reportedly already taped out 10nm wafers, will begin production in late 2016/early 2017, and claims that it will hit 5nm by 2020. With its rivals all expecting production of 10nm chips as soon as Q1 2017, GlobalFoundries will be at a distinct disadvantage for a few years and will have only its 14nm FinFET (from Samsung) and possibly its own 14nm tech to offer until it gets 7nm production up and running (hopefully!).

Previously, GlobalFoundries has stated that:

“GLOBALFOUNDRIES is committed to an aggressive research roadmap that continually pushes the limits of semiconductor technology. With the recent acquisition of IBM Microelectronics, GLOBALFOUNDRIES has gained direct access to IBM’s continued investment in world-class semiconductor research and has significantly enhanced its ability to develop leading-edge technologies,” said Dr. Gary Patton, CTO and Senior Vice President of R&D at GLOBALFOUNDRIES. “Together with SUNY Poly, the new center will improve our capabilities and position us to advance our process geometries at 7nm and beyond.” 

If this news turns out to be correct, it is an interesting move and certainly a gamble. However, I think it is a gamble that GlobalFoundries needs to take to be competitive. I am curious how this will affect AMD though. While I had expected AMD to stick with 14nm for a while, especially for Zen/CPUs, will this mean that AMD will have to go to TSMC for its future GPUs, or will contract limitations (if any? I think they have a minimum amount they need to order from GlobalFoundries) mean that GPUs will remain at 14nm until GlobalFoundries can offer its own 7nm? I would guess that Vega will still be 14nm, but Navi in 2018/2019? I guess we will just have to wait and see!


Source: TechPowerUp
Manufacturer: PC Perspective

Why Two 4GB GPUs Isn't Necessarily 8GB

We're trying something new here at PC Perspective. Some topics are fairly difficult to explain cleanly without accompanying images. We also like to go fairly deep into specific topics, so we're hoping that we can provide educational cartoons that explain these issues.

This pilot episode is about load-balancing and memory management in multi-GPU configurations. There seems to be a lot of confusion around what was (and was not) possible with DirectX 11 and OpenGL, and even more confusion about what DirectX 12, Mantle, and Vulkan allow developers to do. It highlights three different load-balancing algorithms, and even briefly mentions what LucidLogix was attempting to accomplish almost ten years ago.
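To illustrate the memory-management point in very rough terms (this is simplified bookkeeping, not actual DirectX 12 or Vulkan code, and the asset sizes are made up), alternate frame rendering mirrors every resource into each GPU's memory, so two 4GB cards still behave like 4GB, while an explicit multi-adapter approach can in principle split resources across the two pools:

```python
# Simplified, illustrative memory accounting for two 4GB GPUs.
# Not real DirectX 12 / Vulkan code; just the bookkeeping idea behind the video.
GPU_MEMORY_GB = 4
assets_gb = [1.5, 1.2, 0.8, 1.0]   # hypothetical texture/geometry pools

# Alternate frame rendering (AFR): every GPU needs a full copy of every asset,
# so the usable capacity is that of a single card, not the sum of both.
afr_used_per_gpu = sum(assets_gb)
afr_ceiling = GPU_MEMORY_GB

# Explicit multi-adapter (DX12/Vulkan style): a developer *may* place different
# resources on different GPUs, approaching (but rarely reaching) the combined total.
gpu0 = sum(assets_gb[0::2])   # naive alternating split, purely for illustration
gpu1 = sum(assets_gb[1::2])
split_ceiling = 2 * GPU_MEMORY_GB

print(f"AFR: {afr_used_per_gpu:.1f} GB duplicated on each card, {afr_ceiling} GB usable ceiling")
print(f"Explicit split: GPU0 {gpu0:.1f} GB, GPU1 {gpu1:.1f} GB, {split_ceiling} GB combined ceiling")
```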

pcper-2016-animationlogo-multiGPU.png

If you like it, and want to see more, please share and support us on Patreon. We're putting this out not knowing if it's popular enough to be sustainable. The best way to see more of this is to share!

Open the expanded article to see the transcript, below.

AMD Gains Significant Market Share in Q2 2016

Subject: Graphics Cards | August 24, 2016 - 10:34 AM |
Tagged: nvidia, market share, jpr, jon peddie, amd

As reported by both Mercury Research and now by Jon Peddie Research, in a graphics add-in card market that dropped dramatically in Q2 2016 in terms of total units shipped, AMD has gained significant market share against NVIDIA.

GPU Supplier   Market share this QTR   Market share last QTR   Market share last year
AMD            29.9%                   22.8%                   18.0%
NVIDIA         70.0%                   77.2%                   81.9%
Total          100%                    100%                    100%

Source: Jon Peddie Research

Last year at this time, AMD was sitting at 18% market share in terms of units sold, an absolutely dismal result compared to NVIDIA's dominating 81.9%. Over the last couple of quarters we have seen AMD gain in this space, and keeping in mind that Q2 2016 does not include sales of AMD's new Polaris-based graphics cards like the Radeon RX 480, the jump to 29.9% is a big move for the company. As a result, NVIDIA falls back to 70% market share for the quarter, which is still a significant lead over AMD.

Numbers like that shouldn't be taken lightly - for AMD to gain 7 points of market share in a single quarter indicates a substantial shift in the market. This includes all add-in cards: budget, mainstream, enthusiast and even workstation class products. One report I received says that NVIDIA card sales specifically dropped off in Q2, though the exact reason why isn't known, and as a kind of de facto result, AMD gained sales share.

unnamed.png

There are several other factors to watch with this data, however. First, the quarterly drop in graphics card sales was -20% in Q2 when compared to Q1, well above the average seasonal Q1-to-Q2 drop, which JPR claims is -9.7%. Much of this sell-through decrease is likely due to consumers delaying their purchases while waiting for the releases of NVIDIA's Pascal GPUs and AMD's Polaris GPUs.

The NVIDIA GeForce GTX 1080 launched on May 17th and the GTX 1070 on May 29th. The company has made very bold claims about product sales of Pascal parts so I am honestly very surprised that the overall market would drop the way it did in Q2 and that NVIDIA would fall behind AMD as much as it has. Q3 2016 may be the defining time for both GPU vendors however as it will show the results of the work put into both new architectures and both new product lines. NVIDIA reported record profits recently so it will be interesting to see how that matches up to unit sales.

Use Bing in Edge for 30 hours a month and get ...

Subject: General Tech | August 22, 2016 - 01:26 PM |
Tagged: microsoft, microsoft rewards, windows 10, bing, edge

If you remember Bing Rewards then this will seem familiar; otherwise the gist of the deal is that if you browse in Edge and use Bing to search for 30 hours every month, you get a bribe similar to what credit card companies offer. You can choose between Skype credit, ad-free Outlook or Amazon gift cards (perhaps for aspirin to ease your Bing-related headache), if such things seem worth your while. The Inquirer points out that this is another reminder that Microsoft tracks all usage of Edge; otherwise they would not be able to verify the amount of Bing you used.

Then again, to carry on the credit card analogy ...

Bing-logo-2013-880x660.png

"Microsoft Rewards is a rebrand of Bing Rewards, the firm's desperate attempt to get people using the irritating default search engine, and sure enough the bribes for using Edge apply only if you use Bing too."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Inquirer
Manufacturer: ASUS

Specifications and Card Breakdown

The flurry of retail-built cards based on NVIDIA's new Pascal GPUs has been hitting us hard at PC Perspective. So much so, in fact, that, coupled with new gaming notebooks, new monitors, new storage and a new church (you should listen to our podcast, really), output has slowed dramatically. How do you write reviews for all of these graphics cards when you don't even know where to start? My answer: blindly pick one and start typing away.

07.jpg

Just after launch day of the GeForce GTX 1060, ASUS sent over the GTX 1060 Turbo 6GB card. Despite the name, the ASUS Turbo line of GTX 10-series graphics cards is the company's most basic, most stock iteration of graphics cards. That isn't necessarily a drawback though - you get reference level performance at the lowest available price and you still get the promises of quality and warranty from ASUS.

With a target MSRP of just $249, does the ASUS GTX 1060 Turbo make the cut for users looking for that perfect mainstream 1080p gaming graphics card? Let's find out.

Continue reading our review of the ASUS GeForce GTX 1060 Turbo 6GB!

Subject: General Tech
Manufacturer: Audeze

Introduction, Specifications, and Design

More than an ordinary pair of headphones, the SINE headphones from Audeze feature planar magnetic drivers, and the option of direct connection to an Apple Lightning port for pure digital sound from the SINE's inline 24-bit DAC and headphone amp. So how does the "world’s first on-ear planar magnetic headphone" sound? We first had a chance to hear the SINE headphones at CES, and Audeze was kind enough to loan us a pair to test them out.

DSC_0755.jpg

"SINE headphones, with our planar magnetic technology, are the next step up in sound quality for many listeners. Instead of using ordinary dynamic drivers, our planar technology gives you a sound that’s punchy, dynamic, and detailed. In fact, it sounds like a much larger headphone! It’s lightweight, and folds flat for easy travelling. Once again, we’ve called upon our strategic partner Designworks, a BMW group subsidiary for the industrial design, and we manufacture SINE headphones in the USA at our Southern California factory."

Planar headphones certainly seem to be gaining traction in recent years. It was a pair from Audeze that I was first able to demo a couple of years ago (the LCD-3 if I recall correctly), and I remember thinking about how precise they sounded. Granted, I was listening via a high-end headphone amp and lossless digital source at a hi-fi audio shop, so I had no frame of reference for what my own, lower-end equipment at home could do. And while the SINE headphones are certainly very advanced and convenient as an all-in-one solution to high-end audio for iOS device owners, there’s more to the story.

One of the distinct advantages provided by the SINE headphones is the consistency of the experience they can provide across compatible devices. If you hear the SINE in a store (or on the floor of a tradeshow, as I did), you’re going to hear the same sound at home or on the go, provided you are using an Apple i-device. The Lightning connector provides the digital source for your audio, and the SINE’s built-in DAC and headphone amp create the analog signal that travels to the planar magnetic drivers in the headphones. In fact, if your own source material is of higher quality you can get even better sound than you might hear in a demo - and that’s the catch with headphones like this: source material matters.

DSC_0757.jpg

One of the problems with high-end components in general is their ability to reveal the limitations of other equipment in the chain. Looking past the need for quality amplification for a moment, think about the differences you’ll immediately hear from different music sources. Listen to a highly-compressed audio stream, and it can sound rather flat and lifeless. Listen to uncompressed music from your iTunes library, and you will appreciate the more detailed sound. But move up to 24-bit studio master recordings (with their greater dynamic range and significantly higher level of detail), and you’ll be transported into the world of high-res audio with the speakers, DAC, and headphone amp you need to truly appreciate the difference.
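For a rough sense of why bit depth matters in that last step, the theoretical dynamic range of linear PCM works out to about 6.02 dB per bit plus a small constant; a quick sketch of the usual back-of-the-envelope comparison:

```python
# Theoretical dynamic range of linear PCM audio: ~6.02 * bits + 1.76 dB.
# Real-world playback chains deliver less, but the gap shows why 24-bit
# studio masters have more headroom than 16-bit CD-quality sources.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(f"16-bit: {dynamic_range_db(16):.1f} dB")   # ~98 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")   # ~146 dB
```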

Continue reading our review of the Audeze SINE Planar Magnetic headphones!

Intel Revises All SSD Product Lines - 3D NAND Everywhere!

Subject: Storage | August 25, 2016 - 06:26 PM |
Tagged: ssd, Pro 6000p, Intel, imft, E 6000p, E 5420s, DC S3520, DC P3520, 600p, 3d nand

Intel announced the production of 3D NAND a little over a year ago, and we've now seen production ramp up to the point where they are infusing it into nearly every nook and cranny of their SSD product lines.

ssd-3d-nand-composite-form-factor-16x9.png.rendition.intel_.web_.720.405.png

The most relevant part for our readers will be a long overdue M.2 2280 SSD. These will kick off with the 600p:

ch-1.jpg

An overseas forum member over at chiphell got their hands on a 600p and ran some quick tests. From their photo (above), we can confirm the controller is not from Intel, but rather from Silicon Motion. The NAND is naturally from Intel, as is likely their controller firmware implementation, as these parts go through the same lengthy validation process as their other products.

Intel is going for the budget consumer play here. The flash will be running in TLC mode, likely with an SLC cache. Specs are respectable - 1.8GB/s reads, 560MB/s writes, 155k random read IOPS, and 128k random write IOPS (4KB, QD=32). By respectable I mean in light of the pricing:

600p-6000p pricing.png

Wow! These prices range from $0.55/GB for the 128GB model all the way down to $0.35/GB for the 1TB part.
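To put the rated random performance above into more familiar units (a rough conversion assuming the stated 4KB transfer size, not an Intel-published figure), the IOPS numbers translate to small-block throughput like so:

```python
# Rough conversion of the 600p's rated 4KB random IOPS into throughput.
# The IOPS figures are the claimed specs; the math is simply IOPS * block size.
BLOCK_BYTES = 4 * 1024

for label, iops in (("random read", 155_000), ("random write", 128_000)):
    mb_per_s = iops * BLOCK_BYTES / 1e6
    print(f"{label}: {iops:,} IOPS x 4KB ≈ {mb_per_s:.0f} MB/s")
```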

You might have noticed the Pro 6000p in that list. Those are nearly identical to the 600p save some additional firmware / software tweaks to support IT infrastructure remote secure erase.

Intel also refreshed their DataCenter (DC) lineup. The SSD DC S3520 (SATA) and P3520 (PCIe/NVMe) were also introduced as a refresh, also using Intel's 3D NAND. We published our exclusive review of the Intel SSD DC P3520 earlier today, so check there for full details on that enterprise front. Before we move on, a brief moment of silence for the P3320 - soft-launched in April, but discontinued before it shipped. We hardly knew ye.

Lastly, Intel introduced a few additional products meant for the embedded / IoT sector. The SSD E 6000p is an M.2 PCIe part similar to the first pair of products mentioned in this article, while the SSD E 5420s comes in 2.5" and M.2 SATA flavors. The differentiator on these 'E' parts is enhanced AES 256 crypto.

Most of these products will be available 'next week', but the 600p 360GB (to be added) and 1TB capacities will ship in Q4.

Abbreviated press blast appears after the break.

Source: Intel

Deus Ex: GPU-kind divided

Subject: General Tech | August 24, 2016 - 03:30 PM |
Tagged: gaming, deus ex: mankind divided

You are probably wondering what kind of performance you will see when you run the new Deus Ex after you purchase it, as obviously you did not pre-order the game. TechPowerUp has you covered: they have tested the retail version of the game with a variety of cards to give you an idea of the load your GPU will be under. They started out testing memory usage with a Titan; running Ultra settings at 4K used up to 5.5GB of memory, so mid-range cards will certainly suffer at that point. Since not many of us are sporting Titans in our cases, they also tried out the GTX 1060, 980 Ti and 1080 along with the RX 480 and Fury X at a variety of settings. Read through their review to garner a rough estimate of your expected performance in Mankind Divided.

screen4.jpg

"Deus Ex Mankind Divided has just been released today. We bring you a performance analysis using the most popular graphics cards, at four resolutions, including 4K, at both Ultra and High settings. We also took a closer look at VRAM usage."

Here is some more Tech News from around the web:

Gaming

Source: TechPowerUp
Subject: Storage
Manufacturer: Intel

Introduction, Specifications and Packaging

Introduction:

Intel launched their Datacenter 'P' Series parts a little over two years ago. Since then, the P3500, P3600, and P3700 lines have seen various expansions and spinoffs. The most recent to date was the P3608, which packed two full P3600s into a single HHHL form factor. With Intel 3D XPoint / Optane parts lurking just around the corner, I had assumed there would be no further branches of the P3xxx line, but Intel had other things in mind. IMFT 3D NAND offers greater die capacities at a reduced cost/GB, apparently even in MLC form, and Intel has infused this flash into their new P3520:

DSC03033.jpg

Remember that the P3500 series was the lowest end of Intel's P line, and as far as performance goes, the P3520 actually takes a further step back. The play here is to get the proven quality control and reliability of Intel's datacenter parts into a lower cost product. While the P3500 launched at $1.50/GB, the P3520 pushes that cost down *well* below $1/GB for a 2TB HHHL or U.2 SSD.

Read on for our full review of the Intel DC P3520 SSD!

Love upgrading memory on your laptop? Double check any Apollo Lake machines you like.

Subject: General Tech | August 24, 2016 - 01:01 PM |
Tagged: ultraportable, LPDDR4, Intel, apollo lake

A report from DigiTimes is bad news for those who like to upgrade their ultraportable laptops. To cut down on production costs, companies like Acer, Lenovo, Asustek Computer, HP and Dell will use on-board memory as opposed to DIMMs in their Apollo Lake based machines. This should help keep the costs of flipbooks, 2-in-1s and other small machines stable, or even lower them by a small amount, but it does mean that they cannot easily be upgraded. Many larger notebooks will also switch to this style of memory, so be sure to do your research before purchasing a new mobile system.

industry’s-first-8-gigabit-Gb-low-power-double-data-rate-4-LPDDR4-mobile-DRAM.jpg

"Notebook vendors have mostly adopted on-board memory designs in place of DIMMs to make their Intel Apollo Lake-based notebooks as slim as possible, according to sources from Taiwan's notebook supply chain"

Here is some more Tech News from around the web:

Tech Talk

Source: DigiTimes

EVGA's Water Cooled GTX 1080 FTW Hybrid Runs Cool and Quiet

Subject: Graphics Cards | August 23, 2016 - 04:18 PM |
Tagged: water cooling, pascal, hybrid cooler, GTX 1080, evga

EVGA recently launched a water cooled graphics card that pairs the GTX 1080 processor with the company's FTW PCB and a closed loop (AIO) water cooler to deliver a heavily overclockable card that will set you back $730.

The GTX 1080 FTW Hybrid is interesting because the company has opted to use the same custom PCB design as its FTW cards rather than a reference board. This FTW board features improved power delivery with a 10+2 power phase design, two 8-pin PCI-E power connectors, Dual BIOS, and adjustable RGB LEDs. The cooler is shrouded, with backlit EVGA logos, and has a reportedly quiet fan that air cools the memory and VRMs using a reverse swept blade design (like their ACX air coolers) rather than a traditional blower style fan. The graphics processor itself is cooled by the water loop.

EVGA GTX 1080 FTW Hybrid.jpg

The water block and pump sit on top of the GPU with tubes running out to the 120mm radiator. Luckily the fan on the radiator can be easily disconnected, allowing users to swap in their own fan if they wish. According to YouTuber JayzTwoCents, the Precision XOC software controls the speed of the fan on the card itself, but users cannot adjust the radiator fan speed. You can connect your own fan to your motherboard and control it that way, however.

Display outputs include one DVI-D, one HDMI, and three DisplayPort outputs (any four of the five can be used simultaneously).

Out of the box this 215W TDP graphics card has a factory overclock of 1721 MHz base and 1860 MHz boost. Thanks to the water cooler, the GPU stays at a frosty 42°C under load. When switched to the slave BIOS (which has a higher power limit and more aggressive fan curve), the GPU Boosted to 2025 MHz and hit 51°C (he managed to keep that to 44°C by swapping his own EK-Vardar fan onto the radiator). Not bad, especially considering the Founder's Edition hit 85°C on air in our testing! Unfortunately, EVGA did not touch the memory and left the 8GB of GDDR5X at the stock 10 GHz.

                  GTX 1080         GTX 1080 FTW Hybrid   GTX 1080 FTW Hybrid (Slave BIOS)
GPU               GP104            GP104                 GP104
GPU Cores         2560             2560                  2560
Rated Clock       1607 MHz         1721 MHz              1721 MHz
Boost Clock       1733 MHz         1860 MHz              2025 MHz
Texture Units     160              160                   160
ROP Units         64               64                    64
Memory            8GB              8GB                   8GB
Memory Clock      10000 MHz        10000 MHz             10000 MHz
TDP               180 watts        215 watts             ? watts
Max Temperature   85°C             42°C                  51°C
MSRP (current)    $599 ($699 FE)   $730                  $730
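Working from the table above, the factory and slave-BIOS clocks amount to roughly a 7% base-clock bump and a 7-17% boost-clock bump over the reference GTX 1080 (a quick calculation from the listed numbers, not EVGA's marketing math):

```python
# Quick percentage comparison of the FTW Hybrid's clocks against the reference GTX 1080,
# using the values from the table above (the slave BIOS boost is the observed 2025 MHz).
reference = {"base": 1607, "boost": 1733}     # MHz
ftw_default = {"base": 1721, "boost": 1860}   # MHz
ftw_slave_boost = 2025                        # MHz

def pct_gain(new: float, old: float) -> float:
    return (new - old) / old * 100

print(f"Base clock gain:         {pct_gain(ftw_default['base'], reference['base']):.1f}%")
print(f"Boost gain (default):    {pct_gain(ftw_default['boost'], reference['boost']):.1f}%")
print(f"Boost gain (slave BIOS): {pct_gain(ftw_slave_boost, reference['boost']):.1f}%")
```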

The water cooler should help users hit even higher overclocks and/or maintain a consistent GPU Boost clock at much lower temperatures than on air. The GTX 1080 FTW Hybrid graphics card does come at a bit of a premium at $730 (versus $699 for Founders or ~$650+ for custom models), but if you have the room in your case for the radiator this might be a nice option! (Of course custom water cooling is more fun, but it's also more expensive, time consuming, and addictive. hehe)

What do you think about these "hybrid" graphics cards?

Source: EVGA

IDF 2016: G.Skill Shows Off Low Latency DDR4-3333MHz Memory

Subject: Memory | August 20, 2016 - 01:25 AM |
Tagged: X99, Samsung, ripjaws, overclocking, G.Skill, ddr4, Broadwell-E

Early this week at the Intel Developer Forum in San Francisco, California, G.Skill showed off new low latency DDR4 memory modules for desktops and notebooks. The company launched two Trident series DDR4-3333 MHz kits and one Ripjaws branded DDR4-3333 MHz SO-DIMM. While these speeds are not close to the fastest we have seen from them, these modules offer much tighter timings. All of the new memory modules use Samsung 8Gb chips and will be available soon.

On the desktop side of things, G.Skill demonstrated a 128GB (8x16GB) DDR4-3333 kit with timings of 14-14-14-34 running on an Asus ROG Rampage V Edition 10 motherboard with an Intel Core i7 6800K processor. They also showed a 64GB (8x8GB) kit clocked at 3333 MHz with timings of 13-13-13-33 running on a system with the same i7 6800K and an Asus X99 Deluxe II motherboard.

128GB 3333MHz CL14 demo.JPG

G.Skill demonstrating 128GB DDR4-3333 memory kit at IDF 2016.

In addition to the desktop DIMMs, G.Skill showed a 32GB Ripjaws kit (2x16GB) clocked at 3333 MHz running on an Intel Skull Canyon NUC. The SO-DIMM had timings of 16-18-18-43 and ran at 1.35V.

Nowadays lower latency is not quite as important as it once was, but there is still a slight performance advantage to be had with tighter timings, and pure clockspeed is not the only important RAM metric. Overclocking can get you lower CAS latencies (sometimes at the cost of more voltage), but if you are not into that tedious process and are buying RAM anyway, you might as well go for the modules with the lowest latencies out of the box at the clockspeeds you are looking for. I am not sure how popular RAM overclocking is these days outside of benchmark runs and extreme overclockers, though, to be honest.
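A handy rule of thumb when weighing timings against clockspeed: the first-word CAS latency in nanoseconds is the CAS cycle count divided by the memory clock, which is half the DDR data rate. A small sketch comparing the new kits to looser timings (the CL16 and DDR4-2133 entries are just reference points, not G.Skill products):

```python
# First-word CAS latency in nanoseconds: CAS cycles divided by the memory clock,
# where the memory clock (MHz) is half the DDR data rate (MT/s).
def cas_latency_ns(cas_cycles: int, data_rate_mt_s: int) -> float:
    clock_mhz = data_rate_mt_s / 2
    return cas_cycles / clock_mhz * 1000

print(f"DDR4-3333 CL14: {cas_latency_ns(14, 3333):.2f} ns")   # G.Skill's new kits
print(f"DDR4-3333 CL16: {cas_latency_ns(16, 3333):.2f} ns")   # same speed, looser CAS
print(f"DDR4-2133 CL15: {cas_latency_ns(15, 2133):.2f} ns")   # common JEDEC baseline
```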

Technical Session OC System.JPG

Overclocking Innovation session at IDF 2016.

With regards to extreme overclocking, there was reportedly an "Overclocking Innovation" event at IDF where G.Skill and Asus overclocker Elmor achieved a new CPU overclocking record of 5,731.78 MHz on the i7 6950X, running on a system with G.Skill memory and an Asus motherboard. The company's DDR4 record of 5,189.2 MHz was not beaten at the event, G.Skill notes in its press release (heh).

Are RAM timings important to you when looking for memory? What are your thoughts on the ever increasing clocks of new DDR4 kits with how overclocking works on the newer processors/motherboards?

Source: G.Skill

AMD's 7870 rides again, checking out the new cooler on the A10-7870K

Subject: Processors | August 22, 2016 - 05:37 PM |
Tagged: amd, a10-7870K

Leaving aside the questionable naming, let's instead focus on the improved cooler on this ~$130 APU from AMD. Neoseeker fired up the fun sized, 125W rated cooler on top of the A10-7870K and were pleasantly surprised at the lack of noise, even under load. Encouraged by the performance, they overclocked the chip by 500MHz to 4.4GHz and were rewarded with a stable and still very quiet system. The review focuses more on the improvements the new cooler offers than on the APU itself, which has not changed. Check out the review if you are considering a lower cost system that only speaks when spoken to.

14.jpg

"In order to find out just how much better the 125W thermal solution will perform, I am going to test the A10-7870K APU mounted on a Gigabyte F2A88X-UP4 motherboard provided by AMD with a set of 16 GB (2 x 8) DDR3 RAM modules set at 2133 MHz speed. I will then run thermal and fan speed tests so a comparison of the results will provide a meaningful data set to compare the near-silent 125W cooler to an older model AMD cooling solution."

Here are some more Processor articles from around the web:

Processors

Source: Neoseeker
Subject: Mobile
Manufacturer: HUAWEI

Introduction and Specifications

Immediately reminiscent of other phablet devices, the Mate 8 from HUAWEI is a characteristically large, thin slab of a smartphone. But under the hood there's quite a departure from the norm, as the SoC powering the device is new to the high-end phone market - no Qualcomm, Samsung, or even MediaTek here.

DSC_0300.jpg

"The Mate 8 takes the look and feel of the Mate series to a whole new level. Boasting a vivid 6" FHD display, an ultra slim design, a re-designed fingerprint sensor that's faster and more reliable, and a sleek aluminum unibody design, the Mate 8 is sure to impress."

The HiSilicon Kirin 950 powers the Mate 8: an 8-core design composed of 4x ARM Cortex-A72 cores clocked at up to 2.3 GHz and 4x ARM Cortex-A53 cores clocked at up to 1.8 GHz. Memory is 3GB for our sample, with 32GB of storage; a version with 4GB RAM and 64GB of storage is also available.

kirin.jpg

The Mate 8 looks every bit a premium device, and the metal and glass construction of the handset feels solid. It also feels rather light (185g) given its size. But how does it perform? This is an especially interesting question given the unusual silicon in the Mate 8, as the Kirin 950's Cortex-A72 is currently the most powerful ARM core design (at least until the Cortex-A73, announced this summer, finds its way into devices).

In this review we'll explore the overall quality of the HUAWEI Mate 8, and go over usage impressions. And, of course, we'll look at some performance benchmarks to see how this Kirin 950 SoC stacks up against recent Snapdragon and Apple SoCs.

DSC_0275.jpg

Continue reading our review of the HUAWEI Mate 8 smartphone!!

ASUS tossed everything they could find onto the Rampage V Extreme 10

Subject: Motherboards | August 25, 2016 - 02:39 PM |
Tagged: ROG, rampage v edition 10, asus

Remember in the 90's when all the cool people had lights glowing from underneath their cars?  Now your motherboard can do the same thing, but with extra colour choices and even different effects!  Leaving the RGB disease alone for now, the features on the motherboard are impressive: dual USB 3.1 Type-C ports, support for both M.2 and the Dublin version of storage, PCIe lane switches and even a mulligan button to let you retry a failed POST before having to reset your overclocking settings.  The SupremeFX Hi-Fi audio codec on the board supports proper headphones thanks to the fan controller-like expansion module, which requires a 6-pin PCI-Express power connector to run; it even comes with coasters.

That is more than enough about the features; to see how well it performs you can pop by [H]ard|OCP.

glow.jpg

"ASUS celebrates the 10th anniversary of its Republic of Gamers brand in style with the new Rampage V Extreme 10! To properly commemorate its decade of innovation, this motherboard needs to be nothing short of the best motherboard ASUS has ever built and a worthy successor to the Rampage name. "

Here are some more Motherboard articles from around the web:

Motherboards

Source: [H]ard|OCP

Thermaltake's Core P100 Pedestal - Add an extension to your case, no permits required

Subject: Cases and Cooling | August 26, 2016 - 01:33 PM |
Tagged: thermaltake, core 100 pedestal, W100 Super Tower Chassis

The Thermaltake Core P100 is something new to the market, with the possible exception of the Cooler Master HAF Stacker 935 case components.  It adds additional space to Thermaltake's W100 chassis and is aptly named, as the P100 is placed underneath the W100.  You will need to assemble it as it ships in pieces, just as the W100 does, so expect to put some work into setting up these cases.  Once assembled it measures 9.8x12.2x26.7" and gives you space to add additional radiators to your system; you could place the PSU in there and still fit in some smaller radiators, or perhaps even fill it with drives.  Drop by [H]ard|OCP to see some of the possibilities, including a complete mini-ITX build.

1470629420OiNWtYv5BA_4_13_l.jpg

"The Thermaltake Core P100 Pedestal is an expansion part for the Thermaltake W100 full tower case previously reviewed here. What the P100 does is give you the ability to expand you cooling system's ability or give you space for extra storage among other things into an entirely self-contained unit below the W100 chassis."

Here are some more Cases & Cooling reviews from around the web:

CASES & COOLING

Source: [H]ard|OCP

A trio of mechanical keyboards from AZIO, the new MGK L80 lineup

Subject: General Tech | August 22, 2016 - 03:14 PM |
Tagged: AiZO, MGK L80, Kailh, gaming keyboard, input

The supply of mechanical keyboards continues to grow; once, Cherry MX was the only supplier of switches and only a few companies sold the products.  Now we have a choice of manufacturer as well as the switch type we want, beyond the usual choice of Red, Brown, Blue and so on.  AZIO chose to use Kailh switches in their MGK L80 lineup, in your choice of click type, and also included a wrist rest for those who desire such a thing.  Modders Inc tested out the three models on offer; they are a bit expensive but do offer a solid solution for your mechanical keyboard desires.

IMG_9339.jpg

"The MGK L80 series is the latest line of gaming keyboards manufactured by AZIO. Available in red, blue or RGB backlighting, the MGK L80 offers mechanical gaming comfort with a choice of either Kailh brown or blue switch mounted on an elegant brushed aluminum surface."

Here is some more Tech News from around the web:

Tech Talk

Source: Modders Inc

Samsung and SK Hynix Discuss The Future of High Bandwidth Memory (HBM) At Hot Chips 28

Subject: Memory | August 25, 2016 - 02:39 AM |
Tagged: TSV, SK Hynix, Samsung, hot chips, hbm3, hbm

Samsung and SK Hynix were in attendance at the Hot Chips Symposium in Cupertino, California to (among other things) talk about the future of High Bandwidth Memory (HBM). In fact, the companies are working on two new HBM products: HBM3 and an as-yet-unbranded "low cost HBM." HBM3 will replace HBM2 at the high end and is aimed at the HPC and "prosumer" markets while the low cost HBM technology lowers the barrier to entry and is intended to be used in mainstream consumer products.

As currently planned, HBM3 (Samsung refers to its implementation as Extreme HBM) features double the density per layer and at least double the bandwidth of the current HBM2 (which so far is only used in NVIDIA's planned Tesla P100). Specifically, the new memory technology offers 16Gb (~2GB) per layer, and as many as eight (or more) layers can be stacked together using TSVs into a single chip. So far we have seen GPUs use four HBM chips on a single package, and if that holds true with HBM3 and interposer size limits, we may well see future graphics cards with 64GB of memory! Considering the HBM2-based Tesla will have 16GB and AMD's HBM-based Fury X cards had 4GB, HBM3 is a sizable jump!

Capacity is not the only benefit though. HBM3 doubles the bandwidth versus HBM2, with 512GB/s (or more) of peak bandwidth per stack! In the theoretical example of a graphics card with 64GB of HBM3 (four stacks), that would be in the range of 2 TB/s of theoretical maximum peak bandwidth! Real world figures may be less, but that is still terabytes per second of bandwidth, which is exciting because it opens a lot of possibilities for gaming, especially as developers push graphics further towards photo realism and resolutions keep increasing. HBM3 should be plenty for a while as far as keeping the GPU fed with data on the consumer and gaming side of things, though I'm sure the HPC market will still crave more bandwidth.
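Those capacity and bandwidth figures multiply out straightforwardly; here is a quick sanity check of the numbers above, assuming four stacks per package as on current HBM-based GPUs:

```python
# Back-of-the-envelope HBM3 figures from the Hot Chips disclosures.
LAYER_CAPACITY_GB = 2        # 16Gb (~2GB) per layer
LAYERS_PER_STACK = 8         # "eight (or more)" layers per stack
STACK_BANDWIDTH_GBS = 512    # 512GB/s (or more) per stack
STACKS_PER_PACKAGE = 4       # typical of current HBM GPU packages

capacity_gb = LAYER_CAPACITY_GB * LAYERS_PER_STACK * STACKS_PER_PACKAGE
bandwidth_tbs = STACK_BANDWIDTH_GBS * STACKS_PER_PACKAGE / 1000

print(f"Package capacity:  {capacity_gb} GB")          # 64 GB
print(f"Package bandwidth: {bandwidth_tbs:.1f} TB/s")  # ~2.0 TB/s
```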

Samsung further claims that HBM3 will operate at similar (~500MHz) clocks to HBM2, but will use "much less" core voltage (HBM2 is 1.2V).

HBM Four Stacked.jpg

Stacked HBM memory on an interposer surrounding a processor. Upcoming HBM technologies will allow memory stacks with double the number of layers.

HBM3 is perhaps the most interesting technologically; however, the "low cost HBM" is exciting in that it will enable HBM to be used in the systems and graphics cards most people purchase. There were fewer details available on this new lower cost variant, but Samsung did share a few specifics. The low cost HBM will offer up to 200GB/s of peak bandwidth per stack while being much cheaper to produce than current HBM2. In order to reduce the cost of production, there is no buffer die or ECC support, and the number of Through Silicon Via (TSV) connections has been reduced. In order to compensate for the lower number of TSVs, the pin speed has been increased to 3Gbps (versus 2Gbps on HBM2). Interestingly, Samsung would like for low cost HBM to support traditional silicon as well as potentially cheaper organic interposers. According to NVIDIA, TSV formation is the most expensive part of interposer fabrication, so making reductions there (and somewhat making up for it with increased per-connection speeds) makes sense when it comes to a cost-conscious product. It is unclear whether organic interposers will win out here, but it is nice to see them get a mention, and they are an alternative worth looking into.
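Per-stack bandwidth for these designs is essentially interface width times per-pin speed. Assuming HBM2's publicly documented 1024-bit-per-stack interface for comparison (the low cost HBM width below is merely inferred from the 200GB/s and 3Gbps figures, not a disclosed spec), the trade-off looks like this:

```python
# Per-stack bandwidth (GB/s) = interface width (bits) * per-pin speed (Gbps) / 8.
# HBM2's 1024-bit-per-stack interface is public; the low cost HBM width below is
# only inferred from the 200GB/s and 3Gbps figures, not a Samsung specification.
def stack_bandwidth_gbs(width_bits: int, pin_speed_gbps: float) -> float:
    return width_bits * pin_speed_gbps / 8

print(f"HBM2 (1024-bit @ 2Gbps): {stack_bandwidth_gbs(1024, 2.0):.0f} GB/s per stack")

implied_width_bits = 200 * 8 / 3.0   # width needed to hit 200GB/s at 3Gbps per pin
print(f"Low cost HBM implied width: ~{implied_width_bits:.0f} bits per stack")
```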

Both of these memory technologies are still years away and the designs are subject to change, but so far both plans are looking rather promising. I am intrigued by the possibilities and hope to see new products take advantage of the increased performance (and, in the latter case, lower cost). On the graphics front, HBM3 is way too far out for a Vega release, but it may come just in time for AMD to incorporate it into its high end Navi GPUs, and by 2020 the battle between GDDR and HBM in the mainstream should be heating up.

What are your thoughts on the proposed HBM technologies?

Source: Ars Technica

Podcast #414 - AMD Zen Architecture Details, Lightning Headphones, AMD GPU Market Share and more!

Subject: General Tech | August 25, 2016 - 10:51 AM |
Tagged: Zen, video, seasonic, Polaris, podcast, Omen, nvidia, market share, Lightning, hp, gtx 1060 3gb, gpu, brix, Audeze, asus, architecture, amd

PC Perspective Podcast #414 - 08/25/2016

Join us this week as we discuss the newly released architecture details of AMD Zen, Audeze headphones, AMD market share gains and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts:  Ryan Shrout, Allyn Malventano, Josh Walrath and Jeremy Hellstrom

Program length: 1:37:15
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
  4. Closing/outro