
AMD Launching Ryzen 5 Six Core Processors Soon (Q2 2017)

Subject: Processors | February 24, 2017 - 07:17 AM |
Tagged: Zen, six core, ryzen 5, ryzen, hexacore, gaming, amd

While AMD's Ryzen lineup and pricing have leaked out, only the top three Ryzen 7 processors are available for pre-order (with availability on March 2nd). Starting at $329 for the eight core sixteen thread Ryzen 7 1700, these processors are aimed squarely at enthusiasts craving top-end performance. It seems that enthusiasts looking for cheaper and better price/performance options for budget gaming and work machines will have to wait a bit for Ryzen 5 and Ryzen 3, which will reportedly launch in the second quarter and second half of 2017 respectively. Two six core Ryzen 5 processors will launch somewhere between April and June, with the Ryzen 3 quad cores (along with mobile and "Raven Ridge" APU parts) following in the summer to end-of-year timeframe, hopefully hitting the back-to-school and holiday shopping windows respectively.

AMD Ryzen Die Shot_six core.jpg

Image via reddit (user noiserr). Guru3d has another die shot. Six cores will be created by disabling one core from each CCX.

Thanks to leaks, the two six core Ryzen 5 CPUs are the Ryzen 5 1600X at $259 and Ryzen 5 1500 at $229. The Ryzen 5 1600X is a 95W TDP CPU with six cores and twelve threads at 3.6 GHz base to 4.0 GHz boost with 16MB of L3 cache. AMD is pitting this chip against the Intel Core i5 7600K which is a $240 quad core Kaby Lake part sans Hyper-Threading. Meanwhile, the Ryzen 5 1500 is a 65W processor clocked at 3.2 GHz base and 3.5 GHz boost with 16 MB of L3 cache.

Note that the Ryzen 5 1600X features AMD's XFR (extreme frequency) technology which the Ryzen 5 1500 lacks. Both processors are unlocked and can be overclocked, however. 

Interestingly, Antony Leather over at Forbes managed to acquire some information on how AMD is making these six core parts. According to his source, AMD is disabling one core (and its accompanying L2 cache) from each four core Core Complex (CCX). Doing it this way (rather than taking two cores from one CCX) should keep things balanced. It also allows AMD to keep all of the processor's 16MB of L3 cache enabled, and each of the remaining three cores of each complex will be able to access the L3 cache as normal. Previous rumors had suggested that the CCXes were "indivisible" and six cores were not possible, but it appears that AMD is able to safely disable at least one core of a complex without compromising the whole thing. I doubt we will be seeing any odd number core count CPUs from AMD though (like their old try at selling tri-core parts that later were potentially able to be unlocked). I am glad that AMD was able to create six core parts while leaving the entire L3 cache intact.

What is still not clear is whether these six core Ryzen 5 parts are made by physically disabling the core from the complex or if the cores are simply disabled/locked out in the microcode or BIOS/UEFI. It would be awesome if, in the future when yields are to the point where binning is more for product segmentation than because of actual defects, those six core processors could be unlocked!

The top end Ryzen 7 processors are looking to be great performers and a huge leap over Excavator while at least competing with Intel's latest at multi-threaded performance (I will wait for independent benchmarks on single-threaded performance, where even AMD's own numbers show the scores are close, although these benchmark runs look promising). These parts are relatively expensive though, and the cheaper Ryzen 5 and Ryzen 3 (and Raven Ridge APUs) are where AMD will see the most potential sales due to a much bigger market. I am looking forward to seeing more information on the lower end chips and how they will stack up against Intel and its attempts to shift into high gear with moves like enabling Hyper-Threading on lower end Kaby Lake Pentiums and possibly on new Core i5s (that's still merely a rumor though). Intel certainly seems to be taking notice of Ryzen, and the reignited competition in the desktop processor space is very promising for consumers!

Are you holding out for a six core or quad core Ryzen CPU or are you considering a jump to the high-end Ryzen 7s?

Source: TechPowerUp
Subject: Editorial
Manufacturer: AMD

Zen vs. 40 Years of CPU Development

Zen is nearly upon us.  AMD is releasing its next generation CPU architecture to the world this week and we saw CPU demonstrations and upcoming AM4 motherboards at CES in early January.  We have been shown tantalizing glimpses of the performance and capabilities of the “Ryzen” products that will presumably fill the desktop markets from $150 to $499.  I have yet to be briefed on the product stack that AMD will be offering, but we know enough to start to think how positioning and placement will be addressed by these new products.

zen_01.jpg

To get a better understanding of how Ryzen will stack up, we should probably take a look back at what AMD has accomplished in the past and how Intel has responded to some of the stronger products.  AMD has been in business for 47 years now and has been a major player in semiconductors for most of that time.  It has really only been since the 90s, when AMD started to battle Intel head to head, that people have become passionate about the company and its products.

The industry is a complex and ever-shifting one.  AMD and Intel have been two stalwarts over the years.  Even though AMD has had more than a few challenging years over the past decade, it still moves forward and expects to compete at the highest level with its much larger and better funded competitor.  2017 could very well be a breakout year for the company with a return to solid profitability in both CPU and GPU markets.  I am not the only one who thinks this considering that AMD shares that traded around the $2 mark ten months ago are now sitting around $14.

 

AMD Through 1996

AMD became a force in the CPU industry due to IBM’s requirement to have a second source for its PC business.  Intel originally entered into a cross licensing agreement with AMD to allow it to produce x86 chips based on Intel designs.  AMD eventually started to produce their own versions of these parts and became a favorite in the PC clone market.  Eventually Intel tightened down on this agreement and then cancelled it, but through near endless litigation AMD ended up with an x86 license deal with Intel.

AMD produced their own Am286 chip that was the first real break from the second sourcing agreement with Intel.  Intel balked at sharing their 386 design with AMD and eventually forced the company to develop its own clean room version.  The Am386 was released in the early 90s, well after Intel had been producing those chips for years. AMD then developed their own version of the Am486 which then morphed into the Am5x86.  The company made some good inroads with these speedy parts and typically clocked them faster than their Intel counterparts (e.g. the Am486 at 40 MHz and 80 MHz vs. the Intel 486 DX33 and DX66).  AMD priced these parts lower so users could achieve better performance per dollar using the same chipsets and motherboards.

zen_02.jpg

Intel released their first Pentium chips in 1993.  The initial version was hot and featured the infamous FDIV bug.  AMD made some inroads against these parts by introducing the faster Am486 and Am5x86 parts that would achieve clockspeeds from 133 MHz to 150 MHz at the very top end.  The 150 MHz part was very comparable in overall performance to the Pentium 75 MHz chip and we saw the introduction of the dreaded “P-rating” on processors.

There is no denying that Intel continued their dominance throughout this time by being the gold standard in x86 manufacturing and design.  AMD slowly chipped away at its larger rival and continued to profit off of the lucrative x86 market.  William Sanders III set the bar high for where he wanted the company to go, and he started the company on a much more aggressive path than many expected it to take.

Click here to read the rest of the AMD processor editorial!

AMD Supports CrossFire On B350 and X370 Chipsets, However SLI Limited to X370

Subject: Motherboards | February 26, 2017 - 06:29 AM |
Tagged: x370, sli, ryzen, PCI-E 3.0, gaming, crossfire, b350, amd

Computerbase.de recently published an update (translated) to an article outlining the differences between AMD’s AM4 motherboard chipsets. As it stands, the X370 and B350 chipsets are set to be the most popular chipsets for desktop PCs (with X300 catering to the small form factor crowd), especially among enthusiasts. One key differentiator between the two chipsets was initially supposed to be multi-GPU support on X370. Now that motherboards have been revealed and are up for pre-order, it turns out that the multi-GPU lines have been blurred a bit. As it stands, both B350 and X370 will support AMD’s CrossFire multi-GPU technology, while X370 alone will also support NVIDIA’s SLI technology.

The AM4 motherboards equipped with the B350 and X370 chipsets that feature two PCI-E x16 expansion slots will run each slot at x8 in a dual GPU setup. (In a single GPU setup, the top slot can run at full x16 speeds.) Which is to say that the slots behave the same across both chipsets. Where the chipsets differ is in support for specific GPU technologies, with NVIDIA’s SLI locked to X370. TechPowerUp speculates that the decision to lock SLI to the top-end chipset is due, at least in part, to licensing costs. This is not a bad thing, as B350 was originally not going to support any dual x16 slot multi-GPU configurations; now motherboard manufacturers are being allowed to enable it by including a second slot, and AMD will reportedly permit CrossFire usage (which costs AMD nothing in licensing). Meanwhile the more expensive X370 chipset will support SLI for those serious gamers that demand and can afford it. Had B350 boards supported SLI and carried the SLI branding, they likely would have been ever so slightly more expensive than they are now. Of course, DirectX 12's multi-adapter will work on either chipset so long as the game supports it.

                      | X370   | B350   | A320   | X300 / B300 / A300 | Ryzen CPU         | Bristol Ridge APU
PCI-E 3.0             | 0      | 0      | 0      | 4                  | 20 (18 w/ 2 SATA) | 10
PCI-E 2.0             | 8      | 6      | 4      | 0                  | 0                 | 0
USB 3.1 Gen 2         | 2      | 2      | 1      | 1                  | 0                 | 0
USB 3.1 Gen 1         | 6      | 2      | 2      | 2                  | 4                 | 4
USB 2.0               | 6      | 6      | 6      | 6                  | 0                 | 0
SATA 6 Gbps           | 4      | 2      | 2      | 2                  | 2                 | 2
SATA RAID             | 0/1/10 | 0/1/10 | 0/1/10 | 0/1                | -                 | -
Overclocking Capable? | Yes    | Yes    | No     | Yes (X300 only)    | -                 | -
SLI                   | Yes    | No     | No     | No                 | -                 | -
CrossFire             | Yes    | Yes    | No     | No                 | -                 | -

Multi-GPU is not the only differentiator though. Moving up from B350 to X370 will get you 6 USB 3.1 Gen 1 (USB 3.0) ports versus 2 on B350/A320/X300, two more PCI-E 2.0 lanes (8 versus 6), and two more SATA ports (6 total usable; 4 versus 2 coming from the chipset).
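To put that paragraph in concrete terms, here is a quick sketch that totals the usable ports from the table above (chipset figures plus the two SATA ports a Ryzen CPU provides on its own; a rough illustration rather than an exhaustive comparison):

```python
# Port counts taken from the AM4 chipset table above.
chipsets = {
    "X370": {"sata_from_chipset": 4, "usb31_gen1": 6, "pcie2_lanes": 8},
    "B350": {"sata_from_chipset": 2, "usb31_gen1": 2, "pcie2_lanes": 6},
}
SATA_FROM_CPU = 2  # a Ryzen CPU contributes two SATA 6 Gbps ports itself

for name, c in chipsets.items():
    total_sata = c["sata_from_chipset"] + SATA_FROM_CPU
    print(f"{name}: {total_sata} usable SATA ports, "
          f"{c['usb31_gen1']} USB 3.1 Gen 1 ports, {c['pcie2_lanes']} PCI-E 2.0 lanes")
# X370: 6 usable SATA ports, 6 USB 3.1 Gen 1 ports, 8 PCI-E 2.0 lanes
# B350: 4 usable SATA ports, 2 USB 3.1 Gen 1 ports, 6 PCI-E 2.0 lanes
```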

Note that X370, B350, and X300 all support CPU overclocking. Hopefully this helps you when trying to decide which AM4 motherboard to pair with your Ryzen CPU once the independent benchmarks are out. In short, if you must have SLI you are stuck ponying up for X370, but if you plan to only ever run a single GPU or tend to stick with AMD GPUs and CrossFire, B350 gets you most of the way to an X370 for a lot less money! You do not even have to give up any USB 3.1 Gen 2 ports, though you do limit your SATA drive options (it’s all about M.2 these days anyway heh).

For those curious, looking around on Newegg I notice that most of the B350 motherboards have that second PCI-E 3.0 x16 slot and CrossFire support listed in their specifications and seem to average around $99.  Meanwhile X370 starts at $140 and rockets up from there (up to $299!) depending on how much bling you are looking for!

Are you going for a motherboard with the B350 or X370 chipset? Will you be rocking multiple graphics cards?


NVIDIA Announces GeForce GTX 1080 Ti 11GB Graphics Card, $699, Available Next Week

Subject: Graphics Cards | March 1, 2017 - 03:59 AM |
Tagged: pascal, nvidia, gtx 1080 ti, gp102, geforce

Tonight at a GDC party hosted by CEO Jen-Hsun Huang, NVIDIA announced the GeForce GTX 1080 Ti graphics card, coming next week for $699. Let’s dive right into the specifications!

card1.jpg

                 | GTX 1080 Ti | Titan X (Pascal) | GTX 1080    | GTX 980 Ti  | TITAN X     | GTX 980     | R9 Fury X      | R9 Fury        | R9 Nano
GPU              | GP102       | GP102            | GP104       | GM200       | GM200       | GM204       | Fiji XT        | Fiji Pro       | Fiji XT
GPU Cores        | 3584        | 3584             | 2560        | 2816        | 3072        | 2048        | 4096           | 3584           | 4096
Base Clock       | 1480 MHz    | 1417 MHz         | 1607 MHz    | 1000 MHz    | 1000 MHz    | 1126 MHz    | 1050 MHz       | 1000 MHz       | up to 1000 MHz
Boost Clock      | 1600 MHz    | 1480 MHz         | 1733 MHz    | 1076 MHz    | 1089 MHz    | 1216 MHz    | -              | -              | -
Texture Units    | 224         | 224              | 160         | 176         | 192         | 128         | 256            | 224            | 256
ROP Units        | 88          | 96               | 64          | 96          | 96          | 64          | 64             | 64             | 64
Memory           | 11GB        | 12GB             | 8GB         | 6GB         | 12GB        | 4GB         | 4GB            | 4GB            | 4GB
Memory Clock     | 11000 MHz   | 10000 MHz        | 10000 MHz   | 7000 MHz    | 7000 MHz    | 7000 MHz    | 500 MHz        | 500 MHz        | 500 MHz
Memory Interface | 352-bit     | 384-bit G5X      | 256-bit G5X | 384-bit     | 384-bit     | 256-bit     | 4096-bit (HBM) | 4096-bit (HBM) | 4096-bit (HBM)
Memory Bandwidth | 484 GB/s    | 480 GB/s         | 320 GB/s    | 336 GB/s    | 336 GB/s    | 224 GB/s    | 512 GB/s       | 512 GB/s       | 512 GB/s
TDP              | 220 watts   | 250 watts        | 180 watts   | 250 watts   | 250 watts   | 165 watts   | 275 watts      | 275 watts      | 175 watts
Peak Compute     | 10.6 TFLOPS | 10.1 TFLOPS      | 8.2 TFLOPS  | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS    | 7.20 TFLOPS    | 8.19 TFLOPS
Transistor Count | 12.0B       | 12.0B            | 7.2B        | 8.0B        | 8.0B        | 5.2B        | 8.9B           | 8.9B           | 8.9B
Process Tech     | 16nm        | 16nm             | 16nm        | 28nm        | 28nm        | 28nm        | 28nm           | 28nm           | 28nm
MSRP (current)   | $699        | $1,200           | $599        | $649        | $999        | $499        | $649           | $549           | $499

The GTX 1080 Ti looks a whole lot like the TITAN X launched in August of last year. Based on the 12B transistor GP102 chip, the new GTX 1080 Ti will have 3,584 CUDA cores with a 1.60 GHz Boost clock. That gives it the same processor count as the Titan X but at a slightly higher clock speed, which should make the new GTX 1080 Ti faster by at least a few percentage points, with roughly a 4.7% edge in base clock compute capability. It has 28 SMs, 28 geometry units, and 224 texture units.
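For reference, the peak compute figures in the table fall out of the usual rule of thumb of cores x clock x 2 FLOPs per cycle; a minimal sketch of that arithmetic (the formula is the standard convention, not anything NVIDIA published alongside the card):

```python
def peak_fp32_tflops(cuda_cores, clock_ghz):
    # Conventional peak estimate: each core retires one FMA (2 FLOPs) per cycle.
    return cuda_cores * clock_ghz * 2 / 1000.0

gtx_1080_ti = peak_fp32_tflops(3584, 1.480)  # ~10.6 TFLOPS at base clock
titan_x_p   = peak_fp32_tflops(3584, 1.417)  # ~10.2 TFLOPS (the table rounds to 10.1)

print(round(gtx_1080_ti, 1), round(titan_x_p, 1))
# With identical core counts the gap is purely clock speed; the exact percentage
# depends on which clocks are compared.
print(f"base clock advantage: {(1.480 / 1.417 - 1) * 100:.1f}%")  # ~4.4%
```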

archoverview.jpg

Interestingly, the memory system on the GTX 1080 Ti gets adjusted – NVIDIA has disabled a single 32-bit memory controller to give the card a 352-bit wide bus and an odd-sounding 11GB memory capacity. The ROP count also drops to 88 units. Speaking of 11, the memory clock on the G5X implementation on the GTX 1080 Ti will now run at 11 Gbps, a boost available to NVIDIA thanks to a chip revision from Micron and improvements to equalization and reverse signal distortion.
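As a sanity check on the 484 GB/s figure in the table above, the bandwidth follows directly from the bus width and per-pin data rate (a minimal worked example using the numbers already quoted):

```python
bus_width_bits = 352        # 11 of the 12 32-bit memory controllers enabled
data_rate_gbps = 11         # G5X per-pin data rate on the GTX 1080 Ti

bandwidth_gb_per_s = bus_width_bits / 8 * data_rate_gbps  # bits -> bytes, then x rate
print(bandwidth_gb_per_s)   # 484.0 GB/s, matching the spec table
```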

memoryeye.jpg

The TDP of the new part is 220 watts, falling between the Titan X and the GTX 1080. That’s an interesting move considering that the GP102 was running at 250 watts with the Titan product. The cooler has been improved compared to the GTX 1080, offering quieter fan speeds and lower temperatures when operating at the same power envelope.

coolerperf.jpg

Performance estimates from NVIDIA put the GTX 1080 Ti about 35% faster than the GTX 1080, the largest “kicker performance increase” that we have seen from a flagship Ti launch.

perf.jpg

Pricing is going to be set at $699 so don't expect to find this in any budget builds. But for the top performing GeForce card on the market, it's what we expect. It should be on virtual shelves starting next week.

(Side note, with the GTX 1080 getting a $100 price drop tonight, I think we'll find this new lineup very compelling to enthusiasts.)

card2.jpg

card3.jpg

NVIDIA did finally detail its tiled caching rendering technique. We'll be diving more into that in a separate article with a little more time for research.

One more thing…

In another interesting move, NVIDIA is going to be offering “overclocked” versions of the GTX 1080 and GTX 1060 with +1 Gbps memory speeds. Partners will be offering them with some undisclosed price premium.

1080oc.jpg

I don’t know how much performance this will give us but it’s clear that NVIDIA is preparing its lineup for the upcoming AMD Vega release.

GeForce_GTX_1080ti_3qtr_Front_Left_1488313915.jpg

We’ll have more news from NVIDIA and GDC as it comes!

Source: NVIDIA

Gigabyte is Ryzen up to the challenge of their rivals

Subject: Motherboards | February 24, 2017 - 10:30 PM |
Tagged: aorus, gigabyte, ryzen, b350, x370

Gigabyte have led with five motherboards: two X370 boards under the Aorus brand and three B350 boards with Gigabyte branding.  They all share traits such as RGB Fusion, with 16.8 million colours to choose from and an application that lets you customize the light show to your own specifications.  It supports control from your phone if you are so addicted to the glow you need to play with your system from across the room. 

lightshow.PNG

Smartfan 5 indicates the presence of five headers for fans or pumps that will work with PWM and standard voltage fans, which can draw up to 12V at 2A.  The boards also have six temperature sensors to give you feedback on the effectiveness of your cooling, which you can then adjust with the included application.  Most models will offer Thunderbolt 3, Intel GbE NICs and an ASMedia 2142 USB 3.1 controller, which they claim can provide up to 16Gb/s.  All will have high end audio solutions, often featuring a headphone pre-amp and high quality capacitors.  There are a lot more features specific to each board, so make sure to click through to check out your favourites.

gigabyte.PNG

The Aorus boards, the GA-AX370-Gaming K7 and GA-AX370-Gaming 5, are very similar, but if you plan on playing with your BCLK it is the K7 that includes Gigabyte's Turbo B-Clock.  The Gigabyte lineup includes the GA-AB350M, GA-AB350-Gaming and GA-AB350-Gaming 3.  The GA-AB350M is the only mATX Ryzen board of these five, for those looking to build a smaller system.  For audiophiles, the full-size GAMING 3 includes an ALC1220 codec as opposed to the ALC887 used on the other two models. 

You can expect to see reviews of these boards, offering far more details on performance and features, after they are released on March 2nd.  Full PR under the break.

Source: Gigabyte

AMD Unveils Next-Generation GPU Branding, Details - Radeon RX Vega

Subject: General Tech | February 28, 2017 - 10:46 PM |
Tagged: amd, Vega, radeon rx vega, radeon, gdc 2017, capsaicin, rtg, HBCC, FP16

Today at the AMD Capsaicin & Cream event at GDC 2017, Raja Koduri, Senior VP of the Radeon Technologies Group, officially revealed the branding that AMD will use for their next generation GPU products.

While we usually see final product branding deviate from their architectural code names (e.g. Polaris becoming the Radeon RX 460, 470 and 480), AMD this time has decided to embrace the code name for the retail naming scheme for upcoming graphics cards featuring the new GPU – Radeon RX Vega.

RadeonRXVega.jpg

However, we didn't just get a name for Vega-based GPUs. Raja also went into some further detail and showed some examples of technologies found in Vega.

First off is the High-Bandwidth Cache Controller found in Vega products. We covered this technology during our Vega architecture preview last month at CES, but today we finally saw a demo of this technology in action.

Vega-HBCCslide.jpg

Essentially, the High-Bandwidth Cache Controller (HBCC) allows Vega GPUs to address all available memory in the system (including things like NVMe SSDs, system DRAM and network storage.) AMD claims that by using the already fast memory you have available on your PC to augment onboard GPU memory (such as HBM2) they will be able to offer less expensive graphics cards that ultimately offer access to much more memory than current graphics cards.
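AMD has not published how the HBCC decides what stays resident in local memory, so the snippet below is only a conceptual sketch of the general idea, a small pool of fast local memory acting as a cache in front of a much larger address space, and not AMD's implementation:

```python
from collections import OrderedDict

class TinyPageCache:
    """Toy model: 'resident_pages' stands in for local VRAM capacity."""

    def __init__(self, resident_pages):
        self.capacity = resident_pages
        self.resident = OrderedDict()   # page id -> data, kept in LRU order

    def access(self, page, fetch_remote):
        if page in self.resident:                 # hit in local memory
            self.resident.move_to_end(page)
            return self.resident[page]
        data = fetch_remote(page)                 # miss: pull from the larger pool
        self.resident[page] = data
        if len(self.resident) > self.capacity:
            self.resident.popitem(last=False)     # evict the least recently used page
        return data

# Usage: a "2 page" card backed by a much larger (pretend) remote store.
cache = TinyPageCache(resident_pages=2)
for p in ["a", "b", "a", "c", "b"]:
    cache.access(p, fetch_remote=lambda page: f"data for {page}")
```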

Vega-HBCC.jpg

The demo that they showed on stage featured Deus Ex: Mankind Divided running on a system with a Ryzen CPU and a Vega GPU limited to 2GB of VRAM. By turning HBCC on, they were able to show a 50% increase in average FPS and a 100% increase in minimum FPS.

While we probably won't actually see a Vega product with such a small VRAM implementation, it was impressive to see how HBCC was able to dramatically improve the playability of a 2GB GPU on a game that has no special optimizations to take advantage of the High-Bandwidth Cache.

The other impressive demo running on Vega at the Capsaicin & Cream event centered around what AMD is calling Rapid Packed Math.

Rapid Packed Math is an implementation of something we have been hearing and theorizing a lot about lately: the use of FP16 shaders for some graphics effects in games. By using half-precision FP16 shaders instead of the current standard FP32 shaders, developers are able to get more performance out of the same GPU cores. Specifically, Rapid Packed Math allows developers to run half-precision FP16 operations at exactly 2X the rate of traditional single-precision FP32 operations.
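A small illustration of both halves of that claim: two FP16 values really do pack into the storage of one FP32 value, and the peak-rate math is a straight doubling (the TFLOPS figure below is purely illustrative, not an AMD spec):

```python
import numpy as np

# Two half-precision values occupy the same 32 bits as one single-precision
# value, which is why a packed-math ALU can issue two FP16 operations in the
# slot normally used for one FP32 operation.
pair_fp16 = np.array([1.5, -2.25], dtype=np.float16)  # 2 x 16 bits
one_fp32 = np.array([1.5], dtype=np.float32)          # 1 x 32 bits
assert pair_fp16.nbytes == one_fp32.nbytes == 4

# Peak-rate doubling for FP16-friendly workloads (illustrative numbers only).
fp32_tflops = 10.0
fp16_tflops = 2 * fp32_tflops
print(fp16_tflops)  # 20.0
```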

TressFX-FP16.jpg

While the lower precision of FP16 shaders won't be appropriate for all GPU effects, AMD was showing a comparison of their TressFX hair rendering technology running on both standard and half-precision shaders. As you might expect, AMD was able to render twice the amount of hair strands per second, making for a much more fluid experience.

Vega-shirt.jpg

Just like we saw with the lead up to the Polaris GPU launch, AMD seems to be releasing a steady stream of information on Vega. Now that we have the official branding for Vega, we eagerly await getting our hands on these new high-end GPUs from AMD.

 

If you can’t open it, you don’t own it - Macchina opens up your car's hardware

Subject: General Tech | February 27, 2017 - 05:56 PM |
Tagged: M2, Arduino Due, macchina, Kickstarter, open source, DIY

There is a Kickstarter out there for all you car enthusiasts and owners: the Arduino Due based Macchina M2, which allows you to diagnose and change how your car functions.  They originally developed the device during a personal project to modify a Ford Contour into an electric car, which required serious reprogramming of sensors and other hardware in the car.  They realized that their prototype could be enhanced to allow users to connect into the hardware of their own cars to monitor performance, diagnose issues or even modify the performance.  Slashdot has the links and their trademarked reasonable discourse for those interested; you can get the M2 interface for $45 if you already have the hardware, or $79 or more for the hardware and accessories.

5ca788192bfdfed89131bea2e7a39a8b_original.png

"Challenging "the closed, unpublished nature of modern-day car computers," their M2 device ships with protocols and libraries "to work with any car that isn't older than Google." With catchy slogans like "root your ride" and "the future is open," they're hoping to build a car-hacking developer community, and they're already touting the involvement of Craig Smith, the author of the Car Hacker's Handbook from No Starch Press."

Here is some more Tech News from around the web:

Tech Talk

Source: Slashdot

Farm out your hard drive for profit?

Subject: General Tech | February 24, 2017 - 08:04 PM |
Tagged: storj, farming, bitcoin

Startup company Storj has a new twist on an old service: they are offering secure, distributed storage, but the storage is located on hard drives which consumers rent out to them.  You can set up an account and get 1.5 cents per gigabyte you give to them.  You certainly are not going to get rich running out and buying some SSDs to use, but if you have a few old HDDs kicking around perhaps you would like to make a few crypto-coins on the side.  They currently have 8,200 farmers and more than 15,000 users, so there is certainly some interest.  On the other hand, residential internet stability and the reliability of consumer hard drives could lead to unexpected interruptions to your access.  Drop by The Register for links to sign up for the service or sell some space if you are interested.
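The payout math at that rate is simple enough to sketch; the drive size below is hypothetical, and Storj's actual payment schedule and rental terms are not covered here:

```python
rate_per_gb_usd = 0.015      # the $0.015/GB rate quoted by The Register
spare_capacity_gb = 2000     # hypothetical old 2 TB drive offered to the network

payout_usd = rate_per_gb_usd * spare_capacity_gb
print(f"${payout_usd:.2f}")  # $30.00 if every gigabyte were actually rented out
```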

storj_hdd_rental.jpg

"The network consists of the internet and a shared community of “farmers”, users who rent out their spare desktop hard drive space and bandwidth. Payment, at $0.015/GB, is via a cryptocurrency: namely, Bitcoin."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Register

GDC: NVIDIA Announces GTX 1080 Price Drop to $499

Subject: Graphics Cards | March 1, 2017 - 03:55 AM |
Tagged: pascal, nvidia, GTX 1080, GDC

Update Feb 28 @ 10:03pm It's official, NVIDIA launches $699 GTX 1080 Ti.

NVIDIA is hosting a "Gaming Celebration" live event during GDC 2017 to talk PC gaming and possibly launch new hardware (if rumors are true!). During the event, NVIDIA CEO Jen-Hsun Huang made a major announcement regarding its top-end GTX 1080 graphics card with a price drop to $499 effective immediately.

NVIDIA 499 GTX 1080.png

The NVIDIA GTX 1080 is a Pascal based graphics card with 2560 CUDA cores paired with 8GB of GDDR5X memory. Graphics cards based on this GP104 GPU are currently selling for around $580 to $700 (most are around $650+/-) with the "Founders Edition" having an MSRP of $699. The $499 price teased at the live stream represents a significant price drop compared to what the graphics cards are going for now. NVIDIA did not specify if the new $499 MSRP was the new Founders Edition price or an average price that includes partner cards as well, but even if it only applied to the reference cards, the partners would have to adjust their prices downwards accordingly to compete.

I suspect that NVIDIA is making such a bold move to make room in their lineup for a new product (the long-rumored 1080 Ti perhaps?) as well as a pre-emptive strike against AMD and their Radeon RX Vega products. This move may also be good news for GTX 1070 pricing as they may also see price drops to make room for cheaper GTX 1080 partner cards that come in below the $499 price point.

If you have been considering buying a new graphics card, NVIDIA has sweetened the pot a bit especially if you had already been eyeing a GTX 1080. (Note that while the price drop is said to be effective immediately, at the time of writing Amazon was still showing "normal"/typical prices for the cards. Enthusiasts might have to wait a few hours or days for the retailers to catch up and update their sites.)

This makes me a bit more excited to see what AMD will have to offer with Vega as well as the likelihood of a GTX 1080 Ti launch happening sooner rather than later!

Source: NVIDIA

Podcast #438 - Vulkan, Logitech G213, Ryzen Preorders, and more!

Subject: Editorial | February 23, 2017 - 05:16 PM |
Tagged: podcast, vulkan, ryzen, qualcomm, Qt, mesh, g213, eero, corsair, bulldog

PC Perspective Podcast #438 - 02/23/17

Join us for Vulkan one year later, Logitech G213 Keyboard, eero home mesh networking, Ryzen Pre Orders, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Allyn Malventano, Ken Addison, Josh Walrath, Jeremy Hellstrom

Program length: 0:58:01

Podcast topics of discussion:
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
    1. Allyn: SS64.com - Nifty programmer's reference for scripting, web, db
    2. Ken: Dell refurbished XPS 13
  4. Closing/outro
 


Overclockers Push Ryzen 7 1800X to 5.2 GHz On LN2, Break Cinebench Record

Subject: Processors | March 1, 2017 - 02:06 AM |
Tagged: Zen, Ryzen 1800X, ryzen, overclocking, LN2, Cinebench, amd

During AMD’s Ryzen launch event a team of professional overclockers took the stage to see just how far they could push the top Zen-based processor. Using a bit of LN2 (liquid nitrogen) and a lot of voltage, the overclocking team was able to hit an impressive 5.20 GHz with all eight cores (16 threads) enabled!

Ryzen Cinebench Benchmark Record.png

In addition to the exotic LN2 cooling, the Ryzen 7 1800X needed 1.875 volts to hit 5.20 GHz. That 5.20 GHz was achieved by setting the base clock at 137.78 MHz and the multiplier at 37.75. Using these settings, the chip was even stable enough to benchmark with a score of 2,363 on Cinebench R15’s multi-threaded test.

According to information from AMD, a stock Ryzen 7 1800X comes clocked at 3.6 GHz base and up to 4 GHz boost (XFR can go higher depending on HSF) and is able to score 1,619 in Cinebench. The 30% overclock to 5.20 GHz got the overclockers an approximately 45% higher Cinebench score.
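The quoted clocks and scaling check out with a little arithmetic (all figures are the ones reported above):

```python
base_clock_mhz = 137.78
multiplier = 37.75
core_clock_ghz = base_clock_mhz * multiplier / 1000
print(f"{core_clock_ghz:.2f} GHz")                      # ~5.20 GHz

stock_boost_ghz = 4.0
print(f"clock gain over stock boost: {core_clock_ghz / stock_boost_ghz - 1:.0%}")  # ~30%

print(f"Cinebench R15 gain: {2363 / 1619 - 1:.0%}")     # ~46%, in line with the ~45% above
```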

Further, later in the overclocking event, they managed to break a Cinebench world record of 2,445 points by achieving a score of 2,449 (it is not clear what clockspeed this was at). Not bad for a brand-new processor!

AMD Ryzen 1800X Overclocked On LN2 to 5GHz.jpg

The overclocking results are certainly impressive, and suggest that Ryzen may be a decent overclocker so long as you have the cooling setup to get it there (the amount of voltage needed is a bit worrying though heh). Interestingly, HWBot shows a Core i7 6900K (also 8C/16T) hitting 5.22 GHz and scoring 2,146 in Cinebench R15. That Ryzen can hit similar numbers with all cores and threads turned on is promising.

I am looking forward to seeing what people are able to hit on air and water cooling and whether XFR will work as intended and get most of the way to a manual overclock without the effort of manually overclocking. I am also curious how the power phases and overclocking performance will stack up on motherboards using the B350 versus X370 chipsets. With the eight core chips able to hit 5.2 GHz, I expect the upcoming six core Ryzen 5 and four core Ryzen 3 processors to clock even higher, which would certainly help gaming performance for budget builds!

Austin Evans was able to get video of the overclocking event which you can watch here (Vimeo).


Source: Hexus
Manufacturer: NVIDIA

VR Performance Evaluation

Even though virtual reality hasn’t taken off with the momentum that many in the industry had expected on the heels of the HTC Vive and Oculus Rift launches last year, it remains one of the fastest growing aspects of PC hardware. More importantly for many, VR is also one of the key inflection points for performance moving forward; it requires more hardware, scalability, and innovation than any other sub-category including 4K gaming.  As such, NVIDIA, AMD, and even Intel continue to push the performance benefits of their own hardware and technology.

Measuring and validating those claims has proven to be a difficult task. Tools that we used in the era of standard PC gaming just don’t apply. Fraps is a well-known and well-understood tool for measuring frame rates and frame times utilized by countless reviewers and enthusiasts. But Fraps lacked the ability to tell the complete story of gaming performance and experience. NVIDIA introduced FCAT and we introduced Frame Rating back in 2013 to expand the capabilities that reviewers and consumers had access to. Using a more sophisticated technique that includes direct capture of the graphics card output in uncompressed form, a software-based overlay applied to each frame being rendered, and post-process analysis of that data, we were able to communicate the smoothness of a gaming experience and better articulate it to help gamers make purchasing decisions.

pipe1.jpg

VR pipeline when everything is working well.

For VR though, those same tools just don’t cut it. Fraps is a non-starter as it measures frame rendering from the GPU point of view and completely misses the interaction between the graphics system and the VR runtime environment (OpenVR for Steam/Vive and OVR for Oculus). Because the rendering pipeline is drastically changed in the current VR integrations, what Fraps measures is completely different than the experience the user actually gets in the headset. Previous FCAT and Frame Rating methods were still viable but the tools and capture technology needed to be updated. The hardware capture products we used since 2013 were limited in their maximum bandwidth and the overlay software did not have the ability to “latch in” to VR-based games. Not only that but measuring frame drops, time warps, space warps and reprojections would be a significant hurdle without further development.  

pipe2.jpg

VR pipeline with a frame miss.

NVIDIA decided to undertake the task of rebuilding FCAT to work with VR. And while obviously the company is hoping that it will prove its claims of performance benefits for VR gaming, the investment of time and money in a project that is to be open sourced and freely available to the media and the public should not be overlooked.

vlcsnap-2017-02-27-11h31m17s057.png

NVIDIA FCAT VR is composed of two different applications. The FCAT VR Capture tool runs on the PC being evaluated and has a similar appearance to other performance and timing capture utilities. It uses data from Oculus event tracing (part of Windows ETW) and SteamVR’s performance API, along with NVIDIA driver stats when used on NVIDIA hardware, to generate performance data. It will, and does, work perfectly well on any GPU vendor’s hardware though, thanks to its access to the VR vendors' own timing results.
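FCAT VR itself does the heavy lifting, but the flavor of post-processing involved is easy to picture: compare each frame delivery against the headset's fixed refresh interval (about 11.1 ms at 90 Hz) and count the intervals where no new frame arrived. A minimal, hypothetical sketch follows, not FCAT VR's actual code or data format:

```python
# Hypothetical frame-delivery timestamps (milliseconds) from a capture log.
frame_times_ms = [0.0, 11.1, 22.2, 33.3, 55.5, 66.6, 88.8, 99.9]

REFRESH_MS = 1000.0 / 90.0   # 90 Hz headset refresh interval (~11.1 ms)

missed = 0
for prev, cur in zip(frame_times_ms, frame_times_ms[1:]):
    gap = cur - prev
    # Each extra refresh interval without a fresh frame means the runtime had
    # to reproject (or the user simply saw a stutter).
    missed += max(0, round(gap / REFRESH_MS) - 1)

print(f"missed refresh intervals: {missed}")  # 2 in this made-up trace
```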

fcatvrcapture.jpg

Continue reading our preview of the new FCAT VR tool!

Flipped your lid and want to reattach it?

Subject: Processors | February 23, 2017 - 04:07 PM |
Tagged: Intel, Skylake, kaby lake, delidding, relidding

[H]ard|OCP have been spending a lot of time removing the integrated heatspreader on recent Intel chips to see what effect it has on temperatures under load.  Along the way we picked up tips on 3D printing a delidder, and thankfully there was not much death involved.  One of their findings from this testing was that it can be beneficial to reattach the lid after changing out the thermal interface material, and they have published a guide on how to do so.   You will need a variety of tools, from Permatex Red RTV to razor blades, by way of isopropyl alcohol and syringes, as well as a steady hand.  You may have many of the items on hand already and none are exceptionally expensive.

1487134654mHmb7IfVSy_1_10_l.jpg

"So we have covered a lot about taking your shiny new Intel CPUs apart lately, affectionately known as "delidding." What we have found in our journey is that "relidding" the processor might be an important part of the process as well. But what if you do not have a fancy tool that will help you put Humpty back together again?"

Here are some more Processor articles from around the web:

Processors

Source: [H]ard|OCP

ZeniMax Seeks an Injunction Against Oculus VR

Subject: General Tech | February 27, 2017 - 12:01 PM |
Tagged: zenimax, Oculus

As far as I know, it’s fairly common to seek injunctions during legal fights over intellectual property, so I’m not sure how surprising this should be. Still, after the $500 million USD judgment against Oculus, ZeniMax has indeed filed for a court order to, according to UploadVR, block the usage of Oculus PC software, Oculus Mobile software, and the plug-ins for Unity and Unreal Engine. They also demand, as usual, that Oculus delete all copies of the infringing code, along with a few other stipulations.

oculus-2016-riftkit.jpg

I should stress that this is just a filing. It would need to be accepted for it to have any weight.

The timing is quite disruptive to Oculus, too, even if by total coincidence. Epic Games is about to release their flagship, Oculus-exclusive title, Robo Recall, which was intended to be released for free to those who have Oculus Touch controllers. If the injunction succeeds, and that’s way more if than when at this point, then that could sting for whoever gets stuck with the game’s invoice, which (I assume) would be Oculus.

Personally, I’m not quite sure how far this will go. Based on my memory of the jury decision, ZeniMax is entitled to $500 million USD for prior damages, and nothing for ongoing damages. You would think that, if a jury ruled the infringement has no lasting effect, an injunction wouldn’t recover any of that non-existent value. On the other hand, I’m not a judge (or anyone else of legal relevance) so what I reason doesn’t really matter outside the confines of this website.

We’ll need to wait and see if this goes anywhere.

Source: UploadVR
Subject: Motherboards
Manufacturer: GIGABYTE

Introduction and Technical Specifications

Introduction

02-20161029174351_big.jpg

Courtesy of GIGABYTE

With the release of the Intel Z270 chipset, GIGABYTE is unveiling its AORUS line of products. The AORUS branding will be used to differentiate enthusiast and gamer friendly products from their other product lines, similar to how ASUS uses the ROG branding to differentiate their high performance product line. The Z270X-Gaming 5 is among the first to be released as part of GIGABYTE's AORUS line. The board features the black and white branding common to the AORUS product line, with the rear panel cover and chipset featuring the brand logos. The board is designed around the Intel Z270 chipset with built-in support for the latest Intel LGA1151 Kaby Lake processor line (as well as support for Skylake processors) and dual channel DDR4 memory running at 2400MHz. The Z270X-Gaming 5 can be found in retail with an MSRP of $189.99.

03-board.jpg

Courtesy of GIGABYTE

04-board-flyapart.jpg

Courtesy of GIGABYTE

GIGABYTE integrated the following features into the Z270X-Gaming 5 motherboard: three SATA-Express ports; one U.2 32Gbps port; two M.2 PCIe x4 capable ports with Intel Optane support built-in; two RJ-45 GigE ports - an Intel I219-V Gigabit NIC and a Rivet Networks Killer E2500 NIC; three PCI-Express x16 slots; three PCI-Express x1 slots; ASMedia 8-Channel audio subsystem; integrated DisplayPort and HDMI video ports; and USB 2.0, 3.0, and 3.1 Type-A and Type-C port support.

Continue reading our preview of the GIGABYTE Z270X-Gaming 5 motherboard!

SDXC SD cards come at a big premium; too bad we can't slide an M.2 SSD into our cameras

Subject: Storage | February 27, 2017 - 10:23 PM |
Tagged: sdxc, sd card, patriot, lx series

You may recall a while back Allyn put together an article detailing the new types of SD cards hitting the market which will support 4K recording in cameras.  Modders Inc just wrapped up a review of one of these cards, Patriot's 256GB LX Series SDXC card, with an included adapter for those who need it.  The price certainly implies it is new technology: $200 for 256GB of storage is enough to make anyone pause, so the question becomes why one would pay such a premium. Their benchmarks offer insight into this, with 83MB/s write and 96MB/s read in both ATTO and CrystalDiskMark proving that this is a far cry from the performance of older SD cards and worthy of that brand new ultra high definition camera you just picked up.  Let us hope the prices plummet as they did with the previous generations of cards.
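Two quick back-of-the-envelope numbers from the figures above help frame that premium (a rough sketch; real-world speeds will vary with the camera and card reader):

```python
capacity_gb = 256
price_usd = 200
write_mb_per_s = 83          # sequential write speed measured in the review

print(f"${price_usd / capacity_gb:.2f} per GB")                  # ~$0.78/GB
fill_minutes = capacity_gb * 1000 / write_mb_per_s / 60          # GB -> MB, then seconds -> minutes
print(f"~{fill_minutes:.0f} minutes to fill the card at full write speed")  # ~51 minutes
```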

Front.jpg

"Much like Mary Poppins bag of wonders, Patriot too has a method of fitting a substantial amount of goodness in a small space with the release of their 256GB LX Series SDXC class 10 memory card. Featuring an impressive 256GB of storage and boasting this as an “ultra high speed” card for QHD video production and high resolution photos."

Here are some more Storage reviews from around the web:

Storage

 

Source: Modders Inc

30 nanoseconds is way too slow, down with the latency gap!

Subject: General Tech | February 23, 2017 - 03:45 PM |
Tagged: hbll, cache, l3 cache, Last Level Cache

There is an insidious latency gap lurking in your computer between your DRAM and your CPU's L3 cache.  The size of the gap depends on your processor, as not all L3 caches are created equal, but regardless there are wasted CPU cycles which could be reclaimed.   Piecemakers Technology, the Industrial Technology Research Institute of Taiwan and Intel are on the case, with a project to design something to fit in that niche between the CPU and DRAM.  Their prototype Last Level Cache is a chip with 17ns latency, which would improve how quickly the L3 cache can be filled and passed on to the next level in the CPU.  The Register likens it to the way Intel has fit XPoint between the speed of SSDs and DRAM.  It will be interesting to see how this finds its way onto the market.
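To put those nanoseconds in CPU terms, a quick illustrative conversion (the 4 GHz clock is just an example figure, not tied to any specific processor):

```python
def stall_cycles(latency_ns, clock_ghz):
    # A cycle at f GHz lasts 1/f ns, so a latency of t ns costs roughly t * f cycles.
    return latency_ns * clock_ghz

print(stall_cycles(30, 4.0))  # ~120 cycles waiting across the DRAM-to-L3 gap at 4 GHz
print(stall_cycles(17, 4.0))  # ~68 cycles for the 17 ns prototype Last Level Cache
```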

dram_l3_cache_gap.jpg

"Jim Handy of Objective Analysis writes about this: "Furthermore, there's a much larger latency gap between the processor's internal Level 3 cache and the system DRAM than there is between any adjacent cache levels.""

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

The X370s aren't here yet so take a gander at the fancy Z270 from GIGABYTE

Subject: Motherboards | February 28, 2017 - 09:24 PM |
Tagged: intel z270, Aorus Z270X Gaming 9, gigabyte

What an interesting time it will be with Intel slinging Z270s at the same time AMD's X370 arrives on the scene; there is no possible way some people could get confused.  It will also make the next generation of board names interesting as the two companies fight for numbering rights.  GIGABYTE's Aorus Z270X Gaming 9 comes with an impressive price tag of $500, so it will be interesting to see if [H]ard|OCP finds the feature set on the board worthy of that investment.  The four x16 PCIe 3.0 slots will support four GPUs simultaneously, and there are M.2 and U.2 connectors as well, to say nothing of the onboard Sound Blaster audio.  Head on over to read through the full review.

1487923677FlKVyNgDxY_1_10_l.jpg

"GIGABYTE’s Z270X Gaming 9 is one of the most feature rich and ultra-high end offerings you’ll see for the Z270 chipset this year. We were super fond of last year’s similar offering and as a result, the Z270X Gaming 9 has very large shoes to fill. With its massive feature set and overclocking prowess, it is poised to be one of the best motherboards of the year."

Here are some more Motherboard articles from around the web:

Motherboards

Source: [H]ard|OCP

The Microsoft Store's unintentional cash back offer

Subject: General Tech | February 28, 2017 - 08:48 PM |
Tagged: microsoft, oops, Lawsuit

If you purchased anything from the Microsoft Store between November 2013 and February 24 of this year and live in the USA, you could be eligible for up to $100 in cash damages.  It seems that the credit card information printed on receipts contained more than half of your credit card number, which is in violation of a 2003 law stating that no more than five digits can be shown on receipts.  Now that the judgment against Microsoft is in, the proposed settlement would have Microsoft set aside US$1,194,696 for customers who were affected by this issue.  The settlement needs to be approved by the judge so you cannot claim your money immediately; keep an eye out for more news.  The Register have posted links to the original lawsuit as well as the judgment right here.

535990119.jpg

"On Friday, the Redmond giant agreed to give up roughly seven minutes of its quarterly revenue to a gaggle of Microsoft Store customers who claimed that their receipts displayed more of their payment card numbers than legally allowed."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Linked Multi-GPU Arrives... for Developers

The Khronos Group has released the Vulkan 1.0.42.0 specification, which includes experimental (more on that in a couple of paragraphs) support for VR enhancements, sharing resources between processes, and linking similar GPUs. This spec was released alongside a LunarG SDK and NVIDIA drivers that fully implement these extensions; both are intended for developers, not gamers.

I would expect that the most interesting feature is experimental support for linking similar GPUs together, similar to DirectX 12’s Explicit Linked Multiadapter, which Vulkan calls a “Device Group”. The idea is that the physical GPUs hidden behind this layer can do things like share resources, such as rendering a texture on one GPU and consuming it in another, without the host code being involved. I’m guessing that some studios, like maybe Oxide Games, will decide to not use this feature. While it’s not explicitly stated, I cannot see how this (or DirectX 12’s Explicit Linked mode) would be compatible in cross-vendor modes. Unless I’m mistaken, that would require AMD, NVIDIA, and/or Intel restructuring their drivers to inter-operate at this level. Still, the assumptions that could be made with grouped devices are apparently popular with enough developers for both the Khronos Group and Microsoft to bother.

microsoft-dx12-build15-linked.png

A slide from Microsoft's DirectX 12 reveal, long ago.

As for the “experimental” comment that I made in the introduction... I was expecting to see this news around SIGGRAPH, which occurs in late-July / early-August, alongside a minor version bump (to Vulkan 1.1).

I might still be right, though.

The major new features of Vulkan 1.0.42.0 are implemented as a new classification of extensions: KHX. In the past, vendors, like NVIDIA and AMD, would add new features as vendor-prefixed extensions. Games could query the graphics driver for these abilities, and enable them if available. If a feature became popular enough for multiple vendors to have their own implementation of it, a committee would consider an EXT extension. This would behave the same across all implementations (give or take) but not be officially adopted by the Khronos Group. If they did take it under their wing, it would be given a KHR extension (or added as a required feature).

The Khronos Group has added a new layer: KHX. This level of extension sits below KHR, and is not intended for production code. You might see where this is headed. The VR multiview, multi-GPU, and cross-process extensions are not supposed to be used in released video games until they leave KHX status. Unlike a vendor extension, the Khronos Group wants old KHX standards to drop out of existence at some point after they graduate to full KHR status. It’s not something that NVIDIA owns and will keep around for 20 years after its usable lifespan just so old games can behave as expected.

khronos-group-logo.png

How long will that take? No idea. I’ve already mentioned my logical but uneducated guess a few paragraphs ago, but I’m not going to repeat it; I have literally zero facts to base it on, and I don’t want our readers to think that I do. I don’t. It’s just based on what the Khronos Group typically announces at certain trade shows, and the length of time since their first announcement.

The benefit that KHX does bring us is that, whenever these features make it to public release, developers will have already been using it... internally... since around now. When it hits KHR, it’s done, and anyone can theoretically be ready for it when that time comes.