Subject: Mobile
Manufacturer: ASUS

Overview

A few months ago at Computex, NVIDIA announced their "GeForce GTX with Max-Q Design" initiative. Essentially, the heart of this program is the use of specifically binned GTX 1080, 1070 and 1060 GPUs. These GPUs have been tested and selected during the manufacturing process to ensure lower power draw at the same performance levels when compared to the GPUs used in more traditional form factors like desktop graphics cards.

slide1.png

In order to gain access to these "Max-Q" binned GPUs, notebook manufacturers have to meet specific NVIDIA guidelines on noise levels under thermal load (sub-40 dBA). To be clear, NVIDIA doesn't seem to be offering reference notebook designs to partners (as demonstrated by the variability in design across the Max-Q notebooks), but rather ideas on how they can accomplish the given goals.

slide2.png

At the show, NVIDIA and some of their partners showed off several Max-Q notebooks. We hope to take a look at all of these machines in the coming weeks, but today we're focusing on one of the first, the ASUS ROG Zephyrus.

IMG_4744.JPG

ASUS ROG Zephyrus (configuration as reviewed)
Processor: Intel Core i7-7700HQ (Kaby Lake)
Graphics: NVIDIA GeForce GTX 1080 with Max-Q Design (8GB)
Memory: 24GB DDR4 (8GB soldered + 2 x 8GB DIMM)
Screen: 15.6-in 1920x1080 120Hz G-SYNC
Storage: 512GB Samsung SM961 NVMe
Camera: HD webcam
Wireless: 802.11ac
Connections: Thunderbolt 3, HDMI 2.0, 4 x USB 3.0, audio combo jack
Power: 50 Wh battery, 230W AC adapter
Dimensions: 378.9mm x 261.9mm x 17.01-17.78mm (14.92" x 10.31" x 0.67"-0.70")
Weight: 4.94 lbs. (2,240 g)
OS: Windows 10 Home
Price: $2700 - Amazon.com

As you can see, the ASUS ROG Zephyrus has the specifications of a high-end gaming desktop, never mind a gaming notebook. In some gaming notebook designs, the bottleneck comes down to CPU horsepower more than GPU horsepower. That doesn't seem to be the case here: the powerful GTX 1080 GPU is paired with a quad-core, Hyper-Threaded Intel processor capable of boosting up to 3.8 GHz.

Continue reading our review of the ASUS Zephyrus Max-Q Gaming Notebook!

Manufacturer: AKiTiO

A long time coming

External video cards for laptops have long been a dream of many PC enthusiasts, and for good reason. It’s compelling to have a thin-and-light notebook with great battery life for things like meetings or class, with the ability to plug it into a dock at home and enjoy your favorite PC games.

Many times we have been promised that external GPUs for notebooks would be a viable option. Over the years there have been many commercial solutions involving both industry-standard protocols like ExpressCard and proprietary connections that allow you to externally connect PCIe devices. Enterprising hackers have also tried their hand at this for many years, cobbling together interesting solutions using mPCIe and M.2 ports on their notebooks that were meant for other devices.

With the introduction of Intel’s Thunderbolt standard in 2011, there was a hope that we would finally achieve external graphics nirvana. A modern, Intel-backed protocol promising PCIe x4 speeds (PCIe 2.0 at that point) sounded like it would be ideal for connecting GPUs to notebooks, and in some ways it was. Once again the external graphics communities managed to get it to work through the use of enclosures meant to connect other non-GPU PCIe devices such as RAID and video capture cards to systems. However, software support was still a limiting factor. You were required to use an external monitor to display your video, and it still felt like you were just riding the line between usability and a total hack. It felt like we were never going to get true universal support for external GPUs on notebooks.

Then, seemingly out of nowhere, Intel decided to promote native support for external GPUs as a priority when they introduced Thunderbolt 3. Fast forward, and we've already seen a much larger adoption of Thunderbolt 3 on PC notebooks than we ever did with the previous Thunderbolt implementations. Taking all of this into account, we figured it was time to finally dip our toes into the eGPU market.

For our testing, we decided on the AKiTiO Node for several reasons. First, at around $300, it's by far the lowest-cost enclosure built to support GPUs. Additionally, it seems to be one of the most compatible devices currently on the market according to the very helpful comparison chart over at eGPU.io. The eGPU site is a wonderful resource for everything external GPU, over any interface possible, and I would highly recommend heading over there to do some reading if you are interested in trying out an eGPU for yourself.

The Node unit itself is a very utilitarian design. Essentially you get a folded sheet metal box with a Thunderbolt controller and 400W SFX power supply inside.

DSC03490.JPG

In order to install a GPU into the Node, you must first unscrew the enclosure from the back and slide the outer shell off of the device.

DSC03495.JPG

Once inside, we can see that there is ample room for any graphics card you might want to install in this enclosure. In fact, it seems a little too large for any of the GPUs we installed, including GTX 1080 Ti models. Here, you can see a more reasonable RX 570 installed.

Beyond opening up the enclosure to install a GPU, there is very little configuration required. My unit required a firmware update, but that was easily applied with the tools from the AKiTiO site.

From here, I simply connected the Node to a ThinkPad X1, installed the NVIDIA drivers for our GTX 1080 Ti, and everything seemed to work — including using the 1080 Ti with the integrated notebook display and no external monitor!
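For anyone trying this themselves, a quick way to confirm that the system is actually enumerating the external card (rather than just the integrated GPU) is to query the NVIDIA driver directly. The snippet below is a minimal sketch that shells out to nvidia-smi, which ships with the NVIDIA driver; the device name and bus ID shown in the comment are just example output, and what you see will depend on your hardware.

```python
# Minimal sketch: confirm the eGPU is visible to the NVIDIA driver after
# connecting the Thunderbolt 3 enclosure. Assumes the NVIDIA driver (and
# therefore the nvidia-smi utility) is already installed.
import subprocess

def list_nvidia_gpus():
    """Return one line per GPU reported by nvidia-smi, or an empty list."""
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name,pci.bus_id", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    gpus = list_nvidia_gpus()
    if gpus:
        for gpu in gpus:
            print(gpu)  # e.g. "0, GeForce GTX 1080 Ti, 00000000:05:00.0" (example output)
    else:
        print("No NVIDIA GPU detected - check the Thunderbolt connection and driver install.")
```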

Now that we've got the Node working, let's take a look at some performance numbers.

Continue reading our look at external graphics with the Thunderbolt 3 AKiTiO Node!

Podcast #457 - Radeon Vega FE, NVIDIA Multi-Die, Ryzen Pro, and more!

Subject: General Tech | July 6, 2017 - 10:40 AM |
Tagged: video, Vega FE, starcraft, seasonic, ryzen pro, radeon, podcast, nvidia, Multi-Die, gtx 1060, galax

PC Perspective Podcast #457 - 07/6/17

Join us for Radeon Vega FE, NVIDIA Multi-Die, Ryzen Pro, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath

Peanut Gallery: Alex Lustenberg, Ken Addison

Program length: 1:08:04
 
Podcast topics of discussion:
  1. Week in Review:
      1. RX Vega perf leak
      2. 0:33:10 Casper!
  2. News items of interest:
  3. Hardware/Software Picks of the Week
  4. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Manufacturer: NVIDIA

Performance not two-die four.

When designing an integrated circuit, you are attempting to fit as much complexity as possible within your budget of space, power, and so forth. One harsh limitation for GPUs is that, while your workloads could theoretically benefit from more and more processing units, the number of usable chips from a batch shrinks as designs grow, and the reticle limit of a fab’s manufacturing node is basically a brick wall.

What’s one way around it? Split your design across multiple dies!

nvidia-2017-multidie.png

NVIDIA published a research paper discussing just that. Their diagram shows two examples: in the first, the GPU is a single, typical die surrounded by four stacks of HBM, like GP100; in the second, the GPU is broken into five dies, four GPU modules and an I/O controller, with each GPU module attached to a pair of HBM stacks.

NVIDIA ran simulations to determine how this chip would perform and, across various workloads, found that it outperformed the largest possible single-chip GPU by about 45.5%. They also scaled up the single-chip design until it had the same number of compute units as the multi-die design, even though this wouldn't work in the real world because no fab could actually lithograph it. Regardless, that hypothetical, impossible design was only ~10% faster than the actually-possible multi-chip one, showing that the overhead of splitting the design is only around that much, according to their simulation. The multi-die design was also 26.8% faster than the multi-card equivalent.
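To keep those comparisons straight, here is a rough back-of-the-envelope script. The percentages are the ones quoted above from the paper; normalizing everything to the largest buildable single die is my own framing for illustration, not NVIDIA's.

```python
# Back-of-the-envelope sketch of the relative throughput figures quoted above,
# normalized so the largest single-chip GPU that can actually be built = 1.0.
# The percentages come from NVIDIA's paper; the normalization is for illustration.
largest_buildable = 1.0
multi_die    = largest_buildable * 1.455   # multi-die beats the buildable chip by ~45.5%
multi_card   = multi_die / 1.268           # multi-die beats the multi-card setup by 26.8%
hypothetical = multi_die * 1.10            # impossible monolithic chip is ~10% faster

overhead = 1 - multi_die / hypothetical    # cost of splitting the design across dies
print(f"multi-die: {multi_die:.3f}  multi-card: {multi_card:.3f}  "
      f"hypothetical monolithic: {hypothetical:.3f}")
print(f"implied overhead of going multi-die: {overhead:.1%}")  # roughly 9%
```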

While NVIDIA’s simulations, run on 48 different benchmarks, have accounted for this, I still can’t visualize how this would work in an automated way. I don’t know how the design would automatically account for fetching data that’s associated with other GPU modules, as this would probably be a huge stall. That said, they spent quite a bit of time discussing how much bandwidth is required within the package, and figures of 768 GB/s to 3 TB/s were mentioned, so it’s possible that it’s just the same tricks as fetching from global memory. The paper touches on the topic several times, but I didn’t really see anything explicit about what they were doing.

amd-2017-epyc-breakdown.jpg

If you’ve been following the site over the last couple of months, you’ll note that this is basically the same thing AMD is doing with Threadripper and EPYC. The main difference is that CPU cores are isolated, so sharing data between them is explicit. In fact, when that product was announced, I thought, “Huh, that would be cool for GPUs. I wonder if it’s possible, or if it would just end up being Crossfire / SLI.”

Apparently not? It should be possible?

I should note that I doubt this will be relevant for consumers. The GPU is the most expensive part of a graphics card, and while the thought of four GP102-level chips working together sounds great for 4K gaming (4K being 4x 1080p in resolution), quadrupling the expensive part sounds like a giant price tag. That said, the market for GP100 (and the upcoming GV100) would pay five-plus digits for the absolute fastest compute device for deep learning, scientific research, and so forth.

The only way I could see this working for gamers is if NVIDIA finds the sweet-spot for performance-to-yield (for a given node and time) and they scale their product stack with multiples of that. In that case, it might be cost-advantageous to hit some level of performance, versus trying to do it with a single, giant chip.

This is just my speculation, however. It’ll be interesting to see where this goes, whenever it does.

NVIDIA Releases GeForce 384.76 Drivers

Subject: Graphics Cards | June 30, 2017 - 05:26 PM |
Tagged: nvidia, graphics drivers

Aligning with the LawBreakers “Rise Up” open beta, as well as the Spider-Man: Homecoming VR Experience VR experience, intentionally written twice, NVIDIA has released new graphics drivers!

nvidia-geforce.png

The GeForce Game Ready 384.76 WHQL drivers were published yesterday on GeForce Experience and their website. Apart from game-specific optimizations, the driver also fixes a bunch of issues, many of which seem very important. First, if you are a fan of Firefall and your system was unable to launch the game, this driver should remedy that. The driver also claims to remove some or all of the stuttering experienced by GTX 1080, GTX 1070, and GTX 1060 GPUs on Prey 2. Texture corruption in No Man’s Sky, for those who still play the game in an SLI configuration, should be fixed as well, which I believe was a long-standing issue, although I could be wrong (as I haven’t been following that game). Vulkan support on Doom (2016) has also been improved.

I should note that, when I tried to custom install the driver through GeForce Experience, the install “failed” three times -- as in, the installer wouldn’t even draw the install button. Eventually, it gave me an install button, and it installed just fine. Not sure what’s going on with that, but I thought you all should know.

Source: NVIDIA

NVIDIA and Epic Games Announce "Edge" Program

Subject: Graphics Cards | June 28, 2017 - 11:00 PM |
Tagged: epic games, ue4, nvidia, geforce, giveaway

If you are an indie game developer, and you could use a little more GPU performance, NVIDIA is hosting a hardware giveaway. Starting at the end of July, and ongoing until Summer 2018, NVIDIA and Epic Games will be giving away GeForce GTX 1080 and GeForce GTX 1080 Ti cards to batches of Unreal Engine 4 projects.

epic-ue4-infiltrator.jpg

To enter, you need to share screenshots and videos of your game on Twitter, Facebook, and Instagram, tagging both UnrealEngine and NVIDIA. (The specific accounts are listed on the Unreal Engine blog post that announces this initiative.) They will also feature these projects on both the Unreal Engine and the NVIDIA blog, which is just as valuable for indie projects.

So... hey! Several chances at free hardware!

Source: Epic Games

AMD Releases Radeon ProRender for Blender and SolidWorks

Subject: General Tech | June 28, 2017 - 06:24 PM |
Tagged: solidworks, ray tracing, radeon, prorender, nvidia, mental ray, Blender, amd

AMD has released a free ray-tracing engine for Blender, as well as Maya, 3D Studio Max, and SolidWorks, called Radeon ProRender. It uses a physically-based workflow, which allows multiple materials to be expressed in a single, lighting-independent shader, making it easy to color objects and have them usable in any sensible environment.

amd-2017-prorender-mikeP.jpg

Image Credit: Mike Pan (via Twitter)

I haven’t used it yet, and I definitely haven’t tested how it stacks up against Cycles, but we’re beginning to see some test renders from Blender folks. It looks pretty good, as you can see with the water-filled Cornell box (above). Moreover, it’s rendered on an NVIDIA GPU, which I’m guessing they had because of Cycles, but that also shows that AMD is being inclusive with their software.

Radeon ProRender puts more than a little pressure on Mental Ray, which is owned by NVIDIA and licensed on annual subscriptions. We’ll need to see how quality evolves, but, as you see in the test render above, it looks pretty good so far... and the price can’t be beat.

Source: AMD

NVIDIA Partners Launching Mining Focused P106-100 and P104-100 Graphics Cards

Subject: Graphics Cards | June 26, 2017 - 11:29 PM |
Tagged: pascal, nvidia, nicehash, mining, gp106-100, gp104-100, cryptocurrency

In addition to the AMD-based mining graphics cards based on RX 470 Polaris silicon that have appeared online, NVIDIA and its partners are launching cryptocurrency mining cards based on GP106 and GP104 GPUs. Devoid of any GeForce or GTX branding, these cost-controlled, mining-focused cards lack the usual array of display outputs and have much shorter warranties (rumors point at a 3-month warranty restriction imposed by NVIDIA). So far, Asus, Colorful, EVGA, Inno3D, MSI, and Zotac "P106-100" cards based on GP106 (GTX 1060 equivalent) silicon have been spotted online, with Manli and Palit reportedly also working on cards. Many of these manufacturers are also planning "P104-100" cards based on GP104 (the GTX 1070 equivalent), though much less information is available at the moment. Pricing is still up in the air, but pre-orders are starting to pop up overseas, so release dates and prices will hopefully become official soon.

ASUS GP106-100 MINER.jpg

These mining-oriented cards appear to be equipped with heatsinks similar to those of their gaming-oriented siblings, but have fans rated for 24/7 operation. Further, while the cards can be overclocked, they ship at reference clock speeds out of the box and allegedly have bolstered power delivery hardware to keep them mining smoothly under 24/7 operation. The majority of the cards from NVIDIA partners lack any display outputs (the Colorful card has a single DVI out), which helps a bit with ventilation by leaving both slots vented. These cards are intended to be run in headless systems or in systems that also have graphics integrated into the CPU (miners not wanting to waste a PCI-E slot!).

Card | Base Clock | Boost Clock | Memory (Type) | Pricing
ASUS MINING-P106-6G | 1506 MHz | 1708 MHz | 6GB (GDDR5) @ 8 GHz | $226
Colorful P106-100 WK1/WK2 | 1506 MHz | 1708 MHz | 6GB (GDDR5) @ 8 GHz | ?
EVGA GTX1060 6G P106 | 1506 MHz | 1708 MHz | 6GB (GDDR5) @ 8 GHz | $284?
Inno3D P106-100 Compact | 1506 MHz | 1708 MHz | 6GB (GDDR5) @ 8 GHz | ?
Inno3D P106-100 Twin | 1506 MHz | 1708 MHz | 6GB (GDDR5) @ 8 GHz | ?
MSI P106-100 MINER | 1506 MHz | 1708 MHz | 6GB (GDDR5) @ 8 GHz | $224
MSI P104-100 MINER | TBD | TBD | 6GB (GDDR5X) @ ? | ?
ZOTAC P106-100 | 1506 MHz | 1708 MHz | 6GB (GDDR5) @ 8 GHz | ?

Looking at the NiceHash profitability calculator, the GTX 1060 and GTX 1070 are rated at 20.13 MH/s and 28.69 MH/s respectively for DaggerHashimoto (Ethereum) mining, with many users able to get a good bit higher hash rates with a bit of overclocking (and, in the case of AMD, undervolting to optimize power efficiency). NVIDIA cards tend to be good for other algorithms as well, such as Equihash (Zcash) and LBRY (at least those were the majority of coins my 750 Ti mined, likely due to it not having the memory to attempt ETH mining, heh). The calculator estimates these GPUs at 0.00098942 BTC per day and 0.00145567 BTC per day respectively. If difficulty and exchange rates were to remain constant, that amounts to an income of $1,197.95 per year for a GP106 and $1,791.73 per year for a GP104 GPU, and ROI in under 3 months. Of course, cryptocurrency-to-USD exchange rates will not remain constant, there are transaction and mining fees, and mining difficulty will rise as miners add more hardware to the network, so these estimated numbers will be lower in reality. Also, these numbers are before electricity, maintenance time, and failed hardware costs, but currently mining alt coins is still very much profitable using graphics cards.
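If you want to run that payback math yourself, here is a quick sketch. The BTC-per-day figures are the NiceHash estimates quoted above; the exchange rate and the GP104 card price are placeholder assumptions you would swap for current values, so the dollar outputs will differ from the article's figures depending on what you plug in, and fees, electricity, and difficulty growth are ignored.

```python
# Rough sketch of the mining payback math above. BTC/day values are the NiceHash
# estimates quoted in the text; the exchange rate and the GP104 price are
# placeholder assumptions, and fees/electricity/difficulty growth are ignored.
def mining_payback(btc_per_day, card_price_usd, usd_per_btc):
    usd_per_day = btc_per_day * usd_per_btc
    return {
        "usd_per_year": usd_per_day * 365,
        "payback_days": card_price_usd / usd_per_day,
    }

if __name__ == "__main__":
    usd_per_btc = 2500.0  # placeholder exchange rate; update to the current BTC price
    cards = [
        ("P106-100 (GP106)", 0.00098942, 226.0),  # price from the table above
        ("P104-100 (GP104)", 0.00145567, 350.0),  # price is an assumption (not yet announced)
    ]
    for name, btc_day, price in cards:
        result = mining_payback(btc_day, price, usd_per_btc)
        print(f"{name}: ${result['usd_per_year']:,.0f}/year, "
              f"payback in about {result['payback_days']:.0f} days")
```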

AMD and NVIDIA (and their AIB partners) are hoping to get in on this action with cards binned and tuned for mining, and at their rumored prices, which place them below the gaming-focused RX and GTX variants, miners are sure to scoop these cards up in huge batches (some of the above cards are only available in large orders). Hopefully this will alleviate the strain on the gaming graphics card market and bring prices back down closer to their original MSRPs for gamers!


What are your thoughts on all this GPU mining and cryptocurrency / blockchain technology stuff?

Source: Videocardz

Mining specific cards are real - ASUS and Sapphire GP106 and RX 470 show up

Subject: Graphics Cards | June 26, 2017 - 12:21 PM |
Tagged: radeon, nvidia, mining, geforce, cryptocurrency, amd

It appears that the prediction of mining-specific graphics cards was spot on, and we are beginning to see them released by various AMD and NVIDIA board partners. ASUS has launched both a GP106-based solution and an RX 470 offering, labeled as being built exclusively for mining, and Sapphire has tossed its hat into the ring with RX 470 options as well.

EZUPrFwrJaMtsGyJ_setting_000_1_90_end_500.png

The most interesting release is the ASUS MINING-P106-6G, a card that takes no official NVIDIA or GeForce branding, but is clearly based on the GP106 GPU that powers the GeForce GTX 1060. It has no display outputs, so you won't be able to use this as a primary graphics card down the road. It is very likely that these GPUs have bad display controllers on the chip, allowing NVIDIA to make use of an otherwise unusable product.

MINING-P106-6G_IO_500.png

The specifications on the ASUS page list this product as having 1280 CUDA cores, a base clock of 1506 MHz, a Boost clock of 1708 MHz, and 6GB of GDDR5 running at 8.0 GHz. Those are identical specs to the reference GeForce GTX 1060 product.

The ASUS MINING-RX470-4G is a similar build but using the somewhat older, but very efficient for mining, Radeon RX 470 GPU. 

MINING-RX470-4G_2D_500.png

Interestingly, the ASUS RX 470 mining card has openings for a DisplayPort and HDMI connection, but they are both empty, leaving the single DVI connection as the only display option.

MINING-RX470-4G_IO_500.png

The Mining RX 470 has 4GB of GDDR5, 2048 stream processors, a base clock of 926 MHz and a boost clock of 1206 MHz, again, the same as the reference RX 470 product.

We have also seen Sapphire versions of the RX 470 for mining show up on Overclockers UK with no display outputs and very similar specifications.

GX37YSP_168450_800x800.jpg

In fact, based on the listings at Overclockers UK, Sapphire has four total SKUs, half with 4GB and half with 8GB, binned by clock speed and listed with their expected MH/s (megahashes per second) performance for Ethereum mining.

63634076269.png

These releases show both NVIDIA's and AMD's (and their partners') desire to continue cashing in on the rising coin mining and cryptocurrency craze. For AMD, this provides an outlet for RX 470 GPUs that might have otherwise sat in inventory with the upgraded RX 500-series out on the market. For NVIDIA, using GPUs that have faulty display controllers for mining-specific purposes allows it to better utilize production and gain some additional profit with very little effort.

Those of you still looking to buy GPUs at reasonable prices for GAMING...you remember, what these products were built for...are still going to have trouble finding stock on virtual or physical shelves. Though the value of compute power has been dropping over the past week or so (an expected result of increased interest in the process), I feel we are still on the rising side of this current cryptocurrency trend.

Source: Various

The GeForce GTX USB drive is real and small and fun

Subject: General Tech | June 23, 2017 - 05:13 PM |
Tagged: nvidia, gtx, geforce gtx usb drive, geforce

What started as merely an April Fools' prank by NVIDIA has now turned into one of the cutest little promotions I've ever seen. Originally "launched" as part of the GeForce G-ASSIST technology that purported to offer AI-enabled gaming while you were away from your keyboard, the tiny, adorable GeForce GTX USB key has now actually been built by NVIDIA.

gtxusb1.jpg

This drive was made to look like the GeForce GTX 1080 Founders Edition graphics card and was only produced in a quantity of 1080. I happened to find a 64GB option in a FedEx box this morning when I came into the office.

gtxusb2.jpg

Performance on this USB 3.0-based drive is pretty solid, peaking at 111 MB/s on reads and 43 MB/s on writes.

GTX 64GB USB-1.png

If you want one of these for yourself, you need to be signed up through GeForce Experience and opted in to the GeForce newsletter. Do that, and you're entered.

gtxusb3.jpg

We have some more pictures of the USB drive below (including the surprising interior shot!), so click this link to see them.