Subject: General Tech | August 17, 2017 - 12:48 PM | Jeremy Hellstrom
Tagged: nvidia, pascal, grid, tesla, Quadro vDWS
NVIDIA has updated its GRID virtual PC architecture to support up to 24 virtual desktops with 1GB of memory each, doubling the previous capacity of its virtualization tool. Along with this increase comes a new offering called Quadro vDWS, which allows you to power those virtual desktops with one of NVIDIA's HPC cards, such as the Pascal-based line of Tesla GPU accelerators. For workflows incorporating VR or photorealistic rendering this will offer a significant increase in performance; unfortunately, Minesweeper will not see any improvements. NVIDIA accompanied this launch with a new blade server, the Tesla P6, which has 16GB of memory that can be split into 16 1GB virtual desktops. Drop by The Inquirer for more information, including where to get the new software.
"NVIDIA has announced a new software suite which will allow users to virtualise an operating system to turn the company's ridiculously powerful Tesla GPU servers into powerful workstations."
Here is some more Tech News from around the web:
- Nokia 8 vs Galaxy S8 specs comparison @ The Inquirer
- Roku Gets Tough On Pirate Channels, Warns Users @ Slashdot
- Toshiba must allow Western Digital access to joint-venture assets @ The Register
- OCUK’s Andrew Gibson clears up RX Vega64 pricing disaster @ Kitguru
- How to build your own DIY makeshift levitation machine at home @ The Register
Subject: General Tech | August 9, 2017 - 12:43 PM | Jeremy Hellstrom
Tagged: nvidia, autonomous vehicles, HPC
NVIDIA has previously shown its interest in providing the brains for autonomous vehicles; its Xavier chip is scheduled for release sometime toward the end of the year. The company is continuing its efforts to break into this market by investing in startups through a program called GPU Ventures. Today DigiTimes reports that NVIDIA purchased a stake in a Chinese company called Tusimple, which is developing autonomous trucks. The transportation of goods may not be as interesting to the average consumer as self-driving cars, but the market could be more lucrative; there are a lot of trucks on the roads of the world, and they are unlikely to be replaced any time soon.
"Tusimple, a Beijing-based startup focused on developing autonomous trucks, has disclosed that Nvidia will make a strategic investment to take a 3% stake in the company. Nvidia's investment is part of a Series B financing round, Tusimple indicated."
Here is some more Tech News from around the web:
- Microsoft launches Outlook.com beta because it's not Gmail or Yahoo @ The Inquirer
- Intel will unveil 8th-gen 'Coffee Lake' processors on 21 August @ The Inquirer
- Microsoft Dumps Notorious Chinese Secure Certificate Vendor @ Slashdot
- It's 2017 and Hyper-V can be pwned by a guest app, Windows by a search query, Office by... @ The Register
- Core-blimey! Intel's Core i9 18-core monster – the numbers @ The Register
Subject: Graphics Cards | July 25, 2017 - 06:36 PM | Jeremy Hellstrom
Tagged: evga, Kingpin, 1080 ti, nvidia
A fancy new card with a fancy way of spelling K|NGP|N has just been announced by EVGA. It is a rather attractive card, eschewing RGBitis for a copper heatsink peeking through the hexagonal grill and three fans. The only glowing parts indicate the temperature of the GPU, memory and PWM controller; a far more functional use.
As you would expect, the card arrives with default clocks of 1582MHz base and 1695MHz boost; however, it is guaranteed to hit 2025MHz or higher when overclocked. The base model ships with a dual-slot profile, but EVGA chose to move the DVI port down, leaving the top of the card empty except for cooling vents. This also means you could purchase a Hydro Copper Waterblock and reduce the card's height to a single slot.
The card currently holds several single GPU World Records:
- 3DMark Time Spy World Record – 14,219
- 3DMark Fire Strike Extreme World Record – 19,361
- 3DMark Fire Strike World Record – 31,770
- UNIGINE Superposition – 8,642
July 25th, 2017 - The GeForce® GTX™ 1080 Ti was designed to be the most powerful desktop GPU ever created, and indeed it was. EVGA built upon its legacy of innovative cooling solutions and powerful overclocking with its GTX 1080 Ti SC2 and FTW3 graphics cards. Despite the overclocking headroom provided by the frigid cooling of EVGA's patented iCX Technology, the potential of the GTX 1080 Ti still leaves room for one more card at the top...and man is it good to be the K|NG.
A few months ago at Computex, NVIDIA announced their "GeForce GTX with Max-Q Design" initiative. Essentially, the heart of this program is the use of specifically binned GTX 1080, 1070 and 1060 GPUs. These GPUs have been tested and selected during the manufacturing process to ensure lower power draw at the same performance levels when compared to the GPUs used in more traditional form factors like desktop graphics cards.
In order to gain access to these "Max-Q" binned GPUs, notebook manufacturers have to meet specific NVIDIA guidelines on noise levels at thermal load (sub-40 dBA). To be clear, NVIDIA doesn't seem to be offering partners reference notebook designs (as demonstrated by the variability in design across the Max-Q notebooks), but rather ideas on how they can accomplish the given goals.
At the show, NVIDIA and some of their partners showed off several Max-Q notebooks. We hope to take a look at all of these machines in the coming weeks, but today we're focusing on one of the first, the ASUS ROG Zephyrus.
|ASUS ROG Zephyrus (configuration as reviewed)|
|Processor||Intel Core i7-7700HQ (Kaby Lake)|
|Graphics||NVIDIA GeForce GTX 1080 with Max-Q Design (8GB)|
|Memory||24GB DDR4 (8GB soldered + 2 x 8GB DIMM)|
|Screen||15.6-in 1920x1080 120Hz G-SYNC|
|Storage||512GB Samsung SM961 NVMe|
|Connectivity||4 x USB 3.0, audio combo jack|
|Power||50 Wh battery, 230W AC adapter|
|Dimensions||378.9mm x 261.9mm x 17.01-17.78mm (14.92" x 10.31" x 0.67"-0.70")|
|Weight||4.94 lbs (2,241 g)|
|OS||Windows 10 Home|
|Price||$2700 - Amazon.com|
As you can see, the ASUS ROG Zephyrus has the specifications of a high-end gaming desktop, never mind a gaming notebook. In some gaming notebook designs, the bottleneck comes down to CPU horsepower more than GPU horsepower. That doesn't seem to be the case here: the powerful GTX 1080 GPU is paired with a quad-core, Hyper-Threaded Intel processor capable of boosting up to 3.8 GHz.
A long time coming
External video cards for laptops have long been a dream of many PC enthusiasts, and for good reason. It’s compelling to have a thin-and-light notebook with great battery life for things like meetings or class, with the ability to plug it into a dock at home and enjoy your favorite PC games.
Many times we have been promised that external GPUs for notebooks would become a viable option. Over the years there have been numerous commercial solutions for externally connecting PCIe devices, using both industry-standard protocols like ExpressCard and proprietary connections. Enterprising hackers have also tried their hand at this for many years, cobbling together interesting solutions using the mPCIe and M.2 ports on their notebooks that were meant for other devices.
With the introduction of Intel’s Thunderbolt standard in 2011, there was a hope that we would finally achieve external graphics nirvana. A modern, Intel-backed protocol promising PCIe x4 speeds (PCIe 2.0 at that point) sounded like it would be ideal for connecting GPUs to notebooks, and in some ways it was. Once again the external graphics communities managed to get it to work through the use of enclosures meant to connect other non-GPU PCIe devices such as RAID and video capture cards to systems. However, software support was still a limiting factor. You were required to use an external monitor to display your video, and it still felt like you were just riding the line between usability and a total hack. It felt like we were never going to get true universal support for external GPUs on notebooks.
Then, seemingly out of nowhere, Intel decided to make native support for external GPUs a priority when it introduced Thunderbolt 3. Fast forward, and we've already seen much broader adoption of Thunderbolt 3 on PC notebooks than we ever did with the previous Thunderbolt implementations. Taking all of this into account, we figured it was time to finally dip our toes into the eGPU market.
For our testing, we decided on the AKiTio Node for several reasons. First, at around $300, it's by far the lowest cost enclosure built to support GPUs. Additionally, it seems to be one of the most compatible devices currently on the market according to the very helpful comparison chart over at eGPU.io. The eGPU site is a wonderful resource for everything external GPU, over any interface possible, and I would highly recommend heading over there to do some reading if you are interested in trying out an eGPU for yourself.
The Node unit itself is a very utilitarian design. Essentially you get a folded sheet metal box with a Thunderbolt controller and 400W SFX power supply inside.
In order to install a GPU into the Node, you must first unscrew the enclosure from the back and slide the outer shell off of the device.
Once inside, we can see that there is ample room for any graphics card you might want to install in this enclosure. In fact, it seems a little too large for any of the GPUs we installed, including GTX 1080 Ti models. Here, you can see a more reasonable RX 570 installed.
Beyond opening up the enclosure to install a GPU, there is very little configuration required. My unit required a firmware update, but that was easily applied with the tools from the AKiTio site.
From here, I simply connected the Node to a ThinkPad X1, installed the NVIDIA drivers for our GTX 1080 Ti, and everything seemed to work — including using the 1080 Ti with the integrated notebook display and no external monitor!
Now that we've got the Node working, let's take a look at some performance numbers.
Subject: General Tech | July 6, 2017 - 10:40 AM | Alex Lustenberg
Tagged: video, Vega FE, starcraft, seasonic, ryzen pro, radeon, podcast, nvidia, Multi-Die, gtx 1060, galax
PC Perspective Podcast #457 - 07/6/17
Join us for Radeon Vega FE, NVIDIA Multi-Die, Ryzen Pro, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath
Peanut Gallery: Alex Lustenberg, Ken Addison
Performance not two-die four.
When designing an integrated circuit, you are attempting to fit as much complexity as possible within your budget of space, power, and so forth. One harsh limitation for GPUs is that, while your workloads could theoretically benefit from more and more processing units, the number of usable chips from a batch shrinks as designs grow, and the reticle limit of a fab’s manufacturing node is basically a brick wall.
What’s one way around it? Split your design across multiple dies!
NVIDIA published a research paper discussing just that. In their diagram, they show two examples. In the first diagram, the GPU is a single, typical die that’s surrounded by four stacks of HBM, like GP100; the second configuration breaks the GPU into five dies, four GPU modules and an I/O controller, with each GPU module attached to a pair of HBM stacks.
NVIDIA ran simulations to determine how this chip would perform and found that, across various workloads, it out-performed the largest possible single-chip GPU by about 45.5%. They also scaled up the single-chip design until it had the same number of compute units as the multi-die design, even though this wouldn't work in the real world because no fab could actually lithograph it. That hypothetical, impossible design was only ~10% faster than the actually-possible multi-chip one, showing that the overhead of splitting the design is only around that much, according to their simulation. The multi-chip design was also 26.8% faster than the equivalent multi-card setup.
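To make the reported ratios concrete, here is a toy back-of-envelope sketch (my own normalization derived from the paper's percentages, not NVIDIA's raw data), scoring the largest buildable single-chip GPU as 1.0:

```python
# Back-of-envelope comparison of the configurations NVIDIA simulated,
# normalized so the largest buildable single-chip GPU scores 1.0.
largest_buildable_chip = 1.0

# Multi-chip module (MCM): reported ~45.5% faster than the buildable monolith.
multi_chip_module = largest_buildable_chip * 1.455

# Hypothetical (unmanufacturable) monolith with the same compute-unit
# count as the MCM: reported ~10% faster than the MCM.
impossible_monolith = multi_chip_module * 1.10

# Multi-card setup with equivalent resources: the MCM beat it by ~26.8%.
multi_card = multi_chip_module / 1.268

# The gap between the MCM and the impossible monolith is the cost
# of splitting the design across dies.
die_split_overhead = 1 - multi_chip_module / impossible_monolith

print(f"MCM vs. buildable chip: {multi_chip_module:.3f}x")
print(f"Impossible monolith:    {impossible_monolith:.3f}x")
print(f"Multi-card:             {multi_card:.3f}x")
print(f"Die-split overhead:     {die_split_overhead:.1%}")
```

The takeaway is the ordering: multi-card < buildable monolith < MCM < impossible monolith, with the MCM capturing most of the impossible design's performance.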
While NVIDIA’s simulations, run on 48 different benchmarks, have accounted for this, I still can’t visualize how this would work in an automated way. I don’t know how the design would automatically account for fetching data that’s associated with other GPU modules, as this would probably be a huge stall. That said, they spent quite a bit of time discussing how much bandwidth is required within the package, and figures of 768 GB/s to 3TB/s were mentioned, so it’s possible that it’s just the same tricks as fetching from global memory. The paper touches on the topic several times, but I didn’t really see anything explicit about what they were doing.
If you’ve been following the site over the last couple of months, you’ll note that this is basically the same approach AMD is taking with Threadripper and EPYC. The main difference is that CPU cores are isolated, so sharing data between them is explicit. In fact, when that product was announced, I thought, “Huh, that would be cool for GPUs. I wonder if it’s possible, or if it would just end up being Crossfire / SLI.”
Apparently not? It should be possible?
I should note that I doubt this will be relevant for consumers. The GPU is the most expensive part of a graphics card, and while the thought of four GP102-level chips working together sounds great for 4K gaming (4K is four times the resolution of 1080p), quadrupling the expensive part implies a giant price tag. That said, the market for GP100 (and the upcoming GV100) would pay five-plus digits for the absolute fastest compute device for deep learning, scientific research, and so forth.
The only way I could see this working for gamers is if NVIDIA finds the sweet-spot for performance-to-yield (for a given node and time) and they scale their product stack with multiples of that. In that case, it might be cost-advantageous to hit some level of performance, versus trying to do it with a single, giant chip.
This is just my speculation, however. It’ll be interesting to see where this goes, whenever it does.
Subject: Graphics Cards | June 30, 2017 - 05:26 PM | Scott Michaud
Tagged: nvidia, graphics drivers
Aligning with the LawBreakers “Rise Up” open beta, as well as the Spider-Man: Homecoming VR Experience VR experience, intentionally written twice, NVIDIA has released new graphics drivers!
The GeForce Game Ready 384.76 WHQL drivers were published yesterday on GeForce Experience and NVIDIA's website. Apart from game-specific optimizations, the driver also fixes a bunch of issues, many of which seem very important. First, if you are a fan of Firefall and your system was unable to launch the game, this driver should remedy that. The driver also claims to remove some or all of the stuttering experienced by GTX 1080, GTX 1070, and GTX 1060 GPUs in Prey. Texture corruption in No Man’s Sky, for those who still play the game in an SLI configuration, should be fixed as well, which I believe was a long-standing issue, although I could be wrong (as I haven’t been following that game). Vulkan support in Doom (2016) has also been improved.
I should note that, when I tried to custom install the driver through GeForce Experience, the install “failed” three times -- as in, the installer wouldn’t even draw the install button. Eventually, it gave me an install button and the driver installed just fine. Not sure what’s going on with that, but I thought you all should know.
Subject: Graphics Cards | June 28, 2017 - 11:00 PM | Scott Michaud
Tagged: epic games, ue4, nvidia, geforce, giveaway
If you are an indie game developer, and you could use a little more GPU performance, NVIDIA is hosting a hardware giveaway. Starting at the end of July, and ongoing until Summer 2018, NVIDIA and Epic Games will be giving away GeForce GTX 1080 and GeForce GTX 1080 Ti cards to batches of Unreal Engine 4 projects.
To enter, you need to share screenshots and videos of your game on Twitter, Facebook, and Instagram, tagging both UnrealEngine and NVIDIA. (The specific accounts are listed on the Unreal Engine blog post that announces this initiative.) They will also feature these projects on both the Unreal Engine and the NVIDIA blog, which is just as valuable for indie projects.
So... hey! Several chances at free hardware!
Subject: General Tech | June 28, 2017 - 06:24 PM | Scott Michaud
Tagged: solidworks, ray tracing, radeon, prorender, nvidia, mental ray, Blender, amd
AMD has released a free ray-tracing engine for Blender, as well as Maya, 3D Studio Max, and SolidWorks, called Radeon ProRender. It uses a physically-based workflow, which allows multiple materials to be expressed in a single, lighting-independent shader, making it easy to color objects and have them usable in any sensible environment.
Image Credit: Mike Pan (via Twitter)
I haven’t used it yet, and I definitely haven’t tested how it stacks up against Cycles, but we’re beginning to see some test renders from Blender folks. It looks pretty good, as you can see with the water-filled Cornell box (above). Moreover, it’s rendered on an NVIDIA GPU, which I’m guessing they had because of Cycles, but that also shows that AMD is being inclusive with their software.
Radeon ProRender puts more than a little pressure on Mental Ray, which is owned by NVIDIA and licensed on annual subscriptions. We’ll need to see how quality evolves, but, as you see in the test render above, it looks pretty good so far... and the price can’t be beat.