
Microsoft Converts Unreal Engine 4 to UWP

Subject: General Tech | July 27, 2016 - 08:47 PM |
Tagged: microsoft, epic games, unreal engine, unreal engine 4, ue4, uwp

The head of Epic Games, Tim Sweeney, doesn't like UWP much, at least as it exists today (and for noble reasons). He will not support the new software (app) platform unless Microsoft makes clear changes that guarantee perpetual openness. As it stands, there isn't anything, technically or legally, to prevent Microsoft (or an entity with authority over Microsoft, like governments, or activist groups who petition governments) from undoing its concessions going forward. Still, if Microsoft dropped support for Win32, then, apart from applications converted through Project Centennial or the like, its catalog would be tiny.

Ridiculously tiny.

SteamOS would kick its butt levels of tiny, let alone OSX, Android, and countless others.

As a result, Microsoft keeps it around, despite its unruliness. Functionality that legitimate software requires makes it difficult to block malware, and, even without an infection, it tends to junk up the system over time.

microsoft-2016-uwp-logo.png

UWP, on the other hand, is slimmer, contained, and authenticated with keys. This is theoretically easier to maintain, but at the expense of user control and freedom: the freedom to develop and install software anonymously and without oversight. The first iteration was Windows RT, which was basically iOS, right down to the “you cannot ship a web browser unless it is a reskin of Internet Explorer (substitute Safari in iOS' case)” and “content above ESRB M and PEGI 16 is banned from the OS” levels of control.

Since then, content restrictions have been loosened, sideloading has been added, and so forth. That said, unlike the technical hurdles of Win32, there's nothing to prevent Microsoft from, in the future, saying “Okay, we have enough software for lock-in. Sideloading is being removed in Windows 10 version 2810” or something. I doubt that the current administration wants to do this, especially executives like Phil Spencer, but their unwillingness to make it impossible in the future is frustrating. This could be a few clauses in the EULA that make it easy for users to sue Microsoft if a feature is removed, and/or some chunks of code that break compatibility if certain openness features are stripped out.

Some people complain that he wasn't this concerned about iOS, but he has already said that supporting it was a bad decision in hindsight. Apple waved a shiny device around, and it took a few years for developers to think, “Wait a minute, what did I just sign away?” iOS is, indeed, just as bad as UWP could become, if not worse.

Remember folks, once you build a tool for censorship, they will come. They may also have very different beliefs about what should be allowed or disallowed than you do. This is scary stuff, albeit based on good intentions.

That rant aside, Microsoft's Advanced Technology Group (ATG) has produced a fork of Unreal Engine 4, which builds UWP content. It is based upon Unreal Engine 4.12, and they have apparently merged changes up to version 4.12.5. This makes sense, of course, because that version is required to use Visual Studio 2015 Update 3.

If you want to make a game in Unreal Engine 4 for the UWP platform, then you might be able to use Microsoft's version. That said, it is provided without warranty, and it may have introduced bugs that Epic Games will probably not help with. I somehow doubt that Microsoft will keep a dedicated team merging every upstream fix going forward, and I don't think this will change Tim's mind (although concrete limitations that guarantee openness might...). Use it at your own risk, I guess, especially if you don't mind potentially missing out on whatever is added in 4.13 and beyond (unless you merge it yourself).

The fork is available on Microsoft's ATG GitHub, with lots of uppercase typing.

Rumor: Nintendo NX Uses NVIDIA Tegra... Something

Subject: Graphics Cards, Systems, Mobile | July 27, 2016 - 07:58 PM |
Tagged: nvidia, Nintendo, nintendo nx, tegra, Tegra X1, tegra x2, pascal, maxwell

Okay, so there are a few rumors going around, mostly from Eurogamer / DigitalFoundry, claiming that the Nintendo NX is going to be powered by an NVIDIA Tegra system on a chip (SoC). DigitalFoundry, specifically, cites multiple sources who claim that their Nintendo NX development kits integrate the Tegra X1 design, as seen in the Google Pixel C. That said, the Nintendo NX release date, March 2017, does leave enough time for a switch to NVIDIA's upcoming Pascal-based Tegra design, rumored to be called the Tegra X2, which uses NVIDIA's custom-designed Denver CPU cores.

Preamble aside, here's what I think about the whole situation.

First, the Tegra X1 would be quite a small jump in performance over the WiiU. The WiiU's GPU, “Latte”, has 320 shaders clocked at 550 MHz, and it was based on AMD's TeraScale 1 architecture. Because these stream processors can retire a single-cycle multiply-add on floating point values, you can get the chip's FLOP rating by multiplying 320 shaders by 550,000,000 cycles per second and 2 operations per cycle (one multiply and one add). This yields 352 GFLOPs. The Tegra X1 is rated at 512 GFLOPs, which is just 45% more than the previous generation.
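For the curious, here's that arithmetic as a quick Python sketch (numbers taken straight from the paragraph above; treat it as back-of-the-envelope math, not a benchmark):

    # Peak FLOPs estimate: shaders x clock x 2 ops/cycle (multiply + add).
    def peak_gflops(shaders, clock_mhz, ops_per_cycle=2):
        return shaders * (clock_mhz * 1e6) * ops_per_cycle / 1e9

    latte_gflops = peak_gflops(320, 550)   # WiiU "Latte" GPU
    tegra_x1_gflops = 512                  # NVIDIA's rated figure

    print(f"WiiU Latte: {latte_gflops:.0f} GFLOPs")                         # 352
    print(f"Tegra X1 uplift: {(tegra_x1_gflops / latte_gflops - 1):.0%}")   # ~45%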

This is a very small jump, unless they do end up using Pascal-based graphics. If they stick with the Tegra X1, you will likely see a launch selection of games ported from the WiiU and a few games that use whatever new feature Nintendo has. One rumor is that the console will be kind-of like the WiiU controller, with detachable controllers. If this is true, it's a bit unclear how this will affect games in a revolutionary way, but we might be missing a key bit of info that ties it all together.

nvidia-2016-shieldx1consoles.png

As for the choice of ARM over x86... well. First, this obviously allows Nintendo to choose from a wider selection of manufacturers than AMD, Intel, and VIA, and certainly more than IBM with their previous, PowerPC-based chips. It also jibes with Nintendo's interest in the mobile market. They joined The Khronos Group, and I'm pretty sure they've said they are interested in Vulkan, which is becoming the high-end graphics API for Android, supported by Google and others. That said, I'm not sure how many engineers specialize in ARM optimization, as most mobile platforms try to abstract the instruction set away as much as possible, but this could be Nintendo's attempt to settle on a standardized instruction set, and they opted for mobile over PC (versus Sony and especially Microsoft, who want consoles to follow high-end gaming on the desktop).

Why? Well that would just be speculating on speculation about speculation. I'll stop here.

Subject: Storage
Manufacturer: DeepSpar

Introduction, Packaging, and Internals

Introduction

Being a bit of a storage nut, I have run into my share of failed and/or corrupted hard drives over the years, and I have used many different data recovery tools to try to get that data back. Thankfully, I now employ a backup strategy that should minimize the need for such tools, but there will always be fresh data on a drive that went down before the most recent backup, or a neighbor or friend who did not have a backup at all.

I’ve got a few data recovery pieces in the cooker, but this one will be focusing on ‘physical data recovery’ from drives with physically damaged or degraded sectors and/or heads. I’m not talking about so-called ‘logical data recovery’, where the drive is physically fine but has suffered some corruption that makes the data inaccessible by normal means (undelete programs also fall into this category). There are plenty of ‘hard drive recovery’ apps out there, and most if not all of them claim seemingly miraculous results on your physically failing hard drive. While there are absolutely success stories out there (most plastered all over testimonial pages at those respective sites), one must take those with an appropriate grain of salt.

Someone who just got their data back with a <$100 program is going to be very vocal about it, while those who had their drive permanently fail during the process are likely to go cry quietly in a corner while saving up for a clean-room capable service to repair their drive and attempt to get their stuff back. I'll focus more on the exact issues with using software tools for hardware problems later in this article, but for now, surely there has to be some way to attempt these first few steps of data recovery without resorting to software tools that can potentially cause more damage?

DSC01505-.jpg

Well now there is. Enter the RapidSpar, made by DeepSpar, who hope this little box can bridge the gap between dedicated data recovery operations and home users risking software-based hardware recoveries. DeepSpar is best known for making advanced tools used by big data recovery operations, so they know a thing or two about this stuff. I could go on and on here, but I’m going to save that for after the intro page. For now let’s get into what comes in the box.

Note: In this video, I read the MFT prior to performing RapidNebula Analysis. It's optimal to reverse those steps. More on that later in this article.

Read on for our full review of the RapidSpar!

MSI's Custom RX 480 Gaming Graphics Cards Coming Mid August

Subject: Graphics Cards | July 27, 2016 - 03:43 AM |
Tagged: Twin Frozr VI, Radeon RX 480, polaris 10, msi

MSI is jumping full force into custom RX 480s with its upcoming Radeon RX 480 Gaming series, which includes factory overclocked Gaming X and (slightly lower end) Gaming cards in both 8GB and 4GB SKUs. All four of the new graphics cards use a custom 8-phase power design, a custom PCB with Military Class 4 components, and, perhaps most importantly, a beefy Twin Frozr VI cooler. The overclockable cards will be available by the middle of next month.

Specifically, MSI will be launching the RX 480 GAMING X 8G and RX 480 GAMING X 4G with 8GB and 4GB of GDDR5 memory, respectively. These cards will have solid metal backplates and the highest factory overclocks. Below them sit the RX 480 GAMING 8G and RX 480 GAMING 4G, with the same Twin Frozr VI cooler but sans backplate and with lower out-of-the-box clockspeeds. Aside from those aspects, the cards all appear to offer identical features.

MSI Radeon RX 480 Gaming X 8GB.png

The new Gaming series graphics cards feature 8-pin PCI-E power connectors and an 8-phase power design on a custom PCB, which should allow users to push Polaris 10 quite a bit without overheating the VRMs. The Twin Frozr VI cooler uses a nickel-plated copper base plate, three 8mm copper heatpipes, a large aluminum fin array, and two large fans that spin down while the GPU temperature is under 60°C. The heatsink makes the card both wider and longer (276mm) than the reference design, but MSI claims the added size buys 22% better cooling performance. Further, RGB LEDs backlight the MSI logo on the side of the card, and the metal backplate on the X variants should help dissipate slightly more heat than on the non-X models.

All four Polaris-based graphics cards offer a single DL-DVI, two HDMI, and two DisplayPort video outputs. The inclusion of two HDMI ports rather than three DisplayPorts is reportedly meant to better support VR users, who can connect an HDMI monitor and a headset at the same time without adapters.

                          RX 480          RX 480          RX 480          RX 480          RX 480
                          Gaming X 8G     Gaming X 4G     Gaming 8G       Gaming 4G       Reference
GPU Clock (OC Mode)       1316 MHz        1316 MHz        1292 MHz        1292 MHz        1266 MHz
GPU Clock (Gaming Mode)   1303 MHz        1303 MHz        1279 MHz        1279 MHz        1266 MHz
GPU Clock (Silent Mode)   1266 MHz        1266 MHz        1266 MHz        1266 MHz        1266 MHz
Memory                    8GB GDDR5       4GB GDDR5       8GB GDDR5       4GB GDDR5       8GB or 4GB GDDR5
Memory Clock              8100 MHz        8100 MHz        8000 MHz (?)    8000 MHz (?)    8000 MHz
Backplate                 Yes             Yes             No              No              No
Card Length               276mm           276mm           276mm           276mm           241mm
MSRP                      ?               ?               ?               ?               $249 (8GB) / $199 (4GB)

The GAMING and GAMING X RX 480s offer two tiers of factory overclocks that users can select using MSI's software utility. The non-X GAMING cards will clock up to 1279 MHz in Gaming Mode and 1292 MHz in OC Mode; in Silent Mode they run at the same 1266 MHz boost as AMD's reference design. Meanwhile, the RX 480 GAMING X cards will boost up to 1303 MHz in Gaming Mode and 1316 MHz in OC Mode. In addition, MSI is bumping the memory up to 8100 MHz in OC Mode, which is a nice surprise! MSI's announcement is not exactly clear, but it appears that the non-X versions do not have factory overclocked memory, remaining at the reference 8000 MHz.

Pricing has not yet been announced, but the cards will reportedly be on sale worldwide by mid August.

I am looking forward to seeing how far reviewers and users are able to push Polaris 10 with the Twin Frozr cooler and 8-phase VRMs!

Source: Guru3D

AMD Introduces Radeon Pro SSG: A Professional GPU Paired With Low Latency Flash Storage (Updated)

Subject: Graphics Cards | July 27, 2016 - 01:56 AM |
Tagged: solid state, radeon pro, Polaris, gpgpu, amd

UPDATE (July 27th, 1am ET):  More information on the Radeon Pro SSG has surfaced since the original article. According to AnandTech, the prototype graphics card actually uses an AMD Fiji GPU. The Fiji GPU is paired with onboard PCI-E based storage through the same PEX8747 bridge chip used in the Radeon Pro Duo. Storage is handled by two PCI-E 3.0 x4 M.2 slots that can accommodate up to 1TB of NAND flash. As I mentioned below, keeping the storage on the graphics card vastly reduces latency by cutting the number of hops and avoiding requests out to the rest of the system. AMD had more numbers to share following their demo, however.

From the 8K video editing demo, the dual Samsung 950 Pro PCI-E SSDs (in RAID 0) on board the Radeon Pro SSG hit 4GB/s while scrubbing through the video. The same video source stored on a Samsung 950 Pro attached to the motherboard managed only 900MB/s. In theory, reaching out to system RAM still has raw throughput advantages (DDR4 @ 3200 MHz on a Haswell-E platform is theoretically capable of 62 GB/s reads and 47 GB/s writes), though that would be bottlenecked by the graphics card having to go over the PCI-E 3.0 x16 link and its maximum of 15.754 GB/s. Of course, if you can hold the data in (much smaller) GDDR5 (300+ GB/s, depending on clocks and memory bus width) or HBM (1TB/s) and never reach out to any other storage tier, that's ideal, but it is not always feasible, especially in the HPC world.
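That 15.754 GB/s figure is just the PCI-E 3.0 x16 link math; here's a quick sketch of where it comes from, alongside the throughput figures quoted above (an illustration, not a benchmark):

    # PCI-E 3.0: 8 GT/s per lane, 128b/130b encoding, 16 lanes.
    link_gbps = 16 * 8.0 * (128 / 130)   # usable gigabits per second
    link_gbytes = link_gbps / 8          # ~15.754 GB/s
    print(f"PCI-E 3.0 x16 ceiling: {link_gbytes:.3f} GB/s")

    # Storage tiers from the article, fastest to slowest (GB/s):
    tiers = {
        "HBM (on package)": 1000,                 # ~1 TB/s, per the article
        "GDDR5 (wide bus)": 300,                  # 300+ GB/s depending on config
        "System RAM via PCI-E x16": link_gbytes,
        "On-card RAID 0 SSDs (SSG demo)": 4.0,
        "Motherboard M.2 SSD (demo)": 0.9,
    }
    for tier, bandwidth in tiers.items():
        print(f"{tier:32s} {bandwidth:9.3f} GB/s")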

However, having onboard storage on the same board as the GPU, only a single "hop" away, vastly reduces latency and offers much more total capacity than most systems have in DDR3 or DDR4. In essence, the solid state storage on the graphics card (which developers will need to specifically code for) acts as a massive cache for streaming in assets for data sets and workloads that are highly latency sensitive. This storage is not the fastest, but it is the next best thing for holding active data outside of GDDR5/X or HBM. For throughput-intensive workloads, reaching out to system RAM will be better. Finally, reaching out to system-attached storage should be the last resort, as it will be the slowest and most latent. Several commenters mentioned using a PCI-E based SSD in a second slot on the motherboard, accessed much like GPUs in CrossFire communicate now (DMA over the PCI-E bus), which is an interesting idea that I had not considered.

Per my understanding of the situation, the onboard SSG storage would still be slightly more beneficial than that setup, but it would get you close (I am assuming the GPU can directly interact with and request data from the SSD controller rather than relying on the system CPU to do that work, but I may well be mistaken; I will have to look into this further and ask the experts heh). On the prototype Radeon Pro SSG, the M.2 slots can actually be seen as drives by the system and OS, so it is essentially acting as if a PCI-E adapter card in a motherboard slot were holding those drives, though that may not be the case should this product actually hit the market. I do question the choice of Fiji rather than Polaris, but it sounds like they built the prototype off of the Radeon Pro Duo platform, so I suppose it makes sense there.

Hopefully the final versions in 2017 or beyond use at least Vega though :).

Alongside the launch of the new Radeon Pro WX (workstation) series graphics cards, AMD teased an interesting new Radeon Pro product: the Radeon Pro SSG. This new professional graphics card pairs a Polaris GPU with up to a terabyte of onboard solid state storage and seeks to solve one of the biggest hurdles in GPGPU performance when dealing with extremely large datasets: latency.

AMD Radeon Pro SSG.jpg

One of the core focuses of AMD's HSA (Heterogeneous System Architecture) is unified memory: the ability of various processors (CPU, GPU, specialized co-processors, et al.) to work together efficiently by accessing and manipulating data in the same memory pool without copying it back and forth between CPU-accessible and GPU-accessible memory. The Radeon Pro SSG does not fully realize this idea (it is more of a sidestep), but it does push performance further. It does not eliminate the need to copy data to the GPU before the GPU can work on it, but once copied, the GPU can work on data stored in what AMD describes as a one terabyte frame buffer. This memory will be solid state and very fast, but more importantly, the GPU will be able to get at the data with much lower latency than with previous methods. AMD says the solid state storage (likely NAND, though they have not said) links to the GPU over a dedicated PCI-E bus. I suppose that if you can't bring the GPU to the data, you bring the data to the GPU!

Considering AMD's previous memory champ – the FirePro W9100 – maxed out at 32GB of GDDR5, the teased Radeon Pro SSG with its 1TB of purportedly low latency onboard flash storage opens up a slew of new possibilities for researchers and professionals in media, medical, and scientific roles working with massive datasets for imaging, creation, and simulations! I expect that there are many professionals out there eager to get their hands on one of these cards! They will be able to as well, thanks to a beta program launching shortly, so long as they have $10,000 for the hardware!

AMD gave a couple of examples in their PR of the potential benefits of its "solid state graphics," including imaging a patient's beating heart in real time, allowing medical professionals to examine and spot issues as early as possible, and editing and scrubbing through 8K video in real time at 90 FPS versus 17 FPS with current offerings. On the scientific side of things, being able to load entire models into the new graphics memory (not as low latency as GDDR5 or HBM, certainly) will be a boon, as will getting data sets as close to the GPU as possible in servers running GPU-accelerated databases that power websites accessed by millions of users.

It is not exactly the HSA future I have been waiting for ever so impatiently, but it is a nice advancement and an intriguing idea, and I am very curious to see how well it pans out and whether developers and researchers will truly take advantage of it to further their projects. I suspect something like this could be great for deep learning tasks as well (such as powering the "clouds" behind self-driving cars, perhaps).

Stay tuned to PC Perspective for more information as it develops.

This is definitely a product that I will be watching and I hope that it does well. I am curious what Nvidia's and Intel's plans are here as well! What are your thoughts on AMD's "Solid State Graphics" card? All hype or something promising?

Source: AMD

PC Perspective Hardware Workshop 2016 @ Quakecon 2016 in Dallas, TX

Subject: Editorial, General Tech | July 26, 2016 - 03:45 PM |
Tagged: workshop, video, streaming, quakecon, prizes, live, giveaways

It is that time of year again: another installment of the PC Perspective Hardware Workshop! We will be presenting on the main stage at Quakecon 2016 being held in Dallas, TX August 4-7.

webbanner-horizontal.jpg
 

Main Stage - Quakecon 2016

Saturday, August 6th, 10:00am CT

Our thanks go out to the organizers of Quakecon for allowing us and our partners to put together a show that we are proud of every year.  We love giving back to the community of enthusiasts and gamers that drive us to do what we do!  Get ready for two hours of prizes, games, and raffles; the chances are pretty good that you'll take something home with you - really, they are pretty good!

Our primary partners at the event are those that threw in for our ability to host the workshop at Quakecon and for the hundreds of shirts we have ready to toss out!  Our thanks to NVIDIA, Logitech, and ASUS!!

nvidia_logo_small.png

LogitechG_horz_RGB_cyan_MD.png

Logo-Asus.png

Live Streaming

If you can't make it to the workshop - don't worry!  You can still watch the workshop live on our live page as we stream it over one of several online services.  Just remember this URL: http://pcper.com/live and you will find your way!

 

PC Perspective LIVE Podcast and Meetup

We are planning on hosting any fans that want to watch us record our weekly PC Perspective Podcast (http://pcper.com/podcast) on Wednesday or Thursday evening in our meeting room at the Hilton Anatole.  I don't yet know exactly WHEN or WHERE that will be, but I will update this page accordingly when we get the details.  You might also consider following me on Twitter for updates on that status as well.

After the recording, we'll hop over to the hotel bar for a couple drinks and hang out.  We have room for at least 50-60 people to join us in the room, but we'll still be recording if just ONE of you shows up.  :)

Prize List (will continue to grow!)

Continue reading to see the list of prizes for the workshop!!!

Author:
Subject: Storage
Manufacturer: Angelbird

Cool your jets

Cool Your Jets: Can the Angelbird Wings PX1 Heatsink-Equipped PCIe Adapter Tame M.2 SSD Temps?

Introduction to the Angelbird Wings PX1

PCIe-based M.2 storage has been one of the more exciting topics in the PC hardware market during the past year. With tremendous performance packed into a small design no larger than a stick of chewing gum, PCIe M.2 SSDs open up new levels of storage performance and flexibility for both mobile and desktop computing. But these tiny, powerful drives can heat up significantly under load, to the point where thermal performance throttling was a critical concern when the drives first began to hit the market.

While thermal throttling is less of a concern for the latest generation of NVMe M.2 SSDs, Austrian SSD and accessories firm Angelbird wants to squash any possibility of performance-killing heat with its Wings line of PCIe SSD adapters. The company's first Wings-branded product is the PX1, an x4 PCIe adapter that houses an M.2 SSD in a custom-designed heatsink.

wings-px1-1.jpg

Angelbird claims that its aluminum-coated copper-core heatsink design can lower the operating temperature of hot M.2 SSDs like the Samsung 950 Pro, thereby preventing thermal throttling. But at a list price of $75, this potential protection doesn't come cheap. We set out to test the PX1's design to see if Angelbird's claims about reduced temperatures and increased performance hold true.

PX1 Design & Installation

PC Perspective's Allyn Malventano was impressed with the build quality of Angelbird's products when he reviewed its "wrk" series of SSDs in late 2014. Our initial impression of the PX1 revealed that Angelbird hasn't lost a step in that regard during the intervening years.

wings-px1-2.jpg

The PX1 features an attractive black design and removable heatsink, which is affixed to the PCB via six hex screws. A single M-key M.2 port resides in the center of the adapter, with mounting holes to accommodate 2230, 2242, 2260, 2280, and 22110-length drives.

Continue reading our review of the Angelbird Wings PX1 Heatsink PCIe Adapter!

Crucial Expands MX300 SATA SSD Lineup, Adds 1TB, 525GB, 275GB M.2 2280

Subject: Storage | July 26, 2016 - 02:34 PM |
Tagged: MX300, micron, M.2, crucial, 525GB, 275GB, 1TB

We reviewed the Crucial MX300 750GB SSD a few months back. It was a good drive that tested well, and thanks to its IMFT 3D NAND, it came in at a very competitive price point. Today Crucial has rearranged that lineup a bit:

mx300-full-ssd-intro-image.png

The following capacities are being added to the MX300 lineup:

  • 1TB:    $260 ($0.26/GB)
  • 525GB:  $130 ($0.25/GB)
  • 275GB:  $70 ($0.25/GB)
  • 275GB M.2 2280

The new capacities will be what is sold moving forward (starting 'late August'), with the 750GB model shifting to 'Limited Edition' status. That $0.25/GB carrying all the way down to the lowest capacity is significant, as we typically see higher cost per GB at smaller capacities, where fixed controller/PCB/packaging costs have more impact. Without that coming into play, we get a nearly 300GB SSD for just $70!
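Here's the quick cost-per-GB check behind those numbers (MSRPs from the list above; street pricing will of course drift):

    # $/GB across the new MX300 capacities (advertised capacities, rounded).
    lineup = [("1TB", 1000, 260), ("525GB", 525, 130), ("275GB", 275, 70)]
    for name, gigabytes, dollars in lineup:
        print(f"MX300 {name:>6}: ${dollars:>3} -> ${dollars / gigabytes:.3f}/GB")
    # 1TB:   $0.260/GB
    # 525GB: $0.248/GB
    # 275GB: $0.255/GB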

Specs and expected performance remain the same across all capacities, save a dip in random read performance on the 275GB models, mainly due to the reduced die count / parallelism. We'll take a look at these new capacities just as soon as samples arrive.

Full press blast appears after the break.

Source: Crucial

Gigabyte Rolls Out New GTX 1060 Graphics Cards

Subject: Graphics Cards | July 26, 2016 - 12:36 AM |
Tagged: windforce, pascal, gigabyte, GeForce GTX 1060

In a recent press release, Gigabyte announced that it will soon be adding four new GTX 1060 graphics cards to its lineup. The new cards feature Windforce series coolers and custom PCBs. At the high end is the GTX 1060 G1 Gaming, followed by the GTX 1060 Windforce OC, the small form factor friendly GTX 1060 Mini ITX OC, and the budget-minded GTX 1060 D5. While the company has yet to divulge pricing or availability, the cards should be out within the next month or two.

All of the upcoming cards pair a custom PCB and power phase setup with Gigabyte's Windforce air cooler, dual fan on most models and single fan on the Mini ITX card. Unfortunately, exact specifications, including core and memory clocks, are unknown for all of the cards except the high end model. The coolers use dual composite heatpipes that directly touch the GPU, pulling heat away to be dissipated by an aluminum fin stack. The fans are 90mm on all of the cards, with the dual fan models spinning each fan in the opposite direction of the other. The cards feature 6GB of GDDR5 memory as well as DVI, HDMI, and DisplayPort video outputs; the Mini ITX OC card (which is only 17cm long), for example, features two DVI, one HDMI, and one DisplayPort output.

Gigabyte GTX 1060 G1 Gaming.jpg

More information is available on the GTX 1060 G1 Gaming. This card is a dual slot, dual fan design with a 6+1 power phase (reference is 3+1) powered by a single 8-pin power connector. The fans are shrouded, and there is a metal backplate to aid in stability and cooling. Gigabyte claims that its "GPU Gauntlet" binning process ensures users get heavily overclockable chips by sorting for the most promising silicon.

The 16nm Pascal GPU is factory overclocked to 1847 MHz boost and 1620 MHz base clockspeeds in OC mode and 1809 MHz boost and 1594 MHz base in gaming mode. Users will be able to use the company's Xtreme Engine software to dial up the overclocks further as well as mess with the RGB LEDs. For comparison, the reference clockspeeds are 1708 MHz boost and 1506 MHz base. Gigabyte has left the 6GB of GDDR5 memory untouched at 8008 MHz.
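For context, here's what those factory clocks work out to against reference, as a quick sketch:

    # Gigabyte GTX 1060 G1 Gaming clocks vs. NVIDIA reference (MHz, from above).
    reference = {"base": 1506, "boost": 1708}
    modes = {"OC mode": {"base": 1620, "boost": 1847},
             "Gaming mode": {"base": 1594, "boost": 1809}}

    for mode, clocks in modes.items():
        for kind, mhz in clocks.items():
            bump = (mhz / reference[kind] - 1) * 100
            print(f"{mode}, {kind}: {mhz} MHz (+{bump:.1f}% over reference)")
    # OC mode boost lands at 1847 MHz, roughly an 8% factory bump.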

Gigabyte GTX 1060 Mini ITX OC.jpg

The other cards should have similarly decent factory overclocks, but it is hard to say exactly what they will be out of the box. While I am not a big fan of the aesthetics, the Windforce coolers should let users push Pascal fairly far (for air cooling).

I would guess that the Gigabyte GTX 1060 G1 Gaming will MSRP for just above $300 while the lower end cards will be around $260 (the Mini ITX OC may be at a slight premium above that).

What do you think about Gigabyte's new cards?

Source: Guru3D

AMD FireRender Technology Now ProRender, Part of GPUOpen

Subject: General Tech, Graphics Cards | July 25, 2016 - 09:48 PM |
Tagged: siggraph 2016, Siggraph, capsaicin, amd, 3D rendering

At their Capsaicin Siggraph event tonight, AMD announced that the rendering engine previously previewed as FireRender is officially launching as AMD Radeon ProRender, and that it is becoming open source as part of AMD's GPUOpen initiative.

capsaicin.PNG

From AMD's press release:

AMD today announced its powerful physically-based rendering engine is becoming open source, giving developers access to the source code.

As part of GPUOpen, Radeon ProRender (formerly previewed as AMD FireRender) enables creators to bring ideas to life through high-performance applications and workflows enhanced by photorealistic rendering.

GPUOpen is an AMD initiative designed to assist developers in creating ground-breaking games, professional graphics applications and GPU computing applications with much greater performance and lifelike experiences, at no cost and using open development tools and software.

Unlike other renderers, Radeon ProRender can simultaneously use and balance the compute capabilities of multiple GPUs and CPUs – on the same system, at the same time – and deliver state-of-the-art GPU acceleration to produce rapid, accurate results.

Radeon ProRender plugins are available today for many popular 3D content creation applications, including Autodesk® 3ds Max®, SOLIDWORKS by Dassault Systèmes and Rhino®, with Autodesk® Maya® coming soon. Radeon ProRender works across Windows®, OS X and Linux®, and supports AMD GPUs, CPUs and APUs as well as those of other vendors.

Source: AMD

AMD Announces Radeon Pro WX Series Graphics Cards

Subject: Graphics Cards | July 25, 2016 - 09:30 PM |
Tagged: siggraph 2016, Siggraph, Radeon Pro WX Series, Radeon Pro WX 7100, Radeon Pro WX 5100, Radeon Pro WX 4100, radeon, capsaicin, amd

AMD has announced new Polaris-based professional graphics cards at Siggraph 2016 this evening, with the Radeon Pro WX 4100, WX 5100, and WX 7100 GPUs.

Radeon Pro WX 7100.jpg

The AMD Radeon Pro WX 7100 GPU (Image credit: AMD)

From AMD's official press release:

AMD today unveils powerful new solutions to address modern content creation and engineering: the new Radeon Pro WX Series of professional graphics cards, which harness the award-winning Polaris architecture and is designed to deliver exceptional capabilities for the immersive computing era.

Radeon Pro solutions and the new Radeon Pro WX Series of professional graphics cards represent a fundamentally different approach for professionals rooted in a commitment to open, non-proprietary software and performant, feature-rich hardware that empowers people to create the “art of the impossible”.

The new Radeon Pro WX series graphics cards deliver on the promise of this new era of creation, are optimized for open source software, and are designed for creative professionals and those pushing the boundaries of science, technology and engineering.

Radeon Pro WX 5100.jpg

The AMD Radeon Pro WX 5100 GPU (Image credit: AMD)

Radeon Pro WX Series professional graphics cards are designed to address specific demands of the modern content creation era:

  • Radeon Pro WX 7100 GPU is capable of handling demanding design engineering and media and entertainment workflows and is AMD’s most affordable workstation solution for professional VR content creation.
  • Radeon Pro WX 5100 GPU is the ideal solution for product development, powered by the impending game-engine revolution in design visualization.
  • Radeon Pro WX 4100 GPU provides great performance in a half-height design, finally bringing the mid-range application performance demanded by CAD professionals to small form factor (SFF) workstations.

Radeon Pro WX 4100.jpg

The AMD Radeon Pro WX 4100 GPU (Image credit: AMD)

A breakdown of the known specifications for these new GPUs was provided by AnandTech in their report on the WX Series:

WX_Series_Comparison.PNG

Chart credit: AnandTech

Source: AMD

Sapphire's Custom Polaris 10-Based Nitro+ RX 480 Coming Next Month

Subject: Graphics Cards | July 25, 2016 - 08:49 PM |
Tagged: sapphire, Radeon RX 480, polaris 10, nitro+, nitro

More details on custom graphics cards based around AMD's RX 480 reference GPU are starting to trickle out now that the official shipping dates are approaching (it appears many of the cards will be available next month). Sapphire is the latest AIB to provide all the juicy details on its custom Nitro+ Radeon RX 480 card!

The Nitro+ RX 480 is a dual slot card with a Dual X cooler that features two 95mm quick connect fans, vented aluminum backplate, black shroud, and aluminum heatsink. The graphics card is powered by a single 8-pin PCI-E power connector which should be enough to allow overclocking headroom and alleviate any worries over pulling too much amperage over the PEG slot on the motherboard.

Sapphire NitroPlus RX 480.png

Sapphire is using high end capacitors and Black Diamond 4 chokes. The twin fan cooler supports "quick connect," which lets users easily pull the fans out for cleaning or replacement (which seems like a neat feature considering how dusty my PC can get (it doesn't help that my corgi loves to lay against my tower heh)). RGB LEDs illuminate the Sapphire logo and fans.

Of course, all of the LEDs can be controlled by software or a button on the back of the card: they can change colors in response to temperature or fan speed, cycle through all colors, or be turned off completely.

Sapphire NitroPlus RX 480 Backplate.png

The company also uses an aluminum backplate with a nice design to it (nice to see the one part of the card most people will actually see getting some attention for once heh) as well as vents that allow hot air to escape. Air is pulled into the card by the two fans and pushed out the back of the card and up through the backplate. I am interested to see how much this design actually improves cooling.

Rear IO includes a single DL-DVI output along with two DisplayPort 1.4 and two HDMI 2.0b video outputs. This configuration results in a smaller vent on the bracket, but it also lets you hook up an HDMI monitor and a VR headset at the same time. While there are five connectors, only four may be used simultaneously.

While Sapphire did not touch the memory, it did factory overclock the Polaris 10 GPU to up to 1342 MHz boost. Compared to the reference boost clockspeed of 1266 MHz, this is a decent jump, especially for an out-of-the-box factory overclock. Users should be able to push the GPU further, though exactly how far remains to be seen and will depend on the cooler and the quality of their specific chip.

Sapphire's Nitro+ RX 480 will reportedly be available as soon as next week in both 4GB and 8GB models. The 4GB will run $220 while the 8GB card will cost $269. If these numbers hold true, that is only about a $20 premium over the reference designs, which certainly seems like a great value all things considered! I am looking forward to the reviews of this slick looking card, and I hope that the performance and build quality are up to snuff!

Also read: The AMD Radeon RX 480 Review - The Polaris Promise

Source: Sapphire

Checking out the MSI GTX 1070 Gaming Z

Subject: Graphics Cards | July 25, 2016 - 06:51 PM |
Tagged: msi, gtx 1070, Gaming Z, Twin Frozr VI, factory overclocked

The Tech Report had a chance to see what MSI's Twin Frozr VI cooler can do for a GTX 1070; they have just wrapped up a review of the Gaming Z edition of that NVIDIA card.  It comes with a respectable frequency bump when you enable OC mode: 1657 MHz base and 1860 MHz boost.  When they tested it under load, the GPU stayed below 70C, so there should be room to push the card further.  Check out the full benchmark suite in their review.

card.jpg

"Nvidia's second Pascal graphics card, the GeForce GTX 1070, aims to set a new bar for graphics performance in the $379-and-up price range. We put MSI's GeForce GTX 1070 Gaming Z card through the wringer to see how a more affordable Pascal card performs."


Seasonic Flagship PRIME 750W, when they upgrade they mean business

Subject: Cases and Cooling | July 25, 2016 - 04:53 PM |
Tagged: modular psu, Seasonic PRIME, 750w

It has been about a year since Seasonic released a brand new PSU; they do not tend to flood the market with incremental upgrades to their PSU families.  While this may hurt their business a little, as newer users do not see frequent reviews or advertisements, long term enthusiasts take note when a new PSU arrives.  This fully modular PSU offers a single 12V rail capable of delivering 744W @ 62A along with six 6+2 PCIe power cables; it even still has a floppy connector for those desperate times when you need to pull one out.  [H]ard|OCP strapped the PSU to their torture bench, and this Seasonic unit came out with a Gold medal.  Check out the full review here.

1468806612uZQTmOQBWa_2_8_l.jpg

"Seasonic has never been big on marketing-speak. Outside of its impressive specifications, and a list of features, this is all it has to say. "The creation of the PRIME Series is a renewed testimony of Seasonic's determination to push the limits of power supply design in every aspect." Let's see if that is true, or the shortest sales pitch ever."


Source: [H]ard|OCP

SIGGRAPH 2016 -- NVIDIA Announces Pascal Quadro GPUs: Quadro P5000 and Quadro P6000

Subject: Graphics Cards | July 25, 2016 - 04:48 PM |
Tagged: siggraph 2016, Siggraph, quadro, nvidia

SIGGRAPH is the big, professional graphics event of the year, bringing together tens of thousands of attendees. They include engineers from Adobe, AMD, Blender, Disney (including ILM, Pixar, etc.), NVIDIA, The Khronos Group, and many, many others. Not only are new products announced, but many technologies are explained in detail, down to the specific algorithms that are used, so colleagues can advance their own research and share in kind.

But new products will indeed be announced.

nvidia-2016-Quadro_P6000_7440.jpg

The NVIDIA Quadro P6000

NVIDIA, having just launched a few Pascal GPUs to other markets, decided to announce updates to their Quadro line at the event. Two cards have been added, the Quadro P5000 and the Quadro P6000, both at the top end of the product stack. Interestingly, both use GDDR5X memory, meaning that neither will be based on the GP100 design, which is built around HBM2 memory.

nvidia-2016-Quadro_P5000_7460.jpg

The NVIDIA Quadro P5000

The lower end of the two, the Quadro P5000, should look somewhat familiar to our readers. Exact clocks are not specified, but the chip has 2560 CUDA cores. This is identical to the GTX 1080, but with twice the memory: 16GB of GDDR5X.

Above it sits the Quadro P6000. This chip has 3840 CUDA cores, paired with 24GB of GDDR5X. We have not seen a GPU with exactly these specifications before. It has the same number of FP32 shaders as a fully unlocked GP100 die, but it doesn't have HBM2 memory. On the other hand, the new Titan X uses GP102, combining 3584 CUDA cores with GDDR5X memory, although only 12GB of it. This means that the Quadro P6000 has 256 more (single-precision) shader units than the Titan X, but otherwise very similar specifications.

Both graphics cards have four DisplayPort 1.4 connectors, as well as a single DVI output. These connectors can be used to drive up to four 4K 120Hz monitors, or four 5K 60Hz ones. It would be nice if all five connections could be used at once, but what can you do.

nvidia-2016-irayvr.png

Pascal has other benefits for professional users, too. For instance, Simultaneous Multi-Projection (SMP) is used in VR applications to essentially double the GPU's geometry processing ability. NVIDIA will be pushing professional VR at SIGGRAPH this year, also launching Iray VR. This uses light fields, rendered on devices like the DGX-1, with its eight GP100 chips connected by NVLink, to provide accurately lit environments. This is particularly useful for architectural visualization.

No price is given for either of these cards, but they will launch in October of this year.

Source: NVIDIA

SIGGRAPH 2016: NVIDIA Takes Over mental ray for Maya

Subject: General Tech | July 25, 2016 - 04:47 PM |
Tagged: nvidia, mental ray, maya, 3D rendering

NVIDIA purchased Mental Images, the German software developer that makes the mental ray renderer, all the way back in 2007. It has been bundled with every copy of Maya for a very long time now. In fact, my license of Maya 8, which I purchased back in like, 2006, came with mental ray in both plug-in format, and stand-alone.

nvidia-2016-mentalray-benchmark.png

Interestingly, even though nearly a decade has passed since NVIDIA's acquisition, Autodesk has been the middle-person that end-users dealt with. This will end soon, as NVIDIA announced, at SIGGRAPH, that they will “be serving end users directly” with their mental ray for Maya plug-in. The new plug-in will show results directly in the viewport, starting at low quality and increasing until the view changes. They are obviously not the first company to do this, with Cycles in Blender being a good example, but I would expect that it is a welcome feature for users.

nvidia-2016-mentalray-benchmarknums.png

Benchmark results are by NVIDIA

At the same time, they are also announcing GI-Next. This will speed up global illumination in mental ray, and it will also reduce the number of options required to tune the results to just a single quality slider, making it easier for artists to pick up. One of their benchmarks shows a 26-fold increase in performance, although most of that can be attributed to GPU acceleration from a pair of GM200 Quadro cards. CPU-only tests of the same scene show a 4x increase, though, which is still pretty good.
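As a rough sanity check on those numbers: if the speedups compose multiplicatively (an assumption on my part; NVIDIA's benchmark slide doesn't break it down that way), the split looks like this:

    # Attributing the claimed GI-Next speedup (figures from the benchmark above).
    total_speedup = 26.0     # GPU-accelerated, pair of GM200 Quadro cards
    cpu_only_speedup = 4.0   # same scene, algorithmic improvement alone
    gpu_contribution = total_speedup / cpu_only_speedup
    print(f"~{cpu_only_speedup:.0f}x from the algorithm, "
          f"~{gpu_contribution:.1f}x more from GPU acceleration")  # ~4x and ~6.5x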

The new version of mental ray for Maya is expected to ship in September, although it has been in an open beta (for existing Maya users) since February. They do say that “pricing and policies will be announced closer to availability”, though, so we'll have to wait and see how different the licensing structure will be. Currently, Maya ships with a few licenses of mental ray out of the box, and has for quite some time.

Source: NVIDIA

You can run your RX 480 on Linux kernel 4.7

Subject: General Tech | July 25, 2016 - 01:12 PM |
Tagged: linux, kernel 4.7, security, rx 480, LoadPin

For now we are awaiting the benchmarks, but with the release of this new kernel, Linux users will be able to run AMD's new RX 480.  The new kernel also contains a new security feature called LoadPin, which ensures that kernel-loaded files (modules, firmware, and so on) all come from the same file system, in an attempt to maintain security without requiring each file to be individually signed.  There were also improvements to network drivers, along with several other changes, which The Inquirer covers in their own unique manner.
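If you're curious whether your kernel was built with it, LoadPin shows up as CONFIG_SECURITY_LOADPIN in the kernel config. Here's a small sketch that checks for it (paths vary by distro, and /proc/config.gz only exists when the kernel was built with CONFIG_IKCONFIG_PROC):

    # Check the running kernel's build config for LoadPin support.
    import gzip, pathlib

    def kernel_config_text():
        proc_cfg = pathlib.Path("/proc/config.gz")
        if proc_cfg.exists():
            return gzip.open(proc_cfg, "rt").read()
        # Fall back to the installed config for the running kernel, if present.
        release = pathlib.Path("/proc/sys/kernel/osrelease").read_text().strip()
        boot_cfg = pathlib.Path(f"/boot/config-{release}")
        return boot_cfg.read_text() if boot_cfg.exists() else ""

    print("LoadPin built in:", "CONFIG_SECURITY_LOADPIN=y" in kernel_config_text())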

linuxkernel.jpg

"Despite it being two weeks since RC7, the final patch wasn't all that big and much of it is trivial one- and few-liners. There's a couple of network drivers that got a bit more loving."


Source: The Inquirer
Manufacturer: XSPC

Introduction and Technical Specifications

Introduction

02-raystorm-pro-1-2.jpg

Courtesy of XSPC

03-raystorm-pro-3-2-2.jpg

Courtesy of XSPC

04-pro-white.jpg

Courtesy of XSPC

XSPC is a well established name in the enthusiast cooling market, offering a wide range of custom cooling components and kits. Their newest CPU waterblock, the Raystorm Pro, offers a new look and an optimized design compared to their last generation Raystorm CPU waterblock. The block features an all-copper design with a dual metal/acrylic hold-down plate for illumination around the outside edge of the block. The Raystorm Pro is compatible with all current CPU sockets with the correct mounting kit.

Continue reading our review of the XSPC Raystorm Pro CPU waterblock!

Subject: General Tech
Manufacturer: Microsoft

Make Sure You Understand Before the Deadline

I'm fairly sure that any of our readers who want Windows 10 have already gone through the process to get it, and the rest have made it their mission to block it at all costs (or they don't use Windows).

microsoft-ballmer-goodbye.jpg

Regardless, there has been quite a bit of misunderstanding over the last couple of years, so it's better to explain it now than a week from now. Upgrading to Windows 10 will not destroy your original Windows 7 or Windows 8.x license. What you are doing is using that license to register your machine with Windows 10, which Microsoft will create a digital entitlement for. That digital entitlement will be good “for the supported lifetime of the Windows 10-enabled device”.

There are three misconceptions that keep recurring from the above paragraph.

First, “the supported lifetime of the Windows 10-enabled device” doesn't mean that Microsoft will deactivate Windows 10 on you. Instead, it apparently means that Microsoft will continue to update Windows 10, and require that users will keep the OS somewhat up to date (especially the Home edition). If an old or weird piece of hardware or software in your device becomes incompatible with that update, even if it is critical for the device to function, then Microsoft is allowing itself to shrug and say “that sucks”. There's plenty of room for legitimate complaints about this, and Microsoft's recent pattern of weakened QA and support, but the specific complaint that Microsoft is just trying to charge you down the line? False.

Second, even though I already stated it earlier in this post, I want to be clear: you can still go back to Windows 7 or Windows 8.x. Microsoft is granting the Windows 10 license for the Windows 7 or Windows 8.x device in addition to the original Windows 7 or Windows 8.x license granted to it. The upgrade process even leaves the old OS on your drive for a month, allowing the user to roll back through a recovery process. I've heard people say that, occasionally, this process can screw a few things up. It's a good idea to manage your own backup before upgrading, and/or plan on re-installing Windows 7 or 8.x the old fashioned way.

This brings us to the third misconception: you can re-install Windows 10 later!

If you upgrade to Windows 10, decide that you're better with Windows 7 or 8.x for a while, but decide to upgrade again in a few years, then your machine (assuming the hardware didn't change enough to look like a new device) will still use that Windows 10 entitlement that was granted to you on your first, free upgrade. You will need to download the current Windows 10 image from Microsoft's website, but, when you install it, you should be able to just input an empty license key (if they still ask for it by that point) and Windows 10 will pull down validation from your old activation.

If you have decided to avoid Windows 10, but based that decision on the above three, incorrect points? You now have the tools to make an informed decision before time runs out. Upgrading to Windows 10 (update: wait until it verifies that it successfully activated!) and rolling back is annoying, and it could be a hassle if it doesn't go cleanly (unless you go super-safe and back up ahead of time), but it might save you some money in the future.

On the other hand, if you don't want Windows 10, and never want Windows 10, then Microsoft will apparently stop asking Windows 7 and Windows 8.x users starting on the 29th, give or take.

NVIDIA Release 368.95 Hotfix Driver for DPC Latency

Subject: Graphics Cards | July 22, 2016 - 05:51 PM |
Tagged: pascal, nvidia, graphics drivers

Turns out that Pascal-based GPUs suffer from DPC (Deferred Procedure Call) latency issues, and there has been an ongoing discussion about it for a little over a month. This is not an area that I know a lot about, but DPC is a Windows mechanism that schedules driver work by priority, providing regular windows of time for sound and video devices to update. It can be stalled by long-running driver code, though, which can manifest as stutter, audio hitches, and other performance issues. With a 10-series GeForce device installed, users have reported that this latency increases about 10-20x, from ~20us to ~300-400us, and it can climb to 1000us or more under load. (8333us is ~1 whole frame at 120FPS.)
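To put those microsecond figures in frame-time terms, here's a quick sketch (pure arithmetic, using the numbers above):

    # How much of a frame budget does a DPC stall consume?
    def frame_budget_us(fps):
        return 1_000_000 / fps   # e.g. 120 FPS -> ~8333 us per frame

    for stall_us in (20, 400, 1000):   # healthy, reported, under load
        share = stall_us / frame_budget_us(120) * 100
        print(f"{stall_us:>5} us stall = {share:5.2f}% of a 120 FPS frame")
    # A ~1000 us stall eats about 12% of the frame budget, which you can feel.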

nvidia-2015-bandaid.png

NVIDIA has acknowledged the issue and, just yesterday, released an optional hotfix. Upon installing the driver, the system felt a lot more responsive, though that could just be psychosomatic. I ran LatencyMon (DPCLat isn't compatible with Windows 8.x or Windows 10) before and after, and the latency measurement did drop significantly. Before the update, the NVIDIA driver was consistently the largest source of latency, spiking into the thousands of microseconds. After the update, it was hidden by other drivers for the first night, although today it seems to have a few spikes again. That said, Microsoft's networking driver is also spiking in the ~200-300us range, so a good portion of it might be the sad state of my current OS install. I've been meaning to do a good system wipe for a while...

nvidia-2016-hotfix-pascaldpc.png

Measurement taken after the hotfix, while running Spotify.
That said, my computer's a mess right now.

That said, some of the post-hotfix driver spikes are reaching ~570us (mostly when I play music on Spotify through my Blue Yeti Pro). Also, Photoshop CC 2015 started complaining about graphics acceleration issues after installing the hotfix, so only install it if you're actually experiencing problems. As for the latency, if it's not just my machine, NVIDIA might still have some work to do.

It does feel a lot better, though.

Source: NVIDIA