
PC Perspective Hardware Workshop 2016 @ Quakecon 2016 in Dallas, TX

Subject: Editorial, General Tech | July 26, 2016 - 03:45 PM |
Tagged: workshop, video, streaming, quakecon, prizes, live, giveaways

It is that time of year again: another installment of the PC Perspective Hardware Workshop! We will be presenting on the main stage at Quakecon 2016, held in Dallas, TX, August 4-7.

webbanner-horizontal.jpg
 

Main Stage - Quakecon 2016

Saturday, August 6th, 10:00am CT

Our thanks go out to the organizers of Quakecon for allowing us and our partners to put together a show that we are proud of every year.  We love giving back to the community of enthusiasts and gamers that drive us to do what we do!  Get ready for 2 hours of prizes, games, and raffles - the chances are pretty good that you'll take something out with you. Really, they are pretty good!

Our primary partners at the event are those that threw in for our ability to host the workshop at Quakecon and for the hundreds of shirts we have ready to toss out!  Our thanks to NVIDIA, Logitech, and ASUS!!

nvidia_logo_small.png

LogitechG_horz_RGB_cyan_MD.png

Logo-Asus.png

Live Streaming

If you can't make it to the workshop - don't worry!  You can still watch the workshop live on our live page as we stream it over one of several online services.  Just remember this URL: http://pcper.com/live and you will find your way!

 

PC Perspective LIVE Podcast and Meetup

We are planning on hosting any fans that want to watch us record our weekly PC Perspective Podcast (http://pcper.com/podcast) on Wednesday or Thursday evening in our meeting room at the Hilton Anatole.  I don't yet know exactly WHEN or WHERE it will be held, but I will update this page accordingly on Wednesday, July 16th when we get the data.  You might also consider following me on Twitter for updates on that status.

After the recording, we'll hop over to the hotel bar for a couple drinks and hang out.  We have room for at least 50-60 people to join us in the room, but we'll still be recording if just ONE of you shows up.  :)

Prize List (will continue to grow!)

Continue reading to see the list of prizes for the workshop!!!

Rumor: Nintendo NX Uses NVIDIA Tegra... Something

Subject: Graphics Cards, Systems, Mobile | July 27, 2016 - 07:58 PM |
Tagged: nvidia, Nintendo, nintendo nx, tegra, Tegra X1, tegra x2, pascal, maxwell

Okay, so there are a few rumors going around, mostly from Eurogamer / DigitalFoundry, that claim the Nintendo NX is going to be powered by an NVIDIA Tegra system on a chip (SoC). DigitalFoundry, specifically, cites multiple sources who claim that their Nintendo NX development kits integrate the Tegra X1 design, as seen in the Google Pixel C. That said, the Nintendo NX release date, March 2017, does provide enough time for Nintendo to switch to NVIDIA's upcoming Pascal Tegra design, rumored to be called the Tegra X2, which uses NVIDIA's custom-designed Denver CPU cores.

Preamble aside, here's what I think about the whole situation.

First, the Tegra X1 would be quite a small jump in performance over the WiiU. The WiiU's GPU, “Latte”, has 320 shaders clocked at 550 MHz, and it was based on AMD's TeraScale 1 architecture. Because these stream processors have single-cycle multiply-add for floating point values, you can get its FLOP rating by multiplying 320 shaders, 550,000,000 cycles per second, and 2 operations per clock (one multiply and one add). This yields 352 GFLOPs. The Tegra X1 is rated at 512 GFLOPs, which is just 45% more than the previous generation.
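As a sanity check, here is a minimal sketch of that back-of-the-envelope math in Python; the shader count, clock, and rated Tegra X1 figure are just the numbers quoted above, not official spec-sheet values.

```python
# Rough FLOP comparison between the WiiU's "Latte" GPU and the Tegra X1,
# using the figures quoted in the paragraph above.
shaders = 320                  # TeraScale 1 stream processors in "Latte"
clock_hz = 550_000_000         # 550 MHz core clock
ops_per_clock = 2              # single-cycle multiply-add = 2 FLOPs per shader

latte_gflops = shaders * clock_hz * ops_per_clock / 1e9
tegra_x1_gflops = 512          # NVIDIA's rated FP32 throughput

print(f"Latte:    {latte_gflops:.0f} GFLOPS")                          # ~352
print(f"Tegra X1: {tegra_x1_gflops} GFLOPS")
print(f"Increase: {(tegra_x1_gflops / latte_gflops - 1) * 100:.0f}%")  # ~45%
```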

This is a very tiny jump, unless they indeed use Pascal-based graphics. If this is the case, you will likely see a launch selection of games ported from WiiU and a few games that use whatever new feature Nintendo has. One rumor is that the console will be kind-of like the WiiU controller, with detachable controllers. If this is true, it's a bit unclear how this will affect games in a revolutionary way, but we might be missing a key bit of info that ties it all together.

nvidia-2016-shieldx1consoles.png

As for the choice of ARM over x86... well. First, this obviously allows Nintendo to choose from a wider selection of manufacturers than AMD, Intel, and VIA, and certainly more than IBM with their previous, Power-based chips. It also jibes with Nintendo's interest in the mobile market. They joined The Khronos Group, and I'm pretty sure they've said they are interested in Vulkan, which is becoming the high-end graphics API for Android, supported by Google and others. That said, I'm not sure how many engineers exist that specialize in ARM optimization, as most mobile platforms try to abstract this as much as possible. This could be Nintendo's attempt to settle on a standardized instruction set, and they opted for mobile over PC (versus Sony and especially Microsoft, who want consoles to follow high-end gaming on the desktop).

Why? Well that would just be speculating on speculation about speculation. I'll stop here.

SIGGRAPH 2016: NVIDIA Takes Over mental ray for Maya

Subject: General Tech | July 25, 2016 - 04:47 PM |
Tagged: nvidia, mental ray, maya, 3D rendering

NVIDIA purchased Mental Images, the German software developer that makes the mental ray renderer, all the way back in 2007. It has been bundled with every copy of Maya for a very long time now. In fact, my license of Maya 8, which I purchased back in like, 2006, came with mental ray in both plug-in and stand-alone form.

nvidia-2016-mentalray-benchmark.png

Interestingly, even though nearly a decade has passed since NVIDIA's acquisition, Autodesk has been the middle-person that end-users dealt with. This will end soon, as NVIDIA announced, at SIGGRAPH, that they will “be serving end users directly” with their mental ray for Maya plug-in. The new plug-in will show results directly in the viewport, starting at low quality and increasing until the view changes. They are obviously not the first company to do this, with Cycles in Blender being a good example, but I would expect that it is a welcome feature for users.

nvidia-2016-mentalray-benchmarknums.png

Benchmark results are by NVIDIA

At the same time, they are also announcing GI-Next. This will speed up global illumination in mental ray, and it will also reduce the number of options required to tune the results to just a single quality slider, making it easier for artists to pick up. One of their benchmarks shows a 26-fold increase in performance, although most of that can be attributed to GPU acceleration from a pair of GM200 Quadro cards. CPU-only tests of the same scene show a 4x increase, though, which is still pretty good.
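Out of curiosity, here is a quick sketch of what those two figures imply about the GPU contribution, assuming (my assumption, not NVIDIA's claim) that the 26x and 4x speedups were measured against the same pre-GI-Next CPU baseline:

```python
# Implied GPU-vs-CPU ratio for NVIDIA's GI-Next benchmark, assuming both
# speedup figures share the same pre-GI-Next CPU baseline (an assumption).
total_speedup = 26       # GI-Next plus a pair of GM200 Quadro cards
cpu_only_speedup = 4     # GI-Next on the CPU alone

print(f"GPU path vs new CPU path: ~{total_speedup / cpu_only_speedup:.1f}x")  # ~6.5x
```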

The new version of mental ray for Maya is expected to ship in September, although it has been in an open beta (for existing Maya users) since February. They do say that “pricing and policies will be announced closer to availability” though, so we'll need to see, then, how different the licensing structure will be. Currently, Maya ships with a few licenses of mental ray out of the box, and has for quite some time.

Source: NVIDIA

Crucial Expands MX300 SATA SSD Lineup, Adds 1TB, 525GB, 275GB M.2 2280

Subject: Storage | July 26, 2016 - 02:34 PM |
Tagged: MX300, micron, M.2, crucial, 525GB, 275GB, 1TB

We reviewed the Crucial MX300 750GB SSD a few months back. It was a good drive that tested well, and thanks to its IMFT 3D NAND, it came in at a very competitive price point. Today Crucial has rearranged that lineup a bit:

mx300-full-ssd-intro-image.png

The following capacities are being added to the MX300 lineup:

  • 1TB - $260 ($0.26/GB)
  • 525GB - $130 ($0.25/GB)
  • 275GB - $70 ($0.25/GB)
  • 275GB M.2 2280

The new capacities will be what is sold moving forward (starting 'late August'), with the 750GB model shifting to 'Limited Edition' status. That $0.25/GB carrying all the way down to the lowest capacity is significant, as we typically see higher cost/GB at smaller capacities, where controller/PCB/packaging costs have more impact. Without that coming into play, we get a nearly 300GB SSD coming in at $70!
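For the curious, the cost-per-gigabyte figures work out as follows; this is a minimal sketch using the MSRPs quoted above and nominal capacities (actual usable capacity will differ slightly):

```python
# Cost-per-GB check for the new MX300 capacities, using the MSRPs listed above
# and nominal capacities (a rough sketch, not Crucial's own math).
lineup = [("1TB", 1000, 260), ("525GB", 525, 130), ("275GB", 275, 70)]

for name, capacity_gb, price_usd in lineup:
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
# 1TB: $0.26/GB, 525GB: $0.25/GB, 275GB: $0.25/GB
```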

Specs and expected performance remain the same across all capacities, save a dip in random read performance on the 275GB models, mainly due to the reduced die count / parallelism. We'll take a look at these new capacities just as soon as samples arrive.

Full press blast appears after the break.

Source: Crucial

Checking out the MSI GTX 1070 Gaming Z

Subject: Graphics Cards | July 25, 2016 - 06:51 PM |
Tagged: msi, gtx 1070, Gaming Z, Twin Frozr VI, factory overclocked

The Tech Report had a chance to see what the MSI Twin Frozr VI cooler can do for a GTX 1070; they have just wrapped up a review of the Gaming Z edition of that NVIDIA card.  It comes with a respectable frequency bump when you enable OC mode: 1657 MHz base and 1860 MHz boost.  When they tested it under load the GPU stayed below 70C, so there should be room to push the card further.  Check out the full benchmark suite in their full review.

card.jpg

"Nvidia's second Pascal graphics card, the GeForce GTX 1070, aims to set a new bar for graphics performance in the $379-and-up price range. We put MSI's GeForce GTX 1070 Gaming Z card through the wringer to see how a more affordable Pascal card performs."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Manufacturer: XSPC

Introduction and Technical Specifications

Introduction

02-raystorm-pro-1-2.jpg

Courtesy of XSPC

03-raystorm-pro-3-2-2.jpg

Courtesy of XSPC

04-pro-white.jpg

Courtesy of XSPC

XSPC is a well established name in the enthusiast cooling market, offering a wide range of custom cooling components and kits. Their newest CPU waterblock, the Raystorm Pro, offers a new look and an optimized design compared to their last generation Raystorm CPU waterblock. The block features an all copper design with a dual metal / acrylic hold down plate for illumination around the outside edge of the block. The Raystorm Pro is compatible with all current CPU sockets with the correct mounting kit.

Continue reading our review of the XSPC Raystorm Pro CPU waterblock!

Seasonic Flagship PRIME 750W, when they upgrade they mean business

Subject: Cases and Cooling | July 25, 2016 - 04:53 PM |
Tagged: modular psu, Seasonic PRIME, 750w

It has been about a year since Seasonic released a brand new PSU, as they do not tend to flood the market with incremental upgrades to their PSU families.  While this may hurt their business a little since newer users do not see reviews or advertisements frequently, long-term enthusiasts take note when a new PSU arrives.  This fully modular PSU offers a single 12V rail capable of delivering 744W @ 62A and six 6+2 PCIe power cables; it even still has a floppy connector for those desperate times when you need to pull one out.  [H]ard|OCP strapped the PSU to their torture bench and this Seasonic unit came out with a Gold medal.  Check out the full review here.

1468806612uZQTmOQBWa_2_8_l.jpg

"Seasonic has never been big on marketing-speak. Outside of its impressive specifications, and a list of features, this is all it has to say. "The creation of the PRIME Series is a renewed testimony of Seasonic's determination to push the limits of power supply design in every aspect." Let's see if that is true, or the shortest sales pitch ever."

Here are some more Cases & Cooling reviews from around the web:

CASES & COOLING

Source: [H]ard|OCP

Gigabyte Rolls Out New GTX 1060 Graphics Cards

Subject: Graphics Cards | July 26, 2016 - 12:36 AM |
Tagged: windforce, pascal, gigabyte, GeForce GTX 1060

In a recent press release, Gigabyte announced that it will soon be adding four new GTX 1060 graphics cards to its lineup. The new cards feature Windforce series coolers and custom PCBs. At the high end is the GTX 1060 G1 Gaming followed by the GTX 1060 Windforce OC, small form factor friendly GTX 1060 Mini ITX OC, and the budget minded GTX 1060 D5. While the company has yet to divulge pricing or availability, the cards should be out within the next month or two.

All of the upcoming cards use a custom PCB and power phase setup paired with Gigabyte's dual – or in the case of the Mini ITX card – single fan Windforce air cooler. Unfortunately, exact specifications for all of the cards except the high end model are unknown, including core and memory clocks. The coolers use dual composite heatpipes that directly touch the GPU to pull heat away, which is then dissipated by an aluminum fin stack. The fans are 90mm on all of the cards, with the dual fan models having each fan spin in the opposite direction of the other. The cards feature 6GB of GDDR5 memory as well as DVI, HDMI, and DisplayPort video outputs. For example, the Mini ITX OC graphics card (which is only 17cm long) features two DVI, one HDMI, and one DP output.

Gigabyte GTX 1060 G1 Gaming.jpg

More information is available on the GTX 1060 G1 Gaming. This card is a dual slot dual fan design with a 6+1 power phase (reference is 3+1) powered by a single 8-pin power connector. The fans are shrouded and there is a metal backplate to aid in stability and cooling. Gigabyte claims that its "GPU Gauntlet" technology ensures users get heavily overclockable chips thanks to sorting and using the most promising chips.

The 16nm Pascal GPU is factory overclocked to 1847 MHz boost and 1620 MHz base clockspeeds in OC mode and 1809 MHz boost and 1594 MHz base in gaming mode. Users will be able to use the company's Xtreme Engine software to dial up the overclocks further as well as mess with the RGB LEDs. For comparison, the reference clockspeeds are 1708 MHz boost and 1506 MHz base. Gigabyte has left the 6GB of GDDR5 memory untouched at 8008 MHz.
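To put those numbers in perspective, here is a minimal sketch comparing the G1 Gaming's OC-mode clocks to the reference clocks quoted above:

```python
# Percentage bump of the G1 Gaming's OC-mode clocks over NVIDIA's reference GTX 1060.
reference = {"base": 1506, "boost": 1708}   # MHz, reference GTX 1060
oc_mode   = {"base": 1620, "boost": 1847}   # MHz, G1 Gaming in OC mode

for key, ref_mhz in reference.items():
    bump_pct = (oc_mode[key] / ref_mhz - 1) * 100
    print(f"{key}: {oc_mode[key]} MHz (+{bump_pct:.1f}% over reference)")
# base: +7.6%, boost: +8.1%
```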

Gigabyte GTX 1060 Mini ITX OC.jpg

The other cards should have similarly decent factory overclocks, but it is hard to say exactly what they will be out of the box. While I am not a big fan of the aesthetics, the Windforce coolers should let users push Pascal fairly far (for air cooling).

I would guess that the Gigabyte GTX 1060 G1 Gaming will MSRP for just above $300 while the lower end cards will be around $260 (the Mini ITX OC may be at a slight premium above that).

What do you think about Gigabyte's new cards?

Source: Guru3D

Microsoft Converts Unreal Engine 4 to UWP

Subject: General Tech | July 27, 2016 - 08:47 PM |
Tagged: microsoft, epic games, unreal engine, unreal engine 4, ue4, uwp

The head of Epic Games, Tim Sweeney, doesn't like UWP too much, at least as it exists today (and for noble reasons). He will not support the new software (app) platform unless Microsoft makes some clear changes that guarantee perpetual openness. There really isn't anything, technically or legally, to prevent Microsoft (or an entity with authority over Microsoft, like governments, activist groups who petition government, and so forth) from undoing their changes going forward. If Microsoft drops support for Win32, apart from applications that are converted using Project Centennial or something, their catalog would be tiny.

Ridiculously tiny.

SteamOS would kick its butt levels of tiny, let alone OSX, Android, and countless others.

As a result, Microsoft keeps it around, despite its unruliness. Functionality that is required by legitimate software makes it difficult to prevent malware, and, even without an infection, it can make the system just get junked up over time.

microsoft-2016-uwp-logo.png

UWP, on the other hand, is slimmer, contained, and authenticated with keys. This is theoretically easier to maintain, but at the expense of user control and freedom; freedom to develop and install software anonymously and without oversight. The first iteration was with Windows RT, which was basically iOS, right down to the "you cannot ship a web browser unless it is a reskin of Internet Explorer (substitute Safari in iOS' case)" and "content above ESRB M and PEGI 16 is banned from the OS" levels of control.

Since then, content guidelines have increased, sideloading has been added, and so forth. That said, unlike the technical hurdles of Win32, there's nothing to prevent Microsoft from, in the future, saying "Okay, we have enough software for lock-in. Sideloading is being removed in Windows 10 version 2810" or something. I doubt that the current administration wants to do this, especially executives like Phil Spencer, but their unwillingness to make it impossible to be done in the future is frustrating. This could be a few clauses in the EULA that make it easy for users to sue Microsoft if a feature is changed, and/or some chunks of code that break compatibility if certain openness features are removed.

Some people complain that he wasn't this concerned about iOS, but he already said that it was a bad decision in hindsight. Apple waved a shiny device around, and it took a few years for developers to think “Wait a minute, what did I just sign away?” iOS is, indeed, just as bad as UWP could turn into, if not worse.

Remember folks, once you build a tool for censorship, they will come. They may also have very different beliefs about what should be allowed or disallowed than you do. This is scary stuff, albeit based on good intentions.

That rant aside, Microsoft's Advanced Technology Group (ATG) has produced a fork of Unreal Engine 4, which builds UWP content. It is based upon Unreal Engine 4.12, and they have apparently merged changes up to version 4.12.5. This makes sense, of course, because that version is required to use Visual Studio 2015 Update 3.

If you want to make a game in Unreal Engine 4 for the UWP platform, then you might be able to use Microsoft's version. That said, it is provided without warranty, and there might be some bugs that cropped up, which Epic Games will probably not help with. I somehow doubt that Microsoft will have a dedicated team that merges all fixes going forward, and I don't think this will change Tim's mind (although concrete limitations that guarantee openness might...). Use at your own risk, I guess, especially if you don't care about potentially missing out on whatever is added for 4.13 and on (unless you add it yourself).

The fork is available on Microsoft's ATG GitHub, with lots of uppercase typing.

SIGGRAPH 2016 -- NVIDIA Announces Pascal Quadro GPUs: Quadro P5000 and Quadro P6000

Subject: Graphics Cards | July 25, 2016 - 04:48 PM |
Tagged: siggraph 2016, Siggraph, quadro, nvidia

SIGGRAPH is the big, professional graphics event of the year, bringing together tens of thousands of attendees. They include engineers from Adobe, AMD, Blender, Disney (including ILM, Pixar, etc.), NVIDIA, The Khronos Group, and many, many others. Not only are new products announced, but many technologies are explained in detail, down to the specific algorithms that are used, so colleagues can advance their own research and share in kind.

But new products will indeed be announced.

nvidia-2016-Quadro_P6000_7440.jpg

The NVIDIA Quadro P6000

NVIDIA, having just launched a few Pascal GPUs to other markets, decided to announce updates to their Quadro line at the event. Two cards have been added, the Quadro P5000 and the Quadro P6000, both at the top end of the product stack. Interestingly, both use GDDR5X memory, meaning that neither will be based on the GP100 design, which is built around HBM2 memory.

nvidia-2016-Quadro_P5000_7460.jpg

The NVIDIA Quadro P5000

The lower end one, the Quadro P5000, should look somewhat familiar to our readers. Exact clocks are not specified, but the chip has 2560 CUDA cores. This is identical to the GTX 1080, but with twice the memory: 16GB of GDDR5X.

Above it sits the Quadro P6000. This chip has 3840 CUDA cores, paired with 24GB of GDDR5X. We have not seen a GPU with exactly these specifications before. It has the same number of FP32 shaders as a fully unlocked GP100 die, but it doesn't have HBM2 memory. On the other hand, the new Titan X uses GP102, combining 3584 CUDA cores with GDDR5X memory, although only 12GB of it. This means that the Quadro P6000 has 256 more (single-precision) shader units than the Titan X, but otherwise very similar specifications.

Both graphics cards have four DisplayPort 1.4 connectors, as well as a single DVI output. These five connectors can be used to drive up to four 4K 120Hz monitors, or four 5K 60Hz ones. It would be nice if all five connections could be used at once, but what can you do.

nvidia-2016-irayvr.png

Pascal has other benefits for professional users, too. For instance, Simultaneous Multi-Projection (SMP) is used in VR applications to essentially double the GPU's geometry processing ability. NVIDIA will be pushing professional VR at SIGGRAPH this year, also launching Iray VR. This uses light fields, rendered on devices like the DGX-1, with its eight GP100 chips connected by NVLink, to provide accurately lit environments. This is particularly useful for architectural visualization.

No price is given for either of these cards, but they will launch in October of this year.

Source: NVIDIA