Moore's Law Is Fifty Years Old!

Subject: General Tech, Graphics Cards, Processors | April 19, 2015 - 02:08 PM |
Tagged: moores law, Intel

While he was the director of research and development at Fairchild Semiconductor, Gordon E. Moore predicted that the number of components in an integrated circuit would double every year. He later revised that cadence to every two years; you will occasionally hear people cite eighteen months as well, but I am not sure who derived that figure. A few years after making the prediction, he went on to found Intel with Robert Noyce, a company that now spends tens of billions of dollars annually to keep up with the prophecy.

Intel-logo.png

It has worked out for the most part, but we have been running into physical limits over the last few years. One major issue is that, with process technology dipping into the single- and low double-digit nanometers, we are running out of physical atoms to manipulate. The spacing between silicon atoms in a solid at room temperature is roughly 0.5nm, so a feature on a 14nm process spans only about 28 atoms, give or take a few in rounding.
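
As a rough back-of-the-envelope check of that figure (my own sketch, not from the original article), the arithmetic assuming a ~0.5nm atomic spacing looks like this:

```python
# Rough sketch: how many silicon atoms span a process feature.
# Assumes the ~0.5nm atomic spacing quoted above; real lattice geometry is
# more complicated, so treat this as an order-of-magnitude estimate only.

ATOM_SPACING_NM = 0.5

def atoms_across(feature_size_nm: float) -> float:
    """Approximate number of atoms spanning a feature of the given width."""
    return feature_size_nm / ATOM_SPACING_NM

for node in (28, 14, 7, 5):
    print(f"{node}nm feature: ~{atoms_across(node):.0f} atoms across")
# 14nm works out to roughly 28 atoms, matching the estimate above.
```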

Josh has a good editorial that discusses this implication with a focus on GPUs.

It has been a good fifty years since the start of Moore's Law, and plans are already being developed for how to cope with the eventual end of silicon lithography process shrinks. We will probably transition to smaller atoms and molecules first, and later consider alternative technologies like photonic crystals, which route light at frequencies in the hundreds of terahertz through a series of waveguides that make up an integrated circuit. Another interesting thought: will these technologies fall in line with Moore's Law in some way?

Source: Tom Merritt

NVIDIA Released 350.12 WHQL and AMD Released 15.4 Beta for Grand Theft Auto V

Subject: Graphics Cards | April 14, 2015 - 01:27 AM |
Tagged: nvidia, amd, GTA5

Grand Theft Auto V launched today at around midnight GMT worldwide. This corresponded to 7PM EDT for those of us in North America. Well, add a little time for Steam to unlock the title and a bit longer for Rockstar to get enough servers online. One thing you did not need to wait for was new video card drivers. Both AMD and NVIDIA have day-one drivers that provide support.

rockstart-gtav_trevor2_1920x1080.jpg

You can get the NVIDIA drivers at their landing page

You can get the AMD drivers at their release notes

Personally, I ran the game for about a half hour on Windows 10 (Build 10049) with a GeForce GTX 670. Since these drivers are not for the pre-release operating system, I tried running it on 349.90 to see how it performed before upgrading. Surprisingly, it seems to be okay (apart from a tree that was flickering in and out of existence during a cut-scene). I would definitely update my drivers if they were available and supported, but I'm glad that it seems to be playable even on Windows 10.

Source: AMD

90-some percent of the performance for 70 percent of the price; PowerColor's PCS+ R9 290X

Subject: Graphics Cards | April 6, 2015 - 04:54 PM |
Tagged: factory overclocked, powercolor pcs+, R9 290X

The lowest priced GTX 980 on Amazon is currently $530, while the PowerColor PCS+ R9 290X is $380, about 72% of the GTX 980's price. The performance [H]ard|OCP saw after overclocking the 290X was much closer than that price gap suggests: in some games it even matched the GTX 980, though it was usually about 5-10% slower, which makes it quite obvious which card is the better value. The GTX 970 is a different story; you can find a card for $310 and its performance is only slightly behind the 290X, although the 290X takes a larger lead at higher resolutions. Read through the review carefully, as the performance delta and overall smoothness vary from game to game, but unless you like paying to brag about a handful of extra frames, the 970 and 290X are the cards offering the best bang for your buck.

1427061263kWJDj1MtDq_1_1.jpg

"Today we examine what value the PowerColor PCS+ R9 290X holds compared to overclocked GeForce GTX 970. AMD's Radeon R9 290X pricing has dropped considerably since launch and constitutes a great value and competition for the GeForce GTX 970. At $350 this may be an excellent value compared to the competition."


Source: [H]ard|OCP

Saved so much using Linux you can afford a Titan X?

Subject: Graphics Cards | March 27, 2015 - 04:02 PM |
Tagged: gtx titan x, linux, nvidia

Perhaps somewhere out there is a Linux user who wants a TITAN X, and if so, they will like the results of Phoronix's testing. The card works straight out of the box with the latest 346.47 driver as well as the 349.12 beta; if you want to use Nouveau, however, don't buy this card. The TITAN X did not win any awards for power efficiency, but in OpenCL tests, synthetic OpenGL benchmarks and Unigine on Linux it walked away a clear winner. Phoronix, along with many others, hopes that AMD is working on an updated Linux driver to accompany the upcoming 300 series of cards, which would help them be more competitive on open source systems.

If you are sick of TITAN X reviews by now, just skip to their 22 GPU performance roundup of Metro Redux.

image.php_.jpg

"Last week NVIDIA unveiled the GeForce GTX TITAN X during their annual GPU Tech Conference. Of course, all of the major reviews at launch were under Windows and thus largely focused on the Direct3D performance. Now that our review sample arrived this week, I've spent the past few days hitting the TITAN X hard under Linux with various OpenGL and OpenCL workloads compared to other NVIDIA and AMD hardware on the binary Linux drivers."


Source: Phoronix

NVIDIA Quadro M6000 Announced

Subject: Graphics Cards | March 23, 2015 - 07:30 AM |
Tagged: quadro, nvidia, m6000, gm200

Alongside the Titan X, NVIDIA has announced the Quadro M6000. In terms of hardware, they are basically the same component: 12 GB of GDDR5 on a 384-bit memory bus, 3072 CUDA cores, and double precision performance reduced to 1/32nd of the single precision rate. The memory, but not the cache, is capable of ECC (error correction) for enterprises that do not want a stray photon to mess up their computations. That might be the only hardware difference between it and the Titan X.

nvidia-quadro-m6000.jpg

Compared to other Quadro cards, it loses some double precision performance as mentioned earlier, but it will be an upgrade in single precision (FP32). The add-in board connects to the power supply with just a single eight-pin plug. Technically, with its 250W TDP, it is slightly over the combined rating of one eight-pin PCIe connector and the slot, but NVIDIA told Anandtech that they're confident it won't matter for the card's intended systems.
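
For context, here is my own back-of-the-envelope sketch of that power budget (not from the source), assuming the standard 75W slot and 150W eight-pin connector ratings:

```python
# Rough PCIe power budget check for a 250W card.
# Assumes the standard ratings: 75W from the slot, 150W per eight-pin connector.

SLOT_W = 75
EIGHT_PIN_W = 150

tdp = 250
budget = SLOT_W + EIGHT_PIN_W   # 225W available by spec
overage = tdp - budget          # 25W over the nominal rating
print(f"Budget: {budget}W, TDP: {tdp}W, over by {max(0, overage)}W")
```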

That is probably true, but I wouldn't put it past someone to do something spiteful given recent events.

The lack of double precision performance (IEEE 754 FP64) could be disappointing for some. While NVIDIA certainly knows their own market better than I do, I was under the impression that a common workstation configuration for GPU compute was a Quadro driving a few Teslas (such as two of these). It would seem weird for such a high-end GPU to be paired with Teslas that differ so significantly in FP64 compute. I wonder what this means for the Tesla line, and whether we will see a variant of Maxwell with a large boost in 64-bit performance, or if that line will be in an awkward place until Pascal.

Or maybe not. Perhaps NVIDIA is planning to launch products based on an unannounced, FP64-focused architecture, letting the Quadro handle the heavy FP32 calculations while customers add co-processors according to their double precision needs. It's an interesting thought, but if that were the plan you would expect it to have been announced at GTC, since the two product lines would more or less need to be sold together. For now this is just me speculating, and honestly I doubt it.

Pricing and availability are not currently known, except that the card is coming “soon”.

Source: Anandtech

A TITANic roundup of GPUs

Subject: Graphics Cards | March 19, 2015 - 03:20 PM |
Tagged: titan x, nvidia, gtx titan x, gm200, geforce, 4k

You have read Ryan's review of the $999 behemoth from NVIDIA, and now you can see what other reviewers think of the card. [H]ard|OCP tested it against the GTX 980, which shares the same cooler and is every bit as long as the TITAN X. Along the way they found a use for the 12GB of VRAM, as both Watch_Dogs and Far Cry 4 used over 7GB of memory when tested at 4K resolution, though the frame rates were not really playable; you will need at least two TITAN Xs to pull that off. They will be revisiting this card in the future, providing more tests of a card with incredible performance and an even more incredible price.

14265930473m7LV4iNyQ_1_6_l.jpg

"The TITAN X video card has 12GB of VRAM, not 11.5GB, 50% more streaming units, 50% more texture units, and 50% more CUDA cores than the current GTX 980 flagship NVIDIA GPU. While this is not our full TITAN X review, this preview focuses on what the TITAN X delivers when directly compared to the GTX 980."


Source: [H]ard|OCP

NVIDIA Announces DIGITS DevBox - 28 TFLOPS, 1300 Watts, $15k

Subject: General Tech, Graphics Cards, Shows and Expos | March 17, 2015 - 03:44 PM |
Tagged: nvidia, DIGITS

At GTC, NVIDIA announced a new device called the DIGITS DevBox:

GTC-30.jpg

The DIGITS DevBox is a device that data scientists can purchase and install locally. Plugged into a single electrical outlet, this modified Corsair Air 540 case equipped with quad TITAN X (reviewed here) GPUs can crank out 28 TeraFLOPS of compute power. The installed CPU is a Haswell-E Core i7-5930K, and the system is rated to draw 1300W of power. NVIDIA is building these in-house as the expected volume is low, with the units likely going to universities and small compute research firms.

Why would you want such compute power?

GTC-29.jpg

DIGITS is a software package available from NVIDIA that acts as a tool for data scientists to build and manage deep learning environments (neural networks). Running on a DIGITS DevBox, the package puts far more compute power in the hands of scientists who need it for their work. Getting this tech to more scientists will accelerate the field and lead to what NVIDIA hopes will be a ‘Big Bang’ in this emerging, GPU-compute-heavy discipline.

GTC-4.jpg

GTC-11.jpg

More from GTC is coming soon, as well as an exclusive PC Perspective Live Stream set to start in just a few minutes! Did I mention we will be giving away a Titan X???

**Update**

Ryan interviewed the lead developer of DIGITS in the video below. This offers a great explanation (and example) of what this deep learning stuff is all about:

GTC 2015: NVIDIA Roadmap Shows Pascal with 3D Memory, NVLink and Mixed Precision Compute

Subject: Graphics Cards | March 17, 2015 - 01:47 PM |
Tagged: pascal, nvidia, gtc 2015, GTC, geforce

At the keynote of the GPU Technology Conference (GTC) today, NVIDIA CEO Jen-Hsun Huang disclosed some more updates on the roadmap for future GPU technologies.

GTC-36.jpg

Most of the detail was around Pascal, due in 2016, which will introduce three new features: mixed compute precision, 3D (stacked) memory, and NVLink. Mixed precision is a method of computing in FP16, allowing calculations to run much faster at reduced accuracy when full single or double precision is not necessary. Keeping in mind that Maxwell does not currently have an implementation with full-speed DP compute, it would seem that NVIDIA is targeting different compute tasks moving forward. Though details are scarce, mixed precision likely indicates processing cores that can handle both data types.
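
As a rough illustration of the accuracy trade-off (my own sketch, not from NVIDIA's presentation), here is what half precision gives up relative to single precision, using NumPy's float16 and float32 types:

```python
# Sketch: precision lost when going from FP32 to FP16.
# float16 carries a 10-bit mantissa (~3 decimal digits), float32 a 23-bit one.
import numpy as np

x32 = np.float32(1.0) + np.float32(0.0004)   # small increment survives in FP32
x16 = np.float16(1.0) + np.float16(0.0004)   # rounds away at half precision

print(x32)  # ~1.0004
print(x16)  # 1.0 -- the increment is lost

# The appeal of mixed precision is doing bulk math in FP16 for speed and
# memory bandwidth while keeping FP32 (or FP64) where accuracy matters.
```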

3D memory stacks DRAM on the same package as the GPU to improve overall memory bandwidth. The diagram NVIDIA showed on stage indicated that Pascal would have 750 GB/s of bandwidth, compared to 300-350 GB/s on Maxwell today.

NVLink is a new way of connecting GPUs, improving on bandwidth by more than 5x over current implementations of PCI Express. They claim this will allow for connecting as many as 8 GPUs for deep learning performance improvements (up to 10x). What that means for gaming has yet to be discussed.

GTC-38.jpg

NVIDIA made some other interesting claims as well. Pascal will be more than twice as efficient as Maxwell in performance per watt, even without the three new features listed above. It will also ship (in a compute-targeted product) with a 32GB memory system, compared to the 12GB announced on the Titan X today. Pascal will also offer 4x the performance in mixed precision compute.

Watch NVIDIA Reveal the GTX TITAN X at GTC 2015

Subject: Graphics Cards, Shows and Expos | March 17, 2015 - 10:31 AM |
Tagged: nvidia, video, GTC, gtc 2015

NVIDIA is streaming today's keynote from the GPU Technology Conference (GTC) on Ustream, and we have the embed below for you to take part. NVIDIA CEO Jen-Hsun Huang will reveal the details about the new GeForce GTX TITAN X but there are going to be other announcements as well, including one featuring Tesla CEO Elon Musk.

Should be interesting!

Source: NVIDIA

PCPer Live! GeForce GTX TITAN X Live Stream and Giveaway!

Subject: Graphics Cards | March 16, 2015 - 07:13 PM |
Tagged: video, tom petersen, titan x, nvidia, maxwell, live, gtx titan x, gtx, gm200, geforce

UPDATE 2: If you missed the live stream, we now have the replay available below!

UPDATE: The winner has been announced: congrats to Ethan M. for being selected as the random winner of the GeForce GTX TITAN X graphics card!!

Get yourself ready, it’s time for another GeForce GTX live stream hosted by PC Perspective’s Ryan Shrout! This time the focus is going to be NVIDIA's brand-new GeForce GTX TITAN X graphics card, first teased a couple of weeks back at GDC. NVIDIA's Tom Petersen will be joining us live from the GPU Technology Conference show floor to discuss the GM200 GPU and its performance, and to show off some demos of the hardware in action.

GeForce_GTX_TITANX_3Qtr.jpg

And what's a live stream without a prize? One lucky live viewer will win a GeForce GTX TITAN X 12GB graphics card of their very own! That's right - all you have to do is tune in for the live stream tomorrow afternoon and you could win a Titan X!!

pcperlive.png

NVIDIA GeForce GTX TITAN X Live Stream and Giveaway

1pm PT / 4pm ET - March 17th

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

The event will take place Tuesday, March 17th at 1pm PT / 4pm ET at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience. To win the prize you will have to be watching the live stream, with exact details of the methodology for handing out the goods coming at the time of the event.

Tom has a history of being both informative and entertaining and these live streaming events are always full of fun and technical information that you can get literally nowhere else.

If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Tom or me?

So join us! Set your calendar for this coming Tuesday at 1pm PT / 4pm ET and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!

Huge thanks to ASUS for supplying a new G751JY notebook, featuring an Intel Core i7-4710HQ and a GeForce GTX 980M 4GB GPU to power our live stream from GTC!!

NVIDIA Quadro M6000 Leaks via Deadmau5

Subject: Graphics Cards | March 14, 2015 - 02:12 PM |
Tagged: nvidia, quadro, m6000, deadmau5, gtc 2015

Sometimes information comes from the least likely of sources. Deadmau5, one of the world's biggest names in house music, posted an interesting picture to his Instagram feed a couple of days ago.

Well, that's interesting. A quick hunt on Google for the NVIDIA M6000 reveals rumors of it being a GM200-based 12GB graphics card. Sound familiar? NVIDIA recently announced the GeForce GTX TITAN X based on an identical configuration at GDC last week.

m6000-leak.jpg

A backup of the Instagram image...in case it gets removed.

With NVIDIA's GPU Technology Conference starting this Tuesday, it would appear we have more than one version of GM200 incoming.

Source: Deadmau5

NVIDIA Releases GeForce GTX 960M and GTX 950M Mobile Graphics

Subject: Graphics Cards | March 12, 2015 - 11:13 PM |
Tagged: nvidia, maxwell, GTX 960M, GTX 950M, gtx 860m, gtx 850m, gm107, geforce

NVIDIA has announced new GPUs to round out their 900-series mobile lineup, and the new GTX 960M and GTX 950M are based on the same GM107 core as the previous 860M/850M parts.

geforce-gtx-960m-3qtr.png

Both GPUs feature 640 CUDA Cores and are separated by Base clock speed, with the GTX 960M operating at 1096 MHz and the GTX 950M at 914 MHz. Both have unlisted maximum Boost frequencies that will likely vary based on thermal constraints. The memory interface is the other differentiator between the GPUs: the GTX 960M sports dedicated GDDR5 memory, while the GTX 950M can be implemented with either DDR3 or GDDR5. Both the GTX 960M and 950M use the same 128-bit memory interface and support up to 4GB of memory.

As reported by multiple sources, the core powering the 960M/950M is a GM107 Maxwell GPU, which means we are essentially talking about rebadged 860M/850M products, though the unlisted Boost frequencies could potentially be higher thanks to improved silicon on a mature 28nm process. In contrast, the previously announced GTX 965M is based on a cut-down Maxwell GM204 GPU, with its 1024 CUDA Cores representing half of the GPU introduced with the GTX 980.

New notebooks featuring the GTX 960M have already been announced by NVIDIA's partners, so we will soon see if there is any performance improvement to these refreshed GM107 parts.

Source: NVIDIA

MSI Announces 4GB Version of NVIDIA GeForce GTX 960 Gaming Graphics Card

Subject: Graphics Cards | March 10, 2015 - 07:35 PM |
Tagged: nvidia, msi, gtx 960, geforce, 960 Gaming, 4GB GTX 960

Announcements of 4GB versions of the GeForce GTX 960 have become a regular occurrence of late, and today MSI revealed its own 4GB GTX 960, adding a model with the higher memory capacity to the popular MSI Gaming graphics card lineup.

MSI_9604GB.jpg

The GTX 960 Gaming 4GB features an overclocked core in addition to the doubled frame buffer, with a 1241MHz Base and 1304MHz Boost clock (compared to the stock GTX 960's 1127MHz Base and 1178MHz Boost). The card also carries MSI's proprietary Twin Frozr V (5 for non-Romans) cooler, which they claim surpasses previous generations of Twin Frozr coolers "by a large margin", with a new design featuring SuperSU heat pipes and a pair of 100mm Torx fans with alternating standard/dispersion fan blades.

MSI_GTX9604GB.png

The card is set to be shown at the Intel Extreme Masters gaming event in Poland later this week, and pricing/availability have not been announced.

Source: MSI

The new 4GB ASUS Strix GeForce GTX 960

Subject: Graphics Cards | March 10, 2015 - 05:48 PM |
Tagged: strix, gtx 960, factory overclocked, DirectCU II, asus, 4GB

The ASUS Strix Series is popular around PC Perspective thanks to the hefty factory overclocks and the quiet and efficient DirectCU II cooling.  We have given away 2GB versions of the GTX 960 and Josh recently wrapped up a review of the tiny GTX 750 Ti for SFF builds. 

top.PNG

Today ASUS announced the STRIX-GTX960-DC2OC-4GD5, a 4GB version of the Strix GTX 960 with a base clock of 1291MHz, 165MHz higher than the default, along with a 1317MHz boost clock and memory clocked at 7010MHz. The DirectCU II cooling solution has proven to live up to the hype that surrounds it; the cooler is whisper quiet even under load, and when heavily overclocked it is much less noticeable than other solutions, especially when attached to Maxwell.

nekkid.PNG

The outputs are impressive: DVI, HDMI and three DisplayPort connections will have you gaming on a variety of monitors, and the card supports 4K resolutions, at reasonable graphics settings of course. Along with the card you get the familiar GPU Tweak utility for tweaking your card, and for a limited time it comes with a free one-year XSplit Premium license to let you share your best and worst moments with the world. So far only the 2GB model is showing up at Amazon, NewEgg and B&H, so you might want to hold off for a few days, but it is worth noting that these cards also get you a free pre-ordered copy of Witcher 3.

xsplit.PNG

Source: ASUS

GDC 15: Imagination Technologies Shows Vulkan Driver

Subject: Graphics Cards, Mobile, Shows and Expos | March 7, 2015 - 07:00 AM |
Tagged: vulkan, PowerVR, Khronos, Imagination Technologies, gdc 15, GDC

Possibly the most important feature of upcoming graphics APIs, albeit the least interesting for enthusiasts, is how much easier driver development will become. Many decisions and tasks that once lay on the shoulders of AMD, Intel, NVIDIA, and the rest will now be handed to game developers or made obsolete. You might think that game developers would oppose this burden, but (from what I understand) it is a weight they already bear, only while dealing with the symptoms instead of the root problem.

imaginationtech-powervr-vulkan.jpg

This also helps other hardware vendors become competitive. Imagination Technologies is definitely not new to the field: their graphics hardware powers the PlayStation Vita, many earlier Intel graphics processors, and the last couple of iPhones. Despite how abruptly the API came about, they had a proof-of-concept driver present at GDC. The unfinished driver was running an OpenGL ES 3.0 demo that had been converted to the Vulkan API.

A screenshot of the CPU usage was also provided, which is admittedly heavily cropped and hard to read. The one on the left claims 1.2% CPU load, with a fairly flat curve, while the one on the right claims 5% and seems to waggle more. Granted, the wobble could be partially explained by differences in the time they chose to profile.

According to Tom's Hardware, source code will be released “in the near future”.

GDC 15: Upcoming Flagship AMD Radeon R9 GPU Powering VR Demo

Subject: Graphics Cards | March 5, 2015 - 04:46 PM |
Tagged: GDC, gdc 15, amd, radeon, R9, 390x, VR, Oculus

Don't get too excited about this news, but AMD tells me that its next flagship Radeon R9 graphics card is up and running at GDC, powering an Oculus-based Epic "Showdown" demo.

amdgdc.jpg

Inside the box...

During my meeting with AMD today I was told that inside that little PC sits the "upcoming flagship Radeon R9 graphics card" but, of course, no other information was given. The following is an estimated transcript of the event:

Ryan: Can I see it?

AMD: No.

Ryan: I can't even take the side panel off it?

AMD: No.

Ryan: How can I know you're telling the truth then? Can I open up the driver or anything?

AMD: No.

Ryan: GPU-Z? Anything?

AMD: No.

Well, I tried.

Is this the rumored R9 390X with the integrated water cooler? Is it something else completely? AMD wouldn't even let me behind the system to look for a radiator so I'm afraid that is where my speculation will end.

Hooked up to the system was a Crescent Bay Oculus headset running the well-received Epic "Showdown" demo. The experience was smooth, though of course there were no indications of frame rate or other metrics while it was running. After our discussion with AMD earlier in the week about its LiquidVR SDK, AMD is clearly taking the VR transition seriously. NVIDIA's GPUs might be dominating the show-floor demos, but AMD wanted to be sure it wasn't left out of the discussion.

Can I just get this Fiji card already??

AMD To Release FreeSync Ready Driver on March 19th

Subject: Graphics Cards, Displays | March 5, 2015 - 11:46 AM |
Tagged: freesync, amd

Hey everyone, we just got a quick note from AMD with an update on the FreeSync technology rollout we have been expecting since CES in January (and honestly, even before that). Here is the specific quote:

AMD is very excited that monitors compatible with AMD FreeSync™ technology are now available in select regions in EMEA (Europe, Middle East and Africa). We know gamers are excited to bring home an incredibly smooth and tearing-free PC gaming experience powered by AMD Radeon™ GPUs and AMD A-Series APUs. We’re pleased to announce that a compatible AMD Catalyst™ graphics driver to enable AMD FreeSync™ technology for single-GPU configurations will be publicly available on AMD.com starting March 19, 2015. Support for AMD CrossFire™ configurations will be available the following month in April 2015.

A couple of interesting things: first, it appears that FreeSync monitors are already shipping in the EMEA regions, and that is the reason for this news blast to the media. If you are buying a monitor with a FreeSync logo on it and can't use the technology, that is clearly a bit frustrating. You have just another two weeks to wait for the software to enable your display's variable refresh rate.

freesynclogo-small.jpg

That also might be a clue as to when you can expect review embargoes and/or the release of FreeSync monitors in North America. The end is in sight!

GDC 15: Intel and Raptr Partner for PC Gaming Optimization Software

Subject: Graphics Cards | March 4, 2015 - 09:26 PM |
Tagged: GDC, gdc 15, Intel, raptr, GFE, geforce experience

One of NVIDIA's biggest achievements in the past two years has been the creation and improvement of GeForce Experience. The program started with one goal: to help PC gamers optimize game settings to match their hardware, making sure they get the best quality settings the hardware can handle and thus the best gaming experience. AMD followed suit shortly after through a partnership with Raptr, a company that crowd-sources data to achieve the same goal: optimal game settings for all users of AMD hardware.

Today Intel is announcing a partnership with Raptr as well, bringing the same feature set to machines running Intel HD Graphics. High-end users might chuckle at the news, but I actually think this feature will be more important for gamers who rely on integrated graphics. Where GPU horsepower is at a premium compared to discrete graphics cards, tuning in-game settings to extract all available performance will likely yield the biggest improvement in experience of any of the three major vendors.

raptor1.jpg

Raptr will continue to include game streaming capability, and it will also alert users when updated Intel graphics drivers are available - a very welcome change to how Intel distributes them.

raptor2.jpg

Intel announced a partnership to deliver an even better gaming experience on Intel Graphics. Raptr, a leading PC gaming utility now available on Intel Graphics for the first time, delivers one-button customized optimizations that improve performance on existing hardware and the games being played, even older games. With just a little tweaking of their PC settings a user may be able to dial up the frame rate and details or even play a game they didn’t think possible.

The Raptr software scans the user’s PC and compares a given game’s performance across tens of millions of other gamers’ PCs, finding and applying the best settings for their Raptr Record system. And Raptr’s gameplay recording tools leverage the video encoding in Intel® Quick Sync technology to record and stream gameplay with virtually no impact on system performance. Driver updates are a snap too, more on Raptr for Intel available here.

Hopefully we'll see this application pre-installed on notebooks going forward (can't believe I'm saying that) but I'd like to see as many PC gamers as possible, even casual ones, get access to the best gaming experience their hardware can provide.

GDC 15: Intel shows 3DMark API Overhead Test at Work

Subject: Graphics Cards, Processors | March 4, 2015 - 08:46 PM |
Tagged: GDC, gdc 15, API, dx12, DirectX 12, dx11, Mantle, 3dmark, Futuremark

It's probably not a surprise to most that Futuremark is working on a new version of 3DMark around the release of DirectX 12. What might be new to you is that this version will include an API overhead test, used to evaluate a hardware configuration's draw call throughput under the Mantle, DX11 and DX12 APIs.

3dmark1.jpg

While we don't have any results quite yet (those are pending and should arrive very soon), Intel was showing the feature test running at an event at GDC tonight. In what looks like a simple cityscape rendered over and over, the goal is to see how many draw calls the CPU, API and hardware can sustain, in other words, how quickly the system can respond to requests from a game engine.
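
Conceptually (my own sketch, not Futuremark's actual test, and submit_draw_call is a hypothetical stand-in for whatever submission path the API exposes), the measurement boils down to counting how many draw submissions the CPU side can push per second:

```python
# Rough sketch of a draw call overhead measurement.
# submit_draw_call is a hypothetical placeholder for an API submission;
# the real 3DMark test exercises Mantle, DX11 and DX12 back ends.
import time

def measure_draw_calls_per_second(submit_draw_call, duration_s=1.0):
    calls = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        submit_draw_call()   # CPU-side cost of handing one draw to the API
        calls += 1
    return calls / duration_s

# Example with a no-op standing in for the real API call:
print(measure_draw_calls_per_second(lambda: None))
```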

The test was being showcased on an Intel-powered notebook using a 5th Generation Core processor, code named Broadwell. This points to the upcoming support for DX12 (though obviously not Mantle) that Intel's integrated GPUs will provide.

It should be very interesting to see how much of an advantage DX12 offers over DX11, even on Intel's wide range of consumer and enthusiast processors.

GDC 15: PhysX Is Now Shared Source to UE4 Developers

Subject: General Tech, Graphics Cards, Shows and Expos | March 4, 2015 - 05:52 PM |
Tagged: GDC, gdc 15, nvidia, epic games, ue4, unreal engine 4, PhysX, apex

NVIDIA and Epic Games have just announced that Unreal Engine 4 developers can view and modify the source of PhysX. This also includes the source for APEX, which is NVIDIA's cloth and destruction library. It does not include any of the other libraries that are under the GameWorks banner, but Unreal Engine 4 does not use them anyway.

epic-ue4-nvidia-physx.jpg

This might even mean that capable developers can write their own support for third-party platforms, like OpenCL. That would probably be a painful process, but it should be possible now. Of course, that support would only extend to their own title and to anyone they share their branch with.

If you are having trouble finding it, you will need to switch to a branch that has been updated to PhysX 3.3.3 with source, which is currently just “Master”. “Promoted” and earlier seem to be back at PhysX 3.3.2, which is still binary-only. It will probably take a few months to trickle down to an official release. If you are still unable to find it, even though you are on the “Master” branch, the path to NVIDIA's source code is: “Unreal Engine/Engine/Source/ThirdParty/PhysX/”. From there you can check out the various subdirectories for PhysX and APEX.

NVIDIA will be monitoring pull requests sent to that area of Unreal Engine. Enhancements might make it back upstream to PhysX proper, which would then be included in future versions of Unreal Engine and anywhere else that PhysX is used.

In other news, Unreal Engine 4 is now free of its subscription. The only time Epic will ask for money is when you ship a game and royalties are due. This is currently 5% of gross revenue, with the first $3000 (per product, per calendar quarter) exempt. This means that you can now make a legitimately free (no price, no ads, no subscription, no microtransactions, no Skylander figurines, etc.) game in UE4 for free!
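
To make the royalty math concrete, here is my own small sketch based on the terms quoted above (not Epic's official calculator) of the per-product, per-quarter calculation:

```python
# Sketch of the UE4 royalty terms described above:
# 5% of gross revenue, with the first $3,000 per product per quarter exempt.

ROYALTY_RATE = 0.05
QUARTERLY_EXEMPTION = 3000

def ue4_royalty(gross_revenue_quarter: float) -> float:
    """Royalty owed for one product in one calendar quarter."""
    taxable = max(0.0, gross_revenue_quarter - QUARTERLY_EXEMPTION)
    return ROYALTY_RATE * taxable

print(ue4_royalty(2500))    # 0.0   -- under the exemption, nothing owed
print(ue4_royalty(10000))   # 350.0 -- 5% of the $7,000 above the exemption
```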

Source: Epic Games