Subject: General Tech, Graphics Cards | June 2, 2014 - 05:52 PM | Scott Michaud
Tagged: nvidia, geforce, geforce experience, ShadowPlay
NVIDIA has just launched another version of their GeForce Experience, incrementing the version to 2.1. This release allows video capture at up to "2500x1600", which I assume means 2560x1600, as well as better audio-video synchronization in Adobe Premiere. Also, because why stop going after FRAPS once you start, this release adds an in-game framerate indicator, along with push-to-talk recording for the microphone.
Another note: when GeForce Experience 2.0 launched, it introduced streaming of the user's desktop. This allowed recording of OpenGL and windowed-mode games by simply capturing an entire monitor. This mode was not capable of "Shadow Mode", which I believed was because NVIDIA thought users didn't want a constant rolling video taken of their desktop, just in case they wanted to save a few minutes of it at some point. Turns out that I was wrong; the feature was coming, and it arrived with GeForce Experience 2.1.
GeForce Experience 2.1 is now available at NVIDIA's website, unless it already popped up a notification for you.
Subject: General Tech | May 29, 2014 - 07:43 PM | Jeremy Hellstrom
Tagged: nvidia, gameworks, dirty pool, business as usual, amd
The topic of NVIDIA GameWorks was discussed at great length on last night's PCPer Podcast, and judging from the live comments as well as the comments on Ryan's original story, this is obviously a topic which draws strong opinions. Since it is always best to limit yourself to debating topics on which you are familiar with the facts, The Tech Report's article on the aftereffects of the Forbes story is well worth a read. Cyril had a chance to speak with a rep from NVIDIA's driver development team about Hallock's comments pertaining to NVIDIA's GameWorks and the legitimacy of AMD's complaints. As you might expect, there is a lot of denial and finger pointing from both sides; what long-time enthusiasts might describe as 'business as usual'. Both sides of this argument have vehemently denied ever attempting to undermine each other's business, yet both can point to specific instances in which the competition used questionable methods to get a leg (or hair) up.
"Earlier today, I spoke with Cem Cebenoyan, Director of Engineering for Developer Technology at Nvidia, who offered a rebuttal to a Forbes story we covered yesterday. In that story, AMD's Robert Hallock alleged that Nvidia's GameWorks program prevents AMD from working with game developers on GPU optimizations."
Here is some more Tech News from around the web:
- Seagate buys LSI flash technology from Avago for $450m @ The Inquirer
- Samsung outs first vertical 3D NAND flash SSD aimed at PC market @ The Inquirer
- Asustek to establish unified cloud platform and increase personal cloud users @ DigiTimes
- WWDC 2014 Preview: iOS 8, Mac OS X 10.10 and hardware expected @ The Inquirer
- Rap chap tapped for $3 BEELLION: Apple finally buys Dr Dre's Beats @ The Register
- Linksys EA6900 AC1900 802.11ac Wireless Router @ Kitguru
- How to Manage Printers in Linux @ Linux.com
- Trying Out Steam In-Home Streaming With Hamachi For Remote Play @ Legit Reviews
Podcast #302 - ASUS PB287Q 4K Monitor, NVIDIA and AMD's fight over GameWorks, Haswell-E Leaks and more!
Subject: General Tech | May 29, 2014 - 02:51 PM | Ken Addison
Tagged: video, podcast, asus, 4k, pb287q, nvidia, amd, gameworks, ubisoft, watch dogs, crucial, mx100, tegra k1, gsync
PC Perspective Podcast #302 - 05/29/2014
Join us this week as we discuss the ASUS PB287Q 4K Monitor, NVIDIA and AMD's fight over GameWorks, Haswell-E Leaks and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, and Allyn Malventano
Week in Review:
News items of interest:
Hardware/Software Picks of the Week:
Allyn: For Josh - the Wenger Giant Knife
The AMD Argument
Earlier this week, a story was posted on a Forbes.com blog that dove into NVIDIA GameWorks and how it was doing a disservice not just to the latest Ubisoft title, Watch_Dogs, but to PC gamers in general. Using quotes from AMD directly, the author claims that NVIDIA is actively engaging in methods to prevent game developers from optimizing games for AMD graphics hardware. This is an incredibly bold claim and one that I hope AMD is not making lightly. Here is a quote from the story:
Gameworks represents a clear and present threat to gamers by deliberately crippling performance on AMD products (40% of the market) to widen the margin in favor of NVIDIA products. . . . Participation in the Gameworks program often precludes the developer from accepting AMD suggestions that would improve performance directly in the game code—the most desirable form of optimization.
The example cited in the Forbes story is the recently released Watch_Dogs title, which appears to show favoritism towards NVIDIA GPUs, with the performance of the GTX 770 ($369) coming close to the performance of a Radeon R9 290X ($549).
It's evident that Watch Dogs is optimized for Nvidia hardware but it's staggering just how un-optimized it is on AMD hardware.
Watch_Dogs is the latest GameWorks title released this week.
I decided to get in touch with AMD directly to see exactly what stance the company was attempting to take with these kinds of claims. No surprise, AMD was just as forward with me as they appeared to be in the Forbes story originally.
The AMD Stance
Central to AMD’s latest annoyance with the competition is the NVIDIA GameWorks program. First unveiled last October during a press event in Montreal, GameWorks combines several NVIDIA-built engine functions into libraries that game developers can utilize and access to build advanced features into their games. NVIDIA’s website claims that GameWorks is “easy to integrate into games” while also including tutorials and tools to help quickly generate content with the software set. Included in the GameWorks suite are tools like VisualFX, which offers rendering solutions such as HBAO+, TXAA, Depth of Field, FaceWorks, HairWorks and more. Physics tools include the obvious, like PhysX, while also adding clothing, destruction, particles and more.
Subject: Graphics Cards | May 28, 2014 - 11:19 AM | Ryan Shrout
Tagged: titan z, nvidia, gtx, geforce
Though delayed by a month, today marks the official release of NVIDIA's Titan Z graphics card, the dual GK110 beast with the $3000 price tag. The massive card was shown for the first time in March at NVIDIA's GPU Technology Conference and our own Tim Verry was on the grounds to get the information.
The details remain the same:
Specifically, the GTX TITAN Z is a triple slot graphics card that marries two full GK110 (big Kepler) GPUs for a total of 5,760 CUDA cores, 448 TMUs, and 96 ROPs with 12GB of GDDR5 memory on a 384-bit bus (6GB on a 384-bit bus per GPU). For the truly adventurous, it appears possible to SLI two GTX Titan Z cards using the single SLI connector. Display outputs include two DVI, one HDMI, and one DisplayPort connector.
The difference now of course is that all the clock speeds and pricing are official.
A base clock speed of 705 MHz with a Boost rate of 876 MHz places it well behind the individual GPU performance of a GeForce GTX 780 Ti or GTX Titan Black (rated at 889/980 MHz). The memory clock speed remains the same at 7.0 Gbps and you are still getting a massive 6GB of memory per GPU.
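For reference, Kepler's theoretical single-precision throughput is just CUDA cores × clock × 2 (each core can retire one fused multiply-add, or two FLOPs, per cycle). A quick back-of-the-envelope sketch using the official base clock above, assuming both GK110 GPUs are fully enabled at 2880 CUDA cores each:

```python
def fp32_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Theoretical single-precision TFLOPs: cores * clock * 2 (one FMA = 2 FLOPs)."""
    return cuda_cores * clock_mhz * 2 / 1e6

# Titan Z: two full GK110 GPUs (2880 cores each) at the 705 MHz base clock
per_gpu = fp32_tflops(2880, 705)
print(round(per_gpu * 2, 2))  # ~8.12 TFLOPs for the whole card
```

That lines up with the roughly 8 TeraFLOPs NVIDIA rates the card at, which suggests the rating assumes base clock rather than Boost.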
Maybe most interesting with the release of the GeForce GTX Titan Z is that NVIDIA seems to have completely fixated on non-DIY consumers with the card. We did not receive a sample of the Titan Z (nor did we get one of the Titan Black) and when I inquired as to why, NVIDIA PR stated that they were "only going to CUDA developers and system builders."
I think it is more than likely that after the release of AMD's Radeon R9 295X2 dual GPU graphics card on April 8th, with a price tag of $1500 (half of the Titan Z), the target audience was redirected. NVIDIA already had its eye on the professional markets that weren't willing to dive into the Quadro/Tesla lines (CUDA developers will likely drop $3k at the drop of a hat to get this kind of performance density). But a side benefit of creating the best flagship gaming graphics card on the planet was probably part of the story - and promptly taken away by AMD.
I still believe the Titan Z will be an impressive graphics card to behold, both in terms of look and style and in terms of performance. But it would take the BIGGEST of NVIDIA fans to pass up a pair of Radeon R9 295X2 cards in favor of a single GeForce GTX Titan Z. At least that is our assumption until we can test one for ourselves.
I'm still working to get my hands on one of these for some testing as I think the ultra high end graphics card coverage we offer is incomplete without it.
Several of NVIDIA's partners are going to be offering the Titan Z, including EVGA, ASUS, MSI and Zotac. Maybe the most interesting, though, is EVGA's water cooled option!
So, what do you think? Anyone lining up for a Titan Z when they show up for sale?
Subject: Displays | May 22, 2014 - 11:30 PM | Sebastian Peak
Tagged: nvidia, monitor, g-sync, acer, 4k
We've been talking about the benefits of 4K for a while, most recently with the Samsung U28D590D, which added single-stream 60Hz support to the mix, but there have certainly been some drawbacks with 4K monitors to date. Between the usually low refresh rates and the general problem of getting smooth images on the screen (not to mention the high price of entry into 4K), there have been some legitimate questions about when to upgrade. Well, an interesting new product announcement from a surprising source might change things.
With a logo like that, who needs product photos?
Today, Acer is announcing an interesting alternative: the world’s first 4K monitor with integrated NVIDIA G-SYNC technology.
The XB280HK will be a 28" display, and (provided you have an NVIDIA graphics card and were looking to make the move to 4K) the benefits of G-SYNC - which include minimizing stutter and eliminating tearing - seem ideal for extremely high-res gaming.
We’ll be eagerly awaiting a look at the performance of this new monitor. (Or even a look at it, since Acer did not release a product photo!)
The details are scarce, but Acer says this will be a part of their “XB0” series of gaming monitors. Here are some specs for this 28” 3840x2160 display, which features three proprietary technologies from Acer:
- “Flicker-less” which Acer says is implemented at the power supply level to reduce screen flicker
- “Low-dimming” which sounds like an ambient light sensor to dim the monitor in low light
- “ComfyView” non-glare screen
Of interest, the Acer XB280HK is likely using a TN panel given the claimed "170/170 degree" viewing angle.
The hardware needed for good 4K frame rates is definitely up there, and with G-SYNC onboard the XB280HK will probably not be at the low end of the 4K price range, but we shall see!
Subject: General Tech, Graphics Cards, Mobile | May 22, 2014 - 04:58 PM | Scott Michaud
Tagged: tegra k1, nvidia, iris pro, iris, Intel, hd 4000
The Chinese tech site Evolife acquired a few benchmarks for the Tegra K1. We do not know exactly where they got the system from, but we know that it has 4GB of RAM and 12 GB of storage. Of course, this is the version with four ARM Cortex-A15 cores (not the upcoming, 64-bit version based on Project Denver). On 3DMark Ice Storm Unlimited, the full system was capable of 25737 points.
Image Credit: Evolife.cn
You might remember that our tests with an Intel Core i5-3317U (Ivy Bridge), back in September, achieved a score of 25630 on 3DMark Ice Storm. Of course, that was using the built-in Intel HD 4000 graphics, not a discrete solution, but it still kept up for gaming. This makes sense, though. Intel HD 4000 (GT2) graphics has a theoretical performance of 332.8 GFLOPs, while the Tegra K1 is rated at 364.8 GFLOPs. Earlier, we said that its theoretical performance is roughly on par with the GeForce 9600 GT, although the Tegra K1 supports newer APIs.
Of course, Intel has released better solutions with Haswell. Benchmarks show that Iris Pro is able to play Battlefield 4 on High settings, at 720p, at about 30 FPS. The HD 4000 only gets about 12 FPS with the same configuration (and ~30 FPS on Low). This is not to compare Intel to NVIDIA's mobile part, but rather to compare the Tegra K1 to modern, mainstream laptops and desktops. It is getting fairly close, especially with the first wave of K1 tablets entering at a mid-$200 USD MSRP in China.
As a final note...
There was a time when Tim Sweeney, CEO of Epic Games, said that the difference between high-end and low-end PCs "is something like 100x". Scaling a single game between the two performance tiers would be next to impossible. He noted that ten years earlier, that factor was more like "10x".
Now, an original GeForce Titan is about 12x faster than the Tegra K1, and they support the same feature set. In other words, it is easier to develop a game for both the PC and a high-end tablet today than it was to develop a PC game for high-end and low-end machines back in 2008. PC gaming is, once again, getting healthier.
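That 12x figure checks out with simple peak-FLOPs arithmetic (CUDA cores × clock × 2 FLOPs per fused multiply-add). A hedged sketch; the 950 MHz K1 GPU clock and the 837 MHz Titan base clock below are assumptions drawn from published specs, not figures from this article:

```python
def fp32_gflops(cuda_cores: int, clock_mhz: float) -> float:
    """Theoretical single-precision GFLOPs: cores * clock * 2 (one FMA = 2 FLOPs)."""
    return cuda_cores * clock_mhz * 2 / 1e3

k1    = fp32_gflops(192, 950)    # Tegra K1: one Kepler SMX, assumed 950 MHz
titan = fp32_gflops(2688, 837)   # original GTX Titan, assumed 837 MHz base clock
print(round(k1, 1), round(titan / k1, 1))  # 364.8 GFLOPs, ~12.3x
```

The 364.8 GFLOPs result matches the Tegra K1 rating quoted above, and the ratio lands right at the "about 12x" gap between the two parts.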
Subject: General Tech, Graphics Cards, Processors, Mobile | May 15, 2014 - 05:02 PM | Scott Michaud
Tagged: nvidia, xiaomi, mipad, tegra k1
Tegra K1 is NVIDIA's new mobile processor and the first to implement the Kepler graphics architecture. In other words, it has all of the same graphics functionality as a desktop GPU, with 364.8 GigaFLOPs of performance (a little faster than a GeForce 9600 GT). This is quite fast for a mobile product. For instance, that amount of graphics performance could max out Unreal Tournament 3 at 2560x1600 and run Crysis at 720p. Being Kepler, it supports OpenGL 4.4, OpenGL ES 3.1, DirectX 11 and 12, and GPU compute languages.
Xiaomi is launching their MiPad in Beijing, today, with an 8-inch 2048x1536 screen and the Tegra K1. They will be available in June (for China) starting at $240 USD for the 16GB version and going up to $270 for the 64GB version. Each version has 2GB of RAM, an 8MP rear-facing camera, and a 5MP front camera.
Now, we wait and see if any Tegra K1 devices come to North America and Europe - especially at that price point.
Subject: General Tech | May 12, 2014 - 09:42 PM | Scott Michaud
Tagged: nvidia, shield, half-life 2, Portal
What would Gordon Freeman do? He would tell everyone to... ... oh right.
Well, apparently he is now available on the NVIDIA SHIELD, along with Portal. I am not talking about GameStream; these two games have been ported to Android, but only for the SHIELD. From the screenshots, the mobile versions look pretty good, especially Portal with its look-through portal mechanics.
As usual, when NVIDIA really wants something, they will parachute engineers through your skylights to do it for you. The company revolves around delivering experiences to their customers, which is a good mindset for a company to have. A similar mindset was one of the main reasons for Microsoft's success with PC gaming, especially in the late 90's with their DirectX efforts.
If you have an NVIDIA SHIELD, Half Life 2 and Portal are available now for $9.99, through TegraZone.
Subject: General Tech, Graphics Cards | May 12, 2014 - 08:00 PM | Scott Michaud
Tagged: titan z, nvidia, gtx titan z, geforce
To a crowd of press and developers at their GTC summit, NVIDIA announced the GeForce GTX Titan Z add-in board (AIB). Each of the two fully unlocked GK110 GPUs has access to 6GB of GDDR5 memory (12GB total). The card was expected to be available on May 8th but has yet to surface. As NVIDIA has yet to comment on the situation, many question whether it ever will.
And then we get what we think are leaked benchmarks (note: two pictures).
One concern about the Titan Z was its rated 8 TeraFLOPs of compute performance. This is a fairly sizable reduction from the theoretical maximum of 10.24 TeraFLOPs of two Titan Black processors, and even less than two first-generation Titans (9 TeraFLOPs combined). We expected that this was due to reduced clock rates. What we did not expect was for benchmarks to show the GPUs boosting well above those advertised levels, and even beyond the advertised boost clocks of the Titan Black and the 780 Ti. The card was seen pushing 1058 MHz in some sections, which leads to a theoretical compute performance of 12.2 TeraFLOPs (6.1 TeraFLOPs per GPU) in single precision. That is a lot.
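The 12.2 TeraFLOPs figure follows directly from that observed boost clock. A quick sketch, assuming both GK110 GPUs are fully enabled (2880 CUDA cores each, with one fused multiply-add, i.e. two FLOPs, per core per cycle) and sustain 1058 MHz:

```python
def fp32_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Theoretical single-precision TFLOPs: cores * clock * 2 (one FMA = 2 FLOPs)."""
    return cuda_cores * clock_mhz * 2 / 1e6

per_gpu = fp32_tflops(2880, 1058)   # one full GK110 at the observed boost clock
print(round(per_gpu, 1), round(per_gpu * 2, 1))  # ~6.1 per GPU, ~12.2 for the card
```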
These benchmarks also show that NVIDIA has a slight lead over AMD's R9 295X2 in many games, except Battlefield 4 and Sleeping Dogs (plus 3DMark and Unigine). Of course, these benchmarks measure the software-reported frame rate and frame times, and those may or may not be indicative of actual performance. While I would say that the Titan Z appears to have a slight performance lead over the R9 295X2 (although a solid argument for an AMD performance win exists), it does so at double the cost, given its expected $3000 USD price point. That is not up for debate.
So, until NVIDIA says anything, the Titan Z is in limbo. I am sure there are CUDA developers who await its arrival. Personally, I would just get three Titan Blacks, since you are going to need to manually schedule your workloads across multiple processors anyway (or 780 Tis if 32-bit arithmetic is enough precision). That is, of course, unless you cannot physically fit enough GeForce Titan Blacks on your motherboard and, as such, require two GK110 chips per AIB (but not enough to bother writing a cluster scheduling application).