Subject: Graphics Cards | June 16, 2015 - 12:53 AM | Sebastian Peak
Tagged: the phantom pain, nvidia, metal gear solid, graphics, gpus, geforce, gameworks
A blog post on NVIDIA's site indicates that Konami's upcoming game Metal Gear Solid 5: The Phantom Pain will make use of NVIDIA technologies, a move that will undoubtedly rankle AMD graphics users who can't always see the full benefit of GameWorks enhancements.
"The world of Metal Gear Solid V: The Phantom Pain is going to be 200 times larger than the one explored in Ground Zeroes. Because so much of this game’s action depends on stealth, graphics are a key part of the gameplay. Shadows, light, and terrain have to be rendered perfectly. That’s a huge challenge in a game where the hero is free to find his own way from one point to another. Our engineers are signed up to work closely with Konami to get the graphics just right and to add special effects."
Now technically this quote doesn't confirm the use of any proprietary NVIDIA technology, though it sounds like that's exactly what will be taking place. In the wake of the Witcher 3 HairWorks controversy, any such enhancements will certainly be looked at with interest (especially as the next piece of big industry news will undoubtedly come with AMD's announcement later today at E3).
It's hard to argue with better graphical quality in high-profile games such as the latest Metal Gear Solid installment, but there is certainly something to be said for adherence to open standards to ensure a more unified experience across GPUs. The dialog about inclusion through adherence to standards vs. proprietary solutions has been very heated with the FreeSync/G-Sync monitor refresh debate, and GameWorks is a series of tools that serves to further divide gamers, even as it provides an enhanced experience on GeForce GPUs.
Such advantages will likely matter less once DirectX 12 mitigates some of the differences with more efficiency in the vein of AMD's Mantle API, and even less if the rumored Fiji cards from AMD offer superior performance and arrive priced competitively. For now, even though details are nonexistent, expect an NVIDIA GeForce GPU to have the advantage in at least some graphical aspects of the latest Metal Gear title when it arrives on PC.
Subject: Graphics Cards | May 29, 2015 - 11:05 AM | Sebastian Peak
Tagged: rumors, radeon, hbm, graphics, gpu, Fury, Fiji, amd
Another rumor has emerged about an upcoming GPU from AMD, and this time it's a possible name for the HBM-powered Fiji card a lot of us have been speculating about.
The rumor from VideoCardz via Expreview (have to love the multiple layers of reporting here) states that the new card will be named Radeon Fury:
"Radeon Fury would be AMD’s response to growing popularity of TITAN series. It is yet unclear how AMD is planning to adopt Fury naming schema. Are we going to see Fury XT or Fury PRO? Well, let’s just wait and see. This rumor also means that Radeon R9 390X will be a direct rebrand of R9 290X with 8GB memory."
Of course this is completely unsubstantiated, and Fury is a branding scheme from the ATI days, but who knows? I can only hope that if true, AMD will adopt all caps: TITAN! FURY! Feel the excitement. What do you think of this possible name for the upcoming AMD flagship GPU?
Subject: Graphics Cards | February 16, 2015 - 11:04 AM | Sebastian Peak
Tagged: SFF, nvidia, mini-ITX GPU, mini-itx, gtx 960, graphics, gpu, geforce, asus
ASUS returns to the mini-ITX-friendly form factor with the GTX 960 Mini (officially named GTX960-MOC-2GD5 for maximum convenience), their newest NVIDIA GeForce GTX 960 graphics card.
Other than the smaller size to allow compatibility with a wider array of small enclosures, the GTX 960 Mini also features an overclocked core and promises "20% cooler and vastly quieter" performance from its custom heatsink and CoolTech fan. Here's a quick rundown of key specs, with a rough memory bandwidth estimate after the list:
- 1190 MHz Base Clock / 1253 MHz Boost Clock
- 1024 CUDA cores
- 2GB 128-bit GDDR5 @ 7010 MHz
- 3x DisplayPort, 1x HDMI 2.0, 1x DVI output
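Those memory numbers imply a familiar bandwidth figure. Here is a quick back-of-the-envelope check (a sketch only; the bus width and effective memory clock are taken straight from the list above):

```python
# Rough memory bandwidth from the listed specs: bus width (in bytes) times
# the effective GDDR5 data rate. Illustrative arithmetic only.
bus_width_bits = 128
effective_clock_mhz = 7010  # effective GDDR5 data rate from the spec list

bandwidth_gb_s = (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9
print(f"~{bandwidth_gb_s:.1f} GB/s")  # ~112.2 GB/s
```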
No word on the pricing or availability of the card just yet. The other mini-ITX version of the GTX 960 on the market, from Gigabyte, has been selling for $199.99, so expect this to run somewhere between $200 and $220 at launch.
ASUS has reused this image from the GTX 970 Mini launch, and so have I
The product page is up on the ASUS website so availability seems imminent.
Subject: Processors | January 18, 2015 - 05:16 PM | Sebastian Peak
Tagged: SoC, rumor, processor, leak, iris pro, Intel, graphics, cpu, carrizo, APU, amd
A new report of leaked benchmarks paints a very interesting picture of the upcoming AMD Carrizo mobile APU.
Image credit: SiSoftware
Announced as strictly mobile parts, Carrizo is based on the next-generation Excavator core and features what AMD is calling one of their biggest-ever jumps in efficiency. Now, alleged leaked benchmarks are showing significant performance gains as well, with numbers that should extend the IGP dominance of AMD's APUs.
Image credit: WCCFtech
"The A10 7850K scores around 270 Mpix/s while Intel’s HD5200 Iris Pro scores a more modest 200 Mpix/s. Carriso scores here over 600 Mpix/s which suggests that Carrizo is more than twice as fast as Kaveri and three times faster than Iris Pro. To put this into perspective this is what an R7 265 graphics card scores, a card that offers the same graphics performance inside the Playstation 4."
While the idea of desktop APUs with greatly improved graphics and higher efficiency is tantalizing, AMD has made it clear that these will be mobile-only parts at launch. When asked by AnandTech, AMD had this to say about the possibility of a desktop variant:
“With regards to your specific question, we expect Carrizo will be seen in BGA form factor desktops designs from our OEM partners. The Carrizo project was focused on thermally constrained form factors, which is where you'll see the big differences in performance and other experiences that consumers value.”
The new mobile APU will be manufactured on the same 28nm process as Kaveri, with power consumption of up to 35W for Carrizo and a maximum of 15W for the ultra-mobile Carrizo-L parts.
Subject: Graphics Cards | February 19, 2014 - 04:43 PM | Jeremy Hellstrom
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video
We finally saw Maxwell yesterday, with a new SM design called the SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores. In theory that should allow NVIDIA to pack more SMMs onto the chip than they could with Kepler's larger SMX units. This new design was released on a $150 card, which means we don't really get to see what it is capable of yet. At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRPs rather than at inflated cryptocurrency-mining prices. Legit Reviews compared two overclocked GTX 750 Ti cards against those two AMD parts as well as the previous-generation GTX 650 Ti Boost across a wide selection of games to see how the new card stacks up, which you can read here.
That is of course after you read Ryan's full review.
"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"
Here are some more Graphics Card articles from around the web:
- MSI GTX 750 Ti Gaming Video Card Review @ HiTech Legion
- NVIDIA GeForce GTX 750 Ti @ Benchmark Reviews
- ASUS GTX 750 OC 1 GB @ techPowerUp
- MSI GTX 750 Ti Gaming 2 GB @ techPowerUp
- NVIDIA GeForce GTX 750 Ti the Arrival of Maxwell @ HiTech Legion
- Palit GTX 750 Ti StormX Dual 2 GB @ techPowerUp
- The GTX 750 Ti Review; Maxwell Arrives @ Hardware Canucks
- Nvidia GeForce GTX 750 Ti vs. AMD Radeon R7 265 @ Legion Hardware
- MSI GTX 750 Ti OC Twin Frozr @ KitGuru
- NVIDIA GeForce GTX 750 Ti 2 GB @ techPowerUp
- NVIDIA GeForce GTX 750 Ti "Maxwell" On Linux @ Phoronix
- A quick look at Mantle on AMD's Kaveri APU @ The Tech Report
- Sapphire Radeon R9 Tri-X OC video card @ Hardwareoverclock
- AMD Radeon R9 290: Still Not Good For Linux Users @ Phoronix
- AMD Radeon R7 265 2GB Video Card Review @ Legit Reviews
- Sapphire Radeon R7 260X OC 2GB Graphics Card Review @ Techgage
- XFX Double Dissipation R9 280X @ [H]ard|OCP
What we know about Maxwell
I'm going to go out on a limb and guess that many of you reading this review would not have normally been as interested in the launch of the GeForce GTX 750 Ti if a specific word hadn't been mentioned in the title: Maxwell. It's true, the launch of the GTX 750 Ti, a mainstream graphics card that will sit at the $149 price point, marks the first public release of the new NVIDIA GPU architecture code-named Maxwell. It is a unique move for the company to start at this particular point with a new design, but as you'll see in the changes to the architecture, as well as its limitations, it all makes a certain amount of sense.
For those of you that don't really care about the underlying magic that makes the GTX 750 Ti possible, you can skip this page and jump right to the details of the new card itself. There I will detail the product specifications, performance comparison and expectations, etc.
If you are interested in learning what makes Maxwell tick, keep reading below.
The NVIDIA Maxwell Architecture
When NVIDIA first approached us about the GTX 750 Ti they were very light on details about the GPU powering it. Even though the fact that it was built on Maxwell was confirmed, the company hadn't yet decided whether it would do a full architecture deep dive with the press. In the end they went somewhere in between the full detail we are used to getting with a new GPU design and that original, passive stance. It looks like we'll have to wait for the enthusiast-class GPU release to really get the full story, but I think the details we have now paint the picture quite clearly.
During the course of designing the Kepler architecture, and then implementing it in the Tegra line in the form of the Tegra K1, NVIDIA's engineering team developed a better sense of how to improve the performance and efficiency of the basic compute design. Kepler was a huge leap forward compared to the likes of Fermi, and Maxwell promises to be equally revolutionary. NVIDIA wanted to address GPU power consumption as well as find ways to extract more performance from the architecture at the same power levels.
The logic of the GPU design remains similar to Kepler. A Graphics Processing Cluster (GPC) houses Streaming Multiprocessors (SMs) built from a large number of CUDA cores (stream processors).
GM107 Block Diagram
Readers familiar with the layout of Kepler GPUs will instantly see changes in the organization of the various blocks of Maxwell. There are more divisions, more groupings, and fewer CUDA cores "per block" than before. As it turns out, this reorganization is a key part of how NVIDIA was able to improve performance and power efficiency with the new GPU.
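To put the reorganization in numbers, here is the simple core-count arithmetic, using the GM107 figures published at launch (the Kepler SMX core count is included purely for comparison):

```python
# Core-count arithmetic behind the SMM reorganization (published GM107
# figures; Kepler's SMX is shown only for comparison).
cores_per_smm = 4 * 32   # four blocks of 32 dedicated CUDA cores per SMM
cores_per_smx = 192      # a single Kepler SMX, for comparison
smm_count_gm107 = 5      # full GM107 as shipped on the GTX 750 Ti

print(cores_per_smm)                    # 128 cores per SMM
print(smm_count_gm107 * cores_per_smm)  # 640 CUDA cores on the GTX 750 Ti
```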
It wouldn’t be February if we didn’t hear the Q4 FY14 earnings from NVIDIA! NVIDIA does have a slightly odd way of expressing their quarters, but in the end it is all semantics. They are not in fact living in the future, but I bet their product managers wish they could peer into the actual Q4 2014. No, the whole FY14 thing relates back to when they made their IPO and how they started reporting. To us mere mortals, Q4 FY14 actually represents Q4 2013. Clear as mud? Lord love the Securities and Exchange Commission and their rules.
The past quarter was a pretty good one for NVIDIA. They came away with $1.144 billion in gross revenue and a GAAP net income of $147 million, beating the Street's estimate by a pretty large margin. In response, NVIDIA's stock rose in after-hours trading. This has certainly been a trying year for NVIDIA and the PC market in general, but they seem to have come out on top.
NVIDIA beat estimates primarily on the strength of the PC graphics division. Many were focusing on the apparent decline of the PC market and assumed that NVIDIA would be dragged down by lower shipments. On the contrary, it seems as though the gaming market and add-in sales on the PC helped to solidify NVIDIA’s quarter. We can look at a number of factors that likely contributed to this uptick for NVIDIA.
Subject: General Tech, Graphics Cards | February 25, 2013 - 01:32 PM | Jeremy Hellstrom
Tagged: jon peddie, graphics, market share
If last week's report from Jon Peddie Research on sales of all add-in and integrated graphics had you worried, the news this week is not going to help boost your confidence. This week the report focuses solely on add-in boards, and the drop is dramatic: Q4 2012 sales plummeted just short of 20% compared to Q3 2012. Across the entire year, sales dropped 10% overall as AMD's APUs, and Intel's, make serious inroads into the mobile market, with many notebooks being sold without a discrete GPU. The losses are coming from the mainstream market; enthusiast-level GPUs actually saw a slight increase in sales, but that small volume is utterly drowned out by the mainstream segment. You can check out the full press release here.
"JPR found that AIB shipments during Q4 2012 behaved according to past years with regard to seasonality, but the drop was considerably more dramatic. AIB shipments decreased 17.3% from the last quarter (the 10 year average is just -0.68%). On a year-to-year comparison, shipments were down 10%."
Here is some more Tech News from around the web:
- 3DMark Review @ OCC
- Trendnet N300 Easy-N-Range Extender @ Rbmods
- NETGEAR ProSafe GS110T Gigabit SmartSwitch @ Benchmark Reviews
- Quantum computer one step closer after ‘true’ quantum calculation @ The Register
- Microsoft brings Azure back online @ The Register
- Understanding Camera Optics & Smartphone Camera Trends, A Presentation by Brian Klug @ AnandTech
- MWC Sunday roundup: HP Slate, Ascend P2 and Firefox phones @ The Inquirer
- AMD releases Firepro R5000 with remote display technology @ The Inquirer
- The TR Podcast 129: PlayStation 4, Titan, and more
In case you missed it...
In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review we included an update on our Frame Rating graphics performance metric that explains the testing method in more detail and shows results for the first time. Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without getting lost in the TITAN shuffle. If you already read that page of our TITAN review, nothing new is included below.
I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!
Why are you not testing CrossFire??
If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out. (Part 1 is here, part 2 is here.) The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.
Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system. With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.
We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS. As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above). It would be a waste of the reader's time, and people who skip straight to the performance graphs wouldn't know our theory on why the displayed results were invalid.
Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all. They are simply presenting data that they believe to be true based on the tools at their disposal. More data is always better.
Here are those results and our discussion. I decided to use the most popular game out today, Battlefield 3; please keep in mind this is NOT the worst-case scenario for AMD CrossFire in any way. I tested the Radeon HD 7970 GHz Edition in single and CrossFire configurations as well as the GeForce GTX 680 in single-card and SLI configurations. To gather results I used two processes (a rough sketch of the frame time math follows the list):
- Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
- Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times with post processing.
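For the FRAPS side, the raw data is a per-frame log of timestamps, and the statistics we talk about (average frame rate, frame times, worst-case spikes) fall out of simple differences between consecutive entries. A minimal sketch of that math, assuming a plain CSV of cumulative frame times in milliseconds (the file name and column label here are illustrative assumptions, not our actual tooling):

```python
# Hypothetical sketch: turning a FRAPS-style frametimes log into basic stats.
# Assumes a CSV with a cumulative "Time (ms)" column; the file layout and
# column name are illustrative, not PC Perspective's tooling.
import csv

def frame_stats(path):
    with open(path, newline="") as f:
        times = [float(row["Time (ms)"]) for row in csv.DictReader(f)]
    # Consecutive differences give per-frame render times in milliseconds.
    intervals = [b - a for a, b in zip(times, times[1:])]
    avg_ms = sum(intervals) / len(intervals)
    return {
        "frames": len(intervals),
        "avg_frame_time_ms": avg_ms,
        "avg_fps": 1000.0 / avg_ms,
        "worst_frame_time_ms": max(intervals),
    }

print(frame_stats("frametimes.csv"))
```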
Here is an example of what the overlay looks like in Battlefield 3.
Frame Rating capture on GeForce GTX 680s in SLI - Click to Enlarge
The column on the left is the visible portion of an overlay that is applied to each and every frame of the game early in the rendering process. A solid color is added to the PRESENT call (more details to come later) for each individual frame. As you know, when you are playing a game, multiple frames can make it onto any single 60 Hz cycle of your monitor, and because of that you get a succession of colors on the left-hand side.
By measuring the pixel height of those colored columns, and knowing the order in which they should appear beforehand, we can gather the same data that FRAPS does but our results are seen AFTER any driver optimizations and DX changes the game might make.
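As an illustration of that idea (and only an illustration; this is not our actual capture pipeline), imagine scanning a single captured 60 Hz frame down its overlay column, grouping rows into contiguous color bands, and converting each band's pixel height into its share of the roughly 16.7 ms scanout:

```python
# Illustrative sketch only, not PC Perspective's capture pipeline: walk down
# the overlay column of one captured 60 Hz frame, group rows into contiguous
# color bands, and convert each band's pixel height into on-screen time.
from PIL import Image

REFRESH_MS = 1000.0 / 60  # one 60 Hz scanout interval

def overlay_bands(path, column_x=0, tolerance=24):
    img = Image.open(path).convert("RGB")
    bands = []  # list of [representative_color, pixel_height]
    for y in range(img.height):
        color = img.getpixel((column_x, y))
        if bands and all(abs(a - b) <= tolerance for a, b in zip(color, bands[-1][0])):
            bands[-1][1] += 1          # same band continues
        else:
            bands.append([color, 1])   # a new frame's band begins
    # Each band corresponds to one rendered frame; its share of the scanout
    # is its pixel height divided by the total captured frame height.
    return [(color, height, height / img.height * REFRESH_MS)
            for color, height in bands]

for color, height, ms in overlay_bands("captured_frame.png"):
    print(f"color={color} rows={height} on-screen ~{ms:.2f} ms")
```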
Frame Rating capture on Radeon HD 7970 CrossFire - Click to Enlarge
Here you see a very similar screenshot running on CrossFire. Notice the thin silver band between the maroon and purple? That is a complete frame according to FRAPS and most reviews. Not to us; we think that rendered frame is almost useless.
In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but also on frame smoothness and the efficiency of those frame rates. I showed off some of the new hardware we are using for this process and detailed how direct capture of graphics card output allows us to find interesting frame and animation anomalies using some Photoshop still frames.
Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.
This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.
Obviously these are just a couple of examples of what a stutter looks like, and there are oftentimes less apparent in-game stutters that are even harder to see in video playback. Not to worry: this capture method is capable of catching those issues too, and we plan on diving into the "micro" level shortly.
We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic. You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.
Let me know below what you thought of this video and any questions that you might have.