Subject: Graphics Cards | June 24, 2013 - 11:57 PM | Tim Verry
Tagged: palit, gtx 780, gaming, super jetstream, jetstream
AIB partner Palit has announced a speedy GTX 780 of its own with the GTX 780 Super JetStream graphics card. This card has a triple-fan cooler and is one of the fastest GTX 780s announced so far (matching the GPU clocks of the Gainward Phantom GLH).
The Palit GTX 780 Super JetStream clocks the GPU's 2,304 CUDA cores at 980 MHz base and 1033 MHz boost. Palit has also slightly overclocked the 3GB of GDDR5 memory, to 6200 MHz. For comparison, NVIDIA clocks the reference card at 863 MHz base, 900 MHz boost, and 6008 MHz memory. Palit is also producing a non-Super JetStream card clocked at 902 MHz base and 954 MHz boost.
The differentiating factor here, beyond the factory overclock, is Palit's own JetStream cooler. This cooler, well, cools an aluminum fin stack (with a copper base) using two 80mm fans on either side of a single center-mounted 90mm fan. The fans sit beneath a black-and-gold shroud. According to Palit, the JetStream cooler runs 6 dB quieter and 10 degrees Celsius cooler than the reference NVIDIA cooler.
Additionally, the GTX 780 Super JetStream comes with an 8-phase PWM with DrMOS technology.
Palit has not yet released details on where and when the GPU will be available, or how much it will cost.
Subject: Graphics Cards | June 24, 2013 - 03:28 PM | Tim Verry
Tagged: phantom glh, gtx 780, gk110, gaming, gainward
The rumored GTX 760 graphics cards are still not available, but graphics enthusiasts do have a number of new factory overclocked GTX 780 cards with custom coolers to drool over. One such new GTX 780 card is the so-called GTX 780 Phantom GLH from Gainward. This card is a 2.5-slot monster that pairs the GTX 780 GPU with custom power phases and a giant block of aluminum and copper to support a healthy factory overclock.
This new Gainward Phantom GLH card pushes the GTX 780 GPU further than the company's own GTX 780 Phantom. It has a base clock of 980 MHz, a boost clock of 1033 MHz, and slightly overclocked 6200 MHz memory. Of course, being based on NVIDIA's GTX 780 chip, the Phantom GLH features 2,304 CUDA cores and 192 Texture Units within 12 SMX units. The Phantom GLH's 3GB of overclocked GDDR5 memory affords the card 297.6 GB/s of memory bandwidth. Gainward claims that the new card is up to 19% faster than NVIDIA's reference GTX 780 graphics card.
To put that in perspective, the Gainward GTX 780 Phantom (non-GLH) is clocked at 902 MHz base and 954 MHz boost. Further, NVIDIA's stock GTX 780 has GPU clockspeeds of 863 MHz base, 900 MHz boost, and 6008 MHz for the memory. In other words, it is an impressive factory overclock, and I'm interested to see how much headroom is left for enthusiasts to push the chip further with the included cooler.
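For those wondering where that 297.6 GB/s bandwidth figure comes from, it falls straight out of the effective memory clock and the GTX 780's 384-bit memory bus. A quick back-of-the-envelope check in Python:

```python
def memory_bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    """Bandwidth = effective transfer rate (MT/s) x bus width in bytes."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(memory_bandwidth_gb_s(6200, 384))  # Phantom GLH: 297.6 GB/s
print(memory_bandwidth_gb_s(6008, 384))  # reference GTX 780: ~288.4 GB/s
```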
Other features of the upcoming Gainward GTX 780 Phantom GLH include an 8-phase PWM with DrMOS technology, a large aluminum fin stack with removable fans that is connected to a copper GPU block via five 8mm heatpipes, and an EXPERTmode option in the company's overclocking utility. Video outputs are the same as the reference design, with two DVI, one DisplayPort, and one HDMI port.
There is no word on pricing or on when (and where) it will be available, but expect this beastly card to come at a premium. Then again, as one of the fastest factory overclocked GTX 780 cards (soon to be) available, it may be worth it!
Subject: Graphics Cards | June 20, 2013 - 04:05 PM | Ryan Shrout
Tagged: radeon, nvidia, geforce, frame rating, fcat, crossfire, amd
Well, the date has been set. AMD publicly stated on its @AMDRadeon Twitter account that a new version of the prototype driver we originally previewed with the release of the Radeon HD 7990 in April will be released to the public on July 31st. All this for a problem that many in the industry didn't think existed.
Big news for CrossFire! We plan to release our driver that delivers improved multi-GPU frame pacing on July 31. More info soon.
— AMD Radeon Graphics (@AMDRadeon) June 20, 2013
Since that April release AMD has been very quiet about its driver changes and has actually refused to send me updated prototypes throughout the spring. Either they have it figured out or they are worried they don't - but it looks like we'll find out at the end of next month, and I feel pretty confident that the team will be able to address the issues we brought to light.
For those of you that might have missed the discussion, our series of Frame Rating stories will tell you all about the issues with frame pacing and stutter as they relate to AMD's CrossFire multi-GPU technology.
- Frame Rating Dissected: Full Details on Capture-based Graphics Performance Testing
- Frame Rating: Visual Effects of Vsync on Gaming Animation
- Frame Rating: AMD Improves CrossFire with Prototype Driver
AMD gave the media a prototype driver in April to test with the Radeon HD 7990, a card that depends on CrossFire to work correctly, and the improvements were pretty drastic.
So what can we expect on July 31st? A driver that will give users the option to enable or disable the frame pacing technology AMD is developing - though I am still of the mindset that disabling it is never advantageous. More to come in the next 30 days!
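If the term "frame pacing" is new to you, the core idea is simple enough to sketch in a few lines. This is purely an illustration of the scheduling concept - AMD's actual driver implementation is far more involved and lives inside the flip queue - but it shows why pacing smooths the animation your eye actually sees:

```python
def pace_frames(finish_times, min_interval):
    """Given frame-completion timestamps (in seconds), return presentation
    timestamps where each frame waits for both its own completion and a
    minimum interval after the previous frame's presentation."""
    presented = []
    for t in finish_times:
        if presented:
            t = max(t, presented[-1] + min_interval)
        presented.append(t)
    return presented

# Two GPUs in alternate-frame rendering often finish frames in uneven pairs:
# a short gap, then a long gap. Pacing evens out the intervals the user sees.
finish = [0.000, 0.005, 0.033, 0.038, 0.066, 0.071]
print(pace_frames(finish, min_interval=1 / 60))  # ~16.7 ms between presents
```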
Subject: Graphics Cards | June 20, 2013 - 12:28 PM | Tim Verry
Tagged: nvidia, kepler, gtx 760, GK104, gk-104, gaming
There have been rumors of a new mid-range Kepler-based graphics card coming that will be the next entry in the GTX 700-series. This new GPU is rumored to be called the GeForce GTX 760. If the specifications are true, the card will fit between the existing GTX 660 and GTX 660 Ti graphics cards in terms of hardware specifications and pricing. While it will sit in the GTX 700-series, it will not get the faster 7 Gbps memory clockspeed of the other 700-series cards.
As far as specifications go, Videocardz claims to have the final specifications list in a recent news post. The GTX 760 is rumored to be the latest graphics card to use NVIDIA's GK104 "Kepler" GPU. The GTX 760 will have some units disabled, for a GPU with 1,152 CUDA cores, 96 Texture Mapping Units (TMUs), and 32 Raster Operations Processors (ROPs). The GPU supports NVIDIA's latest GPU Boost 2.0 technology, which automatically ratchets up the boost clockspeed so long as temperature allows. It has a base clockspeed of 980 MHz and a boost clockspeed of 1,033 MHz.
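As a rough mental model of what "ratcheting up so long as temperature allows" means in practice, here is a loose conceptual sketch. To be clear, NVIDIA's actual GPU Boost 2.0 algorithm is proprietary and also weighs power and voltage limits; the thermal target and bin size below are assumptions borrowed from other Kepler parts:

```python
# A loose conceptual sketch of temperature-steered boosting - NOT NVIDIA's
# actual algorithm. Values below are assumptions for this rumored card.
BASE_CLOCK_MHZ = 980    # rumored GTX 760 base clock
BOOST_CLOCK_MHZ = 1033  # rumored typical boost clock (real cards can exceed this)
TEMP_TARGET_C = 80      # default GPU Boost 2.0 thermal target on other 700-series cards
BIN_MHZ = 13            # Kepler adjusts clocks in roughly 13 MHz bins

def next_clock(clock_mhz, gpu_temp_c):
    """One step of the boost loop: climb while cool, back off while hot."""
    if gpu_temp_c < TEMP_TARGET_C and clock_mhz < BOOST_CLOCK_MHZ:
        return min(clock_mhz + BIN_MHZ, BOOST_CLOCK_MHZ)
    if gpu_temp_c > TEMP_TARGET_C and clock_mhz > BASE_CLOCK_MHZ:
        return max(clock_mhz - BIN_MHZ, BASE_CLOCK_MHZ)
    return clock_mhz
```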
GTX 760 graphics cards will allegedly come in both 2GB and 4GB GDDR5 memory flavors. NVIDIA is clocking the memory at 6 Gbps (1502 MHz), which makes it the first 700-series part not to take advantage of faster memory chips. However, there is a saving grace: NVIDIA has moved to a 256-bit memory bus. This still gives the GTX 760 a respectable bump in memory bandwidth, at 192 GB/s versus the GTX 660/GTX 660 Ti's 144.2 GB/s.
Compared to the existing mid-range 600-series cards, the GTX 760 has base and boost GPU clockspeeds equal to the GTX 660 (and faster than the GTX 660 Ti). Memory clockspeed is also unchanged on the new card, though it has a wider memory bus. The GTX 760 has 192 more CUDA cores than the GTX 660, but 192 fewer than the GTX 660 Ti. The TMU count also sits evenly between the two 600-series cards, but the GTX 760 does have 8 more ROPs enabled than both the 660 and 660 Ti.
Graphics cards with the upcoming GTX 760 GPU will be powered by two 6-pin PCI-E power connectors and carry a 170W TDP. That power consumption puts the card between the 150W GTX 660 Ti and the higher-end 230W GTX 770. It appears that the card will not come with the high-end stock metallic cooler used on the other 700-series cards, though the various AIBs are likely to fit the GPU with their own custom aftermarket coolers. Video outputs on the cards will include DVI-I, DVI-D, HDMI, and DisplayPort.
The chart below compares the specifications between the GTX 660, GTX 660 Ti, GTX 770, and the rumored GTX 760.
| | GTX 760 | GTX 660 | GTX 660 Ti | GTX 770 |
|---|---|---|---|---|
| GPU Base | 980 MHz | 980 MHz | 915 MHz | 1046 MHz |
| GPU Boost | 1033 MHz | 1033 MHz | 980 MHz | 1085 MHz |
| Memory Clock | 1502 MHz | 1502 MHz | 1502 MHz | 1752 MHz |
| Bandwidth | 192 GB/s | 144.2 GB/s | 144.2 GB/s | 224 GB/s |
| TDP | 170 W | 140 W | 150 W | 230 W |
The card is supposedly going to be released on June 25th for around $300, where it will compete with AMD's Radeon HD 7950 with Boost. Further, the card will be an alternative to NVIDIA's own GTX 660 Ti and an upgrade for gamers still running GTX 560-series cards based on the company's older Fermi GPUs.
OpenCL Support in a Meaningful Way
Adobe has had OpenCL support since last year, though you would never have benefited from its inclusion unless you ran one of two AMD mobility chips under OS X Lion. Creative Cloud, predictably, furthers this trend with additional GPGPU support for applications like Photoshop and Premiere Pro.
This leads to some interesting points:
- How OpenCL is changing the landscape between Intel and AMD
- What GPU support is curiously absent from Adobe CC for one reason or another
- Which GPUs are supported despite not... existing, officially.
This should be very big news for our readers who do production work, whether professionally or as a hobby. If not, how about a little information about certain GPUs that are designed to compete with the GeForce 700-series?
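In the meantime, if you are curious what "OpenCL support" looks like from the application side, the first step is simply enumerating the platforms and devices a system exposes. Here is a minimal sketch using the third-party pyopencl bindings, purely as an example; Adobe's actual device-detection logic is not public:

```python
import pyopencl as cl  # third-party bindings: pip install pyopencl

# Enumerate every OpenCL platform (AMD, NVIDIA, Intel, Apple, ...) and its
# devices. An application would filter a list like this against a whitelist
# of known-good GPUs before switching on acceleration.
for platform in cl.get_platforms():
    print(platform.name, platform.version)
    for device in platform.get_devices():
        print("  ", device.name,
              cl.device_type.to_string(device.type),
              device.global_mem_size // (1024 ** 2), "MB")
```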
Subject: Graphics Cards | June 18, 2013 - 03:39 PM | Ryan Shrout
Tagged: radeon, nvidia, geforce, frostbite 3, ea, dice, amd
The original source article at IGN.com has been updated with some new information. Now they are saying the agreement between AMD and EA is "non-exclusive and gamers using other components will be supported."
The quote from an EA rep reads as follows:
"DICE has a partnership with AMD specifically for Battlefield 4 on PC to showcase and optimize the game for AMD hardware," an EA spokesperson said. "This does not exclude DICE from working with other partners to ensure players have a great experience across a wide set of PCs for all their titles."
END UPDATE #3
This could be a huge deal for NVIDIA and AMD in the coming months - according to a story at IGN.com, AMD has entered into an agreement with EA that gives it exclusive optimization rights for all games based on the Frostbite 3 engine. That includes Battlefield 4, Mirror's Edge 2, Need for Speed Rivals, and many more games due out this year and in 2014. Here is the quote that is getting my attention:
Starting with the release of Battlefield 4, all current and future titles using the Frostbite 3 engine — Need for Speed Rivals, Mirror's Edge 2, etc. — will ship optimized exclusively for AMD GPUs and CPUs. While Nvidia-based systems will be supported, the company won't be able to develop and distribute updated drivers until after each game is released.
Battlefield 4 will be exclusively optimized for AMD hardware.
This is huge news for AMD, as the Frostbite 3 engine will be used for all EA-published games going forward with the exception of sports titles. The three mentioned above are huge, but this also includes Star Wars Battlefront, Dragon Age, and even the next Mass Effect, so I can't emphasize enough how big of a win this could be for AMD's marketing and developer relations teams.
I am particularly interested in this line as well:
While Nvidia-based systems will be supported, the company won't be able to develop and distribute updated drivers until after each game is released.
The world of PC optimizations and partnerships has been around for a long time, so this isn't a huge surprise for anyone that follows PC gaming. What is bothersome to me is that both EA and AMD are rumored to have agreed that NVIDIA won't get access to the game as it is being developed - something that is CRUCIAL for day-of driver releases and performance tweaks for GeForce card owners. In most cases, both AMD and NVIDIA developer relations teams get early access to game builds for PC titles in order to validate compatibility and to improve performance of these games for the public release. Without these builds, NVIDIA would be at a big disadvantage. This is exactly what happened with the recent Tomb Raider release.
AMD called me to reiterate their stance that competition does not automatically mean cutting out the other guy. In the Tomb Raider story linked above, Neil Robison, AMD's Senior Director of Consumer and Graphics Alliances, states quite plainly: "The thing that angers me the most is when I see a request to debilitate a game. I understand winning, I get that, and I understand aggressive companies, I get that. Why would you ever want to introduce a feature on purpose that would make a game not good for half the gaming audience?"
So what do we take away from that statement, made in a story published in March, and today's rumor? We have to take AMD at its word until we see solid evidence otherwise, or until enough cases of this occur that I feel like I am being duped. AMD wants us all to know that it is playing the game the "right way" - a stance that just happens to run counter to this rumor.
NVIDIA had performance and compatibility issues with Tomb Raider upon release.
The irony in all of this is that AMD has been accusing NVIDIA of doing this exact thing for years - though without any public statements from developers, publishers, or NVIDIA. When Batman: Arkham Asylum launched, AMD basically said that NVIDIA had locked it out of supporting antialiasing. In 2008, Assassin's Creed dropped DX 10.1 support, supposedly because NVIDIA - whose GeForce cards didn't support it at the time - asked for it. There was even the claim that NVIDIA was disabling cores for CPU PhysX support to help prop up GeForce sales. At the time, AMD PR spun all of this as the worst possible thing a company could do in the name of gamers, that it was bad for the industry, etc. But times change as opportunity changes.
The cold truth is that this is why AMD decided to take the chance NVIDIA was allegedly unwilling to: pursuing the console design wins that are often described as "bad business." If settling for razor-thin margins on the consoles is the risk, the reward AMD is hoping for is exactly this: benefits in other markets thanks to better relationships with game developers.
Will the advantage be with AMD thanks to PS4 and Xbox One hardware?
At E3 I spoke in depth with both NVIDIA and AMD executives about this debate, and as you might expect, both have very different opinions about what is going to transpire over the next 12-24 months. AMD views this advantage (being in the consoles) as the big bet that is going to pay off in the more profitable PC space. NVIDIA thinks that AMD still doesn't have what it takes to truly support developers in the long run, and that it doesn't have the engineers to innovate on the technology side. In my view, having Radeon-based processors in the Xbox One and PlayStation 4 (as well as the Wii U, I guess) gives AMD a head start but won't win it the race for the hearts and minds of PC gamers. There is still a lot of work to be done for that.
Before this story broke I was planning on outlining another editorial on this subject and it looks like it just got promoted to a top priority. There appear to be a lot of proverbial shoes left to drop in this battle, but it definitely needs more research and discussion.
Remember the issues with Batman: Arkham Asylum? I do.
I asked both NVIDIA and AMD for feedback on this story, but only AMD has replied thus far. Robert Hallock, PR manager for gaming and graphics at AMD's Graphics Business Unit, sent me this:
It makes sense that game developers would focus on AMD hardware with AMD hardware being the backbone of the next console generation. At this time, though, our relationship with EA is exclusively focused on Battlefield 4 and its hardware optimizations for AMD CPUs, GPUs and APUs.
Not much there, but he is also not denying the original report from IGN. It might just be too early for a more official statement. I will update this story with information from NVIDIA if I hear anything else.
What do YOU think about this announcement, though? Is this good news for AMD and bad news for NVIDIA? Is it good or bad for the gamer, and in particular, the PC gamer? Your input will help guide our upcoming talks with NVIDIA and AMD on the subject.
Just so we all have some clarification on this and on the potential validity of the rumor, this is where I sourced the story from this afternoon:
END UPDATE #2
Kepler-based Mobile GPUs
Late last month, just before the tech world blew up from the mess that is Computex, NVIDIA announced a new line of mobility discrete graphics parts under the GTX 700M series label. At the time we simply posted some news and specifications about the new products but left performance evaluation for a later time. Today we have that for the highest end offering, the GeForce GTX 780M.
As with most mobility GPU releases, it seems, the GTX 700M series is not really a new GPU and only offers cursory feature improvements. Based completely on the Kepler line of parts, the GTX 700M series will range from 1536 CUDA cores on the GTX 780M down to 768 cores on the GTX 760M.
The flagship GTX 780M is essentially a desktop GTX 680 in a mobile form factor with lower clock speeds. With 1536 CUDA cores running at 823 MHz (boosting to higher speeds depending on the notebook configuration) and a 256-bit memory controller running at 5 GHz, the GTX 780M will likely be the fastest mobile GPU you can buy. (And we'll be testing that in the coming pages.)
The GTX 760M, 765M, and 770M offer a range of performance that scales down to 768 cores at 657 MHz. NVIDIA claims we'll see the GTX 760M in systems as small as 14 inches and weighing around 2kg from vendors like MSI and Acer. For Ultrabooks and thinner machines you'll have to step down to smaller, less power-hungry GPUs like the GT 750M and 740M, but even then we expect NVIDIA to have much faster gaming performance than Haswell's processor graphics.
Subject: General Tech, Graphics Cards, Processors, Shows and Expos | June 13, 2013 - 02:26 AM | Scott Michaud
Tagged: E3, E3 13, amd
The Electronic Entertainment Expo (E3) is the biggest event of the year for millions of gamers. The majority of coverage ends up gawking over the latest news out of Microsoft, Sony, or Nintendo, and we will certainly provide our insights in those areas if we believe they have been insufficiently explained, but E3 is also a big event for PC gamers.
5 GHz and unlocked to go from there.
AMD, specifically, has a lot to say this year. In the year of the next-gen console reveals, AMD provides the CPU architecture for two of the three devices and has also designed each of the three GPUs. That just leaves a slight win for IBM, which is responsible for the Wii U's main processor, for whatever that is worth. Unless the Steam Box comes to light without ties to AMD, it is about as close to a clean sweep as any hardware manufacturer could get.
But for the PCs among us...
For those who watched the EA press conference, you probably saw lots of sports. If you stuck around after the sports, you probably saw Battlefield 4 being played by 64 players on stage. AMD has been pushing, very strongly, for developer relations over the last year. DICE, formerly known for being an NVIDIA-friendly developer, did not exhibit Battlefield 4 "The Way It's Meant to be Played" at the EA conference. According to one of AMD's Twitter accounts:
— AMD Radeon Graphics (@AMDRadeon) June 12, 2013
On the topic of "Gaming Evolved" titles, AMD is partnering with Square Enix to optimize Thief for GCN and A-Series APUs. The press release specifically mentioned Eyefinity and CrossFire support along with a DirectX 11 rendering engine; of course, the enhancements with real, interesting effects are the seemingly boring ones they do not mention.
The last major point from their E3 event was the launch of their 5 GHz FX processors. For more information on that part, check out Josh's thoughts from a couple of days ago.
A necessary gesture
NVIDIA views the gaming landscape as a constantly shifting medium that starts with the PC. But the company also sees mobile gaming, cloud gaming and even console gaming as part of the overall ecosystem. But that is all tied together by an investment in content – the game developers and game publishers that make the games that we play on PCs, tablets, phones and consoles.
The slide above shows NVIDIA's targets for each segment – except for consoles, obviously. NVIDIA GRID will address the cloud gaming infrastructure, GeForce and the GeForce Experience will continue to cover PC systems, and NVIDIA SHIELD and the Tegra SoC will get the focus in the mobile and tablet spaces. I find it interesting that NVIDIA has specifically called out Steam under the PC – maybe a hint at the future of the upcoming Steam Box?
The primary point of focus for today’s press meeting was to talk about the commitment that NVIDIA has to the gaming world and to developers. AMD has been talking up their 4-point attack on gaming that starts really with the dominance in the console markets. But NVIDIA has been the leader in the PC world for many years and doesn’t see that changing.
With several global testing facilities, the most impressive of which exists in Russia, NVIDIA tests more games, more hardware and more settings combinations than you can possibly imagine. They tune drivers and find optimal playing settings for more than 100 games that are now wrapped up into the GeForce Experience software. They write tools for developers to find software bottlenecks and test for game streaming latency (with the upcoming SHIELD). They invest more in those areas than any other hardware vendor.
This is a list of technologies that NVIDIA claims they invented or developed – an impressive list that includes things like programmable shaders, GPU compute, Boost technology and more.
Many of these turned out to be very important in the development and advancement of gaming – not just for PCs but for ALL gaming.
Subject: Graphics Cards | June 10, 2013 - 07:28 PM | Jeremy Hellstrom
Tagged: gtx 770, msi N770 Lightning, overclocking
[H]ard|OCP liked the new GTX 770 Lightning from MSI but thought they would like it better overclocked - perhaps even more than a GTX 680 or HD 7970. The triplets below are, from top to bottom, the GTX 680, the GTX 770, and the HD 7970, all from the overclocked Lightning family. Using MSI's Afterburner utility, [H] pushed the card to 1241 MHz on the core and 7.8 GHz effective for the RAM, higher than the factory overclock. That speed boost put its performance on par with the overclocked GTX 680, but it seems that the impressive speeds the 7970 Lightning is capable of leave it comfortably in the lead.
"We take the new MSI N770 Lightning and overclock it to its maximum potential. We will compare it with a highly overclocked MSI GeForce GTX 680 Lightning and GIGABYTE Radeon HD 7970. Each GPU is getting its best chance to show us how well it can perform, as all of these GPUs are highly overclocked."
Here are some more Graphics Card articles from around the web:
- GALAXY GeForce GTX 650 Ti Boost @ [H]ard|OCP
- NVIDIA GTX 770 2GB @ eTeknix
- NVidia GTX 770 Video Card Review @ Ninjalane
- Nvidia GeForce GTX 770 @ Bjorn3D
- EVGA GTX 770 SC 2GB with ACX Cooler Video Card Review @ HiTech Legion
- Gigabyte GTX 650Ti BOOST 2GB OC Video Card @ HiTech Legion
- MSI N770 Lightning Overclocking @ [H]ard|OCP
- NVIDIA GeForce GTX 770 Reviewed in 2-Way SLI and NVIDIA Surround @ Legit Reviews
- Inno3D iChill GeForce GTX 780 review: almost a Titan @ Hardware.info
- ASUS GTX 670 Direct CU Mini @ Kitguru
- Inno3D iChill GeForce GTX 660 @ Hardware.info
- EVGA GeForce GTX 780 Superclocked ACX Cooling Video Card Review @ Legit Reviews
- Gigabyte GTX 780 WindForce OC 3GB Video Card Review @ Madshrimps
- Gigabyte GeForce GTX 770 OC WindForce 3x 2GB @ eTeknix
- iXBT Labs Review: i3DSpeed, May 2013
- Gigabyte HD 7790 2GB @ Bjorn3D