Subject: Graphics Cards | May 24, 2013 - 06:10 PM | Jeremy Hellstrom
Tagged: nvidia, gtx 780, gk110, geforce
With 768 more CUDA cores than the GTX 680 but 384 fewer than the TITAN, the 780 offers improvements over the previous generation and will be available for about $350 less than the TITAN. As you can see in [H]ard|OCP's testing, it does outperform the 680 and 7970, but not by a huge margin, which hurts the price-to-performance ratio and makes it more attractive for 680 owners to simply pick up a second card for SLI. AMD owners with previous generation cards and deep pockets might be tempted to pick up a pair of these cards instead, as they show very good frame rating results in Ryan's review.
"NVIDIA's new GeForce GTX 780 video card has finally been unveiled. We review the GTX 780 with real world gaming with the most intense 3D games, including Metro: Last Light. If the GTX TITAN had you excited but was a bit out of your price range, the GTX 780 should hold your excitement while being a lot less expensive."
Here are some more Graphics Card articles from around the web:
- Nvidia's GeForce GTX 780 @ The Tech Report
- EVGA GTX 780 Superclocked w/ ACX Cooler 3 GB @ techPowerUp
- NVIDIA GeForce GTX 780 3GB Overclocked - Closing the gap on the GTX TITAN @ Tweaktown
- Zotac GeForce GTX 780 @ Bjorn3D
- Nvidia GeForce GTX 780 @ Bjorn3D
- The Almost Titan: NVIDIA GeForce GTX 780 Review @ Techgage
- The NVIDIA GeForce GTX 780 @ TechARP
- Nvidia GeForce GTX 780 3GB @ eTeknix
- GeForce GTX 780 Review: The Titan Descendant @ TechSpot
- NVIDIA GeForce GTX 780 SLI @ techPowerUp
- Gigabyte GTX 780 WindForce OC 3 GB @ techPowerUp
- Nvidia GTX 780 @ LanOC Reviews
- NVIDIA GeForce GTX 780 3 GB @ techPowerUp
- NVIDIA GeForce GTX 780 Video Card Review @ Hi Tech Legion
- Nvidia GeForce GTX 780 review: Titan Light @ Hardware.info
- NVIDIA GeForce GTX 780 Review @ OCC
- NVIDIA GeForce GTX 780 3GB @ Tweaktown
- NVIDIA GeForce GTX 780 @ Hardware Canucks
- NVIDIA GEFORCE GTX 780 @ Overclockers.com
- NVIDIA GeForce GTX 780 Graphics Card @ Benchmark Reviews
- ZOTAC GeForce GTX TITAN AMP! Edition 6144 MB @ techPowerUp
- How to Install NVIDIA Drivers @ OCC
- NVIDIA GeForce Chips Comparison Table @ Hardware Secrets
- How to Install AMD Drivers Guide @ OCC
- Gallium3D Continues Improving OpenGL For Older Radeon GPUs @ Phoronix
- Sapphire HD7990 QuadFireX @ Kitguru
- MSI Radeon HD 7790 1GB OC Overclocked @ Tweaktown
- HIS 7790 iCooler Turbo 1GB GDDR5 Video Card Review @ Madshrimps
Subject: Editorial, General Tech, Graphics Cards, Systems | May 23, 2013 - 06:40 PM | Scott Michaud
Tagged: xbox one, xbox, unreal engine, ps4, playstation 4, epic games
Unreal Engine 4 was presented at the PlayStation 4 announcement conference through a new Elemental Demo. We noted how the quality seemed to have dropped in the eight months following E3 while the demo was being ported to the console hardware. The most noticeable differences were the severely reduced particle counts and the missing fine lighting details; of course, Epic pumped up the contrast in the PS4 version, which masked the lack of complexity as if it were a stylistic choice.
Still, the demo was clearly weakened. The immediate reaction was to assume that Epic Games simply did not have enough time to optimize the demo for the hardware. That is true to some extent, but there are theoretical limits on how much performance you can push out of hardware, even at 100% perfect utilization.
Now that we know the specifications of both the PS4 and, recently, the Xbox One, it is time to dissect more carefully.
A recent LinkedIn post from EA Executive VP and CTO Rajat Taneja claims that the Xbox One and PS4 are a generation ahead of the highest-end PCs on the market. While there are many ways to interpret that statement, in terms of raw performance it is simply not valid.
As far as we currently know, the PlayStation 4 contains an eight-core AMD "Jaguar" CPU with an AMD GPU containing 18 GCN compute units, for a total of 1152 shader units. Even without knowing driving frequencies, this chip should be slightly faster than the Xbox One's 768 shader units within 12 GCN compute units. Sony claims the PS4 has a total theoretical 2 teraFLOPs of performance, and the Xbox One is almost certainly slightly behind that.
Back in 2011, Epic Games created the Samaritan Demo to persuade console manufacturers; it represented how Epic expected the next generation of consoles to perform. They said, back in 2011, that this demo would theoretically require 2.5 teraFLOPs of performance to hit 30 FPS at true 1080p; ultimately the demo ran on a PC with a single GTX 680, good for approximately 3.09 teraFLOPs.
That required performance, again approximately 2.5 teraFLOPs, is higher than what is theoretically possible for the consoles, which sit below 2 teraFLOPs. The PC may carry more overhead than consoles, but the PS4 and Xbox One would be too slow even with zero overhead.
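These teraFLOP figures all come from the usual back-of-envelope math: shader count, times two operations per clock for a fused multiply-add, times clock speed. Here is a minimal sketch of that arithmetic; the 800 MHz console clock is our assumption, since neither Sony nor Microsoft has confirmed final frequencies:

```python
# Theoretical single-precision throughput: shaders x 2 ops/clock (FMA) x GHz.
# The 0.8 GHz console clock is an assumption; final frequencies are unconfirmed.
def teraflops(shader_units, clock_ghz):
    return shader_units * 2 * clock_ghz / 1000

print(teraflops(1536, 1.006))  # GTX 680:  ~3.09 TFLOPS
print(teraflops(1152, 0.800))  # PS4:      ~1.84 TFLOPS (assumed clock)
print(teraflops(768, 0.800))   # Xbox One: ~1.23 TFLOPS (assumed clock)
```

Even granting the consoles a generous clock, neither crosses the 2 teraFLOP line, let alone the 2.5 that Samaritan called for.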
Now, of course, this does not account for reducing quality where it will be the least noticeable and other cheats. Developers are able to reduce particle counts and texture resolutions in barely-noticeable places; they are also able to render below 1080p or even below 720p, as was the norm for our current console generation, to save performance for more important things. Perhaps developers might even use different algorithms which achieve the same, or better, quality for less computation at the expense of more sensitivity to RAM, bandwidth, or what-have-you.
But, in the end, Epic Games did not get the ~2.5 teraFLOPs they originally hoped for when they created the Samaritan Demo. This likely explains, at least in part, why the Elemental Demo looked a little sad at Sony's press conference: it was a little FLOP.
Update, 5/24/2013: Mark Rein of Epic Games responds to the statement made by Rajat Taneja of EA. While we do not know his opinion on consoles... we know his opinion on EA's opinion:
— Mark Rein (@MarkRein) May 23, 2013
Subject: Editorial, Graphics Cards | May 23, 2013 - 12:08 PM | Ryan Shrout
Tagged: video, live, gtx 780, gk110
Missed the LIVE stream? You can catch the video replay of it below!
Hopefully by now you have read over our review of the new NVIDIA GeForce GTX 780 graphics card that launched this morning. Taking the GK110 GPU, cutting off some more cores, and setting a price point of $650 will definitely create some interesting discussion.
Join me today at 2pm ET / 11am PT as we discuss the GTX 780, our review and take your questions. You can leave them in the comments below, no registration required.
GK110 Gets a Lower Price Point
If you want to ask us some questions about the GTX 780 or our review, join us for a LIVE STREAM at 2pm EDT / 11am PDT on our LIVE page.
When NVIDIA released the GeForce GTX Titan in February there was a kind of collective gasp across the enthusiast base. Half of that intake of air was from people amazed at the performance they were seeing from a single-GPU graphics card powered by the GK110 chip. The other half was from people aghast at the $1000 price point that NVIDIA launched it at. The GTX Titan was the fastest single-GPU card in the world, without any debate, but with it came a cost we hadn't seen in some time. Even with the debate between it, the GTX 690 and the HD 7990, the Titan was likely my favorite GPU, cost being no concern.
Today we see an extension of the GK110 line: NVIDIA has cut the chip back some and released a new card. The GeForce GTX 780 3GB is based on the same chip as the GTX Titan but with additional SMX units disabled, a lower CUDA core count and less memory. But as you'll soon see, the performance delta between it and the GTX 680 and Radeon HD 7970 GHz is pretty impressive. The $650 price tag though - maybe not.
We held a live stream the day this review launched at http://pcper.com/live. You can see the replay that goes over our benchmark results and thoughts on the GTX 780 below.
The GeForce GTX 780 - A Cut Down GK110
As I mentioned above, the GTX 780 is a pared-down GK110 GPU and for more information on that particular architecture change, you should really take a look at my original GTX Titan launch article from February. There is a lot more that is different on this part compared to GK104 than simple shader counts, but for gamers most of the focus will rest there.
The chip itself is a 7.1 billion transistor beast, though a card with the GTX 780 label actually utilizes noticeably less of the silicon than that figure implies. Below you will find a couple of block diagrams that represent the reduced functionality of the GTX 780 versus the GTX Titan:
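If you want to sanity check the cut-down numbers yourself, the arithmetic is straightforward: each Kepler SMX carries 192 single-precision CUDA cores, so the advertised core count falls directly out of how many SMX units are left enabled. A quick sketch:

```python
# Each GK110 SMX contains 192 single-precision CUDA cores, so the
# advertised core count is just the number of enabled SMX units x 192.
CORES_PER_SMX = 192

for card, smx_units in [("Full GK110", 15), ("GTX TITAN", 14), ("GTX 780", 12)]:
    print(f"{card}: {smx_units} SMX -> {smx_units * CORES_PER_SMX} CUDA cores")
# Full GK110: 15 SMX -> 2880 CUDA cores
# GTX TITAN:  14 SMX -> 2688 CUDA cores
# GTX 780:    12 SMX -> 2304 CUDA cores
```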
Subject: Editorial, General Tech, Graphics Cards, Processors, Systems | May 21, 2013 - 05:26 PM | Scott Michaud
Tagged: xbox one, xbox
Almost exactly three months have passed since Sony announced the PlayStation 4, and just three weeks remain until E3. Ahead of the event, Microsoft unveiled their new Xbox console: the Xbox One. Being so close to E3, they are saving the majority of the games until that time. For now, it is the box itself as well as its non-gaming functionality.
First and foremost, the raw specifications:
- AMD APU (5 billion transistors, 8 core, on-die eSRAM)
- 8GB RAM
- 500GB storage, Blu-ray reader
- USB 3.0, 802.11n, HDMI out, HDMI in
The hardware is a definite win for AMD. The Xbox One is based upon an APU which is quite comparable to what the PS4 will offer. Unlike previous generations, there will not be too much differentiation based on available performance; I would not expect to see much of a fork in terms of splitscreen and other performance-sensitive features.
A new version of the Kinect sensor, which developers can depend upon, will also be present with all units. Technically speaking, the camera is higher resolution and more wide-angle; up to six skeletons can be tracked, with joints able to rotate rather than just hinge. Microsoft is also finally permitting developers to use the Kinect along with a standard controller to, as they imagine, allow a user to raise their controller to block with a shield. That is the hope, but near the launch of the original Kinect, Microsoft filed a patent for sign language recognition: that has not happened yet. Who knows whether the device will be successfully integrated into gaming applications.
Of course Microsoft is known most for its system software, and the Xbox One runs three lightweight operating environments. Think of Windows 8: you have the Modern interface, which runs WinRT applications, and you have the desktop environment, which is x86 compatible.
The Xbox One borrows more than a little from this model.
The home screen for the console, which I am tempted to call the Start Screen, has a very familiar tiled interface. The tiles are not identical to Windows but they are definitely consistent with it. This interface allows for access to Internet Explorer and an assortment of apps. These apps can be pinned to the side of the screen, identical to Windows 8 Modern apps. I am expecting "a lot of crossover" (to say the least) between this and the Windows Store; I would not be surprised if it is basically the same API. This works both when viewing entertainment content and within a game.
These three operating systems run at the same time. The main operating system is basically a Hyper-V environment which runs the other two simultaneously in sort-of virtual machines. Their output can be layered with low latency, since all you are doing is compositing the surfaces in a different order.
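To picture what that means, here is a purely illustrative sketch; the surface names are hypothetical, not anything Microsoft has published. Each environment renders to its own surface, and "switching" is just drawing those surfaces in a different order rather than pausing or rebooting anything:

```python
# Purely illustrative: each OS environment owns a render surface, and the
# host just changes the order in which the surfaces are composited.
surfaces = ["game_os", "app_os"]  # drawn back to front; last item is on top

def bring_to_front(stack, name):
    stack.remove(name)
    stack.append(name)  # re-append so it composites last, i.e. on top
    return stack

print(bring_to_front(surfaces, "app_os"))   # snap an app over the game
print(bring_to_front(surfaces, "game_os"))  # and flip straight back
```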
Lastly, they made reference to Xbox Live, go figure. Microsoft is seriously increasing their server capacity and expects developers to utilize Azure infrastructure to offload "latency-insensitive" computation for games. While Microsoft promises that you can play games offline, this obviously does not apply to features (or whole games) which rely upon the back-end infrastructure.
And yes, I know you will all beat up on me if I do not mention the SimCity debacle. Maxis claimed that much of the game requires an online connection due to the complicated server requirements; after a crack allowed offline functionality, it was clear that the game mostly operates fine on a local client. How much will the Xbox Live cloud service offload? Who knows, but that is at least their official word.
Now to tie up some loose ends. The Xbox One will not be backwards compatible with Xbox 360 games, although that is no surprise. Also, Microsoft says they are allowing users to resell and lend games. That said, games will be installed and will not require the disc, from what I have heard. Apart from the concerns about how much you can fit on a single 500GB drive, rumor has it that once the game is installed, if you load it elsewhere (the rumor is even more unclear about whether "elsewhere" means other accounts or other machines) you will need to pay a fee to Microsoft. In other words? Basically not a used game.
Well, that about covers it. You can be sure we will add more as information comes forth. Comment away!
Subject: Graphics Cards | May 20, 2013 - 12:54 PM | Tim Verry
Tagged: VIA, Q1 2013, nvidia, jpr, Intel, gpu market share, amd
Market analytics firm Jon Peddie Research recently released estimated market share and GPU shipment numbers for Q1 2013. The report includes information on AMD, NVIDIA, Intel, and VIA and covers IGPs, processor graphics, and discrete GPUs included in desktop and mobile systems powered by x86 hardware. The report includes x86 tablets but otherwise does not factor in GPUs used in ARM devices like NVIDIA's Tegra chips. Year over year, the PC market is down 12.6% and the GPU market declined by 12.9%. It is not all bad news for the PC market and discrete GPU makers, however: discrete GPUs through 2016 are expected to exhibit a compound annual growth rate (CAGR) of 2.6%, with as many as 394 million discrete GPUs shipped in 2016 alone.
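To put that CAGR figure in concrete terms, a compound annual growth rate just applies the same multiplier every year. Here is a quick sketch, assuming the 2.6% rate runs from 2013 through 2016; the 2013 baseline is backed out of JPR's 394 million projection purely for illustration and is our inference, not a JPR figure:

```python
# Compound annual growth: units_n = units_0 * (1 + rate) ** years.
# The 2013 baseline is inferred from the 2016 projection, for illustration only.
rate = 0.026
units_2016 = 394e6
units_2013 = units_2016 / (1 + rate) ** 3

for year in range(2013, 2017):
    projected = units_2013 * (1 + rate) ** (year - 2013)
    print(year, f"{projected / 1e6:.0f} million discrete GPUs")
# 2013: 365M, 2014: 374M, 2015: 384M, 2016: 394M
```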
In Q1 2013, the PC market was down 13.7% versus the previous quarter (Q4 2012), but the GPU market only declined 3.2%. This discrepancy is explained as the result of people adding multiple GPUs to a single PC system, including adding a single discrete card to a system that already has processor graphics or an APU. By the end of Q1 2013, Intel held 61.8% market share, followed by AMD in second place with 20.2% and NVIDIA with 18%. Notably, VIA is out of the game with 0.0% market share.
In terms of GPU shipments, NVIDIA had a relatively good first quarter of this year, with notebook GPU shipments up 7.6% and desktop GPU shipments remaining flat. Overall, NVIDIA saw an increase in PC graphics shipments of 3.6%. On the other hand, x86 CPU giant Intel saw desktop and notebook GPUs slip by 3% and 6.3% respectively; overall, that amounts to PC graphics shipments that fell by 5.3%. In between NVIDIA and Intel, AMD moved 30% more desktop chips (including APUs) versus Q4 2012, while its notebook chips (including APUs) fell by 7.3%. AMD's overall PC graphics shipments fell by 0.3%.
In all, this is decent news for the PC market as it shows that there is still interest in desktop GPUs. The PC market itself is declining and taking the GPU market with it, but it is far from the death of the desktop PC. It is interesting that NVIDIA (which announced Q1'13 revenue of $954.7 million) managed to push more chips while AMD and Intel were on the decline, since NVIDIA doesn't have an x86 CPU with integrated graphics. I'm looking forward to seeing where NVIDIA stands in the mobile GPU market, which does include ARM-powered products.
Subject: General Tech, Graphics Cards, Processors, Mobile | May 15, 2013 - 09:02 PM | Scott Michaud
Tagged: tegra 4, hp, tablets
Sentences containing the words "Hewlett-Packard" and "tablet" can end in a question mark, an exclamation mark, or a period on occasion. The gigantic multinational technology company tried to own a whole mobile operating system with their purchase of Palm, then abandoned those plans just as abruptly with a $99 liquidation of $500 tablets so successful, go figure, that they to some extent did it twice. The operating system was open sourced, and at some point LG swooped in and bought it, minus patents, for use in Smart TVs.
So how about that Android?
The floodgates are open on Tegra 4, with HP announcing their SlateBook x2 hybrid tablet just a single day after NVIDIA's SHIELD moved out of the projects. The SlateBook x2 uses the Tegra 4 processor to power Android 4.2.2 Jelly Bean along with the full Google experience, including the Google Play store. Along with Google Play, the SlateBook and its Tegra 4 processor also get access to TegraZone and NVIDIA's mobile gaming ecosystem.
As for the device itself, it is a 10.1" Android tablet which can dock into a keyboard for extended battery life, I/O ports, and, well, a hardware keyboard. You are able to attach the tablet to a TV via HDMI alongside the typical USB 2.0 port, combo audio jack, and full-sized SD card slot; which half of the hybrid any given port lives on is anyone's guess, however. Wirelessly, you have WiFi a/b/g/n and some unspecified version of Bluetooth.
The raw specifications list follows:
NVIDIA Tegra 4 SoC
- ARM Cortex A15 quad core @ 1.8 GHz
- 72 "Core" GeForce GPU @ ~672MHz, 96 GFLOPS
- 2GB DDR3L RAM ("Starts at", maybe more upon customization?)
- 64GB eMMC SSD
- 1920x1200 10.1" touch-enabled IPS display
- HDMI output
- 1080p rear camera, 720p front camera with integrated microphone
- 802.11a/b/g/n + Bluetooth (4.0??)
- Combo audio jack, USB 2.0, SD Card reader
- Android 4.2.2 w/ Full Google and TegraZone experiences.
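As flagged in the GPU line above, that quoted 96 GFLOPS figure is consistent with the rest of the spec sheet if you count two operations (a multiply-add) per core per clock:

```python
# Spec-sheet sanity check: 72 "cores" x 2 ops/clock (multiply-add) x ~0.672 GHz.
cores, clock_ghz = 72, 0.672
print(cores * 2 * clock_ghz)  # 96.768 -> rounds to the quoted "96 GFLOPS"
```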
If this excites you, then you only have to wait until some point in August; you will also, of course, need to wait until you save up about $479.99 plus tax and shipping.
Subject: Graphics Cards | May 15, 2013 - 12:00 PM | Ryan Shrout
Tagged: tomb raider, never settle reloaded, never settle, level up, Crysis 3, bundle, amd
AMD dropped us a quick note to let us in on another limited time offer for buyers of AMD Radeon graphics cards. Starting today, the Never Settle Reloaded bundle that we first told you about in February is getting an upgrade for select tiers. For new buyers of the Radeon HD 7970, 7950 and 7790 AMD will be adding Tomb Raider into the mix. Also, the Radeon HD 7870 will be getting Crysis 3.
Here is the updated, currently running AMD Radeon Level Up bundle matrix.
Now if you buy a new AMD Radeon HD 7970, HD 7950 or HD 7870 today you will get four top-level PC games including Crysis 3, Bioshock Infinite, Far Cry 3: Blood Dragon and Tomb Raider.
This is a limited-time offer, though, that will end when supplies run out, and we don't really have any idea when that will be. Check out AMD's Level Up site for more details and to find retailers offering the updated bundles.
I am curious to find out how successful these bundles have been for AMD and whether NVIDIA has had any feedback on the free-to-play bundle they offered or the new Metro: Last Light option. Do gamers put much emphasis on the game bundles that come with each graphics card, or do the performance and technology make the difference?
UPDATE: I have seen a couple of questions on whether this Level Up promotion would be retroactive. According to the details I have from AMD, this promotion is NOT retroactive. If you have already purchased any of the affected cards you will not be getting the additional games.
Subject: General Tech, Graphics Cards, Processors, Systems, Mobile | May 14, 2013 - 03:54 PM | Scott Michaud
Tagged: haswell, nec
While we are not sure when it will be released or whether it will be available in North America, we have found a Haswell laptop. Actually, NEC will release two products in this lineup: a high-end 1080p unit and a lower-end 1366x768 model. Unfortunately, the article is in Japanese.
IPS displays have really wide viewing angles, even top and bottom.
NEC is known for their higher-end monitors; most people equate Dell UltraSharp panels with professional photo and video production, but Dell's top-end offerings are often a tier below the best from companies like NEC and Eizo. The laptops we are discussing today both contain touch-enabled IPS panels with apparently double the contrast ratio of what NEC considers standard. While these may or may not be NEC's tip-top panels, they should at least be decent screens.
Obviously the headliner for us is the introduction of Haswell. While we do not know exactly which part NEC decided to embed, we do know that they are relying upon it for their graphics performance. With the aforementioned higher-end displays, it seems likely that NEC is intending this device for the professional market. A price tag of 190000 yen (just under $1900 USD) for the lower-end model and 200000 yen (just under $2000 USD) for the higher-end one further suggests this is their target demographic.
Clearly a Japanese model.
The professional market does not exactly have huge requirements for graphics performance, but to explicitly see NEC trust Intel for their GPU performance is an interesting twist. Intel's HD 4000 has been nibbling away, to say the least, at discrete GPU market share in laptops. I would expect this laptop to contain one of the BGA-based parts, which are soldered onto the motherboard, for the added graphics performance.
As a final note, the higher-end model will also contain a draft 802.11ac antenna. Network performance could reach up to 867 megabits per second as a result.
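That 867 megabit figure is the standard draft-802.11ac PHY maximum for a two-spatial-stream radio on an 80 MHz channel; two streams is our assumption, since NEC has not published the antenna configuration. The arithmetic, for the curious:

```python
# Draft-802.11ac PHY rate, 80 MHz channel: 234 data subcarriers, 256-QAM
# (8 bits/subcarrier), 5/6 coding rate, 3.6 us symbols with short guard
# interval. Two spatial streams is an assumption; NEC has not confirmed it.
subcarriers, bits_per_subcarrier, coding_rate = 234, 8, 5 / 6
symbol_time_s, spatial_streams = 3.6e-6, 2

rate_bps = subcarriers * bits_per_subcarrier * coding_rate / symbol_time_s * spatial_streams
print(f"{rate_bps / 1e6:.1f} Mbps")  # ~866.7 Mbps, marketed as 867
```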
Of course I could not get away without publishing the raw specifications:
LL850/MS (Price: 200000 yen):
- Fourth-generation Intel Core processor with onboard video
- 8GB DDR3 RAM
- 1TB HDD w/ 32GB SSD caching
- BDXL (100-128GB Blu-ray disc) drive
- IEEE 802.11ac WiFi adapter, Bluetooth 4.0
- SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
- 1080p IPS display with touch support
- Office Home and Business 2013 preinstalled?
LL750/MS (Price: 190000 yen):
- Fourth-generation Intel Core processor with onboard video
- 8GB DDR3 RAM
- 1TB HDD (no SSD cache)
- (Optical disc support not mentioned)
- IEEE 802.11a/b/g/n WiFi adapter, Bluetooth 4.0
- SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
- 1366x768 (IPS?) touch-enabled display
Subject: Graphics Cards | May 10, 2013 - 07:25 PM | Jeremy Hellstrom
Tagged: titan, radeon hd 7990, nvidia, amd
If you have been wondering how the two flagship GPUs fare in a battle royal of pure frame rates, you can satisfy your curiosity at [H]ard|OCP. They have tested both NVIDIA's TITAN and the finally released HD 7990 in one of their latest reviews. Both cards were forced to push out pixels at 5760x1200 and for the most part tied, which makes sense as they both cost $1000. The real winner was a pair of Crossfired HD 7970s, which kept up with the big guns but cost $200 less to purchase.
If that isn't extreme enough for you, they also overclocked the TITAN in a separate review.
"We follow-up with a look at how the $999 GeForce GTX TITAN compares to the new $999 AMD Radeon HD 7990 video card. What makes this is unique is that the GeForce GTX TITAN is a single-GPU running three displays in NV Surround compared to the same priced dual-GPU CrossFire on a card Radeon HD 7990 in Eyefinity."
Here are some more Graphics Card articles from around the web:
- AMD's Radeon HD 7990 @ The Tech Report
- Diamond BV750 Low Profile 7750 @ Bjorn3D
- XFX Radeon HD 7790 Black Edition 1GB Video Card Review @ Madshrimps
- AMD Radeon HD 7790 2GB review: does another 1GB make a difference @ Hardware.info
- PowerColor HD 7790 Turbo Duo 1GB Video Card Review @ Hi Tech Legion
- AMD Radeon HD 7790 vs. Nvidia GeForce GTX 650 Ti BOOST @ X-bit Labs
- Sapphire HD7790 2GB OC @ Kitguru
- XFX R7790 Black Edition 1GB Review @ Neoseeker
- PowerColor Radeon HD 7790 TurboDuo OC 1GB @ eTeknix
- AMD Radeon HD 7990 6GB and HD 7970 GHz Edition Video Cards in CrossFireX @ Tweaktown
- AMD Radeon HD 7990 6GB Dual GPU Video Card Overclocked @ Tweaktown
- 15-Way Open vs. Closed Source NVIDIA/AMD Linux GPU Comparison @ Phoronix
- ASUS GTX650-E-2GD5 @ Hardware.info
- EVGA GeForce GTX TITAN 6GB SuperClocked Video Cards in SLI Overclocked @ Tweaktown
- Inno3D iChill GTX 650 Ti Boost 2 GB @ techPowerUp
- EVGA GeForce GTX 650 Ti Boost SuperClocked 2GB @ Tweaktown
- ZOTAC GTX Titan AMP! Edition @ Bjorn3D
- ASUS GTX 650 Ti Boost DCII @ Bjorn3D
- Asus GTX 670 DirectCU Mini 2GB @ eTeknix