Subject: Editorial, Graphics Cards | April 30, 2012 - 09:55 PM | Ryan Shrout
Tagged: nvidia, video, geforce, GTX 690, live, live review
We all know the reviews are coming soon - the GeForce GTX 690 is set to be launched this Thursday. The dual-GK104 Kepler solution with the $999 price tag will likely be the highest performing graphics card on the market (and by a lot) and we are going to be discussing the launch, the technology and a lot more in our PC Perspective Live Review.
Starting Thursday, May 3rd at 1pm EDT / 10am PDT, NVIDIA's Tom Petersen will join me at our live page (http://pcper.com/live) to talk about the new graphics card, the performance and feature characteristics that go into building a high-end solution like this and take questions from the viewers.
You might have seen our original GTX 680 Live Review where Tom and I hosted a similar event - this is definitely something you won't want to miss out on!
Be sure to set your calendars and join us Thursday afternoon for the event! You can use the chat room at http://pcper.com/live to interact and ask questions or follow me on Twitter and reply to me during the show.
Subject: Graphics Cards | April 30, 2012 - 03:17 PM | Ryan Shrout
Tagged: nvidia, kepler, GTX 690, geforce
There has been a lot of excitement building about the GeForce GTX 690 dual-GPU graphics card, peaking during CEO Jen-Hsun Huang's keynote this weekend, which released all kinds of details. We got our review sample of the new graphics beast this morning, and needless to say NVIDIA felt the need to give this $999 video card a special ride.
With the imprint of "Caution: Weapons Grade Gaming Power" on the outside of the crate, NVIDIA obviously wanted to give us a chance to use the pry bar sent last week. And use it we did.
There isn't much more we can say about the card itself but I can tell you that the fit and finish of the design is just impressive to see in person.
I have included quite a few more photos of the unboxing and the card itself if you continue to the full post right here!!
Subject: Graphics Cards | April 29, 2012 - 01:25 PM | Ryan Shrout
Tagged: nvidia, geforce experience, geforce
After spilling the beans about the new GeForce GTX 690 card at the GeForce LAN in Shanghai, NVIDIA's CEO Jen-Hsun Huang also let loose on the GeForce Experience, a cloud-based service that promises to simplify the configuration of game settings based on your hardware.
The idea is incredibly simple but equally impressive in its scope: based on your particular hardware configuration, including the processor, memory capacity, storage speed and of course the graphics card, the NVIDIA tool will set the optimal in-game settings and resolution. The breadth required to cover ALL the available hardware in the enthusiast market and even the mobile field is enormous, but NVIDIA is confident that it has the personnel and testing systems in place to cover it all.
The process is pretty straightforward: when a user opens a game for the first time, they will be presented with a screen that shows the default or current game settings side by side with the settings recommended by NVIDIA's GeForce Experience. You can simply hit apply, the configuration files will be updated to the new settings, and you are ready to start gaming. Of course, users can also use the GFE settings as a "base" and then modify them as they see fit.
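The apply step described above boils down to overlaying a recommended settings map onto the game's current configuration. Here is a toy sketch of that flow; the setting names and values are purely illustrative and not GeForce Experience's actual format:

```python
# Hypothetical sketch of an "apply recommended settings" step.
# All keys and values below are made up for illustration.

current = {"resolution": "1280x720", "texture_quality": "medium", "aa": "off"}
recommended = {"resolution": "1920x1080", "texture_quality": "high", "aa": "4x MSAA"}

def apply_recommended(current, recommended):
    """Return a new settings dict with the recommendations layered on top,
    leaving any setting without a recommendation untouched."""
    merged = dict(current)
    merged.update(recommended)
    return merged

settings = apply_recommended(current, recommended)
print(settings)
```

Because the merge produces a new dictionary rather than mutating the original, a user who wants to treat the recommendations as a "base" can still tweak individual entries afterward.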
This will be particularly useful for the mobile market, which is rarely addressed by the in-game auto-configurations from companies like Valve, resulting in incredibly low frame rates or a horrible 800x600-style experience. Now users with machines running the somewhat obscure GT 540M will have the option to get more modern, and hopefully realistic, settings easily applied.
Obviously the goal is to make gaming on the PC as simple as gaming on consoles, and this type of service will definitely move the industry in the right direction. With the beta supposedly starting in June, this is going to demand a long-term vision and constant, vigilant updates, as new games, new driver revisions and user upgrades will constantly change what the "optimal" settings are. We wish NVIDIA the best of luck, to be sure, and we will be testing out the service in the not-too-distant future.
Subject: Graphics Cards | April 28, 2012 - 11:55 PM | Ryan Shrout
Tagged: nvidia, kepler, jen-hsun huang, hd 7990, GTX 690, gtx 680, geforce, 7990
During a keynote presentation at GeForce LAN 2012 being held in Shanghai, NVIDIA's CEO Jen-Hsun Huang unveiled what many of us have been theorizing would be coming soon; the dual-GPU variant of the Kepler architecture, the GeForce GTX 690 graphics card.
Though reviews are still under embargo, Huang revealed pretty much all of the information we need to size the card up. With the full specifications listed as well as details about the stunning new design of the card and cooler, the GTX 690 is without a doubt going to be the fastest graphics card on the market when it goes on sale next month.
The GeForce GTX 690 4GB card is based on a pair of GK104 chips, each sporting 1536 CUDA cores, basically identical to the ones used in the GeForce GTX 680 2GB cards released in March. The base clock speed of these parts is slightly lower at 915 MHz but the "typical" Boost clock is set as high as 1019 MHz, pushing it pretty close to the performance of the single GPU solutions. With a total of 3072 processing cores, the GTX 690 will have insane amounts of compute horsepower.
Each GPU will have access to 2GB of independent frame buffer still running at 6 Gbps, for a grand total of 4GB on the card.
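For a back-of-the-envelope sense of scale, the specs above let us estimate peak compute throughput and memory bandwidth. This is only a rough sketch: it assumes 2 FLOPs per core per clock (a fused multiply-add, as Kepler supports) and a 256-bit memory bus per GPU (the GK104 standard, not stated in the announcement):

```python
# Rough peak-throughput estimates for the GTX 690 from the published specs.
# Assumption: 2 FLOPs/core/clock (FMA); 256-bit bus per GPU (GK104 standard).

cores_per_gpu = 1536
gpus = 2
boost_clock_ghz = 1.019      # "typical" Boost clock from the keynote
mem_rate_gbps = 6.0          # effective data rate per memory pin
bus_width_bits = 256         # per GPU (assumed, standard for GK104)

peak_tflops = gpus * cores_per_gpu * 2 * boost_clock_ghz / 1000
bandwidth_gbs = mem_rate_gbps * bus_width_bits / 8   # per GPU, GB/s

print(f"Peak compute: ~{peak_tflops:.2f} TFLOPS")
print(f"Memory bandwidth per GPU: ~{bandwidth_gbs:.0f} GB/s")
```

That works out to roughly 6.3 TFLOPS of single-precision compute at Boost clocks and about 192 GB/s of bandwidth to each GPU's independent 2GB frame buffer.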
Sitting between the two GPUs will be a PCI Express 3.0 capable bridge chip from PLX supporting full x16 lanes to each GPU and a full x16 back to the host system.
In terms of power requirements, the GTX 690 will use a pair of 8-pin connectors and will have a TDP of 300 watts - actually not that high, considering the TDP of a single GTX 680 is 195 watts. It is obvious that NVIDIA is going to be pulling the very best chips for this card, those that can run at clock speeds over 1 GHz with minimal leakage.
When the NVIDIA GeForce GTX 680 launched in March we were incredibly impressed with the performance and technology that the GPU was able to offer while also being power efficient. Fast forward nearly a month and we are still having problems finding the card in stock - a HUGE negative towards the perception of the card and the company at this point.
Still, we are promised by NVIDIA and its partners that they will soon have more on shelves, so we continue to look at the performance configurations and prepare articles and reviews for you. Today we are taking a look at the Galaxy GeForce GTX 680 2GB card - their most basic model that is based on the reference design.
If you haven't done all the proper reading about the GeForce GTX 680 and the Kepler GPU, you should definitely check out my article from March that goes into a lot more detail on that subject before diving into our review of the Galaxy card.
The Card, In Pictures
The Galaxy GTX 680 is essentially identical to the reference design with the addition of some branding along the front and top of the card. The card is still a dual-slot design, still requires a pair of 6-pin power connections and uses a very quiet fan in relation to the competition from AMD.
Subject: Graphics Cards | March 24, 2012 - 05:51 PM | Ryan Shrout
Tagged: video, tom petersen, nvidia, live review, gtx 680, geforce
On the day of the GeForce GTX 680 launch, we hosted a "Live Review" to discuss the new product features and performance while also taking questions from a live chat room and via Twitter. NVIDIA's own Tom Petersen stopped by the offices to talk with us and to show off the hardware features with some live demos of GPU Boost, overclocking and quite a bit more.
If you haven't seen the video yet, you should definitely do so; Tom does a great job explaining the new technology involved with the Kepler GPU. One caveat: the recording process hit a snag, so the video actually starts a few minutes AFTER we began the live stream. Sorry!
Introduction, GT 640M Basics
About two months ago I wrote a less-than-enthusiastic editorial about ultrabooks that pointed out several weaknesses in the format. One particular weakness in all of the products we’ve seen to date is graphics performance. Ultrabooks so far have lacked the headroom for a discrete graphics component and have instead been saddled with a low-performance version of the already so-so Intel HD 3000 IGP.
This is a problem. Ultrabooks are expensive, yet they so far are less capable of displaying rich 3D graphics than your typical smartphone or tablet. Casual gamers will notice this and take their gaming time and dollars in that direction. Early leaked information about Ivy Bridge indicates that there has been a substantial increase in graphics capability, but the information available so far is centered on the desktop. The version that will be found in ultrabooks is unlikely to be as quick.
Today we’re looking at a potential solution - the Acer Aspire Timeline Ultra M3 equipped with Nvidia’s new GT 640M GPU. This is the first laptop to launch with a Kepler-based GPU. It is also an ultrabook, albeit one with a 15.6” display. Otherwise, it isn’t much different from other products on the market, as you can see below.
This is likely to be the only Kepler based laptop on the market for a month or two. The reason for this is Ivy Bridge - most of the manufacturers are waiting for Intel’s processor update before they go to the trouble of designing new products.
Quarter Down but Year Up
Yesterday NVIDIA released their latest financial results for Q4 2012 and FY2012. There was some good and bad mixed in the results, but overall it was a very successful year for NVIDIA.
Q4 saw gross revenue top $953.2 million US with a net income of $116 million US. This is about $53 million less in gross revenue and $62 million down in net income as compared to last quarter. There are several reasons as to why this happened, but the majority of it appears to be due to the hard drive shortage affecting add-in sales. Simply put, the increase in hard drive prices caused most OEMs to take a good look at the price points of the entire system, and oftentimes would cut out the add-in graphics and just use integrated.
Tegra 3 promises a 50% increase in revenue for NVIDIA this coming year.
Another reason for the lower-than-expected quarter was the start of the transition to 28 nm products based on Kepler. NVIDIA is ramping up production on 28 nm and slowing down 40 nm. Yields on 28 nm are not where the company expected them to be, and there is also a shortage of wafer starts for that line. This had a pretty minimal effect overall on Q4, but it will be one of the prime reasons why revenue looks like it will be down in Q1 2013.
Four Displays for Under $70
Running multiple displays on your PC is becoming a trend that everyone is trying to jump on board with thanks in large part to the push of Eyefinity from AMD over the past few years. Gaming is a great application for multi-display configurations but in truth game compatibility and game benefits haven't reached the level I had hoped they would by 2012. But while gaming still has a way to go, the consumer applications for having more than a single monitor continue to expand and cement themselves in the minds of users.
Galaxy is the only NVIDIA partner that is really taking this market seriously with an onslaught of cards branded as MDT, Multiple Display Technology. Using non-NVIDIA hardware in conjunction with NVIDIA GPUs, Galaxy has created some very unique products for consumers like the recently reviewed GeForce GTX 570 MDT. Today we are going to be showing you the new Galaxy MDT GeForce GT 520 offering that brings support for a total of four simultaneous display outputs to a card with a reasonable cost of under $120.
The Galaxy MDT GeForce GT 520
Long time readers of PC Perspective already likely know what to expect based on the GPU we are using here but the Galaxy MDT model offers quite a few interesting changes.
The retail packaging clearly indicates the purpose of this card for users looking at running more than two displays. The GT 520 is not an incredibly powerful GPU when it comes to gaming but Galaxy isn't really pushing the card in that manner. Here are the general specs of the GPU for those that are interested:
- 48 CUDA cores
- 810 MHz core clock
- 1GB DDR3 memory
- 900 MHz memory clock
- 64-bit memory bus width
- 4 ROPs
- DirectX 11 support
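From the spec list above we can estimate the card's memory bandwidth, which illustrates why Galaxy positions it for multi-display productivity rather than gaming. One assumption: the 900 MHz figure is the physical memory clock, with DDR3 transferring data twice per clock (1800 MT/s effective):

```python
# Rough memory-bandwidth estimate for the Galaxy MDT GT 520.
# Assumption: 900 MHz is the physical clock; DDR3 transfers twice per clock.

mem_clock_mhz = 900
ddr_multiplier = 2          # double data rate
bus_width_bytes = 64 / 8    # 64-bit bus = 8 bytes per transfer

bandwidth_gbs = mem_clock_mhz * ddr_multiplier * bus_width_bytes / 1000
print(f"~{bandwidth_gbs:.1f} GB/s")
```

At roughly 14.4 GB/s, that is a small fraction of what contemporary gaming cards offer, but plenty for driving desktop workspaces across four displays.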
Guess what? Overclocked.
The NVIDIA GTX 580 GPU, based on the GF110 Fermi architecture, is old but it isn't forgotten. Released in November of 2010, NVIDIA held the single-GPU performance crown for more than a year before it was usurped by AMD and the Radeon HD 7970 just this month. Still, the GTX 580 is a solid high-end enthusiast graphics card with widespread availability and custom-designed, overclocked models from numerous vendors, making it a viable option.
Gigabyte sent us this overclocked and custom-cooled model quite a while ago, but we had simply fallen behind with other reviews until just after CES. In today's market the card has a bit of a different role to fill - it surely won't be able to surpass the new AMD Radeon HD 7970, but can it fight the good fight and keep NVIDIA's current lineup of GPUs competitive until Kepler finally shows itself?
The Gigabyte GTX 580 1.5GB Super Overclock Card
With the age of the GTX 580 designs, Gigabyte had plenty of time to perfect their PCB and cooler design. This model, the Super Overclock (GV-N580SO-15I), comes in well ahead of the standard reference speeds of the GTX 580 but sticks to the same 1.5 GB frame buffer.
The clock speed is set at 855 MHz core and 1025 MHz memory, compared to the 772 MHz core speed and 1002 MHz memory clock of the reference design. That is a very healthy core overclock of nearly 11% that should equate to almost that big of a gap in gaming performance where the GPU is the real bottleneck.
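As a quick sanity check on those numbers, the overclock margins work out as follows:

```python
# Overclock margin of the Gigabyte Super Overclock vs. the reference GTX 580.
ref_core, so_core = 772, 855     # MHz
ref_mem, so_mem = 1002, 1025     # MHz

core_gain = (so_core - ref_core) / ref_core * 100
mem_gain = (so_mem - ref_mem) / ref_mem * 100
print(f"Core: +{core_gain:.1f}%  Memory: +{mem_gain:.1f}%")
```

The core gets a roughly 10.8% bump while the memory overclock is a much more modest 2.3%, so in GPU-limited titles the scaling should track the core clock, while bandwidth-limited scenarios will see smaller gains.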