Subject: Graphics Cards | November 20, 2013 - 10:52 AM | Jeremy Hellstrom
Tagged: mars, asus, ROG MARS 760, gtx 760, dual gpu
Fremont, CA (November 19, 2013) - ASUS Republic of Gamers (ROG) today announced the MARS 760 graphics card featuring two GeForce GTX 760 graphics-processing units (GPUs) capable of delivering incredible gaming performance and ensuring ultra-smooth high-resolution gameplay. The MARS 760 even outpaces the GeForce GTX TITAN — with game performance that’s up to 39% faster overall. The MARS 760 is a two-slot card packed with exclusive ASUS technologies including DirectCU II for 20%-cooler and vastly quieter operation, DIGI+ voltage-regulator module (VRM) for ultra-stable power delivery and GPU Tweak, an easy-to-use utility that lets users safely overclock the two GTX 760 GPUs.
Exclusive ASUS features provide cool, quiet, durable and stable performance. ASUS' exclusive DirectCU II technology puts eight highly conductive copper heatpipes in direct contact with both GPUs. These heatpipes provide extremely efficient cooling, allowing the MARS 760 to run 20% cooler and vastly quieter than reference GeForce GTX 690 cards. Dual 90mm dust-proof fans help provide six times (6X) greater airflow than the reference design. And with 4GB of GDDR5 video memory, the ASUS ROG MARS 760 is capable of delivering visuals with incredibly high frame rates and no stutter, ensuring extremely smooth gameplay — even at WQHD resolutions. An attention-grabbing LED even illuminates as the MARS 760 is operating under load.
The MARS 760 is equipped with ROG’s acclaimed DIGI+ voltage-regulation module (VRM), featuring a 12-phase power design that reduces power noise by 30% and enhances efficiency by 15%. Custom sourced black metallic capacitors offer 20%-better temperature endurance for a lifespan that’s up to five times (5X) longer. The new card is built with extremely hardwearing polymerized organic-semiconductor capacitors (POSCAPs) and has an aluminum back-plate, further lowering power noise while increasing both durability and stability to unlock overclocking potential.
The exclusive GPU Tweak tuning tool allows quick, simple and safe control over clock speeds, voltages, cooling-fan speeds and power-consumption thresholds; GPU Tweak lets users push the two GTX 760 GPUs even further. The ROG edition of GPU Tweak included with the MARS 760 also enables detailed GPU load-line calibration and VRM-frequency tuning, allowing for the most extensive control and tweaking parameters in order to maximize overclocking potential — all adjusted via an attractive and easy-to-use graphical interface.
The GPU Tweak Streaming feature, the newest addition to the GPU Tweak tool, lets users share on-screen action over the internet in real time so others can watch live as games are played. It’s even possible to add a title to the streaming window along with scrolling text, pictures and webcam images.
- NVIDIA GeForce GTX 760 SLI
- PCI Express 3.0
- 4096MB GDDR5 memory (2GB per GPU)
- 1008MHz (1072MHz boosted) core speed
- 6004 MHz (1501 MHz GDDR5) memory clock
- 512-bit memory interface (2 x 256-bit, one per GPU)
- 2560 x 1600 maximum DVI resolution
- 2 x dual-link DVI-I output
- 1 x dual-link DVI-D output
- 1 x Mini DisplayPort output
- HDMI output (via dongle)
Subject: General Tech, Graphics Cards | November 18, 2013 - 12:33 PM | Scott Michaud
Tagged: tesla, nvidia, K40, GK110b
The Tesla K20X has ruled NVIDIA's headless GPU portfolio for quite some time now. The part is based on the GK110 chip with 192 shader cores disabled, like the GeForce Titan, and achieves 3.9 TeraFLOPs of compute performance (1.31 TeraFLOPs in double precision). Also like the Titan, the K20X offers 6GB of memory.
The Tesla K40
So the layout was basically the following: GK104 ruled the gamer market except for the, in hindsight, oddly positioned GeForce Titan, which was basically a Tesla K20X without a few features like error correction (ECC). The Quadro K6000 was the only card to utilize all 2880 CUDA cores.
Then, at the recent G-Sync event, NVIDIA CEO Jen-Hsun Huang announced the GeForce GTX 780 Ti. This card uses the GK110b processor and incorporates all 2880 CUDA cores, albeit with reduced double-precision performance (for the 780 Ti specifically, not for GK110b in general). So now we have Quadro and GeForce with the full-power Kepler; your move, Tesla.
And move they did: the Tesla K40 launched this morning, and it brought more than just cores.
A brief overview
The GeForce GTX 680 launch was famous for its inclusion of GPU Boost, a feature absent from the Tesla line. It turns out that NVIDIA was paying attention to the feature but wanted to include it in a way that suited data centers. GeForce cards boost based on the status of the card, its temperature or its power draw. This is apparently unsuitable for data centers, because they want every unit operating at very similar performance. The Tesla K40 has a base clock of 745 MHz but gives data centers two boost clocks that can be set manually: 810 MHz and 875 MHz.
Relative performance benchmarks
The Tesla K40 also doubles the amount of RAM to 12GB. Of course, this allows the GPU to work on larger data sets without streaming data in from system memory or, worse, storage.
There is currently no public information on pricing for the Tesla K40 but it is available starting today. What we do know are the launch OEM partners: ASUS, Bull, Cray, Dell, Eurotech, HP, IBM, Inspur, SGI, Sugon, Supermicro, and Tyan.
If you are interested in testing out a K40, NVIDIA has remotely hosted clusters that your company can sign up for at the GPU Test Drive website.
Subject: General Tech, Graphics Cards | November 14, 2013 - 04:54 PM | Scott Michaud
Tagged: never settle forever, never settle, battlefield 4, amd
UPDATE (11/14/2013): After many complaints from the community about the lack of availability of graphics cards that actually HAD the Battlefield 4 bundle included with them, AMD is attempting to clarify the situation. In a statement sent through email, AMD says that the previous information sent to press "was not clear and has led to some confusion" which is definitely the case. While it was implied that all customers that bought R9 series graphics cards would get a free copy of BF4, when purchased on or after November 13th, the truth is that "add-in-board partners ultimately decide which select AMD Radeon R9 SKUs will include a copy of BF4."
So, how are you to know which SKUs and cards are actually going to include BF4? AMD is working to set up a landing page at http://amd.com/battlefield4 that will give gamers clear and definitive listings of which R9 series cards include the free copy of the game. When I pushed AMD for a timeline on exactly when these would be posted, the best I could get was "in the next day or two."
As for users that bought an R9 280X, R9 270X, R9 270, R9 290X or R9 290 after the announcement of the bundle program changes but DID NOT get a copy of BF4, AMD is going to try and help them out by offering up 1,000 Battlefield 4 keys over AMD's social channels. The cynic in me thinks this is another ploy to get more Facebook likes and Twitter followers, but in truth the logistics of verifying purchases at this point would be a nightmare for AMD. Though I don't have details on HOW they are going to distribute these keys, I certainly hope they are going to find a way to target those users that were screwed over in this mess. Follow www.facebook.com/amdgaming or www.twitter.com/amdradeon for more information on this upcoming promotion.
AMD did send over a couple of links to cards that are currently selling with Battlefield 4 included, as an example of what to look for:
As far as I know, the board partners will also decide which online outlets offer the bundle, so even if you see the same SKU on Amazon.com, it may not come with Battlefield 4. It appears that, in this case and going forward, extreme caution is in order when looking for the right card for you.
END UPDATE (11/14/2013)
AMD announced the first Never Settle on October 22nd, 2012 with Sleeping Dogs, Far Cry 3, Hitman: Absolution, and 20% off of Medal of Honor: Warfighter. The deal was valued at around $170. It has exploded since then to become a choose-your-own-bundle across a variety of tiers.
This bundle mostly breaks from that formula.
Basically, apart from the R7 260X (I will get to that later), all applicable cards receive Battlefield 4. Unlike Never Settle, this is a one-game promotion. Still, it is one very good game, and it will soon be accelerated with Mantle in an upcoming patch. It should be representative of Frostbite 3-based games for at least the next few years.
The qualifying cards are: R9 270, R9 270X, R9 280, R9 280X, R9 290, and R9 290X. They must be purchased from a participating retailer beginning November 13th.
The R7 260X is a slightly different case because it hews closer to the original Never Settle. It will not have access to a free copy of Battlefield 4. Instead, the R7 260X will have access to two of six Never Settle Forever Silver Tier games: Hitman: Absolution, Sleeping Dogs, Sniper Elite V2, Far Cry 3: Blood Dragon, DiRT 3, and (for the first time) THIEF. It is possible that other silver-tier Never Settle Forever owners, who have yet to redeem their voucher, might qualify as well. I am not sure about that. Regardless, THIEF was chosen because the developer worked closely with AMD to support both Mantle and TrueAudio.
Since this deal half-updates Never Settle and half-doesn't... I am unsure what this means for the future of the bundle. They seem to be simultaneously supporting and disavowing it. My personal expectation is that AMD wants to continue with Never Settle but they just cut their margins too thin with this launch. This will be a good question to revisit later in the GPU lifecycle when margins become more comfortable.
What do you think? Does AMD's hyper-aggressive hardware pricing warrant a temporary suspension of Never Settle? I mean, until today, these cards were being purchased without any bundle whatsoever.
Qualifying R9-Series Cards (purchased after Nov 13 from participating retailers) can check out AMD's Battlefield 4 portal.
Qualifying R7 260X owners, on the other hand, can check out the Never Settle Forever portal.
Subject: Graphics Cards | November 13, 2013 - 06:54 PM | Ryan Shrout
Tagged: video, Mantle, apu13, amd
While attending the AMD APU13 event, an annual developer conference the company uses to promote heterogeneous computing, I got to sit in during a deep dive on the AMD Mantle, a new hardware level API first announced in September. Rather than attempt to re-explain what was explained quite well, I decided to record the session on video and then intermix the slides presented in a produced video for our readers.
The result is likely the best (and seemingly first) explanation of how Mantle actually works and what it does differently than existing APIs like DirectX and OpenGL.
Also, because we had some requests, I am embedding the live blog we ran during Johan Andersson's keynote from APU13. Enjoy!
Subject: Graphics Cards, Processors | November 12, 2013 - 03:10 PM | Ryan Shrout
Tagged: amd, Kaveri, APU, video, hsa
Yesterday at the AMD APU13 developer conference, the company showed off the upcoming Kaveri APU running Battlefield 4 completely on the integrated graphics. I was able to push the AMD guys along and get a little more personal demo to share with our readers. The Kaveri APU had some of its details revealed this week:
- Quad-core Steamroller x86
- 512 Stream Processor GPU
- 856 GFLOPS of theoretical performance
- 3.7 GHz CPU clock speed, 720 MHz GPU clock speed
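The 856 GFLOPS figure lines up with a back-of-the-envelope calculation from the other listed specs, assuming 2 FLOPs per stream processor per clock on the GPU and 8 single-precision FLOPs per cycle per x86 core (common figures, but not confirmed by AMD here):

```python
# Rough check of the 856 GFLOPS theoretical figure from the spec list above.
# Assumptions (not from AMD): 2 FLOPs/SP/clock (GPU), 8 FLOPs/cycle/core (CPU).
gpu_gflops = 512 * 2 * 0.720   # 512 stream processors at 720 MHz ~= 737.3 GFLOPS
cpu_gflops = 4 * 8 * 3.7       # 4 Steamroller cores at 3.7 GHz ~= 118.4 GFLOPS
total = gpu_gflops + cpu_gflops  # ~= 855.7 GFLOPS, matching the quoted ~856
print(round(total, 1))
```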
AMD wanted to be sure we pointed out in this video that the estimated clock speeds used for the FLOP figure may not match what the demo system was actually running (likely a bit lower). Also, the version of Battlefield 4 shown here is the standard retail version; with further improvements from the driver team, and the upcoming Mantle API implementation, the APU will likely see even more performance.
The game was running at 1920x1080 with MOSTLY medium quality settings (lighting set to low), but the results still looked damn impressive and the frame rates were silky smooth. Considering this is running on a desktop with integrated processor graphics, the gameplay experience is simply unmatched.
Memory in the system was running at 2133 MHz.
The second demo looks at the image-decoding acceleration that AMD will enable on Kaveri APUs at release via a driver. Essentially, as the video shows, AMD is overriding the integrated Windows JPEG decompression algorithm with a new one that utilizes HSA to accelerate work on both the x86 and SIMD (GPU) portions of the silicon. The most strenuous demo, which used 22 MP images, saw a 100% increase in performance compared to the Kaveri CPU cores alone.
Subject: Graphics Cards | November 8, 2013 - 01:41 PM | Jeremy Hellstrom
Tagged: nvidia, kepler, gtx 780 ti, gk110, geforce
Here is a roundup of reviews of what is now the fastest single-GPU card on the planet, the GTX 780 Ti, built around a fully enabled GK110 chip. Its 7GHz GDDR5 is faster than AMD's memory but sits on a 384-bit memory bus, narrower than the R9 290X's, which leads to some interesting questions about the performance of this card at high resolutions. Are you willing to pay quite a bit more for better performance and a quieter card? Check out the performance deltas at [H]ard|OCP and see if that changes your mind at all.
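The bus-width question comes down to simple arithmetic: peak memory bandwidth is the effective memory clock times the bus width in bytes. As a rough sketch (the 5 GHz effective memory clock for the R9 290X is an assumption here, not a figure from this article):

```python
def peak_bandwidth_gbps(effective_clock_ghz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: data rate (GT/s) times bus width in bytes."""
    return effective_clock_ghz * (bus_width_bits / 8)

# GTX 780 Ti: 7 GHz effective GDDR5 on a 384-bit bus
gtx_780_ti = peak_bandwidth_gbps(7.0, 384)   # 336.0 GB/s
# R9 290X: slower memory (assumed 5 GHz effective) on a wider 512-bit bus
r9_290x = peak_bandwidth_gbps(5.0, 512)      # 320.0 GB/s
print(gtx_780_ti, r9_290x)
```

So despite the narrower bus, the faster memory keeps the 780 Ti slightly ahead on paper; real high-resolution behavior is what the reviews test.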
You can see how it measures up in Ryan's review as well.
"NVIDIA's fastest single-GPU video card is being launched today. With the full potential of the Kepler architecture and GK110 GPU fully unlocked, how will it perform compared to the new R9 290X with new drivers? Will the price versus performance make sense? Will it out perform a TITAN? We find out all this and more."
Here are some more Graphics Card articles from around the web:
- Nvidia's GeForce GTX 780 Ti @ The Tech Report
- Nvidia GeForce GTX 780 Ti @ Bjorn3D
- NVIDIA GeForce GTX 780Ti Review @HiTech Legion
- NVIDIA GeForce GTX 780 Ti 3GB Video Card Review @ Legit Reviews
- NVIDIA GTX 780 Ti Video Card Review @ Hardware Asylum
- NVIDIA GeForce GTX 780 Ti Review @ OCC
- NVIDIA GeForce GTX 780 Ti Performance Review @ Hardware Canucks
- Nvidia GeForce GTX 780 Ti review: Titan killer @ Hardware.info
- Gigabyte GTX 780 GHz Edition 3GB @ eTeknix
- Nvidia GTX780Ti @ Kitguru
- Nvidia GTX 780 Ti @ LanOC Reviews
- NVIDIA GeForce GTX 780 Ti 3 GB @ techPowerUp
- NVIDIA GeForce GTX 780 Ti @ Benchmark Reviews
- Nvidia GTX 780 Ti 3GB @ eTeknix
- Gigabyte Radeon R9 280X @ Legion Hardware
- AMD R9 290 4GB @ eTeknix
AMD Releases Catalyst 13.11 Beta 9.2 Driver To Correct Performance Variance Issue of R9 290 Series Graphics Cards
Subject: Graphics Cards, Cases and Cooling | November 7, 2013 - 11:41 PM | Tim Verry
Tagged: R9 290X, powertune, hawaii, graphics drivers, gpu, GCN, catalyst 13.11 beta, amd, 290x
AMD recently launched its 290X graphics card, the new high-end single-GPU solution based on the GCN Hawaii architecture. The new GPU is rather large and incorporates an updated version of AMD's PowerTune technology, which automatically adjusts clockspeeds based on temperature while holding fan speed to a 40% maximum. Unfortunately, it seems that some 290X cards available at retail exhibited performance characteristics that varied from review units.
AMD has looked into the issue and released the following statement in response to the performance variances (which PC Perspective is looking into as well).
Hello, We've identified that there's variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink. The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.
The correct target RPM values are 2200RPM for the AMD Radeon R9 290X "Quiet mode", and 2650RPM for the R9 290. You can verify these in GPU-Z. If you're working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.
From the AMD statement, it seems to be an issue with fan speeds varying from card to card, causing the performance variances. With a GPU that is rated to run at up to 95C, a fan limited to 40% maximum, and dynamic clockspeeds, it is only natural that cards could perform differently, especially if case airflow is not up to par. On the other hand, the specific issue pointed out by other technology review sites (per my understanding, it was initially Tom's Hardware that reported on the retail vs review sample variance) is that the 40% maximum on certain cards does not actually hit the RPM target that AMD intended.
AMD intended the Radeon R9 290X's fan to run at 2200 RPM (40%) in Quiet Mode, and the fan on the R9 290 (which has a maximum fan speed percentage of 47%) to spin at 2650 RPM. However, some cards' 40% values are not actually hitting those intended RPMs, which is causing performance differences due to cooling and PowerTune adjusting the clockspeeds accordingly.
Luckily, the issue is being worked on by AMD, and it is reportedly rectified by a driver update. The driver update ensures that the fans are actually spinning at the intended speed when set to the 40% (R9 290X) or 47% (R9 290) values in Catalyst Control Center. The new driver, which includes the fix, is version Catalyst 13.11 Beta 9.2 and is available for download now.
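To put the percentages and RPM targets on the same footing, the implied full-speed RPM falls out of trivial arithmetic (an illustration based on the numbers above, not an AMD specification):

```python
# Implied 100% fan speed from AMD's stated Quiet Mode targets.
r9_290x_max = 2200 / 0.40   # ~5500 RPM implied at 100% (R9 290X)
r9_290_max = 2650 / 0.47    # ~5638 RPM implied at 100% (R9 290)
print(round(r9_290x_max), round(r9_290_max))
```

This is why a few percentage points of fan-controller variance translate into meaningful RPM (and therefore cooling and clockspeed) differences.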
If you are running an R9 290 or R9 290X in your system, you should consider updating to the latest driver to ensure you are getting the cooling (and, as a result, gaming) performance you are supposed to be getting.
Catalyst 13.11 Beta 9.2 is available from the AMD website.
- AMD Radeon R9 290X Hawaii - The Configurable GPU?
- AMD Radeon R9 290 4GB Review - Trip to Hawaii for $399
Stay tuned to PC Perspective for more information on the Radeon R9 290 series GPU performance variance issue as it develops.
Image credit: Ryan Shrout (PC Perspective).
Subject: General Tech, Graphics Cards, Systems | November 5, 2013 - 06:33 PM | Scott Michaud
Tagged: nvidia, grid, AWS, amazon
Amazon Web Services allows customers (individuals, organizations, or companies) to rent servers of various capabilities to match their needs. Many websites are hosted in Amazon's data centers, largely because you can purchase different (or multiple) servers if you have big variations in traffic.
I, personally, sometimes use it as a game server for scheduled multiplayer events. The traditional method is spending $50-80 USD per month on a... decent... server running all day, every day, and using it a couple of hours per week. With Amazon EC2, we hosted a 200-player event (100 vs 100) by purchasing a dual-Xeon server (ironically, the fastest single-threaded instance) connected to Amazon's internet backbone by 10 Gigabit Ethernet. This server cost just under $5 per hour, all expenses considered. It was not much of a discount, but it ran like butter.
This leads me to today's story: NVIDIA GRID GPUs are now available at Amazon Web Services. Both companies hope their customers will use (or create services based on) these instances. Applications they expect to see include streamed games, CAD and media creation, and other server-side graphics processing. These Kepler-based instances, named "g2.2xlarge", will be available alongside the older Fermi-based Cluster Compute Instances ("cg1.4xlarge").
It is also noteworthy that the older Fermi-based Tesla servers are about 4x as expensive. GRID GPUs are based on GK104 (or GK107, but those are not available on Amazon EC2) and not the more compute-intensive GK110. It would probably be a step backwards for customers intending to perform GPGPU workloads for computational science or "big data" analysis. The newer GRID systems do not have 10 Gigabit Ethernet, either.
So what does it have? Well, I created an AWS instance to find out.
Its CPU is advertised as an Intel E5-2670 with 8 threads and 26 Compute Units (CUs). This is odd, as that CPU is an eight-core part with 16 threads; Amazon also usually rates it at 22 CUs per 8 threads. This made me wonder whether the CPU is split between two clients or whether Amazon disabled Hyper-Threading to push the clock rates higher (and ultimately led me to just log in to an instance and see). As it turns out, HT is still enabled and the processor registers as having 4 physical cores.
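A quick way to repeat this check from inside an instance is Python's standard library, which reports the logical processor count (physical-core counts need a third-party package such as psutil, or OS-specific tools like Device Manager or /proc/cpuinfo):

```python
import os
import platform

# Logical processors visible to the OS; with Hyper-Threading enabled,
# this is double the physical core count the instance exposes.
logical = os.cpu_count()
print(f"{platform.system()}: {logical} logical CPUs")
```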
The GPU was slightly more... complicated.
The NVIDIA control panel apparently does not work over remote desktop, and the GPU registers as a "Standard VGA Graphics Adapter". Actually, two are listed in Device Manager, although one has the yellow exclamation mark of driver woe (random integrated graphics that wasn't disabled in the BIOS?). GPU-Z was not able to pick up much from it, but it was of some help.
Keep in mind: I did this without contacting either Amazon or NVIDIA. It is entirely possible that the OS I used (Windows Server 2008 R2) was a poor choice. OTOY, as a part of this announcement, offers Amazon Machine Image (AMI)s for Linux and Windows installations integrated with their ORBX middleware.
I spot three key pieces of information: the base clock is 797 MHz, the memory size is 2990 MB, and the default drivers are Forceware 276.52 (??). The GK104 core and 797 MHz default clock are characteristic of the GRID K520, which carries two GK104 GPUs clocked at 800 MHz. However, since the K520 gives each GPU 4GB and this instance only exposes 3GB of vRAM, the product must be slightly different.
I was unable to query the device's shader count. The K520 (similar to a GeForce 680) has 1536 per GPU which sounds about right (but, again, pure speculation).
I also tested the server with TCPing to measure its networking performance versus the Cluster Compute instances. I did not do anything like Speedtest or Netalyzr. With a normal cluster instance I achieve about 20-25ms pings; with this instance I was more in the 45-50ms range. Of course, your mileage may vary and this should not be used as any official benchmark. If you are considering using the instance for your product, launch an instance and run your own tests. It is not expensive. Still, it seems to be less responsive than the Cluster Compute instances, which is odd considering its intended gaming usage.
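TCPing measures TCP connection setup time rather than ICMP echo, which is handy when a host filters ping. A minimal stand-in can be sketched with Python's standard library (the helper name and target are placeholders, assuming any reachable host with an open port):

```python
import socket
import time

def tcp_ping(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the TCP connect time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

# Example: tcp_ping("example.com", 80) gives the connection setup latency in ms
```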
Regardless, now that Amazon has picked up GRID, we might see more services (consumer or enterprise) that utilize this technology. The new GPU instances start at $0.65/hr for Linux and $0.767/hr for Windows (excluding extra charges like network bandwidth) on demand. As always with EC2, if you will use these instances a lot, you can get reduced rates by paying a fee upfront.
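Using the on-demand rates quoted above, the always-on cost adds up quickly, which is why the pay-per-hour model suits occasional workloads like the weekend game events mentioned earlier (a rough sketch; real bills add bandwidth and storage charges):

```python
linux_rate, windows_rate = 0.65, 0.767   # USD per hour, on demand

# Running 24/7 for a 30-day month vs. a few hours of weekend events.
always_on_linux = linux_rate * 24 * 30    # ~= $468/month
weekend_events = windows_rate * 4 * 4     # 4 hrs/week * 4 weeks ~= $12.27/month
print(round(always_on_linux, 2), round(weekend_events, 2))
```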
Subject: General Tech, Graphics Cards | October 28, 2013 - 07:21 PM | Scott Michaud
Tagged: R9 290X, amd
Hawaii launches and AMD sells their inventory (all of it, in many cases). The Radeon R9 290X brought Titan-approaching performance to the $550-600 USD price point. Near and dear to our website, AMD also took the opportunity to address much of the Crossfire and Eyefinity frame pacing issues.
Nitroware also took a look at the card... from a distance, because they did not receive a review unit. Their analysis was based on concepts, such as revisions to AMD's design over the life of the Graphics Core Next architecture. The discussion goes back to the ATI Rage series of fixed-function hardware and ends with a comparison between the Radeon HD 7900 "Tahiti" and the R9 290X "Hawaii".
Our international viewers (or even curious North Americans) might also like to check out the work Dominic undertook compiling regional pricing and comparing those values to currency conversion data. There is more to an overview (or review) than benchmarks.
Subject: Graphics Cards | October 28, 2013 - 06:29 AM | Ryan Shrout
Tagged: nvidia, kepler, gtx 780 ti, gtx 780, gtx 770, geforce
A lot of news coming from the NVIDIA camp today, including some price drops and price announcements.
First up, the high-powered GeForce GTX 780 is getting dropped from $649 to $499, a $150 savings that brings the GTX 780 into line with AMD's new Radeon R9 290X, launched last week.
Next, the GeForce GTX 770 2GB is going to drop from $399 to $329 to help it compete more closely with the R9 280X.
Even if you weren't excited about the R9 290X, you have to be excited by competition.
In a surprising turn of events, NVIDIA is now the company with the great GPU bundle deal as well! Starting today you'll be able to get free copies of Batman: Arkham Origins, Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag with the GeForce GTX 780 Ti, GTX 780 and GTX 770. If you step down to the GTX 760 or 660 you'll lose out on the Batman title.
SHIELD discounts are available as well: $100 off if you buy the upper-tier GPUs and $50 off if you buy the lower tier.
UPDATE: NVIDIA just released a new version of GeForce Experience that enables ShadowPlay, the ability to use Kepler GPUs to record gameplay in the background with almost no CPU/system overhead. You can see Scott's initial impressions of the software right here; it seems like it's going to be a pretty awesome feature.
Need more news? The yet-to-be-released GeForce GTX 780 Ti is also getting a price - $699 based on the email we just received. And it will be available starting November 7th!!
With all of this news, how does it change our stance on the graphics market? Quite a bit in fact. The huge price drop on the GTX 780, coupled with the 3-game bundle means that NVIDIA is likely offering the better hardware/software combo for gamers this fall. Yes, the R9 290X is likely still a step faster, but now you can get the GTX 780, three great games and spend $50 less.
The GTX 770 is now poised to make a case for itself against the R9 280X as well with its $70 drop. The R9 280X / HD 7970 GHz Edition was definitely a better option with its $100 price delta but with only $30 separating the two competing cards, and the three free games, again the advantage will likely fall to NVIDIA.
Finally, the price point of the GTX 780 Ti is interesting - if NVIDIA is smart they are pricing it based on comparable performance to the R9 290X from AMD. If that is the case, then we can guess the GTX 780 Ti will be a bit faster than the Hawaii card, while likely being quieter and using less power too. Oh, and again, the three game bundle.
NVIDIA did NOT announce a GTX TITAN price drop which might surprise some people. I think the answer as to why will be addressed with the launch of the GTX 780 Ti next month but from what I was hearing over the last couple of weeks NVIDIA can't make the cards fast enough to satisfy demand so reducing margin there just didn't make sense.
NVIDIA has taken a surprisingly aggressive stance here in the discrete GPU market. The need to address and silence critics that think the GeForce brand is being damaged by the AMD console wins is obviously potent inside the company. The good news for us though, and the gaming community as a whole, is that just means better products and better value for graphics card purchases this holiday.
NVIDIA says these price drops will be live by tomorrow. Enjoy!
Subject: Graphics Cards | October 24, 2013 - 11:38 AM | Jeremy Hellstrom
Tagged: radeon, R9 290X, kepler, hawaii, amd
If you didn't stay up to watch our live release coverage of the R9 290X after the podcast last night, you missed a chance to have your questions answered, but you will be able to watch the recording later on. The R9 290X arrived today, bringing 4K and Crossfire reviews as well as single-GPU testing on many a site, including PCPer of course. You don't just have to take our word for it; [H]ard|OCP was also putting together a review of AMD's Titan killer. Their benchmarks included some games we haven't adopted yet, such as ARMA III. Check out their results and compare them to ours; AMD really has a winner here.
"AMD is launching the Radeon R9 290X today. The R9 290X represents AMD's fastest single-GPU video card ever produced. It is priced to be less expensive than the GeForce GTX 780, but packs a punch on the level of GTX TITAN. We look at performance, the two BIOS mode options, and even some 4K gaming."
Here are some more Graphics Card articles from around the web:
- AMD's Radeon R9 290X graphics card @ The Tech Report
- AMD Radeon R9 290X 4GB Video Card Review @ Legit Reviews
- AMD Radeon R9 290X @ Hardware.info
- 4K Gaming Showdown - AMD R9 290X & R9 280X Vs Nvidia GTX Titan & GTX 780 @ eTeknix
- AMD Radeon R9 290X 4GB @ eTeknix
- AMD Radeon R9 290X @ Legit Reviews
- AMD Radeon R9 290X 4GB Review @ Hardware Canucks
- AMD Radeon R9 290X CrossFire @ techPowerUp
- AMD Radeon R9 290X @ Techspot
- AMD R9 290X @ Kitguru
- AMD Radeon R9 290X 4 GB @ techPowerUp
Subject: General Tech, Graphics Cards | October 23, 2013 - 04:30 PM | Scott Michaud
Tagged: amd, firepro
Currently AMD holds 18% market share with its FirePro line of professional GPUs, compared to NVIDIA, which owns 81% with Quadro. I assume the "other" category is the sum of S3 and Matrox which, together, command 1% of the professional market (just the professional market).
According to Jon Peddie of JPR, as reported by X-Bit Labs, AMD intends to wrestle back revenue left unguarded to NVIDIA. "After years of neglect, AMD’s workstation group, under the tutorage of Matt Skyner, has the backing and commitment of top management and AMD intends to push into the market aggressively." They have already gained share this year.
During AMD's 3rd Quarter (2013) earnings call, CEO Rory Read outlined the importance of the professional graphics market.
We also continue to make steady progress in another of growth businesses in the third quarter as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe that we can continue to gain share in this lucrative part of the GPU market based on our product portfolio, design wins in flight, and enhanced channel programs.
On the same conference call (actually before and after the professional graphics sound bite), Rory noted their renewed push into the server and embedded SoC markets with 64-bit x86 and 64-bit ARM processors. They will be the only company manufacturing both x86 and ARM solutions which should be an interesting proposition for an enterprise in need of both. Why deal with two vendors?
Either way, AMD will probably be refocusing on the professional and enterprise markets for the near future. For the rest of us, this hopefully means that AMD has a stable (and confident) roadmap in the processor and gaming markets. If that is the case, a profitable Q3 is definitely a good start.
Subject: Graphics Cards | October 23, 2013 - 03:20 PM | Jeremy Hellstrom
Tagged: amd, overclocking, asus, ASUS R9 280X DirectCU II TOP, r9 280x
Having already seen what the ASUS R9 280X DirectCU II TOP can do at default speeds, the obvious next step, once they had time to fully explore the options, was for [H]ard|OCP to see just how far this GPU can overclock. To make a long story short, they went from a default clock of 1070MHz up to 1230MHz and pushed the RAM from 6.4GHz to 6.6GHz, though the voltage needed to be bumped from 1.2v to 1.3v. The actual frequencies are nowhere near as important as the effect on gameplay, though; to see those results you will have to click through to the full article.
"We take the new ASUS R9 280X DirectCU II TOP video card and find out how high it will overclock with GPU Tweak and voltage modification. We will compare performance to an overclocked GeForce GTX 770 and find out which card comes out on top when pushed to its overclocking limits."
Here are some more Graphics Card articles from around the web:
- Gigabyte R9 280X OC 3 GB @ techPowerUp
- HIS R9 280X iPower IceQ X2 Turbo Boost Clock 3GB Video Card Review @ Madshrimps
- AMD Radeon R9 290X Versus NVIDIA GeForce GTX 780 Benchmarks @ Legit Reviews
- XFX R9 280X Black OC Edition @ Kitguru
- AMD Radeon R9 280X Video Card Review w/ ASUS, XFX and MSI @ Legit Reviews
- HIS R9 280X iPower IceQ X² Turbo and R9 270X IceQ X² Turbo @ Legion Hardware
- Sapphire Radeon R9 270X Vapor-X @ Benchmark Reviews
- MSI R9 270X Hawk Review @ OCC
- Asus Matrix R9 280X Platinum @ LanOC Reviews
- ASUS R9 280X DirectCU II TOP 3 GB @ techPowerUp
- HIS Radeon R9 280X IceQ X2 @ Benchmark Reviews
- Sapphire Toxic Edition R9 270X Video Card Review @HiTech Legion
- MSI Radeon R9 270X Gaming Video Card Review @ Ninjalane
- Sapphire Toxic R9 270X @ LanOC Review
- AMD Radeon 7000 and Radeon R200 Series Mixed CrossFire Testing @ Legit Reviews
- AMD Radeon R9 270X Graphics Card Review @ Techgage
- MSI R9 270X HAWK 2 GB @ techPowerUp
- HIS Radeon R9 270X IceQ X2 Turbo Boost @ Benchmark Reviews
- Asus R9 270X DirectCU II Top @ LanOC Reviews
- Diamond Multimedia Radeon 7870 7870PE52GV Review @ HCW
- AMD Radeon R9 270X On Linux @ Phoronix
- ASUS GTX760 DirectCU Mini OC @ Hardware.info
- Gigabyte GeForce GTX 770 OC / GTX 780 OC @ Hardware.info
- MSI N660 Gaming Review: affordable and silent GeForce GTX 660 @ Hardware.info
- Asus GTX 670 Direct CU Mini @ LanOC Reviews
- NVIDIA GeForce GTX 650 On Linux @ Phoronix
Subject: General Tech, Graphics Cards | October 22, 2013 - 09:21 PM | Scott Michaud
Tagged: nvidia, graphics drivers, geforce
Mid-June kicked up a storm of poop across the internet when IGN broke the story of AMD optimizations for Frostbite 3. It was reported that NVIDIA would not receive sample code for those games until after they launched. The article was later updated with a statement from AMD: "... the AMD Gaming Evolved program undertakes no efforts to prevent our competition from optimizing for games before their release."
Now, I assume, the confusion was caused by the then-unannounced Mantle.
And, as it turns out, NVIDIA did receive the code for Battlefield 4 prior to launch. Monday, the company launched its 331.58 WHQL-certified drivers, which are optimized for Batman: Arkham Origins and Battlefield 4. According to the release notes, you should even be able to use SLI out of the gate. If, on the other hand, you are a Civilization V player: HBAO+ should enhance your shadowing.
They also added a DX11 SLi profile for Watch Dogs... awkwarrrrrd.
Check out the blog at GeForce.com for a bit more information, read the release notes, or just head over to the drivers page. If you have GeForce Experience installed, it probably already asked you to update.
Subject: Graphics Cards, Displays | October 20, 2013 - 11:50 AM | Ryan Shrout
Tagged: video, tom petersen, nvidia, livestream, live, g-sync
UPDATE: If you missed our live stream today that covered NVIDIA G-Sync technology, you can watch the replay embedded below. NVIDIA's Tom Petersen stops by to talk about G-Sync in both high level and granular detail while showing off some demonstrations of why G-Sync is so important. Enjoy!!
Last week NVIDIA hosted press and developers in Montreal to discuss a couple of new technologies, the most impressive of which was NVIDIA G-Sync, a new monitor solution that looks to solve the eternal debate of smoothness versus latency. If you haven't read about G-Sync and how impressive it was when first tested on Friday, you should check out my initial write-up, NVIDIA G-Sync: Death of the Refresh Rate, which not only covers the technology but dives into the reason the shift was necessary in the first place.
G-Sync essentially functions by altering and controlling the vBlank signal sent to the monitor. In a normal configuration, vBlank is the combination of the vertical front and back porch and the necessary sync time. That timing is set at a fixed stepping that determines the effective refresh rate of the monitor: 60 Hz, 120 Hz, etc. What NVIDIA will now do in the driver and firmware is lengthen or shorten the vBlank signal as desired, sending it when one of two criteria is met.
- A new frame has completed rendering and has been copied to the front buffer. Sending vBlank at this time will tell the screen to grab data from the card and display it immediately.
- A substantial amount of time has passed and the currently displayed image needs to be refreshed to avoid brightness variation.
In current display timing setups, the submission of the vBlank signal has been completely independent of the rendering pipeline. The result was varying frame latency and either horizontal tearing or fixed refresh frame rates. With NVIDIA G-Sync creating an intelligent connection between rendering and frame updating, the display of PC games is fundamentally changed.
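The two criteria above boil down to a simple decision the driver and panel firmware make on every cycle. This is a toy sketch of that logic for illustration only; the names and the 1/30-second hold time are my assumptions, not NVIDIA's actual implementation:

```python
# Hypothetical sketch of G-Sync's two vBlank triggers.
# MAX_HOLD_SECONDS models criterion 2: refresh the panel anyway after a
# while to avoid brightness variation on the currently displayed image.
MAX_HOLD_SECONDS = 1 / 30

def should_send_vblank(frame_ready: bool, seconds_since_last_refresh: float) -> bool:
    """Return True when the driver should emit a vBlank interval."""
    # Criterion 1: a finished frame is waiting in the front buffer.
    if frame_ready:
        return True
    # Criterion 2: too much time has passed; re-scan the current image.
    return seconds_since_last_refresh >= MAX_HOLD_SECONDS
```

The key point is that the first branch fires at whatever moment the frame finishes, which is what decouples scan-out from a fixed refresh grid.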
Every person who saw the technology, including other media members and even developers like John Carmack, Johan Andersson and Tim Sweeney, came away knowing that this was the future of PC gaming. (If you didn't see the panel that featured those three developers on stage, you are missing out.)
But it is definitely a complicated technology and I have already seen a lot of confusion about it in our comment threads on PC Perspective. To help the community get a better grasp and to offer them an opportunity to ask some questions, NVIDIA's Tom Petersen is stopping by our offices on Monday afternoon where he will run through some demonstrations and take questions from the live streaming audience.
Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT to discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming. You'll find it all on our PC Perspective Live! page on Monday, but you can sign up for our "live stream mailing list" as well to get notified in advance!
NVIDIA G-Sync Live Stream
11am PT / 2pm ET - October 21st
We also want your questions!! The easiest way to get them answered is to leave them for us here in the comments of this post. That will give us time to filter through the questions and get the answers you need from Tom. We'll take questions via the live chat and via Twitter (follow me @ryanshrout) during the event, but often there is a lot of noise to deal with.
So be sure to join us on Monday afternoon!
Subject: Graphics Cards | October 18, 2013 - 04:55 PM | Ryan Shrout
Tagged: video, tim sweeney, nvidia, Mantle, john carmack, johan andersson, g-sync, amd
If you weren't on our live stream from the NVIDIA "The Way It's Meant to be Played" tech day this afternoon, you missed a hell of an event. After the announcement of NVIDIA G-Sync variable refresh rate monitor technology, NVIDIA's Tony Tomasi brought one of the most intriguing panels of developers on stage to talk.
John Carmack, Tim Sweeney and Johan Andersson talked for over an hour, taking questions from the audience and even getting into debates amongst themselves in some instances. Topics included NVIDIA G-Sync of course, AMD's Mantle low-level API, the hurdles facing PC gaming and the direction each luminary is currently headed for future development.
If you are a PC enthusiast or gamer you are definitely going to want to listen and watch the video below!
Subject: General Tech, Graphics Cards | October 18, 2013 - 10:21 AM | Scott Michaud
Tagged: nvidia, GeForce GTX 780 Ti
So the really interesting news today was G-Sync but that did not stop NVIDIA from sneaking in a new high-end graphics card. The GeForce GTX 780 Ti follows the company's old method of releasing successful products:
- Attach a seemingly arbitrary suffix to a number
In all seriousness, we know basically nothing about this card. It is entirely possible that its architecture might not even be based on GK110. We do know it will be faster than a GeForce GTX 780, but we have no frame of reference with regard to the GeForce Titan. The two cards were already so close in performance that Ryan struggled to validate the 780's existence. Imagine how difficult it would be for NVIDIA to wedge yet another product in that gap.
And if it does outperform the Titan, what is its purpose? Sure, Titan is a GPGPU powerhouse if you want double-precision performance without purchasing a Tesla or a Quadro, but that is not really relevant for gamers yet.
We shall see, soon, when we get review samples in. You, on the other hand, will likely see more when the card launches mid-November. No word on pricing.
Subject: Graphics Cards | October 18, 2013 - 07:52 AM | Ryan Shrout
Tagged: variable refresh rate, refresh rate, nvidia, gsync, geforce, g-sync
UPDATE: I have posted a more in-depth analysis of the new NVIDIA G-Sync technology: NVIDIA G-Sync: Death of the Refresh Rate. Thanks for reading!!
UPDATE 2: ASUS has announced the G-Sync enabled version of the VG248QE will be priced at $399.
During a gaming event being held in Montreal, NVIDIA unveiled a new technology for GeForce gamers that the company is hoping will revolutionize the PC and displays. Called NVIDIA G-Sync, this new feature will combine changes to the graphics driver as well as changes to the monitor to alter the way refresh rates and Vsync have worked for decades.
With standard LCD monitors, gamers are forced to choose between a tear-free experience by enabling Vsync or playing a game with substantial visual anomalies in order to get the best and most efficient frame rates. G-Sync changes that by allowing a monitor to display refresh rates other than 60 Hz, 120 Hz or 144 Hz, etc. without the horizontal tearing normally associated with turning off Vsync. Essentially, G-Sync allows a properly equipped monitor to run at a variable refresh rate, which will improve the experience of gaming in interesting ways.
This technology will be available soon on Kepler-based GeForce graphics cards but will require a monitor with support for G-Sync; not just any display will work. The first launch monitor is a variation on the very popular 144 Hz ASUS VG248QE 1920x1080 display and, as we saw with 3D Vision, supporting G-Sync will require licensing and hardware changes. In fact, NVIDIA claims that the new logic inside the panel's controller is NVIDIA's own design - so you can obviously expect this to only function with NVIDIA GPUs.
DisplayPort is the only input option currently supported.
It turns out NVIDIA will actually be offering retrofitting kits for current users of the VG248QE at some yet-to-be-disclosed cost. The first retail units of G-Sync will ship as a monitor plus retrofit kit, as production was just a bit behind.
Using a monitor with a variable refresh rate allows the game to display 55 FPS on the panel at 55 Hz without any horizontal tearing. It can also display 133 FPS at 133 Hz without tearing. Anything below the 144 Hz maximum refresh rate of this monitor will be running at full speed without the tearing associated with the lack of vertical sync.
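The frame-pacing benefit is easy to see with a little arithmetic. A 55 FPS pace finishes a frame every ~18.2 ms; on a fixed 60 Hz panel with Vsync each frame must wait for the next ~16.7 ms scan-out boundary, while a variable-refresh panel scans out the moment the frame is done. This toy model is my own illustration, not NVIDIA code:

```python
import math

FIXED_REFRESH_HZ = 60  # a conventional fixed-refresh panel

def fixed_vsync_display_ms(render_done_ms: float) -> float:
    """With Vsync on a fixed panel, a finished frame waits for the
    next scan-out boundary (multiples of ~16.7 ms at 60 Hz)."""
    interval = 1000 / FIXED_REFRESH_HZ
    return math.ceil(render_done_ms / interval) * interval

def gsync_display_ms(render_done_ms: float) -> float:
    """With a variable refresh rate, the panel scans out as soon as
    the frame is ready (up to the monitor's maximum refresh)."""
    return render_done_ms
```

A frame finished at 18.2 ms slips all the way to 33.3 ms under fixed Vsync but displays at 18.2 ms with a variable refresh, which is exactly the stutter G-Sync removes.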
The technology that NVIDIA is showing here is impressive when seen in person, and that is really the only way to understand the difference. High-speed cameras and captures will help but, much like 3D Vision, this is a feature that needs to be seen to be appreciated. How users will react to that roadblock remains to be seen.
Features like G-Sync show the gaming world that, without the restrictions of consoles, there are revolutionary steps that can be taken to maintain the PC gaming advantage well into the future. 4K displays were a recent example and now NVIDIA G-Sync adds to the list.
Be sure to stop back at PC Perspective on Monday, October 21st at 2pm ET / 11am PT as we will be joined in-studio by NVIDIA's Tom Petersen to discuss G-Sync, how it was developed and the various ramifications the technology will have in PC gaming. You'll find it all on our PC Perspective Live! page on Monday, but you can sign up for our "live stream mailing list" as well to get notified in advance!
NVIDIA G-Sync Live Stream
11am PT / 2pm ET - October 21st
Subject: General Tech, Graphics Cards | October 17, 2013 - 03:56 PM | Scott Michaud
Tagged: nvidia, GTX 760 OEM
A pair of new graphics cards have been announced during the first day of "The Way It's Meant To Be Played Montreal 2013", both of which are intended for system builders to integrate into their products. Both cards fall under the GeForce GTX 760 branding with the names "GeForce GTX 760 Ti (OEM)" and "GeForce GTX 760 192-bit (OEM)".
I will place the main specifications of both cards side by side with the default GeForce GTX 760 for a little bit of reference. Be sure to check out our benchmarks of that card.
| | GTX 760 | GTX 760 192-bit (OEM) | GTX 760 Ti (OEM) |
|---|---|---|---|
| Base Clock | 980 MHz | 823 MHz | 915 MHz |
| Boost Clock | 1033 MHz | 888 MHz | 980 MHz |
| Memory Data Rate | 6.0 GT/s | 5.8 GT/s | 6.0 GT/s |
| vRAM (capacity) | 2 GB | 1.5 or 3 GB | 2 GB |
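The per-pin data rates above translate into peak memory bandwidth once bus width is factored in. A quick sketch of that arithmetic; the 256-bit bus for the GTX 760 and the Ti OEM is my assumption based on the retail card's spec, while the third card states its 192-bit bus in its name:

```python
def memory_bandwidth_gbs(data_rate_gts: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return data_rate_gts * bus_width_bits / 8

# Assumed bus widths (256-bit figures are inferred from the retail GTX 760).
cards = {
    "GTX 760": (6.0, 256),
    "GTX 760 192-bit (OEM)": (5.8, 192),
    "GTX 760 Ti (OEM)": (6.0, 256),
}
for name, (rate, bus) in cards.items():
    print(f"{name}: {memory_bandwidth_gbs(rate, bus):.1f} GB/s")
```

Under those assumptions, the 192-bit card gives up roughly a quarter of the retail card's bandwidth, which lines up with its lower clocks and positioning.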
The GeForce GTX 760 is no slouch and the GTX 760 Ti, especially, seems to be pretty close in performance to the retail product. I could see this being a respectable addition to a Steam Machine. I still cannot understand why, like the gaming bundle, these cards were not announced during the keynote speech.
Or, for that matter, why no-one seems to be reporting on them.
Subject: General Tech, Graphics Cards | October 17, 2013 - 02:36 PM | Scott Michaud
Tagged: shield, nvidia, bundle
This morning's live stream from NVIDIA was full of technologies focused on the PC gaming ecosystem, including mobile (but still PC-like) platforms. Today they also announced a holiday gaming bundle for their GeForce cards, although that missed the stream for some reason.
If you purchase a GeForce GTX 770, 780, or Titan from a participating retailer (including online), you will receive Splinter Cell: Blacklist, Batman: Arkham Origins, and Assassin's Creed IV: Black Flag along with a $100-off coupon for an NVIDIA SHIELD.
If, on the other hand, you purchase a GTX 760, 680, 670, 660 Ti, or 660 from a participating retailer (again, including online), you will receive Splinter Cell: Blacklist and Assassin's Creed IV: Black Flag along with a $50-off coupon for the NVIDIA SHIELD.
The current price at Newegg for an NVIDIA SHIELD is $299 USD; a $100 discount pushes that to $199. The $200 mark is a psychological barrier for videogame systems, below which customers tend to jump. Reaching a sub-$200 price could be a big deal even for customers not on the fence, especially when you consider PC streaming. Could be.
Assume you were already planning on upgrading your GPU. Would you be interested in adding in an NVIDIA SHIELD for an extra $199?
Get notified when we go live!