Manufacturer: ASUS

Experience with Silent Design

In the lulls between major GPU releases, companies like ASUS have the time to really dig in and engineer truly unique products. With the gap between new GPUs from NVIDIA and AMD growing longer, these custom designs have continued evolving to offer better features and experiences than any graphics card before them. The ASUS Strix GTX 780 is exactly one of those solutions – taking a GTX 780 GPU that was originally released in May of last year and reworking it into a new design that offers better cooling, better power delivery and lower noise levels.

With the Strix GTX 780, ASUS intended to create a card that is perfect for high-end PC gamers without crossing into the realm of bank-breaking prices. They chose NVIDIA's GeForce GTX 780 GPU, a significant price drop from the GTX 780 Ti with only a modest performance drop. They doubled the reference memory capacity from 3GB to 6GB of GDDR5, to assuage any buyer's concern that 3GB wasn't enough for multi-screen Surround gaming or 4K gaming. And they changed the cooling solution to offer a near-silent operation mode in "low impact" gaming titles.

The ASUS Strix GTX 780 Graphics Card

The ASUS Strix GTX 780 card is a pretty large beast, both in physical size and in performance. The cooler is a slightly modified version of the very popular DirectCU II thermal design used in many of ASUS' custom-built graphics cards. It has a heat dissipation area more than twice that of the reference NVIDIA cooler and uses larger fans that can spin slower (and quieter) while still improving cooling capacity.


Out of the box, the ASUS Strix GTX 780 will run at an 889 MHz base clock and 941 MHz Boost clock, a fairly modest increase over the 863/900 MHz rates of the reference card. Obviously, with much better cooling and a lot of work done on the PCB of this custom design, users will have plenty of headroom to overclock on their own, but I continue to implore companies like ASUS and MSI to up the ante out of the box! One area where ASUS does impress is the memory – the Strix card features a full 6GB of GDDR5 running at 6.0 GHz, twice the capacity of the reference GTX 780 (and even GTX 780 Ti) cards. If you had any concerns about Surround or 4K gaming, know that memory capacity will not be a problem. (Though raw compute power may still be.)

Continue reading our review of the ASUS Strix GTX 780 6GB Graphics Card!!

Rumor: NVIDIA GeForce GTX 880 Actually Coming in September?

Subject: General Tech, Graphics Cards | August 3, 2014 - 01:59 PM |
Tagged: nvidia, maxwell, gtx 880

Just recently, we posted a story claiming that NVIDIA was preparing to launch high-end Maxwell in the October/November time frame. Apparently, that was generous. The graphics company is now said to be announcing its GeForce GTX 880 in mid-September, with availability coming later in the month. It is expected to be based on the GM204 architecture (which previous rumors claim is 28nm).


It is expected that the GeForce GTX 880 will be available with 4GB of video memory, with an 8GB version possible at some point. As someone who runs multiple (five) monitors, I can tell you that 2GB is not enough for my use case. Windows 7 says the same: it kicks me out of applications to tell me that it does not have enough video memory. That alone would be enough reason for me to get more GPU memory.

We still do not know how many CUDA cores will be present in the GM204 chip, or whether the GeForce GTX 880 will have all of them enabled (though I would be surprised if it didn't). Without any way to derive its theoretical performance, we cannot compare it against the GTX 780 or 780 Ti. It could be significantly faster, it could be marginally faster, or it could be somewhere in between.

But we will probably find out within two months.

Source: Videocardz

AMD Releases FreeSync Information as a FAQ

Subject: General Tech, Graphics Cards, Displays | July 29, 2014 - 06:02 PM |
Tagged: vesa, nvidia, g-sync, freesync, DisplayPort, amd

Dynamic refresh rates have two main purposes: saving power by only forcing the monitor to refresh when a new frame is available, and increasing animation smoothness by synchronizing refreshes to draw rates (rather than "catching the next bus", which leaves every 16.67 ms on a 60 Hz monitor). Mobile devices prefer the former, while PC gamers are interested in the latter.


NVIDIA was first to make this public with G-Sync. AMD responded with FreeSync, starting with a proposal that was later ratified by VESA as DisplayPort Adaptive-Sync. AMD, then, took up "Project FreeSync" as an AMD "hardware/software solution" to make use of DisplayPort Adaptive-Sync in a way that benefits PC gamers.

Today's news is that AMD has just released an FAQ which explains the standard much more thoroughly than they have in the past. For instance, it clarifies the distinction between DisplayPort Adaptive-Sync and Project FreeSync. Prior to the FAQ, I thought that FreeSync became DisplayPort Adaptive-Sync, and that was that. Now, it is sounding a bit more proprietary, just built upon an open, VESA standard.

If interested, check out the FAQ at AMD's website.

Source: AMD

NVIDIA 340.52 Drivers Are Now Available

Subject: General Tech, Graphics Cards | July 29, 2014 - 05:27 PM |
Tagged: nvidia, geforce, graphics drivers, shield tablet, shield

Alongside the NVIDIA SHIELD Tablet launch, the company has released its GeForce 340.52 drivers. This version allows compatible devices to use GameStream and is also optimized for Metro: Redux and Final Fantasy XIV (China).


The driver supports GeForce 8-series graphics cards and later. As a reminder, for GPUs that are not based on the Fermi architecture (or later), 340.xx will be your last driver version. NVIDIA does intend to provide extended support for 340.xx (and earlier) drivers until April 1st, 2016. But when Fermi, Kepler, and Maxwell move on to 343.xx, Tesla and earlier architectures will not. That said, most of the content of this driver is aimed at Kepler and later; either way, the driver itself is available for those pre-Fermi cards.

I should also mention that a user on Anandtech's forums noted the removal of Miracast from NVIDIA's documentation. NVIDIA has yet to comment, although the discovery is still very recent at this point.

Source: NVIDIA

This high-end multi-GPU 4K showdown includes overclocking

Subject: Graphics Cards | July 29, 2014 - 11:27 AM |
Tagged: asus, gtx 780, R9 290X DC2 OC, sli, crossfire, STRIX GTX 780 OC 6GB, R9 290X

We have seen [H]ard|OCP test ASUS' STRIX GTX 780 OC 6GB and R9 290X DirectCU II before, but this time they have been overclocked and paired up for a 4K showdown.  For a change, Newegg gives the price advantage to AMD: $589 versus $599 at the time of writing (with odd blips in prices on Amazon).  The GTX 780 has been set to 1.2 GHz core and 6.6 GHz memory while the 290X runs at 1.1 GHz and 5.6 GHz; keep in mind that dual-GPU setups may not reach the same frequencies as single cards.  Read on for their conclusions and decide if you prefer to brag about a higher overclock or to have better overall performance.


"We take the ASUS STRIX GTX 780 OC 6GB video card and run two in SLI and overclock both of these at 4K resolutions to find the ultimate gameplay performance with 6GB of VRAM. We will also compare these to two overclocked ASUS Radeon R9 290X DirectCU II CrossFire video cards for the ultimate VRAM performance showdown."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Raptr Update Available (for Both NVIDIA and AMD GPUs)

Subject: General Tech, Graphics Cards | July 28, 2014 - 06:00 AM |
Tagged: raptr, pc game streaming

Raptr seems to be gaining in popularity. Total playtime recorded by the online service was up 15% month-over-month, from May to June. The software is made up of a few features that are designed to make the lives of PC gamers easier and better, ranging from optimizing game settings to recording gameplay. If you have used a recent version of GeForce Experience, then you probably have a good idea of what Raptr does.


Today, Raptr announced a new, major update. The version's headlining feature is hardware-accelerated video recording and streaming for both AMD and NVIDIA GPUs. Raptr claims that its method results in basically no performance loss, regardless of which GPU vendor is used. Up to 20 minutes of previous gameplay can be captured after the fact, and video of unlimited length can be streamed on demand.


The other major feature of this version is enhanced sharing of said videos. They can be uploaded to Raptr.com and shared to Facebook and Twitter, complete with hashtags (#BecauseYolo?).

If interested, check out Raptr at their website.

Source: Raptr

Rumor: NVIDIA GeForce 800-Series Is 28nm in Oct/Nov.

Subject: General Tech, Graphics Cards | July 24, 2014 - 04:32 PM |
Tagged: nvidia, gtx 880

Many of our readers were hoping to drop one (or more) Maxwell-based GPUs into their systems for use with their 4K monitors, 3D, or whatever else they need the performance for. That has not happened, nor do we even know for sure when it will. The latest rumors claim that the NVIDIA GeForce GTX 870 and 880 desktop GPUs will arrive in October or November. More interesting, they are expected to be based on GM204, built on the current 28nm process.

The recent GPU roadmap, as of GTC 2014

NVIDIA has not commented on the delay, at least that I know of, but we can tell something is up from their significantly different roadmap. We can also make a fairly confident guess by paying attention to the industry as a whole. TSMC has been struggling to keep up with 28nm production, with wait times growing by six weeks in May according to Digitimes, and whatever 20nm capacity it had was reportedly gobbled up by Apple until just recently. At around the same time, NVIDIA inserted Pascal between Maxwell and Volta, with 3D memory, NVLink, and some form of unified memory architecture (which I don't believe they have elaborated on yet).

The previous roadmap. (Source: Anandtech)

And, if this rumor is true, Maxwell was pushed from 20nm to a wholly 28nm architecture. It, not Pascal, was originally supposed to be the host of unified virtual memory. If I had to make a safe guess, I would assume that NVIDIA needed to redesign the chip for 28nm and, especially with the extra delays at TSMC, cannot get the volume it needs until autumn.

Lastly, going by the launch of the 750 Ti, Maxwell will basically be a cleaned-up Kepler architecture. Its compute units were shifted into power-of-two partitions, reducing the die area spent on scheduling logic (and so forth). NVIDIA has been known to stash a few features into each generation, sometimes revealing them well after retail availability, so that is not to say that Maxwell will merely be "a more efficient Kepler".

I expect its fundamental architecture should be pretty close, though.

Source: KitGuru

NVIDIA Preparing GeForce 800M (Laptop) Maxwell GPUs?

Subject: General Tech, Graphics Cards, Mobile | July 19, 2014 - 12:29 AM |
Tagged: nvidia, geforce, maxwell, mobile gpu, mobile graphics

Apparently, some hardware sites got their hands on an NVIDIA driver listing with several new product codes. They claim thirteen N16(P/E) chips are listed (although I count twelve (??)). While I do not have much knowledge of NVIDIA's internal product structure, the GeForce GTX 880M, based on Kepler, is apparently listed as N15E.

NVIDIA's earlier Maxwell roadmap; things have changed a lot since that presentation.

These new parts will allegedly be based on the second-generation Maxwell architecture. Also, the source believes that these new GPUs will be in the GeForce GTX 800-series, possibly with the MX suffix that was last seen in October 2012 with the GeForce GTX 680MX. Of course, being a long-time PC gamer, the MX suffix does not exactly ring positive in my memory. It used to be the Ti line that you wanted, and the MX line that you could afford. But who am I kidding? None of that is relevant these days. Get off my lawn.

Source: Videocardz

Intel AVX-512 Expanded

Subject: General Tech, Graphics Cards, Processors | July 19, 2014 - 12:05 AM |
Tagged: Xeon Phi, xeon, Intel, avx-512, avx

It is difficult to know what is actually new information in this Intel blog post, but it is interesting nonetheless. Its topic is the AVX-512 extension to x86, designed for Xeon and Xeon Phi processors and co-processors. Basically, last year, Intel announced "Foundation", the minimum support level for AVX-512, as well as Conflict Detection, Exponential and Reciprocal, and Prefetch, which are optional. That earlier blog post was very much focused on Xeon Phi, but it acknowledged that the instructions would make their way to standard, CPU-like Xeons at around the same time.


This year's blog post brings in a bit more information, especially for common Xeons. While all AVX-512-supporting processors (and co-processors) will support "AVX-512 Foundation", the instruction set extensions are a bit more scattered.

 
Instruction Subset                        Xeon Processors   Xeon Phi Processors   Xeon Phi Coprocessors (AIBs)
Foundation Instructions                   Yes               Yes                   Yes
Conflict Detection Instructions           Yes               Yes                   Yes
Exponential and Reciprocal Instructions   No                Yes                   Yes
Prefetch Instructions                     No                Yes                   Yes
Byte and Word Instructions                Yes               No                    No
Doubleword and Quadword Instructions      Yes               No                    No
Vector Length Extensions                  Yes               No                    No

Source: Intel AVX-512 Blog Post (and my understanding thereof).
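Those subsets also surface to developers as separate compile targets. As a rough illustration only (a minimal C sketch, assuming GCC or Clang and their predefined feature macros such as __AVX512F__, enabled with flags like -mavx512f), a program can report which subsets it was built for:

    #include <stdio.h>

    /* Print which AVX-512 subsets the compiler was told to target.
     * GCC and Clang define one macro per subset when the matching
     * -mavx512* flag (or an -march= value that implies it) is used. */
    int main(void)
    {
    #ifdef __AVX512F__
        puts("Foundation");
    #endif
    #ifdef __AVX512CD__
        puts("Conflict Detection");
    #endif
    #ifdef __AVX512ER__
        puts("Exponential and Reciprocal");
    #endif
    #ifdef __AVX512PF__
        puts("Prefetch");
    #endif
    #ifdef __AVX512BW__
        puts("Byte and Word");
    #endif
    #ifdef __AVX512DQ__
        puts("Doubleword and Quadword");
    #endif
    #ifdef __AVX512VL__
        puts("Vector Length Extensions");
    #endif
        return 0;
    }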

So why do we care? Simply put: speed. Vectorization, the purpose of AVX-512, has benefits similar to multiple cores. It is not as flexible as having multiple, unique, independent cores, but it is easier to implement (and works just fine alongside multiple cores, too). For example: imagine that you have to multiply two colors together. The direct way to do it is to multiply red with red, green with green, blue with blue, and alpha with alpha. AMD's 3DNow! and, later, Intel's SSE included instructions to multiply two four-component vectors together. This reduces four similar instructions into a single operation between wider registers.

Smart compilers (and programmers, although hand-written vectorization is becoming less common because compilers are pretty good, especially when they are not fighting developers) are able to pack seemingly unrelated data together, too, if it undergoes similar instructions. AVX-512 allows sixteen 32-bit pieces of data to be worked on at the same time. If each pixel only has four single-precision RGBA values, but you are looping through 2 million pixels, do four pixels at a time (16 components).
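To make that concrete, here is a minimal C sketch using the AVX-512 Foundation intrinsics from immintrin.h; the function and buffer names are mine, and it assumes the data is packed RGBA floats with a pixel count that is a multiple of four:

    #include <immintrin.h>
    #include <stddef.h>

    /* Multiply two images of packed RGBA floats, four pixels (16 floats)
     * per iteration, using one AVX-512 multiply per group. */
    void multiply_colors(const float *a, const float *b, float *out, size_t pixels)
    {
        for (size_t i = 0; i < pixels * 4; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);               /* 4 RGBA pixels from image A */
            __m512 vb = _mm512_loadu_ps(b + i);               /* 4 RGBA pixels from image B */
            _mm512_storeu_ps(out + i, _mm512_mul_ps(va, vb)); /* 16 multiplies at once */
        }
    }

A scalar version of that loop would issue one multiply per color channel; with the right flags, a compiler may well auto-vectorize it into roughly the same thing, which is exactly the point about smart compilers above.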

For the record, I basically just described "SIMD" (single instruction, multiple data) as a whole.

This theory is part of how GPUs became so powerful at certain tasks. They are capable of pushing a lot of data because they can exploit similarities. If your task is full of similar problems, they can just churn through tonnes of data. CPUs have been doing these tricks, too, just without compromising what they do well.

Source: Intel

Google I/O 2014: Android Extension Pack Announced

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | July 7, 2014 - 01:06 AM |
Tagged: tegra k1, OpenGL ES, opengl, Khronos, google io, google, android extension pack, Android

Sure, this is a little late. Honestly, when I first heard the announcement, I did not see much news in it. The slide from the keynote showed four points: Tesselation, Geometry Shaders, Computer [sic] Shaders, and ASTC Texture Compression. At the time, I thought tessellation and geometry shaders were part of the OpenGL ES 3.1 spec, like compute shaders. This led to my immediate reaction: "Oh cool. They implemented OpenGL ES 3.1. Nice. Not worth a news post."


Apparently, they were not part of the ES 3.1 spec (although compute shaders are). My mistake. It turns out that Google is cooking up its own vendor-specific extensions. This is quite interesting, as it adds functionality to the API without the developer needing to target a specific GPU vendor (INTEL, NV, ATI, AMD), wait for approval from the Architecture Review Board (ARB), or use multi-vendor extensions (EXT). In other words, it sounds like developers can target Google's vendor extension without knowing the actual hardware.
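In practice, an app would detect the pack at runtime the same way it checks any other extension. A minimal C sketch against an ES 3.1 context might look like the following; the helper name is mine, and the string being tested is the Android Extension Pack identifier (GL_ANDROID_extension_pack_es31a):

    #include <GLES3/gl31.h>
    #include <string.h>

    /* Returns 1 if the current OpenGL ES 3.1 context advertises the
     * Android Extension Pack, 0 otherwise. */
    int has_android_extension_pack(void)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
            if (ext && strcmp(ext, "GL_ANDROID_extension_pack_es31a") == 0)
                return 1; /* tessellation, geometry shaders, ASTC, etc. are available */
        }
        return 0;
    }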

Hiding the GPU vendor from the developer is not the only reason for Google to host its own vendor extension. The added features are mostly from full OpenGL. This makes sense, because the pack was announced alongside NVIDIA and its Kepler-based Tegra K1 SoC. Full OpenGL compatibility was NVIDIA's selling point for the K1, due to its heritage as a desktop GPU. But, instead of requiring apps to be programmed with full OpenGL in mind, Google's extension pack brings those features to OpenGL ES 3.1. If developers want to dip their toes into OpenGL, they can add a few Android Extension Pack features to their existing ES engines.

Epic Games' Unreal Engine 4 "Rivalry" Demo from Google I/O 2014.

The last feature, ASTC Texture Compression, was an interesting one. Apparently the Khronos Group, owners of OpenGL, were looking for a new generation of texture compression technologies. NVIDIA suggested their ZIL technology. ARM and AMD also proposed "Adaptive Scalable Texture Compression". ARM and AMD won, although the Khronos Group stated that the collaboration between ARM and NVIDIA made both proposals better than either in isolation.

The Android Extension Pack is set to launch with "Android L". The next release of Android is not currently associated with a snack food. If I were their marketer, I would block out the next three versions as 5.x and name them (L)emon, then (M)eringue, and finally (P)ie.

Would I do anything with the two skipped letters before pie? (N)(O).