Subject: Graphics Cards | August 24, 2015 - 02:37 PM | Sebastian Peak
Tagged: rumor, report, Radeon R9 Nano, R9 290X, leak, hot chips, hbm, amd
A report from German-language tech site Golem contains what appears to be a slide leaked from AMD's GPU presentation at Hot Chips in Cupertino, and the results paint a very efficient picture of the upcoming Radeon R9 Nano GPU.
The spelling of "performance" doesn't mean this is fake, does it?
While the R9 Nano only managed 3 FPS better than the Radeon R9 290X in this particular benchmark, it achieved that result with 1.9x the performance per watt of the baseline 290X. The article speculates on the possible clock speed of the R9 Nano based on the relative performance, estimating 850 MHz (which is of course up for debate, as no official specs are known).
The most compelling part of the result has to be the ability of the Nano to match or exceed the R9 290X in performance, while only requiring a single 8-pin PCIe connector and needing an average of only 175 watts. With a mini-ITX friendly 15 cm board (5.9 inches) this could be one of the more compelling options for a mini gaming rig going forward.
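A quick back-of-the-envelope check shows how the slide's numbers hang together. The 290X's power draw in this test is our assumption (roughly 290 W is typical for that card under gaming load); only the 1.9x ratio and the 175 W figure come from the leak:

```python
# Back-of-the-envelope check of the leaked slide's math.
# Assumption: the R9 290X draws roughly 290 W in this benchmark
# (a typical gaming figure for that card, not from the slide itself).
r290x_power = 290.0        # watts (assumed)
nano_power = 175.0         # watts (from the slide)
perf_per_watt_ratio = 1.9  # Nano vs. 290X (from the slide)

# performance ratio = (perf-per-watt ratio) * (power ratio)
perf_ratio = perf_per_watt_ratio * (nano_power / r290x_power)
print(f"Nano relative performance: {perf_ratio:.2f}x the 290X")
```

That works out to roughly 1.15x the 290X under these assumptions, which lines up with the Nano matching or slightly exceeding it in the benchmark.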
We have a lot of questions that have yet to be answered of course, including the actual speed of both core and HBM, and just how quiet this air-cooled card might be under load. We shouldn't have to wait much longer!
Subject: Graphics Cards | August 21, 2015 - 11:30 AM | Sebastian Peak
Tagged: PC, nvidia, Matrox, jpr, graphics cards, gpu market share, desktop market share, amd, AIB, add in board
While we reported recently on the decline of overall GPU shipments, a new report from Jon Peddie Research covers the add-in board segment to give us a look at the desktop graphics card market. So how are the big two (sorry, Matrox) doing?
The big news is of course a drop in market share for AMD of 4.5% quarter-to-quarter, and down to just 18% from 37.9% last year. There will be many opinions as to why their share has been dropping in the last year, but it certainly didn't help that the 300-series GPUs are rebrands of 200-series, and the new Fury cards have had very limited availability so far.
The graph from Mercury Research illustrates what is almost a mirror image, with NVIDIA gaining 20% as AMD lost 20%, for a 40% swing in overall share. Ouch. Meanwhile (not pictured), Matrox didn't have a statistically meaningful quarter, but still managed to appear on the JPR report with 0.1% market share (somehow) last quarter.
The desktop market isn't actually suffering quite as much as the overall PC market, and specifically the enthusiast market.
"The AIB market has benefited from the enthusiast segment PC growth, which has been partially fueled by recent introductions of exciting new powerful (GPUs). The demand for high-end PCs and associated hardware from the enthusiast and overclocking segments has bucked the downward trend and given AIB vendors a needed prospect to offset declining sales in the mainstream consumer space."
But not all is well, considering the overall add-in board attach rate with desktops "has declined from a high of 63% in Q1 2008 to 37% this quarter". This is indicative of the industry's overall trend toward integrated GPUs with AMD APUs and Intel processor graphics, as illustrated by this graphic from the report.
The year-to-year numbers show an overall drop of 18.8%, and even with its dominant 81.9% market share, NVIDIA has still seen shipments decrease by 12% this quarter. These trends seem to indicate a gloomy future for discrete graphics in the coming years, but for now we in the enthusiast community will continue to keep it afloat. It would certainly be nice to see some gains from AMD soon to keep things interesting, which might help bring prices down from their lofty $400 - $600 mark for flagship cards at the moment.
Subject: Graphics Cards, Displays | August 19, 2015 - 08:03 PM | Scott Michaud
Tagged: Intel, freesync, DisplayPort, adaptive sync
DisplayPort Adaptive-Sync is a VESA standard, pushed by AMD, that allows input signals to control when a monitor refreshes. A normal monitor redraws on a defined interval because old CRT monitors needed to scan with an electron gun, which took time. LCDs never needed to do this, but they kept the fixed interval anyway. This meant that the monitor was drawing a frame whether one was ready or not, which led to tearing, stutter, and other nasty effects if the GPU couldn't keep up. With Adaptive-Sync, GPUs don't “miss the train” -- the train leaves when they board.
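The fixed-interval behavior described above can be sketched with a toy simulation. The frame completion times below are invented for illustration; the point is that on a fixed-refresh display, late frames must wait for the next tick (and can even pile up on the same tick), while an adaptive display shows each frame the moment it is done:

```python
# Toy model: frames finish at irregular times; a fixed-refresh display
# can only present on its next tick, so late frames wait (stutter) and
# two frames can land on the same tick. An adaptive display refreshes
# when the frame is ready. All numbers are illustrative.
frame_ready = [5, 21, 30, 55, 62]   # ms timestamps when the GPU finishes
refresh_ms = 16                     # ~60 Hz fixed interval

def fixed_refresh_display_times(ready, interval):
    # each frame appears at the first tick at or after it is ready
    return [((t + interval - 1) // interval) * interval for t in ready]

def adaptive_display_times(ready):
    return list(ready)              # shown as soon as it is done

print("fixed:   ", fixed_refresh_display_times(frame_ready, refresh_ms))
print("adaptive:", adaptive_display_times(frame_ready))
```

In this toy run the fixed display shows frames at 16, 32, 32, 64, and 64 ms: two frames collide on the 32 ms tick and two more on the 64 ms tick, which is exactly the judder Adaptive-Sync is designed to avoid.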
Intel has, according to The Tech Report, decided to support Adaptive-Sync -- but not necessarily in their current product line. David Blythe of Intel would not comment on specific dates or release windows, just that it is in their plans. This makes sense for Intel because it allows their customers to push settings higher while maintaining a smooth experience, which matters a lot for users of integrated graphics.
While “AMD FreeSync” is a stack of technologies, VESA DisplayPort Adaptive-Sync should be all that is required on the monitor side. This should mean that Intel has access to all of AMD's adaptive refresh monitors, although the driver and GPU circuitry would need to be their burden. G-Sync monitors (at least those with NVIDIA-design modules -- this is currently all of them except for one laptop I think) would be off limits, though.
Subject: Graphics Cards, Systems | August 17, 2015 - 11:00 AM | Sebastian Peak
Tagged: NPD, gpu, discrete gpu, graphics, marketshare, PC industry
News from NPD Research today shows a sharp decline in discrete graphics shipments from all major vendors. Not great news for the PC industry, but not all that surprising, either.
These numbers don’t indicate a lack of discrete GPU interest in the PC enthusiast community of course, but certainly show how the mainstream market has changed. OEM laptop and (more recently) desktop makers predominantly use processor graphics from Intel and AMD APUs, though the decrease of over 7% for Intel GPUs suggests a decline in PC shipments overall.
Here are the highlights, quoted directly from NPD Research:
- AMD's overall unit shipments decreased 25.82% quarter-to-quarter, Intel's total shipments decreased 7.39% from last quarter, and Nvidia's decreased 16.19%.
- The attach rate of GPUs (including integrated and discrete GPUs) to PCs for the quarter was 137%, down 10.82% from last quarter, and 26.43% of PCs had discrete GPUs, down 4.15%.
- The overall PC market decreased 4.05% quarter-to-quarter, and 10.40% year-to-year.
- Desktop graphics add-in boards (AIBs) that use discrete GPUs decreased 16.81% from last quarter.
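A 137% attach rate can exceed 100% because a single PC may count twice: once for its integrated GPU and once for a discrete card. A quick illustration with made-up shipment numbers (none of these figures are from the report):

```python
# Why a GPU attach rate can exceed 100%: some PCs ship with both an
# integrated GPU and a discrete card, so they contribute two GPUs.
# Shipment numbers below are invented purely for illustration.
pcs_shipped = 100
pcs_with_igpu_only = 63
pcs_with_both = 27          # iGPU + discrete card
pcs_with_discrete_only = 10

total_gpus = (pcs_with_igpu_only            # 1 GPU each
              + pcs_with_both * 2           # 2 GPUs each
              + pcs_with_discrete_only)     # 1 GPU each
attach_rate = total_gpus / pcs_shipped * 100
print(f"attach rate: {attach_rate:.0f}%")   # 127% in this toy example
```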
An overall decrease of 10.4% year-to-year indicates what I'll call the continuing evolution of the PC (rather than a decline, per se), and shows how many have come to depend on smartphones for the basic computing tasks (email, web browsing) that once required a PC. Tablets didn’t replace the PC in the way that was predicted only 5 years ago, and it’s almost become essential to pair a PC with a smartphone for a complete personal computing experience (sorry, tablets – we just don’t NEED you as much).
I would guess anyone reading this on a PC enthusiast site is not only using a PC, but probably one with discrete graphics, too. Or maybe you exclusively view our site on a tablet or smartphone? I for one won’t stop buying PC components until they just aren’t available anymore, and that dark day is probably still many years off.
Subject: Graphics Cards | August 12, 2015 - 05:29 PM | Sebastian Peak
Tagged: STRIX R9 Fury, Radeon R9 Fury, overclocking, oc, LN2, hbm, fury x, asus, amd
What happens when you unlock an AMD Fury to have the Compute Units of a Fury X, and then overclock the snot out of it using LN2? User Xtreme Addict in the HWBot forums has created a comprehensive guide to do just this, and the results are incredible.
Not for the faint of heart (image credit: Xtreme Addict)
"The steps include unlocking the Compute Units to enable Fury X grade performance, enabling the hotwire soldering pads, a 0.95v Rail mod, and of course the trimpot/hotwire VGPU, VMEM, VPLL (VDDCI) mods.
The result? A GPU frequency of 1450 MHz and HBM frequency of 1000 MHz. For the HBM that's a 100% overclock."
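The percentages in the quote check out against AMD's published stock Fury X clocks (1050 MHz core, 500 MHz HBM):

```python
def overclock_pct(stock_mhz, oc_mhz):
    """Percentage increase over the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Stock Fury X: 1050 MHz core, 500 MHz HBM (AMD's published specs)
print(f"core: +{overclock_pct(1050, 1450):.0f}%")  # the 400 MHz increase
print(f"HBM:  +{overclock_pct(500, 1000):.0f}%")   # a 100% overclock
```

The 1450 MHz core works out to roughly a 38% overclock, while doubling the HBM from 500 to 1000 MHz is indeed a clean 100%.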
Beginning with a stock ASUS R9 Fury STRIX card, Xtreme Addict performed some surgery to fully unlock the voltage, and unlocked the Compute Units using a tool from this Overclock.net thread.
The results? Staggering. HBM at 1000 MHz is double the rate of the stock Fury X, and a GPU core of 1450 MHz is a 400 MHz increase. So what kind of performance did this heavily overclocked card achieve?
"The performance goes up from 6237 points at default to 6756 after unlocking the CUs, then 8121 points after overclock on air cooling, to eventually end up at 9634 points when fully unleashed with liquid nitrogen."
Apparently they were able to push the card even further, ending up with a whopping 10033 score in 3DMark Fire Strike Extreme.
While this method is far too extreme for 99% of enthusiasts, the idea of unlocking a retail Fury to the level of a Fury X through software/BIOS mods is much more accessible, as is the possibility of reaching much higher clocks through advanced cooling methods.
Unfortunately, if reading through this makes you want to run out and grab one of these STRIX cards, availability is still limited. Hopefully supply catches up to demand in the near future.
A quick look at stock status on Newegg for the featured R9 Fury card
Subject: Graphics Cards | August 12, 2015 - 04:43 PM | Ryan Shrout
Tagged: what year is it, voodoo 3, voodoo, video, unboxing, pci, 3dfx
Subject: Graphics Cards | August 12, 2015 - 02:44 PM | Jeremy Hellstrom
Tagged: GTX 980 Ti G1 GAMING, gigabyte, GTX 980 Ti, factory overclocked
The Gigabyte GTX 980 Ti G1 GAMING card comes with a 1152MHz Base Clock and 1241MHz Boost Clock straight out of the box and uses two 8-pin power connectors as opposed to an 8 and a 6-pin. That extra power and the WINDFORCE 3X custom cooler help you when overclocking the card beyond the frequencies it ships at. [H]ard|OCP used OC GURU II to up the voltage provided to this card and reached an overclock that hit 1367MHz in game with a 7GHz clock for the VRAM. Manually they managed to go even further; the VRAM could reach 8GHz and the GPU clock was measured at 1535MHz in game, a rather significant increase. The overclock increased performance by around 10% in most of the tests, which makes this card impressive even before you consider some of the other beneficial features, which you can read about at [H]ard|OCP.
"Today we review a custom built retail factory overclocked GIGABYTE GTX 980 Ti G1 GAMING video card. This video card is built to overclock in every way. We'll take this video card, compare it to the AMD Radeon R9 Fury X and overclock the GIGABYTE GTX 980 Ti G1 GAMING to its highest potential. The overclocking potential is amazing."
Here are some more Graphics Card articles from around the web:
- NVIDIA GeForce GTX 980 Ti: Simply The Best For Linux Gamers @ Phoronix
- EVGA GTX 980 Ti Classified Video Card Review @ Hardware Asylum
- Zotac GTX 980 Ti AMP! Extreme @ Bjorn3d
- Is 4GB video memory enough @ The Tech Report
- EK-FC Titan X (Original CSQ) GPU block @ HardwareOverclock
- EK Waterblocks Supremacy EVO CPU Block Review @HiTech Legion
- PowerColor R9 390X PCS+ Review @ Hardware Canucks
- Sapphire Radeon R9 Fury Tri-X Review @HiTech Legion
- HIS R9 380 IceQ X2 OC 2GB Video Card Review @ Madshrimps
- Sapphire R9 Fury Tri-X OC 4 GB @ techPowerUp
Subject: Graphics Cards, Processors, Mobile | August 12, 2015 - 07:30 AM | Ryan Shrout
Tagged: snapdragon 820, snapdragon, siggraph 2015, Siggraph, qualcomm, adreno 530, adreno
Despite the success of the Snapdragon 805 and even the 808, Qualcomm’s flagship Snapdragon 810 SoC had a tumultuous lifespan. Rumors and stories about the chip and an inability to run in phone form factors without overheating and/or draining battery life were rampant, despite the company’s insistence that the problem was fixed with a very quick second revision of the part. There are very few devices that used the 810, and instead we saw more of the flagship smartphones use the slightly cut-back SD 808 or the SD 805.
Today at Siggraph, Qualcomm starts the reveal of its new flagship SoC, Snapdragon 820. As the event coinciding with the launch is a graphics-specific show, Qualcomm is focusing on a high-level overview of the graphics portion of the Snapdragon 820: the updated Adreno 5xx architecture and associated designs, and a new camera image signal processor (ISP) aiming to improve the quality of photos and recordings on our mobile devices.
A modern SoC from Qualcomm features many different processors working in tandem to shape the user experience on the device. While the only details we are getting today focus on the Adreno 530 GPU and Spectra ISP, other segments like connectivity (wireless), the DSP, and video processing are important parts of the computing story. And we are well aware that Qualcomm is readying its own 64-bit processor architecture for the Kryo CPU rather than implementing the off-the-shelf cores from ARM used in the 810.
We also know that Qualcomm is targeting a “leading edge” FinFET process technology for SD 820, and though we haven’t been able to confirm anything, it looks very likely that this chip will be built on the Samsung 14nm line that also built the Exynos 7420.
But more than half of the processing on the upcoming Snapdragon 820 will focus on visual processing. From graphics to gaming to UI animations to image capture and video output, this chip’s die will be dominated by high-performance visuals.
Qualcomm’s list of target goals for SD 820 visuals reads as you would expect: perfection in every area. Wouldn’t we all love a phone or tablet that takes perfect photos every time, always focusing on the right things (or everything) with exceptional low-light performance? Though a lesser-known problem for consumers, having accurate color reproduction from capture, through processing, to the display would be a big advantage. And of course, we all want graphics performance that impresses and a user interface that is smooth and reliable, while enabling new experiences that we haven’t even thought of in the mobile form factor. Qualcomm thinks that Snapdragon 820 will be able to deliver on all of that.
Subject: Graphics Cards | August 11, 2015 - 02:54 PM | Jeremy Hellstrom
Tagged: rumour, nvidia, gtx 950
Rumours of the impending release of a GTX 950, and perhaps even a GTX 950 Ti, continue to spread, most recently at Videocardz, who have developed a reputation for this kind of report. Little is known at this time and the specifications are still unconfirmed, but they have found a page showing an ASUS STRIX GTX 950 with 2GB of memory and a DirectCU II cooler. The prices shown are unlikely to represent the actual retail price, even in Finland, where the capture is from.
Also spotted is a PNY GTX 950 retail box, which shows us little in the way of details; the power plug is facing away from the camera, so we are still unsure how many power connectors will be needed. Videocardz also reiterates their belief from the first leak that the card will use 75% of a GM206 Maxwell graphics processor, with 768 CUDA cores and a 128-bit memory interface.
Subject: Graphics Cards | August 10, 2015 - 06:14 PM | Sebastian Peak
Tagged: overclocking, overclock, open source, nvidia, MSI Afterburner, API
An author called "2PKAQWTUQM2Q7DJG" (likely not a real name) has published a fascinating little article today on his/her Wordpress blog entitled, "Overclocking Tools for NVIDIA GPUs Suck. I Made My Own". What it contains is a full account of the process of creating an overclocking tool beyond the constraints of common utilities such as MSI Afterburner.
By probing MSI's OC utility using OllyDbg (an x86 "assembler level analysing debugger"), the author was able to track down how Afterburner was working.
“nvapi.dll” definitely gets loaded here using LoadLibrary/GetModuleHandle. We’re on the right track. Now where exactly is that lib used? ... That’s simple, with the program running and the realtime graph disabled (it polls NvAPI constantly adding noise to the mass of API calls). we place a memory breakpoint on the .Text memory segment of the NVapi.dll inside MSI Afterburner’s process... Then we set the sliders in the MSI tool to get some negligible GPU underclock and hit the “apply” button. It breaks inside NvAPI… magic!
After further explaining the process and his/her source code for an overclocking utility, the user goes on to show the finished product in the form of a command line utility.
There is a link to the finished version of this utility at the end of the article, as well as the entire process with all source code. It makes for an interesting read (even for those painfully inept at programming, such as myself), and the provided link to download this mysterious overclocking utility (disguised as a JPG image file, no less) makes it both tempting and a little dubious. Does this really allow overclocking any NVIDIA GPU, including mobile? What could be the harm in trying?? In all seriousness, however, since some of what was seemingly uncovered in the article is no doubt proprietary, how long will this information be available?
It would probably be wise to follow the link to the Wordpress page ASAP!
Subject: Graphics Cards, Processors, Mobile, Shows and Expos | August 10, 2015 - 09:01 AM | Scott Michaud
Tagged: vulkan, spir, siggraph 2015, Siggraph, opengl sc, OpenGL ES, opengl, opencl, Khronos
When the Khronos Group announced Vulkan at GDC, they mentioned that the API is coming this year, and that this date is intended to under promise and over deliver. Recently, fans were hoping that it would be published at SIGGRAPH, which officially began yesterday. Unfortunately, Vulkan has not been released. It does hold a significant chunk of the news, however. Also, it's not like DirectX 12 is holding a commanding lead at the moment. The headers were public only for a few months, and the code samples are less than two weeks old.
The organization made announcements for six products today: OpenGL, OpenGL ES, OpenGL SC, OpenCL, SPIR, and, as mentioned, Vulkan. They wanted to make their commitment to all of their standards clear. Vulkan is urgent, but some developers will still want the framework of OpenGL. Bind what you need to the context, then issue a draw, and, if you do it wrong, the driver will often clean up the mess for you anyway. The briefing was structured to make it evident that OpenGL is still on their mind, which is likely why they made sure three OpenGL logos greeted me in their slide deck as early as possible. They are also taking and closely examining feedback about who wants to use Vulkan or OpenGL, and why.
As for Vulkan, confirmed platforms have been announced. Vendors have committed to drivers on Windows 7, 8, 10, Linux, including Steam OS, and Tizen (OSX and iOS are absent, though). Beyond all of that, Google will accept Vulkan on Android. This is a big deal, as Google, despite its open nature, has been avoiding several Khronos Group standards. For instance, Nexus phones and tablets do not have OpenCL drivers, although Google isn't stopping third parties from rolling it into their devices, like Samsung and NVIDIA. Direct support of Vulkan should help cross-platform development as well as, and more importantly, target the multi-core, relatively slow single-threaded processors of those devices. This could even be of significant use for web browsers, especially on sites with a lot of simple 2D effects. Google is also contributing support from their drawElements Quality Program (dEQP), which is a conformance test suite that they bought back in 2014. They are going to expand it to Vulkan, so that developers will have more consistency between devices -- a big win for Android.
While we're not done with Vulkan, one of the biggest announcements is OpenGL ES 3.2, and it fits here nicely. At around the time that OpenGL ES 3.1 brought Compute Shaders to the embedded platform, Google launched the Android Extension Pack (AEP). This absorbed OpenGL ES 3.1 and added Tessellation, Geometry Shaders, and ASTC texture compression to it. It also created more tension between Google and cross-platform developers, who felt like Google was trying to pull its developers away from the Khronos Group. Today, OpenGL ES 3.2 was announced, and it includes each of the AEP features plus a few more (like “enhanced” blending). Better yet, Google will support it directly.
Next up are the desktop standards, before we finish with a resurrected embedded standard.
OpenGL has a few new extensions added. One interesting one is the ability to assign locations to multi-samples within a pixel. There is a whole list of sub-pixel layouts, such as rotated grid and Poisson disc. Apparently this extension allows developers to choose it, as certain algorithms work better or worse for certain geometries and structures. There were probably vendor-specific extensions for a while, but now it's a ratified one. Another extension allows “streamlined sparse textures”, which helps manage data where the number of unpopulated entries outweighs the number of populated ones.
OpenCL 2.0 was given a refresh, too. It contains a few bug fixes and clarifications that will help it be adopted. C++ headers were also released, although I cannot comment much on it. I do not know the state that OpenCL 2.0 was in before now.
And this is when we make our way back to Vulkan.
SPIR-V, the intermediate language that runs on the GPU (or other offloading device, including the other cores of a CPU) in OpenCL and Vulkan, is seeing a lot of community support. Projects are under way to allow developers to write GPU code in several interesting languages: Python, .NET (C#), Rust, Haskell, and many more. The slide lists nine that Khronos Group knows about, but those four are pretty interesting. Again, this is saying that you can write code in the aforementioned languages and have it run directly on a GPU. Curiously missing is HLSL, and the President of Khronos Group agreed that it would be a useful language. The ability to cross-compile HLSL into SPIR-V means that shader code written for DirectX 9, 10, 11, and 12 could be compiled for Vulkan. He expects that it won't take long for a project to start, and one might already be happening somewhere beyond the reach of his Google searches. Regardless, those who are afraid to program in the C-like GLSL and HLSL shading languages might find C# and Python to be a bit more their speed, and those seem to be happening through SPIR-V.
As mentioned, we'll end on something completely different.
For several years, the OpenGL SC has been on hiatus. This group defines standards for graphics (and soon GPU compute) in “safety critical” applications. For the longest time, this meant aircraft. The dozens of planes (which I assume meant dozens of models of planes) that adopted this technology were fine with a fixed-function pipeline. It has been about ten years since OpenGL SC 1.0 launched, which was based on OpenGL ES 1.0. SC 2.0 is planned to launch in 2016, which will be based on the much more modern OpenGL ES 2 and ES 3 APIs that allow pixel and vertex shaders. The Khronos Group is asking for participation to direct SC 2.0, as well as a future graphics and compute API that is potentially based on Vulkan.
The devices that this platform intends to target are: aircraft (again), automobiles, drones, and robots. There are a lot of ways that GPUs can help these devices, but they need a good API to certify against. It needs to withstand more than an Ouya, because crashes could be much more literal.
Subject: Graphics Cards | August 7, 2015 - 10:46 AM | Ryan Shrout
Tagged: sdk, Oculus, nvidia, direct driver mode, amd
In an email sent out by Oculus this morning, the company has revealed some interesting details about the upcoming release of the Oculus SDK 0.7 on August 20th. The most interesting change is the introduction of Direct Driver Mode, developed in tandem with both AMD and NVIDIA.
This new version of the SDK will remove the simplistic "Extended Mode" that many users and developers implemented for a quick and dirty way of getting the Rift development kits up and running. However, that implementation had the downside of additional latency, something that Oculus is trying to eliminate completely.
Here is what Oculus wrote about the "Direct Driver Mode" in its email to developers:
Direct Driver Mode is the most robust and reliable solution for interfacing with the Rift to date. Rather than inserting VR functionality between the OS and the graphics driver, headset awareness is added directly to the driver. As a result, Direct Driver Mode avoids many of the latency challenges of Extended Mode and also significantly reduces the number of conflicts between the Oculus SDK and third party applications. Note that Direct Driver Mode requires new drivers from NVIDIA and AMD, particularly for Kepler (GTX 645 or better) and GCN (HD 7730 or better) architectures, respectively.
We have heard NVIDIA and AMD talk about the benefits of direct driver implementations for VR headsets for a long time. NVIDIA calls its software implementation GameWorks VR and AMD calls its software support LiquidVR. Both aim to do the same thing: give the developer more direct access to the headset hardware while offering new ways for faster, lower-latency rendering in games.
Both companies have unique features to offer as well, including NVIDIA and its multi-res shading technology. Check out our interview with NVIDIA on the topic below:
NVIDIA's Tom Petersen came to our offices to talk about GameWorks VR
Other notes in the email include a tentative scheduled release of November for the 1.0 version of the Oculus SDK. But until that version releases, Oculus is only guaranteeing that each new runtime will support the previous version of the SDK. So, when SDK 0.8 is released, you can only guarantee support for it and 0.7. When 0.9 comes out, game developers will need to make sure they are at least on SDK 0.8, otherwise they risk incompatibility. Things will be tough for developers in this short window of time, but Oculus claims it's necessary to "allow them to more rapidly evolve the software architecture and API." After SDK 1.0 hits, future SDK releases will continue to support 1.0.
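The support window described above can be sketched as a simple compatibility check. This is a toy model of the policy as the email describes it, not Oculus's actual versioning code:

```python
# Sketch of the pre-1.0 Oculus runtime support policy described above:
# each runtime supports only its own SDK version and the one before it,
# while 1.0-era runtimes keep supporting SDK 1.0 going forward.
# Version handling is simplified; this is an illustration only.
PRE_1_0 = ["0.7", "0.8", "0.9"]

def runtime_supports(runtime_sdk: str, app_sdk: str) -> bool:
    if runtime_sdk not in PRE_1_0:          # 1.0 era: 1.0 stays supported
        return app_sdk == "1.0" or app_sdk == runtime_sdk
    i = PRE_1_0.index(runtime_sdk)
    return app_sdk in PRE_1_0[max(0, i - 1): i + 1]

print(runtime_supports("0.8", "0.7"))  # previous SDK still works
print(runtime_supports("0.9", "0.7"))  # two versions behind: dropped
```

In other words, a game built on 0.7 survives the 0.8 runtime but breaks on 0.9, which is exactly the upgrade treadmill developers face until 1.0 lands.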
Subject: Graphics Cards | August 4, 2015 - 06:06 PM | Jeremy Hellstrom
Tagged: 980 Ti, asus, msi, gigabyte, evga, GTX 980 Ti G1 GAMING, GTX 980 Ti STRIX OC, GTX 980 Ti gaming 6g
If you've decided that the GTX 980 Ti is the card for you due to price, performance, or other less tangible reasons, you will find that there are quite a few to choose from. Each has the same basic design, but the coolers and frequencies vary between manufacturers, as do the prices. That is why it is handy that The Tech Report have put together a round-up of four models for a direct comparison. In the article you will see the EVGA GeForce GTX 980 Ti SC+, Gigabyte's GTX 980 Ti G1 Gaming, the MSI GTX 980 Ti Gaming 6G, and the ASUS Strix GTX 980 Ti OC Edition. The cards are not only checked for stock and overclocked performance; there are also noise levels and power consumption to think about, so check out the full review.
"The GeForce GTX 980 Ti is pretty much the fastest GPU you can buy. The aftermarket cards offer higher clocks and better cooling than Nvidia's reference design. But which one is right for you?"
Here are some more Graphics Card articles from around the web:
- MSI GTX 980 Ti Gaming 6G Review @ Hardware Canucks
- Palit GTX 980 Ti Super JetStream 6 GB @ techPowerUp
- MSI GeForce GTX 960 GAMING 4G @ [H]ard|OCP
- Maxwell Hits The Workstation: NVIDIA Quadro M6000 Graphics Card Review @ Techgage
- NVIDIA's Tegra X1 Delivers Stunning Performance On Ubuntu Linux @ Phoronix
- The AMD Radeon R9 Fury Is Currently A Disaster On Linux @ Phoronix
- Sapphire Nitro R9 390 8G D5 Review, Playing With Nitro @ Bjorn3d
Subject: Graphics Cards | August 1, 2015 - 07:31 AM | Scott Michaud
Tagged: nvidia, maxwell, gtx 960, gtx 950 ti, gtx 950
A couple of sites are claiming that NVIDIA intends to replace the first-generation GeForce GTX 750 Ti with more Maxwell, in the form of the GeForce GTX 950 and/or GTX 950 Ti. The general consensus is that it will run on a cut-down GM206 chip, which is currently found in the GTX 960. I will go light on the rumored specifications because this part of the rumor is single-source, from accounts of a HWBattle page that has been deleted. But for a general ballpark of performance, the GTX 960 has a full GM206 chip while the 950(/Ti) is expected to lose about a quarter of its printed shader units.
The particularly interesting part is the power, though. As we reported, Maxwell was branded as a power-efficient version of the Kepler architecture. This led to a high-end graphics card that could be powered by the PCIe bus. According to these rumors, the new card will require a single 8-pin power connector on top of the 75W provided by the bus. This has one of two interesting implications that I can think of.
- The 750 Ti did not sell for existing systems as well as anticipated, or
- The GM206 chip just couldn't hit that power target and they didn't want to make another die
Whichever is true, it will be interesting to see how NVIDIA brands this if/when the card launches. Creating a graphics card for systems without available power rails was a novel concept and it seemed to draw attention. That said, the rumors claim they're not doing it this time... for some reason.
Subject: Graphics Cards | July 27, 2015 - 04:33 PM | Jeremy Hellstrom
Tagged: 4k, amd, R9 FuryX, GTX 980 Ti, gtx titan x
[H]ard|OCP have set up their testbed for a 4K showdown between the similarly priced GTX 980 Ti and Radeon R9 Fury X, with the $1000 TITAN X tossed in there for those with more money than sense. The test uses the new Catalyst 15.7 and GeForce 353.30 drivers to give a more even playing field while benchmarking Witcher 3, GTA V, and other games. When the dust settled, the pattern was obvious and the performance differences could be seen. The deltas were not huge, but when you are paying $650 + tax for a GPU, even performance a few frames better or a graphical option that can be enabled really matters. Perhaps the most interesting result was the redemption of the TITAN X; its extra price was reflected in the performance results. Check them out for yourself here.
"We take the new AMD Radeon R9 Fury X and evaluate the 4K gaming experience. We will also compare against the price competitive GeForce GTX 980 Ti as well as a GeForce GTX TITAN X. Which video card provides the best experience and performance when gaming at glorious 4K resolution?"
Here are some more Graphics Card articles from around the web:
- PowerColor PCS+ R9 380 4GB: The Affordable 4GB Solution @ Bjorn3D
- AMD Fury X "Fiji" Voltage Scaling @ techPowerUp
- HIS Radeon R9 390 IceQ X2 OC 8GB Video Card Review @ Madshrimps
- XFX R9 380 Double Dissipation 4GB @ [H]ard|OCP
- The New AMD GPU Open-Source Driver On Linux 4.2 Works, But Still A Lot Of Work Ahead @ Phoronix
- MSI Radeon R7 370 GAMING 4G @ Phoronix
- 15-Way AMD/NVIDIA Graphics Card Comparison For 4K Linux Gaming @ Phoronix
- PNY GTX980 Ti XLR8 OC @ Kitguru
- ASUS GTX 980 Ti STRIX Gaming 6 GB @ techPowerUp
- PNY GTX 960 XLR8 Review @ OCC
- GIGABYTE GeForce GTX 970 WindForce 3X OC 4GB Graphics Card Review @ NikKTech
- Inno3D iChill GTX 980 Ti HerculeZ X3 Air Boss Ultra @ HardwareOverclock
Subject: Graphics Cards | July 24, 2015 - 12:16 PM | Sebastian Peak
Tagged: rumor, pascal, nvidia, HBM2, hbm, graphics card, gpu
An exclusive report from Fudzilla claims some outlandish numbers for the upcoming NVIDIA Pascal GPU, including 17 billion transistors and a massive amount of second-gen HBM memory.
According to the report:
"Pascal is the successor to the Maxwell Titan X GM200 and we have been tipped off by some reliable sources that it will have more than a double the number of transistors. The huge increase comes from Pascal's 16 nm FinFET process and its transistor size is close to two times smaller."
The NVIDIA Pascal board (Image credit: Legit Reviews)
Pascal's 16nm FinFET production will be a major change from the existing 28nm process found on all current NVIDIA GPUs, and if this report is accurate NVIDIA is taking full advantage, with a transistor count more than double the 8 billion found in the TITAN X.
(Image credit: Fudzilla)
And what about memory? We have long known that Pascal will be NVIDIA's first foray into HBM, and Fudzilla is reporting that up to 32GB of second-gen HBM (HBM2) will be present on the highest-end model, a rather outrageous number even compared to the 12GB of the TITAN X.
"HBM2 enables cards with 4 HBM 2.0 cards with 4GB per chip, or four HBM 2.0 cards with 8GB per chips results with 16GB and 32GB respectively. Pascal has power to do both, depending on the SKU."
Pascal is expected in 2016, so we'll have plenty of time to speculate on these and doubtless other rumors to come.
Subject: Graphics Cards | July 23, 2015 - 10:52 AM | Ryan Shrout
Tagged: nvidia, geforce, gtx, bundle, metal gear solid, phantom pain
NVIDIA continues with its pattern of flagship game bundles with today's announcement. Starting today, GeForce GTX 980 Ti, 980, 970 and 960 GPUs from select retailers will include a copy of Metal Gear Solid V: The Phantom Pain, due out September 15th. (Bundle is live on Amazon.com.) Also, notebooks that use the GTX 980M or 970M GPU qualify.
From NVIDIA's marketing on the bundle:
Only GeForce GTX gives you the power and performance to game like the Big Boss. Experience the METAL GEAR SOLID V: THE PHANTOM PAIN with incredible visuals, uncompromised gameplay, and advanced technologies. NVIDIA G-SYNC™ delivers smooth and stutter-free gaming, GeForce Experience™ provides optimal playable settings, and NVIDIA GameStream™ technology streams your game to any NVIDIA SHIELD™ device.
It appears that Amazon.com already has its landing page up and ready for the MGS V bundle program, so if you are hunting for a new graphics card stop there and see what they have in your range.
Let's hope that this game release goes a bit more smoothly than Batman: Arkham Knight's did...
Subject: Graphics Cards | July 20, 2015 - 02:00 PM | Jeremy Hellstrom
Tagged: amd, linux, CS:GO
Thankfully it has been quite a while since we saw GPU driver optimizations tied to specific .exe filenames on Windows; in the past, both major providers have tweaked performance based on the name of the executable that launches a game. This particular flavour of underhandedness had become passé, at least until now. Phoronix has spotted it once again, this time seeing a big jump in CS:GO performance after renaming the binary from csgo_linux to hl2_linux. The game is built on the same engine, but the optimizations for the Source engine are not properly applied to CS:GO.
There is nothing nefarious about this particular example; it seems more a case of AMD's driver team being lazy, or more likely short-staffed. If you play CS:GO on Linux, rename your binary and you will see a jump in performance with no deleterious side effects. Phoronix is investigating more games to see if there are other inconsistently applied optimizations.
"Should you be using a Radeon graphics card with the AMD Catalyst Linux driver and are disappointed by the poor performance, there is a very easy workaround for gaining much better performance under Linux... In some cases a simple tweak will yield around 40% better performance!"
Here are some more Graphics Card articles from around the web:
- Open-Source Linux Graphics: A10-7870K Godavari vs. i7-4790K Haswell vs. i7-5775C Broadwell @ Phoronix
- 12K (Triple 4K Monitor) Graphics Test Bench Upgrade @ eTeknix
- MSI R9 390X GAMING vs ASUS STRIX R9 Fury @ [H]ard|OCP
- Asus Strix R9 390X Gaming OC 8G @ Bjorn3d
- Sapphire Tri-X R9 Fury 4GB @ eTeknix
- AMD R9 Fury X CrossfireX 12K Eyefinity @ eTeknix
- HIS Radeon R9 390X IceQ X2 OC 8GB Video Card Review @ Madshrimps
- XFX R9 380 4G DD, XFX Review, XFX Rocks the DD Coolers Again! @ Bjorn3d
- Asus Radeon R9 Fury Strix DC3 OC @ Kitguru
- Sapphire Tri-X Radeon R9 Fury Review @ Modders-Inc
- AMD's Latest Open-Source Driver On Linux Is Getting Competitive With Catalyst 15.7 @ Phoronix
- Zotac GTX 980 Ti AMP! Extreme Review @ Hardware Canucks
- Palit GeForce GTX 980Ti Super Jetstream @ Kitguru
- Intel Iris Pro 6200 Graphics Are A Dream Come True For Open-Source Linux Fans @ Phoronix
Subject: Graphics Cards, Processors, Mobile | July 19, 2015 - 06:59 AM | Scott Michaud
Tagged: Zen, TSMC, Skylake, pascal, nvidia, Intel, Cannonlake, amd, 7nm, 16nm, 10nm
Getting smaller features allows a chip designer to create products that are faster, cheaper, and consume less power. Years ago, most of them had their own production facilities but that is getting rare. IBM has just finished selling its manufacturing off to GlobalFoundries, which was spun out of AMD when it divested from fabrication in 2009. Texas Instruments, on the other hand, decided that they would continue manufacturing but get out of the chip design business. Intel and Samsung are arguably the last two players with a strong commitment to both sides of the “let's make a chip” coin.
So where do these chip designers go? TSMC is the name that comes up most often. Any given discrete GPU from the last several years has probably been produced there, along with several CPUs and SoCs from a variety of fabless semiconductor companies.
Several years ago, when the GeForce 600-series launched, TSMC's 28nm line led to shortages, with GPUs remaining out of stock for quite some time. Since then, 28nm has been the stable workhorse for countless high-performance products. Recent chips have been physically huge, thanks to a mature process that grants fewer defects. The designers are anxious to move to smaller processes, though.
In a conference call at 2 AM (EDT) on Thursday, which is 2 PM in Taiwan, Mark Liu of TSMC announced that “the ramping of our 16 nanometer will be very steep, even steeper than our 20nm”. By that, they mean this year. Hopefully this translates to production that can be used for GPUs and CPUs early, as AMD needs it to launch the Zen CPU architecture as early in 2016 as possible. Graphics cards have also been stuck on 28nm for over three years. It's time.
Also interesting is how TSMC believes that they can hit 10nm by the end of 2016. If so, this might put them ahead of Intel. That said, Intel was also confident that they could reach 10nm by the end of 2016, right until they announced Kaby Lake a few days ago. We will need to see if it pans out. If it does, competitors could actually beat Intel to the market at that feature size -- although that could end up being mobile SoCs and other integrated circuits that are uninteresting for the PC market.
Following the announcement from IBM Research, 7nm was also mentioned in TSMC's call. Apparently they expect to start qualifying in Q1 2017. That does not provide an estimate for production but, if their 10nm schedule is both accurate and representative of 7nm, that would put production somewhere in 2018. Note that I just speculated on an if of an if of a speculation, so take that with a mine of salt. There is probably a very good reason that this date wasn't mentioned in the call.
Back to the 16nm discussion, what are you hoping for most? New GPUs from NVIDIA, new GPUs from AMD, a new generation of mobile SoCs, or the launch of AMD's new CPU architecture? This should make for a highly entertaining comments section on a Sunday morning, don't you agree?
Subject: Graphics Cards | July 17, 2015 - 08:20 AM | Sebastian Peak
Tagged: radeon, r9 nano, hbm, Fiji, amd
AMD has spilled the beans on at least one aspect of the R9 Nano: the release timeframe. On their Q2 earnings call yesterday AMD CEO Lisa Su made this telling remark:
“Fury just launched, actually this week, and we will be launching Nano in the August timeframe.”
Image credit: VideoCardz.com
Wccftech had the story based on the AMD earnings call, but unfortunately there is no other new information on the card just yet. We've speculated on how much lower clocks would need to be to meet the 175W target with full Fiji silicon, and the drop is going to be significant. The air coolers we've seen on the Fury (non-X) cards to date have extended well beyond the PCB, and the Nano is a mini-ITX form factor design.
Regardless of where the final GPU and memory clocks land, I think it's safe to assume there won't be much (if any) overclocking headroom. Then again, if the card does have higher performance than the 290X in a mini-ITX package at 175W, I don't think OC headroom will be a drawback. I guess we'll have to keep waiting for the official specs before the end of August.