Considering picking up a vintage GPU?

Subject: Graphics Cards | April 2, 2018 - 02:36 PM |
Tagged: cryptocurrency, graphics cards

It has been a while since the Hardware Leaderboard has been updated, as it is incredibly depressing to try to price out a new GPU, for obvious reasons.  TechSpot has taken an interesting approach to dealing with the crypto-blues: they have benchmarked 44 older GPUs in current games to see how well they fare.  The cards range from the GTX 560 and HD 7770 through to current-model cards, all of which are available to purchase used from sites such as eBay.  Buying a used card brings the price down to somewhat reasonable levels, though you do run the risk of getting a dead or dying card.  With metrics such as price per frame, this is a great resource if you find yourself in desperate need of a GPU in the current market.  Check it out here.
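
The price-per-frame metric is simply what you pay divided by the average frame rate a card delivers. As a minimal sketch of the idea (the card names, used prices, and frame rates below are placeholder values for illustration, not TechSpot's measurements):

```python
# Price per frame = used price / average FPS across a benchmark suite.
# The cards, prices, and frame rates here are placeholder values, not TechSpot data.
used_cards = {
    "GTX 770": (120, 48),   # (used price in USD, average FPS)
    "R9 280X": (130, 52),
    "GTX 970": (220, 74),
}

for name, (price, fps) in sorted(used_cards.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:8s}  ${price:>3}  {fps:>3} FPS  ${price / fps:.2f} per frame")
```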

"Along with our recent editorials on why it's a bad time to build a gaming PC, we've been revisiting some older GPUs to see how they hold up in today's games. But how do you know how much you should be paying for a secondhand graphics card?"

Source: TechSpot

Samsung Begins Mass Production Of 18 Gbps 16-Gigabit GDDR6 Memory

Subject: Memory | January 18, 2018 - 12:34 AM |
Tagged: Samsung, graphics memory, graphics cards, gddr6, 19nm

Samsung is now mass producing new higher density GDDR6 memory built on its 10nm-class process technology that it claims offers twice the speed and density of its previous 20nm GDDR5. Samsung's new GDDR6 memory uses 16 Gb dies (2 GB) featuring pin speeds of 18 Gbps (gigabits-per-second) and is able to hit data transfer speeds of up to 72 GB/s per chip.

According to Samsung, its new GDDR6 uses a new circuit design that allows it to run on a mere 1.35 volts. Also good news for Samsung, and for memory supply (and thus the pricing and availability of products), is that the company is seeing a 30% gain in manufacturing productivity when cranking out its 16Gb GDDR6 versus its 20nm GDDR5.

Running at 18 Gbps, the new GDDR6 offers up quite a bit of bandwidth and will allow for graphics cards with much higher amounts of VRAM. Per package, Samsung's 16Gb GDDR6 offers 72 GB/s, which is twice the density, pin speed, and bandwidth of its 8Gb GDDR5 running at 8 Gbps and 1.5V with data transfers of 32 GB/s. (Note that SK Hynix has announced plans to produce 9 Gbps and 10 Gbps dies which max out at 40 GB/s.) GDDR5X gets closer to this mark, and in theory is able to hit up to 16 Gbps per pin and 64 GB/s per die, but so far the G5X used in real-world products has been much slower (the Titan Xp runs at 11.4 Gbps, for example).

The Titan Xp runs 12 8Gb (1GB) dies at 11.4 Gbps on a 384-bit memory bus for a maximum memory bandwidth of 547 GB/s. Moving to GDDR6 would enable that same graphics card to have 24 GB of memory (with the same number of dies) and up to 864 GB/s of bandwidth, which approaches High Bandwidth Memory levels of performance (though it still falls short of newer HBM2, and in practice the graphics card would likely be more conservative on memory speeds). Still, it is an impressive jump in memory performance that widens the gap between GDDR6 and GDDR5X. I am curious how the GPU memory market will shake out in 2018 and 2019, with GDDR5, GDDR5X, GDDR6, HBM, HBM2, and eventually HBM3 all in play for graphics cards, and where each memory type will land, especially on mid-range and high-end consumer cards (HBM2/3 still holds the performance crown and is ideal for the HPC market).
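
For anyone who wants to check the bandwidth math, the peak figure is just per-pin speed multiplied by interface width. Here is a quick sketch; it assumes the usual 32-bit interface per GDDR package and the Titan Xp's 384-bit bus:

```python
def bandwidth_gbs(pin_speed_gbps, bus_width_bits):
    """Peak bandwidth in GB/s: per-pin rate (Gbps) times bus width, divided by 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

# Per package (one GDDR package exposes a 32-bit interface):
print(bandwidth_gbs(18, 32))     # 72.0 GB/s  - Samsung 18 Gbps GDDR6
print(bandwidth_gbs(8, 32))      # 32.0 GB/s  - 8 Gbps GDDR5

# Per card, on a 384-bit bus (12 x 32-bit packages, as on the Titan Xp):
print(bandwidth_gbs(11.4, 384))  # ~547.2 GB/s - GDDR5X at 11.4 Gbps
print(bandwidth_gbs(18, 384))    # 864.0 GB/s  - GDDR6 at 18 Gbps
```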

Samsung is aiming its new 18Gbps 16Gb memory at high performance graphics cards, game consoles, vehicles, and networking devices. Stay tuned for more information on GDDR6 as it develops!

Source: Samsung

Samsung Mass Producing Second Generation "Aquabolt" HBM2: Better, Faster, and Stronger

Subject: Memory | January 12, 2018 - 05:46 PM |
Tagged: supercomputing, Samsung, HPC, HBM2, graphics cards, aquabolt

Samsung recently announced that it has begun mass production of its second generation HBM2 memory, which it is calling “Aquabolt”. Samsung has refined the design of its 8GB HBM2 packages, allowing them to achieve an impressive 2.4 Gbps per-pin data transfer rate without needing more power than its first generation 1.2V HBM2.

Reportedly, Samsung is using new TSV (through-silicon via) design techniques and adding additional thermal bumps between dies to improve clocks and thermal control. Each 8GB HBM2 “Aquabolt” package comprises eight 8Gb dies, each vertically interconnected using 5,000 TSVs - a huge number, especially considering how small and tightly packed these dies are. Further, Samsung has added a new protective layer at the bottom of the stack to reinforce the package’s physical strength. While the press release did not go into detail, it does mention that Samsung had to overcome challenges relating to “collateral clock skewing” as a result of the sheer number of TSVs.

On the performance front, Samsung claims that Aquabolt offers a 50% increase in per-package performance versus its first generation “Flarebolt” memory, which ran at 1.6 Gbps per pin at 1.2V. Interestingly, Aquabolt is also faster than Samsung’s 2.0 Gbps per pin HBM2 product (which needed 1.35V) without requiring additional power. Samsung also compares Aquabolt to GDDR5, stating that it offers 9.6 times the bandwidth: a single package of HBM2 delivers 307 GB/s versus 32 GB/s for a GDDR5 chip. Thanks to the 2.4 Gbps per-pin speed, Aquabolt offers 307 GB/s of bandwidth per package, and with four packages, products such as graphics cards can take advantage of 1.2 TB/s of bandwidth.
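
The per-package figure follows from the same pin-speed arithmetic, assuming the standard 1,024-bit interface of an HBM2 stack; a quick sketch:

```python
# Peak bandwidth in GB/s = per-pin rate (Gbps) x interface width (bits) / 8.
HBM2_BUS_WIDTH = 1024  # bits per stack

per_package = 2.4 * HBM2_BUS_WIDTH / 8   # 307.2 GB/s per Aquabolt stack
four_stacks = 4 * per_package            # ~1229 GB/s, i.e. roughly 1.2 TB/s per card
print(per_package, four_stacks)
```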

This second generation HBM2 memory is a decent step up in performance (HBM hit 128 GB/s and first generation HBM2 hit 256 GB/s per package, or 512 GB/s and 1 TB/s respectively with four packages), but the interesting bit is that it is faster without needing more power. The increased bandwidth and data transfer speeds will be a boon to the HPC and supercomputing market, and useful for working with massive databases, simulations, neural networks and AI training, and other “big data” tasks.

Aquabolt looks particularly promising for the mobile market, though, with future products succeeding the current mobile Vega GPU in Kaby Lake-G processors, Ryzen Mobile APUs, and eventually discrete Vega mobile graphics cards all standing to get a nice performance boost (it’s likely too late for AMD to go with this new HBM2 on these specific products, but future refreshes or generations may be able to take advantage of it). I’m sure it will also see usage in the SoCs used in Intel’s and NVIDIA’s driverless car projects as well.

Source: Samsung

A good year to sell GPUs

Subject: General Tech | February 21, 2017 - 01:18 PM |
Tagged: jon peddie, marketshare, graphics cards

The GPU market increased 5.6% from Q3 to Q4 of 2016, beating the historical average of -4.7% by quite a large margin; over the year we saw an increase of 21.1%.  That increase is even more impressive when you consider that the total PC market dropped 10.1% over the same period, showing that far more consumers chose to upgrade their existing machines instead of buying new ones.  This makes sense, as neither Intel nor AMD offered a compelling reason to upgrade your processor and motherboard for anyone who purchased one in the last two or three years.

AMD saw a nice amount of growth, grabbing almost 8% of the total market from NVIDIA over the year, though they lost a tiny bit of ground between Q3 and Q4 of 2016.  Jon Peddie's sample includes workstation-class GPUs as well as gaming models, and it seems a fair number of users chose to upgrade those machines too, as that market grew just over 19% in 2016.

"The graphics add-in board market has defied gravity for over a year now, showing gains while the overall PC market slips. The silly notion of integrated graphics "catching up" with discrete will hopefully be put to rest now," said Dr. Jon Peddie, president of Jon Peddie research, the industry's research and consulting firm for graphics and multimedia."

EKWB Releases AMD Radeon Pro Duo Full-Cover Water Block

Subject: Graphics Cards, Cases and Cooling | May 10, 2016 - 08:55 AM |
Tagged: water cooling, radeon pro duo, radeon, pro duo, liquid cooling, graphics cards, gpu cooler, gpu, EKWB, amd

While AMD's latest dual-GPU powerhouse comes with a rather beefy-looking liquid cooling system out of the box, the team at EK Water Blocks have nonetheless created their own full-cover block for the Pro Duo, which is now available in a pair of versions.

"Radeon™ has done it again by creating the fastest gaming card in the world. Improving over the Radeon™ R9 295 X2, the Radeon Pro Duo card is faster and uses the 3rd generation GCN architecture featuring asynchronous shaders enables the latest DirectX™ 12 and Vulkan™ titles to deliver amazing 4K and VR gaming experiences. And now EK Water Blocks made sure, the owners can get the best possible liquid cooling solution for the card as well!"

Nickel version (top), Acetal+Nickel version (bottom)

The blocks include a single-slot I/O bracket, which will allow the Pro Duo to fit in many more systems (and allow even more of them to be installed per motherboard!).

"EK-FC Radeon Pro Duo water block features EK unique central inlet split-flow cooling engine with a micro fin design for best possible cooling performance of both GPU cores. The block design also allows flawless operation with reversed water flow without adversely affecting the cooling performance. Moreover, such design offers great hydraulic performance, allowing this product to be used in liquid cooling systems using weaker water pumps.

The base is made of nickel-plated electrolytic copper while the top is made of quality POM Acetal or acrylic (depending on the variant). Screw-in brass standoffs are pre-installed and allow for safe installation procedure."

Suggested pricing is set at 155.95€ for the blocks (approx. $177 US), and they are "readily available for purchase through EK Webshop and Partner Reseller Network".

Source: EKWB

Shedding a little light on Monday's announcement

Most of our readers should have some familiarity with GameWorks, which is a series of libraries and utilities that help game developers (and others) create software. Many hardware and platform vendors provide samples and frameworks that take on the brunt of the work required to solve complex problems; GameWorks is NVIDIA's branding for its own suite of such technologies. Their hope is that it pushes the industry forward, which in turn drives GPU sales as users see the benefits of upgrading.

This release, GameWorks SDK 3.1, contains three complete features and two “beta” ones. We will start with the first three, each of which targets a portion of the lighting and shadowing problem. The last two, which we will discuss at the end, are the experimental ones and fall under the blanket of physics and visual effects.

The first technology is Volumetric Lighting, which simulates the way light scatters off dust in the atmosphere. Game developers have been approximating this effect for a long time. In fact, I remember a particular section of Resident Evil 4 where you walk down a dim hallway that has light rays spilling in from the windows. Gamecube-era graphics could only do so much, though, and certain camera positions show that the effect was just a translucent, one-sided, decorative plane. It was a cheat that was hand-placed by a clever artist.

GameWorks' Volumetric Lighting goes after the same effect, but with a much different implementation. It looks at the generated shadow maps and, using hardware tessellation, extrudes geometry from the unshadowed portions toward the light. These little bits of geometry accumulate according to how deep the volume is, which translates into the required highlight. Also, since it is hardware tessellated, it probably has a smaller impact on performance, because the GPU only needs to store enough information to generate the geometry, not store (and update) the geometry data for all possible light shafts themselves -- and it needs to store those shadow maps anyway.
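
To give a feel for the "contributions accumulate with depth" idea, here is a deliberately simplified CPU-side sketch that just counts unshadowed samples along a ray - this is not NVIDIA's tessellation-based implementation, only an illustration of why deeper unshadowed volume produces a brighter shaft:

```python
import numpy as np

def light_shaft_intensity(shadow_map, ray_samples, scattering=0.02):
    """shadow_map: 2D bool array (True = lit); ray_samples: (x, y) lookups along the view ray."""
    lit = sum(int(shadow_map[y, x]) for x, y in ray_samples)
    return lit * scattering  # more unshadowed depth -> brighter light shaft

shadow_map = np.ones((64, 64), dtype=bool)
shadow_map[:, 20:40] = False                   # a shadowed band across the scene
ray = [(x, 32) for x in range(64)]             # march straight across one row
print(light_shaft_intensity(shadow_map, ray))  # 44 lit samples * 0.02 ~= 0.88
```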

Even though it seemed like this effect was independent of render method, since it basically just adds geometry to the scene, I asked whether it was locked to deferred rendering methods. NVIDIA said that it should be unrelated, as I suspected, which is good for VR. Forward rendering is easier to anti-alias, which makes the uneven pixel distribution (after lens distortion) appear more smooth.

Read on to see the other four technologies, and a little announcement about source access.

Report: AMD's Dual-GPU Fiji XT Card Might Be Coming Soon

Subject: Graphics Cards | October 5, 2015 - 02:33 AM |
Tagged: rumor, report, radeon, graphics cards, Gemini, fury x, fiji xt, dual-GPU, amd

The AMD R9 Fury X, Fury, and Nano have all been released, but a dual-GPU Fiji XT card could be on the way soon according to a new report.

Back in June at AMD's E3 event we were shown Project Quantum, AMD's concept for a powerful dual-GPU system in a very small form factor. It was speculated that the system was actually housing an unreleased dual-GPU graphics card, which would have made sense given the very small size of the system (and the mini-ITX motherboard therein). Now a report from WCCFtech points to a shipping manifest that just might describe this new dual-GPU card, and the code-name is Gemini.

"Gemini is the code-name AMD has previously used in the past for dual GPU variants and surprisingly, the manifest also contains another phrase: ‘Tobermory’. Now this could simply be a reference to the port that the card shipped from...or it could be the actual codename of the card, with Gemini just being the class itself."

The manifest also indicates a Cooler Master cooler for the card; Cooler Master made the liquid cooling solution for the Fury X. As the Fury X has had its share of criticism for pump whine issues, it will be interesting to see how a dual-GPU cooling solution fares in that department, though we could be seeing an entirely new generation of the pump as well. Of course, speculation on an unreleased product like this could be incorrect, and verifiable hard details aren't available yet. Still, if the dual-GPU card is based on a pair of full Fiji XT cores, the specs could be very impressive to say the least:

  • Core: Fiji XT x2
  • Stream Processors: 8192
  • GCN Compute Units: 128
  • ROPs: 128
  • TMUs: 512
  • Memory: 8 GB (4GB per GPU)
  • Memory Interface: 4096-bit x2
  • Memory Bandwidth: 1024 GB/s

In addition to the specifics above, the report also discussed the possibility of 17.2 TFLOPS of performance, based on 2x the performance of the Fury X, which would make Gemini one of the most powerful single-card GPU solutions in the world. The card seems close enough to its final stage that we should expect to hear something official soon, but for now it's fun to speculate - unless of course the speculation concerns a high initial retail price, and unfortunately something at or above $1000 is quite likely. We shall see.
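
For reference, that 17.2 TFLOPS figure is simply two full Fiji XT GPUs at the Fury X's peak single-precision rate; the sketch below assumes the Fury X's 1,050 MHz clock, which a dual-GPU card would not necessarily sustain:

```python
shaders_per_gpu = 4096   # full Fiji XT
clock_ghz = 1.05         # Fury X reference clock (assumed here, not confirmed for Gemini)
flops_per_clock = 2      # one fused multiply-add per shader per clock

tflops_single = shaders_per_gpu * clock_ghz * flops_per_clock / 1000  # ~8.6 TFLOPS
tflops_dual = 2 * tflops_single                                       # ~17.2 TFLOPS
print(round(tflops_single, 1), round(tflops_dual, 1))
```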

Source: WCCFtech

GPU Market Share: NVIDIA Gains in Shrinking Add-in Board Market

Subject: Graphics Cards | August 21, 2015 - 11:30 AM |
Tagged: PC, nvidia, Matrox, jpr, graphics cards, gpu market share, desktop market share, amd, AIB, add in board

While we reported recently on the decline of overall GPU shipments, a new report out of Jon Peddie Research covers the add-in board segment to give us a look at the desktop graphics card market. So how are the big two (sorry, Matrox) doing?

GPU Supplier    Market Share This Quarter    Market Share Last Quarter    Market Share Last Year
AMD             18.0%                        22.5%                        37.9%
Matrox          0.0%                         0.1%                         0.1%
NVIDIA          81.9%                        77.4%                        62.0%

The big news is of course a drop in market share for AMD of 4.5 percentage points quarter-to-quarter, down to just 18% from 37.9% a year ago. There will be many opinions as to why their share has been dropping over the last year, but it certainly didn't help that the 300-series GPUs are rebrands of the 200-series, and the new Fury cards have had very limited availability so far.

The graph from Mercury Research illustrates what is almost a mirror image, with NVIDIA gaining roughly 20 points as AMD lost roughly 20, for a 40-point swing in overall share. Ouch. Meanwhile (not pictured) Matrox didn't have a statistically meaningful quarter, but still managed to appear on the JPR report with 0.1% market share (somehow) last quarter.

The desktop market isn't actually suffering quite as much as the overall PC market, and the enthusiast segment in particular is holding up.

"The AIB market has benefited from the enthusiast segment PC growth, which has been partially fueled by recent introductions of exciting new powerful (GPUs). The demand for high-end PCs and associated hardware from the enthusiast and overclocking segments has bucked the downward trend and given AIB vendors a needed prospect to offset declining sales in the mainstream consumer space."

But not all is well: the overall add-in board attach rate with desktops "has declined from a high of 63% in Q1 2008 to 37% this quarter". This is indicative of the industry-wide trend toward integrated GPUs, with AMD APUs and Intel processor graphics, as illustrated by this graphic from the report.

The year-to-year numbers show an overall drop of 18.8%, and even with its dominant 81.9% market share, NVIDIA has still seen its shipments decrease by 12% this quarter. These trends seem to indicate a gloomy future for discrete graphics in the coming years, but for now we in the enthusiast community will continue to keep it afloat. It would certainly be nice to see some gains from AMD soon to keep things interesting, which might help bring prices down from their lofty $400 - $600 mark for flagship cards at the moment.

The Latest NVIDIA GeForce Drivers Are Here: Version 347.25 adds GTX 960 Support, MFAA to Most Games

Subject: Graphics Cards | January 23, 2015 - 11:09 PM |
Tagged: nvidia, gtx 960, graphics drivers, graphics cards, GeForce 347.25, geforce, game ready, dying light

With the release of the GTX 960 yesterday, NVIDIA also introduced a new version of the GeForce graphics driver: 347.25 WHQL.

NVIDIA states that the new driver adds "performance optimizations, SLI profiles, expanded Multi-Frame Sampled Anti-Aliasing support, and support for the new GeForce GTX 960".

While support for the newly released GPU goes without saying, the expanded MFAA support will help provide better anti-aliasing performance in many existing games, as “MFAA support is extended to nearly every DX10 and DX11 title”. In the release notes three games are listed that do not benefit, as “Dead Rising 3, Dragon Age 2, and Max Payne 3 are incompatible with MFAA”.

347.25 also brings additional SLI profiles to add support for five new games, and a DirectX 11 SLI profile for one more:

SLI profiles added

  • Black Desert
  • Lara Croft and the Temple of Osiris
  • Nosgoth
  • Zhu Xian Shi Jie
  • The Talos Principle

DirectX 11 SLI profile added

  • Final Fantasy XIV: A Realm Reborn

The update is also the Game Ready Driver for Dying Light, a zombie action/survival game set to debut on January 27.

Much more information is available in the release notes on the driver download page, and be sure to check out Ryan’s chat with Tom Petersen from the live stream for a lot more information about this driver and the new GTX 960 graphics card.

Source: NVIDIA
Subject: Systems
Manufacturer: Various

The Road to 1080p

The stars of the show: a group of affordable GPU options

When preparing to build or upgrade a PC on any kind of a budget, how can you make sure you're extracting the highest performance per dollar from the parts you choose? Even if you do your homework, comparing every combination of components is impossible. As system builders we always end up having to look at various benchmarks here and there and then ultimately make assumptions. It's the nature of choosing products within an industry that's completely congested at every price point.

Another problem is that lower-priced graphics cards are usually benchmarked on high-end test platforms with Core i7 processors - a necessary thing when you need to eliminate CPU bottlenecks while testing GPUs. So it seems like it might be valuable (and might help narrow buying choices down) to take a closer look at gaming performance from complete systems built with only budget parts, and see what these different combinations are capable of.

With this in mind I set out to see just how much it might take to reach acceptable gaming performance at 1080p (acceptable being 30+ FPS). I wanted to see where the real-world gaming bottlenecks might occur, and get a feel for the relationship between CPU and GPU performance. After all, if there were no difference in gaming performance between, say, a $40 and an $80 processor, why spend twice as much money? The same goes for graphics. We’re looking for “good enough” here, not “future-proof”.

The components in all their shiny boxy-ness (not everything made the final cut)

If money were no object we’d all have the most amazing high-end parts, and play every game at ultra settings with hundreds of frames per second (well, except at 4K). Of course most of us have limits, but the time and skill required to assemble a system with as little cash as possible can result in something that's actually a lot more rewarding (and impressive) than just throwing a bunch of money at top-shelf components.

The theme of this article is “good enough” - as in, don't spend more than you have to - and I don't want that to sound like a bad thing. And if along the way you discover a bargain, or a part that overperforms for its price, even better!

Yet Another AM1 Story?

We’ve been talking about the AMD AM1 platform since its introduction, and it makes a compelling case for a low-cost gaming PC. With the “high-end” CPU in the lineup (the Athlon 5350) at just $60 and motherboards in the $35 range, it makes sense to start here. (I actually began this project with the Sempron 3850 as well, but it wasn’t anywhere near enough for 1080p gaming, so those test results were quickly discarded.) And while the 5350 is an APU, I didn't end up testing it without a dedicated GPU. (OK, I eventually did, but it just can't handle 1080p.)

But this isn’t just a story about AM1 after all. Jumping right in, let's look at the results of my research (and mounting credit card debt). All prices were accurate as I wrote this, but are naturally prone to fluctuation:

Tested Hardware
Graphics Cards

MSI AMD Radeon R7 250 2GB OC - $79.99

XFX AMD Radeon R7 260X - $109.99

EVGA NVIDIA GeForce GTX 750 - $109.99

EVGA NVIDIA GeForce GTX 750 Ti SC - $153.99

Processors

AMD Athlon 5350 2.05 GHz Quad-Core APU - $59.99

AMD Athlon X2 340X 3.2 GHz Dual-Core CPU - $44.99

AMD Athlon X4 760K 3.8 GHz Quad-Core CPU - $84.99

Intel Pentium G3220 3.0 GHz Dual-Core CPU - $56.99

Motherboards

ASRock AM1B-ITX Mini-ITX AMD AM1 - $39.99

MSI A88XM-E45 Micro-ATX AMD A88X - $72.99

ECS H81H3-M4 Micro-ATX Intel H81 - $47.99

Memory

4GB Samsung OEM PC3-12800 DDR3-1600 (~$40 Value)

Storage

Western Digital Blue 1TB Hard Drive - $59.99

Power Supply

EVGA 430 Watt 80 PLUS PSU - $39.99

OS

Windows 8.1 64-bit - $99

So there it is. I'm sure it won't please everyone, but there is enough variety in this list to support no fewer than 16 different combinations, and you'd better believe I ran each test on every one of those 16 system builds! (A quick sketch of how the combinations add up follows below.)
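
As a rough sketch of how those 16 builds come together - each CPU paired with its matching motherboard and each of the four graphics cards, using the prices listed above plus the shared memory, storage, and power supply (OS excluded) - something like this enumerates them:

```python
from itertools import product

# (CPU + matching motherboard) platform cost, using the prices listed above
platforms = {
    "Athlon 5350 + AM1B-ITX":     59.99 + 39.99,
    "Athlon X2 340X + A88XM-E45": 44.99 + 72.99,
    "Athlon X4 760K + A88XM-E45": 84.99 + 72.99,
    "Pentium G3220 + H81H3-M4":   56.99 + 47.99,
}

gpus = {
    "R7 250 2GB OC":  79.99,
    "R7 260X":       109.99,
    "GTX 750":       109.99,
    "GTX 750 Ti SC": 153.99,
}

shared = 40.00 + 59.99 + 39.99   # ~$40 memory, 1TB hard drive, 430W power supply

for (platform, p_cost), (gpu, g_cost) in product(platforms.items(), gpus.items()):
    print(f"{platform:28s} + {gpu:14s} = ${p_cost + g_cost + shared:7.2f}")
# 4 platforms x 4 GPUs = 16 builds
```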

Keep reading our look at budget gaming builds for 1080p!!