NVIDIA Launches GTX 1050 3GB for Budget Gamers

Subject: Graphics Cards | May 23, 2018 - 06:21 PM |
Tagged: pascal, nvidia, GP107, GDDR5, budget

NVIDIA recently and quietly launched a new budget graphics card that slots neatly between the GTX 1050 and the GTX 1050 Ti. The new GTX 1050 3GB, as the name suggests, features 3GB of GDDR5 memory. The card is closer to the GTX 1050 Ti than the name would suggest, however, as it uses the Ti's 768 CUDA cores rather than the 640 of the GTX 1050 2GB. The GDDR5 memory is where the card differs from the GTX 1050 Ti, though: NVIDIA has cut the number of memory controllers by one, along with the corresponding ROPs and cache, meaning the new GTX 1050 3GB has a narrower memory bus and less memory bandwidth than both the GTX 1050 2GB and the GTX 1050 Ti 4GB.

Specifically, the GTX 1050 with 3GB of GDDR5 has a 96-bit memory bus that, when paired with 7 Gbps GDDR5, results in a maximum memory bandwidth of 84 GB/s, versus the 128-bit memory buses and 112 GB/s of bandwidth of the previously released cards.

Clockspeeds on the new GTX 1050 3GB are a good bit higher than on the other cards, though, with the base clock starting at 1392 MHz (the boost clock of the 1050 Ti) and boost clocks running up to 1518 MHz. Thanks to the clockspeed bumps, the theoretical GPU performance of 2.33 TFLOPS is actually higher than that of the GTX 1050 Ti (2.14 TFLOPS) and the existing GTX 1050 2GB (1.86 TFLOPS), though the reduced memory bus (and the loss of a small number of ROPs and some cache) will hold the card back from surpassing the Ti variant in most workloads – NVIDIA needs to maintain product segmentation somehow!
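
For those who want to check the math, here is a minimal Python sketch of how these figures are derived, assuming the standard back-of-the-envelope formulas (peak FP32 = cores × 2 FMA ops per clock × boost clock; bandwidth = bus width in bytes × per-pin data rate). The inputs are the specs quoted above, nothing from NVIDIA tooling.

```python
# Back-of-the-envelope math behind the GTX 1050 3GB's quoted specs.

def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth = bus width in bytes * per-pin data rate (Gbps)."""
    return bus_width_bits / 8 * data_rate_gbps

def peak_fp32_tflops(cuda_cores: int, boost_clock_mhz: int) -> float:
    """Peak FP32 = cores * 2 ops per clock (FMA) * clock, in TFLOPS."""
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

print(memory_bandwidth_gbs(96, 7.0))          # GTX 1050 3GB: 84.0 GB/s
print(memory_bandwidth_gbs(128, 7.0))         # GTX 1050 Ti:  112.0 GB/s
print(round(peak_fp32_tflops(768, 1518), 2))  # GTX 1050 3GB: 2.33 TFLOPS
print(round(peak_fp32_tflops(768, 1392), 2))  # GTX 1050 Ti:  2.14 TFLOPS
```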

                     NVIDIA         NVIDIA            NVIDIA           AMD
                     GTX 1050 2GB   GTX 1050 3GB      GTX 1050 Ti 4GB  RX 560 4GB
GPU                  GP107          GP107             GP107            Polaris 11
GPU Cores            640            768               768              896 or 1024
Texture Units        40             48                48               64
ROPs                 32             ?                 32               16
GPU Base (MHz)       1354           1392              1290             1175
GPU Boost (MHz)      1455           1518              1392             1275
TFLOPS               1.86           2.33              2.14             up to 2.6
Memory               2GB GDDR5      3GB GDDR5         4GB GDDR5        2GB or 4GB GDDR5
Memory Clockspeed    7 Gbps         7 Gbps            7 Gbps           7 Gbps
Memory Bus           128-bit        96-bit            128-bit          128-bit
Memory Bandwidth     112 GB/s       84 GB/s           112 GB/s         112 GB/s
TDP                  75W            75W               75W              60W to 80W
Pricing              ~$150          ~$160 (estimate)  ~$200            ~$160

The chart above compares the specifications of the GTX 1050 3GB with the GTX 1050 and GTX 1050 Ti on the NVIDIA side, along with the AMD RX 560, which appears to be its direct competitor based on pricing. The new 3GB GTX 1050 should compete well with AMD's Polaris 11-based GPU as well as NVIDIA's own cards in the budget gaming space. Hopefully, the downside of a reduced memory bus will at least dissuade cryptocurrency miners from adopting this card as an entry-level miner for Ethereum and other altcoins, giving gamers a chance to buy something a bit better than the GTX 1050 and RX 550 at close to MSRP while the miners fight over the Ti and higher variants with more memory and compute units.

NVIDIA did not release formal pricing or release date information, but the cards are expected to launch in June, and prices should be around $160 to $180 depending on the retailer and extras like fancier coolers and factory overclocks.

What are your thoughts on the GTX 1050 3GB? Is it the bastion of hope budget gamers have been waiting for? hehe. Looking around online, it seems pricing for these budget cards has somewhat returned to sane levels, and hopefully alternative options like these aimed at gamers will help further stabilize the market for those of us DIYers who want to game more than mine. I do wish that NVIDIA had changed the name a bit to better differentiate the card – maybe GTX 1050G or something – but oh well. I suppose so long as the 640 CUDA core GTX 1050 never gets 3GB of GDDR5, gamers will at least be able to tell the cards apart by the amount of memory listed on the box or website.

Source: NVIDIA

AMD Releases Updated Raven Ridge Desktop APU Graphics Drivers

Subject: Graphics Cards, Processors | May 18, 2018 - 04:33 PM |
Tagged: Vega, ryzen, raven ridge, Radeon Software Adrenalin Edition, r5 2400g, r3 2200g, amd

Today, AMD released the first driver update for the desktop Raven Ridge APUs since their launch in February of this year.

The new Q2 2018 drivers are based on AMD's current Radeon Software Adrenalin Edition release and bring features such as ReLive and the Radeon overlay to the Vega-powered desktop platform.

We haven't had a lot of time yet to look for potential performance enhancements from this driver, but we did do a quick 3DMark run on our Ryzen 5 2400G with memory running at DDR4-3200.

Here, we see healthy gains of around 5% in 3DMark Fire Strike with the new driver. While I wouldn't expect big gains in older titles, newer titles released since the initial Raven Ridge driver launch in February should see the biggest improvements.

We are still eager to see the mobile iterations of AMD's Raven Ridge processors get updated drivers, as notebooks such as the HP Envy X360 have not received new drivers since they launched in November of last year.

It's good to see progress from AMD on this front, but the company must work harder to fold the graphics drivers for its APU products into the mainstream graphics driver releases if it wants those products to be taken seriously as gaming options.

Source: AMD

NVIDIA GeForce GTX Cards Finally Return to Stock at MSRP

Subject: Graphics Cards | May 9, 2018 - 12:23 PM |
Tagged: video card, pricing, msrp, mining, GTX 1080, gtx 1070, gtx 1060, gtx, graphics, gpu, gaming, crypto

The wait for in-stock NVIDIA graphics cards without inflated price tags seems to be over. Yes, after months of crypto-fueled disappointment for gamers, the long-awaited return of graphics cards at (gasp) MSRP is at hand. NVIDIA now lists most of its GTX lineup as in stock (with a limit of 2 per customer) at normal MSRPs, the only exception being the GTX 1080 Ti (still out of stock). The lead time from NVIDIA is one week, but that is worth it for those interested in the lower prices and 'Founders Edition' coolers.

Many other GTX 10 Series options can be found online at near-MSRP pricing, though, as before, many of the aftermarket designs command a premium, with factory overclocks and proprietary cooler designs to help justify the added cost. Even Amazon – previously home to some of the most outrageous price gouging from third-party sellers – has cards at list pricing, which seems to solidify a return to GPU normalcy.

The GTX 1080 inches closer to standard pricing once again on Amazon

Some of the current offers include:

  • MSI Gaming GeForce GTX 1080 ARMOR 8G - $549.99 @ Amazon.com
  • EVGA GeForce GTX 1070 SC GAMING ACX 3.0 - $469.99 @ Amazon.com
  • EVGA GeForce GTX 1060 SC GAMING 6GB - $299.99 @ Newegg.com

GTX 1070 cards continue to carry the highest premium outside of NVIDIA's store, with the lowest current price on Newegg or Amazon at $469.99. Still, the overall return to near-MSRP pricing around the web is good news for gamers, who have been forced to play second (or third) fiddle to cryptomining "entrepreneurs" for several months now; a disturbing era in which pre-built gaming systems from Alienware and others actually presented a better value than DIY builds.

Source: NVIDIA

Who's a pretty cult member?

Subject: Graphics Cards | May 7, 2018 - 02:48 PM |
Tagged: Dunia 2, far cry 5, 4k

Armed with a GTX 1080 Ti Founders Edition and a 4K display, you venture forth into the advanced graphics settings of Ubisoft's latest Far Cry game. Inside you face a multitude of challenges, from volumetric fog through various species of anti-aliasing, until finally facing the beast known as overall quality. [H]ard|OCP completed this quest and you can benefit from their experience, although no matter how long they searched, they could not locate any sign of NVIDIA's GameWorks, which appeared in the previous Far Cry. There were signs of rapid packed math enhancements, much to the rejoicing of those with no concrete interest in GameWorks' existence.

"We will be comparing Far Cry 5's Overall Quality, Shadows, Volumetric Fog, and Anti-Aliasing image quality. In addition, we will find out if we can improve IQ in the game by adding Anisotropic Filtering and forcing AA from the control panel. We’ve have a video showing you many IQ issues we found in Far Cry 5, and those are plentiful."

Source: [H]ard|OCP

NVIDIA Releases GeForce 397.55 Hotfix Drivers

Subject: Graphics Cards | May 3, 2018 - 08:41 PM |
Tagged: nvidia, graphics drivers

The previous set of drivers, version 397.31, released last week, had a few bugs in them… so NVIDIA has released a hotfix (397.55) to address the issues without waiting for the next “Game Ready” date. Of course, these drivers also went through a reduced QA process, so they should be avoided unless one of the problems affects you.

And the changes are:

  • Device Manager may report Code 43 on certain GTX 1060 models
  • Netflix playback may occasionally stutter
  • Added support for Microsoft Surface Book notebooks
  • Driver may get removed after PC has been idle for extended periods of time

The last issue manifests in a couple of different forms. The forum page specifically mentions Windows 10, although users on Windows 7 and Windows 8 could also be affected by the bug, just with different symptoms. I experienced it, and for me (on Windows 10) it was just a matter of force-quitting all processes prefixed with “nv” in Task Manager. My symptoms were that GeForce Experience would attempt to re-download the drivers and StarCraft II would fail to launch. If you’re experiencing similar issues, then you’ll probably want to give this driver a shot.
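
If you would rather script that workaround than hunt through Task Manager, here is a hypothetical sketch using the third-party psutil package; this is my own illustration rather than anything NVIDIA sanctions, and installing the hotfix driver is the proper fix.

```python
# Hypothetical sketch: force-quit every process whose name starts with "nv",
# mirroring the manual Task Manager workaround described above.
# Assumes: pip install psutil; run from an elevated (administrator) prompt.
import psutil

for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if name.startswith("nv"):
        try:
            proc.kill()  # the scripted equivalent of "End task"
            print(f"Killed {name} (PID {proc.pid})")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            print(f"Could not kill {name} (PID {proc.pid})")
```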

You can download it from their CustHelp page.

Source: NVIDIA

NVIDIA Releases GeForce 397.31. RTX for Developers.

Subject: Graphics Cards | April 25, 2018 - 08:27 PM |
Tagged: nvidia, graphics drivers, rtx, Volta

It’s quite the jump in version number from 391.35 to 397.31, but NVIDIA has just released a new graphics driver. Interestingly, it is “Game Ready” for Battletech, which I have been looking forward to, though I was always under the impression that no one else was. Apparently not.

As for its new features? The highlight is a developer preview of NVIDIA RTX technology. This requires a Volta GPU, which currently means Titan V unless your team was seeded something that doesn’t necessarily exist, as well as 396.xx+ drivers, the new Windows 10 update, and Microsoft’s DXR developer package. Speaking of which, I wonder how much of the version number bump can be attributed to RTX being on the 396.xx branch. Even then, it still feels like a branch or two never left NVIDIA’s dev team. Hmm.

Moving on, the driver also conforms to the Vulkan 1.1 test suite (version 1.1.0.3). If you remember back to early March, the Khronos Group released the new standard, which integrated a bunch of features into core and brought subgroup operations into the mix. These could allow future shaders to run quicker by being compiled with the new intrinsic functions.

Also – the standalone installer will apparently clean up after itself better than it used to. I can often find a few gigabytes of old NVIDIA folders when I’m looking for space to reclaim, so it’s good to see NVIDIA finally address at least some of that.

Pick up the new drivers on NVIDIA’s website or through GeForce Experience.

Source: NVIDIA

Is the GPU in Intel Kaby Lake-G More Polaris than Vega?

Subject: Graphics Cards, Processors | April 9, 2018 - 04:25 PM |
Tagged: Vega, Polaris, kaby lake-g, Intel, amd

Over the weekend, some interesting information surfaced surrounding the new Kaby Lake-G hardware from Intel. A product that is officially called the “8th Generation Intel Core Processors with Radeon RX Vega M Graphics” is now looking like it might pack more of a Polaris-based GPU than a Vega-based one. This creates an interesting marketing and technology-capability discussion for the community, and for both Intel and AMD, that is worth diving into.

PCWorld first posed the question this weekend, using some interesting data points to back up the idea that Kaby Lake-G may in fact be based on Polaris. In Gordon’s story, he notes that AIDA64 identifies the GPU as “Polaris 22,” while the Raven Ridge-based APUs from AMD show up as “Raven Ridge.” Obviously, device identification by a third-party piece of software is a suspect credential in any situation, but the second point is more salient: based on DXDiag information, the GPU in the Kaby Lake-G-powered Hades Canyon NUC does not support DirectX 12.1.

Image source: PCWorld

AMD clearly stated in its launch of the Vega architecture last year that the new GPUs support DX 12.1, among other features. The fact that the KBL-G part does NOT include support for it is compelling evidence that the GPU might be more similar to Polaris than Vega.

Tom’s Hardware did some more digging, posted this morning, using a SiSoft Sandra test that measures the performance of FP16 and FP32 math. For both the Radeon RX Vega 64 and 56 discrete graphics cards, running the test with FP16 math results in a score that is 65% higher than the FP32 result. With a Polaris-based graphics card, the RX 470, the FP32 and FP16 scores were identical: the architecture can execute FP16 math but doesn’t accelerate it with AMD’s “rapid packed math” feature (which was part of the Vega launch).

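
The logic of the test is simple enough to sketch in a few lines of Python; the scores below are illustrative placeholders (not Tom's Hardware's data), just to show how the FP16/FP32 ratio separates Vega-like behavior from Polaris-like behavior.

```python
# Illustrative sketch of the FP16-vs-FP32 inference; scores are made up.

def fp16_speedup(fp16_score: float, fp32_score: float) -> float:
    """Ratio of FP16 to FP32 throughput in the same benchmark."""
    return fp16_score / fp32_score

samples = [
    ("Vega-like GPU (rapid packed math)", 165.0, 100.0),  # ~65% faster
    ("Polaris-like GPU (no packed FP16)", 100.0, 100.0),  # dead even
]

for gpu, fp16, fp32 in samples:
    ratio = fp16_speedup(fp16, fp32)
    verdict = "packed FP16 likely" if ratio > 1.2 else "packed FP16 unlikely"
    print(f"{gpu}: FP16/FP32 = {ratio:.2f} -> {verdict}")
```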
And you guessed it: the Kaby Lake-G part runs essentially even in FP16 mode. (Also note that AMD’s Raven Ridge APU, which integrates Vega graphics, does get a 61% speedup from FP16.)

What Kaby Lake-G does have that leans toward Vega is support for HBM2 memory (which none of the Polaris cards have) and a “high bandwidth memory cache controller and enhanced compute units with additional ROPs,” according to the statement Intel gave to Tom’s Hardware.

It should be noted that just because the benchmarks and games that support rapid packed math don’t take advantage of that capability on this hardware today does not mean they won’t be able to after a driver or firmware update. That being said, if that’s the plan – and even if it’s not – Intel should come out and tell consumers and the media.

The debate and accusations of conspiracy are running rampant again today with this news. Is Intel trying to pull one over on us by telling the community that this is a Vega-based product when it is in fact based on Polaris? Why would AMD allow and promote the Vega branding on a part that it knows doesn’t meet the standards it created for calling something a Vega architecture solution?

Another interesting thought comes when applying this debate to the Ryzen 5 2400G and Ryzen 3 2200G products, both of which claim to use Vega GPUs as a portion of the APU. Without support for HBM2 or the high-bandwidth cache controller, does that somehow shortchange the branding for them? Or are the memory features of the GPU considered secondary to its design?

This is the very reason why companies hate labels, hate specifications, and hate having all of this tracked by a competent and technical media. Basically every company in the tech industry is guilty of the practice: Intel has two or three architectures in the market running as “8th Generation,” AMD is selling RX 500 cards that were once RX 400 cards, and NVIDIA has changed the performance capabilities of the MX 150 at least once or twice.

The nature of semi-custom chip designs is that they are custom. Are the GPUs used in the PS4 and Xbox One or Xbox One X called Polaris, Vega, or something else? It would be safer for AMD and its partners to give each new product its own name and its own brand – but then enthusiasts would want to know what it was most like, and how it compared to Polaris, or Vega, and so on. It’s also possible that AMD was only willing to sell this product to Intel if it included some of these feature restrictions. In negotiations as complicated as this one surely was, anything is feasible.

These are tough choices for companies to make. AMD loves having the Vega branding in more products, as it gives weight to the development cost and time it spent on the design. Having Vega associated with more high-end consumer products, including those sold by Intel, gives AMD leverage for other products down the road. From Intel’s vantage point, using the Vega brand makes it look like it has the very latest technology in its new processor, and it can benefit from any cross-promotion that occurs around the Vega brand from AMD or its partners.

Unfortunately, it means that the devil is in the details, and the details are something that no one appears to be willing to share. Does it change the performance we saw in our recent Hades Canyon NUC review, or our perspective on it as a product? It does not. But as adoption of features like rapid packed math or the new geometry pipeline accelerates, Kaby Lake-G’s ability to utilize them is going to be scrutinized more heavily.

Source: Various

Considering picking up a vintage GPU?

Subject: Graphics Cards | April 2, 2018 - 02:36 PM |
Tagged: cryptocurrency, graphics cards

It has been a while since the Hardware Leaderboard was updated, as it is incredibly depressing to try to price out a new GPU, for obvious reasons. TechSpot has taken an interesting approach to dealing with the crypto-blues: they have benchmarked 44 older GPUs in current games to see how well they fare. The cards range from the GTX 560 and HD 7770 through to current-model cards, all available to purchase used from sites such as eBay. Buying a used card brings the price down to somewhat reasonable levels, though you do run the risk of getting a dead or dying card. With interesting metrics such as price per frame, this is a great resource if you find yourself in desperate need of a GPU in the current market. Check it out here.
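
For the curious, the "price per frame" metric is straightforward to compute yourself; here is a minimal sketch, with made-up prices and frame rates rather than TechSpot's data.

```python
# Price-per-frame: divide a card's (used) price by its average FPS.
# Prices and frame rates below are hypothetical placeholders.

def price_per_frame(price_usd: float, avg_fps: float) -> float:
    return price_usd / avg_fps

used_cards = {
    "hypothetical card A": (120.0, 48.0),  # (used price in USD, average FPS)
    "hypothetical card B": (200.0, 75.0),
}

# Rank the cards from best to worst value.
for card, (price, fps) in sorted(used_cards.items(),
                                 key=lambda kv: price_per_frame(*kv[1])):
    print(f"{card}: ${price_per_frame(price, fps):.2f} per frame")
```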

"Along with our recent editorials on why it's a bad time to build a gaming PC, we've been revisiting some older GPUs to see how they hold up in today's games. But how do you know how much you should be paying for a secondhand graphics card?"

Source: TechSpot

Eight-GPU SLI in Unreal Engine 4 (Yes There Is a Catch)

Subject: Graphics Cards | March 29, 2018 - 09:52 PM |
Tagged: nvidia, GTC, gp102, quadro p6000

At GTC 2018, Walt Disney Imagineering unveiled a work-in-progress clip of their upcoming Star Wars: Galaxy’s Edge attraction, which is expected to launch next year at Disneyland and Walt Disney World Resort. The cool part about this ride is that it will use Unreal Engine 4 with eight GP102-based Quadro P6000 graphics cards. NVIDIA also reports that Disney has donated the code back to Epic Games to help them with multi-GPU scaling in general – a win for us consumers… if in a more limited fashion.

See? SLI doesn’t need to be limited to two cards if you have a market cap of $100 billion USD.

Another interesting angle to this story is how typical PC components are contributing to these large experiences. Sure, Quadro hardware isn’t exactly cheap, but it can be purchased through typical retail channels, and it allows the company to focus its engineering time elsewhere.

Ironically, this also comes about two decades after location-based entertainment started to decline… but, you know, it’s Disneyland and Disney World. They’re fine.

Source: NVIDIA

ASRock Enters Graphics Card Market With Phantom Gaming Series of AMD GPUs

Subject: Graphics Cards | March 29, 2018 - 05:45 PM |
Tagged: RX 580, RX 570, RX 560, RX 550, Polaris, mining, asrock, amd

ASRock, a company known mostly for its motherboards (formerly an Asus sub-brand and, since 2010, an independent company owned by Pegatron), is getting into the graphics card market with a new Phantom Gaming series. At launch, the Phantom Gaming series comprises four AMD Polaris-based graphics cards: the Phantom Gaming RX 550 2G and RX 560 2G on the low end, and the Phantom Gaming X RX 570 8G OC and RX 580 8G OC on the mid/high-end range.

ASRock is using black shrouds with white accents and silver and red logos. The lower-end Phantom Gaming cards utilize a single dual-ball-bearing fan, while the Phantom Gaming X cards use a dual-fan configuration. ASRock pairs copper baseplates with aluminum heatsinks and composite heatpipes. The Phantom Gaming RX 550 and RX 560 cards use only PCI-E slot power, while the Phantom Gaming X RX 570 and RX 580 cards draw power from both the slot and a single 8-pin PCI-E power connector.

Video outputs include one HDMI 2.0, one DisplayPort 1.4, and one DL-DVI-D on the Phantom Gaming parts, and one HDMI 2.0, three DisplayPort 1.4, and one DL-DVI-D on the higher-end Phantom Gaming X graphics cards. All of the models feature both silent and overclocked modes in addition to their out-of-the-box default clocks, depending on whether you value noise or performance. Users can select a mode, or perform a custom overclock and set a fan curve, using ASRock's Phantom Gaming Tweak utility.

On the performance front, out of the box ASRock is slightly overclocking the Phantom Gaming X OC cards (the RX 570- and RX 580-based ones) and slightly underclocking the lower-end Phantom Gaming cards (including the memory, which is downclocked to 6 GHz) relative to AMD's reference specifications.

                          ASRock     AMD     ASRock     AMD     ASRock   AMD       ASRock   AMD
                          RX 580 OC  RX 580  RX 570 OC  RX 570  RX 560   RX 560    RX 550   RX 550
Cores                     2304       2304    2048       2048    896      896       512      512
GPU Clock (MHz)           1380       1340    1280       1244    1149     1275      1100     1183
GPU Clock, OC Mode (MHz)  1435       -       1331       -       1194     -         1144     -
Memory (GDDR5)            8GB        8GB     8GB        8GB     2GB      2GB/4GB   2GB      2GB/4GB
Memory Clock (GHz)        8          8       7          7       6        7         6        7
Memory Clock, OC (MHz)    8320       -       7280       -       6240     -         6240     -
Texture Units             144        144     128        128     64       64        32       32
ROPs                      32         32      32         32      16       16        16       16

The table above compares the ASRock graphics cards with their AMD reference counterparts. Note that the Phantom Gaming RX 560 2G is based on the cut-down 14 CU (compute unit) model rather than the 16 CU GPU from launch. Also, even in OC Mode, ASRock does not bring the memory back up to the 7 GT/s reference spec. On the positive side, turning on OC Mode does give a decent factory overclock of the GPU over reference. Also nice to see is that on the higher-end "OC Certified" Phantom Gaming X cards, ASRock overclocks both the GPU and the memory, which is often not the case with factory overclocks.
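
To put that memory downclock in perspective, here is a quick sketch of the resulting peak bandwidth. Note that the 128-bit bus width is AMD's reference spec for this class of card, an assumption on my part rather than something ASRock quotes.

```python
# Peak memory bandwidth = bus width in bytes * per-pin data rate.
# The 128-bit bus width is assumed from AMD's reference RX 560 spec.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(128, 7.0))   # reference RX 560 @ 7 Gbps   -> 112.0 GB/s
print(bandwidth_gbs(128, 6.0))   # ASRock default  @ 6 Gbps    -> 96.0 GB/s
print(bandwidth_gbs(128, 6.24))  # ASRock OC Mode  @ 6.24 Gbps -> 99.84 GB/s
```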

ASRock did not detail pricing in the launch announcement, but the cards should be coming soon, with 4GB models of the RX 560 and RX 550 to follow later this year.

It is always nice to have more competition in this space, and hopefully a new AIB partner for AMD helps alleviate the shortages and demand for gaming cards, if only by a bit. I am curious how well the cards will perform: while they look good on paper, the company is new to graphics cards, and the build quality really needs to be there. I am just hoping that the Phantom Gaming moniker is not an allusion to how hard these cards will be to find for gaming! (heh) If the rumored Ethereum ASICs do not kill the demand for AMD GPUs, I do expect ASRock to release mining-specific cards as well at some point.

What are your thoughts on the news of ASRock moving into graphics cards?

Source: Tech Report