Subject: General Tech, Graphics Cards | March 2, 2014 - 05:20 PM | Scott Michaud
Tagged: passive cooling, maxwell, gtx 750 ti
The NVIDIA GeForce GTX 750 Ti is fast but also power efficient, enough so that Ryan found it a worthwhile upgrade for cheap desktops with cheap power supplies that were never intended for discrete graphics. Of course, this recommendation is about making the best of what you've got; better options probably exist if you are building a PC (or getting one built by a friend or a computer store).
Image Credit: Tom's Hardware
Tom's Hardware went another route: make it fanless.
After wrecking a passively-cooled Radeon HD 7750, which is probably a crime in Texas, they clamped its heatsink onto the Maxwell-based GTX 750 Ti. Although the cooler was designed to rely on good airflow, they left the card in a completely enclosed case without fans. Under load, the card reached 80°C within about twenty minutes. The driver backed performance off slightly, 1-3% depending on your frame of reference, but was able to maintain that target temperature.
Now, if only it supported SLI, this person might be happy.
Subject: Graphics Cards | March 2, 2014 - 03:14 AM | Tim Verry
Tagged: sapphire, R7 240, htpc, SFF, low profile, steam os
Sapphire is preparing a new low profile Radeon R7 240 graphics card for home theater PCs and small form factor desktop builds. The new card is a single slot design whose small heatsink-and-fan cooler sits shorter than the low profile PCI bracket, assuring compatibility with even extremely cramped cases.
The Sapphire R7 240 pairs a 28nm AMD GCN-based GPU with 2GB of DDR3 memory. There are two HDMI 1.4a display outputs, each supporting 4K (4096 x 2160) resolutions. Specifically, this particular iteration of the Radeon R7 240 has 320 stream processors clocked at 730 MHz base and 780 MHz boost, along with 2GB of DDR3 memory clocked at 900 MHz on a 128-bit bus. The card also has 20 TMUs, 8 ROPs, and a power-sipping 30W TDP.
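A quick sanity check on that memory spec: DDR3 transfers twice per clock, so peak bandwidth follows from the quoted clock and bus width. A minimal sketch (the formula is the standard one; the figures are those quoted above):

```python
# Theoretical memory bandwidth = effective transfer rate x bus width in bytes.
# DDR3 at 900 MHz performs 2 transfers per clock (1800 MT/s effective).

def bandwidth_gb_s(clock_mhz: float, transfers_per_clock: int, bus_bits: int) -> float:
    """Peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

# Sapphire R7 240: 900 MHz DDR3 on a 128-bit bus
print(bandwidth_gb_s(900, 2, 128))  # -> 28.8
```

That 28.8 GB/s matches the figure AMD lists for the R7 240 elsewhere in its product stack.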
This low profile R7 240 is a sub-$100 part that can easily power a home theater PC or Steam OS streaming endpoint. The R7 240 itself can even deliver playable frame rates at low quality settings and lowered resolutions, averaging at least 30 FPS in modern titles like Bioshock Infinite and BF4 according to this review. Another use case would be to add the card to an existing AMD APU-based system in Hybrid CrossFire (which has seen Frame Pacing fixes!) for a bit more gaming horsepower on a strict budget.
The card occupies a tight niche: it is only viable in situations constrained by a tight budget, physical size, and the need to buy new rather than picking up a faster single card from an older generation on the used market. Still, it is nice to have options, and this will be one such new budget alternative. Exact pricing is not yet available, but it should be hitting store shelves soon. For an idea on pricing, the full height Sapphire R7 240 retails for around $70, so expect the new low profile variant to sell around that price, perhaps at a slight premium.
Subject: Graphics Cards, Processors | February 26, 2014 - 07:18 PM | Ryan Shrout
Overclocking the memory and GPU clock speeds on an AMD APU can greatly improve gaming performance - it is known. With the new AMD A10-7850K in hand I decided to do a quick test and see how much we could improve average frame rates for mainstream gamers with only some minor tweaking of the motherboard BIOS.
Using some high-end G.Skill RipJaws DDR3-2400 memory, we were able to push memory speeds on the Kaveri APU up to 2400 MHz, a 50% increase over the stock 1600 MHz rate. We also increased the clock speed on the GPU portion of the A10-7850K from 720 MHz to 1028 MHz, a 42% boost. Interestingly, as you'll see in the video below, the memory speed had a MUCH more dramatic impact on our average frame rates in-game.
In the three games we tested for this video, GRID 2, Bioshock Infinite and Battlefield 4, total performance gain ranged from 26% to 38%. Clearly that can make the AMD Kaveri APU an even more potent gaming platform if you are willing to shell out for the high speed memory.
| Stock | GPU OC | Memory OC | Total OC | Avg FPS Change |
|---|---|---|---|---|
| 22.4 FPS | 23.7 FPS | 28.2 FPS | 29.1 FPS | +29% |
| 33.5 FPS (High + 2xAA) | 36.3 FPS | 41.1 FPS | 42.3 FPS | +26% |
| 30.1 FPS | 30.9 FPS | 40.2 FPS | 41.8 FPS | +38% |
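The Avg FPS Change column can be recomputed from the stock and total-OC figures in the table. A quick sketch; minor rounding differences from the published percentages are expected, since those were averaged before rounding:

```python
# (stock average FPS, fully overclocked average FPS) from the table above
results = [(22.4, 29.1), (33.5, 42.3), (30.1, 41.8)]

# Percent gain from stock to the combined GPU + memory overclock
gains = [(oc / stock - 1) * 100 for stock, oc in results]

for (stock, oc), gain in zip(results, gains):
    print(f"{stock} FPS -> {oc} FPS: +{gain:.0f}%")
```

The computed gains land in the 26-39% range, consistent with the 26% to 38% spread quoted above.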
Subject: Graphics Cards | February 26, 2014 - 06:17 PM | Ryan Shrout
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd
UPDATE (2/27/14): AMD sent over a statement today after seeing our story.
AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.
Credit to Scott and his reader at The Tech Report for spotting this interesting news today!
It appears that DirectX and OpenGL are going to be announcing some changes at next month's Game Developers Conference in San Francisco. According to some information found in the session details, both APIs are trying to steal some of the thunder from AMD's Mantle, recently released with the Battlefield 4 patch. Mantle is an API built by AMD to enable more direct (lower level) access to its GCN graphics hardware, allowing developers to write more efficient games that provide better performance for the PC gamer.
From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):
For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.
However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.
Come learn our plans to deliver.
Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine):
Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!
In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware.
If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.
Now look at a line from our initial article on AMD Mantle when announced at its Hawaii tech day event:
It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.
This is all sounding very familiar. It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX. Likely due in no small part to the push of AMD and Mantle's development, an updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.
Is it time again for innovation with DirectX?
First and foremost, what does this do for AMD's Mantle in the near or distant future? For now, BF4 will still include Mantle support, as will games like Thief (update pending). Going forward, though, if these DX12 changes are as significant as I am being led to believe, it would be hard to see anyone really sticking with the AMD-only route. Of course, if DX12 doesn't address the performance and overhead issues in the same way that Mantle does, then all bets are off and we are back to square one.
Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL:
Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.
This description seems to indicate more about new or lesser known programming methods that can be used with OpenGL to lower overhead without the need for custom APIs or even DX12. This could be new modules from vendors or possibly a new revision to OpenGL - we'll find out next month.
All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March. Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers? How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)? What time frame does Microsoft have on DX12? Does this save NVIDIA from any more pressure to build its own custom API?
Gaming continues to be the driving factor of excitement and innovation for the PC! Stay tuned for an exciting spring!
Subject: General Tech, Graphics Cards | February 25, 2014 - 02:46 PM | Scott Michaud
Tagged: amd, Mantle, TrueAudio, Thief 4, thief
AMD released their Catalyst 14.2 Beta V1.3 graphics drivers today, coinciding with the launch of Thief. The game, developed by Eidos Montreal and published by Square Enix, is another entry in "Gaming Evolved" and their "Never Settle" promotion. Soon, it will also support Mantle and TrueAudio.
Being Thief's launch driver, it provides optimizations for both single-GPU and Crossfire customers in that title. It also provides fixes for other titles, especially Battlefield 4, which can now run Mantle with up to four GPUs. Battlefield 3 and 4 also support Frame Pacing in dual-card Crossfire on very high resolution (greater than 2560x1600) monitors. The driver also fixes a couple of bugs when using Crossfire with DirectX 9 games, missing textures in Minecraft, and corruption in X-Plane.
Catalyst 14.2 Beta V1.3 driver is available now at AMD's website.
Subject: General Tech, Graphics Cards, Processors | February 25, 2014 - 01:33 PM | Scott Michaud
Tagged: Ivy Bridge, Intel, iGPU, haswell
Recently, Intel released new graphics drivers (build 3412, in 32-bit and 64-bit versions) for their Ivy Bridge and Haswell integrated graphics. The download was apparently published on January 29th, while its patch notes are dated February 22nd. It features expanded support for Intel Quick Sync Video Technology, allowing certain Pentium and Celeron-class processors to access the feature, as well as an alleged performance increase in OpenGL-based games. Probably the most famous OpenGL title of our time is Minecraft, although I do not know if that specific game will see improvements (and if so, how much).
The new driver enables Quick Sync Video for the following processors:
- Pentium 3558U
- Pentium 3561Y
- Pentium G3220 (unsuffixed/T/TE)
- Pentium G3420 (unsuffixed/T)
- Pentium G3430
- Celeron 2957U
- Celeron 2961Y
- Celeron 2981U
- Celeron G1820 (unsuffixed/T/TE)
- Celeron G1830
Besides the addition for these processors and the OpenGL performance improvements, the driver obviously fixes several bugs in each of its supported OSes. You can download the appropriate drivers from the Intel Download Center.
Subject: General Tech, Graphics Cards | February 20, 2014 - 05:45 PM | Ken Addison
Tagged: nvidia, mining, maxwell, litecoin, gtx 750 ti, geforce, dogecoin, coin, bitcoin, altcoin
As we have talked about on several occasions, altcoin mining (anything that is NOT Bitcoin specifically) is a force on the current GPU market whether we like it or not. Traditionally, miners have bought AMD-based GPUs due to their performance advantage over the NVIDIA competition. However, with continued development of the cudaMiner application over the past few months, NVIDIA cards have been gaining performance in Scrypt mining.
The biggest performance change we've seen yet has come with a new version of cudaMiner released yesterday. This new version (2014-02-18) brings initial support for the Maxwell architecture, which was just released yesterday in the GTX 750 and 750 Ti. With support for Maxwell, mining starts to become a more compelling option with this new NVIDIA GPU.
With the new version of cudaMiner on the reference GTX 750 Ti, we were able to achieve a hash rate of 263 KH/s, impressive when you compare it to the previous generation, Kepler-based GTX 650 Ti, which tops out at about 150 KH/s.
As you may know from our full GTX 750 Ti review, the GM107 overclocks very well. We were able to push our sample to the highest configurable offset of +135 MHz, with an additional 500 MHz added to the memory frequency and a 31 mV bump to the voltage offset. All of this combined for a ~1200 MHz clock speed while mining and an additional 40 KH/s or so of performance, bringing us to just under 300 KH/s with the 750 Ti.
Comparing the 750 Ti to AMD GPUs and previous generation NVIDIA GPUs, its performance stacks up impressively considering the $150 MSRP. For less than half the price of the GTX 770, and roughly the same price as an R7 260X, you can achieve the same hash rate.
When we look at power consumption based on the TDP of each card, the comparison only becomes more impressive. At 60W, no card comes close to the mining performance of the 750 Ti. This means you will spend less to run a 750 Ti than an R7 260X or GTX 770 for roughly the same hash rate.
Taking a look at the performance per dollar ratings of these graphics cards, we see the two top performers are the AMD R7 260X and our overclocked GTX 750 Ti.
However, when looking at the performance per watt differences across the field, the GTX 750 Ti looks even more impressive. While most miners may think they don't care about power draw, it can help the bottom line: being able to buy a smaller, less expensive power supply moves up the payoff date for the hardware. This also bodes well for future Maxwell-based graphics cards that we will likely see released later in 2014.
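To put rough numbers on those per-watt and per-dollar claims, here is a back-of-the-envelope sketch using the hash rates and TDP quoted in this article. The prices are approximate, and the R7 260X hash rate is an assumption based on the text's "roughly the same performance" framing:

```python
# name: (hash rate KH/s, TDP in watts, approximate price in USD)
cards = {
    "GTX 750 Ti":      (263,  60, 150),
    "GTX 750 Ti (OC)": (300,  60, 150),
    "R7 260X":         (300, 115, 150),  # hash rate assumed ~equal to the OC 750 Ti
}

for name, (khs, watts, price) in cards.items():
    # Efficiency metrics discussed in the article
    print(f"{name}: {khs / watts:.2f} KH/s per watt, {khs / price:.2f} KH/s per dollar")
```

Even at stock, the 750 Ti's KH/s per watt works out to roughly 70% better than the R7 260X under these assumptions, while KH/s per dollar is close to a wash.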
Subject: Graphics Cards | February 19, 2014 - 04:43 PM | Jeremy Hellstrom
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video
We finally saw Maxwell yesterday, with a new design for the SMs, called SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores. In theory that should allow NVIDIA to pack more SMMs onto a chip than they could with the previous Kepler SMX units. This new design debuted on a $150 card, which means we don't really get to see what the architecture is capable of yet. At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRPs rather than at inflated cryptocurrency levels. Legit Reviews compared the performance of two overclocked GTX 750 Ti cards to those two AMD cards, as well as to the previous generation GTX 650 Ti Boost, across a wide selection of games to see how it stacks up, which you can read here.
That is of course after you read Ryan's full review.
"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"
Here are some more Graphics Card articles from around the web:
- MSI GTX 750 Ti Gaming Video Card Review @HiTech Legion
- NVIDIA GeForce GTX 750 Ti @ Benchmark Reviews
- ASUS GTX 750 OC 1 GB @ techPowerUp
- MSI GTX 750 Ti Gaming 2 GB @ techPowerUp
- NVIDIA GeForce GTX 750Ti the Arrival of Maxwell @HiTech Legion
- Palit GTX 750 Ti StormX Dual 2 GB @ techPowerUp
- The GTX 750 Ti Review; Maxwell Arrives @ Hardware Canucks
- Nvidia GeForce GTX 750 Ti vs. AMD Radeon R7 265 @ Legion Hardware
- MSI GTX750Ti OC Twin Frozr @ Kitguru
- NVIDIA GeForce GTX 750 Ti 2 GB @ techPowerUp
- NVIDIA GeForce GTX 750 Ti "Maxwell" On Linux @ Phoronix
- A quick look at Mantle on AMD's Kaveri APU @ The Tech Report
- Sapphire Radeon R9 Tri-X OC video card @ Hardwareoverclock
- AMD Radeon R9 290: Still Not Good For Linux Users @ Phoronix
- AMD Radeon R7 265 2GB Video Card Review @ Legit Reviews
- Sapphire Radeon R7 260X OC 2GB Graphics Card Review @ Techgage
- XFX Double Dissipation R9 280X @ [H]ard|OCP
Subject: General Tech, Graphics Cards | February 19, 2014 - 12:01 AM | Scott Michaud
Tagged: raptr, gaming evolved, amd
The AMD Gaming Evolved App updates your drivers, optimizes your game settings, streams your gameplay to Twitch, accesses some social media platforms, and now gives out prizes. Points are awarded for playing games through the app, optimizing game settings, and so forth. These can be exchanged for rewards ranging from free games to Sapphire R9-series graphics cards.
This program has been in beta for a little while now, without the ability to redeem points. The system has been restructured to encourage use of the entire app, lowering the accumulation rate for playing games and adding other goals. Beta participants do not lose their points; rather, balances are rescaled to be more in line with the new system.
Subject: General Tech, Graphics Cards | February 18, 2014 - 09:03 AM | Scott Michaud
Tagged: nvidia, gtx titan black, geforce titan, geforce
NVIDIA has just announced the GeForce GTX Titan Black. Based on the full high-performance Kepler (GK110) chip, it is mostly expected to be a lower cost development platform for GPU computing applications. All 2,880 single precision (FP32) CUDA cores and 960 double precision (FP64) CUDA cores are unlocked, yielding 5.1 TeraFLOPS of single precision and 1.3 TeraFLOPS of double precision floating point performance. The chip contains 1536 KB of L2 cache and will be paired with 6GB of video memory on the board.
The original GeForce GTX Titan launched last year, almost to the day. Also based on the GK110 design, it too offered full-rate double precision, albeit with one SMX disabled. Of course, no product at the time contained a fully-enabled GK110 processor. The first with all 15 SMX units active was the Quadro K6000, announced in July but only available in the fall. It was followed by the GeForce GTX 780 Ti (with a fraction of its FP64 performance) in November, and the fully powered Tesla K40 less than two weeks after that.
For gaming applications, this card is expected to perform comparably to the GTX 780 Ti... unless you can find a use for the extra 3GB of memory. Games do not benefit much from the extra 64-bit floating point performance because the majority of their calculations are done at 32-bit precision.
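As a rough check on the single precision figure, peak throughput follows directly from core count and clock. A minimal sketch, assuming the card's published ~889 MHz base clock (a figure not stated in this post):

```python
# Theoretical peak = cores x 2 FLOPs per clock (one fused multiply-add) x clock.

def peak_tflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz / 1000.0

# 2,880 FP32 CUDA cores at an assumed ~889 MHz base clock
fp32 = peak_tflops(2880, 0.889)
print(f"FP32: {fp32:.1f} TFLOPS")  # ~5.1, matching the quoted figure
```

FP64 throughput scales the same way with the 960 double precision cores, which is why cutting the FP64 core count (as on the GTX 780 Ti) cuts that figure proportionally.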
The NVIDIA GeForce GTX Titan Black is available today at a price of $999.
Subject: General Tech, Graphics Cards | February 14, 2014 - 06:02 PM | Scott Michaud
Tagged: supply shortage, shortage, R9 290X, podcast, litecoin, dogecoin, bitcoin
UPDATE (Feb 14th, 11pm ET): As a commenter has pointed out below, suddenly, as if by magic, Newegg has lowered prices on the currently in stock R9 290X cards by $200. That means you can currently find them for $699 - only $150 over the expected MSRP. Does that change anything about what we said above or in the video? Not really. It only lowers the severity.
I am curious to know if this was done by Newegg voluntarily due to pressure from news stories such as these, lack of sales at $899 or with some nudging from AMD...
If you have been keeping up with our podcasts and reviews, you will know that AMD cards are great compute devices for their MSRP. This is something that cryptocurrency assigns a value to: run enough hashing tasks and you are rewarded with newly created tokens (or fees from validated transactions). Some people seem to think that GPUs are worth more for that purpose than their MSRP, so retailers raise prices and people still buy them.
Currently, the cheapest R9 290X is selling for $900, a 64% increase over AMD's intended $549 MSRP. AMD is not even the one receiving this extra money!
This shortage also affects other products such as Corsair's 1200W power supply. Thankfully, only certain components are necessary for mining (mostly GPUs and a lot of power) so at least we are not seeing the shortage spread to RAM, CPUs, APUs, and so forth. We noted a mining kit on Newegg which was powered by a Sempron processor. This line of cheap and low-performance CPUs has not been updated since 2009.
We have kept up with GPU shortages, historically. We did semi-regular availability checks during the GeForce GTX 680 and 690 launch windows. The former was out of stock for over two months after its launch. Those also sometimes strayed from their MSRP, slightly.
Be sure to check out the clip (above) for a nice, 15-minute discussion.
Subject: Graphics Cards | February 13, 2014 - 02:31 PM | Jeremy Hellstrom
Tagged: radeon, r7 265, pitcairn, Mantle, gpu, amd
Sometime in late February or March you will be able to purchase the R7 265 for around $150, a decent price for an entry level GPU that will benefit those currently dependent on the GPU portion of an APU. This leads to the question of its performance, and whether this Pitcairn refresh will really benefit a gamer on a tight budget. Hardware Canucks tested it against the two NVIDIA cards closest in price: the GTX 650 Ti Boost, which is almost impossible to find, and the GTX 660 2GB, which is $40 more than the R7 265's MSRP. The GTX 660 is faster overall, but on price to performance the R7 265 is the more attractive offering. Of course, with NVIDIA's Maxwell release just around the corner, this could change drastically.
If you already caught Ryan's review, you might have missed the short video he just added on the last page.
"AMD's R7 265 is meant to reside in the space between the R7 260X and R9 270, though performance is closer to its R9 sibling. Could this make it a perfect budget friendly graphics card?"
Here are some more Graphics Card articles from around the web:
- AMD updates Radeon R7 series with R7 265 GPU, promising 25 percent more power @ The Inquirer
- Sapphire Radeon R7 265 2 GB @ techPowerUp
- Sapphire R7 265 Dual X @ Kitguru
- Gigabyte Windforce Radeon R9 280X OC Video Card Review @HiTech Legion
- XFX Radeon R9 290 Double Dissipation @ Benchmark Reviews
- Sapphire R7 260X 2GB OC 2x DVI Video Card Review @ Legit Reviews
- Sapphire R9-290X Tri-X “Sapphire Takes a Shot at Cooling the Monster” Review! @ Bjorn3D
- Asus R9 290 Direct CU II OC @ Kitguru
- Sapphire R9 290X Tri-X OC 4 GB @ techPowerUp
- GIGABYTE R9 290X WindForce OC Review @ Hardware Canucks
- AMD Mantle BF4 and StarSwarm Testing Part 2 @ Legit Reviews
- Gigabyte GTX 780 Ti GHz Edition 3GB @ eTeknix
- MSI GeForce GTX 780 Ti GAMING 3G @ [H]ard|OCP
Subject: Graphics Cards | February 10, 2014 - 12:00 AM | Ryan Shrout
Tagged: radeon, R7, hd 7770, amd, 250x
With the exception of the R9 290X, the R9 290, and the R7 260X, AMD's recent branding campaign with the Radeon R7 and R9 series of graphics cards is really just a reorganization and rebranding of existing parts. When we reviewed the Radeon R9 280X and R9 270X, both were well known entities though this time with lower price tags to sweeten the pot.
Today, AMD is continuing the process of building the R7 graphics card lineup with the R7 250X. If you were looking for a new ASIC, maybe one that includes TrueAudio support, you are going to be let down. The R7 250X is essentially the same part that was released as the HD 7770 in February of 2012: Cape Verde.
AMD calls the R7 250X "the successor" to the Radeon HD 7770, and it is targeting the 1080p gaming landscape at the $99 price point. For those keeping track at home, Radeon HD 7770 GHz Edition parts are currently selling for the same price. The R7 250X will be available in both 1GB and 2GB variants with a 128-bit GDDR5 memory bus running at 4.5 GHz. The card requires a single 6-pin power connection and we expect a TDP of 95 watts.
Here is a table that details the current product stack of GPUs from AMD under $140. It's quite crowded as you can see.
| | Radeon R7 260X | Radeon R7 260 | Radeon R7 250X | Radeon R7 250 | Radeon R7 240 |
|---|---|---|---|---|---|
| GPU Code name | Bonaire | Bonaire | Cape Verde | Oland | Oland |
| Rated Clock | 1100 MHz | 1000 MHz | 1000 MHz | 1050 MHz | 780 MHz |
| Memory | 2GB | 2GB | 1 or 2GB | 1 or 2GB | 1 or 2GB |
| Memory Clock | 6500 MHz | 6000 MHz | 4500 MHz | 4600 MHz | 4600 MHz |
| Memory Bandwidth | 104 GB/s | 96 GB/s | 72 GB/s | 73.6 GB/s | 28.8 GB/s |
| TDP | 115 watts | 95 watts | 95 watts | 65 watts | 30 watts |
| Peak Compute | 1.97 TFLOPS | 1.53 TFLOPS | 1.28 TFLOPS | 0.806 TFLOPS | 0.499 TFLOPS |
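The Peak Compute row can be sanity-checked against the rated clocks: GCN executes two FLOPs per stream processor per clock. The stream-processor counts below are not in the table; they are the commonly published figures for these parts (the 640 for the R7 250X matches Cape Verde / HD 7770) and are assumed here:

```python
# name: (stream processors, rated clock in GHz, TFLOPS quoted in the table)
cards = {
    "R7 260X": (896, 1.100, 1.97),
    "R7 250X": (640, 1.000, 1.28),
    "R7 240":  (320, 0.780, 0.499),
}

for name, (sps, ghz, quoted) in cards.items():
    # Peak = stream processors x 2 FLOPs per clock x clock
    computed = sps * 2 * ghz / 1000.0
    print(f"{name}: {computed:.3f} TFLOPS computed vs {quoted} quoted")
```

All three computed values agree with the table to within rounding, which is reassuring given how much rebranding is going on in this stack.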
The current competition from NVIDIA rests in the hands of the GeForce GTX 650 and the GTX 650 Ti, a GPU itself released in late 2012. Since we already know what performance to expect from the R7 250X given its pedigree, the numbers below, as provided by AMD, aren't really that surprising.
AMD did leave the GTX 650 Ti out of the graph above... but no matter, we'll be doing our own testing soon enough, once our R7 250X cards find their way into the PC Perspective offices.
The AMD Radeon R7 250X will be available starting today but if that is the price point you are looking at, you might want to keep an eye out for sales on those remaining Radeon HD 7770 GHz Edition parts.
Subject: General Tech, Graphics Cards | February 7, 2014 - 03:54 AM | Scott Michaud
Tagged: sli, crossfire
I will not even call this a thinly-veiled rant. Linus admits it. To make a point, he assembled a $5000 PC running a pair of NVIDIA GeForce 780 Ti GPUs and another pair of AMD Radeon R9 290X graphics cards. While Bitcoin mining would likely utilize all four video cards well enough, games will not. Of course, he did not even mention the former application (thankfully).
Honestly, he's right. One of the reasons why I am excited about OpenCL (and its WebCL companion) is that it simply does not care about devices. Your host code manages the application but, when the jobs get dirty, it enlists help from an available accelerator by telling it to perform a kernel (think of it like a function) and share the resulting chunk of memory.
This can be an AMD GPU. This can be an NVIDIA GPU. This can be an x86 CPU. This can be an FPGA. If the host has multiple, independent tasks, it can be several of the above (and in any combination). OpenCL really does not care.
Obviously, to be fair, AMD is very receptive to open platforms. NVIDIA is less so, and they are honest about that, but they conform to standards when doing so benefits their users more than their proprietary alternatives do. I know that point can be taken multiple ways, and several will be hotly debated, but I really cannot find the words to narrow it properly.
Despite the fragmentation in features, there is one thing to be proud of as a PC gamer. You may have different experiences depending on the components you purchase.
But, at least you will always have an experience.
Subject: General Tech, Graphics Cards | February 6, 2014 - 08:54 PM | Scott Michaud
Tagged: amd, radeon, R7 250X
The AMD Radeon R7 250X has been mentioned on a few different websites over the last day, one of which was tweeted by AMD Radeon Italia. The SKU, which bridges the gap between the R7 250 and the R7 260, is expected to have a graphics processor with 640 Stream Processors, 40 TMUs, and 16 ROPs. It should be a fairly silent launch, with 1GB and 2GB versions appearing soon for an expected price of around 90 Euros, including VAT.
Image Credit: Videocardz.com
The GPU is expected to be based on the 28nm Oland chip design.
While it may seem like a short, twenty Euro jump from the R7 250 to the R7 260, the single-precision FLOP performance actually doubles from around 800 GFLOPs to around 1550 GFLOPs. If that metric is indicative of overall performance, there is quite a large gap to place a product within.
We still do not know official availability, yet.
Subject: General Tech, Graphics Cards | February 6, 2014 - 01:44 PM | Jeremy Hellstrom
We have more news and it is good for Galaxy fans. The newest update states that they will be sticking around!
Good news GPU fans, the rumours that Galaxy's GPU team is leaving the North American market might be somewhat exaggerated, at least according to their PR Team.
This post appeared on Facebook and was quickly taken off again, perhaps for rewording or perhaps it is a perfect example of the lack of communication that [H]ard|OCP cites in their story. Stay tuned as we update you as soon as we hear more.
Party like it's 2008!
[H]ard|OCP has been following Galaxy's business model closely for the past year, seeing hints that the reseller just didn't get the North American market. Their concern grew as they tried and failed to contact Galaxy at the end of 2013: emails went unanswered and advertising campaigns seemed to have all but disappeared. Even with this reassurance that Galaxy is not planning to leave the North American market, a lot of what [H] says rings true; given the stock and delivery issues Galaxy seemed to have over the past year, something is going on behind the scenes. Still, it is not worth abandoning them completely and turning this into a self-fulfilling prophecy; they have been in this market for a long time and may just be getting ready to move forward in a new way. On the other hand, you might be buying a product that will not have warranty support in the future.
"The North American GPU market has been one that is at many times a swirling mass of product. For the last few years though, we have seen the waters calm in that regard as video card board partners have somewhat solidified and we have seen solid players emerge and keep the stage. Except now we seen one exit stage left."
Here is some more Tech News from around the web:
- Microsoft's new CEO: The technology isn't his problem @ The Register
- Oculus Releases Open Source Hardware @ Hack a Day
- HP retains the top spot in a declining PC market @ The Inquirer
- Is Intel Selling Bay Trail Chips Below Cost? @ Slashdot
- Lenovo hires ex-CIA bod to push through Moto deal @ The Register
Subject: General Tech, Graphics Cards | February 5, 2014 - 02:43 PM | Jeremy Hellstrom
Tagged: gaming, Mantle, amd, battlefield 4
Now that the new Mantle enabled driver has been released, several sites have had a chance to try out the new API and see what effect it has on Battlefield 4. [H]ard|OCP took a stock XFX R9 290X paired with an i7-3770K and tested both single and multiplayer BF4 performance; the pattern they saw led them to believe Mantle is more effective at relieving CPU bottlenecks than ones caused by the GPU. The performance increases they saw were greater at lower resolutions than at high resolutions. At The Tech Report, another XFX R9 290X was paired with an A10-7850K and an i7-4770K to compare the systems' performance under D3D as well as Mantle. To make the tests even more interesting, they also tested D3D with a 780 Ti, which you should fully examine before deciding which performs best. Their findings were in line with [H]ard|OCP's, and they observed that Mantle is going to offer the greatest benefits to lower powered systems, with not a lot to be gained by high end systems on the current version of Mantle. Legit Reviews performed similar tests but also brought the Star Swarm demo into the mix, using an R7 260X for their GPU. You can catch all of our coverage by clicking on the Mantle tag.
"Does AMD's Mantle graphics API deliver on its promise of smoother gaming with lower-spec CPUs? We take an early look at its performance in Battlefield 4."
Here is some more Tech News from around the web:
- Humble Sid Meier Bundle announced: So much Civilisation! @ HEXUS
- HARD ONES: Three new PC games that are BLOODY DIFFICULT @ The Register
- Developers Reporting No Payments From Strategy First @ Rock, Paper, SHOTGUN
Subject: General Tech, Graphics Cards, Processors | February 5, 2014 - 02:08 AM | Scott Michaud
Tagged: photoshop, opencl, Adobe
Adobe has recently enhanced Photoshop CC to accelerate certain filters via OpenCL. AMD contacted NitroWare with this information, claiming an 11-fold performance increase with "Smart Sharpen" on Kaveri specifically. The computer hardware site decided to test these claims on a Radeon HD 7850 using the test metrics AMD provided.
Sure enough, NitroWare measured a 16-fold gain in performance. Without OpenCL, the filter's loading bar was on screen for over ten seconds; with it enabled, there was no bar at all.
Dominic from NitroWare is careful to note that an HD 7850 is a significantly higher-performance part than an APU (barring some weird scenario involving memory transfers or something). This might mark the beginning of Adobe's road to sensible heterogeneous computing outside of video transcoding. Of course, this will also be exciting for AMD. While they cannot keep up with Intel thread-for-thread, they are still a heavyweight in terms of total compute performance. With Photoshop, people might actually notice it.
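It is easy to see why a sharpening filter is such a good fit for OpenCL: sharpeners in the unsharp-mask family compute each output pixel from only a small neighborhood of input pixels, so millions of pixels can be processed independently, exactly the data-parallel shape a GPU kernel exploits. A minimal pure-Python sketch of a 1-D unsharp mask (illustrative only; Adobe's actual Smart Sharpen algorithm and its OpenCL kernels are not public):

```python
# Unsharp mask: output = original + amount * (original - blurred).
# Every output element depends only on a small window of the input,
# so each one could be computed by an independent OpenCL work-item.

def box_blur(pixels, radius=1):
    """Simple box blur with edge clamping."""
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n - 1, i + radius)
        window = pixels[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(pixels, amount=1.0, radius=1):
    """Exaggerate edges by adding back the detail the blur removed."""
    blurred = box_blur(pixels, radius)
    return [p + amount * (p - b) for p, b in zip(pixels, blurred)]
```

Flat regions pass through unchanged, while values on either side of an edge get pushed apart, which is the visible "sharpening." On a GPU the per-pixel loop body becomes the kernel and the hardware runs it across the whole image at once.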
Subject: General Tech, Graphics Cards | February 1, 2014 - 11:29 PM | Scott Michaud
Tagged: Mantle, BF4, amd
AMD has released the Catalyst 14.1 Beta driver (even for Linux), but you should first read Ryan's review. The quality is a bit below what he expects from an AMD beta. We are talking about crashes to desktop and freezes while loading a map on a single-GPU configuration, and Crossfire is a complete wash in his experience (although AMD acknowledges the latter in its release notes). According to AMD, there is even the possibility that the Mantle version of Battlefield 4 will render with your APU and ignore your dedicated graphics.
If you are determined to try Catalyst 14.1, however, it does take a first step toward delivering on the promise of Mantle. Some situations show slightly lower performance than DirectX 11, albeit with a higher minimum framerate, while others impress with double-digit percentage gains.
Multiplayer in BF4, where the CPU is more heavily utilized, seems to benefit the most (thankfully).
If you understand the risk (in terms of annoyance and frustration), and still want to give it a try, pick up the driver from AMD's support website. If not? Give it a little more time for AMD to whack-a-bug. At some point, there should be truly free performance waiting for you.
Subject: Graphics Cards, Processors | January 31, 2014 - 04:36 PM | Ryan Shrout
Tagged: 7850k, A10-7850K, amd, APU, gt 630, Intel, nvidia, video
As a follow-up to our first video, posted earlier in the week, which looked at the A10-7850K and the GT 630 from NVIDIA in five standard games, this time we compare the A10-7850K APU against the same combination of Intel and NVIDIA hardware in five of 2013's top free-to-play games.
UPDATE: I've had some questions about WHICH of the GT 630 SKUs was used in this testing. Our GT 630 was this EVGA model, based on 96 CUDA cores and a 128-bit DDR3 memory interface. You can see a comparison of the three current GT 630 options on NVIDIA's website here.
If you are looking for more information on AMD's Kaveri APUs, you should check out my review of the A8-7600 part as well as our testing of Dual Graphics with the A8-7600 and a Radeon R7 250 card.