Join us on March 11th at 9pm ET / 6pm PT for a LIVE Titanfall game stream! You can find us at http://www.pcper.com/live, and you can subscribe to our mailing list to be alerted whenever we have a live event!
UPDATE: We canceled the event due to the instability of the Titanfall servers. We'll reschedule soon!
With the release of Respawn's Titanfall upon us, many potential PC gamers are going to be looking for suggestions on a parts list targeted at a perfect Titanfall experience. The good news is that, even with a fairly low investment in PC hardware, gamers will find the PC version of this title is definitely the premier way to play, as the compute power of the Xbox One just can't compete.
In this story we'll present three different build suggestions, each addressing a different target resolution with better image quality settings than the Xbox One can offer. We have builds for 1080p (the best the Xbox One can manage), 2560x1440, and even 3840x2160, better known as 4K. In truth, the graphics horsepower required by Titanfall isn't overly extreme, and thus an entire PC build coming in under $800, including a full copy of Windows 8.1, is easy to accomplish.
Target 1: 1920x1080
First up is old reliable, the 1920x1080 resolution that most gamers still have on their primary gaming display. That could be a home theater style PC hooked up to a TV or a desktop with a monitor up to 27 inches. Here is our build suggestion, followed by our explanations.
| Titanfall 1080p Build | |
|---|---|
| Processor | Intel Core i3-4330 - $137 |
| Motherboard | MSI H87-G43 - $96 |
| Memory | Corsair Vengeance LP 8GB 1600 MHz (2 x 4GB) - $89 |
| Graphics Card | EVGA GeForce GTX 750 Ti - $179 |
| Storage | Western Digital Blue 1TB - $59 |
| Case | Corsair 200R ATX Mid Tower Case - $72 |
| Power Supply | Corsair CX 500 watt - $49 |
| OS | Windows 8.1 OEM - $96 |
| Total Price | $781 - Amazon Full Cart |
Our first build comes in at $781 and includes some incredibly competent gaming hardware for that price. The Intel Core i3-4330 is a dual-core, HyperThreaded processor that provides more than enough capability to push Titanfall and any other major PC game on the market. The MSI H87 motherboard lacks some of the advanced features of the Z87 platform but does the job at a lower cost. 8GB of Corsair memory, though not running at a high clock speed, provides more than enough capacity for all the applications you could want to run.
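If you plan to swap parts in and out of this list, a quick sanity check on the budget is just to sum the line items. Here is a minimal sketch; the prices are the snapshot from the table above and fluctuate daily, which is why a live Amazon cart total can differ by a few dollars from the itemized sum.

```cpp
#include <cstdio>

// Sum the line items from the 1080p build table above. Prices are a
// snapshot and drift daily, so a live cart total may differ slightly.
int main() {
    struct Part { const char* name; int price; };
    const Part parts[] = {
        {"Intel Core i3-4330", 137}, {"MSI H87-G43",      96},
        {"8GB DDR3-1600",       89}, {"GTX 750 Ti",      179},
        {"WD Blue 1TB",         59}, {"Corsair 200R",     72},
        {"Corsair CX500",       49}, {"Windows 8.1 OEM",  96},
    };
    int total = 0;
    for (const Part& p : parts) total += p.price;
    printf("Itemized total: $%d\n", total);
    return 0;
}
```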
It's been a while...
EVGA has been around for quite some time now. They have turned into NVIDIA’s closest North American partner after the collapse of the original VisionTek. At nearly every trade show or gaming event, EVGA is closely associated with whatever NVIDIA presence is there. In the past EVGA focused primarily on using NVIDIA reference designs for PCB and cooling, and would branch out now and then with custom or semi-custom watercooling solutions.
A very svelte and minimalist design for the shroud. I like it.
The last time I actually reviewed an EVGA product was way back in May of 2006. I took a look at the 7600 GS, which was a passively cooled card. Oddly enough, that card is sitting right in front of me as I write this. Unfortunately, it has a set of blown caps and no longer works. Considering that the card was in constant use since 2006, I would say that it held up very well for those eight years!
EVGA has been expanding their product lineup to handle the highs and lows of the PC market. They have started manufacturing motherboards, cases, and power supplies to diversify and broaden their product portfolio. We know from past experience that companies relying on one type of product from a single manufacturer (GPUs in this particular case) can run into real trouble if demand drops dramatically due to competitive disadvantages. EVGA has also taken a much more aggressive approach to differentiating their products while keeping them within a certain budget.
The latest generation of GTX 700 based cards has seen the introduction of the EVGA ACX cooling solutions. These dual fan coolers are a big step up from the reference design and put EVGA on par with competitive products from Asus and MSI. EVGA does make some tradeoffs in comparison, but these are fairly minimal when considering the entire package.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 5, 2014 - 08:28 PM | Scott Michaud
Tagged: qualcomm, nvidia, microsoft, Intel, gdc 14, GDC, DirectX 12, amd
The announcement of DirectX 12 has been given a date and time via a blog post on the Microsoft Developer Network (MSDN) blogs. On March 20th at 10:00am (I assume PDT), a few days into the 2014 Game Developers Conference in San Francisco, California, the upcoming specification should be detailed for attendees. Apparently, four GPU manufacturers will also be involved with the announcement: AMD, Intel, NVIDIA, and Qualcomm.
As we reported last week, DirectX 12 is expected to target increased hardware control and decreased CPU overhead for added performance in "cutting-edge 3D graphics" applications. Really, this is the best time for it. Graphics processors have mostly settled into being highly efficient co-processors for parallel data, with some specialized logic for geometry and video tasks. A new specification can relax the demands on video drivers and thus keep the GPU (or GPUs, in Mantle's case) loaded and utilized.
But, to me, the most interesting part of this announcement is the nod to Qualcomm. Microsoft values DirectX as leverage over other x86 and ARM-based operating systems. With Qualcomm, clearly Microsoft believes that either Windows RT or Windows Phone will benefit from the API's next version. While it will probably make PC gamers nervous, mobile platforms will benefit most from reducing CPU overhead, especially if it can be spread out over multiple cores.
Honestly, that is fine by me. As long as Microsoft returns to treating the PC as a first-class citizen, I do not mind them helping mobile, too. We will definitely keep you up to date as we know more.
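In the meantime, to see why spreading that CPU overhead across cores matters, consider the pattern a lower-overhead API enables. DirectX 12 itself has not been documented publicly at the time of writing, so the types in this sketch (CommandList, Queue) are hypothetical stand-ins rather than any real API; the point is only the shape of the work: each CPU core records rendering commands independently, and submission at the end is a single cheap call.

```cpp
#include <thread>
#include <vector>

// Hypothetical stand-ins for a reduced-overhead graphics API. These are
// illustrative types, NOT DirectX 12's actual (then-unannounced) interfaces.
struct CommandList { void drawBatch(int firstObject, int count) { /* record */ } };
struct Queue       { void submit(const std::vector<CommandList*>& lists) { /* kick */ } };

int main() {
    const int kThreads = 4;       // one recording thread per CPU core
    const int kObjects = 10000;   // scene objects to draw this frame
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;

    // Each thread records its slice of the scene into its own command list.
    // In a high-overhead API this work funnels through one driver thread;
    // here the expensive part runs in parallel across cores.
    for (int t = 0; t < kThreads; ++t) {
        workers.emplace_back([&, t] {
            int per = kObjects / kThreads;
            lists[t].drawBatch(t * per, per);
        });
    }
    for (auto& w : workers) w.join();

    // Submission itself is a single, cheap call on one thread.
    Queue queue;
    std::vector<CommandList*> ptrs;
    for (auto& l : lists) ptrs.push_back(&l);
    queue.submit(ptrs);
    return 0;
}
```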
Subject: Graphics Cards | March 4, 2014 - 03:38 PM | Ryan Shrout
Tagged: radeon, R9 290X, hawaii, amd, 290x
Yes, I know it is only one card. And yes, I know that this could sell out in the next 10 minutes and mean nothing, but I was so interested, excited, and curious about this that I wanted to put together a news post. I just found a Radeon R9 290X card selling for $549 on Newegg.com. That is the normal, regular, non-inflated, expected retail price.
You can get a PowerColor AXR9 290X with 4GB of memory for $549 right now, likely only if you hurry. That same GPU on Amazon.com will cost you $676, and this same card at Newegg.com has been as high as $699.
Again, this is only one card on one site, but the implications are positive. This is also a reference design card, rather than one of the superior offerings with a custom cooler. After that single card, the next lowest price is $629, followed by a couple at $649 and then more at $699. We are still waiting to hear from AMD on the issue, what its response is, and whether it can actually do anything to fix it. It seems plausible, though maybe not likely, that the draw of coin mining has reached a peak (and who can blame the miners) and the pricing of AMD GPUs could stabilize. Maybe.
But for now, if you want an R9 290X, Newegg.com has at least one option that makes sense.
Subject: Graphics Cards | March 4, 2014 - 08:00 AM | Ryan Shrout
Tagged: radeon, r9 280, R9, hd 7950, amd
AMD continues to churn out its Radeon graphics card line. Out today, or so we are told, is the brand new Radeon R9 280! That's right kids, it's kind of like the R9 280X, but without the letter at the end. In fact, do you know what it happens to be very similar to? The Radeon HD 7950. Check out the testing card we got in.
It's okay AMD, it's just a bit of humor...
Okay, let's put the jokes aside and talk about what we are really seeing here.
The new Radeon R9 280 is the latest in the line of rebranding and reorganizing steps made by AMD in the move from the "HD" moniker to "R9/R7". As the image above would indicate, the specifications of the R9 280 are nearly 1:1 with those of the Radeon HD 7950 with Boost, released in August of 2012. We built a specification table below.
| | Radeon R9 280X | Radeon R9 280 | Radeon R9 270X | Radeon R9 270 | Radeon R7 265 |
|---|---|---|---|---|---|
| GPU Code Name | Tahiti | Tahiti | Pitcairn | Pitcairn | Pitcairn |
| Rated Clock | 1000 MHz | 933 MHz | 1050 MHz | 925 MHz | 925 MHz |
| Memory Clock | 6000 MHz | 6000 MHz | 5600 MHz | 5600 MHz | 5600 MHz |
| Memory Bandwidth | 288 GB/s | 288 GB/s | 179 GB/s | 179 GB/s | 179 GB/s |
| TDP | 250 watts | 250 watts | 180 watts | 150 watts | 150 watts |
| Peak Compute | 4.10 TFLOPS | 3.34 TFLOPS | 2.69 TFLOPS | 2.37 TFLOPS | 1.89 TFLOPS |
| Current Pricing | $420 - Amazon | ??? | $259 - Amazon | $229 - Amazon | ??? |
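Where do the Peak Compute and Memory Bandwidth rows come from? Each GCN stream processor retires two floating point operations per clock (a fused multiply-add), and the memory interface moves (bus width / 8) bytes per effective transfer. The quick sketch below reproduces the table's figures; note that the stream processor counts and bus widths come from AMD's published specifications, since they are not listed in the table above.

```cpp
#include <cstdio>

// Peak compute:     TFLOPS = shaders * 2 ops/clock (FMA) * clock(GHz) / 1000
// Memory bandwidth: GB/s   = bus width(bits) / 8 * effective rate(GT/s)
// Shader counts and bus widths are AMD's published specs for these parts.
int main() {
    struct Card { const char* name; int shaders; double ghz; int bus; double memGtps; };
    const Card cards[] = {
        {"R9 280X", 2048, 1.000, 384, 6.0},
        {"R9 280",  1792, 0.933, 384, 6.0},
        {"R9 270X", 1280, 1.050, 256, 5.6},
        {"R9 270",  1280, 0.925, 256, 5.6},
        {"R7 265",  1024, 0.925, 256, 5.6},
    };
    for (const Card& c : cards) {
        double tflops = c.shaders * 2.0 * c.ghz / 1000.0;
        double gbps   = c.bus / 8.0 * c.memGtps;
        printf("%-8s %.2f TFLOPS  %.0f GB/s\n", c.name, tflops, gbps);
    }
    return 0;
}
```

The output matches the table row for row, which is a handy check that the R9 280 really is a 1792-shader Tahiti part like the HD 7950.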
If you are keeping track, AMD should just about be out of cards to drag over to the new naming scheme. The R9 280 has a slightly higher top boost clock than the Radeon HD 7950 did (933 MHz vs. 925 MHz) but otherwise looks very similar. Oh, and apparently the R9 280 will require a 6+8 pin PCIe power combination while the HD 7950 needed only 6+6 pin. Despite that change, it is still built on the same Tahiti GPU that has been chugging along for years now.
The Radeon R9 280 continues to support an assortment of AMD's graphics technologies including Mantle, PowerTune, CrossFire, Eyefinity, and DX11.2. Note that because we are looking at an ASIC that has been around for a while, you will not find XDMA or TrueAudio support.
The estimated MSRP of $279 is only $20 lower than the MSRP of the R9 280X, but you should take all pricing estimates from AMD with a grain of salt. The prices listed in the table above from Amazon.com were current as of March 3rd, and of course, we did see Newegg attempt to get people to buy R9 290X cards for $900 recently. AMD used some interesting language on the availability of the R9 280 in its emails to me:
The AMD Radeon R9 280 will become available at a starting SEP of $279USD the first week of March, with wider availability the second week of March. Following the exceptional demand for the entire R9 Series, we believe the introduction of the R9 280 will help ensure that every gamer who plans to purchase an R9 Series graphics card has an opportunity to do so.
I like AMD's intent with this release: get more physical product into the channel to hopefully lower prices and enable more gamers to purchase the Radeon card they really want. However, until I see a swarm of parts on Newegg.com or Amazon.com at, or very close to, the MSRPs listed in the table above for an extended period, I think the effects of coin mining (and the rumors of GPU shortages) will continue to plague us. No one wants to see competition in the market, and great options at reasonable prices for gamers, more than we do!
AMD hasn't sent out any samples of the R9 280 as far as I know (at least we didn't get any), but the performance should be predictable based on its specifications relative to the R9 280X and the HD 7950 before it.
Do you think the R9 280 will fix the pricing predicament that AMD finds itself in today, and if it does, are you going to buy one?
Subject: Graphics Cards | March 3, 2014 - 10:33 PM | Tim Verry
Tagged: nvidia, maxwell, gtx 750, evga
EVGA recently launched two new GTX 750 graphics cards with 2GB of GDDR5 memory. The new cards include a reference clocked GTX 750 2GB and a factory overclocked GTX 750 2GB SC (Super Clocked).
The new graphics cards are based around NVIDIA's GTX 750 GPU with 512 Maxwell architecture CUDA cores. The GTX 750 is the little brother to the GTX 750 Ti we recently reviewed, which has 640 cores. EVGA has clocked the GTX 750 2GB card's GPU at reference clock speeds of 1020 MHz base and 1085 MHz boost, with memory at a reference speed of 1253 MHz. The "Super Clocked" GTX 750 2GB SC card keeps the memory at reference speed but overclocks the GPU quite a bit, to 1215 MHz base and 1294 MHz boost.
| | EVGA GTX 750 2GB | EVGA GTX 750 2GB Super Clocked |
|---|---|---|
| GPU | 512 CUDA Cores (Maxwell) | 512 CUDA Cores (Maxwell) |
| GPU Base | 1020 MHz | 1215 MHz |
| GPU Boost | 1085 MHz | 1294 MHz |
| Memory | 2GB GDDR5 @ 1253 MHz (128-bit bus) | 2GB GDDR5 @ 1253 MHz (128-bit bus) |
| Outputs | 1 x DVI, 1 x HDMI, 1 x DP | 1 x DVI, 1 x HDMI, 1 x DP |
Both cards have a 55W TDP, require no PCIe power connector, and use a single shrouded fan heatsink. The cards are short but occupy two PCI slots. The rear panel hosts one DVI, one HDMI, and one DisplayPort video output along with ventilation slots for the HSF. Further, both cards support NVIDIA's G-Sync technology.
The reference clocked GTX 750 2GB is $129.99 while the factory overclocked model is $139.99. Both cards are similar to their respective predecessors except for the additional 1GB of GDDR5 memory, which comes at a $10 premium and should help a bit at high resolutions.
Subject: General Tech, Graphics Cards | March 2, 2014 - 05:20 PM | Scott Michaud
Tagged: passive cooling, maxwell, gtx 750 ti
The NVIDIA GeForce GTX 750 Ti is fast but also power efficient, enough so that Ryan found it a worthwhile upgrade for cheap desktops with cheap power supplies that were never intended for discrete graphics. Of course, this recommendation is about making the best of what you've got; better options probably exist if you are building a PC (or getting one built by a friend or a computer store).
Image Credit: Tom's Hardware
Tom's Hardware went another route: make it fanless.
After wrecking a passively-cooled Radeon HD 7750, which is probably a crime in Texas, they clamped its heatsink onto the Maxwell-based GTX 750 Ti. While the cooler was designed for good airflow, they decided to leave it in a completely enclosed case without fans. Under load, the card reached 80°C within about twenty minutes. The driver backed off performance slightly, 1-3% depending on your frame of reference, but was able to maintain that target temperature.
Now, if only it supported SLI, this person might be happy.
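That "back off slightly and hold the line at 80°C" behavior is the GPU's boost logic at work. NVIDIA has not published the algorithm, so the sketch below is only a minimal illustration of the idea, a feedback loop that trims the clock while the die is over its temperature target and restores it when there is headroom; every constant in it is invented.

```cpp
#include <cstdio>

// Minimal sketch of temperature-target clock throttling. This is NOT
// NVIDIA's actual GPU Boost algorithm; every constant here is invented
// purely to illustrate the feedback loop described above.
int main() {
    const double target  = 80.0;   // temperature target (deg C)
    const int    baseMhz = 1020;   // nominal boost clock
    const int    step    = 13;     // one throttle step, ~1.3% of base
    int    clock = baseMhz;
    double temp  = 60.0;           // simulated die temperature

    for (int tick = 0; tick < 60; ++tick) {
        // Toy thermal model: die temperature relaxes toward an equilibrium
        // that scales with clock speed (higher clock -> hotter steady state).
        temp += 0.1 * (25.0 + 0.055 * clock - temp);

        if (temp > target)        clock -= step;  // over target: back off
        else if (clock < baseMhz) clock += step;  // headroom: recover
        printf("t=%2d  %4d MHz  %.1f C\n", tick, clock, temp);
    }
    // Result: the clock hovers 1-3% below base while temperature holds
    // near 80 C, mirroring the behavior Tom's Hardware observed.
    return 0;
}
```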
Subject: Graphics Cards | March 2, 2014 - 03:14 AM | Tim Verry
Tagged: sapphire, R7 240, htpc, SFF, low profile, steam os
Sapphire is preparing a new low profile Radeon R7 240 graphics card for home theater PCs and small form factor desktop builds. The new graphics card is a single slot design using a small heatsink-and-fan cooler that is shorter than the low profile PCI bracket, for assured compatibility with even extremely cramped cases.
The Sapphire R7 240 pairs a 28nm AMD GCN-based GPU with 2GB of DDR3 memory. There are two HDMI 1.4a display outputs that each support 4K (4096 x 2160) resolutions. Specifically, this particular iteration of the Radeon R7 240 has 320 stream processors clocked at 730 MHz base and 780 MHz boost, along with 2GB of DDR3 memory clocked at 900 MHz on a 128-bit bus. The card further has 20 TMUs and 8 ROPs, and a power-sipping 30W TDP.
This low profile R7 240 is a sub-$100 part that can easily power a home theater PC or Steam OS streaming endpoint. Actually, the R7 240 itself can deliver playable gaming frame rates at low quality settings and reduced resolutions, averaging at least 30 FPS in modern titles like Bioshock Infinite and BF4 according to this review. Another use case would be to add the card to an existing AMD APU-based system in Hybrid CrossFire (which has seen Frame Pacing fixes!) for a bit more gaming horsepower on a strict budget.
The card occupies a narrow niche: it only makes sense when a build is constrained by budget and physical size, and when buying new rather than picking up a faster single previous-generation card on the used market. Still, it is nice to have options, and this will be one such new budget alternative. Exact pricing is not yet available, but it should be hitting store shelves soon. For an idea of pricing, the full height Sapphire R7 240 retails for around $70, so expect the new low profile variant to land around that price, perhaps at a slight premium.
Subject: Graphics Cards, Processors | February 26, 2014 - 07:18 PM | Ryan Shrout
Overclocking the memory and GPU clock speeds on an AMD APU can greatly improve gaming performance - it is known. With the new AMD A10-7850K in hand I decided to do a quick test and see how much we could improve average frame rates for mainstream gamers with only some minor tweaking of the motherboard BIOS.
Using some high-end G.Skill RipJaws DDR3-2400 memory, we were able to push memory speeds on the Kaveri APU up to 2400 MHz, a 50% increase over the stock 1600 MHz rate. We also increased the clock speed on the GPU portion of the A10-7850K from 720 MHz to 1028 MHz, a 42% boost. Interestingly, as you'll see in the video below, the memory speed had a MUCH more dramatic impact on our average frame rates in-game.
In the three games we tested for this video, GRID 2, Bioshock Infinite, and Battlefield 4, total performance gains ranged from 26% to 38%, as shown in the table below. Clearly that can make the AMD Kaveri APU an even more potent gaming platform if you are willing to shell out for the high speed memory.
| | Stock | GPU OC | Memory OC | Total OC | Avg FPS Change |
|---|---|---|---|---|---|
| GRID 2 | 22.4 FPS | 23.7 FPS | 28.2 FPS | 29.1 FPS | +29% |
| Bioshock Infinite (High + 2xAA) | 33.5 FPS | 36.3 FPS | 41.1 FPS | 42.3 FPS | +26% |
| Battlefield 4 | 30.1 FPS | 30.9 FPS | 40.2 FPS | 41.8 FPS | +38% |
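The percentages are easy to reproduce from the table. A quick sketch of the arithmetic, with values copied from above; any small differences from the quoted figures come down to rounding:

```cpp
#include <cstdio>

// Reproduce the overclock and FPS gains quoted above (values from the
// table; small deviations from the article's figures are just rounding).
int main() {
    auto gain = [](double before, double after) {
        return (after / before - 1.0) * 100.0;
    };
    printf("Memory clock: +%.0f%%\n", gain(1600.0, 2400.0));  // 50%
    printf("GPU clock:    +%.0f%%\n", gain(720.0, 1028.0));   // ~43%
    printf("GRID 2:            +%.0f%%\n", gain(22.4, 29.1));
    printf("Bioshock Infinite: +%.0f%%\n", gain(33.5, 42.3));
    printf("Battlefield 4:     +%.0f%%\n", gain(30.1, 41.8));
    return 0;
}
```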
Subject: Graphics Cards | February 26, 2014 - 06:17 PM | Ryan Shrout
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd
UPDATE (2/27/14): AMD sent over a statement today after seeing our story.
AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.
Credit goes to Scott and his reader at The Tech Report for spotting this interesting news today!
It appears that DirectX and OpenGL are going to be announcing some changes at next month's Game Developers Conference in San Francisco. According to some information found in the session details, both APIs are trying to steal some of the thunder from AMD's Mantle, recently released with the Battlefield 4 patch. Mantle is an API built by AMD to enable more direct, lower-level access to its GCN graphics hardware, allowing developers to write more efficient game code that provides better performance for the PC gamer.
From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):
For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.
However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.
Come learn our plans to deliver.
Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine):
Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!
In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware.
If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.
Now look at a line from our initial article on AMD Mantle when announced at its Hawaii tech day event:
It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.
This is all sounding very familiar. It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX. Likely due in no small part to the push of AMD and Mantle's development, an updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.
Is it time again for innovation with DirectX?
First and foremost, what does this do for AMD's Mantle in the near or distant future? For now, BF4 will still include Mantle support, as will games like Thief (update pending), but going forward, if these DX12 changes are as specific as I am being led to believe, it would be hard to see anyone really sticking with the AMD-only route. Of course, if DX12 doesn't really address the performance and overhead issues in the same way that Mantle does, then all bets are off and we are back to square one.
Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL:
Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.
This description seems to point to new or lesser-known programming methods that can be used with today's OpenGL to lower overhead without the need for custom APIs or even DX12. These could be new vendor extensions or possibly a new revision of OpenGL - we'll find out next month.
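For a taste of what those techniques look like in practice, the snippet below sketches two of the likely candidates: persistent-mapped buffers (ARB_buffer_storage, core in OpenGL 4.4) and multi-draw indirect (ARB_multi_draw_indirect, core in 4.3), which together let an engine stream data and issue thousands of draws with a handful of GL calls per frame. It assumes an already-created GL 4.4 context and a function loader such as GLEW; the buffer sizes and draw counts are placeholders.

```cpp
#include <GL/glew.h>  // assumes a GL 4.4 context already created by the app

// Sketch of two low-driver-overhead OpenGL techniques:
// 1) a persistent-mapped buffer: map once, write every frame, never remap;
// 2) multi-draw indirect: one GL call submits many draws from GPU memory.
void setupAndDraw(GLsizeiptr bufSize, GLuint indirectBuf, GLsizei drawCount) {
    // (1) Persistent mapping (ARB_buffer_storage, core in OpenGL 4.4).
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;
    glBufferStorage(GL_ARRAY_BUFFER, bufSize, nullptr, flags);
    void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, bufSize, flags);
    // The app now writes vertex/instance data through 'ptr' every frame,
    // using fences (glFenceSync) or rotating buffer regions to avoid
    // stomping on data the GPU is still reading -- no per-frame remaps.
    (void)ptr;

    // (2) Multi-draw indirect (ARB_multi_draw_indirect, core in OpenGL 4.3):
    // draw parameters live in a GPU buffer, so many draws cost one call.
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, drawCount, 0);
}
```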
All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March. Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers? How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)? What time frame does Microsoft have on DX12? Does this save NVIDIA from any more pressure to build its own custom API?
Gaming continues to be the driving factor of excitement and innovation for the PC! Stay tuned for an exciting spring!