Win a GeForce GTX 750 Ti by Showing Off Your Upgrade-Worthy Rig!

Subject: General Tech, Graphics Cards | March 11, 2014 - 06:06 PM |
Tagged: nvidia, gtx 750 ti, giveaway, geforce, contest

UPDATE:  We have our winners! Congrats to the following users who submitted upgrade-worthy PCs and will each be shipped a free GeForce GTX 750 Ti courtesy of NVIDIA! 

  • D. Todorov
  • C. Fogg
  • K. Rowe
  • K. Froehlich
  • D. Aarssen

When NVIDIA launched the GeForce GTX 750 Ti this month, it convinced us to give this highly efficient graphics card a chance to upgrade some off-the-shelf, underpowered PCs.  In a story we published just a week ago, we converted three pretty basic, pretty boring computers into impressive gaming PCs by adding in the $150 Maxwell-based graphics card.

gateway-bioshock.png

If you missed the video we did on the upgrade process and results, check it out here.

Now we are going to give our readers the chance to do the same thing to their PCs.  Do you have a computer in your home that is just not up to the task of playing the latest PC games?  Then this contest is right up your alley.

IMG_9552.JPG

Prizes: 1 of 5 GeForce GTX 750 Ti Graphics Cards

Your Task: You are going to have to do a couple of things to win one of these cards in our "Upgrade Story Giveaway."  We want to make sure these cards go to those of you who can really use them, so here is what we are asking for (you can find the form to fill out right here):

  1. Show us your PC that is in need of an upgrade!  Take a picture of your machine with this contest page on the screen or something similar and share it with us.  You can use Imgur.com to upload your photo if you need some place to put it.  An inside shot would be good as well.  Place the URL for your image in the appropriate field in the form below.
  2. Show us your processor and integrated graphics that need some help!  That means you can use a program like CPU-Z to view the processor in your system and then GPU-Z to show us the graphics setup.  Take a screenshot of both of these programs so we can see what hardware you have that needs more power for PC gaming!  Place the URL for that image in the correct field below.
  3. Give us your name and email address so we can contact you for more information if you win!
  4. Leave us a comment below to let us know why you think you should win!!
  5. Subscribing to our PCPer Live! mailing list or even our PCPer YouTube channel wouldn't hurt either...

That's pretty much it!  We'll run this promotion for 2 weeks with a conclusion date of March 13th. That should give you plenty of time to get your entry in.

Good luck!!

Author:
Manufacturer: Various

1920x1080, 2560x1440, 3840x2160

Join us on March 11th at 9pm ET / 6pm PT for a LIVE Titanfall Game Stream!  You can find us at http://www.pcper.com/live.  You can subscribe to our mailing list to be alerted whenever we have a live event!!

We canceled the event due to the instability of Titanfall servers.  We'll reschedule soon!!

With the release of Respawn's Titanfall upon us, many potential PC gamers are going to be looking for suggestions on a parts list targeted at the perfect Titanfall experience.  The good news is that, even with a fairly low investment in PC hardware, gamers will find the PC version of this title is definitely the premier way to play, as the compute power of the Xbox One just can't compete.

titanfallsystem.jpg
 

In this story we'll present three different build suggestions, each addressing a different target resolution along with better image quality settings than the Xbox One can offer.  We have options for 1080p (the best the Xbox One can do), 2560x1440, and even 3840x2160, better known as 4K.  In truth, the graphics horsepower required by Titanfall isn't overly extreme, so an entire PC build coming in under $800, including a full copy of Windows 8.1, is easy to accomplish.

Target 1: 1920x1080

First up is old reliable, the 1920x1080 resolution that most gamers still have on their primary gaming display.  That could be a home theater style PC hooked up to a TV or monitors in sizes up to 27-in.  Here is our build suggestion, followed by our explanations.

  Titanfall 1080p Build
  Processor: Intel Core i3-4330 - $137
  Motherboard: MSI H87-G43 - $96
  Memory: Corsair Vengeance LP 8GB 1600 MHz (2 x 4GB) - $89
  Graphics Card: EVGA GeForce GTX 750 Ti - $179
  Storage: Western Digital Blue 1TB - $59
  Case: Corsair 200R ATX Mid Tower Case - $72
  Power Supply: Corsair CX 500 watt - $49
  OS: Windows 8.1 OEM - $96
  Total Price: $781 - Amazon Full Cart

Our first build comes in at $781 and includes some incredibly competent gaming hardware for that price.  The Intel Core i3-4330 is a dual-core, HyperThreaded processor that provides more than enough capability to push Titanfall and all other major PC games on the market.  The MSI H87 motherboard lacks some of the advanced features of the Z87 platform but does the job at a lower cost.  8GB of Corsair memory, though not running at a high clock speed, provides more than enough capacity for all the applications you could want to run.
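If you want to keep tabs on a parts list like this while prices move around, here is a minimal sketch that totals the components quoted above. The prices are the ones listed in the table at the time of writing; Amazon prices fluctuate daily, so the live cart total may land a few dollars away from this sum.

```python
# Minimal parts-list tracker for the 1080p Titanfall build.
# Prices are the ones quoted above and will drift as Amazon updates them,
# so treat the computed total as an estimate, not the exact cart price.
parts = {
    "Intel Core i3-4330": 137,
    "MSI H87-G43": 96,
    "Corsair Vengeance LP 8GB 1600 MHz (2 x 4GB)": 89,
    "EVGA GeForce GTX 750 Ti": 179,
    "Western Digital Blue 1TB": 59,
    "Corsair 200R ATX Mid Tower": 72,
    "Corsair CX 500W": 49,
    "Windows 8.1 OEM": 96,
}

for name, price in parts.items():
    print(f"{name:<45} ${price}")
print(f"{'Estimated total':<45} ${sum(parts.values())}")
```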

Continue reading our article on building a gaming PC for Titanfall!!

Author:
Manufacturer: EVGA

It's been a while...

EVGA has been around for quite some time now.  They have turned into NVIDIA’s closest North American partner after the collapse of the original VisionTek.  At nearly every trade show or gaming event, EVGA is closely associated with whatever NVIDIA presence is there.  In the past EVGA focused primarily on using NVIDIA reference designs for PCB and cooling, and would branch out now and then with custom or semi-custom watercooling solutions.

evga_780_01.jpg

A very svelte and minimalist design for the shroud.  I like it.

The last time I actually reviewed an EVGA product was way back in May of 2006.  I took a look at the 7600 GS, which was a passively cooled card.  Oddly enough, that card is sitting right in front of me as I write this.  Unfortunately, that particular card has a set of blown caps on it and no longer works.  Considering that the card has been in constant use since 2006, I would say that it held up very well for those eight years!

EVGA has been expanding their product lineup to be able to handle the highs and lows of the PC market.  They have started manufacturing motherboards, cases, and power supplies to differentiate themselves and broaden their product portfolio.  We know from past experience that companies relying on one type of product from a single manufacturer (GPUs in this particular case) can run into real issues if demand drops dramatically due to competitive disadvantages.  EVGA has also taken a much more aggressive approach to differentiating their products while keeping them within a certain budget.

The latest generation of GTX 700 based cards has seen the introduction of the EVGA ACX cooling solutions.  These dual-fan coolers are a big step up from the reference design and put EVGA on par with competing products from Asus and MSI.  EVGA does make some tradeoffs in comparison, but these are fairly minimal when considering the entire package.

Click here to read the entire review!

Microsoft, Along with AMD, Intel, NVIDIA, and Qualcomm, Will Announce DirectX 12 at GDC 2014

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 5, 2014 - 05:28 PM |
Tagged: qualcomm, nvidia, microsoft, Intel, gdc 14, GDC, DirectX 12, amd

The announcement of DirectX 12 has been given a date and time via a blog post on the Microsoft Developer Network (MSDN) blogs. On March 20th at 10:00am (I assume PDT), a few days into the 2014 Game Developers Conference in San Francisco, California, the upcoming specification should be detailed for attendees. Apparently, four GPU manufacturers will also be involved with the announcement: AMD, Intel, NVIDIA, and Qualcomm.

microsoft-dx12-gdc-announce.jpg

As we reported last week, DirectX 12 is expected to target increased hardware control and decreased CPU overhead for added performance in "cutting-edge 3D graphics" applications. Really, this is the best time for it. Graphics processors have mostly settled into being highly efficient co-processors for parallel data, with some specialized logic for geometry and video tasks. A new specification can relax the demands on video drivers and thus keep the GPU (or GPUs, in Mantle's case) loaded and utilized.
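To put some rough intuition behind "decreased CPU overhead," here's a back-of-the-envelope model. The per-draw-call costs below are made-up round numbers for illustration, not measurements of DirectX, Mantle, or any actual driver; the point is simply that shrinking the CPU cost per draw call multiplies how many draws fit into a 60 FPS frame.

```python
# Illustrative draw-call budget model. The per-call CPU costs are assumed
# round numbers, not measurements of any real API or driver.
FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame at 60 FPS


def max_draw_calls(per_call_overhead_us, cpu_share=0.5):
    """Draw calls that fit if the API/driver gets `cpu_share` of the frame's CPU time."""
    budget_us = FRAME_BUDGET_MS * 1000.0 * cpu_share
    return int(budget_us / per_call_overhead_us)


for label, cost_us in [("high-overhead path (assumed 40 us/draw)", 40.0),
                       ("low-overhead path (assumed 5 us/draw)", 5.0)]:
    print(f"{label}: ~{max_draw_calls(cost_us):,} draws per frame")
```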

But, to me, the most interesting part of this announcement is the nod to Qualcomm. Microsoft values DirectX as leverage over other x86 and ARM-based operating systems. With Qualcomm, clearly Microsoft believes that either Windows RT or Windows Phone will benefit from the API's next version. While it will probably make PC gamers nervous, mobile platforms will benefit most from reducing CPU overhead, especially if it can be spread out over multiple cores.

Honestly, that is fine by me. As long as Microsoft returns to treating the PC as a first-class citizen, I do not mind them helping mobile, too. We will definitely keep you up to date as we know more.

Source: MSDN Blogs

EVGA Launches GTX 750 and GTX 750 SC With 2GB GDDR5

Subject: Graphics Cards | March 3, 2014 - 07:33 PM |
Tagged: nvidia, maxwell, gtx 750, evga

EVGA recently launched two new GTX 750 graphics cards with 2GB of GDDR5 memory. The new cards include a reference clocked GTX 750 2GB and a factory overclocked GTX 750 2GB SC (Super Clocked).

EVGA GTX 750 2GB GDDR5 GPU.jpg

The new graphics cards are based around NVIDIA’s GTX 750 GPU with 512 Maxwell architecture CUDA cores. The GTX 750 is the little brother to the GTX 750 Ti we recently reviewed which has 640 cores. EVGA has clocked the GTX 750 2GB card’s GPU at reference clockspeeds of 1020 MHz base and 1085 MHz boost and memory at a reference speed of 1253 MHz. The “Super Clocked” GTX 750 2GB SC card keeps the memory at reference speeds but overclocks the GPU quite a bit to 1215 MHz base and 1294 MHz boost.

  EVGA GTX 750 2GB vs. EVGA GTX 750 2GB Super Clocked
  GPU: 512 CUDA cores (Maxwell) on both
  GPU Base Clock: 1020 MHz (reference) / 1215 MHz (SC)
  GPU Boost Clock: 1085 MHz (reference) / 1294 MHz (SC)
  Memory: 2 GB GDDR5 @ 1253 MHz on a 128-bit bus (both)
  I/O: 1 x DVI, 1 x HDMI, 1 x DisplayPort (both)
  TDP: 55W (both)
  Price: $129.99 (reference) / $139.99 (SC)

Both cards have a 55W TDP sans any PCI-E power connector and utilize a single shrouded fan heatsink. The cards are short but occupy two PCI slots. The rear panel hosts one DVI, one HDMI, and one DisplayPort video output along with ventilation slots for the HSF. Further, the cards both support NVIDIA’s G-Sync technology.

EVGA GTX 750 2GB GDDR5 SC.jpg

The reference clocked GTX 750 2GB is $129.99 while the factory overclocked model is $139.99. Both cards are similar to their respective predecessors except for the additional 1GB of GDDR5 memory, which comes at a $10 premium and should help a bit at higher resolutions.

Source: EVGA

Podcast #289 - Origin PC EOS-17 SLX Gaming Laptop, Mining on a 750Ti, News from MWC and more!

Subject: General Tech | February 27, 2014 - 12:48 PM |
Tagged: x240, video, tegra, podcast, origin, nvidia, MWC, litecoin, Lenovo, Intel, icera, eos 17 slx, dogecoin, bitcoin, atom, amd, 750ti

PC Perspective Podcast #289 - 02/27/2014

Join us this week as we discuss the Origin PC EOS-17 SLX Gaming Laptop, Mining on a 750Ti, News from MWC and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

 
This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset!
 
Program length: 1:17:30
  1. Week in Review:
  2. 0:21:48 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
  3. News items of interest:
  4. Hardware/Software Picks of the Week:
  5. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

DirectX 12 and a new OpenGL to challenge AMD Mantle coming at GDC?

Subject: Graphics Cards | February 26, 2014 - 03:17 PM |
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd

UPDATE (2/27/14): AMD sent over a statement today after seeing our story.  

AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.

Credit over to Scott and his reader at The Tech Report for spotting this interesting news today!!

It appears that DirectX and OpenGL are going to be announcing some changes at next month's Game Developers Conference in San Francisco.  According to some information found in the session details, both APIs are trying to steal some of the thunder from AMD's Mantle, recently released with the Battlefield 4 patch.  Mantle is an API built by AMD to enable more direct (lower-level) access to its GCN graphics hardware, allowing developers to write games that are more efficient and provide better performance for the PC gamer.

mantle.jpg

From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):

For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.

However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.

Come learn our plans to deliver.

Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine): 

Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!

In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware. 

If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.

Now look at a line from our initial article on AMD Mantle when announced at its Hawaii tech day event:

It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.

This is all sounding very familiar.  It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX, likely due in no small part to the push of AMD and Mantle's development.  An updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.

olddirectx.jpg

Is it time again for innovation with DirectX?

First and foremost, what does this do for AMD's Mantle in the near or distant future?  For now, BF4 will still include Mantle support as will games like Thief (update pending) but going forward, if these DX12 changes are as specific as I am being led to believe, then it would be hard to see anyone really sticking with the AMD-only route.  Of course, if DX12 doesn't really address the performance and overhead issues in the same way that Mantle does then all bets are off and we are back to square one.

Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL:

Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.

This description seems to indicate more about new or lesser known programming methods that can be used with OpenGL to lower overhead without the need for custom APIs or even DX12.  This could be new modules from vendors or possibly a new revision to OpenGL - we'll find out next month.

All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March.  Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers?  How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)?  What time frame does Microsoft have on DX12?  Does this save NVIDIA from any more pressure to build its own custom API?

Gaming continues to be the driving factor of excitement and innovation for the PC!  Stay tuned for an exciting spring!

Source: Tech Report

MWC 2014: NVIDIA and Wiko Release First Tegra 4i Smartphone

Subject: Processors, Mobile | February 24, 2014 - 12:00 AM |
Tagged: wiko, Tegra 4i, tegra, nvidia, MWC 14, MWC

NVIDIA has been teasing the Tegra 4i for quite some time - the integration of a Tegra 4 SoC with NVIDIA's i500 LTE modem technology, which came out of its Icera acquisition.  In truth, the Tegra 4i is a totally different processor than the Tegra 4.  While the big-boy Tegra 4 is a 4+1 Cortex-A15 chip with 72 GPU cores, the Tegra 4i is a 4+1 Cortex-A9 design with 60 GPU cores.  

t4idie.png

NVIDIA and up-and-coming European phone provider Wiko are announcing at Mobile World Congress the first Tegra 4i smartphone: Wax.  That's right, the Wiko Wax.

wax.jpg

Here is the full information from NVIDIA:

NVIDIA LTE Modem Makes Landfall in Europe, with Launch of Wiko Tegra 4i LTE Smartphone

Wiko Mobile, France’s fastest growing local phonemaker, has just launched Europe’s first Tegra 4i smartphone.

Tegra 4i – our first integrated LTE mobile processor – combines a 60-core GPU and our own LTE modem to bring up to 2X higher performance than competing phone chips.

It helps the Wiko WAX deliver fast web browsing, best-in-class gaming performance, smooth video playback and great battery life.

Launched at Mobile World Congress, in Barcelona, the Wiko WAX phone features a 4.7-inch 720p display, 8MP rear camera and LTE / HSPA+ support.

The phone will be available throughout Europe – including France, Spain, Portugal, Germany, Italy, UK and Belgium – starting in April.

Within two short years, Wiko has become a major player by providing unlocked phones with sophisticated design, outstanding performance and the newest technologies. It has more than two million users in France and is expanding overseas fast.

Wiko WAX comes pre-installed with TegraZone – NVIDIA’s free app that showcases the best games optimized for the Tegra processor.

As a refresher, Tegra 4i includes a quad-core CPU and fifth battery saver core, and a version of the NVIDIA i500 LTE modem optimized for integration.

The result is an extremely power efficient, compact, high-performance mobile processor that unleashes performance and capability usually only available in costly super phones.

NVIDIA Updates Tegra Note 7 with LTE model, KitKat 4.4.2

Subject: Mobile | February 21, 2014 - 06:00 AM |
Tagged: tegra note 7, tegra 4, nvidia, LTE, i500

In November of last year NVIDIA and some of its partners around the world released the Tegra Note 7, a 7-in tablet that was powered by NVIDIA's Tegra 4 SoC.  I posted a review of the unit on its launch and found that the Note 7 offered some impressive features including a high quality screen, stylus input and high performance graphics for a cost of just $199.  Users that were looking for a budget priced Android tablet that didn't skimp on features found a perfect home with the Tegra Note 7.

In preparation for Mobile World Congress in Barcelona next week, NVIDIA is announcing a new model of the Tegra Note 7 that adds an LTE modem.  The appropriately named Tegra Note 7 LTE still includes the full performance Tegra 4 SoC but adds the NVIDIA i500 software LTE modem, which enables support for the LTE and HSPA+ bands in the US.  That means you can expect support on AT&T and T-Mobile networks.  This is NOT the Tegra 4i SoC that integrates the i500 controller directly on die; in this implementation they remain two distinct chips.

01.jpg

The rest of the specifications of the Tegra Note 7 LTE remain the same as the previous model: a 7-in 1280x800 screen, front facing stereo speakers, front and rear cameras, a chisel- and brush-tipped stylus and more.  The price of this new model will be $299 and it should be available "in the 2nd quarter."  That is a $100 premium over the current WiFi-only Tegra Note 7; by comparison, the Google Nexus 7 carries only an $80 premium for its LTE-enabled option.

specs.jpg

Also announced with the Tegra Note 7 LTE is the availability of the 4.4.2 KitKat Android update for all Tegra Note 7 devices.  Along with the Android OS tweaks and updates you'll get support for the NVIDIA Gamepad Mapper to enable touch-based games to work on an attached controller.

03.jpg

Another solid OS update for existing Tegra Note 7 devices and LTE data support in a new model perk up the NVIDIA tablet line quite a bit.  With MWC kicking into high gear in the next few days, though, I am sure we will see numerous new competitors in this 7-in tablet market, so we'll have to reserve judgment on the Note 7's continued place in the market.

NVIDIA Coin Mining Performance Increases with Maxwell and GTX 750 Ti

Subject: General Tech, Graphics Cards | February 20, 2014 - 02:45 PM |
Tagged: nvidia, mining, maxwell, litecoin, gtx 750 ti, geforce, dogecoin, coin, bitcoin, altcoin

As we have talked about on several different occasions, altcoin mining (anything that is NOT Bitcoin specifically) is a force on the current GPU market whether we like it or not. Traditionally, miners have bought only AMD-based GPUs due to their performance advantage over the NVIDIA competition. However, with the continued development of the cudaMiner application over the past few months, NVIDIA cards have been gaining performance in Scrypt mining.

The biggest performance change we've seen yet has come with a new version of cudaMiner released yesterday. This new version (2014-02-18) brings initial support for the Maxwell architecture, which was just released yesterday in the GTX 750 and 750 Ti. With support for Maxwell, mining starts to become a more compelling option with this new NVIDIA GPU.

With the new version of cudaMiner on the reference version of the GTX 750 Ti, we were able to achieve a hash rate of 263 KH/s, which is impressive when you compare it to the previous generation, Kepler-based GTX 650 Ti, which tops out at about 150 KH/s.

IMG_9552.JPG

As you may know from our full GTX 750 Ti review, the GM107 overclocks very well. We were able to push our sample to the highest configurable offset of +135 MHz, with an additional 500 MHz added to the memory frequency and a 31 mV bump to the voltage offset. All of this combined for a ~1200 MHz clock speed while mining and an additional 40 KH/s or so of performance, bringing us to just under 300 KH/s with the 750 Ti.
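As a rough sanity check, scrypt hash rate on a GPU tends to scale close to linearly with core clock (the memory overclock contributes as well, which this ignores). Scaling our stock result by the approximate clock bump lands in the same ballpark as what we measured; the clock figures below are the approximate values mentioned above.

```python
# Back-of-the-envelope scaling check: assumes hash rate scales roughly with
# core clock and ignores the memory overclock and voltage bump.
stock_khs = 263          # measured at reference clocks
stock_clock_mhz = 1085   # approximate reference boost clock
oc_clock_mhz = 1200      # approximate observed clock while mining, overclocked

estimated = stock_khs * oc_clock_mhz / stock_clock_mhz
print(f"Estimated overclocked hash rate: ~{estimated:.0f} KH/s")  # ~291 KH/s
```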

perf.png

As we compare the performance of the 750 Ti to AMD GPUs and previous generation NVIDIA GPUs, we start to see how impressively this card stacks up considering its $150 MSRP. For less than half the price of the GTX 770, and roughly the same price as an R7 260X, you can achieve the same performance.

power.png

When we look at power consumption based on the TDP of each card, this comparison only becomes more impressive. At 60W, there is no card that comes close to the performance of the 750 Ti when mining. This means you will spend less to run a 750 Ti than a R7 260X or GTX 770 for roughly the same hash rate.

perfdollar.png

Taking a look at the performance per dollar ratings of these graphics cards, we see the two top performers are the AMD R7 260X and our overclocked GTX 750 Ti.

perfpower.png

However, when looking at the performance per watt differences of the field, the GTX 750 Ti looks even more impressive. While most miners may think they don't care about power draw, it can help your bottom line: being able to buy a smaller, less expensive power supply moves up the payoff date for the hardware.  This also bodes well for future Maxwell-based graphics cards that we will likely see released later in 2014.  
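To make the performance-per-dollar and performance-per-watt comparison concrete, here's a small sketch of the arithmetic. The GTX 750 Ti's hash rate, price, and TDP come from this article; the figures for the other two cards and the electricity rate are placeholder assumptions you should swap for your own numbers.

```python
# Rough mining-economics comparison. The GTX 750 Ti figures are from this
# article; the other cards' hash rates/prices and the electricity rate are
# assumptions -- plug in your own numbers before drawing conclusions.
ELECTRICITY_PER_KWH = 0.12  # assumed $/kWh

cards = {
    # name: (hash rate in KH/s, price in $, TDP in W)
    "GTX 750 Ti":        (263, 150,  60),
    "R7 260X (assumed)": (265, 150, 115),
    "GTX 770 (assumed)": (270, 330, 230),
}

for name, (khs, price, tdp) in cards.items():
    perf_per_dollar = khs / price
    perf_per_watt = khs / tdp
    power_cost_day = tdp / 1000.0 * 24 * ELECTRICITY_PER_KWH
    print(f"{name:<20} {perf_per_dollar:4.2f} KH/s/$   "
          f"{perf_per_watt:4.2f} KH/s/W   ${power_cost_day:.2f}/day in power")
```

The lower daily power cost, on top of a cheaper power supply, is what moves the payoff date up.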

Continue reading our look at Coin Mining performance with the GTX 750 Ti and Maxwell!!