Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 5, 2014 - 08:28 PM | Scott Michaud
Tagged: qualcomm, nvidia, microsoft, Intel, gdc 14, GDC, DirectX 12, amd
The announcement of DirectX 12 has been given a date and time via a blog post on the Microsoft Developer Network (MSDN) blogs. On March 20th at 10:00am (I assume PDT), a few days into the 2014 Game Developers Conference in San Francisco, California, the upcoming specification should be detailed for attendees. Apparently, four GPU manufacturers will also be involved with the announcement: AMD, Intel, NVIDIA, and Qualcomm.
As we reported last week, DirectX 12 is expected to target increased hardware control and decreased CPU overhead for added performance in "cutting-edge 3D graphics" applications. Really, this is the best time for it. Graphics processors have mostly settled into their role as highly efficient co-processors for parallel data, with some specialized logic for geometry and video tasks. A new specification can relax the demands placed on video drivers and thus keep the GPU (or GPUs, in Mantle's case) loaded and utilized.
But, to me, the most interesting part of this announcement is the nod to Qualcomm. Microsoft values DirectX as leverage over other x86 and ARM-based operating systems. With Qualcomm, clearly Microsoft believes that either Windows RT or Windows Phone will benefit from the API's next version. While it will probably make PC gamers nervous, mobile platforms will benefit most from reducing CPU overhead, especially if it can be spread out over multiple cores.
Honestly, that is fine by me. As long as Microsoft returns to treating the PC as a first-class citizen, I do not mind them helping mobile, too. We will definitely keep you up to date as we know more.
Subject: Graphics Cards | March 3, 2014 - 10:33 PM | Tim Verry
Tagged: nvidia, maxwell, gtx 750, evga
EVGA recently launched two new GTX 750 graphics cards with 2GB of GDDR5 memory. The new cards include a reference clocked GTX 750 2GB and a factory overclocked GTX 750 2GB SC (Super Clocked).
The new graphics cards are based around NVIDIA’s GTX 750 GPU with 512 Maxwell-architecture CUDA cores. The GTX 750 is the little brother to the GTX 750 Ti we recently reviewed, which has 640 cores. EVGA has clocked the GTX 750 2GB card’s GPU at reference clock speeds of 1020 MHz base and 1085 MHz boost, with memory at the reference speed of 1253 MHz. The “Super Clocked” GTX 750 2GB SC card keeps the memory at reference speed but overclocks the GPU quite a bit to 1215 MHz base and 1294 MHz boost.
| | EVGA GTX 750 2GB | EVGA GTX 750 2GB Super Clocked |
|---|---|---|
| GPU | 512 CUDA cores (Maxwell) | 512 CUDA cores (Maxwell) |
| GPU Base Clock | 1020 MHz | 1215 MHz |
| GPU Boost Clock | 1085 MHz | 1294 MHz |
| Memory | 2 GB GDDR5 @ 1253 MHz (128-bit bus) | 2 GB GDDR5 @ 1253 MHz (128-bit bus) |
| Outputs | 1 x DVI, 1 x HDMI, 1 x DisplayPort | 1 x DVI, 1 x HDMI, 1 x DisplayPort |
Both cards have a 55W TDP, require no PCI-E power connector, and use a single shrouded fan-and-heatsink cooler. The cards are short but occupy two slots. The rear panel hosts one DVI, one HDMI, and one DisplayPort video output along with ventilation slots for the HSF. Both cards also support NVIDIA’s G-Sync technology.
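For the curious, the memory clock and bus width in the table above work out to roughly 80 GB/s of bandwidth. Here is a minimal sketch of that arithmetic, assuming the 1253 MHz figure is the real GDDR5 command clock (GDDR5 moves four bits per pin per clock):

```python
# Rough memory-bandwidth estimate for the GTX 750 2GB cards above.
# Assumption: 1253 MHz is the real clock; GDDR5 is quad data rate.
mem_clock_mhz = 1253                      # from EVGA's reference spec
effective_rate_mtps = mem_clock_mhz * 4   # ~5012 MT/s effective
bus_width_bits = 128

bandwidth_gbs = effective_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9
print(f"~{bandwidth_gbs:.1f} GB/s")       # ~80.2 GB/s
```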
The reference clocked GTX 750 2GB is $129.99 while the factory overclocked model is $139.99. Both cards are similar to their respective predecessors except for the additional 1GB of GDDR5 memory, which comes at a $10 premium and should help a bit at higher resolutions.
Subject: General Tech | February 27, 2014 - 03:48 PM | Ken Addison
Tagged: x240, video, tegra, podcast, origin, nvidia, MWC, litecoin, Lenovo, Intel, icera, eos 17 slx, dogecoin, bitcoin, atom, amd, 750ti
PC Perspective Podcast #289 - 02/27/2014
Join us this week as we discuss the Origin PC EOS-17 SLX Gaming Laptop, Mining on a 750Ti, News from MWC and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
Week in Review:
0:21:48 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
News items of interest:
Hardware/Software Picks of the Week:
Subject: Graphics Cards | February 26, 2014 - 06:17 PM | Ryan Shrout
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd
UPDATE (2/27/14): AMD sent over a statement today after seeing our story.
AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.
Credit goes to Scott and his reader at The Tech Report for spotting this interesting news today!
It appears that changes to both DirectX and OpenGL will be announced at next month's Game Developers Conference in San Francisco. According to information found in the session details, both APIs are trying to steal some of the thunder from AMD's Mantle, recently released with the Battlefield 4 patch. Mantle is an API built by AMD to enable more direct (lower level) access to its GCN graphics hardware, allowing developers to write games that are more efficient and deliver better performance to PC gamers.
From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):
For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.
However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.
Come learn our plans to deliver.
Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine):
Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!
In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware.
If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.
Now look at a line from our initial article on AMD Mantle, written when it was announced at AMD's Hawaii tech day event:
It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.
This all sounds very familiar. It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX, likely due in no small part to the push from AMD and Mantle's development. An updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.
Is it time again for innovation with DirectX?
First and foremost, what does this do for AMD's Mantle in the near or distant future? For now, BF4 will still include Mantle support as will games like Thief (update pending) but going forward, if these DX12 changes are as specific as I am being led to believe, then it would be hard to see anyone really sticking with the AMD-only route. Of course, if DX12 doesn't really address the performance and overhead issues in the same way that Mantle does then all bets are off and we are back to square one.
Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL:
Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.
This description seems to indicate more about new or lesser known programming methods that can be used with OpenGL to lower overhead without the need for custom APIs or even DX12. This could be new modules from vendors or possibly a new revision to OpenGL - we'll find out next month.
All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March. Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers? How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)? What time frame does Microsoft have on DX12? Does this save NVIDIA from any more pressure to build its own custom API?
Gaming continues to be the driving factor of excitement and innovation for the PC! Stay tuned for an exciting spring!
Subject: Processors, Mobile | February 24, 2014 - 03:00 AM | Ryan Shrout
Tagged: wiko, Tegra 4i, tegra, nvidia, MWC 14, MWC
NVIDIA has been teasing the Tegra 4i for quite some time - the integration of a Tegra 4 SoC with the acquired NVIDIA i500 LTE modem technology. In truth, the Tegra 4i is a totally different processor than Tegra 4. While the big-boy Tegra 4 is a 4+1 Cortex-A15 chip with 72 GPU cores, the Tegra 4i is a 4+1 Cortex-A9 design with 60 GPU cores.
NVIDIA and up-and-coming European phone provider Wiko are announcing at Mobile World Congress the first Tegra 4i smartphone: Wax. That's right, the Wiko Wax.
Here is the full information from NVIDIA:
NVIDIA LTE Modem Makes Landfall in Europe, with Launch of Wiko Tegra 4i LTE Smartphone
Wiko Mobile, France’s fastest growing local phonemaker, has just launched Europe’s first Tegra 4i smartphone.
Tegra 4i – our first integrated LTE mobile processor – combines a 60-core GPU and our own LTE modem to bring up to 2X higher performance than competing phone chips.
It helps the Wiko WAX deliver fast web browsing, best-in-class gaming performance, smooth video playback and great battery life.
Launched at Mobile World Congress, in Barcelona, the Wiko WAX phone features a 4.7-inch 720p display, 8MP rear camera and LTE / HSPA+ support.
The phone will be available throughout Europe – including France, Spain, Portugal, Germany, Italy, UK and Belgium – starting in April.
Within two short years, Wiko has become a major player by providing unlocked phones with sophisticated design, outstanding performance and the newest technologies. It has more than two million users in France and is expanding overseas fast.
Wiko WAX comes pre-installed with TegraZone – NVIDIA’s free app that showcases the best games optimized for the Tegra processor.
As a refresher, Tegra 4i includes a quad-core CPU and fifth battery saver core, and a version of the NVIDIA i500 LTE modem optimized for integration.
The result is an extremely power efficient, compact, high-performance mobile processor that unleashes performance and capability usually only available in costly super phones.
Subject: Mobile | February 21, 2014 - 09:00 AM | Ryan Shrout
Tagged: tegra note 7, tegra 4, nvidia, LTE, i500
In November of last year NVIDIA and some of its partners around the world released the Tegra Note 7, a 7-in tablet that was powered by NVIDIA's Tegra 4 SoC. I posted a review of the unit on its launch and found that the Note 7 offered some impressive features including a high quality screen, stylus input and high performance graphics for a cost of just $199. Users that were looking for a budget priced Android tablet that didn't skimp on features found a perfect home with the Tegra Note 7.
In preparation for Mobile World Congress in Barcelona next week, NVIDIA is announcing a new model of the Tegra Note 7 that adds an LTE modem. The appropriately named Tegra Note 7 LTE still includes the full performance Tegra 4 SoC but adds to it the NVIDIA i500 software LTE modem, which enables support for the LTE and HSPA+ bands in the US. That means you can expect support on AT&T and T-Mobile networks. This is NOT the Tegra 4i SoC that integrates the i500 controller directly on die; here the SoC and modem are two distinct chips.
The rest of the specifications of the Tegra Note 7 LTE remain the same as the previous model. A 7-in 1280x800 resolution screen, front facing stereo speakers, front and rear cameras, chisel and brush tipped stylus and more. The price of this new model will be $299 and it should be available "in the 2nd quarter." That is a $100 markup over the current Tegra Note 7 that is WiFi only. The Google Nexus 7 only has an $80 premium for the LTE-enabled option.
Also announced with the Tegra Note 7 LTE is the availability of the 4.4.2 KitKat Android update for all Tegra Note 7 devices. Along with the Android OS tweaks and updates you'll get support for the NVIDIA Gamepad Mapper to enable touch-based games to work on an attached controller.
Another solid OS update for existing Tegra Note 7 devices and LTE data support in a new model perk up the NVIDIA tablet line quite a bit. With MWC kicking into high gear in the next few days, I am sure we will see numerous new competitors in this 7-in tablet market, so we'll have to reserve judgment on the Note 7's continued placement in the market.
Subject: General Tech, Graphics Cards | February 20, 2014 - 05:45 PM | Ken Addison
Tagged: nvidia, mining, maxwell, litecoin, gtx 750 ti, geforce, dogecoin, coin, bitcoin, altcoin
As we have talked about on several different occasions, Altcoin mining (anything that is NOT Bitcoin specifically) is a force on the current GPU market whether we like it or not. Traditionally, miners have bought only AMD-based GPUs, due to their performance advantage over the NVIDIA competition. However, with continued development of the cudaMiner application over the past few months, NVIDIA cards have been gaining performance in Scrypt mining.
The biggest performance change we've seen yet has come with a new version of cudaMiner released yesterday. This new version (2014-02-18) brings initial support for the Maxwell architecture, which was just released yesterday in the GTX 750 and 750 Ti. With support for Maxwell, mining starts to become a more compelling option with this new NVIDIA GPU.
With the new version of cudaMiner on the reference version of the GTX 750 Ti, we were able to achieve a hash rate of 263 KH/s, which is impressive compared to the previous-generation, Kepler-based GTX 650 Ti, which tops out at about 150 KH/s.
As you may know from our full GTX 750 Ti review, the GM107 overclocks very well. We were able to push our sample to the highest configurable offset of +135 MHz, with an additional 500 MHz added to the memory frequency and a 31 mV bump to the voltage offset. All of this combined for a ~1200 MHz clock speed while mining and an additional 40 KH/s or so of performance, bringing us to just under 300 KH/s with the 750 Ti.
As we compare the 750 Ti to AMD GPUs and previous-generation NVIDIA GPUs, we start to see how impressively this card stacks up considering its $150 MSRP. For less than half the price of a GTX 770, and roughly the same price as an R7 260X, you can achieve the same performance.
When we look at power consumption based on the TDP of each card, this comparison only becomes more impressive. At 60W, there is no card that comes close to the performance of the 750 Ti when mining. This means you will spend less to run a 750 Ti than a R7 260X or GTX 770 for roughly the same hash rate.
Taking a look at the performance per dollar ratings of these graphics cards, we see the two top performers are the AMD R7 260X and our overclocked GTX 750 Ti.
However, when looking at the performance per watt differences of the field, the GTX 750 Ti looks even more impressive. While most miners may think they don't care about power draw, it can help your bottom line. Being able to buy a smaller, less expensive power supply moves up the payoff date for the hardware. This also bodes well for future Maxwell based graphics cards that we will likely see released later in 2014.
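To make that bottom-line argument concrete, here is a back-of-the-envelope sketch. The GTX 750 Ti numbers (263 KH/s, 60 W, $150) come from this article; the R7 260X and GTX 770 TDPs and prices below, along with the coin revenue and electricity rates, are illustrative assumptions only.

```python
# Rough mining economics: hash rate per watt and a naive payoff estimate.
cards = {
    #             KH/s, watts, price ($)
    "GTX 750 Ti": (263,  60, 150),   # figures from the article
    "R7 260X":    (260, 115, 140),   # assumed TDP and price
    "GTX 770":    (260, 230, 330),   # assumed TDP and price
}

usd_per_khs_day = 0.02   # assumed revenue per KH/s per day
usd_per_kwh = 0.12       # assumed electricity rate

for name, (khs, watts, price) in cards.items():
    revenue = khs * usd_per_khs_day                # $/day earned
    power_cost = watts / 1000 * 24 * usd_per_kwh   # $/day in electricity
    net = revenue - power_cost
    payoff_days = price / net if net > 0 else float("inf")
    print(f"{name:>11}: {khs / watts:5.2f} KH/s per W, "
          f"${net:5.2f}/day net, payoff in ~{payoff_days:.0f} days")
```

With roughly equal hash rates, the card that sips the least power (and needs the cheapest PSU) pays for itself first; adjust the assumed rates to match your own situation.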
Subject: Graphics Cards | February 19, 2014 - 04:43 PM | Jeremy Hellstrom
Tagged: geforce, gm107, gpu, graphics, gtx 750 ti, maxwell, nvidia, video
We finally saw Maxwell yesterday, with a new design for the SMs, called SMM, each of which consists of four blocks of 32 dedicated, non-shared CUDA cores. In theory that should allow NVIDIA to pack more SMMs onto a card than it could with the previous SMX units. This new design debuted on a $150 card, which means we don't really get to see what the architecture is capable of yet. At that price it competes with AMD's R7 260X and R7 265, at least if you can find them at their MSRPs rather than at inflated cryptocurrency-driven prices. Legit Reviews compared two overclocked GTX 750 Ti cards against those two AMD cards, as well as the previous-generation GTX 650 Ti Boost, across a wide selection of games to see how the new card stacks up, which you can read here.
That is of course after you read Ryan's full review.
"NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are very interesting to use as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and have decided to launch entry-level discrete graphics cards with the new technology first in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!"
Here are some more Graphics Card articles from around the web:
- MSI GTX 750 Ti Gaming Video Card Review @HiTech Legion
- NVIDIA GeForce GTX 750 Ti @ Benchmark Reviews
- ASUS GTX 750 OC 1 GB @ techPowerUp
- MSI GTX 750 Ti Gaming 2 GB @ techPowerUp
- NVIDIA GeForce GTX 750Ti the Arrival of Maxwell @HiTech Legion
- Palit GTX 750 Ti StormX Dual 2 GB @ techPowerUp
- The GTX 750 Ti Review; Maxwell Arrives @ Hardware Canucks
- Nvidia GeForce GTX 750 Ti vs. AMD Radeon R7 265 @ Legion Hardware
- MSI GTX750Ti OC Twin Frozr @ Kitguru
- NVIDIA GeForce GTX 750 Ti 2 GB @ techPowerUp
- NVIDIA GeForce GTX 750 Ti "Maxwell" On Linux @ Phoronix
- A quick look at Mantle on AMD's Kaveri APU @ The Tech Report
- Sapphire Radeon R9 Tri-X OC video card @ Hardwareoverclock
- AMD Radeon R9 290: Still Not Good For Linux Users @ Phoronix
- AMD Radeon R7 265 2GB Video Card Review @ Legit Reviews
- Sapphire Radeon R7 260X OC 2GB Graphics Card Review @ Techgage
- XFX Double Dissipation R9 280X @ [H]ard|OCP
An Upgrade Project
When NVIDIA started talking to us about the new GeForce GTX 750 Ti graphics card, one of the key points it emphasized was the potential for this first-generation Maxwell GPU to be used to upgrade smaller form factor or OEM PCs. Without the need for an external power connector, the GTX 750 Ti promised a clear performance delta over integrated graphics with minimal cost and minimal power consumption, or so the story went.
Eager to put this theory to the test, we decided to put together a project looking at the upgrade potential of off the shelf OEM computers purchased locally. A quick trip down the road to Best Buy revealed a PC sales section that was dominated by laptops and all-in-ones, but with quite a few "tower" style desktop computers available as well. We purchased three different machines, each at a different price point, and with different primary processor configurations.
The lucky winners included a Gateway DX4885, an ASUS M11BB, and a Lenovo H520.
Subject: General Tech, Graphics Cards | February 18, 2014 - 09:03 AM | Scott Michaud
Tagged: nvidia, gtx titan black, geforce titan, geforce
NVIDIA has just announced the GeForce GTX Titan Black. Based on the full high-performance Kepler (GK110) chip, it is mostly expected to be a lower cost development platform for GPU computing applications. All 2,880 single precision (FP32) CUDA cores and 960 double precision (FP64) CUDA cores are unlocked, yielding 5.1 TeraFLOPs of single precision and roughly 1.7 TeraFLOPs of double precision performance. The chip contains 1,536 KB of L2 cache and will be paired with 6GB of video memory on the board.
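As a quick sanity check on those figures, peak throughput works out to cores × 2 FLOPs (one fused multiply-add per clock) × clock speed. A minimal sketch, assuming the card's 889 MHz reference base clock (not stated in the announcement):

```python
# Peak-throughput estimate for GK110 in the GTX Titan Black.
# Each CUDA core retires one fused multiply-add (2 FLOPs) per clock.
base_clock_ghz = 0.889   # assumed reference base clock
fp32_cores = 2880
fp64_cores = 960

fp32_tflops = fp32_cores * 2 * base_clock_ghz / 1000
fp64_tflops = fp64_cores * 2 * base_clock_ghz / 1000
print(f"FP32: ~{fp32_tflops:.1f} TFLOPS, FP64: ~{fp64_tflops:.1f} TFLOPS")
# -> FP32: ~5.1 TFLOPS, FP64: ~1.7 TFLOPS
```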
The original GeForce GTX Titan launched last year, almost to the day. Also based on the GK110 design, it featured full double precision performance with only one SMX disabled. Of course, no component at the time contained a fully-enabled GK110 processor. The first product with all 15 SMX units active was not realized until the Quadro K6000, announced in July but only available in the fall. It was followed by the GeForce GTX 780 Ti (with a fraction of its FP64 performance) in November, and the fully powered Tesla K40 less than two weeks after that.
For gaming applications, this card is expected to perform comparably to the GTX 780 Ti... unless you can find a use for the extra 3GB of memory. Games see little benefit from the extra 64-bit floating point performance because the majority of their calculations are done at 32-bit precision.
The NVIDIA GeForce GTX Titan Black is available today at a price of $999.