Select GeForce GTX GPUs Now Include Borderlands: The Pre-Sequel

Subject: General Tech | August 13, 2014 - 12:26 PM |
Tagged: borderlands, nvidia, geforce

 

Santa Clara, CA — August 12, 2014 — Get ready to shoot ‘n’ loot your way through Pandora’s moon. Starting today, gamers who purchase select NVIDIA GeForce GTX TITAN, 780 Ti, 780, and 770 desktop GPUs will receive a free copy of Borderlands: The Pre-Sequel, the hotly anticipated new chapter to the multi-award winning Borderlands franchise from 2K and Gearbox Software.

Discover the story behind Borderlands 2’s villain, Handsome Jack, and his rise to power. Taking place between the original Borderlands and Borderlands 2, Borderlands: The Pre-Sequel offers players a whole lotta new gameplay in low gravity.

“If you have a high-end NVIDIA GPU, Borderlands: The Pre-Sequel will offer higher fidelity and higher performance hardware-driven special effects including awesome weapon impacts, moon-shatteringly cool cryo explosions and ice particles, and cloth and fluid simulation that blows me away every time I see it,” said Randy Pitchford, CEO and president of Gearbox Software.

With NVIDIA PhysX technology, you will feel deep space like never before. Get high in low gravity and use new ice and laser weapons to experience destructible levels of mayhem. Check out the latest trailer, which just went live this morning: http://youtu.be/c9a4wr4I1hk

Borderlands: The Pre-Sequel will also stream to your NVIDIA SHIELD tablet or portable. For the first time ever, you can play as Claptrap anywhere by using NVIDIA GameStream technology. You can even livestream and record every fist punch with GeForce ShadowPlay.

Borderlands: The Pre-Sequel will be available on October 14, 2014 in North America and on October 17, 2014 internationally. Borderlands: The Pre-Sequel is not yet rated by the ESRB.

The GeForce GTX and Borderlands: The Pre-Sequel bundle is available starting today from leading e-tailers including Amazon, NCIX, Newegg, and Tiger Direct and system builders including Canada Computers, Digital Storm, Falcon Northwest, Maingear, Memory Express, Origin PC, V3 Gaming, and Velocity Micro. For a full list of participating partners, please visit: www.GeForce.com/GetBorderlands.

Source: NVIDIA

G-SYNC is sweet but far from free

Subject: Displays | August 12, 2014 - 03:36 PM |
Tagged: asus, g-sync, geforce, gsync, nvidia, pg278q, Republic of Gamers, ROG, swift, video

Ryan was not the only one to test the ASUS ROG Swift PG278Q G-Sync monitor; Overclockers Club also received a model to test out. Their impressions of the 27" 2560x1440 TN panel were very similar: once they saw this monitor in action, going back to their 30-inch 60Hz IPS monitor was not as enjoyable as it once was. The only bad thing they could say about the display was the MSRP; $800 is steep for any monitor and makes it rather difficult to even consider getting two or more of them for a multi-display setup.


“When you get down to it, the facts are that even with a TN panel being used for the high refresh rate, the ASUS ROG Swift PG278Q G-Sync monitor delivers great picture quality and truly impressive gaming. I could go on all day long about how smooth each of the games played while testing this monitor, but ultimately not be able to show you without having you sit at the desk with me. No stuttering, no tearing, no lag; it's like getting that new car and having all the sales hype end up being right on the money. When I flip back and forth between my 60Hz monitor and the PC278Q, its like a night and day experience.”


NVIDIA Reveals 64-bit Denver CPU Core Details, Headed to New Tegra K1 Powered Devices Later This Year

Subject: Processors | August 12, 2014 - 01:06 AM |
Tagged: tegra k1, project denver, nvidia, Denver, ARMv8, arm, Android, 64-bit

During GTC 2014 NVIDIA launched the Tegra K1, a new mobile SoC that contains a powerful Kepler-based GPU. Initial processors (and the resultant design wins such as the Acer Chromebook 13 and Xiaomi Mi Pad) utilized four ARM Cortex-A15 cores for the CPU side of things, but later this year NVIDIA is deploying a variant of the Tegra K1 SoC that switches out the four A15 cores for two custom (NVIDIA developed) Denver CPU cores.

Today at the Hot Chips conference, NVIDIA revealed most of the juicy details on those new custom cores announced in January which will be used in devices later this year.

The custom 64-bit Denver CPU cores use a 7-way superscalar design and run a custom instruction set. Denver is a wide but in-order architecture that can issue up to seven operations per clock cycle. NVIDIA uses a custom ISA and on-the-fly binary translation to convert ARMv8 instructions to microcode before execution. A software layer backed by a 128MB cache implements the Dynamic Code Optimization technology: the processor examines and optimizes frequently used ARM code, converts it to the custom instruction set, and stores the converted microcode in that cache for reuse (the cache can be bypassed for infrequently executed code). Using the wider execution engine and Dynamic Code Optimization (which is transparent to ARM developers and does not require updated applications), NVIDIA touts the dual Denver core Tegra K1 as being at least as powerful as the quad- and octo-core competition.
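
To make that flow a little more concrete, here is a minimal, purely illustrative Python sketch of a hot-code translation cache. The function names, hotness threshold, and data structures are assumptions for the sake of the example and do not describe NVIDIA's actual Denver firmware.

```python
# Purely illustrative sketch of a dynamic-code-optimization loop: frequently
# executed code blocks get translated once into "optimized microcode" and cached,
# while cold blocks take a slower path. Everything here is an assumption for
# illustration, not a description of NVIDIA's Denver implementation.

translation_cache = {}   # block id -> translated microcode
exec_counts = {}         # block id -> times the block has been seen
HOT_THRESHOLD = 8        # arbitrary hotness cutoff for this sketch

def translate(block: str) -> str:
    """Stand-in for the ARMv8 -> native-microcode optimizer."""
    return f"microcode({block})"

def run_native(microcode: str) -> str:
    return f"executed {microcode}"

def run_interpreted(block: str) -> str:
    return f"interpreted {block}"

def execute_block(block: str) -> str:
    # Fast path: the block was already optimized and cached.
    if block in translation_cache:
        return run_native(translation_cache[block])
    # Cold path: count executions and optimize once the block becomes hot.
    exec_counts[block] = exec_counts.get(block, 0) + 1
    if exec_counts[block] >= HOT_THRESHOLD:
        translation_cache[block] = translate(block)
        return run_native(translation_cache[block])
    return run_interpreted(block)

# Example: a tight loop quickly crosses the threshold and starts hitting the cache.
for _ in range(10):
    result = execute_block("loop_body@0x4000")
print(result)   # -> executed microcode(loop_body@0x4000)
```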

Further, NVIDIA has claimed that at peak throughput (and in specific situations where application code and DCO can take full advantage of the 7-way execution engine) the Denver-based mobile SoC handily outpaces Intel’s Bay Trail, Apple’s A7 Cyclone, and Qualcomm’s Krait 400 CPU cores. In the results of a synthetic benchmark test provided to The Tech Report, the Denver cores were even challenging Intel’s Haswell-based Celeron 2955U processor. Even keeping in mind that these are NVIDIA-provided numbers and likely the best results one can expect, Denver still looks quite a bit more capable than existing mobile cores. (Note that the Haswell chips would likely pull much farther ahead when presented with applications that cannot be easily executed in-order with limited instruction parallelism).


NVIDIA is ratcheting up mobile CPU performance with its Denver cores, but it is also aiming for an efficient chip and has implemented several power saving tweaks. Beyond the decision to go with an in-order execution engine (with DCO hopefully making up for most of that), the beefy Denver cores reportedly feature low latency power state transitions (e.g. between active and idle states), power gating, and dynamic voltage and clock scaling. The company claims that “Denver's performance will rival some mainstream PC-class CPUs at significantly reduced power consumption.” In real terms, this means that swapping the Tegra K1's quad-core A15 design for two Denver cores should not result in significantly lower battery life. The two K1 variants are said to be pin compatible, so OEMs and developers can easily bring upgraded models to market with the faster Denver cores.


For those curious, in the Tegra K1 the two Denver cores (clocked at up to 2.5GHz) share a 16-way L2 cache, and each core has its own 128KB instruction and 64KB data L1 caches. The 128MB Dynamic Code Optimization cache is held in system memory.

Denver is the first (custom) 64-bit ARM processor for Android (with Apple’s A7 being the first 64-bit smartphone chip), and NVIDIA is working on supporting the next generation Android OS known as Android L.

The dual Denver core Tegra K1 is coming later this year and I am excited to see how it performs. The current K1 chip already has a powerful fully CUDA compliant Kepler-based GPU which has enabled awesome projects such as computer vision and even prototype self-driving cars. With the new Kepler GPU and Denver CPU pairing, I’m looking forward to seeing how NVIDIA’s latest chip is put to work and the kinds of devices it enables.

Are you excited for the new Tegra K1 SoC with NVIDIA’s first fully custom cores?

Source: NVIDIA
Author:
Manufacturer: ASUS

The Waiting Game

NVIDIA G-Sync was announced at a media event held in Montreal way back in October, and promised to revolutionize the way the display and graphics card worked together to present images on the screen. It was designed to remove hitching, stutter, and tearing -- almost completely. Since that fateful day in October of 2013, we have been waiting. Patiently waiting. We were waiting for NVIDIA and its partners to actually release a monitor that utilizes the technology and that can, you know, be purchased.

In December of 2013 we took a look at the ASUS VG248QE monitor, the display for which NVIDIA released a mod kit to allow users who already owned that monitor to upgrade to G-Sync compatibility. It worked, and I even came away impressed. I noted in my conclusion that, “there isn't a single doubt that I want a G-Sync monitor on my desk” and, “my short time with the NVIDIA G-Sync prototype display has been truly impressive…”. That was nearly 7 months ago, and I don’t think anyone at that time really believed it would be THIS LONG before the real monitors began to show up in the hands of gamers around the world.


Since NVIDIA’s October announcement, AMD has been on a marketing path with a technology they call “FreeSync” that claims to be a cheaper, standards-based alternative to NVIDIA G-Sync. They first previewed the idea of FreeSync on a notebook device during CES in January and then showed off a prototype monitor in June during Computex. Even more recently, AMD has posted a public FAQ that gives more details on the FreeSync technology and how it differs from NVIDIA’s creation; it has raised something of a stir with its claims on performance and cost advantages.

That doesn’t change the product that we are reviewing today, of course. The ASUS ROG Swift PG278Q 27-in WQHD display with a 144 Hz refresh rate is truly an awesome monitor. What did change is the landscape between NVIDIA's original announcement and now.

Continue reading our review of the ASUS ROG Swift PG278Q 2560x1440 G-Sync Monitor!!

Acer Unveils Chromebook 13 Powered By NVIDIA Tegra K1 SoC

Subject: General Tech, Mobile | August 11, 2014 - 08:00 AM |
Tagged: webgl, tegra k1, nvidia, geforce, Chromebook, Bay Trail, acer

Today Acer unveiled a new Chromebook powered by an NVIDIA Tegra K1 processor. The aptly named Chromebook 13 is a 13-inch thin and light notebook running Google’s Chrome OS with up to 13 hours of battery life and up to three times the graphical performance of existing Chromebooks using Intel Bay Trail and Samsung Exynos processors.


The Chromebook 13 is 18mm thick and comes in a white plastic fanless chassis that hosts a 13.3” display, full size keyboard, trackpad, and HD webcam. The Chromebook 13 will be available with a 1366x768 or 1920x1080 resolution panel depending on the particular model (more on that below).

Beyond the usual laptop fixtures, external I/O includes two USB 3.0 ports, HDMI video output, an SD card reader, and a combo headphone/mic jack. Acer has placed one USB port on the left side along with the card reader and one USB port next to the HDMI port on the rear of the laptop. Personally, I welcome the HDMI port placement as it means connecting a second display will not result in a cable invading the mousing area should I wish to use a mouse (and it’s even southpaw friendly, Scott!).

The Chromebook 13 looks decent from the outside, but it is the internals where the device gets really interesting. Instead of going with an Intel Bay Trail (or even Celeron/Core i3), Acer has opted to team up with NVIDIA to deliver the world’s first NVIDIA-powered Chromebook.

Specifically, the Chromebook 13 uses an NVIDIA Tegra K1 SoC, up to 4GB of RAM, and up to 32GB of flash storage. The K1 offers up four A15 CPU cores clocked at 2.1GHz and a graphics unit with 192 Kepler-based CUDA cores. Acer rates the Chromebook 13 at 11 hours with the 1080p panel or 13 hours when equipped with the 1366x768 resolution display. Even being conservative, the Chromebook 13 looks to be the new leader in Chromebook battery life (with the previous leader claiming 11 hours).


A graph comparing WebGL performance between the NVIDIA Tegra K1, Intel (Bay Trail) Celeron N2830, Samsung Exynos 5800, and Samsung Exynos 5250. Results courtesy NVIDIA.

The Tegra K1 is a powerful little chip, and it is nice to see NVIDIA get a design win here. NVIDIA claims that the Tegra K1, which is rated at 326 GFLOPS of compute performance, offers up to three times the graphics performance of the Bay Trail N2830 and Exynos 5800 SoCs. Additionally, the K1 reportedly uses slightly less power and delivers higher multi-tasking performance. I’m looking forward to seeing independent reviews in this laptop form factor, and I hope the chip lives up to its promises.
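
As a rough sanity check on that 326 GFLOPS figure, peak single-precision throughput is usually estimated as CUDA cores × 2 FLOPs per fused multiply-add × clock speed. The ~850 MHz GPU clock in the sketch below is an assumption chosen to line up with NVIDIA's quoted number, not an official specification.

```python
# Back-of-the-envelope peak-FLOPS estimate for the Tegra K1's 192-core Kepler GPU.
# The ~850 MHz GPU clock is an assumption picked to match the quoted 326 GFLOPS
# figure; NVIDIA has cited different clocks for different K1 devices.

cuda_cores = 192
flops_per_core_per_clock = 2          # one fused multiply-add counts as 2 FLOPs
gpu_clock_ghz = 0.85                  # assumed clock in GHz

peak_gflops = cuda_cores * flops_per_core_per_clock * gpu_clock_ghz
print(f"peak ~{peak_gflops:.0f} GFLOPS")   # ~326 GFLOPS
```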

The Chromebook 13 is currently up for pre-order and will be available in September starting at $279. The Tegra K1-powered laptop will hit the United States and Europe first, with other countries to follow. Initially, the Europe roll-out will include “UK, Netherlands, Belgium, Denmark, Sweden, Finland, Norway, France, Germany, Russia, Italy, Spain, South Africa and Switzerland.”


Acer is offering three consumer SKUs and one education SKU that will be offered exclusively through a reseller. Please see the chart below for the specifications and pricing.

Acer Chromebook 13 Model        | System Memory (RAM) | Storage (flash) | Display     | MSRP
CB5-311-T9B0                    | 2GB                 | 16GB            | 1920 x 1080 | $299.99
CB5-311-T1UU                    | 4GB                 | 32GB            | 1920 x 1080 | $379.99
CB5-311-T7NN (Base Model)       | 2GB                 | 16GB            | 1366 x 768  | $279.99
Educational SKU (Reseller Only) | 4GB                 | 16GB            | 1366 x 768  | $329.99

Intel made some waves in the Chromebook market earlier this year with the announcement of several new Intel-powered Chrome devices and the addition of conflict-free Haswell Core i3 options. It seems that it is now time for the ARM(ed) response. I’m interested to see how NVIDIA’s newest chip stacks up to the current and upcoming Intel x86 competition in terms of graphics power and battery usage.

As far as Chromebooks go, if the performance is where Acer and NVIDIA claim, this one definitely looks like a decent option considering the price. I think a head-to-head between the ASUS C200 (Bay Trail N2830, 2GB RAM, 16GB eMMC, and 1366x768 display at $249.99 MSRP) and the Acer Chromebook 13 would be interesting, as the real differentiator (beyond aesthetics) is the underlying SoC. I do wish the Chromebook 13 lineup included a 4GB/16GB/1080p option at, say, $320 MSRP, though, considering the big price jump to get 4GB of RAM (mostly a result of the doubling of flash storage) in the $379.99 model.

Read more about Chromebooks at PC Perspective!

Source: Acer
Author:
Manufacturer: ASUS

Experience with Silent Design

In the time periods between major GPU releases, companies like ASUS have the ability to really dig down and engineer truly unique products. With the expanded time between major GPU releases, from either NVIDIA or AMD, these products have continued evolving to offer better features and experiences than any graphics card before them. The ASUS Strix GTX 780 is exactly one of those solutions – taking a GTX 780 GPU that was originally released in May of last year and twisting it into a new design that offers better cooling, better power delivery, and lower noise levels.

With the Strix GTX 780, ASUS intended to create a card that is perfect for high-end PC gamers without crossing into the realm of bank-breaking prices. They chose to go with the GeForce GTX 780 GPU from NVIDIA at a significant price drop from the GTX 780 Ti, with only a modest performance drop. They doubled the reference memory capacity from 3GB to 6GB of GDDR5, to assuage any buyer’s concern that 3GB wasn’t enough for multi-screen Surround gaming or 4K gaming. And they changed the cooling solution to offer near-silent operation when used in “low impact” gaming titles.

The ASUS Strix GTX 780 Graphics Card

The ASUS Strix GTX 780 card is a pretty large beast, both in physical size and in performance. The cooler is a slightly modified version of the very popular DirectCU II thermal design used on many of ASUS' custom graphics cards. It has a heat dissipation area more than twice that of the reference NVIDIA cooler and uses larger fans that can spin slower (and quieter) while delivering improved cooling capacity.


Out of the box, the ASUS Strix GTX 780 will run at an 889 MHz base clock and a 941 MHz Boost clock, a fairly modest increase over the 863/900 MHz rates of the reference card. Obviously, with much better cooling and a lot of work done on the PCB of this custom design, users will have plenty of headroom to overclock on their own, but I continue to implore companies like ASUS and MSI to up the ante out of the box! One area where ASUS does impress is the memory – the Strix card features a full 6GB of GDDR5 running at 6.0 GHz, twice the capacity of the reference GTX 780 (and even GTX 780 Ti) cards. If you had any concerns about Surround or 4K gaming, know that memory capacity will not be a problem. (Though raw compute power may still be.)
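
For context, here is a quick back-of-the-envelope sketch of how large that factory overclock actually is and what the quoted memory clock works out to in bandwidth. The 384-bit bus width is the standard GTX 780 memory interface; the rest comes from the clocks quoted above.

```python
# Quick numbers for the Strix GTX 780 factory overclock and memory bandwidth.
# The 384-bit memory bus is the reference GTX 780 interface width; clocks come
# from the figures quoted in the paragraph above.

ref_base, ref_boost = 863, 900        # MHz, reference GTX 780
strix_base, strix_boost = 889, 941    # MHz, ASUS Strix GTX 780

base_oc = (strix_base - ref_base) / ref_base * 100
boost_oc = (strix_boost - ref_boost) / ref_boost * 100
print(f"factory OC: +{base_oc:.1f}% base, +{boost_oc:.1f}% boost")   # ~3.0% / ~4.6%

mem_clock_gbps = 6.0                  # effective GDDR5 data rate per pin (Gbps)
bus_width_bits = 384
bandwidth_gbs = mem_clock_gbps * bus_width_bits / 8
print(f"memory bandwidth: {bandwidth_gbs:.0f} GB/s")                 # ~288 GB/s
```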

Continue reading our review of the ASUS Strix GTX 780 6GB Graphics Card!!

Rumor: NVIDIA GeForce GTX 880 Actually Coming in September?

Subject: General Tech, Graphics Cards | August 3, 2014 - 04:59 PM |
Tagged: nvidia, maxwell, gtx 880

Just recently, we posted a story that claimed NVIDIA was preparing to launch high-end Maxwell in the October/November time frame. Apparently, that was generous. The graphics company is now said to be announcing its GeForce GTX 880 in mid-September, with availability coming later in the month. It is expected to be based on the GM204 architecture (which previous rumors claim is 28nm).


It is expected that the GeForce GTX 880 will be available with 4GB of video memory, with an 8GB version possible at some point. As someone who runs multiple (five) monitors, I can tell you that 2GB is not enough for my use case. Windows 7 says the same. It kicks me out of applications to tell me that it does not have enough video memory. This would be enough reason for me to get more GPU memory.

We still do not know how many CUDA cores will be present in the GM204 chip, or if the GeForce GTX 880 will have all of them enabled (but I would be surprised if it didn't). Without any way to derive its theoretical performance, we cannot compare it against the GTX 780 or 780 Ti. It could be significantly faster, it could be marginally faster, or it could be somewhere in between.

But we will probably find out within two months.

Source: Videocardz

Podcast #311 - AMD FreeSync, NVIDIA SHIELD Tablet, Crucial M550 SSD and more!

Subject: General Tech | July 31, 2014 - 01:49 PM |
Tagged: podcast, video, nvidia, shield tablet, amd, freesync, crucial, M550, mx100, Oculus, DK2, logitech g402, evga, TORQ X10

PC Perspective Podcast #311 - 07/31/2014

Join us this week as we discuss AMD FreeSync, NVIDIA SHIELD Tablet, Crucial M550 SSD and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Program length: 1:32:53

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

AMD Releases FreeSync Information as a FAQ

Subject: General Tech, Graphics Cards, Displays | July 29, 2014 - 09:02 PM |
Tagged: vesa, nvidia, g-sync, freesync, DisplayPort, amd

Dynamic refresh rates have two main purposes: save power by only forcing the monitor to refresh when a new frame is available, and increase animation smoothness by synchronizing the refresh to the GPU's draw rate (rather than "catching the next bus" every 16.67 ms on a 60 Hz monitor). Mobile devices prefer the former, while PC gamers are interested in the latter.
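
To make the "next bus" analogy concrete, here is a toy Python model comparing when frames would appear on a fixed 60 Hz panel versus an idealized variable-refresh panel. The frame times are made up, and the model ignores real-world limits like a panel's minimum and maximum refresh rates.

```python
# Toy comparison of fixed 60 Hz refresh vs. an idealized variable-refresh display.
# Frame render times are invented for illustration; this is not a measurement.

REFRESH_MS = 1000 / 60                   # ~16.67 ms per scanout slot at 60 Hz
frame_render_ms = [10, 22, 14, 30, 12]   # hypothetical per-frame GPU render times

def fixed_refresh_display_times(render_times):
    """Each frame waits for the next fixed refresh slot ('the next bus')."""
    t, shown = 0.0, []
    for r in render_times:
        t += r                                   # frame is ready at time t
        next_slot = int(t // REFRESH_MS) + 1     # first scanout after it is ready
        shown.append(next_slot * REFRESH_MS)
    return shown

def variable_refresh_display_times(render_times):
    """An idealized G-Sync/Adaptive-Sync panel refreshes the moment a frame is ready."""
    t, shown = 0.0, []
    for r in render_times:
        t += r
        shown.append(t)
    return shown

print("fixed 60 Hz:", [round(x, 1) for x in fixed_refresh_display_times(frame_render_ms)])
print("variable   :", [round(x, 1) for x in variable_refresh_display_times(frame_render_ms)])
# The fixed-refresh times land on uneven multiples of 16.67 ms, which is the
# stutter and added latency that dynamic refresh is meant to remove.
```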


NVIDIA was first to make this public with G-Sync. AMD responded with FreeSync, starting with a proposal that was later ratified by VESA as DisplayPort Adaptive-Sync. AMD, then, took up "Project FreeSync" as an AMD "hardware/software solution" to make use of DisplayPort Adaptive-Sync in a way that benefits PC gamers.

Today's news is that AMD has just released an FAQ which explains the standard much more thoroughly than they have in the past. For instance, it clarifies the distinction between DisplayPort Adaptive-Sync and Project FreeSync. Prior to the FAQ, I thought that FreeSync became DisplayPort Adaptive-Sync, and that was that. Now, it is sounding a bit more proprietary, just built upon an open, VESA standard.

If interested, check out the FAQ at AMD's website.

Source: AMD

NVIDIA 340.52 Drivers Are Now Available

Subject: General Tech, Graphics Cards | July 29, 2014 - 08:27 PM |
Tagged: nvidia, geforce, graphics drivers, shield tablet, shield

Alongside the NVIDIA SHIELD Tablet launch, the company has released their GeForce 340.52 drivers. This version allows compatible devices to use GameStream, and it is also optimized for Metro: Redux and Final Fantasy XIV (China).


The driver supports GeForce 8-series and later graphics cards. As a reminder, for GPUs that are not based on the Fermi architecture (or later), 340.xx will be the last driver branch. NVIDIA does intend to provide extended support for the 340.xx (and earlier) drivers until April 1st, 2016. But when Fermi, Kepler, and Maxwell move on to 343.xx, Tesla and earlier architectures will not. That said, most of the content of this driver is aimed at Kepler and later. Either way, the driver itself is available for those pre-Fermi cards.
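
As a quick summary of that support split, here is a small lookup-table sketch. The 340.xx end-of-support date comes from NVIDIA's stated plan; the mapping of the Tesla architecture to the GeForce 8 through 300 series is general background included only for illustration.

```python
# Rough summary of the driver-branch split described above. The 340.xx extended
# support date reflects NVIDIA's stated plan; the architecture-to-series mapping
# is general background, not something spelled out in the driver notes.

LEGACY_BRANCH = "340.xx (extended support through April 1, 2016)"
MAINLINE_BRANCH = "343.xx and newer"

ARCH_TO_BRANCH = {
    "Tesla (GeForce 8 through 300 series)": LEGACY_BRANCH,   # pre-Fermi stays on 340.xx
    "Fermi": MAINLINE_BRANCH,
    "Kepler": MAINLINE_BRANCH,
    "Maxwell": MAINLINE_BRANCH,
}

for arch, branch in ARCH_TO_BRANCH.items():
    print(f"{arch}: {branch}")
```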

I should also mention that a user on Anandtech's forums noted the removal of Miracast from NVIDIA's documentation. NVIDIA has yet to comment, although it has only been a short time at this point.

Source: NVIDIA