Computex 2013: Gigabyte Preparing Custom GTX Titan With WindForce 450W Cooler... Some Assembly Required

Subject: Graphics Cards | June 7, 2013 - 10:20 AM |
Tagged: windforce 450w, windforce, gtx titan, gk110, gigabyte

Back in April, Gigabyte showed off its new custom WindForce 450W GPU HSF but did not name which specific high-end graphics cards it would be used with. So far, NVIDIA's highest-end single-GPU solution, the GTX Titan, has been off limits to board partners as far as custom air coolers are concerned (NVIDIA has restricted designs to its reference cooler or factory-installed water blocks).

It seems that Gigabyte has found a solution to the cooler restriction, however. The company will be selling a GTX TITAN with model number GV-NTITAN-6GDB later this year that will come with NVIDIA's reference cooler pre-installed along with a bundled WindForce 3X 450W cooler and instructions for switching out the coolers.

Gigabyte GTX Titan GPU With Windforce 450W Cooler.jpg

Gigabyte is showing off the custom GTX Titan at Computex, as discovered by TechPowerUp.

Users who take Gigabyte up on its offer to switch to the custom WindForce cooler will still be covered under the company's standard warranty policy, which is a good thing. The kit is likely to be more expensive than your standard TITAN, though, as Gigabyte has to sell the card with two coolers and absorb increased support costs. On the other hand, users could swap out the coolers and then sell the unused TITAN reference cooler to offset some of the cost of the kit.

Gigabyte is actually showing off the new graphics card with the WindForce 3X 450W cooler at Computex this week. The dual-slot WindForce cooler is said to keep a GTX 680 2°C cooler and 23.3 dB quieter than the reference cooler when running the FurMark benchmark. The major benefit of the WindForce design is not pure cooling performance but its three large fans, which can spin at lower RPMs to match the cooling of the reference NVIDIA design at a much lower noise level. Should you be looking to push the TITAN to the extreme, a water block would be your best bet, but for many users I think the allure of a quieter air-cooled TITAN may be enough for Gigabyte to snag a few adventurous enthusiasts willing to put up with assembling the new card themselves.

More information on the WindForce 3X 450W cooler can be found here.

Source: Gigabyte

Computex 2013: ASUS Working On GTX 770 Poseidon With Hybrid Waterblock and Air Cooler HSF

Subject: Graphics Cards | June 3, 2013 - 09:04 PM |
Tagged: poseidon, nvidia, kepler, gtx 770, gk-104, computex 2013, computex, ASUS ROG, asus

NVIDIA took the wraps off of its latest-generation GeForce GTX 770 GPU last week, and manufacturers have begun announcing not only reference designs but also custom and factory overclocked versions of this GK104 "Kepler" GPU refresh. One card in particular that caught my attention was the ASUS GTX 770 Poseidon graphics card, which combines NVIDIA's GK104 GPU with a hybrid heatsink and fan combo that allows the simultaneous use of water and air cooling!

ASUS_ROG_Poseidon_GraphicsCard_with_Hybrid_DirectCU_H2O_and_CoolTech_Fan.jpg

According to the branding, and a hands-on report by TechPowerUp at Computex in Taipei, Taiwan, the GTX 770 Poseidon graphics card is part of the company's Republic of Gamers (ROG) line and likely sports beefy VRM hardware and factory GPU overclocks. Of course, the GTX 770 uses NVIDIA's Kepler architecture and is essentially the GTX 680 with seriously overclocked memory and refined GPU Boost technology. That means 1,536 CUDA cores, 128 texture units, and 32 ROPs (raster operation units) within 4 GPCs (Graphics Processing Clusters). This is the full GK104 chip, despite the x70 name. For more information on the GTX 770 GPU, check out our recent review of the NVIDIA GTX 770 card.

Update: ASUS has just launched the new ROG graphics cards at a Computex press conference. According to the ASUS press release:

"ROG Poseidon graphics card with hybrid DirectCU H2O cooling
The new ROG Poseidon graphics card features an NVIDIA® GeForce® GTX 700 Series GPU and a hybrid DirectCU H2O thermal design that supports both air and liquid cooling. Developed by ASUS, its CoolTech fan combines blower and axial fans in one design, forcing air in multiple directions over the heatsink to maximize heat dissipation. Liquid cooling reduces operating temperatures by up to 31 degrees Celsius for cooler running with even greater overclocking potential. ROG Poseidon also features a red pulsing ROG logo for a distinctive dash of style."

(end update)

Back on the Poseidon specifically: the card is a short GTX 770 with a distinctive cooler built around a full-cover water block that spans the entire card, including the GPU, memory, and VRM areas. ASUS further added a more traditional air cooler above the GPU itself to help dissipate heat. The air cooler is a circular aluminum fin array with a fan that sits in the middle. The entire hybrid cooler is then covered by a ROG-themed shroud with a configurable LED-backlit Republic of Gamers logo on the side that can be controlled via software.

ASUS_ROG_Poseidon_GraphicsCard_with_Gquarter_Fitting_and_LED_light.jpg

The water cooling portion acts like any other full-cover water block, allowing cool water to move heat away from the metal contact plate (the bottom of the block) touching the various components. The inlet and outlet fittings poke out from the side of the card, which is a bit odd, but the shroud prevents them from exiting at 90 degrees like on typical blocks. If your case width is tight, you may need to get creative to fit a 90-degree barb extender (I apologize if that's not the technical term) onto the existing tubing connectors (heh). The cooler can be operated with just the air cooler's fan running, with or without being connected to a water loop. When water cooling is used, the fan can be turned off to reduce noise or left on to allow for higher overclocks and/or lower temperatures.

Unfortunately, beyond the press release above, that is all of the information currently available for the custom GTX 770 graphics card. Pricing, availability, and clockspeed details are still unknown.

For more information, stay tuned to the press.asus.com/events livestream page, as the company is showing off the hardware at Computex this week!

Source: ASUS

Samsung Galaxy Tab 3 10.1: Intel inside an Android?

Subject: General Tech, Graphics Cards, Processors, Mobile | June 3, 2013 - 12:00 AM |
Tagged: Intel, atom, Clover Trail+, SoC, Samsung, Galaxy Tab 3 10.1

While Reuters is being a bit cagey with their source, if the report is true, Intel may have nabbed just about the highest-profile Android tablet design win possible. The still-unannounced Samsung Galaxy Tab 3 10.1 is expected to embed Intel's Clover Trail+ System on a Chip (SoC). Samsung would not be the largest contract available in the tablet market, but their previous tablets have each shipped millions of units; they are a good OEM vendor to have.

Source: BGR India

Samsung is also known for releasing multiple versions of the same device for various regions and partners. The Galaxy Tab 10.1 and Galaxy Tab 2 10.1 did not have a variety of models with differing CPUs like, for instance, the Galaxy S4 phone did; the original "10.1" contained an NVIDIA Tegra 2 and the later "2 10.1" embedded a TI OMAP 4430 SoC. It is entirely possible that Intel won every Galaxy Tab 3 10.1 tablet ever, but it is also entirely possible that they did not.

Boy Genius Report India (BGR India, video above) also claims more specific hardware based on a pair of listings at GLBenchmark. The product is registered under the name Santos10: GT-P5200 being the 3G version, and GT-P5210 being the Wi-Fi version.

These specifications are:

  • Intel Atom Z2560 800-933 MHz dual-core SoC (4 threads, 1600 MHz Turbo)
  • PowerVR SGX 544MP GPU (OpenGL ES 2.0)
  • 1280x800 display
  • Android 4.2.2

I am not entirely sure what Intel has to offer with Clover Trail+ besides, I would guess, reliable fabrication. Raw graphics performance is still about half that of Apple's A6X GPU, although, if the leaked resolution is true, it has substantially fewer pixels to push (1280x800 is just over 1 million pixels versus the roughly 3.1 million of a 2048x1536 iPad panel) unless attached to an external display.

Maybe Intel made it too cheap to refuse?

Source: Reuters

Galaxy's Factory Overclocked GTX 770 Graphics Card Is Now Available for $400

Subject: Graphics Cards | June 1, 2013 - 09:43 PM |
Tagged: nvidia, gtx 770, graphics card, gk-104, galaxy

Galaxy recently made its custom factory overclocked GTX 770 graphics card available. The new card is not the fastest GTX 770, and doesn't quite embrace the supa-pipe as much (as Josh would say), but it looks to be a good deal all the same, giving you a quieter HSF and a decently overclocked GeForce GTX 770 GPU for $399.99.

The Galaxy GeForce GTX 770 2GB (77XPH6DV6KXZ) takes NVIDIA's GTX 770 GPU with 1,536 GK104-based CUDA cores and overclocks it to 1110 MHz base and 1163 MHz boost clockspeeds. The 2GB of GDDR5 memory is only clocked at the reference 7010 MHz, however.

Galaxy GTX 770 Graphics Card.jpg

The card has the same video outputs as other GTX 770 cards: two DL-DVI, one HDMI, and one DisplayPort. With its dual-slot, dual-fan cooler, the card is 10” long and requires a 600W PSU at minimum (for the whole system, not solely the GPU). It needs one 8-pin and one 6-pin PCI-E power connector.

Galaxy provides a two year warranty for the card. It is available now for around $400 at various retailers.

Read more about other factory overclocked GTX 770 graphics cards at PC Perspective!

Source: Newegg

ASUS Launches GTX 770 DirectCU II OC Graphics Card

Subject: Graphics Cards | June 1, 2013 - 03:00 PM |
Tagged: nvidia, kepler, gtx 770, graphics card

NVIDIA recently unveiled its GTX 770 GPU. Sitting between the GTX 680 and GTX 780, the GeForce GTX 770 is a refined GK104 with higher clockspeeds and improved GPU Boost. It features 1536 CUDA cores and a 256-bit memory bus.

While the stock GTX 770 comes clocked at 1046 MHz base and 1085 MHz boost, ASUS is factory overclocking its DirectCU II OC card with a maximum boost GPU clockspeed of 1110 MHz. The 2GB of GDDR5 memory on the card will come clocked at 7010 MHz.

ASUS GTX 770 DirectCU II OC Graphics Card.jpg

The differentiating factor here (aside from the overclock) is the custom DirectCU II cooler. ASUS has fitted the overclocked GTX 770 with a DirectCU cooler that uses copper heatpipes in direct contact with the GPU, attached to an aluminum fin stack. The heatsink is, in turn, cooled by two 80mm fans. ASUS claims that the GTX 770 DirectCU II OC is up to 20% cooler and three times quieter than the reference NVIDIA cooler. Other features include a 10-phase DIGI+ VRM and “Super Alloy Power” capacitors, chokes, and MOSFETs. The dual-slot card is 10.7” long and includes two DL-DVI outputs, one HDMI port, and one DisplayPort. ASUS' GPU Tweak software will allow users to adjust core and memory clockspeeds, voltage, fan speeds, and the power target.

The ASUS GTX 770 DirectCU II OC is shipping now and will be available at retailers soon. In fact, the card is available at Newegg right now for just under $410.

Read more about NVIDIA's GTX 770 GPU: NVIDIA GeForce GTX 770 Review - GK104 Speed Bump @ PC Perspective!

Source: Videocardz

NVIDIA Launches New High-Performance 700M Graphics Cards

Subject: Graphics Cards | June 1, 2013 - 02:11 PM |
Tagged: gtx 700M, nvidia, mobile gpu, kepler, 780m, 700m

Earlier this year (at the beginning of April), NVIDIA introduced the first set of mobile graphics cards in its 700M series. These were relatively low-end parts that featured at most 384 CUDA cores and were based on the same Kepler architecture as NVIDIA's 600 series.

NVIDIA is now adding higher-end mobile GPUs to the 700M family with the GTX 760M, GTX 765M, GTX 770M, and GTX 780M. These chips are still based on Kepler, but feature more CUDA cores, more memory, a wider memory bus, and faster clockspeeds. The GTX 780M is not quite the mobile equivalent of the desktop GTX 680, but NVIDIA is matching it up against AMD's 8970M GPU and claims that it can run games like Sleeping Dogs, Assassin's Creed 3, and Borderlands 2 at Ultra settings (1080p). The GTX 770M is also capable of running modern games, though some detail settings may need to be turned down.

The chart below details the various specifications and compares the new GTX 700M cards to the existing GT 700M GPUs. At the high end, NVIDIA has the GTX 780M with 1,536 CUDA cores, a base clock of 823 MHz, and 4GB of GDDR5 memory (1250 MHz) on a 256-bit bus. The GTX 770M occupies the mid-range mobile gaming slot with 960 CUDA cores, a base clock of 811 MHz, and a memory clock of 1GHz. The GTX 760M and GTX 765M have similar hardware specifications, but the GTX 765M has a higher GPU base clock of 850 MHz versus the GTX 760M's 657 MHz base clock. The low end GTX 700M GPUs (760M and 765M) feature 768 CUDA cores, a 128-bit memory bus, and memory clockspeeds of 1GHz.

                  GT 720M    GT 735M    GT 740M    GT 750M    GTX 760M   GTX 765M   GTX 770M   GTX 780M
  CUDA Cores      96         384        384        384        768        768        960        1536
  GPU Base Clock  938 MHz    889 MHz    980 MHz    967 MHz    657 MHz    850 MHz    811 MHz    823 MHz
  Memory Clock    1000 MHz   1000 MHz   2500 MHz   2500 MHz   1000 MHz   1000 MHz   1000 MHz   1250 MHz
  Bus Width       64-bit     64-bit     128-bit    128-bit    128-bit    128-bit    192-bit    256-bit

  (The GTX 760M, GTX 765M, GTX 770M, and GTX 780M are the new additions; the GT-class columns are the existing parts.)

Further, GPU Boost 2.0, GeForce Experience software, and NVIDIA Optimus are all supported by the new GTX 700M graphics cards. You can read more about these NVIDIA technologies in this article by motherboard reviewer Morry Teitelman.

Despite the 700M moniker, these cards are based on the same silicon as NVIDIA's 600 series. They should provide OEMs with some good gaming options on the NVIDIA side of things and allow for more competition against the existing AMD cards in the gaming notebook space.

Source: NVIDIA

EVGA Outfits GTX 780 With Hydro Copper Water Block

Subject: Graphics Cards | June 1, 2013 - 10:38 AM |
Tagged: watercooling, nvidia, hydro copper, gtx 780, gpu, gk110, evga

EVGA GTX 780 Hydro Copper GPUs

While NVIDIA restricted partners from going with aftermarket coolers on the company's GTX TITAN graphics card, the recently released NVIDIA GTX 780 does not appear to have the same limits placed upon it. As such, many manufacturers will be releasing GTX 780 graphics cards with custom coolers. One such design that caught my attention was the Hydro Copper full cover waterblock from EVGA.

EVGA GTX 780 with Hydro Copper Water Block (2).jpg

This new cooler will be used on at least two upcoming EVGA graphics cards, the GTX 780 and GTX 780 Classified. EVGA has not yet announced clockspeeds or pricing for the Classified edition, but the GTX 780 Hydro Copper will be a GTX 780 GPU clocked at 980 MHz base and 1033 MHz boost. The 3GB of GDDR5 memory is stock clocked at 6008 MHz, however. It uses a single 8-pin and a single 6-pin PCI-E power connector. This card is selling for around $799 at retailers such as Newegg.

The GTX 780 Classified Hydro Copper will have a factory overclocked GTX 780 GPU and 3GB of GDDR5 memory at 6008 MHz, but beyond that details are scarce. The 8+8-pin PCI-E power connectors do suggest a healthy overclock (or at least that users will be able to push the cards after they get them).

Both the GTX 780 and GTX 780 Classified Hydro Copper graphics cards feature two DL-DVI outputs, one HDMI port, and one DisplayPort.

EVGA GTX 780 Classified with Hydro Copper Water Block (1).jpg

The Hydro Copper cooler itself is the really interesting bit about these cards, though. It is a single-slot, full-cover waterblock that cools the entire graphics card (GPU, VRM, memory, etc.). It has two inlet/outlet ports that can be swapped around to accommodate SLI setups or other custom water tube routing. A configurable LED-backlit EVGA logo adorns the side of the card and can be controlled in software. A 0.25 x 0.35 pin matrix is used in the portion of the block above the GPU to increase surface area and aid in cooling. Unfortunately, while the card and cooler are single slot, you will actually need two case expansion slot openings due to the stacked DL-DVI connectors.

It looks like a neat card, and it should perform well. I'm looking forward to seeing reviews of the card and how the cooler holds up to overclocking. Buying an overclocked card with a pre-installed waterblock is not for everyone, but for some users a water-cooled GPU that keeps its warranty will be worth more than pairing a stock card with a custom block.

Source: EVGA

Never mind the 780; here comes the GTX 770

Subject: Graphics Cards | May 30, 2013 - 11:55 AM |
Tagged: nvidia, kepler, gtx 770, gtx 680, GK104, geforce, MSI GTX660 HAWK

$400 is a tempting number: much less expensive than the $650 price tag on the GTX 780 and right in line with the existing GTX 670 as well as AMD's HD 7970.  You will probably not see many at that price, however; $450 is more likely, as there will be very few reference cards released and all manufacturers will be putting their own spins on the design, which brings the price in line with the GTX 680.  Performance-wise, these cards outpace the two current single-GPU flagships, not by enough to make it worth upgrading from a 7970 or 680 but certainly enough to attract owners of previous-generation cards.  [H]ard|OCP reviewed MSI's Lightning model, with dual fans, an overclock of 104MHz on the base clock and 117MHz on boost, plus a completely unlocked BIOS for even more tweaking choices.

If you want to see how well it fares on our new Frame Rating metric you will have to read Ryan's full review here.

H770.jpg

"NVIDIA debuts the "new" GeForce GTX 770 today. The GeForce GTX 770 is poised to provide refreshed performance, for a surprising price. We evaluate a retail MSI GeForce GTX 770 Lightning flagship video card from MSI with specifications that will make any enthusiast smile. The $399 price point just got a kick in the pants."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

AMD Catalyst 13.6 Beta Drivers For Windows and Linux Now Available

Subject: Graphics Cards | May 28, 2013 - 08:32 PM |
Tagged: gpu, drivers, catalyst 13.6 beta, beta, amd

AMD has released its Catalyst 13.6 beta graphics driver, and it fixes a number of issues under both Windows 8 and Linux. The new beta driver is also compatible with the existing Catalyst 13.5 CAP1 (Catalyst Application Profile) which improves performance of several PC games.

As for the Windows version of the driver, Catalyst 13.6 adds OpenCL GPU acceleration support to Adobe's Premiere Pro CC software and enables AMD Wireless Display technology on systems with the company's A-Series APUs and either Broadcom or Atheros Wi-Fi chipsets. AMD has also made a couple of tweaks to its Enduro technology, including correctly identifying when a Metro app idles and offloading the corresponding GPU tasks to integrated graphics instead of the discrete card. The new beta driver also resolves an issue with audio dropout over HDMI.

AMD Catalyst Drivers.jpg

On the Linux side of things, Catalyst 13.6 beta adds support for the following when using AMD's A10, A8, A6, and A4 APUs:

  • Ubuntu 13.04
  • Xserver 1.14
  • GLX_EXT_buffer_age

The driver fixes several bugs as well, including black screen and corruption issues in Team Fortress 2, an issue with OpenGL applications and VSYNC, and UVD video playback issues in XBMC where the taskbar would disappear and/or the system would experience a noticeable performance drop during playback.

You can grab the new beta driver from the AMD website.

Source: AMD

Trimming the TITAN; NVIDIA's GTX 780

Subject: Graphics Cards | May 24, 2013 - 03:10 PM |
Tagged: nvidia, gtx 780, gk110, geforce

With 768 more CUDA cores than the 680 but 384 fewer than the TITAN, the 780 offers improvements over the previous generation and will be available for about $350 less than the TITAN.  As you can see in [H]ard|OCP's testing, it does outperform the 680 and 7970, but not by a huge margin, which hurts the price-to-performance ratio and makes it more attractive for 680 owners to simply pick up a second card for SLI.  AMD owners with previous-generation cards and deep pockets might be tempted to pick up a pair of these cards, as they show very good Frame Rating results in Ryan's review.

H_780.jpg

"NVIDIA's new GeForce GTX 780 video card has finally been unveiled. We review the GTX 780 with real world gaming with the most intense 3D games, including Metro: Last Light. If the GTX TITAN had you excited but was a bit out of your price range, the GTX 780 should hold your excitement while being a lot less expensive."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

Epic Games is disappointed in the PS4 and Xbox One?

Subject: Editorial, General Tech, Graphics Cards, Systems | May 23, 2013 - 03:40 PM |
Tagged: xbox one, xbox, unreal engine, ps4, playstation 4, epic games

Unreal Engine 4 was presented at the PlayStation 4 announcement conference through a new Elemental Demo. We noted how the quality seemed to have dropped in the eight months following E3 while the demo was being ported to the console hardware. The most noticeable differences were the severely reduced particle counts and the non-existent fine lighting details; of course, Epic pumped up the contrast in the PS4 version, which masked the lack of complexity as if it were a stylistic choice.

Still, the demo was clearly weakened. The immediate reaction was to assume that Epic Games simply did not have enough time to optimize the demo for the hardware. That is true to some extent, but there are theoretical limits on how much performance you can push out of hardware at 100% perfect utilization.

Now that we know the specifications of both the PS4 and, more recently, the Xbox One, it is time to dissect more carefully.

A recent LinkedIn post from EA Executive VP and CTO Rajat Taneja claims that the Xbox One and PS4 are a generation ahead of the highest-end PC on the market. While there are many ways to interpret that statement, in terms of raw performance it is not valid.

As far as we currently know, the PlayStation 4 contains an eight-core AMD "Jaguar" CPU alongside an AMD GPU with 18 GCN compute units, for a total of 1152 shader units. Even without knowing the driving frequencies, this chip should be somewhat faster than the Xbox One's 768 shader units within 12 GCN compute units. Sony claims the PS4 has a total of roughly 2 teraFLOPs of theoretical performance, and the Xbox One is almost certainly slightly behind that.

Back in 2011, the Samaritan Demo was created by Epic Games to persuade console manufacturers; it represented how Epic expected the next generation of consoles to perform. They said, back then, that the demo would theoretically require 2.5 teraFLOPs of performance to run at 30 FPS at true 1080p; ultimately, it ran on a PC with a single GTX 680, good for approximately 3.09 teraFLOPs.

That required figure, again approximately 2.5 teraFLOPs, is higher than what is theoretically possible on the consoles, which sit below 2 teraFLOPs. The PC may have more overhead than consoles, but the PS4 and Xbox One would be too slow even with zero overhead.
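
The arithmetic behind those teraFLOP figures is simple: shader count times two floating-point operations per clock (a fused multiply-add) times clock speed. Here is a quick sanity check in Python; note that neither console's GPU clock was official at the time of writing, so the ~800 MHz figure below is the commonly rumored value, not a confirmed spec:

```python
# Theoretical single-precision throughput: shaders x 2 FLOPs/clock (FMA) x clock.
# The ~800 MHz console clocks are rumored values, not confirmed specifications.
def gpu_tflops(shader_units, clock_mhz, flops_per_clock=2):
    return shader_units * flops_per_clock * clock_mhz / 1e6

print(gpu_tflops(1152, 800))    # PS4:      ~1.84 TFLOPs
print(gpu_tflops(768, 800))     # Xbox One: ~1.23 TFLOPs
print(gpu_tflops(1536, 1006))   # GTX 680:  ~3.09 TFLOPs (base clock)
```

Even granting the consoles their best rumored clocks, both land well short of the 2.5 teraFLOPs Epic asked for.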

Now, of course, this does not account for reducing quality where it will be the least noticeable and other cheats. Developers are able to reduce particle counts and texture resolutions in barely-noticeable places; they are also able to render below 1080p or even below 720p, as was the norm for our current console generation, to save performance for more important things. Perhaps developers might even use different algorithms which achieve the same, or better, quality for less computation at the expense of more sensitivity to RAM, bandwidth, or what-have-you.

But, in the end, Epic Games did not get the ~2.5 teraFLOPs they originally hoped for when they created the Samaritan Demo. This likely explains, at least in part, why the Elemental Demo looked a little sad at Sony's press conference: it was a little FLOP.

Update, 5/24/2013: Mark Rein of Epic Games responds to the statement made by Rajat Taneja of EA. While we do not know his opinion on consoles... we know his opinion on EA's opinion:

PCPer Live! GeForce GTX 780 3GB Graphics Card - 2pm EDT / 11am PDT

Subject: Editorial, Graphics Cards | May 23, 2013 - 09:08 AM |
Tagged: video, live, gtx 780, gk110

Missed the LIVE stream?  You can catch the video replay of it below!

Hopefully by now you have read over our review of the new NVIDIA GeForce GTX 780 graphics card that launched this morning.  Taking the GK110 GPU, cutting off some more cores, and setting a price point of $650 will definitely create some interesting discussion. 

IMG_9886.JPG

Join me today at 2pm ET / 11am PT as we discuss the GTX 780, go over our review, and take your questions.  You can leave them in the comments below; no registration required.

GTX 780 Stream - PC Perspective Live! - 2pm ET / 11am PT

Xbox One announced, the games: not so much.

Subject: Editorial, General Tech, Graphics Cards, Processors, Systems | May 21, 2013 - 02:26 PM |
Tagged: xbox one, xbox

xbox-one-head.jpg

Almost exactly three months have passed since Sony announced the PlayStation 4, and just three weeks remain until E3. Ahead of the event, Microsoft unveiled their new console: the Xbox One. Being so close to E3, they are saving the majority of the games until that time. For now, it is the box itself as well as its non-gaming functionality.

First and foremost, the raw specifications:

  • AMD APU (5 billion transistors, 8-core, on-die eSRAM)
  • 8GB RAM
  • 500GB storage, Blu-ray reader
  • USB 3.0, 802.11n, HDMI out, HDMI in

The hardware is a definite win for AMD. The Xbox One is based upon an APU which is quite comparable to what the PS4 will offer. Unlike previous generations, there will not be too much differentiation based on available performance; I would not expect to see much of a fork in terms of splitscreen and other performance-sensitive features.

xbox-one-controller.jpg

A new version of the Kinect sensor will also be present with all units, something developers can depend upon. Technically speaking, the camera is higher resolution and more wide-angle; up to six skeletons can be tracked, with joints able to rotate rather than just hinge. Microsoft is also finally permitting developers to use the Kinect along with a standard controller to, as they imagine, allow a user to raise their controller to block with a shield. That is the hope, but consider that near the launch of the original Kinect, Microsoft filed a patent for sign language recognition; that has not happened yet. Who knows whether the device will be successfully integrated into gaming applications.

Of course, Microsoft is known most for system software, and the Xbox One runs three lightweight operating environments. In Windows 8, you have the Modern interface, which runs WinRT applications, and you have the desktop environment, which runs x86-compatible applications.

The Xbox One borrows more than a little from this model.

The home screen for the console, which I am tempted to call the Start Screen, has a very familiar tiled interface. The tiles are not identical to Windows but they are definitely consistent with it. This interface allows for access to Internet Explorer and an assortment of apps. These apps can be pinned to the side of the screen, identical to Windows 8 Modern apps. I am expecting there to be "a lot of crossover" (to say the least) between this and the Windows Store; I would not be surprised if it is basically the same API. This works both when viewing entertainment content as well as within a game.

Xbox_Home_UI_EN_US_Male_SS.jpg

These three operating systems run at the same time. The main operating system is basically a Hyper-V environment which runs the two other operating systems simultaneously in sort-of virtual machines. These operating systems can be layered with low latency, since all you are doing is compositing them in a different order.

Lastly, they made reference to Xbox Live, go figure. Microsoft is seriously increasing their server capacity and expects developers to utilize Azure infrastructure to offload "latency-insensitive" computation for games. While Microsoft promises that you can play games offline, this obviously does not apply to features (or whole games) which rely upon the back-end infrastructure.

xbox-one-live.jpg

And yes, I know you will all beat up on me if I do not mention the SimCity debacle. Maxis claimed that much of the game requires an online connection due to the complicated server requirements; after a crack allowed offline functionality, it was clear that the game mostly operates fine on a local client. How much will the Xbox Live cloud service offload? Who knows, but that is at least their official word.

Now to tie up some loose ends. The Xbox One will not be backwards compatible with Xbox 360 games, although that is no surprise. Also, Microsoft says they are allowing users to resell and lend games. That said, games will be installed and will not require the disc, from what I have heard. Apart from the concerns about how many games can fit on a single 500GB drive, rumor has it that once a game is installed, loading it elsewhere (the rumor is even more unclear about whether "elsewhere" means accounts or machines) will require paying a fee to Microsoft. In other words? Basically not a used game.

Well, that about covers it. You can be sure we will add more as information comes forth. Comment away!

Source: Xbox.com

JPR Releases Q1 2013 GPU Market Share Numbers, Good News for NVIDIA

Subject: Graphics Cards | May 20, 2013 - 09:54 AM |
Tagged: VIA, Q1 2013, nvidia, jpr, Intel, gpu market share, amd

Market analytics firm Jon Peddie Research recently released estimated market share and GPU shipment numbers for Q1 2013. The report includes information on AMD, NVIDIA, Intel, and VIA and covers IGPs, processor graphics, and discrete GPUs in desktop and mobile systems powered by x86 hardware. The report includes x86 tablets but otherwise does not factor in GPUs used in ARM devices, such as NVIDIA's Tegra chips. Year over year, the PC market is down 12.6% and the GPU market declined by 12.9%. It is not all bad news for the PC market and discrete GPU makers, however. GPUs are expected to exhibit a compound annual growth rate (CAGR) of 2.6% through 2016, with as many as 394 million discrete GPUs shipped in 2016 alone.
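
For anyone rusty on the term, CAGR is just compound growth applied to shipments: end = start * (1 + rate)^years. A minimal sketch of the arithmetic, using a hypothetical base-year number (JPR's actual 2013 shipment count is not quoted here, so the 365M figure below is illustrative only):

```python
# Compound annual growth rate: end = start * (1 + rate) ** years.
def project(start_units_millions, rate, years):
    return start_units_millions * (1 + rate) ** years

# A hypothetical ~365M units in 2013 growing at 2.6%/year lands near
# JPR's 394M figure for 2016; the 365M base is made up for illustration.
print(project(365, 0.026, 3))   # ~394.2 (million units)
```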

In Q1 2013, the PC market was down 13.7% versus the previous quarter (Q4 2012), but the GPU market only declined 3.2%. This discrepancy is explained as the result of people adding multiple GPUs to a single PC, including adding a discrete card to a system that already has processor graphics or an APU. By the end of Q1 2013, Intel held 61.8% market share, followed by AMD in second place with 20.2% and NVIDIA with 18%. Notably, VIA is out of the game with 0.0% market share.

Q12013 Market Share.jpg

In terms of GPU shipments, NVIDIA had a relatively good first quarter, with an increase of 7.6% for notebook GPUs and desktop GPU shipments that remained flat; overall, NVIDIA saw an increase in PC graphics shipments of 3.6%. On the other hand, x86 CPU giant Intel saw desktop and notebook GPUs slip by 3% and 6.3% respectively, amounting to PC graphics shipments that fell by 5.3%. In between NVIDIA and Intel, AMD moved 30% more desktop chips (including APUs) versus Q4 2012, while its notebook chips (including APUs) fell by 7.3%; AMD's overall PC graphics shipments fell by 0.3%.

In all, this is decent news for the PC market, as it shows that there is still interest in desktop GPUs. The PC market itself is declining and taking the GPU market with it, but this is far from the death of the desktop PC. It is interesting that NVIDIA (which announced Q1'13 revenue of $954.7 million) managed to push more chips while AMD and Intel were on the decline, since NVIDIA doesn't have an x86 CPU with integrated graphics. I'm looking forward to seeing where NVIDIA stands in the mobile GPU market, which does include ARM-powered products.

HP SlateBook x2: Tegra 4 on Android 4.2.2 in August

Subject: General Tech, Graphics Cards, Processors, Mobile | May 15, 2013 - 06:02 PM |
Tagged: tegra 4, hp, tablets

Sentences containing the words "Hewlett-Packard" and "tablet" can end in a question mark, an exclamation mark, or, on occasion, a period. The gigantic multinational technology company tried to own a whole mobile operating system with its purchase of Palm, then abandoned those plans just as abruptly; the $99 liquidation of its $500 tablets was, go figure, so successful that they more or less ran it twice. The operating system was open sourced, and at some point LG swooped in and bought it, minus patents, for use in Smart TVs.

So how about that Android?

HP-slatex2-01.jpg

The floodgates are open on Tegra 4 with HP announcing their SlateBook x2 hybrid tablet just a single day after NVIDIA's SHIELD move out of the projects. The SlateBook x2 uses the Tegra 4 processor to power Android 4.2.2 Jellybean along with the full Google experience including the Google Play store. Along with Google Play, the SlateBook and its Tegra 4 processor are also allowed in TegraZone and NVIDIA's mobile gaming ecosystem.

As for the device itself, it is a 10.1" Android tablet that can dock into a keyboard for extended battery life, I/O ports, and, well, a hardware keyboard. You are able to attach the tablet to a TV via HDMI, alongside the typical USB 2.0 port, combo audio jack, and a full-sized SD card slot; which half of the device any given port lives on is anyone's guess, however. Wirelessly, you have WiFi a/b/g/n and some unspecified version of Bluetooth.

HP-slatex2-02.jpg

The raw specifications list follows:

  • NVIDIA Tegra 4 SoC
    • ARM Cortex A15 quad core @ 1.8 GHz
    • 72 "Core" GeForce GPU @ ~672MHz, 96 GFLOPS (see the quick arithmetic after this list)
  • 2GB DDR3L RAM ("Starts at", maybe more upon customization?)
  • 64GB eMMC SSD
  • 1920x1200 10.1" touch-enabled IPS display
  • HDMI output
  • 1080p rear camera, 720p front camera with integrated microphone
  • 802.11a/b/g/n + Bluetooth (4.0??)
  • Combo audio jack, USB 2.0, SD Card reader
  • Android 4.2.2 w/ Full Google and TegraZone experiences.
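
That 96 GFLOPS figure falls straight out of the listed specs if each of the 72 shader cores retires two floating-point operations (a multiply-add) per clock; the per-core throughput is my assumption, though it matches the usual FLOPs accounting:

```python
# 72 shader cores x 2 FLOPs/clock (multiply-add) x ~672 MHz clock
print(72 * 2 * 672e6 / 1e9)   # ~96.8 GFLOPS
```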

If this excites you, then you only have to wait until some point in August; you will also, of course, need to wait until you save up about $479.99 plus tax and shipping.

Source: HP

Never Settle Reloaded Bundle Gets a Level Up Bonus

Subject: Graphics Cards | May 15, 2013 - 09:00 AM |
Tagged: tomb raider, never settle reloaded, never settle, level up, Crysis 3, bundle, amd

AMD dropped us a quick note to let us in on another limited-time offer for buyers of AMD Radeon graphics cards.  Starting today, the Never Settle Reloaded bundle that we first told you about in February is getting an upgrade for select tiers.  For new buyers of the Radeon HD 7970, 7950, and 7790, AMD will be adding Tomb Raider into the mix.  Also, the Radeon HD 7870 will be getting Crysis 3.

Here is the updated, currently running AMD Radeon Level Up bundle matrix.

nslevelup2.jpg

Now if you buy a new AMD Radeon HD 7970, HD 7950 or HD 7870 today you will get four top-level PC games including Crysis 3, Bioshock Infinite, Far Cry 3: Blood Dragon and Tomb Raider. 

This is a limited time offer though that will end when supplies run out and we don't really have any idea when that will be.  Check out AMD's Level Up site for more details and to find retailers offering the updated bundles. 

I am curious to find out how successful these bundles have been for AMD and whether NVIDIA has had any feedback on the free-to-play bundle it offered or the new Metro: Last Light option.  Do gamers put much emphasis on the game bundles that come with each graphics card, or do the performance and technology make the difference?

UPDATE: I have seen a couple of questions on whether this Level Up promotion would be retroactive. According to the details I have from AMD, this promotion is NOT retroactive.  If you have already purchased any of the affected cards you will not be getting the additional games.

Source: AMD

Haswell Laptop specs! NEC LaVie L to launch in Japan

Subject: General Tech, Graphics Cards, Processors, Systems, Mobile | May 14, 2013 - 12:54 PM |
Tagged: haswell, nec

While we are not sure when it will be released or whether it will be available in North America, we have found a Haswell laptop. Actually, NEC will release two products in this lineup: a high-end 1080p unit and a lower-end 1366x768 model. Unfortunately, the article is in Japanese.

nec_haswell_01.jpg

IPS displays have really wide viewing angles, even top and bottom.

NEC is known for their higher-end monitors; most people equate Dell UltraSharp panels with professional photo and video production, but their top-end offerings are often a tier below the best from companies like NEC and Eizo. The laptops we are discussing today both contain touch-enabled IPS panels with apparently double the contrast ratio of what NEC considers standard. While these may or may not be the tip-top NEC offerings, they should at least be decent screens.

Obviously the headliner for us is the introduction of Haswell. While we do not know exactly which product NEC decided to embed, we do know that they are relying upon it for their graphics performance. With the aforementioned higher-end displays, it seems likely that NEC is intending this device for the professional market. A price tag of 190,000 yen (just under $1900 USD) for the lower-end model and 200,000 yen (just under $2000 USD) for the higher-end one further suggests this is their target demographic.

nec_haswell_02.jpg

Clearly a Japanese model.

The professional market does not exactly have huge requirements for graphics performance, but to explicitly see NEC trust Intel for their GPU performance is an interesting twist. Intel HD 4000 has been nibbling, to say the least, at the discrete GPU market share in laptops. I would expect this laptop to contain one of the BGA-based parts, which are soldered onto the motherboard, for the added graphics performance.

As a final note, the higher-end model will also contain a draft 802.11ac antenna. Network performance could be up to 867 megabits per second as a result (presumably two spatial streams at 433 Mbps each over an 80 MHz channel).

Of course I could not get away without publishing the raw specifications:

LL850/MS (Price: 200,000 yen):

  • Fourth-generation Intel Core processor with onboard video
  • 8GB DDR3 RAM
  • 1TB HDD w/ 32GB SSD caching
  • BDXL (100-128GB BluRay disc) drive
  • IEEE 802.11ac WiFi adapter, Bluetooth 4.0
  • SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
  • 1080p IPS display with touch support
  • Office Home and Business 2013 preinstalled?

LL750/MS (Price: 190,000 yen):

  • Fourth-generation Intel Core processor with onboard video
  • 8GB DDR3 RAM
  • 1TB HDD (no SSD cache)
  • (Optical disc support not mentioned)
  • IEEE 802.11a/b/g/n WiFi adapter, Bluetooth 4.0
  • SDXC, Gigabit Ethernet, HDMI, USB3.0, 2x2W stereo Yamaha speakers
  • 1366x768 (IPS?) touch-enabled display

Pushing $1000 cards to 5760x1200

Subject: Graphics Cards | May 10, 2013 - 04:25 PM |
Tagged: titan, radeon hd 7990, nvidia, amd

If you have been wondering how the two flagship GPUs fare in a battle royale of pure frame rate, you can satisfy your curiosity at [H]ard|OCP.  They have tested both NVIDIA's TITAN and the finally released HD 7990 in one of their latest reviews.  Both cards were forced to push out pixels at 5760x1200 and for the most part tied, which makes sense as they both cost $1000.  The real winner was Crossfired HD 7970s, which kept up with the big guns but cost $200 less to purchase.

If that isn't extreme enough for you, they also overclocked the TITAN in a separate review.

HOVERclock.gif

"We follow-up with a look at how the $999 GeForce GTX TITAN compares to the new $999 AMD Radeon HD 7990 video card. What makes this is unique is that the GeForce GTX TITAN is a single-GPU running three displays in NV Surround compared to the same priced dual-GPU CrossFire on a card Radeon HD 7990 in Eyefinity."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: [H]ard|OCP

PCPer Live! Frame Rating and FCAT - Your Questions Answered!

Subject: Editorial, Graphics Cards | May 8, 2013 - 08:37 PM |
Tagged: video, nvidia, live, frame rating, fcat

Update: Did you miss the live stream?  Watch the on-demand replay below and learn all about the Frame Rating system, FCAT, input latency and more!!

I know, based solely on the amount of traffic and forum discussion, that our readers have really adopted and accepted our Frame Rating graphics testing methodology.  Based on direct capture of the GPU's output via an external system and a high-end capture card, our new testing has helped users see GPU performance in a more "real-world" light than previous benchmarks would allow.
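
For anyone new to the series, the core idea behind Frame Rating is that each rendered frame is tagged with a colored overlay bar, and the captured video is then analyzed to see how long each frame actually spent on screen. Below is a minimal sketch of that analysis step; it is an illustration of the concept with made-up constants, not our actual tooling:

```python
# Capture-based frame-time analysis, conceptually: every rendered frame is
# tinted with the next color in a repeating sequence, so a color change in
# the captured scanlines marks a new frame arriving at the display.
CAPTURE_RATE_HZ = 60.0
SCANLINES_PER_CAPTURE = 1080
MS_PER_SCANLINE = 1000.0 / CAPTURE_RATE_HZ / SCANLINES_PER_CAPTURE

def frame_times_ms(overlay_colors):
    """overlay_colors: the overlay color seen on each captured scanline, in
    display order. Returns the on-screen time of each detected frame."""
    times, run = [], 1
    for prev, cur in zip(overlay_colors, overlay_colors[1:]):
        if cur == prev:
            run += 1
        else:
            times.append(run * MS_PER_SCANLINE)  # scanlines shown -> milliseconds
            run = 1
    times.append(run * MS_PER_SCANLINE)
    return times
```

A frame that occupies only a sliver of scanlines before the next color appears is a "runt": it bumps the FPS counter without contributing anything visible to the animation.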

I also know that there are lots of questions about the process, the technology and the results we have shown.  In order to try and address these questions and to facilitate new ideas from the community, we are hosting a PC Perspective Live Stream on Thursday afternoon.

Joining me will be NVIDIA's Tom Petersen, a favorite of the community, to talk about NVIDIA's stance on FCAT and Frame Rating, as well as just talk about the science of animation and input. 

fr-2.png

The primary part of this live stream will be about education - not about bashing one particular product line or talking up another.  And part of that education is your ability to interact with us live, ask questions, and give feedback.  During the stream we'll be monitoring the chat room embedded on http://pcper.com/live and I'll be watching my Twitter feed for questions from the audience.  The easiest way to get your question addressed, though, is to leave a comment or inquiry in this post below.  It doesn't require registration, and it will allow us to think about the questions beforehand, giving them a better chance of being answered during the stream.

Frame Rating and FCAT Live Stream

11am PT / 2pm ET - May 9th

PC Perspective Live! Page

So, stop by at 2pm ET on Thursday, May 9th to discuss the future of graphics performance and benchmarking!

AMD to erupt Volcanic Islands GPUs as early as Q4 2013?

Subject: Editorial, General Tech, Graphics Cards, Processors | May 8, 2013 - 06:32 PM |
Tagged: Volcanic Islands, radeon, ps4, amd

So the Southern Islands might not be entirely stable throughout 2013 as we originally reported; seismic activity being analyzed suggests the eruption of a new GPU micro-architecture as early as Q4. These Volcanic Islands, as they have been codenamed, should explode onto the scene opposing NVIDIA's GeForce GTX 700-series products.

It is times like these where GPGPU-based seismic computation becomes useful.

The rumor is based upon a source which leaked a fragment of a slide outlining the processor in block diagram form and specifications of its alleged flagship chip, "Hawaii". Of primary note, Volcanic Islands is rumored to be organized with both Serial Processing Modules (SPMs) and a Parallel Compute Module (PCM).

Radeon9000.jpg

So apparently a discrete GPU can have serial processing units embedded on it now.

Heterogeneous Systems Architecture (HSA) is a set of initiatives to bridge the gap between massively parallel workloads and branching logic tasks. We usually make reference to this in terms of APUs, bringing parallel-optimized hardware to the CPU; in this case, we are discussing it in terms of bringing serial processing to the discrete GPU. According to the diagram, the chip would contain 8 processor modules, each with two processing cores and an FPU, for a total of 16 cores. There is no definite indication of whether these cores would be based upon AMD's license to produce x86 processors or its other license to produce ARM processors. Unlike an APU, this design is heavily skewed towards parallel computation rather than a relatively even balance between CPU, GPU, and chipset features.

Now of course, why would they do that? Graphics processors can do branching logic but it tends to sharply cut performance. With an architecture such as this, a programmer might be able to more efficiently switch between parallel and branching logic tasks without doing an expensive switch across the motherboard and PCIe bus between devices. Josh Walrath suggested a server containing these as essentially add-in card computers. For gamers, this might help out with workloads such as AI which is awkwardly split between branching logic and massively parallel visibility and path-finding tasks. Josh seems skeptical about this until HSA becomes further adopted, however.
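
To see why branching is so expensive on a GPU, consider how SIMD-style hardware executes it: all threads in a wavefront share one instruction stream, so when lanes disagree at a branch, the hardware runs both paths and masks off the inactive lanes each time. A toy cost model (illustrative only, with made-up cycle counts):

```python
# Why divergent branches hurt GPUs: all 64 lanes of a wavefront share one
# instruction pointer, so a branch where lanes disagree executes BOTH sides.
def wavefront_branch_cost(lane_conditions, cost_true, cost_false):
    cost = 0
    if any(lane_conditions):        # at least one lane takes the 'true' path
        cost += cost_true
    if not all(lane_conditions):    # at least one lane takes the 'false' path
        cost += cost_false
    return cost

uniform = wavefront_branch_cost([True] * 64, 100, 80)                        # 100 cycles
divergent = wavefront_branch_cost([i % 2 == 0 for i in range(64)], 100, 80)  # 180 cycles
print(uniform, divergent)
```

A dedicated serial core pays only for the path it actually takes, which is presumably the role the rumored SPMs would fill.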

Still, there is a reason why they are implementing this now. I wonder, if the SPMs are based upon simple x86 cores, how the PS4 will influence PC gaming. Technically, a Volcanic Islands GPU would be an oversized PS4 on an add-in card. This could give AMD an edge, particularly in games ported to the PC from the PlayStation.

This chip, Hawaii, is rumored to have the following specifications:

  • 4096 stream processors
  • 16 serial processor cores on 8 modules
  • 4 geometry engines
  • 256 TMUs
  • 64 ROPs
  • 512-bit GDDR5 memory interface, much like the PS4.
  • 20 nm Gate-Last silicon fab process
    • Unclear if TSMC or "Common Platform" (IBM/Samsung/GLOBALFOUNDRIES)

Softpedia is also reporting on this leak, adding the claim that the GPU will be designed on a 20nm gate-last fabrication process. While gate-last is generally considered not worth the extra production effort, Fully Depleted Silicon On Insulator (FD-SOI) is apparently "amazing" on gate-last at 28nm and smaller nodes. This could mean that AMD is eyeing that technology and making this design with the intent of later switching to an FD-SOI process, avoiding the large redesign that starting on an initially easier gate-first process would require.

Well that is a lot to process... so I will leave you with an open question for our viewers: what do you think AMD has planned with this architecture, and what do you like and/or dislike about what your speculation would mean?

Source: TechPowerUp