Subject: Graphics Cards | June 13, 2017 - 01:17 PM | Jeremy Hellstrom
Tagged: nvidia, gtx 1080 ti, GTX 1080 Ti GAMING X, msi, Twin Frozr VI, 4k
MSI's latest version of the GeForce GTX 1080 Ti is the GAMING X 4K, and it has the design features you would expect: Twin Frozr VI cooling, Hi-C CAPs, Super Ferrite Chokes, and Japanese solid capacitors. When benchmarking the card, [H]ard|OCP saw performance significantly higher than the quoted 1657MHz boost speed; the average was 1935MHz before overclocking, and an impressive 2038MHz was the highest stable in-game frequency. They tested both the default and overclocked frequencies against a battery of benchmarks, including the newly released Prey. The card performed admirably at 4K, with many games still running well with all graphics options at maximum. Drop by for a look.
"We review a custom GeForce GTX 1080 Ti based video card with custom cooling and a factory overclock built for overclocking. Can the MSI GeForce GTX 1080 Ti GAMING X truly deliver a consistent enjoyable high-end graphics setting gameplay experience in games at 4K finally? Is a single card viable for current generation gaming at 4K?"
Here are some more Graphics Card articles from around the web:
- Asus ROG Strix GeForce GTX 1080 OC Edition 8GB 11Gbps Video Card Review @ Bjorn3d
- 15-Way NVIDIA/AMD OpenCL GPU Linux Benchmarks Of Ethereum Ethminer @ Phoronix
- XFX RX 460 4GB Heatsink Edition Review @ Bjorn3d
- XFX Rs XXX Edition Rx 570 4GB OC Review @ Bjorn3d
Subject: Graphics Cards | June 8, 2017 - 05:26 PM | Jeremy Hellstrom
Tagged: radeon, Crimson Edition 17.6.1, amd
In the very near future AMD will be releasing an updated driver, focused on improving performance in Prey and DiRT 4.
For DiRT 4 it will enable a Multi-GPU profile and deliver up to a 30% performance improvement when using 8xMSAA on a Radeon RX 580 8GB, compared to the previous release.
Subject: Graphics Cards, Displays | June 6, 2017 - 06:06 PM | Scott Michaud
Tagged: hdr, sdr, nvidia, computex
Dmitry Novoselov of Hardware Canucks saw an NVIDIA SDR vs HDR demo, presumably at Computex based on timing and the intro bumper, and noticed that the SDR monitor looked flat. According to his post in the YouTube comments, he asked NVIDIA to gain access to the monitor settings, and they let him... and he found that the brightness, contrast, and gamma settings were way off. He then performed a factory reset, to test how the manufacturer defaults hold up in the comparison, and did his video based on those results.
I should note that video footage of HDR monitors will not correctly describe what you can see in person. Not only is the camera not HDR, and thus not capable of showing the full range of what the monitor is displaying, but also who knows what the camera’s (and later video processing) exposure and color grading will actually correspond to. That said, he was there and saw it in person, so his eyewitness testimony is definitely valid, but it may or may not focus on qualities that you care about.
Anywho, the test was Mass Effect: Andromeda, which has a native HDR profile. To his taste, he apparently prefers the SDR content in a lot of ways, particularly how the blown out areas behave. He claims that he’s concerned about game-to-game quality, because there will be inconsistency between how one color grading professional chooses to process a scene versus another, but I take issue with that. Even in standard color range, there will always be an art director that decides what looks good and what doesn’t.
They are now given another knob, and it’s an adjustment that the industry is still learning how to deal with, but that’s not a downside to HDR.
Subject: Graphics Cards | June 2, 2017 - 03:02 PM | Jeremy Hellstrom
Tagged: amd, radeon, linux
When Phoronix does a performance round-up they do not mess around. Their latest look at the performance of AMD cards on Linux stretches all the way back to the HD 2900XT and encompasses almost every GPU released between that part and the RX 580, with a pair of FirePro cards and the Fury included as well. For comparative performance numbers you will also see 28 NVIDIA cards, making these some of the longest charts you have ever scrolled through. Drop by to check out the state of AMD performance on Linux in a variety of games as well as synthetic benchmarks.
"It's that time of the year where we see how the open-source AMD Linux graphics driver stack is working on past and present hardware in a large GPU comparison with various OpenGL games and workloads. This year we go from the new Radeon RX 580 all the way back to the Radeon HD 2900XT, looking at how the mature Radeon DRM kernel driver and R600 Gallium3D driver is working for aging ATI/AMD graphics hardware. In total there were 51 graphics cards tested for this comparison of Radeon cards as well as NVIDIA GeForce hardware for reference."
Here are some more Graphics Card articles from around the web:
- PowerColor Red Devil Radeon RX 580 Video Card Review @ Hardware Asylum
- 21-Way NVIDIA Fermi/Kepler/Maxwell/Pascal OpenCL GPU Comparison @ Phoronix
- 28-Way NVIDIA GeForce GPU Comparison On Ubuntu: From GeForce 8 To GeForce 1080 @ Phoronix
- ASUS GTX 1080 ROG Strix OC 11Gbps @ Kitguru
- MSI GTX 1080 Gaming X Plus 8GB @ Kitguru
Subject: Graphics Cards, Mobile | June 2, 2017 - 02:23 AM | Scott Michaud
Tagged: Imagination Technologies, PowerVR, ray tracing, ue4, vulkan
Imagination Technologies has published another video that demonstrates ray tracing with their PowerVR Wizard GPU. The test system, today, is a development card that is running on Ubuntu, and powering Unreal Engine 4. Specifically, it is using UE4’s Vulkan renderer.
The demo highlights two major advantages of ray traced images. The first is that, rather than applying a baked cubemap with screen-space reflections to simulate metallic objects, this demo calculates reflections with secondary rays. From there, it’s just a matter of hooking up the gathered information into the parameters that the shader requires and doing the calculations.
The second advantage is that it can do arbitrary lens effects, like distortion and equirectangular 360° projections. Rasterization, which projects 3D world coordinates into 2D coordinates on a screen, assumes that edges remain straight, and that causes problems as FoV gets very large, especially for a full circle. Imagination Technologies acknowledges that workarounds exist, like breaking up the render into the six faces of a cube, but the best approximation is casting a ray per pixel and seeing what it hits.
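To illustrate why ray tracing handles these projections naturally (this is our own sketch, not Imagination's code): for an equirectangular render, each pixel simply maps to a longitude/latitude pair and a ray is cast in that direction, something a single linear rasterization projection cannot express.

```python
import math

def equirect_ray(px, py, width, height):
    """Map a pixel in an equirectangular image to a unit ray direction.
    Longitude spans the full 360 degrees horizontally, latitude 180 vertically."""
    lon = (px / width - 0.5) * 2.0 * math.pi   # -pi .. pi across the image
    lat = (0.5 - py / height) * math.pi / 1.0 * 0.5 * 2.0 / 2.0 * 1.0  # see below
    lat = (0.5 - py / height) * math.pi * 0.5 * 2.0 / 2.0
    lat = (0.5 - py / height) * math.pi / 2.0 * 2.0
    # Simplified: latitude runs from +pi/2 (top) to -pi/2 (bottom)
    lat = (0.5 - py / height) * math.pi
    return (math.cos(lat) * math.sin(lon),   # x
            math.sin(lat),                   # y (up)
            math.cos(lat) * math.cos(lon))   # z (forward)

# The center pixel looks straight ahead:
print(equirect_ray(960, 540, 1920, 1080))  # ~(0.0, 0.0, 1.0)
```

A tracer would cast one ray per pixel along these directions; a rasterizer would instead need the cubemap workaround mentioned above.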
The demo was originally for GDC 2017, back in February, but the videos have just been released.
Subject: Graphics Cards | June 1, 2017 - 02:38 PM | Jeremy Hellstrom
Tagged: plex, live tv
Today Plex announced the addition of a live streaming service to their Plex Pass which will let you stream digital cable channels to any of your devices once you set it up on your account. All you need is a digital tuner such as an NVIDIA SHIELD, a product from Hauppauge, AVerMedia, DVBLogic or even a digital antenna. Hook the device up to the same network your Plex server resides on and start streaming your favourite shows, without paying your cable company rental fees for that set top box. You can check out what channels are available in your area on this Plex page. If you are unfamiliar with Plex or want to read a bit more on the setup you can check out this story at Gizmodo.
"In the latest build of Plex, the server can actually automatically convert the media, resolving this problem. More importantly, you can now watch live TV via a feature called, strangely enough, Live TV."
Here is some more Tech News from around the web:
- Intel Compute Card hands-on @ The Inquirer
- Android execs get technical talking updates, Project Treble, Linux, and more @ Ars Technica
- Enterprise ID management firm OneLogin covfefes to security breach @ The Inquirer
- Nvidia taps Taiwan server industry with HGX reference design for AI systems @ DigiTimes
- Sons of IoT: Bikers hack Jeeps in auto theft spree @ The Register
- Nest leaves competition in the dust with new smart camera @ The Register
- Windows XP Computers Were Mostly Immune To WannaCry @ Slashdot
- Nvidia: Pssst... farmers. Need to get some weeds whacked? @ The Register
- The Dell Computex 2017 Product Launch Coverage @ Tech ARP
- AMD Launches for Threadripper and Radeon RX Vega @ [H]ard|OCP
- The Computex Taipei 2017 Live Coverage (Day 2) @ Tech ARP
- AMD unveils 16-core Ryzen 9 CPUs to take on Intel's Core i9 @ The Inquirer
- The Intel Compute Card, X-Series & Core i9 CPUs & More @ Computex 2017 @ TechARP
- The Computex Taipei 2017 Live Coverage (Day 1) @ TechARP
Subject: Graphics Cards, Mobile | May 30, 2017 - 12:48 AM | Ryan Shrout
Tagged: nvidia, mobile, max-q design, max-q, GTX 1080, geforce
During CEO Jensen Huang’s keynote at Computex tonight, NVIDIA announced a new initiative called GeForce GTX with Max-Q Design, targeting the mobile gaming markets with a product that is lighter, thinner yet more powerful than previously available gaming notebooks.
The idea behind this technology differentiation centers on gaming notebooks, which have seen limited evolution in form factor and design over the last several years. The biggest stereotype of gaming notebooks today is that they must be big, bulky, and heavy to provide a gaming experience competitive with desktop computers. NVIDIA is taking it upon itself to help drive innovation forward in this market, in some ways similar to how Intel created the Ultrabook.
Using “typical” specifications from previous machines built around a GeForce GTX 880M (admittedly a part that came out in early 2014), NVIDIA claims that Max-Q Designs will offer compelling gaming notebooks with half the weight and a third of the thickness, yet 3x the performance. Utilizing a GeForce GTX 1080 GP104 GPU, the team is focusing on four specific hardware data points to achieve this goal.
First, NVIDIA is setting specifications of the GPUs in this design to run at their maximum efficiency point, allowing the notebook to get the best possible gaming performance from Pascal with the smallest amount of power draw. This is an obvious move and is likely something that has been occurring for a while, but further down the product stack. It’s also likely that NVIDIA is highly binning the GP104 parts to filter those that require the least amount of power to hit the performance target of Max-Q Designs.
Second, NVIDIA is depending on the use of GeForce Experience software to set in-game settings optimally for power consumption. Though details are light, this likely means running the game with frame rate limiting enabled, keeping gamers from running at frame rates well above their screen's refresh rate (static or G-Sync), which is an unnecessary power drain. It could also mean lower quality settings than we might normally associate with a GeForce GTX 1080 graphics card.
Comparing a 3-year old notebook versus a Max-Q Design
The third and fourth points are heavily related: using the best possible cooling solutions and integrating the best available power regulators targeting efficiency. The former allows the GPU to be cooled quickly and quietly (with a quoted sub-40 dBA goal), keeping the GTX 1080 on its peak efficiency curve. And putting the GPU in that state with inefficient power delivery hardware would be a waste, so NVIDIA is setting standards here too.
UPDATE: From the NVIDIA news release just posted on the company's website, we learned of a couple of new additions to Max-Q Design:
NVIDIA WhisperMode Technology
NVIDIA also introduced WhisperMode technology, which makes laptops run much quieter while gaming. WhisperMode intelligently paces the game's frame rate while simultaneously configuring the graphics settings for optimal power efficiency. This reduces the overall acoustic level for gaming laptops. Completely user adjustable and available for all Pascal GPU-based laptops, WhisperMode will be available soon through a GeForce Experience software update.
MaxQ-designed gaming laptops equipped with GeForce GTX 1080, 1070 and 1060 GPUs will be available starting June 27 from the world's leading laptop OEMs and system builders, including Acer, Aftershock, Alienware, ASUS, Clevo, Dream Machine, ECT, Gigabyte, Hasee, HP, LDLC, Lenovo, Machenike, Maingear, Mechrevo, MSI, Multicom, Origin PC, PC Specialist, Sager, Scan, Terrans Force, Tronic'5, and XoticPC. Features, pricing and availability may vary.
Jensen showed an upcoming ASUS Republic of Gamers notebook called Zephyrus that hit all of these targets – likely NVIDIA’s initial build partner. On it they demonstrated Project Cars 2, an impressive looking title for certain. No information was given on image quality settings, resolutions, frame rates, etc.
The ASUS ROG Zephyrus Max-Q Design Gaming Notebook
This design standard is impressive, and though I assume many gamers and OEMs will worry about having an outside party setting requirements for upcoming designs, I err on the side of this being a necessary step. If you remember notebooks before the Intel Ultrabook push, they were stagnant and uninspiring. Intel’s somewhat forceful move to make OEMs innovate and compete in a new way changed the ecosystem at a fundamental level. It is very possible that GeForce GTX with Max-Q Design will do the same thing for gaming notebooks.
An initiative like this continues NVIDIA’s apparent goal of establishing itself as the “PC brand”, competing more with Xbox and PlayStation than with Radeon. Jensen claimed that more than 10 million GeForce gaming notebooks were sold in the last year, exceeding the sales of Xbox hardware in the same time frame. He also called out the ASUS prototype notebook as having compute capability 60% higher than that of the PS4 Pro. It’s clear that NVIDIA wants to be more than just the add-in card leader, more than just the leader in computer graphics. Owning the ecosystem vertical gives them more control and power to drive the direction of software and hardware.
The ASUS ROG Zephyrus Max-Q Design Gaming Notebook
So, does the Max-Q Design technology change anything? Considering the Razer Blade B5 is already under 18mm thick, the argument could be made that the market was already going down this path, and NVIDIA is simply jumping in to get credit for the move. Though Razer is a great partner for NVIDIA, they are likely irked that NVIDIA pushing all OEMs in this direction will steal some of the thunder from a type of design that Razer started and evangelized.
That political discussion aside, Max-Q Design will bring new, better gaming notebook options to the market from many OEMs, lowering the price of entry for these flagship designs. NVIDIA did not mention anything about cost requirements or segments around Max-Q, so I do expect the first wave of these to be on the premium end of the scale. Over time, as cost cutting measures come into place, and the necessity of thinner, lighter gaming notebooks is well understood, Max-Q Designs could find itself in a wide range of price segments.
Subject: Graphics Cards | May 29, 2017 - 08:30 PM | Jim Tanous
Tagged: Kingpin, gtx 1080 ti, gpu, evga, computex 2017
EVGA today took the wraps off its latest and highest-end NVIDIA GPU with the announcement of the EVGA GeForce GTX 1080 Ti Kingpin Edition. Part of the company's continuing line of "K|NGP|N" licensed graphics cards, the 1080 Ti Kingpin includes performance, cooling, and stability-minded features that are intended to set it apart from all of the other 1080 Ti models currently available.
From a design standpoint, the 1080 Ti Kingpin features an oversized PCB, triple-fan iCX cooler, an expansive copper heat sink, and right-edge PCIe connectors (2 x 8pin), meaning that those with an obsession for cable management won't need to pick up something like the EVGA PowerLink. The card's design is also thin enough that owners can convert it into a true single-slot card by removing the iCX cooler, allowing enthusiasts to pack more water- or liquid nitrogen-cooled GPUs into a single chassis.
The GTX 1080 Ti Kingpin also features a unique array of display outputs, with dual-link DVI, HDMI 2.0, and three Mini DisplayPort 1.3 connectors. This compares with the three full-size DisplayPort and single HDMI outputs found on the 1080 Ti reference design. The presence of the DVI port on the Kingpin edition also directly addresses the concerns of some NVIDIA customers who weren't fans of NVIDIA's decision to ditch the "legacy" connector.
With its overbuilt PCB and enhanced cooling, EVGA claims that users will be able to achieve greater performance from the Kingpin Edition compared to any other currently shipping GTX 1080 Ti. That includes a "guaranteed" overclock of at least 2025MHz right out of the box, which compares to the 1480MHz base / 1600MHz boost clock advertised for the 1080 Ti's reference design (although it's important to note that NVIDIA's advertised boost clocks have become quite conservative in recent years, and many 1080 Ti owners are able to easily exceed 1600MHz with modest overclocking).
EVGA has yet to confirm an exact release date for the GeForce GTX 1080 Ti Kingpin, but it is expected to launch in late June or July. As for price, EVGA has also declined to provide specifics, but interested enthusiasts should start saving their pennies now. Based on previous iterations of the "K|NGP|N" flagship model, expect a price premium of anywhere between $100 and $400.
Subject: General Tech, Graphics Cards | May 27, 2017 - 12:18 AM | Tim Verry
Tagged: vision fund, softbank, nvidia, iot, HPC, ai
SoftBank, the Tokyo-based Japanese telecom and internet technology company, has reportedly quietly amassed a 4.9% stake in graphics chip giant NVIDIA. Bloomberg reports that SoftBank has carefully invested $4 billion into NVIDIA, avoiding the need for regulatory approval in the US by keeping its investment under 5% of the company. SoftBank has promised the current administration that it will invest $50 billion into US tech companies, and it seems that NVIDIA is the first major part of that plan.
NVIDIA's Tesla V100 GPU.
Led by Chairman and CEO Masayoshi Son, SoftBank is not afraid to invest in technology companies it believes in with major past acquisitions and investments in companies like ARM Holdings, Sprint, Alibaba, and game company Supercell.
The $4 billion investment makes SoftBank the fourth largest shareholder in NVIDIA, whose stock has rallied on SoftBank’s purchases and vote of confidence. The $100 billion Vision Fund (currently at $93 billion) may also follow SoftBank’s lead in acquiring a stake in NVIDIA, which is involved in graphics, HPC, AI, deep learning, and gaming.
Overall, this is good news for NVIDIA and its shareholders. I am curious what other plays SoftBank will make for US tech companies.
What are your thoughts on SoftBank investing heavily in NVIDIA?
Subject: Graphics Cards | May 26, 2017 - 03:56 PM | Jeremy Hellstrom
Tagged: evga, Hydro Copper GTX 1080, water cooler, nvidia
EVGA's Hydro Copper GTX 1080 is purpose-built to fit any GTX 1080 on the market, with thermal pads for the memory and VRMs already attached and a tube of EVGA Frostbite thermal paste for the GPU. The ports that connect into your watercooling loop are further apart than usual, something TechPowerUp were initially skeptical about; once they tested the cooler those doubts disappeared, though they had other concerns about the design. Check out the review for the full details on this cooler's performance.
"The EVGA Hydro Copper GTX 1080 is a full-cover waterblock that offers integrated lighting with no cable management needed, a six-port I/O port manifold, and an aluminum front cover for aesthetics and rigidity alike. It also aims to simplify installation by incorporating pre-installed thermal pads out of the box."
Here is some more Tech News from around the web:
- MSI GeForce GT 1030: A $70 Passively-Cooled Graphics Card @ Phoronix
- Palit GTX 1050 Ti KalmX 4 GB @ techPowerUp
- Radeon RX 560 Linux OpenGL/Vulkan Benchmarks @ Phoronix
- Aorus Radeon RX 570 4G Video Card Review @ Hardware Asylum
- Polaris, Boosted: A Look At PowerColor’s Radeon RX 570 & RX 580 @ Techgage
- XFX RX 570 RS 4GB XXX Edition Review @ Neoseeker
Subject: Graphics Cards, Shows and Expos | May 25, 2017 - 07:24 PM | Jeremy Hellstrom
Tagged: external gpu, zotac, thunderbolt 3, computex 2017
They haven't given us much detail, but as you would expect the ZOTAC external GPU box connects a GPU to your system via a Thunderbolt 3 connector, allowing you to add more GPU power to a mobile system or any other computer which needs a little boost to its graphics. You can fit cards of up to 9" in length, which makes it a perfect match for the two Mini GPUs just below, or other lower powered cards which are not as well endowed as your average GTX 1080 or 1080 Ti. It also adds four USB 3.0 ports and a Quick Charge 3.0 port to your system, so you can leave it at home and simply attach your laptop via the Thunderbolt cable and get right to gaming.
Subject: Graphics Cards, Shows and Expos | May 25, 2017 - 07:00 PM | Jeremy Hellstrom
Tagged: zotac, GTX 1080 Ti Mini, GTX 1080 Ti Arctic Storm Mini, gtx 1080 ti, computex 2017
ZOTAC is claiming bragging rights about the size of their new GTX 1080 Tis: they are the smallest of their kind. The two new cards measure a minuscule 210.8mm (8.3") in length, and in the case of the Arctic Storm Mini it is the lightest watercooled GPU on the market.
You can see the size of the ZOTAC GeForce GTX 1080 Ti Mini by how much of its length is taken up by the PCIe connector, compared to most 1080 Tis, which are over a foot long. The card is simply not long enough to fit a third fan.
The Arctic Storm version is the same size as the air-cooled model but opts for the world's lightest watercooler. That may mean you want a powerful pump attached to the GPU, as there is less metal to transfer heat, but it means small, silent builds can pack a lot of graphical power.
Both these cards will use dual 8-pin PCIe power connectors, expect to see more of them at Computex.
Subject: Graphics Cards | May 23, 2017 - 03:58 PM | Jeremy Hellstrom
Tagged: ek cooling, pascal, nvidia, waterblock, GTX FE
The current series of EK Cooling waterblocks for Pascal based GPUs, up to and including the new Titan X are being replaced with a new family of coolers. The new GTX FE water blocks will be compatible with the previous generation of backplates, so you can do a partial upgrade or keep an eye out for discounts on the previous generation.
These new coolers will fit any Founders Edition reference card, from the GTX 1060 through to the Titan X; the current compatibility list stands at 106 unique graphics cards, so your card is likely to be covered. You can choose between four models: a plain design, one with acetal, one with nickel, and one with both acetal and nickel. Whichever one you choose, it will run you 109.95€/$125 USD.
Full PR is below.
EK Water Blocks, the Slovenia-based premium computer liquid cooling gear manufacturer, is releasing several new EK-FC GeForce GTX FE water blocks that are compatible with multiple reference design Founders Edition NVIDIA® GeForce GTX 1060, 1070, 1080, 1080 Ti, Titan X Pascal and Titan Xp based graphics cards. All the water blocks feature recently introduced aesthetic terminal cover as well! FE blocks come as a replacement to current GeForce GTX 10x0 / TITAN X Series of water blocks.
All current GeForce GTX 10x0 / TITAN X Series of water blocks are going to be discontinued after the stock runs out and FE blocks come as a complete replacement. FE blocks are designed to fit all reference design Founders Edition NVIDIA GeForce GTX 1060, 1070, 1080, 1080 Ti, Titan X Pascal and Titan Xp based graphics cards. The current compatibility list rounds up a total of 106 graphics cards that are on the market, but as always, we recommend that you refer to the EK Cooling Configurator for a precise compatibility match.
The new EK-FC GeForce GTX FE water blocks are also backward compatible with all EK-FC1080 GTX Backplates, EK-FC1080 GTX Ti Backplates, and EK-FC Titan X Pascal Backplates.
Availability and pricing
These water blocks are made in Slovenia, Europe and are available for purchase through EK Webshop and Partner Reseller Network. In the table below you can see manufacturer suggested retail price (MSRP) with VAT included.
Subject: Graphics Cards | May 20, 2017 - 07:01 AM | Scott Michaud
Tagged: graphics drivers, amd
The second graphics driver of the month from AMD, Radeon Software Crimson ReLive 17.5.2, adds optimizations for Bethesda’s new shooter, Prey. AMD claims that it will yield up to a 4.5% performance improvement, as measured on an RX 580 (versus the same card with 17.5.1). This is over and above the up to 4.7% increase that 17.5.1 had over 17.4.4.
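For a rough sense of how those "up to" figures stack, sequential driver gains compound multiplicatively rather than adding (a back-of-the-envelope calculation, using AMD's best-case numbers):

```python
# Each release's "up to" improvement is measured against the release
# immediately before it, so the cumulative best-case gain over 17.4.4
# is the product of the two factors, not their sum.
gain_1751 = 1.047   # 17.5.1 over 17.4.4 (up to 4.7%)
gain_1752 = 1.045   # 17.5.2 over 17.5.1 (up to 4.5%)

cumulative = gain_1751 * gain_1752 - 1
print(f"{cumulative:.1%}")  # ~9.4% best case over 17.4.4
```

In practice both numbers are RX 580 best-case figures, so treat ~9.4% as a ceiling rather than a typical result.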
Outside of that game, 17.5.2 also addresses four issues. The first is a crash in NieR: Automata. The second is long load times in Forza Horizon 3. The third is a system hang with the RX 550 when going to sleep. The fourth fixed issue is a bit more complicated; apparently, in a multi-GPU system where monitors are attached to multiple graphics cards, the primary graphics card can appear disabled in Radeon Settings. All four are now fixed, so, if they affect you, then pick up the driver.
As always, they are available from AMD’s website.
Subject: Graphics Cards, Mobile | May 17, 2017 - 02:30 PM | Ryan Shrout
Tagged: snapdragon 835, snapdragon, qualcomm, google io 2017, google, daydream
During the Google I/O keynote, Google and Qualcomm announced a partnership to create a reference design for a standalone Daydream VR headset using Snapdragon 835 to enable the ecosystem of partners to have deliverable hardware in consumers’ hands by the end of 2017. The time line is aggressive, impressively so, thanks in large part to the previous work Qualcomm had done with the Snapdragon-based VR reference design we first saw in September 2016. At the time the Qualcomm platform was powered by the Snapdragon 820. Since then, Qualcomm has updated the design to integrate the Snapdragon 835 processor and platform, improving performance and efficiency along the way.
Google has now taken the reference platform and made some modifications to integrate Daydream support, and will offer it to partners to showcase what a standalone, untethered VR solution can do. Even though Google Daydream has been shipping in the form of slot-in phones with a “dummy” headset, integrating the whole package into a dedicated device offers several advantages.
First, I expect the standalone units to have better performance than the phones used in a slot-in solution. With the ability to tune the device to higher thermal limits, Qualcomm and Google will be able to ramp up the clocks on the GPU and SoC to get optimal performance. And, because there is more room for a larger battery in the headset design, there should be an advantage in battery life along with the increase in performance.
The Qualcomm Snapdragon 835 VR Reference Device
It is also likely that the device will have better thermal properties than those using high-end smartphones today. In other words, with more space, there should be more area for cooling, and thus the unit shouldn’t be as warm on the consumer's face.
I would assume as well that the standalone units will have improved hardware over the smartphone iterations. That means better gyros, cameras, sensors, etc. that could lead to improved capability for the hardware in this form. Better hardware, tighter and more focused integration and better software support should mean lower latency and better VR gaming across the board. Assuming everything is implemented as it should.
The only major change that Google has made to this reference platform is the move away from Qualcomm’s 6DOF technology (6 degrees of freedom, allowing you to move in real space and have all necessary tracking done on the headset itself) to what Google calls WorldSense. Based on the Google Project Tango technology, this is the one area I have questions about going forward. I have used three different Tango-enabled devices thus far in long-term personal testing and can say that while the possibilities were astounding, the implementations have been…slow. For VR that 100% cannot be the case. I don’t yet know how different its integration is from what Qualcomm had done previously, but hopefully Google will leverage the work Qualcomm has already done with its platform.
Google is claiming that consumers will have hardware based on this reference design in 2017 but no pricing has been shared with me yet. I wouldn’t expect it to be inexpensive though – we are talking about all the hardware that goes into a flagship smartphone plus a little extra for the VR goodness. We’ll see how aggressive Google wants its partners to be and if it is willing to absorb any of the upfront costs with subsidy.
Let me know if this is the direction you hope to see VR move – away from tethered PC-based solutions and into the world of standalone units.
Subject: Graphics Cards | May 17, 2017 - 01:55 PM | Jeremy Hellstrom
Tagged: nvidia, msi, gt 1030, gigabyte, evga, zotac
The GT 1030 quietly launched from a variety of vendors late yesterday amidst the tsunami of AMD announcements. The low profile card is advertised as offering twice the performance of the iGPU found on Intel Core i5 processors and in many cases is passively cooled. From the pricing of the cards available now, expect to pay around $75 to $85 for this new card.
EVGA announced a giveaway of several GT 1030s at the same time as they released the model names. The card that is currently available retails for $75, is clocked at 1290MHz base and 1544MHz boost, and has 384 CUDA cores. The 2GB of GDDR5 is clocked a hair over 6GHz and runs on a 64-bit bus, providing a memory bandwidth of 48.06 GB/s. Two of their three models offer HDMI + DVI-D out; the third has a pair of DVI-D connectors.
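As a quick sanity check on that spec sheet, the quoted bandwidth follows directly from the effective memory data rate and the bus width:

```python
# Theoretical memory bandwidth: effective data rate (GT/s) times
# bus width in bytes (bits / 8). GDDR5 rates are usually quoted as
# an effective "GHz" figure, which is really gigatransfers per second.
def memory_bandwidth_gbps(data_rate_gtps: float, bus_width_bits: int) -> float:
    return data_rate_gtps * bus_width_bits / 8

# The GT 1030's "a hair over 6GHz" on a 64-bit bus works out to EVGA's figure:
print(memory_bandwidth_gbps(6.008, 64))  # 48.064 GB/s, matching the quoted 48.06
```

The same formula explains why a 64-bit card lands at a quarter of the 192-bit GTX 1060's bandwidth at a similar memory clock.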
Zotac's offering provides slightly lower clocks, with a base of 1227MHz and a boost of 1468MHz; however, the VRAM remains unchanged at 6GHz. It pairs HDMI 2.0b with a DVI port and comes with a low profile bracket if needed for an SFF build.
MSI went all out and released a half dozen models, two of which you can see above. The GT 1030 AERO ITX 2G OC is actively cooled which allows you to reach a 1265MHz base and 1518MHz boost clock. The passively cooled GT 1030 2GH LP OCV1 runs at the same frequency and fits in a single slot externally, however you will need to leave space inside the system as the heatsink takes up an additional slot internally. Both are fully compatible with the Afterburner Overclocking Utility and its features such as the Predator gameplay recording tool.
Last but not least are a pair from Gigabyte, the GT 1030 Low Profile 2G and Silent Low Profile 2G cards. The cards both offer two modes: in OC Mode the base clock is 1252MHz and the boost clock 1506MHz, while in Gaming Mode you will run at 1227MHz base and 1468MHz boost.
Subject: Graphics Cards | May 16, 2017 - 07:39 PM | Sebastian Peak
Tagged: Vega, reference, radeon, graphics card, gpu, Frontier Edition, amd
AMD has revealed their concept of a premium reference GPU for the upcoming Radeon Vega launch, with the "Frontier Edition" of the new graphics cards.
"Today, AMD announced its brand-new Radeon Vega Frontier Edition, the world’s most powerful solution for machine learning and advanced visualization aimed to empower the next generation of data scientists and visualization professionals -- the digital pioneers forging new paths in their fields. Designed to handle the most demanding design, rendering, and machine intelligence workloads, this powerful new graphics card excels in:
- Machine learning. Together with AMD’s ROCm open software platform, Radeon Vega Frontier Edition enables developers to tap into the power of Vega for machine learning algorithm development. Frontier Edition delivers more than 50 percent more performance than today’s most powerful machine learning GPUs.
- Advanced visualization. Radeon Vega Frontier Edition provides the performance required to drive increasingly large and complex models for real-time visualization, physically-based rendering and virtual reality through the design phase as well as rendering phase of product development.
- VR workloads. Radeon Vega Frontier Edition is ideal for VR content creation supporting AMD’s LiquidVR technology to deliver the gripping content, advanced visual comfort and compatibility needed for next-generation VR experiences.
- Revolutionized game design workflows. Radeon Vega Frontier Edition simplifies and accelerates game creation by providing a single GPU optimized for every stage of a game developer’s workflow, from asset production to playtesting and performance optimization."
From the image provided on the official product page it appears that there will be both liquid-cooled (the gold card in the background) and air-cooled variants of these "Frontier Edition" cards, which AMD states will arrive with 16GB of HBM2 and offer 1.5x the FP32 performance and 3x the FP16 performance of the Fury X.
Radeon Vega Frontier Edition
- Compute units: 64
- Single precision compute performance (FP32): ~13 TFLOPS
- Half precision compute performance (FP16): ~25 TFLOPS
- Pixel Fillrate: ~90 Gpixels/sec
- Memory capacity: 16 GB of High Bandwidth Cache
- Memory bandwidth: ~480 GB/s
The availability of the Radeon Vega Frontier Edition was announced as "late June", so we should not have too long to wait for further details, including pricing.
Subject: Graphics Cards | May 13, 2017 - 11:46 PM | Tim Verry
Tagged: SFF, pascal, nvidia, Inno3D, GP107
Hong Kong based Inno3D recently introduced a single slot graphics card using NVIDIA’s mid-range GTX 1050 Ti GPU. The aptly named Inno3D GeForce GTX 1050 Ti (1-Slot Edition) combines the reference clocked Pascal GPU, 4GB of GDDR5 memory, and a shrouded single fan cooler clad in red and black.
Around back, the card offers three display outputs: HDMI 2.0, DisplayPort 1.4, and DVI-D. The single slot cooler is a bit of an odd design, with a thin axial fan (rather than a centrifugal type) that sits over a fake plastic fin array. Note that these fins do not actually cool anything; in fact, the PCB of the card does not even extend out to where the fan is. Presumably the fins are there primarily for aesthetics and secondarily to channel a bit of the air the fan pulls down. Air is pulled in and pushed over the actual GPU heatsink (under the shroud) and out the vent holes next to the display connectors, but it is circulated through the case rather than exhausted outside it as with traditional dual slot (and some single slot) designs. I am curious how the choice of fan and vents will affect cooling performance.
Overclocking is going to be limited on this card, and it comes out-of-the-box clocked at NVIDIA reference speeds of 1290 MHz base and 1392 MHz boost for the GPU’s 768 cores and 7 GT/s for the 4GB of GDDR5 memory. The card measures 211 mm (~8.3”) long and should fit in just about any case. Since it pulls all of its power from the slot, it might be a good option for those slim towers OEMs like to use these days to get a bit of gaming out of a retail PC.
Inno3D is not yet talking availability or pricing, but looking at their existing lineup I would expect an MSRP around $150.
Subject: Graphics Cards | May 10, 2017 - 01:32 PM | Ryan Shrout
Tagged: v100, tesla, nvidia, gv100, gtc 2017
During the opening keynote to NVIDIA’s GPU Technology Conference, CEO Jen-Hsun Huang formally unveiled the latest GPU architecture and the first product based on it. The Tesla V100 accelerator is based on the Volta GPU architecture and features some amazingly impressive specifications. Let’s take a look.
|  | Tesla V100 | GTX 1080 Ti | Titan X (Pascal) | GTX 1080 | GTX 980 Ti | TITAN X | GTX 980 | R9 Fury X | R9 Fury |
|---|---|---|---|---|---|---|---|---|---|
| GPU | GV100 | GP102 | GP102 | GP104 | GM200 | GM200 | GM204 | Fiji XT | Fiji Pro |
| Base Clock | - | 1480 MHz | 1417 MHz | 1607 MHz | 1000 MHz | 1000 MHz | 1126 MHz | 1050 MHz | 1000 MHz |
| Boost Clock | 1455 MHz | 1582 MHz | 1480 MHz | 1733 MHz | 1076 MHz | 1089 MHz | 1216 MHz | - | - |
| ROP Units | 128 (?) | 88 | 96 | 64 | 96 | 96 | 64 | 64 | 64 |
| Memory Clock | 878 MHz (?) | 11000 MHz | 10000 MHz | 10000 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 500 MHz |
| Memory Interface | 4096-bit (HBM2) | 352-bit | 384-bit G5X | 256-bit G5X | 384-bit | 384-bit | 256-bit | 4096-bit (HBM) | 4096-bit (HBM) |
| Memory Bandwidth | 900 GB/s | 484 GB/s | 480 GB/s | 320 GB/s | 336 GB/s | 336 GB/s | 224 GB/s | 512 GB/s | 512 GB/s |
| TDP | 300 watts | 250 watts | 250 watts | 180 watts | 250 watts | 250 watts | 165 watts | 275 watts | 275 watts |
| Peak Compute | 15 TFLOPS | 10.6 TFLOPS | 10.1 TFLOPS | 8.2 TFLOPS | 5.63 TFLOPS | 6.14 TFLOPS | 4.61 TFLOPS | 8.60 TFLOPS | 7.20 TFLOPS |
While we are low on details today, it appears that the fundamental compute units of Volta are similar to those of Pascal. The GV100 has 80 SMs with 40 TPCs and 5120 total CUDA cores, roughly 43% more than both the GP100 GPU used on the Tesla P100 and the GP102 GPU used on the GeForce GTX 1080 Ti (each carries 3584). The structure of the GPU remains the same as GP100, with the CUDA cores organized as 64 single precision (FP32) per SM and 32 double precision (FP64) per SM.
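The quoted core-count increase can be checked against the published specifications; a quick sketch:

```python
# CUDA core counts from NVIDIA's published specs
gv100_cores = 5120
gp102_cores = 3584  # GP100 carries the same count

increase_pct = (gv100_cores - gp102_cores) / gp102_cores * 100
print(f"{increase_pct:.1f}%")  # -> 42.9%
```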
Interestingly, NVIDIA has already told us the clock speed of this new product as well, coming in at 1455 MHz Boost, more than 100 MHz lower than the GeForce GTX 1080 Ti and 25 MHz lower than the Tesla P100.
Volta adds in support for a brand new compute unit though, known as Tensor Cores. With 640 of these on the GPU die, NVIDIA directly targets the neural network and deep learning fields. If this is your first time hearing the term, a tensor is the multi-dimensional array at the heart of modern machine learning, most visibly in Google's open-source TensorFlow software library. Google has already invested in a tensor-specific processor of its own, the TPU, and now NVIDIA throws its hat in the ring.
Adding Tensor Cores to Volta allows the GPU to do mass processing for deep learning, on the order of a 12x improvement over Pascal’s capabilities using CUDA cores only.
For users interested in standard usage models, including gaming, the GV100 GPU offers 1.5x improvement in FP32 computing, up to 15 TFLOPS of theoretical performance and 7.5 TFLOPS of FP64. Other relevant specifications include 320 texture units, a 4096-bit HBM2 memory interface and 16GB of memory on-module. NVIDIA claims a memory bandwidth of 900 GB/s which works out to 878 MHz per stack.
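Both headline numbers above can be sanity-checked from the other published specs; a sketch using the standard peak-FLOPS and DDR-style memory bandwidth formulas:

```python
# Peak FP32 = CUDA cores x boost clock x 2 (one fused multiply-add per clock)
cores = 5120
boost_hz = 1455e6
fp32_tflops = cores * boost_hz * 2 / 1e12
print(f"{fp32_tflops:.1f} TFLOPS")  # -> 14.9, marketed as 15 TFLOPS

# HBM2 bandwidth = memory clock x 2 (double data rate) x bus width (bytes)
mem_clock_hz = 878e6
bus_width_bits = 4096
bandwidth_gbs = mem_clock_hz * 2 * (bus_width_bits / 8) / 1e9
print(f"{bandwidth_gbs:.0f} GB/s")  # -> 899, quoted as 900 GB/s
```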
Maybe more impressive is the transistor count: 21.1 BILLION! NVIDIA claims that this is the largest chip you can make physically with today’s technology. Considering it is being built on TSMC's 12nm FinFET technology and has an 815 mm2 die size, I see no reason to doubt them.
Shipping is scheduled for Q3 for Tesla V100 – at least, that is when NVIDIA promises the DGX-1 system built around the chip will reach developers.
I know many of you are interested in the gaming implications and timelines – sorry, I don’t have an answer for you yet. I will say that the bump from 10.6 TFLOPS to 15 TFLOPS is an impressive boost! But if the server variant of Volta isn’t due until Q3 of this year, I find it hard to think NVIDIA would bring the consumer version out faster than that. And whether or not NVIDIA offers gamers the chip with non-HBM2 memory is still a question mark for me and could directly impact performance and timing.
Subject: Graphics Cards | May 10, 2017 - 07:02 AM | Scott Michaud
Tagged: vrworks, nvidia, audio
GPUs are good at large bundles of related tasks, saving die area by tying several chunks of data together. This is commonly used for graphics, where screens have two-to-eight million (1080p to 4K) pixels, 3d models have thousands to millions of vertexes, and so forth. Each instruction is probably done hundreds, thousands, or millions of times, and so parallelism greatly helps with utilizing real-world matter to store and translate this data.
Audio is another area with a lot of parallelism. A second of audio has tens of thousands of sound pressure samples, but another huge advantage is that higher frequency sounds model pretty decently as rays, which can be traced. NVIDIA decided to repurpose their OptiX technology into calculating these rays. Beyond the architectural scenes you often see in global illumination demos, they also integrated it into an Unreal Tournament test map.
And now it’s been released, both as a standalone SDK and as an Unreal Engine 4.15 plug-in. I don’t know what its license specifically entails, because the source code requires logging into NVIDIA’s developer portal, but it looks like the plug-ins will be available to all users of supported engines.