Subject: Graphics Cards | July 29, 2016 - 01:09 AM | Ryan Shrout
Tagged: rx 470, rx 460, radeon, polaris 11, polaris 10, Polaris, amd
We know pretty much all there is to know about AMD's new Polaris architecture thanks to our Radeon RX 480 review, but AMD is taking the covers off of the lower priced, lower performance products based on the same architecture tonight. We previously covered AMD's launch event in Australia where the company officially introduced the Polaris 10 RX 470 and Polaris 11 RX 460 and talked about the broader specifications. Now, we have a bit more information to share on specifics and release dates. Specifically, AMD's RX 470 will launch on Thursday, August 4th and the RX 460 will launch on the following Monday, August 8th.
First up is the Radeon RX 470, based on the same Polaris 10 GPU as the RX 480, but with some CUs disabled to lower performance and increase yields.
This card is aimed at 1080p gaming at top quality settings with AA enabled at 60 FPS. Obviously that is a very vague statement, but it gives you an idea of what price point and target segment the RX 470 is going after.
The only comparison we have from AMD pits the upcoming RX 470 against the R9 270, where Polaris offers a range from 1.5x to 2.4x improvement in a handful of titles, which include DX12 and Vulkan enabled games, of course.
From a specifications standpoint, the RX 470 will include 2048 stream processors running at a base clock of 926 MHz and a rated boost frequency of 1206 MHz. That gives us 4.9 TFLOPS of theoretical peak performance to pair with a 6.6 Gbps memory interface capable of 211 GB/s of peak bandwidth. With a 4GB frame buffer and a 120 watt TDP, the RX 470 should offer some compelling performance in the ~$150 price segment (this price is just a guess on my part... though yields should be better – they can salvage RX 480s – and partners being able to use memory chips that do not have to hit 8 Gbps should help to lower costs).
Going down another step to the Radeon RX 460, AMD is targeting this card at 1080p resolutions at "high" image quality settings. The obvious game categories here are eSports titles like MOBAs, CS:GO, Overwatch, etc.
Again, AMD provides a comparison to other AMD hardware: in this case the R7 260X. You'll find 1.2x to 1.3x performance improvements in these types of titles. Clearly we want to know where the performance rests against the GeForce line, but this comparison seems somewhat modest.
Based on the smaller Polaris 11 GPU, which is a new chip that we have not seen before, the RX 460 features up to 2.2 TFLOPS of computing capability with 896 stream processors (14 CUs enabled out of 16 total in full Polaris 11) running between 1090 MHz and 1200 MHz. The memory system is actually running faster on the RX 460 than the RX 470, though with half the memory bus width at 128-bits. The TDP of this card is sub-75 watts and thus we should find cards that don't require any kind of external power. The RX 460 GPU will be used in desktop cards as well as notebooks (though with lower TDPs and clocks).
The chart below outlines the comparison between the three known Polaris graphics processors.
| | RX 480 | RX 470 | RX 460 |
| --- | --- | --- | --- |
| GPU Clock (Base) | 1120 MHz | 926 MHz | 1090 MHz |
| GPU Clock (Boost) | 1266 MHz | 1206 MHz | 1200 MHz |
| Memory | 4 or 8 GB GDDR5 | 4 or 8 GB GDDR5 | 2 or 4 GB GDDR5 |
| Memory Bandwidth | 256 GB/s | 211 GB/s | 112 GB/s |
| GPU | Polaris 10 | Polaris 10 | Polaris 11 |
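The theoretical throughput figures above follow directly from the published specs. As a quick sanity check, here are the standard formulas: peak FLOPS is stream processors × boost clock × 2 (one single-cycle multiply-add per shader per clock), and peak bandwidth is the effective memory rate × bus width / 8. Note that the RX 470's 256-bit bus and the RX 460's 7 Gbps memory rate are inferred from the bandwidth figures rather than officially confirmed.

```python
def peak_tflops(stream_processors, boost_mhz):
    # 2 FLOPs per stream processor per clock (fused multiply-add)
    return stream_processors * boost_mhz * 1e6 * 2 / 1e12

def peak_bandwidth_gbs(effective_gbps, bus_width_bits):
    # effective per-pin data rate (Gbps) times bus width in bytes
    return effective_gbps * bus_width_bits / 8

# RX 470: 2048 SPs @ 1206 MHz boost, 6.6 Gbps GDDR5 on a 256-bit bus
print(peak_tflops(2048, 1206))       # ~4.94 TFLOPS
print(peak_bandwidth_gbs(6.6, 256))  # ~211.2 GB/s

# RX 460: 896 SPs @ 1200 MHz boost, 7 Gbps GDDR5 (assumed) on a 128-bit bus
print(peak_tflops(896, 1200))        # ~2.15 TFLOPS
print(peak_bandwidth_gbs(7.0, 128))  # 112.0 GB/s
```

Both results match AMD's quoted 4.9 and 2.2 TFLOPS numbers after rounding.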
There is still much to learn about these new products, most importantly, prices. AMD is still shying away from telling us that important data point. The RX 470 will be on sale and will have reviews on August 4th, with the RX 460 following that on August 8th, so we'll have details and costs in our hands very soon.
It is not clear how many or what kinds of cards we can expect to see on the August 4th and August 8th release days, though it stands to reason that they will be mostly based on reference designs, especially for the RX 460 (though Gamer's Nexus did spot a dual-fan Sapphire card). With that said, we may see custom-cooled RX 470 graphics cards: while AMD does technically have a reference design with a blower-style cooler, the company expects most if not all of its partners to go their own direction with this board, including their own single- and dual-fan coolers.
For gamers looking to buy into the truly budget card segment, stay tuned just a little longer!
NVIDIA Offers Preliminary Settlement To Geforce GTX 970 Buyers In False Advertising Class Action Lawsuit
Subject: Graphics Cards | July 28, 2016 - 07:07 PM | Tim Verry
Tagged: nvidia, maxwell, GTX 970, GM204, 3.5gb memory
A recent post on Top Class Actions suggests that buyers of NVIDIA GTX 970 graphics cards may soon see a payout from a settlement agreement as part of the series of class action lawsuits facing NVIDIA over claims of false advertising. NVIDIA has reportedly offered up a preliminary settlement of $30 to "all consumers who purchased the GTX 970 graphics card" with no cap on the total payout amount along with a whopping $1.3 million in attorney's fees.
This settlement offer is in response to several class action lawsuits that consumers filed against the graphics giant following the controversy over mis-advertised specifications (particularly the number of ROP units and amount of L2 cache) and the method in which NVIDIA's GM204 GPU addressed the four total gigabytes of graphics memory.
Specifically, the graphics card specifications initially indicated that it had 64 ROPs and 2048 KB of L2 cache, but the card was later revealed to have only 56 ROPs and 1792 KB of L2. On the memory front, the "3.5 GB memory controversy" spawned many memes and investigations into how the 3.5 GB and 0.5 GB pools of memory worked and how both real-world and theoretical performance were affected by the memory setup.
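Those corrected figures line up with the structure NVIDIA later disclosed: GM204's back end is divided into eight ROP/L2 units of 8 ROPs and 256 KB of L2 each, and the GTX 970 ships with one of those units disabled. A sketch of that arithmetic (the per-unit counts here are my reading of the public disclosure, not an official NVIDIA breakdown):

```python
# GM204 full chip: 8 ROP/L2 units, each contributing 8 ROPs and 256 KB of L2
FULL_UNITS, ROPS_PER_UNIT, L2_KB_PER_UNIT = 8, 8, 256

def gm204_config(disabled_units):
    # returns (ROP count, L2 cache in KB) for a given number of fused-off units
    active = FULL_UNITS - disabled_units
    return active * ROPS_PER_UNIT, active * L2_KB_PER_UNIT

print(gm204_config(0))  # GTX 980: (64, 2048)
print(gm204_config(1))  # GTX 970: (56, 1792)
```

That disabled L2 unit is also the root of the memory split: the 512 MB of DRAM behind it has to be reached through a neighboring unit at much lower bandwidth, creating the slower 0.5 GB pool.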
(My opinions follow)
It was quite the PR disaster, and had NVIDIA been upfront with all the correct details on the specifications and the new memory implementation, the controversy could have been avoided. As it is, buyers were not able to make informed decisions about the card, and at the end of the day that is what matters and why the lawsuits have merit.
As such, I do expect both sides to reach a settlement rather than see this come to a full trial, but it may not be exactly the $30 per buyer payout as that amount still needs to be approved by the courts to ensure that it is "fair and reasonable."
For more background on the GTX 970 memory issue (it has been awhile since this all came about after all, so you may need a refresher):
- NVIDIA Discloses Full Memory Structure and Limitations of GTX 970
- NVIDIA Responds to GTX 970 3.5GB Memory Issue
- Frame Rating: GTX 970 Memory Issues Tested in SLI
- Frame Rating: Looking at GTX 970 Memory Performance
Subject: General Tech | July 28, 2016 - 05:33 PM | Tim Verry
Tagged: xiaomi, ultraportable, ultrabook, thin and light, Intel, core m3, core i5
According to the guys over at The Tech Report, Chinese smartphone maker Xiaomi is jumping into the notebook game with two new Mi Notebook Air ultrabooks. The all aluminum notebooks are sleek looking and priced very competitively for their specifications. They are set to release on August 2nd in China.
The new Mi Notebook Air notebooks come in 13.3" and 12.5" versions. Both models use all-aluminum bodies with edge-to-edge glass displays (1080p, though the panel type is unknown), backlit keyboards, and dual AKG speakers. Users can choose from gold or silver colors for the body and keyboard (Xiaomi uses a logo-less design, which is nice).
Xiaomi Mi Notebook Air via Ars Technica.
Both models sport a single USB Type-C port (which is also used for charging), two USB 3.0 Type-A ports, one HDMI video output, and a headphone jack. The Xiaomi website shows a USB Type-C adapter that adds extra ports as well. Internally, they have an M.2 slot for storage expansion, but the notebooks do not appear to be user serviceable (though iFixit may rectify that...). Also shared is support for the company's Mi Sync software and Mi fitness band, which can be used to unlock the computer when the user is in proximity.
The smaller 12.5" Mi Notebook Air is 0.51" thick and weighs just over 2.3 pounds. It is powered by an Intel Core M3 processor, and Xiaomi claims that this model can hit 11.5 hours of battery life. Other specifications include 4 GB of RAM, a 128 GB SATA SSD, and 802.11ac wireless.
If you need a bit more computing power, the 13.3" notebook is slightly bulkier at 0.58" thick and 2.8 pounds with the tradeoff in size giving users a larger display, keyboard, and dedicated graphics card. Specifically, the 13.3" ultrabook features an Intel Core i5 processor, Nvidia Geforce 940MX GPU, 8 GB DDR4 RAM, a 256GB NVMe PCI-E SSD, and 802.11ac Wi-Fi. This laptop is a bit heavier but I think the extra horsepower is worth it for those that need or want it.
Perhaps the most surprising thing about what many will see as an Apple MacBook Air clone is the pricing. The 12.5" laptop will MSRP for RMB 3499 while the 13.3" notebook will cost RMB 4999. That translates to approximately $525 and $750 USD respectively, which is a great value for the specifications and size, and would seemingly give Apple a run for its money in China. The bad news: Xiaomi does not appear to be bringing these slick-looking notebooks to the US anytime soon.
Chinese technology company LeEco (SZSE: 300104) will purchase US television manufacturer Vizio (NASDAQ: VZIO (not trading)) in a deal worth $2 billion USD set to close in the fourth quarter of this year.
LeEco plans to acquire Vizio's hardware and software divisions and run the US company as a wholly owned subsidiary while spinning off Vizio's Inscape television viewership data arm as a privately held company. With approximately 400 employees, yearly revenue in the billions ($3.1 billion in 2014), and at least 20% of the US television market, the acquisition would allow LeEco to enter the US market in a big way. Vizio is best known in the US for its televisions where it is a respected brand, but the company also produces ultrabooks, tablets, smartphones, and sound bars. It is a private US-based company with manufacturing in Mexico and China.
Founded in 2004, LeEco is involved in a number of technology related fields across China, India, and soon the US. The Vizio brand (and partnerships such as the one with Walmart to carry its TVs) alone will be instrumental in LeEco's plans to break into the US market, which has been resistant to Chinese brands making inroads (Lenovo apparently being the exception, though even Lenovo was not able to get its smartphones into the US market in a big way). The company of 5000+ employees is involved in Internet TV, video production and distribution, e-commerce, smartphones, tablets, gadgets, home automation, and even (soon) driverless cars. The company had 2014 revenue of $1.6 billion.
It is interesting to see all of the buyouts of US tech companies by overseas companies. To be clear, I don't necessarily think that these deals are a bad thing or being done with malicious intentions, but they do pique my curiosity. In this case it could be a good partnership that allows both companies to benefit: LeEco gets a strong US brand and the recognition and market trust that entails, while Vizio gets a much larger parent with experience in Chinese markets, where LeEco could help push Vizio's smart TV platform, ultrabooks, and phone ambitions further. Here's hoping that a LeEco-owned Vizio grows and maintains its quality and price points.
What do you think about LeEco buying out Vizio? What will the future hold for the US TV maker?
Subject: Processors | July 28, 2016 - 02:47 PM | Tim Verry
Tagged: kaby lake, Intel, gt3e, coffee lake, 14nm
Intel will allegedly be releasing another 14nm processor following Kaby Lake (which is itself a 14nm successor to Skylake) in 2018. The new processors are code named "Coffee Lake" and will be released alongside low power runs of 10nm Cannon Lake chips.
Not much information is known about Coffee Lake outside of leaked slides and rumors, but the first processors slated to launch in 2018 will be mainstream mobile chips that will come in U and HQ mobile flavors, which are 15W to 28W and 35W to 45W TDP chips respectively. Of course, these processors will be built on a very mature 14nm process with the usual small performance and efficiency gains beyond Skylake and Kaby Lake. The chips should have a better graphics unit, but perhaps more interesting is that the slides suggest Coffee Lake will be the first architecture where Intel brings "hexacore" (6 core) processors into mainstream consumer chips! The HQ-class Coffee Lake processors will reportedly come in two, four, and six core variants with Intel GT3e class GPUs, while the lower power U-class chips top out at dual cores with GT3e class graphics. This is notable because Intel has previously held back six-core CPUs for its more expensive and higher margin HEDT and Xeon platforms.
Of course, 2018 is also the year for Cannon Lake, which would have been the "tick" in Intel's old tick-tock schedule (which is no more), as the chips move to a smaller process node, with Intel then improving on the 10nm process in future architectures. Cannon Lake is supposed to be built on the tiny 10nm node, and it appears that the first chips on this node will be ultra low power versions for laptops and tablets. Occupying the ULV platform's U-class (15W) and Y-class (4.5W), Cannon Lake CPUs will be dual cores with GT2 graphics. These chips should sip power while giving performance comparable to Kaby and Coffee Lake, perhaps even matching the Coffee Lake U processors!
Stay tuned to PC Perspective for more information!
Subject: Editorial, Graphics Cards | July 28, 2016 - 02:36 PM | Ryan Shrout
Tagged: video, sapphire, rx 480, radeon, Polaris, pcper live, live, amd
UPDATE: Did you miss the live event? No worries, see what trouble Ed and I got into with the recording embedded below!!
When it comes to GPU releases, we at PC Perspective take things up a level in the kind of content we produce as well as the amount of information we provide to the community. Part of that commitment is our drive to bring in the very best people from around the industry to talk directly to the consumers, providing interesting and honest views on where their technology is going.
Though the Radeon RX 480 was released last month, based on AMD's latest Polaris, we are bringing in our first board partner. Ed Crisler, NA PR/Marketing Manager for Sapphire will be joining us in studio to talk about the RX 480 and Sapphire's plans for custom cards.
The Sapphire Nitro+ RX 480 Graphics Card
Sapphire Live Stream and Giveaway with Ed Crisler and Ryan Shrout
10:00am PT / 1:00pm ET - July 29th
Need a reminder? Join our live stream notification list!
The event will take place Friday, July 29th at 1:00pm ET / 10:00am PT at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience, asking questions for me and Ed to answer live.
As the price for hosting Sapphire in the offices, we demand a sacrifice in the form of hardware to give away to our viewers! We'll have a brand new Sapphire Nitro+ RX 480 8GB to hand out during the live stream! All you have to do to win on the 29th is watch the live stream!
If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Ed or me?
So join us! Set your calendar for this coming Friday at 1:00pm ET / 10:00am PT and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live notification list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!
Subject: Editorial | July 28, 2016 - 01:03 PM | Ryan Shrout
Tagged: XSPC, wings, windows 10, VR, video, titan x, tegra, Silverstone, sapphire, rx 480, Raystorm, RapidSpar, radeon pro ssg, quadro, px1, podcast, p6000, p5000, nvidia, nintendo nx, MX300, gp102, evga, dg-87, crucial, angelbird
PC Perspective Podcast #410 - 07/28/2016
Join us this week as we discuss the new Pascal based Titan X, an AMD graphics card with 1TB of SSD storage on-board, data recovery with RapidSpar and more!!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Allyn Malventano, Sebastian Peak, and Josh Walrath
Subject: Motherboards | July 28, 2016 - 10:04 AM | Sebastian Peak
Tagged: small form-factor, SFF, mini-stx, mini-pc, H110M-STX, asrock
The motherboard within ASRock's DeskMini mini-PC kit has been released as a standalone product, and this H110M-STX motherboard offers Intel processor support up to 65W in its 5" x 5" Mini-STX form-factor.
Image credit: ASRock
Specifications from ASRock:
- Supports LGA 1151 6th Generation Intel Core i7/i5/i3/Pentium/Celeron Processors up to 65W TDP
- Supports Dual-Channel DDR4 SO-DIMM 2133
- Graphics output: D-Sub, HDMI, DisplayPort
- ALC283 Audio Codec
- 2x SATA3, 1 M.2 (PCIe Gen3 x4)
- 3x USB 3.0 (Type-A & Type-C from front I/O; 1 from rear I/O)
- 3x USB 2.0 (2 from onboard header; 1 from rear I/O)
- Intel Gigabit LAN
- 1x M.2 (Key E for WiFi + BT module)
Like thin Mini-ITX motherboards, the H110M-STX requires an external 19 V power adapter. ASRock recommends a 120W adapter for 65W CPUs, while 35W Intel CPU builds can manage with a 90W adapter.
Image credit: ASRock
As to availability/price, this has yet to appear in the usual e-tail channels in the U.S., with no results currently on Amazon or Newegg. ASRock's larger H110-ITX board sells for $69.99, so this may give us an indication of where pricing might be - though the smaller STX form-factor could increase cost.
Image credit: ASRock
A sub-mITX form-factor might seem a bit unnecessary, but the smaller board does provide builders with a way to create their own mini-PC boxes with upgradable processors. Naturally, one would need an enclosure for this tiny motherboard, and the only one I have seen thus far came from SilverStone's booth at CES - though ready availability for all products in this newest form-factor is still an issue.
Subject: General Tech | July 27, 2016 - 08:47 PM | Scott Michaud
Tagged: microsoft, epic games, unreal engine, unreal engine 4, ue4, uwp
The head of Epic Games, Tim Sweeney, doesn't like UWP too much, at least as it exists today (and for noble reasons). He will not support the new software (app) platform unless Microsoft makes some clear changes that guarantee perpetual openness. There really isn't anything, technically or legally, to prevent Microsoft (or an entity with authority over Microsoft, like governments, activist groups who petition government, and so forth) from undoing their changes going forward. If Microsoft dropped support for Win32, then, apart from applications converted using something like Project Centennial, their catalog would be tiny.
SteamOS would kick its butt levels of tiny, let alone OSX, Android, and countless others.
As a result, Microsoft keeps it around, despite its unruliness. Functionality that is required by legitimate software makes it difficult to prevent malware, and, even without an infection, it can make the system get junked up over time.
UWP, on the other hand, is slimmer, contained, and authenticated with keys. This is theoretically easier to maintain, but at the expense of user control and freedom; the freedom to develop and install software anonymously and without oversight. The first iteration was with Windows RT, which was basically iOS, right down to the "you cannot ship a web browser unless it is a reskin of Internet Explorer (replace that with Safari in iOS' case)" and "content above ESRB M and PEGI 16 is banned from the OS" levels of control.
Since then, the content guidelines have been relaxed, sideloading has been added, and so forth. That said, unlike the technical hurdles of Win32, there's nothing to prevent Microsoft from, in the future, saying "Okay, we have enough software for lock-in. Sideloading is being removed in Windows 10 version 2810" or something. I doubt that the current administration wants to do this, especially executives like Phil Spencer, but their unwillingness to make it impossible to be done in the future is frustrating. This could be a few clauses in the EULA that make it easy for users to sue Microsoft if a feature is changed, and/or some chunks of code that break compatibility if certain openness features are removed.
Some people complain that he wasn't this concerned about iOS, but he already said that it was a bad decision in hindsight. Apple waved a shiny device around, and it took a few years for developers to think “Wait a minute, what did I just sign away?” iOS is, indeed, just as bad as UWP could turn into, if not worse.
Remember folks, once you build a tool for censorship, they will come. They may also have very different beliefs about what should be allowed or disallowed than you do. This is scary stuff, albeit based on good intentions.
That rant aside, Microsoft's Advanced Technology Group (ATG) has produced a fork of Unreal Engine 4, which builds UWP content. It is based upon Unreal Engine 4.12, and they have apparently merged changes up to version 4.12.5. This makes sense, of course, because that version is required to use Visual Studio 2015 Update 3.
If you want to make a game in Unreal Engine 4 for the UWP platform, then you might be able to use Microsoft's version. That said, it is provided without warranty, and there might be some bugs that cropped up, which Epic Games will probably not help with. I somehow doubt that Microsoft will have a dedicated team that merges all fixes going forward, and I don't think this will change Tim's mind (although concrete limitations that guarantee openness might...). Use at your own risk, I guess, especially if you don't care about potentially missing out on whatever is added for 4.13 and on (unless you add it yourself).
The fork is available on Microsoft's ATG GitHub, with lots of uppercase typing.
Subject: Graphics Cards, Systems, Mobile | July 27, 2016 - 07:58 PM | Scott Michaud
Tagged: nvidia, Nintendo, nintendo nx, tegra, Tegra X1, tegra x2, pascal, maxwell
Okay so there's a few rumors going around, mostly from Eurogamer / DigitalFoundry, that claim the Nintendo NX is going to be powered by an NVIDIA Tegra system on a chip (SoC). DigitalFoundry, specifically, cites multiple sources who claim that their Nintendo NX development kits integrate the Tegra X1 design, as seen in the Google Pixel C. That said, the Nintendo NX release date, March 2017, does provide enough time for them to switch to NVIDIA's upcoming Pascal Tegra design, rumored to be called the Tegra X2, which uses NVIDIA's custom-designed Denver CPU cores.
Preamble aside, here's what I think about the whole situation.
First, the Tegra X1 would be quite a small jump in performance over the WiiU. The WiiU's GPU, “Latte”, has 320 shaders clocked at 550 MHz, and it was based on AMD's TeraScale 1 architecture. Because these stream processors have single-cycle multiply-add for floating point values, you can get its FLOP rating by multiplying 320 shaders, 550,000,000 cycles per second, and 2 operations per clock (one multiply and one add). This yields 352 GFLOPs. The Tegra X1 is rated at 512 GFLOPs, which is just 45% more than the previous generation.
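Spelled out, that back-of-envelope math looks like this. (The Tegra X1 figures of 256 CUDA cores at roughly 1000 MHz are my assumption for reproducing NVIDIA's 512 GFLOPS FP32 rating; they are not from the rumor itself.)

```python
def gflops(shaders, clock_mhz, flops_per_clock=2):
    # 2 FLOPs per shader per clock: one multiply plus one add (single-cycle FMA)
    return shaders * clock_mhz * 1e6 * flops_per_clock / 1e9

wiiu_latte = gflops(320, 550)    # Wii U "Latte": 352.0 GFLOPS
tegra_x1   = gflops(256, 1000)   # Tegra X1 FP32: 512.0 GFLOPS

print(tegra_x1 / wiiu_latte - 1)  # ~0.45, i.e. about 45% faster
```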
This is a very tiny jump, unless they indeed use Pascal-based graphics. If this is the case, you will likely see a launch selection of games ported from WiiU and a few games that use whatever new feature Nintendo has. One rumor is that the console will be kind-of like the WiiU controller, with detachable controllers. If this is true, it's a bit unclear how this will affect games in a revolutionary way, but we might be missing a key bit of info that ties it all together.
As for the choice of ARM over x86... well. First, this obviously allows Nintendo to choose from a wider selection of manufacturers than just AMD, Intel, and VIA, and certainly more than IBM with its previous Power-based chips. It also jibes with Nintendo's interest in the mobile market. They joined The Khronos Group, and I'm pretty sure they have said they are interested in Vulkan, which is becoming the high-end graphics API for Android, supported by Google and others. That said, I'm not sure how many engineers specialize in ARM optimization, as most mobile platforms try to abstract that away as much as possible, but this could be Nintendo's attempt to settle on a standardized instruction set, opting for mobile over PC (versus Sony and especially Microsoft, who want consoles to follow high-end gaming on the desktop).
Why? Well that would just be speculating on speculation about speculation. I'll stop here.
Subject: Graphics Cards | July 27, 2016 - 03:43 AM | Tim Verry
Tagged: Twin Frozr VI, Radeon RX 480, polaris 10, msi
MSI is jumping full force into custom RX 480s with its upcoming line of Radeon RX 480 Gaming series cards, including factory overclocked Gaming X and (slightly lower end) Gaming cards in both 8GB and 4GB SKUs. All four of the new graphics cards use a custom 8-phase power design, a custom PCB with Military Class 4 components, and perhaps most importantly a beefy Twin Frozr VI cooler. The overclockable cards will be available by the middle of next month.
Specifically, MSI will be launching the RX 480 GAMING X 8G and RX 480 GAMING X 4G with 8GB and 4GB of GDDR5 memory respectively. These cards will have solid metal backplates and the highest factory overclocks. Below these cards sit the RX 480 GAMING 8G and RX 480 GAMING 4G with the same Twin Frozr VI cooler but sans backplate and with lower out-of-the-box clockspeeds. Aside from those aspects, the cards all appear to offer identical features.
The new Gaming series graphics cards feature 8-pin PCI-E power connectors and an 8-phase power design on a custom PCB that should allow users to push Polaris 10 quite a bit without overheating the VRMs. The Twin Frozr VI cooler uses a nickel plated copper base plate, three 8mm copper heatpipes, a large aluminum fin array, and two large fans that spin down while the GPU temperature is under 60°C. The heatsink makes the card larger than reference, both wider and longer at 276mm, but MSI claims the size is made up for by 22% better cooling performance. Further, RGB LEDs backlight the MSI logo on the side of the card. The metal backplate on the X variants should help dissipate slightly more heat than on the non-X models.
All four Polaris-based graphics cards offer a single DL-DVI, two HDMI, and two DisplayPort video outputs. The inclusion of two HDMI ports rather than three DP ports is allegedly to more easily support VR users by allowing them to have an HDMI-connected monitor and headset connected at the same time without using adapters.
| | RX 480 Gaming X 8G | RX 480 Gaming X 4G | RX 480 Gaming 8G | RX 480 Gaming 4G | RX 480 Reference |
| --- | --- | --- | --- | --- | --- |
| GPU Clock (OC Mode) | 1316 MHz | 1316 MHz | 1292 MHz | 1292 MHz | 1266 MHz |
| GPU Clock (Gaming Mode) | 1303 MHz | 1303 MHz | 1279 MHz | 1279 MHz | 1266 MHz |
| GPU Clock (Silent Mode) | 1266 MHz | 1266 MHz | 1266 MHz | 1266 MHz | 1266 MHz |
| Memory | 8GB GDDR5 | 4GB GDDR5 | 8GB GDDR5 | 4GB GDDR5 | 8GB or 4GB GDDR5 |
| Memory Clock | 8100 MHz | 8100 MHz | 8000 MHz (?) | 8000 MHz (?) | 8000 MHz |
| MSRP | ? | ? | ? | ? | $249 for 8GB, $199 for 4GB |
The GAMING and GAMING X RX 480s offer two tiers of factory overclocks that users can select using MSI's software utility. The non-X GAMING cards will clock up to 1279 MHz in Gaming Mode and 1292 MHz in OC Mode. In Silent Mode the card will run at the same 1266 MHz boost speed as AMD's reference design card. Meanwhile, the RX 480 GAMING X cards will boost up to 1303 MHz in Gaming Mode and 1316 MHz in OC Mode. In addition, MSI is bumping up the memory clockspeeds to 8100 MHz in OC Mode, which is a nice surprise! MSI's announcement is not exactly clear, but it appears that the non-X versions do not have factory overclocked memory, which remains at the reference 8000 MHz.
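For perspective, the factory overclocks are fairly modest relative to the 1266 MHz reference boost. A quick calculation (my arithmetic, not MSI's marketing numbers):

```python
REFERENCE_BOOST_MHZ = 1266  # AMD reference RX 480 boost clock

# Factory boost clocks for MSI's selectable modes, from the table above
for card, boost_mhz in [("Gaming X, OC Mode", 1316),
                        ("Gaming X, Gaming Mode", 1303),
                        ("Gaming, OC Mode", 1292),
                        ("Gaming, Gaming Mode", 1279)]:
    gain_pct = (boost_mhz / REFERENCE_BOOST_MHZ - 1) * 100
    print(f"{card}: +{gain_pct:.1f}% over reference")
# The largest factory gain works out to just under 4%
```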
Pricing has not yet been announced, but the cards will reportedly be on sale worldwide by mid August.
I am looking forward to seeing how far reviewers and users are able to push Polaris 10 with the Twin Frozr cooler and 8-phase VRMs!
Subject: Graphics Cards | July 27, 2016 - 01:56 AM | Tim Verry
Tagged: solid state, radeon pro, Polaris, gpgpu, amd
UPDATE (July 27th, 1am ET): More information on the Radeon Pro SSG has surfaced since the original article. According to AnandTech, the prototype graphics card actually uses an AMD Fiji GPU. The Fiji GPU is paired with onboard PCI-E based storage using the same PEX8747 bridge chip used in the Radeon Pro Duo. Storage is handled by two PCI-E 3.0 x4 M.2 slots that can accommodate up to 1TB of NAND flash storage. As I mentioned below, having the storage on board the graphics card vastly reduces latency by reducing the number of hops and not having to send requests out to the rest of the system. AMD had more numbers to share following their demo, however.
From the 8K video editing demo, the dual Samsung 950 Pro PCI-E SSDs (in RAID 0) on board the Radeon Pro SSG hit 4GB/s while scrubbing through the video. The same video source stored on a Samsung 950 Pro attached to the motherboard managed throughput of only 900MB/s. In theory, reaching out to system RAM still has raw throughput advantages (DDR4 at 3200 MHz on a Haswell-E platform is theoretically capable of 62 GB/s reads and 47 GB/s writes), though that would be bottlenecked by the graphics card having to go over the PCI-E 3.0 x16 link and its maximum of 15.754 GB/s. Of course, if you can hold the data in (much smaller) GDDR5 (300+ GB/s depending on clocks and memory bus width) or HBM (up to 1TB/s) and not have to go out to any other storage tier, that is ideal, but it is not always feasible, especially in the HPC world.
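For context on those tiers, the PCI-E ceiling quoted above comes straight out of the link math, and the demo numbers slot in well below it. The PCI-E figures here are standard spec values; the SSD and memory numbers are the ones reported above:

```python
def pcie3_gbs(lanes):
    # PCI-E 3.0: 8 GT/s per lane with 128b/130b encoding, 8 bits per byte
    return lanes * 8 * (128 / 130) / 8

tiers_gbs = [
    ("HBM (up to)",                            1000.0),
    ("GDDR5 (typical, 300+)",                   300.0),
    ("PCI-E 3.0 x16 link to system RAM", pcie3_gbs(16)),  # ~15.75 GB/s ceiling
    ("Radeon Pro SSG onboard SSDs (RAID 0)",      4.0),
    ("Motherboard-attached 950 Pro (same demo)",  0.9),
]
for name, bandwidth in tiers_gbs:
    print(f"{name}: {bandwidth:.2f} GB/s")
```

The gap between the bottom two rows is the whole argument for the SSG: the onboard SSDs are not fast compared to graphics memory, but they are over four times faster than reaching the same drives through the rest of the system.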
However, having storage on the same board as the GPU, only a single "hop" away, vastly reduces latency and offers much more total storage space than most systems have in DDR3 or DDR4. In essence, the solid state storage on the graphics card (which developers will need to specifically code for) acts as a massive cache for streaming in assets for datasets and workloads that are highly impacted by latency. This storage is not the fastest, but it is the next best thing for holding active data outside of GDDR5/X or HBM. For throughput-intensive workloads, reaching out to system RAM will be better. Finally, reaching out to system-attached storage should be the last resort, as it will be the slowest and most latent. Several commenters mentioned using a PCI-E based SSD in a second slot on the motherboard, accessed much like GPUs in CrossFire communicate now (DMA over the PCI-E bus), which is an interesting idea that I had not considered.
Per my understanding of the situation, the onboard SSG storage would still be slightly more beneficial than that setup, but it would get you close (I am assuming the GPU can directly request data from the SSD controller without relying on the system CPU to do that work, but I may well be mistaken; I will have to look into this further and ask the experts). On the prototype Radeon Pro SSG, the M.2 slots can actually be seen as drives by the system and OS, so it is essentially acting as if a PCI-E adapter card in a motherboard slot held those drives, though that may not be the case should this product actually hit the market. I do question the choice of Fiji rather than Polaris, but it sounds like the prototype was built off the Radeon Pro Duo platform, so it makes sense there.
Hopefully the final versions in 2017 or beyond use at least Vega though :).
Alongside the launch of new Radeon Pro WX (workstation) series graphics cards, AMD teased an interesting new Radeon Pro product: the Radeon Pro SSG. This new professional graphics card pairs a Fiji GPU with up to a terabyte of on board solid state storage and seeks to solve one of the biggest hurdles in GPGPU performance when dealing with extremely large datasets: latency.
One of the core focuses of AMD's HSA (heterogeneous system architecture) is unified memory and the ability of various processors (CPU, GPU, specialized co-processors, et al) to work together efficiently by being able to access and manipulate data from the same memory pool without having to copy data back and forth between CPU-accessible memory and GPU-accessible memory. With the Radeon Pro SSG, this idea is not fully realized (it is more of a sidestep), but it should still move performance forward. It does not eliminate the need to copy data to the GPU before it can be worked on, but once copied, the GPU will be able to work on data stored in what AMD describes as a one terabyte frame buffer. This memory will be solid state and very fast, but more importantly the GPU will be able to get at the data with much lower latency than previous methods. AMD claims the solid state storage (likely NAND, though they have not said) will link with the GPU over a dedicated PCI-E bus. I suppose that if you can't bring the GPU to the data, you bring the data to the GPU!
Considering AMD's previous memory champ – the FirePro W9100 – maxed out at 32GB of GDDR5, the teased Radeon Pro SSG with its 1TB of purportedly low latency onboard flash storage opens up a slew of new possibilities for researchers and professionals in media, medical, and scientific roles working with massive datasets for imaging, content creation, and simulations. I expect there are many professionals out there eager to get their hands on one of these cards, and they will be able to thanks to a beta program launching shortly – so long as they have $10,000 for the hardware!
AMD gave a couple of examples in its PR of the potential benefits of "solid state graphics," including imaging a patient's beating heart in real time to allow medical professionals to examine and spot issues as early as possible, and using the Radeon Pro SSG to edit and scrub through 8K video in real time at 90 FPS versus 17 FPS with current offerings. On the scientific side of things, being able to load entire models into the new graphics memory (not as low latency as GDDR5 or HBM, certainly) will be a boon, as will being able to get data sets as close to the GPU as possible in servers running GPU accelerated databases that power websites accessed by millions of users.
It is not exactly the HSA future I have been waiting for ever so impatiently, but it is a nice advancement and an intriguing idea, and I am very curious to see how well it pans out and whether developers and researchers will truly take advantage of it to further their projects. I suspect something like this could be great for deep learning tasks as well (such as powering the "clouds" behind self driving cars, perhaps).
Stay tuned to PC Perspective for more information as it develops.
This is definitely a product that I will be watching and I hope that it does well. I am curious what Nvidia's and Intel's plans are here as well! What are your thoughts on AMD's "Solid State Graphics" card? All hype or something promising?
Subject: Storage | July 26, 2016 - 02:34 PM | Allyn Malventano
Tagged: MX300, micron, M.2, crucial, 525GB, 275GB, 1TB
We reviewed the Crucial MX300 750GB SSD a few months back. It was a good drive that tested well, and thanks to its IMFT 3D NAND, it came in at a very competitive price point. Today Crucial has rearranged that lineup a bit:
The following capacities are being added to the MX300 lineup:
- 1TB $260 ($0.26/GB)
- 525GB $130 ($0.25/GB)
- 275GB $70 ($0.25/GB)
- 275GB (M.2 2280)
The new capacities will be what is sold moving forward (starting 'late August'), with the 750GB model shifting to 'Limited Edition' status. That $0.25/GB carrying all the way down to the lowest capacity is significant, as we typically see cost/GB rise at smaller capacities, where fixed controller/PCB/packaging costs have more impact. Without that coming into play, we get a nearly 300GB SSD coming in at $70!
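The cost-per-GB figures above check out; a quick sketch of the arithmetic (my own, using the MSRPs in the list):

```python
# Crucial MX300 lineup: capacity in GB -> MSRP in USD
lineup = {1000: 260, 525: 130, 275: 70}

for gb, usd in lineup.items():
    # Round to the nearest cent per GB, as quoted in the list above.
    print(f"{gb}GB: ${usd} -> ${usd / gb:.2f}/GB")
```

Notably, the 275GB model lands at the same rounded $0.25/GB as the 525GB model, which is the flat pricing curve the paragraph above is calling out.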
Specs and expected performance remain the same across all capacities, save a dip in random read performance on the 275GB models, mainly due to the reduced die count / parallelism. We'll take a look at these new capacities just as soon as samples arrive.
Subject: Graphics Cards | July 26, 2016 - 12:36 AM | Tim Verry
Tagged: windforce, pascal, gigabyte, GeForce GTX 1060
In a recent press release, Gigabyte announced that it will soon be adding four new GTX 1060 graphics cards to its lineup. The new cards feature Windforce series coolers and custom PCBs. At the high end is the GTX 1060 G1 Gaming followed by the GTX 1060 Windforce OC, small form factor friendly GTX 1060 Mini ITX OC, and the budget minded GTX 1060 D5. While the company has yet to divulge pricing or availability, the cards should be out within the next month or two.
All of the upcoming cards use a custom PCB and power phase setup paired with Gigabyte's Windforce air cooler – dual fan, or single fan in the case of the Mini ITX card. Unfortunately, exact specifications, including core and memory clocks, are unknown for all of the cards except the high end model. The coolers use dual composite heatpipes that directly touch the GPU to pull heat away, which is then dissipated by an aluminum fin stack. The fans are 90mm on all of the cards, with the dual fan models spinning each fan in the opposite direction of the other. The cards feature 6GB of GDDR5 memory as well as DVI, HDMI, and DisplayPort video outputs. For example, the Mini ITX OC graphics card (which is only 17cm long) features two DVI, one HDMI, and one DP output.
More information is available on the GTX 1060 G1 Gaming. This card is a dual slot, dual fan design with a 6+1 power phase (reference is 3+1) powered by a single 8-pin power connector. The fans are shrouded and there is a metal backplate to aid in stability and cooling. Gigabyte claims that its "GPU Gauntlet" sorting technology ensures users get heavily overclockable chips by binning and using only the most promising silicon.
The 16nm Pascal GPU is factory overclocked to 1847 MHz boost and 1620 MHz base clockspeeds in OC mode and 1809 MHz boost and 1594 MHz base in gaming mode. Users will be able to use the company's Xtreme Engine software to dial up the overclocks further as well as mess with the RGB LEDs. For comparison, the reference clockspeeds are 1708 MHz boost and 1506 MHz base. Gigabyte has left the 6GB of GDDR5 memory untouched at 8008 MHz.
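For a sense of scale, here is a quick calculation (my own, from the clocks listed above) of how far the G1 Gaming's factory clocks sit above reference:

```python
# GTX 1060 clockspeeds in MHz, as quoted above.
reference = {"base": 1506, "boost": 1708}
g1_modes = {
    "OC mode":     {"base": 1620, "boost": 1847},
    "Gaming mode": {"base": 1594, "boost": 1809},
}

for mode, clocks in g1_modes.items():
    for kind in ("base", "boost"):
        pct = (clocks[kind] / reference[kind] - 1) * 100
        print(f"{mode} {kind}: +{pct:.1f}% over reference")
```

That works out to roughly an 8% bump in OC mode and about 6% in gaming mode before any manual tuning.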
The other cards should have similarly decent factory overclocks, but it is hard to say exactly what they will be out of the box. While I am not a big fan of the aesthetics, the Windforce coolers should let users push Pascal fairly far (for air cooling).
I would guess that the Gigabyte GTX 1060 G1 Gaming will MSRP for just above $300 while the lower end cards will be around $260 (the Mini ITX OC may be at a slight premium above that).
What do you think about Gigabyte's new cards?
Subject: General Tech, Graphics Cards | July 25, 2016 - 09:48 PM | Sebastian Peak
Tagged: siggraph 2016, Siggraph, capsaicin, amd, 3D rendering
At its Capsaicin Siggraph event tonight, AMD announced that the rendering engine previously previewed as FireRender is officially launching as AMD Radeon ProRender, and that it is becoming open source as part of AMD's GPUOpen initiative.
From AMD's press release:
AMD today announced its powerful physically-based rendering engine is becoming open source, giving developers access to the source code.
As part of GPUOpen, Radeon ProRender (formerly previewed as AMD FireRender) enables creators to bring ideas to life through high-performance applications and workflows enhanced by photorealistic rendering.
GPUOpen is an AMD initiative designed to assist developers in creating ground-breaking games, professional graphics applications and GPU computing applications with much greater performance and lifelike experiences, at no cost and using open development tools and software.
Unlike other renderers, Radeon ProRender can simultaneously use and balance the compute capabilities of multiple GPUs and CPUs – on the same system, at the same time – and deliver state-of-the-art GPU acceleration to produce rapid, accurate results.
Radeon ProRender plugins are available today for many popular 3D content creation applications, including Autodesk® 3ds Max®, SOLIDWORKS by Dassault Systèmes and Rhino®, with Autodesk® Maya® coming soon. Radeon ProRender works across Windows®, OS X and Linux®, and supports AMD GPUs, CPUs and APUs as well as those of other vendors.
Subject: Graphics Cards | July 25, 2016 - 09:30 PM | Sebastian Peak
Tagged: siggraph 2016, Siggraph, Radeon Pro WX Series, Radeon Pro WX 7100, Radeon Pro WX 5100, Radeon Pro WX 4100, radeon, capsaicin, amd
AMD has announced new Polaris-based professional graphics cards at Siggraph 2016 this evening, with the Radeon Pro WX 4100, WX 5100, and WX 7100 GPUs.
The AMD Radeon Pro WX 7100 GPU (Image credit: AMD)
From AMD's official press release:
AMD today unveils powerful new solutions to address modern content creation and engineering: the new Radeon Pro WX Series of professional graphics cards, which harness the award-winning Polaris architecture and is designed to deliver exceptional capabilities for the immersive computing era.
Radeon Pro solutions and the new Radeon Pro WX Series of professional graphics cards represent a fundamentally different approach for professionals rooted in a commitment to open, non-proprietary software and performant, feature-rich hardware that empowers people to create the “art of the impossible”.
The new Radeon Pro WX series graphics cards deliver on the promise of this new era of creation, are optimized for open source software, and are designed for creative professionals and those pushing the boundaries of science, technology and engineering.
The AMD Radeon Pro WX 5100 GPU (Image credit: AMD)
Radeon Pro WX Series professional graphics cards are designed to address specific demands of the modern content creation era:
- Radeon Pro WX 7100 GPU is capable of handling demanding design engineering and media and entertainment workflows and is AMD’s most affordable workstation solution for professional VR content creation.
- Radeon Pro WX 5100 GPU is the ideal solution for product development, powered by the impending game-engine revolution in design visualization.
- Radeon Pro WX 4100 GPU provides great performance in a half-height design, finally bringing mid-range application performance demanded by CAD professionals to small form factor (SFF) workstations
The AMD Radeon Pro WX 4100 GPU (Image credit: AMD)
A breakdown of the known specifications for these new GPUs was provided by AnandTech in their report on the WX Series:
Subject: Graphics Cards | July 25, 2016 - 08:49 PM | Tim Verry
Tagged: sapphire, Radeon RX 480, polaris 10, nitro+, nitro
UPDATE (July 27th, 1am ET): The 8GB overclocked Sapphire Nitro+ will MSRP for $269 while the 4GB version will be $219. For more information on Sapphire's new Polaris 10 graphics card check out our archived livestream with Sapphire's Ed Crisler!
More details on custom graphics cards based around AMD's RX 480 reference GPU are starting to trickle out now that the official shipping dates are approaching (it appears many of the cards will be available next month). Sapphire is the latest AIB to provide all the juicy details on its custom Nitro+ Radeon RX 480 card!
The Nitro+ RX 480 is a dual slot card with a Dual X cooler that features two 95mm quick connect fans, vented aluminum backplate, black shroud, and aluminum heatsink. The graphics card is powered by a single 8-pin PCI-E power connector which should be enough to allow overclocking headroom and alleviate any worries over pulling too much amperage over the PEG slot on the motherboard.
Sapphire is using high end capacitors and Black Diamond 4 chokes. The twin fan cooler supports "quick connect," which lets users easily pull the fans out for cleaning or replacement (a neat feature considering how dusty my PC can get (it doesn't help that my corgi loves to lay against my tower heh)). RGB LEDs illuminate the Sapphire logo and fans.
Of course, all of the LEDs can be controlled by software or a button on the back of the card to change colors in response to temperature or fan speed, cycle through all colors, or be turned off completely.
The company also uses an aluminum backplate with a nice design to it (nice to see the only part of the card most people will see getting some attention for once heh) as well as vents that allow hot air to escape. Air is pulled into the card by the two fans and pushed out the back of the card and up through the backplate. I am interested to see how much this design actually improves cooling.
Rear IO includes a single DL-DVI output along with two DisplayPort 1.4 and two HDMI 2.0b video outputs. This configuration results in a smaller air vent on the bracket, but it also lets you hook up both an HDMI monitor and a VR headset. While there are five connectors, only four may be used at the same time.
While Sapphire did not touch the memory, it did factory overclock the Polaris 10 GPU to up to 1,342 MHz boost. Compared to the reference boost clockspeed of 1,266 MHz, this is a decent jump, especially for an out of the box factory overclock. Users should be able to push the GPU further, though exactly how far remains to be seen and will depend on the cooler and the quality of their specific chip.
Sapphire's Nitro+ RX 480 will reportedly be available as soon as next week in both 4GB and 8GB models. The 4GB will run $219 while the 8GB card will cost $269. If those numbers hold true, that is only a $20 to $30 premium over the reference designs, which certainly seems like a great value all things considered! I am looking forward to reviews of this slick looking card, and I hope that the performance and build quality are up to snuff!
Subject: Graphics Cards | July 25, 2016 - 06:51 PM | Jeremy Hellstrom
Tagged: msi, gtx 1070, Gaming Z, Twin Frozr VI, factory overclocked
The Tech Report had a chance to see what the MSI Twin Frozr VI cooler can do for a GTX 1070; they have just wrapped up a review of the Gaming Z edition of that NVIDIA card. It comes with a respectable frequency bump when you enable OC mode: 1657 MHz base and 1860 MHz boost. When they tested it under load, the GPU stayed below 70C, so there should be room to push the card further. Check out the full benchmark suite in their full review.
"Nvidia's second Pascal graphics card, the GeForce GTX 1070, aims to set a new bar for graphics performance in the $379-and-up price range. We put MSI's GeForce GTX 1070 Gaming Z card through the wringer to see how a more affordable Pascal card performs."
Here are some more Graphics Card articles from around the web:
- Gigabyte GeForce GTX 1070 Xtreme Gaming @ Modders-Inc
- MSI GTX 1080 Gaming X 8G RGB SLI @ Kitguru
- NVIDIA GeForce GTX 1080 Founders Edition 8GB Graphics Card Review @ NikKTech
- MSI GTX 1060 Gaming X @ eTeknix
- MSI GTX 1060 Gaming X 6G Review @ OCC
- ASUS RX 480 STRIX OC 8 GB @ techPowerUp
Subject: Cases and Cooling | July 25, 2016 - 04:53 PM | Jeremy Hellstrom
Tagged: modular psu, Seasonic PRIME, 750w
It has been about a year since Seasonic released a brand new PSU, as they do not tend to flood the market with incremental upgrades to their PSU families. While this may hurt their business a little, since newer users do not see reviews or advertisements frequently, long term enthusiasts take note when a new PSU arrives. This fully modular PSU offers a single 12V rail capable of delivering 744W @ 62A and includes six 6+2 PCIe power cables; it even still has a floppy connector for those desperate times when you need to pull one out. [H]ard|OCP strapped the PSU to their torture bench, and this Seasonic unit came out with a Gold medal. Check out the full review here.
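For the curious, the single-rail rating quoted above is internally consistent; a quick sanity check (my own arithmetic, not Seasonic's spec sheet):

```python
# A 62A rail at 12V works out to exactly the advertised 744W.
rail_voltage = 12   # volts
rail_current = 62   # amps
print(f"12V rail capacity: {rail_voltage * rail_current}W")
```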
"Seasonic has never been big on marketing-speak. Outside of its impressive specifications, and a list of features, this is all it has to say. "The creation of the PRIME Series is a renewed testimony of Seasonic's determination to push the limits of power supply design in every aspect." Let's see if that is true, or the shortest sales pitch ever."
Here are some more Cases & Cooling reviews from around the web:
- Seasonic Prime 750W Titanium @ Kitguru
- Enermax Revolution X't II 750W @ [H]ard|OCP
- Silverstone Strider Platinum 750W ST75F-PT @ Modders Inc
- Thermaltake Smart DPS G 700W @ NikKTech
- APC Power Saving Back-UPS Pro 1500VA (BR1500G) @ Custom PC Review
Subject: Graphics Cards | July 25, 2016 - 04:48 PM | Scott Michaud
Tagged: siggraph 2016, Siggraph, quadro, nvidia
SIGGRAPH is the big, professional graphics event of the year, bringing together tens of thousands of attendees. They include engineers from Adobe, AMD, Blender, Disney (including ILM, Pixar, etc.), NVIDIA, The Khronos Group, and many, many others. Not only are new products announced, but many technologies are explained in detail, down to the specific algorithms that are used, so colleagues can advance their own research and share in kind.
But new products will indeed be announced.
The NVIDIA Quadro P6000
NVIDIA, having just launched a few Pascal GPUs to other markets, decided to announce updates to their Quadro line at the event. Two cards have been added, the Quadro P5000 and the Quadro P6000, both at the top end of the product stack. Interestingly, both use GDDR5X memory, meaning that neither will be based on the GP100 design, which is built around HBM2 memory.
The NVIDIA Quadro P5000
The lower end of the two, the Quadro P5000, should look somewhat familiar to our readers. Exact clocks are not specified, but the chip has 2560 CUDA cores. This is identical to the GTX 1080, but with twice the memory: 16GB of GDDR5X.
Above it sits the Quadro P6000. This chip has 3840 CUDA cores, paired with 24GB of GDDR5X. We have not seen a GPU with exactly these specifications before. It has the same number of FP32 shaders as a fully unlocked GP100 die, but it doesn't have HBM2 memory. On the other hand, the new Titan X uses GP102, combining 3584 CUDA cores with GDDR5X memory, although only 12GB of it. This means that the Quadro P6000 has 256 more (single-precision) shader units than the Titan X, but otherwise very similar specifications.
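A quick tabulation of the (single-precision) shader counts discussed above (my own sketch, not NVIDIA's; clocks are unannounced, so only core counts can be compared):

```python
# FP32 CUDA core counts mentioned in the article.
fp32_shaders = {
    "Quadro P6000": 3840,      # same count as a fully unlocked GP100 die
    "Titan X (GP102)": 3584,
    "Quadro P5000": 2560,      # matches the GTX 1080
}

delta = fp32_shaders["Quadro P6000"] - fp32_shaders["Titan X (GP102)"]
print(f"P6000 vs Titan X: +{delta} FP32 shaders")
```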
Both graphics cards have four DisplayPort 1.4 connectors as well as a single DVI output. These five connectors can be used to drive up to four 4K 120Hz monitors, or four 5K 60Hz ones. It would be nice if all five connections could be used at once, but what can you do.
Pascal has other benefits for professional users, too. For instance, Simultaneous Multi-Projection (SMP) is used in VR applications to essentially double the GPU's geometry processing ability. NVIDIA will be pushing professional VR at SIGGRAPH this year, also launching Iray VR. This uses light fields, rendered on devices like the DGX-1, with its eight GP100 chips connected by NVLink, to provide accurately lit environments. This is particularly useful for architectural visualization.
No price is given for either of these cards, but they will launch in October of this year.