Subject: General Tech, Systems | January 30, 2015 - 03:03 AM | Tim Verry
Tagged: visionx, SFF, radeon, m270x, haswell, asrock, amd
ASRock has unleashed an update to its small form factor VisionX series. The new VisionX 471D adds a faster Haswell processor and dedicated Radeon mobile graphics to the mini PC.
The 7.9” x 7.9” x 2.8” PC chassis comes in black or silver with rounded corners. External I/O is quite expansive, with a DVD optical drive, two audio jacks, one USB 3.0 port, one MHL-compatible port (carrying both data and video), and an SD card reader on the front. Further, the back of the PC holds the following ports:
- 5 x Analog audio jacks
- 1 x Optical audio out
- 1 x DVI
- 1 x HDMI
- 1 x Gigabit Ethernet jack
- 802.11ac (2 antennas)
- 5 x USB 3.0
- 1 x USB 2.0
- 1 x eSATA
ASRock has gone with the Intel Core i7-4712MQ processor. This is a 37W Haswell quad core (with eight threads) clocked at up to 3.3GHz. Graphics are handled by the AMD Radeon R9 M270X, which is a mobile “Venus” GCN-based GPU with 1GB of memory. The 28nm GPU with 640 cores, 40 TMUs, and 16 ROPs is clocked at 725 MHz base and up to 775 MHz boost. The PC further supports two SO-DIMMs, two 2.5” drives, one mSATA connector, and the above-mentioned DVD drive (DL-8A4SH-01 comes pre-installed).
The VisionX 471D is a “barebones” system, meaning you will have to provide your own OS, but it does come with bundled storage and memory. Specifically, for $999, the SFF computer comes with 8GB of DDR3 memory, a 2TB mechanical hard drive, and a 256GB mSATA SSD (the ASint SSDMSK256G-M1 using a JMF667 controller and 64GB 20nm IMFT NAND). This leaves room for one additional 2.5” drive for expansion. Although it comes without an operating system, it does ship with a Windows Media Center compatible remote.
This latest addition to the VisionX series succeeds the 420D and features a faster processor. At the time of this writing, the PC is not available for purchase, but it is in the hands of reviewers (such as this review from AnandTech) and will be coming soon to retailers for $999 USD.
The price is on the steep side, especially compared to some other recent tiny PCs, but you are getting a top-end mobile Haswell chip and good I/O for a small system, with enough hardware to possibly be "enough" PC for many people (or at least a second PC or an HTPC in the living room).
Subject: Graphics Cards | January 13, 2015 - 12:22 PM | Ryan Shrout
Tagged: rumor, radeon, r9 380x, 380x
Spotted over at TechReport.com this morning and sourced from a post at 3dcenter.org, it appears that some additional information about the future Radeon R9 380X is starting to leak out through AMD employee LinkedIn pages.
Ilana Shternshain is an ASIC physical design engineer at AMD with more than 18 years of experience, 7-8 years of that with AMD. Under the background section is the line "Backend engineer and team leader at Intel and AMD, responsible for taping out state of the art products like Intel Pentium Processor with MMX technology and AMD R9 290X and 380X GPUs." A bit further down is an experience listing of the Playstation 4 APU as well as "AMD R9 380X GPUs (largest in “King of the hill” line of products)."
Interesting - though not entirely enlightening. More interesting were the details found on Linglan Zhang's LinkedIn page (since removed):
Developed the world’s first 300W 2.5D discrete GPU SOC using stacked die High Bandwidth Memory and silicon interposer.
Now we have something to work with! A 300 watt TDP would make the R9 380X more power hungry than the current R9 290X Hawaii GPU. High bandwidth memory likely implies memory located on the substrate of the GPU itself, similar to what exists on the Xbox One APU, though configurations could differ in considerable ways. A bit of research on the silicon interposer reveals it as an implementation method for 2.5D chips:
There are two classes of true 3D chips which are being developed today. The first is known as 2½D where a so-called silicon interposer is created. The interposer does not contain any active transistors, only interconnect (and perhaps decoupling capacitors), thus avoiding the issue of threshold shift mentioned above. The chips are attached to the interposer by flipping them so that the active chips do not require any TSVs to be created. True 3D chips have TSVs going through active chips and, in the future, have potential to be stacked several die high (first for low-power memories where the heat and power distribution issues are less critical).
An interposer would allow the GPU and stacked die memory to be built on different process technology, for example, but could also make the chips more fragile during final assembly. Obviously there are a lot more questions than answers based on these rumors sourced from LinkedIn, but it's interesting to attempt to gauge where AMD is headed in its continued quest to take back market share from NVIDIA.
Subject: Graphics Cards, Displays, Shows and Expos | January 7, 2015 - 03:11 AM | Ryan Shrout
Tagged: video, radeon, monitor, g-sync, freesync, ces 2015, CES, amd
It finally happened, later than I had expected: we got hands-on time with nearly-ready FreeSync monitors! That's right, AMD's alternative to G-Sync will bring variable refresh gaming technology to Radeon gamers later this quarter, and AMD had the monitors on hand to prove it. On display were an LG 34UM67 running at 2560x1080 on IPS technology, a Samsung UE590 with a 4K resolution and AHVA panel, and a BenQ XL2730Z with a 2560x1440 TN screen.
The three monitors sampled at the AMD booth showcase the wide array of units that will be available this year using FreeSync, possibly even this quarter. The LG 34UM67 uses the 21:9 aspect ratio that is growing in popularity, along with solid IPS panel technology and a 60 Hz top frequency. However, there is a new specification to be concerned with on FreeSync as well: minimum frequency. This is the refresh rate the monitor needs to maintain to avoid artifacting and flickering that would be visible to the end user. For the LG monitor, it is 40 Hz.
What happens below that limit and above it differs from what NVIDIA has decided to do. For FreeSync (and the Adaptive Sync standard as a whole), when a game renders at a frame rate above or below this VRR window, the V-Sync setting is enforced. That means on a 60 Hz panel, if your game runs at 70 FPS, you have the option to enable or disable V-Sync: you can either force a 60 FPS top limit or allow 70 FPS with screen tearing. If your game runs under the 40 Hz bottom limit, say at 30 FPS, you get the same option: V-Sync on or V-Sync off. With it off, you get tearing but optimal input/display latency; with it on, you reintroduce frame judder as the frame rate crosses between V-Sync steps.
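The behavior described above can be sketched in a few lines. This is a hypothetical illustration of the decision logic, not AMD's actual driver code; the function name, defaults, and return values are my own.

```python
def effective_behavior(fps, vrr_min=40, vrr_max=60, vsync=True):
    """Return (mode, displayed_fps) for a render rate and a VRR window.

    Inside the window the panel refreshes in step with the GPU; outside
    it, the user's V-Sync setting is enforced, as described above.
    """
    if vrr_min <= fps <= vrr_max:
        # Variable refresh: panel follows the GPU exactly.
        return ("variable refresh", fps)
    if vsync:
        # V-Sync on: frames are capped/stepped to the panel's rate, no tearing.
        return ("v-sync", min(fps, vrr_max))
    # V-Sync off: frames are shown as they arrive, with tearing but low latency.
    return ("tearing", fps)
```

For the LG 34UM67 example (40-60 Hz window), a 70 FPS game with V-Sync disabled tears at 70 FPS, while a 30 FPS game with V-Sync enabled falls back to stepped V-Sync presentation.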
There are potential pitfalls to this solution though; what happens when you cross into that top or bottom region can cause issues depending on the specific implementation. We'll be researching this very soon.
Notice this screen shows FreeSync Enabled and V-Sync Disabled, and we see a tear.
FreeSync monitors have the benefit of using industry standard scalers, and that means they won't be limited to a single DisplayPort input. Expect to see a range of inputs including HDMI and DVI, though the VRR technology will only work on DP.
We have much more to learn and much more to experience with FreeSync but we are eager to get one in the office for testing. I know, I know, we say that quite often it seems.
Follow all of our coverage of the show at http://pcper.com/ces!
Subject: General Tech, Graphics Cards | December 28, 2014 - 09:47 PM | Scott Michaud
Tagged: radeon, nvidia, gtx, geforce, amd
According to an anonymous source of WCCFTech, AMD is preparing a 20nm-based graphics architecture that is expected to release in April or May. Originally, they predicted that the graphics devices, which they call R9 300 series, would be available in February or March. The reason for this “delay” is a massive demand for 20nm production.
The source also claims that NVIDIA will skip 20nm entirely and instead opt for 16nm when that becomes available (which is said to be mid or late 2016). The expectation is that NVIDIA will answer AMD's new graphics devices with a higher-end Maxwell device that is still at 28nm. Earlier rumors, based on a leaked SiSoftware entry, claim 3072 CUDA cores that are clocked between 1.1 GHz and 1.39 GHz. If true, this would give it between 6.75 and 8.54 TeraFLOPs of performance, the higher of which is right around the advertised performance of a GeForce Titan Z (only in a single compute device that does not require distribution of work like what SLI was created to automate).
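Those TeraFLOP figures follow from the standard peak single-precision formula: shader cores times two FLOPs per clock (one fused multiply-add) times clock speed. A quick sanity check of the rumored numbers:

```python
def sp_tflops(cuda_cores, clock_ghz):
    """Peak single-precision TFLOPs: cores x 2 FLOPs/clock (FMA) x clock (GHz)."""
    return cuda_cores * 2 * clock_ghz / 1000.0

low = sp_tflops(3072, 1.1)    # ~6.76 TFLOPs at the low clock
high = sp_tflops(3072, 1.39)  # ~8.54 TFLOPs at the high clock
```

The high end of that range lines up with the figure quoted for the Titan Z comparison above.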
Will this strategy work in NVIDIA's favor? I don't know. 28nm is a fairly stable process at this point, which will probably allow them to get chips that can be bigger and more aggressively clocked. On the other hand, they pretty much need to rely upon chips that are bigger and more aggressively clocked to be competitive with AMD's slightly more advanced design. Previous rumors also hint that AMD is looking at water-cooling for their reference card, which might place yet another handicap against NVIDIA, although cooling is not an area that NVIDIA struggles in.
Subject: General Tech | November 12, 2014 - 05:10 PM | Jeremy Hellstrom
Tagged: linux, amd, radeon, CS:GO, tf2
With the new driver from AMD and a long list of cards to test, from an R9 290 all the way back to an HD 4650, Phoronix has put together a rather definitive list of the current performance you can expect from CS:GO and TF2. CS:GO was tested at 2560x1600 and showed many performance changes from the previous driver, including some great news for 290 owners. TF2 was tested at the same resolution, and many of the GPUs were capable of providing 60 FPS or higher, again with the 290 taking the lead. Phoronix also tested the efficiency of these cards, detailing the number of frames per second per watt used; this may not be pertinent to many users, but it does offer an interesting look at the efficiency of the GPUs. If you are gaming on a Radeon on Linux, now is a good time to upgrade your drivers and associated programs.
"The latest massive set of Linux test data we have to share with Linux gamers and enthusiasts is a look at Counter-Strike: Global Offensive and Team Fortress 2 when using the very newest open-source Radeon graphics driver code. The very latest open-source Radeon driver code tested with these popular Valve Linux games were the Linux 3.18 Git kernel, Mesa 10.4-devel, LLVM 3.6 SVN, and xf86-video-ati 7.5.99."
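Phoronix's efficiency figure is simply average frame rate divided by average power draw. A trivial sketch, with made-up numbers rather than Phoronix's actual measurements:

```python
def fps_per_watt(avg_fps, avg_watts):
    """Frames rendered per second for each watt consumed while gaming."""
    return avg_fps / avg_watts

# Illustrative only: a card averaging 60 FPS while drawing 250 W.
efficiency = fps_per_watt(60.0, 250.0)  # 0.24 frames/sec per watt
```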
Here is some more Tech News from around the web:
- A Spaceship For Christmas – Elite: Dangerous Dated @ Rock, Paper, SHOTGUN
- Wot I Think – Call Of Duty: Advanced Warfare Singleplayer @ Rock, Paper, SHOTGUN
- Ryse: Son of Rome PC: It’s Boring but Here’s Why You Should Still Buy It @ eTeknix
- Free Beards And Horse Armour: The Witcher 3 DLC Plans @ Rock, Paper, SHOTGUN
- Borderlands: The Pre-Sequel Review @ OCC
- Assassin's Creed: Unity widely found to be slow and buggy @ HEXUS
- Avalanche confirms Just Cause 3 for PC and next-gen consoles @ HEXUS
- The Witcher 2 And Mount & Blade Free In GOG Sale @ Rock, Paper, SHOTGUN
- Skaven Time: Warhammer’s XCOMish Mordheim Out Soon @ Rock, Paper, SHOTGUN
- Microsoft to bring back beloved 1990s super-hit BATTLETOADS!? @ The Register
Subject: General Tech, Graphics Cards | November 6, 2014 - 12:00 AM | Scott Michaud
Tagged: radeon, r9 295x2, R9 290X, r9 290, R9, hawaii, civilization, beyond earth, amd
Why settle for space, when you can go Beyond Earth too (but only if you go to Hawaii)!
The Never Settle promotion launched itself into space a couple of months ago, but AMD isn't settling for that. If you purchase a Hawaii-based graphics card (R9 290, R9 290X, or R9 295X2), you will get a free copy of Civilization: Beyond Earth on top of the choice of three games (or game packs) from the Never Settle Space Gold Reward tier. Beyond Earth makes a lot of sense, of course: it is a new game and also one of the most comprehensive implementations of Mantle yet.
To be eligible, the purchase needs to be made starting November 6th (which is today). Check that what you're buying is a "qualifying purchase" from "participating retailers", because that is a lot of value to miss in a moment of carelessness.
AMD has not specified an end date for this promotion.
Subject: General Tech, Graphics Cards | November 5, 2014 - 12:56 PM | Andre DeCoste
Tagged: radeon, R9 290X, R9, amd, 8gb
With the current range of AMD’s R9 290X cards sitting at 4 GB of memory, listings for an 8 GB version have appeared at an online retailer. As far back as March, Sapphire was rumored to be building an 8 GB variant. Those rumors were supposedly quashed last month by AMD and Sapphire. However, AMD has since confirmed the existence of the new additions to the series. Pre-orders have appeared online and are said to be shipping out this month.
Image Credit: Overclockers UK
With 8 GB of GDDR5 memory and price tags between $480 and $520, these new additions, expectedly, do not come cheap. With 4 GB versions of the R9 290X running about $160 less at the same retailer, is the extra memory worth it at this stage? For people using a single 1080p monitor, the answer is likely no. For those with multi-screen setups, or those with deep enough pockets to own a 4K display, however, the benefits may begin to justify the premium. Even at 4K, a single 8 GB R9 290X may not provide the best experience; a CrossFire setup, less limited by the speed of a single GPU, would benefit more from the 8 GB bump.
Subject: Graphics Cards | October 24, 2014 - 03:44 PM | Ryan Shrout
Tagged: radeon, R9 290X, leaderboard, hwlb, hawaii, amd, 290x
When NVIDIA launched the GTX 980 and GTX 970 last month, it shocked the discrete graphics world. The GTX 970 in particular was an amazing performer and undercut the price of the Radeon R9 290 at the time. That is something that NVIDIA rarely does and we were excited to see some competition in the market.
AMD responded with some price cuts on both the R9 290X and the R9 290 shortly thereafter (though they refuse to call them that) and it seems that AMD and its partners are at it again.
Looking on Amazon.com today we found several R9 290X and R9 290 cards at extremely low prices. For example:
- XFX Radeon R9 290X Double D - $299 (after MIR)
- Gigabyte R9 290X WindForce - $360
- MSI R9 290X Gaming - $366
The R9 290X's primary competition in terms of raw performance is the GeForce GTX 980, currently selling for $549 and up, if you can find one in stock. That means NVIDIA has a $250 hill to climb when going against the lowest-priced R9 290X.
The R9 290 looks interesting as well:
Several other R9 290 cards are selling for upwards of $300-320, making them bone-headed buys if you can get an R9 290X for the same or lower price. But with the GeForce GTX 970 selling for at least $329 today (if you can find it), you can see why consumers are paying close attention.
Will NVIDIA make any adjustments of its own? It's hard to say right now since stock of both the GTX 980 and GTX 970 is so hard to come by, and it's hard to imagine NVIDIA lowering prices as long as parts continue to sell out. NVIDIA believes that its branding and technologies like G-Sync make GeForce cards more valuable, and until they begin to see a shift in the market, I imagine it will stay the course.
For those of you who use our Hardware Leaderboard, you'll find that Jeremy has taken these prices into account and updated a couple of the system build configurations.
A Civ for a New Generation
Turn-based strategy games have long been defined by the Civilization series. Civ 5 took up hours and hours of the PC Perspective team's non-working hours (and likely the working ones too), and it looks like the new Civilization: Beyond Earth has the chance to do the same. Early reviews of the game from Gamespot, IGN, and Polygon are quite positive, and that's great news for a PC-only release; they can sometimes get overlooked in the gaming media.
For us, the game offers an interesting opportunity to discuss performance. Beyond Earth is definitely going to be more CPU-bound than the other games that we tend to use in our benchmark suite, but the fact that this game is new, shiny, and even has a Mantle implementation (AMD's custom API) makes it worth at least a look at the current state of performance. Both NVIDIA and AMD have released drivers with specific optimizations for Beyond Earth as well. This game is likely to be popular and it deserves the attention it gets.
Civilization: Beyond Earth, a turn-based strategy game that can take a very long time to complete, ships with an integrated benchmark mode to help users and the industry test performance under different settings and hardware configurations. To enable it, you simply add "-benchmark results.csv" to the Steam game launch options and then start up the game normally. Rather than taking you to the main menu, you'll be transported into a view of a map that represents a somewhat typical gaming state for a long-term session. The benchmark uses the last settings you ran the game at without the modified launch options, so be sure to configure those before you benchmark.
The output of this is the "results.csv" file, saved to your Steam game install root folder. In there, you'll find a list of numbers, separated by commas, representing the frame time of each frame rendered during the run. You don't get averages, a minimum, or a maximum without doing a little work. Fire up Excel or Google Docs and remember the formula:
1000 / Average (All Frame Times) = Avg FPS
It's a crude measurement that doesn't take into account any errors, spikes, or other interesting statistical data, but at least you'll have something to compare with your friends.
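If a spreadsheet feels like too much clicking, the same math is a few lines of Python. A minimal sketch that applies the formula above to the comma-separated frame times (in milliseconds); the file path assumes you used "results.csv" in the launch options:

```python
import csv

def fps_stats(frame_times_ms):
    """Summarize per-frame render times (ms) as average/min/max FPS."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    return {
        "avg_fps": 1000.0 / avg_ms,
        "min_fps": 1000.0 / max(frame_times_ms),  # slowest frame
        "max_fps": 1000.0 / min(frame_times_ms),  # fastest frame
    }

def load_frame_times(path="results.csv"):
    """Read the comma-separated frame times the benchmark writes out."""
    with open(path, newline="") as f:
        return [float(v) for row in csv.reader(f) for v in row if v.strip()]
```

For example, frame times of 20, 25, and 15 ms average to 20 ms, which works out to 50 FPS average with a 40 FPS minimum.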
Our testing settings
Just as I have done in recent weeks with Shadow of Mordor and Sniper Elite 3, I ran some graphics cards through the testing process with Civilization: Beyond Earth. Testing was limited to the GeForce GTX 980 and Radeon R9 290X, along with SLI and CrossFire configurations. The R9 290X was run in both DX11 and Mantle.
- Core i7-3960X
- ASUS Rampage IV Extreme X79
- 16GB DDR3-1600
- GeForce GTX 980 Reference (344.48)
- ASUS R9 290X DirectCU II (14.9.2 Beta)
Mantle Additions and Improvements
AMD is proud of this release as it introduces a few interesting things alongside the inclusion of the Mantle API.
- Enhanced-quality Anti-Aliasing (EQAA): Improves anti-aliasing quality by doubling the coverage samples (vs. MSAA) at each AA level. This is automatically enabled for AMD users when AA is enabled in the game.
- Multi-threaded command buffering: Utilizing Mantle allows a game developer to queue a much wider flow of information between the graphics card and the CPU. This communication channel is especially good for multi-core CPUs, which have historically gone underutilized in higher-level APIs. You’ll see in your testing that Mantle makes a notable difference in smoothness and performance in high-draw-call late-game testing.
- Split-frame rendering: Mantle empowers a game developer with total control of multi-GPU systems. That “total control” allows them to design an mGPU renderer that best matches the design of their game. In the case of Civilization: Beyond Earth, Firaxis has selected a split-frame rendering (SFR) subsystem. SFR eliminates the latency penalties typically encountered by AFR configurations.
EQAA is an interesting feature as it improves on the quality of MSAA (somewhat) by doubling the coverage sample count while maintaining the same color sample count as MSAA. So 4xEQAA will have 4 color samples and 8 coverage samples, while 4xMSAA would have 4 of each. Interestingly, Firaxis has decided that EQAA will be enabled in Beyond Earth anytime a Radeon card is detected (running in Mantle or DX11) and AA is enabled at all. So even though in the menus you might see 4xMSAA enabled, you are actually running at 4xEQAA. For NVIDIA users, 4xMSAA means 4xMSAA. Performance differences should be negligible though, according to AMD (who would actually be "hurt" by this decision if it brought down FPS).
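The sample-count relationship is simple enough to state precisely. A sketch of how Beyond Earth's AA levels map to sample counts per the behavior described above; the function name is mine, not Firaxis's:

```python
def aa_sample_counts(aa_level, eqaa=False):
    """Return (color_samples, coverage_samples) for a given AA level.

    With EQAA (the Radeon path in Beyond Earth), coverage samples are
    doubled while color samples stay the same; plain MSAA uses equal counts.
    """
    return (aa_level, aa_level * 2 if eqaa else aa_level)
```

So a Radeon user selecting "4xMSAA" effectively gets (4, 8), while a GeForce user gets (4, 4).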
Quick Performance Comparison
Earlier this week, we posted a brief story that looked at the performance of Middle-earth: Shadow of Mordor on the latest GPUs from both NVIDIA and AMD. Last week also marked the release of the v1.11 patch for Sniper Elite 3 that introduced an integrated benchmark mode as well as support for AMD Mantle.
I decided that this was worth a quick look with the same lineup of graphics cards that we used to test Shadow of Mordor. Let's see how the NVIDIA versus AMD battle stacks up here.
For those unfamiliar with the Sniper Elite series, it focuses on the impact of an individual sniper on a particular conflict, and Sniper Elite 3 doesn't change that formula much. If you have ever seen video of a bullet slowly going through a body, allowing you to see the bones and muscle of the particular enemy being killed...you've probably been watching the Sniper Elite games.
Gore and such aside, the game is fun and combines sniper action with stealth and puzzles. It's worth a shot if you are the kind of gamer that likes to use the sniper rifles in other FPS titles.
But let's jump straight to performance. You'll notice that in this story we are not using our Frame Rating capture performance metrics. That is a direct result of wanting to compare Mantle to DX11 rendering paths - since we have no way to create an overlay for Mantle, we have resorted to using FRAPs and the integrated benchmark mode in Sniper Elite 3.
Our standard GPU test bed was used with a Core i7-3960X processor, an X79 motherboard, 16GB of DDR3 memory, and the latest drivers for both parties involved. That means we installed Catalyst 14.9 for AMD and 344.16 for NVIDIA. We'll be comparing the GeForce GTX 980 to the Radeon R9 290X, and the GTX 970 to the R9 290. We will also look at SLI/CrossFire scaling at the high end.