Subject: Processors | April 7, 2015 - 05:56 PM | Jeremy Hellstrom
Tagged: amd, FX-8320e
Over at Techgage one of the writers recently updated their system. Budget constraints kept the total build in the $600-700 range, which of course points to an AMD build. They chose the $138 FX-8320E for their processor, along with a pair of GTX 760s, the ASUS M5A99FX Pro R2.0, and 8GB of DDR3-1866; with storage, power, cooling and case they managed to stay within their budget. The question that remains is whether it is powerful enough for reasonable gaming duties such as Borderlands 2. Read on to see if the recommendation is to go with AMD or the i3-4330 and a low-end H97 board.
"Released this past fall, AMD’s FX-8320E processor promises to deliver a lot of processing power for those on a budget. It sports eight cores, and as a Black Edition, its overclocking capabilities are unrestricted. But is that enough to make this the best go-to budget processor, especially for gamers?"
Here are some more Processor articles from around the web:
- A10-7800 CPU Review @ Hardware Secrets
- AMD A8-7650k Kaveri @ eTeknix
- A10-6800K vs. Core i3-4150 CPU Review @ Hardware Secrets
Subject: Displays | April 7, 2015 - 12:31 PM | Ryan Shrout
Tagged: variable refresh rate, mg279q, freesync, asus, amd
If you remember back at CES in early January, we got hands-on with an upcoming monitor from ASUS, the MG279Q. Unlike the company's other G-Sync enabled displays, this monitor was unique in that it offered support for the Adaptive Sync portion of the DisplayPort 1.2a standard but was also not a part of AMD's initial wave of FreeSync monitors.
The ASUS MG279Q from CES 2015
The screen technology itself was impressive: a 2560x1440 resolution, IPS-style implementation and a maximum refresh rate of 120 Hz. (Note: the new marketing material indicates that the panel will have a 144 Hz maximum refresh rate. Maybe there was a hardware change since CES?) During a video interview with ASUS at the time it was labeled as having a minimum refresh rate of 40 Hz which is something we look forward to testing if and when we can get a sample in our labs.
At the time, there was some interesting debate about WHY this wasn't a FreeSync branded monitor. We asked AMD specifically about this monitor's ability to work with capable Radeon GPUs for variable refresh and they promised there were no lock-outs occurring. We guessed that maybe ASUS' deal with NVIDIA on G-Sync was preventing them from joining the FreeSync display program, but clearly that wasn't the case. Today on Twitter, AMD announced that the MG279Q was officially part of the FreeSync brand.
I am glad to see more products come into the FreeSync monitor market and hopefully we'll have some solid gaming experiences with the ASUS MG279Q to report back on soon!
Process Technology Overview
We have been very spoiled throughout the years. We likely did not realize exactly how spoiled we were until it became very obvious that the rate of process technology advances had hit a virtual brick wall. Every 18 to 24 months we were treated to a new, faster, more efficient process node opened up to fabless semiconductor firms, along with a new generation of products that would blow our hair back. Now we are at a virtual standstill when it comes to new process nodes from the pure-play foundries.
Few expected the 28 nm node to live nearly as long as it has. Some of the first cracks in the façade actually came from Intel. Their 22 nm Tri-Gate (FinFET) process took a little bit longer to get off the ground than expected. We also noticed some interesting electrical features from the products developed on that process. Intel skewed away from higher clockspeeds and focused on efficiency and architectural improvements rather than staying at generally acceptable TDPs and leapfrogging the competition by clockspeed alone. Overclockers noticed that the newer parts did not reach the same clockspeed heights as previous products such as the 32 nm based Sandy Bridge processors. Whether this decision was intentional from Intel or not is debatable, but my gut feeling here is that they responded to the technical limitations of their 22 nm process. Yields and bins likely dictated the max clockspeeds attained on these new products. So instead of vaulting over AMD’s products, they just slowly started walking away from them.
Samsung is one of the first pure-play foundries to offer a working sub-20 nm FinFET product line. (Photo courtesy of ExtremeTech)
When 28 nm was released the plans on the books were to transition to 20 nm products based on planar transistors, thereby bypassing the added expense of developing FinFETs. It was widely expected that FinFETs were not necessarily required to address the needs of the market. Sadly, that did not turn out to be the case. There are many other factors as to why 20 nm planar parts are not common, but the limitations of that particular process node have made it a relatively niche process that is appropriate for smaller, low power ASICs (like the latest Apple SOCs). The Apple A8 is rumored to be around 90 mm², which is a far cry from the traditional midrange GPU that ranges from 250 mm² to 400+ mm².
The essential difficulty of the 20 nm planar node appears to be a lack of power scaling to match the increased transistor density. TSMC and others have successfully packed more transistors into every square mm as compared to 28 nm, but the electrical characteristics did not scale proportionally well. Yes, there are improvements per transistor, but when designers pack all of those transistors into a large design, TDP and voltage issues start to arise. More transistors at higher voltages take more power to drive, which in turn produces more heat. The GPU guys probably looked at this and figured out that while they could achieve a higher transistor density and a wider design, they would have to downclock the entire GPU to hit reasonable TDP levels. When adding these concerns to yields and bins for the new process, the advantages of going to 20 nm would be slim to none at the end of the day.
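A back-of-the-envelope model makes the trade-off concrete. All of the numbers below are illustrative assumptions, not measured 28 nm or 20 nm process data; the point is only that if density nearly doubles while per-transistor power falls by less, the clock must drop to stay inside the same power budget.

```python
# Toy model of why a dense 20 nm design must downclock. Every figure here is
# an invented round number for illustration, not real foundry data.

def chip_power(transistors, watts_per_transistor_per_ghz, clock_ghz):
    """Active power modeled as P ~ N * p * f (capacitance/voltage folded into p)."""
    return transistors * watts_per_transistor_per_ghz * clock_ghz

# Hypothetical 28 nm baseline: 5.0B transistors at 1.0 GHz inside a 250 W budget.
p28 = chip_power(5.0e9, 50e-9, 1.0)      # 250 W

# Assumed 20 nm planar: ~1.9x density but only ~25% less power per transistor.
p20 = chip_power(9.5e9, 37.5e-9, 1.0)    # ~356 W at the same clock -- over budget

# Clock must fall until the denser design fits back into the 250 W envelope.
required_clock = 250.0 / chip_power(9.5e9, 37.5e-9, 1.0)
print(f"20 nm at 1.0 GHz: {p20:.0f} W; must downclock to ~{required_clock:.2f} GHz")
```

Under these assumed numbers the wider 20 nm design only fits the old power envelope at roughly 0.70 GHz, which is exactly the "more transistors, lower clocks" outcome described above.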
Subject: General Tech | March 31, 2015 - 12:18 PM | Jeremy Hellstrom
Tagged: skybridge, HPC, arm, amd
The details are a little sparse, but we now have hints of what AMD's plans are for next year and 2017. In 2016 we should see AMD chips with ARM cores, the Skybridge architecture which Josh described almost a year ago; these will be pin-compatible, allowing the same motherboard to run either an ARM processor or an AMD64 part depending on your requirements. The GPU portion of their APUs will move forward on a two-year cycle, so we should not expect any big jumps in the next year, but they are talking about an HPC-capable part by 2017. The final point that The Register translated covers that HPC part, which is supposed to utilize a new memory architecture nine times faster than existing GDDR5.
"Consumer and commercial business lead Junji Hayashi told the PC Cluster Consortium workshop in Osaka that the 2016 release CPU cores (an ARMv8 and an AMD64) will get simultaneous multithreading support, to sit alongside the clustered multithreading of the company's Bulldozer processor families."
Here is some more Tech News from around the web:
- Project Spartan browser will hit Windows 10 in next Insider build @ The Inquirer
- How to Use the Linux Command Line: Basics of CLI @ Linux.com
- Microsoft: Office 365 IT admins get free device-wrangling controls @ The Register
- KitGuru TV: Titan X, Sli, AMD's 390x and AMD Freesync, Nvidia GSYNC
It's more than just a branding issue
As a part of my look at the first wave of AMD FreeSync monitors hitting the market, I wrote an analysis of how the competing technologies of FreeSync and G-Sync differ from one another. It was a complex topic that I tried to state in as succinct a fashion as possible given the time constraints and that the article subject was on FreeSync specifically. I'm going to include a portion of that discussion here, to recap:
First, we need to look inside the VRR window, the zone in which the monitor and AMD claims that variable refresh should be working without tears and without stutter. On the LG 34UM67 for example, that range is 48-75 Hz, so frame rates between 48 FPS and 75 FPS should be smooth. Next we want to look above the window, or at frame rates above the 75 Hz maximum refresh rate of the window. Finally, and maybe most importantly, we need to look below the window, at frame rates under the minimum rated variable refresh target, in this example it would be 48 FPS.
AMD FreeSync offers more flexibility for the gamer than G-Sync around this VRR window. Both above and below the variable refresh area, AMD allows gamers to continue to select a VSync enabled or disabled setting. That setting will be handled just as it is today when your game's frame rate extends outside the VRR window. So, for our 34UM67 monitor example, if your game is capable of rendering at a frame rate of 85 FPS then you will either see tearing on your screen (if you have VSync disabled) or you will get a static frame rate of 75 FPS, matching the top refresh rate of the panel itself. If your game is rendering at 40 FPS, lower than the minimum of the VRR window, then you will again see either tearing (with VSync off) or the potential for stutter and hitching (with VSync on).
But what happens with this FreeSync monitor and theoretical G-Sync monitor below the window? AMD’s implementation means that you get the option of disabling or enabling VSync. For the 34UM67 as soon as your game frame rate drops under 48 FPS you will either see tearing on your screen or you will begin to see hints of stutter and judder as the typical (and previously mentioned) VSync concerns again crop their head up. At lower frame rates (below the window) these artifacts will actually impact your gaming experience much more dramatically than at higher frame rates (above the window).
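The three regimes described above (inside, above, and below the VRR window) can be summarized in a small decision function. This is our own sketch of the behavior as described, using the LG 34UM67's 48-75 Hz window; the function name and return strings are ours, not AMD's.

```python
# Sketch of FreeSync presentation behavior relative to the VRR window,
# per the description above. Window defaults match the LG 34UM67 example.

def freesync_behavior(fps, vrr_min=48, vrr_max=75, vsync=True):
    """Return how a frame rendered at `fps` is presented on a FreeSync panel."""
    if vrr_min <= fps <= vrr_max:
        return f"variable refresh at {fps} Hz (smooth, no tearing)"
    if fps > vrr_max:
        # Above the window: cap at max refresh (VSync on) or tear (VSync off).
        return f"fixed {vrr_max} Hz" if vsync else "tearing above window"
    # Below the window: stutter/judder (VSync on) or tearing (VSync off).
    return "stutter and judder below window" if vsync else "tearing below window"

print(freesync_behavior(60))                # inside the window
print(freesync_behavior(85))                # above: capped at 75 Hz, VSync on
print(freesync_behavior(40, vsync=False))   # below: tearing, VSync off
```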
G-Sync treats this “below the window” scenario very differently. Rather than reverting to VSync on or off, the module in the G-Sync display is responsible for auto-refreshing the screen if the frame rate dips below the minimum refresh of the panel, which would otherwise be affected by flicker. So, in a 30-144 Hz G-Sync monitor, we have measured that when the frame rate actually gets to 29 FPS, the display is actually refreshing at 58 Hz, each frame being “drawn” one extra time to avoid pixel flicker while still maintaining tear-free and stutter-free animation. If the frame rate dips to 25 FPS, then the screen draws at 50 Hz. If the frame rate drops to something more extreme like 14 FPS, we actually see the module quadruple-drawing the frame, taking the refresh rate back to 56 Hz. It’s a clever trick that keeps the VRR goals intact and prevents a degradation of the gaming experience. But this method requires a local frame buffer and logic on the display controller to work; hence the current implementation in a G-Sync module.
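NVIDIA has not published the module's exact redraw rule, but a simple frame-multiplication sketch reproduces the refresh rates we measured (29 FPS at 58 Hz, 25 at 50, 14 at 56). The ~45 Hz "comfort floor" below is our assumption, chosen because it is consistent with all three data points; the real threshold could differ.

```python
# Frame-multiplication sketch consistent with our measurements above.
# The 45 Hz floor is an assumed value, not a published NVIDIA parameter.

def gsync_refresh(fps, floor_hz=45, panel_max=144):
    """Return (repeat_count, effective_refresh_hz) for a low frame rate."""
    repeats = 1
    # Repeat each frame until the panel refresh clears the assumed floor,
    # without letting another doubling exceed the panel's 144 Hz maximum.
    while fps * repeats < floor_hz and fps * repeats * 2 <= panel_max:
        repeats += 1
    return repeats, fps * repeats

for fps in (29, 25, 14):
    n, hz = gsync_refresh(fps)
    print(f"{fps} FPS -> frame drawn x{n}, panel refreshes at {hz} Hz")
```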
As you can see, the topic is complicated. So Allyn and I (and an aging analog oscilloscope) decided to take it upon ourselves to try and understand and teach the implementation differences with the help of some science. The video below is where the heart of this story is focused, though I have some visual aids embedded after it.
Still not clear on what this means for frame rates and refresh rates on current FreeSync and G-Sync monitors? Maybe this will help.
Subject: General Tech | March 26, 2015 - 01:51 PM | Ken Addison
Tagged: XPS 13, video, Vector 180, usb 3.1, supernova, Silverstone, quadro, podcast, ocz, nvidia, m6000, gsync, FT05, freesync, Fortress, evga, dell, ddr4-3400, ddr4, corsair, broadwell-u, amd
Join us this week as we discuss the launch of FreeSync, Dell XPS 13, Super Fast DDR4 and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Sebastian Peak
Program length: 1:29:50
Our first DX12 Performance Results
Late last week, Microsoft approached me to see if I would be interested in working with them and with Futuremark on the release of the new 3DMark API Overhead Feature Test. Of course I jumped at the chance, with DirectX 12 being one of the hottest discussion topics among gamers, PC enthusiasts and developers in recent history. Microsoft set us up with the latest iteration of 3DMark and the latest DX12-ready drivers from AMD, NVIDIA and Intel. From there, off we went.
First we need to discuss exactly what the 3DMark API Overhead Feature Test is (and also what it is not). The feature test will be a part of the next revision of 3DMark, which will likely ship in time with the full Windows 10 release. Futuremark claims that it is the "world's first independent" test that allows you to compare the performance of three different APIs: DX12, DX11 and even Mantle.
It was almost one year ago that Microsoft officially unveiled the plans for DirectX 12: a move to a more efficient API that can better utilize the CPU and platform capabilities of future, and most importantly current, systems. Josh wrote up a solid editorial on what we believe DX12 means for the future of gaming, and in particular for PC gaming, that you should check out if you want more background on the direction DX12 has set.
One of DX12's keys to becoming more efficient is the ability for developers to get closer to the metal, a phrase indicating that game and engine coders can access more of the system's power (CPU and GPU) without having their hands held by the API itself. The most direct benefit of this, as we saw with AMD's Mantle implementation over the past couple of years, is an increase in the number of draw calls that a given hardware system can sustain in a game engine.
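The arithmetic behind draw call throughput is straightforward: each call costs the CPU some fixed overhead, so the frame-time budget caps how many calls fit per frame. The per-call costs below are invented round numbers for illustration only, not measured DX11 or DX12 figures.

```python
# Illustrative model of API draw-call overhead. The microsecond costs are
# assumed round numbers, not benchmarked DX11/DX12 values.

def max_draw_calls(frame_budget_ms, per_call_overhead_us):
    """How many draw calls fit in one frame if the CPU did nothing else."""
    return int(frame_budget_ms * 1000 / per_call_overhead_us)

frame_ms = 1000 / 60   # ~16.7 ms budget per frame at 60 FPS
thick_api = max_draw_calls(frame_ms, per_call_overhead_us=10.0)  # hand-holding API
thin_api = max_draw_calls(frame_ms, per_call_overhead_us=1.0)    # closer to the metal

print(f"10 us/call: {thick_api} draw calls per frame")
print(f" 1 us/call: {thin_api} draw calls per frame")
```

Cutting the assumed per-call overhead by 10x raises the per-frame draw call ceiling by the same factor, which is exactly the kind of headline gain the 3DMark API Overhead Feature Test is designed to expose.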
Subject: Motherboards | March 25, 2015 - 08:00 AM | Tim Verry
Tagged: usb 3.1, usb, msi, amd, am3+, 10Gbps
Last week, MSI launched a slew of new USB 3.1 equipped motherboards. Today, the company is releasing more details on one of the AMD-based products: the MSI 970A SLI Krait Edition. This upcoming motherboard is geared towards gamers using AMD FX (AM3+) processors and supports multi-GPU setups (both SLI and CrossFire). The 970A SLI Krait Edition has a black and white color scheme with rich expansion options and large aluminum heatsinks over the VRMs and northbridge.
The AM3+ processor socket sits to the left of four DDR3 memory slots. Six expansion slots take up the majority of the lower half of the board and include two PCI-E x16, two PCI-E x1, and two PCI slots. Six SATA ports occupy the bottom-right corner with four at 90-degree angles. MSI is using its latest “Military Class 4” capacitors and other hardware along with gold audio traces connecting the rear IO audio jacks to the onboard sound chip.
Speaking of rear IO, you will find the following ports on the 970A SLI Krait Edition.
- 2 x PS/2
- 6 x USB 2.0
- 2 x USB 3.1
- 1 x Gigabit Ethernet
- 6 x Analog Audio
The main feature that MSI is pushing with this new board is the addition of two USB 3.1 (Type A) ports to the AMD platform. This is the first AM3+ motherboard to support the faster standard – up to 10 Gbps using an Asmedia ASM1352R controller – while also being backwards compatible with older USB 3.0 and USB 2.0 devices.
MSI has not yet released pricing or availability, but expect it to launch soon for less than $100.
Few specifications have been released about this board so far, and there is no timetable for the launch. It is a finished product, however, and should be out "soon" as Tim mentioned.
There are a few things we can gather from the photo of the board. The audio solution is not nearly as robust as we saw with the 970 Gaming motherboard. I doubt it will have the headphone amplification, and the filtering is going to be less due to fewer caps used. The audio is still physically isolated on the PCB, but it has not received the same focus as what we saw on 970 Gaming.
It looks like it is a full 8+2 power phase implementation, as it is taking up more space on the board than the 6+2 unit on the 970 Gaming did. This should allow for a greater selection of CPUs to be used, as well as potentially greater overclocking ability. It does not feature a separate SATA controller, so all 6 SATA ports on the board are handled by the SB950. There are no external e-SATA ports, which really is not a big deal as those are rarely used.
This looks to be a nice addition to the fading AM3+ market. For those holding onto their AMD builds who wish to upgrade, this looks to be an inexpensive option with next generation connectivity. MSI looks to have paid the licensing fee necessary to support SLI, plus they utilize the same AMD 970 chipset as on the 970 Gaming, which is not supposed to be able to split the single x16 PEG connection into two x8 slots. Some interesting design and chippery are required to do that.
What is FreeSync?
FreeSync: What began as merely a term for AMD’s plans to counter NVIDIA’s launch of G-Sync (and a mocking play on NVIDIA’s trade name) has finally come to fruition, keeping the name - and the attitude. As we have discussed, AMD’s Mantle API was crucial to pushing the industry in the correct and necessary direction for lower level APIs, and NVIDIA’s G-Sync deserves the same credit for recognizing and imparting the necessity of a move to variable refresh display technology. Variable refresh displays can fundamentally change the way that PC gaming looks and feels when they are built correctly and implemented with care, and we have seen that time and time again with many different G-Sync enabled monitors at our offices. It might finally be time to make the same claims about FreeSync.
But what exactly is FreeSync? AMD has been discussing it since CES in early 2014, claiming that they would bypass the idea of a custom module that needs to be used by a monitor to support VRR, and instead go the route of open standards using a modification to DisplayPort 1.2a from VESA. FreeSync is based on Adaptive Sync, an optional portion of the DP standard that enables a variable refresh rate by expanding the vBlank timings of a display, and it also provides a way of updating the EDID (display ID information) to communicate these settings to the graphics card. FreeSync itself is simply the AMD brand for this implementation, combining the monitors with correctly implemented drivers and GPUs that support the variable refresh technology.
A set of three new FreeSync monitors from Acer, LG and BenQ.
Fundamentally, FreeSync works in a very similar fashion to G-Sync, utilizing the idea of the vBlank timings of a monitor to change how and when it updates the screen. The vBlank signal is what tells the monitor to begin drawing the next frame, representing the end of the current data set and marking the beginning of a new one. By varying the length of time this vBlank signal is set to, you can force the monitor to wait any amount of time necessary, allowing the GPU to end the vBlank instance exactly when a new frame is done drawing. The result is a variable refresh rate monitor, one that is in tune with the GPU render rate, rather than opposed to it. Why is that important? I wrote in great detail about this previously, and it still applies in this case:
The idea of G-Sync (and FreeSync) is pretty easy to understand, though the implementation method can get a bit more hairy. G-Sync (and FreeSync) introduces a variable refresh rate to a monitor, allowing the display to refresh at a wide range of rates rather than at fixed intervals. More importantly, rather than the monitor dictating to the PC what rate this refresh occurs at, the graphics card now tells the monitor when to refresh in a properly configured G-Sync (and FreeSync) setup. This allows a monitor to match the refresh rate of the screen to the draw rate of the game being played (frames per second), and that simple change drastically improves the gaming experience for several reasons.
Gamers today are likely to be very familiar with V-Sync, short for vertical sync, which is an option in your graphics card’s control panel and in your game options menu. When enabled, it forces the monitor to draw a new image on the screen at a fixed interval. In theory, this would work well and the image is presented to the gamer without artifacts. The problem is that games that are played and rendered in real time rarely hold to one specific frame rate. With only a couple of exceptions, game frame rates will fluctuate based on the activity happening on the screen: a rush of enemies, a changed camera angle, an explosion or falling building. Instantaneous frame rates can vary drastically, from 30 to 60 to 90 FPS, yet VSync forces the image to be displayed only at set fractions of the monitor's refresh rate, which causes problems.
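The "set fractions" quantization is easy to demonstrate. On a fixed 60 Hz panel with VSync on, a finished frame must wait for the next refresh tick, so any render rate between two fractions snaps down to the lower one: 45 FPS rendered becomes 30 FPS displayed. A minimal sketch of that behavior, with the panel rate and render times as assumed inputs:

```python
# Minimal model of VSync quantization on a fixed-refresh panel. Assumes each
# frame takes a constant render time and is held until the next refresh tick.

def vsync_effective_fps(render_fps, panel_hz=60):
    """Frame rate actually displayed when frames wait for a refresh tick."""
    refresh_interval = 1.0 / panel_hz
    render_time = 1.0 / render_fps
    # Number of whole refresh intervals each frame occupies (ceiling division).
    ticks_waited = -(-render_time // refresh_interval)
    return round(panel_hz / ticks_waited, 1)

for fps in (90, 60, 45, 35):
    print(f"render at {fps} FPS -> displayed at {vsync_effective_fps(fps)} FPS")
```

Note how both 45 and 35 FPS collapse to 30 FPS displayed: that sudden halving, rather than a smooth slide, is the stutter VSync users feel and the problem variable refresh removes.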
Subject: General Tech | March 17, 2015 - 01:18 PM | Jeremy Hellstrom
Tagged: hsa foundation, hsa, amd, arm, Samsung, Imagination Technologies, HSAIL
We have been talking about the HSA Foundation since 2013, a cooperative effort by AMD, ARM, Imagination, Samsung, Qualcomm, MediaTek and TI to design a heterogeneous memory architecture that allows GPUs, DSPs and CPUs to all directly access the same physical memory. The release of the official specifications today is a huge step forward for these companies, especially for garnering future mobile market share as physical hardware apart from Carrizo becomes available.
Programmers will be able to use C, C++, Fortran, Java, and Python to write HSA-compliant code which is then compiled into HSAIL (Heterogeneous System Architecture Intermediate Language) and from there to the actual binary executables which will run on your devices. HSA currently supports x86 and x64, and there are Linux kernel patches available for those who develop on that OS. Intel and NVIDIA are not involved in this project at all; they have chosen their own solutions for mobile devices, and while Intel certainly has pockets deep enough to experiment, NVIDIA might not. We shall soon see if Pascal, and improvements to Maxwell's performance and efficiency through future generations, can compete with the benefits of HSA.
The current problem is of course hardware: Bald Eagle and Carrizo are scheduled to arrive on the market soon but are not yet available. Sea Islands GPUs and Kaveri have some HSA enhancements, but with limited hardware to work with it will be hard to convince developers to focus on programming HSA optimized applications. The release of the official specs today is a great first step; if you prefer an overview to reading through the official documents, The Register has a good article right here.
"The HSA Foundation today officially published version 1.0 of its Heterogeneous System Architecture specification, which (if we were being flippant) describes how GPUs, DSPs and CPUs can share the same physical memory and pass pointers between each other. (A provisional 1.0 version went live in August 2014.)"
Here is some more Tech News from around the web:
- Droidberry dangles: Why the BlackBerry-Samsung alliance is big potatoes @ The Register
- BlackBerry: FREAK SSL bug affects BES, BBM and BlackBerry smartphones @ The Inquirer
- Apple will pay you to ditch your Android or BlackBerry smartphone @ The Inquirer
- Ext4 Filesystem Improvements to Address Scaling Challenges @ Linux.com
- Microsoft gives EMET divine powers to repel God Mode attack @ The Register
- Microsoft RE-BORKS Windows 7 patch after reboot loop horror @ The Register
- Fujitsu Could Help Smartphone Chips Run Cooler @ Slashdot
- Gigabyte announces financial results for 2014 @ DigiTimes
- 3D Audio Standard Released @ Slashdot
- NikKTech And Nanoxia Spring Break EU Giveaway