I knew that the move to DirectX 12 was going to be a big shift for the industry. Since the introduction of the AMD Mantle API alongside the Hawaii GPU architecture, we have been inundated with game developers and hardware vendors talking up the potential benefits of lower level APIs, which give game developers and engines more direct access to GPU hardware and enable more flexible CPU threading. The results, we were told, would mean that your current hardware could take you further, and that future games and applications could fundamentally change how they are built to dramatically enhance gaming experiences.
I knew that reader interest in DX12 was outstripping my expectations when I did a live blog of Microsoft's official DX12 unveil at GDC. In a format that consisted simply of my text commentary and photos of the slides being shown (no video at all), we had more than 25,000 live readers who stayed engaged the whole time. Comments and questions flew in during the event – more than my staff and I could possibly handle in real time. It turned out that gamers were indeed very much interested in what DirectX 12 might offer them with the release of Windows 10.
Today we are taking a look at the first real world gaming benchmark to utilize DX12. Back in March I was able to do some early testing with Futuremark's 3DMark API Overhead test, which evaluates the overhead implications of DX12, DX11 and even AMD Mantle. That first look at DX12 was interesting and painted an encouraging picture of the potential benefits of Microsoft's new API, but it wasn't built on a real game engine. In our Ashes of the Singularity benchmark testing today, we finally get an early look at what a real DX12 implementation looks like.
And as you might expect, not only are the results interesting, but there is a significant amount of manufactured controversy over what those results actually tell us. AMD has one story, NVIDIA another, and Stardock and the Nitrous engine developers yet another. It's all incredibly intriguing.
The Ice Storm Test
Love it or hate it, 3DMark holds a unique place in the world of PC gaming and enthusiasts. Since 3DMark99 was released in 1998 with a target on DirectX 6, Futuremark has been developing benchmarks on a regular basis in step with major API changes as well as major hardware changes. The most recent release, 3DMark 11, has been out since late 2010 and has been a regular part of our many graphics card reviews on PC Perspective.
Today Futuremark is not only releasing a new version of the benchmark but is also taking a fundamentally different approach to performance testing and platforms. The new 3DMark, called simply "3DMark", will target not only high-end gaming PCs but also integrated graphics platforms and even tablets and smartphones.
We interviewed the President of Futuremark, Oliver Baltuch, over the weekend and asked him about this new direction for 3DMark, how mobile devices will affect benchmarking going forward, the new results patterns, stuttering and more. Check out the video below!
Make no bones about it, this is a synthetic benchmark, and if you have taken issue with that in the past because it is not a "real world" gaming test, you will continue to have those complaints. Personally, I find the information that 3DMark provides very informative, though it definitely shouldn't be depended on as the ONLY graphics performance metric.
With Google reporting Android device activations upward of 550,000 devices a day, the rapid growth and ubiquity of the platform cannot be denied. As the platform has grown, we here at PC Perspective have constantly kept an eye out for ways to assess and compare the performance of different devices running the same mobile operating system. In the past we have done performance testing with applications such as Quadrant and Linpack, and GPU testing with NenaMark and Qualcomm's NeoCore product.
Today we are taking a look at a new mobile benchmark from Qualcomm, named Vellamo. Qualcomm saw the need for a vendor-agnostic browser benchmark on Android, and Vellamo is the result. A video introduction from Qualcomm's Director of Product Management, Sy Choudhury, is below.
Subject: Editorial, General Tech | June 21, 2011 - 02:36 PM | Ryan Shrout
Tagged: VIA, sysmark, nvidia, Intel, benchmark, bapco, amd
It seems that all the tech community is talking about today is BAPCo and its benchmarking suite called Sysmark. A new version, 2012, was released just recently and yesterday we found out that AMD, NVIDIA and VIA have all dropped their support of the "Business Applications Performance Corporation". Obviously those companies have a beef with the benchmark as it is, yet somehow one company stands behind the test: Intel.
Everyone you know is posting about it. My twitter feed asploded with comments like this:
AMD quits BAPCo, says SYSmark is nutso. Nvidia and VIA, they say, also. http://bit.ly/kHvKux
AMD: Voting For Openness: In order to get a better understanding of AMD's press release earlier concerning BAPCO... http://bit.ly/kNtKkj
Ooh, BapCo drama.
Even PC Perspective posted on this drama yesterday afternoon saying: "The disputes centered mostly over the release of SYSmark 2012. For years various members have been complaining about various aspects of the product which they allege Intel strikes down and ignores while designing each version. One major complaint is the lack of reporting on the computer’s GPU performance which is quickly becoming beyond relevant to an actual system’s overall performance. With NVIDIA, AMD, and VIA gone from the consortium, Intel is pretty much left alone in the company: now officially."
Obviously, while cutting the grass this morning this was the topic swirling through my head; so thanks for that, everyone. My question is this: does it really matter, and how is this any different than it has been for YEARS? The cynical side of me says that AMD, NVIDIA and VIA all dropped out because each company's products aren't stacking up as well as Intel's when it comes to the total resulting score. Intel makes the world's fastest CPUs, I don't think anyone with a brain will dispute that, and as such on benchmarks that test the CPU they are going to have the edge.
We recently reviewed the AMD Llano-based Sabine platform and in CPU-centric tests like SiSoft Sandra, TrueCrypt and 7zip the AMD APU is noticeably slower. But AMD isn't sending out press releases and posting blogs about how these benchmarks don't show the true performance of a system as the end user will see. And Intel isn't pondering why we used games like Far Cry 2 and Just Cause 2 to show the AMD APU dominating there. Why? Because these tests are part of a suite of benchmarks we use to show the overall performance of a system. They are tools which competent reviewers wield in order to explain to readers why certain hardware acts in a certain way in certain circumstances.
Continue reading for more on this topic...
G.Skill Breaks World Overclocking Record and Achieves Fastest Super Pi 32M Record For 1155 Intel Platform
Subject: Memory | June 4, 2011 - 08:26 PM | Tim Verry
Tagged: record, ram, G.Skill, computex, benchmark
G.Skill brought their "A game" to this year's Computex 2011 show by shattering the current Super Pi 32M record on the first day of the show. With the help of famous overclockers Shamino, Fredyama, and Young Pro, the team was able to achieve a time of 5 min 33.172 sec. Using the company's DDR3-2400 CL8 (2x2GB) memory kit, the team achieved the record overclock with an Intel Core i7-2600K processor at 6.34 GHz and memory clocked at 2340 MHz with timings of 6-9-6-25 1T. This was all run on an ASUS ROG Maximum IV Extreme motherboard.
Considering that the memory still had headroom (it had not even reached its rated clocks), G.Skill is confident that they will break even their own record, saying that "this is just the beginning, we aim to achieve more records before the close of Computex 2011."
The Super Pi 32M program is often used as both a benchmark and a stress test, as it heavily taxes the CPU, memory controller, and RAM by calculating pi out to 32 million digits. As a single-threaded program it is heavily dependent on CPU clock speed, which is why the G.Skill team focused on low RAM timings as well as pushing the CPU clocks as high as possible in order to grab the world record.
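Super Pi's source isn't public, but the shape of the workload can be sketched: a single-threaded, arbitrary-precision pi calculation whose runtime depends almost entirely on CPU clocks and memory speed. Below is a minimal Python sketch using Machin's formula; the function name and digit count are our own, and Super Pi itself uses far faster FFT-based arithmetic, so treat this as illustrative only.

```python
# Toy single-threaded pi benchmark in the spirit of Super Pi (illustrative only).
import time

def pi_digits(n: int) -> str:
    """Compute pi to n digits with Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239), using scaled integers."""
    scale = 10 ** (n + 10)  # 10 guard digits absorb rounding error

    def arctan_inv(x: int) -> int:
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
        total = term = scale // x
        x2 = x * x
        k = 3
        while term:
            term //= x2
            total += -term // k if (k // 2) % 2 else term // k
            k += 2
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    digits = str(pi // 10 ** 10)  # drop the guard digits
    return digits[0] + "." + digits[1:n]

start = time.perf_counter()
result = pi_digits(10_000)
elapsed = time.perf_counter() - start
print(f"first digits: {result[:10]}  ({elapsed:.2f} s for 10,000 digits)")
```

The whole run is one long chain of big-integer divisions on a single thread, which is why a score like this rewards raw clock speed and memory latency rather than core count.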
Subject: General Tech | May 12, 2011 - 03:32 PM | Tim Verry
Tagged: Windows 7, SSD test, performance testing, PCMark 7, Futuremark, benchmark
When it comes to hardware testing, PCMark is a widely known and trusted benchmarking suite. Developer Futuremark has now released PCMark 7 for Windows 7, alongside PCMark05 for XP and PCMark Vantage for Vista users.
Designed to test a wide range of hardware from low cost notebooks to high performance gaming systems, PCMark utilizes numerous subsystem tests to provide a final composite score for the computer which can then be accurately compared to other users’ scores.
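Futuremark has not published its exact scoring formula, but composite benchmark scores like this are commonly built as a weighted geometric mean of subsystem results, which rewards balanced systems over lopsided ones. A hypothetical sketch (the subtest names and weights below are made up):

```python
# Hypothetical composite scoring via a weighted geometric mean --
# a common way suites fold subsystem results into one comparable number.
import math

def composite_score(subscores: dict, weights: dict) -> float:
    """Weighted geometric mean of named subsystem scores."""
    total_w = sum(weights.values())
    log_sum = sum(weights[name] * math.log(score)
                  for name, score in subscores.items())
    return math.exp(log_sum / total_w)

scores = {"cpu": 5200.0, "storage": 3100.0, "graphics": 4400.0}
weights = {"cpu": 0.4, "storage": 0.3, "graphics": 0.3}
print(round(composite_score(scores, weights)))
```

Because the geometric mean sits below the arithmetic mean whenever subscores diverge, one weak subsystem (say, a slow hard drive) drags the overall number down more than a simple average would.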
In the same manner as its predecessor PCMark Vantage, PCMark 7 uses traces of actual Windows programs to score a system based on real usage scenarios. For example, the suite's storage tests have been designed to let both home and business users compare benchmark scores across systems and upgrades (of the same system). Whether using a solid state drive or a mechanical hard drive, PCMark 7 replays recordings of actions in well known Windows applications, including "Microsoft Word, Windows Live Photo Gallery, Windows Live Movie Maker, Windows Media Center, Windows Media Player, Internet Explorer, Windows Defender (Security Essentials) and even World of Warcraft", to replicate how someone would use the computer in a real world situation. The reasoning behind program traces rather than purely synthetic tests is real world relevance: especially when comparing scores between a base and an upgraded system, synthetic benchmarks can show the potential performance increase, while program traces more closely reflect the performance increase a user will actually see.
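PCMark's trace format is proprietary, but the trace-replay idea itself is simple: record the I/O operations a real application issued, then replay them against the drive under test and time the run. A toy sketch, assuming a made-up `(operation, offset, size)` trace format of our own invention:

```python
# Toy trace-based storage benchmark (hypothetical trace format; PCMark's
# actual traces are proprietary recordings of real application I/O).
import os
import tempfile
import time

# A "trace": the sequence of reads/writes an application issued,
# captured as (operation, offset, size) tuples during real usage.
trace = [
    ("write", 0, 4096),
    ("write", 65536, 16384),
    ("read", 0, 4096),
    ("read", 65536, 16384),
]

def replay(path: str, ops) -> float:
    """Replay a recorded I/O trace against a file; return elapsed seconds."""
    payload = b"x" * max(size for _, _, size in ops)
    start = time.perf_counter()
    with open(path, "r+b") as f:
        for op, offset, size in ops:
            f.seek(offset)
            if op == "write":
                f.write(payload[:size])
                f.flush()
            else:
                f.read(size)
    return time.perf_counter() - start

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.truncate(1 << 20)  # 1 MiB scratch file to replay against
elapsed = replay(tmp.name, trace)
print(f"replayed {len(trace)} ops in {elapsed * 1000:.2f} ms")
os.unlink(tmp.name)
```

Because the replay preserves the original access pattern (small scattered reads and writes rather than one long sequential stream), it stresses a drive much more like a real application does than a synthetic throughput test would.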
With three versions of the suite, there is one to fit various needs and budgets: the free Basic Edition, the Advanced Edition ($39.95), and the Professional Edition ($995.00), each offering increased control over the testing process. All can be purchased or downloaded from PCMark.com.
"A benchmark is a highly complex and sophisticated piece of software, yet PCMark 7 is easy to use and requires no specialist knowledge or set up," said Jani Joki, Director of PC Products and Services at Futuremark. "Better yet, PCMark 7 Basic Edition is available as a free download so all PC users can benefit from this industrial strength PC test."
Will you be using PCMark 7 in your next benchmarking run?
Subject: General Tech, Graphics Cards, Systems, Storage | May 4, 2011 - 06:43 PM | Jeremy Hellstrom
Tagged: ssd, everest, benchmarking, benchmark, aida64, aida
BUDAPEST, Hungary - May 04, 2011 - FinalWire Ltd. today announced the immediate availability of AIDA64 Extreme Edition 1.70 software, a streamlined diagnostic and benchmarking tool for home users; and the immediate availability of AIDA64 Business Edition 1.70 software, an essential network management solution for small and medium scale enterprises.
The new AIDA64 release further strengthens its solid-state drive health and temperature monitoring capabilities, and implements support for the latest graphics processors from both AMD and nVIDIA.
New features & improvements
- LGA1155 B3 stepping motherboards support
- Preliminary support for AMD “Bulldozer” and “Llano” processors
- Intel 320, Intel 510, OCZ Vertex 3, Samsung PM810 SSD support
- GPU details for AMD Radeon HD 6770M, Radeon HD 6790
- GPU details for nVIDIA GeForce GT 520, GT 520M, GT 550M, GT 555M, GTX 550 Ti, GTX 590
Pricing and Availability
AIDA64 Extreme Edition and AIDA64 Business Edition are available now at www.aida64.com/online-store. Additional information on product features, system requirements, and language versions is available at www.aida64.com/products. Join our Discussion Forum at forums.aida64.com.