Report: Leaked AMD Ryzen 7 1700X Benchmarks Show Strong Performance

Subject: Processors | February 21, 2017 - 03:54 PM |
Tagged: ryzen, rumor, report, R7, processor, leak, IPC, cpu, Cinebench, benchmark, amd, 1700X

VideoCardz.com, continuing their CPU coverage of the upcoming Ryzen launch, has posted images from XFASTEST depicting the R7 1700X processor and some very promising benchmark screenshots.

AMD-Ryzen-7-1700X.jpg

(Ryzen 7 1700X on the right) Image credit XFASTEST via VideoCardz

The Ryzen 7 1700X is reportedly an 8-core/16-thread processor with a base clock of 3.40 GHz. While overall performance in the leaked benchmarks looks very impressive, it is the single-threaded score from the pictured Cinebench R15 run that really makes this CPU look like serious IPC competition for Intel.

AMD-Ryzen-7-1700X-Cinebench.jpg

Image credit XFASTEST via VideoCardz

An overall score of 1537 is outstanding, placing the CPU nearly even with the i7-6900K's 1547, based on results from AnandTech:

AnandTech_Benchmarks.png

Image credit AnandTech

And the single-threaded score of 154 for the reported Ryzen 7 1700X places it just above the i7-6900K's 153. (It is worth noting that Cinebench R15 shows a clock speed of 3.40 GHz for this CPU, which is its base clock, while CPU-Z displays 3.50 GHz - likely a boost clock, which can reportedly surpass 3.80 GHz on this CPU.)
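To put those leaked numbers in perspective, a quick back-of-the-envelope calculation (scores taken straight from the figures above; the percentages are just illustrative) shows how razor-thin the margins are:

```python
# Leaked Cinebench R15 scores reported above (Ryzen 7 1700X vs. i7-6900K).
ryzen_single, intel_single = 154, 153    # single-threaded scores
ryzen_multi, intel_multi = 1537, 1547    # multi-threaded (overall) scores

single_lead = (ryzen_single / intel_single - 1) * 100
multi_gap = (1 - ryzen_multi / intel_multi) * 100

print(f"Single-thread lead over i7-6900K: {single_lead:+.1f}%")  # about +0.7%
print(f"Multi-thread deficit vs i7-6900K: {multi_gap:.1f}%")     # about 0.6%
```

In other words, if the leak is genuine, the 1700X trades blows with a $1000-class Intel part to within a fraction of a percent in this test.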

Other results from the reported leak include a 3DMark Fire Strike physics score of 17,916, with the Ryzen 7 1700X clocked at ~3.90 GHz:

AMD-Ryzen-7-1700X-Fire-Strike-Physics.png

Image credit XFASTEST via VideoCardz

We will know soon enough where this and other Ryzen processors stand relative to Intel's current offerings, and whether Intel will respond to the (rumored) price/performance double whammy of Ryzen. The i7-6900K retails for $1099 and currently sells for $1049 on Newegg.com, so the rumored pricing (taken from Wccftech), if correct, gives AMD a big win here. Competition is very, very good!

wccftech_chart.PNG

Chart credit Wccftech.com

Source: VideoCardz

Report: AMD Ryzen Performance in Ashes of the Singularity Benchmark

Subject: Processors | February 4, 2017 - 01:22 AM |
Tagged: titan x, ryzen, report, processor, nvidia, leak, cpu, benchmark, ashes of the singularity, amd

AMD's upcoming 8-core Ryzen CPU has appeared online in an apparent leak showing performance from an Ashes of the Singularity benchmark run. The results, available here on imgur and reported by TechPowerUp (among others), show a run featuring the unreleased CPU paired with an NVIDIA Titan X graphics card.

Ryzen_Ashes_Screenshot.jpg

It is interesting to consider that this rather unusual system configuration was also used by AMD during their New Horizon fan event in December, when an NVIDIA Titan X and an 8-core Ryzen processor powered the 4K Battlefield 1 demos pitted against an Intel Core i7-6900K/Titan X combo.

It is also interesting to note that the processor listed in the screenshot above is (apparently) not an engineering sample, as TechPowerUp points out in their post:

"Unlike some previous benchmark leaks of Ryzen processors, which carried the prefix ES (Engineering Sample), this one carried the ZD Prefix, and the last characters on its string name are the most interesting to us: F4 stands for the silicon revision, while the 40_36 stands for the processor's Turbo and stock speeds respectively (4.0 GHz and 3.6 GHz)."

March is fast approaching, and we won't have to wait long to see just how powerful this new processor will be for 4K gaming (and other, less important stuff). For now, I want to find results from an AotS benchmark with a Titan X and i7-6900K to see how these numbers compare!

Source: TechPowerUp

Sapphire AMD Radeon RX 460 Nitro 4GB Benchmarked at HEXUS

Subject: Graphics Cards | August 8, 2016 - 05:08 PM |
Tagged: amd, radeon, RX460, rx 460, graphics, gpu, gaming, benchmark, 1080p, 1920x1080, gtx 950, gtx 750 ti

HEXUS has posted their review of Sapphire's AMD Radeon RX 460 Nitro 4GB graphics card, pitting it against the NVIDIA GTX 950 and GTX 750 Ti in a 1920x1080 benchmarking battle.

rx460nitro.jpg

Image credit: HEXUS

"Unlike the two previous AMD GPUs released under the Polaris branding recently, RX 460 is very much a mainstream part that's aimed at buyers who are taking their first real steps into PC gaming. RX 460 uses a distinct, smaller die and is to be priced from £99. As usual, let's fire up the comparison specification table and dissect the latest offering from AMD."

rx460.PNG

Image credit: HEXUS

The results might surprise you, and vary somewhat based on the game selected. Check out the source link for the full review over at HEXUS.

Source: HEXUS

Basemark Announces VRScore Virtual Reality Benchmark

Subject: General Tech | March 15, 2016 - 09:32 PM |
Tagged: VRScore, VR, virtual reality, gdc 2016, GDC, crytek, CRYENGINE, benchmark, Basemark

Basemark has announced VRScore, a new benchmarking tool for VR produced in partnership with Crytek. The benchmark uses Crytek’s CRYENGINE along with the Basemark framework, and can be run with or without a head-mounted display (HMD).

VRScore Screen 04.png

"With VRScore, consumers and companies are able to reliably test their PC for VR readiness with various head mounted displays (HMDs). Unlike existing tools developed by hardware vendors themselves, VRScore has been developed independently to be an essential source of unbiased information for anyone interested in VR."

An independent solution is certainly welcome as we enter what promises to be the year of VR, and Basemark is well known for providing objective benchmark results with applications such as Basemark X and OS II, cross-platform benchmarks for mobile devices. The VRScore benchmark supports the Oculus Rift, HTC Vive, and Razer's OSVR headsets, and the corporate versions include VRTrek, a left/right eye latency measurement device.

Here’s the list of features from Basemark:

  • Supports HTC Vive, Oculus Rift and OSVR
  • Uses CRYENGINE
  • Supports both DirectX 12 and DirectX 11 
  • Features Codename: Sky Harbor, an original IP game scene by Crytek
  • Includes tests for interactive VR (VR game), non-interactive VR (360 VR video) and VR spatial audio (360 sound) 
  • Can be used with or without an HMD
  • Power Board, an integrated online service, gives personalized PC upgrading advice and features performance ranking lists for HMDs, CPUs and GPUs
  • Corporate versions include VRTrek, a patent pending latency testing device with dual phototransistors for application to photon latency, display persistence, left and right eye latency, dropped frames and duplicated frames testing

VRScore-Trek.png

VRScore Trek eye latency measurement device, included with corporate version

VRScore is currently available only to corporate customers via the company’s early access program and Benchmark Development Program. The consumer versions (free and paid) will be released in June.

Source: Basemark

The Fable of the uncontroversial benchmark

Subject: Graphics Cards | September 24, 2015 - 06:53 PM |
Tagged: radeon, nvidia, lionhead, geforce, fable legends, fable, dx12, benchmark, amd

By now you should have memorized Ryan's review of Fable's DirectX 12 performance on a variety of cards and hopefully tried out our new interactive IFU charts.  You can't always cover every card, as those who were brave enough to look at the CSV file Ryan provided might have come to realize.  That's why it is worth peeking at The Tech Report's review after reading through ours.  They have included an MSI R9 285 and XFX R9 390 as well as an MSI GTX 970, which may be cards you are interested in seeing.  They also spend some time looking at CPU scaling and the effect that has on AMD and NVIDIA's performance.  Check it out here.

mountains-shot.jpg

"Fable Legends is one of the first games to make use of DirectX 12, and it produces some truly sumptuous visuals. Here's a look at how Legends performs on the latest graphics cards."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Author:
Manufacturer: Lionhead Studios

Benchmark Overview

When approached a couple of weeks ago by Microsoft with the opportunity to take an early look at an upcoming performance benchmark built on a DX12 game due for release later this year, I of course was excited. Our adventure into the world of DirectX 12 performance evaluation started with the 3DMark API Overhead Feature Test back in March and was followed by the release of the Ashes of the Singularity performance test in mid-August. Both of these tests pinpointed one particular aspect of the DX12 API - the ability to improve CPU throughput and efficiency with higher draw call counts, enabling higher frame rates on existing GPUs.

ScreenShot00004.jpg

This game and benchmark are beautiful...

Today we dive into the world of Fable Legends, an upcoming free-to-play title based on the world of Albion. The game will be released on the Xbox One and for Windows 10 PCs, and it will require the use of DX12. Though scheduled for release in Q4 of this year, Microsoft and Lionhead Studios allowed us early access to a specific performance test built on the UE4 engine and the world of Fable Legends. UPDATE: It turns out that the game will have a fall-back DX11 mode that will be enabled if the game detects a GPU incapable of running DX12.

This benchmark focuses more on the GPU side of DirectX 12 - on improved rendering techniques and visual quality rather than on the CPU scaling aspects that made Ashes of the Singularity stand out from other graphics tests we have utilized. Fable Legends is more representative of what we expect to see with the release of AAA games using DX12. Let's dive into the test and our results!

Continue reading our look at the new Fable Legends DX12 Performance Test!!

Author:
Manufacturer: Stardock

Benchmark Overview

I knew that the move to DirectX 12 was going to be a big shift for the industry. Since the introduction of the AMD Mantle API alongside the Hawaii GPU architecture, we have been inundated with game developers and hardware vendors talking about the potential benefits of lower-level APIs, which give game developers and game engines more direct access to GPU hardware and more flexible CPU threading. The results, we were told, would mean that your current hardware could take you further, and that future games and applications could fundamentally change how they are built to tremendously enhance gaming experiences.

I knew that reader interest in DX12 was outstripping my expectations when I did a live blog of the official DX12 unveil by Microsoft at GDC. In a format that consisted simply of my text commentary and photos of the slides being shown (no video at all), we had more than 25,000 live readers who stayed engaged the whole time. Comments and questions flew into the event – more than my staff and I could possibly handle in real time. It turned out that gamers were indeed very much interested in what DirectX 12 might offer them with the release of Windows 10.

game3.jpg

Today we are taking a look at the first real world gaming benchmark that utilized DX12. Back in March I was able to do some early testing with an API-specific test that evaluates the overhead implications of DX12, DX11 and even AMD Mantle from Futuremark and 3DMark. This first look at DX12 was interesting and painted an amazing picture about the potential benefits of the new API from Microsoft, but it wasn’t built on a real game engine. In our Ashes of the Singularity benchmark testing today, we finally get an early look at what a real implementation of DX12 looks like.

And as you might expect, not only are the results interesting, but there is a significant amount of controversy about what those results actually tell us. AMD has one story, NVIDIA another, and Stardock and the Nitrous engine developers yet another. It's all incredibly intriguing.

Continue reading our analysis of the Ashes of the Singularity DX12 benchmark!!

Author:
Manufacturer: Futuremark

The Ice Storm Test

Love it or hate it, 3DMark has a unique place in the world of PC gaming and enthusiasts.  Since 3DMark99 was released...in 1998...targeting DirectX 6, Futuremark has been developing benchmarks on a regular basis, in step with major API and hardware changes.  The most recent release, 3DMark11, has been out since late 2010 and has been a regular part of our many graphics card reviews on PC Perspective.

Today Futuremark is not only releasing a new version of the benchmark but is also taking a fundamentally different approach to performance testing and platforms.  The new 3DMark, called simply "3DMark", will target not only high-end gaming PCs but also integrated graphics platforms and even tablets and smartphones. 

We interviewed the President of Futuremark, Oliver Baltuch, over the weekend and asked about this new direction for 3DMark, how mobile devices will affect benchmarks going forward, the new results patterns, stuttering and more.  Check out the video below!

Video Loading...

Make no bones about it, this is a synthetic benchmark, and if you have had issues with that in the past because it is not a "real world" gaming test, you will continue to have those complaints.  Personally, I find the information that 3DMark provides very informative, though it definitely shouldn't be depended on as the ONLY graphics performance metric. 

Continue reading our story on the new 3DMark benchmark and the first performance results!!

Author:
Subject: Editorial, Mobile
Manufacturer: Qualcomm

Meet Vellamo

With Google reporting Android device activations upward of 550,000 a day, the rapid growth and ubiquity of the platform cannot be denied. As the platform has grown, we here at PC Perspective have constantly kept an eye out for ways to assess and compare the performance of different devices running the same mobile operating system. In the past we have done performance testing with applications such as Quadrant and Linpack, and GPU testing with NenaMark and Qualcomm's NeoCore product.

Today we are taking a look at a new mobile benchmark from Qualcomm named Vellamo. Qualcomm saw the need for a vendor-agnostic browser benchmark on Android, and Vellamo is the result. A video introduction from Qualcomm's Director of Product Management, Sy Choudhury, is below.


With the default configuration, Vellamo performs a battery of 14 tests. These tests are categorized into Rendering, Javascript, User Experience, Networking, and Advanced.

For more on this benchmark and our results from 10 different Android-powered devices, keep reading!

What's the big deal with BAPCo? Why Benchmarking Matters

Subject: Editorial, General Tech | June 21, 2011 - 06:36 PM |
Tagged: VIA, sysmark, nvidia, Intel, benchmark, bapco, amd

It seems that all the tech community is talking about today is BAPCo and its benchmarking suite called Sysmark.  A new version, 2012, was released just recently and yesterday we found out that AMD, NVIDIA and VIA have all dropped their support of the "Business Applications Performance Corporation".  Obviously those companies have a beef with the benchmark as it is, yet somehow one company stands behind the test: Intel.

Everyone you know of is posting about it.  My twitter feed "asplode" with comments like this:

AMD quits BAPCo, says SYSmark is nutso. Nvidia and VIA, they say, also.

AMD: Voting For Openness: In order to get a better understanding of AMD's press release earlier concerning BAPCO...

Ooh, BapCo drama.

Why Legit Reviews won't use the latest BAPCo benchmark:

Even PC Perspective posted on this drama yesterday afternoon saying: "The disputes centered mostly over the release of SYSmark 2012. For years various members have been complaining about various aspects of the product which they allege Intel strikes down and ignores while designing each version. One major complaint is the lack of reporting on the computer’s GPU performance which is quickly becoming beyond relevant to an actual system’s overall performance. With NVIDIA, AMD, and VIA gone from the consortium, Intel is pretty much left alone in the company: now officially."

aaaaa.png

Obviously while cutting the grass this morning this is the topic swirling through my head; so thanks for that everyone.  My question is this: does it really matter and how is this any different than it has been for YEARS?  The cynical side of me says that AMD, NVIDIA and VIA all dropped out because each company's particular products aren't stacking up as well as Intel's when it comes to the total resulting score.  Intel makes the world's fastest CPUs, I don't think anyone with a brain will dispute that, and as such on benchmarks that test the CPU, they are going to have the edge.  

We recently reviewed the AMD Llano-based Sabine platform and in CPU-centric tests like SiSoft Sandra, TrueCrypt and 7zip the AMD APU is noticeably slower.  But AMD isn't sending out press releases and posting blogs about how these benchmarks don't show the true performance of a system as the end user will see.  And Intel isn't pondering why we used games like Far Cry 2 and Just Cause 2 to show the AMD APU dominating there. Why?  Because these tests are part of a suite of benchmarks we use to show the overall performance of a system.  They are tools which competent reviewers wield in order to explain to readers why certain hardware acts in a certain way in certain circumstances.  

Continue reading for more on this topic...

Source: PCPer