Subject: General Tech, Graphics Cards | February 25, 2013 - 01:32 PM | Jeremy Hellstrom
Tagged: jon peddie, graphics, market share
If last week's report from Jon Peddie Research on sales of all add-in and integrated graphics had you worried, this week's news is not going to boost your confidence. This week's report focuses solely on add-in boards, and the drop is dramatic: Q4 2012 sales plummeted just short of 20% compared to Q3 2012. Looking at the entire year, sales dropped 10% overall as AMD's APUs, like Intel's integrated graphics, make serious inroads into the mobile market, with many notebooks being sold without a discrete GPU. The losses are coming from the mainstream segment; enthusiast-level GPUs actually saw a slight increase in sales, but that small volume is utterly drowned out by the mainstream market. You can check out the full press release here.
"JPR found that AIB shipments during Q4 2012 behaved according to past years with regard to seasonality, but the drop was considerably more dramatic. AIB shipments decreased 17.3% from the last quarter (the 10 year average is just -0.68%). On a year-to-year comparison, shipments were down 10%."
Here is some more Tech News from around the web:
- 3DMark Review @ OCC
- Trendnet N300 Easy-N-Range Extender @ Rbmods
- NETGEAR ProSafe GS110T Gigabit SmartSwitch @ Benchmark Reviews
- Quantum computer one step closer after ‘true’ quantum calculation @ The Register
- Microsoft brings Azure back online @ The Register
- Understanding Camera Optics & Smartphone Camera Trends, A Presentation by Brian Klug @ AnandTech
- MWC Sunday roundup: HP Slate, Ascend P2 and Firefox phones @ The Inquirer
- AMD releases Firepro R5000 with remote display technology @ The Inquirer
- The TR Podcast 129: PlayStation 4, Titan, and more
In case you missed it...
In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review, we included an update on our Frame Rating graphics performance metric that describes the testing method in more detail and shows results for the first time. Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without getting lost in the TITAN shuffle. If you already read that page of our TITAN review, nothing new is included below.
I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!
Why are you not testing CrossFire??
If you haven't been following our sequence of stories investigating a completely new testing methodology we are calling "Frame Rating", then you are really missing out. (Part 1 is here, part 2 is here.) The basic premise of Frame Rating is that the performance metrics the industry gathers using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.
Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system. With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.
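As a rough illustration of what those measurements yield, here is a minimal sketch, with hypothetical function names rather than our actual tooling, showing how an average frame rate and a simple stutter indicator can both be derived from a list of per-frame display times:

```python
# Hypothetical sketch: the metrics Frame Rating aims to report can all be
# derived from a list of per-frame display times (in milliseconds).
# Names here are illustrative, not PC Perspective's actual software.

def frame_metrics(frame_times_ms):
    """Summarize a run from individual frame times."""
    n = len(frame_times_ms)
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = n / total_s
    # A simple stutter indicator: the 99th-percentile frame time.
    ranked = sorted(frame_times_ms)
    p99 = ranked[min(n - 1, int(n * 0.99))]
    return {"avg_fps": avg_fps, "p99_ms": p99}

# A steady 60 FPS run vs. one with a single 50 ms hitch: both average
# close to 60 FPS, but the percentile metric exposes the hitch.
smooth = frame_metrics([16.7] * 60)
hitchy = frame_metrics([16.7] * 59 + [50.0])
```

The point of the sketch is that two runs can report nearly identical average frame rates while one of them contains a hitch the average completely hides, which is exactly the kind of detail FRAPS-style averaging glosses over.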
We aren't ready to show our full sets of results yet (soon!), but the problem is that AMD's CrossFire technology shows severe performance degradation under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS. As such, I decided it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above) - it would be a waste of time for the reader, and people who skip straight to the performance graphs wouldn't know our theory on why the results displayed were invalid.
Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all. They are simply presenting data that they believe to be true based on the tools at their disposal. More data is always better.
Here are those results and our discussion. I decided to use the most popular game out today, Battlefield 3 - and please keep in mind this is NOT the worst-case scenario for AMD CrossFire in any way. I tested the Radeon HD 7970 GHz Edition in single and CrossFire configurations as well as the GeForce GTX 680 in single and SLI configurations. To gather results I used two processes:
- Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
- Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times in post-processing.
Here is an example of what the overlay looks like in Battlefield 3.
Frame Rating capture on GeForce GTX 680s in SLI - Click to Enlarge
The column on the left is actually an overlay applied to each and every frame of the game early in the rendering process. A solid color is added at the PRESENT call (more details to come later) for each individual frame. As you know, when you are playing a game, multiple frames can make it onto any single 60 Hz cycle of your monitor, and because of that you get a succession of colors down the left-hand side.
By measuring the pixel height of those colored columns, and knowing the order in which they should appear beforehand, we can gather the same data that FRAPS does but our results are seen AFTER any driver optimizations and DX changes the game might make.
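To make that band measurement concrete, here is a hypothetical sketch of the idea; the function names, colors, and constants are illustrative assumptions, not our actual analysis software:

```python
# Illustrative sketch of the band-measurement idea (not PC Perspective's
# actual analysis software). Each captured 60 Hz frame carries a left-edge
# column where every rendered game frame contributes a solid color band;
# a band's pixel height tells us what fraction of the ~16.67 ms scan-out
# that rendered frame actually spent on screen.

REFRESH_MS = 1000.0 / 60.0  # one 60 Hz scan-out interval

def measure_bands(column):
    """Collapse a top-to-bottom list of per-scanline colors into
    [color, height_in_pixels] runs."""
    bands = []
    for color in column:
        if bands and bands[-1][0] == color:
            bands[-1][1] += 1
        else:
            bands.append([color, 1])
    return bands

def band_times_ms(column):
    """On-screen time each rendered frame received within this scan-out."""
    total = len(column)
    return [(color, height / total * REFRESH_MS)
            for color, height in measure_bands(column)]

# A 1600-line capture with three rendered frames, the middle one a
# 4-scanline "runt" that FRAPS would still count as a full frame:
column = ["red"] * 900 + ["teal"] * 4 + ["lime"] * 696
times = band_times_ms(column)
```

Under these assumptions the 4-pixel runt works out to well under a tenth of a millisecond of actual screen time, which is why counting it as a full frame, as FRAPS does, inflates the reported frame rate.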
Frame Rating capture on Radeon HD 7970 CrossFire - Click to Enlarge
Here you see a very similar screenshot running on CrossFire. Notice the thin silver band between the maroon and purple? That is a complete frame according to FRAPS and most reviews. Not to us - we think that rendered frame is almost useless.
In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but on frame smoothness and the efficiency of those frame rates. I showed off some of the new hardware we are using for this process and detailed how direct capture of graphics card output allows us to find interesting frame and animation anomalies using some Photoshop still frames.
Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.
This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.
Obviously these are just a couple of instances of what a stutter looks like, and oftentimes there are less apparent in-game stutters that are even harder to see in video playback. Not to worry - this capture method is capable of catching those issues too, and we plan on diving into that "micro" level shortly.
We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic. You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.
Let me know below what you thought of this video and any questions that you might have.
Subject: Shows and Expos | January 9, 2013 - 09:11 PM | Josh Walrath
Tagged: Vapor X, sapphire, PCs, graphics, APU, amd
Sapphire was a quick trip with a few interesting things to show off. At the moment we are in a quiet period with AMD and NVIDIA graphics releases. While AMD has released a few of their mobile based 8000 series parts, we are still not expecting a major desktop refresh anytime soon. This is somewhat bittersweet for the graphics partners. On one hand they have more time to differentiate their products and create more value for their consumers. On the other hand there is no major push with new technology that will help the bottom line.
The company is not only involved with graphics but also has a long history of producing motherboards. They offer products for both AMD and Intel, but their primary focus is the APU market. Sapphire's FM2 lineup is well fleshed out, with A85X, A75, and A55 products. Sapphire finds it slightly easier to compete in the AMD market than to go up against the big players in the larger and potentially more lucrative Intel market.
The area where they are hoping to see the most growth is the micro PC market. These are very small "desktop" style products based on mobile parts - robust little units that ship without an OS or room for an optical drive. Being such a strong AMD partner, Sapphire is primarily focusing on APUs in this market as well.
The Edge VS8 is the top product for Sapphire in this market. It is based on a quad-core mobile Trinity APU running at 1.6 GHz. The graphics portion is the 7600G, which looks to feature the full complement of GCN units, but obviously clocked down to save on power. The VS4 features Trinity as well, but with a dual-core processor running at 1.9 GHz.
The lower-end Edge HD series is a slightly older design, and the HD3 runs the last-generation Llano processor. They also offer an Intel-based HD4 that runs the Celeron 897 processor.
These PCs are shipped without operating systems and can also be bought in a barebones state. For example, the VS8 comes standard with 4GB of memory and a 320 GB spindle-based hard drive. By buying the barebones version, a user can stack in as much memory as possible as well as add an SSD for that much more performance.
Sapphire continues to offer their entire line of AMD-based graphics cards and are really pushing their Vapor-X technology, which leads us to our next product: Sapphire will start introducing their own CPU cooling designs using Vapor-X technology. Vapor chamber cooling will be coming to the CPU market very soon, and at competitive prices.
PC Perspective's CES 2013 coverage is sponsored by AMD.
Follow all of our coverage of the show at http://pcper.com/ces!
A change is coming in 2013
If the new year brings us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs. A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system. The lone result was given as a time, in seconds, which was then converted to an average frame rate since the total number of recorded frames was known.
More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) performed in games.
And even though the idea of frame times has been around just as long, not many people were interested in that level of detail until this past year. A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and can range from 5ms to 50ms depending on performance. For reference, 120 FPS equates to an average of 8.3ms, 60 FPS to 16.6ms, and 30 FPS to 33.3ms. But rather than averaging those out over each second of time, what if you looked at each frame individually?
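Those equivalences are just the reciprocal relationship between frame rate and frame time, spelled out here as a quick sketch:

```python
# Frame time in milliseconds is simply 1000 divided by the frame rate,
# and the conversion works the same way in both directions.

def fps_to_ms(fps):
    return 1000.0 / fps

def ms_to_fps(ms):
    return 1000.0 / ms

print(round(fps_to_ms(120), 1))  # 8.3
print(round(fps_to_ms(60), 1))   # 16.7
print(round(fps_to_ms(30), 1))   # 33.3
```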
Scott over at Tech Report started doing that this past year and found some interesting results. I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting.
Through emails and tweets, many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc. I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results. I am still not ready to share the bulk of our information, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.
At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz. Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes. This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.
Using that recorded footage - sometimes reaching 400 MB/s of sustained writes at high resolutions - we can then analyze the frames one by one, with the help of some additional software. There are a lot of details I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames during recording and analysis, but trust me when I say we have been spending a lot of time on this.
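For a sense of why the write rates get that high, here is a back-of-the-envelope estimate; the pixel formats are assumptions on my part, since the capture pipeline's actual format isn't spelled out above:

```python
# Back-of-the-envelope check on the required write throughput for
# 2560x1600 @ 60 Hz capture. Both pixel formats below are assumptions,
# not a statement of what the actual capture pipeline uses.

width, height, refresh = 2560, 1600, 60
pixels_per_sec = width * height * refresh

# Uncompressed 24-bit RGB (3 bytes per pixel):
rgb_mb_s = pixels_per_sec * 3 / 1e6
# 16-bit-per-pixel YUV 4:2:2, a common capture-card format:
yuv_mb_s = pixels_per_sec * 2 / 1e6

print(round(rgb_mb_s))  # 737
print(round(yuv_mb_s))  # 492
```

Either way the stream lands in the hundreds of megabytes per second, which makes the 400 MB/s sustained-write figure entirely plausible for a chroma-subsampled or lightly compressed recording.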
Subject: Chipsets | November 26, 2012 - 01:06 PM | Jeremy Hellstrom
Tagged: jon peddie, Q3 2012, graphics, market share
Jon Peddie Research has released its findings for the graphics market in Q3 2012, with bad news for the market overall, though not so bad for NVIDIA. The downward trend in PC sales has affected the graphics market as a whole, with the number of units sold dropping 5.2% from this time last year; NVIDIA alone saw a rise in units sold. AMD saw a 10.7% drop in the number of units shipped, specifically a 30% drop from last quarter in desktop APUs and just under 5% in mobile processors. Intel's overall sales dropped 8%, with both segments falling roughly equally, but NVIDIA's strictly discrete GPU business saw a 28.3% gain in the desktop segment and 12% in notebooks compared to last quarter.
Worth noting is what JPR includes in this research beyond what we used to think of as the graphics market: any x86-based processor with a GPU is counted, from tablets to desktops, as are IGPs and discrete cards. ARM-based devices, cell phones, and all server chips are excluded.
"The news was terrific for Nvidia and disappointing for the other major players. From Q2 to Q3 Intel slipped in both desktop (7%) and notebook (8.6%). AMD dropped (2%) in the desktop, and (17%) in notebooks. Nvidia gained 28.3% in desktop from quarter to quarter and jumped almost 12% in the notebook segment.
This was not a very good quarter; shipments were down -1.45% on a Qtr-Qtr basis, and -10.8% on a Yr-Yr basis. We found that graphics shipments during Q3'12 slipped from last quarter -1.5% as compared to PCs which grew slightly by 0.9% overall (however more GPUs shipped than PCs due to double attach). GPUs are traditionally a leading indicator of the market, since a GPU goes into every system before it is shipped, and most of the PC vendors are guiding down for Q4."
Here is some more Tech News from around the web:
- O'Reilly Discounts Every eBook By 50% @ Slashdot
- SysAdmin Corner: Getting More From Windows @ Techgage
- Cyber Monday 2012 Tech Deals @ TechReviewSource
- What Linux Users Need To Know When Holiday Shopping For PC Hardware @ Phoronix
- Ninjalane Podcast - Borderlands 2 Tips Tricks and Chat
- The early days of PCs as seen through DEAD TREES @ The Register
- Dreamhack Winter 2012 @ Rbmods
Subject: Graphics Cards | June 8, 2012 - 01:23 PM | Tim Verry
Tagged: tahiti, graphics, gpu, computex, binning, amd, 7970 ghz edition
AMD is enjoying a string of successes with its 28nm 7000 series graphics cards. While it was dethroned by NVIDIA's GTX 680, the AMD Radeon HD 7970 is easier to get hold of. It certainly seems the company is having a much easier time manufacturing its GPUs than NVIDIA is with its Kepler cards. AMD has been cranking out HD 7970s for a few months now and has the binning process down such that a good number of chips have healthy headroom over the 7970's stock speeds.
And so enters Tahiti 2. Tahiti 2 represents GPU silicon that not only bins at HD 7970 speeds but can push the default clock speed higher while running at lower voltage. As a result, the GPUs stay within the same TDP as current 7970 cards but run faster.
But how much faster? Well, SemiAccurate is reporting that AMD is seeing as much as a 20% clock speed improvement over current Radeon HD 7970 graphics cards. This means that cards are able to run at clock speeds up to approximately 1075MHz – quite a bit above the current reference clock speed of 925MHz!
The AMD 7970 3GB card. Expect Tahiti 2 to look exactly the same but run at higher clock speeds.
They further report that, because the TDP has not changed, no cooler, PCB, or memory changes will be needed. This will make it that much easier for add-in board partners to get the updated reference-based cards out as quickly as possible and with minimal cost increases (we hope). You can likely count on board partners capitalizing on the 1,000MHz+ speeds by branding the new cards "GHz Edition," much as the Radeon 7770 has enjoyed.
With 7970 chips having overhead and binning higher than needed, an updated, lower-power refresh may also be in order for AMD's 7950 "Tahiti Pro" graphics cards. Heck, maybe they can refresh the entire lineup with better-binned silicon but keep the same clock speeds in order to reduce power consumption on all their cards.
Subject: Graphics Cards | February 6, 2012 - 06:23 PM | Tim Verry
Tagged: nvidia, kepler, graphics, gpu
Although there were quite a few rumors leading up to AMD's Radeon 7000 series launch, the Internet has been very quiet on the greener side of the graphics market. Finally, however, we have some rumors to share with you on the NVIDIA front. As always, take these numbers with more than your average grain of salt.
Specifically, EXP Review managed to uncover two charts that supposedly detail specifics about a range of GeForce 600 series Kepler cards from the number of stream processors to the release date. Needless to say, it's a lot of rumored information to take in all at once.
Anyway, without further ado, let's dive into the two leaked charts.
| Model | Code Name | Die Size | Core Clock (TBD) MHz | Shader Clock (TBD) GHz | Stream Processors | SM Count | ROPs | Memory Clock (effective) GDDR5 | Bus Width | Memory Bus Width |
|---|---|---|---|---|---|---|---|---|---|---|
From the chart above, we can see the entire lineup of Kepler cards, from the NVIDIA GTX 640 to the dual-GPU GTX 690. The die size of the higher-end GeForce cards is approximately 50% larger than that of the AMD Radeon HD 7970, but not much bigger than the GTX 580's. If only we knew the TDP of these cards! In the next chart, we see an alleged performance comparison versus the AMD competition.
| Model | Bus Interface | Frame Buffer | Transistors (Billion) | Price Point | Release Date | Performance Scale |
|---|---|---|---|---|---|---|
| GTX690 | PCI-E 3 x16 | 2x1.75 GB | 2x6.4 | $999 | Q3 2012 | |
| GTX680 | PCI-E 3 x16 | 2 GB | 6.4 | $649 | April 2012 | ~45% > HD7970 |
| GTX670 | PCI-E 3 x16 | 1.75 GB | 6.4 | $499 | April 2012 | ~20% > HD7970 |
| GTX660Ti | PCI-E 3 x16 | 1.5 GB | 6.4 | $399 | Q2/Q3 2012 | ~10% > HD7950 |
| GTX660 | PCI-E 3 x16 | 2 GB | 3.4 | $319 | April 2012 | ~GTX580 |
| GTX650Ti | PCI-E 3 x16 | 1.75 GB | 3.4 | $249 | Q2/Q3 2012 | ~GTX570 |
| GTX650 | PCI-E 3 x16 | 1.5 GB | 1.8 | $179 | May 2012 | ~GTX560 |
| GTX640 | PCI-E 3 x16 | 2 GB | 1.8 | $139 | May 2012 | ~GTX550Ti |
If these numbers hold true, NVIDIA will handily beat the current AMD offerings; however, I would wait for reviews to come out before making any purchasing decisions. One interesting aspect is the amount of GDDR5 memory. It seems that NVIDIA is sticking with 2GB frame buffers (or less) per GPU while AMD has really started upping the RAM. It will be interesting to see how this affects gaming in NVIDIA Surround and/or at high resolutions.
What do you guys think about these numbers, do you think Kepler will live up to the alleged performance scale figures?
Subject: Graphics Cards | August 2, 2011 - 10:46 AM | Tim Verry
Tagged: graphics, gpu, galaxy
Galaxy, a popular maker of NVIDIA graphics cards, recently announced that it is extending the warranty on its graphics card products to three years. "Galaxy has listened to the enthusiast market and we are glad to move from a 2 year warranty to a 3 year warranty by registration." The new extended warranty will apply to all graphics cards purchased after August 1st, 2011 that are then registered with Galaxy. Products will also bear the seal shown below to let customers know that the graphics card qualifies.
Seeing warranties being extended is always a good thing, especially in a world where the once popular lifetime warranty is rare. What do you think of the extended warranty? Will this be enough to push you towards a Galaxy branded card on your next purchase?
Subject: Motherboards | July 6, 2011 - 04:36 PM | Tim Verry
Tagged: PCI-E 3.0, msi, graphics
MSI recently unveiled a new motherboard supporting the PCI-Express 3.0 standard. The Intel LGA 1155 CPU socket and Z68 chipset are also features of the upcoming motherboard, dubbed the Z68A-GD80 (G3).
The new MSI board joins ASRock's announcement as one of the first PCI-Express 3.0 motherboards and is loaded with features. The Z68 chipset naturally supports Intel Sandy Bridge processors, and the board adds PCI-E 3.0, a UEFI BIOS, OC Genie II, and MSI's signature MIL-810STD military class components. The PCI-E 3.0 slots keep AMD CrossFireX and NVIDIA SLI multi-GPU solutions fed with plenty of bandwidth. Rear I/O includes a PS/2 port, USB 3.0, USB 2.0, HDMI, DVI, 7.1 audio, dual Gigabit Ethernet, eSATA, and FireWire. On the board itself you get three PCI-E 3.0 slots, two PCI slots, two PCI-E x1 slots, the LGA 1155 CPU socket, and four DDR3 DIMM slots.
What do you think of the new board; are you ready for PCI-E 3.0?