Subject: General Tech, Graphics Cards | October 27, 2014 - 04:50 PM | Ryan Shrout
Tagged: xbox one, sony, ps4, playstation 4, microsoft, amd
A couple of weeks back a developer on Ubisoft's Assassin's Creed Unity was quoted as saying that the team had decided to run both the Xbox One and the Playstation 4 variants of the game at 1600x900 resolution "to avoid all the debates and stuff." Of course, the Internet exploded in a collection of theories about why that would be the case: were they paid off by Microsoft?
For those of us who focus more on the world of PC gaming, however, the following week brought even more interesting material: an email to the Giantbomb.com weekly podcast from an anonymous (but seemingly reliable) developer on the Unity team. In this email, while also addressing the value of pixel count and the stunning visuals of the game, the developer asserted that we may have already peaked on the graphical compute capability of these two new gaming consoles. Here is a portion of the information:
The PS4 couldn’t do 1080p 30fps for our game, whatever people, or Sony and Microsoft say. ...With all the concessions from Microsoft, backing out of CPU reservations not once, but twice, you’re looking at about a 1-2 FPS difference between the two consoles.
What's hard is not getting the game to render but getting everything else in the game at the same level of performance we designed from the start for the graphics. By the amount of content and NPCs in the game, from someone who witnessed a lot of optimizations for games from Ubisoft in the past, this is crazily optimized for such a young generation of consoles. This is really about to define the next-generation unlike any other game beforehand.
We are bound from the CPU because of AI. Around 50% of the CPU is used for the pre-packaged rendering parts.
So, if we take this anonymous developer's information as true, and this whole story is based on that assumption, then we have learned some interesting things.
- The PS4, the more graphically powerful of the two very similarly designed consoles, was not able to maintain a 30 FPS target when rendering at 1920x1080 resolution with Assassin's Creed Unity.
- The Xbox One (after giving developers access to more compute cycles previously reserved to Kinect) is within a 1-2 FPS mark of the PS4.
- The Ubisoft team sees Unity as being "crazily optimized" for the architecture and consoles even as we just now approach the one-year anniversary of their release.
- Half of the CPU compute time is being used to help the rendering engine by unpacking pre-baked lighting models for the global illumination implementation, and thus the game's AI and other systems are limited to the remaining 50% of CPU performance.
It would appear that, just as many in the media declared when the specifications for the new consoles were announced, the hardware inside the Playstation 4 and Xbox One undershoots the needs of game developers to truly build "next-generation" games. If, as this developer states, we are less than a year into the life cycle of hardware that was planned for an 8-10 year window and we have already reached its performance limits, that's a bad sign for game developers who really want to create exciting gaming worlds. Keep in mind that this time around the hardware isn't custom built cores or a Cell architecture - we are talking about very basic x86 cores and traditional GPU hardware that ALL software developers are intimately familiar with. It does not surprise me one bit that we have seen more advanced development teams hit peak performance.
If the PS4, the slightly more powerful console of the pair, is unable to render reliably at 1080p with a 30 FPS target, then unless the Ubisoft team are completely off their rocker in terms of development capability, the advancement of gaming on consoles would appear to be somewhat limited. Remember the specifications for these two consoles:
| |PlayStation 4|Xbox One|
|---|---|---|
|Processor|8-core Jaguar APU|8-core Jaguar APU|
|Memory|8GB GDDR5|8GB DDR3|
|Graphics Card|1152 Stream Unit APU|768 Stream Unit APU|
|Peak Compute|1,840 GFLOPS|1,310 GFLOPS|
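Those peak compute figures follow directly from the shader counts and clock speeds. As a sketch, here is the standard arithmetic for a GCN-class GPU; the clock speeds below are the commonly reported figures for each console, not numbers stated in this article:

```python
def peak_gflops(stream_processors, clock_mhz):
    """Peak single-precision throughput: each stream processor can
    issue one fused multiply-add (2 FLOPs) per clock."""
    return stream_processors * 2 * clock_mhz / 1000.0

# Assumed clocks: ~800 MHz for the PS4 GPU, ~853 MHz for the Xbox One GPU.
ps4 = peak_gflops(1152, 800)  # ~1,843 GFLOPS, quoted as 1,840
xb1 = peak_gflops(768, 853)   # ~1,310 GFLOPS
```

The ~40% gap in peak compute is why the reported 1-2 FPS real-world difference between the consoles is so striking.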
The custom built parts from AMD both feature an 8-core Jaguar x86 architecture and either 768 or 1152 stream processors. The Jaguar CPU cores aren't high performance parts: single-threaded performance of Jaguar is less than the Intel Silvermont/Bay Trail designs by as much as 25%. Bay Trail is powering lots of super low cost tablets today and even the $179 ECS LIVA palm-sized mini-PC we reviewed this week. And the 1152/768 stream processors in the GPU portion of the AMD APU provide some punch, but a Radeon HD 7790 (now called the R7 260X), released in March of 2013, provides more performance than the PS4 and the Radeon R7 250X is faster than what resides in the Xbox One.
If you were to ask me today what kind of performance would be required from AMD's current GPU lineup for a steady 1080p gaming experience on the PC, I would probably tell you the R9 280, a card you can buy today for around $180. From NVIDIA, I would likely pick a GTX 760 (around $200).
Also note that if the developer is using 50% of the CPU resources for rendering computation and the remaining 50% isn't able to hold up its duties on AI, etc., we likely have hit performance walls on the x86 cores as well.
Even if this developer quote is 100% correct, that doesn't mean the current generation of consoles is completely doomed. Microsoft has already stated that DirectX 12, focused on performance efficiency of current generation hardware, will be coming to the Xbox One, and that could mean additional performance gains for developers. The PS4 will likely have access to OpenGL Next, which is due in the future. And of course, it's also possible that this developer is just wrong and there is plenty of headroom left in the hardware for games to take advantage of.
But honestly, based on my experience with these GPU and CPU cores, I don't think that's the case. If you look at screenshots of Assassin's Creed Unity and then look at the minimum and recommended specifications for the game on the PC, there is an enormous discrepancy. Are the developers just writing lazy code and not truly optimizing for the hardware? It seems unlikely that a company the size of Ubisoft would choose this route on purpose, creating a console game that runs in a less-than-ideal state while also struggling on the PC version. Remember, there is almost no "porting" going on here: the Xbox One and Playstation 4 share the same architecture as the PC now.
Of course, we might just be treading through known waters. I know we are a bit biased, and so is our reader base, but I am curious: do you think MS and Sony have put themselves in a hole with their shortsighted hardware selections?
UPDATE: It would appear that a lot of readers and commenters take our editorial on the state of the PS4 and XB1 as a direct attack on AMD and its APU design. That isn't really the case - regardless of which vendor's hardware is inside the consoles, had Microsoft and Sony still targeted the same performance levels, we would be in the exact same situation. An Intel + NVIDIA hardware combination could just as easily have been built to the same peak theoretical compute levels and would have hit the same performance wall just as quickly. MS and Sony could have prevented this by using higher performance hardware, selling the consoles at a loss out of the gate and preparing each platform for the next 7-10 years properly. And again, the console manufacturers could have done that with higher end AMD hardware, Intel hardware, or NVIDIA hardware. The state of the console performance war is truly hardware agnostic.
Subject: Graphics Cards | October 26, 2014 - 02:44 AM | Scott Michaud
Tagged: amd, driver, catalyst
So Ryan has been playing many games lately, as a comparison between the latest GPUs from AMD and NVIDIA. While Civilization: Beyond Earth is not the most demanding game on a video card, it is not trivial either, and it is a contender for the most demanding game on your main processor (CPU). It also has some of the most thought-out Mantle support of any title using the API, when running the AMD Catalyst 14.9.2 Beta driver.
And now you can!
The Catalyst 14.9.2 Beta drivers support just about anything using the GCN architecture, from APUs (starting with Kaveri) to discrete GPUs (starting with the HD 7000 and HD 7000M series). Beyond enabling Mantle support in Civilization, it also fixes some issues with Metro, Shadow of Mordor, Total War: Rome 2, Watch_Dogs, and other games.
Also, both AMD and Firaxis are aware of a bug in Civilization: Beyond Earth where the mouse cursor does not click exactly where it is supposed to, if the user enables font scaling in Windows. They are working on it, but suggest setting it to the default (100%) if users experience this issue. This could be problematic for customers with high-DPI screens, but could keep you playing until an official patch is released.
You can get 14.9.2 Beta for Windows 7 and Windows 8.1 at AMD's website.
Subject: Graphics Cards | October 24, 2014 - 03:44 PM | Ryan Shrout
Tagged: radeon, R9 290X, leaderboard, hwlb, hawaii, amd, 290x
When NVIDIA launched the GTX 980 and GTX 970 last month, it shocked the discrete graphics world. The GTX 970 in particular was an amazing performer and undercut the price of the Radeon R9 290 at the time. That is something that NVIDIA rarely does and we were excited to see some competition in the market.
AMD responded with some price cuts on both the R9 290X and the R9 290 shortly thereafter (though they refuse to call them that) and it seems that AMD and its partners are at it again.
Looking on Amazon.com today we found several R9 290X and R9 290 cards at extremely low prices. For example:
- XFX Radeon R9 290X Double D - $299 (after MIR)
- Gigabyte R9 290X WindForce - $360
- MSI R9 290X Gaming - $366
The R9 290X's primary competition in terms of raw performance is the GeForce GTX 980, currently selling for $549 and up (if you can find one in stock). That means NVIDIA has a $250 hill to climb when going against the lowest priced R9 290X.
The R9 290 looks interesting as well:
Several other R9 290 cards are selling for upwards of $300-320, which makes them questionable choices if you can get an R9 290X for the same or lower price. But consider that the GeForce GTX 970 is selling for at least $329 today (if you can find it) and you can see why consumers are paying close attention.
Will NVIDIA make any adjustments of its own? It's hard to say right now since stock of both the GTX 980 and GTX 970 is so hard to come by, and it's hard to imagine NVIDIA lowering prices as long as parts continue to sell out. NVIDIA believes that its branding and technologies like G-Sync make GeForce cards more valuable, and until it begins to see a shift in the market, I imagine it will stay the course.
For those of you who utilize our Hardware Leaderboard, you'll find that Jeremy has taken these prices into account and updated a couple of the system build configurations.
A Civ for a New Generation
Turn-based strategy games have long been defined by the Civilization series. Civ 5 consumed hours and hours of the PC Perspective team's non-working time (and likely the working hours too), and it looks like the new Civilization: Beyond Earth has the chance to do the same. Early reviews of the game from Gamespot, IGN, and Polygon are quite positive, and that's great news for a PC-only release; they can sometimes get overlooked in the gaming media.
For us, the game offers an interesting opportunity to discuss performance. Beyond Earth is definitely going to be more CPU-bound than the other games that we tend to use in our benchmark suite, but the fact that this game is new, shiny, and even has a Mantle implementation (AMD's custom API) makes it worth at least a look at the current state of performance. Both NVIDIA and AMD have released drivers with specific optimizations for Beyond Earth as well. This game is likely to be popular and it deserves the attention it gets.
Civilization: Beyond Earth, a turn-based strategy game that can take a very long time to complete, ships with an integrated benchmark mode to help users and the industry test performance under different settings and hardware configurations. To enable it, you simply add "-benchmark results.csv" to the Steam game launch options and then start up the game normally. Rather than taking you to the main menu, you'll be transported into a view of a map that represents a somewhat typical game state for a long-term session. The benchmark uses the last settings you ran the game at, so be sure to launch the game once without the modified launch options and configure your graphics settings before you benchmark.
The output of this is the "results.csv" file, saved to your Steam game install root folder. In it you'll find a list of numbers, separated by commas, representing the frame times for each frame rendered during the run. You don't get averages, a minimum, or a maximum without doing a little work. Fire up Excel or Google Docs and remember the formula:
1000 / Average (All Frame Times) = Avg FPS
It's a crude measurement that doesn't take into account any errors, spikes, or other interesting statistical data, but at least you'll have something to compare with your friends.
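If you'd rather not do the spreadsheet work by hand, a short script can apply the same formula. This is a minimal sketch that assumes the output file is nothing more than comma-separated frame times in milliseconds, as described above; the actual file layout may differ:

```python
import csv
import statistics

def summarize_frametimes(path="results.csv"):
    """Summarize a benchmark output file of frame times in milliseconds.

    Assumes a simple comma-separated list of frame times (an assumption
    based on the description above, not a documented format).
    """
    with open(path, newline="") as f:
        # Flatten all rows into one list of frame times.
        times = [float(v) for row in csv.reader(f) for v in row if v.strip()]
    avg_ms = statistics.mean(times)
    return {
        "avg_fps": 1000.0 / avg_ms,      # the formula quoted above
        "min_fps": 1000.0 / max(times),  # slowest frame -> lowest FPS
        "max_fps": 1000.0 / min(times),  # fastest frame -> highest FPS
    }
```

A run over frame times of 16.0, 20.0, and 14.0 ms, for example, yields an average of 60 FPS, which matches the 1000 / average formula.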
Our testing settings
Just as I have done in recent weeks with Shadow of Mordor and Sniper Elite 3, I ran some graphics cards through the testing process with Civilization: Beyond Earth. These include the GeForce GTX 980 and Radeon R9 290X only, along with SLI and CrossFire configurations. The R9 290X was run in both DX11 and Mantle.
- Core i7-3960X
- ASUS Rampage IV Extreme X79
- 16GB DDR3-1600
- GeForce GTX 980 Reference (344.48)
- ASUS R9 290X DirectCU II (14.9.2 Beta)
Mantle Additions and Improvements
AMD is proud of this release as it introduces a few interesting things alongside the inclusion of the Mantle API.
- Enhanced-quality Anti-Aliasing (EQAA): Improves anti-aliasing quality by doubling the coverage samples (vs. MSAA) at each AA level. This is automatically enabled for AMD users when AA is enabled in the game.
- Multi-threaded command buffering: Utilizing Mantle allows a game developer to queue a much wider flow of information between the graphics card and the CPU. This communication channel is especially good for multi-core CPUs, which have historically gone underutilized in higher-level APIs. You’ll see in your testing that Mantle makes a notable difference in smoothness and performance in high-draw-call late-game testing.
- Split-frame rendering: Mantle empowers a game developer with total control of multi-GPU systems. That “total control” allows them to design an mGPU renderer that best matches the design of their game. In the case of Civilization: Beyond Earth, Firaxis has selected a split-frame rendering (SFR) subsystem. SFR eliminates the latency penalties typically encountered by AFR configurations.
EQAA is an interesting feature as it improves on the quality of MSAA (somewhat) by doubling the coverage sample count while maintaining the same color sample count as MSAA. So 4xEQAA will have 4 color samples and 8 coverage samples while 4xMSAA would have 4 of each. Interestingly, Firaxis has decided that EQAA will be enabled in Beyond Earth anytime a Radeon card is detected (running in Mantle or DX11) and AA is enabled at all. So even though in the menus you might see 4xMSAA enabled, you are actually running at 4xEQAA. For NVIDIA users, 4xMSAA means 4xMSAA. Performance differences should be negligible though, according to AMD (which would actually be "hurt" by this decision if it brought down FPS).
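The sample-count relationship described above is simple enough to express directly. This tiny sketch is purely illustrative (the function name and shape are mine, not part of any driver API):

```python
def sample_counts(aa_level, eqaa=False):
    """Return (color_samples, coverage_samples) for a given AA level.

    EQAA keeps MSAA's color sample count but doubles the coverage
    samples, per AMD's description of the feature.
    """
    color = aa_level
    coverage = aa_level * 2 if eqaa else aa_level
    return color, coverage

# 4xMSAA -> 4 color / 4 coverage; 4xEQAA -> 4 color / 8 coverage
```

This is why the setting is transparent to the user: the color resolve work is identical, and only the cheaper coverage sampling increases.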
Subject: Graphics Cards | October 23, 2014 - 04:06 PM | Jeremy Hellstrom
Tagged: xfx, R9 285 Black Edition, factory overclocked, amd
Currently sitting at $260, the XFX R9 285 Black Edition is a little less expensive than the ASUS ROG STRIKER GTX 760 and significantly more expensive than the ASUS GTX760 DirectCU2 card. Those prices led [H]ard|OCP to set up a showdown to see which card provided the best bang for the buck, especially once they overclocked the AMD card to 1125MHz core and 6GHz RAM. In the end it was a very close race: the performance crown did go to the R9 285 BE, but that performance comes at a premium, as you can get nearly as much for $50 less. Of course, both the XFX card and the STRIKER sell at a premium compared to cards with fewer features and a stock setup; you should expect the lower priced R9 285s to be closer in performance to the DirectCU2 card.
"Today we are reviewing the new XFX Radeon R9 285 Black Edition video card. We will compare it to a pair of GeForce GTX 760 based GPUs to determine the best at the sub-$250 price point. XFX states that it is faster than the GTX 760, but that is based on a single synthetic benchmark, let's see how it holds up in real world gaming."
Here are some more Graphics Card articles from around the web:
- ZOTAC GTX 780 AMP! Edition Graphics Card Review @ TechwareLabs
- ASUS STRIX GeForce GTX 970 4GB @ eTeknix
- GeForce GTX 970 cards from MSI and Asus @ The Tech Report
- ASUS STRIX GTX 980 OC 4 GB @ techPowerUp
- MSI GTX980 Gaming 4G @ Kitguru
- MSI GTX 970 Gaming 4G – New Maxwell Price/Performance Beast @ Bjorn3D
- NVIDIA GeForce GTX 970 Offers Great Linux Performance @ Phoronix
Subject: General Tech | October 23, 2014 - 01:56 PM | Ken Addison
Tagged: video, podcast, GTX 980M, msi, X99S GAMING 9 AC, amd, nvidia, Intel, Kingwin, APU, Kaveri, 344.48, dsr
PC Perspective Podcast #323 - 10/23/2014
Join us this week as we discuss GTX 980M Performance, MSI X99S Gaming 9 AC and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:18:59
Subject: Processors | October 22, 2014 - 10:02 PM | Josh Walrath
Tagged: Richland, Q3 results, lisa su, Kaveri, APU, amd, A10 7850K
While AMD made a small profit last quarter, the Q4 outlook from the company is not nearly as rosy. AMD estimates that Q4 revenues will be around 12% lower than Q3, making for a rare drop in what is typically a robust season for sales. Unlike Intel, AMD is seeing a very soft PC market for its products. Intel so far has been able to deliver parts that are as fast, if not faster, than the latest APUs, but that also feature lower TDPs at comparable prices. The one area where AMD holds a significant advantage is 3D performance, backed by better driver support.
To keep the chips selling during this very important quarter, AMD is cutting the prices on their entire lineup of FM2+ parts. This includes the entire Kaveri based lineup from the top end A10-7850K to the A6-7400K. AMD is also cutting the prices on the previous Richland based parts, which include the A10-6800K. Also of interest is that buyers of A10 APUs will be able to select one of three game titles (Murdered: Soul Suspect, Thief, or Sniper Elite 3) for free, or use the included code to purchase Corel’s Aftershot Pro 2 for only $5.
|Compute Cores|12 (4+8)|12 (4+8)|10 (4+6)|10 (4+6)|6 (2+4)|
|TDP (cTDP)|95 (65/45)|65 (45)|95 (65/45)|65 (45)|65 (45)|
The A10-7850K is a pretty good part overall, though of course it does suffer at the hands of Intel when it comes to pure CPU performance. It still is a pretty quick part that competes well with Intel’s 2 core/4 thread chips. 3D performance from the integrated graphics is class leading, and the potential for using that unit for HSA applications is another checkmark for AMD. We have yet to see widespread adoption of HSA, but we are seeing more and more software products coming out that support it. Having tested it out myself, I can confirm that the GPU portion of the APU can be enabled even when using a standalone GPU from either AMD or NVIDIA. The Kaveri chips also support TrueAudio, which will show up in more titles throughout the next year.
One aspect of AMD’s latest FM2+ platform that cannot be ignored is the pretty robust selection of good and interesting motherboards that are offered at very low prices. Products such as the Gigabyte G1.Sniper.A88X and the MSI A88X-G45 Gaming motherboards are well rounded products that typically sell in the $90 to $110 range. Top end products like the Asus Crossblade Ranger are still quite affordable at around $160. Budget offerings are still pretty decent and they come in the $50 range.
One other product that has sparked interest is the Athlon X4 860K Black Edition. This product is clocked between 3.7 GHz and 4.0 GHz, features two Steamroller modules, and is priced at a very reasonable $90. The downside is that there is no GPU portion enabled, while the upside is that there is potentially more thermal headroom for the CPU portion to be clocked higher than previous A10-7850K parts. This will of course vary between individual chips, but the potential is there to have a pretty solid CPU for a very low price. Add in the low motherboard prices, and this has the makings of a nice budget enthusiast system.
So why the cuts now? We can simply look at last week’s results for AMD’s previous quarter, as well as how the next quarter is stacking up. While AMD made a small profit last quarter, predictions for Q4 look grim: as stated above, AMD is looking at around a 12% decrease in revenue. AMD has a choice: it can keep ASPs higher but risk shipping less product in the very important fourth quarter, or it can sacrifice ASPs and potentially ship a lot more product. The end result of cutting prices on its entire line of APUs will of course be lower ASPs, but a higher volume of parts shipped and sold. In terms of cash flow, it is likely more important to keep parts flowing than to sit on higher inventories at a higher ASP. More APUs sold also means more motherboards from AMD's partners moving through the channel.
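The ASP-versus-volume tradeoff is simple arithmetic: revenue is price times units, so a price cut must be offset by a proportionally larger volume increase just to hold revenue flat. The 15% cut below is purely illustrative; the article does not state the size of AMD's actual cuts:

```python
def volume_growth_needed(price_cut):
    """Fractional unit-volume growth required to keep revenue flat
    after cutting average selling price by `price_cut` (a fraction).

    Derivation: new_asp * new_units = old_asp * old_units
    => new_units / old_units = 1 / (1 - price_cut).
    """
    return 1.0 / (1.0 - price_cut) - 1.0

# A hypothetical 15% ASP cut requires ~17.6% more units sold
# before it adds a single dollar of revenue.
growth = volume_growth_needed(0.15)
```

This is why Josh frames the decision around cash flow rather than revenue: the cuts only pay off if the volume response is strong, but moving inventory has value on its own.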
Intel does have several huge advantages over AMD: a very solid 22 nm process, a huge workforce that can hand tune its processors, and enough marketing money to make any company other than Apple squirm. AMD is at the mercy of the pure-play foundries for process node tweaks and shrinks. AMD spent a long time at 32 nm PD-SOI before it was able to migrate to 28 nm HKMG. It looks to be 2015 before AMD sees anything below 28 nm for its desktop APUs, but it could be sooner for its smaller APUs and ARM based products on planar 20 nm HKMG processes. We don’t know all of the specifics of the upcoming 16/14 nm FinFET products from TSMC, Samsung, and GLOBALFOUNDRIES, so it is hard to compare them to Intel’s 2nd generation 14 nm TriGate line. All we know is that they will most assuredly be better than the current 28 nm HKMG process that AMD is stuck at.
Subject: General Tech | October 17, 2014 - 01:44 PM | Jeremy Hellstrom
Sad news again from AMD as roughly 710 employees from across the globe will be getting severance packages for Christmas. The cuts are likely to come from the Computing and Graphics division as it saw a 16% year-on-year decline in income. The Enterprise, Embedded, and Semi-Custom division saw a 21% increase and was the reason AMD's total income only dropped 2% when compared to this quarter last year. The news for the future is also not good, with The Inquirer reporting that AMD expects its revenues to slide another 10-16 per cent in the next quarter. Perhaps that is part of the reason Lisa Su will take home a salary that is $150K less than what Rory Read was earning.
"Following a grim earnings report on Thursday, AMD has announced a restructuring plan that includes axing seven per cent of its workforce by the end of the year.
The plan will see AMD issuing layoff notices to about 710 employees worldwide, and is expected to cost the chipmaker $57m in severance payment."
Here is some more Tech News from around the web:
- TSMC to start production of 16nm process products in 2Q15 @ DigiTimes
- The New TrueCrypt - VeraCrypt Or CipherShed @ TechARP
- IBM announces Internet of Things cloud services @ The Inquirer
- iPad Air 2 and iPad Mini 3 pre-orders go live at the Apple Store @ The Inquirer
- Devolo dLAN 500 WiFi Network Kit Review @ NikKTech
- Linksys SE4008 @ HardwareHeaven
- Introducing the F*Watch, a Fully Open Electronic Watch @ Hack a Day
Subject: General Tech | October 16, 2014 - 01:16 PM | Ken Addison
Tagged: podcast, video, nvidia, GTX 980, sli, 3-way sli, 4-way sli, amd, R9 290X, Samsung, 840 evo, Intel, corsair, HX1000i, gigabyte, Z97X-UD5H, Lenovo, yoga 3 pro, yoga tablet 2, nexus 9, tegra k1, Denver
PC Perspective Podcast #322 - 10/16/2014
Join us this week as we discuss GTX 980 4-Way SLI, Samsung's EVO Performance Fix, Intel Earnings and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Morry Tietelman
Program length: 1:26:16
Week in Review:
News items of interest:
0:46:25 You Missed It! PCPer Live! Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA
0:48:20 Trio of Lenovo News
Hardware/Software Picks of the Week:
Ryan: Sonos BOOST
Subject: Editorial | October 15, 2014 - 12:39 PM | Josh Walrath
Tagged: revenue, Results, quarterly, Q3, Intel, haswell, Broadwell, arm, amd, 22nm, 2014, 14nm
Yesterday Intel released their latest quarterly numbers, and they were pretty spectacular. Some serious milestones were reached last quarter, much to the dismay of Intel’s competitors. Not everything is good with the results, but the overall quarter was a record one for Intel. The company reported revenue of $14.55 billion with a net income of $3.31 billion. This is the highest revenue for a quarter in the history of Intel. This is also the first quarter in which Intel has shipped 100 million processors.
The death of the PC has obviously been overstated as the PC group had revenue of around $9 billion. The Data Center group also had a very strong quarter with revenues in the $3.7 billion range. These two groups lean heavily on Intel’s 22 nm TriGate process, which is still industry leading. The latest Haswell based processors are around 10% of shipping units so far. The ramp up for these products has been pretty impressive. Intel’s newest group, the Internet of Things, has revenues that shrank by around 2% quarter over quarter, but it has grown by around 14% year over year.
Not all news is good news though. Intel is trying desperately to get into the tablet and handheld markets, and so far has had little traction. The group reported revenues in the $1 million range. Unfortunately, that $1 million is offset by about $1 billion in losses. This year has seen an overall loss for mobile in the $3 billion range. While Intel arguably has the best and most efficient process for mobile processors, it is having a hard time breaking into this ARM dominated area. There are many factors involved here. First off, there are more than a handful of strong competitors working directly against Intel to keep it out of the market. Secondly, x86 processors do not have the software library or support that ARM has in this very dynamic and fast growing sector. We also must consider that while Intel has the best overall process, x86 processors are really only now achieving parity in power/performance ratios. Intel is also still considered a newcomer in this market when it comes to 3D graphics support.
Intel is quite happy to take this loss as long as they can achieve some kind of foothold in this market. Mobile is the future, and while there will always be the need for a PC (who does heavy duty photo editing, video editing, and immersive gaming on a mobile platform?) the mobile market will be driving revenues from here on out. Intel absolutely needs to have a presence here if they wish to be a leader at driving technologies in this very important market. Intel is essentially giving away their chips to get into phones and tablets, and eventually this will pave the way towards greater adoption. There are still hurdles involved, especially on the software side, but Intel is working hard with developers and Google to make sure support is there. Intel is likely bracing itself for a new generation of 20 nm and 16 nm FinFET ARM based products that will start showing up in the next nine months. The past several years have seen Intel push mobile up to high priority in terms of process technology. Previously these low power, low cost parts were relegated to an N+1 process technology from Intel, but with the strong competition from ARM licensees and pure-play foundries Intel can no longer afford that. We will likely see 14 nm mobile parts from Intel sooner as opposed to later.
Intel has certainly shored up a lot of their weaknesses over the past few years. Their integrated 3D/GPU support has improved in leaps and bounds, their IPC and power consumption with CPUs is certainly industry leading, and they continue to pound out impressive quarterly reports. Intel is certainly firing on all cylinders at this time and the rest of the industry is struggling to keep up. It will be interesting to see if Intel will keep up with this pace, and it will be imperative for the company to continue to push into mobile markets. I have never counted Intel out as they have a strong workforce, a solid engineering culture, and some really amazingly smart people (except Francois… he is just slightly above average; he is a GT-R aficionado, after all).
Next quarter appears to be more of the same. Intel is expecting revenue of $14.7 billion, plus or minus $500 million, buoyed by the continuing strong sales of PC and server parts. Net income and margins again look to be similar to what this past quarter brought to the table. We will see the introduction of the latest 14 nm Broadwell processors, which is an important step for Intel. 14 nm development and production has taken longer than people expected, and Intel has had to lean on their very mature 22 nm process longer than they wanted to. This has allowed a few extra quarters for the pure-play foundries to try to catch up. Samsung, TSMC, and GLOBALFOUNDRIES are all producing 20 nm products with a fast transition to 16/14 nm FinFET by early next year. This is not to say that these 16/14 nm FinFET products will be on par with Intel’s 14 nm process, but it at least gets them closer. In the near term though, these changes will have very little effect on Intel and their product offerings over the next nine months.