Remember when competition wasn't a bad word?

Subject: Editorial | October 2, 2015 - 12:41 PM |
Tagged: google, chromecast, AT&T, apple tv, amd, amazon

There is more discouraging news out of AMD, as another 5% of their roughly 10,000-strong workforce will be let go by the end of 2016.  The move will hurt their bottom line before the end of this year, with $42 million in severance, benefit payouts and other costs associated with restructuring, but it should save around $60-70 million in costs by the end of next year.  This comes on top of the 8% cut to their workforce which occurred earlier this year and shows just how deep AMD needs to cut to stay alive; unfortunately, reducing costs is not as effective as raising revenue.  Before you laugh, point fingers or otherwise disparage AMD, consider for a moment a world in which Intel has absolutely no competition selling high-powered desktop and laptop parts.  Do you really think the already slow product refreshes would speed up, or that prices would remain the same?

Consider the case of AT&T, who have claimed numerous times that they provide the best broadband service they are capable of to their customers, at the lowest price they can sustain.  It seems that if you live in a city which has been blessed with Google Fiber, AT&T is somehow able to afford to charge $40/month less than in a city which only has the supposed competition of Comcast or Time Warner Cable.  Interesting how the presence of Google in a market has an effect that the other two supposed competitors do not.

There is of course another way to deal with the competition, and both Amazon and Apple have that one down pat.  Apple removed the iFixit app that showed you the insides of your phone and had the temerity to actually show you possible ways to fix hardware issues.  Today Amazon has started to kick both Apple TV and Chromecast devices off its online store.  As of today no new items can be added to the virtual inventory, and as of the 29th of this month anything not sold will disappear.  Apparently not enough people are choosing Amazon's Prime Video streaming, and so instead of making the service compatible with Apple or Google's products, Amazon has opted to attempt to prevent, or at least hinder, the sale of those products.

The topics of competition, liquidity and other market forces are far too complex to be dealt with in a short post such as this, but it is worth asking yourself: do you, as a customer, feel like competition is still working in your favour?

The Hand

"AMD has unveiled a belt-tightening plan that the struggling chipmaker hopes will get its finances back on track to profitability."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

'Learn to trust us, because we're not about to stop.'

Subject: Editorial, General Tech | September 29, 2015 - 03:30 PM |
Tagged: trust, security, rant, microsoft, metadata, fud

Privacy of any nature when you utilize a device connected to the internet is quickly becoming a joke, and not a very funny one. To name just a few examples: Apple tracks your devices, Google scans every email you send, Lenovo actually has two programs to track your usage and of course there is Windows 10 and the data it collects and sends.  Thankfully, in some of these cases the programs which track and send your data can be disabled, but the fact of the matter is that they are turned on by default.

The Inquirer hits the nail on the head: "Money is simply a by-product of data," a fact which online sites such as Amazon and Facebook have known for a while and which software and hardware providers are now figuring out.  In some cases an informed choice to share personal data is made, but this is not always true. When you share to Facebook or post your Fitbit results to the web you should be aware you are giving companies valuable data; the real question is about the data and metadata you are sharing without being aware of it.

im_from_the_government_im_here_to_help.jpg

Should you receive compensation for the data you provide to these companies?  Should you always be able to opt out of sharing and still retain use of a particular service?  Perhaps the cost of utilizing that service is sharing your data instead of money?   There are a lot of questions and even a lot of different uses for this data but there is certainly no one single answer to those questions. 

Microsoft have been collecting data from BSoDs for decades and Windows users have all benefited from it, even though there is no opt-out for sending that data.  On the other hand, is there a debt incurred towards Lenovo or other companies when you purchase a machine from them?  Does the collection of usage patterns benefit Lenovo users in a similar way to the data generated by a Windows BSoD, or does the risk of this monitoring software being corrupted by others for nefarious purposes outweigh any possible benefits?

3adb62458565e775daf44731fabf2b92.jpg

Of course this is only the tip of the iceberg: the Internet of Things is poised to become a nightmare for those who value their security, and there are numerous exploits to track your cellphone that have nothing to do with your provider.  Just read through the Security tag here on PCPer for more examples, if you have a strong stomach.

Please, take some time to think about how much you value your privacy and what data you are willing to share in exchange for products and services.  Integrate that concern into your purchasing decisions, social media and internet usage.  Hashtags are nice, but nothing speaks as loudly as your money; never forget that.

"MICROSOFT HAS SPOKEN out about its oft-criticised privacy policies, particularly those in the newly released Windows 10, which have provoked a spike in Bacofoil sales over its data collection policies."

Here is some more Tech News from around the web:

Tech Talk

 

Source: The Register

Android to iPhone Day 3: Widgets and Live Photos

Subject: Editorial, Mobile | September 28, 2015 - 09:57 AM |
Tagged: iphone 6s, iphone, ios, google, apple, Android

PC Perspective’s Android to iPhone series explores the opinions, views and experiences of the site’s Editor in Chief, Ryan Shrout, as he moves from the Android smartphone ecosystem to the world of the iPhone and iOS. Having been entrenched in the Android smartphone market for 7+ years, Ryan intends the editorial series to be less a review of the new iPhone 6s than an exploration of how the current smartphone market compares with each side’s expectations.

Full Story Listing:

iphonestorysmall.jpg

Day 1

Opening and setting up a new iPhone is still an impressive experience. The unboxing process makes it feel like you are taking part in the reveal of a product worth its cost, and the accessories included are organized and presented well. Having never used an iPhone 6 or iPhone 6 Plus beyond the cursory “let me hold that”, it was immediately obvious to me that the iPhone build quality exceeded any of the recent Android-based smartphones I have used, including the new OnePlus 2, LG G4 and Droid Turbo. The rounded edges sparked some debate in terms of aesthetics, but they definitely make the phone FEEL slimmer than other smartphone options. The buttons were firm and responsive, though the click of the home button is noisier than I expected.

The setup process for the phone was pretty painless, but Ken, our production editor who has been an iPhone user every generation, did comment that the number of steps you have to go through to get to a working phone has increased quite a bit. Set up Siri, set up Touch ID, set up Wi-Fi, have you heard about iCloud? The list goes on. I did attempt to use the “Move to iOS” application from the Android Play Store on my Droid Turbo but I was never able to get it to work – the devices kept complaining about a disconnection of some sort in their peer-to-peer network and after about 8 tries, I gave up. I’m hoping to try it again with the incoming iPhone 6 Plus next week to see if it was a temporary issue.

iphoned3-1.jpg

After getting to the iPhone 6s home screen, I spent the better part of the next hour doing something that I do every time I get a new phone: installing apps. The process is painful – go to the App Store, search for the program, download it, open it, log in (and try to remember login information), repeat. With the Android Play Store I do appreciate the ability to “push” application downloads to a phone from the desktop website, making it much faster to search and acquire all the software you need. Apple would definitely benefit from some version of this that doesn’t require installing iTunes.

I am a LastPass user, and one of the first changes I had to get used to was the difference in how that software works on Android versus iOS. With my Droid Turbo I was able to give LastPass access to system levels lower than you can with iOS, and when using a third-party app like Twitter, LastPass can insert itself into the process and automatically input the username and/or password for the website or service. With the iPhone you don’t have that ability, and there was a lot of password copying and pasting to get everything set up. This is an area where the openness of the Android platform can benefit users.

That being said, the benefits of Touch ID from Apple were immediately apparent.  After going through the setup process, using my fingerprint in place of my 15+ digit Apple ID password is a huge benefit and time saver.  Every time I download a new app from the App Store and simply place my thumb on the home button, I grin, knowing this is how it should be for all passwords, everywhere. I was even able to set up my primary LastPass password to utilize Touch ID, removing one of the biggest annoyances of using the password-keeping software on Android. Logging into the phone with your finger or thumb print rather than a pattern or PIN is great too. And though I know new phones like the OnePlus 2 use a fingerprint reader for this purpose, the implementation just isn’t as smooth.

My final step before leaving the office and heading for home was to download my favorite podcasts and get them set up on the phone for the drive. Rather than use the Apple Podcasts app, it was recommended that I try out Overcast, which has been solid so far. I set up the Giant Bombcast, My Brother, My Brother and Me and a couple of others, let them download on Wi-Fi and set out for home. Pairing the iPhone 6s with my Chevy Volt was as easy as any other phone, but I did notice that Bluetooth-based information being passed to the entertainment system (icons, current time stamps, etc.) was more accurate with the iPhone 6s than my Droid Turbo (starting times and time remaining worked when they previously did not). That could be a result of the podcast application itself (I used doubleTwist on Android).

Day 2

On Saturday, with a bit more free time to set up the phone and install the applications I had previously forgotten, I did start to miss a couple of Android features. First, the lack of widgets on the iPhone home screens means the mass of icons on the iPhone 6s is much less useful than the customized screens I had on my Droid Turbo. With my Droid I had a page dedicated to social media widgets I could scroll through without opening any specific applications. Another page included my current to-do list from Google Keep and my 15 most current items from Google Calendar, all at a glance.

iphoned3-4.jpg

I know that the top drag-down menu on iOS, with the Today and Notifications tabs, is supposed to offer some of that functionality, but apps like Google Keep and Twitter don’t take advantage of it. And though it's cliché at this point, why in the hell doesn’t the Apple Weather application icon show the current temperature and weather status yet??

The second item I miss is the dedicated “back” button that Android devices have, which works universally across the entire system. Always knowing that you can move to the previous screen, or jump from the current app back to the home screen or to the program you just switched away from, is a great safety net that is missing in iOS. With only a single “always there” button on the phone, some software puts the back button functionality in the top left-hand corner and other software offers it as an X or Close button somewhere else. I found myself constantly looking around each new app on the iPhone 6s to find out how to return to a previous screen, and sometimes I would hit the home button out of habit, which obviously isn’t going to have the intended function. Swiping from the left of the screen to the middle works with some applications, but not all.

Also, though my Droid Turbo was about the same size as the iPhone 6s, the size of the screen makes it hard to reach the top when only using one hand. With the Android back button along the bottom of the phone, it was always within reach. Those iOS apps that put the return functionality in the top left of the screen make it much more difficult to use one-handed, often risking a dropped phone as you reposition it in your hand. And double tapping (not clicking) the home button and THEN reaching for the back button in any particular app just seems to take too long.

On Saturday I went camping with my family at an early Halloween event that we attend annually. This made for a great chance to test out the iPhone 6s camera, and without a doubt, it is the best phone camera I have used. The images were clear, the shutter speed was fast, and the ability to take high frame rate video or 4K video is a nice touch. I think that enough people have shown the advantages of the iPhone camera systems over almost anything else on the smartphone market, and as a user of seemingly slow and laggy Android-based phone cameras, the move to the iPhone 6s is a noticeable change. As a parent of a 3-month-old baby girl, these photos are becoming ever more important to me.

iphoned3-2.jpg

The new Live Photos feature, where essentially a few frames before and a few frames after the picture you actually took are captured (with audio included), is pretty much a gimmick but the effect is definitely eye-catching. When flipping through the camera roll you actually see a little bit of movement (someone’s face for example) which caused me to raise an eyebrow at first. It’s an interesting idea, but I’m not sure what use they will have off of the phone itself – will I be able to “play” these types of photos on my PC? Will I be able to share them to other phone users that don’t have the iPhone 6s?

Day 3

Most of Sunday was spent watching football and using the iPhone 6s to monitor fantasy football and to watch football through our Wi-Fi network when I needed to leave the room for laundry. The phone was able to keep up, as you would expect, with these mostly lightweight tasks without issue. Switching between applications was quick and responsive, and despite the disadvantage that the iPhone 6s has over many Android flagship phones in terms of system memory, I never felt like the system was penalized for it.

Browsing the web through either Safari or Google Chrome did demonstrate a standard complaint about iOS – the reloading of webpages when coming back into the browser application, even if you didn’t navigate away from the page. With Android you are able to load up a webpage and then just…leave it there, for reference later. With the iPhone 6s, even with the added memory this model ships with, it will reload a page after some amount of time away from the browser app, as the operating system decides it needs to utilize that memory for another purpose.

iphoned3-3.jpg

I haven’t had a battery life crisis with the iPhone yet, but I am worried about the lack of Quick Charging or Turbo Charging support on the iPhone 6s. This was a feature I definitely fell in love with on the Droid Turbo, especially when travelling for work or going on extended outings without access to power. I’ll have to monitor whether or not this issue rears its head.

Speaking of power and battery life – so far I have been impressed with how the iPhone 6s has performed. As I write this editorial up at 9:30pm on Sunday night, the battery level sits at 22%. Considering I have been using the phone for frequent speed tests (6 of them today) and just general purpose performance and usability testing, I consider this a good result. I only took one 5 minute phone call but texting and picture taking was plentiful. Again, this is another area where this long-term test is going to tell the real story, but for my first impressions the thinness of the iPhone 6s hasn’t created an instant penalty for battery life.

 

The journey is still beginning – tomorrow is my first full work day with the iPhone 6s and I have the final installment of my summer evening golf league. Will the iPhone 6s act as my golf GPS like my Droid Turbo did? Will it make it through the full day without having to resort to car charging or using an external battery? What other features and capabilities will I love or hate in this transition? More soon!

Jim Keller Leaves AMD

Subject: Editorial | September 18, 2015 - 01:00 PM |
Tagged: Zen, raja koduri, lisa su, Jim Keller, bulldozer, amd

2012 was a significant year for AMD.  Many of the top executives left and there were many new and exciting hires at the company.  Lisa Su, who would eventually become President and CEO of AMD, was hired in January of that year.  Rory Read seemed to be on a roll with many measures to turn around the company.  He also convinced some big-name folks to come back to AMD from other lucrative positions.  One of these rehires was Jim Keller.

jim_keller.jpg

Jim Keller, breakin' it down for AMD. Or doing "The Robot". Or both.

Today it was announced that Jim would be leaving AMD effective Sept. 18th.  He was back at AMD for three years, and in that time he headed up the CPU group.  He implemented massive changes that would result in the design of the upcoming Zen architecture.  There was a full-scale ejection of the Bulldozer concept that has powered AMD processors since the FX-8150's introduction in 2011, with the current Excavator core design set to last through 2016 and the final product being "Bristol Ridge," expected next summer.  Zen will not ship until late 2016, with the first full quarter of revenue in 2017.

Jim helped to develop the K7 and K8 processors at AMD.  He was also extremely influential in the creation of the X86-64 ISA that not only powers AMD’s parts, but was also adopted by Intel after their disastrous EPIC/IA64 ISA failed to go anywhere.  His past also includes work at DEC on the Alpha processors and, before rejoining AMD, work at Apple on the A4 and A5 SoCs.

We do not know any of the details about his leaving, and perhaps never will.  AMD has released an official statement that “Jim Keller is leaving AMD to pursue other opportunities, effective September 18”.  Looking at Jim’s past employment, he seems to move around a bit.  Perhaps he enjoys coming into a place, turning things around and implementing some new thinking, but then grows bored with the daily routine of management, budget, and planning.

In the near future this change will not affect AMD’s roadmaps or product lineups.  We still will see Bristol Ridge as the follow-up for Godavari in Summer 2016 and the late 2016 introduction of Zen.  What can be said beyond that is hard to quantify.  There are a lot of smart and talented people still working at AMD and perhaps this allows someone there to step up and introduce the next generation of architectures and thinking at AMD.  Everybody likes the idea of a rockstar designer coming in to shake things up, but time moves on and new people become those rockstars.

We wish Jim well on his new journey and hope that this is not a harbinger of things to come for AMD.  Consumers need the competition that AMD brings to the table and we certainly hope we see them continue to release new products and stay on a schedule that will benefit both them and consumers.  Perhaps he will join fellow veteran Glenn Henry at VIA/Centaur and produce the next, great X86-64 chip.  Perhaps not.

Source: AMD

AMD makes Raja Koduri SVP and Chief Architect of Radeon Technologies Group

Subject: Editorial | September 9, 2015 - 03:53 PM |
Tagged: raja koduri, amd

In a move of outstanding wisdom and forward thinking, AMD has made a personnel move that I can get behind and support. After forming the Radeon Technologies Group to help refocus the company on graphics, it has promoted Raja Koduri to the role of Senior Vice President and Chief Architect of that new group. While this might be a little bit of an "inside baseball" announcement to discuss, Raja is one of the few people in the industry that I have known since day one and he is an outstanding and important person in the graphics world as we know it today.

Koduri recently returned to AMD after a stint with Apple as the mobile SoC vendor's director of graphics architecture, and his return was met with immediate enthusiasm and hope for a company that continues to struggle financially.

raja.jpg

In this new role, Koduri will no longer be responsible for just the IP of AMD graphics; his responsibilities now cover the entirety of the hardware, software and business direction for Radeon products. From personal experience I can assure readers that Raja is a fantastic leader, has great instincts for what the industry needs and has seen some of AMD's most successful products through development.

This new role and new division of structure at AMD will come with a lot of responsibility, as Koduri will be responsible for finding ways to grow the Radeon brand's shrinking market share, making a play in the mobile IP space, changing the dynamic between developers and AMD, and working out how partnerships with console vendors like MS and Sony make sense going forward. In many ways this is a return to the structure that made ATI so successful as a player in the GPU space, and AMD is definitely hoping this move can turn things around.

Good luck Raja!

Read AMD's full press release after the break.

Source: AMD
Manufacturer: PC Perspective

To the Max?

Much of the PC enthusiast internet, including our comments section, has been abuzz with “Asynchronous Shader” discussion. Normally, I would explain what it is and then outline the issues that surround it, but I would like to swap that order this time. Basically, the Ashes of the Singularity benchmark utilizes Asynchronous Shaders in DirectX 12, but its developer, Oxide Games, disables the feature (by Vendor ID) for NVIDIA hardware. Oxide says that this is because, while the driver reports compatibility, “attempting to use it was an unmitigated disaster in terms of performance and conformance”.

epic-2015-ue4-dx12.jpg

AMD's Robert Hallock claims that NVIDIA GPUs, including Maxwell, cannot support the feature in hardware at all, while all AMD GCN graphics cards do. NVIDIA has yet to respond to our requests for an official statement, although we haven't poked every one of our contacts yet. We will certainly update and/or follow up if we hear from them. For now, though, we have no idea whether this is a hardware or software issue. Either way, it seems to be more than just politics.

So what is it?

Simply put, Asynchronous Shaders allows a graphics driver to cram workloads into portions of the GPU that are idle but would otherwise go unused. For instance, if a graphics task is hammering the ROPs, the driver would be able to toss an independent physics or post-processing task into the shader units alongside it. Kollock from Oxide Games used the analogy of HyperThreading, which allows two CPU threads to be executed on the same core at the same time, as long as the core has the capacity for it.
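To make that a bit more concrete, here is a minimal sketch of what the application side of asynchronous compute looks like in DirectX 12, assuming an already-initialized device. The helper name and the surrounding frame logic are my own invention for illustration; this is not Oxide's code, just the queue model the feature relies on.

```cpp
#include <windows.h>
#include <d3d12.h>

// Hypothetical helper: create a compute-only queue alongside the usual
// DIRECT (graphics) queue. Error handling is omitted for brevity.
ID3D12CommandQueue* CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;         // compute work only
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ID3D12CommandQueue* queue = nullptr;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// Elsewhere in the frame: the graphics queue executes the rendering command
// lists while this queue is fed an independent physics or post-processing
// dispatch that could, in theory, fill otherwise idle shader units.
```

Whether work submitted to that second queue actually overlaps with graphics on the silicon is entirely up to the hardware scheduler and the driver, which is exactly the point of contention here.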

Kollock also notes that compute is becoming more important in the graphics pipeline, and that it is possible to bypass the traditional graphics path altogether. The fixed-function bits may never go away, but it's possible that at least some engines will bypass them entirely -- maybe even Oxide's own engine, several years down the road.

I wonder who would pursue something so silly, whether for a product or even just research.

But, as always, you will not get an infinite amount of performance by reducing your waste. You are always bound by the theoretical limits of your components, and you cannot optimize past that (short of changing the workload itself). The interesting part is that you can measure this. You can absolutely observe how long a GPU is idle, and represent it as a percentage of a time-span (typically a frame).
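As a rough illustration of that measurement, here is a small, entirely hypothetical helper that turns the GPU busy intervals recorded for one frame (from timestamp queries, for example) into an idle percentage; the function name and inputs are made up for the example.

```cpp
#include <utility>
#include <vector>

// Idle time for one frame, expressed as a percentage of the frame duration.
// Assumes the busy intervals do not overlap one another.
double IdlePercentOfFrame(double frameMs,
                          const std::vector<std::pair<double, double>>& busyIntervalsMs)
{
    double busyMs = 0.0;
    for (const auto& interval : busyIntervalsMs)
        busyMs += interval.second - interval.first;

    return 100.0 * (frameMs - busyMs) / frameMs;
}

// Example: a 16.7 ms frame with 11 ms of measured GPU work is roughly 34% idle,
// which is the sort of headroom asynchronous workloads are meant to soak up.
```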

And, of course, game developers profile GPUs from time to time...

According to Kollock, he has heard of some console developers getting up to 30% increases in performance using Asynchronous Shaders. Again, this is on console hardware, so the gain may be larger or smaller on the PC. In an informal chat with a developer at Epic Games (so a massive grain of salt is required), his late-night, ballpark, “totally speculative” guesstimate was that, on the Xbox One, the GPU could theoretically accept a maximum of ~10-25% more work in Unreal Engine 4, depending on the scene. He also said that memory bandwidth gets in the way, which Asynchronous Shaders would be fighting against. It is something that they are interested in and investigating, though.

AMD-2015-MantleAPI-slide1.png

This is where I speculate on drivers. When Mantle was announced, I looked at its features and said “wow, this is everything that a high-end game developer wants, and a graphics developer absolutely does not”. From the OpenCL-like multiple GPU model, which takes much of the QA out of SLI and CrossFire, to the memory and resource binding management, this should make graphics drivers so much easier to write.

It might not be free, though. Graphics drivers might still have a bunch of games to play to make sure that work is stuffed through the GPU as tightly packed as possible. We might continue to see “Game Ready” drivers in the coming years, even though much of that burden has been shifted to the game developers. On the other hand, maybe these APIs will level the whole playing field and let all players focus on chip design and efficient ingestion of shader code. As always, painfully always, time will tell.

Introducing the Intel Box Master System with Color-enabled Gaming!

Subject: Editorial | August 21, 2015 - 02:28 PM |
Tagged: video, Skylake, master system, Intel, 6700k

Sometimes you get weird boxes in the mail and you just know they are going to be up to no good. This time, Intel just launched the Intel Box Master System gaming system...with COLOR!

Seriously.

You really need to watch the video, but if you MUST sneak a peek at what we're talking about, check out the images below!

Visit Intel at http://inte.ly/unbox

Source: Intel
Manufacturer: PC Perspective

It's Basically a Function Call for GPUs

Mantle, Vulkan, and DirectX 12 all claim to reduce overhead and provide a staggering increase in “draw calls”. As mentioned in the previous editorial, the way applications load the graphics card with tasks changes drastically in these new APIs. With DirectX 10 and earlier, applications would assign attributes to (what they are told is) the global state of the graphics card. After everything is configured and bound, one of a few “draw” functions is called, which queues the task in the graphics driver as a “draw call”.
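For readers who never wrote against the older APIs, the pattern looks roughly like this Direct3D 11 fragment. The function and parameter names are invented for illustration, and resource creation, most pipeline state, and error handling are all omitted.

```cpp
#include <d3d11.h>

// Old model: every Set* call mutates the single, global pipeline state on the
// context; Draw() then becomes one "draw call" queued in the driver.
void DrawOneObject(ID3D11DeviceContext* ctx,
                   ID3D11Buffer* vertexBuffer, UINT stride,
                   ID3D11VertexShader* vs, ID3D11PixelShader* ps,
                   UINT vertexCount)
{
    UINT offset = 0;
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset); // bind into global IA state
    ctx->VSSetShader(vs, nullptr, 0);                               // bind shaders into global state
    ctx->PSSetShader(ps, nullptr, 0);
    ctx->Draw(vertexCount, 0);                                      // the actual "draw call"
}
```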

While this suggests that just a single graphics device is to be defined, which we also mentioned in the previous article, it also implies that one thread needs to be the authority. This limitation has been known for a while, and it contributed to the meme that consoles can squeeze out all the performance they have while PCs are “too high level” for that. Microsoft tried to combat this with “Deferred Contexts” in DirectX 11. This feature allows virtual, shadow states to be assembled on secondary threads and then appended to the global state, whole. It was a compromise between each thread being able to create its own commands and the legacy decision to have a single, global state for the GPU.
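In code, a Deferred Context looks something like the hypothetical pair of helpers below: a worker thread records against its own shadow state and closes it into an ID3D11CommandList, and the immediate context, which owns the real global state, replays that list in one shot. Threading and synchronization details are left out.

```cpp
#include <d3d11.h>

// Worker thread: record commands on a deferred context.
ID3D11CommandList* RecordOnWorkerThread(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... bind state and issue draws on 'deferred' just as on the immediate context ...

    ID3D11CommandList* commandList = nullptr;
    deferred->FinishCommandList(FALSE, &commandList); // close the recording
    deferred->Release();
    return commandList;
}

// Main thread: append the recorded commands to the global state, whole.
void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
{
    immediate->ExecuteCommandList(commandList, FALSE);
    commandList->Release();
}
```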

Some developers experienced gains, while others lost a bit. It didn't live up to expectations.

pcper-2015-dx12-290x.png

The paradigm used to load graphics cards is the problem. It doesn't make sense anymore. A developer might not want to draw a primitive with every poke of the GPU. At times, they might want to shove a workload of simple linear algebra through it, while other requests could simply be pushing memory around to set up a later task (or to read the result of a previous one). More importantly, any thread could want to do this to any graphics device.
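The explicit APIs restructure that workflow along the lines of the DirectX 12 sketch below, where each thread records its own command list against its own allocator and a queue consumes them in bulk. The helper names are hypothetical, and allocator, pipeline state, and fence management are omitted.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <vector>

// Any thread: record into an independent command list. Nothing touches a
// global state until the list is handed to a queue.
void RecordPerThread(ID3D12Device* device,
                     ID3D12CommandAllocator* allocator,
                     ID3D12GraphicsCommandList** outList)
{
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, allocator,
                              nullptr, IID_PPV_ARGS(outList));
    // ... record draws, dispatches, or copies -- whatever this thread needs ...
    (*outList)->Close();
}

// Submission thread: one call hands the whole batch to the scheduler at once.
void SubmitFrame(ID3D12CommandQueue* queue,
                 const std::vector<ID3D12CommandList*>& lists)
{
    queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());
}
```

Compute and copy queues follow the same pattern, which is what lets any thread feed any device without fighting over a single global state.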

pcper-2015-dx12-980.png

The new graphics APIs allow developers to submit their tasks quicker and smarter, and it allows the drivers to schedule compatible tasks better, even simultaneously. In fact, the driver's job has been massively simplified altogether. When we tested 3DMark back in March, two interesting things were revealed:

  • Both AMD and NVIDIA are only a two-digit percentage of draw call performance apart
  • Both AMD and NVIDIA saw an order of magnitude increase in draw calls

Read on to see what this means for games and game development.

Windows 10 One-Minute Ad Launches

Subject: Editorial | July 20, 2015 - 08:28 PM |
Tagged: microsoft, windows, windows 10

As we've been saying for several months now, Windows 10 is coming in a handful of days. Naturally, Microsoft is trickling out information and marketing material leading up to it. Some of the interesting ones we can talk about. I'd normally consider a one-minute TV spot as “not very interesting”, and it probably isn't for our audience, but there was one thing that I wanted to say about it.

The ad cycles through an international cast of children, and of course an adorable puppy, describing how their technology life will evolve with Windows 10. The premise is that the OS will empower everything that they do, and grow with them because of automatic updates. Of course, young children and a puppy do a lot to sell a consumer product all by themselves. The video currently has over 200,000 views on YouTube with an almost 20:1 like-to-dislike ratio.

But the part that interested me was the quote “for them, every screen is meant to be touched”.

In a direct way, yes. Once you provide someone with a touch screen, especially a young child, they instantly want to touch every screen in their life. This has actually led to schools refusing to install touch-based all-in-one PCs because they were worried about kids ruining the non-touch monitors.

It is odd that Microsoft would focus on “touch” in the ad, though. This leads me to the point that I want to bring up. Nowhere in the ad is “familiar” or similar verbiage used. Each example is touch, stylus, or voice. You would think that Microsoft wants to draw in the audience who avoided Windows 8.x, and yet the tone sounds identical to what they've been saying for years.

It's just a TV spot, but it sounds a bit out of tune with the last year.

Tick Tock Tick Tock Tick Tock Tock

A few websites have been re-reporting on a leak from BenchLife.info about Kaby Lake, which is supposedly a second 14nm redesign (“Tock”) to be injected between Skylake and Cannonlake.

UPDATE (July 2nd, 3:20pm ET): It has been pointed out that many hoaxes have come out of the same source, and that I should be more clear in my disclaimer. This is an unconfirmed, relatively easy to fake leak that does not have a second, independent source. I reported on it because (apart from being interesting enough) some details were listed on the images but not highlighted in the leak itself, such as "GT0" and the lack of Iris Pro on -K parts. That suggests the leaker got the images from somewhere but didn't notice those details, which implies either that the original source was itself fed a hoax by someone who seeded it to a single media outlet, or that it was an actual leak.

Either way, enjoy my analysis but realize that this is a single, unconfirmed source who allegedly published hoaxes in the past.

intel-2015-kaby-lake-leak-01.png

Image Credit: BenchLife.info

If true, this would be a major shift in both Intel's current roadmap as well as how they justify their research strategies. It also includes a rough stack of product categories, from 4.5W up to 91W TDPs, including their planned integrated graphics configurations. This leads to a pair of interesting stories:

How Kaby Lake could affect Intel's processors going forward. Since 2006, Intel has only budgeted a single CPU architecture redesign for any given fabrication process node. Taking two attempts on the 14nm process buys time for 10nm to become viable, but it could also give them more time to build up a better library of circuit elements, allowing them to assemble better processors in the future.

What type of user will be given Iris Pro? Also, will graphics-free options be available in the sub-Enthusiast class? When you buy a processor from Intel, the high-end mainstream parts tend to have GT2-class graphics, such as the Intel HD 4600. Enthusiast architectures, such as Haswell-E, cannot be used without discrete graphics -- the extra space is used for more cores, I/O lanes, or other features. As we will discuss later, Broadwell took a step toward changing the availability of Iris Pro in the high-end mainstream, but it doesn't seem like Kaby Lake will make any more progress. Also, if I am interpreting the table correctly, Kaby Lake might bring iGPU-less CPUs to LGA 1151.

Keeping Your Core Regular

To the first point, Intel has been on a steady tick-tock cycle since the Pentium 4 architecture reached the 65nm process node, which was a “tick”. The “tock” came from the Conroe/Merom architecture that was branded “Core 2”. This new architecture was a severe departure from the high-clock, relatively low-IPC design that NetBurst was built around, and it instantly changed the processor landscape from a dominant AMD to a runaway Intel lead.

intel-tick-tock.png

After 65nm and Core 2 started the cycle, every new architecture alternated between shrinking the existing architecture to smaller transistors (tick) and creating a new design on the same fabrication process (tock). Even though Intel has been steadily increasing their R&D budget over time, which is now in the range of $10 to $12 billion USD each year, creating smaller, more intricate designs with new process nodes has been getting harder. For comparison, AMD's total revenue (not just profits) for 2014 was $5.51 billion USD.

Read on to see more about what Kaby Lake could mean for Intel and us.