Skylake and Later Will Be Denied Windows 7 / 8.x Support

Subject: Processors | January 17, 2016 - 02:20 AM |
Tagged: Windows 8.1, Windows 7, windows 10, Skylake, microsoft, kaby lake, Intel, Bristol Ridge, amd

Microsoft has not been doing much to put out the fires in comment threads all over the internet. The latest flare-up involves hardware support on Windows 7 and 8.x. Currently unreleased architectures, such as Intel's Kaby Lake and AMD's Bristol Ridge, will only be supported on Windows 10. This is despite Windows 7 and Windows 8.x being supported until 2020 and 2023, respectively. Microsoft does not believe that it needs to support new hardware on those older operating systems, though.

windows-10-bandaid.png

This brings us to Skylake. These processors are out, but Microsoft considers them “transition” parts. Microsoft provided PC World with a list of devices that will be given Windows 7 and Windows 8.x drivers, which enable support until July 17, 2017. Beyond that date, only a handful of “most critical” updates will be provided until the official end of life.

I am not sure what the cut-off date is for Skylake processors that do not appear on Microsoft's list; those, apparently, could be deprecated at any time. This is especially a problem for parts that may have already been sold.

As I hinted earlier, this will probably reinforce the opinion that Microsoft is doing something malicious with Windows 10. As Peter Bright of Ars Technica reports, Windows 10 does not exactly have an equivalent in the server space yet, which makes you wonder what that support cycle will be like. If they can continue to patch Skylake-based servers in Windows Server builds that are derived from Windows 7 and Windows 8.x, like Windows Server 2012 R2, then why are they unwilling to port those changes to the base operating system? If they will not patch current versions of Windows Server, because the Windows 10-derived version still isn't out yet, then what will happen with server farms, like Amazon Web Services, when Xeon v5s are suddenly incompatible with most Windows-based OS images? While this will, no doubt, be taken way out of context, there is room for legitimate commentary about this whole situation.

Of course, supporting new hardware on older operating systems can be difficult, and not just for Microsoft. Peter Bright also noted that Intel provides similarly spotty driver coverage, although that mostly applies to Windows Vista, which, while still in extended support for another year, doesn't have a significant base of users who are unwilling to switch. The point remains, though, that Microsoft could simply be doing a favor for its hardware vendor partners.

I'm not sure whether that would be less concerning, or more.

Whatever the reason, this seems like a very silly move on Microsoft's part, given the current landscape. Windows 10 can become a great operating system, but users need to decide that for themselves. When users are pushed, and an adequate reason is not provided, they will start to assume things. Chances are, those assumptions will not be in Microsoft's favor. Some may put up with it, but others might continue to hold out on older platforms, maybe even including older hardware.

Other users may be able to get away with Windows 7 VMs on a Linux host.

Source: Ars Technica

Meet the new AMD Opteron A1100 Series SoC with ARM onboard

Subject: Processors | January 14, 2016 - 02:26 PM |
Tagged: opteron a1100, amd

The chip once known as Seattle has arrived from AMD: the Opteron A1100 Series, built around up to eight 64-bit ARM Cortex-A57 cores.  The chips will have up to 4 MB of shared L2 cache and 8 MB of L3 cache, with an integrated dual-channel memory controller that supports up to 128 GB of DDR3 or DDR4 memory.  For connectivity you get two 10Gb Ethernet ports, 8 lanes of PCIe 3.0, and support for up to 14 SATA3 devices.

a1100models.PNG

As you can see above, the TDPs range from 25W to 32W, perfect for power-conscious data centres.  The SoftIron Overdrive 3000 systems will use the new A1100 chips, and AMD is working with Silver Lining Systems to integrate SLS' fabric technology for interconnecting systems.

a1100.PNG

TechARP has posted a number of slides from AMD's presentation, or you can head straight over to AMD to get the scoop.  You won't see these chips on the desktop, but new server chips are great news for AMD's bottom line in the coming year.  They also speak well of AMD's continued innovation: pairing low-powered, low-cost 64-bit ARM chips with AMD's interconnect technologies opens up a new market for the company.

Full PR is available after the break.

Source: AMD

Report: AMD Carrizo FM2+ Processor Listing Appears Online

Subject: Processors | January 11, 2016 - 06:26 PM |
Tagged: rumor, report, FM2+, carrizo, Athlon X4, amd

According to a report published by CPU World, a pair of unreleased AMD Athlon X4 processors appeared in a supported CPU list on Gigabyte's website (since removed) long enough to give away some information about these new FM2+ models.

Athlon_X4_835_and_845.jpg

Image credit: CPU World

The CPUs in question are the Athlon X4 835 and Athlon X4 845, 65W quad-core parts that are both based on AMD's Excavator core, according to CPU World. The part numbers are AD835XACI43KA and AD845XACI43KA, which the CPU World report interprets:

"The 'I43' letters and digits in the part number signify Socket FM2+, 4 CPU cores, and 1 MB L2 cache per module, or 2MB in total. The last two letters 'KA' confirm that the CPUs are based on Carrizo design."

The report further states that the Athlon X4 835 will operate at 3.1 GHz, with 3.5 GHz for the X4 845. No Turbo Core frequency information is known for these parts.

Source: CPU-World
Manufacturer: PC Perspective
Tagged: moores law, gpu, cpu

Are Computers Still Getting Faster?

It looks like CES is starting to wind down, which makes sense because it ended three days ago. Now that we're mostly caught up, I found a new video from The 8-Bit Guy. He doesn't really explain any old technologies in this one. Instead, he poses an open question about computer speed. He was able to have a functional computing experience on a ten-year-old Apple laptop, which made him wonder if the rate of computer advancement is slowing down.

I believe that he (and his guest hosts) made great points, but also missed a few important ones.

One of his main arguments is that software seems to have slowed down relative to hardware. I don't believe that is true, but I believe it is looking in the right area. PCs these days are more than capable of doing just about anything we would want in terms of 2D user interfaces, and they do so with plenty of overhead left for inefficient platforms and sub-optimal programming (relative to the '80s and '90s, at the very least). The areas that require extra horsepower are usually those running large batches of many related tasks. GPUs are key in this area, and they are keeping up as fast as they can, despite some stagnation in fabrication processes and difficulty (at least before HBM takes hold) in keeping up with memory bandwidth.

For the last five to ten years or so, CPUs have been evolving toward efficiency while GPUs are being adopted for the tasks that need to scale up. I'm guessing that AMD, when they designed the Bulldozer architecture, hoped that GPUs would be adopted much more aggressively; but even as graphics devices, they now have a huge effect on Web, UI, and media applications.

google-android-opengl-es-extensions.jpg

These are also tasks that can scale well between devices by lowering resolution (and so forth). The primary thing that a main CPU thread needs to do is figure out the system's state and keep the graphics card fed before the frame-train leaves the station. In my experience, that doesn't scale well (although you can sometimes reduce the number of tracked objects for games and so forth). Moreover, it is easier to add GPU performance than single-threaded CPU performance, because increasing frequency and single-threaded IPC is more complicated than laying out more duplicated blocks of shaders. These factors combine to give lower-end hardware a similar experience in the most noticeable areas.

So, up to this point, we discussed:

  • Software is often scaling in ways that are GPU (and RAM) limited.
  • CPUs are scaling down in power more than up in performance.
  • GPU-limited tasks can often be approximated with smaller workloads.
    • Software gets heavier, but it doesn't need to be "all the way up" (ex: resolution).
    • Some latencies are hard to notice anyway.

Back to the Original Question

This is where “Are computers still getting faster?” can be open to interpretation.

intel-devilscanyon-overview.JPG

Tasks are diverging from one class of processor into two, and both have separate industries, each with their own, multiple goals. As stated, CPUs are mostly progressing in power efficiency, which extends a (presumably) sufficient amount of performance downward to more types of devices. GPUs are definitely getting faster, but they can't do everything. At the same time, RAM is plentiful, but its contribution to performance can be approximated by paging unused chunks to the hard disk or, more recently on Windows, compressing them in place. Newer computers with extra RAM won't help as long as each individual task only uses a manageable amount of it -- unless you care about multi-tasking.

In short, computers are still progressing, but the paths are now forked and winding.

Far Cry Primal System Requirements Slightly Lower than 4?

Subject: Graphics Cards, Processors | January 9, 2016 - 07:00 AM |
Tagged: ubisoft, quad-core, pc gaming, far cry primal, dual-core

If you remember back when Far Cry 4 launched, it required a quad-core processor. It would block your attempts to launch the game unless it detected four CPU threads: either a native quad-core, or a dual-core with two SMT threads per core. This has naturally been hacked around by the PC gaming community, but it is not supported by Ubisoft, and it apparently provides a bad experience anyway.
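Conceptually, the check boils down to counting logical processors at startup. Here is a minimal sketch of such a launch gate; Ubisoft's actual implementation is, of course, not public, so this only illustrates the idea.

```python
# Minimal sketch of a "four logical CPUs" launch gate, similar in spirit
# to what Far Cry 4 reportedly enforced. Not Ubisoft's actual code.
import os
import sys

MIN_LOGICAL_CPUS = 4  # native quad-core, or dual-core with SMT

logical_cpus = os.cpu_count() or 0
if logical_cpus < MIN_LOGICAL_CPUS:
    sys.exit(f"Detected {logical_cpus} logical CPUs; at least "
             f"{MIN_LOGICAL_CPUS} are required to launch.")
```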

ubisoft-2015-farcryprimal.jpg

The follow-up, Far Cry Primal, will be released in late February. Oddly enough, it has similar, but maybe slightly lower, system requirements. I'll list them, and highlight the differences.

Minimum:

  • 64-bit Windows 7, 8.1, or 10 (basically unchanged from 4)
  • Intel Core i3-550 (down from i5-750)
    • or AMD Phenom II X4 955 (unchanged from 4)
  • 4GB RAM (unchanged from 4)
  • 1GB NVIDIA GTX 460 (unchanged from 4)
    • or 1GB AMD Radeon HD 5770 (down from HD 5850)
  • 20GB HDD Space (down from 30GB)

Recommended:

  • Intel Core i7-2600K (up from i5-2400S)
    • or AMD FX-8350 (unchanged from 4)
  • 8GB of RAM (unchanged from 4)
  • NVIDIA GeForce GTX 780 (up from GTX 680)
    • or AMD Radeon R9 280X (down from R9 290X)

While the CPU situation is interesting, the opposing directions of the recommended GPUs are fascinating. Either the parts are within Ubisoft's QA margin of error, or they increased the GPU load but were able to optimize for AMD better than in Far Cry 4, for a net gain in performance (which would explain the slight bump in CPU power required to feed the extra content). Of course, either way is just a guess.

Back on the CPU topic, though, I would be interested to see the performance of Pentium Anniversary Edition parts. I wonder whether Ubisoft removed the two-thread lock, and, especially if hacks are still required, whether the game is playable anyway.

That is, in a month and a half.

Source: Ubisoft

Intel Pushes Device IDs of Kaby Lake GPUs

Subject: Graphics Cards, Processors | January 8, 2016 - 02:38 AM |
Tagged: Intel, kaby lake, linux, mesa

Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for shipping graphics drivers to catch up, which suggests that Kaby Lake will be released relatively soon.

intel-2015-linux-driver-mesa.png

It also gives us hints about what Kaby Lake will be. Of the published batch, there are six tiers of performance: GT1 has five IDs, GT1.5 has three, GT2 has six, GT2F has one, GT3 has three, and GT4 has four. Adding them up, we see that Intel plans 22 GPU devices. The Phoronix post lists what those device IDs are, but that is probably not interesting for our readers. Whether some of those devices overlap in performance or numbering is unclear, but it would make sense given how few SKUs Intel usually provides; I have zero experience in GPU driver development, though.
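As a quick sanity check on the 22-device figure, here are the per-tier counts from the Phoronix report tallied up:

```python
# Device-ID counts per Kaby Lake GPU tier, as reported by Phoronix.
ids_per_tier = {"GT1": 5, "GT1.5": 3, "GT2": 6, "GT2F": 1, "GT3": 3, "GT4": 4}

total_devices = sum(ids_per_tier.values())
print(total_devices)  # 22
```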

Source: Phoronix

Rumor: Intel and Xiaomi "Special Deal"

Subject: Processors, Mobile | January 6, 2016 - 10:56 PM |
Tagged: xiaomi, Intel, atom

So this rumor cites anonymous source(s) that leaked info to Digitimes. That said, it aligns with things that I've suspected in a few other situations. We'll discuss this throughout the article.

Xiaomi, a popular manufacturer of mobile devices, is breaking into the laptop space. One model was spotted on pre-order in China with an Intel Core i7 processor. According to the aforementioned leak, Intel has agreed to bundle an additional Intel Atom processor with every Core i7 that Xiaomi orders. Use Intel in a laptop, and they can use Intel in an x86-based tablet for no additional cost.

Single_grain_of_table_salt_(electron_micrograph).jpg

A single grain of salt...
Image Source: Wikipedia

While it's not an explicit practice, we've been seeing hints of similar initiatives for years now. A little over a year ago, Intel's mobile group reported revenues of ~$1 million, offset by ~$1 billion in losses. We have also seen phones like the ASUS ZenFone 2, which has amazing performance at a seemingly impossible $199 / $299 price point. I'm not going to speculate on what the actual relationships are, but it sounds more complicated than a listed price per tray.

And that's fine, of course. I know comments will claim the opposite: either that x86 is unsuitable for mobile devices, or that Intel is doing something shady. In my view, Intel seems to have products that they believe could change established mindsets if given a chance. Personally, I would be hesitant to get an x86-based developer phone, but that's because I would only want to purchase one, and I'd prefer to target the platform that the majority has. It's that type of inertia that probably frustrates Intel, but they can afford to compete against it.

It does make you wonder how long Intel plans to make deals like this -- again, if they exist.

Coverage of CES 2016 is brought to you by Logitech!

PC Perspective's CES 2016 coverage is sponsored by Logitech.

Follow all of our coverage of the show at http://pcper.com/ces!

Source: Digitimes

Photonic IC Created by University of Colorado Boulder

Subject: Processors | December 28, 2015 - 09:03 PM |
Tagged: optical, photonics

A typical integrated circuit pushes electrical voltage across pathways, with transistors and other components modifying it. When you interpret those voltages as mathematical values and logical instructions, then, congratulations, you have created a processor, memory, and so forth. You don't need to use electricity for this, though. In fact, Charles Babbage and Ada Lovelace made history with their attempts to perform computation on mechanical state.

ucolorado-2015-opticalic.jpg

Image Credit: University of Colorado
Chip contains optical (left) and electric (top and right) circuits.

One possible follow-up is photonic integrated circuits, which route light through optical waveguides rather than typical electrical traces. The prototype made by the University of Colorado Boulder (and UC Berkeley) seems to use photonics just for communication, with an electrical IC for the computation. The advantages are high bandwidth, high density, and low power.

This sort of technology has been investigated for several years. My undergraduate thesis in Physics involved computing light transfer through defects in a photonic crystal, using them to create 2D waveguides. With all the talk of silicon fabrication reaching its limits, as 14nm features are only around two-dozen atoms across, this could be a new direction for innovation.

And honestly, wouldn't you want to overclock your PC to 400+ THz? Make it go plaid for ludicrous speed. (Yes, this paragraph is a joke.)

Intel Adds New Processors to Broadwell and Skylake Lineups

Subject: Processors | December 28, 2015 - 07:00 AM |
Tagged: skylake-u, Skylake, mobile cpu, Intel, desktop cpu, core i7, core i5, core i3, Broadwell

As reported by CPU World, Intel has added a total of eight new processors to the 5th-gen “Broadwell” and 6th-gen “Skylake” CPU lineups, with new mobile and desktop models appearing in Intel’s price lists. The models include Core and Celeron parts, and range from dual-core (five with Hyper-Threading) to a new quad-core i5:

CPU_World_chart.png

Chart of new Intel models from CPU-World

“Intel today added 8 new Broadwell- and Skylake-based microprocessors to the official price list. New CPUs have unusual model numbers, like i5-6402P and i5-5200DU, which indicates that they may have different feature-set than the mainstream line of desktop and mobile CPUs. Intel also introduced today Celeron 3855U and 3955U ultra-low voltage models.”

It is unclear if the desktop models (Core i3-6098P, Core i5-6402P) listed will enter the retail channel, or if they are destined for OEM applications. The report points out that these models have a P suffix “that was used to signify the lack of integrated GPU in older generations of Core i3/i5 products. There is a good chance that it still means just that”.

Source: CPU-World
Manufacturer: AMD

May the Radeon be with You

In celebration of the release of The Force Awakens, as well as the new Star Wars Battlefront game from DICE and EA, AMD sent over some hardware for us to use in a system build, targeted at getting users up and running in Battlefront with impressive quality and performance, but still on a reasonable budget. Pairing an AMD processor, MSI motherboard, and Sapphire GPU with a low-cost chassis, SSD, and more, the combined system, including a FreeSync monitor, comes in at around $1,200.

swbf.jpg

Holiday breaks are MADE for Star Wars Battlefront

Though the holiday is already here and you'd be hard-pressed to build this system in time for it, I have a feeling that quite a few of our readers and viewers will find themselves with some cash and gift certificates in hand, just ITCHING for a place to invest in a new gaming PC.

The video above includes a list of components, the build process (in brief), and shows us getting our gaming on with Star Wars Battlefront. Interested in building a system similar to the one above on your own? Here's the hardware breakdown.

AMD Powered Star Wars Battlefront System

  • Processor: AMD FX-8370 - $197
  • Cooler: Cooler Master Hyper 212 EVO - $29
  • Motherboard: MSI 990FXA Gaming - $137
  • Memory: AMD Radeon Memory DDR3-2400 - $79
  • Graphics Card: Sapphire NITRO Radeon R9 380X - $266
  • Storage: SanDisk Ultra II 240GB SSD - $79
  • Case: Corsair Carbide 300R - $68
  • Power Supply: Seasonic 600 watt 80 Plus - $69
  • Monitor: AOC G2460PF 1920x1080 144Hz FreeSync - $259
  • Total Price: Full System (without monitor) - Amazon.com - $924
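If you want to double-check the math, the component prices from the table tally up like so:

```python
# Component prices from the table above, in USD.
parts = {
    "AMD FX-8370": 197,
    "Cooler Master Hyper 212 EVO": 29,
    "MSI 990FXA Gaming": 137,
    "AMD Radeon Memory DDR3-2400": 79,
    "Sapphire NITRO Radeon R9 380X": 266,
    "SanDisk Ultra II 240GB SSD": 79,
    "Corsair Carbide 300R": 68,
    "Seasonic 600 watt 80 Plus": 69,
}
monitor = 259  # AOC G2460PF

print(sum(parts.values()))            # 924, without the monitor
print(sum(parts.values()) + monitor)  # 1183, i.e. "around $1,200"
```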

For under $1,000, plus another $250 or so for the AOC FreeSync capable 1080p monitor, you can have a complete gaming rig for your winter break. Let's detail some of the specific components.

cpu.jpg

AMD sent over the FX-8370 processor for our build, a 4-module / 8-core CPU that runs at 4.0 GHz, more than capable of handling any gaming workload you can toss at it. And if you need to do some transcoding, video work, or, heaven forbid, school or productivity work, the FX-8370 has you covered there too.

cooler.jpg

For the motherboard, AMD sent over the MSI 990FXA Gaming, one of the newer AMD platforms that includes support for USB 3.1, so you'll have plenty of headroom for future expansion. The Cooler Master Hyper 212 EVO was our selection to keep the FX-8370 running smoothly, and 8GB of AMD Radeon DDR3 memory is enough for the system to keep applications and the Windows 10 operating system happy.

Continue reading about our AMD system build for Star Wars Battlefront!!