Manufacturer: Intel

When Magma Freezes Over...

Intel confirms that they have approached AMD about access to their Mantle API. The discussion, despite being clearly labeled as "an experiment" by an Intel spokesperson, was initiated by Intel -- not AMD. According to AMD's Gaming Scientist, Richard Huddy, via PCWorld, AMD's response was, "Give us a month or two" and "we'll go into the 1.0 phase sometime this year", a year which only has about five months left in it. When the API reaches 1.0, anyone who wants to participate (including hardware vendors) will be granted access.

AMD_Mantle_Logo.png

AMD inside Intel Inside???

I do wonder why Intel would care, though. Intel has the fastest per-thread processors, and their GPUs are not known to be workhorses that are held back by API call bottlenecks, either. That is not to say I cannot think of any reasons, however...

Read on to see why I think Intel might be interested, and what this means for the industry.

Battlefield Will Not Be Annualized Says Patrick Söderlund

Subject: Editorial, General Tech | June 17, 2014 - 07:54 PM |
Tagged: battlefield, medal of honor, ea

Last year, we got Battlefield 4. The year before? Medal of Honor: Warfighter. The year before? Battlefield 3. The year before? Medal of Honor (Reboot). We will not be getting a new Medal of Honor this year, because Danger Close was shut down in June 2013. Danger Close developed the two recent Medal of Honor titles and, as EA Los Angeles, many of the earlier Medal of Honor titles as well as several RTS games (Command and Conquer, Red Alert, Lord of the Rings: The Battle for Middle-Earth).

battlefield-hardline.jpg

Many of their employees are now working at DICE LA.

So, in the year when a new Medal of Honor title would normally arrive, we get Battlefield: Hardline instead. A person with decent pattern recognition might believe that Battlefield, or its spinoffs, would fill the gap left by Medal of Honor. Not so, according to Patrick Söderlund, Executive VP of EA Studios. The same message came out of E3, where both studios (DICE and Visceral) repeatedly claimed that Battlefield: Hardline was (literally) the product of a fluke encounter and pent-up excitement for cops and robbers.

Of course, they do not close the door on annualized Battlefield releases, either. They just say that it is not their plan to have that be "the way it's going to be forever and ever". Honestly, for all the hatred that annualized releases get, the problem is not the frequency. If EA can bring out a Battlefield title every year, and one that is consistently a good game, then more power to them. The problem is that, with an annual release cycle, it is hard to get success after success, especially when fatigue is an opposing and, more importantly, ever-increasing force.

It is the hard, but lucrative road.

Source: PC Gamer

Why would SanDisk buy Fusion-io for $1.1 Billion?

Subject: Editorial, Storage | June 17, 2014 - 09:56 AM |
Tagged: sandisk, fusion-io, buyout

Fusion-io was once a behemoth of flash memory storage. Back when SSDs were having a hard time saturating SATA 3Gb/sec, Fusion-io was making fire-breathing PCIe SSDs full of SLC flash and pushing relatively insane IOPS and throughput figures. Their innovations were a good formula at the time. They made the controller a very simple device, basically just a bridge from the PCIe bus to the flash memory, which meant that most of the actual work was done in the driver. As a result, Fusion-io SSDs were able to leverage the CPU and memory of the host system to achieve very high performance.

iops (2010).jpg

Fusion-io ioDrive 160 creams the competition back in 2010.

Being the king of IOPS back in the early days of flash memory storage, Fusion-io was able to charge a premium for their products. In a 2010 review, I priced their 160GB SSD at about $40/GB. In the years since, flash memory (and therefore SSDs) has steadily dropped in price while achieving higher and higher performance, yet Fusion-io products have mostly remained static in price. All of this time, the various iterations of the ioDrive continued to bank on the original model of a simple controller with the bulk of the work taking place in the driver. This carries a few distinct disadvantages, in that the host system has to spend a relatively large amount of CPU and memory resources on handling the Fusion-io devices. While this enables higher performance, it leaves fewer resources available to actually do something with the data. This ends up adding to the build cost of a system, as more CPU cores and memory must be thrown at the chassis handling the storage. In more demanding cases, extra systems would need to be added to the rack just to absorb the storage overhead on top of the other required workloads. Lastly, the hefty driver means Fusion-io devices are not bootable, despite early promises to the contrary. This isn't necessarily a deal breaker for enterprise use, but it does require system builders to add an additional storage device (from a different vendor) to handle OS duties.

iops (2014).png

In 2014, the other guys are making faster stuff. Note this chart is 4x the scale of the 2010 chart.

Let's fast forward to the present. Just over a week ago, Fusion-io announced their new 'Atomic' line of SSDs. The announcement seemed to fall flat, and did little to stem the continuing decline of their stock price. I suspect this was because, despite new leadership, these new products are just another iteration of the same resource-consuming formula. Another reason for the lukewarm reception might have been the fact that Intel launched their P3700 series a few days prior. The P3700 is a native PCIe SSD that employs the new NVM Express (NVMe) communication standard. This open standard was developed specifically for flash memory, and it allows more direct access to the flash in a manner that significantly reduces the overhead required to sustain high throughput and very high IOPS. NVMe is a very small driver stack with native support built into modern operating systems, and it is basically the polar opposite of the model Fusion-io has relied on for years now.
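
To illustrate what "native support" means in practice, here is a minimal sketch (mine, not Intel's): on a Linux box with the in-kernel NVMe driver, a drive like the P3700 simply enumerates as a standard block device (assumed here to be /dev/nvme0n1) and can be read with ordinary OS calls, with no vendor driver stack in between.

    import os

    DEVICE = "/dev/nvme0n1"   # hypothetical device node; adjust for your system
    BLOCK_SIZE = 4096         # a single 4K read, the same I/O size used in the specs below

    # Requires root; the point is that the kernel's generic block layer is all you need.
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        data = os.read(fd, BLOCK_SIZE)
        print("Read %d bytes from %s" % (len(data), DEVICE))
    finally:
        os.close(fd)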

NVMe.png

Intel's use of NVMe enables very efficient access to flash memory with minimal CPU overhead.

Fusion-io's announcement claimed that "The Atomic Series of ioMemory delivers the highest transaction rate per gigabyte for everything from read intensive workflows to mixed workloads." Let's see how this stacks up against the Intel P3700 - an SSD that launched the same week:



Model                    |          Fusion-io PX600            |            Intel P3700
Capacity (TB)            | 1.0      1.3      2.6      5.2      | 0.4      0.8      1.6      2.0
Interface / Flash type   |      PCIe 2.0 x8 / 20nm MLC         |      PCIe 3.0 x4 / 20nm MLC
Read BW (GB/sec)         | 2.7      2.7      2.7      2.7      | 2.7      2.8      2.8      2.8
Write BW (GB/sec)        | 1.5      1.7      2.2      2.1      | 1.2      1.9      1.9      1.9
4K random read IOPS      | 196,000  235,000  330,000  276,000  | 450,000  460,000  450,000  450,000
Read transactions/GB     | 196      181      127      53       | 1,125    575      281      225
4K random write IOPS     | 320,000  370,000  375,000  375,000  | 75,000   90,000   150,000  175,000
Write transactions/GB    | 320      285      144      72       | 188      113      94       88
4K 70/30 R/W IOPS        | Unlisted                            | 150,000  200,000  240,000  250,000
Read latency             | 92us                                | 20/115us
Write latency            | 15us                                | 20/25us
Endurance (PBW)          | 12       16       32       64       | 7.3      14.6     29.2     36.5
Endurance (PBW) per TB   | 12.0     12.3     12.3     12.3     | 18.3     18.3     18.3     18.3
Cost                     | Unlisted                            | $1,207   $2,414   $4,828   $6,035
Cost/GB                  | Unlisted                            | $3.02    $3.02    $3.02    $3.02
Warranty                 | 5 years                             | 5 years

Source: Fusion-io / Intel

We are comparing flagship to flagship (in a given form factor) here. Starting from the top, the Intel P3700 is available in generally smaller capacities than the Fusion-io PX600. Both use 20nm MLC flash, but the P3700 gets there with half the PCIe lanes running at roughly twice the per-lane throughput (PCIe 3.0 x4 vs. PCIe 2.0 x8). Regarding Fusion-io's 'transaction rate per GB' point, well, it's mostly debunked by the Intel P3700, which has excellent random read performance all the way down to its smallest 400GB capacity point. The seemingly unreal write specs seen from the PX600 are, well, actually unreal. Flash memory writes take longer than reads, so the only logical explanation for the inversion we see here is that Fusion-io's driver is passing those random writes through host RAM first. Writing to RAM might be quicker, but you can't sustain it indefinitely, and it consumes more host system resources in the process. Moving further down the chart, we see Intel coming in with a ~50% higher endurance rating than the Fusion-io. The warranties may be of equal duration, but the Intel drive is (on paper / stated warranty) guaranteed to outlast the Fusion-io part when used in a heavy write environment.
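
As a sanity check on the 'transaction rate per GB' marketing metric, the transactions/GB rows in the table are simply the rated 4K random IOPS divided by capacity in GB. A quick sketch (treating 1 TB as 1,000 GB, as the spec sheets appear to) reproduces the figures:

    def transactions_per_gb(iops, capacity_tb):
        # 'Transactions per GB' is just rated IOPS spread across the drive's capacity.
        return iops / (capacity_tb * 1000.0)

    # Spot-checking a few read entries from the table above:
    print(round(transactions_per_gb(196000, 1.0)))   # PX600 1.0TB  -> 196
    print(round(transactions_per_gb(450000, 0.4)))   # P3700 400GB  -> 1125
    print(round(transactions_per_gb(276000, 5.2)))   # PX600 5.2TB  -> 53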

For pricing, Intel launched the P3700 at a competitive $3/GB. Pricing data for Fusion-io is not available, as they are behind a bit of a 'quote wall', and no pricing at all was included with the Atomic product launch press materials. Let's take a conservative guess and assume the new line is half the cost/GB of their previous long-standing flagship, the Octal. One vendor lists pricing directly at $124,995 for 10.24TB ($12.21/GB) and $99,995 for 5.12TB ($19.53/GB), both of which require minimum support contracts as an additional cost. Half of $12/GB is still more than twice the $3/GB figure from Intel.
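
Putting rough numbers on that guess (the 'half price' figure is purely my assumption; the list prices are the ones quoted above, with 1 TB treated as 1,000 GB):

    octal_10tb = 124995 / (10.24 * 1000)   # ~ $12.21/GB, listed Octal 10.24TB price
    octal_5tb  =  99995 / (5.12 * 1000)    # ~ $19.53/GB, listed Octal 5.12TB price
    p3700_2tb  =   6035 / 2000.0           # ~ $3.02/GB, Intel P3700 2.0TB at launch

    guessed_atomic = octal_10tb / 2        # ~ $6.10/GB, still about double Intel's figure
    print("%.2f %.2f %.2f %.2f" % (octal_10tb, octal_5tb, p3700_2tb, guessed_atomic))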

My theory as to why SanDisk is going for Fusion-io?

  • A poor track record since the Fusion-io IPO has driven the stock price way down, making it prime for a buyout.
  • SanDisk is one of the few remaining flash memory companies that does not own its own high-end controller tech.
  • Fusion-io's recent product launch was overshadowed by a much larger company (Intel) launching a superior competing product at a lower cost/GB.

So yeah, the buyout seemed inevitable. The question that remains is what SanDisk will do with them once the deal closes. Merging the two means that Fusion-io can include 'in house' flash and (hopefully) offer their products at a lower cost/GB, but that can only succeed if the SanDisk flash performs adequately. Assuming it does, there's still the issue of relatively high costs when compared to freshly competing products from Intel and others. Last but not least is the ioDrive driver model, which grows increasingly dated while the rest of the industry adopts NVMe.

AMD Restructures. Lisa Su Is Now COO.

Subject: Editorial, General Tech, Graphics Cards, Processors, Chipsets | June 13, 2014 - 06:45 PM |
Tagged: x86, restructure, gpu, arm, APU, amd

According to VR-Zone, AMD reworked their business last Thursday, sorting each of their projects into two divisions and moving some executives around. The company is now segmented into the "Enterprise, Embedded, and Semi-Custom Business Group" and the "Computing and Graphics Business Group". The company used to be divided into three segments: "Computing Solutions", which handled CPUs, APUs, chipsets, and so forth; "Graphics and Visual Solutions", which is best known for GPUs but also contains console royalties; and "All Other", which was... everything else.

amd-new2.png

Lisa Su, former general manager of global business, has moved up to Chief Operating Officer (COO), along with other changes.

This restructure makes sense for a couple of reasons. First, it pairs some unprofitable ventures with other, highly profitable ones. AMD's graphics division has been steadily adding profit to the company while its CPU division has been mostly losing money. Secondly, "All Other" is about as nebulous as a name can get. Instead of having three unbalanced divisions, one of which makes no sense to someone glancing at AMD's quarterly earnings reports, they should now have two roughly equal segments.

At the very least, it should look better to an uninformed investor. Someone who does not know the company might look at the sheet and assume that, if AMD divested from everything except graphics, the company would be profitable. That is, if they did not know that console contracts landed in the graphics division because the computing division had x86 APUs to offer, and so forth. The new structure is aligned to customers, not products.

Source: VR-Zone

Podcast #304 - GeForce GTX TITAN Z, Core i7-4790K, Gigabyte Z97X-SOC Force and more!

Subject: Editorial | June 12, 2014 - 02:28 PM |
Tagged: Z97X-SOC Force, video, titan z, radeon, project tango, podcast, plextor, nvidia, Lightning, gtx titan z, gigabyte, geforce, E3 14, amd, 4790k, 290x

PC Perspective Podcast #304 - 06/12/2014

We have lots of reviews to talk about this week including the GeForce GTX TITAN Z, Core i7-4790K, Gigabyte Z97X-SOC Force, E3 News and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom and Allyn Maleventano

Program length: 1:11:36

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Join the PC Perspective Team and Austin Evans LIVE on Tonight's Podcast!

Subject: Editorial | June 4, 2014 - 07:42 PM |
Tagged: video, pcper, live, austin evans

Tonight's live edition of the PC Perspective Podcast is going to have a special guest, the Internet's Austin Evans. You likely know of Austin through his wildly popular YouTube channel or maybe his dance moves.

But seriously, Austin Evans is a great guy with a lot of interesting input on technology. Stop by our live page at http://www.pcper.com/live at 10pm ET / 7pm PT for all the fun!

Make sure you don't miss it by signing up for our PC Perspective Live Mailing List!

pcperlive.png

Source: PCPer Live!

TrueCrypt Taken Offline Doesn't Pass My Smell Test

Subject: Editorial, General Tech | May 29, 2014 - 02:17 AM |
Tagged: TrueCrypt

It should not pass anyone's smell test but it apparently does, according to tweets and other articles. Officially, the TrueCrypt website (which redirects to their SourceForge page) claims that, with the end of Windows XP support (??), the TrueCrypt development team wants users to stop using their software. Instead, they suggest a switch to BitLocker, Mac OSX built-in encryption, or whatever random encryption suite comes up when you search your Linux distro's package manager (!?). Not only that, but several versions of Windows (such as 7 Home Premium) do not have access to BitLocker. Lastly, none of these are a good solution for users who want a single encrypted container across multiple OSes.

A new version (don't use it!!!), TrueCrypt 7.2, was released and signed with the developers' private signing key.

TrueCrypt_Logo.png

The developers have not denied the end of support, nor its full-of-crap reason. (Seriously, because Microsoft ended Windows XP support almost two months ago, they pull the plug on a two-year-old version now?)

They have also not confirmed it. They have been missing since at least "the announcement" (or earlier, if they were not the ones who made it). Going missing and unreachable on the day of your supposedly gigantic resignation announcement does not support the validity of that announcement.

To me, that is about as unconfirmed as you can get.

Still, people are believing the claims that TrueCrypt 7.1a is not secure. The version has been around since February 2012 and, beyond people looking at its source code, has passed a significant portion of a third-party audit. Even if you believe the website, it only says that TrueCrypt will not be updated for security. It does not say that TrueCrypt 7.1a is vulnerable to any known attack.

In other words, the version that has been good enough for over two years, and several known cases of government agencies being unable to penetrate it, is probably as secure today as it was last week.

"The final version", TrueCrypt 7.2, is a decrypt-only solution. It allows users to unencrypt existing vaults, although who knows what else it does, to move it to another solution. The source code changes have been published, and they do not seem shady so far, but since we cannot even verify that their private key has not leaked, I wouldn't trust it. A very deep compromise could make finding vulnerabilities very difficult.

So what is going on? Who knows. One possibility is that they were targeted for a very coordinated hack, one which completely owned them and their private key, performed by someone(s) who spent a significant amount of time modifying a fake 7.2 version. Another possibility is that they were legally gagged and forced to shut down operations, but they managed to negotiate a method for users to decrypt existing data with a neutered build.

One thing is for sure: if this is a GOG-style publicity stunt, I will flip a couple of tables.

We'll see. ┻━┻ ¯\_(ツ)_/¯ ┻━┻

Source: TrueCrypt
Manufacturer: Various

The AMD Argument

Earlier this week, a story was posted on a Forbes.com blog that dove into NVIDIA GameWorks and how it was doing a disservice not just to the latest Ubisoft title, Watch_Dogs, but to PC gamers in general. Using quotes from AMD directly, the author claims that NVIDIA is actively engaging in methods to prevent game developers from optimizing games for AMD graphics hardware. This is an incredibly bold statement and one that I hope AMD is not making lightly. Here is a quote from the story:

Gameworks represents a clear and present threat to gamers by deliberately crippling performance on AMD products (40% of the market) to widen the margin in favor of NVIDIA products. . . . Participation in the Gameworks program often precludes the developer from accepting AMD suggestions that would improve performance directly in the game code—the most desirable form of optimization.

The example cited in the Forbes story is the recently released Watch_Dogs title, which appears to show favoritism towards NVIDIA GPUs, with the performance of the GTX 770 ($369) coming close to that of a Radeon R9 290X ($549).

It's evident that Watch Dogs is optimized for Nvidia hardware but it's staggering just how un-optimized it is on AMD hardware.

watch_dogs_ss9_99866.jpg

Watch_Dogs is the latest GameWorks title released this week.

I decided to get in touch with AMD directly to see exactly what stance the company was attempting to take with these kinds of claims. No surprise, AMD was just as forward with me as they appeared to be in the Forbes story originally.

The AMD Stance

Central to AMD's latest annoyance with the competition is the NVIDIA GameWorks program. First unveiled last October during a press event in Montreal, GameWorks combines several NVIDIA-built engine functions into libraries that game developers can access to build advanced features into their games. NVIDIA's website claims that GameWorks is "easy to integrate into games" while also including tutorials and tools to help quickly generate content with the software set. Included in the GameWorks suite are tools like VisualFX, which offers rendering solutions like HBAO+, TXAA, Depth of Field, FaceWorks, HairWorks and more. Physics tools include the obvious, like PhysX, while also adding clothing, destruction, particles and more.

Continue reading our editorial on the verbal battle between AMD and NVIDIA about the GameWorks program!!

Mozilla Firefox to Implement Adobe DRM for Video

Subject: Editorial, General Tech | May 14, 2014 - 09:56 PM |
Tagged: ultraviolet, mozilla, DRM, Adobe Access, Adobe

Needless to say, DRM is a controversial topic and I am clearly against it. I do not blame Mozilla. The non-profit organization responsible for Firefox knew that they could not oppose Chrome, IE, and Safari while being a consumer software provider. I do not even blame Apple, Google, and Microsoft for their decisions, either. This problem is much bigger and it comes down to a total misunderstanding of basic mathematics (albeit at a ridiculously abstract and applied level).

22-mozilla-2.jpg

Simply put, piracy figures are meaningless. They are a measure of how many people use content without paying (assuming they are even accurate). You know what is more useful? Sales figures. Piracy figures are measurements, dependent variables, and so is revenue. Measurements cannot influence other measurements. Specifically, measurements cannot influence anything because they are, themselves, the result of influences. That is what "a measure" is.

Implementing DRM is not a measurement, however. It is a controllable action whose influence can be recorded. If you implement DRM and your sales go down, it hurt you. You may notice piracy figures decline. However, you should be too busy to care because you should be spending your time trying to undo the damage you did to your sales! Why are you looking at piracy figures when you're bleeding money?

I have yet to see a DRM implementation that correlated with an increase in sales. I have, however, seen some which correlate to a massive decrease.

The thing is, Netflix might know that, and I am pretty sure that some of the web browser companies know that. They do not necessarily want to implement DRM. What they want is content and, surprise, the people who are in charge of the content are definitely not enlightened to that logic. I am not even sure they realize that content which gets pirated before its release date is, by definition, not being leaked by end users.

But whatever. Technical companies, who want that content available on their products, are stuck finding a way to appease those content companies in a way that damages their users and shrinks their potential market the least. For Mozilla, this means keeping as much open as possible.

do-not-hurt-2.jpg

Since Mozilla does not have existing relationships with Hollywood, Adobe Access will be the actual mechanism for decrypting and displaying the video. They are clear to note that this only applies to video. They believe their existing relationships in text, images, and games will prevent the disease from spreading. This is basically a plug-in architecture with a sandbox that is open source and as strict as possible.

This sandbox is intended to prevent a security vulnerability in the DRM module from reaching the host system, to give a way to control the module's performance if it hitches, and to keep the DRM from querying the machine for identifying information used in authentication. The last part is something they wanted to highlight, because it shows their effort to protect the privacy of their users. They also imply a method for users to opt out, but did not go into specifics.

As an aside, Adobe will support their Access DRM software on Windows, Mac, and Linux. Mozilla is pushing hard for Android and Firefox OS, too. According to Adobe, Access DRM is certified for use with UltraViolet content.

I accept Mozilla's decision to join everyone else but I am sad that it came to this. I can think of only two reasons for including DRM: for legal (felony) "protection" under the DMCA or to make content companies feel better while they slowly sink their own ships chasing after numbers which have nothing to do with profits or revenue.

Ultimately, though, they made a compromise. That is always how we stumble and fall down slippery slopes. I am disappointed but I cannot suggest a better option.

Source: Mozilla

Mozilla Makes Suggestions to the FCC about Net Neutrality

Subject: Editorial, General Tech | May 5, 2014 - 08:08 PM |
Tagged: mozilla, net neutrality

Recently, the FCC has been moving to back away from Net Neutrality. Mozilla, being dedicated to the free (as in speech) and open internet, has offered a simple compromise. Their proposal is that the FCC classify internet service providers (ISPs) as common carriers on the server side, imposing restrictions that prevent them from discriminating against traffic bound for customers, while allowing them to remain "information services" on the consumer side.

mozilla-fcc.png

In other words, force ISPs to give services unrestricted access to consumers, without flipping unnecessary tables with content distribution (TV, etc.) services. Like every possibility raised so far, though, it could have some consequences.

"Net Neutrality" is a hot issue lately. Simply put, the internet gives society an affordable method of sharing information. How much is "just information" is catching numerous industries off guard, including ones which Internet Service Providers (ISPs) participate in (such as TV and Movie distribution), and that leads to serious tensions.

On the one hand, these companies want to protect their existing business models. They want consumers to continue to select their cable and satellite TV packages, on-demand videos, and other services at controlled profit margins and without the stress and uncertainty of competing.

On the other hand, if the world changes, they want to be the winner in that new reality. Yikes.

mozilla-UP.jpg

A... bad... photograph of Mozilla's "UP" anti-datamining proposal.

Mozilla's proposal is very typical of them. They tend to propose compromises which divide an issue such that both sides get most of what they need. Another good example is "UP", or User Personalization, which tries to cut down on data mining by giving the browser a way to tell websites what they actually want to know (and letting the user tell the browser how much to share). The user compromises by giving up the amount of information they find acceptable, so the website can compromise and take only what it needs (rather than developing methods to grab anything and everything it can). It feels like a similar thing is happening here. This proposal gives users what they want, freedom to choose services without restriction, without tossing ISPs into "Title II" common carrier status altogether.

Of course, this probably comes with a few caveats...

The first issue that pops into my mind is, "What is a server?" I see this causing problems for peer-to-peer applications (including BitTorrent Sync and CrashPlan, excluding CrashPlan Central). Neither endpoint would necessarily be classified as "a server", or at least it would be hard to convince a non-technical lawmaker otherwise, and thus ISPs would not need to apply common carrier restrictions to that traffic. This could be a serious issue for WebRTC. Even worse, companies like Google and Netflix would have no incentive to help fight those battles -- they're legally protected. It would have to be defined, very clearly, what makes "a server".

Every approach will get messy for someone. Still, at least the discussion is happening.

Source: Mozilla