Since the introduction of the Haswell line of CPUs, the Internet has been aflame with reports of how hot the chips run. Speculation ran rampant about the cause, with theories centering on the smaller die area and the inferior thermal interface material (TIM) between the CPU die surface and the underside of the CPU heat spreader. Intel later confirmed that it had changed the TIM joining the die to the heat spreader with Haswell, leading to the hotter than expected CPU temperatures. This increase in temperature led to inconsistent core-to-core temperatures as well as vastly inferior overclockability of the Haswell K-series chips compared to previous generations.
A few of the more adventurous enthusiasts took it upon themselves to use inventive ways to address the heat concerns surrounding the Haswell by delidding the processor. The delidding procedure involves physically removing the heat spreader from the CPU, exposing the CPU die. Some individuals choose to clean the existing TIM from the core die and heat spreader underside, applying superior TIM such as metal or diamond-infused paste or even the Coollaboratory Liquid Ultra metal material and fixing the heat spreader back in place. Others choose a more radical solution, removing the heat spreader from the equation entirely for direct cooling of the naked CPU die. This type of cooling method requires use of a die support plate, such as the MSI Die Guard included with the MSI Z97 XPower motherboard.
Whichever approach you choose, you must first remove the heat spreader from the CPU's PCB. The heat spreader is fixed in place with a black RTV-type sealant that ensures a secure, air-tight seal, protecting the fragile die from outside contaminants. Removal can be done in multiple ways, with two of the most popular being the razor blade method and the vise method. With both methods, you are attempting to separate the CPU PCB from the heat spreader without damaging the CPU die or the components on the top or bottom sides of the PCB.
Subject: Editorial, General Tech, Memory | August 20, 2014 - 01:08 PM | Jeremy Hellstrom
Tagged: Haswell-E, G.Skill, ddr4-2800, ddr4-2666, ddr4-2400, ddr4-2133, ddr4, crucial, corsair
DDR4 is starting to arrive at NewEgg and some kits are actually in stock for those who want to be the first on their block to have these new DIMMs and can remortgage their home. The pricing of Haswell-E CPUs and motherboards has not yet been announced, but looking over the past few years of Intel's processor launches you can assume the flagship processor will be around $999.99, with the feature-rich motherboards starting around $200 and rising quickly from there.
At the 16GB mark you have more choices, with Corsair joining in and a range of speeds that goes up to DDR4-2800, as well as your choice of a pair of 8GB DIMMs or four 4GB DIMMs. Corsair was kind enough to list the timings: DDR4-2666 @ 15-17-17-35 and DDR4-2800 @ 16-18-18-36, though you will certainly pay a premium for the highest-frequency kits.
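Looser timings at a higher data rate do not necessarily mean slower memory in absolute terms. As a rough illustration (a back-of-the-envelope sketch using the two Corsair kits above and the standard first-word latency formula, not anything from Corsair's spec sheets), the two kits land within a fraction of a nanosecond of each other:

```python
# Back-of-the-envelope first-word CAS latency for DDR memory.
# The I/O clock runs at half the data rate (MT/s), so one clock cycle lasts
# 2000 / data_rate nanoseconds, and CAS latency = CL cycles * cycle time.
def cas_latency_ns(data_rate_mts: float, cas_cycles: int) -> float:
    return cas_cycles * 2000.0 / data_rate_mts

kits = {
    "DDR4-2666 CL15": (2666, 15),
    "DDR4-2800 CL16": (2800, 16),
}

for name, (rate, cl) in kits.items():
    print(f"{name}: ~{cas_latency_ns(rate, cl):.2f} ns")
# DDR4-2666 CL15: ~11.25 ns
# DDR4-2800 CL16: ~11.43 ns
```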
For those on a budget it would seem that waiting is your best choice, especially as Amazon is offering only a limited selection of the new kits: there is just a single 8GB kit from Crucial, although you can buy two of the single DIMMs without heatspreaders for $110.
Intel product launches are always dearly priced, and the introduction of a new generation of RAM is both exciting and daunting. You will see power reductions, base frequencies that were uncommon in DDR3, and very likely an increase in the ability to overclock these DIMMs, but it is going to cost you. If Haswell-E is in your sights, you should start planning how to afford replacing your CPU, motherboard, and RAM at the same time; this is no refresh, this is a whole new product line.
Subject: Editorial, General Tech, Shows and Expos | July 23, 2014 - 01:43 PM | Ryan Shrout
Tagged: workshop, video, streaming, quakecon, prizes, live, giveaways
UPDATE: The event is over, but the video is embedded below if you want to see the presentations! Thanks again to everyone that attended and all of our sponsors!
It is that time of year again: another installment of the PC Perspective Hardware Workshop! Once again we will be presenting on the main stage at Quakecon 2014 being held in Dallas, TX July 17-20th.
Main Stage - Quakecon 2014
Saturday, July 19th, 12:00pm CT
Our thanks go out to the organizers of Quakecon for allowing us and our partners to put together a show that we are proud of every year. We love giving back to the community of enthusiasts and gamers that drive us to do what we do! Get ready for 2 hours of prizes, games and raffles and the chances are pretty good that you'll take something out with you - really, they are pretty good!
Our primary partners at the event are those that threw in for our ability to host the workshop at Quakecon and for the hundreds of shirts we have ready to toss out! Our thanks to NVIDIA, Seasonic and Logitech!!
If you can't make it to the workshop - don't worry! You can still watch the workshop live on our live page as we stream it over one of several online services. Just remember this URL: http://pcper.com/live and you will find your way!
PC Perspective LIVE Podcast and Meetup
We are planning on hosting any fans that want to watch us record our weekly PC Perspective Podcast (http://pcper.com/podcast) on Wednesday or Thursday evening in our meeting room at the Hilton Anatole. I don't yet know exactly WHEN or WHERE the location will be, but I will update this page accordingly on Wednesday July 16th when we get the data. You might also consider following me on Twitter for updates on that status as well.
After the recording, we'll hop over to the hotel bar for a couple of drinks and hang out. We have room for at least 50-60 people to join us in the room, but we'll still be recording if just ONE of you shows up. :)
Prize List (will continue to grow!)
When Magma Freezes Over...
Intel confirms that they have approached AMD about access to their Mantle API. The discussion, despite being clearly labeled as "an experiment" by an Intel spokesperson, was initiated by them -- not AMD. According to AMD's Gaming Scientist, Richard Huddy, via PCWorld, AMD's response was, "Give us a month or two" and "we'll go into the 1.0 phase sometime this year" which only has about five months left in it. When the API reaches 1.0, anyone who wants to participate (including hardware vendors) will be granted access.
AMD inside Intel Inside???
I do wonder why Intel would care, though. Intel has the fastest per-thread processors, and their GPUs are not known to be workhorses that are held back by API call bottlenecks, either. Of course, that is not to say that I cannot see any reason, however...
Subject: Editorial, General Tech | June 17, 2014 - 04:54 PM | Scott Michaud
Tagged: battlefield, medal of honor, ea
Last year, we got Battlefield 4. The year before? Medal of Honor: Warfighter. The year before? Battlefield 3. The year before? Medal of Honor (Reboot). We will not be getting a new Medal of Honor this year, because Danger Close was shut down in June 2013. Danger Close developed the two recent Medal of Honor titles and, as EA Los Angeles, many of the previous Medal of Honor titles and many RTS games (Command and Conquer, Red Alert, Lord of the Rings: The Battle for Middle-Earth).
Many of their employees are now working at DICE LA.
So, in the year a new Medal of Honor title would normally be released, we get Battlefield: Hardline. A person with decent pattern recognition might believe that Battlefield, or its spinoffs, would fill the gap left by Medal of Honor. Not so, according to Patrick Söderlund, Executive VP of EA Studios. The same line was given at E3, where both studios (DICE and Visceral) repeatedly claimed that Battlefield: Hardline was the product (literally) of a fluke encounter and pent-up excitement for cops and robbers.
Of course, they do not close the door on annualized Battlefield releases, either. They just say that it is not their plan to have that be "the way it's going to be forever and ever". Honestly, for all the hatred that annualized releases get, the problem is not the frequency. If EA can bring out a Battlefield title every year, and one that is consistently a good game, then more power to them. The problem is that, with an annual release cycle, it is hard to get success after success, especially when fatigue is an opposing, and (more importantly) ever-increasing, force.
It is the hard, but lucrative road.
Subject: Editorial, Storage | June 17, 2014 - 06:56 AM | Allyn Malventano
Tagged: sandisk, fusion-io, buyout
Fusion-io was once a behemoth of flash memory storage. Back when SSDs were having a hard time saturating SATA 3Gb/sec, Fusion-io was making fire-breathing PCIe SSDs full of SLC flash and pushing relatively insane IOPS and throughput figures. Their innovation was a good formula at the time: they made the controller a very simple device, basically just a bridge from the PCIe bus to the flash memory, which meant that most of the actual work was done in the driver. As a result, Fusion-io SSDs were able to leverage the CPU and memory of the host system to achieve very high performance.
Fusion-io ioDrive 160 creams the competition back in 2010.
Being the king of IOPS back in the early days of flash memory storage, Fusion-io was able to charge a premium for their products. In a 2010 review, I priced their 160GB SSD at about $40/GB. In the years since, flash memory prices (and therefore SSD prices) have steadily dropped while performance has climbed higher and higher, yet Fusion-io products have mostly remained static in price. All this time, the various iterations of the ioDrive continued to bank on the original model of a simple controller with the bulk of the work taking place in the driver. This carries a few distinct disadvantages, in that the host system has to spend a relatively large amount of CPU and memory resources on handling the Fusion-io devices. While this enables higher performance, it leaves fewer resources available to actually do anything with the data. It also adds to the build cost of a system, as more CPU cores and memory must be thrown at the chassis handling the storage. In more demanding cases, additional systems would need to be added to the rack in order to handle the storage overhead on top of the other required workloads. Lastly, the hefty driver means Fusion-io devices are not bootable, despite early promises to the contrary. This isn't necessarily a deal breaker for enterprise use, but it does require system builders to add an additional storage device (from a different vendor) to handle OS duties.
In 2014, the other guys are making faster stuff. Note this chart is 4x the scale of the 2010 chart.
Let's fast-forward to the present. Just over a week ago, Fusion-io announced their new 'Atomic' line of SSDs. The announcement seemed to fall flat, and did little to halt the continuing decline of their stock price. I suspect this was because, despite new leadership, these new products are just another iteration of the same resource-consuming formula. Another reason for the lukewarm reception might have been that Intel launched their P3700 series a few days prior. The P3700 is a native PCIe SSD that employs the new NVM Express (NVMe) communication standard. This open standard was developed specifically for flash memory, and it allows more direct access to the flash in a manner that significantly reduces the overhead required to sustain high throughput and very high IOPS. NVMe is a very small driver stack with native support built into modern operating systems, and is basically the polar opposite of the model Fusion-io has relied on for years now.
Intel's use of NVMe enables very efficient access to flash memory with minimal CPU overhead.
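To give a sense of how thin that native stack is on the host side: the Linux in-box NVMe driver simply exposes each controller under sysfs, so basic device identification takes nothing more than reading a few files. A minimal sketch (assuming a Linux host with at least one NVMe drive enumerated; no vendor driver or tooling involved):

```python
# Minimal sketch: list NVMe controllers registered by the Linux in-box
# driver via sysfs and print their identity strings. Attribute names
# (model, serial, firmware_rev) are the stock kernel ones; device names
# such as nvme0 depend on the host.
from pathlib import Path

def read_attr(dev: Path, name: str) -> str:
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "n/a"

nvme_class = Path("/sys/class/nvme")
if not nvme_class.is_dir():
    print("No NVMe class found (non-Linux host or no NVMe driver loaded)")
else:
    for dev in sorted(nvme_class.iterdir()):
        print(dev.name, read_attr(dev, "model"),
              read_attr(dev, "serial"), read_attr(dev, "firmware_rev"))
```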
Fusion-io's announcement claimed that "The Atomic Series of ioMemory delivers the highest transaction rate per gigabyte for everything from read intensive workflows to mixed workloads." Let's see how that claim stacks up against the Intel P3700 - an SSD that launched the same week:
|Model|Fusion-io PX600|Intel P3700|
|---|---|---|
|Interface / Flash type|PCIe 2.0 x8 / 20nm MLC|PCIe 3.0 x4 / 20nm MLC|
|Read BW (GB/sec)|2.7 / 2.7 / 2.7 / 2.7|2.7 / 2.8 / 2.8 / 2.8|
|Write BW (GB/sec)|1.5 / 1.7 / 2.2 / 2.1|1.2 / 1.9 / 1.9 / 1.9|
|4K random read IOPS|196,000 / 235,000 / 330,000 / 276,000|450,000 / 460,000 / 450,000 / 450,000|
|4K random write IOPS|320,000 / 370,000 / 375,000 / 375,000|75,000 / 90,000 / 150,000 / 175,000|
|4K 70/30 R/W IOPS|Unlisted|150,000 / 200,000 / 240,000 / 250,000|
|Endurance (PB written per TB)|12.0 / 12.3 / 12.3 / 12.3|18.3 / 18.3 / 18.3 / 18.3|
|Warranty|5 years|5 years|

(Where four figures are listed, they correspond to each drive's four capacities, smallest to largest.)
We are comparing flagship to flagship (in a given form factor) here. Starting from the top, the Intel P3700 is available in generally smaller capacities than the Fusion-io PX600. Both use 20nm flash, but the P3700 uses half the data lanes at twice the throughput. Regarding Fusion-io's 'transaction rate per GB' point, well, it's mostly debunked by the Intel P3700, which has excellent random read performance all the way down to its smallest 400GB capacity point. The seemingly unreal write specs seen from the PX600 are, well, actually unreal. Flash memory writes take longer than reads, so the only logical explanation for the inversion we see here is that Fusion-io's driver is passing those random writes through RAM first. Writing to RAM might be quicker, but you can't sustain it indefinitely, and it consumes more host system resources in the process. Moving further down the chart, we see Intel coming in with a ~50% higher endurance rating when compared to the Fusion-io. The warranties may be of equal duration, but the Intel drive is (on paper / stated warranty) guaranteed to outlast the Fusion-io part when used in a heavy write environment.
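To put rough numbers on that inversion, here is a quick sanity check (a sketch using only the spec-sheet figures from the table above, assuming 4 KiB per I/O) that converts the 4K random ratings into implied bandwidth:

```python
# Convert the 4K random IOPS ratings above into implied bandwidth (decimal
# GB/s) to show why the PX600's random writes outrunning its random reads
# points at write-back caching in host RAM rather than raw flash speed.
KIB = 1024

def iops_to_gbps(iops: int, io_size: int = 4 * KIB) -> float:
    return iops * io_size / 1e9

specs = {
    "PX600 best random read":  330_000,
    "PX600 best random write": 375_000,
    "P3700 best random read":  460_000,
    "P3700 best random write": 175_000,
}

for label, iops in specs.items():
    print(f"{label}: {iops:,} IOPS ~= {iops_to_gbps(iops):.2f} GB/s")
# The PX600's random-write rating implies ~1.5 GB/s of sustained 4K writes,
# a large fraction of its sequential write spec -- easy for RAM, optimistic
# for MLC flash.
```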
For pricing, Intel launched the P3700 at a competitive $3/GB. Pricing data for Fusion-io is not available, as they sit behind a bit of a 'quote wall', and no pricing at all was included with the Atomic product launch press materials. Let's take a conservative guess and assume the new line is half the cost/GB of their previous long-standing flagship, the Octal. One vendor lists pricing directly at $124,995 for 10.24TB ($12.21/GB) and $99,995 for 5.12TB ($19.53/GB), both of which require minimum support contracts as an additional cost. Half of $12/GB is still more than twice the $3/GB figure from Intel.
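For anyone who wants to check the arithmetic, a small sketch using the vendor listings quoted above (the 'half the Octal' figure is the same conservative guess, not a published Atomic price):

```python
# Back-of-the-envelope cost/GB comparison using the prices quoted above.
# Capacities are treated as decimal terabytes (1 TB = 1000 GB), matching
# the vendor listings.
listings = {
    "Fusion-io Octal 10.24TB": (124_995, 10.24e3),   # price USD, capacity GB
    "Fusion-io Octal 5.12TB":  (99_995, 5.12e3),
}

for name, (price, gb) in listings.items():
    print(f"{name}: ${price / gb:.2f}/GB")

octal_low = 124_995 / 10.24e3    # ~ $12.21/GB
atomic_guess = octal_low / 2     # conservative guess: half the Octal's cost/GB
intel_p3700 = 3.00               # Intel's launch figure, $/GB
print(f"Guessed Atomic cost/GB: ~${atomic_guess:.2f} vs Intel P3700 ${intel_p3700:.2f}")
# Even at half the Octal's rate (~$6.10/GB), the guess sits at roughly
# twice Intel's $3/GB launch price.
```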
My theory as to why SanDisk is going for Fusion-io?
- A poor track record since the Fusion-io IPO has driven the stock price way down, making it prime for a buyout.
- SanDisk is one of the few remaining flash memory companies without its own high-end controller tech.
- Recent Fusion-io product launch overshadowed by much larger (Intel) company launching a competing superior product at a lower cost/GB.
So yeah, the buyout seemed inevitable. The question that remains is what SanDisk will do with them once the purchase closes. Merging the two means Fusion-io can include 'in house' flash and (hopefully) offer their products at a lower cost/GB, but that can only succeed if the SanDisk flash performs adequately. Assuming it does, there is still the issue of relatively high costs compared to fresh competition from Intel and others. Last but not least is the ioDrive driver model, which grows increasingly dated while the rest of the industry adopts NVMe.
Subject: Editorial, General Tech, Graphics Cards, Processors, Chipsets | June 13, 2014 - 03:45 PM | Scott Michaud
Tagged: x86, restructure, gpu, arm, APU, amd
According to VR-Zone, AMD reworked their business last Thursday, sorting each of their projects into two divisions and moving some executives around. The company is now segmented into the "Enterprise, Embedded, and Semi-Custom Business Group" and the "Computing and Graphics Business Group". The company used to be divided between "Computing Solutions", which handled CPUs, APUs, chipsets, and so forth; "Graphics and Visual Solutions", which is best known for GPUs but also contains console royalties; and "All Other", which was... everything else.
Lisa Su, former general manager of global business, has moved up to Chief Operating Officer (COO), along with other changes.
This restructuring makes sense for a couple of reasons. First, it pairs some unprofitable ventures with other, highly profitable ones. AMD's graphics division has been steadily adding profitability to the company while its CPU division has been mostly losing money. Secondly, "All Other" is about as nebulous as a name can get. Instead of having three unbalanced divisions, one of which makes no sense to someone glancing at AMD's quarterly earnings reports, they should now have two roughly equal segments.
At the very least, it should look better to an uninformed investor. Someone who does not know the company might look at the sheet and assume that, if AMD divested from everything except graphics, the company would be profitable. That assumption ignores the fact that console contracts came into the graphics division precisely because the compute division had x86 APUs to offer, and so forth. This setup is now aligned to customers, not products.
Subject: Editorial | June 12, 2014 - 11:28 AM | Ken Addison
Tagged: Z97X-SOC Force, video, titan z, radeon, project tango, podcast, plextor, nvidia, Lightning, gtx titan z, gigabyte, geforce, E3 14, amd, 4790k, 290x
PC Perspective Podcast #304 - 06/12/2014
We have lots of reviews to talk about this week including the GeForce GTX TITAN Z, Core i7-4790K, Gigabyte Z97X-SOC Force, E3 News and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom and Allyn Malventano
Pimp Next Week Events
0:03:45 Podcast #305 with David Hewlett!
Week in Review:
News items of interest:
Hardware/Software Picks of the Week:
0:57:40 Ryan: Ken's Switching Hardware
1:02:15 Josh: This saved me last weekend.
1:04:00 Allyn: Sony DSC-RX10
Subject: Editorial | June 4, 2014 - 04:42 PM | Ryan Shrout
Tagged: video, pcper, live, austin evans
Tonight's live edition of the PC Perspective Podcast is going to have a special guest, the Internet's Austin Evans. You likely know of Austin through his wildly popular YouTube channel or maybe his dance moves.
But seriously, Austin Evans is a great guy with a lot of interesting input on technology. Stop by our live page at http://www.pcper.com/live at 10pm ET / 7pm PT for all the fun!
Make sure you don't miss it by signing up for our PC Perspective Live Mailing List!
Subject: Editorial, General Tech | May 28, 2014 - 11:17 PM | Scott Michaud
It should not pass anyone's smell test but it apparently does, according to tweets and other articles. Officially, the TrueCrypt website (which redirects to their SourceForge page) claims that, with the end of Windows XP support (??), the TrueCrypt development team wants users to stop using their software. Instead, they suggest a switch to BitLocker, Mac OSX built-in encryption, or whatever random encryption suite comes up when you search your Linux distro's package manager (!?). Not only that, but several versions of Windows (such as 7 Home Premium) do not have access to BitLocker. Lastly, none of these are a good solution for users who want a single encrypted container across multiple OSes.
A new version (don't use it!!!) called TrueCrypt 7.2 was released and signed with their private encryption key.
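For anyone wondering what "signed with their private encryption key" buys you in practice: TrueCrypt releases ship with a detached GnuPG signature that you check against the project's public key. A minimal sketch of that check (file names are placeholders, and a passing result only proves the file was signed by whoever currently holds that private key, which is exactly what is in doubt here):

```python
# Minimal sketch of verifying a detached GnuPG signature by shelling out to
# stock gpg. File names are placeholders. Exit code 0 means the signature
# matches a key already in the local keyring; it says nothing about whether
# that key is still in trustworthy hands.
import subprocess

def verify_detached(signature_path: str, data_path: str) -> bool:
    result = subprocess.run(
        ["gpg", "--verify", signature_path, data_path],
        capture_output=True, text=True,
    )
    print(result.stderr.strip())  # gpg writes verification details to stderr
    return result.returncode == 0

if __name__ == "__main__":
    ok = verify_detached("TrueCrypt-Setup-7.2.exe.sig", "TrueCrypt-Setup-7.2.exe")
    print("signature checks out" if ok else "signature did NOT verify")
```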
The developers have not denied the end of support, or its full-of-crap reason. (Seriously, because Microsoft deprecated Windows XP almost two months ago, they pull support for a two-year-old version now?)
They have also not confirmed it. They have been missing since at least "the announcement" (or earlier if they were not the ones who made it). Going missing and unreachable, the day of your supposedly gigantic resignation announcement, does not support the validity of that announcement.
To me, that is about as unconfirmed as you can get.
Still, people are believing the claims that TrueCrypt 7.1a is not secure. The version has been around since February 2012 and, beyond people looking at its source code, has passed a significant portion of a third-party audit. Even if you believe the website, it only says that TrueCrypt will not be updated for security. It does not say that TrueCrypt 7.1a is vulnerable to any known attack.
In other words, the version that has been good enough for over two years, and several known cases of government agencies being unable to penetrate it, is probably as secure today as it was last week.
"The final version", TrueCrypt 7.2, is a decrypt-only solution. It allows users to unencrypt existing vaults, although who knows what else it does, to move it to another solution. The source code changes have been published, and they do not seem shady so far, but since we cannot even verify that their private key has not leaked, I wouldn't trust it. A very deep compromise could make finding vulnerabilities very difficult.
So what is going on? Who knows. One possibility is that they were targeted for a very coordinated hack, one which completely owned them and their private key, performed by someone(s) who spent a significant amount of time modifying a fake 7.2 version. Another possibility is that they were legally gagged and forced to shut down operations, but they managed to negotiate a method for users to decrypt existing data with a neutered build.
One thing is for sure, if this is a GoG-style publicity stunt, I will flip a couple of tables.
We'll see. ┻━┻ \_(ツ)_/ ┻━┻