Manufacturer: PC Perspective

Overview

We’ve been tracking NVIDIA’s G-Sync for quite a while now. The comments section on Ryan’s initial article erupted with questions, and many of those were answered in a follow-on interview with NVIDIA’s Tom Petersen. The idea was radical: do away with the traditional fixed refresh rate and only send a new frame to the display once the GPU has finished rendering it. There are many benefits here, but the short version is that you get the low-latency benefit of V-SYNC OFF gaming combined with the image quality (lack of tearing) that you would see with V-SYNC ON. Despite the many benefits, there are some potential disadvantages that come from driving an LCD panel at varying intervals, as opposed to the fixed intervals that have been the norm for over a decade.


As the first round of samples came to us for review, the current leader appeared to be the ASUS ROG Swift. A 144 Hz G-Sync display at 1440p was sure to appeal to gamers who wanted faster response than the 4K 60 Hz G-Sync alternative could offer. Due to what seemed to be large consumer demand, it took some time to get these panels into the hands of buyers. As our Storage Editor, I decided it was time to upgrade my home system, placed a pre-order, and waited in anticipation of finally shifting from my trusty Dell 3007WFP-HC to a large panel that can handle more than twice the FPS.

Fast forward to last week. My pair of ROG Swifts arrived, and some other folks I knew had also received theirs. Before I could set mine up and get some quality gaming time in, my bro FifthDread and his wife both noted a very obvious flicker on their Swifts within the first few minutes of hooking them up. They reported the flicker during game loading screens and mid-game during the background content loading that occurs in some RTS titles. Prior to hearing from them, the most I had seen were some conflicting and contradictory reports on various forums (not limited to the Swift, though that is the earliest panel and would therefore see the majority of early reports), but now we had something more solid to go on. That night I fired up my own Swift and immediately got to doing what I do best – trying to break things. We have reproduced the issue and intend to demonstrate it in a measurable way, mostly to put some actual data out there to back up those trying to describe something that is borderline perceptible for mere fractions of a second.


First a bit of misnomer correction / foundation laying:

  • The ‘Screen refresh rate’ option you see in Windows Display Properties is actually a carryover from the CRT days. In terms of an LCD, it is the maximum rate at which a frame is output to the display. It is not representative of the frequency at which the LCD panel itself is refreshed by the display logic.
  • LCD panel pixels are periodically updated by a scan, typically from top to bottom. Newer / higher quality panels repeat this process at a rate higher than 60 Hz in order to reduce the ‘rolling shutter’ effect seen when panning scenes or windows across the screen.
  • In order to engineer faster-responding pixels, manufacturers must deal with the side effect of faster pixel decay between refreshes. This is balanced by increasing the frequency of scanning out to the panel.
  • The effect we are going to cover here has nothing to do with motion blur, LightBoost, backlight PWM, or LightBoost combined with G-Sync (not currently a thing; even though Blur Busters has theorized how it could work, their method would not work with how G-Sync is actually implemented today).

With all of that out of the way, let’s tackle what folks out there may be seeing on their own variable refresh rate displays. Based on our testing so far, the flicker only presents when a game enters a 'stalled' state. These are periods where you would see a split-second freeze in the action, like during a background level load mid-game in some titles. It also appears during some game level load screens, but as those are normally static scenes, it would have gone unnoticed on fixed refresh rate panels. Since we were absolutely able to see that something was happening, we wanted to catch it in the act and measure it, so we rooted around the lab and put together some gear to do so. It’s not a perfect solution by any means, but we only needed to observe differences between smooth gaming and the ‘stalled state’ where the flicker was readily observable. Once the solder dust settled, we fired up a game that we knew could instantaneously swing from a high FPS (144) to a stalled state (0 FPS) and back again. As it turns out, EVE Online does this exact thing while taking an in-game screen shot, so we used that for our initial testing. Here’s what the brightness of a small segment of the ROG Swift does during this very event:

Measured panel section brightness over time during a 'stall' event.

The relatively small ripple to the left and right of center demonstrates the panel output at just under 144 FPS. Panel redraw is in sync with the frames coming from the GPU at this rate. The center section, however, represents what takes place when the input from the GPU suddenly drops to zero. In the above case, the game briefly stalled, then resumed a few frames at 144, then stalled again for a much longer period of time. Completely stopping the panel refresh would result in all TN pixels bleeding towards white, so G-Sync has a built-in failsafe to prevent this by forcing a redraw every ~33 msec. What you are seeing are the pixels intermittently bleeding towards white and periodically being pulled back down to the appropriate brightness by a scan. The low-latency panel used in the ROG Swift does this all of the time, but it is less noticeable at 144 FPS, as you can see on the left and right edges of the graph. An additional thing that’s happening here is an apparent rise in average brightness during the event. We are still researching the cause of this on our end, but this brightness increase certainly helps to draw attention to the flicker event, making it even more perceptible to those who might not otherwise have noticed it.
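To put rough numbers on that decay-and-redraw behavior, here is a minimal sketch of why a long gap between scans produces a visible ripple while the ~7 msec gaps at 144 Hz do not. The exponential drift model, its time constant, and the 50% target brightness are invented purely for illustration; the real panel response is more complicated.

```python
# Toy model: between scans, a TN pixel drifts toward white; each scan pulls it
# back to the intended level. The longer the interval, the larger the ripple.
# tau_ms and the 50% target are made-up values for illustration only.
import math

def ripple_amplitude(scan_interval_ms, target=0.50, white=1.0, tau_ms=80.0):
    """How far the pixel climbs above its intended brightness (as a fraction
    of full scale) just before the next scan pulls it back down."""
    return (white - target) * (1.0 - math.exp(-scan_interval_ms / tau_ms))

for label, interval_ms in [("144 Hz scan interval", 1000.0 / 144),
                           ("~33 msec failsafe redraw", 33.0)]:
    print(f"{label}: {interval_ms:4.1f} ms -> ripple ~{ripple_amplitude(interval_ms):.3f}")
```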

Some of you might be wondering why this same effect is not seen when a game drops to 30 FPS (or even lower) during the course of normal game play. While the original G-Sync upgrade kit implementation simply waited until 33 msec had passed before forcing an additional redraw, this introduced judder at 25-30 FPS. Based on our observations and testing, it appears that NVIDIA has corrected this in the retail G-Sync panels with an algorithm that intelligently re-scans at even multiples of the input frame rate in order to keep the redraw rate relatively high, therefore keeping flicker imperceptible – even at very low continuous frame rates.
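For the curious, here is a hedged sketch of the kind of logic that behavior implies. This is our inference from the measurements, not NVIDIA's actual firmware, and the 33 msec cap is simply the forced-redraw interval observed above.

```python
# Sketch of re-scanning at a multiple of the incoming frame rate -- an
# inference from our measurements, NOT NVIDIA's actual firmware. Each frame is
# scanned out to the panel enough times that the interval between scans stays
# at or below the ~33 msec failsafe.
MAX_SCAN_INTERVAL_MS = 33.0   # forced-redraw cap observed in the measurements

def scans_per_frame(frame_interval_ms):
    """Number of times to repeat the current frame so that each scan interval
    is at or below MAX_SCAN_INTERVAL_MS."""
    repeats = 1
    while frame_interval_ms / repeats > MAX_SCAN_INTERVAL_MS:
        repeats += 1
    return repeats

for fps in (144, 60, 45, 30, 24):
    n = scans_per_frame(1000.0 / fps)
    print(f"{fps:3d} FPS in -> {n} scan(s) per frame, effective redraw {fps * n} Hz")
```

Run against a steady 24-30 FPS input, this keeps the effective redraw rate at 48-60 Hz, which is why continuous low frame rates stay flicker-free, while a sudden drop to 0 FPS leaves nothing to multiply and falls back to the bare ~33 msec forced redraw.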

A few final points before we go:

  • This is not limited to the ROG Swift. All variable refresh panels we have tested (including 4K) see this effect to a greater or lesser degree than reported here. Again, this only occurs when games instantaneously drop to 0 FPS, and not when those games dip into low frame rates in a continuous fashion.
  • The effect is less perceptible (both visually and with recorded data) at lower maximum refresh rate settings.
  • The effect is not present at fixed refresh rates (G-Sync disabled or with non G-Sync panels).

This post was primarily meant as a status update and to serve as something for G-Sync users to point to when attempting to explain the flicker they are perceiving. We will continue researching, collecting data, and coordinating with NVIDIA on this issue, and will report back once we have more to discuss.

During the research and drafting of this piece, we reached out to and worked with NVIDIA to discuss this issue. Here is their statement:

"All LCD pixel values relax after refreshing. As a result, the brightness value that is set during the LCD’s scanline update slowly relaxes until the next refresh.

This means all LCDs have some slight variation in brightness. In this case, lower frequency refreshes will appear slightly brighter than high frequency refreshes by 1 – 2%.

When games are running normally (i.e., not waiting at a load screen, nor a screen capture) - users will never see this slight variation in brightness value. In the rare cases where frame rates can plummet to very low levels, there is a very slight brightness variation (barely perceptible to the human eye), which disappears when normal operation resumes."

So there you have it. It's basically down to the physics of how an LCD panel works at varying refresh rates. While I agree that it is a rare occurrence, there are some games that present this scenario more frequently (and noticeably) than others. If you've noticed this effect in some games more than others, let us know in the comments section below. 

(Editor's Note: We are continuing to work with NVIDIA on this issue and hope to find a way to alleviate the flickering with either a hardware or software change in the future.)

Subject: Editorial, Storage
Manufacturer: PC Perspective
Tagged: ssd, nand, Intel, flash, 3d

It has become increasingly apparent that flash memory die shrinks have hit a bit of a brick wall in recent years. The issues faced by the standard 2D Planar NAND process were apparent very early on. This was no real secret - here's a slide seen at the 2009 Flash Memory Summit:

(Slide from a 2009 Flash Memory Summit presentation on planar NAND scaling limits)

Despite this, most flash manufacturers pushed the envelope as far as they could within the limits of 2D process technology, balancing shrinks with reliability and performance. One of the largest flash manufacturers was Intel, having joined forces with Micron in a joint venture dubbed IMFT (Intel Micron Flash Technologies). Intel remained in lock-step with Micron all the way up to 20nm, but chose to hold back at the 16nm step, presumably in order to shift full focus towards alternative flash technologies. This was essentially confirmed late last week, with Intel's announcement of a shift to 3D NAND production.

(Illustration: flash die progression across process nodes, ending with the new 32-layer 3D NAND)

Intel's press briefing seemed to focus more on cost efficiency than performance, and after reviewing the very few specs they released about this new flash, I believe we can do some theorizing as to the potential performance of this new flash memory. From the above illustration, you can see that Intel has chosen to go with the same sort of 3D technology used by Samsung - a 32 layer vertical stack of flash cells. This requires the use of an older / larger process technology, as it is too difficult to etch these holes at a 2x nm size. What keeps the die size reasonable is the fact that you get a 32x increase in bit density. Going off of a rough approximation from the above photo, imagine that 50nm die (8 Gbit), but with 32 vertical NAND layers. That would yield a 256 Gbit (32 GB) die within roughly the same footprint.
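As a quick sanity check on that capacity math (the 8 Gbit figure for the 50nm-class die is read off the illustration above, not an official Intel spec):

```python
# 32 vertical layers multiply the per-die bit count of the older, larger-
# geometry cell; the 8 Gbit starting point is only an approximation.
planar_die_gbit = 8    # ~50nm-class planar die from the illustration
layers = 32            # layers in Intel's announced 3D NAND

stacked_die_gbit = planar_die_gbit * layers
print(f"{stacked_die_gbit} Gbit per die = {stacked_die_gbit // 8} GB per die")  # 256 Gbit = 32 GB
```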


Representation of Samsung's 3D VNAND in 128Gbit and 86 Gbit variants.
20nm planar (2D) = yellow square, 16nm planar (2D) = blue square.

Image republished with permission from Schiltron Corporation.

It's likely a safe bet that IMFT flash will be going for a cost/GB far cheaper than the competing Samsung VNAND, and going with a relatively large 256 Gbit (vs. VNAND's 86 Gbit) per-die capacity is a smart move there, but let's not forget that there is a catch - write speed. Most NAND is very fast on reads, but limited on writes. Shifting from 2D to 3D NAND netted Samsung a 2x speed boost per die, and another effective 1.5x speed boost due to their choice to reduce per-die capacity from 128 Gbit to 86 Gbit. This effective speed boost came from the fact that a given VNAND SSD has 50% more dies to reach the same capacity as an SSD using 128 Gbit dies.

Now let's examine how Intel's choice of a 256 Gbit die impacts performance:

  • Intel SSD 730 240GB = 16x128 Gbit 20nm dies
    • 270 MB/sec writes and ~17 MB/sec/die
  • Crucial MX100 128GB = 8x128Gbit 16nm dies
    • 150 MB/sec writes and ~19 MB/sec/die
  • Samsung 850 Pro 128GB = 12x86Gbit VNAND dies
    • 470MB/sec writes and ~40 MB/sec/die

If we do some extrapolation based on the assumption that IMFT's move to 3D will net the same ~2x write speed improvement seen by Samsung, combined with their die capacity choice of 256Gbit, we get this:

  • Future IMFT 128GB SSD = 4x256Gbit 3D dies
    • 40 MB/sec/die x 4 dies = 160MB/sec

Even rounding up to 40 MB/sec/die, we can see that also doubling the die capacity effectively negates the performance improvement. While the IMFT flash equipped SSD will very likely be a lower cost product, it will (theoretically) see the same write speed limits seen in today's SSDs equipped with IMFT planar NAND. Now let's go one layer deeper on theoretical products and assume that Intel took the 18-channel NVMe controller from their P3700 Series and adapted it to a consumer PCIe SSD using this new 3D NAND. The larger die size limits the minimum capacity you can attain and still fully utilize their 18 channel controller, so with one die per channel, you end up with this product (the arithmetic for all of these figures is collected in the sketch after the list):

  • Theoretical 18 channel IMFT PCIE 3D NAND SSD = 18x256Gbit 3D dies
    • 40 MB/sec/die x 18 dies = 720 MB/sec
    • 18x32GB (die capacity) = 576GB total capacity
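For anyone who wants to reproduce the arithmetic, here it is in one place. The per-die figures fall out of the shipping drives listed earlier, while the ~40 MB/sec/die figure for future IMFT 3D NAND is an assumption carried over from Samsung's roughly 2x planar-to-3D gain, not anything Intel has confirmed.

```python
# Per-die write speed of shipping drives, then the same extrapolation used in
# the text for hypothetical IMFT 3D NAND products. All forward-looking numbers
# here are assumptions, not Intel specifications.
def per_die_write(total_mb_s, dies):
    return total_mb_s / dies

print(f"Intel SSD 730 240GB:   {per_die_write(270, 16):4.1f} MB/s per die")  # ~17
print(f"Crucial MX100 128GB:   {per_die_write(150, 8):4.1f} MB/s per die")   # ~19
print(f"Samsung 850 Pro 128GB: {per_die_write(470, 12):4.1f} MB/s per die")  # ~39

assumed_mb_s_per_die = 40   # ~2x planar IMFT, borrowed from Samsung's 2D->3D gain
die_capacity_gb = 32        # 256 Gbit die

print(f"Future IMFT 128GB SSD (4 dies):         {assumed_mb_s_per_die * 4} MB/s writes")
print(f"Theoretical 18-channel PCIe (18 dies):  {assumed_mb_s_per_die * 18} MB/s writes, "
      f"{18 * die_capacity_gb} GB raw capacity")
```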

Overprovisioning decisions aside, the above would be the lowest capacity product that could fully utilize the Intel PCIe controller. While the write performance is on the low side by PCIe SSD standards, the cost of such a product could easily be in the $0.50/GB range, or even less.


In summary, while we don't have any solid performance data, it appears that Intel's new 3D NAND is not likely to lead to a performance breakthrough in SSD speeds, but their choice of a more cost-effective per-die capacity for their new 3D NAND is likely to give them significant margins and the wiggle room to offer SSDs at a far lower cost/GB than we've seen in recent years. This may be the step that was needed to push SSD costs into a range that can truly compete with HDD technology.

Following Up with Wyoming Whiskey

Subject: Editorial | November 12, 2014 - 06:58 PM |
Tagged: Wyoming Whiskey, Whiskey, Kirby, Bourbon

Last year around this time I reviewed my first bottle of Wyoming Whiskey.  Overall, I was quite pleased with how this particular spirit has come along.  You can read my entire review here.  It also includes a little interview with one of the co-founders of Wyoming Whiskey, David Defazio.  The landscape has changed a little throughout the past year, and the distillery has recently released a second product in limited quantities to the Wyoming market.  The Single Barrel Bourbon selections come from carefully selected barrels and are not blended with others.  I had the chance to chat with David again recently and received some interesting information from him about the latest product and where the company is headed.


Picture courtesy of Wyoming Whiskey

Noticed that you have a new single barrel product on the shelves.  How would you characterize this as compared to the standard bottle you sell?

These very few barrels are selected from many and only make the cut if they meet very high standards.  We have only bottled 4 so far.  And, the State has sold out.  All of our product has matured meaningfully since last year and these barrels have benefitted the most as evidenced by their balance and depth of character.  The finish is wickedly smooth.  I have not heard one negative remark about the Single Barrel Product.

Have you been able to slowly lengthen the time that the bourbon matures until it is bottled, or is it around the same age as what I sampled last year?

Yes, these barrels are five years old, as is the majority of our small batch product.

How has the transition from Steve to Elizabeth as master distiller been?

Elizabeth is no longer with us.  She had intended to train under Steve for the year, but when his family drew him back to Kentucky in February, this plan disintegrated.  So, our crew is making bourbon under the direction of Sam Mead, my partners' son, who is our production manager.  He has already applied his engineering degree in ways that help increase quality and production.  And he's just getting started.

What other new products may be showing up in the next year?

You may see a barrel-strength bourbon from us.  There are a couple of honey barrels that we are setting aside for this purpose.

Wyoming Whiskey had originally hired Steve Nally of Maker’s Mark fame, somehow pulling him out of retirement.  He was the master distiller for quite a few years, and moved on from the company this past year.  He is now heading up a group that is opening a new distillery in Kentucky and hoping to break into the bourbon market.  They expect their first products to be aged around 7 years.  As we all know, it is hard for a company to stay afloat if it is not selling product.  In the meantime, it looks like this group will do what so many other “craft” distillers have been caught doing: selling bourbon produced by mega-distilleries and labeling it as their own.

Bourbon has had quite the renaissance in the past few years with the popularity of the spirit soaring.  People go crazy trying to find limited edition products like Pappy Van Winkle and many estimate that overall bourbon production in the United States will not catch up to demand anytime soon.  This of course leads to higher prices and tighter supply for the most popular of brands.

It is good to see that Wyoming Whiskey is lengthening the age of the barrels that they are bottling, as it can only lead to smoother and more refined bourbon.  From most of my tasting, it seems that 6 to 7 years is about optimal for most bourbon.  There are other processes that can speed up these results, and I have tasted batches that are only 18 months old yet rival much older products.  I look forward to hearing more about what Wyo Whiskey is doing to improve their product.

PCPer Live! Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA Part 2!

Subject: Editorial, Graphics Cards | October 21, 2014 - 07:45 PM |
Tagged: video, pcper, nvidia, live, GTX 980, geforce, game stream, borderlands: the pre-sequel, borderlands

UPDATE: It's time for ROUND 2!

UPDATE 2: You missed the fun for the second time? That's unfortunate, but you can relive it with the replay right here!

I'm sure that, like the staff at PC Perspective, many of our readers have been obsessively playing the Borderlands games since the first release in 2009. Borderlands 2 arrived in 2012 and once again took hold of the PC gaming mindset. This week marks the release of Borderlands: The Pre-Sequel, which, as the name suggests, takes place before the events of Borderlands 2. The Pre-Sequel has playable characters that were previously only known to the gamer as NPCs, and that, coupled with the new low-gravity game play style, should entice nearly everyone who loves the first-person, loot-driven series to come back.

To celebrate the release, PC Perspective has partnered with NVIDIA to host a couple of live game streams that will feature some multi-player gaming fun as well as some prizes to give away to the community. I will be joined once again by NVIDIA's Andrew Coonrad and Kris Rey to tackle the campaign in a cooperative style while taking a couple of stops to give away some hardware.


Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA Part 2

5pm PT / 8pm ET - October 21st

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

Here are some of the prizes we have lined up for those of you that join us for the live stream:

Holy crap, that's a hell of a list!! How do you win? It's really simple: just tune in and watch the Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA! We'll explain the methods to enter live on the air and anyone can enter from anywhere in the world - no issues at all!

So stop by Tuesday night for some fun, some gaming and the chance to win some hardware!


Apple Announces New Mac Minis with Haswell. What?

Subject: Editorial, General Tech, Systems | October 17, 2014 - 03:22 PM |
Tagged: Thunderbolt 2, thunderbolt, mac mini, mac, Intel, haswell, apple

I was not planning to report on Apple's announcement but, well, this just struck me as odd.

So Apple has relaunched the Mac Mini with fourth-generation Intel Core processors, after two years of waiting. It is the same height as the Intel NUC, but it is also almost twice the length and twice the width (Apple's 20cm x 20cm versus the NUC's ~11cm x 11cm when the case is included). So, after waiting through the entire Haswell architecture launch cycle, right up until the imminent release of Broadwell, they are going with the soon-to-be outdated architecture to update their two-year-old platform?

((Note: The editorial originally said "two-year-old architecture". I thought that Haswell launched about six months earlier than it did. The mistake was corrected.))


I wonder if, following the iTunes U2 deal, this device will come bundled with Limp Bizkit's "Nookie"...

The price has been reduced to $499, which is a welcome $100 price reduction, especially for PC developers who want a Mac to test cross-platform applications on. It also has Thunderbolt 2. These are welcome additions. I just have two related questions: why today and why Haswell?

The new Mac Mini started shipping yesterday. 15-watt Broadwell-U parts are expected to launch at CES in January, with 28W parts anticipated a quarter or so later.

Source: Apple

Intel Announces Q3 2014: Mucho Dinero

Subject: Editorial | October 15, 2014 - 12:39 PM |
Tagged: revenue, Results, quarterly, Q3, Intel, haswell, Broadwell, arm, amd, 22nm, 2014, 14nm

Yesterday Intel released their latest quarterly numbers, and they were pretty spectacular.  Some serious milestones were reached last quarter, much to the dismay of Intel’s competitors.  Not everything is good with the results, but the overall quarter was a record one for Intel.  The company reported revenue of $14.55 billion with a net income of $3.31 billion.  This is the highest revenue for a quarter in the history of Intel.  This also is the first quarter in which Intel has shipped 100 million processors.

The death of the PC has obviously been overstated, as the PC group had revenue of around $9 billion.  The Data Center group also had a very strong quarter with revenues in the $3.7 billion range.  These two groups lean heavily on Intel’s 22 nm TriGate process, which is still industry leading.  The latest Haswell-based processors are around 10% of shipping units so far.  The ramp up for these products has been pretty impressive.  Intel’s newest group, the Internet of Things, saw revenues shrink by around 2% quarter over quarter, but grow by around 14% year over year.


Not all news is good news though.  Intel is trying desperately to get into the tablet and handheld markets, and so far has had little traction.  The group reported revenues in the $1 million range.  Unfortunately, that $1 million is offset by about $1 billion in losses.  This year has seen an overall loss for mobile in the $3 billion range.  While Intel arguably has the best and most efficient process for mobile processors, it is having a hard time breaking into this ARM-dominated area.  There are many factors involved here.  First off, there are more than a handful of strong competitors working directly against Intel to keep them out of the market.  Secondly, x86 processors do not have the software library or support that ARM has in this very dynamic and fast-growing segment.  We also must consider that while Intel has the best overall process, x86 processors are really only now achieving parity in power/performance ratios.  Intel is also still considered a newcomer in this market when it comes to 3D graphics support.

Intel is quite happy to take this loss as long as they can achieve some kind of foothold in this market.  Mobile is the future, and while there will always be a need for a PC (who does heavy duty photo editing, video editing, and immersive gaming on a mobile platform?), the mobile market will be driving revenues from here on out.  Intel absolutely needs to have a presence here if they wish to be a leader at driving technologies in this very important market.  Intel is essentially giving away their chips to get into phones and tablets, and eventually this will pave the way towards greater adoption.  There are still hurdles involved, especially on the software side, but Intel is working hard with developers and Google to make sure support is there.  Intel is likely bracing themselves for a new generation of 20 nm and 16 nm FinFET ARM-based products that will start showing up in the next nine months.  The past several years have seen Intel push mobile up to high priority in terms of process technology.  Previously, these low power, low cost parts were relegated to an N+1 process technology from Intel, but with the strong competition from ARM licensees and pure-play foundries, Intel can no longer afford that.  We will likely see 14 nm mobile parts from Intel sooner rather than later.

Intel has certainly shored up a lot of their weaknesses over the past few years.  Their integrated 3D/GPU support has improved in leaps and bounds over the years, their IPC and power consumption with CPUs is certainly industry leading, and they continue to pound out impressive quarterly reports.  Intel is certainly firing on all cylinders at this time and the rest of the industry is struggling to keep up.  It will be interesting to see if Intel can keep up this pace, and it will be imperative for the company to continue to push into mobile markets.  I have never counted Intel out, as they have a strong workforce, a solid engineering culture, and some really amazingly smart people (except Francois… he is just slightly above average; he is a GT-R aficionado after all).

Next quarter appears to be more of the same.  Intel is expecting revenue in the $14.7 billion range, plus or minus $500 million.  This continues the strong sales of PC and server parts that help buoy Intel to these impressive results.  Net income and margins again look to be similar to what this past quarter brought to the table.  We will see the introduction of the latest 14 nm Broadwell processors, which is an important step for Intel.  14 nm development and production has taken longer than expected, and Intel has had to lean on their very mature 22 nm process longer than they wanted to.  This has allowed a few extra quarters for the pure-play foundries to try to catch up.  Samsung, TSMC, and GLOBALFOUNDRIES are all producing 20 nm products with a fast transition to 16/14 nm FinFET by early next year.  This is not to say that these 16/14 nm FinFET products will be on par with Intel’s 14 nm process, but it at least gets them closer.  In the near term though, these changes will have very little effect on Intel and their product offerings over the next nine months.

Source: Intel

PCPer Live! Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA

Subject: Editorial, Graphics Cards | October 13, 2014 - 10:28 PM |
Tagged: video, pcper, nvidia, live, GTX 980, geforce, game stream, borderlands: the pre-sequel, borderlands

UPDATE: You missed this week's live stream, but you can watch the game play via this YouTube embed!!

I'm sure that, like the staff at PC Perspective, many of our readers have been obsessively playing the Borderlands games since the first release in 2009. Borderlands 2 arrived in 2012 and once again took hold of the PC gaming mindset. This week marks the release of Borderlands: The Pre-Sequel, which, as the name suggests, takes place before the events of Borderlands 2. The Pre-Sequel has playable characters that were previously only known to the gamer as NPCs, and that, coupled with the new low-gravity game play style, should entice nearly everyone who loves the first-person, loot-driven series to come back.

To celebrate the release, PC Perspective has partnered with NVIDIA to host a couple of live game streams that will feature some multi-player gaming fun as well as some prizes to give away to the community. I will be joined by NVIDIA's Andrew Coonrad and Kris Rey to tackle the campaign in a cooperative style while taking a couple of stops to give away some hardware.


Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA

5pm PT / 8pm ET - October 14th

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

Here are some of the prizes we have lined up for those of you that join us for the live stream:

Holy crap, that's a hell of a list!! How do you win? It's really simple: just tune in and watch the Borderlands: The Pre-Sequel Game Stream Powered by NVIDIA! We'll explain the methods to enter live on the air and anyone can enter from anywhere in the world - no issues at all!

So stop by Tuesday night for some fun, some gaming and the chance to win some hardware!


World of Warcraft: Warlords of Draenor Requirements Listed

Subject: Editorial, General Tech | September 24, 2014 - 03:55 PM |
Tagged: wow, blizzard

When software has been supported and maintained for almost ten years, like World of Warcraft, it is not clear whether the worst compatible machine at launch should remain supported or whether the requirements should increase over time. For instance, when Windows XP launched, the OS was tuned for 128MB of RAM. Later updates made it highly uncomfortable with anything less than a whole gigabyte. For games though, we mostly pretend that they represent the time that they were released.


That mental model does not apply to World of Warcraft: Warlords of Draenor. While technically this is an expansion pack, its requirements have jumped again (significantly so compared to the original release). Even the first expansion pack, Burning Crusade, was able to run on a GeForce 2. Those cards were bundled with the original Unreal Tournament, which was a relatively new game at the time the GeForce 2 was released.

Now? Well the minimum is:

  • Windows XP or later
  • Intel Core 2 Duo E6600 or AMD Phenom X3 8750
  • NVIDIA GeForce 8800 GT, AMD Radeon HD 4850, or Intel HD Graphics 3000
  • 2GB of RAM
  • 35GB HDD

And the recommended is:

  • Windows 7 or 8 (x86-64)
  • Intel Core i5 2400 or AMD FX-4100
  • NVIDIA GeForce GTX 470 or AMD Radeon HD 5870
  • 4GB of RAM
  • 35GB HDD

World of Warcraft, and other MMORPGs, might get a pass on this issue. With its subscription model, there is not really an expectation that a user can go back and see the game in the same state as it launched. It is not a work, but a service -- and that does not devalue its artistic merits. It just is not really the same game now that it was then.

World of Warcraft: Warlords of Draenor will launch on November 13th.

Source: Blizzard
Subject: Editorial, Storage
Manufacturer: PC Perspective
Tagged: tlc, Samsung, bug, 840 evo, 840

Investigating the issue

** Edit ** (24 Sep)

We have updated this story with temperature effects on the read speed of old data. Additional info on page 3.

** End edit **

** Edit 2 ** (26 Sep)

New quote from Samsung:

"We acknowledge the recent issue associated with the Samsung 840 EVO SSDs and are qualifying a firmware update to address the issue.  While this issue only affects a small subset of all 840 EVO users, we regret any inconvenience experienced by our customers.  A firmware update that resolves the issue will be available on the Samsung SSD website soon.  We appreciate our customer’s support and patience as we work diligently to resolve this issue."

** End edit 2 **

** Edit 3 **

The firmware update and performance restoration tool has been tested. Results are found here.

** End edit 3 **

Over the past week or two, there have been growing rumblings from owners of Samsung 840 and 840 EVO SSDs. A few reports scattered across internet forums gradually snowballed into lengthy threads as more and more people took a longer look at their own TLC-based Samsung SSD's performance. I've spent the past week following these threads, and the past few days evaluating this issue on the 840 and 840 EVO samples we have here at PC Perspective. This post is meant to inform you of our current 'best guess' as to just what is happening with these drives, and just what you should do about it.

The issue at hand is an apparent slow down in the reading of 'stale' data on TLC-based Samsung SSDs. Allow me to demonstrate:

(HDTach sequential read pass across the 500GB 840 EVO, showing reduced read speeds over the old data)

You might have seen what looks like similar issues before, but after much research and testing, I can say with some confidence that this is a completely different and unique issue. The old X25-M bug was the result of random writes to the drive over time, but the above result is from a drive that only ever saw a single large file written to a clean drive. The above drive was the very same 500GB 840 EVO sample used in our prior review. It did just fine in that review, and afterwards I needed a quick temporary place to put a HDD image file and just happened to grab that EVO. The file was written to the drive in December of 2013, and if it wasn't already apparent from the above HDTach pass, it was 442GB in size. This brings on some questions:

  • If random writes (i.e. flash fragmentation) are not causing the slow down, then what is?
  • How long does it take for this slow down to manifest after a file is written?
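If you want to run a rough check on your own drive before reading on, the sketch below is a crude stand-in for a tool like HDTach: it reads an existing, ideally months-old, large file in big chunks and prints throughput per region, so a stale-data slowdown shows up as a drop partway through. The file path is a placeholder, and results for files smaller than system RAM will be skewed by the OS cache.

```python
import os
import time

CHUNK = 64 * 1024 * 1024   # read in 64 MB chunks
REGIONS = 32               # number of throughput samples to print

def scan_read_speed(path):
    """Read `path` front to back and print MB/s for each region of the file."""
    size = os.path.getsize(path)
    region_bytes = max(size // REGIONS, CHUNK)
    with open(path, "rb", buffering=0) as f:
        offset = 0
        while offset < size:
            start = time.perf_counter()
            read = 0
            while read < region_bytes:
                chunk = f.read(CHUNK)
                if not chunk:          # end of file
                    break
                read += len(chunk)
            elapsed = time.perf_counter() - start
            if read == 0:
                break
            print(f"{offset / 2**30:7.1f} GiB: {read / 2**20 / elapsed:8.1f} MB/s")
            offset += read

scan_read_speed("D:/old_image_file.img")   # hypothetical path to an old, large file
```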

Read on for the full scoop!