IDF 2014: Through Silicon Via - Connecting memory dies without wires

Subject: Storage, Shows and Expos | September 10, 2014 - 03:34 PM |
Tagged: TSV, Through Silicon Via, memory, idf 2014, idf

If you're a general computer user, you may never have heard the term "Through Silicon Via" (TSV). If you geek out on photos of chip dies and wafers and on how chips are assembled and packaged, you probably have. Either way, TSV is about to impact all of you in the near future.

Let's go into a bit of background first. We're going to talk about how chips are packaged. Micron has an excellent video on the process here:

The part we are going to focus on appears at 1:31 in the above video:

(Image: stacked dies wire-bonded to a substrate, from the Micron video)

This is how chip dies are currently connected to the outside world. The dies are stacked (four high in the above pic) and a machine has to individually wire them to a substrate, which in turn communicates with the rest of the system. As you might imagine, things get more complex with this process as you stack more and more dies on top of each other:

(Image: 16-layer die stack, courtesy NovaChips)

...so we have these microchips with extremely small features, but to connect them we are limited to a relatively bulky process (wire bonding). Stacking these flat planes of storage is tricky, and you would naturally want to limit how many of those wires you need to connect. The catch is that those wires also determine the available throughput of the device (i.e. one wire per bit of the data bus). So how can we improve on this method and increase data bus widths, throughput, etc.?

Before I answer that, let me lead up to it by showing how flash memory has just taken a leap in performance. Samsung has recently made the jump to VNAND:

(Image: Samsung V-NAND)

By stacking flash memory cells vertically within a die, Samsung was able to make many advances in flash memory, simply because they had more room within each die. Because of the complexity of the process, they also had to revert to an older (larger) feature size. That compromise means the capacity of each die is similar to current 2D NAND, but the payoff is speed, endurance, and power-consumption advantages from the new process.

I showed you the VNAND example because it bears a striking resemblance to what is now happening in the area of die stacking and packaging. Imagine if you could stack dies by punching holes straight through them and making the connections directly through the bottom of each die. As it turns out, that's actually a thing:

(Image: diagram of a four-die TSV stack)

Above we see a stack of four dies, interconnected not by wires but by holes made straight through each die. Data lines (one of many shown at left) pass through and connect to all dies. Address lines (the chip-selection portion shown at right) are segmented so that each die can be addressed individually. From an electrical standpoint, this does exactly the same thing as the old wire method, but with some distinct advantages:

  • TSV holes and connection points can be made *much* smaller than the space needed for wiring.
  • More connection points can be made between dies and from dies to the substrate.
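The shared-data / per-die chip-select arrangement can be sketched as a toy model. This is purely illustrative: the class and parameter names below are my own inventions, not any real memory controller API.

```python
# Toy model of a 4-die TSV stack: the data TSVs are shared straight
# through every die, while each die gets its own chip-select line.
class Die:
    def __init__(self):
        self.cells = {}  # address -> stored value

class TsvStack:
    def __init__(self, num_dies=4):
        self.dies = [Die() for _ in range(num_dies)]

    def write(self, chip_select, address, value):
        # Only the die whose chip-select is asserted latches the data;
        # the shared data lines pass through all dies identically.
        self.dies[chip_select].cells[address] = value

    def read(self, chip_select, address):
        return self.dies[chip_select].cells.get(address)

stack = TsvStack(num_dies=4)
stack.write(chip_select=2, address=0x10, value=0xAB)
print(stack.read(2, 0x10))  # 171 (0xAB)
print(stack.read(0, 0x10))  # None - the other dies are untouched
```

The point of the segmented address lines is exactly this: one set of holes carries data to every die, while the chip-select decides which die actually responds.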

By 'more connection points', we're talking *way* more. Imagine going from a 32-bit to a 512-bit data bus. Assuming the same clock rate, that's a potential 16x boost in throughput!
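That 16x figure is just the ratio of bus widths at a fixed clock. A quick back-of-the-envelope check (the 800 MHz clock below is an illustrative placeholder, not a real DRAM spec):

```python
# Interface throughput scales linearly with data bus width at a
# fixed clock rate: one wire (or TSV) per bit, one bit per cycle.
def throughput_bits_per_s(bus_width_bits, clock_hz):
    return bus_width_bits * clock_hz

clock_hz = 800e6  # illustrative clock rate only
wire_bonded = throughput_bits_per_s(32, clock_hz)   # narrow, wire-bonded bus
tsv_stacked = throughput_bits_per_s(512, clock_hz)  # wide, TSV-connected bus
print(tsv_stacked / wire_bonded)  # 16.0
```

Whatever the actual clock, widening the bus 16x multiplies peak throughput by the same factor.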

While my examples above used flash memory die packaging, flash is limited more by the speed of the memory cells themselves than by interface throughput. RAM, on the other hand, is packaged in a similar fashion and could see significant gains from opening up the interface a bit. Because of this, DDR has been a driving force for the adoption of TSV die stacking, and at this IDF we saw it becoming a reality. At Samsung's memory (not flash) booth, we saw a model to help visualize TSV:

(Images: Samsung's TSV visualization model at its memory booth)

Even better, SK Hynix had TSV DRAM on display:

(Image: SK Hynix TSV DIMM on display)

Yes, you read that correctly - that's a 128GB (byte not bit) DIMM.

(Image: the 128GB TSV DIMM)

...along with another model of how TSV junctions work:

(Image: model of TSV junctions)

So the takeaway here is that TSV is a new die/chip interconnect that enables significantly more parallelism between (and through) dies, and from dies to the host. This should give us faster RAM throughput in the short term, and higher stacks of high-throughput flash memory dies in the future. Oh, one more thing: TSV has been around in the image sensing field for a while now, so chances are your cell phone or camera has a TSV-connected image sensor. Before I go, here's a slide on the progression of chip interconnects:

(Image: slide showing the progression of chip interconnects)

...and as with most of my technical chip posts, I leave you with the best shot I could get of SK Hynix's TSV DDR wafer:

(Image: SK Hynix TSV DDR wafer)
