Memory: From Basics to Buying
The term "memory" can refer to many different parts of the PC, since so many components use one form of memory or another. Usually, when speaking of memory in PCs, we're referring to main system memory, or RAM.
Before moving on, here's a short overview of this article and the topics to be discussed:
- Memory Basics
- Mysterious BIOS Settings
- Memory and Overclocking
- Buying Memory
RAM (Random Access Memory) is a means to store data and instructions temporarily for subsequent use by your system processor (CPU). RAM is called "random access" because earlier read-write memories were sequential and did not allow data to be "randomly accessed". RAM differs from read-only memory (ROM) in that it can be both read and written. It is considered volatile storage because unlike ROM, the contents of RAM are lost when the power is turned off. ROM is known to be non-volatile and is most commonly used to store system-level programs that we want to have available to the PC at all times, such as system BIOS programs. There are several ROM variants that can be changed or written to, under certain circumstances; these can be thought of as "mostly" read-only memory: PROM, EPROM, and EEPROM.
Like ROM, RAM comes in variants with different properties and purposes; the two main families are SRAM and the several flavors of DRAM. DRAM, or Dynamic RAM, is the slower of the two because it needs to be periodically refreshed, or recharged, thousands of times per second. If this is not done regularly, the DRAM will lose its stored contents, even if it continues to have power supplied to it. This refreshing action is why the memory is called dynamic, meaning moving or always changing. SRAM, or Static RAM, on the other hand, does not need to be refreshed like DRAM. This gives SRAM faster access times (the time it takes to locate and read one unit of memory). SRAM is also far more expensive to manufacture, which is why it is used primarily in relatively small quantities (normally less than 1 MB) for cache on processors, while the cheaper-to-manufacture DRAM is left for system RAM.
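The refresh requirement described above can be illustrated with a toy sketch (this is purely an illustration of the idea, not a real circuit model): each DRAM cell is a capacitor whose charge leaks away, so a stored "1" must be recharged before it decays below the read threshold. All names and decay constants here are made up for the example.

```python
# Toy model: each cell holds a charge level between 0.0 and 1.0; a read
# returns 1 if the charge is still above THRESHOLD, else 0.
cells = {0: 1.0, 1: 1.0}   # two cells, each storing a '1' at full charge
THRESHOLD = 0.5

def tick(refresh=False):
    """One time step: charge leaks; a refresh cycle recharges surviving 1s."""
    for addr in cells:
        cells[addr] *= 0.8                       # charge leaks away
        if refresh and cells[addr] >= THRESHOLD:
            cells[addr] = 1.0                    # periodic refresh recharges it

def read(addr):
    return 1 if cells[addr] >= THRESHOLD else 0

tick(refresh=True)
tick(refresh=True)
bit_refreshed = read(0)          # refreshed regularly: still reads 1

for _ in range(5):
    tick(refresh=False)          # refresh stops, even though power is still on
bit_decayed = read(1)            # charge has leaked: the stored 1 reads as 0
```

SRAM avoids this entirely because each bit is held by a flip-flop circuit rather than a leaking capacitor, which is part of why it needs more transistors per bit and costs more.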
Memory can be built right into a motherboard, but it is more typically attached to the motherboard in the form of a module called a DIMM. A DIMM (Dual Inline Memory Module) is the circuit board that holds the memory chips (with gold or tin/lead contacts) and other memory devices, and provides a 64-bit interface to those chips. You've probably seen memory listed as 32x64 or 64x64. These numbers represent the number of chips multiplied by the capacity of each individual chip, which is measured in megabits (Mb), or one million bits. Take the result and divide it by eight to get the number of megabytes on that module. For example, 32x64 means that the module has thirty-two 64-megabit chips. Multiply 32 by 64 and you get 2048 megabits. Since a byte has 8 bits, we divide 2048 by 8 and get 256 megabytes!
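That chips-times-capacity arithmetic is easy to capture in a few lines. Here is a small sketch (the function name is ours, invented for the example):

```python
def module_megabytes(num_chips: int, chip_megabits: int) -> int:
    """Capacity in MB of a module listed as num_chips x chip_megabits."""
    total_megabits = num_chips * chip_megabits  # e.g. 32 * 64 = 2048 Mb
    return total_megabits // 8                  # 8 bits per byte

print(module_megabytes(32, 64))  # 256 -> a "32x64" module is 256 MB
print(module_megabytes(64, 64))  # 512 -> a "64x64" module is 512 MB
```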
Processors tend to access memory in a distinct hierarchy, which is simply an ordering: from top to bottom, fastest to slowest, or most important to least important. Whether it comes from permanent storage (e.g. a hard drive) or input (e.g. a keyboard), most data goes into RAM first.
Going from fastest to slowest, the memory hierarchy is made up of: Registers - Cache [L1; L2] - RAM [Physical and Virtual] - Input Devices
Registers are fast data stores typically capable of holding a few bytes of data. The registers contain instructions, data, and include the program counter. Modern processors typically contain two levels of cache, known as the "level 1" and "level 2" caches. Cache memory is high-speed memory that is integrated into the CPU itself (or very close to it, as in older systems), and is designed to hold a copy of memory data that was recently accessed by the processor, thus keeping transfer time between processor and memory at a minimum. It takes a fraction of the time, compared to normal RAM, to access cache memory. In modern systems the L2 cache is synchronous SRAM, meaning it runs at full CPU core speed. L1 cache has been synchronous since its appearance in the i486 architecture.
The processor sends its request to the fastest (and usually smallest and most expensive) level of the hierarchy. If what it wants is there, it can be quickly loaded. If it isn't, the request is forwarded to the next lowest level of the hierarchy, and so on. For the sake of example, let's say the CPU issues a load instruction that tells the memory subsystem to load a piece of data (in this case, a single byte) into one of its registers. First, the request goes out to the L1 cache, which is checked to see if it contains the requested data. If the L1 cache does not contain the data and therefore cannot fulfill the request (a situation called a cache miss), then the request propagates down to the L2 cache. If the L2 cache does not contain the desired byte, then the request begins the relatively long trip out to main memory. If main memory doesn't contain the data, then we're in big trouble, because then it has to be paged in from the hard disk, an act which can take a relative eternity in CPU time.
Let's assume that the requested byte is found in main memory. Once located, the byte is copied from main memory, along with a bunch of its neighboring bytes in the form of a cache block or cache line, into the L2 and L1 caches. When the CPU requests this same byte again it will be waiting for it there in the L1 cache, a situation called a cache hit.
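The walk down the hierarchy described above can be sketched in a few lines of Python. This is a deliberately simplified illustration (real caches work on whole cache lines and fixed capacities; here each level is just a dictionary, and copying a single byte stands in for filling a cache line):

```python
# Each level maps addresses to bytes; the requested byte initially lives
# only in main memory.
l1, l2, ram = {}, {}, {0x10: 0x7F}

def load(addr):
    """Check each level fastest-first; on a hit, fill the faster levels."""
    for name, level in (("L1", l1), ("L2", l2), ("RAM", ram)):
        if addr in level:
            # Copy the data up into L1 and L2 so the next access hits sooner
            # (a one-byte stand-in for copying a whole cache line).
            l1[addr] = l2[addr] = level[addr]
            return level[addr], name
    raise LookupError("page it in from disk")  # the 'big trouble' case

value, hit_level = load(0x10)     # first access misses L1 and L2, hits RAM
value2, hit_level2 = load(0x10)   # same byte again: now a cache hit in L1
```

After the first (slow) access, `hit_level` is `"RAM"`; the second access finds the byte waiting in L1, exactly the cache-hit situation described above.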
Virtual memory is common to almost all modern operating systems. With this feature, the operating system creates a file on the hard disk, called the swap file, that is used to hold memory data. If you attempt to load a program that does not fit in RAM, the operating system sends to the swap file parts of programs that are currently in RAM but not being accessed, freeing space so the new program can be loaded. When you later need a part of a program that has been moved to the hard disk, the opposite happens: the system writes out to disk parts of memory that are not in use at the time, and transfers the original contents back into RAM. So in effect, virtual memory is just hard drive space used to simulate more physical RAM than a system actually has.
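The swap-in/swap-out cycle can be sketched as a toy simulation (this is an illustration of the idea only; real virtual memory works on fixed-size pages and uses more sophisticated replacement policies than the simple "evict the oldest" rule assumed here):

```python
from collections import OrderedDict

CAPACITY = 2          # pretend RAM holds at most two program parts
ram = OrderedDict()   # part name -> data; least recently used first
swap = {}             # stand-in for the swap file on the hard disk

def load_part(name, data=""):
    """Bring a program part into RAM, swapping an idle part out if full."""
    if name in ram:
        ram.move_to_end(name)        # already resident: just mark it as used
        return
    if name in swap:
        data = swap.pop(name)        # the opposite process: page back in
    if len(ram) >= CAPACITY:
        evicted, old = ram.popitem(last=False)
        swap[evicted] = old          # write the idle part to the swap file
    ram[name] = data

load_part("editor")
load_part("browser")
load_part("game")    # RAM is full: "editor" gets swapped out to disk
```

After the third load, `"editor"` sits in the swap file while `"browser"` and `"game"` occupy RAM; asking for `"editor"` again would trigger the slow round trip back from disk.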
The problem is that the hard disk is a mechanical device, not an electronic one. This means that data transfer between the hard disk and RAM is much slower than data transfer between the processor and RAM. To give you an idea of the magnitude: the processor typically communicates with RAM at a transfer rate of 3200 MB/s (200 MHz bus), while hard disks transfer data at rates such as 66 MB/s or 100 MB/s, depending on their technology (DMA/66 and DMA/100, respectively).
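A quick back-of-the-envelope calculation using the figures above shows just how large that gap is (the 512 MB working set is an arbitrary example size, and these are peak rates; real disk transfers are slower still once seek times are involved):

```python
data_mb = 512            # hypothetical amount of data to move
ram_rate_mb_s = 3200     # processor <-> RAM transfer rate from above
disk_rate_mb_s = 100     # DMA/100 hard disk peak transfer rate

ram_seconds = data_mb / ram_rate_mb_s    # 0.16 s through RAM
disk_seconds = data_mb / disk_rate_mb_s  # 5.12 s from the hard disk
print(round(disk_seconds / ram_seconds)) # 32 -> the disk is ~32x slower
```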
When you realize that there's no good substitute for the real thing and decide to add more system RAM, you'll find that you use your virtual memory less, because more memory is now available for the tasks that were previously carted off to the swap file.
DRAM Memory Technologies
DRAM is available in several different technology types. At its core, each technology is quite similar to the one it replaces or the one used on a parallel platform. The differences between the various acronyms of DRAM technologies are primarily a result of how the DRAM inside the module is connected, configured and/or addressed, in addition to any special enhancements added to the technology.
There are three well-known technologies:
Synchronous DRAM (SDRAM)
An older type of memory that quickly replaced earlier types and was able to synchronize with the speed of the system clock. SDRAM started out running at 66 MHz, faster than previous technologies, and scaled officially to 133 MHz (PC133) and unofficially up to 180 MHz. As processors grew in speed and bandwidth capability, new generations of memory such as DDR and RDRAM were required to deliver proper performance.
Double Data Rate Synchronous DRAM (DDR SDRAM)
DDR SDRAM is a lot like regular SDRAM (Single Data Rate) but its main difference is its ability to effectively double the clock frequency without increasing the actual frequency, making it substantially faster than regular SDRAM. This is achieved by transferring data not only at the rising edge of the clock cycle but also at the falling edge. A clock cycle can be represented as a square wave, with the rising edge defined as the transition from "0" to "1", and the falling edge as "1" to "0". In SDRAM, only the rising edge of the wave is used, but DDR SDRAM references both, effectively doubling the rate of data transmission. For example, with DDR SDRAM, a 100 or 133 MHz memory bus clock rate yields an effective data rate of 200 MHz or 266 MHz, respectively. DDR modules utilize a 184-pin DIMM packaging which, like SDRAM, allows for a 64 bit data path, allowing faster memory access with single modules over previous technologies. Although SDRAM and DDR share the same basic design, DDR is not backward compatible with older SDRAM motherboards and vice-versa.
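The effective-rate arithmetic above is straightforward to work through for a 64-bit (8-byte) DIMM. A small sketch (the function names are ours, invented for the example):

```python
def effective_mhz(bus_mhz: int, ddr: bool = True) -> int:
    """Effective data rate: DDR transfers on both clock edges."""
    return bus_mhz * (2 if ddr else 1)

def bandwidth_mb_s(bus_mhz: int, ddr: bool = True) -> int:
    """Peak bandwidth: effective rate times the 8-byte (64-bit) data path."""
    return effective_mhz(bus_mhz, ddr) * 8

print(effective_mhz(133))   # 266 -> a 133 MHz bus yields "DDR266"
print(bandwidth_mb_s(133))  # 2128 MB/s of peak bandwidth
```

The 2128 MB/s figure lands close to the 2100 MB/s that DDR266 modules were marketed as (PC2100); the small gap comes from the bus clock actually being 133.33 MHz rather than a flat 133.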
It is important to understand that while DDR doubles the available bandwidth, it generally does not improve the latency of the memory as compared to an otherwise equivalent SDRAM design. In fact, latency is slightly degraded, as there is no free lunch in the world of electronics or mechanics. So while the performance advantage offered by DDR is substantial, it does not double memory performance, and for some latency-dependent tasks it does not improve application performance at all. Most applications will benefit significantly, though.
Rambus DRAM (RDRAM)
Developed by Rambus, Inc., RDRAM, or Rambus DRAM, was a totally new DRAM technology aimed at processors that needed high bandwidth. Rambus agreed to a development and license contract with Intel, which led to Intel's PC chipsets supporting RDRAM. RDRAM comes in PC600, PC700, PC800 and PC1066 speeds. Specific information on this memory technology can be found at the Rambus website.
Unfortunately for Rambus, dual channel DDR memory solutions have proved to be quite efficient at delivering about the same levels of performance as RDRAM at a much lower cost. Intel eventually dropped RDRAM support in their new products and chose to follow the DDR dance, at which point RDRAM almost completely fell off the map. Rambus, SiS, Asus and Samsung have now teamed up and are planning a new RDRAM solution (the SiS 659 chipset) providing 9.6 GB/s of bandwidth for the Pentium 4. It will be an uphill battle to get RDRAM back in the mainstream market without Intel's support.