
=================================

Contents

Memory Basics
  The nitty-gritty
  Memory Access
  What is Virtual Memory?
  DRAM Memory Technologies
  DDR Memory Speeds
  Processors and Bandwidth
  DDR Dual Channel
  What do these terms mean?
  Installing New Memory
  Can I mix DRAM?
  How do I use Dual Channel?

Bios Settings
  Memory Timings
  Which timings mean what?
  What is SPD?
  To tweak or not to tweak?
  Ok, so I want to tweak, what do I do?
  The Anomaly: nVIDIA's nForce2 & tRAS
  Dealing with Memory Speeds: What is sync / async?

Overclocking
  How do I overclock my memory?
  What to do with ddr voltage?
  Do I need ram cooling?
  How do I burn-in memory?
  Memory Chips

Buying Memory
  What memory to buy?
  Why are you recommending PC3500, when my mobo only supports PC3200?
  How much is enough?
  Matched or Certified Dual Channel RAM

=================================

Memory Basics
=================================

The nitty-gritty

RAM (Random Access Memory) is a means to store data and instructions temporarily for subsequent use by your system processor (CPU). RAM is called "random access" because earlier read-write memories were sequential and did not allow data to be randomly accessed. RAM differs from read-only memory (ROM) in that it can be both read and written. It is considered volatile storage because, unlike ROM, the contents of RAM are lost when the power is turned off. ROM is known to be non-volatile and is most commonly used to store system-level programs that we want to have available to the PC at all times, such as system BIOS programs. There are several ROM variants that can be changed or written to under certain circumstances; these can be thought of as mostly read-only memory: PROM, EPROM, and EEPROM.

Like ROM, there are also variants of RAM which hold different properties and serve different purposes, two of which are SRAM and the several flavors of DRAM. DRAM, or Dynamic RAM, is the slower of the two because it needs to be periodically refreshed or recharged thousands of times per second. If this is not done regularly, the DRAM will lose its stored contents, even if it continues to have power supplied to it. This refreshing action is why the memory is called dynamic, meaning moving or always changing. SRAM, or Static RAM, on the other hand, does not need to be refreshed like DRAM. This gives SRAM faster access times (the time it takes to locate and read one unit of memory). SRAM is, however, far more expensive, which is why it is used primarily in relatively small quantities (normally 1 MB or less) for cache on processors, while the cheaper-to-manufacture DRAM is left for system RAM.

Memory can be built right into a motherboard, but it is more typically attached to the motherboard in the form of a module called a DIMM. A DIMM (Dual Inline Memory Module) is the name given to the circuit board that holds the memory chips, the gold or tin/lead contacts and other memory devices, and provides a 64 bit interface to the memory chips. You've probably seen memory listed as 32x64 or 64x64. These numbers represent the number of chips multiplied by the capacity of each individual chip, which is measured in megabits (Mb), or one million bits. Take the result and divide it by eight to get the number of megabytes on that module. For example, 32x64 means that the module has thirty-two 64-megabit chips. Multiply 32 by 64 and you get 2048 megabits. Since a byte has 8 bits, we need to divide our result of 2048 by 8. Our result is 256 megabytes!
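
If you want to play with those figures yourself, here is a minimal sketch of the same arithmetic in Python (the function name and example values are just for illustration):

    def module_megabytes(chip_count, chip_megabits):
        """Capacity of a module: number of chips times chip size in megabits,
        divided by 8 to convert bits to bytes."""
        return chip_count * chip_megabits / 8

    # The "32x64" example from the text: thirty-two 64-megabit chips
    print(module_megabytes(32, 64))   # 256.0 MB
    print(module_megabytes(64, 64))   # 512.0 MB for a 64x64 module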

Memory Access

Processors tend to access memory in a distinct hierarchy. This hierarchy is a way of saying the order of things; from top to bottom, fast to slow, or most important to least important. Whether it comes from permanent storage (e.g. hard drive) or input (e.g. keyboard), most data goes into RAM first.

Going from fastest to slowest, the memory hierarchy is made up of: Registers - Cache [L1, L2] - RAM [Physical and Virtual] - Input Devices. Registers are fast data stores typically capable of holding a few bytes of data. The registers contain instructions, data, and include the program counter. Modern processors typically contain two levels of cache, known as the "level 1" and "level 2" caches. Cache memory is high speed memory that is integrated into the CPU itself (or very close to it, as in older systems), and is designed to hold a copy of memory data that was recently accessed by the processor, thus keeping transfer time between processor and memory at a minimum. It takes a fraction of the time, compared to normal RAM, to access cache memory. In modern systems the L2 cache is synchronous SRAM, meaning it runs at full CPU core speed. L1 cache has been synchronous since its appearance in the i486 architecture.

The processor sends its request to the fastest, usually smallest and most expensive, level of the hierarchy. If what it wants is there, it can be quickly loaded. If it isn't, the request is forwarded to the next lowest level of the hierarchy, and so on. For the sake of example, let's say the CPU issues a load instruction that tells the memory subsystem to load a piece of data (in this case, a single byte) into one of its registers. First, the request goes out to the L1 cache, which is checked to see if it contains the requested data. If the L1 cache does not contain the data and therefore cannot fulfill the request -- a situation called a cache miss -- then the request propagates down to the L2 cache. If the L2 cache does not contain the desired byte, then the request begins the relatively long trip out to main memory. If main memory doesn't contain the data, then we're in big trouble, because then it has to be paged in from the hard disk, an act which can take a relative eternity in CPU time. Let's assume that the requested byte is found in main memory. Once located, the byte is copied from main memory, along with a bunch of its neighboring bytes in the form of a cache block or cache line, into the L2 and L1 caches. When the CPU requests this same byte again, it will be waiting for it there in the L1 cache, a situation called a cache hit.
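
The lookup cascade described above can be sketched in a few lines of Python. This is a deliberately simplified model (the dictionaries standing in for each level are an assumption for illustration, not how real hardware works):

    # Each level is modelled as a simple address -> data dictionary.
    l1_cache, l2_cache, main_memory = {}, {}, {0x1000: 42}

    def load(address):
        """Walk the hierarchy from fastest to slowest, filling the caches on the way back."""
        for level in (l1_cache, l2_cache):
            if address in level:
                return level[address]              # cache hit
        if address in main_memory:                 # missed in both caches
            data = main_memory[address]
            l1_cache[address] = l2_cache[address] = data   # copy the line into both caches
            return data
        raise LookupError("page fault: must be fetched from disk")  # the 'relative eternity'

    print(load(0x1000))  # first access: miss, fetched from main memory
    print(load(0x1000))  # second access: hit in L1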

What is Virtual Memory?

Virtual memory is common to almost all modern operating systems. With this feature, the operating system creates a file on the hard disk, called the swap file, that is used to store RAM data. So, if you attempt to load a program that does not fit in RAM, the operating system sends to the swap file parts of programs that are presently stored in RAM but are not being accessed, freeing space in RAM and allowing another program to be loaded. When you need to access a part of a program that the system has stored on the hard disk, the opposite process happens: the system stores on the disk parts of memory that are not in use at the time and transfers the original memory contents back. So in effect, virtual memory is just hard drive space used to simulate more physical RAM than a system actually has. The problem is that the hard disk is a mechanical device, not an electronic one. This means that data transfer between the hard disk and RAM is much slower than data transfer between the processor and RAM. To give you an idea of the magnitude: the processor typically communicates with RAM at a transfer rate of 3200 MB/s (200 MHz bus), while hard disks transfer data at rates such as 66 MB/s and 100 MB/s, depending on their technology (DMA/66 and DMA/100, respectively). When you realize that there's no good substitute for the real thing and you therefore decide to add more system RAM, you'll discover that you use your virtual memory less, because you will now have more memory available to complete the tasks that were previously carted off to your virtual memory.

DRAM Memory Technologies

DRAM is available in several different technology types. At their core, each technology is quite similar to the one that it replaces or the one used on a parallel platform.

The differences between the various acronyms of DRAM technologies are primarily a result of how the DRAM inside the module is connected, configured and/or addressed, in addition to any special enhancements added to the technology. There are three well-known technologies:

Synchronous DRAM (SDRAM)
An older type of memory that quickly replaced earlier types and was able to synchronize with the speed of the system clock. SDRAM started out running at 66 MHz, faster than previous technologies, and was able to scale to 133 MHz (PC133) officially, and unofficially up to 180 MHz. As processors grew in speed and bandwidth capability, new generations of memory such as DDR and RDRAM were required to get proper performance.

Double Data Rate Synchronous DRAM (DDR SDRAM)
DDR SDRAM is a lot like regular SDRAM (Single Data Rate), but its main difference is its ability to effectively double the data rate without increasing the actual clock frequency, making it substantially faster than regular SDRAM. This is achieved by transferring data not only at the rising edge of the clock cycle but also at the falling edge. A clock cycle can be represented as a square wave, with the rising edge defined as the transition from 0 to 1, and the falling edge as 1 to 0. In SDRAM, only the rising edge of the wave is used, but DDR SDRAM references both, effectively doubling the rate of data transmission. For example, with DDR SDRAM, a 100 or 133 MHz memory bus clock rate yields an effective data rate of 200 MHz or 266 MHz, respectively. DDR modules utilize 184-pin DIMM packaging which, like SDRAM, provides a 64 bit data path, allowing faster memory access with single modules than previous technologies. Although SDRAM and DDR share the same basic design, DDR is not backward compatible with older SDRAM motherboards and vice-versa. It is important to understand that while DDR doubles the available bandwidth, it generally does not improve the latency of the memory as compared to an otherwise equivalent SDRAM design.

In fact the latency is slightly degraded, as there is no free lunch in the world of electronics or mechanics. So while the performance advantage offered by DDR is substantial, it does not double memory performance, and for some latency-dependent tasks it does not improve application performance at all. Most applications will benefit significantly, though.

Rambus DRAM (RDRAM)
Developed by Rambus, Inc., RDRAM, or Rambus DRAM, was a totally new DRAM technology aimed at processors that needed high bandwidth. Rambus, Inc. agreed to a development and license contract with Intel, and that led to Intel's PC chipsets supporting RDRAM. RDRAM comes in PC600, PC700, PC800 and PC1066 speeds. Specific information on this memory technology can be found at the Rambus website. Unfortunately for Rambus, dual channel DDR memory solutions have proved to be quite efficient at delivering about the same levels of performance as RDRAM at a much lower cost. Intel eventually dropped RDRAM support in their new products and chose to follow the DDR dance, at which point RDRAM almost completely fell off the map. Rambus, SiS, Asus and Samsung have now teamed up and are planning a new RDRAM solution (the SiS 659 chipset) providing 9.6 GB/s of bandwidth for the Pentium 4. It will be an uphill battle to get RDRAM back into the mainstream market without Intel's support.

What's new, pussycat? Enter DDR-2
Second generation double data rate memory (DDR-2), expected to start at 400 MHz then go to 533 MHz and 667 MHz, should soon begin replacing DDR-1 (or DDR as we know it). DDR-2 seeks to increase the total memory bandwidth available to the system. This will be accomplished via increased clock frequencies in addition to streamlining the protocols used by the system to make memory reads and writes. According to the JEDEC standard, DDR-2 will have 240 pins and will offer reductions in power consumption and heat output, which are two problems that grow larger as systems carry more and faster memory.

In a similar fashion to the migration from SDRAM to DDR, DDR-2 sacrifices latency. An interesting tidbit on the side is that Intel's P4 architecture, using all kinds of optimizations, will be hurt less than AMD by the high latencies of DDR-2. We didn't complain much last time, so maybe we won't this time either? DDR-2 will likely be the dominant type of memory in the desktop space for several years, as DDR-1 is/was, but it won't arrive in quantity until 2005.

QDR and XDR
Quad Data Rate Memory (QDR DRAM) - Instead of two data samples per clock cycle, QDR sends four data samples per cycle. QDR is not a JEDEC standard, but instead has been developed as a memory timing technology by Kentron. Kentron has said that QDR technology can leverage existing DDR-1 technology. Note that QDR isn't simply 2x the speed of standard DDR. Instead, Kentron and VIA propose using a single QDR channel to achieve the performance of dual-channel DDR. (DDR-2 is still on VIA's road map.)

XDR DRAM - getting catchy? XDR DRAM stands for eXtreme Data Rate DRAM, and is the final name for Rambus's "Yellowstone" technologies, which have been announced in pieces over time. XDR brings all of these formerly announced technologies under one big umbrella, which will be marketed as a high-bandwidth memory solution. XDR is effectively a hybrid of DDR and Rambus DRAM, designed to combine the best elements of both. Rambus claims that their mid-range XDR memory module is 8x faster than today's DDR-400. By "faster", they are referring to the module clock speed, along with how many bits can be transmitted per clock cycle. XDR modules are not in production yet, and are not scheduled to go into full-scale production until 2006.

DDR Memory Speeds

The speed of DDR is usually expressed in terms of its "effective data rate", which is twice its actual clock speed.

PC3200 memory, or DDR400, or 400 MHz DDR, is not running at 400 MHz; it is running at 200 MHz. The fact that it accomplishes two data transfers per clock cycle gives it nearly the same bandwidth as SDRAM running at 400 MHz, but DDR400 is indeed still running at 200 MHz.

Actual clock speed / effective transfer rate => specification
100/200 MHz => DDR200 or PC1600
133/266 MHz => DDR266 or PC2100
166/333 MHz => DDR333 or PC2700
185/370 MHz => DDR370 or PC3000
200/400 MHz => DDR400 or PC3200
217/433 MHz => DDR433 or PC3500
233/466 MHz => DDR466 or PC3700
250/500 MHz => DDR500 or PC4000
267/533 MHz => DDR533 or PC4200
283/566 MHz => DDR566 or PC4500

So how do they come up with those names? Well, the industry specifications for memory operation, features and packaging are finalized by a standardization body called JEDEC. JEDEC, the acronym, once stood for Joint Electron Device Engineering Council, but the body is now just called the JEDEC Solid State Technology Association. The naming convention specified by JEDEC is as follows: Memory chips are referred to by their native speed. For example, 333 MHz DDR SDRAM memory chips are called DDR333 chips, and 400 MHz DDR SDRAM memory chips are called DDR400. DDR modules are also referred to by their peak bandwidth, which is the maximum amount of data that can be delivered per second. For example, a 400 MHz DDR DIMM is called a PC3200 DIMM. To illustrate this on a 400 MHz DDR module: each module is 64 bits wide, or 8 bytes wide (each byte = 8 bits). To get the transfer rate, multiply the width of the module (8 bytes) by the rated speed of the memory module: (8 bytes) x (400 million transfers/second) = 3,200 Mbytes/second or 3.2 Gbytes/second, hence the name PC3200.
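
A minimal sketch of that naming rule in Python (the function name is made up for illustration):

    def peak_bandwidth_mb(effective_mhz, bus_width_bits=64):
        """Peak bandwidth in MB/s: module width in bytes times transfers per second."""
        return (bus_width_bits // 8) * effective_mhz

    print(peak_bandwidth_mb(400))  # 3200 -> a DDR400 module is sold as PC3200
    print(peak_bandwidth_mb(333))  # 2664 -> rounded up and sold as PC2700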

To date, the JEDEC consortium has yet to finalize specifications for PC3500 and higher modules. PC2400 was a very short-lived label applied to overclocked PC2100 memory. PC3000 was not and will not ever be an official JEDEC standard.

Processors and Bandwidth

(Reminder: Athlon64 info to be added, or maybe a dedicated new topic.)

The front side bus (FSB) is basically the main highway or channel between all the important functions on the motherboard that surround the processor, through which information flows. The faster and wider the FSB, the more information can flow over the channel, much as a higher speed limit or wider lanes can improve the movement of cars on a highway. Likewise, a low speed limit or narrower lanes will retard the movement of cars on the highway, causing a bottleneck of traffic. Intel has been able to reduce the FSB bottleneck by accomplishing four data transfers per clock cycle. This is known as quad-pumping, and has resulted in an effective FSB frequency of 800 MHz, with an underlying 200 MHz clock. AMD Athlon XPs, on the other hand, must be content with a bus that utilizes different technology, one that utilizes both the rising and falling sides of a signal. This is in essence the same double data rate technology used by memory of the same name (DDR), and results in a doubling of the FSB clock frequency. That is, a 200 MHz clock results in an effective 400 MHz FSB. Processors also have a FSB data width, which can be thought of as the "lanes on a highway" that go in and out of the processor. When the first 8088 processor was released, it had a data bus width of 8 bits and was able to access one character at a time (8 bits = 1 character/byte) every time memory was read or written. The width in bits thus determines how many characters the bus can transfer at any one time. An 8-bit data bus transfers one character at a time, a 16-bit data bus transfers 2 characters at a time and a 32-bit data bus transfers 4 characters at a time.

Modern processors, like the Athlon XP and Pentium 4, have a 64-bit wide data bus, enabling them to transfer 8 characters at a time. Although these processors have 64-bit data bus widths, their internal registers are only 32 bits wide and they are only capable of processing 32-bit commands and instructions, while the new AMD64 series of processors is capable of processing both 32-bit and 64-bit commands and instructions. When talking memory, bandwidth refers to how fast data is transferred once it starts, and is often expressed in quantities of data per unit time. The peak bandwidth that may be transmitted by an Athlon XP or a Pentium 4 is the product of the width of the FSB and the frequency it runs at. To illustrate:

Athlon XP Barton 3200+ -- 400FSB
64 (bits) * 400,000,000 (Hz) = 25,600,000,000 bits/sec
(25,600,000,000 / 8) / (1000 * 1000) = 3200 MB/sec

Intel Pentium 4 C 3.2 GHz -- 800FSB
64 (bits) * 800,000,000 (Hz) = 51,200,000,000 bits/sec
(51,200,000,000 / 8) / (1000 * 1000) = 6400 MB/sec

These figures are theoretical. There's a difference between peak bus bandwidth and effective memory bandwidth. Where peak bus bandwidth is the product of the bus width and bus frequency, effective bandwidth takes into consideration other factors such as addressing and delays that are necessary to perform a memory read or write. The memory could very well be capable of putting out 8 bytes on every single clock pulse for an indefinitely long time, and the CPU could likewise be capable of consuming data at this rate indefinitely. The problem is that there are turnaround times (or delays) between when the processor places a request for data on the FSB, when the requested data is produced by RAM, and when this requested data finally arrives for use by the CPU. So, potential peak bandwidth is very rarely, if ever, realized.
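
The same width-times-rate arithmetic, written out as a small Python sketch (processor names and figures are the ones used above):

    def peak_fsb_bandwidth_mb(bus_width_bits, effective_fsb_hz):
        """Peak FSB bandwidth in MB/s = bus width in bytes * effective transfers per second."""
        return (bus_width_bits / 8) * effective_fsb_hz / 1_000_000

    print(peak_fsb_bandwidth_mb(64, 400_000_000))  # Athlon XP 3200+, 400 MHz FSB -> 3200.0
    print(peak_fsb_bandwidth_mb(64, 800_000_000))  # Pentium 4 C, 800 MHz FSB    -> 6400.0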

DDR Dual Channel

Most of today's mainstream chipsets use some form of dual channel to supply processors with bandwidth. Take note that the memory isn't dual channel, the platform (or chipset) is. In fact there is no such thing as dual channel memory. Rather, it is most often a memory interface composed of two (or more) normal memory modules coordinated by the chipset on the motherboard, or in the case of the AMD64 processors, coordinated by the integrated memory controller. But for the sake of simplicity, we refer to DDR dual channel architecture as dual channel memory. The nForce2 platform has two 64-bit memory controllers (which are independent of each other) instead of just a single controller like other chipsets. These two controllers are able to access "two channels" of memory simultaneously. The two channels, together, handle memory operations more efficiently than one module by utilizing the bandwidth of two modules (or more) combined. By combining DDR400 (PC3200) with dual memory controllers, the nForce2 can offer up to 6.4 GB/sec of bandwidth in theory. However, this extra bandwidth produced by dual channel cannot be fully utilized by the Athlon XP and Duron family (K7) of processors. Data (bandwidth) will reach these processors no sooner than the system bus (FSB) allows, and the processor therefore cannot derive an advantage from memory operating faster than DDR266 when operating on a 133/266 MHz FSB, DDR333 with a 166/333 MHz FSB, or DDR400 at a 200/400 MHz FSB, even in single channel mode. Visualize a four lane highway, symbolizing your dual channel configuration. As you go along the highway you come up to a bridge that is only 2 lanes wide. That bridge is the restriction posed by the dual-pumped AMD FSB. Only two lanes of traffic may pass through the bridge at any one time. That's the way it is with the K7 processors and dual channel chipsets. In case you're wondering, the K in K7 stands for Kryptonite, later changed to Krypton to avoid copyright infringement. Yes, that very same fictional element from comic books that could bring the otherwise all-powerful Superman to his knees.
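
To put rough numbers on that bridge analogy, here is a small sketch; the figures are the theoretical ones from the text, and "usable" simply takes the smaller of the two sides:

    def usable_bandwidth_mb(fsb_bandwidth_mb, memory_bandwidth_mb):
        """The slower side of the FSB/memory pairing sets the ceiling."""
        return min(fsb_bandwidth_mb, memory_bandwidth_mb)

    # Athlon XP on a 400 MHz FSB (3.2 GB/s) with dual channel DDR400 (6.4 GB/s)
    print(usable_bandwidth_mb(3200, 6400))   # 3200 -> the extra channel is wasted on K7
    # Pentium 4 on an 800 MHz FSB (6.4 GB/s) with the same memory
    print(usable_bandwidth_mb(6400, 6400))   # 6400 -> bus and memory are matched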

Speaking of which, Intel's P4 architecture is, in contrast, designed to exploit the increased bandwidth afforded by dual channel memory architectures. The 64-bit quad-pumped bus of the modern Pentium 4 CPU working at 800 MHz, in theory, requires 6.4 GB/s of bandwidth. This is an exact match for the bandwidth produced by the Intel i875 (Canterwood) and i865 (Springdale) chipset families. The quad-pumped P4 FSB seemed like drastic overkill in the days of single channel SDR memory, but is paying handsome dividends in today's climate of dual channel DDR memory subsystems. This is one lasting and productive legacy of Intel's RDRAM efforts: as implemented on the P4, RDRAM was also a dual channel architecture, and mandated the quad-pumped FSB for its extra bandwidth to be exploited. This factor continues to serve the P4 well in the dual channel DDR era we are currently in, and allows the P4 greater memory performance than all other PC platforms, save the new AMD Athlon64 FX with all its new bells and whistles.

The Athlon 64 FX processor has a fully integrated DDR dual channel memory controller providing a 128-bit wide path to memory, eliminating the need for a dual channel interface on the motherboard, which traditionally was located in the northbridge. The old term front-side bus has always represented the speed at which the processor moves memory traffic and other data traffic to and from the chipset. Since the AMD64 processors have the memory controller located on the processor die, memory subsystem traffic no longer has to go through the chipset for CPU-to-memory transfers. The old term "front-side bus" is therefore no longer really applicable. With AMD64 processors, the CPU and memory controller interface with each other at full CPU core frequency. The speed at which the processor and chipset communicate is now dependent on the chipset's HyperTransport spec, running at speeds of up to 1600 MHz. Although the P4 (800 FSB variety) and the A64 FX (940 pins) both share the same theoretical peak memory bandwidth of 6.4 GB/sec, the Athlon FX realizes significantly more throughput, due mainly to its integrated memory controller, which drastically reduces latency. Even so, it still suffers from the required use of registered modules, which are slower than regular modules. The upcoming Athlon 64 / A64 FX processors designed for Socket 939 will be free from this major drawback and will also feature dual channel memory controllers. One negative, though, of having the memory controller integrated into the processor is that to

support emerging memory technologies, like DDR-2 for example, the controller has to be redesigned and the processor needs to be replaced.

What do these terms mean?

Parity
Parity is a form of error checking. Non-parity is "regular" memory - it contains exactly one bit of memory for every bit of data to be stored: 8 bits are used to store each byte of data. Parity memory adds an extra single bit for every eight bits of data, used only for error detection. So with parity modules, 9 bits are used to store each byte. This extra chip detects whether data was correctly read or written; however, it will not correct any errors that may have occurred.

ECC
This stands for error correcting circuits, error correcting code, or error correction code. These modules go beyond simple parity checking. They also have an extra chip (or two, depending on how many chips the module has in total) that not only detects errors but also corrects them (depending on the type of error) on the fly. When this correction takes place, the computer will continue without a hiccup; it will have no idea that anything even happened. However, if you have a corrected error, it is useful to know this; a pattern of errors can indicate a hardware problem that needs to be addressed. Chipsets allowing ECC normally include a way to report corrected errors to the operating system, but it is up to the operating system to support this.

Registered & Unbuffered
Registered modules contain a 'register' that helps to ensure data is handled properly. Registered modules are therefore slower than unbuffered modules. They are generally used in mission critical machines and machines that require large amounts of memory. The Opteron series of AMD processors uses registered DDR. The Athlon64 FX (940 pins) inherits its architecture from the Opteron 100 series, thus it too requires registered modules to function.

Registered memory must be supported by the motherboard and cannot be mixed with "Unbuffered" modules. Buffered memory is basically the same as registered memory, but the term is used for older types of memory. Unbuffered or standard memory modules do not have a register. They are cheaper and are the popular choice for home computers.
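
As a small illustration of the parity scheme described above, here is a sketch in Python of how a single even-parity bit detects (but cannot correct or locate) a one-bit error:

    def parity_bit(byte):
        """Even parity: the extra bit makes the total number of 1s even."""
        return bin(byte).count("1") % 2

    stored_byte, stored_parity = 0b10110010, parity_bit(0b10110010)

    corrupted = stored_byte ^ 0b00000100          # one bit flips in memory
    if parity_bit(corrupted) != stored_parity:
        print("parity error detected")            # detected, but we don't know which bit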

Installing New Memory

The actual installation part is the easiest. Some tips while installing memory: Open your computer case and locate the memory sockets on your motherboard. You may need to unplug cables and peripherals, and re-install them afterward. Always handle the module by the edges. Properly ground yourself before taking your RAM in hand; simply grab hold of an unpainted metal surface, such as the frame inside your PC. But if you tend to get zapped when you touch metal objects in the vicinity of your PC, consider more drastic measures before opening that RAM packaging and get yourself a grounding wrist strap. Antistatic straps usually cost less than $10 and should be available where memory products are sold; if not, your local electronic parts supply house will have them. Electro-Static Discharge (ESD) is a frequent cause of damage to memory modules. ESD is the result of handling the module without first properly grounding yourself and thereby dissipating static electricity from your body or clothing. If ESD damages memory, problems may not show up immediately and may be difficult to diagnose. DIMMs usually slide straight down into the slot and lock into place when a little pressure is applied to each side of the module itself; they are secured by the ejector tabs/clips on the ends of the slots, which automatically snap into a locked position. Note how the module is keyed to the memory socket. This ensures the module can be plugged into the slot one way only -- the right way. You can probably determine how your PC's modules snap in by looking at the already installed memory. Repeat this procedure for any additional modules you are installing.

If you're removing RAM, the process is reversed -- unlock the module by pushing out the clips, then lift it up. With the memory in place, turn on your PC. You should see evidence of your newly installed memory as the system does its power-on self test (POST). If you don't, or if a memory error appears or is heard, then remove and re-seat all the memory modules -- the old and the new. If this doesn't solve the problem, remove the new modules and try again. Do not remove any stickers from the modules. Removing these stickers will likely void the warranty. The information present on these stickers may be required for warranty replacement and information, as well as for determining which module you have and its characteristics. These stickers will not have any effect on performance, nor will they be affected by the heat inside the system.

Which slots to use?
If you're using a single module, it's best practice to use the first slot. If using two or more modules in a non-dual channel motherboard, populate the first slot and use any other slots you wish.

Q: I've had my single module installed in slot 2 for the last few months now, should I change it?
No, it's also best practice to keep on using the slot(s) you've been using before. If you replace RAM, then insert the new modules in the same slots the older ones were in before. You may find the system overclocks better with the RAM in a different slot. It is very hard to predict when this effect occurs, as well as which slot might work best. In the overclocking game, he who tries the most things wins, and if you are running an overclocked configuration that is asking a lot of the RAM, it is a good idea to try all available slots to make sure the one you are using yields the best results. If you're using two or more modules of unequal size, you will get the best performance if you put the largest module(s) (in megabytes) in the lowest-numbered slot(s). For example, if your system currently has 256MB of memory and you want to add 512MB, it would be best to put the 512MB module into the first slot and the 256MB module into the second.

Can I mix DRAM?

Mixing memory refers to the use of more than one DRAM module, each of which has unlike properties such as speed, arrangement, size and even SPD programming. It doesn't always work as expected, so it is generally preferable to avoid doing this. Often, folks who upgrade machines, particularly older ones, find themselves in a situation where they may not be able to find additional memory that is identical to what is already in the machine. The risk of running into problems is greatly increased if the modules used have significantly different properties. If you mix modules of different speeds, the system will only run at the speed of the slowest module. As you will see, some systems automatically detect the properties of the memory modules being used, and set the system timing and other settings accordingly. They usually look at the speed of the memory in the first bank when detecting these settings. So if you use two or more dissimilar modules, it is advisable to place the slowest module in the first slot. As well, if the goal is overclocking, the best results are often obtained with perfectly matched modules. Additionally, dual channel architectures generally work very poorly with mismatched modules. There is no guarantee a mismatch won't work for one reason or another, and it won't really hurt anything to try (assuming your data is backed up), but to maximize the chances of success, modules should match.

How do I use Dual Channel?

Dual channel requires at least two modules for operation. It is recommended that the modules you use be of the same size, speed, arrangement etc. Dual channel is optional on the original nForce2 motherboards and the nForce2 Ultra 400; you can also choose to run in single channel mode on these motherboards. nForce2 400 boards are single-channel only. Most dual channel capable nForce2 motherboards come with three slots.

On these motherboards the first memory controller controls only the first slot (the slot positioned by itself), while the second memory controller controls the last two slots (which are usually closer together). Name them slots 1, 2 & 3 respectively. To implement dual channel, it is necessary to occupy slot 1 (channel 0) and either one of the two slots that are closer together, slot 2 or 3 (channel 1). The entire configuration would then be running in 128-bit mode. In addition, on nForce2 motherboards you may use three modules in dual channel mode by filling the third, previously unoccupied slot. With three sticks, slot 1 remains channel 0 while slots 2 & 3 become channel 1. To maintain 128-bit mode with all three slots filled, each channel must have an equal amount of memory. For example, slot 1 could be filled with a 512 MB module, while slots 2 & 3 are populated with 256 MB modules. If you were to use three modules of the same size, then only the first two modules would be running in 128-bit dual channel mode. For example, using 3x 256 MB modules will have the first 512 MB running in 128-bit dual channel mode, while the remaining 256 MB will be in 64-bit single channel mode. Intel dual-channel systems are different. They have either two or four slots, and to run in dual channel mode must have either one or two pairs of (hopefully) matching modules. Running three modules on a P4 system will force it to run in single channel mode, and is therefore to be avoided. Consult your motherboard manual for instructions on exactly which slots to use.

=================================

Bios Settings

Memory timings

Memory performance is not entirely determined by bandwidth or MHz, but also by the speed at which it responds to a command, or the time it must wait before it can start or finish the process of reading or writing data. These are memory latencies or reaction times (timings). Memory timings control the way your memory is accessed and can be a contributing factor to better or worse 'real-world' performance of your system. Internally, DRAM has a huge array of cells that contain data. (If you've ever used Microsoft's Excel, try and picture it that way.) A pair of row and column addresses can uniquely address each cell in the DRAM. DRAM communicates with a memory controller through two main groups of signals: control-address signals and data signals. These signals are sent to the RAM in order for it to read/write data, address and control. The address is of course where the data is located in the memory banks, and the control signals are the various commands needed to read or write. There are delays before a control signal can be executed or finished, and this is where we get memory timings. Memory timings are most often expressed as a string of four numbers, separated by dashes, from left to right or vice-versa, like this: 2-2-2-5 [CAS-tRCD-tRP-tRAS]. These values represent how many clock cycles long each delay is, but are not necessarily expressed in the order in which they occur. Different BIOSes will display them differently, and there may be additional options (timings) available.

Which timings mean what?

In most motherboards, numerous settings can be found to optimize your memory. These settings are often found in the Advanced Chipset section of the popular Award BIOSes. In certain instances, the settings may be placed in odd locations and even given unfamiliar names, so please consult your motherboard manual for specific information. Below are common latency options:

Command rate - is the delay (in clock cycles) between when chip select is asserted (i.e. the RAM is selected) and commands (i.e. Activate Row) can be issued to the RAM. Typical values are 1T (one clock cycle) and 2T (two clock cycles).

CAS (Column Address Strobe or Column Address Select) - is the number of clock cycles (or ticks, denoted with T) between the issuance of the READ command and when the data arrives at the data bus. Memory can be visualized as a table of cell locations, and the CAS delay is invoked every time the column changes, which happens more often than the row changing.

tRP (RAS Precharge Delay) - is the speed or length of time that it takes DRAM to terminate one row access and start another. In simpler terms, it means switching memory banks.

tRCD (RAS (Row Address Strobe) to CAS delay) - As it says, it's the time between RAS and CAS access, i.e. the delay between when a memory bank is activated and when a read/write command is sent to that bank. Picture an Excel spreadsheet with numbers across the top and along the left side. The numbers down the left side represent the rows and the numbers across the top represent the columns. The time it would take you, for example, to move down to row 20 and across to column 20 is RAS to CAS.

tRAS (Active to Precharge or Active Precharge Delay) - controls the length of the delay between the activation and precharge commands -- basically, how long after activation the access cycle can be started again. This influences row activation time, which is taken into account when memory has hit the last column in a specific row, or when an entirely different memory location is requested.

These timings or delays occur in a particular order. When a row of memory is activated to be read by the memory controller, there is a delay before the data on that row is ready to be accessed; this is known as tRCD (RAS to CAS, or Row Address Strobe to Column Address Strobe delay). Once the contents of the row have been activated, a read command is sent, again by the memory controller, and the delay before it starts actually reading is the CAS (Column Address Strobe) latency. When reading is complete, the row of data must be de-activated, which requires another delay, known as tRP (RAS Precharge), before another row can be activated.

The final value is tRAS, which comes into play whenever the controller has to address different rows in a RAM chip. Once a row is activated, it cannot be de-activated until the delay of tRAS is over.
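
Since each of these values is a count of clock cycles, its cost in real time depends on the memory clock. A small sketch of that conversion (the timings shown are just example values):

    def delay_ns(cycles, memory_clock_mhz):
        """Convert a timing value in clock cycles to nanoseconds at a given memory clock."""
        return cycles * 1000 / memory_clock_mhz

    # CAS 2.5 at DDR400 (200 MHz actual clock) vs CAS 2 at DDR333 (166 MHz)
    print(delay_ns(2.5, 200))  # 12.5 ns
    print(delay_ns(2.0, 166))  # about 12.0 ns -- "slower" memory with tighter timings can respond sooner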

What is SPD?

SPD (Serial Presence Detect) is a feature available on all DDR modules. This feature solves compatibility problems by making it easier for the BIOS to properly configure the system to optimize your memory. The SPD device is an EEPROM (Electrically Erasable Programmable Read Only Memory) chip, located on the memory module itself, that stores information about the module's size, timings, speed, data width, voltage, and other parameters. If you configure your memory by SPD, the BIOS will read those parameters during the POST routine (bootup) and will automatically adjust values in the BIOS according to the preset module manufacturer specifications. There is one caveat though. At times the SPD contents are not read correctly by the BIOS. With certain combinations of motherboard, BIOS and memory, setting SPD or Auto may result in the BIOS selecting full-fast timings (the lowest possible numbers), or at times full-slow timings (the highest possible numbers). This is often the culprit in situations where it appears that a particular memory module is not compatible with a given board. Often in these cases the SPD contents are not being read correctly and the BIOS is using faster memory timings than the module or system as a whole can boot with. In cases like these, replacing the module with another, or setting the BIOS to allow manual timings and setting those timings to safer (higher) values, will often allow the combination to work.

To tweak or not to tweak?

In order to really maximize performance from your memory, you'll need to gain access to your system's BIOS. There is usually a master memory setting, often rightly called Memory Timing or Memory Interface, which usually gives you the choice to set your memory timings by SPD or Auto, preset Optimal and Aggressive timings (e.g. Turbo and Ultra), and lastly an Expert or Manual setting that will enable you to manipulate individual memory timing settings to your liking.

Are the gains of the perfect, hand-tweaked memory timing settings worth it over the automatic settings? If you're just looking to run at stock speeds and want absolute stability, then the answer to that question would probably be no. The relevance would be nominal at best and you would be better off going by SPD or Auto. However, if your setup is on the cutting edge of technology, or you're pushing performance to the limit as some overclockers, gamers or tweakers do, it may have great relevance.

Ok, so I want to tweak, what do I do?

Now for the kewl stuff!!! Here are general guidelines to follow while "tweaking". Some of these points can go much deeper than stated. If you have any Qs or if anything isn't clear, you can PM me. The first order of business, when tweaking your memory, is to deactivate the automatic RAM configuration -- SPD or Auto. With SPD enabled, the SPD chip on the memory module is read to obtain information about the timings, voltage and clock speed, and those settings are adjusted accordingly. These settings are, however, very conservative, to ensure stable operation on as many systems as possible. With a manual configuration, you can customize these settings for your own system to your liking. As with CPU/video card overclocking, adjusting the memory timings should be done methodically and with ample time to test each adjustment. Testing each adjustment WILL take a lot of your time. If you've been reading this, then it appears you've got a lot of it! ....So the one and only way to know if your memory is capable of your desired timings is to use stress testing programs, benchmarks or even your favorite game.

Three popular programs used for this are Memtest86, Prime95 and 3DMark2001. I recommend at least 8 hours of testing before concluding that your RAM is stable at your FINAL timings. BEFORE getting to your final timings, you will have made smaller adjustments from your "stock" timings. You may carry out testing after each of these adjustments, lasting for shorter periods. More experienced users will just skip this and chance it.

Lower values (or figures) = better performance, but lower overclockability and possibly diminished stability. Higher values = lesser performance, but increased overclockability and more stability.

As a general rule, a lower value (or timing) will result in improved performance. After all, if it takes fewer cycles to complete an operation, then more operations can fit within a given amount of time. However, this comes at a cost, and that is stability. It is similar to wireless networking with short and long preambles. A long preamble might be slower, but in a heavy network environment it is much more reliable than a short preamble, because there is more certainty a packet is for your NIC. The same goes for memory: in general, the more cycles used, the more stable the operation, because the extra time gives the module more margin to deliver exactly the right data from exactly the right part of the memory. The memory timings also play a role in how far the memory will go in keeping up with the FSB. Lower timings may hinder how fast the memory can run, while higher timings allow for more memory speed. So which is better, lower timings or higher memory speeds? Overall data throughput depends on both bandwidth and latencies. Peak bandwidth is important for certain applications that employ mostly streaming memory transfers. In these applications, the memory will burst the data, many characters or bytes one after another. Only the very first character will have a latency of maybe several cycles, but all the characters after it will be delivered one after another. Other applications with more random accesses, like most games, will get more mileage out of lower latency timings.

So if you have to choose, weigh the importance of higher memory clocks against lower latency timings, and decide which is most important for your application. (See "Buying Memory" for more.) If you are not planning on overclocking the clock speed of your RAM, or if you have fast RAM rated at speeds above that of your current FSB, it may be possible to just lower the timings for a performance gain in certain applications that make the most frequent accesses to system memory, like, for instance, games. Memory timings can vary depending on the performance of the RAM chips used by the module maker. One might think that by buying Corsair's XMS PC4400, rated at 3-4-4-8 (for example), they'll be able to lower latencies down to 2-2-2-6 if running at just 200 MHz (or PC3200 speeds). It doesn't quite work that way. Not all memory modules will exhibit the ability to use certain timings without producing errors, instability or even worse. The most typical values for memory timings are 2 and 4. You might ask: why can't we use 1 or even 0 for memory timings? JEDEC specifies that it is not possible for current DRAM technology to operate as it should under such conditions. Depending on the motherboard, you might be able to squeeze '1' onto certain timings, but it will very likely result in memory errors and instability. And even if it doesn't, it is unlikely to result in a performance gain. tRCD & tRP are usually equal numbers between 2 and 3. CAS latency should be either 2.0 or 2.5. Many systems running performance PC3200/PC3500 memory fail to boot with a CAS 3 setting. CAS is not the most critical of the various timings, unlike what is taught by many and what RAM sellers try to market. In general, the importance of CAS when placed against tRP and tRCD is nominal. Reducing CAS has a relatively minor effect on memory performance, while lower tRP & tRCD values result in a much more substantial gain. In other words, if you had to choose, 3-3-2.5 would be better than 4-4-2.0 (tRCD-tRP-CAS). The value of tRCD most often accounts for the biggest hit in performance if increased, followed by tRP, then CAS. So if you need to loosen RAM timings in hopes of achieving a higher clock, it is recommended and accepted practice to increase the value of CAS first, then tRP, and then finally tRCD.

tRAS is unique in that lowering it can lead to problems and lesser performance. As said before, it is the delay after row activation before the access cycle can be started again. If the value of tRAS is too high, the row will be unnecessarily delayed from starting another cycle. However, if it is set too low, there may not be enough time to complete the cycle. When that happens, there will be loss or corruption of data. A whitepaper from the RAM manufacturer Mushkin outlines how tRAS should be the sum of tRCD, CAS, and 2. For example, if you are using a tRCD of 2 and a CAS of 2 on your RAM, then you should set tRAS to 6. At values lower than that, theory would dictate lesser performance as well as catastrophic consequences for data integrity, including hard drive addressing schemes - truncation, data corruption, etc. - as a cycle or process would be ended before it is done. How is it possible for memory timings to affect my hard drive? When the system is shut down or a program is closed, physical RAM data that has become corrupted may be written back to the hard drive, and that's where the consequences for the hard drive come in. Also, let's not forget when physical RAM data is translated by the operating system to virtual memory space located on the hard drive. While it's important to consider the advice of experts like Mushkin, your own testing is still valuable. Systems, AMD & Intel alike, can indeed operate with stability at 2-2-2-5 timings, and even exhibit a performance gain as compared to the theoretically mandated 2-2-2-6 configuration. The most important thing in any endeavor is to keep an open mind, and don't spare the effort. Once you've tried both approaches extensively, it will be clear to you which is superior for your particular combination of components. Unlike CPU overclocking or video card tweaking, adjusting memory timings poses very little physical risk to your system, other than the possibility of a Windows failure to load or a program failure while testing.
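
The Mushkin rule of thumb quoted above is easy to express as a sketch (purely illustrative; note the nForce2 exception discussed in the next section):

    def suggested_tras(cas, trcd):
        """Rule of thumb from the Mushkin whitepaper: tRAS = CAS + tRCD + 2."""
        return cas + trcd + 2

    print(suggested_tras(2, 2))    # 6, as in the example above
    print(suggested_tras(2.5, 3))  # 7.5 -> in practice you would set 8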

The Anomaly: nVIDIA's nForce2 and tRAS

An anomaly can be described as something that's difficult to classify; a deviation from the norm or common form. This is exactly the situation with tRAS (Active to Precharge) and nVIDIA's nForce2 chipset. As said before, not sparing the effort is what led to the initial discovery of this anomaly many months ago. It's pretty well known by now: in a nutshell, a higher tRAS (i.e. higher than, say, the Mushkin mandated sum of CAS+tRCD+2) on nForce2 motherboards consistently shows slightly better results in several benchmarks and programs. In most cases, 11 seems to be the magic number. Other chipsets do not display this deviation from the norm, so what makes the nForce2 different? 'TheOtherDude' has given a possible explanation for this anomaly in this thread.
Quote:

Unlike most modern chipsets, the Nforce2 doesn't seem to make internal adjustments ... These "internal" (not really sure if that's the right word) settings seem to include Bank ... For optimal performance, tRAS (as measured in clock cycles) should equal the sum of ... the number of clock-independent operations involved with closing a bank (~40 ns) ... slightly affected by CAS, but should not play a role in optimal tRAS). To complicate ... the other specifies a column. This brings tRCD into the mix.

Higher isn't always better, but the reason everything is so weird with tRAS on the nForce2 is that it doesn't make ... optimizations to accommodate your inputted tRAS value like most other chipsets do.

Dealing with Memory Speeds: What is sync / async?

(NOTE: this does NOT include the Athlon64/FX. They are a lot different when it comes to this; ...will be updated shortly.)

When the memory frequency runs at the same speed as the FSB, it is said to be running in synchronous operation. When memory and FSB are clocked differently (lower or higher), it is known as asynchronous mode. On both AMD and Intel platforms, the most performance benefit is seen when the FSB frequency of the processor runs synchronously with that of the memory. Although Intel-based systems are a slight exception, this is completely true of all AMD-supporting chipsets. Only Intel chipsets have implemented async modes that have any merit.

The async modes in SiS P4 chipsets also work correctly. When looking at the AMD-supporting chipsets, async modes are to be avoided like the plague. AMD-supporting chipsets offer less flexibility in this regard due to poorly implemented async modes. Even if it means running the memory clock speed well below the maximum feasible for a given memory, an Athlon XP system will ALWAYS exhibit best performance running the memory in sync with the FSB. Therefore, a 166FSB Athlon XP running synchronously with DDR333/PC2700 (2*166) will give better performance than it would with DDR400/PC3200, despite the bigger numbers. This does not mean that PC3200 isn't a good idea for 166FSB Athlon XPs. Buying slightly higher-rated memory than needed is a good idea if your intent is to overclock, and it also allows you some future upgrade room. To achieve synchronous operation, there is usually a Memory Frequency or DRAM Ratio setting in the BIOS of your system that will allow you to set the memory speed to either a percentage of the FSB (i.e. 100%) or a fraction (ratio) such as N/M, where N and M are integers available to you. If you want to run memory at non-1:1 speeds, motherboards use dividers that create a ratio of [CPU FSB]:[memory frequency], or percentages of the FSB. However, it is easy to see the problem with this and why synchronous operation is preferable on all PC platforms. If there is a divider, then there is going to be a gap between the time that data is available for the memory and when the memory is available to accept the data (or vice versa). There will also be a mismatch between the amount of data the CPU can send to the memory and how much the memory can accept from the CPU. This will cause slowdowns, as you will be limited by the slowest component.
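
A quick sketch of how those dividers work out in practice (the ratios are the ones used in the examples that follow):

    def memory_clock(fsb_mhz, fsb_part, mem_part):
        """Memory clock from an FSB:memory divider, e.g. 5:6 means memory runs at 6/5 of the FSB."""
        return fsb_mhz * mem_part / fsb_part

    print(memory_clock(200, 1, 1))   # 200 MHz -> DDR400, synchronous
    print(memory_clock(200, 5, 6))   # 240 MHz -> DDR480, memory faster than the FSB
    print(memory_clock(250, 5, 4))   # 200 MHz -> DDR400, memory slower than a 250 MHz FSB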

Here are three examples illustrating the three possible states of memory operation:

200 MHz FSB speed with 100% or 1:1 (FSB:Memory ratio) results in a 200 MHz memory speed (DDR400).
Such a configuration is wholly acceptable for any AMD system; memory should be set this way at all times for best performance. The core architecture of the Athlon XP processor dictates that it works best when run with memory running synchronously with the CPU's FSB. Asynchronous FSB/memory speeds are horridly inefficient on AMD systems, but may well be the optimal configuration for P4 systems.

200 MHz FSB speed with 120% or 5:6 (FSB:Memory ratio) results in a 240 MHz memory speed (DDR480).
This example shows running the memory at higher asynchronous speeds. Assume we have a Barton 2500+, which by default runs at a FSB of 333 MHz (166 MHz x 2), and we also have PC3200 memory, which by default runs at 400 MHz. This is a typical scenario, because many people think that faster memory running at 400 MHz will speed up their system. Or they fail to disable the SPD or Auto setting in their BIOS. There is NO benefit at all derived from running your memory at a higher frequency (MHz) than your FSB on Athlon XP/Duron systems. In actuality, doing so has a negative effect. Why does this happen? It happens because the memory and FSB can't "talk" to each other at the same speeds, even though the memory is running at higher speeds than the FSB. The memory has to "wait for the FSB to catch up", because higher async speeds force desynchronization of the memory and FSB frequencies and therefore increase the initial access latency on the memory path, causing as much as a 5% degradation in performance. That is another ramification of the limiting effect of the AMD dual-pumped FSB. A P4's quad-pumped FSB (along with the superior implementation of its async modes) allows P4s to benefit in some cases from async modes that run the memory faster than the FSB. This is especially true of single channel P4 systems like the older i845 series, where running an async mode with the memory faster than the FSB was crucial to top system performance. There still are synchronization losses inherent in an async mode on any system, but the adequate FSB bandwidth of the P4 allows the additional memory bandwidth produced by async operation to overcome these losses and produce a net gain.

250 MHz FSB speed with 80% or 5:4 (FSB:Memory ratio) results in a 200 MHz memory speed (DDR400).
This example is most often used in overclocking situations where the memory is not able to keep up with the speed of the FSB. On AMD platforms, there is really no point in having a high FSB if the memory can't keep up. When the memory or any other component is holding back system performance, this is called a bottleneck. As in the example above, a memory bottleneck would be if you were running your memory at DDR400 with a 500 MHz (250x2) system bus. The memory would only be providing 3.2 GB/s of bandwidth while the bus would be theoretically capable of transmitting 4.0 GB/s. A situation like this would not help overall system performance. Think of it like this: let's say you had a highway going straight into a mall, with an identical highway going straight out of the mall. Both highways have the same number of lanes, and initially they have the same 45 mph speed limit. Now let's say that there's a great deal of traffic flowing in and out of the mall, and in order to get more people in and out of the mall quicker, the department of transportation agrees to increase the speed limit of the highway going into the mall from 45 mph to 70 mph; the speed limit of the highway leaving the mall is still stuck at 45 mph. While more people will be able to reach the mall quicker, there will still be a bottleneck in the parking area leaving the mall, since the increased number of people that are able to get to the mall still have to leave at the same rate. This is equivalent to increasing the FSB frequency but leaving the memory frequency/bandwidth unchanged or set to a slower speed. You're speeding up one part of the equation while leaving the other part untouched. Sometimes the fastest memory is not affordable or available. In this case, more focus should be placed on balancing the FSB and memory frequencies while still keeping latencies as low as possible AND while still maintaining CPU clock speed (GHz) by increasing the multiplier. The benefit of a faster FSB (and higher bandwidth) will only become clearer as clock speeds (GHz) increase; the faster the CPU gets, the more it will depend on getting more data quicker. The only real benefit of async modes on AMD platforms is that they come in handy to overclockers for testing purposes: to determine their max FSB and to eliminate the memory as a possible cause for not being able to achieve a desired stable FSB speed.

speed. Even so, async modes on early nForce2 based motherboards caused many problems, some as serious as BIOS corruption.

Looking to the Intel side of the fence, async modes that run the memory slower than the FSB have merit because of how async modes are implemented in the Intel chipsets. This is extremely important, as we cannot change the CPU multiplier on modern Intel systems and therefore have to use an async mode to allow substantial overclocks on the majority of systems using the current 200/800MHz FSB family of P4 processors. To illustrate, if you increase the FSB on a new C-stepped P4 to 250MHz with a 1:1 ratio, the memory will need to be capable of running at 250MHz (DDR500). This can be done in two ways. The first is with exotic PC4000 or DDR500 memory modules, but these are expensive just to run synchronously at such speeds, and their timings aren't exactly delightful either. The other way is to overclock DDR400/DDR433 to much higher speeds through overvolting, but this is risky for the memory module, and motherboards often don't provide nearly enough voltage to achieve such speeds without physical voltage mods. Therefore, to avoid expensive PC4000 or volt mods, you change the memory ratio so that a 250MHz FSB overclock becomes something the memory can handle, allowing a substantial overclock of the Pentium 4. As in the example, PC3200 (DDR400) remains at DDR400 with a 250MHz system bus.

=================================

Overclocking
=================================

How do I overclock my memory? On modern systems, memory is very rarely, if ever, overclocked just for the sake of overclocking memory. Lemme rephrase that:

nowadays people don't overclock memory to make it run faster than what is actually needed. There are many instances where memory is even underclocked. You first determine the default frequency of your memory; anything above that frequency is where overclocking begins. Now how do you increase that frequency? As previously discussed, best performance on all PC platforms is gained by running the memory frequency synchronously with the speed of the FSB. This means that for every 1MHz the FSB is increased, so too is the memory clock. So in effect, memory overclocking is just a part of overclocking your processor; they are done simultaneously. Since the FSB frequency and memory frequency are usually set to be the same, this poses a problem, as overclockers look for the highest possible FSB while the memory may struggle behind because it's not able to keep up synchronously.
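To get a feel for how this works, here is a minimal sketch of the arithmetic (written in Python purely for illustration; the FSB values are just examples, not measurements):

    # At a 1:1 (synchronous) setting, every MHz added to the FSB is a MHz added
    # to the memory clock, so a CPU overclock is automatically a memory overclock.
    def sync_memory_clock(fsb_mhz):
        memory_mhz = fsb_mhz               # 1:1 operation
        return memory_mhz, memory_mhz * 2  # real clock, effective DDR rating

    for fsb in (166, 200, 210, 220):
        mem, ddr = sync_memory_clock(fsb)
        print(f"FSB {fsb}MHz -> memory {mem}MHz (DDR{ddr})")

PC3200 (DDR400) is only rated for a 200MHz memory clock, so the 210MHz and 220MHz rows above are already memory overclocks, even though all you touched was the FSB.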

What to do with DDR voltage? Sometimes a little extra voltage is all that's required to encourage your defiant DDR to straighten up and fly right. You can adjust the DDR voltage quite easily through your motherboard's BIOS, just as you would your CPU's voltage. Like CPU overclocking, raising memory voltage above default (default is usually between 2.5v and 2.7v for DDR) at higher memory clock speeds may aid stability and/or enable you to use lower latency timings. Although the DDR voltage has nothing to do with the CPU itself, it plays an integral part in the big picture. If we are running a synchronous mode (1:1), then for every 1MHz increase in FSB speed, the RAM speed will increase by 1MHz. So in these cases an elevated memory voltage will often prove helpful in maximizing the overclocking potential of the CPU. A few points to consider when raising memory voltage: like CPU overclocking, increasing memory voltage should be done in the smallest increments available. Put your system through a few paces of a program like memtest86 after each step. If it fails testing, bump the voltage a little more and test again. 0.3 volts over default - for hardcore overclockers that's

considered conservative, but it should be enough for most. This is also the maximum provided by most motherboards. On such motherboards, hardware mods or modified BIOSes may be required to gain access to more voltage. Some of the higher voltages (2.9v to 3.3v) available on certain motherboards may damage the RAM with long exposure, so check with other people who have your RAM to get a feel for its voltage tolerances. The memory you save may be your own.
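Written out as a loop, that step-test-repeat routine looks roughly like the sketch below (Python, purely illustrative: set_vdimm and passes_memtest are hypothetical stand-ins for what you actually do by hand in the BIOS and with memtest86):

    # Hypothetical sketch of "raise in the smallest increments, test after each step".
    def find_stable_vdimm(set_vdimm, passes_memtest, start=2.6, limit=2.8, step=0.05):
        volts = start
        while volts <= limit:
            set_vdimm(volts)           # in practice: set the DIMM voltage in the BIOS
            if passes_memtest():       # in practice: a few full passes of memtest86
                return volts           # lowest voltage that tested stable
            volts = round(volts + step, 2)
        return None                    # never stabilized below the limit you chose

The point of the sketch is the shape of the process, not the numbers; stop at whatever ceiling your modules and motherboard can safely tolerate.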

Do I Need RAM Cooling? Does system memory get hot enough to require cooling? That depends on what you consider hot. My opinion is that memory modules never build up enough heat to require any sort of cooling. Even when overclocking, they still stay pretty cool. If extra cooling puts your mind at ease, then go for it, but you can't necessarily expect better overclocking results or even any extension of the life of your overclocked / overvolted memory. Premier manufacturers such as Corsair, Mushkin, and OCZ ship their modules with heatspreaders across the chips. They look very nice and are often solid copper or aluminum. A handful of other companies sell RAM cooling kits and other solutions for modules that come without cooling. RAM sinks are pretty much the same as standard heatsinks for graphics chips and CPUs, except they're a lot smaller and tailored to RAM chip sizes. Most reliable tests show these heatspreaders and kits do VERY little as far as cooling the memory goes. With no real benefit, placing these cooling kits on memory modules is more for looks than for cooling, and that can be appreciated.

How do I burn-in memory? Burn-in can be defined as the process of exercising an integrated circuit (IC) for a period of time at elevated voltage, speed and temperature with the aim of improving performance. As with CPUs it's another one of those debated topics, but in my experience

burning-in memory does make it perform better. The time required varies and it doesn't always work, but it's worth a try. A rough guide on how to go about it: if your DDR400, for example, doesn't run stably at 220MHz, try something lower. Leave the computer on for a few days straight, give it a workout, and then try it again at 220MHz.

Memory Chips Very few companies in the world actually make memory chips, but literally hundreds of companies sell memory modules. Some of these few companies are Winbond, Hynix (Hyundai), Micron, Samsung, Infineon (Siemens), Nanya, Mosel Vitelic, TwinMos and V-Data/A-Data. Like most other PC components, not all RAM is created equal. For DDR400, you can find memory varying from very fast modules sporting 2-2-2-6 timings from Mushkin, Corsair, OCZ, Kingston (for example), to relatively low-cost modules that aren't as favorable with the timings. Even memory of the same brand and model may exhibit varying performance because of the chips being used: manufacturers use whatever memory chips are available at the time, and certain memory chips don't stay in stock indefinitely. Take a look at the markings on the chips of your memory module. If your module has heatspreaders, they will have to be carefully removed to see the memory chips (doing so will void your warranty). Each chip is covered with numbers; those numbers tell what the chips are and may even include the logo and name of the chip maker. Why does this matter? Like motherboards, for example, not all brands offer the same performance and overclocking potential. The same goes for memory chips, so people (usually overclockers) seek out certain preferred brands of memory chips for their systems. A perfect example of this is Winbond's famed BH-5, BH-6 & CH-5, which have now all been discontinued, although they were able to clock very high whilst still maintaining very tight timings. Check your module manufacturer's website; they may or may not list what chips they use in their modules.

What does double-sided mean? A memory module is said to be "double-sided" when memory chips are physically mounted on both sides of the module's PCB. From that, you can guess what "single-sided" means. However, people have come to use this term to describe two different things. The confusion is mainly due to a poor choice of terminology, and the terms in question are the words "side" and "bank". As said before, the determining factor for "sidedness" is the chip arrangement on the module's PCB. "Bank" describes the internal electrical organization of the DIMM rather than single-sided or double-sided memory. (See this Mushkin whitepaper on banks.) =================================

Buying Memory
=================================

What Memory to buy? Very touchy subject for some. For others, RAM is RAM, right? Right! If you plan on just running at stock speeds, then your hunt for memory just became easier. Not by a whole lot, but nevertheless, easier. With AMD platforms, the requirements for memory are more varied, due to the fact that there are several different models of processors, many of which use different bus speeds. Higher end Athlon XPs (like the 3000+ and 3200+) and the new Athlon64s require the use of at least

DDR400, while lower end AMD processors may be satisfied with DDR333 or DDR266 modules. Newer C-stepped P4s all require DDR400 for optimal performance. My only suggestion is to never buy generic RAM. There are three factors that go into the making of a reliable memory module: good quality chips, a quality printed circuit board (PCB), and quality manufacturing. None of these factors are present on poorly manufactured modules. How do you tell what's poorly manufactured? Simple: it's cheap and doesn't have the name of a company attached to it. So really, when buying memory for typical operation, buy something that's fairly well known. As an IT professional, I work on a fairly large network. I'll conservatively estimate that that group of PCs has 500 memory modules collectively. We buy name-brand memory (Crucial & Kingston Value) exclusively. Crucial, a division of Micron Technology, is probably the best known of the leaders in system memory upgrades. Crucial has a solid reputation for quality that it has earned with trustworthy products and service.

If your intent is to overclock, you should buy PC3200/PC3500 at the bare minimum for any new system - AMD or Intel. Not only for overclocked systems, but for the sake of performance in general. The usual brands of choice for enthusiasts are Mushkin, Corsair, OCZ and Kingston's HyperX. Experience dictates that the advantages of fast memory are worth the slightly higher price that you have to pay to get it. However, buying much faster RAM isn't always the best idea, especially for AMD chipsets. Overall, PC4000 and higher modules are optimized for Intel platforms and are quite often incompatible with AMD systems. High speed DDR (PC3700 all the way up to PC4400) generally sacrifices all the useful features (i.e. lower timings, compatibility with motherboards, ability to run in async mode) for the sake of attaining stable operation at high speeds. An individual buying these speeds of memory must have enough budget (or a fat trust fund) to avoid disappointment in the event of unsatisfactory results. Otherwise, it is advisable to stick with lower latency PC3200 or PC3500 modules. On Intel systems, there's a choice between higher latency PC4000+ memory capable of running at high synchronous

frequencies but at very conservative timings, and lower latency PC3200/PC3500. Since, as discussed earlier, we can realistically use an async mode on Intel systems, doing so with top quality PC3200 or PC3500 memory and its attendant lower latency makes low latency memory the memory of choice in almost all cases. Cruising many forums, you'll find posters telling others (usually from what they've been told themselves) that timings don't matter on Pentium 4 systems, because of very high bandwidth or whatever other reason they give. Memory timings DO matter just as much on Intel systems as they do on AMD systems.

Proof of this can be ascertained by running aggressive 2-2-2-5 timings with a 5:4 divider (explained later) at 250MHz FSB against conservative 2.5-4-4-8 timings with a 1:1 divider at 250MHz FSB. Your results will show that the aggressive timings, even running asynchronously to the FSB (using the 5:4 divider), provide better performance than a synchronous FSB with conservative timings. You'll notice this in everything except the SiSoft Sandra memory benchmark module, and less so in PCMark 2004, which uses a lot of real world applications, but in 3D benchmarks, applications and games it will be significant.

Mushkin, Corsair and OCZ have always been favorites for overclockers and their memory needs. They usually hand pick their memory chips and select only the very best to put on their performance modules, so that you are guaranteed to get the performance printed on the module - and then some. Bottom line is, don't skimp on your RAM selection. You'll be kicking yourself later if you do. But just the same, don't assume that because a particular memory type is expensive it is also superior for your application or intended use. Make every effort to avoid generic brands, as they are famous for cutting corners

during production and burning the wrong values into the SPD chips. Unhappy buyers are forced to struggle with poor performance or system crashes without knowing exactly why.

Why are you recommending PC3500 when my motherboard only supports PC3200? Memory modules really have no fixed speed. Like a car tire, there is a "rating" on them. When a tire is rated for 150mph, it means it can run at up to 150mph maximum. It also means that it can run at any speed lower than that. It is also quite safe to say that the tire should withstand 160mph, just not as "safely" according to the government's test environment. Memory is very similar in this way. Many people ask if a PC3500 or PC3700 module would run/blow up/be compatible in a motherboard originally designed to use PC3200 or PC2700. The answer is, hell ya, it will run! JEDEC (the government in this analogy) has only approved PC3200. This, coupled with the fact that no processor really needs memory rated higher than PC3200, is why motherboard manufacturers don't state support for newer, faster modules. But higher rated speeds of DDR are always backward compatible, so to speak, or capable of running at lower speeds. Older systems stand to gain from newer and faster modules. Even if they can't run the module at its top supported frequency, you can still tweak the timing parameters to maximize performance at lower clock speeds in ways that would not be possible at higher clock speeds or with lower-rated modules.

How much is enough? That, of course, depends on how you use your PC. When you determine memory needs, you'll also want to consider what your needs will be six months to a year down the road. If you think you may be upgrading your operating system or adding more software or doing further system upgrades, it's a good idea to factor that into the equation now. Before you actually upgrade or

buy new system memory, you need to obtain a few facts about your motherboard. The first bit of information you need is the amount of RAM that is currently installed, if you're upgrading. If you bought/built your PC a while ago and don't remember how much memory it has, there's a simple way to check without getting inside the box. Right-click the My Computer icon on your Windows desktop, and select Properties from the pop-up menu. On the General tab, you'll see the amount of installed memory listed in the lower right side of the box, just below the information about your PC's processor. Next, you need to determine how much memory your PC can handle. Every PC has an upper limit that is determined by the motherboard's chipset, which, in very simple terms, works as a bridge between the processor and the memory. Most new PCs have a maximum memory capacity of between 1GB and 4GB, which is more than you'll probably need, unless your job is calculating rocket-reentry effects for NASA. 4GB is the addressing limit for 32-bit operating systems and processors, so until 64-bit devices and their supporting operating systems become common it is unwise to aspire to more than 4GB of memory. Chances are you already know exactly how much memory your motherboard can take; if you don't, check your board's manual or inquire elsewhere.
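As a quick sanity check on that 4GB figure, a 32-bit address can only pick out 2^32 distinct bytes; a couple of lines of Python (purely illustrative) show where the ceiling comes from:

    # 32-bit addressing: 2**32 distinct byte addresses.
    max_bytes = 2 ** 32
    print(max_bytes)                      # 4294967296 bytes
    print(max_bytes // 1024 ** 3, "GiB")  # 4 GiB - the 32-bit addressing ceiling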

Matched or Certified Dual Channel Memory Companies like Corsair, Mushkin, OCZ, etc. sell what they call "dual channel" memory, or Dual Channel Kits. These are sold in pairs, meaning you might buy a 2x256MB Dual Channel Kit, which consists of 2 sticks of 256MB DDR memory paired together by the manufacturer for a total of 512MB. Companies don't just throw two sticks of RAM together to produce these kits, but they don't necessarily produce a totally different batch of RAM either. Testing or qualifying Dual Channel memory might involve something as simple as technicians booting up pairs of RAM in a Dual Channel motherboard and ensuring they work together under a set of conditions, or it could

be more complicated, including so-called "SPD" optimizations and even chip selection. For your purposes, you should assume that Dual Channel memory is qualified through testing, as all companies will claim that every pair of Dual Channel memory is tested for dual channel operation prior to being packaged.

Will non-Dual Channel matched RAM work in my dual channel motherboard then? It most certainly will, as long as it fits the requirements of Dual Channel operation (two of the same type of memory, same size modules, same speed, etc). Two modules of the same model/brand purchased from the same vendor at the same time are essentially as likely to work properly in a dual channel configuration as a dual channel kit. The ONLY thing you can lose by buying "single channel" memory for use in Dual Channel mode is that manufacturers may or may not provide support and replace your memory if it won't work in dual channel mode, whereas if Dual Channel memory fails to work in Dual Channel mode, the manufacturers will help you resolve the problem and possibly replace the memory to ensure proper Dual Channel operation. There are instances of people using two different types of RAM together with no problems. You can't damage your motherboard or RAM just by trying to use two non-identical modules in Dual Channel mode. But be advised that if the machine is unstable for any reason, it is entirely possible to corrupt your data during operation. Mismatched memory sticks in a dual channel configuration often produce unstable operation, so as with any new overclocking or upgrading venture, make sure you have adequate backups so you can recover from a data loss.

======================================== ========== ======================================== ==========

========================================
Jarad's Overclocking Guide to Overclocking
and Rogue_Jedi's FAQ
by Jarad
screen names:
AIM: uberl33tjarad
MSN: ssprncvegeta at msn dot com
email: uberl33tjarad at gmail dot com
========================================

What is Overclocking? Overclocking is the process of making various components of your computer run at faster speeds than they do when you first buy them. For instance, if you buy a Pentium 4 3.2GHz processor and you want it to run faster, you could overclock the processor to make it run at 3.6GHz.

Before You Read Any Further... Short version: if your computer is a Dell, eMachine, Gateway, etc... you can NOT overclock. Long version: this guide is intended for owners of a computer that supports overclocking. If you are unsure whether yours is capable, it's safe to assume it isn't. If you own a Hewlett Packard, Compaq, Dell, Gateway, eMachine, Sony, or anything Best Buy or Circuit City sell, the rest of this guide no longer pertains to you. This overclocking guide is intended for owners with an after-market motherboard made by ASUS, Gigabyte, Abit, DFI, or others. There is a way to bypass the motherboard and overclock using software, but it's not meant to be an alternative; that method is intended as a shortcut for overclockers to change settings without restarting. For those with a motherboard that does not

support overclocking, you will be severely restricted for several reasons:
- Motherboards without overclocking support aren't built for it, meaning they won't go as far, will become unstable sooner, and will heat up quicker
- Computers with boards that don't support overclocking tend to have inadequate cooling
- If your computer uses a Celeron/Sempron or equivalent processor, no amount of overclocking can save you. These are great for learning to overclock, but they won't yield the performance you're looking for in games or benchmarks

Disclaimer! WARNING: Overclocking can F up your stuff. Overclocking wears down the hardware, and the life expectancy of the entire computer will be lowered if you overclock. If you attempt to overclock, I, Rogue_Jedi, this forum and its inhabitants are not responsible for anything broken or damaged when using this guide. This guide is merely for those who accept the possible outcomes of this overclocking guide/FAQ, and of overclocking in general.

Why would you want to overclock? Well, the most obvious reason is that you can get more out of a processor than what you paid for. You can buy a relatively cheap processor and overclock it to run at the speed of a much more expensive processor. If you're willing to put in the time and effort, overclocking can save you a bunch of money in the future or, if you need to be at the bleeding edge like me, can give you a faster processor than you could possibly buy from a store.

The Dangers of Overclocking First of all, let me say that if you are careful and know what you are doing, it will be very hard for you to do any permanent damage to your computer by overclocking. Your computer will

either crash or just refuse to boot if you are pushing the system too far. It's very hard to fry your system by just pushing it to its limits. There are dangers, however. The first and most common danger is heat. When you make a component of your computer do more work than it used to, it's going to generate more heat. If you don't have sufficient cooling, your system can and will overheat. By itself, overheating cannot kill your computer, though. The only way that you will kill your computer by overheating is if you repeatedly try to run the system at temperatures higher than recommended. As a rule of thumb, you should try to stay under 60C. Don't get overly worried about overheating issues, though. You will see signs before your system gets fried; random crashes are the most common sign. Overheating is also easily prevented with the use of thermal sensors, which can tell you how hot your system is running. If you see a temperature that you think is too high, either run the system at a lower speed or get some better cooling. I will go over cooling later in this guide.

The other "danger" of overclocking is that it can reduce the lifespan of your components. When you run more voltage through a component, its lifespan decreases. A small boost won't have much of an effect, but if you plan on using a large overclock, you will want to be aware of the decrease in lifespan. This is not usually an issue, however, since anybody that is overclocking likely will not be using the same components for more than 4-5 years, and it is unlikely that any of your components will fail before 4-5 years regardless of how much voltage you run through them. Most processors are designed to last for up to 10 years, so losing a few of those years is usually worth the increase in performance in the mind of an overclocker.

The Basics To understand how to overclock your system, you must first understand how your system works. The most common component to overclock is your processor. When you buy a processor, or CPU, you will see its operating speed. For instance, a Pentium 4 3.2GHz CPU runs at 3.2GHz, or

3200MHz. This is a measurement of how many clock cycles the processor goes through in one second. A clock cycle is a period of time in which a processor can carry out a given amount of instructions. So, logically, the more clock cycles a processor can execute in one second, the faster it can process information and the faster your system will run. One MHz is one million clock cycles per second, so a 3.2GHz processor can go through 3,200,000,000, or 3 billion two hundred million, clock cycles every second. Pretty amazing, right? The goal of overclocking is to raise the GHz rating of your processor so that it can go through more clock cycles every second. The formula for the speed of your processor is this: FSB (in MHz) x Multiplier = Speed in MHz. Now to explain what the FSB and Multiplier are:

The FSB (or, for AMD processors, the HTT*), or Front Side Bus, is the channel through which your entire system communicates with your CPU. So, obviously, the faster your FSB can run, the faster your entire system can run. CPU manufacturers have found ways to increase the effective speed of the FSB of a CPU: they simply send more instructions in every clock cycle. So instead of sending one instruction every clock cycle, CPU manufacturers have found ways to send two instructions per clock cycle (AMD CPUs) or even four instructions per clock cycle (Intel CPUs). So, when you look at a CPU and see its FSB speed, you must realize that it is not really running at that speed. Intel CPUs are "quad pumped", meaning they send 4 instructions per clock cycle. This means that if you see an FSB of 800MHz, the underlying FSB speed is really only 200MHz, but it is sending 4 instructions per clock cycle so it achieves an effective speed of 800MHz. The same logic can be applied to AMD CPUs, but they are only "double pumped", meaning they only send 2 instructions per clock cycle. So an FSB of 400MHz on an AMD CPU consists of an underlying 200MHz FSB sending 2 instructions per clock cycle. This is important because when you are overclocking, you will be

dealing with the real FSB speed of the CPU, not the effective FSB speed. The multiplier portion of the speed equation is nothing more than a number that, when multiplied by the FSB speed, gives you the total speed of the processor. For instance, if you have a CPU that has a 200MHz FSB (real FSB speed, before it is double or quad pumped) and a multiplier of 10, then the equation becomes: (FSB) 200MHz x (Multiplier) 10 = 2000MHz CPU speed, or 2.0GHz. On some CPUs, such as Intel processors since 1998, the multiplier is locked and cannot be changed. On others, such as the AMD Athlon 64 processors, the multiplier is "top locked", which means that you can change the multiplier to a lower number but cannot raise it higher than it was originally. On other CPUs, the multiplier is completely unlocked, meaning you can change it to any number that you wish. This type of CPU is an overclocker's dream, since you can overclock the CPU simply by raising the multiplier, but it is very uncommon nowadays.

It is much easier to raise or lower the multiplier on a CPU than the FSB. This is because, unlike the FSB, the multiplier only affects the CPU speed. When you change the FSB, you are really changing the speed at which every single component of your computer communicates with your CPU. This, in effect, is overclocking all of the other components of your system, which can bring about all sorts of problems when other components that you don't intend to overclock are pushed too far and fail to work. Once you understand how overclocking works, though, you will know how to prevent these issues.

*On AMD Athlon 64 CPUs, the term FSB is really a misnomer. There is no FSB, per se; its functions are integrated into the chip. This allows the "FSB" to communicate with the CPU much faster than Intel's standard FSB method. It also can cause some confusion, since the FSB on an Athlon 64 is sometimes referred to as the HTT. If you see somebody talking about raising the HTT on an Athlon 64 CPU and they are talking about speeds that you recognize

as common FSB speeds, then just think of the HTT as the FSB. For the most part, they function in the same way and can be treated the same, and thinking of the HTT as the FSB can eliminate some possible confusion.

How to Overclock So now you understand how a processor gets its speed rating. Great, but how do you raise that speed? Well, the most common method of overclocking is through the BIOS. The BIOS can be reached by pressing a variety of keys while your system is booting up. The most common key to get into the BIOS is the Delete key, but others may be used, such as F1, F2, any other F button, Enter, and some others. Before your system starts loading Windows (or whatever OS you have), it should show a screen that tells you at the bottom which button to use. Once you are in the BIOS, assuming that you have a BIOS that supports overclocking*, you should have access to all of the settings needed to overclock your system. The settings that you will most likely be adjusting are: Multiplier, FSB, RAM Timings, RAM Speed, and RAM Ratio. On a very basic level, all you are trying to do is get the highest FSB x Multiplier result that you can achieve. The easiest way to do this is to just raise the multiplier, but that will not work on most processors since the multiplier is locked. The next method is to simply raise the FSB. This is pretty self-explanatory, and all of the RAM issues that have to be dealt with when raising the FSB will be explained below. Once you've found the speed at which the CPU won't go any faster, you have one more option. If you really want to push your system to the limit, you can try lowering the multiplier in order to raise the FSB even higher. In order to understand this, imagine that you have a 2.0GHz processor that has a 200MHz FSB and a 10x multiplier. So 200MHz x 10 = 2.0GHz. Obviously, that equation works, but there are other ways to get to 2.0GHz. You could raise the multiplier to

20 and lower the FSB to 100MHz, or you could raise the FSB to 250MHz and lower the multiplier to 8. Both of those combinations would give you the same 2.0GHz that you started out with. So both of those combinations should give you the same system performance, right? Wrong. Since the FSB is the channel through which your system communicates with your processor, you want it to be as high as possible. So if you lowered the FSB to 100MHz and raised the multiplier to 20, you would still have a clock speed of 2.0GHz, but the rest of the system would be communicating with your processor much more slowly than before, resulting in a loss of system performance. Ideally, you would want to lower the multiplier in order to raise the FSB as high as possible. In principle this sounds easy, but it gets complicated when you involve the rest of the system, since the rest of the system is dependent on the FSB as well, chiefly the RAM. Which leads me to the next section on RAM.

*Most retail computer manufacturers use motherboards and BIOSes that do not support overclocking. You won't be able to access the settings you need from the BIOS. There are utilities that will allow you to overclock from your desktop, such as this one, but I don't recommend them since I have never tried them out myself.

RAM and what it has to do with Overclocking First and foremost, I consider this site to be the Holy Grail of RAM information. Learn to love it. As I said before, the FSB is the pathway through which your system communicates with your CPU. So raising the FSB, in effect, overclocks the rest of your system as well. The component that is most affected by raising the FSB is your RAM. When you buy RAM, it is rated at a certain speed. I'll use the table from my post to show these speeds:
Quote:

PC-2100 - DDR266
PC-2700 - DDR333
PC-3200 - DDR400
PC-3500 - DDR434
PC-3700 - DDR464
PC-4000 - DDR500
PC-4200 - DDR525
PC-4400 - DDR550
PC-4800 - DDR600

To understand what this table means, look here. Note how the RAM's rated speed is DDR PC-4000. Then refer to this table and see how PC-4000 is equivalent to DDR 500. To understand this, you must first understand how RAM works. RAM, or Random Access Memory, serves as temporary storage for files that the CPU needs to access quickly. For instance, when you load a level in a game, your CPU will load the level into RAM so that it can access the information quickly whenever it needs to, instead of loading the information from the relatively slow hard drive. The important thing to know is that RAM functions at a certain speed, which is much lower than the CPU speed. Most RAM today runs at speeds between 133MHz and 300MHz. This may confuse you, since those speeds are not listed on my chart. This is because RAM manufacturers, much like the CPU manufacturers from before, have managed to get RAM to send information twice every RAM clock cycle.* This is the reason for the "DDR" in the RAM speed rating: it stands for Double Data Rate. So DDR 400 means that the RAM operates at an effective speed of 400MHz, with the "400" in DDR 400 standing for that effective speed. Since it is sending information twice per clock cycle, its real operating frequency is 200MHz. This works much like AMD's "double pumping" of the FSB. So go back to the RAM that I linked before. It is listed at a speed of DDR PC-4000. PC-4000 is equivalent to DDR 500, which means that PC-4000 RAM has an effective speed of 500MHz with an underlying 250MHz clock speed.
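The naming in that table is mostly just arithmetic, so here's a minimal sketch of it (Python, purely illustrative; a few of the marketing names, like PC-2100, PC-2700, PC-3500 and PC-3700, are rounded rather than exact, so only the even ratings come out perfectly):

    # A DDR DIMM is 64 bits (8 bytes) wide and transfers data twice per clock.
    # DDR rating = 2 x real clock; PC rating = peak bandwidth in MB/s = DDR rating x 8.
    def ddr_names(real_clock_mhz):
        ddr_rating = real_clock_mhz * 2
        pc_rating = ddr_rating * 8
        return ddr_rating, pc_rating

    for clock in (200, 250, 300):
        ddr, pc = ddr_names(clock)
        print(f"{clock}MHz real clock -> DDR{ddr} -> PC-{pc} ({pc / 1000:.1f} GB/s peak)")

For example, a 200MHz real clock gives DDR400 / PC-3200 (3.2GB/s) and 250MHz gives DDR500 / PC-4000 (4.0GB/s), the same bandwidth figures used in the bottleneck example earlier in this document.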

So what does this all have to do with overclocking? Well, as I said before, when you raise the FSB, you effectively overclock everything else in your system. This applies to RAM too. RAM that is rated at PC-3200 (DDR 400) is rated to run at speeds up to 200MHz. For a non-overclocker, this is fine, since your FSB won't be over 200MHz anyway. Problems can occur, though, when you want to raise your FSB to speeds over 200MHz. Since the RAM is only rated to run at speeds up to 200MHz, raising your FSB higher than 200MHz can cause your system to crash. How do you solve this? There are three solutions: using an FSB:RAM ratio, overclocking your RAM, or simply buying RAM rated at a higher speed. Since you probably only understood the last of those three options, I'll explain them. To learn more about RAM timings, go here.

FSB:RAM Ratio: If you want to raise your FSB to a higher speed than your RAM supports, you have the option of running your RAM at a lower speed than your FSB. This is done using an FSB:RAM ratio. Basically, the FSB:RAM ratio allows you to select numbers that set up a ratio between your FSB and RAM speeds. So, say you are using the PC-3200 (DDR 400) RAM that I mentioned before, which runs at 200MHz, but you want to raise your FSB to 250MHz to overclock your CPU. Obviously, your RAM will not appreciate the raised FSB speed and will most likely cause your system to crash. To solve this, you can set up a 5:4 FSB:RAM ratio. Basically, this ratio means that for every 5MHz your FSB runs at, your RAM will only run at 4MHz. To make it easier, convert the 5:4 ratio to a 100:80 ratio. So for every 100MHz your FSB runs at, your RAM will only run at 80MHz; in other words, your RAM will only run at 80% of your FSB speed. So with your 250MHz target FSB, running a 5:4 FSB:RAM ratio, your RAM will be running at 200MHz, which is 80% of 250MHz. This is perfect, since your RAM is rated for 200MHz.
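Here's the same arithmetic as a small sketch (Python, purely illustrative; the function name and the 8x multiplier are made up for the example, not a tool or a measurement):

    # CPU speed = FSB x multiplier; RAM clock = FSB x (RAM part / FSB part of the ratio).
    def overclock_math(fsb_mhz, multiplier, fsb_part=1, ram_part=1, ram_rated_mhz=200):
        cpu_mhz = fsb_mhz * multiplier
        ram_mhz = fsb_mhz * ram_part / fsb_part
        within_rating = ram_mhz <= ram_rated_mhz
        return cpu_mhz, ram_mhz, within_rating

    # 250MHz FSB, 8x multiplier, 5:4 FSB:RAM ratio, PC-3200 (200MHz) RAM:
    print(overclock_math(250, 8, fsb_part=5, ram_part=4))  # (2000, 200.0, True)
    # The same 250MHz FSB at 1:1 would push the RAM 50MHz past its rating:
    print(overclock_math(250, 8))                          # (2000, 250.0, False)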

This solution, however, isn't ideal. Running the FSB and RAM with a ratio causes gaps between the times that the FSB can communicate with the RAM. This causes slowdowns that wouldn't be there if the RAM and the FSB were running at the same speed. If you want the most speed out of your system, using an FSB:RAM ratio wouldn't be the best solution.

Overclocking your RAM Overclocking your RAM is really very simple. The principle behind overclocking RAM is the same as overclocking your CPU: to get the RAM to run at a higher speed than it is supposed to run at. Luckily, the similarities between the two types of overclocking end there, or else RAM overclocking would be much more complicated than it is. To overclock RAM, you just enter the BIOS and attempt to run the RAM at a higher speed than it is rated at. For instance, you could try to run PC-3200 (DDR 400) RAM at a speed of 210MHz, which would be 10MHz over the rated speed. This could work, but in some cases it will cause the system to crash. If this happens, don't panic. The problem can be solved pretty easily by raising the voltage to your RAM. The voltage to your RAM, also known as vdimm, can be adjusted in most BIOSes. Raise it using the smallest increments available and test each setting to see if it works. Once you find a setting that works, you can either keep it or try to push your RAM farther. If you give the RAM too much voltage, however, it could get fried. For info on what voltages are safe, refer back to my Holy Grail of RAM. The only other thing that you have to worry about when overclocking RAM is the latency timings. These timings are the delays between certain RAM functions. If you want more info on this, you know where to look. Basically, if you want to raise the speed of your RAM, you may have to raise the timings. It's not all that complicated, though, and shouldn't be too hard to understand. That's really all there is to it. If only overclocking the CPU were that easy.

Buying RAM rated at a Higher Speed This one's the simplest thing in this entire guide. If you want to raise your FSB to, say, 250MHz, just buy RAM that is rated to run at 250MHz, which would be DDR 500. The only downside to this option is that faster RAM will cost you more than slower RAM. Since overclocking your RAM is relatively simple, you might want to consider buying slower RAM and overclocking it to fit your needs. It could save you over a hundred bucks, depending on what type of RAM you need. That's basically all you need to know about RAM and overclocking. Now onto the rest of the guide.

Voltage and how it affects Overclocking There will be a point when you are overclocking and you simply cannot increase the speed of your CPU any more, no matter what you do and how much cooling you have. This is most likely because your CPU is not getting enough voltage. This is very similar to the RAM voltage scenario that I addressed above. To solve this, you simply up the voltage to your CPU, also known as the vcore. Do this in the same fashion described in the RAM section. Once you have enough voltage for the CPU to be stable, you can either keep the CPU at that speed or attempt to overclock it even further. As with the RAM, be careful not to overload the CPU with voltage. Each processor has recommended voltages set by the manufacturer. Look on the manufacturer's website to find these, and try not to go past them. Keep in mind that upping the voltage to your CPU will cause much greater heat output. This is why it is essential to have good cooling when overclocking. Which leads me to my next topic...

Cooling As I said before, when you up the voltage to your CPU, the heat output greatly increases. This makes proper cooling a necessity. Here is a good set of links related to cooling and a few other topics.

There are basically three "levels" of case cooling:
- Air Cooling (fans)
- Water Cooling (look here)
- Peltier/Phase Change Cooling (VERY expensive and high end cooling)

I really don't have much knowledge of the Peltier/Phase Change method of cooling, so I won't address it. All you need to know is that it could cost you upwards of $1000 and can keep your CPU at sub-zero temperatures. It's intended for VERY high end overclockers, and I assume that nobody here will be using it. The other two, however, are much more affordable and realistic.

Everybody knows about air cooling. If you're on a computer now (and I don't know how you'd be seeing this if you're not), you probably hear a constant humming coming from it. If you look in the back, you will see a fan. This fan is basically all that air cooling is: the use of fans to suck cold air in and push hot air out. There are various ways to set up your fans, but you generally want to have an equal amount of air being sucked in and pushed out. For more info, refer to the link that I gave at the beginning of this section.

Water cooling is more expensive and exotic than air cooling. It is basically the use of pumps and radiators to cool your system more effectively than air cooling. For more info on it, check out the link that I gave next to water cooling above. Those are the two most commonly used methods of case cooling. Good case cooling, however, is not the only component necessary for a cool computer. The other main component is the CPU heatsink/fan, or HSF. The purpose of the HSF is to channel heat away from the CPU and into the case so that it can be pushed out by the case fans. It is necessary to have an HSF on your CPU at all times. Your CPU will be fried in a matter of seconds if it is not.

There are tons of HSFs out there. For a ton of info on HSFs and everything that goes with them, check out this page again. It basically covers all you need to know about HSFs and air cooling. Alright, that's the basics of overclocking. If you have any questions, feel free to ask me or anyone on this board.

**Overclocking FAQ** This is just a compilation of basic hints/tips for overclocking, and a basic overview of what it is and what it involves. PM me if you have an addition or a correction, and I'll mention you if I use it.

How well will X overclock? YMMV. Not all chips/components overclock the same. Just because Bobby got his Prescott to 5GHz, it doesn't mean yours is guaranteed to do 4GHz. And the like. Each chip is unique in its overclockability. Some are great, some are duds, most are average (well, duh!). Try it and see. Whether or not you keep it if you get a dud is up to you and your store's return policy.

Is this a good overclock? Are you happy with what you got? If so, then sure (unless it is just a 5% or less overclock - then you need to keep going unless it becomes unstable after that). Otherwise, keep going. If you're at the limit of your chip, then you are at the limit of your chip. Nothing else you can do.

How hot is too hot / how much voltage is too much? This is a very good reference for this type of thing. Look there before asking questions about this here. As a general guideline for safe temperatures, temps at full load should be at or below 60C for a P4 and 55C for Athlons. Lower is better, but don't freak if your temps are high. Check the resource, and see if it is well within specifications (as it likely is). For voltages, 1.65-1.7 is a good limit for a P4, and an Athlon can go up to 1.8 on air / 2.0 on water - generally speaking. Depending on cooling,

more or less voltage may be appropriate. The limits on chips are surprisingly high. For example, the maximum temperature/voltage on a Barton core Athlon XP is 85C and 2.0 volts. 2 volts is plenty for most overclocks, and 85C is rather high.

Do I need better cooling? Depends on what your current temperatures are and what you're planning to do with your system. If your temperatures are too high, then you probably need better cooling, or at least need to reseat your heatsink and work on cable management. Good cable management can do wonders for case airflow. Also, proper application of thermal paste is very important for temps. Use the guide from the TIM (Thermal Interface Material) manufacturer and follow it as closely as you can. If that doesn't help enough or at all, then you probably need better cooling. But see the above section and included link before you start complaining about your temperatures. We don't really want to hear it, unless they are spectacularly bad (or good). Then again, most of those instances can be chalked up to inaccurate temperature sensors on the motherboard. So don't post these questions unless you really can't figure it out and you have tried a few things already.

What are the common methods of cooling? The most common method is air cooling. This involves putting a fan on top of a heatsink which is then placed on top of the CPU. These can be either very quiet, very loud, or somewhere in between, based on the fan used. They can be fairly effective coolers, but there are more effective cooling solutions. One of these is watercooling, but I'll get into that in a bit. Air coolers are made by companies such as Zalman, Thermalright, Thermaltake, Swiftech, Alpha, Coolermaster, Vantec, etc. Zalman makes some of the best quiet cooling units and is known for its "flower cooler" design. It has one of the most effective quiet cooling designs in the 7000Cu/AlCu (all copper, or aluminum and copper, construction) and it is also one of the better performing designs. Thermalright is (quite) arguably the producer of the highest performing heatsinks when used with

appropriate fans. Swiftech and Alpha were the performance kings before Thermalright came into the spotlight; they are still excellent heatsinks and can be used in more applications than the Thermalright heatsinks because they are generally smaller and so fit on more motherboards. Thermaltake produces an abundance of cheap heatsinks, but they aren't really worth it IMHO. They don't perform on the same level as the other heatsink manufacturers' heatsinks, but they can be used in a pinch or in a budget box. That covers the most popular heatsink manufacturers. On to watercooling.

Watercooling is still mainly a fringe movement, but it is becoming more mainstream all the time. NEC and HP (I believe) make watercooled systems that can be bought retail. Still, most of watercooling is in the enthusiast area. There are several components involved in a watercooling loop, even the most basic one. There is at least one waterblock, usually on the CPU and sometimes the GPU. There is a pump and sometimes a reservoir. There is also a radiator or two. The waterblocks are generally constructed from copper or (less commonly) aluminum. Even less common, but becoming more so, are waterblocks made of silver; Danger Den makes the S-TDX, and it is possible to procure a silver Cascade block. There are several different kinds of internal designs for waterblocks, but I won't get into those here. Visit the watercooling sub forum to learn more. The pump is responsible for pushing water through the loop. The most common pumps are Eheim pumps (1046, 1048, 1250), Hydor (L20/L30), and the Danner Mag3. Iwaki pumps are also popular among the high-end crowd. The Swiftech MCP600 pump is becoming more popular, as is the Liang D4. Both of those are high-head 12V pumps. A reservoir is helpful because it adds to the volume of water in the loop and makes filling, bleeding (getting the air bubbles out of the loop) and maintenance easier. However, it takes up a good deal of space in most cases (a small reservoir isn't much good) and it is just one more thing that could leak. The radiator can either be a retail one, such as Swiftech's radiators or the Black Ice radiators, or made from a heatercore from a car. The heatercores generally offer superior performance as well as a lower price tag, but are also harder to assemble, as they usually don't come in a form

that can be adapted to watercooling quickly or easily. Oil coolers are an alternative for those with strange size requirements, as they come in a great variety of shapes and sizes (well, usually a rectangle). However, they don't perform quite as well as a heatercore. Again, look at the watercooling sub forum for more info. The tubing is also a factor in performance. Generally, 1/2" ID is considered to be the best for high performance. However, 3/8" and even 1/4" ID setups are becoming more common, and their performance is also getting closer to that of a 1/2" ID loop. That's about it for watercooling in this section.

What are some of the less common cooling types? Phase change, chilled water, peltier (TEC), and submersion setups are less common, but higher performance, cooling alternatives to those listed above. Ask in the extreme cooling sub forum about any of these methods, and read up on them before you use them. Peltier cooling and chilled water loops are both based on a modified watercooling loop. Peltier is the most common of these types. A peltier is a device that, when current is applied, gets hot on one side and cold on the other. It can be used between a CPU and a waterblock or a GPU and a waterblock. Less common are peltier-cooled northbridges, but this isn't really necessary. Ever. A chilled water loop uses either a peltier or phase change to cool off the water in the loop, usually replacing the radiator in the loop cooling the CPU/GPU. Using a peltier to do this is not very effective, because it often requires another watercooling loop to cool it off. The peltier is generally sandwiched between either a heatsink and a waterblock or a waterblock and another waterblock. The phase change method involves placing the cooling head or cooling component from an A/C unit or the like in a reservoir. Antifreeze is usually added to the water in about a 50/50 ratio in chilled water setups, because freezing isn't good. The tubing, as well as the blocks, has to be insulated if sub-ambient temperatures are ever reached, to guard against condensation. Phase change involves a compressor and a cooling head attached to the CPU or sometimes the GPU. I won't go into much depth about it here. Read up on it at overclockers.com, the extreme cooling sub forum, or your extreme overclocking forum of choice.

Other less common methods involve dry ice, liquid nitrogen, watercooling the PSU and hard drives, and other things like that. Using the case as a heatsink has also been considered and done as well.

I just thought up a cool idea for cooling - is it original? Is it listed up above? If so, then no. Also, the search function is a wonderful thing now that it works.

My cool idea has been listed already, should I post it? Only if you are in the process of doing it. Then we want - no, demand - pictures. Hypothetical discussions are okay, but make sure it is something useful. Don't let me discourage you though, just don't take it too hard if no one cares.

What about prebuilt watercooling units? The Koolance one and the Corsair one are the only ones really worth considering. The little Globalwin one is alright, but no better than any half-decent air cooling. The rest are no better; avoid them. The newest Thermaltake one may be alright, but see the above warning about Thermaltake products. New kits may be decent (the Kingwin one seems to be so), but read multiple reviews, and at least one that tests it on the platform you will be using, before buying anything.

What are the dangers of overclocking? There are several dangers attached to overclocking, and they should definitely not be overlooked. Running any component out of spec will shorten its lifespan, though newer chips are able to deal with this far better than older ones, so this is less of a problem than it used to be, especially if you upgrade every 6 months or every year. For long term stability, i.e. computers that are going to be running for more than 2 years or so under load most of the time, overclocking is not a good idea. Also, there is the possibility that overclocking will corrupt data, so if you don't do backups of any data you care about, overclocking is not really for you (and you should really start doing backups anyway)

unless you can easily replicate the data and losing it will not cause any problems. But take possible data loss into account BEFORE you start overclocking. You will thank yourself for doing this if anything goes wrong. Overclocking (especially large overclocks at a high voltage) is not recommended if you only have one computer and you need it for anything important, as the possibility of component failure is quite real (I have lost a few components to overclocking, but not as many as some have lost), so that needs to be taken into account as well. On a lighter note, addiction and "Empty Wallet Syndrome" are also very real risks of overclocking. Be careful of those.

How do I overclock? This is a rather complex question, but the basics are pretty easy. The simplest method is to just raise the FSB. This will work on almost any platform. However, Via chipsets (KT266/333/400(a)/600/880 - the 880 may have a lock, but I think it still lacks one - and the K8T800, not to be confused with the K8T800 Pro, which has one) do not have a PCI/AGP lock, so you have to be careful about raising the FSB, as running the PCI bus out of spec (33MHz is the standard speed) can corrupt hard drive data, prevent peripherals from functioning correctly (especially ATI AGP video cards), and generally cause instability. This will be revisited later. The nForce2 chipset for AMD's XP chip, the nForce3 250 (the 150 lacks a lock on most boards, though some motherboards have either dividers or rudimentary locks to allow a higher FSB - I'm not an expert on these, ask around), the Via K8T800 Pro, and the Intel 865/875 chipsets all possess locked PCI frequencies. Many, if not all, i845 based motherboards also have the PCI/AGP lock. This makes adjusting the FSB far easier, as it removes certain limiting factors, such as frequency-sensitive peripherals (most of them). However, limits still exist. Besides the limit imposed by the chip itself, the RAM and the chipset, as well as the motherboard itself, can limit the FSB that can be attained. That is where multiplier adjustment comes in. On certain Athlon XP chips, the multiplier is adjustable. These chips are referred to as 'unlocked.' The Athlon 64 series (I believe) allows multiplier adjustment to lower multipliers only,

aside from the fully unlocked FX series. The Pentium 4 is locked unless you have acquired an engineering sample through some stroke of luck or eBay. However, almost all motherboards allow multiplier adjustment as long as the chip supports it. For the Athlon XP boards that don't, a pinmodding guide to raise/lower the multiplier is available in the 'workshop' section of ocinside.de/index_e.html. The site explains how to perform the modification and also has several other useful tools.

Once the system becomes unstable because of CPU limitations, there are a few options. You can back down a little to where it is stable, or you can raise the CPU voltage (and possibly the RAM and AGP voltages) to where it becomes stable, or even raise it higher and keep pushing the overclock. You can also try 'loosening' the memory timings (raising the numbers) until it becomes stable, if raising the CPU voltage or the memory voltage doesn't help. If none of these help, your motherboard may have a provision for raising the chipset voltage, which can help if your chipset is adequately cooled. If nothing helps, you may need better cooling on the CPU or other components (cooling the MOSFETs - the little chips next to the CPU socket which regulate power - can help and is rather common). If that still doesn't help, or the gain is only marginal, you are at the limit of your chip or your motherboard. If lowering the voltage doesn't hurt stability, then it is most likely your motherboard. Voltmodding the chipset is a possibility, but is a bit advanced and requires better cooling than stock. Also, cooling the southbridge as well as the northbridge may help, or may improve stability. I know that on my motherboard, the integrated sound starts to crackle if I run WinAMP/XMMS and UT2004 (this happens in both Windows and Linux), no matter the FSB, if I don't have a heatsink on the southbridge. So it isn't a bad idea, but may not be necessary. It also generally voids your warranty (more than overclocking does - overclocking can usually be undone without a trace).

That covers basic overclocking. More advanced overclocking usually involves adding heatsinks to everything, voltmodding the motherboard and possibly the power supply, adding more/better fans and/or watercooling and/or phase change/cascade cooling. Google can help you learn about the more extreme side of

overclocking.

What do I do if my computer won't POST (display the BIOS screen when it is turned on)? This varies based on the motherboard you have. The "fail-safe" solution (unless you killed something) is to reset the CMOS, usually by moving a jumper for a set amount of time. Check your motherboard manual for the specifics. Most recent enthusiast level boards have an option to POST at reduced frequencies if the overclock is pushed too high but leave the BIOS settings intact, so you can go in and lower the clock speed to where it is stable. On some motherboards, this is done by holding the Insert key when you turn on the computer (usually it has to be a PS/2 keyboard); the DFI LanParty PRO875 is one such board. Others automatically reduce the frequency if the computer didn't POST on the previous attempt; the A7N8X usually does this. Sometimes a computer will not cold boot (POST when the power button is pressed) but will work if it is left on for a while, then reset. On other occasions the computer will cold boot fine, but will fail to warm boot (reboot). Those are both indications of instability, but if you are happy with the stability and able to deal with the issues, then they usually won't cause any huge problems.

What limits my overclock? Generally, the RAM and CPU are the only significant limiting factors, especially in AMD systems, because of the problems inherent in running the memory asynchronously (see the FSB section down below). The RAM has to run at the same speed as the FSB or at a fraction of it. Complex fractions are allowed, meaning the memory can be run at a higher rate than the FSB, not just a lower one. With the option to run looser timings / more voltage through the memory, though, it is becoming less and less the limiting factor, especially since newer platforms (P4 and A64) suffer less of a performance hit from running async (again, see below). The CPU has become the main limiting factor. The only way to deal with a CPU that doesn't want to run any faster is to pump more voltage through it, though exceeding the maximum core voltage shortens the life of the chip (as overclocking does as well); sufficient cooling stems this problem.

Another problem with running too high a core voltage manifested itself on the P4 platform in the form of SNDS, or Sudden Northwood Death Syndrome, wherein running any voltage over something like 1.7V (no one is sure of the exact number) would result in the quick and untimely death of the processor, even with phase change cooling. However, the newer 'C' core chips, the EE chips, and the Prescott chips have not had this problem, at least not to nearly the same extent. Inadequate cooling can also prevent a good overclock, as temps that are too high can lead to instability. But if your system is stable, then the temps usually are not too high.

Now that I've overclocked a lot, what should I do? Run some benchmarks if you want to. Run Prime95 (or your stress test of choice - it is up to you) for a sufficient time period (usually 24 hours straight of error-free operation is considered a stable system). /Begin shameless F@H plug Then install Folding@Home if you haven't already. /End shameless F@H plug

That covers the basic aspects of overclocking. The questions from this point on are the more technically involved ones.

What is the FSB? The FSB (or Front Side Bus) is one of the easiest and most common ways to overclock. The FSB is the speed at which the CPU interfaces with the rest of the system. It also affects the memory clock, which is the speed the memory runs at. Generally speaking, higher = better for both the FSB and the memory clock. However, there are certain cases where this is not true. For example, running the memory clock faster than the FSB does not really help at all. Also, on Athlon XP systems, running the FSB at a higher rate but forcing the memory to run out of sync with the FSB (using memory dividers, which will be discussed later on) will hamper performance far more than running at a lower FSB with the memory in sync.

The FSB is referred to in different ways on Athlon and P4 systems.

On the Athlon side, it is a DDR bus, meaning that if the actual clock is 200MHz, it is said to be running at 400MHz. On a P4, it is "quad-pumped", so if the actual clock is the same 200MHz, it is said to be 800MHz. This makes for a good marketing strategy for Intel, because as every average Joe knows, higher = better. Intel's quad-pumped FSB actually has a real-world advantage, as it is what allows P4 chips to run the memory out of sync with less performance loss. The higher rate of cycles per clock gives it a better chance of having the memory cycles line up with the CPU cycles, which equates to better performance.

Why does running the PCI/AGP bus out of spec cause instability? Running the PCI bus out of spec causes instability mainly because it forces components with very strict tolerances to run at a different frequency than they are intended to. The PCI spec is usually stated at 33MHz; sometimes it is stated at 33.3MHz, which I believe is closer to the real spec. The main victim of high PCI speeds is the hard drive controller. Certain controller cards have a higher tolerance than others, and so are able to run at increased speeds without noticeable corruption. However, the onboard controllers on most motherboards (especially SATA controllers) are extremely sensitive to high PCI speeds, and can suffer corruption and data loss if the PCI bus is running at even 35MHz. Most are able to do 34MHz, as it is really less than 1MHz out of spec (depending on where the motherboard stops rounding to 34MHz... for example, most motherboards will probably report any FSB from 134-137MHz as being a 34MHz PCI speed; with the usual /4 PCI divider, the actual range works out to 33.5-34.25MHz, and it may vary even more based on variations in the clock frequency of the motherboard. At higher FSBs and higher dividers, the range can be even wider). Audio and other integrated peripherals also suffer when the PCI bus is run out of spec. ATI video cards are a lot less tolerant of high AGP speeds (directly related to PCI speed) than nVidia cards. On the other hand, most Realtek LAN cards (the PCI-based ones that occupy an expansion slot) are rated for safe operation at anywhere from 30-40MHz.
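To make the numbers above concrete, here is a minimal sketch of the bus arithmetic, assuming a fixed FSB-to-PCI divider (real boards offer several dividers and often a PCI/AGP lock); the function names and the 33.3MHz spec value are only illustrative.

# Rough sketch of the bus arithmetic described above.
# Assumptions: a fixed FSB/4 PCI divider and a 33.3 MHz PCI spec value;
# real boards use selectable dividers (/4, /5, /6, ...) and lock options.

PCI_SPEC_MHZ = 33.3

def effective_fsb(actual_clock_mhz, platform):
    """Marketing FSB: the Athlon bus is double-pumped (DDR), the P4 bus is quad-pumped."""
    pumping = {"athlon_xp": 2, "p4": 4}
    return actual_clock_mhz * pumping[platform]

def pci_speed(fsb_mhz, divider=4):
    """PCI clock derived from the FSB with a simple integer divider."""
    return fsb_mhz / divider

for fsb in (134, 137, 140):
    pci = pci_speed(fsb)
    status = "out of spec" if pci > PCI_SPEC_MHZ else "in spec"
    print(f"FSB {fsb} MHz -> PCI {pci:.2f} MHz ({status})")

print(effective_fsb(200, "athlon_xp"))  # 400 -> the "400MHz" Athlon FSB
print(effective_fsb(200, "p4"))         # 800 -> the "800MHz" quad-pumped P4 FSB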

What is the multiplier? The multiplier acts in conjunction with the FSB to determine the clock speed of the chip. For example, a multiplier of 12 with an FSB of 200 will give a clock speed of 2400MHz. As explained in the overclocking section above, some chips are locked and some are unlocked, meaning only certain chips allow adjustment of the multiplier. If you have multiplier adjustment, it can be used either to get a higher clock speed if the FSB is limited on your motherboard, or to allow a higher FSB if the chip is limited.

What is a memory divider? A memory divider determines the ratio of the memory clock speed to the FSB. A 2:1 FSB:RAM divider would net a 100MHz RAM clock with an FSB of 200MHz. The most common use of a divider is to allow a P4C system to run a 250MHz FSB with PC3200 RAM, using a 5:4 divider. There are also 4:3 and 3:2 dividers on most Intel (DDR1) systems. Athlon systems can't use the memory as efficiently as P4 systems when a divider is used, as explained above in the FSB section. Memory dividers should only be used to obtain stability, not just on a whim, because even on a P4 they hurt performance somewhat. If your system is stable without resorting to a memory divider (or if a memory voltage bump can fix the problem), then don't use the dividers.

What do the different memory timings mean? Timings are usually listed in the order CAS-tRCD-tRP-tRAS, e.g. 2-3-3-6. CAS Latency, sometimes referred to as CL or CAS, is the number of cycles between a read command being issued and the data becoming available. Obviously, the lower the number (time), the better. tRCD is the delay between a row being activated and data on that row being read/written. Again, the lower the number the better. tRP is basically the precharge time of the row: the time the system waits after closing a row before another row can be made active. Once again, lower is better. tRAS is the minimum value for how long a row can be active. So basically, tRAS is how long the row has to stay turned on. This number varies quite a bit with RAM settings.
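As a quick worked example of the multiplier and divider arithmetic above, here is a minimal sketch (the function names are just for illustration; the divider is written FSB:RAM as in the text):

# Small sketch of the multiplier / divider math described above.

def cpu_clock(fsb_mhz, multiplier):
    """CPU core clock = FSB x multiplier."""
    return fsb_mhz * multiplier

def ram_clock(fsb_mhz, fsb_part, ram_part):
    """Memory clock from an FSB:RAM divider, e.g. 5:4 -> ram = fsb * 4 / 5."""
    return fsb_mhz * ram_part / fsb_part

print(cpu_clock(200, 12))     # 2400 MHz, the example from the text
print(ram_clock(200, 2, 1))   # 2:1 divider -> 100 MHz RAM clock
print(ram_clock(250, 5, 4))   # 5:4 divider -> 200 MHz, i.e. PC3200 at a 250MHz FSB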

What do the various memory ratings refer to? (PC2100/PC2700/PC3200, etc.) The rating refers directly to the maximum bandwidth obtainable and indirectly to the memory clock rate. PC2100, for example, has a 2.1GB/s maximum transfer rate and a clock rate of 133MHz. PC4000, as another example, has a 4GB/s ideal transfer rate and a 250MHz clock. To obtain the clock rate from the PCXXXX rating, divide the rating by 16; multiply the MHz rating by 16 to obtain the bandwidth rating.

How does the DDRXXX rating refer to the actual clock speed of the memory? The DDRXXX number is just two times the actual clock speed; i.e. DDR400 is clocked at 200MHz. If you want to know the PCXXXX rating from the DDRXXX speed, multiply it by 8. (A short sketch of these conversions appears after the links section below.)

Links (Provided by "mota" on the HardForums)

Misc. Stores (Most sell cooling related products)
http://www.2cooltek.com/
http://www.3dcool.com/
http://www.caseetc.com
http://www.outsideloop.com/
http://www.plycon.com/
http://www.sidewindercomputers.com
http://www.so-trickcomputers.com
http://www.sysfx.net
http://www.heatsinkstore.com
http://www.crazypc.com
http://www.inflowdirect.com
http://www.svcompucycle.com/
http://www.frozencpu.com
http://www.coldcpu.com
http://www.newegg.com/

http://www.heatsinkfactory.com/
http://www.highspeedpc.com/
http://www.pclincs.co.uk/
http://www.coolerguys.com/

Vapor Phase Systems
http://www.kryotech.com
http://www.vapochill.com
http://www.maxxxpert.com/english/default.htm
http://www.chip-con.com

Water Cooling
http://www.aquastealth.com/
http://www.coolwhip.dk/
http://www.dangerden.com/
http://www.dtekcustoms.com/
http://www.swiftnets.com/
http://www.cooltechnica.com/
http://www.watercooler.com.br
http://www.aqua-computer.de
http://www.case-mod.com/store/default.php?cPath=21_24
http://www.sharkacorp.com/
http://www.cooltech.info
http://www.cryophilia.com/ - Singapore
http://www.silverprop.com - Australia
http://www.cooling-store.de
http://3rotor.dns2go.com/store/ - Canada
http://www.inmind.ca - Canada
http://www.e-compuvision.com - Canada
http://www.wc101.com/ - Watercooling information website
www.wbhouse.it
http://www.mscdirect.com/ - Hoses, fittings, quick disconnects

http://www.est.hi-ho.ne.jp/kanie/fetop.htm
http://www.globalwin.com.tw/
http://www.pcm-solutions.com/sink.html

Fans and Fan related products
http://gizzo.8m.com/fans/ - GiZzO's Fan Database Page
http://www.adda.com.tw/
http://www.cooltron.com/
http://www.comairrotron.com/index.html
http://www.deltaww.com/products/dcfans.htm
http://www.fanbus.com/
http://www.mechatronicsinc.com/
http://www.nidec.com/index.html
http://nmbtechn.iserver.net
http://www.ystech.com.tw
http://www.ebm.com
http://www.sunon.com
http://www.vantecusa.com

Misc. Parts Suppliers
http://www.allelectronics.com/
http://www.coleparmer.com/
http://www.digikey.com/
http://www.jameco.com/
http://mcmelectronics.com/
http://www.mcmaster.com/
http://www.meci.com/
http://www.mouser.com/
http://www.mpja.com/
http://www.mscdirect.com/
http://www.dse.com.au/ - Australia

Peltiers
http://www.marlow.com/
http://www.melcor.com/
http://www.supercool.se/
http://www.tedist.com
http://www.lytron.com
http://www.maximumoc.com - Sysfailur's cold-plate selling site.

Temperature controllers
http://www.tetech.com/temp/
http://www.ovenind.com
http://www.oatleyelectronics.com/kits/k140.html - Australia
http://www.melcor.com/tempctrl.htm
http://www.sscooling.com/products.html
http://www.variablepc.com/

Dedicated Power Supplies
http://www.astroncorp.com/
http://www.meanwell.com/
http://www.rivergatedist.com/
http://www.peaktopeakpower.com/

ATX power supplies
PC Power and Cooling
Heroichi (HEC)
Antec
Enermax USA
Taiwan DynaPower
Sparkle

Technical Info & How-to (must be OC & C related)
http://www.overclockers.com/
http://www.procooling.com
Processor Electrical Specifications

Software
http://mbm.livewiredev.com/ - Motherboard Monitor
http://www.h-oda.com/ - WCPUID
http://www.sisoftware.demon.co.uk/sandra/ - SiSoft Sandra
http://www.mersenne.org/freesoft.htm - Prime95
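As mentioned back in the memory ratings question, here is a minimal sketch of the PCXXXX / DDRXXX / clock-rate conversions; the function names are only illustrative, while the divide-by-16, times-2, and times-8 factors come straight from the text.

# Minimal sketch of the memory rating conversions described above.

def clock_from_pc_rating(pc_rating):
    """PC3200 -> 3200 / 16 = 200 MHz actual clock."""
    return pc_rating / 16

def ddr_rating_from_clock(clock_mhz):
    """DDR transfers data twice per clock, so the DDR number is double the clock."""
    return clock_mhz * 2

def pc_rating_from_ddr(ddr_rating):
    """DDR400 -> 400 * 8 = PC3200 (8 bytes transferred per cycle on a 64-bit module)."""
    return ddr_rating * 8

print(clock_from_pc_rating(3200))   # 200 MHz
print(clock_from_pc_rating(2100))   # ~131, marketed as the 133 MHz PC2100 clock
print(ddr_rating_from_clock(200))   # 400 -> DDR400
print(pc_rating_from_ddr(400))      # 3200 -> PC3200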
