
ATMIYA INSTITUTE OF TECHNOLOGY AND SCIENCE RAJKOT

CERTIFICATE

This is to certify that

Achyut Vaidya (Reg. No.: 064046) and Vikas Shukla (Reg. No.: 064050) of the Instrumentation and Control Department, Semester 6, have satisfactorily completed the Seminar on DDR3 SDRAM as a part of the curriculum.

Date of Submission: __________________________

Guide                                         Head of Department

Acknowledgement

I take immense pleasure in thanking Bhayani Sir, the H.O.D. of our department, for having permitted me to carry out this seminar work. I wish to express my deep sense of gratitude to my guides, Bhautik Sir and Rakshit Sir, for their able guidance and useful suggestions, which helped me complete the seminar work in time.

Finally, yet importantly, I would like to express my heartfelt thanks to my beloved parents for their blessings, my friends/classmates for their help and wishes for the successful completion of this seminar.

Table of Contents:
1. Basic Terms
   - What is R.A.M.
   - Types of R.A.M. (D.R.A.M. and S.R.A.M.)
   - S.D.R.A.M.
   - D.D.R. S.D.R.A.M.
2. Introduction to D.D.R.3 S.D.R.A.M.
3. S.D.R.A.M. Access
4. Fly-By Architecture
5. D.D.R.3 Efficiency
6. D.D.R.3 Prefetch Buffer
7. D.D.R.3 Speed
8. D.D.R.3 Overclocker Functionality
9. D.D.R.3 Dynamic ODT
10. Disadvantages
11. References

BASIC TERMS
-What is R.A.M.

Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.

This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item.

The word RAM is mostly associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. However, many other types of memory are RAM as well (i.e., Random Access Memory), including most types of ROM and a kind of Flash memory called NOR-Flash.

-Types of R.A.M.

Now, there are two types of RAM:
- S.R.A.M. - Static Random Access Memory
- D.R.A.M. - Dynamic Random Access Memory

Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory. The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high density. Unlike Flash memory, it is volatile memory (cf. non-volatile memory), since it loses its data when the power supply is removed.

-S.D.R.A.M.

SDRAM refers to synchronous dynamic random access memory, a term used to describe dynamic random access memory that has a synchronous interface. Traditionally, dynamic random access memory (DRAM) has an asynchronous interface, which means that it responds as quickly as possible to changes in control inputs. SDRAM has a synchronous interface, meaning that it waits for a clock signal before responding to control inputs and is therefore synchronized with the computer's system bus. The clock is used to drive an internal finite state machine that pipelines incoming instructions. This allows the chip to have a more complex pattern of operation than asynchronous DRAM, which does not have a synchronized interface.

Pipelining means that the chip can accept a new instruction before it has finished processing the previous one. In a pipelined write, the write command can be immediately followed by another instruction without waiting for the data to be written to the memory array. In a pipelined read, the requested data appears a fixed number of clock cycles after the read instruction, cycles during which additional instructions can be sent. (This delay is called the latency and is an important parameter to consider when purchasing SDRAM for a computer.)

SDRAM latency refers to the delays incurred when a computer tries to access data in SDRAM, and it is often measured in memory bus clock cycles. Because a modern CPU is much faster than SDRAM, the CPU has to wait a relatively long time for a memory access to complete before it can process the data. SDRAM latency thus contributes to total memory latency, which is a significant bottleneck for system performance in modern computers.
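Since latency is quoted in bus clock cycles, converting it to absolute time is a simple multiplication. The sketch below illustrates the arithmetic; the CL5/200 MHz figures are illustrative examples, not taken from any particular datasheet.

```python
# Convert an SDRAM latency from bus clock cycles to nanoseconds.
# Illustrative values only; real parts specify CAS latency per speed bin.

def latency_ns(cas_cycles, bus_clock_mhz):
    """Latency in ns = cycles x cycle time, where cycle time = 1000 / clock (MHz)."""
    cycle_time_ns = 1000.0 / bus_clock_mhz
    return cas_cycles * cycle_time_ns

# Example: CL5 at a 200 MHz memory clock -> 5 cycles x 5 ns = 25 ns
print(latency_ns(5, 200))  # 25.0
```

Note that a higher clock shortens each cycle, which is why a larger CAS number on a faster module can still mean the same (or lower) absolute latency.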

- D.D.R. S.D.R.A.M.

In computing, a computer bus operating with double data rate transfers data on both the rising and falling edges of the clock signal. This is also known as double-pumped, dual-pumped, and double transition. The simplest way to design a clocked electronic circuit is to make it perform one transfer per full cycle (rise and fall) of a clock signal. This, however, requires that the clock signal operate at twice the rate of the data signals, which change at most once per cycle. When operating at high bandwidth, signal integrity limitations constrain the clock frequency. By using both edges of the clock, the data signals operate at the same limiting frequency, doubling the data transmission rate.

This technique has been used for microprocessor front side busses, Ultra-3 SCSI, the AGP bus, DDR SDRAM, and the HyperTransport bus on AMD's Athlon 64 processors. An alternative to double or quad pumping is to make the link self-clocking. This tactic was chosen by InfiniBand and PCI Express. Describing the bandwidth of a double-pumped bus can be confusing. Each clock edge is referred to as a "beat", with two beats (one upbeat and one downbeat) per cycle. Some people talk about the basic clock frequency, while others refer to the number of transfers per second. Careful usage generally talks about "500 MHz, double data rate" or "1000 MT/s", but people will refer casually to a "1000 MHz bus", even though no signal cycles higher than 500 MHz. DDR SDRAM popularized the technique of referring to the bus bandwidth in megabytes per second, the product of the transfer rate and the bus width in bytes. DDR SDRAM operating with a 100 MHz clock is called DDR-200 (after its 200 MT/s data transfer rate), and a 64 bit (8 byte) wide DIMM operated at that data rate is called PC-1600, after its 1600 MB/s peak (theoretical) bandwidth. Note that DDR SDRAM only uses double-data-rate signalling on the data lines. Address and control signals are still sent to the DRAM once per clock cycle.
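The naming arithmetic described above can be captured in a few lines. This sketch just reproduces the example from the text (100 MHz clock, 8-byte DIMM) rather than implementing any official naming table.

```python
# DDR naming arithmetic: two transfers per clock cycle, and peak bandwidth
# equal to the transfer rate times the bus width in bytes.

def ddr_names(clock_mhz, bus_width_bytes=8):
    transfers_per_s = clock_mhz * 2            # double data rate: 2 beats per cycle
    peak_mb_s = transfers_per_s * bus_width_bytes
    return f"DDR-{transfers_per_s}", f"PC-{peak_mb_s}"

print(ddr_names(100))  # ('DDR-200', 'PC-1600')
```

This makes the naming conventions concrete: the "DDR-" number counts transfers per second, while the "PC-" number counts megabytes per second.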

Introduction to DDR3 SDRAM


These are uncertain financial times we live in today, and the rise and fall of our economy has had a direct effect on consumer spending. It has already been one full year now that DDR3 has been patiently waiting for the enthusiast community to give it proper consideration, yet its success is still undermined by misconceptions and high price. Benchmark Reviews has been testing DDR3 more actively than anyone, which is why over fifteen different kits fill our System Memory section of reviews. Sadly, it might take an article like this to open the eyes of my fellow hardware enthusiasts and overclockers, because it seems like DDR3 is the technology nobody wants badly enough to learn about. A pity, because DDR3 is the key to extreme overclocking. First and foremost, DDR3 is not just a faster version of DDR2. In fact, the worst piece of misinformation I see spread in enthusiast forums is that DDR3 simply picks up speed where DDR2 left off... which is about as accurate as saying an airplane picks up where a kite left off. DDR3 does improve upon the previous generation in certain shared areas, and the refined fabrication process has allowed for a more efficient integrated circuit (IC) module. Although DDR3 doesn't share the same pin connections or key placements, it does still share the DIMM profile and overall appearance. From a technical perspective, however, this is where the similarities end.

Features:

- Now supports system-level flight time compensation
- Mirror-friendly DRAM pin-outs are now contained on-DIMM
- CAS write latencies are now issued per speed bin
- Asynchronous reset function is available for the first time in SDRAM
- I/O calibration engine monitors flight time and correction levels
- Automatic data bus line read and write calibration

Improvements:

- Higher bandwidth performance, up to 1600 MHz per spec
- DIMM-terminated 'fly-by' command bus
- Constructed with high-precision load line calibration resistors
- Performance increase at low power input
- Enhanced low-power features conserve energy
- Improved thermal design now operates the DIMM cooler

In electronic engineering, DDR3 SDRAM or double-data-rate three synchronous dynamic random access memory is a random access memory technology used for high bandwidth storage of the working data of a computer or other digital electronic devices. DDR3 is part of the SDRAM family of technologies and is one of the many DRAM (dynamic random access memory) implementations. DDR3 SDRAM is an improvement over its predecessor, DDR2 SDRAM. The primary benefit of DDR3 is the ability to transfer twice the data rate of DDR2 (I/O at 8X the data rate of the memory cells it contains), thus enabling higher bus rates and higher peak rates than earlier memory technologies. However, there is no corresponding reduction in latency, which is therefore proportionally higher. In addition, the DDR3 standard allows for chip capacities of 512 megabits to 8 gigabits, effectively enabling a maximum memory module size of 16 gigabytes.

S.D.R.A.M. Access

SDRAM is notionally organized into a grid-like pattern, with "rows" and "columns". The data stored in SDRAM comes in blocks, defined by the coordinates of the row and column of the specific information. The steps for the memory controller to access data in SDRAM follow in order:

1. First, the SDRAM is in an idle state.

2. The controller issues the "Active" command. It activates a certain row, as indicated by the address lines, in the SDRAM chip for accessing. This command typically takes a few clock cycles.

3. After the delay, the column address and either a "Read" or "Write" command is issued. Typically the read or write command can be repeated every clock cycle for different column addresses (or a burst-mode read can be performed). The read data is not available until a few clock cycles later, however, because the memory is pipelined.

4. When an access is requested to another row, the current row has to be deactivated by issuing the "Precharge" command. The precharge command takes a few clock cycles before a new "Active" command can be issued.
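The command sequence above can be sketched as a small state machine. This is a deliberately minimal model (the class name and structure are my own, and real controllers also track timing constraints such as tRCD and tRP); it only shows the idle/active/precharge flow described in the steps.

```python
# Minimal sketch of one SDRAM bank's command flow: Active opens a row,
# Read/Write address a column within it, Precharge closes the row again.

class SDRAMBank:
    def __init__(self):
        self.state = "idle"
        self.open_row = None

    def activate(self, row):
        # Step 2: the "Active" command opens one row for access.
        assert self.state == "idle", "must precharge before opening another row"
        self.open_row = row
        self.state = "active"

    def read(self, column):
        # Step 3: reads/writes address a column within the open row.
        assert self.state == "active", "no row is open"
        return ("data", self.open_row, column)

    def precharge(self):
        # Step 4: "Precharge" deactivates the row, returning the bank to idle.
        self.open_row = None
        self.state = "idle"

bank = SDRAMBank()
bank.activate(row=3)
print(bank.read(column=7))   # ('data', 3, 7)
bank.precharge()             # required before activating a different row
```

The assertions encode the ordering rule from the text: a new row cannot be opened until the current one has been precharged.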

FLY-BY Architecture

What is fly-by? DDR3 DRAM modules use a new fly-by topology for the commands, addresses, control signals, and clocks to improve signaling and sustain memory signal integrity at high speeds. A fly-by topology daisy-chains the address and control lines through a single path across each DRAM, similar to how an FBDIMM module operates. This is unlike DDR2 memory, which splits the signal path between all DRAM chips through T-branching.

For better signal quality at higher speed grades, DDR3 adopts a so-called Fly-by architecture for the command, address, and clock signals. This effectively reduces the number of stubs and the signalling length compared with the DDR2 T-branch architecture, in favour of a more elegant and straightforward design. The Fly-by topology connects the DRAM chips on the memory module in series, and at the end of the linear connection is a grounded termination point that absorbs residual signals, preventing them from being reflected back along the bus.

We recently asked Aaron Boehm, an Application Engineer at Micron, how important the Fly-by topology is to DDR3. He said the Fly-by design was very important: "Probably one of the biggest advantages of moving to the Fly-by topology is that we are able to achieve a much faster slew rate for the signal. This gives us a bigger data-eye, which is very important in DRAM."

In past generations of SDRAM, up to and including DDR2, system memory used a 'star' topology to disseminate data across many branched signal paths. This improvement is similar to when automobiles first began using front-wheel drive to bypass the long drivetrain linkage that incrementally sapped power from the wheels. Essentially, the Fly-by data bus topology utilizes a single direct link between all DRAM components, which allows the system to respond much more quickly than if it had to address stubs.

The reason DDR2 cannot develop beyond the point it already has isn't truly an issue of fabrication refinements; it is more specifically an issue of mechanical limitations. Essentially, DDR2 technology is no better prepared to reach higher speeds than a propeller airplane is to break the sound barrier; in theory it's possible, just not with the mechanical technology presently developed. At higher frequencies the DIMM module becomes very dependent on signal integrity, and topology layout becomes a critical issue. For DDR2 this means that each of the 'T branches' in the topology must remain balanced, an effort which is beyond its physical limitations. With DDR3, however, the signal integrity is individually tuned to each DRAM module rather than balanced across the entire memory platform. Now both the address and control lines travel a single path instead of the inefficient branched T topology of DDR2. Each DDR3 DRAM module also incorporates a managed leveling circuit dedicated to calibration, and it is the function of this circuit to memorize the calibration data. The Fly-by topology removes the mechanical line-balancing limitations of DDR2 and replaces them with an automatic signal time delay generated by the controller, fixed during memory system training. Although it is a rough analogy, DDR3 is very similar to the advancement of jet propulsion over prop-style aircraft: an entirely new dimension of possibility is made available. There is a downside, however, and it lies primarily in the latency timings.

DDR3 EFFICIENCY
Efficiency is a double-edged sword when we talk about DDR3, because aside from fabrication process efficiency there are also several architectural design improvements which create a more efficient transfer of data and a reduction in power. All of these items tie in together throughout this article, so for you to understand why DDR3 is going to be worth your money, you should probably also know why it's going to deliver more.

Power Consumption

So let's begin with power: at the JEDEC JESD 79-3B standard of 1.5 V, DDR3 system memory reduces the base power consumption level by nearly 17% compared to the 1.8 V specified base power requirement for DDR2 modules. Taking this one step further, consider that at the high end of DDR2 there are 1066 MHz memory modules demanding 2.2 V to function properly. Compare this to the faster 1600 MHz DDR3 RAM modules operating at 1.5 V nominal and you'll begin to see where money can be saved on energy costs - conserving nearly 32% of the power previously needed. The reduced level of base power demand works particularly well with the 90 nm fabrication technology presently used for most DDR3 chips. In high-efficiency modules, some manufacturers have reduced current leakage even further by using "dual-gate" transistors.
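The article's percentages follow from comparing the supply voltages directly, as the quick check below shows. (Strictly speaking, dynamic CMOS power scales roughly with the square of the voltage, so the real savings can be larger; the figures here just reproduce the linear comparison used in the text.)

```python
# Reproduce the ~17% (1.8 V -> 1.5 V) and ~32% (2.2 V -> 1.5 V) figures
# quoted above, computed as a straight voltage reduction.

def voltage_reduction_pct(v_old, v_new):
    return round(100 * (v_old - v_new) / v_old)

print(voltage_reduction_pct(1.8, 1.5))  # 17
print(voltage_reduction_pct(2.2, 1.5))  # 32
```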

You might be wondering how big a difference 0.7 V can make on your power bill, and that's a fair concern. In reality, it's not going to be enough to justify the initial costs of the new technology, at least not for your average casual computer user who operates a single workstation in their home. But before you dismiss the power saving, consider who might really make an impact here: commercial and industrial industries. If the average user can see only a few dollars saved per month on utility costs, imagine the savings to be made from large data center and server farm facilities. Not only will the reduced cost of operation help minimize overhead expenses, but the improved design also reduces heat output. Many commercial facilities spend a double-digit portion of their monthly expenses on utilities, and the cost savings sustained from lower power consumption and reduced cooling expenses will have an enormous effect on overhead. If you can imagine things just one step further, you'll discover that reduced cooling needs will also translate into reduced maintenance costs on that equipment and prolong the lifespan of HVAC equipment.

Voltage Notes for Overclockers


According to JEDEC standard JESD 79-3B, approved in April 2008, the maximum recommended voltage for any DDR3 module must be regulated at 1.575 V (see the reference documents at the end of this article). Keeping in mind that the vast majority of system memory resides in commercial workstations and mission-critical enterprise servers, if system memory stability is important then this specification should be considered the absolute maximum voltage limit. But there's still good news for overclockers, as JEDEC states that these DDR3 system memory modules must also withstand up to 1.975 V before any permanent damage is caused.

DDR3: Prefetch Buffer

The SDRAM family has seen generations of change. JEDEC stepped in to define standards very early in the production timeline, and subsequently produced the DDR, DDR2, and now DDR3 DRAM (dynamic random access memory) implementations. You already know that DDR3 SDRAM is the replacement for DDR2 by virtue of its vast design improvements, but you might not know what all of those improvements actually are. In addition to the logically progressive changes in fabrication process, there are also improvements to the architectural design of the memory. In the last section I extolled the benefits of saving power and conserving natural resources by using DDR3 system memory, but most hardware enthusiasts are not aware that this efficiency now also extends into a new data transfer architecture introduced with DDR3. One particularly important change introduced with DDR3 is the improved prefetch buffer: up from DDR2's four bits to eight bits per cycle. This translates to a full 100% increase in the prefetch payload, not just the small incremental improvement we've seen from past generations.
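The prefetch depth is what lets a relatively slow memory array feed a fast I/O bus: the I/O data rate is the core (cell array) clock multiplied by the prefetch depth. The sketch below uses a 200 MHz core clock as an illustrative example to contrast DDR2's 4n prefetch with DDR3's 8n.

```python
# I/O data rate (MT/s) = core clock (MHz) x prefetch depth (bits per access).
# Doubling the prefetch from 4 to 8 doubles the data rate at the same core clock.

def io_data_rate(core_clock_mhz, prefetch_bits):
    return core_clock_mhz * prefetch_bits

print(io_data_rate(200, 8))  # 1600 (MT/s): DDR3's 8n prefetch
print(io_data_rate(200, 4))  # 800  (MT/s): DDR2's 4n prefetch, same core clock
```

This is why DDR3 reaches twice DDR2's data rate without the memory cells themselves running any faster.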

DDR3: Speed
Even in its infancy, DDR3 offers double the JEDEC standard maximum speed of DDR2. According to JEDEC standard JESD 79-3B, drafted in April 2008, DDR3 offers a maximum default speed of 1600 MHz compared to 800 MHz for DDR2. But this is all just ink on paper if you aren't able to actually notice an improvement in system performance. So far, we have discussed how DDR3 can save the average enthusiast up to 32% of the energy consumed by system memory. We also added a much larger 8-bit prefetch buffer to the list of compelling features. So let's cinch it all together with a real ground-breaking improvement, because DDR3 has introduced a brand new system for managing the data bandwidth bus.

DDR Speed    Memory clock  Cycle time  I/O bus clock  Module name  Peak transfer rate
DDR3-800     100 MHz       10 ns       400 MHz        PC3-6400     6400 MB/s
DDR3-1066    133 MHz       7.5 ns      533 MHz        PC3-8500     8533 MB/s
DDR3-1333    166 MHz       6 ns        667 MHz        PC3-10600    10667 MB/s
DDR3-1600    200 MHz       5 ns        800 MHz        PC3-12800    12800 MB/s
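Every column in the table above can be derived from the memory clock alone: DDR3's I/O bus clock is 4x the memory (core) clock, the data rate is 2x the bus clock, and an 8-byte DIMM moves 8 bytes per transfer. The sketch below reproduces the whole-number rows; the fractional clocks (133⅓ MHz, 166⅔ MHz) are rounded in the table, and the PC3 name here is simply the peak rate floored to the nearest hundred, which matches these rows but is not an official naming algorithm.

```python
# Derive a DDR3 table row from the memory (core) clock.
# bus clock = 4x core clock; data rate = 2x bus clock; peak = rate x 8 bytes.

def ddr3_row(mem_clock_mhz):
    bus_clock = mem_clock_mhz * 4
    data_rate = bus_clock * 2
    peak_mb_s = data_rate * 8
    module = f"PC3-{peak_mb_s // 100 * 100}"   # assumption: floor to nearest 100
    return data_rate, bus_clock, module, peak_mb_s

print(ddr3_row(100))  # (800, 400, 'PC3-6400', 6400)   -> DDR3-800
print(ddr3_row(200))  # (1600, 800, 'PC3-12800', 12800) -> DDR3-1600
```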

DDR3: Overclocker Functionality


So let's pause for a moment to recap what we've covered: DDR3 RAM modules can conserve up to 32% of the energy used on system memory, while at the same time saving money on maintenance costs for enterprise HVAC systems. The data prefetch buffer has doubled from only 4 bits per cycle to a full 8 bits with each pass. Finally, the Fly-by topology removes the mechanical limitations of physical line balancing by replacing it with an automatically controlled and calibrated signal time delay. Not just a speed improvement, as some would have you believe.

XMP
So then, when was the last time enthusiasts were actually encouraged to overclock their system memory by the manufacturer? Better yet, when was the last time Intel endorsed the practice? To be fair, Intel processors have been capable of being overclocked for quite some time, but not nearly with the level of convenience introduced by XMP technology. XMP, or eXtreme Memory Profile, is an automatic memory settings technology developed by Intel and Corsair to compete with Nvidia's SLI Memory and Enhanced Performance Profiles (EPP). It works very similarly to EPP, with one major exception: XMP manages everything from the CPU multiplier to voltage and front side bus frequencies. This makes overclocking one of the easiest things possible, since it only requires an XMP-compatible motherboard, such as Intel's X48 series, and an XMP-enhanced set of system memory modules.

The XMP specification was first officially introduced by Intel on March 23rd, 2007 to enable an enthusiast performance extension to the traditional JEDEC SPD specifications. It is very common for Intel Extreme Memory Profiles to offer two different performance profiles. Profile 1 is used for enthusiast or certified settings and is the profile tested under the Intel Extreme Memory Certification program. Profile 2 is designed to host the extreme or fastest possible settings, which have no guard band and may or may not work on every system. It should also be noted that XMP settings are not always defined as overclocked or over-volted components. In some less common cases, Extreme Memory Profiles can be used to define conservative power-saving settings or reduced (faster) latency timings.

Dynamic ODT
On-die termination (ODT) is the technology whereby the termination resistor for impedance matching in transmission lines is located inside a semiconductor chip instead of on a printed circuit board. Although the termination resistors on the motherboard reduce some reflections on the signal lines, they are unable to prevent reflections resulting from the stub lines that connect to the components on the module card (e.g., the DRAM module). A signal propagating from the controller to the components encounters an impedance discontinuity at the stub leading to the components on the module. The signal that propagates along the stub to the component (e.g., a DRAM component) will be reflected back onto the signal line, thereby introducing unwanted noise into the signal. In addition, on-die termination can reduce the number of resistor elements and complex wirings on the motherboard. Accordingly, the system design can be simpler and more cost effective.

For optimum signaling, a typical dual-slot system will have a module terminate to a LOW impedance value (30 Ω or 40 Ω) when in an idle condition. When the module is being accessed during a WRITE operation, greater termination impedance is desired, for example 60 Ω or 120 Ω. Dynamic ODT enables the DRAM to switch between HIGH and LOW termination impedance without issuing a mode register set (MRS) command. This is advantageous because it improves bus scheduling and decreases bus idle time.
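The reflection problem that termination solves can be quantified with the standard transmission-line reflection coefficient, Γ = (Z_load − Z_line) / (Z_load + Z_line). This is a general signal-integrity formula, not a figure from the DDR3 specification, and the impedance values below are illustrative.

```python
# Fraction of an incident signal reflected at an impedance discontinuity.
# A matched termination (Z_load == Z_line) reflects nothing; an open,
# unterminated stub (Z_load -> infinity) reflects nearly everything.

def reflection_coefficient(z_line_ohms, z_load_ohms):
    return (z_load_ohms - z_line_ohms) / (z_load_ohms + z_line_ohms)

print(reflection_coefficient(50, 50))   # 0.0: matched termination, no reflection
print(reflection_coefficient(50, 1e9))  # ~1.0: open stub reflects almost all energy
```

This is why switching the on-die termination between idle and write values matters: keeping the effective load impedance close to the line impedance at each moment minimizes Γ, and hence the noise reflected onto the bus.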

Disadvantages
When you compare DDR3 to previous SDRAM generations, it inherently claims a higher CAS latency. The higher timings may be compensated for by higher bandwidth, which increases overall system performance, but they aren't nullified. Additionally, this is new technology and it wears the new-technology price tag. DDR3 generally costs more if you compare the price-per-megahertz ratio; this was also the case when DDR2 replaced DDR years ago. In fact, I still have the receipt for a nearly $400 set of Corsair Dominator 1066 MHz DDR2 from just under two years ago. For that same amount today, I could get a lot more performance for my dollar. There are also a few technical difficulties which must be overcome in order to take advantage of DDR3. For example, to achieve maximum memory efficiency at the system level, the system's front side bus frequency must also extend to that level. In most cases, it's best to have the front side bus operate at a matching memory frequency. While this obviously isn't going to be a problem as 1600 MHz FSB processors become mainstream, it still places a burden on the processor and motherboard chipset to make accommodations. But we're not quite out of the woods yet... a higher operating frequency also means more signal integrity issues. Both motherboard and memory module design engineers now have to master new technologies and purchase very expensive test equipment just to examine the specific performance routines. In the end, the lab facility costs will be passed along to you-know-who. This might explain how $300+ DDR3 motherboards have become such a common sight.

Potential Concerns
System memory has had the opportunity to evolve and improve, but it hasn't been alone. Processor and motherboard technology have also moved forward, at what might be considered a faster rate of development. Just as the speed of system memory has increased, the amount of onboard processor cache memory has also increased. As I write this article, I have a set of DDR3 memory modules running at 2000 MHz and an Intel E8200 processor with 6 MB of cache. It seems that at some point in the upcoming wave of product evolution, my computer may not see the need to call on system memory unless I'm using a graphics-intensive application. If the trend continues, as it likely will, we might not see any benefit from the ever-increasing operating frequency of system memory, because the processor will have a large amount of cache operating at a far faster speed.

Another concern is scalability and expansion. While I admire the brilliance of JEDEC in bringing a more efficient module into mainstream use, I sometimes wonder how they arrive at other decisions. One key issue that may become a problem down the road is the specification calling for a maximum of two dual-rank modules per channel at 800-1333 MHz frequencies. It gets worse: only one memory slot is allowed at the present specification's top operational frequency of 1600 MHz.

All in all, DDR3 isn't perfect. It's unquestionably better than its predecessor, but I think my points have illustrated that the good also comes with the bad. For the past year our concentration here at Benchmark Reviews has constantly centered around DDR3, as if it were a new toy to play with. But it's not; DDR3 is here to stay, and whether you want it to or not, the market will soon be treating DDR2 the same way it presently treats DDR. You can cling to your old technology, but at this point that would be like reverting to AGP discrete graphics... which also cost a lot less than PCI Express.

References

- Intel XMP Technology Standards: http://download.intel.com/personal/gaming/367654.pdf
- JEDEC DDR3 SDRAM Standard JESD 79-3B: http://www.jedec.org/download/search/JESD79-3B.pdf
- JEDEC Specialty DDR2-1066 SDRAM Standard JESD 208: http://www.jedec.org/download/search/JESD208.pdf
- JEDEC DDR2 SDRAM Standard JESD 79-2E: http://www.jedec.org/download/search/JESD79-2E.pdf