
ECS 401 COMPUTER ORGANIZATION UNIT - I

Computer Buses
A bus is a common electrical pathway between multiple devices.
- Can be internal to the CPU, to transport data to and from the ALU
- Can be external to the CPU, to connect it to memory or to I/O devices

BUSES

Early PCs had a single external bus or system bus. Modern PCs have a special-purpose bus between the CPU and memory and (at least) one other bus for the I/O devices.

The System Bus

Physical Implementations
- Parallel lines on circuit boards (ISA or PCI)
- Ribbon cables (IDE)
- Strip connectors on motherboards (PC104)
- External cabling (USB or FireWire)

Buses: Common Characteristics

- Multiple devices communicating over a single set of wires
- Only one device can talk at a time or the message is garbled
- Each line or wire of a bus can at any one time contain a single binary digit; over time, however, a sequence of binary digits may be transferred
- These lines may, and often do, send information in parallel
- A computer system may contain a number of different buses



Bus Structure

Bus lines (parallel):
- Data
- Address
- Control
- Power

Data Bus
- Carries data; remember that there is no difference between data and instructions at this level
- Width is a key determinant of performance: 8, 16, 32, or 64 bits

Bus lines (serial):
- Data, address, and control are sent sequentially down a single wire
- There may be additional control lines
- Power

Address Bus
- Identifies the source or destination of data, e.g. the CPU needs to read an instruction (data) from a given location in memory
- Bus width determines the maximum memory capacity of the system, e.g. the 8080 has a 16-bit address bus, giving a 64K address space

Control Bus
Carries control and timing information:
- Memory read/write signal
- I/O read/write signal
- Transfer ACK
- Bus request
- Bus grant
- Interrupt request
- Interrupt acknowledge
- Clock signals
- Reset


Bus Interconnection Scheme

Operation: Sending Data
- Obtain the use of the bus
- Transfer the data via the bus
- Possible acknowledgement



Operation: Requesting Data
- Obtain the use of the bus
- Transfer the data request via the bus
- Wait for the other module to send the data
- Possible acknowledgement

Single Bus Problems

Lots of devices on one bus leads to:
- Physically long buses
  - Propagation delays
  - Long data paths mean that coordination of bus use can adversely affect performance
  - Reflections/termination problems
- Aggregate data transfer approaches bus capacity
- Slower devices dictate the maximum bus speed

Multiple Buses
- Most systems use multiple buses to overcome these problems
- Requires a bridge to buffer (FIFO) data due to differences in bus speeds (a minimal FIFO sketch follows below)
- Sometimes I/O devices also contain buffering (FIFO)
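To make the bridging idea above concrete, here is a minimal sketch of the kind of FIFO a bridge could use to absorb speed differences between two buses. The fixed depth of 8, the int payload, and the function names are illustrative assumptions, not a description of any real bridge chip.

/* Illustrative FIFO as a bus bridge might use between a fast and a slow bus. */
#include <stdio.h>
#include <stdbool.h>

#define FIFO_DEPTH 8

typedef struct {
    int data[FIFO_DEPTH];
    int head, tail, count;
} fifo_t;

static bool fifo_push(fifo_t *f, int v) {
    if (f->count == FIFO_DEPTH) return false;     /* full: bridge must stall */
    f->data[f->tail] = v;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return true;
}

static bool fifo_pop(fifo_t *f, int *v) {
    if (f->count == 0) return false;              /* nothing buffered */
    *v = f->data[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return true;
}

int main(void) {
    fifo_t bridge = {0};
    /* Fast bus side writes a burst... */
    for (int i = 0; i < 4; i++) fifo_push(&bridge, 0x100 + i);
    /* ...slow bus side drains it later, at its own pace. */
    int v;
    while (fifo_pop(&bridge, &v)) printf("forwarded 0x%X\n", (unsigned)v);
    return 0;
}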

Multiple Buses: Benefits

- Isolate processor-to-memory traffic from I/O traffic
- Support a wider variety of interfaces
- The processor has a bus that connects as a direct interface to the chip; an expansion bus interface then connects it to external devices (ISA)
- The cache (if it exists) may act as the interface to the system bus

Computer Buses
Some devices that attach to a bus are active and can initiate bus transfers. They are called masters. Some devices are passive and wait for requests. They are called slaves. Some devices may act as slaves at some times and masters at others. Memory can never be a master device.

Traditional (ISA) - (with cache)





High Performance Bus

Elements of Bus Design


- Type: dedicated or multiplexed
- Method of arbitration: centralized or distributed
- Timing: synchronous or asynchronous
- Bus width: address, data
- Data transfer types: read, write, read-modify-write, read-after-write, block

Bus Types: Dedicated vs. Time Multiplexed

Dedicated
- Separate data and address lines

Time multiplexed
- Shared lines, with an address-valid or data-valid control line
- Advantage: fewer lines
- Disadvantages: more complex control; degradation of performance

Bus Width
Bus width refers to the data and address bus widths. System performance improves with a wider data bus, as we can move more bytes in parallel, and we increase the addressing capacity of the system by adding more address lines. In short, the wider the bus, the better the data transfer rate or the wider the addressable memory space. The address bus determines the system's memory addressing capacity: a system with n address lines can directly address 2^n memory words; in byte-addressable memories, that means 2^n bytes (a worked example follows).
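As a quick check of the 2^n rule above, the following sketch prints the addressable capacity for a few sample address-bus widths, assuming a byte-addressable memory; the particular widths listed are just samples. Note that 16 lines give the 64K space quoted earlier for the 8080.

/* Worked example: addressable memory for n address lines (byte-addressable). */
#include <stdio.h>

int main(void) {
    int widths[] = { 16, 20, 24, 32 };
    for (int i = 0; i < 4; i++) {
        int n = widths[i];
        unsigned long long bytes = 1ULL << n;   /* 2^n addressable bytes */
        printf("%2d address lines -> %llu bytes (%llu KB)\n",
               n, bytes, bytes >> 10);
    }
    return 0;
}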

Bus Timing
- Co-ordination of events on the bus
- Synchronous: a bus clock provides synchronization of all bus operations
- Asynchronous: no common bus clock signal is used; instead, these buses use handshaking, via additional synchronization signals, to complete an operation

Synchronous Bus Timing

- Events determined by clock signals
- Control bus includes a clock line
- A single 1-0 cycle is a bus cycle
- All devices can read the clock line
- Usually synchronized on the leading/rising edge
- Usually a single cycle for an event
- Analogy: an orchestra conductor with a baton
- Usually stricter in terms of its timing requirements



Synchronous Bus Timing: Memory Read Operation

Synchronous Bus Timing: Memory Write Operation

Asynchronous Timing
- Devices must have certain tolerances to provide responses to signal stimuli
- More flexible, allowing slower devices to communicate on the same bus with faster devices
- Performance of faster devices, however, is limited to the speed of the bus

Asynchronous Bus Operation

In asynchronous buses, there is no clock signal. Instead, they use four-way handshaking to perform a bus transaction. This handshaking is facilitated by two synchronization signals: master synchronization (MSYN) and slave synchronization (SSYN). We can summarize the operation as follows (a code sketch of the handshake appears after this list):

1. Typically, the master places all the required data to initiate a bus transaction and asserts the master synchronization signal MSYN.
2. Asserting MSYN indicates that the slave can receive the data and initiate the necessary actions on its part. When the slave is ready with its reply, it asserts SSYN.
3. The master receives the reply and then removes the MSYN signal to indicate receipt. For example, in a memory read transaction, the CPU reads the data supplied by the memory.
4. Finally, in response to the master deasserting MSYN, the slave removes its own synchronization signal SSYN to terminate the bus transaction.
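The four steps above can be mirrored in a small, purely illustrative simulation. The flag names follow MSYN/SSYN from the text, while the data values and the single-threaded step ordering are assumptions made only for this sketch; real hardware runs master and slave concurrently.

/* Single-threaded sketch of the four-way MSYN/SSYN handshake (memory read). */
#include <stdio.h>
#include <stdbool.h>

static bool msyn = false, ssyn = false;  /* synchronization lines */
static int  bus_data = 0;                /* shared data/address lines */

int main(void) {
    /* 1. Master places address/command on the bus and asserts MSYN. */
    bus_data = 0x1234;          /* e.g. address of the word to read (assumed) */
    msyn = true;
    printf("master: MSYN asserted, request on bus\n");

    /* 2. Slave sees MSYN, performs the operation, asserts SSYN. */
    if (msyn) {
        bus_data = 0xBEEF;      /* slave drives the requested data (assumed) */
        ssyn = true;
        printf("slave : SSYN asserted, data on bus\n");
    }

    /* 3. Master latches the data and deasserts MSYN. */
    if (ssyn) {
        printf("master: read 0x%X, dropping MSYN\n", (unsigned)bus_data);
        msyn = false;
    }

    /* 4. Slave sees MSYN drop and deasserts SSYN, ending the transaction. */
    if (!msyn) {
        ssyn = false;
        printf("slave : SSYN dropped, transaction complete\n");
    }
    return 0;
}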

Asynchronous Timing - Read

The master places the address and command information on the bus. Then it indicates to all devices that it has done so by activating the Master-ready line. This causes all devices on the bus to decode the address. The selected slave performs the required operation and informs the processor it has done so by activating the Slave-ready line. The master waits for Slave-ready to become asserted before it removes its signals from the bus. In the case of a read operation, it also strobes the data into its input buffer.

Asynchronous Timing - Write

In this case, the master places the output data on the data lines at the same time that it transmits the address and command information. The selected slave strobes the data into its output buffer when it receives the Master-ready signal and indicates that it has done so by setting the Slave-ready signal to 1. The remainder of the cycle is identical to the input operation.



Synchronous or Asynchronous?
Asynchronous buses allow more flexibility in timing. In synchronous buses, all timing must be a multiple of the bus clock; for example, if memory requires slightly more time than the default amount, we have to add a complete bus cycle. The main advantage of asynchronous buses is that they eliminate this dependence on the bus clock. However, synchronous buses are easier to implement, as they do not use handshaking.

Bus Arbitration
I/O chips have to become bus master to read and write memory and to cause interrupts. If two or more devices want to become bus master at the same time, a bus arbitration mechanism is needed. Arbitration mechanisms can be centralized or decentralized.

Static vs. Dynamic Arbitration

In static bus arbitration, bus allocation among the masters is done in a predetermined way. For example, we might use a round-robin allocation that rotates the bus among the masters. The main advantage of a static mechanism is that it is easy to implement. However, since bus allocation follows a predetermined pattern rather than the actual need, a master may be given the bus even if it does not need it. This kind of allocation leads to inefficient use of the bus.

In dynamic bus arbitration, bus allocation is done in response to a request from a bus master. To implement dynamic arbitration, each master needs bus request and bus grant lines. A bus master uses the bus request line to let others know that it needs the bus to perform a bus transaction. Before it can initiate the bus transaction, it must receive permission to use the bus via the bus grant line. Dynamic arbitration consists of bus allocation and release policies.
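A minimal sketch of the static round-robin idea mentioned above, assuming four masters: the bus is simply rotated among them every slot, whether or not the current owner has anything to transfer, which is exactly the inefficiency noted in the text.

/* Static round-robin bus allocation over four assumed masters. */
#include <stdio.h>

#define NUM_MASTERS 4

int main(void) {
    int owner = 0;
    for (int slot = 0; slot < 8; slot++) {
        printf("slot %d: bus granted to master %d\n", slot, owner);
        owner = (owner + 1) % NUM_MASTERS;  /* rotate, even if the owner is idle */
    }
    return 0;
}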

Centralized vs. Decentralized Arbitration

In the centralized scheme, a central arbiter receives bus requests from all masters. The arbiter, using the bus allocation policy in effect, determines which bus request should be granted. This decision is conveyed through the bus grant lines. Once the transaction is over, the master holding the bus releases the bus; the release policy determines the actual release mechanism.

In the distributed implementation, arbitration hardware is distributed among the masters. A distributed algorithm is used to determine the master that should get the bus.

Bus Arbitration

Bus Allocation Policies

Fixed Priority Policies
- Each master is assigned a unique fixed priority. When multiple masters request the bus, the highest-priority master gets to use the bus (a minimal sketch follows this list).

Rotating Priority Policies
- The priority of a master is not fixed. For example, the priority of a master can be a function of the time it has been waiting for the bus: the longer a master waits, the higher its priority. This does not allow starvation.

Fair Policies
- Some examples of fairness are: all bus requests in a predefined window must be satisfied before granting requests from the next window; a bus request should not be pending for more than M milliseconds.

Hybrid Policies
- A combination of priority and fairness; also called combined policies (e.g. the PCI bus).
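As an illustration of the fixed-priority policy above, the sketch below scans an assumed 8-bit request mask and grants the bus to the lowest-numbered (highest-priority) requester. The bit-per-master encoding and the 8-master count are assumptions made for the sketch.

/* Fixed-priority grant decision over an assumed 8-bit request mask. */
#include <stdio.h>
#include <stdint.h>

/* Return the highest-priority requesting master, or -1 if none. */
static int fixed_priority_grant(uint8_t requests) {
    for (int m = 0; m < 8; m++)
        if (requests & (1u << m))
            return m;          /* lowest-numbered requester wins */
    return -1;
}

int main(void) {
    uint8_t requests = 0x2C;   /* masters 2, 3 and 5 are requesting */
    printf("grant -> master %d\n", fixed_priority_grant(requests));
    return 0;
}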



Bus Release Policies

Non-Preemptive
- In these policies, the current bus master voluntarily releases the bus.
- Transaction-Based Release: a bus master holding the bus releases it when its current transaction is finished.
- Demand-Based Release: the current master releases the bus only if there is a request from another bus master; otherwise, it continues to use the bus. Typically, this check is done at the completion of each transaction.
- A potential disadvantage of the non-preemptive policies is that a bus master may hold the bus for a long time, depending on the transaction type. For example, long block transfers can hold the bus for extended periods of time.

Preemptive
- Preemptive policies force the current master to release the bus without completing its current bus transaction.

Centralized Bus Arbitration: Daisy Chaining

Daisy Chain Method

- When the central arbiter receives a bus request, it sends out a bus grant signal to the first master in the chain.
- The bus grant signals are chained through the masters.
- Each master can pass the incoming bus grant signal to its neighbor in the chain if it does not want to use the bus.
- If a master wants to use the bus, it grabs the bus grant signal and will not pass it on to its neighbor. This master can then use the bus for its bus transaction.
- Bus release is done by the release policy in effect.
- Daisy chaining is simple to implement and requires only three control lines, independent of the number of masters. (A sketch of the grant propagation appears after this list.)
- Disadvantages:
  - It implements a fixed priority policy
  - The bus arbitration time varies and is proportional to the number of masters
  - This scheme is not fault tolerant
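A minimal sketch of the grant propagation described above, assuming four masters in a chain; the want_bus array and the chain length are illustrative assumptions only.

/* Daisy-chained grant propagation over four assumed masters. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_MASTERS 4

int main(void) {
    bool want_bus[NUM_MASTERS] = { false, false, true, true };
    bool grant_in = true;              /* arbiter asserts the bus grant */
    int  winner = -1;

    for (int m = 0; m < NUM_MASTERS && grant_in; m++) {
        if (want_bus[m]) {
            winner = m;                /* grab the grant, stop the chain */
            grant_in = false;
        }
        /* otherwise the grant simply propagates to master m+1 */
    }

    if (winner >= 0)
        printf("master %d becomes bus master\n", winner);
    else
        printf("grant fell off the end of the chain (no requester)\n");
    return 0;
}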

Centralized Arbitration - Polling

Polling
In response to the bus request from one or more devices, the controller polls them (in a predesignated priority order), selects the highest-priority device among them, and grants the bus to it. Only one bus grant line is used, but only the selected device will be activated as bus master (i.e., it accepts the bus grant); all the other devices will ignore it.

Centralized Arbitration: Independent Request



Independent Requests

The arbiter is connected to each master by separate bus request and bus grant lines. When a master wants the bus, it sends its request through its own bus request line. Once the arbiter receives the bus requests from the masters, it uses the allocation policy to determine which master should get the bus next. Since the bus requests are received on separate lines, the arbiter can implement a variety of allocation policies: a rotating priority policy, a fair policy, or even a hybrid policy. It provides short, constant arbitration times and allows flexible priority assignment so that fairness can be ensured. In addition, it provides good fault tolerance: if a master fails, the arbiter can ignore it and continue to serve the other masters. The drawback is that this implementation is complex, and the number of control signals is proportional to the number of masters.
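To illustrate the flexible priority assignment that independent request lines make possible, here is a hedged sketch of a rotating-priority arbiter: the search for the next grant starts just past the last master that held the bus, so no master can be starved. The 8-bit request mask and the last_grant bookkeeping are assumptions of the sketch, not part of any particular bus standard.

/* Rotating-priority arbiter over an assumed 8-bit request mask. */
#include <stdio.h>
#include <stdint.h>

static int last_grant = 7;        /* so master 0 is checked first */

static int rotating_grant(uint8_t requests) {
    for (int i = 1; i <= 8; i++) {
        int m = (last_grant + i) % 8;
        if (requests & (1u << m)) {
            last_grant = m;       /* remember winner for next round */
            return m;
        }
    }
    return -1;                    /* no requests pending */
}

int main(void) {
    uint8_t requests = 0x28;      /* masters 3 and 5 requesting (assumed still pending) */
    printf("first grant  -> master %d\n", rotating_grant(requests));
    printf("second grant -> master %d\n", rotating_grant(requests));
    return 0;
}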

Decentralized Arbitration

Decentralized bus arbitration is also possible. A computer could have 16 prioritized bus request lines. When a device wants to use the bus, it asserts its request line. All devices monitor all request lines, so at the end of each bus cycle, each device knows whether it was the highest-priority requester. This method avoids the necessity of an arbiter, but requires more bus lines. Another decentralized scheme, equivalent to the daisy chain arbitration minus the arbiter, is shown on the following slide.
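A minimal sketch of the decentralized scheme above, assuming 16 prioritized request lines with line 0 the highest priority; every device runs the same check on the lines it observes and independently concludes whether it won the bus this cycle.

/* Each device decides on its own whether it is the highest-priority requester. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Run inside each device: am I the winner this bus cycle? */
static bool i_win(int my_line, uint16_t request_lines) {
    if (!(request_lines & (1u << my_line)))
        return false;                         /* I am not requesting */
    /* Any request on a lower-numbered (higher priority) line beats me. */
    return (request_lines & ((1u << my_line) - 1)) == 0;
}

int main(void) {
    uint16_t lines = (1u << 2) | (1u << 9);   /* devices 2 and 9 request (assumed) */
    for (int d = 0; d < 16; d++)
        if (i_win(d, lines))
            printf("device %d takes the bus\n", d);
    return 0;
}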

Bus Arbitration

Data Transfer Types


