
Hazard Algebras*

J. Brzozowski
Dept. of Computer Science
University of Waterloo, Canada
brzozo@uwaterloo.ca

Z. Ésik
Dept. of Computer Science
University of Szeged, Hungary
esik@inf.u-szeged.hu

December 3, 2001

* This research was supported by Grant No. OGP0000871 from the Natural Sciences and Engineering Research Council of Canada, and by Grant No. T30511 from the National Foundation of Hungary for Scientific Research. An extended abstract of this paper was presented at the conference Half Century of Automata Theory, London, ON, July 26, 2000 [6].

Abstract

We introduce algebras capable of representing, detecting, identifying, and counting static and dynamic hazard pulses that can occur in the worst case on any wire in a gate circuit. These algebras also permit us to count the worst-case number of signal changes on any wire. This is of interest to logic designers for two reasons: each signal change consumes energy, and unnecessary multiple signal changes slow down the circuit operation. We describe efficient circuit simulation algorithms based on our algebras and illustrate them by several examples. Our method generalizes Eichelberger's ternary simulation and several other algebras designed for hazard detection.
1  Introduction

The problem of hazards, i.e., unwanted short pulses on the outputs of gates in logic circuits, is of great importance. In an asynchronous circuit a hazard pulse may cause an error in the circuit operation. Synchronous circuits are protected from such errors, since all actions are controlled by a common clock, and all combinational circuits stabilize before the clock pulse arrives. However, an unwanted change in a signal increases the energy consumption in the circuit. From the energy point of view it is necessary not only to detect the presence of unwanted signal changes, but also to count them, in order to obtain an estimate of the energy consumption. Such unwanted changes also add to the computation time. In this paper we address the problem of counting hazards and signal changes in gate circuits.
One of the earliest simulation methods for hazard detection is ternary simulation. For the detection of hazards, ternary algebra has been used since 1948 [14]. Ternary simulation was then used by many authors; see, for example, [4] for a list of early references on this subject, and also [3] for a detailed discussion of ternary methods. A two-pass ternary simulation method was introduced by Eichelberger in 1965 [11], and later studied by others [3]. Ternary simulation is capable of detecting static hazards and oscillations, but does not detect dynamic hazards. A quinary algebra was proposed by Lewis in 1972 for the detection of dynamic hazards. A survey of various simulation algebras for hazard detection was given in 1986 by Hayes [16]. We show that several of these algebras are special cases of the hazard algebra presented here.
The remainder of the paper is structured as follows. In Section 2 we define our model of gate circuits, describe the binary analysis method, and define hazards. In Section 3 we discuss our representation of transients in gate circuits. Using this representation, in Section 4 we define Algebra C, capable of counting an arbitrary number of signal changes on any wire in a gate circuit. To make the algebra more applicable, in Section 5 we modify it to Algebra C_k, where k is any positive integer; such an algebra is capable of counting and identifying up to k − 1 signal changes. In Section 6 we describe circuit simulation algorithms based on Algebras C and C_k, and we illustrate our method by several examples. In Section 7 we extend our definitions to arbitrary Boolean functions. Complexity issues are then treated in Section 8. In Section 9 we discuss simulation with initial, middle, and final values, and Section 10 concludes the paper. Several additional results about Algebra C and three proofs related to Section 7 are given in the appendix.

Characterizations of the simulation results are briefly mentioned at the end of Section 6, and are treated further in [13].

2  Static and Dynamic Hazards in Logic Circuits

We use ∨, ∧, and ¯ for the Boolean or, and, and not operations, respectively.
Figure 1: Circuit with hazards.

Figure 1 shows a gate circuit that we will use several times to illustrate various concepts. We assume in this example that gates have delays, but wire delays are negligible. Hence, the internal state of the circuit is represented by the state of the four gate outputs s1, s2, s3, and s4. If the input X is 0 and the internal state is (s1, s2, s3, s4) = (1, 0, 0, 0), each gate is stable, since the value of its output agrees with the value of the function computed by the gate. Thus, the inverter is stable, because s1 = 1 = X̄, the top and gate is stable, because s3 = 0 = 0 ∧ 1 = X ∧ s1, etc.
Now suppose that the input changes to X = 1 and is held constant at that value. It is clear that eventually s1 becomes 0; consequently s3 becomes 0. Also, since s2 becomes 1, s4 becomes 1 as well. Thus we know that the final state of the circuit is the stable state (s1, s2, s3, s4) = (0, 1, 0, 1). Assume that the specification of this circuit requires that, when the initial state is (1, 0, 0, 0) and the input changes from 0 to 1, s1 should change once from 1 to 0, s2 and s4 should change once from 0 to 1, and s3 should not change. Does the circuit satisfy this specification?

If we take a closer look at the analysis above, we discover that the behavior of a circuit depends very much on the delays of its components. In the theory of asynchronous circuits, it is usually assumed that the sizes of these delays are not known [3, 18, 19, 21]. Therefore, the analysis considers all possible relative sizes of the delays. In our example, the initial state is (1, 0, 0, 0). After the input change, s1, s2, and s3 are all unstable. If the delay of s1 is smaller than those of s2 and s3, the next state is (0, 0, 0, 0). If the delay of s2 is the smallest of the three, then the state becomes (1, 1, 0, 0). If the delay of s1 is equal to that of s2 and smaller than that of s3, state (0, 1, 0, 0) is reached. If all three delays are equal, the state becomes (0, 1, 1, 0), etc. Altogether, there are seven possible successor states, as shown in Figure 2, where we have deleted the parentheses and commas from the state tuples, for simplicity. Thus, 0100 represents (0, 1, 0, 0), etc.
Figure 2: Analysis of circuit with hazards.

The binary analysis method described above is quite old (see, for example, the work of Muller [18, 19]), and the details are outside the scope of the present paper. We refer the reader to [3] for a formal treatment of this topic. Here we give only an example illustrating the main ideas. The general analysis step consists of finding all possible successors of a given state s = (s1, …, sn) in a circuit with n (internal) state variables. Any state obtained from s by changing any nonzero number of unstable variables is a possible successor. In this way we obtain a directed graph showing all the possible paths from the initial state. The graph is finite, since the total number of states is finite. Part of the behavior graph of our example circuit is shown in Fig. 2.
If we continue the analysis for our example, we find that one possible path is

  p = 1000, 0000, 0100, 0101,

where the last state (0101) is stable. Along p, variable s1 changes once from 1 to 0, variables s2 and s4 change once from 0 to 1, and s3 does not change. This is in agreement with the specification. This path occurs when the initial "race" among the first three variables is "won" by s1, that is, when the delay of s1 is the shortest of the three.
The Boolean function computed by a gate is called its "excitation." Our model uses so-called "inertial" delays [3, 21], in which short pulses in the excitation are ignored. For example, when the input is X = 1 and the internal state is 1000, variable s3 is unstable because both of its inputs are 1. Variables s1 and s2 are also unstable. If s1 wins this race, state 0000 is reached, and now s3 is no longer unstable. By assumption, the delay of s3 is longer than the period of time during which s3 had two 1s on its inputs; this period is the same as the delay of the fastest gate, s1. Thus the short pulse in the excitation has been ignored. This is in contrast to "pure" or "ideal" delay models [3, 21], in which every change in the excitation causes a corresponding change in the output.
Another possible path in Fig. 2 is

  q = 1000, 1010, 1011, 0011, 0001, 0000, 0100, 0101.

Along q the inverter output s1 changes only once from 1 to 0, and variable s2 changes only once from 0 to 1. Variable s3 is 0 in both the initial and final state; thus, its value should be "static." However, there is a time during which s3 has the value 1. This represents a "hazardous" behavior, since a device having s3 as input may react to this 1-signal, although this signal is not in the specification. Such a behavior is called a "static hazard." Variable s4 has initial value 0 and final value 1; its behavior is therefore "dynamic." However, it changes from 0 to 1 twice, and this represents a "dynamic hazard."

In summary, our example circuit may or may not satisfy its specification, depending on the relative sizes of its delays.
We refer to the behavior of a circuit after an input change from a stable state as a "transition." The set of states in which a circuit can be at the end of a transition is called the "outcome" of that transition [3]. In our example, the outcome is the singleton set containing the stable state (0, 1, 0, 1). In general, the outcome may also contain states that appear in cycles, which represent oscillations. A static hazard is said to be present in a transition if there is a state variable that has the same value in all the states of the outcome as it has in the initial state, and there exists a path from the initial state to a state in the outcome along which the variable changes (necessarily an even number of times). A dynamic hazard exists if there is a variable which has some value v in the initial state and the complementary value v̄ in all the states of the outcome, but changes at least three times along some path from the initial state to a state in the outcome.

The binary analysis method above is exponential in the number of state variables. One objective of this paper is to find a more efficient method for detecting hazards. This method is described in Section 6.
3  Transients

We use waveforms to represent changing binary signals. In particular, we are interested in studying transient phenomena in circuits. For this application we consider waveforms with a constant initial value, a transient period involving a finite number of changes, and a constant final value. Waveforms of this type will be called transients.
Figure 3 gives four examples of transients. With each such transient we associate a binary word, i.e., a sequence of 0s and 1s, in a natural way. In this binary word a 0 (1) represents a maximal interval during which the signal has the value 0 (1). Such an interval is called a 0-interval (1-interval). Of course, no timing information is represented by the binary word. This is to our advantage, however, since we assume that the changes can happen at any time and that the intervals between successive changes can vary arbitrarily.
Figure 3: Transients: (a) 01010, constant 0 with static hazards; (b) 10101, constant 1 with static hazards; (c) 010101, change from 0 to 1 with dynamic hazards; (d) 101010, change from 1 to 0 with dynamic hazards.
In general, transients are of the following types:

 * If a signal is supposed to have the constant value 0, but has i ≥ 0 1-intervals, these intervals represent i static-hazard pulses. This transient is denoted by the word 0(10)^i = (01)^i 0, has 2i unwanted signal changes, and (i + 1) 0-intervals. In regular expression notation [2, 3, 20], the set of all words of this type is 0(10)* = (01)*0.

 * If a signal is supposed to have the constant value 1, but has i ≥ 0 0-intervals, these intervals represent i static-hazard pulses. This transient is denoted by the word 1(01)^i = (10)^i 1, has 2i unwanted signal changes, and (i + 1) 1-intervals. The set of all words of this type is 1(01)* = (10)*1.

 * If a signal is supposed to change from 0 to 1, but has i ≥ 0 unwanted 0-intervals after the first change, these 0-intervals represent i dynamic-hazard pulses. Such a transient is denoted by 01(01)^i = 0(10)^i 1 = (01)^i 01, has 2i unwanted signal changes, (i + 1) 1-intervals, and (i + 1) 0-intervals. The set of all words of this type is 01(01)* = 0(10)*1 = (01)*01.

 * If a signal is supposed to change from 1 to 0, but has i ≥ 0 unwanted 1-intervals after the first change, these 1-intervals represent i dynamic-hazard pulses. Such a transient is denoted by 10(10)^i = 1(01)^i 0 = (10)^i 10, has 2i unwanted signal changes, (i + 1) 1-intervals, and (i + 1) 0-intervals. The set of all words of this type is 10(10)* = 1(01)*0 = (10)*10.
For the present, we assume that our circuits are constructed with 2-input or gates, 2-input and gates, and inverters; these restrictions will be removed in Section 7. Given a transient at each input of a gate, we wish to find the longest possible transient at the output of that gate. The case of the inverter is the easiest one. If t = a1 ⋯ an is the binary word of a transient at the input of an inverter, then its output has the transient t̄ = ā1 ⋯ ān. For example, in Fig. 3, the first two transients are complementary, as are the last two.

For the or and and gates, we assume that the changes in each input signal can occur at arbitrary times. The following proposition permits us to find the largest number of changes possible at the output of an or gate.

Proposition 3.1  If the two inputs of an or gate have m and n 0-intervals respectively, then the maximum number of 0-intervals in the output signal is 0 if m = 0 or n = 0, and is m + n − 1, otherwise.

Proof: We postpone the proof until Section 7, where it is shown that this proposition is a special case of a result concerning or gates with an arbitrary number of inputs.
Example 1  Figure 4 shows waveforms of two inputs X1 and X2 and the output y of an or gate. The input transients are 010 and 1010, and the output transient is 101010. Here, the inputs have two 0-intervals each, and the output has three 0-intervals, as predicted by Proposition 3.1.

Figure 4: 0-intervals in an or gate (X1 = 010, X2 = 1010, y = 101010).

By an argument similar to that for Proposition 3.1, we obtain:

Proposition 3.2  If the two inputs of an and gate have m and n 1-intervals respectively, then the maximum number of 1-intervals in the output signal is 0 if m = 0 or n = 0, and is m + n − 1, otherwise.

These two results will be used in the next section to define operations on transients.
4  Change-Counting Algebra

Let T = 0(10)* ∪ 1(01)* ∪ 0(10)*1 ∪ 1(01)*0; this is the set of all nonempty words over 0 and 1 in which no two consecutive letters are the same. As explained above, elements of T are called transients. Note that every transient is uniquely determined by its first letter and length, by its last letter and length, by its first and last letters and the number of 0s, and by its first and last letters and the number of 1s. These characterizations may help the reader in understanding some proofs that follow.

We define the (signal) change-counting algebra C = (T, ⊕, ⊗, ⁻, 0, 1). For any t ∈ T define z(t) (z for zeros) and u(t) (u for units) to be the number of 0s in t and the number of 1s in t, respectively. Let α(t) and ω(t) be the first and last letters of t, and let l(t) denote the length of t. For example, if t = 10101, then z(t) = 2, u(t) = 3, α(t) = ω(t) = 1, and l(t) = 5.
Operations ⊕ and ⊗ are binary operations on T intended to represent the worst-case or-ing and and-ing of two transients at the inputs of a gate. They are defined as follows:

  t ⊕ 0 = 0 ⊕ t = t,   t ⊕ 1 = 1 ⊕ t = 1,

for any t ∈ T. Furthermore, if w and w′ are words in T of length > 1, their sum, denoted by w ⊕ w′, is defined as the unique word t that begins with α(w) ∨ α(w′), ends with ω(w) ∨ ω(w′), and has z(t) 0s, where z(t) = z(w) + z(w′) − 1. By Proposition 3.1, t is the longest transient that can be produced at the output of an or gate, if transients w and w′ appear at the inputs of the or gate. For example, 010 ⊕ 1010 = 101010, as illustrated in Fig. 4.

Next, define

  t ⊗ 1 = 1 ⊗ t = t,   t ⊗ 0 = 0 ⊗ t = 0,

for any t ∈ T. Consider now the product of two words w, w′ ∈ T of length > 1, and denote this product by t = w ⊗ w′. Then t is the unique word in T that begins with α(w) ∧ α(w′), ends with ω(w) ∧ ω(w′), and has u(t) = u(w) + u(w′) − 1, by Proposition 3.2. For example, 0101 ⊗ 10101 = 01010101.

The (quasi-)complement t̄ of a word t ∈ T is obtained by complementing each letter in t. For example, the complement of 1010 is 0101. Finally, the constants 0 and 1 of C are the words 0 and 1 of length 1.
In this paper we use several algebraic structures. These structures are fully defined in the paper; consequently, the paper is self-contained. For more details and background material, we refer the reader to a text on universal algebra, for example [8, 15]. Also, a recent survey of various algebraic structures used for hazard detection, and of the algebraic properties these structures satisfy, appears in [7].
A commutative bisemigroup is an algebra C = (S, ⊕, ⊗), where S is a set, and ⊕ and ⊗ are associative and commutative binary operations on S, i.e., (S, ⊕) and (S, ⊗) are both commutative semigroups with the same underlying set. Thus a commutative bisemigroup satisfies equations L1, L2, L1′, L2′ in Table 1, where the laws are listed in dual pairs. A commutative bisemigroup is de Morgan if it has two constants 0 and 1, and a unary operation ⁻ satisfying L3-L6, L3′, L4′, and L6′. All the laws of Table 1 are also satisfied by Boolean algebras, but several laws of Boolean algebras are not necessarily satisfied by de Morgan bisemigroups. Thus, de Morgan bisemigroups are generalizations of Boolean algebras.

Note that the set of laws in Table 1 is redundant. For example, one can obtain all the primed laws from L1-L6. Also, 0̄ = 1 and 1̄ = 0 hold in any commutative de Morgan bisemigroup.
Table 1: Laws of change-counting algebra.

  L1   x ⊕ y = y ⊕ x                        L1′   x ⊗ y = y ⊗ x
  L2   x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z            L2′   x ⊗ (y ⊗ z) = (x ⊗ y) ⊗ z
  L3   x ⊕ 1 = 1                            L3′   x ⊗ 0 = 0
  L4   x ⊕ 0 = x                            L4′   x ⊗ 1 = x
  L5   (x̄)‾ = x
  L6   (x ⊕ y)‾ = x̄ ⊗ ȳ                    L6′   (x ⊗ y)‾ = x̄ ⊕ ȳ

Proposition 4.1  The change-counting algebra C = (T, ⊕, ⊗, ⁻, 0, 1) is a commutative de Morgan bisemigroup, i.e., it satisfies the equations of Table 1.

Proof: Operation ⊕ is commutative by definition. Clearly, associativity holds if one of the components is 0 or 1. Suppose now that u, v, w are all of length > 1. Then, using the associativity of disjunction,

  α(u ⊕ (v ⊕ w)) = α(u) ∨ (α(v) ∨ α(w)) = (α(u) ∨ α(v)) ∨ α(w) = α((u ⊕ v) ⊕ w).

Similarly, ω(u ⊕ (v ⊕ w)) = ω((u ⊕ v) ⊕ w). Thus we have u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w if the two words have the same number of zeros. But

  z(u ⊕ (v ⊕ w)) = z(u) + z(v ⊕ w) − 1
                 = z(u) + (z(v) + z(w) − 1) − 1
                 = (z(u) + z(v) − 1) + z(w) − 1
                 = z(u ⊕ v) + z(w) − 1
                 = z((u ⊕ v) ⊕ w).

Laws L3 and L4 are obvious by the definition of ⊕, and law L5 is immediate by the definition of quasi-complementation. Also L6 is clear when one of the two words has length 1. Suppose now that x, y have length > 1. Then, using De Morgan's law for the Boolean operations,

  α((x ⊕ y)‾) = (α(x) ∨ α(y))‾ = α(x)‾ ∧ α(y)‾ = α(x̄ ⊗ ȳ).

In the same way, ω((x ⊕ y)‾) = ω(x̄ ⊗ ȳ). Since the number of 1s in (x ⊕ y)‾ is z(x) + z(y) − 1 = u(x̄) + u(ȳ) − 1, it follows that (x ⊕ y)‾ = x̄ ⊗ ȳ. Finally, laws L1′-L4′ and L6′ can be derived from L1-L6.
We note some further properties of the operations in C. For any two binary words t and t′, we denote by tt′ the word obtained by concatenating t and t′, i.e., by writing the letters of t′ after those of t. We say that t is a prefix of t′ if there exists a (possibly empty) binary word t″ such that t′ = tt″. The prefix relation is a partial order on the set of binary words, i.e., it is reflexive, antisymmetric, and transitive. We use ≤ to denote the prefix order.

The prefix order restricted to T is represented by the inequalities below together with the reflexive and transitive laws:

  0 ≤ 01 ≤ 010 ≤ 0101 ≤ 01010 ≤ ⋯ ,
  1 ≤ 10 ≤ 101 ≤ 1010 ≤ 10101 ≤ ⋯ .
Recall that a function f(x1, …, xn) is nondecreasing or monotonic with respect to a partial order ≤ if and only if

  x1 ≤ x1′, …, xn ≤ xn′ implies f(x1, …, xn) ≤ f(x1′, …, xn′).

Similarly, f is nonincreasing if

  x1 ≤ x1′, …, xn ≤ xn′ implies f(x1, …, xn) ≥ f(x1′, …, xn′).

For example, we know that 01 ⊗ 10 = 010. Since ⊗ is monotonic, as will be shown below, and since 01 ≤ 0101 and 10 ≤ 10101, we know that 010 is a prefix of the result 0101 ⊗ 10101. In fact that result is 01010101.
Proposition 4.2  The ⊕, ⊗, and ⁻ operations are monotonic with respect to the prefix order.

Proof: This proposition is a special case of Proposition 7.2, proved in Section 7.
The two transients 01 and 10 play an important role in Algebra C, since they represent signal changes from 0 to 1 and from 1 to 0, respectively. Suppose that an arbitrary transient t occurs at one input of a gate, and a single change (01 or 10) occurs at the other input. The next lemma completely characterizes the output of the or and and gates for these inputs.

Lemma 4.3  For all words t ∈ T:

1. If α(t) = 0, then t ⊕ 10 = 1t = t̄ω(t); otherwise, t ⊕ 10 = t.

2. If α(t) = 1, then t ⊗ 01 = 0t = t̄ω(t); otherwise, t ⊗ 01 = t.

3. If ω(t) = 0, then t ⊕ 01 = t1 = α(t)t̄; otherwise, t ⊕ 01 = t.

4. If ω(t) = 1, then t ⊗ 10 = t0 = α(t)t̄; otherwise, t ⊗ 10 = t.

5. It follows from the observations above that either t or t̄ is a prefix of t ⊕ 10, t ⊗ 01, t ⊕ 01, and t ⊗ 10.

Proof: First, consider the case where t begins and ends with a 0, i.e., has the form t = (01)^i 0, for some i ≥ 0. Adding 10 to t produces a word w that begins with 1, ends with 0, and has z(w) = z(t) + z(10) − 1 = z(t), by Proposition 3.1. Therefore, w must be the word w = 1t = 1(01)^i 0 = (10)^i 10 = t̄0 = t̄ω(t). Similarly, if t ends with 1, then t = (01)^i, for some i ≥ 1, and w must be w = 1t = 1(01)^i = (10)^i 1 = t̄ω(t). If t begins with a 1 and ends with a 0, then adding 10 results in t, since the first letter, last letter, and number of 0s in the output w are the same as they are in t. This proves the first claim; the remaining three cases follow by similar arguments. The last claim holds, since we have examined all the possible cases.
The next lemma establishes the fact that for any t, s ∈ T, t or t̄ is always a prefix of t ⊕ s, if s ≠ 1, and t or t̄ is always a prefix of t ⊗ s, if s ≠ 0. This lemma has an important corollary below.

Lemma 4.4  Suppose that t, s ∈ T.

1. If α(s) = 0, then t ≤ t ⊕ s.
2. If α(s) = 1, then t ≤ t ⊗ s.
3. If α(s) = 1 and s ≠ 1, then t ≤ t ⊕ s or t̄ ≤ t ⊕ s.
4. If α(s) = 0 and s ≠ 0, then t ≤ t ⊗ s or t̄ ≤ t ⊗ s.

Proof: If α(s) = 0, then 0 is a prefix of s, i.e., 0 ≤ s. Because ⊕ is monotonic with respect to the prefix order, we have t = t ⊕ 0 ≤ t ⊕ s. This proves the first claim, and the second claim follows similarly.

For the third claim, assume that α(s) = 1 and s ≠ 1. Then s must have at least two letters, and begins with 10, i.e., 10 ≤ s. By monotonicity of ⊕, we have t ⊕ 10 ≤ t ⊕ s. By Lemma 4.3, either t ≤ t ⊕ 10 or t̄ ≤ t ⊕ 10; hence, by transitivity, t ≤ t ⊕ s or t̄ ≤ t ⊕ s. Thus the claim holds. The last claim follows similarly.
The next result shows that the length of a word t cannot be decreased by adding another word s ≠ 1, or by multiplying it by another word s ≠ 0.

Corollary 4.5  Suppose that t, s ∈ T.

1. If s ≠ 1, then l(t) ≤ l(t ⊕ s).
2. If s ≠ 0, then l(t) ≤ l(t ⊗ s).

Proof: If s ≠ 1, then either t ≤ t ⊕ s or t̄ ≤ t ⊕ s, by Lemma 4.4. Hence, l(t) ≤ l(t ⊕ s). The second claim follows by duality.

For any word w, let w^(−1) denote the mirror image of w. It is clear that w ∈ T iff w^(−1) ∈ T. Moreover,

  (t ⊕ s)^(−1) = t^(−1) ⊕ s^(−1),
  (t ⊗ s)^(−1) = t^(−1) ⊗ s^(−1),
  (t̄)^(−1) = (t^(−1))‾,

for all t, s ∈ T. Since also (t^(−1))^(−1) = t, we have:

Proposition 4.6  The function w ↦ w^(−1), w ∈ T, defines an automorphism of C, i.e., a one-to-one and onto mapping T → T which preserves the operations ⊕, ⊗, ⁻, and the constants 0 and 1.

The suffix order on T, which we denote by ⊑, is represented by the inequalities below together with the reflexive and transitive laws:

  0 ⊑ 10 ⊑ 010 ⊑ 1010 ⊑ 01010 ⊑ ⋯ ,
  1 ⊑ 01 ⊑ 101 ⊑ 0101 ⊑ 10101 ⊑ ⋯ .

It follows from Proposition 4.6 that the operations ⊕ and ⊗ also preserve the suffix order.

Some additional properties of the algebra C are given in the appendix.
5  Counting Changes to a Threshold

Since the underlying set T of Algebra C is infinite, an arbitrary number of changes can be counted. An alternative is to count only up to some threshold k − 1, k ≥ 1, and to consider all transients with length k or more as equivalent.
Recall that a congruence relation of an algebra is an equivalence relation on the underlying set of the algebra preserved by the operations. Since the ⊕ operation in Algebra C is commutative, and by De Morgan's law L6, it follows that an equivalence relation ∼ on T is a congruence relation of C if for all transients t, s, w ∈ T with t ∼ s we have that (w ⊕ t) ∼ (w ⊕ s) and t̄ ∼ s̄. When ∼ is a congruence relation of C, there is a unique algebra C/∼ = (T/∼, ⊕, ⊗, ⁻, 0, 1), defined on the quotient set T/∼ of equivalence classes of T with respect to ∼, such that the map taking a transient to its equivalence class is a homomorphism, i.e., such that it preserves the operations and constants. For any equivalence classes [s] and [t] containing the transients s and t, respectively, we have in the quotient algebra C/∼ that [s] ⊕ [t] = [s ⊕ t], [s] ⊗ [t] = [s ⊗ t], and [t]‾ = [t̄]. Moreover, the constants 0, 1 in C/∼ are the congruence classes [0] and [1]. The fact that the operations are well-defined is a consequence of the congruence property of ∼. It is known (see, e.g., [15, 8]) that C/∼ satisfies any equation that holds in C. Thus, when ∼ is a congruence relation, C/∼ is a commutative de Morgan bisemigroup.

Suppose that k ≥ 1. Relation ∼_k is defined on the set T of transients as follows: for t, s ∈ T, t ∼_k s if either t = s or t and s are both of length ≥ k.

Proposition 5.1  Relation ∼_k is a congruence relation on C.

Proof: Relation ∼_k is clearly an equivalence relation. Suppose now that t ∼_k s. It is clear that t̄ ∼_k s̄. We argue by induction on the length of t to show that (w ⊕ t) ∼_k (w ⊕ s), for all transients w. When the length of t is less than k we have t = s, and our claim is obvious. When t is of length ≥ k then, by Corollary 4.5, w ⊕ t and w ⊕ s are both of length ≥ k, so that (w ⊕ t) ∼_k (w ⊕ s).
The equivalence classes of the quotient algebra C_k = C/∼_k are of two types. Each transient t with l(t) < k is in a class by itself, and all the words of length ≥ k constitute a class, which is denoted by Φ. We denote by T_k this set of equivalence classes, and we denote by t the equivalence class consisting of the singleton t. Thus T_k = {t | l(t) < k} ∪ {Φ}. The operations on equivalence classes are as follows. The complement of the class containing t is the class containing t̄. The sum (product) of the class containing t and the class containing t′ is the class containing t ⊕ t′ (t ⊗ t′). Thus, the quotient algebra C_k is a commutative de Morgan bisemigroup with 2k − 1 elements.
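A minimal sketch of C_k, assuming the helper functions (oplus, otimes, cmpl, _alt) from the earlier sketch; the class Φ is represented here by the string "PHI" and, whenever an operand is needed, by an arbitrary representative of length k (any representative yields the same class, since ∼_k is a congruence):

    PHI = "PHI"

    def cap(t, k):
        # Map a transient of algebra C to its class in C_k.
        return PHI if len(t) >= k else t

    def rep(x, k):
        # A representative transient of the class x.
        return _alt("0", k) if x == PHI else x

    def oplus_k(x, y, k):
        return cap(oplus(rep(x, k), rep(y, k)), k)

    def otimes_k(x, y, k):
        return cap(otimes(rep(x, k), rep(y, k)), k)

    def cmpl_k(x):
        return PHI if x == PHI else cmpl(x)

    # For k = 2 this yields the ternary algebra of Table 2, e.g.:
    assert oplus_k(PHI, "1", 2) == "1" and otimes_k(PHI, "0", 2) == "0"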

Table 2: or and and operations for k = 2.

  ⊕ | 0  Φ  1        ⊗ | 0  Φ  1
  --+---------       --+---------
  0 | 0  Φ  1        0 | 0  0  0
  Φ | Φ  Φ  1        Φ | 0  Φ  Φ
  1 | 1  1  1        1 | 0  Φ  1

Table 3: Ternary laws.

  L7    x ⊕ x = x                                  L7′    x ⊗ x = x
  L8    x ⊕ (x ⊗ y) = x                            L8′    x ⊗ (x ⊕ y) = x
  L9    x ⊕ (y ⊗ z) = (x ⊕ y) ⊗ (x ⊕ z)            L9′    x ⊗ (y ⊕ z) = (x ⊗ y) ⊕ (x ⊗ z)
  L10   Φ̄ = Φ
  L11   (x ⊕ x̄) ⊕ Φ = x ⊕ x̄                       L11′   (x ⊗ x̄) ⊗ Φ = x ⊗ x̄
  L12   (x ⊕ x̄) ⊕ (y ⊗ ȳ) = x ⊕ x̄                L12′   (x ⊗ x̄) ⊗ (y ⊕ ȳ) = x ⊗ x̄

Example 2  For k = 2, the ⊕ and ⊗ operations are shown in Table 2. These are the operations of the well-known 3-element ternary algebra [3]. In addition to laws L1-L6 and their duals, ternary algebra also satisfies the laws of Table 3. Ternary algebras are closely related to Boolean algebras. Ternary algebras obey L1-L9 and their duals, which all hold in Boolean algebras. Notably absent from ternary algebras are the laws for complements:

  x ⊕ x̄ = 1,   x ⊗ x̄ = 0.

Laws L10-L12 and their duals do not hold in Boolean algebras.

Example 3  The or and and operations for the case k = 3 are shown in Table 4. This is the quinary algebra introduced in 1972 by Lewis [17], and studied also in [5, 9, 16]. Note that both binary operations are idempotent, i.e., laws L7 and L7′ hold; hence, we have a bisemilattice [5].

Table 4: or and and operations for k = 3.

  ⊕  | 0   01  Φ   10  1        ⊗  | 0   01  Φ   10  1
  ---+---------------------     ---+---------------------
  0  | 0   01  Φ   10  1        0  | 0   0   0   0   0
  01 | 01  01  Φ   Φ   1        01 | 0   01  Φ   Φ   01
  Φ  | Φ   Φ   Φ   Φ   1        Φ  | 0   Φ   Φ   Φ   Φ
  10 | 10  Φ   Φ   10  1        10 | 0   Φ   Φ   10  10
  1  | 1   1   1   1   1        1  | 0   01  Φ   10  1

Example 4  For k = 4, the or and and operations are shown in Table 5. In this case, the binary operations are no longer idempotent.

Table 5: or and and operations for k = 4.

  ⊕   | 0    01   010  Φ    101  10   1
  ----+-----------------------------------
  0   | 0    01   010  Φ    101  10   1
  01  | 01   01   Φ    Φ    101  101  1
  010 | 010  Φ    Φ    Φ    Φ    Φ    1
  Φ   | Φ    Φ    Φ    Φ    Φ    Φ    1
  101 | 101  101  Φ    Φ    101  101  1
  10  | 10   101  Φ    Φ    101  10   1
  1   | 1    1    1    1    1    1    1

  ⊗   | 0    01   010  Φ    101  10   1
  ----+-----------------------------------
  0   | 0    0    0    0    0    0    0
  01  | 0    01   010  Φ    Φ    010  01
  010 | 0    010  010  Φ    Φ    010  010
  Φ   | 0    Φ    Φ    Φ    Φ    Φ    Φ
  101 | 0    Φ    Φ    Φ    Φ    Φ    101
  10  | 0    010  010  Φ    Φ    10   10
  1   | 0    01   010  Φ    101  10   1

In the following proposition we examine which of the ternary laws hold in the algebras C_k, for k ≥ 3.

Proposition 5.2  The following results apply to the algebras C_k for k ≥ 3:

 * C3 satisfies L7 and L7′, but C_k with k ≥ 4 does not.
 * C_k with k ≥ 3 does not satisfy L8, L9, L8′, and L9′.
 * For k ≥ 3, C_k satisfies L10.
 * C3 satisfies L11, L12, L11′, and L12′, but C_k does not satisfy these laws for k ≥ 4.

Proof: We show the arguments only for the unprimed laws; the primed laws follow by duality.

 * For C3, the operation tables show that L7 and L7′ hold. For k ≥ 4, we have k − 2 ≥ 2. If k is even, then (k − 2)/2 is an integer ≥ 1. Let t = (01)^((k−2)/2) 0. Then l(t) = k − 1, and t by itself constitutes an equivalence class of ∼_k. By the definition of ⊕ in Algebra C, t ⊕ t = (01)^(k−2) 0. Thus l(t ⊕ t) = 2k − 3 = k + (k − 3) ≥ k, and t ⊕ t is in the equivalence class Φ, showing that t ≠ t ⊕ t. If k is odd, then k ≥ 5. Let t = (01)^((k−3)/2) 0; then l(t) = k − 2 < k. However, t ⊕ t = (01)^(k−3) 0, and l(t ⊕ t) = 2k − 5 = k + (k − 5) ≥ k, showing again that t ≠ t ⊕ t.

 * Let x = 01 and y = 10. In Algebra C, we have x ⊕ (x ⊗ y) = 01 ⊕ 010 = 0101. For each k ≥ 3, x and x ⊕ (x ⊗ y) are in different equivalence classes. Hence L8 does not hold.
   For L9, use x = 01, y = 10, and z = 0. Then in Algebra C, x ⊕ (y ⊗ z) = 01 ⊕ 0 = 01, while (x ⊕ y) ⊗ (x ⊕ z) = 101 ⊗ 01 = 0101. Again, the two results are in different equivalence classes of ∼_k for all k ≥ 3.

 * L10 holds, because ∼_k is a congruence, and the complement of a word of length ≥ k has length ≥ k.

 * For k = 3, L11 and L12 are easily verified from the operation tables. For k ≥ 4, 01 ⊕ 0̄1̄ = 01 ⊕ 10 = 101, but (01 ⊕ 0̄1̄) ⊕ Φ = 101 ⊕ Φ = Φ ≠ 101. Thus L11 fails for k ≥ 4. For L12, (01 ⊕ 0̄1̄) ⊕ (01 ⊗ 0̄1̄) = 101 ⊕ 010 = 10101 and 01 ⊕ 0̄1̄ = 101. For k ≥ 4, 101 and 10101 are in different equivalence classes of ∼_k. Hence L12 is not satisfied.
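The counterexamples above are easy to verify mechanically; for instance, absorption (L8) already fails in C (a hypothetical check, assuming the oplus and otimes functions from the earlier sketch):

    x, y = "01", "10"
    print(oplus(x, otimes(x, y)))   # prints "0101", which differs from x = "01"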

We now discuss some further properties of the algebras C_k. Roughly speaking, the next lemma shows that, if two words s and t are congruent with respect to a congruence ∼ on C and end in the same letter, then we can always find longer words s′ and t′ which are also congruent with respect to ∼.

Lemma 5.3  Suppose that t, s ∈ T with t ≠ s and ω(t) = ω(s). If ∼ is a congruence relation on C with t ∼ s, then for each m ≥ 0 there exist t′, s′ ∈ T with t′ ≠ s′, ω(t′) = ω(s′), t′ ∼ s′, l(t′) = l(t) + m and l(s′) = l(s) + m.

Proof: It is sufficient to prove the claim for m = 1. If t and s both end in 0, then by Lemma 4.3, t ⊕ 01 = t1 and s ⊕ 01 = s1. Since ∼ is a congruence, (t ⊕ 01) ∼ (s ⊕ 01). Thus, (t1) ∼ (s1). If t and s both end in 1, then take t′ = t0 = t ⊗ 10 and s′ = s0 = s ⊗ 10.

Proposition 5.4  Suppose that k ≥ 2. Then ∼_(k−1) is the smallest congruence relation strictly containing ∼_k.

Proof: Suppose that ∼ is a congruence relation strictly containing ∼_k. We first show that there is a transient u of length k − 1 which is congruent modulo ∼ to some transient v of length ≥ k.

If ∼ strictly contains ∼_k, then there exist distinct words t, s ∈ T with t ∼ s, l(t) ≤ l(s), and l(t) < k. Suppose first that ω(s) ≠ ω(t). If ω(s) = 0 and ω(t) = 1, then s ⊕ 01 = s1 and t ⊕ 01 = t, by Lemma 4.3(3). Thus t ∼ s1. If ω(s) = 1 and ω(t) = 0, then s ⊗ 10 = s0 and t ⊗ 10 = t, by Lemma 4.3(4). Thus t ∼ s0. In either case, we have two transients t and s′, where s′ is s0 or s1, congruent with respect to ∼, and such that l(t) < l(s′). If l(t) = k − 1, we are done: let u = t and v = s′. Otherwise, note that ω(t) = ω(s′). Consider the transients tb and s′b, where b denotes the last letter of t. By Lemma 5.3 we have that tb ∼ s′b. Continuing in the same way, by repeated applications of Lemma 5.3 we can construct words u and v with u ∼ v, l(u) = k − 1 and l(v) ≥ k.

In case ω(s) = ω(t), consider tb and sb, where b = ω(t), and apply Lemma 5.3, as above.

Note that there are exactly two transients of any given length, one beginning with 0 and the other with 1. Having found u and v as above, we claim that all transients of length ≥ k − 1 are congruent with respect to ∼. Indeed, since ∼ contains ∼_k, we have that u ∼ w for all transients w of length ≥ k. Thus, using the congruence property for the complement, also ū ∼ w̄ for all transients w with length ≥ k. Moreover, by transitivity, also u ∼ ū. Thus, ∼ contains ∼_(k−1).

Corollary 5.5  The lattice of congruences of each C_k is isomorphic to a chain of length k.

By Corollary 5.5, we immediately have:

Corollary 5.6  Each C_k with k ≥ 2 is subdirectly irreducible.

The last result means that C_k is not a subalgebra of a direct product of algebras with fewer elements. Informally, this means that C_k cannot be constructed from simpler algebras. This implies, for example, that C_k cannot be expressed in any nontrivial way as an algebra of ordered triples [16], where each component of the triple is evaluated in its own algebra.
As we did for T, we define two partial orders on T_k; these are derived from the prefix and suffix orders. The class consisting of t is ≤ the class consisting of t′ if and only if t is a prefix of t′, and every class is ≤ Φ. The second partial order ⊑ is similarly defined using suffixes. For example, the Hasse diagrams of the two orders on T4 are shown in Fig. 5. As was the case in C, the three operations ⊕, ⊗, and ⁻ in C_k are monotonic with respect to both partial orders. This property plays an important role in the simulation algorithms described in the next section.
6  Circuit Simulations

For a logic circuit with n gates, the binary analysis described in Section 2 may have as many as 2^n states. If one can be satisfied with partial information about the circuit behavior, then simulation in a change-counting algebra is often an efficient method for finding that information.

Figure 5: Partial orders on T4: (a) the prefix order ≤; (b) the suffix order ⊑.
For the detection of hazards, ternary algebra has been used since 1948 [14]. A two-pass ternary simulation method was introduced by Eichelberger in 1965 [11], and later studied by many authors (see [3] for further details). The following is an adaptation of these algorithms to change-counting algebra.

We denote vectors by unsubscripted letters and their components by subscripted letters. Let N be a circuit with X = (X1, …, Xm) as the vector of input variables, and s = (s1, …, sn) as the vector of state variables. Each state variable si has a Boolean excitation function Si: {0, 1}^m × {0, 1}^n → {0, 1}, i.e., the vector S(x, y) = (S1(x, y), …, Sn(x, y)) is the vector of excitations of the circuit.

For example, the circuit of Fig. 1 has input vector X = (X1), state vector s = (s1, …, s4), and excitation functions given by the following Boolean expressions:

  S1 = X̄,  S2 = 1 ∧ X,  S3 = X ∧ s1,  S4 = s2 ∨ s3,

where we have identified X with X1, since X has only one component.

Suppose initially the input X has the value X = â = (â1, …, âm), and the state has the value s = b = (b1, …, bn), where all the âi and bi are in the set {0, 1}. We assume that the circuit is initially stable, i.e., that S(â, b) = b, and the input changes to a = (a1, …, am).

We define an operation ∘ as follows. For a, b ∈ {0, 1}, if a = b, then a ∘ b = a. For a ≠ b, if the simulation is done in Algebra C2, which is ternary algebra, then a ∘ b = b ∘ a = Φ. Otherwise, if a ≠ b, and the simulation uses Algebra C or Algebra C_k with k > 2, then a ∘ b = ab, where ab represents the concatenation of a and b. This notation is extended to vectors: if â = (â1, …, âm) and a = (a1, …, am), then

  â ∘ a = (â1 ∘ a1, …, âm ∘ am).

For example, (1, 0, 0, 1) ∘ (1, 1, 0, 0) = (1, Φ, 0, Φ) in C2; otherwise, (1, 0, 0, 1) ∘ (1, 1, 0, 0) = (1, 01, 0, 10). In the case of C2, â ∘ a indicates by a Φ all those variables that change in going from â to a. In the other cases, â ∘ a specifies the change as being either from 0 to 1 or from 1 to 0. Such detail is not possible in the case of C2, since only one value, Φ, is available for denoting a value that is neither 0 nor 1.
For the simulation algorithms we use the extensions of Boolean functions to transients. This topic is discussed in Section 7; for now we use circuits constructed with or gates, and gates, and inverters, for which the extensions are ⊕, ⊗, and complement in the appropriate algebra. Variables and their values in a change-counting algebra are denoted in boldface. For example, for the circuit of Fig. 1, the excitation equations become

  S1 = X̄,  S2 = 1 ⊗ X,  S3 = X ⊗ s1,  S4 = s2 ⊕ s3.

Our simulation consists of two parts, called Algorithms A and B. Algorithm A starts with the circuit in the stable (binary) initial state (â, b). The input is then set to a = â ∘ a, and is kept constant at that value for the duration of the algorithm. After the input change, some state variables become unstable. We change all unstable variables at the same time to their excitations. This can be viewed as the "unit-delay" model, in which all gates have the same (unit) delay. We obtain a new internal state (a vector of transients from the set T or T_k of the change-counting algebra used), and the process is then repeated. Formally, Algorithm A is specified below.

Algorithm A
  h := 0;
  a := â ∘ a;
  s^0 := b;
  repeat
    h := h + 1;
    s^h := S(a, s^(h−1));
  until s^h = s^(h−1);

Recall that ≤ is the prefix relation on transients. We extend this notion to vectors of transients. Thus, if a = (a1, …, am) and a′ = (a1′, …, am′), then a ≤ a′ iff ai ≤ ai′ for i = 1, …, m. Note that â ≤ â ∘ a = a. By the stability of the initial state we have b = S(â, b). Since the transient operations ⊕, ⊗, and complement agree with their Boolean counterparts on binary values, the excitations computed in the change-counting algebra also satisfy S(â, b) = b. Hence, s^0 = b = S(â, b) ≤ S(a, b) = S(a, s^0) = s^1, where the inequality follows by Proposition 4.2, which states that ⊕, ⊗, and complement are monotonic with respect to the prefix order. It then follows by induction on h that Algorithm A results in a nondecreasing state sequence:

  s^0 ≤ s^1 ≤ ⋯ ≤ s^h ≤ ⋯

Algorithm A may not terminate in Algebra C, but it must terminate in every algebra C_k, k ≥ 2. Let the result of Algorithm A be state s^A, if the algorithm terminates. Note that s^A = S(a, s^A), i.e., that the circuit is again in a stable state at the end of Algorithm A. Example 5 below illustrates Algorithm A for C2, and further examples follow.

The second part of the simulation consists of Algorithm B, which is applicable when Algorithm A terminates. Let the result of Algorithm A be state s^A. Algorithm B starts with the circuit in state s^A and input a = â ∘ a, and the input is changed to its final value a. Algorithm B is defined below.

Algorithm B
  h := 0;
  t^0 := s^A;
  repeat
    h := h + 1;
    t^h := S(a, t^(h−1));
  until t^h = t^(h−1);

Recall now that ⊑ is the suffix order on transients, and that ⊕, ⊗, and complement are monotonic with respect to the suffix order, as a consequence of Proposition 4.6. It is easy to verify that Algorithm B results in a nonincreasing sequence of states:

  s^A = t^0 ⊒ t^1 ⊒ ⋯ ⊒ t^B.

Again we reach a stable state, since t^B = S(a, t^B).
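The following is a compact sketch (ours, not the paper's implementation) of both algorithms for the circuit of Fig. 1, run in Algebra C. It assumes the transient operations oplus, otimes, and cmpl from the earlier sketch; the excitation function below corresponds to the equations S1 = X̄, S2 = 1 ⊗ X, S3 = X ⊗ s1, S4 = s2 ⊕ s3.

    def excitation(X, s):
        # Excitation vector of the circuit of Fig. 1 in the change-counting algebra.
        s1, s2, s3, s4 = s
        return (cmpl(X), otimes("1", X), otimes(X, s1), oplus(s2, s3))

    def fixpoint(X, s):
        # Iterate s := S(X, s) until it no longer changes.  Algorithms A and B
        # both have this shape; they differ only in the input value used and
        # in the starting state.
        while True:
            nxt = excitation(X, s)
            if nxt == s:
                return s
            s = nxt

    # Algorithm A: input set to the transient 0 -> 1, i.e. "01"; start in b = 1000.
    result_A = fixpoint("01", ("1", "0", "0", "0"))
    # Algorithm B: input set to its final value "1"; start in the result of A.
    result_B = fixpoint("1", result_A)
    print(result_A)   # ('10', '01', '010', '0101')  -- compare result A of Table 9
    print(result_B)   # ('0', '1', '0', '1')         -- compare result B of Table 9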


Both algorithms compute fixed points.

Proposition 6.1  Suppose that Algorithm A terminates with state s^A. Then s^A is the least fixed point of the function S(a, x) over b, i.e.,

  S(a, s^A) = s^A, and                                   (1)
  b ≤ s & S(a, s) = s  ⇒  s^A ≤ s.                       (2)

Proof: Recall that Algorithm A terminates if and only if a state vector s^h is reached with s^h = s^(h−1), and then s^A = s^h = S(a, s^(h−1)) = S(a, s^A). This proves (1).

Suppose now that the premises of (2) hold for a state vector s. We prove by induction on h, using the monotonicity of the ⊕, ⊗, and complement operations with respect to the prefix order, that s^h ≤ s. The basis case h = 0 is obvious, since s^0 = b and b ≤ s, by assumption. Assuming the claim for h − 1, where h > 0, we obtain s^h = S(a, s^(h−1)) ≤ S(a, s) = s, by the induction hypothesis.

As noted above, Algorithm A always terminates in any algebra C_k. Moreover, by Proposition 6.1, it computes the least fixed point of the function S(a, x) over b with respect to the prefix order. It follows from general facts (cf. Chapter 4 in [10]) that this least fixed point also exists over C. However, some components of the least fixed point may be infinite, and such a component is not attainable by the algorithm in a finite number of steps.

In Algebra C, we have:

Proposition 6.2  Algorithm A terminates in Algebra C if and only if there is a fixed point of the function S(a, x) over b none of whose components is infinite.

Proof: One direction follows from Proposition 6.1. Indeed, if the algorithm terminates, then it computes a fixed point of S(a, x) over b. Moreover, since each component of any s^h is a finite transient, none of the components of this fixed point is infinite. For the opposite direction, suppose that s is a fixed point of S(a, x) over b none of whose components is infinite. As above, we have that s^h ≤ s, for each h. But since no component of s is infinite, there are only a finite number of state vectors ≤ s. Thus, the nondecreasing sequence s^0 ≤ s^1 ≤ ⋯ must be eventually constant.

Algorithm B always terminates, regardless of which algebra we use. The proof of the following fact is similar to that of Proposition 6.1.

Proposition 6.3  Algorithm B computes the greatest fixed point of the function S(a, x) below s^A with respect to the suffix order.

Table 6: Ternary simulation.

                   X    s1   s2   s3   s4
  initial state    0    1    0    0    0
                   Φ    1    0    0    0
                   Φ    Φ    Φ    Φ    0
  result A         Φ    Φ    Φ    Φ    Φ
                   1    Φ    Φ    Φ    Φ
                   1    0    1    Φ    Φ
  result B         1    0    1    0    1

Example 5  Consider the circuit shown in Fig. 1. Refer now to Table 6. The values of the input variable X and the state variables s1, …, s4 are shown in rows as the simulation progresses. We begin in the initial state 01000, which is stable. We wish to study the behavior of the circuit when the input changes from 0 to 1 and is kept constant at 1.

In Algorithm A, the input changes to the uncertain or unknown value Φ. Instead of Boolean functions, we now use the ternary extensions of these functions as they are defined in Algebra C2. After X changes to Φ, Gates 1, 2, and 3 become unstable in the ternary model. After all unstable variables are changed, we obtain the third row of Table 6. Now Gate 4 becomes unstable and changes, to yield the fourth row. Since this state is stable, Algorithm A terminates here.
In the case of C2 the two partial orders ≤ and ⊑ coincide and are given by

  0 ≤ Φ, 1 ≤ Φ, 0 ≤ 0, 1 ≤ 1, Φ ≤ Φ.

This partial order is known as the uncertainty partial order [3].

In our example, the input became more uncertain by changing from 0 to Φ. Since the gate operations preserve this order, the gate outputs can only become more uncertain. Thus the sequence of states produced by Algorithm A is nondecreasing, and the process terminates in at most n steps if there are n gates in the circuit. Intuitively, in Algorithm A we introduce uncertainty in the circuit inputs and we see how this uncertainty spreads throughout the circuit.

In Algorithm B we start in the state produced by Algorithm A, but now we change the inputs to their final values from Φ, thus reducing uncertainty. In our example, X becomes 1. We again use the ternary excitation functions to see whether the uncertainty will be removed from any gates. This time Algorithm B produces a nonincreasing sequence of states, which must terminate in no more than n steps. The final result is the vector 10101, showing that each gate reaches a binary value after the transient is over. Of course, this is the answer we expect, since the circuit is feedback-free.

It is clear from the circuit diagram that s1 changes from 1 to 0 and s2 changes from 0 to 1 without any hazards. As we have shown in Section 2, there is a static hazard in s3, and a dynamic hazard in s4.

This example illustrates that ternary simulation is capable of detecting static hazards. Gate s3 is 0 at the beginning and 0 at the end, but it is Φ at the end of Algorithm A, and this indicates a static hazard [3]. Our example also clearly shows that ternary simulation is not capable of detecting dynamic hazards. Gates s2 and s4 both change from 0 to Φ to 1, yet s2 has no dynamic hazard, while s4 has one.

In the examples that follow, we show how the accuracy of the simulation improves when we use Algebra C_k as k increases.

Example 6  In Table 7 we repeat the simulation, this time using quinary extensions of gates as defined in Table 4. Instead of changing X to Φ, we change it to 01 in Algorithm A, since this is the change we wish to study. From the result of Algorithm A we see that s1 changes from 1 to 0 and s2 changes from 0 to 1, both without hazards. The static hazard in s3 is detected as before, as is the dynamic hazard in s4. This example shows that quinary simulation is capable of detecting both static and dynamic hazards.

Example 7  We repeat the simulation now using Algebra C4 with seven values. Refer to Table 8. This time Algorithm A not only reveals that there is a static hazard in s3, but also identifies it as 010. Note, however, that the dynamic hazard is still not identified.

Example 8  We repeat the simulation using Algebra C5 with nine values. Refer to Table 9. This time Algorithm A identifies the dynamic hazard as 0101.

Observe that the same table results if we simulate our circuit in Algebra C_k with any k ≥ 5 or in Algebra C. Note also that for k ≥ 5 Algorithm B is no longer necessary for this circuit, since the entire history of worst-case signal changes is recorded on each gate output.

Table 7: Quinary simulation.

                   X    s1   s2   s3   s4
  initial state    0    1    0    0    0
                   01   1    0    0    0
                   01   10   01   01   0
                   01   10   01   Φ    01
  result A         01   10   01   Φ    Φ
                   1    10   01   Φ    Φ
                   1    0    1    10   Φ
  result B         1    0    1    0    1

Table 8: Septenary simulation.

                   X    s1   s2   s3    s4
  initial state    0    1    0    0     0
                   01   1    0    0     0
                   01   10   01   01    0
                   01   10   01   010   01
  result A         01   10   01   010   Φ
                   1    10   01   010   Φ
                   1    0    1    10    Φ
  result B         1    0    1    0     1

Example 9  The simulation of the circuit of Fig. 6 is shown in Table 10. Here, if we use Algebra C, Algorithm A does not terminate. Binary analysis of this circuit shows that there are transient oscillations during this input change. After the input changes to 1, the nand gate can oscillate while the or gate is unstable. For further details see [3]. The or gate must eventually change, since it has a finite delay. However, it is not possible to bound the number of oscillations of the nand gate without knowledge of the relative sizes of the gate delays. Hence the result obtained from the simulation is correct. If one uses Algebra C5, for example, Algorithm A does terminate in four steps after the input change, and Algorithm B predicts state 101 as the final outcome of the input change.

Example 10  Consider the circuit of Fig. 7. As in Example 9, Algorithm A using Algebra C does not terminate, this time because there is a nontransient oscillation involving the or gate and the wire delay in the feedback loop. For more details see [3]. Simulation in Algebra C_k does, of course, terminate. In Table 11 we show the simulation with C4. The nontransient oscillation is detected by the presence of Φ in the result of Algorithm B. This example also shows that multiple input changes can be handled by our simulation.

Table 9: Nonary simulation.

                   X    s1   s2   s3    s4
  initial state    0    1    0    0     0
                   01   1    0    0     0
                   01   10   01   01    0
                   01   10   01   010   01
  result A         01   10   01   010   0101
                   1    10   01   010   0101
                   1    0    1    10    0101
  result B         1    0    1    0     1

Figure 6: Circuit with transient oscillations.


Figure 7: Circuit with nontransient oscillation.


Example 11  Consider the circuit of Fig. 7 again. Suppose that X1 = 1, but input X2 and the initial state of the circuit are initially unknown. This is accounted for by setting X2 = s1 = s2 = Φ. The initial state is now stable in Algebra C2. Suppose now input X1 changes to 0. No information is obtained from Algorithm A, since the gate variables are in a state of maximal uncertainty. Algorithm B, however, does provide the information that s1 becomes 0.

The detailed discussion of the relation between the simulations described above and the binary analysis of Section 2 is beyond the scope of the present paper. A complete characterization of ternary simulation is given in [3]. In [13] it is shown that the result of Algorithm A for a feedback-free circuit N in Algebra C agrees with the result of binary analysis of a circuit Ñ, which is obtained from N by adding a sufficient number of wire delays. This means that a static or dynamic hazard is detected by the simulation of N iff it is present in the binary analysis of Ñ.

Table 10: Simulation of circuit with transient oscillations.

                   X    s1   s2   s3
  initial state    0    0    1    1
                   01   0    1    1
                   01   01   1    10
                   01   01   10   101
                   01   01   10   10101
                   ...  ...  ...  ...

Table 11: Simulation of circuit with nontransient oscillation.

                   X1   X2   s1    s2
  initial state    1    0    0     0
                   10   01   0     0
                   10   01   010   0
                   10   01   010   010
  result A         10   01   010   Φ
                   0    1    010   Φ
  result B         0    1    0     Φ

7  Extensions of Boolean Functions

In this section we show that simulation in Algebra C or C_k is not limited to 2-input or and and gates and inverters. Let B = {0, 1}. We now show how to extend any Boolean function f: B^n → B to a function f̂: T^n → T. We use the notation [n] to mean {1, …, n}.

Consider a simple example first. Suppose that a 2-input gate performs Boolean function f, and that the two inputs have transients 01 and 101, respectively. We want to find the maximum number of changes that can appear at the output of the gate, assuming that the input changes may occur at any time. Initially, the gate 'sees' the first letters of the two transients, i.e., 0 and 1; hence the output of the gate is f(0, 1). One possibility is that the second transient occurs while the first input has the value 0. Then the gate would see the consecutive ordered pairs (0, 1), (0, 0), (0, 1), and finally (1, 1). Another possible order would be (0, 1), (0, 0), (1, 0), and (1, 1), and the third possible order would be (0, 1), (1, 1), (1, 0), and (1, 1). Note that we do not need to consider sequences like (0, 1), (1, 0), (1, 1), in which both inputs change in some steps, since each such sequence is a subsequence of one of the three sequences above. The corresponding output sequence in the first case would be f(0, 1), f(0, 0), f(0, 1), f(1, 1); in the second case, f(0, 1), f(0, 0), f(1, 0), f(1, 1); and in the third case, f(0, 1), f(1, 1), f(1, 0), f(1, 1). If f is the or function, we would have the sequences 1011, 1011, and 1111, respectively. The corresponding transients would be 101, 101, and 1. Since we are looking for the longest possible transient, we have 101 as the result of the operation 01 ⊕ 101, which, of course, agrees with our definition in Section 4. Similarly, if f is the and function, we get the result 0101, and if it is the xor function, we have 1010.

Table 12: Simulation of circuit with unknown initial values.

                   X1   X2   s1   s2
  initial state    1    Φ    Φ    Φ
  result A         Φ    Φ    Φ    Φ
                   0    Φ    Φ    Φ
  result B         0    Φ    0    Φ
This example is generalized as follows. Suppose that x = (x1, …, xn), where xi ∈ T for each i ∈ [n]. Define the directed graph D(x) to have as vertices the n-tuples y = (y1, …, yn), where each yi is a prefix of length > 0 of xi, for each i ∈ [n]. There is an edge from vertex y = (y1, …, yn) to vertex y′ = (y1′, …, yn′) iff y and y′ differ in exactly one coordinate, say i, and yi′ = yi a, where a ∈ B. It is clear that the graph D(x) shows all possible orders in which the n variables can change, while undergoing a transition from the initial values (α(x1), …, α(xn)) to the final values (x1, …, xn). For the example given above, we have the graph of Fig. 8.

Figure 8: The graph D(01, 101); its vertices are (0, 1), (01, 1), (0, 10), (01, 10), (0, 101), and (01, 101).


Let x = b1 ⋯ bn ∈ T. Call a word y an expansion of x if y results from x by replacing each letter bi, i ∈ [n], by bi^(mi), for some mi > 0. Note that for any nonempty word y over B there is a unique word x ∈ T such that y is an expansion of x. We call x the contraction of y. For example, y = 000110010 is an expansion of x = 01010, and x is the contraction of y.

Returning now to graph D(x), we label each vertex y = (y1, …, yn) of D(x) with the value f(a1, …, an), where ai is the last letter of yi, i.e., ai = ω(yi), for each i ∈ [n].
Definition 1  Given a Boolean function f: B^n → B, we define the function f̂ to be that function from T^n to T which, for any n-tuple (x1, …, xn) of transients, produces the longest transient when x1, …, xn are applied to the inputs of a gate performing function f.

The following proposition is now clear from the definition of f̂ and the graph D(x).

Proposition 7.1  The value of f̂(x1, …, xn) is the contraction of the label sequence of those paths in D(x) from (α(x1), …, α(xn)) to (x1, …, xn) which have the largest number of alternations between 0 and 1.
We now give a complete description of the extension of the n-input xor function f: B^n → B to the function f̂: T^n → T. For this function, no two adjacent vertices have the same label. Hence, the length of the transient f̂(x1, …, xn) is the length of any path in D(x) from (α(x1), …, α(xn)) to (x1, …, xn) (all such paths have the same length). One easily verifies that y = f̂(x1, …, xn) is that word in T satisfying the conditions

  α(y) = f(α(x1), …, α(xn)),
  ω(y) = f(ω(x1), …, ω(xn)),
  l(y) = 1 + Σ_{i=1..n} (l(xi) − 1) = 1 − n + Σ_{i=1..n} l(xi),

where the expression for the length of y has 1 for the initial state and l(xi) − 1 for the changes in variable i, for each i.
For the or function, consider first an example with two inputs x1 and x2. Suppose we have reached a vertex (y1, y2) in D(x) = D((x1, x2)), where (y1, y2) is not the final vertex and f(ω(y1), ω(y2)) = 0. Then we must have ω(y1) = ω(y2) = 0. The two successor vertices must be (y1 1, y2) and (y1, y2 1), because of the alternating nature of transients. Thus, the next value of f must be 1, independently of the successor vertex. Suppose that vertex (y1, y2) is labeled 1, so that f(ω(y1), ω(y2)) = 1. Then it has a successor labeled 0 exactly when only one of y1 and y2 ends in 1. Moreover, if ω(y1) = 1 and ω(y2) = 0, say, then the vertex has a successor labeled 0 if and only if (y1 0, y2) is a successor, so that y1 0 is a prefix of x1. It follows that to obtain the maximum number of alternations in the output sequence, we should take a path with the largest number of vertices labeled 0, since any change is caused by entering, or leaving, such a vertex.

In general, for any n-tuple (x1, …, xn) of transients, the maximum number of 0s is z = 1 + (z(x1) − 1) + ⋯ + (z(xn) − 1). This holds, because we get the first 0 when we reach the first 0s in all the n transients, and then each variable i contributes z(xi) − 1 additional 0s, while the remaining variables are held at 0. For example, for x = (01010, 0101) there are three sequences with 1 + 2 + 1 = 4 0s:

  (0,0), (01,0), (010,0), (0101,0), (01010,0), (01010,01), (01010,010), (01010,0101);
and
  (0,0), (0,01), (0,010), (01,010), (010,010), (0101,010), (01010,010), (01010,0101);
and
  (0,0), (01,0), (010,0), (010,01), (010,010), (0101,010), (01010,010), (01010,0101).

In summary, y = f̂(x1, …, xn) is the word in T determined by the conditions

  α(y) = α(x1) ∨ ⋯ ∨ α(xn),
  ω(y) = ω(x1) ∨ ⋯ ∨ ω(xn),
  z(y) = 0, if xi = 1 for some i ∈ [n]; and z(y) = 1 + Σ_{i=1..n} (z(xi) − 1), otherwise.

Dually, if f is the n-input and function, then y = f̂(x1, …, xn) ∈ T is given by

  α(y) = α(x1) ∧ ⋯ ∧ α(xn),
  ω(y) = ω(x1) ∧ ⋯ ∧ ω(xn),
  u(y) = 0, if xi = 0 for some i ∈ [n]; and u(y) = 1 + Σ_{i=1..n} (u(xi) − 1), otherwise.

One easily verifies that the extension of the nor (nand) function of any number of arguments is the complement of the extension of the or (and) function. Note, however, that function composition does not preserve extensions in general. For a two-input xor gate with inputs 01 and 101 the output is 1010. Suppose now that the xor gate is constructed with or gates, and gates, and inverters. Since the complement of 01 is 10 and the complement of 101 is 010, the network yields

(10 ⊗ 101) ⊕ (01 ⊗ 010) = 1010 ⊕ 010 = 101010.

Thus, the hazard properties of the xor function are quite different from those of the network N consisting of an or gate, two and gates, and two inverters. This difference can be explained as follows. The network N has wires connecting the inverters to the and gates and the and gates to the or gate. In our simulation algorithm we compute the worst-case transient for each gate output, given the transients at the gate inputs. In computing this transient we assume that the input changes may occur in all possible orders. This is equivalent to taking into account the wire delays. In contrast to this, if we consider the xor function as being realized by a single component, these "intermediate" wire delays are not taken into account. Hence, a shorter transient may result in the latter case.
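Using t_or, t_and, and t_not from the sketch above and xor_hat from the first one, the two worst-case transients can be reproduced mechanically:

```python
# xor as one component versus the or/and/inverter network N, for inputs 01, 101
a, b = '01', '101'
print(xor_hat(a, b))                                   # 1010
print(t_or(t_and(t_not(a), b), t_and(a, t_not(b))))    # 101010
```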
The following proposition and its corollary show that the monotonicity results of Section 4 for two-input or gates, and gates, and inverters, and consequently the monotonicity of Algorithms A and B, apply also to gates realizing arbitrary Boolean functions.

Proposition 7.2  Let f be a Boolean function and f̂ its extension to transients. Then f̂ is monotonic with respect to the prefix order.

Proof: When x = (x_1, …, x_n) and x′ = (x′_1, …, x′_n) with x_i ≤ x′_i for all i ∈ [n], D(x) is a subgraph of D(x′). Also α(x_i) = α(x′_i), for all i ∈ [n], so that any path from α(x) = (α(x_1), …, α(x_n)) to x in D(x) can be continued to a path from α(x′) = (α(x′_1), …, α(x′_n)) to x′ in D(x′). Thus, each label sequence corresponding to a path from α(x) to x is a prefix of the label sequence corresponding to a path from α(x′) to x′. It follows that f̂(x_1, …, x_n) ≤ f̂(x′_1, …, x′_n).

Corollary 7.3  f̂ is monotonic with respect to the suffix order.

It also follows from the definition of f̂ that the length of f̂(x_1, …, x_n) is bounded by

1 + Σ_{i=1}^{n} (l(x_i) − 1) = 1 − n + Σ_{i=1}^{n} l(x_i).

When n = 1, this bound can be achieved for the identity function and for the complement function, and for n > 1, for the n-input xor function.
The next proposition and two lemmas are technical results needed in the proof of Proposition 7.7.

Proposition 7.4  Suppose that f depends on each of its arguments and none of the x_i is a single letter. Then the length of f̂(x_1, …, x_n) is at least the maximum of the lengths of the x_i.

Proof: See Appendix.

For a function f : B^n → B, an integer i ∈ [n] and b ∈ B, let f_{i,b} : B^{n−1} → B denote the function obtained by fixing the ith argument of f at b, i.e.,

f_{i,b}(b_1, …, b_{i−1}, b_{i+1}, …, b_n) = f(b_1, …, b_{i−1}, b, b_{i+1}, …, b_n),

for all b_j ∈ B, j ∈ [n], j ≠ i.

Lemma 7.5  For all x_j ∈ T, j ∈ [n], if x_i = b is a single letter, then

f̂(x_1, …, x_n) = f̂_{i,b}(x_1, …, x_{i−1}, x_{i+1}, …, x_n).   (3)

Proof: See Appendix.

Lemma 7.6  If f : B^n → B does not depend on its ith argument, then f̂ does not depend on that argument either.

Proof: See Appendix.

Proposition 7.7  Let k ≥ 1, and let ∼_k be the equivalence relation on T defined before. Then ∼_k is a congruence in the sense that, for any Boolean function f : B^n → {0, 1} and x_1, …, x_n, x′_1, …, x′_n ∈ T, if x_1 ∼_k x′_1, …, x_n ∼_k x′_n, then f̂(x_1, …, x_n) ∼_k f̂(x′_1, …, x′_n).

Proof: Our claim is clear for k = 1, since all transients are equivalent with respect to ∼_1. Hence we may assume that k > 1. It suffices to show that, if x_i ∼_k x′_i and x_j = x′_j for j ≠ i, then f̂(x_1, …, x_n) ∼_k f̂(x′_1, …, x′_n). The claim then follows by applying this result to each x_i. (For example, for n = 2, we first show that f̂(x_1, x_2) ∼_k f̂(x′_1, x_2), and then that f̂(x′_1, x_2) ∼_k f̂(x′_1, x′_2); each step involves changing just one variable.) Since k > 1, x_i is a single letter iff x′_i is a single letter, in which case x_i = x′_i and our claim is obvious. By Lemmas 7.5 and 7.6, we may thus assume that f depends on all of its arguments and none of the x_j is a single letter, so that x′_i is not a single letter either. But then Proposition 7.4 applies and the result follows.
Since ∼_k is a congruence relation on T, it is meaningful to extend any Boolean function to T_k. For example, the xor table for k = 3 is shown in Table 13. It follows that the extension preserves the prefix and suffix order on T_k.

Table 13: Operation of a xor gate for k = 3.

    xor |  0    01    Φ    10    1
    ----+--------------------------
     0  |  0    01    Φ    10    1
    01  | 01     Φ    Φ     Φ   10
     Φ  |  Φ     Φ    Φ     Φ    Φ
    10  | 10     Φ    Φ     Φ   01
     1  |  1    10    Φ    01    0
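Since ∼_k is a congruence, tables such as Table 13 can also be generated mechanically: compute the extension on representatives and collapse every word of length at least k to Φ. A small sketch (our own encoding, with Φ written PHI; by the congruence property the result does not depend on which long representative is chosen for PHI):

```python
from itertools import product

PHI = 'PHI'

def elements(k):
    """The 2(k-1) transients of length < k, followed by PHI."""
    words = [''.join(str((b + i) % 2) for i in range(n))
             for n in range(1, k) for b in (0, 1)]
    return words + [PHI]

def xor_ext(a, b):
    """xor extension on transients: first letters xored, length l(a)+l(b)-1."""
    first = (int(a[0]) + int(b[0])) % 2
    return ''.join(str((first + i) % 2) for i in range(len(a) + len(b) - 1))

def ck_table(op, k):
    rep = {e: e for e in elements(k)}
    rep[PHI] = ''.join(str(i % 2) for i in range(k))   # any word of length k
    collapse = lambda w: PHI if len(w) >= k else w
    return {(s, t): collapse(op(rep[s], rep[t]))
            for s, t in product(elements(k), repeat=2)}

tbl = ck_table(xor_ext, 3)
print(tbl[('01', '1')], tbl[('01', '10')])   # 10 PHI, as in Table 13
```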

Complexity Issues

In this section we give an estimate of the worst-case performance of the simulation algorithms. Using Algebra C, Algorithm A may not terminate unless the circuit is feedback-free. So suppose that we are given an m-input feedback-free circuit with n gates. We assume that each gate is an or or and gate, or an inverter, or, more generally, a gate with a bounded number of inputs such that the extension of the binary gate function to T can be computed in linear time when each transient t is represented by the triple (α(t), z(t), ω(t)). Here, we assume that z(t), the number of 0s in t, is stored in binary.
Suppose the circuit is in a stable state and some of the inputs are supposed to change. If an input is to change from 0 to 1 (1 to 0), we set it to 01 (10). It takes O(m) time to record these changes. Let s_1, …, s_n denote the state variables corresponding to the gates. Initially, each s_i has a binary value. For each i ∈ [n], let h_i denote the height of the ith gate, i.e., the length of the longest path from an input to the gate corresponding to s_i. We clearly have h_i ≤ n, for each i ∈ [n]. In the first step of Algorithm A, all state variables s_i with h_i = 1 will be set to their respective final values. More generally, after step j, all state variables s_i with h_i ≤ j will have assumed their final values. Thus, Algorithm A terminates in O(n) steps. In each step, each variable s_i is set to a value according to its current excitation. Of course, this value may be the same as the value currently attained by the variable. For example, when the ith gate is an or gate which takes its inputs from the jth and kth gates, where j, k < i, then s_i is set to s_j ⊕ s_k, the sum of the current values of s_j and s_k. The ⊕ operation is that of Algebra C. Using the triple representation of transients, the middle component of the new value of s_i is at most twice the maximum of the middle components of s_j and s_k, so that its binary representation is at most one bit longer than the maximum of the lengths of the middle components of s_j and s_k. We see that updating the value of s_i in a step takes O(n) time, and since there are n state variables, each step requires O(n²) time. Thus, in O(n) steps, we can set all n state variables to their final values in O(n³) time, showing that Algorithm A runs in O(m + n³) time.

If the gates are given in topological order, so that each s_i depends at most on the input variables and the state variables s_j with j < i, then a better performance can be achieved by an alternative algorithm that runs in n steps. In step i, it sets s_i to its final value according to its excitation. The time required is now O(m + n²). Since topological sort can be done within this time limit, the same bound applies if the gates are given in an arbitrary order.
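The sketch below (our own data layout, not the paper's code) shows this single-pass simulation on the triple representation (α(t), z(t), ω(t)); the middle component is an ordinary integer, so even very long transients stay small.

```python
# A sketch (our own encoding) of single-pass simulation of a feedback-free
# circuit over Algebra C, with every transient kept as the triple
# (alpha, zeros, omega); zeros = z(t) is an ordinary integer.

ONE, ZERO = (1, 0, 1), (0, 1, 0)          # the constant transients 1 and 0

def ones_of(t):
    a, z, w = t
    return z - 1 + a + w                  # number of 1s recovered from the triple

def or_triple(x, y):
    if ONE in (x, y):
        return ONE
    (a1, zx, w1), (a2, zy, w2) = x, y
    return (a1 | a2, zx + zy - 1, w1 | w2)

def and_triple(x, y):
    if ZERO in (x, y):
        return ZERO
    (a1, _, w1), (a2, _, w2) = x, y
    ones = ones_of(x) + ones_of(y) - 1
    a, w = a1 & a2, w1 & w2
    return (a, ones - 1 + (1 - a) + (1 - w), w)

def not_triple(x):
    a, _, w = x
    return (1 - a, ones_of(x), 1 - w)

def simulate(inputs, gates):
    """inputs: list of triples; gates: (op, operand indices) in topological
    order, where indices refer to inputs followed by earlier gate outputs."""
    vals = list(inputs)
    for op, args in gates:
        if op == 'not':
            vals.append(not_triple(vals[args[0]]))
        elif op == 'or':
            vals.append(or_triple(vals[args[0]], vals[args[1]]))
        else:  # 'and'
            vals.append(and_triple(vals[args[0]], vals[args[1]]))
    return vals[len(inputs):]

# The xor network N with inputs 01 -> (0, 1, 1) and 101 -> (1, 1, 1):
out = simulate([(0, 1, 1), (1, 1, 1)],
               [('not', [0]), ('not', [1]),
                ('and', [2, 1]), ('and', [0, 3]), ('or', [4, 5])])
print(out[-1])    # (1, 3, 0), i.e. the transient 101010
```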
If in Algorithm A we use Algebra C_k, for a fixed k, then at each step the value of any state variable can be stored in space O(log k) and updated in time O(k log k) by a simple table look-up. In this case, Algorithm A terminates in O(n) steps for all circuits, including those with feedback, since T_k is finite and the subsequent values of each state variable form a nondecreasing sequence with respect to the prefix order. This follows from the fact, proved in Section 7, that the extension of any Boolean function preserves the prefix order on T_k. Thus, using Algebra C_k, Algorithm A runs in O(m + n²k log k) time, even for circuits with feedback.

If we use Algorithm A with Algebra C for a feedback-free circuit, then Algorithm B becomes unnecessary. The last letter of the final value of each state variable gives the response of the circuit to the intended change in the input. Moreover, the final value of each state variable is the transient that describes all of the (unwanted) intermediate changes that can take place in the worst case at the respective gate. The same holds if we use C_k for any circuit, which now may contain cycles, provided that upon termination of Algorithm A each state variable assumes a value other than Φ. However, if the final value of a state variable is Φ, Algorithm B does become necessary. It will stop in no more than n steps, since the subsequent values of each state variable now form a nonincreasing sequence with respect to the suffix order. The total time required by Algorithm B is O(n²k log k).
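As an illustration of the finite case, here is a sketch in the spirit of Algorithm A over C_k (our own encoding and update scheme; the algorithm itself is defined in Section 6): the iteration runs to a fixed point, which exists even with feedback because every value of length at least k collapses to Φ.

```python
# A sketch (our own encoding, not the paper's code) of fixed-point simulation
# over C_k: transients are 0-1 strings, and any word of length >= k collapses
# to PHI, so the componentwise-nondecreasing iteration must stop.
PHI = 'PHI'

def collapse(t, k):
    return PHI if t == PHI or len(t) >= k else t

def or_ck(a, b, k):
    if '1' in (a, b):  return '1'          # a constant-1 input decides the or
    if a == '0':       return b
    if b == '0':       return a
    if PHI in (a, b):  return PHI          # already at least k-1 changes
    zeros = a.count('0') + b.count('0') - 1
    alpha, omega = max(a[0], b[0]), max(a[-1], b[-1])
    ones = zeros - 1 + (alpha == '1') + (omega == '1')
    word = ''.join(str((int(alpha) + i) % 2) for i in range(zeros + ones))
    return collapse(word, k)

def not_ck(a, k):
    return PHI if a == PHI else ''.join('1' if c == '0' else '0' for c in a)

def algorithm_a(inputs, gates, init, k):
    """gates: name -> (op, [operand names]); init: stable binary gate values."""
    state = dict(init)
    while True:
        def val(name):
            return inputs.get(name, state.get(name))
        new = {}
        for g, (op, args) in gates.items():
            if op == 'not':
                new[g] = not_ck(val(args[0]), k)
            else:
                new[g] = or_ck(val(args[0]), val(args[1]), k)
        if new == state:
            return state
        state = new

# A two-gate loop: s1 = not(s2), s2 = X or s1; stable for X = 1, s1 = 0, s2 = 1.
gates = {'s1': ('not', ['s2']), 's2': ('or', ['X', 's1'])}
print(algorithm_a({'X': '10'}, gates, {'s1': '0', 's2': '1'}, k=4))
# {'s1': 'PHI', 's2': 'PHI'} -- the loop oscillates, so both gates collapse to PHI
```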
It follows from the arguments presented above that the following decision problem is decidable in polynomial time: for a given circuit in a stable initial state, a given input change, and an integer k, are there k or more (unwanted) signal changes on the output of any given gate, or in the entire circuit, during that input change? Also, it is decidable in nondeterministic polynomial time, for a given circuit in a stable initial state and an integer k, whether there exists an input change that would cause k or more (unwanted) signal changes on the output of any given gate, or in the entire circuit.

It is NP-complete to decide, for an n-input Boolean function f given in conjunctive normal form, transients x_1, …, x_n and an integer k, whether the length of f̂(x_1, …, x_n) is > k. Indeed, we can guess a path in D(x) and verify in polynomial time whether the contraction of the associated label sequence is longer than k, showing that the problem belongs to NP. As for NP-hardness, consider an n-input Boolean function f given by a conjunctive normal form φ such that f(0, …, 0) = 0. Then φ is satisfiable iff f̂(01, …, 01) is not 0, i.e., iff the length of f̂(01, …, 01) is > 1. To see this, suppose first that φ is satisfiable and let x = (01, …, 01). Since φ is satisfiable, there is some b = (b_1, …, b_n) ∈ {0,1}^n such that f(b) = 1. Now let y_i = 0 if b_i = 0, and let y_i = 01 if b_i = 1, for each i ∈ [n]. Thus, ω(y) = (ω(y_1), …, ω(y_n)) = b, so that y = (y_1, …, y_n) is a vertex of D(x) labeled by f(b) = 1. Take any path from α(x) = (0, …, 0) to x going through y. Since vertex (0, …, 0) is labeled by 0 and y is labeled by 1, the contraction of the label sequence associated with this path has length at least 2, showing that the length of f̂(01, …, 01) is > 1. On the other hand, if the length of f̂(01, …, 01) is > 1, then at least one vertex y = (y_1, …, y_n) of D(x) is labeled by 1. Thus, letting b = ω(y) = (ω(y_1), …, ω(y_n)), we have that f(b) = 1, so that φ is satisfiable.

Simulation with Input, Middle, and Output Values

In a number of simulators [9, 16], the signal values are ordered triples containing the initial, transient, and final values of a signal. We now show how such algebras can be described in our framework.

For k ≥ 1, the relation ≈_k on the algebra C = (T, ⊕, ⊗, ‾, 0, 1) is defined as follows: for t, s ∈ T, t ≈_k s if either t = s, or α(t) = α(s), ω(t) = ω(s), and t and s are both of length ≥ k. Denote by λ (for left) and ρ (for right) the congruences defined by

t λ s iff α(t) = α(s),
t ρ s iff ω(t) = ω(s).

Then ≈_k = ∼_k ∩ λ ∩ ρ.

Proposition 9.1  Relation ≈_k is a congruence relation on C.

Proof: Since ∼_k, λ, and ρ are congruences, so is ≈_k.

The quotient algebra C′_k = C/≈_k is a commutative de Morgan bisemigroup with 2(k − 1) + 4 = 2k + 2 elements. Each word t ∈ T with l(t) < k determines a singleton congruence class. In addition, for any b_1, b_2 ∈ B, the words t ∈ T with l(t) ≥ k, α(t) = b_1, and ω(t) = b_2 determine a congruence class that we denote by b_1Φb_2. Since ≈_k refines ∼_k, C_k is a quotient of C′_k; it can be constructed from C′_k by identifying the four elements 0Φ0, 0Φ1, 1Φ0, and 1Φ1. Also, if k ≤ m, then C′_k is a quotient of C′_m.

Proposition 9.2  C′_k is isomorphic to a subdirect product (i.e., is a subalgebra of the direct product) of two copies of the 2-element Boolean algebra B and Algebra C_k.

Proof: This follows from the fact that ≈_k = ∼_k ∩ λ ∩ ρ, and that C/λ and C/ρ are both isomorphic to B.

Let T′_k = (T_k \ {Φ}) ∪ {0Φ0, 0Φ1, 1Φ0, 1Φ1}. We can extend any Boolean function B^n → B to a function (T′_k)^n → T′_k by using the congruence property of ≈_k.

Example 12  Consider the circuit consisting of a 2-input xor gate with output s and inputs X and s. If the circuit is started in state X = 0, s = 0, it is stable. The simulation in C′_2 is shown in Table 14 when the input changes to 0Φ1. This simulation does not terminate.

Table 14: Simulation with initial, middle, and final values.

                     X      s
    initial state    0      0
                     0Φ1    0
                     0Φ1    0Φ1
                     0Φ1    0Φ0
                     0Φ1    0Φ1
                     ...    ...
Conclusions

We conclude the paper with a short summary of our results. Our main contribution is a general treatment of signal changes and hazards that encompasses the existing methods and permits a systematic study and comparison of these approaches. Some detailed properties of our method are highlighted below.

• Hazards
  We have presented a general theory of simulation of gate circuits for the purpose of hazard detection, identification, and counting.

• Energy Estimation
  The same simulation algorithms can be used to count the number of signal changes during a given input change of a circuit. This provides an estimate of the worst-case energy consumption of that input change.

• Efficiency
  If a circuit has m inputs and n gates, our simulation algorithms run in O(m + n²) time.

• Accuracy
  By choosing the value of the threshold k one can count signal changes and hazards to any degree of accuracy.

• Feedback-Free Circuits
  – In Algebra C, Algorithm A always terminates, and Algorithm B is not required.
  – In Algebra C_k, Algorithm A produces a result without Φs if k is sufficiently large. In that case, Algorithm B is not needed.
  – Simulation with (initial, middle, final) values terminates in Algebra C′_k.

• Circuits with Feedback
  – In Algebra C, Algorithm A may not terminate.
  – Simulation in Algebra C_k always terminates.
  – Simulation with (initial, middle, final) values may not terminate; hence, it is not suitable for such circuits.

• Multivalued Algebras
  Many known algebras are included as special cases in our theory:
  – Ternary algebra is isomorphic to Algebra C_2.
  – The quinary algebra of Lewis [17] is isomorphic to Algebra C_3.
  – The 6-valued algebra H6 of Hayes [16] is isomorphic to C′_2.
  – The 8-valued algebra of Breuer and Harrison [1] and Fantauzzi [12, 16] is isomorphic to C′_3.

• Decision Problems
  – For a given circuit in a stable initial state, a given input change, and an integer k, it is decidable in polynomial time whether there are k or more (unwanted) signal changes on the output of any given gate, or in the entire circuit, during that input change.
  – For a given circuit in a stable initial state and an integer k, it is decidable in nondeterministic polynomial time whether there exists an input change that would cause k or more (unwanted) signal changes on the output of any given gate, or in the entire circuit.

Characterizations of the simulation results described here in terms of the results of binary analysis, extensions to circuits started in unstable states, and other related results will be presented in a companion paper currently in preparation.

Acknowledgement

The authors thank Steve Nowick for his careful reading of our paper and for his constructive comments.

Appendix

Additional Properties of Algebra C

We now state without proof some basic properties of C. Each word in T is generated by the two transients 01 and 10 representing up and down changes. Since C is a bisemigroup, both operations are used. The word 0 is the trivial sum of zero generators, and 1 is the trivial product of zero generators (this agreement is standard in algebra, and is analogous to considering any nonnegative integer n as a sum of n 1s; then the integer 0 is the sum of zero 1s). Words 10 and 01 are the generators themselves. Next, 010 = 01 ⊗ 10, 101 = 10 ⊕ 01, 0101 = 010 ⊕ 01 = 01 ⊗ 10 ⊕ 01, etc.

We refer to any expression of the form 10 ⊕ 01 ⊗ 10 ⊕ 01 … or 01 ⊗ 10 ⊕ 01 ⊗ 10 … as an alternation of generators. One can verify that the order in which the operations are applied is immaterial; for example, (10 ⊕ 01) ⊗ (10 ⊕ 01) = ((10 ⊕ 01) ⊗ 10) ⊕ 01 = (10 ⊕ (01 ⊗ 10)) ⊕ 01. Every word t ∈ T is an alternation of generators. In general, we can obtain this representation of t as follows. Duplicate all the letters in t, except the first and the last. This expresses t as a concatenation of up and down changes. Insert ⊕ between every pair of 0s, and ⊗ between every pair of 1s. The resulting expression is a representation of t in terms of the two generators. If l(t) = n, then t is an alternation of n − 1 generators. For example, let t = 01010101. We first find 01100110011001 and then insert the operations, to obtain 01 ⊗ 10 ⊕ 01 ⊗ 10 ⊕ 01 ⊗ 10 ⊕ 01.

[Transition graph with four states labeled by the regular expressions 0(10)*, 0(10)*1, 1(01)*, and 1(01)*0; each edge is labeled by an operation (⊕ or ⊗) and a generator (01 or 10).]

Figure 9: Operations in C.
Figure 9 shows how addition of a generator or multiplication by a generator affects any word in T. The figure shows a transition graph with four states, each labeled by a regular expression. The languages denoted by the four regular expressions are pairwise disjoint. Given a word t ∈ T, start in that state q whose expression contains t. To find t ◦ g, where ◦ is ⊕ or ⊗ and g is a generator, look for a transition labeled ◦g. If there is no such transition, then t ◦ g = t. Otherwise, follow the transition indicated; let the state reached be q′. Then t ◦ g is the word of length l(t) + 1 that is contained in the regular expression of q′. For example, 1010 belongs to state 1(01)*0. From that state we find 1010 ⊕ 01 = 10101, 1010 ⊕ 10 = 1010, 1010 ⊗ 01 = 01010, and 1010 ⊗ 10 = 1010.

From the graph of Fig. 9 one can also read off t ⊕ t′ and t ⊗ t′ for any two words t, t′ ∈ T. Start in the state containing t, and follow the transitions in the alternation of t′. For example, consider t = 1010 and t′ = 010. The alternation of t′ is 01 ⊗ 10. We start in the state labeled 1(01)*0, which contains 1010. To find t ⊗ t′ we first look for ⊗01; this leads us to 01010. Next we look for ⊗10; there is no change of state. Hence, the result is 01010.
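The same procedure can be coded directly. In the sketch below (our own names; the prepend/append reading of the graph is our derivation from the ⊕ and ⊗ formulas of Section 7, and the second operand is assumed to have length at least 2), combine(t, op, s) walks the alternation of s starting from t:

```python
# A sketch of the Figure 9 procedure: each generator affects a word only
# through its first or last letter, by prepending or appending one letter.

def step(t, op, g):
    """Apply one generator g ('01' or '10') to t; op is '+' (sum) or '*' (product)."""
    if op == '+':
        if g == '01' and t[-1] == '0': return t + '1'
        if g == '10' and t[0] == '0':  return '1' + t
    else:
        if g == '10' and t[-1] == '1': return t + '0'
        if g == '01' and t[0] == '1':  return '0' + t
    return t                               # no matching transition: unchanged

def alternation(s):
    """Generators of s and the operations between them: '+' between two 0s,
    '*' between two 1s (assumes len(s) >= 2)."""
    gens, ops = [s[0:2]], []
    for a, b in zip(s[1:-1], s[2:]):
        ops.append('+' if a == '0' else '*')
        gens.append(a + b)
    return gens, ops

def combine(t, op, s):
    """Compute the sum ('+') or product ('*') of t and s via the alternation of s."""
    gens, ops = alternation(s)
    for o, g in zip([op] + ops, gens):
        t = step(t, o, g)
    return t

print(combine('1010', '*', '010'))   # 01010, as in the example above
print(combine('1010', '+', '010'))   # 101010
```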
Proof of Proposition 7.4: Let x_i be one of the longest of the words x_1, …, x_n ∈ T, and let m denote the length of x_i. For each k ∈ [m], let z_k denote the prefix of x_i of length k. Since f depends on its ith argument, there exist b_j ∈ B, j ≠ i, such that

f(b_1, …, b_{i−1}, 0, b_{i+1}, …, b_n) ≠ f(b_1, …, b_{i−1}, 1, b_{i+1}, …, b_n).   (4)

Since none of the x_j is a single letter, there is a vertex in D(x) of the form

(y_1, …, y_{i−1}, z_1, y_{i+1}, …, y_n)

such that ω(y_j) = b_j, for all j ≠ i. It follows that, for each k ∈ [m],

v_k = (y_1, …, y_{i−1}, z_k, y_{i+1}, …, y_n)

is a vertex of D(x); moreover, there is a path p from v_1 to v_m whose internal vertices are v_2, …, v_{m−1}. By (4), the label sequence associated with this path is alternating. Thus, the contraction of the label sequence of any path from (α(x_1), …, α(x_n)) to (x_1, …, x_n) which contains p as a subpath has length at least m.
Proof of Lemma 7.5: Any vertex (y_1, …, y_n) ∈ D(x), where x = (x_1, …, x_n), has the single letter b as its ith component, i.e., y_i = b. Thus, denoting x′ = (x_1, …, x_{i−1}, x_{i+1}, …, x_n), it holds that (y_1, …, y_{i−1}, y_{i+1}, …, y_n) ∈ D(x′) and

f(ω(y_1), …, ω(y_n)) = f_{i,b}(ω(y_1), …, ω(y_{i−1}), ω(y_{i+1}), …, ω(y_n)).   (5)

Hence any label sequence corresponding to a path from α(x) = (α(x_1), …, α(x_n)) to x in D(x) determined by function f is also the label sequence of a path from α(x′) = (α(x_1), …, α(x_{i−1}), α(x_{i+1}), …, α(x_n)) to x′ in D(x′) determined by function f_{i,b}. Conversely, if the (n − 1)-tuple (y_1, …, y_{i−1}, y_{i+1}, …, y_n) is in D(x′), then

(y_1, …, y_{i−1}, b, y_{i+1}, …, y_n)

is in D(x). It follows that any word which is the label sequence of a path from α(x′) to x′ in D(x′) determined by f_{i,b} is a label sequence of a path from α(x) to x in D(x) determined by f.
Proof of Lemma 7.6: If f : B^n → B does not depend on its ith argument, then for all x_1, …, x_n ∈ T, for any (y_1, …, y_n) in D(x), where x = (x_1, …, x_n), and for any b ∈ B,

f(ω(y_1), …, ω(y_n)) = f(ω(y_1), …, ω(y_{i−1}), b, ω(y_{i+1}), …, ω(y_n)).

It follows that

f̂(x_1, …, x_n) = f̂(x_1, …, x_{i−1}, b, x_{i+1}, …, x_n),

proving that f̂ is independent of x_i.

References

[1] M. A. Breuer and L. Harrison, "Procedures for Eliminating Static and Dynamic Hazards in Test Generation," IEEE Trans. on Computers, vol. C-23, pp. 1069–1078, October 1974.
[2] J. A. Brzozowski, "A Survey of Regular Expressions and Their Applications," IRE Trans. on Electronic Computers, vol. EC-11, no. 3, pp. 324–335, 1962.
[3] J. A. Brzozowski and C-J. H. Seger, Asynchronous Circuits, Springer-Verlag, 1995.
[4] J. A. Brzozowski, "Some Applications of Ternary Algebras," Publicationes Mathematicae (Debrecen), vol. 54, Supplement, pp. 583–599, 1999.
[5] J. A. Brzozowski, "De Morgan Bisemilattices," Proc. 30th Int. Symp. on Multiple-Valued Logic, Portland, OR, pp. 173–178, IEEE Computer Society Press, Los Alamitos, CA, May 2000.
[6] J. A. Brzozowski and Z. Ésik, "Hazard Algebras" (Extended Abstract), A Half-Century of Automata Theory, A. Salomaa, D. Wood, and S. Yu, eds., pp. 1–19, World Scientific, Singapore, 2001.
[7] J. A. Brzozowski, Z. Ésik, and Y. Iland, "Algebras for Hazard Detection," Proc. 31st Int. Symp. on Multiple-Valued Logic, Warsaw, Poland, pp. 3–12, IEEE Computer Society Press, Los Alamitos, CA, May 2001.
[8] S. Burris and H. P. Sankappanavar, A Course in Universal Algebra, Springer-Verlag, 1981.
[9] S. Chakraborty and D. L. Dill, "More Accurate Polynomial-Time Min-Max Simulation," Proc. 3rd Int. Symp. on Advanced Research in Asynchronous Circuits and Systems, Eindhoven, The Netherlands, pp. 112–123, IEEE Computer Society Press, Los Alamitos, CA, April 1997.
[10] B. A. Davey and H. A. Priestley, Introduction to Lattices and Order, Cambridge University Press, 1990.
[11] E. B. Eichelberger, "Hazard Detection in Combinational and Sequential Switching Circuits," IBM J. Research and Development, vol. 9, pp. 90–99, March 1965.
[12] G. Fantauzzi, "An Algebraic Model for the Analysis of Logic Circuits," IEEE Trans. on Computers, vol. C-23, pp. 576–581, June 1974.
[13] M. Gheorghiu, Circuit Simulation Using a Hazard Algebra, MMath Thesis, Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1, December 2001. http://maveric.uwaterloo.ca/publication.html
[14] M. Goto, "Application of Three-Valued Logic to Construct the Theory of Relay Networks" (in Japanese), Proceedings of the Joint Meeting of IEE, IECE, and I. of Illumination E. of Japan, 1948.
[15] G. Grätzer, Universal Algebra, Second Edition, Springer-Verlag, 1979.
[16] J. P. Hayes, "Digital Simulation with Multiple Logic Values," IEEE Trans. on Computer-Aided Design, vol. CAD-5, no. 2, April 1986.
[17] D. W. Lewis, Hazard Detection by a Quinary Simulation of Logic Devices with Bounded Propagation Delays, MS Thesis, Electrical Engineering, Syracuse University, Syracuse, NY, January 1972.
[18] D. E. Muller, A Theory of Asynchronous Circuits, Technical Report 66, Digital Computer Laboratory, University of Illinois, Urbana-Champaign, Illinois, USA, 1955.
[19] D. E. Muller and W. S. Bartky, "A Theory of Asynchronous Circuits," in Proceedings of an International Symposium on the Theory of Switching, Annals of the Computation Laboratory of Harvard University, Harvard University Press, pp. 204–243, 1959.
[20] A. Salomaa, Theory of Automata, Pergamon Press, Oxford, 1969.
[21] S. H. Unger, Asynchronous Sequential Switching Circuits, Wiley-Interscience, 1969.

