Abhimanyu Nagrath (10115001), Abhishek Chauhan (10115002), Abhishek Kumar (10115003), Achal Agrawal (10115004), Adarsh Singh (10115005), Akansha Chandrakar (10115006)
Contents
I. Definitions
II. Basic functions
III. Function calls, parentheses, and blanks
IV. COND
V. How to write functions
VI. The logic of functions
VII. Examples
VIII. Additional built-in functions
IX. Auxiliary Functions and Accumulator Variables
    Factorial Revisited
X. Tail Recursions
XI. Functions as First-Class Objects
    Higher-Order Functions
    Lambda Expressions
XII. Iterating Through a List
    Search Iteration
    Filter Iteration
XIII. Functions Returning Multiple Values
XIV. Abstract Data Types
XV. Implementing Various Constructs
    Binary Trees
    Searching Binary Trees
    Traversing Binary Trees
    Binary Search Trees
    Polynomials
    Tower of Hanoi
    Various Useful Functions
References
I. Definitions
A recursive definition is a definition in which (1) certain things are specified as belonging to the category being defined, and (2) a rule or rules are given for building new things in the category from other things already known to be in the category. An atom is either an integer or an identifier. A list is a left parenthesis, followed by zero or more S-expressions, followed by a right parenthesis. An S-expression is an atom or a list. NIL means "false". NIL is also the name for the empty list, and may be written as (). In addition, NIL is unique in that it is at once both an atom and a list (just learn this, don't try to make it make sense). T means "true", but actually anything that is not NIL can be used to mean "true".
NULL of an S-expression is "true" if the S-expression is the empty list (that is, NIL), and NIL otherwise.
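As a brief illustration (an example added here, not from the original text), NULL distinguishes the empty list from everything else:

```lisp
(NULL '())      ; returns T, since () is NIL
(NULL '(A B))   ; returns NIL, since the list is nonempty
(NULL 'FOO)     ; returns NIL: FOO is an atom, but not NIL
```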
IV. COND
COND is an unusual function which may take any number of arguments. Each argument is called a clause, and consists of a list of exactly two S-expressions. We will call the first S-expression in a clause a condition, and the second S-expression a result. Thus, a call to COND looks like this:

(COND (condition1  result1)
      (condition2  result2)
      ...
      (T           resultN) )
The value returned by COND is computed as follows: if condition1 is true (not NIL), then return result1; else if condition2 is true then return result2; else if ...; else return resultN. In most LISP systems, it is an error if none of the conditions are true, and the result of the COND is undefined. For this reason, T is usually used as the final condition.
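As a small illustration (an example added here, not from the original text), a COND that classifies an S-expression S, with T as the catchall clause:

```lisp
(COND ((NULL S)  'EMPTY-LIST)     ; S is NIL
      ((ATOM S)  'ATOM)           ; previous test failed, so S is a non-NIL atom
      (T         'NONEMPTY-LIST)) ; otherwise S must be a nonempty list
```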
V. How to write functions

A function definition has the form

(DEFUN function_name parameter_list function_body)

where function_name is an identifier, parameter_list is a list (possibly empty) of identifiers, and function_body is an S-expression.
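For instance (an illustrative example with a hypothetical function name, not from the original text):

```lisp
;; Define a function SQUARE of one parameter N whose body multiplies N by itself.
(DEFUN SQUARE (N)
  (* N N))

(SQUARE 5)   ; returns 25
```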
When writing functions, clarity is important, especially with all those parentheses to keep track of. Here are some hints that you may find to be helpful.
- When writing a function, begin the first line (containing the name of the function) in column 1. Start all other lines indented by about 10 spaces, so it is easy to scan down the function names.
- When writing a conditional, put the word COND on a line all by itself. Put only one clause per line, starting each clause directly under the COND. If you nest a second COND inside the first, indent the nested COND.
- Separate the condition and the result of a clause with extra blanks. If a clause takes more than one line, try to break between the condition and the result.
- Group closing parentheses according to the lines on which the opening parentheses occurred, with blanks between groups. For example, if the first line of a function has two open parentheses which are not closed on the same line, then the last line should have the two matching close parentheses together, separated from any other parentheses by blanks. If you put COND on a line by itself, then its closing parenthesis should be in a group by itself.
      ((ATOM (CAR L))  (REMATOMS (CDR L)))
      (T               (CONS (CAR L) (REMATOMS (CDR L)))) ) )

Rule 3c: To transform the elements of a list, CONS the transformed CAR onto the result of recurring with the CDR. When you want to perform an operation on every element of a list, and form a list of the results, you can do this by performing the operation on the CAR, recurring with the CDR, then CONSing the modified CAR onto the modified CDR. As an example, to add 1 to every element of a list of numbers (using the 1+ function):

(DEFUN ADDONE (L)
  (COND ((NULL L)  L)
        (T         (CONS (1+ (CAR L)) (ADDONE (CDR L)))) ) )

Rule 4: In each case of a COND you can use the fact that all previous tests have failed. Programming in LISP is largely a matter of enumerating all the possible cases and writing a clause to deal with each. In each clause you know that the tests of all the preceding clauses failed, and you write a test to bite off a small section of the possibilities that are left. Be sure that you always check the arguments of a function for legality before you call the function with them. For example, don't call (CAR L) until you know that L is a nonempty list. Don't call (EQ X Y) until you know that X and Y are atoms.

Rule 5: Use T as the last test in a COND. Your COND should cover all possible cases, and return a value for each case. Using T for the last test makes this a "catchall" case. It's poor practice to assume you can think of all possible cases.
Where algorithmic languages use assignment statements, sequential execution of statements, and loops, LISP uses function composition and recursion. Programmers used to an algorithmic language may therefore find LISP to be difficult and confusing. It takes some experience with the language before the beauty of LISP begins to be apparent.
Most implementations of LISP provide constructs similar to those found in algorithmic languages. These are best avoided by the beginning LISP programmer, as they allow the novice to "write Fortran programs in LISP" and never fully master the LISP approach to programming.
VII. Examples
MEMBER

Given an atom A and a list of atoms LAT, determine whether A occurs in LAT:

(DEFUN MEMBER (A LAT)
  (COND ((NULL LAT)        NIL)
        ((EQ A (CAR LAT))  T)
        (T                 (MEMBER A (CDR LAT))) ) )
(Actually, MEMBER is already defined in LISP. Older LISP systems will let you redefine built-in functions, and will use your function in preference to the built-in version. The only Common LISP system that I have used will let you redefine built-in functions, but then gets horribly confused. In any case, redefining built-in functions is generally a bad idea.)

UNION

Define a set to be a list of atoms, such that no atom is ever repeated, and the order of atoms does not matter. The following function computes the union of two sets, that is, a third set containing any atom found in either or both of the two given sets.

(DEFUN UNION (SET1 SET2)
  (COND ((NULL SET1)                SET2)
        ((MEMBER (CAR SET1) SET2)   (UNION (CDR SET1) SET2))
        (T                          (CONS (CAR SET1) (UNION (CDR SET1) SET2))) ) )
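As an illustration (an example call added here, not from the original text), a union of two overlapping sets; the exact ordering of the result depends on the definition used:

```lisp
;; Atoms of SET1 not already in SET2 are CONSed onto the front of SET2,
;; so one plausible result is:
(UNION '(A B C) '(B C D))   ; e.g. (A B C D)
```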
(REVERSE L)      Return a list containing the same elements as L but in reverse order.
(LENGTH L)       Return the length of L, that is, the number of top-level elements in L.

Predicates (tests):

(LISTP S)        True if S is a list.
(NUMBERP S)      True if S is a number.
(NOT S)          True if S is false, and false if S is true. Same as (NULL S).
(EQUAL S1 S2)    True if S1 and S2 are equal. Like EQ, but may be used for anything.
(ZEROP N)        True if number N is zero.
(PLUSP N)        True if number N is positive.
(MINUSP N)       True if number N is negative.
(EVENP N)        True if integer N is even.
(ODDP N)         True if integer N is odd.

Arithmetic operations:

(+ N1 N2 ...)    Returns the sum of the numbers.
(- N1 N2 ...)    Returns the result of subtracting all subsequent numbers from N1.
(* N1 N2 ...)    Returns the product of all the numbers.
(/ N1 N2 ...)    Returns the result of dividing N1 by all subsequent numbers.
(1+ N)           Returns N plus one. (Note that there is no space between the "1" and the "+".)
(1- N)           Returns N minus one. (Note that there is no space between the "1" and the "-".)
(/ N)            Returns the reciprocal of N.

Input/output:

(LOAD F)         Load the source file whose name (without extensions) is F.
(DRIBBLE F)      Causes the current session to be recorded in the file whose name (without extensions) is F. To stop recording, call (DRIBBLE) with no parameters. Not available on all systems.
(PRIN1 S)        Print, on the current line, the result of evaluating the S-expression S.
(TERPRI)         Print a newline.
So, why does (slow-list-reverse L) return the reversal of L? The list L is either nil or constructed by cons:
Case 1: L is nil. The reversal of L is simply nil.

Case 2: L is constructed from a call to cons. Then L has two components: (first L) and (rest L). If we append (first L) to the end of the reversal of (rest L), then we obtain the reversal of L. Of course, we could make use of list-append to do this. However, list-append expects two list arguments, so we need to construct a singleton list containing (first L) before we pass it as a second argument to list-append.
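The definition of slow-list-reverse is given earlier in the tutorial, outside this excerpt; judging from the case analysis above, it is presumably along these lines (a sketch, assuming a two-argument list-append as described):

```lisp
(defun slow-list-reverse (L)
  "Create a new list containing the elements of L in reversed order."
  (if (null L)
      nil                                        ; Case 1: the reversal of nil is nil
      (list-append (slow-list-reverse (rest L))  ; Case 2: reverse the rest,
                   (list (first L)))))           ; then append the singleton (first L)
```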
Let us trace the execution of the function to see how the recursive calls unfold:

USER(3): (trace slow-list-reverse)
(SLOW-LIST-REVERSE)
USER(4): (slow-list-reverse '(1 2 3 4))
0: (SLOW-LIST-REVERSE (1 2 3 4))
 1: (SLOW-LIST-REVERSE (2 3 4))
  2: (SLOW-LIST-REVERSE (3 4))
   3: (SLOW-LIST-REVERSE (4))
    4: (SLOW-LIST-REVERSE NIL)
    4: returned NIL
   3: returned (4)
  2: returned (4 3)
 1: returned (4 3 2)
0: returned (4 3 2 1)
(4 3 2 1)

Everything looks fine, until we also trace the unfolding of list-append:

USER(9): (trace list-append)
(LIST-APPEND)
USER(10): (slow-list-reverse '(1 2 3 4))
0: (SLOW-LIST-REVERSE (1 2 3 4))
 1: (SLOW-LIST-REVERSE (2 3 4))
  2: (SLOW-LIST-REVERSE (3 4))
   3: (SLOW-LIST-REVERSE (4))
    4: (SLOW-LIST-REVERSE NIL)
    4: returned NIL
    4: (LIST-APPEND NIL (4))
    4: returned (4)
   3: returned (4)
   3: (LIST-APPEND (4) (3))
    4: (LIST-APPEND NIL (3))
    4: returned (3)
   3: returned (4 3)
  2: returned (4 3)
  2: (LIST-APPEND (4 3) (2))
   3: (LIST-APPEND (3) (2))
    4: (LIST-APPEND NIL (2))
    4: returned (2)
   3: returned (3 2)
  2: returned (4 3 2)
 1: returned (4 3 2)
 1: (LIST-APPEND (4 3 2) (1))
  2: (LIST-APPEND (3 2) (1))
   3: (LIST-APPEND (2) (1))
    4: (LIST-APPEND NIL (1))
    4: returned (1)
   3: returned (2 1)
  2: returned (3 2 1)
 1: returned (4 3 2 1)
0: returned (4 3 2 1)
(4 3 2 1)

What we see here is revealing: given a list of N elements, slow-list-reverse makes O(N) recursive calls, with each level of recursion involving a call to the linear-time function list-append. The result is that slow-list-reverse is an O(N^2) function. We can in fact build a much more efficient version of reverse using auxiliary functions and accumulator variables:

(defun list-reverse (L)
  "Create a new list containing the elements of L in reversed order."
  (list-reverse-aux L nil))

(defun list-reverse-aux (L A)
  "Append list A to the reversal of list L."
  (if (null L)
      A
      (list-reverse-aux (rest L) (cons (first L) A))))

The function list-reverse-aux is an auxiliary function (or a helper function). It does not perform any useful function by itself, but the driver function list-reverse uses it as a tool when building a reversal. Specifically, (list-reverse-aux L A) returns a new list obtained by appending list A to the reversal of list L. By passing nil as A to list-reverse-aux, the driver function list-reverse obtains the reversal of L.
Let us articulate why (list-reverse-aux L A) correctly appends A to the reversal of list L. Again, we know that either L is nil or it is constructed by cons:
Case 1: L is nil. The reversal of L is simply nil. The result of appending A to the end of an empty list is simply A itself.

Case 2: L is constructed by cons. Now L is composed of two parts: (first L) and (rest L). Observe that (first L) is the last element in the reversal of L. If we are to append A to the end of the reversal of L, then (first L) will come immediately before the elements of A. Observing the above, we recognize that we obtain the desired result by recursively appending (cons (first L) A) to the reversal of (rest L).
Tracing both list-reverse and list-reverse-aux, we get the following:

USER(17): (trace list-reverse list-reverse-aux)
(LIST-REVERSE LIST-REVERSE-AUX)
USER(18): (list-reverse '(1 2 3 4))
0: (LIST-REVERSE (1 2 3 4))
 1: (LIST-REVERSE-AUX (1 2 3 4) NIL)
  2: (LIST-REVERSE-AUX (2 3 4) (1))
   3: (LIST-REVERSE-AUX (3 4) (2 1))
    4: (LIST-REVERSE-AUX (4) (3 2 1))
     5: (LIST-REVERSE-AUX NIL (4 3 2 1))
     5: returned (4 3 2 1)
    4: returned (4 3 2 1)
   3: returned (4 3 2 1)
  2: returned (4 3 2 1)
 1: returned (4 3 2 1)
0: returned (4 3 2 1)
(4 3 2 1)

For each recursive call to list-reverse-aux, notice how the first element of L is "peeled off", and is then "accumulated" in A. Because of this observation, we call the variable A an accumulator variable.

Factorial Revisited

To better understand how auxiliary functions and accumulator variables are used, let us revisit the problem of computing factorials. The following is an alternative implementation of the factorial function:

(defun fast-factorial (N)
  "A tail-recursive version of factorial."
  (fast-factorial-aux N 1))
(defun fast-factorial-aux (N A)
  "Multiply A by the factorial of N."
  (if (= N 1)
      A
      (fast-factorial-aux (- N 1) (* N A))))

Let us defer the explanation of why the function is named "fast-factorial", and treat it as just another way to implement factorial. Notice the structural similarity between this pair of functions and those for computing list reversal. The auxiliary function (fast-factorial-aux N A) computes the product of A and the N'th factorial. The driver function computes N! by calling fast-factorial-aux with A set to 1. Now, the correctness of the auxiliary function (i.e. that (fast-factorial-aux N A) indeed returns the product of N! and A) can be established as follows. N is either one or larger than one.
Case 1: N = 1. The product of A and 1! is simply A * 1! = A.

Case 2: N > 1. Since N! = N * (N-1)!, we then have N! * A = (N-1)! * (N * A), thus justifying our implementation.
Tracing both fast-factorial and fast-factorial-aux, we get the following:

USER(3): (trace fast-factorial fast-factorial-aux)
(FAST-FACTORIAL-AUX FAST-FACTORIAL)
USER(4): (fast-factorial 4)
0: (FAST-FACTORIAL 4)
 1: (FAST-FACTORIAL-AUX 4 1)
  2: (FAST-FACTORIAL-AUX 3 4)
   3: (FAST-FACTORIAL-AUX 2 12)
    4: (FAST-FACTORIAL-AUX 1 24)
    4: returned 24
   3: returned 24
  2: returned 24
 1: returned 24
0: returned 24
24

If we compare the structure of fast-factorial with list-reverse, we notice certain patterns underlying the use of accumulator variables in auxiliary functions:

1. An auxiliary function generalizes the functionality of the driver function by promising to compute the function of interest and also combine the result
with the value of the accumulator variable. In the case of list-reverse-aux, our original interest was computing list reversals, but the auxiliary function computes a more general concept, namely, that of appending an auxiliary list to some list reversal. In the case of fast-factorial-aux, our original interest was computing factorials, but the auxiliary function computes a more general value, namely, the product of some auxiliary number with a factorial.

2. At each level of recursion, the auxiliary function reduces the problem to a smaller subproblem, and accumulates intermediate results in the accumulator variable. In the case of list-reverse-aux, recursion is applied to the sublist (rest L), while (first L) is cons'ed with A. In the case of fast-factorial-aux, recursion is applied to (N - 1)!, while N is multiplied with A.

3. The driver function initiates the recursion by providing an initial value for the accumulator variable. In the case of computing list reversals, list-reverse initializes A to nil. In the case of computing factorials, fast-factorial initializes A to 1.

Now that you understand how fast-factorial works, we explain where the adjective "fast" comes from ...
X. Tail Recursions
Recursive functions are usually easier to reason about. Notice how we articulate the correctness of recursive functions in this and the previous tutorial. However, some naive programmers complain that recursive functions are slow when compared to their iterative counterparts. For example, consider the original implementation of factorial we saw in the previous tutorial:

(defun factorial (N)
  "Compute the factorial of N."
  (if (= N 1)
      1
      (* N (factorial (- N 1)))))

It is fair to point out that, as recursion unfolds, stack frames will have to be set up, function arguments will have to be pushed onto the stack, and so on, resulting in unnecessary runtime overhead not experienced by the iterative counterpart of the above factorial function:

int factorial(int N) {
    int A = 1;
    while (N != 1) {
        A = A * N;
        N = N - 1;
    }
    return A;
}

Because of this and other excuses, programmers conclude that they could write off recursive implementations ... Modern compilers for functional programming languages usually implement tail-recursive call optimizations which automatically translate a certain kind of linear recursion into efficient iterations. A linear recursive function is tail-recursive if the result of each recursive call is returned right away as the value of the function. Let's examine the implementation of fast-factorial again:

(defun fast-factorial (N)
  "A tail-recursive version of factorial."
  (fast-factorial-aux N 1))

(defun fast-factorial-aux (N A)
  "Multiply A by the factorial of N."
  (if (= N 1)
      A
      (fast-factorial-aux (- N 1) (* N A))))

Notice that, in fast-factorial-aux, there is no work left to be done after the recursive call (fast-factorial-aux (- N 1) (* N A)). Consequently, the compiler will not create a new stack frame or push arguments, but will instead simply bind (- N 1) to N and (* N A) to A, and jump to the beginning of the function. Such optimization effectively renders fast-factorial as efficient as its iterative counterpart. Notice also the striking structural similarity between the two. When you implement a linearly recursive function, you are encouraged to restructure it as a tail recursion after you have fully debugged your implementation. Doing so allows the compiler to optimize away stack management code. However, you should do so only after you get the prototype function correctly implemented. Notice that the technique of accumulator variables can be used even when we are not transforming code to tail recursions. For some problems, the use of accumulator variables offers the most natural solutions.
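As a further illustration of the same driver/auxiliary pattern (an example added here, not from the original tutorial), summing a list of numbers can be written tail-recursively with an accumulator:

```lisp
(defun list-sum (L)
  "Compute the sum of the numbers in list L."
  (list-sum-aux L 0))          ; driver initializes the accumulator to 0

(defun list-sum-aux (L A)
  "Add the sum of the numbers in list L to A."
  (if (null L)
      A                                          ; nothing left: the accumulator is the answer
      (list-sum-aux (rest L) (+ (first L) A))))  ; tail call: fold (first L) into A

;; (list-sum '(1 2 3 4)) returns 10
```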
Exercise: Recall that the N'th triangular number is defined to be 1 + 2 + 3 + ... + N. Give a tail-recursive implementation of the function (fast-triangular N) which returns the N'th triangular number.
Exercise: Give a tail-recursive implementation of the function (fast-power B E) that raises B to the power E (assuming that both B and E are non-negative integers).
Exercise: Give a tail-recursive implementation of the function (fast-list-length L), which returns the length of a given list L.
(defun repeat-transformation (F N X)
  "Apply the function F to X repeatedly, N times in all."
  (if (zerop N)
      X
      (repeat-transformation F (1- N) (funcall F X))))

The definition follows the standard tail-recursive pattern. Notice the form (funcall F X). Given a function F and objects X1 X2 ... Xn, the form (funcall F X1 X2 ... Xn) invokes the function F with arguments X1, X2, ..., Xn. The variable N is a counter keeping track of the remaining number of times we need to apply function F to the accumulator variable X. To pass the function double as an argument to repeat-transformation, we need to annotate the function name double with a closure constructor, as in the following:

USER(11): (repeat-transformation (function double) 4 1)
16

There is nothing magical going on; the closure constructor is just syntax for telling Common LISP that what follows is a function rather than a local variable name. Had we not included the annotation, Common LISP would treat the name double as a variable name, and then report an error since the name double is not defined. To see how the evaluation arrives at the result 16, we could, as usual, trace the execution:

USER(12): (trace repeat-transformation)
REPEAT-TRANSFORMATION
USER(13): (repeat-transformation #'double 4 1)
0: (REPEAT-TRANSFORMATION # 4 1)
 1: (REPEAT-TRANSFORMATION # 3 2)
  2: (REPEAT-TRANSFORMATION # 2 4)
   3: (REPEAT-TRANSFORMATION # 1 8)
    4: (REPEAT-TRANSFORMATION # 0 16)
    4: returned 16
   3: returned 16
  2: returned 16
 1: returned 16
0: returned 16
16

Higher-Order Functions

Notice that exponentiation is not the only use of the repeat-transformation function. Let's say we want to build a list containing 10 occurrences of the symbol Achal. We can do so with the help of repeat-transformation:
USER(30): (defun prepend-achal (L) (cons 'achal L))
PREPEND-ACHAL
USER(31): (repeat-transformation (function prepend-achal) 10 nil)
(ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL)

Suppose we want to fetch the 7'th element of the list (a b c d e f g h i j). Of course, we could use the built-in function seventh to do the job, but for the fun of it, we could also achieve what we want in the following way:

USER(32): (first (repeat-transformation (function rest) 6 '(a b c d e f g h i j)))
G

Basically, we apply rest six times before applying first to get the seventh element. In fact, we could have defined the function list-nth (see previous tutorial) in the following way:

(defun list-nth (N L)
  (first (repeat-transformation (function rest) N L)))

(list-nth numbers the members of a list from zero onwards.) As you can see, functions that accept other functions as arguments are very powerful abstractions. You can encapsulate generic algorithms in such a function, and parameterize its behavior by passing in different function arguments. We call a function that has functional parameters (or returns a function as its value) a higher-order function.

One last point before we move on. The closure constructor function is used very often when working with higher-order functions. Common LISP therefore provides an equivalent syntax to reduce typing. When we want Common LISP to interpret a name F as a function, instead of typing (function F), we can also type the shorthand #'F. The prefix #' is nothing but an alternative syntax for the closure constructor. For example, we could enter the following:

USER(33): (repeat-transformation #'double 4 1)
16
USER(34): (repeat-transformation #'prepend-achal 10 nil)
(ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL)
USER(35): (first (repeat-transformation #'rest 6 '(a b c d e f g h i j)))
G
Lambda Expressions

Some functions, like prepend-achal for example, serve no purpose other than to instantiate the generic algorithm repeat-transformation. It would be tedious if we had to define each of them as a global function using defun before passing it into repeat-transformation. Fortunately, LISP provides a mechanism to help us define functions "in place":

USER(36): (repeat-transformation #'(lambda (L) (cons 'achal L)) 10 nil)
(ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL ACHAL)

The first argument (lambda (L) (cons 'achal L)) is a lambda expression. It designates an anonymous function (nameless function) with one parameter L, which returns as its function value (cons 'achal L). We prefix the lambda expression with the closure constructor #' since we want Common LISP to interpret the argument as a function rather than a call to a function named lambda. Similarly, we could have computed powers as follows:

USER(36): (repeat-transformation #'(lambda (x) (* 2 x)) 4 1)
16

Exercise: Define a function (apply-func-list L X) so that, given a list L of functions and an object X, apply-func-list applies the functions in L to X in reversed order. For example, the following expression

(apply-func-list (list #'double #'list-length #'rest) '(1 2 3))

is equivalent to

(double (list-length (rest '(1 2 3))))
Exercise: Use apply-func-list to compute the following:
1. 10 times the fourth element of the list (10 20 30 40 50),
2. the third element of the second element in the list ((1 2) (3 4 5) (6)),
3. the difference between 10 and the length of (a b c d e f).
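The transcript that follows uses a function mapfirst, whose definition lies outside this excerpt. Judging from its usage, it was presumably defined along these lines (a sketch):

```lisp
(defun mapfirst (F L)
  "Apply function F to every element of list L, returning the list of results."
  (if (null L)
      nil
      (cons (funcall F (first L))    ; transform the CAR
            (mapfirst F (rest L))))) ; recur on the CDR
```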
The functions double-list-elements and reverse-list-elements can be replaced by the following:

USER(18): (mapfirst #'double '(1 2 3 4))
(2 4 6 8)
USER(19): (mapfirst #'reverse '((1 2 3) (a b c) (4 5 6) (d e f)))
((3 2 1) (C B A) (6 5 4) (F E D))

Of course, you could also pass lambda abstractions as arguments:

USER(20): (mapfirst #'(lambda (x) (* x x)) '(1 2 3 4))
(1 4 9 16)

In fact, the higher-order function is so useful that Common LISP defines a function mapcar that does exactly what mapfirst is intended for:

USER(22): (mapcar #'butlast '((1 2 3) (a b c) (4 5 6) (d e f)))
((1 2) (A B) (4 5) (D E))

The reason why it is called mapcar is that the function first was called car in some older dialects of LISP (and rest was called cdr in those dialects; Common LISP still supports car and cdr, but we strongly advise you to stick with the more readable first and rest). We suggest you consider using mapcar whenever you are tempted to write your own list-iterating functions.

The function mapcar is an example of a generic iterator, which captures the generic logic of iterating through a list. If we look at what we do the most when we iterate through a list, we find that the following kinds of iteration occur most frequently in our LISP programs:

1. Transformation iteration: transforming a list by systematically applying a monadic function to the elements of the list.
2. Search iteration: searching for a list member that satisfies a given condition.
3. Filter iteration: screening out all members that do not satisfy a given condition.

As we have already seen, mapcar implements the generic algorithm for performing transformation iteration. In the following, we will look at the analogues of mapcar for the remaining iteration categories.

Search Iteration

Let us begin by writing a function that returns an even element in a list of numbers:
(defun find-even (L)
  "Given a list L of numbers, return the leftmost even member."
  (if (null L)
      nil
      (if (evenp (first L))
          (first L)
          (find-even (rest L)))))

Exercise: Implement a function that, when given a list L of lists, returns a non-empty member of L.
We notice that the essential logic of searching can be extracted into the following definition:

(defun list-find-if (P L)
  "Find the leftmost element of list L that satisfies predicate P."
  (if (null L)
      nil
      (if (funcall P (first L))
          (first L)
          (list-find-if P (rest L)))))

The function list-find-if examines the elements of L one by one, and returns the first one that satisfies predicate P. The function can be used for locating even or non-nil members in a list:

USER(34): (list-find-if #'evenp '(1 3 5 8 11 12))
8
USER(35): (list-find-if #'(lambda (X) (not (null X))) '(nil nil (1 2 3) (4 5)))
(1 2 3)

Common LISP defines a built-in function find-if which is a more general version of list-find-if. It can be used just like list-find-if:

USER(37): (find-if #'evenp '(1 3 5 8 11 12))
8
USER(38): (find-if #'(lambda (X) (not (null X))) '(nil nil (1 2 3) (4 5)))
(1 2 3)
Exercise: Use find-if to define a function that searches among a list of lists for a member that has length at least 3.
Exercise: Use find-if to define a function that searches among a list of lists for a member that contains an even number of elements.
Exercise: Use find-if to define a function that searches among a list of numbers for a member that is divisible by three.
Filter Iteration

Given a list of lists, suppose we want to screen out all the member lists with length less than three. We could do so with the following function:

(defun remove-short-lists (L)
  "Remove all members of L that have length less than three."
  (if (null L)
      nil
      (if (< (list-length (first L)) 3)
          (remove-short-lists (rest L))
          (cons (first L) (remove-short-lists (rest L))))))

To articulate the correctness of this implementation, consider the following. The list L is either nil or constructed by cons.
Case 1: L is nil. Removing short lists from an empty list simply results in an empty list.

Case 2: L is constructed by cons. L has two components: (first L) and (rest L). We have two cases: either (first L) has fewer than 3 members or it has at least 3 members.

Case 2.1: (first L) has fewer than three elements. Since (first L) is short, and will not appear in the result of removing
short lists from L, the latter is equivalent to the result of removing short lists from (rest L).

Case 2.2: (first L) has at least three elements. Since (first L) is not short, and will appear in the result of removing short lists from L, the latter is equivalent to adding (first L) to the result of removing short lists from (rest L).
A typical execution trace is the following:

USER(17): (remove-short-lists '((1 2 3) (1 2) nil (1 2 3 4)))
0: (REMOVE-SHORT-LISTS ((1 2 3) (1 2) NIL (1 2 3 4)))
 1: (REMOVE-SHORT-LISTS ((1 2) NIL (1 2 3 4)))
  2: (REMOVE-SHORT-LISTS (NIL (1 2 3 4)))
   3: (REMOVE-SHORT-LISTS ((1 2 3 4)))
    4: (REMOVE-SHORT-LISTS NIL)
    4: returned NIL
   3: returned ((1 2 3 4))
  2: returned ((1 2 3 4))
 1: returned ((1 2 3 4))
0: returned ((1 2 3) (1 2 3 4))
((1 2 3) (1 2 3 4))

Alternatively, we could have removed short lists using Common LISP's built-in function remove-if:

USER(19): (remove-if #'(lambda (X) (< (list-length X) 3)) '((1 2 3) (1 2) nil (1 2 3 4)))
((1 2 3) (1 2 3 4))

The function (remove-if P L) constructs a new version of list L that contains only members not satisfying predicate P. For example, we can remove all even members from the list (3 6 8 9 10 13 15 18) as follows:

USER(21): (remove-if #'(lambda (X) (zerop (rem x 2))) '(3 6 8 9 10 13 15 18))
(3 9 13 15)

Without remove-if, we would end up having to implement a function like the following:

(defun remove-even (L)
  "Remove all members of L that are even numbers."
  (if (null L)
      nil
      (if (zerop (rem (first L) 2))
          (remove-even (rest L))
          (cons (first L) (remove-even (rest L))))))
Exercise: Demonstrate the correctness of remove-even using arguments you have seen in this tutorial.
Exercise: Observe the recurring pattern in remove-short-lists and remove-even, and implement your own version of remove-if.
We could actually implement list-intersection using remove-if and lambda abstraction:

(defun list-intersection (L1 L2)
  "Compute the intersection of lists L1 and L2."
  (remove-if #'(lambda (X) (not (member X L2))) L1))

In the definition above, the lambda abstraction evaluates to a predicate that returns true if its argument is not a member of L2. Therefore, the remove-if expression removes all elements of L1 that are not members of L2. This gives us precisely the intersection of L1 and L2.
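A brief demonstration (our own, with the definition repeated so the sketch is self-contained) shows that the order of the first argument is preserved:

```lisp
;; list-intersection as defined in the text, built on remove-if.
(defun list-intersection (L1 L2)
  "Compute the intersection of lists L1 and L2."
  (remove-if #'(lambda (X) (not (member X L2))) L1))

;; 1 and 3 are not members of (2 4 6), so they are removed from L1.
(print (list-intersection '(1 2 3 4) '(2 4 6)))  ; prints (2 4)
```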
Exercise: Look up the functionality of remove-if-not in CLTL2. Reimplement list-intersection using remove-if-not and lambda abstraction.
(make-empty-set) creates an empty set.
(set-insert S E) returns a set containing all members of set S plus an additional member E.
(set-remove S E) returns a set containing all members of set S except for E.
(set-member-p S E) returns true if E is a member of set S.
(set-empty-p S) returns true if set S is empty.
To implement an abstract data type, we need to decide on a representation. Let us represent a set by a list with no repeated members.

(defun make-empty-set ()
  "Create an empty set."
  nil)

(defun set-insert (S E)
  "Return a set containing all the members of set S plus the element E."
  (adjoin E S :test #'equal))

(defun set-remove (S E)
  "Return a set containing all the members of set S except for element E."
  (remove E S :test #'equal))

(defun set-member-p (S E)
  "Return non-NIL if set S contains element E."
  (member E S :test #'equal))

(defun set-empty-p (S)
  "Return true if set S is empty."
  (null S))

Exercise: Look up the definitions of adjoin, remove and member in CLTL2. In particular, find out how the :test keyword is used to specify the equality test function to be used by the three functions. What would happen if we omitted the :test keyword and the subsequent #'equal when invoking the three functions?
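A short interactive sketch (our own, with the definitions repeated so it is self-contained) shows the interface in action. Note that set-member-p, like the underlying member, returns a non-NIL tail of the list rather than T:

```lisp
;; Set ADT from the text, repeated so this sketch runs on its own.
(defun make-empty-set () "Create an empty set." nil)
(defun set-insert (S E) (adjoin E S :test #'equal))
(defun set-remove (S E) (remove E S :test #'equal))
(defun set-member-p (S E) (member E S :test #'equal))
(defun set-empty-p (S) (null S))

(let* ((s0 (make-empty-set))
       (s1 (set-insert (set-insert s0 'a) 'b)))
  (print (set-empty-p s0))                   ; prints T
  (print (not (null (set-member-p s1 'a))))  ; prints T
  (print (set-member-p s1 'c))               ; prints NIL
  ;; adjoin leaves the set unchanged when the element is already present.
  (print (equal s1 (set-insert s1 'a)))      ; prints T
  (print (set-empty-p (set-remove (set-remove s1 'a) 'b))))  ; prints T
```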
We represent a leaf with element E by a singleton list containing E (i.e. (list E)). A node with element E, left subtree B1 and right subtree B2 is represented by a list containing the three components (i.e. (list E B1 B2)).
Having fixed the representation, we can implement the recursive data type functions:

;;
;; Binary Trees
;;

;;
;; Constructors for binary trees
;;
(defun make-bin-tree-leaf (E)
  "Create a leaf."
  (list E))

(defun make-bin-tree-node (E B1 B2)
  "Create a node with element E, left subtree B1 and right subtree B2."
  (list E B1 B2))

;;
;; Selectors for binary trees
;;
(defun bin-tree-leaf-element (L)
  "Retrieve the element of a leaf L."
  (first L))

(defun bin-tree-node-element (N)
  "Retrieve the element of a node N."
  (first N))

(defun bin-tree-node-left (N)
  "Retrieve the left subtree of a node N."
  (second N))

(defun bin-tree-node-right (N)
  "Retrieve the right subtree of a node N."
  (third N))

;;
;; Recognizers for binary trees
;;
(defun bin-tree-leaf-p (B)
  "Test if binary tree B is a leaf."
  (and (listp B) (= (list-length B) 1)))

(defun bin-tree-node-p (B)
  "Test if binary tree B is a node."
  (and (listp B) (= (list-length B) 3)))

The representation scheme works out like the following:

USER(5): (make-bin-tree-node '*
                             (make-bin-tree-node '+
                                                 (make-bin-tree-leaf 2)
                                                 (make-bin-tree-leaf 3))
                             (make-bin-tree-node '-
                                                 (make-bin-tree-leaf 7)
                                                 (make-bin-tree-leaf 8)))
(* (+ (2) (3)) (- (7) (8)))

The expression above is a binary tree node with element * and two subtrees. The left subtree is itself a binary tree node with + as its element and leaves as its subtrees. The right subtree is also a binary tree node with - as its element and leaves as its subtrees. All the leaves are decorated by numeric components.

        *
       / \
      /   \
     +     -
    / \   / \
   2   3 7   8

Searching Binary Trees

As discussed in previous tutorials, having recursive data structures defined in the way we did streamlines the process of formulating structural recursions. We review this concept in the following examples. Suppose we treat binary trees as containers. An expression E is a member of a binary tree B if:
1. B is a leaf and its element is E.
2. B is a node and either its element is E or E is a member of one of its subtrees.
For example, the definition asserts that the members of (* (+ (2) (3)) (- (7) (8))) are *, +, 2, 3, -, 7 and 8. Such a definition can be directly implemented using our recursive data type functions:

(defun bin-tree-member-p (B E)
  "Test if E is an element in binary tree B."
  (if (bin-tree-leaf-p B)
      (equal E (bin-tree-leaf-element B))
      (or (equal E (bin-tree-node-element B))
          (bin-tree-member-p (bin-tree-node-left B) E)
          (bin-tree-member-p (bin-tree-node-right B) E))))

The function can be made more readable by using the let form:

(defun bin-tree-member-p (B E)
  "Test if E is an element in binary tree B."
  (if (bin-tree-leaf-p B)
      (equal E (bin-tree-leaf-element B))
      (let ((elmt  (bin-tree-node-element B))
            (left  (bin-tree-node-left B))
            (right (bin-tree-node-right B)))
        (or (equal E elmt)
            (bin-tree-member-p left E)
            (bin-tree-member-p right E)))))

Tracing the execution of bin-tree-member-p, we get:

USER(14): (trace bin-tree-member-p)
(BIN-TREE-MEMBER-P)
USER(15): (bin-tree-member-p '(+ (* (2) (3)) (- (7) (8))) 7)
0: (BIN-TREE-MEMBER-P (+ (* (2) (3)) (- (7) (8))) 7)
  1: (BIN-TREE-MEMBER-P (* (2) (3)) 7)
    2: (BIN-TREE-MEMBER-P (2) 7)
    2: returned NIL
    2: (BIN-TREE-MEMBER-P (3) 7)
    2: returned NIL
  1: returned NIL
  1: (BIN-TREE-MEMBER-P (- (7) (8)) 7)
    2: (BIN-TREE-MEMBER-P (7) 7)
    2: returned T
  1: returned T
0: returned T
T
Exercise: Let size(B) be the number of members in a binary tree B. Give a recursive definition of size(B), and then implement a LISP function (bin-tree-size B) that returns size(B).
Traversing Binary Trees

Let us write a function that will reverse a tree, in the sense that the left and right subtrees of every node are swapped:

(defun bin-tree-reverse (B)
  "Reverse binary tree B."
  (if (bin-tree-leaf-p B)
      B
      (let ((elmt  (bin-tree-node-element B))
            (left  (bin-tree-node-left B))
            (right (bin-tree-node-right B)))
        (make-bin-tree-node elmt
                            (bin-tree-reverse right)
                            (bin-tree-reverse left)))))

The correctness of the above implementation can be articulated as follows. Given a binary tree B, either B is a leaf or it is a node:
Case 1: B is a leaf. Then the reversal of B is simply B itself.

Case 2: B is a node. Then B has three components, namely an element elmt, a left subtree left and a right subtree right. The reversal of B is a node with element elmt, whose left subtree is the reversal of right and whose right subtree is the reversal of left.
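The case analysis also implies that reversal is an involution: reversing a tree twice gives back the original tree. A quick check (our own, not from the text; the needed definitions are repeated so the sketch is self-contained):

```lisp
;; Definitions repeated from the text.
(defun make-bin-tree-node (E B1 B2) (list E B1 B2))
(defun bin-tree-leaf-p (B) (and (listp B) (= (length B) 1)))
(defun bin-tree-node-element (N) (first N))
(defun bin-tree-node-left (N) (second N))
(defun bin-tree-node-right (N) (third N))

(defun bin-tree-reverse (B)
  "Reverse binary tree B."
  (if (bin-tree-leaf-p B)
      B
      (make-bin-tree-node (bin-tree-node-element B)
                          (bin-tree-reverse (bin-tree-node-right B))
                          (bin-tree-reverse (bin-tree-node-left B)))))

;; Reversing twice restores the original tree.
(let ((tree '(* (+ (2) (3)) (- (7) (8)))))
  (print (equal tree (bin-tree-reverse (bin-tree-reverse tree)))))  ; prints T
```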
The following shows us how the recursion unfolds:

USER(21): (trace bin-tree-reverse)
(BIN-TREE-REVERSE)
USER(22): (bin-tree-reverse '(* (+ (2) (3)) (- (7) (8))))
0: (BIN-TREE-REVERSE (* (+ (2) (3)) (- (7) (8))))
  1: (BIN-TREE-REVERSE (- (7) (8)))
    2: (BIN-TREE-REVERSE (8))
    2: returned (8)
    2: (BIN-TREE-REVERSE (7))
    2: returned (7)
  1: returned (- (8) (7))
  1: (BIN-TREE-REVERSE (+ (2) (3)))
    2: (BIN-TREE-REVERSE (3))
    2: returned (3)
    2: (BIN-TREE-REVERSE (2))
    2: returned (2)
  1: returned (+ (3) (2))
0: returned (* (- (8) (7)) (+ (3) (2)))
(* (- (8) (7)) (+ (3) (2)))

The resulting expression represents the following tree:

        *
       / \
      /   \
     -     +
    / \   / \
   8   7 3   2

Let us implement a function that will extract the members of a given binary tree and put them into a list in preorder.

(defun bin-tree-preorder (B)
  "Create a list containing keys of B in preorder."
  (if (bin-tree-leaf-p B)
      (list (bin-tree-leaf-element B))
      (let ((elmt  (bin-tree-node-element B))
            (left  (bin-tree-node-left B))
            (right (bin-tree-node-right B)))
        (cons elmt
              (append (bin-tree-preorder left)
                      (bin-tree-preorder right))))))

Tracing the execution of the function, we obtain the following:

USER(13): (trace bin-tree-preorder)
(BIN-TREE-PREORDER)
USER(14): (bin-tree-preorder '(* (+ (2) (3)) (- (7) (8))))
0: (BIN-TREE-PREORDER (* (+ (2) (3)) (- (7) (8))))
  1: (BIN-TREE-PREORDER (+ (2) (3)))
    2: (BIN-TREE-PREORDER (2))
    2: returned (2)
    2: (BIN-TREE-PREORDER (3))
    2: returned (3)
  1: returned (+ 2 3)
  1: (BIN-TREE-PREORDER (- (7) (8)))
    2: (BIN-TREE-PREORDER (7))
    2: returned (7)
    2: (BIN-TREE-PREORDER (8))
    2: returned (8)
  1: returned (- 7 8)
0: returned (* + 2 3 - 7 8)
(* + 2 3 - 7 8)

As we have discussed before, the append call in the code above is a source of inefficiency that can be optimized away:

(defun fast-bin-tree-preorder (B)
  "A tail-recursive version of bin-tree-preorder."
  (preorder-aux B nil))

(defun preorder-aux (B A)
  "Append A to the end of the list containing elements of B in preorder."
  (if (bin-tree-leaf-p B)
      (cons (bin-tree-leaf-element B) A)
      (let ((elmt  (bin-tree-node-element B))
            (left  (bin-tree-node-left B))
            (right (bin-tree-node-right B)))
        (cons elmt
              (preorder-aux left (preorder-aux right A))))))

An execution trace of the implementation is the following:

USER(15): (trace fast-bin-tree-preorder preorder-aux)
(PREORDER-AUX FAST-BIN-TREE-PREORDER)
USER(16): (fast-bin-tree-preorder '(* (+ (2) (3)) (- (7) (8))))
0: (FAST-BIN-TREE-PREORDER (* (+ (2) (3)) (- (7) (8))))
  1: (PREORDER-AUX (* (+ (2) (3)) (- (7) (8))) NIL)
    2: (PREORDER-AUX (- (7) (8)) NIL)
      3: (PREORDER-AUX (8) NIL)
      3: returned (8)
      3: (PREORDER-AUX (7) (8))
      3: returned (7 8)
    2: returned (- 7 8)
    2: (PREORDER-AUX (+ (2) (3)) (- 7 8))
      3: (PREORDER-AUX (3) (- 7 8))
      3: returned (3 - 7 8)
      3: (PREORDER-AUX (2) (3 - 7 8))
      3: returned (2 3 - 7 8)
    2: returned (+ 2 3 - 7 8)
  1: returned (* + 2 3 - 7 8)
0: returned (* + 2 3 - 7 8)
(* + 2 3 - 7 8)
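As a quick check (our own, not from the text; the definitions above are repeated so the sketch is self-contained), the two versions agree on the running example:

```lisp
;; Definitions repeated from the text.
(defun bin-tree-leaf-p (B) (and (listp B) (= (length B) 1)))
(defun bin-tree-leaf-element (L) (first L))
(defun bin-tree-node-element (N) (first N))
(defun bin-tree-node-left (N) (second N))
(defun bin-tree-node-right (N) (third N))

(defun bin-tree-preorder (B)
  (if (bin-tree-leaf-p B)
      (list (bin-tree-leaf-element B))
      (cons (bin-tree-node-element B)
            (append (bin-tree-preorder (bin-tree-node-left B))
                    (bin-tree-preorder (bin-tree-node-right B))))))

(defun preorder-aux (B A)
  (if (bin-tree-leaf-p B)
      (cons (bin-tree-leaf-element B) A)
      (cons (bin-tree-node-element B)
            (preorder-aux (bin-tree-node-left B)
                          (preorder-aux (bin-tree-node-right B) A)))))

(defun fast-bin-tree-preorder (B) (preorder-aux B nil))

(let ((tree '(* (+ (2) (3)) (- (7) (8)))))
  (print (bin-tree-preorder tree))  ; prints (* + 2 3 - 7 8)
  (print (equal (bin-tree-preorder tree)
                (fast-bin-tree-preorder tree))))  ; prints T
```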
Exercise: Implement a function that will create a list containing the members of a given binary tree in postorder. Also implement a tail-recursive version of the same function.

Exercise: Repeat the last exercise with inorder.
Notice that we have implemented an abstract data type (sets) using a more fundamental recursive data structure (lists) with additional computational constraints (no repetition) imposed by the interface functions.

Binary Search Trees

Another way of implementing the same set abstraction is to use the more efficient binary search tree (BST). Binary search trees are basically binary trees with the following additional computational constraints:
All the members in the left subtree of a tree node are no greater than the element of the node.
All the members in the right subtree of a tree node are greater than the element of the node.
All the leaf members are distinct.
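The two ordering constraints can be checked mechanically. The following sketch is our own (the helper names bin-tree-members and BST-ordered-p are not from the text; the binary tree selectors are repeated so it runs on its own):

```lisp
;; Binary tree recognizer/selectors from the text.
(defun bin-tree-leaf-p (B) (and (listp B) (= (length B) 1)))
(defun bin-tree-leaf-element (L) (first L))
(defun bin-tree-node-element (N) (first N))
(defun bin-tree-node-left (N) (second N))
(defun bin-tree-node-right (N) (third N))

(defun bin-tree-members (B)
  "Collect all leaf members of binary tree B (our own helper)."
  (if (bin-tree-leaf-p B)
      (list (bin-tree-leaf-element B))
      (append (bin-tree-members (bin-tree-node-left B))
              (bin-tree-members (bin-tree-node-right B)))))

(defun BST-ordered-p (B)
  "Check the two ordering constraints on every node of B (our own helper)."
  (if (bin-tree-leaf-p B)
      t
      (let ((e (bin-tree-node-element B))
            (l (bin-tree-node-left B))
            (r (bin-tree-node-right B)))
        (and (every #'(lambda (m) (<= m e)) (bin-tree-members l))
             (every #'(lambda (m) (> m e)) (bin-tree-members r))
             (BST-ordered-p l)
             (BST-ordered-p r)))))

(print (BST-ordered-p '(2 (1 (1) (2)) (3 (3) (4)))))  ; prints T
(print (BST-ordered-p '(2 (1 (1) (3)) (3 (3) (4)))))  ; prints NIL
```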
Again, we are implementing an abstract data type (sets) using a more fundamental recursive data structure (binary trees) with additional computational constraints. In particular, we use the leaves of a binary tree to store the members of a set, and the tree nodes to provide indexing information that improves search performance. For example, a BST representing the set {1 2 3 4} could look like:

        2
       / \
      /   \
     1     3
    / \   / \
   1   2 3   4

An empty BST is represented by NIL, while a nonempty BST is represented by a binary tree. We begin with the constructor and recognizer for the empty BST.

(defun make-empty-BST ()
  "Create an empty BST."
  nil)

(defun BST-empty-p (B)
  "Check if BST B is empty."
  (null B))

Given the additional computational constraints, the membership test can be implemented as follows:

(defun BST-member-p (B E)
  "Check if E is a member of BST B."
  (if (BST-empty-p B)
      nil
      (BST-nonempty-member-p B E)))

(defun BST-nonempty-member-p (B E)
  "Check if E is a member of nonempty BST B."
  (if (bin-tree-leaf-p B)
      (= E (bin-tree-leaf-element B))
      (if (<= E (bin-tree-node-element B))
          (BST-nonempty-member-p (bin-tree-node-left B) E)
          (BST-nonempty-member-p (bin-tree-node-right B) E))))

Notice that we handle the degenerate case of searching an empty BST separately, and apply the well-known recursive search algorithm only on nonempty BSTs.

USER(16): (trace BST-member-p BST-nonempty-member-p)
(BST-NONEMPTY-MEMBER-P BST-MEMBER-P)
USER(17): (BST-member-p '(2 (1 (1) (2)) (3 (3) (4))) 3)
0: (BST-MEMBER-P (2 (1 (1) (2)) (3 (3) (4))) 3)
  1: (BST-NONEMPTY-MEMBER-P (2 (1 (1) (2)) (3 (3) (4))) 3)
    2: (BST-NONEMPTY-MEMBER-P (3 (3) (4)) 3)
      3: (BST-NONEMPTY-MEMBER-P (3) 3)
      3: returned T
    2: returned T
  1: returned T
0: returned T
T

Insertion is handled by the following family of functions:

(defun BST-insert (B E)
  "Insert E into BST B."
  (if (BST-empty-p B)
      (make-bin-tree-leaf E)
      (BST-nonempty-insert B E)))

(defun BST-nonempty-insert (B E)
  "Insert E into nonempty BST B."
  (if (bin-tree-leaf-p B)
      (BST-leaf-insert B E)
      (let ((elmt  (bin-tree-node-element B))
            (left  (bin-tree-node-left B))
            (right (bin-tree-node-right B)))
        (if (<= E elmt)
            (make-bin-tree-node elmt
                                (BST-nonempty-insert left E)
                                right)
            (make-bin-tree-node elmt
                                left
                                (BST-nonempty-insert right E))))))

(defun BST-leaf-insert (L E)
  "Insert element E into a BST with only one leaf."
  (let ((elmt (bin-tree-leaf-element L)))
    (if (= E elmt)
        L
        (if (< E elmt)
            (make-bin-tree-node E
                                (make-bin-tree-leaf E)
                                (make-bin-tree-leaf elmt))
            (make-bin-tree-node elmt
                                (make-bin-tree-leaf elmt)
                                (make-bin-tree-leaf E))))))

As before, recursive insertion into a nonempty BST is handled outside of the general entry point of BST insertion. Traversing down the index nodes, the recursive algorithm eventually arrives at a leaf. In case the element is not already in the tree, the leaf is turned into a node with leaf subtrees holding the inserted element and the element of the original leaf. For example, if we insert 2.5 into the tree represented by (2 (1 (1) (2)) (3 (3) (4))), the effect is the following:

        2                             2
       / \                           / \
      /   \                         /   \
     1     3          ==>          1     3
    / \   / \                     / \   / \
   1   2 3   4                   1   2 2.5  4
                                      / \
                                    2.5   3
USER(22): (trace BST-insert BST-nonempty-insert BST-leaf-insert)
(BST-LEAF-INSERT BST-NONEMPTY-INSERT BST-INSERT)
USER(23): (BST-insert '(2 (1 (1) (2)) (3 (3) (4))) 2.5)
0: (BST-INSERT (2 (1 (1) (2)) (3 (3) (4))) 2.5)
  1: (BST-NONEMPTY-INSERT (2 (1 (1) (2)) (3 (3) (4))) 2.5)
    2: (BST-NONEMPTY-INSERT (3 (3) (4)) 2.5)
      3: (BST-NONEMPTY-INSERT (3) 2.5)
        4: (BST-LEAF-INSERT (3) 2.5)
        4: returned (2.5 (2.5) (3))
      3: returned (2.5 (2.5) (3))
    2: returned (3 (2.5 (2.5) (3)) (4))
  1: returned (2 (1 (1) (2)) (3 (2.5 (2.5) (3)) (4)))
0: returned (2 (1 (1) (2)) (3 (2.5 (2.5) (3)) (4)))
(2 (1 (1) (2)) (3 (2.5 (2.5) (3)) (4)))

Removal of elements is handled by the following family of functions:

(defun BST-remove (B E)
  "Remove E from BST B."
  (if (BST-empty-p B)
      B
      (if (bin-tree-leaf-p B)
          (BST-leaf-remove B E)
          (BST-node-remove B E))))

(defun BST-leaf-remove (L E)
  "Remove E from BST leaf L."
  (if (= E (bin-tree-leaf-element L))
      (make-empty-BST)
      L))

(defun BST-node-remove (N E)
  "Remove E from BST node N."
  (let ((elmt  (bin-tree-node-element N))
        (left  (bin-tree-node-left N))
        (right (bin-tree-node-right N)))
    (if (<= E elmt)
        (if (bin-tree-leaf-p left)
            (if (= E (bin-tree-leaf-element left))
                right
                N)
            (make-bin-tree-node elmt (BST-node-remove left E) right))
        (if (bin-tree-leaf-p right)
            (if (= E (bin-tree-leaf-element right))
                left
                N)
            (make-bin-tree-node elmt left (BST-node-remove right E))))))

This time, removal from an empty BST and from a BST with a single leaf are both degenerate cases. The recursive removal algorithm deals with BST nodes. Traversing down the index nodes, the recursive algorithm searches for the parent node of the leaf to be removed. In case it is found, the sibling of the leaf to be removed replaces its parent node. For example, the effect of removing 3 from the BST represented by (2 (1 (1) (2)) (3 (3) (4))) is depicted as follows:

        2                       2
       / \                     / \
      /   \                   /   \
     1     3       ==>       1     4
    / \   / \               / \
   1   2 3   4             1   2

A trace of the deletion operation is given below:

USER(4): (trace BST-remove BST-node-remove)
(BST-NODE-REMOVE BST-REMOVE)
USER(5): (BST-remove '(2 (1 (1) (2)) (3 (3) (4))) 3)
0: (BST-REMOVE (2 (1 (1) (2)) (3 (3) (4))) 3)
  1: (BST-NODE-REMOVE (2 (1 (1) (2)) (3 (3) (4))) 3)
    2: (BST-NODE-REMOVE (3 (3) (4)) 3)
    2: returned (4)
  1: returned (2 (1 (1) (2)) (4))
0: returned (2 (1 (1) (2)) (4))
(2 (1 (1) (2)) (4))

Exercise: A set can be implemented as a sorted list, which is a list storing distinct members in ascending order. Implement the sorted list abstraction.
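Because each operation returns a new BST, the operations compose naturally. The following sketch (our own; the name BST-from-list is not from the text, and the definitions above are repeated so it is self-contained) builds a BST by inserting the members of a list in turn:

```lisp
;; Definitions from the text, repeated so this sketch runs on its own.
(defun make-empty-BST () nil)
(defun BST-empty-p (B) (null B))
(defun make-bin-tree-leaf (E) (list E))
(defun make-bin-tree-node (E B1 B2) (list E B1 B2))
(defun bin-tree-leaf-p (B) (and (listp B) (= (length B) 1)))
(defun bin-tree-leaf-element (L) (first L))
(defun bin-tree-node-element (N) (first N))
(defun bin-tree-node-left (N) (second N))
(defun bin-tree-node-right (N) (third N))

(defun BST-leaf-insert (L E)
  (let ((elmt (bin-tree-leaf-element L)))
    (if (= E elmt)
        L
        (if (< E elmt)
            (make-bin-tree-node E (make-bin-tree-leaf E) (make-bin-tree-leaf elmt))
            (make-bin-tree-node elmt (make-bin-tree-leaf elmt) (make-bin-tree-leaf E))))))

(defun BST-nonempty-insert (B E)
  (if (bin-tree-leaf-p B)
      (BST-leaf-insert B E)
      (if (<= E (bin-tree-node-element B))
          (make-bin-tree-node (bin-tree-node-element B)
                              (BST-nonempty-insert (bin-tree-node-left B) E)
                              (bin-tree-node-right B))
          (make-bin-tree-node (bin-tree-node-element B)
                              (bin-tree-node-left B)
                              (BST-nonempty-insert (bin-tree-node-right B) E)))))

(defun BST-insert (B E)
  (if (BST-empty-p B) (make-bin-tree-leaf E) (BST-nonempty-insert B E)))

;; BST-from-list folds BST-insert over the list, starting from the empty BST.
(defun BST-from-list (L)
  (reduce #'BST-insert L :initial-value (make-empty-BST)))

;; The shape depends on insertion order; this order yields a right-leaning tree.
(print (BST-from-list '(2 1 3 4)))  ; prints (1 (1) (2 (2) (3 (3) (4))))
```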
Polynomials

We demonstrate how one can perform symbolic computation using LISP. To begin with, we define a new data type for polynomials, which is defined recursively as follows:
If num is a number, then (make-constant num) is a polynomial.
If sym is a symbol, then (make-variable sym) is a polynomial.
If poly1 and poly2 are polynomials, then (make-sum poly1 poly2) and (make-product poly1 poly2) are also polynomials.
If poly is a polynomial and num is a number, then (make-power poly num) is a polynomial.
One can represent polynomials in the most standard way:

;;
;; Constructors for polynomials
;;
(defun make-constant (num) num)

(defun make-variable (sym) sym)

(defun make-sum (poly1 poly2) (list '+ poly1 poly2))

(defun make-product (poly1 poly2) (list '* poly1 poly2))

(defun make-power (poly num) (list '** poly num))

For example, (make-power (make-sum (make-variable 'x) (make-constant 1)) 2) is represented by the LISP form (** (+ x 1) 2), which denotes the polynomial (x + 1)^2 in our usual notation. We then define a recognizer for each constructor:

;;
;; Recognizers for polynomials
;;
(defun constant-p (poly) (numberp poly))

(defun variable-p (poly) (symbolp poly))

(defun sum-p (poly)
  (and (listp poly) (eq (first poly) '+)))
(defun product-p (poly)
  (and (listp poly) (eq (first poly) '*)))

(defun power-p (poly)
  (and (listp poly) (eq (first poly) '**)))

We then need to define selectors for the composite polynomials. We define a selector for each component of each composite constructor.

;;
;; Selectors for polynomials
;;
(defun constant-numeric (const) const)

(defun variable-symbol (var) var)

(defun sum-arg1 (sum) (second sum))

(defun sum-arg2 (sum) (third sum))

(defun product-arg1 (prod) (second prod))

(defun product-arg2 (prod) (third prod))

(defun power-base (pow) (second pow))

(defun power-exponent (pow) (third pow))

One may ask why we define so many trivial-looking functions for carrying out the same task (sum-arg1 and product-arg1 have exactly the same implementation). The reason is that we may end up changing the representation in the future, and there is no guarantee that sums and products will be represented similarly then. Also, programs written like this tend to be self-commenting. Now that we have a completely defined polynomial data type, let us do something interesting with it. Let us define a function that carries out symbolic differentiation. In particular, we want a function (d poly x) which returns the derivative of
polynomial poly with respect to variable x. Let us review our first-year differential calculus:
The derivative (dC/dx) of a constant C is zero.

The derivative (dy/dx) of a variable y is 1 if x = y. Otherwise, we leave the derivative unevaluated. We represent unevaluated derivatives using the following functions:

;;
;; Unevaluated derivative
;;
(defun make-derivative (poly x) (list 'd poly x))

(defun derivative-p (poly)
  (and (listp poly) (eq (first poly) 'd)))

The derivative (d(F+G)/dx) of a sum (F+G) is (dF/dx) + (dG/dx).

The derivative (d(F*G)/dx) of a product (F*G) is F*(dG/dx) + G*(dF/dx).

The derivative (d(F^N)/dx) of a power F^N is N * F^(N-1) * (dF/dx).
The above calculus can be encoded in LISP as follows:

;;
;; Differentiation function
;;
(defun d (poly x)
  (cond ((constant-p poly) 0)
        ((variable-p poly)
         (if (equal poly x)
             1
             (make-derivative poly x)))
        ((sum-p poly)
         (make-sum (d (sum-arg1 poly) x)
                   (d (sum-arg2 poly) x)))
        ((product-p poly)
         (make-sum (make-product (product-arg1 poly)
                                 (d (product-arg2 poly) x))
                   (make-product (product-arg2 poly)
                                 (d (product-arg1 poly) x))))
        ((power-p poly)
         (make-product (make-product (power-exponent poly)
                                     (make-power (power-base poly)
                                                 (1- (power-exponent poly))))
                       (d (power-base poly) x)))))

Test-driving the differentiation function, we get:

USER(11): (d '(+ x y) 'x)
(+ 1 (D Y X))
USER(12): (d '(* (+ x 1) (+ x 1)) 'x)
(+ (* (+ X 1) (+ 1 0)) (* (+ X 1) (+ 1 0)))
USER(13): (d '(** (+ x 1) 2) 'x)
(* (* 2 (** (+ X 1) 1)) (+ 1 0))

The result is correct but very clumsy. We would like to simplify the result a bit using the following rewriting rules:

0 + E = E
E + 0 = E
0 * E = 0
E * 0 = 0
1 * E = E
E * 1 = E
E^1 = E
E^0 = 1
This can be done by defining a simplification framework, in which we can implement such rules:

;;
;; Simplification function
;;
(defun simplify (poly)
  "Simplify polynomial POLY."
  (cond ((constant-p poly) poly)
        ((variable-p poly) poly)
        ((sum-p poly)
         (let ((arg1 (simplify (sum-arg1 poly)))
               (arg2 (simplify (sum-arg2 poly))))
           (make-simplified-sum arg1 arg2)))
        ((product-p poly)
         (let ((arg1 (simplify (product-arg1 poly)))
               (arg2 (simplify (product-arg2 poly))))
           (make-simplified-product arg1 arg2)))
        ((power-p poly)
         (let ((base (simplify (power-base poly)))
               (exponent (simplify (power-exponent poly))))
           (make-simplified-power base exponent)))
        ((derivative-p poly) poly)))

The simplify function decomposes a composite polynomial into its components, applies simplification recursively to the components, and then invokes the type-specific simplification rules (i.e. make-simplified-sum, make-simplified-product, make-simplified-power) based on the type of the polynomial being processed. The simplification rules are encoded in LISP as follows:

(defun make-simplified-sum (arg1 arg2)
  "Given simplified polynomials ARG1 and ARG2, construct a simplified sum of ARG1 and ARG2."
  (cond ((and (constant-p arg1) (zerop arg1)) arg2)
        ((and (constant-p arg2) (zerop arg2)) arg1)
        (t (make-sum arg1 arg2))))
(defun make-simplified-product (arg1 arg2)
  "Given simplified polynomials ARG1 and ARG2, construct a simplified product of ARG1 and ARG2."
  (cond ((and (constant-p arg1) (zerop arg1)) (make-constant 0))
        ((and (constant-p arg2) (zerop arg2)) (make-constant 0))
        ((and (constant-p arg1) (= arg1 1)) arg2)
        ((and (constant-p arg2) (= arg2 1)) arg1)
        (t (make-product arg1 arg2))))

(defun make-simplified-power (base exponent)
  "Given simplified polynomials BASE and EXPONENT, construct a simplified power with base BASE and exponent EXPONENT."
  (cond ((and (constant-p exponent) (= exponent 1)) base)
        ((and (constant-p exponent) (zerop exponent)) (make-constant 1))
        (t (make-power base exponent))))

Let us see how all these pay off:

USER(14): (simplify (d '(* (+ x 1) (+ x 1)) 'x))
(+ (+ X 1) (+ X 1))
USER(15): (simplify (d '(** (+ x 1) 2) 'x))
(* 2 (+ X 1))

Compared to the original results we saw before, this is a lot more reasonable.
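The type-specific rules can also be exercised in isolation. A small sketch (our own, not from the text; the few definitions it needs are repeated so it runs on its own) shows make-simplified-product applying the product rules:

```lisp
;; Definitions from the text, repeated so this sketch is self-contained.
(defun constant-p (poly) (numberp poly))
(defun make-constant (num) num)
(defun make-product (poly1 poly2) (list '* poly1 poly2))

(defun make-simplified-product (arg1 arg2)
  (cond ((and (constant-p arg1) (zerop arg1)) (make-constant 0))
        ((and (constant-p arg2) (zerop arg2)) (make-constant 0))
        ((and (constant-p arg1) (= arg1 1)) arg2)
        ((and (constant-p arg2) (= arg2 1)) arg1)
        (t (make-product arg1 arg2))))

(print (make-simplified-product 1 'x))   ; prints X      (1 * E = E)
(print (make-simplified-product 0 'x))   ; prints 0      (0 * E = 0)
(print (make-simplified-product 'x 'y))  ; prints (* X Y) (no rule applies)
```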
Exercise: Define a new type of polynomial: difference. If poly1 and poly2 are polynomials, then (make-difference poly1 poly2) is also a polynomial. Implement the constructor, recognizer and selectors for this type of polynomial. The derivative (d(F-G)/dx) of a difference (F-G) is (dF/dx) - (dG/dx). Extend the differentiation function to incorporate this. Implement the following simplification rule:

E - 0 = E
Exercise: Define a new type of polynomial: negation. If poly is a polynomial, then (make-negation poly) is also a polynomial. Implement the constructor, recognizer and selectors for this type of polynomial. The derivative (d(-F)/dx) of a negation -F is -(dF/dx). Extend the differentiation function to incorporate this. Implement the following simplification rules:

-0 = 0
-(-E) = E
Exercise: The simplification rules we have seen so far share a common feature: the right hand sides do not involve any new polynomial constructor. For example, -(-E) is simply E. However, some of the most useful simplification rules are those involving constructors on the right hand sides:
E * (-1) = -E
(-1) * E = -E
Within the type-specific simplification functions, if we naively apply the regular constructors to build the expressions on the right-hand sides, then we run the risk of constructing polynomials that are not fully simplified. For example, -x and -1 are both fully simplified, but if we now construct their product (-1) * (-x), the last simplification rule above says that we can rewrite the product into -(-x), which needs further simplification. One naive solution is to blindly apply full simplification to the newly constructed polynomials, but this is obviously overkill. What, then, is an efficient and yet correct implementation of the above simplification rules?
Exercise: If all the components of a composite polynomial are constants, then we can actually perform further simplification. For example, (+ 1 1) should be simplified to 2. Extend the simplification framework to incorporate this.
Tower of Hanoi

The Tower of Hanoi problem is a classical toy problem in Artificial Intelligence:

There are N disks D1, D2, ..., Dn, of graduated sizes, and three pegs 1, 2, and 3. Initially all the disks are stacked on peg 1, with D1, the smallest, on top and Dn, the largest, at the bottom. The problem is to transfer the stack to peg 3, given that only one disk can be moved at a time and that no disk may be placed on top of a smaller one. [Pearl 1984]

We call peg 1 the "from" peg and peg 3 the "to" peg. Peg 2 is actually a buffer to facilitate the movement of disks, and we call it an "auxiliary" peg. We can move N disks from the "from" peg to the "to" peg using the following recursive scheme.

1. Ignoring the largest disk at the "from" peg, treat the remaining disks as a Tower of Hanoi problem with N-1 disks. Recursively move the top N-1 disks from the "from" peg to the "auxiliary" peg, using the "to" peg as a buffer.
2. Now that the N-1 smaller disks are on the "auxiliary" peg, move the largest disk to the "to" peg.
3. Ignoring the largest disk again, treat the remaining disks as a Tower of Hanoi problem with N-1 disks. Recursively move the N-1 disks from the "auxiliary" peg to the "to" peg, using the "from" peg as a buffer.
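The scheme makes two recursive calls on N-1 disks plus one single-disk move, so the number of moves M(N) satisfies M(1) = 1 and M(N) = 2*M(N-1) + 1, which works out to 2^N - 1. A quick sanity check (our own, not from the text):

```lisp
;; Count the moves made by the recursive scheme above.
;; M(1) = 1, M(N) = 2*M(N-1) + 1 = 2^N - 1.
(defun hanoi-moves (N)
  (if (= N 1)
      1
      (+ 1 (* 2 (hanoi-moves (- N 1))))))

(print (hanoi-moves 3))   ; prints 7, matching the seven MOVE-DISK calls traced below
(print (hanoi-moves 10))  ; prints 1023
```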
To code this solution in LISP, we need to define some data structures. First, we represent a disk by a number, so that disk Di is represented by i. Second, we represent a stack of disks by a tower, which is nothing but a list of numbers, with the first element representing the top disk. We define the usual constructors and selectors for the tower data type.

;;
;; A tower is a list of numbers
;;
(defun make-empty-tower ()
  "Create a tower with no disk."
  nil)

(defun tower-push (tower disk)
  "Create a tower by stacking DISK on top of TOWER."
  (cons disk tower))

(defun tower-top (tower)
  "Get the top disk of TOWER."
  (first tower))

(defun tower-pop (tower)
  "Remove the top disk of TOWER."
  (rest tower))

Third, we define the hanoi data type to represent a Tower of Hanoi configuration. In particular, a hanoi configuration is a list of three towers. The elementary constructors and selectors are given below:

;;
;; Hanoi configuration
;;
(defun make-hanoi (from-tower aux-tower to-tower)
  "Create a Hanoi configuration from three towers."
  (list from-tower aux-tower to-tower))

(defun hanoi-tower (hanoi i)
  "Select the I'th tower of a Hanoi configuration."
  (nth (1- i) hanoi))

Working with towers within a Hanoi configuration is tedious. We therefore define some shortcuts to capture recurring operations:

;;
;; Utilities
;;
(defun hanoi-tower-update (hanoi i tower)
  "Replace the I'th tower in the HANOI configuration by tower TOWER."
  (cond ((= i 1) (make-hanoi tower (second hanoi) (third hanoi)))
        ((= i 2) (make-hanoi (first hanoi) tower (third hanoi)))
        ((= i 3) (make-hanoi (first hanoi) (second hanoi) tower))))

(defun hanoi-tower-top (hanoi i)
  "Return the top disk of the I'th tower in the HANOI configuration."
  (tower-top (hanoi-tower hanoi i)))

(defun hanoi-tower-pop (hanoi i)
  "Pop the top disk of the I'th tower in the HANOI configuration."
  (hanoi-tower-update hanoi i (tower-pop (hanoi-tower hanoi i))))

(defun hanoi-tower-push (hanoi i disk)
  "Push DISK into the I'th tower of the HANOI configuration."
  (hanoi-tower-update hanoi i (tower-push (hanoi-tower hanoi i) disk)))

The fundamental operator we can perform on a Hanoi configuration is to move a top disk from one peg to another:

;;
;; Operator: move top disk from one tower to another
;;
(defun move-disk (from to hanoi)
  "Move the top disk from peg FROM to peg TO in configuration HANOI."
  (let ((disk (hanoi-tower-top hanoi from))
        (intermediate-hanoi (hanoi-tower-pop hanoi from)))
    (hanoi-tower-push intermediate-hanoi to disk)))

We are now ready to capture the logic of our recursive solution in the following code:
;;
;; Subgoal: moving a tower from one peg to another
;;
(defun move-tower (N from aux to hanoi)
  "In the HANOI configuration, move the top N disks from peg FROM to peg TO, using peg AUX as an auxiliary peg."
  (if (= N 1)
      (move-disk from to hanoi)
      (move-tower (- N 1) aux from to
                  (move-disk from to
                             (move-tower (- N 1) from to aux hanoi)))))

We use the driver function solve-hanoi to start up the recursion:

;;
;; Driver function
;;
(defun solve-hanoi (N)
  "Solve the Tower of Hanoi problem."
  (move-tower N 1 2 3 (make-hanoi (make-complete-tower N) nil nil)))

(defun make-complete-tower (N)
  "Create a tower of N disks."
  (make-complete-tower-aux N (make-empty-tower)))

(defun make-complete-tower-aux (N A)
  "Push a complete tower of N disks on top of tower A."
  (if (zerop N)
      A
      (make-complete-tower-aux (1- N) (tower-push A N))))

To solve a Tower of Hanoi problem with 3 disks, we call (solve-hanoi 3):

USER(50): (solve-hanoi 3)
(NIL NIL (1 2 3))

All we get back is the final configuration, which is not as interesting as knowing the sequence of moves taken by the algorithm. So we trace usage of the move-disk operator:

USER(51): (trace move-disk)
(MOVE-DISK)
USER(52): (solve-hanoi 3)
0: (MOVE-DISK 1 3 ((1 2 3) NIL NIL))
0: returned ((2 3) NIL (1))
0: (MOVE-DISK 1 2 ((2 3) NIL (1)))
0: returned ((3) (2) (1))
0: (MOVE-DISK 3 2 ((3) (2) (1)))
0: returned ((3) (1 2) NIL)
0: (MOVE-DISK 1 3 ((3) (1 2) NIL))
0: returned (NIL (1 2) (3))
0: (MOVE-DISK 2 1 (NIL (1 2) (3)))
0: returned ((1) (2) (3))
0: (MOVE-DISK 2 3 ((1) (2) (3)))
0: returned ((1) NIL (2 3))
0: (MOVE-DISK 1 3 ((1) NIL (2 3)))
0: returned (NIL NIL (1 2 3))
(NIL NIL (1 2 3))

From the trace we can read off the sequence of operator applications necessary to achieve the solution configuration. This is good, but not good enough. We want to know why each move is being taken. So we also trace the high-level subgoals:

USER(53): (trace move-tower)
(MOVE-TOWER)
USER(54): (solve-hanoi 3)
0: (MOVE-TOWER 3 1 2 3 ((1 2 3) NIL NIL))
  1: (MOVE-TOWER 2 1 3 2 ((1 2 3) NIL NIL))
    2: (MOVE-TOWER 1 1 2 3 ((1 2 3) NIL NIL))
      3: (MOVE-DISK 1 3 ((1 2 3) NIL NIL))
      3: returned ((2 3) NIL (1))
    2: returned ((2 3) NIL (1))
    2: (MOVE-DISK 1 2 ((2 3) NIL (1)))
    2: returned ((3) (2) (1))
    2: (MOVE-TOWER 1 3 1 2 ((3) (2) (1)))
      3: (MOVE-DISK 3 2 ((3) (2) (1)))
      3: returned ((3) (1 2) NIL)
    2: returned ((3) (1 2) NIL)
  1: returned ((3) (1 2) NIL)
  1: (MOVE-DISK 1 3 ((3) (1 2) NIL))
  1: returned (NIL (1 2) (3))
  1: (MOVE-TOWER 2 2 1 3 (NIL (1 2) (3)))
    2: (MOVE-TOWER 1 2 3 1 (NIL (1 2) (3)))
      3: (MOVE-DISK 2 1 (NIL (1 2) (3)))
      3: returned ((1) (2) (3))
    2: returned ((1) (2) (3))
    2: (MOVE-DISK 2 3 ((1) (2) (3)))
    2: returned ((1) NIL (2 3))
    2: (MOVE-TOWER 1 1 2 3 ((1) NIL (2 3)))
      3: (MOVE-DISK 1 3 ((1) NIL (2 3)))
      3: returned (NIL NIL (1 2 3))
    2: returned (NIL NIL (1 2 3))
  1: returned (NIL NIL (1 2 3))
0: returned (NIL NIL (1 2 3))
(NIL NIL (1 2 3))

The trace gives us information as to what subgoals each operator application is trying to establish. For example, the top-level subgoals are the following:

0: (MOVE-TOWER 3 1 2 3 ((1 2 3) NIL NIL))
  1: (MOVE-TOWER 2 1 3 2 ((1 2 3) NIL NIL))
  ...
  1: returned ((3) (1 2) NIL)
  1: (MOVE-DISK 1 3 ((3) (1 2) NIL))
  1: returned (NIL (1 2) (3))
  1: (MOVE-TOWER 2 2 1 3 (NIL (1 2) (3)))
  ...
  1: returned (NIL NIL (1 2 3))
0: returned (NIL NIL (1 2 3))

They translate directly into the following. In order to move a tower of 3 disks from peg 1 to peg 3 using peg 2 as a buffer (i.e. (MOVE-TOWER 3 1 2 3 ((1 2 3) NIL NIL))), we do the following:

"1: (MOVE-TOWER 2 1 3 2 ((1 2 3) NIL NIL))"
Move a tower of 2 disks from peg 1 to peg 2, using peg 3 as a buffer. The result of the move is the following:
"1: returned ((3) (1 2) NIL)"

"1: (MOVE-DISK 1 3 ((3) (1 2) NIL))"
Move the top disk from peg 1 to peg 3. The result of this move is:
"1: returned (NIL (1 2) (3))"

"1: (MOVE-TOWER 2 2 1 3 (NIL (1 2) (3)))"
Move a tower of 2 disks from peg 2 to peg 3, using peg 1 as a buffer, yielding the following configuration:
"1: returned (NIL NIL (1 2 3))"

Various Useful Functions

;;; a. triple
(defun triple (X)
  "Compute three times X."  ; This is a documentation string.
  (* 3 X))                  ; Inline comments can be placed here.
;;; double
(defun double (X)
  "Compute two times X."
  (* 2 X))

;;; b. negate
(defun negate (X)
  "Negate the value of X."  ; This is a documentation string.
  (- X))                    ; Inline comments can be placed here.
;;; CONTROL STRUCTURES: RECURSION AND CONDITIONALS

;;; Relational Operators    Meaning
;;; (= x y)                 x is equal to y
;;; (/= x y)                x is not equal to y
;;; (< x y)                 x is less than y
;;; (> x y)                 x is greater than y
;;; (<= x y)                x is no greater than y
;;; (>= x y)                x is no less than y

;;; Shorthand               Meaning
;;; (1+ x)                  x + 1
;;; (1- x)                  x - 1
;;; (zerop x)               x is zero
;;; (plusp x)               x is positive
;;; (minusp x)              x is negative
;;; (evenp x)               x is even
;;; (oddp x)                x is odd
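A few example calls may help; the results shown are the standard Common Lisp values:

```lisp
;; Sample uses of the relational operators and shorthand predicates.
(= 3 3)       ; => T
(/= 3 4)      ; => T
(< 2 5)       ; => T
(1+ 5)        ; => 6
(1- 5)        ; => 4
(zerop 0)     ; => T
(plusp -2)    ; => NIL
(minusp -2)   ; => T
(evenp 4)     ; => T
(oddp 4)      ; => NIL
```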
;;; 1. FACTORIAL
(defun factorial (N)
  "Compute the factorial of N."
  (if (<= N 1)              ; <= (rather than =) also handles N = 0
      1
      (* N (factorial (- N 1)))))

;;; 2. Fibonacci
(defun fibonacci (N)
  "Compute the N'th Fibonacci number."
  (if (or (zerop N) (= N 1))
      1
      (+ (fibonacci (- N 1)) (fibonacci (- N 2)))))

;; recursive list length
;; (Note: LIST-LENGTH is also the name of a built-in Common Lisp
;; function; we define our own version here for illustration.)
(defun list-length (L)
  "A recursive implementation of list-length."
  (if (null L)
      0
      (1+ (list-length (rest L)))))

;; LIST Nth ELEMENT
(defun list-nth (N L)
  "Return the N'th member of a list L."
  (if (null L)
      nil
      (if (zerop N)
          (first L)
          (list-nth (1- N) (rest L)))))

;; LIST-MEMBER
(defun list-member (E L)
  "Test if E is a member of L."
  (cond ((null L) nil)
        ((eql E (first L)) t)   ; EQL also works for non-numbers; = is numbers-only
        (t (list-member E (rest L)))))

;;; setq - assign a value to a variable
;;; cons - (cons x L): given a LISP object x and a list L, evaluating
;;; (cons x L) creates a list containing x followed by the elements of L.

;; APPEND
(defun list-append (L1 L2)
  "Append L2 to L1."
  (if (null L1)
      L2
      (cons (first L1) (list-append (rest L1) L2))))

;; reverse
(defun list-reverse (L)
  "Create a new list containing the elements of L in reversed order."
  (if (null L)
      nil
      (list-append (list-reverse (rest L)) (list (first L)))))

;;; intersection
(defun list-intersection (L1 L2)
  "Return a list containing the elements belonging to both L1 and L2."
  (cond ((null L1) nil)
        ((list-member (first L1) L2)
         (cons (first L1) (list-intersection (rest L1) L2)))
        (t (list-intersection (rest L1) L2))))

;;; repeat - transformation
;;; Repeat applying a given transformation F on X for N times by
;;; simply writing (repeat-transformation F N X).
(defun repeat-transformation (F N X)
  "Repeat applying function F on object X for N times."
  (if (zerop N)
      X
      (repeat-transformation F (1- N) (funcall F X))))

;;; double-list-elements
(defun double-list-elements (L)
  "Given a list L of numbers, return a list containing the elements
of L multiplied by 2."
  (if (null L)
      nil
      (cons (double (first L)) (double-list-elements (rest L)))))

;; map
(defun mapfirst (F L)
  "Apply function F to every element of list L, and return a list
containing the results."
  (if (null L)
      nil
      (cons (funcall F (first L)) (mapfirst F (rest L)))))

;;; (mapfirst #'double '(1 2 3 4)) => (2 4 6 8)

;;; add two lists
(defun add-two-list (l r)
  "Add lists L and R element by element,
e.g. (1 2 3) and (2 3 4) give (3 5 7)."
  (let ((return-value '()))           ; bind locally instead of SETQ on an undeclared variable
    (loop for i from 0 to (- (list-length l) 1)
          do (setq return-value
                   (cons (+ (list-nth i l) (list-nth i r))
                         return-value)))
    (reverse return-value)))          ; the sums were consed on in reverse order
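ADD-TWO-LIST is written with LOOP; the same computation can also be expressed recursively, in the style of the other functions in this section. The name ADD-TWO-LIST-REC is ours (not from the notes), and the sketch assumes both lists have the same length:

```lisp
;; A recursive alternative to ADD-TWO-LIST (hypothetical name),
;; assuming both input lists have the same length.
(defun add-two-list-rec (l r)
  "Add the lists L and R element by element."
  (if (null l)
      nil
      (cons (+ (first l) (first r))
            (add-two-list-rec (rest l) (rest r)))))

;; (add-two-list-rec '(1 2 3) '(2 3 4)) => (3 5 7)
```

Because the result is built front-to-back by CONS as the recursion unwinds, no final REVERSE is needed.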