1994
Contents
0.1 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
0.2 License . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1 Neurology and Machine Learning 4
1.1 Some neurology that is related to artificial intelligence . . . . . . 4
2 Searching 9
2.1 Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.1 GUI Java Search Tool . . . . . . . . . . . . . . . . . . . . 14
3 Games and Game Theory 60
3.1 Game Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2 Intelligent games . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4 Misc. AI 64
4.1 AI Language, speech . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.1.1 Hidden Markov Models . . . . . . . . . . . . . . . . . . . 65
4.2 Fuzzy Stuff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.3 Evolutionary AI . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.4 Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.5 Turing Machines, State Machines and Finite State Automaton . 94
4.6 Blackboard Systems . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.7 User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.8 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . 96
4.9 Bayesian Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5 Reasoning Programs and Common Sense 98
5.1 Common Sense and Reasoning Programs . . . . . . . . . . . . . . 98
5.2 Knowledge Representation and Predicate Calculus . . . . . . . . 99
5.3 Knowledge based/Expert systems . . . . . . . . . . . . . . . . . . 105
5.3.1 Perl Reasoning Program 'The Plant Dr.' . . . . . . . . . 108
6 Agents, Bots, and Spiders 125
6.1 Spiders and Bots . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.1.1 Java Spider to check website links . . . . . . . . . . . . . 126
6.2 Adaptive Autonomous Agents . . . . . . . . . . . . . . . . . . . . 151
6.3 Inter-agent Communication . . . . . . . . . . . . . . . . . . . . . 151
6.3.1 Java Personal Agent . . . . . . . . . . . . . . . . . . . . . 154
7 Neural Networks 244
7.1 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
7.2 Hebbian Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.3 Perceptron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.4 Adaline Neural Nets . . . . . . . . . . . . . . . . . . . . . . . . . 247
7.5 Adaptive Resonance Networks . . . . . . . . . . . . . . . . . . . . 248
7.6 Associative Memories . . . . . . . . . . . . . . . . . . . . . . . . 248
7.7 Probabilistic Neural Networks . . . . . . . . . . . . . . . . . . . . 249
7.8 Counterpropagation Network . . . . . . . . . . . . . . . . . . . . 250
7.9 Neural Net Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . 250
7.10 Kohonen Neural Nets (Self Organizing Networks) . . . . . . . . 250
7.10.1 C++ Self Organizing Net . . . . . . . . . . . . . . . . . . 252
7.11 Backpropagation . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
7.11.1 GUI Java Backpropagation Neural Network Builder . . . 266
7.11.2 C++ Backpropagation Dog Track Predictor . . . . . . . 328
7.12 Hopfield Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 394
7.12.1 C++ Hopfield Network . . . . . . . . . . . . . . . . . . . 396
8 AI and Neural Net Related Math Online Resources 404
8.1 General Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
8.1.1 C OpenGL Sierpinski Gasket . . . . . . . . . . . . . . . . 405
8.1.2 C OpenGL 3D Gasket . . . . . . . . . . . . . . . . . . . . 408
8.1.3 C OpenGL Mandelbrot . . . . . . . . . . . . . . . . . . . 410
8.2 Specific Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
9 Bibliography 419
9.1 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
9.2 Links to other AI sites . . . . . . . . . . . . . . . . . . . . . . . . 421
0.1 Preface
The website for this book has moved! http://herselfsai.com
There are new chapters, source code, and news on that website. Source code
can be directly downloaded from there.
After I finish adding the new chapters and updating and making corrections
to old chapters on the website I'll update this pdf. But you should consider
this pdf outdated for now; 1994 was a long time ago in the world of artificial
intelligence.
This book is intended to be an introduction to artificial intelligence and
neural networks. I put in lots of source code in C/C++ and Java. Having
examples to experiment with can make the concepts clearer. Although I touch
on some math there isn't enough time and space to go through it in detail. A
long list of Internet resources that do an excellent job of explaining the math
needed for AI and NN appears at the back.
0.2 License
This work is licensed under the Creative Commons Attribution-Noncommercial-
Share Alike 3.0 United States License.
Chapter 1
Neurology and Machine Learning
identify intelligence, or lack thereof, in people we come across. It is also clear
that intelligence tests measure one's ability to take tests and one's education,
rather than one's intelligence. General intelligence remains fixed over the life of
an individual. Education grows as a person learns more, whether through self
education, academia or other methods. Some areas of the brain can be damaged
without harming a person's intelligence, instead only costing the person some
memories or skills.
Hemispherectomy, done in extreme cases of epilepsy, surgically removes one
hemisphere of the brain. There is some paralysis on the opposite side of the
body, but interestingly intelligence is preserved. Often intelligence is increased
after surgery, probably because seizures and the devastating effect they have
are stopped.
Psychologists have tested for self awareness by placing dots on the foreheads
of sleeping animals and checking whether they recognized the dot in the mirror
as being on them. It is now clear that self awareness is a matter of degree and
not a have-or-have-not property of beings.
It is also becoming clear that consciousness is a dynamic, ongoing process
in the brain. It is not something that can be found in the pieces of the brain
but only in the operation of the brain. How a piece of software works cannot
be determined by dismantling the computer into circuits and chips, nor can
consciousness be understood by only looking at pieces of the brain.
The level of intelligence of a species is related to a constant multiplied by
brain size divided by body size. Male human brains average 1371 cc, female
brains average 1216 cc. Normal IQ scores have been documented for brains
between 735 cc and 1470 cc. Before anyone gets confused, remember that it is
the excess neurons above and beyond what are needed for body maintenance
that matter; since male bodies are usually much larger than female bodies, more
neurons are needed for general maintenance.
There are about ten billion neurons in the human brain. Each of these has
about ten thousand connections to other neurons. There are over two hundred
known neurotransmitters interacting with these neurons. Axons are the single
connection leading away from the neuron, sending out a frequency through
10,000 or so branches off to other neurons. Dendrites are the many connections
leading in to the neuron. The incoming dendrite electric pulses are superimposed
on each other; the intensity of the incoming wave is the important part. Even at
rest a small pulse is maintained in the neuron. There is a set point, a preferred
point, that the population returns to between excitation states. As a person's
age increases the steady state amplitude increases; as neurons increasingly act
as parts of groups, the amplitude increases; and neurotransmitters can increase
or decrease this amplitude.
Neurons are in the gray matter of the cortex. The ones you are born with are
all you get in the neocortex. Neurons increase branching and size as you learn
skills and knowledge, and from birth to adolescence die off if unused. If some do
not die off, mental illness results. They communicate using neurotransmitters.
The shape varies by task. There are two main types of neurons in the brain,
spiny and smooth. The spiny neurons make up about 80% of the neurons in the
brain and are further broken into two groups, pyramidal and stellate. Neurons
change their behavior with experience. Axons are in the white matter of the
cortex and form the long distance connections. They increase with age. The
more white matter, the faster communication occurs. Older people think faster.
Neurons do not affect things individually. They each affect the conditions
in the neighborhood. Each is connected to every other neuron in the brain
within a few connections. The neurons form populations that have many semi-
autonomous independent elements; each has many weak interactions with many
others; the input-output relationships are non-linear; and from the neuron's
point of view there is endless energy coming in and leaving. The connections
between neurons can be in series or in parallel, branch into many, or reduce
from many neurons to few or one. The feedback can be both cooperative, both
inhibitory, or one cooperative and the other inhibitory. Some neurons have only
local connections and can be contributory or inhibitory. Some neurons are long
distance; these are always excitatory.
When the density of connections is deep enough the neurons begin acting
as part of the group rather than individuals. Chaotic attractors and point
attractors form to stabilize the pattern. Once part of a group the neuron gives
as many pulses as it receives. The 40 Hz background cycle keeps the steady
state going instead of dying off. Positive and negative feedback loops are what
allow for the intentional responses to stimuli.
A neuron takes the incoming pulses, converts them to waves, sums them,
converts the integrated signal to a pulse train and sends it out on the axon
if it is over a certain threshold. The charge travels down the dendrite toward
the soma (main part of the cell), jumping from one neuron to another, which
release neurotransmitters as the charge moves along. When the frequency of
pulses increases each is diminished in the amount it adds to the wave amplitude,
so the amplitude cannot increase above a certain amount. Incoming flows are
excitatory, outgoing flows act as inhibitors. The neurotransmitters turn the
flow off and on and then rapidly diminish. After firing, neurons need time to
recover before they can re-fire.
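As a rough caricature of the summation just described, here is a toy threshold unit in Java. The pulse values, weights and threshold below are invented for illustration; real neurons integrate continuous waves, not a single weighted sum, but this is the abstraction artificial neural networks take from them.

```java
class ThresholdNeuron {
    // Incoming pulse intensities are superimposed (here: a weighted sum) and
    // the unit fires only if the combined amplitude crosses a threshold.
    static boolean fires(double[] inputs, double[] weights, double threshold) {
        double amplitude = 0.0;
        for (int i = 0; i < inputs.length; i++)
            amplitude += inputs[i] * weights[i];   // superimpose the incoming pulses
        return amplitude > threshold;              // send a pulse train only above threshold
    }

    public static void main(String[] args) {
        double[] pulses = {1.0, 0.5, 0.0};
        double[] weights = {0.9, 0.4, 0.7};        // made-up synapse effectiveness values
        System.out.println(fires(pulses, weights, 1.0));
    }
}
```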
Neuromodulatory neurons receive input from all over the brain, most impor-
tantly the limbic system during the formation of intentional action. They have
widely branching axons that do not form synapses but release the neuromodu-
lators throughout the brain, forming a global influence; they move about in the
neuropil. Neuromodulators (histamine, serotonin, dopamine, melatonin, CCK,
endorphins, vasopressin, oxytocin, acetylcholine, noradrenaline and others) en-
hance or diminish the effectiveness of synapses, bringing lasting, cumulative
changes.
Neurotransmitters act locally. One of them, oxytocin, is released during
orgasm and childbirth; it erases memories and is also related to bonding between
couples and between parents and children. NMDA modulation of glutamate
may be related to intelligence.
The cerebral cortex is about 2.67 square feet when stretched out. It is
about six cells deep. The major wrinkles are common to everyone, just as
everyone's basic face structure has two eyes, a nose and a mouth. The wrinkles
are individual in the same way that people's faces are individual despite the
same basic features. The bottom two layers of the cortex send connections to
other parts of the brain, the third layer from the bottom receives incoming
signals, and the top three layers receive input from that third-from-the-bottom
layer.
layer 1: up
layer 2: up
layer 3: up
layer 4: incoming
layer 5: outgoing
layer 6: outgoing
Large areas of the cortex are known to perform different tasks, such as
language or math. Gifted people tend to have a more differentiated pre-frontal
cortex, and brain organization is also different. There are also smaller areas,
about a half inch square, known as 'bumps' or 'patches'; each of these includes
millions of neurons and flashes off and on at five to twenty times per second.
Most perceptions, behaviors and experiences are somehow recorded in these
patches.
The frontal lobes of the cortex contain the motor cortices, the connections
to the muscles and nerves that control motion. This part of the brain also
contains a map of the body. The frontal lobes are highly involved in forming
intent and in the length of attention spans. The pre-frontal cortex fires at
different rates during delayed-choice tasks, depending on previous focus of
attention, and is most active during IQ tests. Different small areas of the
pre-frontal cortex are used for different types of tasks. The difference in the
pre-frontal cortex is not in structure but in the places it connects to. The left
pre-frontal cortex encodes memories and the right pre-frontal cortex retrieves
memories. Working memory, also found in this area, is not just a blank scratch
pad but performs other functions as well. Dorsal and lateral areas of the frontal
lobe deal with cognitive functions, while the medial and ventral areas handle
social skills and empathy.
The hippocampuses are two structures about the size and shape of your
little finger deep inside your brain. They release ACh (acetylcholine) along one
of the two cortex layers where dendrites are found when a new thing is to be
learned. The hippocampuses are responsible for sending the signals to compress
information into existing information; treat it as something new and separate;
or recall existing information.
Human and animal learning is broken into three main groupings: instru-
mental conditioning, classical conditioning, and observational learning. In in-
strumental conditioning, specific behavior is rewarded or punished. In classical
conditioning, two stimuli are presented together repeatedly, and the animal or
person learns to associate one stimulus with the other. This is the same as
Pavlov's conditioning of dogs with bells, to which the dogs did not initially
salivate, and food, to which the dogs salivated from the beginning. In observa-
tional learning, behavior is learned by watching others.
Memory in humans and animals has three main divisions: sensory memory
holds after-images persisting in the eye after focus is turned away; short term
working memory is where only a few things are kept, acting as the working
buffer or cache; long term storage handles semi-permanent to permanent infor-
mation storage. Falsely implanted memories do not record sensory data. We
are now beginning to be able to differentiate between real memories that have
recorded sensory data and false memories using fMRIs.
Chapter 2
Searching
2.1 Searching
Searches are broken into two main categories: uninformed (brute-force,
blind) searches and informed (heuristic, directed) searches. Uninformed searches
are done when there is no information about a preferred search path. Informed
searches have some information to help pick search paths; usually a rule of
thumb is used to reduce the search area. A traveling salesman search going from
Boston to Dallas is uninformed if it begins searching randomly, or methodically
with no preference. An informed search knows Dallas is southwest of Boston,
so it begins and concentrates its search in that direction.
Directed graphs (state-space graphs) are used to keep track of possible steps
and the state of the world from step-to-step. The edges are used to define
steps and the nodes define states of the world. A state space graph has three
basic components: A start node; functions that transform a state description
representing one state into the representation after an action is taken; and a
goal condition.
Four main criteria for evaluating search strategies are:
completeness: likelihood of finding a solution if a solution exists
time complexity: (path cost) time to find a solution (order O(n))
space complexity: amount of memory (RAM) needed
optimality: does it find the best solution, if several solutions exist?
There are three steps you must take to avoid loops in your search algorithm:
do not return to the state you just came from; do not create paths with cycles
in them; do not regenerate prior states.
The Breadth First Search first checks all of the nodes directly connected
to the start node, then it checks the nodes connected to each of the beginning
nodes. In a tree graph it checks the top level, then the second level nodes, and
so on. If a solution exists, the Breadth First Search will find it (an algorithmic
search), and it will find the shallowest solution first.
Breadth First Search: Algorithm
top::
Is it the end of the queue?
true: quit, no solution
false:
Remove first node
Is it the solution node?
true: return node and quit
false: expand node and put children of this node at end of the queue
loop to top:
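The queue-based algorithm above can be sketched in Java. The city graph here is a small made-up example (it is not the city.dat data used by the search tool later in this chapter), and keeping paths on the queue makes the returned route explicit:

```java
import java.util.*;

class BreadthFirst {
    // Adjacency list: each city maps to its directly connected cities (invented data).
    static Map<String, List<String>> graph = Map.of(
        "Boston", List.of("Albany", "Hartford"),
        "Albany", List.of("Boston", "Cleveland"),
        "Hartford", List.of("Boston"),
        "Cleveland", List.of("Albany", "Dallas"),
        "Dallas", List.of("Cleveland"));

    // Returns the shallowest path from start to goal, or null if none exists.
    static List<String> search(String start, String goal) {
        Queue<List<String>> queue = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();   // do not regenerate prior states
        queue.add(List.of(start));
        while (!queue.isEmpty()) {               // end of queue? then no solution
            List<String> path = queue.remove();  // remove first node
            String node = path.get(path.size() - 1);
            if (node.equals(goal)) return path;  // solution node? return it and quit
            if (!visited.add(node)) continue;    // already expanded this state
            for (String next : graph.getOrDefault(node, List.of())) {
                List<String> extended = new ArrayList<>(path);
                extended.add(next);
                queue.add(extended);             // children go on the END of the queue
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(search("Boston", "Dallas"));
    }
}
```

Because children always go on the end of the queue, the first path reaching the goal is the shallowest one.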
Depth First (Uniform Cost) Search is similar to the Breadth First Search
except it searches minimum cost first rather than level by level. If the cost is
equal on all the levels then it is the same as the Breadth First Search. It will
probably find a solution faster than Breadth First, if several solutions exist. It
may get stuck on dead ends and not find a solution even if a solution exists, so
it is not an algorithmic search. It may not find the shallowest or least cost
solution, but it uses far less memory than the Breadth First Search. Usually a
boundary is placed, a depth bound, so if a solution isn't found at a certain
depth it backs up and tries the next section.
Depth First Search: Algorithm:
top::
Is it the end of the queue?
true: quit, no solution
false:
Remove first node
Is it the solution node?
true: return node and quit
false: expand node and put children of this node at the front of the queue
loop to top:
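The only change from the breadth-first sketch is where children are placed: the front of the queue instead of the end, plus the depth bound mentioned above. A minimal Java version, again on an invented city graph:

```java
import java.util.*;

class DepthFirst {
    // Invented adjacency list for illustration.
    static Map<String, List<String>> graph = Map.of(
        "Boston", List.of("Albany", "Hartford"),
        "Albany", List.of("Cleveland"),
        "Cleveland", List.of("Dallas"),
        "Hartford", List.of(),
        "Dallas", List.of());

    // Depth first with a depth bound: children go on the FRONT of the queue,
    // and paths longer than the bound are abandoned so the search backs up.
    static List<String> search(String start, String goal, int depthBound) {
        Deque<List<String>> queue = new ArrayDeque<>();
        queue.addFirst(List.of(start));
        while (!queue.isEmpty()) {
            List<String> path = queue.removeFirst();      // remove first node
            String node = path.get(path.size() - 1);
            if (node.equals(goal)) return path;           // solution node? return it
            if (path.size() > depthBound) continue;       // back up, try next branch
            for (String next : graph.getOrDefault(node, List.of())) {
                if (path.contains(next)) continue;        // no cycles in a path
                List<String> extended = new ArrayList<>(path);
                extended.add(next);
                queue.addFirst(extended);                 // children go on the FRONT
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(search("Boston", "Dallas", 4));
    }
}
```

With a depth bound of 2 this graph has no reachable solution, which shows how a bound that is too tight can hide an existing answer.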
Iterative Deepening only uses the memory of a Depth-First Search, but will
find the lowest cost solution if any solution exists. It does a Depth-First Search
with a depth-bound of one, then a Depth-First Search with a depth-bound of
two, and continues until a solution is found.
Constraint Satisfaction Search: a search in which a set of variables must
meet a set of constraints or conditions rather than meet a goal (scheduling and
the eight queens problem are examples of this). There are two main methods
of solving constraint problems. One is 'Constructive Methods', which works by
constructing a solution piece-by-piece; a second is 'Heuristic Repair', which
works by trying a random solution and moving any piece that doesn't fit into a
space where it meets the constraints. The graph search algorithms are used to
look for solutions in state or constraint graphs.
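The constructive method can be illustrated with the eight queens problem mentioned above: place one queen per column, backtracking whenever a placement breaks a constraint. (Heuristic repair is not shown here; this sketch and its names are mine, not from the text.)

```java
class EightQueens {
    // rows[c] holds the row of the queen already placed in column c.
    static boolean safe(int[] rows, int col, int row) {
        for (int c = 0; c < col; c++) {
            if (rows[c] == row) return false;                     // same row
            if (Math.abs(rows[c] - row) == col - c) return false; // same diagonal
        }
        return true;
    }

    // Constructive method: build the solution piece by piece, one column at a
    // time, backtracking whenever a placement violates the constraints.
    // Returns the number of complete solutions found.
    static int solve(int[] rows, int col) {
        if (col == rows.length) return 1;        // all queens placed: one solution
        int count = 0;
        for (int row = 0; row < rows.length; row++) {
            if (safe(rows, col, row)) {
                rows[col] = row;
                count += solve(rows, col + 1);   // extend the partial solution
            }
        }
        return count;                            // dead end: back up and retry
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[8], 0));  // the 8x8 board has 92 solutions
    }
}
```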
Greedy Search: a best-first strategy, it arrives at a solution by making a
sequence of choices, each of which looks the best at the moment. This is like
making change in a store: first you deal out the largest coins, quarters, then
when the difference is less than .25 you hand out dimes until it is less than .10,
etc. Cost is estimated using a formula h(); h(n) gives the estimated cost of the
cheapest path to the goal state. Greedy is similar to Depth First Search.
Greedy Search: Algorithm
top::
have we got a solution?
true: quit, return answer
false: grab the largest/best selection we can
loop to top:
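The change-making example above maps directly onto the greedy loop: while we do not yet have a solution, grab the largest coin that still fits. A minimal Java sketch:

```java
import java.util.*;

class GreedyChange {
    static final int[] COINS = {25, 10, 5, 1};  // quarters, dimes, nickels, pennies

    // At each step take the largest coin that fits: the locally best choice.
    static List<Integer> makeChange(int cents) {
        List<Integer> handedOut = new ArrayList<>();
        for (int coin : COINS) {
            while (cents >= coin) {             // grab the largest/best selection we can
                handedOut.add(coin);
                cents -= coin;
            }
        }
        return handedOut;
    }

    public static void main(String[] args) {
        System.out.println(makeChange(67));
    }
}
```

For US coins the greedy choice happens to be optimal; for arbitrary coin sets (say 25, 10, 1 making change for 30) a greedy strategy can hand out more coins than necessary, which is exactly the sense in which greedy methods are fast but not guaranteed optimal.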
The following searches are heuristic. They combine one or another of the
above searches with a rule of thumb or a weighting method to direct the search.
Several evaluation functions exist which are used to construct an ordered search.
A* (A star): this algorithm combines a best-first search with a uniform cost
search, usually breadth first. h(n) is the best-first estimate, which is added to
g(n), the known path cost: f(n) = g(n) + h(n). The lowest cost f(n) is followed
first. Often the example given is the fifteen square puzzle where you slide the
numbers to put them in order. A* works by examining the surrounding squares,
beginning with the most promising of them, and repeating that until the puzzle
is solved.
A*: Algorithm
Put first node on SearchList
loop::
Pop top node on SearchList and put on DoneList
if no nodes on SearchList, break, no solution
is this node the solution?
..yes
....break with answer
..no
....calculate f(n) for each node off of this node
....check DoneList, if node on DoneList discard
....add each node to SearchList in order of smallest f(n) (including previous
nodes on SearchList)
....loop::
h(n), an 'admissible heuristic', must be chosen in a way that it never overes-
timates the path cost. If you are trying to find a route between two cities then
h(n) is the straight line distance between the two cities. The better the h(n)
function is, the better the search will work.
Some examples of h(n) are: for the sliding block children's puzzle that has
numbered blocks that you order by sliding about, h(n) might be the number
of tiles out of place; for a path from one location to another, h(n) might be
the distance from the goal of the city node expanded.
A* is optimally efficient for any given h(n) function: it will find and expand
fewer nodes than any other algorithm using that h(n). A* is a complete
algorithm, and it is of order O(log h*(n)), where h*(n) is the true cost of
reaching the goal. It will also find the lowest cost path from start to finish if
there is a path.
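The SearchList/DoneList algorithm above can be sketched with a priority queue ordered by f(n) = g(n) + h(n). The road distances and straight-line h(n) values below are invented for illustration (they are not from the city.dat file used later), but the h(n) values are chosen so they never overestimate the true road cost:

```java
import java.util.*;

class AStarDemo {
    // Invented one-way road distances between cities.
    static Map<String, Map<String, Double>> roads = Map.of(
        "Boston", Map.of("Albany", 170.0, "Hartford", 100.0),
        "Albany", Map.of("Cleveland", 480.0),
        "Hartford", Map.of("Cleveland", 560.0),
        "Cleveland", Map.of("Dallas", 1190.0),
        "Dallas", Map.of());

    // h(n): invented straight-line distances to Dallas; admissible because
    // each value is below the true remaining road distance.
    static Map<String, Double> h = Map.of(
        "Boston", 1550.0, "Albany", 1460.0, "Hartford", 1470.0,
        "Cleveland", 1020.0, "Dallas", 0.0);

    record Node(String city, double g, List<String> path) {}

    static List<String> search(String start, String goal) {
        PriorityQueue<Node> open = new PriorityQueue<>(     // the SearchList
            Comparator.comparingDouble((Node n) -> n.g() + h.get(n.city())));
        Set<String> done = new HashSet<>();                 // the DoneList
        open.add(new Node(start, 0.0, List.of(start)));
        while (!open.isEmpty()) {
            Node n = open.poll();                           // smallest f(n) = g(n) + h(n)
            if (n.city().equals(goal)) return n.path();
            if (!done.add(n.city())) continue;              // on DoneList: discard
            for (var edge : roads.getOrDefault(n.city(), Map.of()).entrySet()) {
                List<String> path = new ArrayList<>(n.path());
                path.add(edge.getKey());
                open.add(new Node(edge.getKey(), n.g() + edge.getValue(), path));
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(search("Boston", "Dallas"));
    }
}
```

The Hartford leg is cheaper at first (100 vs. 170), but its larger onward cost makes f(n) steer the search back through Albany, giving the cheapest total route.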
IDA* (iterative deepening A*): A* that does a depth-first search up to a cost
limit in place of the breadth-first part.
SMA* (simplified memory bounded A*): A* modified to stay within memory
space; this algorithm fills available memory, then drops the highest cost node
to make room for the new node.
RBFS (recursive best-first search): this uses depth-first and best-first to-
gether. It calculates f(n) for all the nodes expanded off the current node, then
backs up the tree and re-calculates f(n) for the previous nodes. Then it expands
the smallest f(n) of those nodes.
Planning out a set of steps to reach a goal may be done with STRIPS
rules using different search techniques. Searching a group of plans begins with
an incorrect or incomplete plan that is changed until it satisfies the situation.
Sometimes a rule is learned if it will save time and have general applicability.
STRIPS combines state-space search and situational calculus in an effort to
overcome the problems of situational calculus. Situational Calculus is a form of
first-order predicate calculus with states, actions, and the effects of states after
actions have taken place. A list of states is kept. States are treated as things
and actions are treated as functions. The effects of actions are mapped onto
the states. The effects of actions on the states can not always be inferred and
this is a major weakness of Situational Calculus. STRIPS has a set of
precondition literals, a set of delete literals and a set of add literals. To obtain
the after action state using a forward search, the delete-list literals are removed
from the before action state and all of the literals in the add list are added.
Everything not in the delete list is carried over to the next state.
A recursive STRIPS method, adding to each achieved part of the state, can
also be used. This is the method used in the 'General Problem Solver', a com-
monsense reasoning program. It uses a global data structure that is set to the
initial state and changed until the goal state is reached. The Sussman anomaly
occurs if a state closer to the goal must be undone to achieve the goal state;
breadth first searches can sometimes work around this. A backward search with
STRIPS works backward grabbing sub-goals as it goes. It is usually more ef-
ficient, but it is also more complicated, and Sussman anomalies appear here as
well.
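The forward application rule described above (delete, then add, carry over the rest) is mechanical enough to sketch directly. The literal names in this blocks-world style example are made up for illustration:

```java
import java.util.*;

class StripsApply {
    // A STRIPS action: precondition, delete, and add literal sets.
    record Action(Set<String> pre, Set<String> del, Set<String> add) {}

    // Forward application: everything not in the delete list carries over to
    // the next state, then the add-list literals are added.
    static Set<String> apply(Set<String> state, Action a) {
        if (!state.containsAll(a.pre())) return state;  // preconditions unmet: no change
        Set<String> next = new HashSet<>(state);
        next.removeAll(a.del());                        // remove the delete-list literals
        next.addAll(a.add());                           // add the add-list literals
        return next;
    }

    public static void main(String[] args) {
        // Invented literals: block A on the table, hand empty; pick up A.
        Set<String> state = Set.of("on(A,Table)", "clear(A)", "handEmpty");
        Action pickUpA = new Action(
            Set.of("on(A,Table)", "clear(A)", "handEmpty"),  // preconditions
            Set.of("on(A,Table)", "handEmpty"),              // delete list
            Set.of("holding(A)"));                           // add list
        System.out.println(apply(state, pickUpA));
    }
}
```

Chaining `apply` over a sequence of actions is exactly the forward search through the state-space graph described earlier, with each resulting literal set as a node.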
Following is the code showing the effects of various types of searches between
US cities. It provides a GUI interface and graphical output.
2.1.1 GUI Java Search Tool
//AStar.java
//www.timestocome.com
//Fall 2000
import java.io.*;
import java.util.*;
class AStar
{
while(i.hasMoreElements() ){
City t = (City)i.nextElement();
if ( ((t.city + ", " + t.state).compareTo(start) ) == 0){
begin = t;
}
if( ((t.city + ", " + t.state).compareTo(finish) ) == 0){
end = t;
}
}
//get top node off of vector and see if it is destination
temp.addElement(begin);
out.addElement(begin);
// if so return... nothing to do
if( (finish.compareTo(start) ) == 0 ){
return out;
}
//loop
Enumeration j = temp.elements();
while ( j.hasMoreElements() ){
// is queue empty? if so return failure
return out;
}else{
Enumeration k = q2.elements();
while( k.hasMoreElements() ) {
if( out.contains(tc1) ){
}else{
out.add(0, tc1);
temp.add(0, tc1);
}
}
}
//end loop
return out;
}
Enumeration j = v.elements();
while( j.hasMoreElements() ){
Enumeration i = v.elements();
while( i.hasMoreElements() ){
double d = Math.sqrt( (tempCity.lat - destination.lat)*
(tempCity.lat - destination.lat) +
(tempCity.lon - destination.lon)*
(tempCity.lon - destination.lon) );
if( d < smallestDistance ){ //keep the nearest remaining city
smallestDistance = d;
closestCity = tempCity;
}
sorted.add(closestCity);
v.remove(closestCity);
j = v.elements();
}
return sorted;
//Breadth.java
//www.timestocome.com
//Fall 2000
import java.io.*;
import java.util.*;
class Breadth
{
while(i.hasMoreElements() ){
City t = (City)i.nextElement();
if ( ((t.city + ", " + t.state).compareTo(start) ) == 0){
begin = t;
}
if( ((t.city + ", " + t.state).compareTo(finish) ) == 0){
end = t;
}
}
// if so return... nothing to do
if( (finish.compareTo(start) ) == 0 ){
return out;
}
//loop
Enumeration j = temp.elements();
while ( j.hasMoreElements() ){
// is queue empty? if so return failure
}else{
while( k.hasMoreElements() ) {
if( out.contains(tc1) ){
}else{
out.add(tc1);
temp.add(tc1);
}
}
}
//end loop
return out;
}
//Depth.java
//www.timestocome.com
//Fall 2000
import java.io.*;
import java.util.*;
class Depth
{
while(i.hasMoreElements() ){
City t = (City)i.nextElement();
if ( ((t.city + ", " + t.state).compareTo(start) ) == 0){
begin = t;
}
if( ((t.city + ", " + t.state).compareTo(finish) ) == 0){
end = t;
}
}
temp.addElement(begin);
out.addElement(begin);
// if so return... nothing to do
if( (finish.compareTo(start) ) == 0 ){
return out;
}
//loop
Enumeration j = temp.elements();
while ( j.hasMoreElements() ){
// is queue empty? if so return failure
return out;
}else{
while( k.hasMoreElements() ) {
if( out.contains(tc1) ){
}else{
out.add(0, tc1);
temp.add(0, tc1);
}
}
}
}
//end loop
return out;
}
//City.java
//www.timestocome.com
//Fall 2000
import java.util.*;
class City
{
String city;
String state;
double lat;
double lon;
class Edge
{
String city1;
int routeNumber;
double length;
}
public void setLength( double lat1, double lat2, double lon1, double lon2)
{
double temp =( (lat1-lat2)*(lat1-lat2) + (lon1-lon2)*(lon1-lon2) );
length = ( Math.sqrt(temp ) * 100); //roughly convert to miles
}
//CityList.java
//www.timestocome.com
//Fall 2000
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.awt.event.*;
if( i == 1){
list[0] = "Start";
}else if( i ==2){
list[0] = "Finish";
}
list[19]= "Augusta, Me";
// jlist.setVisibleRowCount(10);
wordList.addListSelectionListener(this);
//DrawMap.java
//www.timestocome.com
//Fall 2000
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.event.*;
import java.util.*;
Vector p, e;
int s=0;
int minh=0, minw=0, maxw=0, maxh=0;
int height=0, width=0;
p = point;
e = edge;
height = h;
width = w;
while (counter.hasMoreElements() ) {
tempCity = (City)counter.nextElement();
minh = (int)tempCity.lon;
}
maxw += 10;
maxh += 10;
//downsize or upscale?
if( (yDim > h) || (xDim > w) ){
scaleX = xDim/w;
scaleY = yDim/h;
}else{
scaleX = w/xDim;
scaleY = h/yDim;
}
}else{
scale = (int)scaleY;
}
s = scale+1;
}
while (counter1.hasMoreElements() ) {
tempCity1 = (City)counter1.nextElement();
//shift left and scale x
x = (int) ( (tempCity1.lat)*s - left );
x = width - x; //latitude runs right to left, not left to right
//create enumerator for edges
Enumeration counter2 = tempCity1.edge.elements();
Edge tempEdge = new Edge( "", 0);
while( counter2.hasMoreElements() ) {
tempEdge = (Edge)counter2.nextElement();
while( counter3.hasMoreElements() ){
tempCity3 = (City)counter3.nextElement();
if( ( (tempCity3.city).compareTo(tempEdge.city1) ) == 0) {
//label edges
}
repaint();
}
}
class DrawMapFrame extends JFrame
{
setTitle("Map");
setSize(width, height);
addWindowListener(new WindowAdapter(){
public void windowClosing(WindowEvent e)
{
// System.exit(0);
}
} );
try{
//collect the data
GetData gd = new GetData();
p = gd.getCity();
e = gd.getEdge(p);
String[] words = gd.buildList(p);
} catch(Exception ex){}
JFrame frame1 = new DrawMapFrame(p, e);
frame1.show();
//GetData.java
//www.timestocome.com
//Fall 2000
import java.io.*;
import java.util.*;
class GetData
{
//read in city file (city.dat) and build a vector with an element for each city
//cityName State latitude longitude
FileReader filereader = new FileReader("city.dat");
StreamTokenizer streamtokenizer = new StreamTokenizer(filereader);
String wordIn = "", tempCity = "", tempState = "";
double tempLat = 0, tempLong = 0;
int count = 3;
Vector v = new Vector();
}
filereader.close();
return v;
}
//read in edge
if( (streamtokenizer.ttype == StreamTokenizer.TT_NUMBER) && (count==2) ){
tempRouteNumber = streamtokenizer.nval;
count = 0;
int i = 0;
City tempCity;
boolean c1=false, c2=false;
Enumeration counter = city.elements();
while( counter.hasMoreElements() ){
tempCity = (City)counter.nextElement();
}
}
}
}
return city;
System.out.println("*****************************************************");
while( counter.hasMoreElements() ){
tempCity = (City)counter.nextElement();
while( counter2.hasMoreElements() ){
tempEdge = (Edge)counter2.nextElement();
System.out.println( tempEdge.city1 + " " + tempEdge.routeNumber + " "
+ tempEdge.length);
}
}
while( counter.hasMoreElements() ){
tempCity = (City)counter.nextElement();
return list;
//grabEdges.java
//www.timestocome.com
//Fall 2000
import java.io.*;
import java.util.*;
class grabEdges{
public grabEdges(){}
while( counter.hasMoreElements() ) {
while( counter3.hasMoreElements() ){
tempCity3 = (City)counter3.nextElement();
if( ( (tempCity3.city).compareTo(tempEdge.city1) ) == 0) {
// add that city to the end of the out vector
queue.addElement(tempCity3);
}
}
}
return queue;
}
//Jpanel.java
//www.timestocome.com
//Fall 2000
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
Jpanel ()
{
setBackground( Color.white );
}
}
}
//printData.java
//www.timestocome.com
//Fall 2000
import java.io.*;
import java.util.*;
import javax.swing.*;
class printData{
while( counter.hasMoreElements() ){
tempCity = (City)counter.nextElement();
while( counter2.hasMoreElements() ){
tempEdge = (Edge)counter2.nextElement();
}
}
}
//Search.java
//www.timestocome.com
//Fall 2000
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import javax.swing.event.*;
import java.util.*;
Jpanel mainPanel;
JPanel userPanel;
JPanel outputPanel;
JPanel menuPanel;
JPanel buttonPanel;
public Search ()
{
super ("http://www.timestocome.com");
try{
//collect the data
GetData gd = new GetData();
v = gd.getCity();
Vector v1 = gd.getEdge(v);
//printData(v);
words = gd.buildList(v);
} catch(Exception e){}
userPanel.add(scrollPane);
wordList.addListSelectionListener(this);
enter.addActionListener(b1);
start.addActionListener(b2);
finish.addActionListener(b3);
buttonPanel.add(enter);
buttonPanel.add(start);
buttonPanel.add(finish);
mainPanel.add(buttonPanel);
mainPanel.add(userPanel);
scrollpaneText.setViewportView(output);
outputPanel.add(scrollpaneText);
mainPanel.add(outputPanel);
jmenubar.setUI( jmenubar.getUI() );
JMenu jmenu1 = new JMenu("Searches");
JMenu jmenu4 = new JMenu("Draw Map");
JMenu jmenu2 = new JMenu("Help");
JMenu jmenu3 = new JMenu("Quit");
JRadioButtonMenuItem m1 = new
JRadioButtonMenuItem("Breath-First");
m1.addActionListener(a1);
JRadioButtonMenuItem m2 = new
JRadioButtonMenuItem("Depth-First");
m2.addActionListener(a2);
JRadioButtonMenuItem m3 = new
JRadioButtonMenuItem("A*");
m3.addActionListener(a3);
jmenu1.add(m1);
jmenu1.add(m2);
jmenu1.add(m3);
jmenu2.add(m6);
jmenu3.add(m7);
jmenu4.add(m8);
jmenubar.add(jmenu1);
jmenubar.add(jmenu4);
jmenubar.add(jmenu2);
jmenubar.add(jmenu3);
return jmenubar;
static ActionListener a1 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m1 = ( JMenuItem )e.getSource();
choice = 1;
output.setText("\n Breadth First algorithm");
}
};
}
};
}
};
"\nCopyright Times to Come under GNU Copyleft");
}
};
}
};
}
};
}
};
case 0:
output.setText ("\n\nChoose a type of search");
break;
case 1:
output.setText ("\n\n Breadth First Search" +
"\n from " + startSelection + " to " + finishSelection );
break;
case 2:
output.setText ("\n\n Depth First Search" +
"\n from " + startSelection + " to " + finishSelection );
break;
case 3:
output.setText ("\n\n A* " + "\n from " + startSelection +
" to " + finishSelection);
break;
default:
output.setText ("\n\n I am so confused... " + choice );
break;
}
}
};
---city.dat file---
Montgomery AL 32.4 86.3
Phoenix AZ 33.5 112.1
LittleRock AR 34.7 92.4
Sacramento CA 38.5 121.4
Denver CO 39.8 104.9
Hartford CT 41.8 72.7
Tallahassee FL 30.5 84.3
Atlanta GA 33.8 84.4
Boise ID 43.6 116.2
Springfield IL 39.8 89.6
Indianapolis IN 39.8 86.1
DesMoines IA 41.6 93.6
Topeka KS 39.0 95.7
Frankfort KY 38.2 84.9
BatonRouge LA 30.4 91.1
Augusta ME 44.3 69.7
Boston MA 42.3 71.0
Lansing MI 42.7 84.6
SaintPaul MN 44.8 93.0
Jackson MS 32.3 90.2
JeffersonCity MO 38.6 92.2
Helena MT 46.6 112.0
Lincoln NE 40.8 96.7
CarsonCity NV 39.1 119.7
Concord NH 43.2 71.6
Trenton NJ 40.2 74.8
SantaFe NM 35.7 106.0
Raleigh NC 35.8 78.7
Bismark ND 46.8 100.8
Columbus OH 40.0 83.0
OklahomaCity OK 35.5 97.5
Salem OR 45.0 123.0
Harrisburg PA 40.3 76.9
Providence RI 41.8 71.4
Columbia SC 34.0 80.9
Nashville TN 36.2 86.8
Austin TX 30.3 97.8
SaltLakeCity UT 40.8 111.9
Montpelier VT 44.3 72.6
Richmond VA 37.5 77.5
Olympia WA 47.0 122.9
Charleston WV 38.4 81.6
Madison WI 43.0 89.4
Cheyenne WY 41.1 104.8
---edge.dat file ---
5 Sacramento Salem
5 Olympia Salem
10 BatonRouge Tallahassee
15 Helena SaltLakeCity
20 Atlanta Columbia
24 Nashville Atlanta
25 Denver SantaFe
25 Denver Cheyenne
35 SaintPaul DesMoines
35 KansasCity OklahomaCity
35 OklahomaCity Austin
35 Topeka OklahomaCity
39 Madison Springfield
40 SantaFe LittleRock
40 Raleigh Nashville
40 Nashville LittleRock
55 Springfield Jackson
64 Frankfort Charleston
65 Nashville Montgomery
65 Nashville Indianapolis
69 Indianapolis Lansing
70 Denver Topeka
70 Harrisburg Columbus
70 Indianapolis Columbus
70 Trenton Harrisburg
70 Topeka JeffersonCity
74 Indianapolis Springfield
75 Atlanta Frankfort
76 Lincoln Denver
80 Sacramento CarsonCity
80 SaltLakeCity CarsonCity
80 SaltLakeCity Cheyenne
80 Lincoln Cheyenne
83 Richmond Harrisburg
84 Harrisburg Hartford
84 SaltLakeCity Boise
85 Atlanta Montgomery
89 Montpelier Concord
93 Boston Concord
94 SaintPaul Madison
94 Bismark SaintPaul
95 Boston Augusta
95 Richmond Trenton
95 Providence Boston
545 Austin BatonRouge
555 Jackson Austin
560 Raleigh Columbia
565 SantaFe OklahomaCity
565 Jackson BatonRouge
565 Austin LittleRock
565 SaltLakeCity Sacramento
577 SantaFe Phoenix
583 Flagstaff SantaFe
585 Montgomery Jackson
585 SaltLakeCity Denver
585 Atlanta Tallahassee
585 BatonRouge Atlanta
589 Olympia Boise
601 Sacramento Phoenix
605 Topeka DesMoines
610 Olympia Helena
629 Frankfort Nashville
634 JeffersonCity Frankfort
635 Richmond Raleigh
638 Frankfort Indianapolis
641 Charleston Richmond
646 Columbus Frankfort
647 Columbus Charleston
656 DesMoines Lincoln
679 CarsonCity Boise
679 Lincoln Topeka
680 Montpelier Hartford
770 Boston Montpelier
map.dat
#city #state #lat #long #main routes connecting/through capital city
Montgomery AL 32.4 86.3 85->Atlanta, 65->Nashville
Juneau AK 58.4 134.1
Phoenix AZ 33.5 112.1 17-40-25->Santa Fe, 10-99->Sacramento
Little Rock AR 34.7 92.4 30-35->Austin, 40->Santa Fe, 40->Nashville
Sacramento CA 38.5 121.4 5->Salem, 80->CarsonCity, 50-15->SaltLakeCity
Denver CO 39.8 104.9 25->SantaFe, 70-15->SaltLakeCity, 76->Lincoln
Hartford CT 41.8 72.7 91->89 Montpelier, 84->Harrisburg
Dover DE 39.1 75.5
Tallahassee FL 30.5 84.3 10->Baton Rouge, 10-75->Atlanta
Atlanta GA 33.8 84.4 75-10->Baton Rouge, 85-10->Baton Rouge, 24->Nashville
Honolulu HI 25.0 168.0
Boise ID 43.6 116.2 84->Salt Lake City, 84-5->Olympia, 84-95->Carson City
Springfield IL 39.8 89.6 39->Madison, 74->Indianapolis
Indianapolis IN 39.8 86.1 74-64->Frankfort, 69->Lansing, 65->Nashville
DesMoines IA 41.6 93.6 35-70->Topeka, 80->Lincoln, 35->Saint Paul
Topeka KS 39.0 95.7 70-29-80->Lincoln, 35->Oklahoma City, 70->Denver, 70->Jefferson City
Frankfort KY 38.2 84.9 64-70->Jefferson City, 75->Atlanta, 75-71->Columbus
BatonRouge LA 30.4 91.1 10-55->Jackson, 10-35->Austin, 10->Tallahassee
Augusta ME 44.3 69.7 95->Boston
Annapolis MD 39.0 76.5
Boston MA 42.3 71.0 95->Augusta, 95->Providence
Lansing MI 42.7 84.6 69->Indianapolis
SaintPaul MN 44.8 93.0 35->Des Moines, 94->Bismark, 94->Madison
Jackson MS 32.3 90.2 20-35->Austin, 20-65->Montgomery, 55->Springfield
JeffersonCity MO 38.6 92.2 70->Topeka
Helena MT 46.6 112.0 15->SaltLakeCity, 15-90-5->Olympia
Lincoln NE 40.8 96.7 76->Denver, 76-80->DesMoines
CarsonCity NV 39.1 119.7 80->Sacremento, 80->SaltLakeCity
Concord NH 43.2 71.6 93->Boston, 89->Montpelier
Trenton NJ 40.2 74.8 95->Richmond, 95->NewYork
SantaFe NM 35.7 106.0 25->Denver, 25-40->OklahomaCity, 25-40-17->Flagstaff
NewYork NY 42.7 73.8 90->Boston, 87->NewYork
Raleigh NC 35.8 78.7 40-95->Richmond, 40-95-20->Columbia
Bismark ND 46.8 100.8 94->StPaul
Columbus OH 40.0 83.0 70->Harrisburg, 70->Indianapolis
OklahomaCity OK 35.5 97.5 35->Topeka, 35->Austin, 40-25->SantaFe, 35->KansasCity
Salem OR 45.0 123.0 5->Olympia, 5->Sacramento
Harrisburg PA 40.3 76.9 70->Trenton, 70->Columbus, 83->Richmond
Providence RI 41.8 71.4 95->Boston
Columbia SC 34.0 80.9 20-40->Raleigh, 20->Atlanta
Pierre SD 44.4 100.3
Nashville TN 36.2 86.8 65-64->Frankfort, 40->Raleigh, 65->Montgomery
Austin TX 30.3 97.8 35->OklahomaCity, 35-20->Jackson
SaltLakeCity UT 40.8 111.9 15->Helena, 80->CarsonCity, 80-25->Denver
Montpelier VT 44.3 72.6 89->Concord, 89-91-90->Boston
Richmond VA 37.5 77.5 95-40->Raleigh, 64-77->Charleston, 95-15->Harrisburg
Olympia WA 47.0 122.9 5->Salem
Charleston WV 38.4 81.6 77-70->Columbus, 64->Frankfort
Madison WI 43.0 89.4 39-55->Springfield, 94->StPaul
Cheyenne WY 41.1 104.8 25->Denver, 80->SaltLakeCity, 80->Lincoln
---README---
To compile the program, compile each *.java file:
>javac AStar.java
>javac Breadth.java
>javac City.java
>javac Depth.java
>javac DrawMap.java
>javac GetData.java
>javac Jpanel.java
>javac Search.java
>javac grabEdges.java
>javac printData.java
Chapter 3
Games and Game Theory
Crook 1 \ Crook 2   quiet   fink
quiet               2,2     0,3
fink                3,0     1,1
The highest total score of all the plays is for both crooks to remain quiet, and each
receives 2 years jail time. But if Crook one finks, he gets a score of 3 (1 year's
time) against the other player who remains quiet, or a score of 1 (and gets 3
years jail time) if the other finks too. So his best bet is to fink, as it is for the
other crook. The Nash equilibrium is at fink/fink (1,1), since finking is the best
move for each player individually.
The payoff function for this game is the same for each player and is: f(Fink,
Quiet) > f(Quiet, Quiet) > f(Fink, Fink) > f(Quiet, Fink). So we could as easily
score it 3, 2, 1, 0 rather than counting years out, or 32, 21, 9, 1; the score only
serves to order the choices.
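The best-response reasoning above can be sketched in Java. The payoff matrices below encode the crooks' scores; the class and variable names are illustrative, not from the program listings in this book:

```java
// Sketch: find the pure-strategy Nash equilibria of a 2x2 game by best
// response. Index 0 = quiet, 1 = fink; p1[r][c] is player 1's payoff when
// player 1 plays r and player 2 plays c, p2[r][c] is player 2's payoff.
public class BestResponse {
    public static void main(String[] args) {
        int[][] p1 = { {2, 0}, {3, 1} };   // row player (Crook 1)
        int[][] p2 = { {2, 3}, {0, 1} };   // column player (Crook 2)
        String[] names = { "quiet", "fink" };
        for (int r = 0; r < 2; r++) {
            for (int c = 0; c < 2; c++) {
                // is r a best response to c, and c a best response to r?
                boolean rBest = p1[r][c] >= p1[1 - r][c];
                boolean cBest = p2[r][c] >= p2[r][1 - c];
                if (rBest && cBest)
                    System.out.println("Nash equilibrium: " + names[r] + "/" + names[c]);
            }
        }
    }
}
```

Run against the prisoner's dilemma scores it reports only fink/fink, matching the table.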
A clearer example is a game in which we have 2 players moving pieces on a
3-d game board. Each player can move in the x, y, or z direction.
        X1      Y1      Z1
X2      2*,1*   0,1*    0,0
Y2      0,0     2*,1    1*,2*
Z2      1,2*    1,0     0,1
The * marks the best move for each player considering what the other player
does. If player 1 moves in the Y direction, player 2's best move is also in the Y
direction (2*,1). The squares where both players have *s are X1X2 and Z1Y2. These
are both Nash equilibria. Finding the Nash equilibrium this way is called
'Best Response'.
Suppose instead of set numbers I have a function that describes the payoff
for each player. I could have A(x) = y² and B(y) = 12x + xy; then to find the
Nash equilibrium I take the derivative of each function, set it to zero, solve, and
plot. Any and all places the functions cross on the plot are Nash equilibria.
For example, tic-tac-toe is one of the simplest games we all know. A game tree that
mapped all possible moves from start to finish would be 9!, or 362,880, nodes
large. The first player would have 9 choices of which box to play in, the second
8 choices since the first player had taken one, the first player's second move
would have 7 choices, etc. So the top level of nodes would have 9 choices. Each
level in the tree represents a turn in the game. The second level would have 8
nodes off of each of the original 9 nodes, and so on. So you can imagine what
chess or other more complicated games have as the number of possible moves.
Pruning is used to take sections off of the search tree that make no difference
to play. Heuristic (rule of thumb) evaluations allow approximations to save
search time. For instance, in the tic-tac-toe tree described above, once the first
player chooses a position to play, then the other 8 nodes of the top layer can be
trimmed off and only the 8 trees under that node need to be searched.
Since it is not usually practical to calculate each possible outcome, a cutoff
is usually put in place. As an example, for each board in play we can
calculate the advantage by adding up the point value of the pieces on the board
or adding points for position. Then the program can see which of those gives
the program a higher score. Then the program need only calculate five or so
moves ahead, calculate the advantage at each node, and choose the best path.
Rather than calculate ahead a set number of moves, the program can use an
iterative deepening approach and calculate until time runs out. A quiescent
search restricts the above approach: it eliminates moves that are likely to
cause wild swings in the score. The horizon problem occurs when searches do
not look ahead to the end of the game. This is a current unsolved problem in
game programming.
The Min-Max algorithm assumes a 'zero sum game', such as tic-tac-toe, where
what is good for one player is bad for the other player. This algorithm assumes
that both players will play perfectly and attempt to maximize their scores. The
algorithm only generates the trees on the nodes that are likely to be played.
Max is the computer, Min is the opposing player. It is assumed Max will get
first turn.
- Generate the entire game tree down to the maximum level to check.
- Generate each terminal state value; high values are most beneficial to Max,
negative values are most beneficial to Min, zero holds no advantage for
either player.
- Go up one level, giving the node above the previous layer the best score from
the prior layer.
- Continue up the tree one level at a time until the top is reached.
- Pick the node with the highest score.
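The steps above can be sketched as a short recursion over a generic game tree; the Node type here is a stand-in, not from any listing in this book:

```java
import java.util.List;

// Minimal Min-Max sketch: score the leaves, then back values up the tree,
// Max taking the highest child value and Min the lowest.
public class MiniMax {
    static class Node {
        int score;            // meaningful only at leaves
        List<Node> children;  // empty at leaves
        Node(int s, List<Node> c) { score = s; children = c; }
    }

    static int minimax(Node n, boolean maxTurn) {
        if (n.children.isEmpty()) return n.score;  // terminal state value
        int best = maxTurn ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node child : n.children) {
            int v = minimax(child, !maxTurn);
            best = maxTurn ? Math.max(best, v) : Math.min(best, v);
        }
        return best;
    }

    public static void main(String[] args) {
        // tiny two-ply tree: Max moves first, Min replies
        Node t = new Node(0, List.of(
            new Node(0, List.of(new Node(3, List.of()), new Node(5, List.of()))),
            new Node(0, List.of(new Node(2, List.of()), new Node(9, List.of())))));
        System.out.println(minimax(t, true));  // Min yields 3 or 2; Max picks 3
    }
}
```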
The Alpha-Beta method determines whether an evaluation should be made
of the top node by the Min-Max algorithm. It searches all of the nodes, like Min-
Max, then eliminates (prunes) those that are never going to be reached. The pro-
gram begins by proceeding with the Min-Max algorithm systematically through
the nodes of a tree. First we go down a branch of the tree and calculate the
score for that node. Then we proceed down the next branch. If the score at one
of the leaves is lower than the score obtained in a previous branch of the tree,
we don't finish evaluating all the nodes of the branch; rather, we move on to the
next branch. The search can be shallow rather than deep, saving time. Further
gains in speed can be made by caching the information from branches in a look-up
table, re-ordering results, extending some and shortening other searches,
using probabilities rather than actual numbers for cutoffs, or using parallel
algorithms.
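The same tree search with alpha-beta cutoffs might look like this; again a sketch with an illustrative Node type, not a listing from this book:

```java
import java.util.List;

// Alpha-beta sketch: the same backing-up of scores as Min-Max, but a branch
// is abandoned as soon as it can no longer affect the final choice.
public class AlphaBeta {
    static class Node {
        int score; List<Node> children;
        Node(int s, List<Node> c) { score = s; children = c; }
    }

    static int search(Node n, int alpha, int beta, boolean maxTurn) {
        if (n.children.isEmpty()) return n.score;
        for (Node child : n.children) {
            int v = search(child, alpha, beta, !maxTurn);
            if (maxTurn) alpha = Math.max(alpha, v);
            else beta = Math.min(beta, v);
            if (alpha >= beta) break;   // prune: the opponent will avoid this branch
        }
        return maxTurn ? alpha : beta;
    }

    public static void main(String[] args) {
        Node t = new Node(0, List.of(
            new Node(0, List.of(new Node(3, List.of()), new Node(5, List.of()))),
            new Node(0, List.of(new Node(2, List.of()), new Node(9, List.of())))));
        // same tree and answer as plain Min-Max, but the leaf worth 9 is never visited
        System.out.println(search(t, Integer.MIN_VALUE, Integer.MAX_VALUE, true));
    }
}
```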
Ordering may be used to save time as well. In chess, captures would be
considered first, followed by forward moves, followed by backward moves. Or,
ordering can consider the nodes with the highest values first.
The program must try to find a winning strategy that does not depend on
the human user's moves. Humans often make small goals and consider moves
that work toward that goal, i.e. capture the queen. David Wilkins' PARADISE is
the only program so far to do this successfully. Another approach is to use book
learning. Several boards are loaded into a table in memory, and if the same board
comes into play the computer can look up what to do from there. Monte
Carlo simulation has been used successfully in games with non-deterministic
information, such as Scrabble, dice, and card games.
Temporal-difference learning is derived from Samuel's machine learning research.
Several games are played out and kept in a database. This works well
with board games like backgammon, chess, and checkers. Neural nets can be
trained to play games this way, TD-Gammon being one of the more famous ones.
Most of the AI in games is scripted rather than programmed in traditional
languages, so it is an easy starting place for beginners. Python is the currently
preferred language. All the data is predefined in a file so the script can look
up the data. This means the script doesn't have to be changed whenever the
data is changed during play. This is especially useful for bot programming.
Chapter 4
Misc. AI
4.1.1 Hidden Markov Models
These have been used in speech recognition, handwriting recognition, and
currently in many bio-technology projects.
Markov Chain: a statistical technique that uses a weighted automaton, a
weighted directed graph, in which the input sequence uniquely determines the
path through the automaton to the output observed.
Hidden Markov Model: a weighted automaton in which more than one path is
possible for a specific input. The Viterbi algorithm is the most commonly used
algorithm for processing these models.
Viterbi Algorithm: traces through the state graph multiplying the probabilities;
if the probability from a previous level is higher, it backtracks. Example, for the
words: need (n-iy-d), neat (n-iy-t), new (n-uw), knee (n-iy)
The pronunciation graph, with a weight on each arc:
Begin -> n (1.0) -> iy (.64) or uw (.36); iy -> t (.24) or d (.315) -> End
Possible paths are:
new = n uw => 1.0 × .36 => .36
neat = n iy t => 1.0 × .64 × .24 => .128
need = n iy d => 1.0 × .64 × .445 => .178
knee = n iy => 1.0 × .64 × .315 => .2016
The first loop checks n uw (1.0 × .36 = .36) and n iy (1.0 × .64 = .64).
.64 is the higher probability, so we pursue that.
The next pass gives us iy t (.64 × .24 = .128), iy (.64 × .315 = .2016), and
iy d (.64 × .445 = .178).
But these are smaller than the .36 we collected as a high probability in the
previous pass, so we backtrack to that. If there were more levels through our
graph we would continue this loop until reaching the end.
The probabilities are calculated as so:
weight = −log(actual probability), so if the probability of n uw is .44, the
graph weight is −log(.44) => .36
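The path comparison can be sketched in Java. The arc weights are taken from the example above, but the products are computed exactly here, so one value differs slightly from the text's rounded figure; the class name is illustrative:

```java
import java.util.*;

// Viterbi-style sketch for the word lattice above: each candidate word is a
// path of arc weights, and the best word is the path with the highest product.
public class ViterbiSketch {
    public static void main(String[] args) {
        Map<String, double[]> paths = new LinkedHashMap<>();
        paths.put("new",  new double[]{1.0, 0.36});         // n uw
        paths.put("neat", new double[]{1.0, 0.64, 0.24});   // n iy t
        paths.put("knee", new double[]{1.0, 0.64, 0.315});  // n iy (end)
        String best = null; double bestP = 0.0;
        for (Map.Entry<String, double[]> e : paths.entrySet()) {
            double p = 1.0;
            for (double step : e.getValue()) p *= step;  // multiply along the path
            System.out.printf(Locale.US, "%s = %.4f%n", e.getKey(), p);
            if (p > bestP) { bestP = p; best = e.getKey(); }
        }
        System.out.println("best: " + best);
    }
}
```

As in the worked example, the n-uw path at .36 beats the longer n-iy paths.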
4.2 Fuzzy Stuff
Fuzzy logic has had great success in running machinery that is computer operated.
For instance, if I write a program to control the thermostat in my home I
can set it for 'cool'. Coming out of winter into summer, 60°F feels cool. Going
from summer into fall, 70°F feels cool. I might describe cool as between 50° and
70°, warm as between 60° and 80°, cold as anything less than 60°, and hot as anything
over 70°. So the computer doesn't get it when I say I would like the home to be
warm. Should it be 60°? But that is also cool.
Softening and fuzzing the data enables the computer to deal with
overlapping or otherwise not clear-cut data. It also keeps the machine from
jumping about too much when the inputs change. Groups of overlapping data
are hard coded into software along with rules for fuzzing it.
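One common way to hard-code such an overlapping group is a triangular membership function. This sketch uses illustrative breakpoints loosely based on the temperature ranges above:

```java
// Triangular fuzzy membership for the thermostat example: 'cool' ramps up
// from 50°F, peaks at 60°F, and fades out by 70°F, so 60° is fully cool
// while 65° is only half cool (and, by a similar 'warm' curve, half warm).
public class FuzzyCool {
    static double cool(double t) {
        if (t <= 50 || t >= 70) return 0.0;
        return t <= 60 ? (t - 50) / 10.0   // rising edge
                       : (70 - t) / 10.0;  // falling edge
    }

    public static void main(String[] args) {
        System.out.println(cool(60));  // 1.0, fully cool
        System.out.println(cool(65));  // 0.5, partly cool
        System.out.println(cool(75));  // 0.0, not cool at all
    }
}
```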
4.3 Evolutionary AI
Genetic programs create individual programs that compete for survival. Those
that do well reproduce, usually with another program that did well. The child
gets a random mix of traits from the parents and may do better or worse than
the parents. Some of these programs are written for specic problem solving
ability, others for general skills. Often a mutation will be thrown in that will
eect a very small percentage of the population.
The simplest of these is Life. A grid of squares is laid out and life multiplies
or dies off depending on the number of occupied neighboring cells. The newer,
more complex versions have genetic code that children inherit as subroutines
from both parents, a bit of randomness mixed in, and they compete in a survival
of the fittest environment. The hope is that after many generations we will have
intelligence.
Artificial societies are also being used to study and predict what real world
societies will do. Using a simple version of Life you can change the rules and
mimic real world situations. This method is also being used by archaeologists
to determine what caused the rises and falls of civilizations gone by. In 1971
an economist, Thomas C. Schelling, used such a method to show how neighborhoods
can segregate even without strongly racist individual preferences. Usually a few
very simple rules are all that are needed to have real life simulations develop.
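A one-dimensional sketch of Schelling's idea; the grid size, tolerance rule, and random seed are illustrative choices, not from the text:

```java
import java.util.Random;

// Schelling-style segregation model on a ring: agents of two types move to a
// random empty cell whenever fewer than half of their occupied neighbor
// cells match their own type. Even this mild preference produces clusters.
public class Schelling {
    public static void main(String[] args) {
        Random r = new Random(42);
        int n = 60;
        int[] cell = new int[n];                 // 0 = empty, 1 or 2 = agent type
        for (int i = 0; i < n; i++) cell[i] = r.nextInt(3);
        for (int step = 0; step < 2000; step++) {
            int i = r.nextInt(n);
            if (cell[i] == 0) continue;
            int left = cell[(i + n - 1) % n], right = cell[(i + 1) % n];
            int occupied = (left != 0 ? 1 : 0) + (right != 0 ? 1 : 0);
            int same = (left == cell[i] ? 1 : 0) + (right == cell[i] ? 1 : 0);
            if (occupied > 0 && same * 2 < occupied) {   // unhappy agent moves
                int j = r.nextInt(n);
                if (cell[j] == 0) { cell[j] = cell[i]; cell[i] = 0; }
            }
        }
        // like-typed agents end up grouped into runs
        StringBuilder sb = new StringBuilder();
        for (int c : cell) sb.append(c == 0 ? '.' : (c == 1 ? 'x' : 'o'));
        System.out.println(sb);
    }
}
```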
Game of Life by Conway, in Java. A grid of squares is randomly marked
on or off. If a square has fewer than 2 neighbors it dies of loneliness, if it has 2
neighbors it stays the same, if it has 3 neighbors a birth occurs, and if it has more
than 3 neighbors it dies of overcrowding.
//www.timestocome.com
import java.awt.*;
import java.awt.event.*;
import java.util.*;
super ( " Life " );
setBounds ( 0, 0, xdim, ydim );
setVisible ( true );
addWindowListener ( this );
animThread = new Thread ( this );
animThread.start();
setupBoard( startposition );
if ( setup == 1 ){ //random
Random r = new Random ( System.currentTimeMillis() );
for ( int i=0; i<gridsize; i++ ){
for ( int j=0; j<gridsize; j++ ){
if ( r.nextInt()%2 == 0 ){
board[i][j] = 1;
}
}
}
board[center+1][center+1] = 1;
board[center-1][center+2] = 1;
board[center+1][center+2] = 1;
board[center][center+3] = 1;
board[center+3][center+1] = 1;
board[center+4][center+1] = 1;
board[center+1][center+2] = 1;
board[center+3][center+2] = 1;
board[center-1][center+3] = 1;
board[center+1][center+3] = 1;
board[center+3][center+3] = 1;
board[center+5][center+3] = 1;
board[center-1][center+4] = 1;
board[center+1][center+4] = 1;
board[center+3][center+4] = 1;
board[center+5][center+4] = 1;
board[center-1][center+5] = 1;
board[center][center+5] = 1;
board[center+4][center+5] = 1;
board[center+5][center+5] = 1;
int count;
int oldboard[][] = new int[gridsize][gridsize];
//for each square check surrounding squares and get a count of neighbors
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++ ){
count = 0;
//count neighbors but don't run off edge
if ( i>0 ){ if ( j>0 ){ count += oldboard[i-1][j-1]; } }
if ( i>0 ){ count += oldboard[i-1][j]; }
if ( i>0 ){ if ( j<(gridsize-1) ){ count += oldboard[i-1][j+1]; }}
}
}
}
//animation loop
public synchronized void run()
{
while ( true ) { //keep animating forever!!!
try {
updateBoard(); // calc location
Thread.sleep( delay ); // after if happens, wait a bit
repaint( 0L ); // request redraw
wait(); // wait for redraw
{
//draw background
g.setColor( Color.white );
g.fillRect( 0, 0, xdim, ydim );
int x = 15;
int y = 35;
//draw board
for ( int i=0; i<gridsize; i++ ){
for ( int k=0; k<gridsize; k++ ){
notifyAll();
}
Another example is flocking. This is done by creating a flock of animals and
having them follow 3 rules: move in the same direction as the rest of the flock;
move to position yourself in the center of the flock; don't wipe out the other guys.
The larger the flock, the more interesting the behavior you will see.
//www.timestocome.com
//flocking example
//rules
//1. Avoid collisions with other birds
//2. Attempt to match velocity to rest of the group
//3. Attempt to remain in center of the flock
import java.awt.*;
import java.awt.event.*;
import java.util.*;
{
super ( " Flocking " );
setBounds ( 0, 0, xdim, ydim );
setVisible ( true );
addWindowListener ( this );
animThread = new Thread ( this );
animThread.start();
}
}
//***********************************************************************************
//update here
void updateBoard()
{
int oldboard[][] = board;
int oldbirds[][] = birds;
/*
//send data to user while debugging code
for ( int i=0; i<flocksize; i++){
System.out.println ( birds[i][0] + ", " + birds[i][1] );
}
System.out.println ( "center " + xcenter + ", " + ycenter );
System.out.println ( "direction " + xdirection + ", " + ydirection );
System.out.println();
*/
int x = 0; int y = 0;
else if ( birds[i][1] < gridsize-edgebuffer ) { y++; }
}
//update bird
birds[i][0] += x;
birds[i][1] += y;
//recalculate center
xcenter = 0; ycenter = 0;
for ( int i=0; i<flocksize; i++ ) {
xcenter += birds[i][0];
ycenter += birds[i][1];
}
xcenter /= flocksize;
ycenter /= flocksize;
//update board
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++ ){
if ( board[i][j] == 1 ) { board[i][j] = 0; }
}
}
//board[xcenter][ycenter] = 2;
}
//***********************************************************************************
//animation loop
public synchronized void run()
{
while ( true ) { //keep animating forever!!!
try {
updateBoard(); // calc location
repaint( 0L ); // request redraw
wait(); // wait for redraw
Thread.sleep( delay ); // after if happens, wait a bit
g.fillRect( 0, 0, xdim, ydim );
int x = 5;
int y = 25; //clear title bar
//draw board
for ( int i=0; i<gridsize; i++ ){
for ( int k=0; k<gridsize; k++ ){
y += 5;
x = 5;
}
notifyAll();
}
}
This is the same as Conway's Life above, except I added in dna to make
things a bit more interesting. A day counter moves along a dna strand, 2 marks
per day. When a child is born, a mix of both parents' dna makes up the baby's.
The longer a creature lives, the brighter its color is: red for one sex, blue for the
other.
import java.util.*;
x = r.nextInt( xMax );
y = r.nextInt( yMax );
void babydna ( int babydna[] )
{
dna = babydna;
}
}
import java.awt.*;
import java.awt.event.*;
import java.util.*;
setupBoard();
void setupBoard()
{
//erase board
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++ ){
board[i][j] = empty;
}
}
//position on gameboard
int x = ((Creature)creatures.elementAt(i)).x;
int y = ((Creature)creatures.elementAt(i)).y;
if ( board[x][y] == empty ){
board[x][y] = i;
}else{ //someone has this spot pick a new one
boolean done = false;
int dead=0;
int born=0;
generations++;
//clear board
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++){
board[i][j] = empty;
}
}
//update date
if ( day > 9 ){ day = 0; }else{ day++; }
int mark = day*2; //dna marker
int neighbors = 0;
int neighborhood[] = new int[8];
for ( int k=0; k<8; k++){ neighborhood[k] = empty; }
//count neighbors
if ( i > 0 ){ if ( board[i-1][j] != empty ){
neighborhood[neighbors] = board[i-1][j];
neighbors++; } } //north
if (( i > 0 ) && ( j < gridsize-1 )){ if ( board[i-1][j+1] != empty ){
neighborhood[neighbors] = board[i-1][j+1];
neighbors++; }} // north east
if ( neighborhood[k] != empty ){
//?found a mate?
if (((Creature)creatures.elementAt(neighborhood[k])).sex != sex ){
if ( board[row][col] == empty ){
creatures.add ( c );
born++;
done = true;
while ( !done ){
count++;
//take dna from parents for baby
Random r = new Random ( System.currentTimeMillis());
int dnaMix = r.nextInt(20);
int newDna[] = new int[20];
//first parent
for ( int p1=0; p1<dnaMix; p1++){
newDna[p1] = ((Creature)creatures.elementAt(board[i][j])).dna[p1];
}
//second parent
for ( int p2=dnaMix; p2<20; p2++){
newDna[p2] = ((Creature)creatures.elementAt(neighborhood[k])).dna[p2];
}
//pass dna onto baby
((Creature)creatures.lastElement()).babydna(newDna);
}//for j
}//for i
//remove dead
int length = creatures.size();
for ( int i=0; i<length; i++){
if ( ((Creature)creatures.elementAt(i)).alive == false ){
creatures.removeElementAt(i);
length--;
}
}
//********* need to redraw board here to reflect new vector numbers ****************//
//erase board
for ( int i=0; i<gridsize; i++){
for ( int j=0; j<gridsize; j++){
board[i][j] = empty;
}
}
//position on gameboard
int x = ((Creature)creatures.elementAt(i)).x;
int y = ((Creature)creatures.elementAt(i)).y;
board[x][y] = i;
}
//************************************************************************************//
System.out.println ( " generation " + generations + " vector size " + creatures.size() +
" dead " + dead );
}
//animation loop
public synchronized void run()
{
while ( true ) { //keep animating forever!!!
try {
//draw background
g.setColor( Color.white );
g.fillRect( 0, 0, xdim, ydim );
Color background = new Color ( 210, 210, 210 );
Color color = new Color ( 0, 100, 0 );
int age=0, sex=0;
//margins
int x = 5;
int y = 25;
//first make sure we initialized everything this keeps graphics thread from
//trying to draw board before we've set up initial conditions
if ( creatures.size() > 0 ){
g.setColor ( background ); // if square not empty this gets changed
sex = ((Creature)creatures.elementAt(mark)).sex;
age = ((Creature)creatures.elementAt(mark)).age;
if ( sex == 0 ){
if ( age < 255 ){ color= new Color( age*2, 0, 0 );
}else{ color = new Color ( 255, 0, 0 ); }
}else{
if ( age < 255 ) { color = new Color ( 0, 0, age*2 );
}else{ color = new Color ( 0, 0, 255 ); }
}
g.setColor ( color );
}
x += boxsize+1;
}
y += boxsize +1;
x = 5;
}
notifyAll();
}
public void windowClosing(WindowEvent ev)
{
animThread = null;
setVisible(false);
dispose();
System.exit(0);
}
homogeneous pixels that differ by no more than a small amount. Adjacent
regions are not homogeneous. Split and Merge [Horowitz & Pavlidis] is one
such method. The whole image is split into equal parts; these are tested for
homogeneity, and if the regions are not homogeneous then the splitting continues
until all the regions are homogeneous. Regions are then merged with other
regions that are homogeneous with them. This method and the one above
run into many problems with differentiating shadows from edges.
Now scene analysis is done to extrapolate a scene from the information
gathered. For this part more information is needed: other scenes, stereo vision,
or positions of the moving camera. In one method a line drawing is extrapolated
and the junctions of the lines are matched to table entries to determine if the
object extends outward or inward. If the scene contains well known objects, the
objects may be stored as line drawings in a table to be matched.
Common smoothing and edge filters, as functions of the pixel offsets (x, y),
with sigma the standard deviation:
Gaussian: (1 / (2·Pi·sigma²)) · e^(−(x² + y²) / (2·sigma²))
average: (X_i + X_j + X_k + ...) / NumberOfXs
standard deviation: sqrt( ((X_i − mean)² + (X_j − mean)² + ...) / NumberOfXs )
The standard deviation describes a bell shaped curve. This gives closer
pixels a higher weight factor.
Laplacian: ∂²F(x,y)/∂x² + ∂²F(x,y)/∂y²
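All of these filters are applied the same way, by convolving a small kernel over the image. A sketch with a 3x3 averaging kernel; the image values and class name are illustrative:

```java
import java.util.Locale;

// Sketch of applying a convolution kernel to a grayscale image, the
// operation behind the averaging, Gaussian, and Laplacian filters above.
public class Convolve {
    static double[][] convolve(double[][] img, double[][] k) {
        int h = img.length, w = img[0].length;
        double[][] out = new double[h][w];
        for (int y = 1; y < h - 1; y++)            // skip the border pixels
            for (int x = 1; x < w - 1; x++)
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        out[y][x] += img[y + dy][x + dx] * k[dy + 1][dx + 1];
        return out;
    }

    public static void main(String[] args) {
        double n = 1.0 / 9.0;                      // averaging (box) kernel
        double[][] avg = { {n, n, n}, {n, n, n}, {n, n, n} };
        double[][] img = {
            {0, 0, 0, 0}, {0, 9, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0} };
        double[][] out = convolve(img, avg);
        // the single bright pixel is smeared across its 3x3 neighborhood
        System.out.printf(Locale.US, "%.2f%n", out[1][1]);  // prints 1.00
    }
}
```

A Gaussian or Laplacian filter is the same loop with different kernel values.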
unit for each feature of the environment it needs. The input consists of an input
for each sense, plus the input from the hidden units. There is an output for
each possible action that can be taken. Another way to store the information
is with a map that is updated. This is known as 'iconic representation' and is
used for software agents.
files stored on a user's hard drive. It uses color, sound, and shape to map the
whole drive on the screen in front of the user for easy cleanups.
Example:
P(A) is the event of a person having cancer (10%)
P(B) is the event of person being a smoker (50%)
P(B|A) is the percent cancer patients who smoke (80%)
We wish to know the likelihood of a smoker having cancer:
P(A|B) = P(B|A) · P(A) / P(B) = (.8 × .1) / .5 = .16, or a 16% chance.
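The same calculation in code, with the figures from the text (the class name is illustrative):

```java
import java.util.Locale;

// Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)
public class BayesExample {
    public static void main(String[] args) {
        double pA = 0.10;        // P(A): person has cancer
        double pB = 0.50;        // P(B): person is a smoker
        double pBgivenA = 0.80;  // P(B|A): cancer patients who smoke
        double pAgivenB = pBgivenA * pA / pB;
        System.out.printf(Locale.US, "%.2f%n", pAgivenB);  // prints 0.16
    }
}
```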
A Bayesian network is an acyclic graph; an acyclic graph cannot
cycle back to previous conditions. Its nodes, occurrences, contain the possible
outcomes and tables of the probabilities of each given the inputs to the node.
The connecting edges contain the effects of occurrences on one another. The
probabilities of all occurrences must total 100%, and all occurrences must be
accounted for. A node must be conditionally independent of any subset of
nodes that are not descendants of it; this reduces the number of possibilities
for each node that must be calculated.
There are three commonly used patterns of inference in Bayes networks:
top-down, which uses a chain rule to add up probabilities; bottom-up, which uses
Bayes' rule; and a hybrid system. All of these use recursion in the algorithms,
making them computationally intensive.
Children of a parent node can be independent of each other, none of them
contributing to the probabilities of another. In that case the parent is said to
d-separate them. This can be used to cut down the number of calculations
needed to work through the net.
The network is trained by giving the likely probabilities to seed it. When
something new happens, the probabilities are re-evaluated. This causes all the
probabilities to be re-calculated; remember, they must total 100%. The network
structure must also be redetermined. Often this can be done before training
occurs. Hidden nodes can sometimes help reduce the size of the network.
Chapter 5
Reasoning Programs and Common Sense
If it is legal to apply this operator, do so; else determine a situation that
can use that operator and set it as a short term goal.
Reasoning Programs are the next step in this line of research. They will
need to be able to sense and/or find information about their environment, to
prove whether or not solutions exist to problems given them. They will need to
be able to reason out steps from initial situations to goals. But they will need
a language to define objects, goals, operators, logic, temporality, states of
being, and other things in order to accomplish this.
Common sense is very different from intelligence or education. Some people
have one, two, or all three of these qualities. Teaching and testing for common
sense has not progressed well with people and will probably not do well with
computers until we have a greater understanding of exactly what common sense
is, how it is acquired, and how it can be tested for. One problem in developing
these systems is putting common sense into a language that is easily
understood by people and computers. A second major problem has been
representing time and changes that occur over time. Common sense seems to be
learned from doing rather than being taught, so it may be that agents may gain
common sense about the computer network they exist on, or further down the
line robots may gain a bit of what we consider common sense about our world.
First Order Logic (first-order predicate calculus)
This consists of objects, predicates on objects, connectives, and quantifiers.
Predicates are the relations between objects, or properties of the objects.
Connectives and quantifiers allow for universal sentences. Relations between objects
can be true or false, as can the objects themselves. The program may not
know whether something is true or false, or may give it a probability of truth or
falsity.
Procedural Representation
This method of knowledge representation encodes facts along with the sequence
of operations for manipulating and processing the facts. This is what expert
systems are based on. Knowledge engineers question experts in a given field and
code this information into a computer program. It works best when experts
follow set procedures for problem solving, i.e. a doctor making a diagnosis.
The most popular of the Procedural Representations is the Declarative. In the
Declarative Representation the user states facts, rules, and relationships. These
represent pure knowledge. This is processed with hard coded procedures.
Relational Representation
Collections of knowledge are stored in table form. This is the method used for
most commercial databases, Relational Databases. The information is manipulated
with relational calculus using a language like SQL. This is a flexible way
to store information, but not good for storing complex relationships. Problems
arise when more than one subject area is attempted. A new knowledge base
has to be built from scratch for each area of expertise.
Hierarchical Representation
This is based on inherited knowledge and the relationships and shared attributes
between objects. It is good for abstracting or granulating knowledge. Java and
C++ are based on this.
Semantic Net
A data graph structure is used, and concrete and abstract knowledge is
represented about a class of problems. Each net is designed to handle a specific
problem. The nodes are the concepts, features, or processes. The edges are the
relationships (is a, has a, begins, ends, duration, etc.). The edges are bidirectional;
backward edges are called 'Hyper Types' or 'Back Links'. This allows backward
and forward walking through the net. The reasoning part of the nets includes:
expert systems; blackboard architecture; and a semantic net description of the
problem. These are used for natural language parsing and databases.
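A semantic net of this kind can be sketched in a few lines of Java. The SemanticNet class, the relation labels, and the canary/bird facts below are illustrative assumptions, not part of any particular system:

```java
import java.util.*;

// A minimal semantic net: nodes are concepts, labeled edges are relations.
// A reverse edge ("back link") is stored automatically so the net can be
// walked both forward and backward, as described in the text.
class SemanticNet {
    // node -> relation label -> set of target nodes
    private final Map<String, Map<String, Set<String>>> edges = new HashMap<>();

    void relate(String from, String label, String to) {
        edges.computeIfAbsent(from, k -> new HashMap<>())
             .computeIfAbsent(label, k -> new LinkedHashSet<>()).add(to);
        // back link lets us walk the edge in reverse
        edges.computeIfAbsent(to, k -> new HashMap<>())
             .computeIfAbsent("back:" + label, k -> new LinkedHashSet<>()).add(from);
    }

    Set<String> targets(String from, String label) {
        return edges.getOrDefault(from, Map.of()).getOrDefault(label, Set.of());
    }

    public static void main(String[] args) {
        SemanticNet net = new SemanticNet();
        net.relate("canary", "is a", "bird");
        net.relate("bird", "has a", "wings");
        System.out.println(net.targets("canary", "is a"));    // forward walk
        System.out.println(net.targets("bird", "back:is a")); // backward walk
    }
}
```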
Predicate Logic and Propositional Logic
Most of the logic done with AI is predicate logic. It is used to represent objects,
functions, and relationships. Predicate logic allows representation of complex
facts about things and the world (If A then B). A 'knowledge base' is a set of
facts about the world called 'sentences'. These are put in a form of 'knowledge
representation language'. The program will 'ASK' to get information from the
knowledge base and 'TELL' to put information into the knowledge base. Using
objects, the relations between them, and their attributes, almost all knowledge can
be represented, though this approach does not do well at deriving new knowledge.
The knowledge representation must take perceptions and turn them into
sentences for the program to be able to use them, and it must take queries and
put them into a form the program can understand.
Frames
Each frame has a name and a set of attribute-value pairs called slots. The
frame is a node in a semantic network. Hybrid frame systems are meant to
overcome serious limitations in current setups. They work much like an object
oriented language: a frame contains an object, its attributes, its relationships, and
its inherited attributes. This is much like Java classes, where we have a main class
and subclasses that have attributes, relationships, and methods for use.
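The comparison to Java classes can be made concrete. Below is a minimal hypothetical Frame class in which a slot lookup falls back to the parent frame, mirroring inheritance; the bird/canary frames are invented for illustration:

```java
import java.util.*;

// A frame is a named node with slots (attribute-value pairs); a frame
// inherits slots from its parent, much as a Java subclass inherits members.
class Frame {
    final String name;
    final Frame parent;                       // "is a" link, may be null
    final Map<String, String> slots = new HashMap<>();

    Frame(String name, Frame parent) { this.name = name; this.parent = parent; }

    void put(String slot, String value) { slots.put(slot, value); }

    // Look up a slot locally, then walk up the inheritance chain.
    String get(String slot) {
        if (slots.containsKey(slot)) return slots.get(slot);
        return parent == null ? null : parent.get(slot);
    }

    public static void main(String[] args) {
        Frame bird = new Frame("bird", null);
        bird.put("covering", "feathers");
        Frame canary = new Frame("canary", bird);
        canary.put("color", "yellow");
        System.out.println(canary.get("color"));    // own slot
        System.out.println(canary.get("covering")); // inherited from bird
    }
}
```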
A logic has a language, inference rules, and semantics. Two logical languages
are propositional calculus and predicate calculus. Propositional calculus, which
is a descendant of boolean algebra, is a language that can express constraints
among objects, values of objects, and inferences about objects and values of
objects.
The elements of propositional calculus are:
Atoms the smallest elements
Connectives or, and, implies, not
Sentences aka 'well-formed formulas', wffs
The legal wffs:
disjunction or
conjunction and
implication implies
negation not
Rules of inference are used to produce other wffs:
modus ponens (x AND (x implies y)) implies y
AND introduction x, y implies (x AND y)
AND commutativity (x AND y) implies (y AND x)
AND elimination (x AND y) implies x
OR introduction x implies (x OR y)
NOT elimination NOT (NOT x) implies x
resolution combines rules of inference into one rule, example: (x OR y) AND
(NOT y OR z) implies (x OR z)
Horn clauses a clause having at most one true (positive) literal; there are three
types: a single atom (q); an implication or rule (p AND q => r); and a set
of negative literals (p AND q =>). These have linear time algorithms.
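Since each rule of inference above is a tautology, it can be checked mechanically by enumerating truth values. The short Java sketch below verifies modus ponens and resolution this way; the class and method names are ours, not from the text:

```java
// Verify two inference rules from the list above by brute-force enumeration:
// a rule is sound exactly when its implication is true for every assignment.
public class InferenceRules {
    // modus ponens: (x AND (x implies y)) implies y
    static boolean modusPonens(boolean x, boolean y) {
        return !(x && (!x || y)) || y;
    }
    // resolution: ((x OR y) AND (NOT y OR z)) implies (x OR z)
    static boolean resolution(boolean x, boolean y, boolean z) {
        return !((x || y) && (!y || z)) || (x || z);
    }
    public static void main(String[] args) {
        boolean allValid = true;
        for (boolean x : new boolean[]{true, false})
            for (boolean y : new boolean[]{true, false}) {
                allValid &= modusPonens(x, y);
                for (boolean z : new boolean[]{true, false})
                    allValid &= resolution(x, y, z);
            }
        System.out.println("tautologies: " + allValid); // prints tautologies: true
    }
}
```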
Definitions
Semantics associations of elements of a language with the elements of the
domain
Propositions a statement about an atom, example: 'The car is running'; the
car is the atom, 'is running' is the proposition
interpretation the association of the proposition with the atom
denotation in a given interpretation, the proposition associated with the atom
is the denotation
value TRUE or FALSE, given to an atom
knowledge base a collection of propositional calculus statements that are true
in the domain
propositional satisfiability, aka PSAT a model for the formula that comprises
the conjunction of all the statements in the set
Predicate Calculus takes propositional calculus further by allowing statements
about propositions as well as about objects. This is first-order
predicate calculus.
Contains:
object constants strings of characters naming terms: xyz, linda, paris
function constants divided by, distance to/from
relation constants larger than, small, big, blue
functional expression examples: distance(here, there); xyz
worlds can have infinite objects, functions on objects, relations over objects
interpretations map object constants into objects in the world
quantifiers can be universal or apply to a selected object or group of objects
Predicate calculus is used to express mathematical theories. It consists of
sentences, inference rules, and symbols. First-order predicate calculus symbols
consist of variables about which a statement can be made, logic symbols (and,
or, not, for all, there exists, implies), and punctuation ( '(', ')' ).
If we have a set S in which all of the statements are true then S is a model.
If S implies U then U is true for all models of S and NOT U is false for all
models of S. If we make a set S' which has all of the statements of S plus the
statement NOT U, it is not a model: all statements in a model must be true, and S'
is unsatisfiable since there is no way for the statements of S and the statement
NOT U, both of which are in S', to be true at the same time. This is used to
prove formulas in theorem proving. To show S implies U it is sufficient to show
S' = S plus NOT U is unsatisfiable.
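For propositional sentences this refutation technique can be checked by brute force. The sketch below uses an invented two-atom example, S = { p, p implies q } and U = q, and confirms that S' = S plus NOT q has no satisfying assignment:

```java
// Checks that S' = { p, p implies q, NOT q } has no satisfying assignment,
// which by refutation proves that S = { p, p implies q } implies q.
public class Unsat {
    public static void main(String[] args) {
        boolean satisfiable = false;
        for (boolean p : new boolean[]{true, false})
            for (boolean q : new boolean[]{true, false}) {
                // conjunction of all statements in S'
                boolean sPrime = p && (!p || q) && !q;
                if (sPrime) satisfiable = true;
            }
        System.out.println("S' satisfiable: " + satisfiable); // prints S' satisfiable: false
    }
}
```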
Resolution and unification
Resolution: prove A is true by showing that NOT A leads to a contradiction.
Unification: take two predicate logic sentences and, using substitutions, make
them the same. Unification is the single operation that can be done on data
structures (expression trees) in Prolog. These are the techniques used to process
predicate logic knowledge and they are the basis for Lisp and Prolog.
Resolution is one way to prove unsatisfiability.
First replace each implication with its clause form equivalent: (D implies
C) becomes (NOT D OR C).
Then take each NOT and apply it individually to all the symbols: NOT (D
AND C) becomes (NOT D OR NOT C).
Remove all dummy variables so each item is represented by only one symbol.
(If G or H can represent dogs, replace all the G's with H's or all the
H's with G's so the representation is consistent.)
Using a Skolem function, remove all the 'there exists' quantifiers: replace
'for all x, for all y, there exists z such that P(x, y, z)' with 'for all x, for
all y, P(x, y, g(x, y))', where g is the Skolem function.
Last, remove all of the 'for all' universal quantifiers (expand the formula).
Another method for proving unsatisfiability is the unification procedure.
This uses a substitution, for example the substitution
B = { g(z)/x, a/y }
applied to C = P(x, y) AND Q(b, y) yields P(g(z), a) AND Q(b, a).
5.3 Knowledge based/Expert systems
There are knowledge based agents and expert systems that reason using rules
of logic. These systems do what an expert in a given field might do: tax
consulting, medical diagnosis, etc. They do well at the type of problem solving
that people go to a university to learn. Usually predicate calculus is used to
work through a given problem. This type of problem solving is known as 'system
inference'. The program should be able to infer relationships, functions between
sets, some type of grammar, and some basic logic skills. The system needs to
have three major properties: soundness, confidence that a conclusion is true;
completeness, the system has the knowledge to be able to reach a conclusion;
and tractability, it is realistic that a conclusion can be reached.
Reasoning is commonly done with if-then rules in expert systems. Rules
are easily manipulated: forward chaining can produce new facts, and backward
chaining can check a statement's accuracy. The newer expert systems are set up
so that users, who are not programmers, can add rules and objects and alter
existing rules and objects. This provides a system that can remain current and
useful without having to have a full time programmer working on it.
There are three main parts to an expert system: the knowledge base, a set of if-
then rules; working memory, a database of facts; and the inference engine, the
reasoning logic that applies the rules to the data.
The knowledge base is composed of sentences. Each sentence is a represen-
tation of a fact or facts about the world the agent exists in or facts the expert
system will use to make determinations. The sentences are in a language known
as the knowledge representation language.
Rule learning for knowledge based and expert systems is done with either
inductive or deductive reasoning. Inductive learning creates new rules that are
not derivable from previous rules about a domain. Deductive learning creates
new rules from existing rules and facts.
Rules are made of antecedent clauses (if), conjunctions (and, or), and consequent
clauses (then). A rule in which all antecedent clauses are true is ready to
fire, or triggered. Rules are generally named for ease of use and usually have a
confidence index. The confidence index (certainty factor) shows how true something
is, e.g. 100% a car has four wheels, 50% a car has four doors. Sometimes
sensors are also part of the system; they may monitor states in the computer or
the environment. The Rete algorithm is the most efficient of the forward chaining
algorithms.
Reasoning can be done using 'Horn Clauses'; these are first-order predicate
calculus statements that have, at most, one true literal. Horn clauses have
linear time algorithms, which allows for a faster method of reasoning
through lots of information. This is usually done with Prolog or Lisp. Clauses
are ordered as: goal, facts, rules. Rules have one or more negative literals
and one positive literal that can be strung together in conjunctions that imply
a true literal. A fact is a rule that has no negative literals. A list of positive
literals without a consequent is a goal. The program loops, checking the list in
order; when a resolution is performed a new loop is begun with that resolution.
If the program resolves its goal, the proof can be given in tree form, an 'and/or
tree'.
Nonmonotonic reasoning is used to fix problems created by a change in
information over time: more information coming in negates a previous conclusion
and a new one needs to be drawn.
A conflict resolution process must be put in place as well to deal with
conflicting information. This can be done by: first come, first served; keeping
the most specific rule; triggering the rule with the most recently changed data;
or taking a rule out of the conflict resolution set once it has been resolved.
Forward chaining takes the available facts and rules and deduces new facts,
which it then uses to deduce more new facts or invoke actions. Forward chaining
can also be done by simply applying if-then statements. The Rete
algorithm is currently the most efficient at forward chaining; it compiles
the rules into a network that it traverses efficiently. This is similar to the
blackboard systems.
Dynamic knowledge bases, known as truth maintenance systems, may be
used. These use a 'spreadline', which is similar to a spreadsheet that will
calculate missing and updated values as other values are entered.
General algorithm, forward chaining:
load rule base into memory
load facts into memory
load initial data into memory
match rules to data and collect triggered rules
LOOP
if conflict resolution done BREAK
use conflict resolution to resolve conflicts among rules
fire selected rules
END LOOP
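The loop above can be sketched in Java. This naive version skips the conflict-resolution step (every triggered rule simply fires), and the two plant-care rules are invented for illustration:

```java
import java.util.*;

// Naive forward chainer over if-then rules: repeatedly fire any rule whose
// antecedents are all known facts, until no new fact can be deduced.
public class ForwardChain {
    record Rule(List<String> antecedents, String consequent) {}

    static Set<String> chain(Set<String> facts, List<Rule> rules) {
        boolean changed = true;
        while (changed) {                     // LOOP ... END LOOP from the text
            changed = false;
            for (Rule r : rules)
                if (facts.containsAll(r.antecedents()) && facts.add(r.consequent()))
                    changed = true;           // rule fired, new fact deduced
        }
        return facts;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule(List.of("leaves yellow", "soil wet"), "over-watered"),
            new Rule(List.of("over-watered"), "reduce watering"));
        Set<String> facts = new HashSet<>(List.of("leaves yellow", "soil wet"));
        // the chained conclusion "reduce watering" is reachable in two steps
        System.out.println(chain(facts, rules).contains("reduce watering")); // prints true
    }
}
```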
Backward chaining evaluates a goal and moves backward through the rules
to see if it is true. An example is a medical diagnosis expert system that takes in
information from questions and then returns a diagnosis. Prolog systems are
backward chaining.
General algorithm, backward chaining:
load rule base into memory
load facts into memory
load initial data
specify a goal
load rules specific to that goal onto a stack
LOOP
if stack is empty BREAK
pop stack
WHILE MORE ANTECEDENT CLAUSES
if antecedent is false, pop stack and NEXT WHILE
if antecedent is true, fire rule and NEXT WHILE
if antecedent is unknown, PUSH onto stack (we may later ask the user for more
information about this antecedent)
END LOOP
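The same idea can be sketched recursively in Java: a goal is proved if it is a known fact, or if some rule concluding it has all of its antecedents provable in turn. The rule base and facts below are invented for illustration:

```java
import java.util.*;

// Recursive backward chainer: work from the goal back through the rules.
public class BackwardChain {
    record Rule(List<String> antecedents, String consequent) {}

    static boolean prove(String goal, Set<String> facts, List<Rule> rules) {
        if (facts.contains(goal)) return true;      // goal is a known fact
        for (Rule r : rules)                        // or some rule concludes it...
            if (r.consequent().equals(goal)
                    && r.antecedents().stream().allMatch(a -> prove(a, facts, rules)))
                return true;                        // ...with provable antecedents
        return false;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule(List.of("fever", "cough"), "flu"),
            new Rule(List.of("flu"), "rest"));
        Set<String> facts = Set.of("fever", "cough");
        System.out.println(prove("rest", facts, rules));    // prints true
        System.out.println(prove("sunburn", facts, rules)); // prints false
    }
}
```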
The Rete algorithm is considered to be the best algorithm for forward chaining
expert systems. It is the fastest but also requires much memory. It uses
temporal redundancy (rules only alter a few facts at a time) and structural
similarity in the left hand sides of rules to do so.
The Rete is an acyclic graph that has a root node. The nodes are patterns
and the paths are the left hand sides of the rules. The root node has one kind
node attached to it for each kind of fact. Each kind node has one alpha node
attached to it for each rule and pattern. The alpha nodes have associated
memories which describe relationships. Each rule has two beta nodes: the left
part is from alpha(i) and the right from alpha(i+1). Each beta node stores the
JOIN relationships. Changes to rules are entered at the root and propagated
through the graph.
Knowledge based agents loop through two main functions: one is to sense
the world and TELL the knowledge base what it senses; two is to ASK what it
should do about what it senses, which it then does. An agent can be constructed
by giving it all the sentences it will need to perform its functions. An agent can
also be constructed with a learning mechanism that takes perceptions about the
environment and turns them into sentences that it adds to the knowledge base.
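A minimal sketch of that TELL/ASK loop might look like the following; the canned percept stream, the one-rule ASK, and all names are invented for illustration:

```java
import java.util.*;

// Sketch of the sense -> TELL -> ASK -> act loop for a knowledge based agent.
// The "world" is a canned percept list; the KB is a plain set of sentences.
public class Agent {
    private final Set<String> kb = new HashSet<>();

    void tell(String sentence) { kb.add(sentence); }          // TELL the KB
    String ask() {                                            // ASK the KB
        return kb.contains("obstacle ahead") ? "turn" : "forward";
    }

    public static void main(String[] args) {
        Agent agent = new Agent();
        for (String percept : List.of("clear", "obstacle ahead")) {
            agent.tell(percept);                 // sense the world, TELL the KB
            System.out.println(agent.ask());     // ASK what to do, then act
        }
    }
}
```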
5.3.1 Perl Reasoning Program 'The Plant Dr.'
The Plant Doctor gets user input from an HTML form and uses the
plantdr.pl Perl program to figure out the problem, using weighted symptoms
and forward chaining. The database is hard coded in the Perl script since this is a
very small system.
#!/usr/bin/perl
$method = $ENV{'REQUEST_METHOD'};
print "Content-type: text/html\n\n";
if( $method !~ /POST/ ){
print "<p align=center><blink>invalid input method used</blink><p>\n";
print "<p> please use the post method <p>";
exit (0);
}
$bytes = $ENV{'CONTENT_LENGTH'};
read (STDIN, $query, $bytes );
#split the url-encoded form data into the %form hash used below
foreach $pair ( split /&/, $query ){
($name, $value) = split /=/, $pair;
$value =~ tr/+/ /;
$value =~ s/%([a-fA-F0-9]{2})/pack("C", hex($1))/eg;
$form{$name} = $value;
}
#possible problems
$tooMuchWater = 0;
$tooLittleWater = 0;
$tooMuchSun = 0;
$tooLittleSun = 0;
$tooMuchHumidity = 0;
$tooLittleHumidity = 0;
$tooMuchFertilizer = 0;
$tooLittleFertilizer = 0;
$tooHighTemperature = 0;
$tooLowTemperature = 0;
$extremeChange = 0;
$insects = 0;
$chemicals = 0;
if ($form{'a2'} eq 'a2' ) {
$tooLittleSun ++;
$tooLittleWater ++;
$tooLittleFertilizer ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$tooHighTemperature ++;
$tooLittleHumidity ++;
$tooLowTemperature ++;
}
if ($form{'b1'} eq 'b1' ) {
$tooMuchSun ++;
}
if ($form{'b2'} eq 'b2' ) {
$tooLittleSun ++;
$tooLittleFertilizer ++;
$tooLittleHumidity ++;
}
if ($form{'b3'} eq 'b3' ) {
$tooLittleFertilizer ++;
$tooMuchSun ++;
$tooMuchFertilizer ++;
$tooHighTemperature ++;
$chemicals ++;
}
if ($form{'b4'} eq 'b4' ) {
$tooLittleSun +=2;
}
if ($form{'b5'} eq 'b5' ) {
$tooMuchWater ++;
}
if ($form{'b6'} eq 'b6' ) {
$tooMuchSun += 2;
}
if ($form{'c1'} eq 'c1' ) {
$tooLittleWater ++;
$tooLittleSun ++;
$tooLittleFertilizer ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$tooHighTemperature ++;
}
if ($form{'d1'} eq 'd1' ) {
$tooMuchSun ++;
}
if ($form{'d2'} eq 'd2' ) {
$tooLittleWater ++;
$tooLittleFertilizer ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$chemicals ++;
$tooLittleHumidity ++;
$tooLowTemperature ++;
}
if ($form{'d3'} eq 'd3' ) {
$tooMuchWater ++;
$tooMuchSun ++;
$tooLittleHumidity ++;
}
if ($form{'d4'} eq 'd4' ) {
$tooLittleWater ++;
$tooMuchWater ++;
$insects ++;
$tooLowTemperature ++;
}
if ($form{'e1'} eq 'e1' ) {
$tooLowTemperature += 2;
$tooLittleWater ++;
}
if ($form{'e2'} eq 'e2' ) {
$tooLittleWater += 2;
}
if ($form{'e3'} eq 'e3' ) {
$tooLittleWater ++;
}
if ($form{'e4'} eq 'e4' ) {
$tooLittleWater ++;
}
if ($form{'e5'} eq 'e5' ) {
$insects += 2;
}
if ($form{'e6'} eq 'e6' ) {
$insects += 2;
}
if ($form{'f1'} eq 'f1' ) {
$tooLittleWater ++;
$tooMuchWater ++;
$tooMuchSun ++;
$tooMuchFertilizer ++;
$tooHighTemperature ++;
$tooLittleHumidity ++;
$tooLowTemperature ++;
}
if ($form{'f2'} eq 'f2' ) {
$tooLittleSun ++;
$tooLittleFertilizer ++;
}
if ($form{'f3'} eq 'f3' ) {
$tooLittleWater ++;
$tooMuchWater ++;
$tooMuchFertilizer ++;
$tooLowTemperature ++;
}
if ($form{'f4'} eq 'f4' ) {
$tooLittleSun ++;
$tooLittleFertilizer ++;
$tooMuchWater ++;
$tooHighTemperature ++;
}
if ($form{'f5'} eq 'f5' ) {
$tooLittleSun += 2;
}
if ($form{'f6'} eq 'f6' ) {
$tooMuchWater ++;
}
if ($form{'f7'} eq 'f7' ) {
$tooMuchWater ++;
$tooMuchHumidity ++;
}
if ($form{'g1'} eq 'g1' ) {
$tooLittleSun += 4;
$tooHighTemperature ++;
}
if ($form{'g2'} eq 'g2' ) {
$tooLittleSun ++;
$tooHighTemperature ++;
}
if ($form{'g3'} eq 'g3' ) {
$tooLowTemperature ++;
$tooHighTemperature ++;
$tooLittleSun ++;
$tooMuchFertilizer ++;
}
if ($form{'g4'} eq 'g4' ) {
$tooLowTemperature ++;
$tooHighTemperature ++;
}
if ($form{'g5'} eq 'g5' ) {
$tooLittleSun ++;
}
if ($form{'h1'} eq 'h1' ) {
$tooMuchWater ++;
}
if ($form{'h2'} eq 'h2' ) {
$insects += 2;
}
if ($form{'h3'} eq 'h3' ) {
$insects += 2;
}
if ($form{'h4'} eq 'h4' ) {
$tooLittleWater += 2;
}
if ($form{'h5'} eq 'h5' ) {
$tooLittleWater ++;
}
#scores
$tooMuchWater /= 13;
$tooLittleWater /= 11;
$tooMuchSun /= 5;
$tooLittleSun /= 10;
$tooMuchHumidity /= 1;
$tooLittleHumidity /= 2;
$tooMuchFertilizer /= 7;
$tooLittleFertilizer /= 7;
$tooHighTemperature /= 9;
$tooLowTemperature /= 8;
$extremeChange /= 1;
$insects /= 5;
$chemicals /= 2;
$foundAnswer = 0;
print "\n<br><br><b>Recommendations</b><br><br>";
print "\n<center><table width=500><tr><td>";
print "\n With any plant the best care is that which comes closest";
print " to what it lives like in the wild. Find out as much as ";
print " you can about its native habits and duplicate them as ";
print " best you can. Always use a well drained pot for your";
print " plant; none of them like to sit in water.";
print "<br><br><br><br>";
print "<br><br><b>Analysis: </b><br>";
#abbreviations used in the scoring below
$tmw = $tooMuchWater; $tlw = $tooLittleWater;
$tms = $tooMuchSun; $tls = $tooLittleSun;
$tmh = $tooMuchHumidity; $tlh = $tooLittleHumidity;
$tmf = $tooMuchFertilizer; $tlf = $tooLittleFertilizer;
$tht = $tooHighTemperature; $tlt = $tooLowTemperature;
$i = $insects; $ex = $extremeChange;
$total = $tmw +$tlw +$tms +$tls +$tmh +$tlh +$tmf +$tlf +$tht +$tlt +$i +$ex;
if ( $total == 1) {
if ($tmw){
print "\n<br> You are very likely over watering your plant.";
print " Make sure that the pot the plant is in has ";
print " excellent drainage. If it does then water ";
print " less frequently. Try half as often for a ";
print " start and watch how your plant responds.";
print " Clay pots will hold the water less than ";
print " plastic will, consider using clay pots if ";
print " you are not already doing so.";
$foundAnswer = 1;
}
if ($tlw){
print "\n<br> You are probably underwatering your plant.";
print " Either water more frequently, or if you are ";
print " not likely to water more frequently, then ";
print " re-pot your plant in a soil that will hold ";
print " water longer. Try adding potting soil to ";
print " orchid mulch, or moss for your orchids. ";
print " Add moss or water holding pellets to your ";
print " house plants that are already in soil.";
$foundAnswer = 1;
}
if ($tms){
print "\n<br> It is too bright, this plant wants less light.";
print " Try moving it back from the window a foot and ";
print " see how the plant responds. If that doesn't ";
print " work, try a less sunny window or lace curtains ";
print " to help filter the light. The plants that have ";
print " purple undersides to the leaves usually need less ";
print " light than the plants with all green on their leaves.";
$foundAnswer = 1;
}
if ($tls){
print "\n<br>This plant needs more sun.";
print " Try moving it closer to the window, a foot";
print " makes an enormous difference. If that ";
print " doesn't make a difference try a sunnier ";
print " window, or a lamp if no other window is brighter. ";
print " African Violets will grow quite happily and flower ";
print " with only a table lamp directly over them.";
$foundAnswer = 1;
}
if ($tmh){
print "\n<br>This plant needs a drier atmosphere.";
print " On top of the refrigerator is very dry,";
print " if that is a sunny location. Or next to ";
print " a heater if there is one near a window.";
$foundAnswer = 1;
}
if ($tlh){
print "\n<br>This plant needs higher humidity.";
print " The bathroom and kitchen are the most";
print " humid rooms in the home. If neither of ";
print " those will work, try putting a tray of ";
print " water under the plant. (Make sure the ";
print " plant is above, not in the water). Or ";
print " try a small table top fountain near the ";
print " plant. If the plant is very small, make or ";
print " place it in a terrarium.";
$foundAnswer = 1;
}
if ($tmf){
print "\n<br>Too much fertilizer, cut down the dosage.";
print " Fertilize weakly, weekly, a general rule of thumb ";
print " is to use half of the listed dose on the bottle.";
$foundAnswer = 1;
}
if ($tlf){
print "\n<br>This plant needs fertilizer.";
print " Fertilize weakly, weekly. Find a good ";
print " all purpose fertilizer at your nursery.";
$foundAnswer = 1;
}
if ($tht){
print "\n<br>It is too warm for this plant.";
print " Perhaps closer to a window or ";
print " door will give it cooler air? ";
$foundAnswer = 1;
}
if ($tlt){
print "\n<br>This plant is too cold.";
print " Is this plant outdoors past its ";
print " season? Or too close to a window ";
print " or door if inside?";
$foundAnswer = 1;
}
if ($i){
print "\n<br>This plant has bugs.";
print " If they are tiny (spider mites)";
print " or aphids, then just mix a quart";
print " of water with a tablespoon of ";
print " cooking oil and a tablespoon of";
print " liquid dish soap. Spray the infected plant";
print " once a day until the bugs are gone, then";
print " give it a good rinsing off in the sink.";
print " A few days of spraying will cure";
print " most infestations. Otherwise ";
print " head to your local supply store ";
print " for insecticides.";
$foundAnswer = 1;
}
if ($ex){
print "\n<br><br>Did you just bring this plant home, or relocate it?";
print " It sounds like it is unhappy about a recent move.";
print " Ficus in particular is sensitive about moves.";
print " Give it a little time to adjust or if it continues";
print " to be unhappy move it to a better location.";
$foundAnswer = 1;
}
if ( ($tooMuchWater + $tooLittleWater) >1 ){
$waterFlag = 1;
print "\n<br>Are you watering the plant consistently?";
print " Plants do not like to cycle through wet and dry spells.";
$foundAnswer = 1;
}
}
}
if ( ($tooLittleSun + $tooMuchSun) > 1 ){
$sunFlag = 1;
print "\n<br><br>The light is wrong; it may be on for too short or too long";
print " a time, or be too intense or not bright enough.";
$foundAnswer = 1;
}
print " if it is in one of those rooms. Sometimes a ";
print " sunnier location will help.";
$foundAnswer = 1;
}else {
print "\n<br>Your plant seems to want more humidity. ";
print " Try placing the plant in the kitchen or bathroom";
print " to give it more humidity, or place a tray of water";
print " with pebbles in it under the plant, keeping the plant";
print " out of the water. You can buy table fountains";
print " cheaply now, try placing a fountain near the plant";
print " if you don't like the other options.";
$foundAnswer = 1;
}
}
print "\n<br><br>Is it drafty near the plant?";
print " Orchids are about the only plants that like a draft";
print " and not all the orchids like it!";
$foundAnswer = 1;
}
}
}
$tooMuchWater *= 100;
$tooLittleWater *= 100;
$tooMuchSun *= 100;
$tooLittleSun *= 100;
$tooMuchHumidity *= 100;
$tooLittleHumidity *= 100;
$tooLittleFertilizer *= 100;
$tooMuchFertilizer *= 100;
$tooHighTemperature *= 100;
$tooLowTemperature *= 100;
$extremeChange *= 100;
$insects *= 100;
$chemicals *= 100;
if ( ! $foundAnswer ){
print "\n<br>I did not find an answer for you. ";
print " Check the following table for likely causes and see if ";
print " the items listed might apply, ";
print " or go back to the form and see if any ";
print " other conditions might apply and re-submit ";
print " the form.";
print "\n<br><br><br><br>";
print "\n<table>";
print "\n<th>Likely Sources of Problem<br></th>";
if ( $tooMuchWater){
print sprintf "\n<tr><td>Too Much Water %d%%</td></tr>", $tooMuchWater;
}
if ( $tooLittleWater ){
print sprintf "\n<tr><td>Too Little Water %d%%</td></tr>", $tooLittleWater;
}
if ($tooMuchSun ){
print sprintf "\n<tr><td>Too Much Light %d%%</td></tr>", $tooMuchSun;
}
if ($tooLittleSun){
print sprintf "\n<tr><td>Too Little Light %d%%</td></tr>", $tooLittleSun;
}
if ($tooMuchHumidity){
print sprintf "\n<tr><td>Too Much Humidity %d%%</td></tr>", $tooMuchHumidity;
}
if ($tooLittleHumidity){
print sprintf "\n<tr><td>Too Little Humidity %d%%</td></tr>", $tooLittleHumidity;
}
if ($tooMuchFertilizer ){
print sprintf "\n<tr><td>Too Much Fertilizer %d%%</td></tr>", $tooMuchFertilizer;
}
if ($tooLittleFertilizer ){
print sprintf "\n<tr><td>Too Little Fertilizer %d%%</td></tr>", $tooLittleFertilizer;
}
if ($tooHighTemperature) {
print sprintf "\n<tr><td>Too High Temperature %d%%</td></tr>", $tooHighTemperature;
}
if ($tooLowTemperature){
print sprintf "\n<tr><td>Too Low Temperature %d%%</td></tr>", $tooLowTemperature;
}
if ( $extremeChange ){
print sprintf "\n<tr><td>Too extreme of a change %d%%</td></tr>", $extremeChange;
}
if ($insects ){
print sprintf "\n<tr><td>Insects %d%%</td></tr>", $insects;
}
if ($chemicals){
print sprintf "\n<tr><td>Chemicals %d%%</td></tr>", $chemicals;
}
print "\n</table>";
}
print "\n</td></tr></table>";
print "\n<p></body>";
print "\n<p></html>";
(left off here)
Chapter 6
Agents, Bots, and Spiders
6.1 Spiders and Bots
6.1.1 Java Spider to check website links
This program is a Java spider that traverses a website. It starts with a file you
give it, grabs the links from that file, and sorts them into links from that site
and links external to that site. It creates a list of each, grabs each of the
internal pages, and parses them, pulling out the links and again adding them to
either the internal or external list. It then checks each link and lets you know
which ones are incorrect.
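A stripped-down sketch of the spider's link-sorting step might look like the following; it uses a regular expression instead of the character-by-character scan the real program performs, and the sample page markup is invented:

```java
import java.util.*;
import java.util.regex.*;

// Minimal version of the spider's first step: pull href values out of a page
// and sort them into internal and external link lists.
public class LinkSorter {
    public static void main(String[] args) {
        String site = "http://www.timestocome.com";
        String page = "<a href=\"http://www.timestocome.com/about.html\">a</a>"
                    + "<a href=\"http://example.com/\">b</a>";
        List<String> internal = new ArrayList<>(), external = new ArrayList<>();
        // find every href="..." value in the page
        Matcher m = Pattern.compile("href=\"([^\"]+)\"").matcher(page);
        while (m.find()) {
            String link = m.group(1);
            // links that start with the site's base URL are internal
            (link.startsWith(site) ? internal : external).add(link);
        }
        System.out.println("internal: " + internal);
        System.out.println("external: " + external);
    }
}
```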
//GUIGetIP.java
//www.timestocome.com
//Fall 2000
//get the ip address and the name of the host machine this program is run on
import java.net.*;
out[0] = host.getHostName();
out[1] = "";
for (int i=0; i<ip.length; i++){
return out;
}
}
//GUIIPtoName.java
//www.timestocome.com
//Fall 2000
//This converts an IP address to the host name. This seems a bit flakey, I
//understand earlier java versions had trouble with this command as
//well. Sometimes it gives the name, sometimes it just returns the
//ip address.
import java.net.*;
try{
address = InetAddress.getByName( host );
return address.getHostName();
}
}
//GUINsLookup.java
//www.timestocome.com
//lookup an ip address given a host name
import java.net.*;
try{
address = InetAddress.getByName(host);
byte[] ip = address.getAddress();
String temp = "";
return temp;
}
}
//jpanel.java
//www.timestocome.com
//Fall 2000
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
Jpanel ()
{
setBackground( Color.white );
}
}
}
//LinkInfo.java
//www.timestocome.com
//Fall 2000
import java.io.*;
import java.net.*;
import java.util.*;
class LinkInfo
{
String fileContainingLink;
String stringLink;
String sourceFile;
String info = "None";
URL link;
public LinkInfo( String fcl, String sf, String lu) throws Exception
{
sourceFile = sf;
stringLink = lu;
link = new URL (lu);
fileContainingLink = fcl;
System.out.println( "<>link " + link );
}
//ListEntry.java
import java.io.*;
import java.net.*;
import java.util.*;
//www.timestocome.com
//Fall 2000
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
Jpanel mainPanel;
JPanel userPanel;
JPanel outputPanel;
JPanel menuPanel;
public GUILinkCheckerV1 ()
{
super ("Times to Come Link Checker");
menuPanel = new Jpanel();
userPanel.add(lInput);
userPanel.add(tfInput);
userPanel.add(enter);
enter.addActionListener(b1);
mainPanel.add(userPanel);
outputPanel.add(output);
mainPanel.add(outputPanel);
}
JMenuBar jmenubar = new JMenuBar();
jmenubar.setUI( jmenubar.getUI() );
JMenu jmenu1 = new JMenu("Options");
JMenu jmenu2 = new JMenu("Help");
JMenu jmenu3 = new JMenu("Quit");
JRadioButtonMenuItem m1 = new
JRadioButtonMenuItem("Get local host information");
m1.addActionListener(a1);
JRadioButtonMenuItem m2 = new
JRadioButtonMenuItem("Convert IP number to domain name");
m2.addActionListener(a2);
JRadioButtonMenuItem m3 = new
JRadioButtonMenuItem("Convert domain name to IP number");
m3.addActionListener(a3);
JRadioButtonMenuItem m4 = new
JRadioButtonMenuItem("Check website for bad links");
m4.addActionListener(a4);
jmenu1.add(m1);
jmenu1.add(m2);
jmenu1.add(m3);
jmenu1.add(m4);
jmenu2.add(m6);
jmenu3.add(m7);
jmenubar.add(jmenu1);
jmenubar.add(jmenu2);
jmenubar.add(jmenu3);
return jmenubar;
}
};
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m3 = ( JMenuItem )e.getSource();
choice = 3;
output.setText("\nThis function gets an IP number from a domain name. "+
"\nI'm not sure it is very useful unless your site IP " +
"\nnumber changes for some reason?"+
"\n\nTo use:"+
"\nEnter the domain name (see sample next line)"+
"\nwww.yoursite.com"+
"\n in 'Your Input' and then hit the Enter button.");
}
};
}
};
}
};
case 0:
output.setText ("\n\nPick an option from the menu first");
break;
case 1:
output.setText ("\n\n Getting your IP address . . .");
try{
GUIGetIP guigetip = new GUIGetIP();
String answer[] = new String[2];
answer = guigetip.GetIP();
output.setText( "\n" + answer[0] + "\n" + answer[1] + "\n");
}catch(Exception e1){}
break;
case 2:
output.setText("\n\n Getting Domain Name from IP address . . .");
break;
case 3:
output.setText ("\n\n Performing Name Server lookup . . .");
break;
case 4:
output.setText ("\n\n Checking website links . . .");
output.append ("\n\n This may take a while on a large site,");
output.append ("\n\n or one with lots of links.");
try{
GraphicLinkCheckerV1 glckr = new GraphicLinkCheckerV1();
glckr.Main( tfInput.getText(), output );
}catch(Exception e2){}
}
break;
default:
output.setText ("\n\n I am so confused... " + choice );
break;
}
}
};
}
//www.timestocome.com
//Fall 2000
//Program doesn't check links that are javascript... opening new windows
// but it will pick up a link that isn't, and most people who
// use javascript put up the same link without it
// for those who don't use javascript, so this should be ok.
// It does pick up links that are javascript mouse rollovers.
//if there is a DNS error the program will hang. So do ping and netscape.
//Sorry, but I've not yet had the time to fix it.
import java.io.*;
import java.net.*;
import java.util.*;
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
//*get top internal link and get page, removing from internal list
//* remove page from url if explicitly stated and end with directory
LinkInfo first = new LinkInfo( "Start page ", beginningPage, beginningPage);
while ( !(toBeCheckedInternalList.isEmpty()) ) {
//give user feedback so we know we are not lost off in cyberspace
pageCount ++;
try{
}catch(IOException e){
toBeCheckedInternalList.removeElementAt(0);
if(testflag){
try{
//*get useful link info if page not found or site down
//*and add to info section of LinkInfo and get page and page size
parsePage(urlconn, contentlgth);
}catch(IOException e){
}
}
//now check links outside our site
//System.out.println("\n\n\nChecking external links: ");
out.setText("\n\n\n Checked " + pageCount + " pages on website\n");
pageCount = 0;
Enumeration e5 = externalList.elements();
pageCount ++;
}catch(IOException e){
badList.addElement(tempLinkInfo5);
break;
}
while(e2.hasMoreElements() ){
LinkInfo tempLinkInfo2 = (LinkInfo)e2.nextElement();
}
while(e3.hasMoreElements() ){
LinkInfo tempLinkInfo3 = (LinkInfo)e3.nextElement();
out.append("\nBad Link: " + tempLinkInfo3.stringLink + "\n In Page: " + tempLinkInfo3.fileContainingLink);
}
//*********************************************************
//*********************************************************
InputStream in = urlconnection.getInputStream();
character = in.read();
//*dump href="
while( (char)character != '"'){
character = in.read();
}
character = in.read(); //*skip over first quotation mark
while( (char)character != '"'){
link += (char)character;
character = in.read();
}
links[count] = link;
count++;
link = "";
}
}//>
}//*end while loop find next link
in.close();
//*sort into internal and external and fix up link formatting if nec.
//*ditch mailto, ftp, flash, and javascript links...
for(int i=0; i<count; i++){
String inLink;
//*check for flash links and ditch them...
boolean flash = false;
if( links[i].indexOf("clsid") > 0){
flash = true;
}
while(e.hasMoreElements()){
LinkInfo tempLinkInfo2 = (LinkInfo)e.nextElement();
if( (tempLinkInfo2.link).equals(tempLinkInfo.link) ){
flag = true;
break;
}
}if( (!flag)&&(!javascript)&&(!flash) ){
internalList.addElement(new LinkInfo(source, top, links[i]));
toBeCheckedInternalList.addElement(new LinkInfo(source, top, links[i]));
flag = false;
}
while(e1.hasMoreElements() ){
if( (tempLinkInfo2.link).equals(tempLinkInfo1.link) ){
//System.out.println("Found duplicate external link " + links[i]);
//add duplicate pages for user reference
tempLinkInfo2.otherLocations.addElement(tempLinkInfo1.sourceFile);
flag1 = true;
break;
}
if (!flag1){
}
}
}
}else{ //*if not off top directory and begins with http must be external
}
}
}
//*patch together internal links if needed before adding to vector
//*does it begin with http? if so ok do nothing and return string
if( lk.startsWith(base) ){
return lk;
if ((int)temp[j] == 47){
flag = true;
}
if( flag )
bw += temp[j];
}
char temp2[] = bw.toCharArray();
String tempString = "";
6.2 Adaptive Autonomous Agents
Mobile agents have been in use since the early 1980s, when they were used to
balance loads on homogeneous networks. Telescript, introduced in the early
1990s by General Magic, was the first to be known as a 'mobile agent'. Java
and Python are the preferred languages for agents.
Agents are programs that operate with little or no human supervision. In
time they will initiate actions, form goals, construct plans of action, migrate
to different locations and communicate with other agents. They respond to
events and adjust their behavior accordingly without human intervention. They
interact with other agents and with people to accomplish goals. Agents continue
to exist and remember their training and tasks even if the user's computer
crashes or is turned off. Well-designed agents have personality and, like a
good secretary, intrude only when necessary.
There are different classes of agents depending on an agent's abilities: it
may be static or mobile; react to events or not; work alone or with other
agents; learn or be hardwired; and be autonomous or not.
Intelligent agents solve several classes of problems: they simplify
distributed computing, information retrieval, and the sorting and
classification of data, and they handle repetitious tasks for the user. Agents
have already taken over many tasks users do not wish to do themselves, like
scheduling appointments, answering email, sorting newsgroup information and
fetching the current news stories that match the user's interests. As an agent
learns more about its user it becomes more useful to that user.
An agent's behavior and ability to solve problems may reside in the
individual agent, or the agent may serve as a simple part of a group that can
solve a problem (think of bees or ants working together). Agents that work as
minor parts of a group form a more stable system and may be able to handle
tasks not easily done by individual computers. Without a central intelligence
the group may grow stronger and smarter, and this type of agent setup may
scale up better than individual agents.
protocol (SMTP, HTTP, ...).
KQML (Knowledge Query and Manipulation Language) uses messages that carry
information about the type of information they are transmitting: assertion,
request, query. Performatives are the primitives that define the permissible
operations agents may perform in an effort to communicate with each other.
KQML uses special agents called facilitators that handle many tasks: tracking
the locations of agents by specific identity or type of service; tracking
services available to and needed by agents; acting like post offices by
holding, forwarding and receiving messages for agents; translating between
agent communication languages; breaking complex problems into parts and
distributing the tasks to agents that can handle them; and monitoring the
agents.
KQML uses categories or levels for agent communication: content, the content
of the message (text, binary strings, etc.); communication, the sender,
recipient and message ids; and message, which identifies the protocols for
message transfer and handles encoding and descriptions of content. The
requirements of KQML are: form, it should be simple, declarative, and easily
understood by humans; semantics, it should be familiar, unambiguous and well
grounded in theory; implementation, it needs to be efficient and backward
compatible; networking, it should be platform independent across networks and
support both synchronous and asynchronous communication; environment, which
will be distributed, dynamic and heterogeneous; reliability, it must be
reliable and secure; and content, the language should be layered, like all
networked software. There are still some difficulties with KQML: there are
ambiguities, vagueness, misdirections, misclassifications, and some things
that are needed but do not yet exist in the performatives (statements that
work just by declaration).
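As a rough sketch of the message structure described above (the performative name, parameter keywords, agent names and content below are illustrative assumptions, not taken from any particular KQML implementation), a KQML message is a performative followed by keyword/value parameters:

```java
// Minimal sketch of composing a KQML-style performative message.
// The agent names and content are made-up examples.
public class KqmlSketch {

    // Build a KQML message: a performative followed by :keyword value pairs.
    static String message(String performative, String sender, String receiver,
                          String language, String content) {
        return "(" + performative
             + " :sender " + sender
             + " :receiver " + receiver
             + " :language " + language
             + " :content \"" + content + "\")";
    }

    public static void main(String[] args) {
        // An 'ask-one' query from a user agent to a stock-server agent.
        String msg = message("ask-one", "user-agent", "stock-server",
                             "KIF", "(price IBM ?p)");
        System.out.println(msg);
    }
}
```

A facilitator could route such a message by reading only the :sender and :receiver parameters, without ever interpreting the :content.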
KIF (Knowledge Interchange Format) is a particular syntax, similar to Lisp,
for first-order predicate calculus communication between agents. KIF can be
used to translate from one language format to another, and also to communicate
between agents. KIF is a first-order predicate calculus using prefix notation.
It supports the definition of objects, functions, relations, rules and
meta-knowledge. It is not a programming language. KIF has three main parts:
variables, operators, and constants. There are two types of variables:
individual (beginning with ?) and sequence (beginning with @). It has four
operators: term (objects), rule (legal logical inference steps), sentence
(facts), and definition (constants). A form is a sentence, rule or definition.
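The variable conventions just described can be read off a token's first character: individual variables begin with ? and sequence variables with @. A minimal sketch (the class and method names here are invented for illustration):

```java
// Sketch: classify KIF tokens by their leading character, per the
// convention above (? = individual variable, @ = sequence variable).
public class KifVariables {
    static String classify(String token) {
        if (token.startsWith("?")) return "individual variable";
        if (token.startsWith("@")) return "sequence variable";
        return "constant";
    }

    public static void main(String[] args) {
        // Tokens from a KIF-style prefix sentence such as (price IBM ?p)
        System.out.println(classify("?p"));     // individual variable
        System.out.println(classify("@rest"));  // sequence variable
        System.out.println(classify("price"));  // constant
    }
}
```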
ACL (Agent Communication Language) has three parts: vocabularies, KIF,
and KQML. The vocabulary is an open-ended dictionary of terms that can
be referenced by agents.
Telescript is an object-oriented remote programming language for mobile
agents, developed by General Magic. It has three main parts: a language for
developing agents and environments; an interpreter for the Telescript
language; and communication protocols (TCP/IP). An entire application can be
written in Telescript, but usually a combination of Telescript and C/C++ is
used.
KAoS (Knowledgeable Agent-oriented System) differs from the other inter-agent
communication methods in that it considers not only the message but the
sequence of messages in which it occurs. This enables agents to coordinate
frequently recurring interactions. KAoS makes use of 'agent-oriented
programming', an extension of object-oriented programming, which provides a
consistent structure for the agents and an easier way to do agent programming.
The agents contain: knowledge (facts, beliefs); desires; intentions; and
capabilities. From birth the agent goes into a loop of 'updating the
structure' and 'formulating and acting on intentions', unless it is in a
cryogenic state, until its death.
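That lifecycle can be sketched as a simple loop. This is an illustrative toy, not KAoS code; the class, its fields and the maxCycles stand-in for 'death' are all assumptions made for the example:

```java
// Toy sketch of the KAoS-style agent lifecycle: alternate between
// updating the agent's mental structure and acting on intentions,
// skipping the body while frozen, until the agent 'dies'.
public class AgentLoopSketch {
    boolean alive = true;
    boolean cryogenic = false;   // a frozen agent idles until thawed
    int cycles = 0;

    void updateStructure() { /* revise knowledge, beliefs, desires */ }
    void actOnIntentions() { cycles++; /* pick and execute intentions */ }

    // maxCycles stands in for whatever condition ends the agent's life.
    void live(int maxCycles) {
        while (alive) {
            if (!cryogenic) {
                updateStructure();
                actOnIntentions();
            }
            if (cycles >= maxCycles) alive = false;   // 'death'
        }
    }

    public static void main(String[] args) {
        AgentLoopSketch a = new AgentLoopSketch();
        a.live(3);
        System.out.println(a.cycles);   // 3
    }
}
```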
Communication takes place with messages containing verbs and information,
structured much like KQML messages. Communication between agents takes place
only within the domain the agents are in. Proxy agents communicate between
domains in a given environment, and mediation agents communicate with outside
agents.
Instances of agents of particular classes are created to work in various
domains, and specialized agents are created using inheritance. Domain managers
control the entry and exit of agents in a domain, and matchmaker agents give
access to, and information about, the services in their domain.
6.3.1 Java Personal Agent
--Agent.java--
//www.timestocome.com
//2003
import javax.swing.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;
public buildFrame()
{
Toolkit tk = Toolkit.getDefaultToolkit();
Dimension d = tk.getScreenSize();
try{
FileReader fr = new FileReader ( "data/user.txt" );
BufferedReader br = new BufferedReader ( fr );
br.readLine(); //username
name = br.readLine();
fr.close();
}catch (IOException e){}
setTitle ( name );
setSize ( width, height );
setLocation ( width/3, height/10 );
}
}
--Conversation.java--
import javax.swing.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;
import java.util.*;
}
}
public createFrame()
{
setTitle ( name );
setSize ( 400, 400 );
setLocation ( 100, 300 );
Container cp = getContentPane();
cp.setBackground ( Color.white);
user.addKeyListener( this );
user.requestFocus();
"", "", "", "", "", "", "", "", "", "" };
if ( e.getKeyChar() == KeyEvent.VK_ENTER ){
}
}
//does the file cText exist?
String fileName = "data/" + c;
try{
FileReader fr = new FileReader ( fileName );
BufferedReader br = new BufferedReader ( fr );
//yes
// is string uText listed?
// yes
// add one to uText
// no
// add uText with base score
// else create a line with that user string and base score
if ( !flag ) {
bw.flush();
fw.close();
}catch ( IOException ioe3){}
fr.close();
String userUpdate ( String u, String c )
{
String reply = "";
// yes
// grab random one of top 3 scorers
// set reply to that and return
linecount++;
int marker = in.indexOf ( '#' );
s = in.substring ( 0, marker );
n = in.substring ( marker + 1, in.length());
Integer j = new Integer ( n );
k = j.intValue();
}
}
}
reply = topthree[t];
}else if ( linecount >1 ){
reply = topthree[1];
}else{
reply = topthree[0];
}
fr.close();
// no responses in file
// randomly pick a file
// randomly grab a response
// add it to new file
// set response to it and return
reply = "I don't know?";
// find a close match to user string in the file list
// randomly pick a string from that file
//break user text into words
String uwords[] = new String[20];
char utemp[] = u.toCharArray();
String utempWord = "";
int uwordCount = 0;
if ( utemp[i] == '#' ){
uwords[uwordCount] = utempWord;
utempWord = "";
uwordCount++;
i = u.length();
}else{
uwords[uwordCount] = utempWord;
utempWord = "";
uwordCount ++;
}
}
int topScore = 0;
int tempScore[] = new int[dirList.length];
for ( int i=0; i< dirList.length; i++){ tempScore[i] = 0;}
int tempLocation = 0;
//now we have a list of user words
//and a list of words in the files
//****fix max count for q!!!
for ( m=0; m<dirList.length; m++){
for ( int p=0; p<uwordCount; p++){
for ( int q=0; q<20; q++){
if (( uwords[p] != null ) && ( dwords[m][q] != null)){
if ( uwords[p].compareTo ( dwords[m][q] ) == 0 ){
tempScore[m]++;
}
}
}
try{
FileReader fr = new FileReader ( dirList[tempLocation] );
BufferedReader br = new BufferedReader ( fr );
while (( in = br.readLine()) != null ){
pickstring[count] = s;
count++;
if ( count > 0 ){
t = r.nextInt(count);
}else{
t = r.nextInt(dirList.length);
}
reply = pickstring[t];
// add that string with base score to the file just created
String out = c + "#" + String.valueOf ( baseScore );
try{
fw = new FileWriter ( "data/" + u );
bw = new BufferedWriter ( fw );
bw.write(out);
bw.flush();
// close the file
fw.close();
}catch (IOException e4 ){}
return reply;
}
}
--Fetch2.java--
import java.io.*;
import java.net.*;
import java.util.Date;
import java.util.*;
import java.text.*;
URL url;
File filename;
int score;
String description;
String words[] = new String[10];
long downloadtime;
long parsetime;
downloadtime = System.currentTimeMillis();
if ( response.compareTo ( "OK") == 0 ) {
InputStream in = uc.getInputStream();
while ( ( c = in.read() ) != -1 ){
data += (char)c;
}
in.close();
uc.disconnect();
//save to disk
String name = url.toString();
name = name.substring ( 7, name.length() );
name = name.replace ( '/', '_' );
bw.write ( data );
bw.flush();
bw.close();
downloadtime = System.currentTimeMillis() - downloadtime; //elapsed download time
parsetime = System.currentTimeMillis();
parsetime = System.currentTimeMillis() - parsetime; //elapsed parse time
}catch ( IOException e ){
downloadtime = System.currentTimeMillis() - downloadtime;
Search.imhome( url, downloadtime );
System.out.println ( url + "::" + e );
}
--Find.java--
//www.timestocome.com
//Winter 2002/2003
import java.io.*;
import java.net.*;
import java.util.Date;
import java.text.*;
import java.util.*;
int numberOfWords = 0;
int numberOfUrls = 0;
int maxURLS = 512;
int maxWords = 10;
URL urlList[] = new URL[maxURLS];
String wordList[] = new String[maxWords];
Vector docs = new Vector();
try{
FileReader fr = new FileReader ( "data/news.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;
numberOfUrls = c;
c = 0;
while (( in = br.readLine() ) != null ){
wordList[c] = in;
c++;
}
}catch ( IOException ex ) { }
numberOfWords = c;
int pagecount = 0;
int i = 0;
int loop = 0;
//main loop
while (( urlList[i] != null )&&( i< maxURLS)){
pagecount++;
System.out.println ( "pagecount = " + pagecount );
System.out.println ( "page " + urlList[i] );
i++;
}//end url list while
//check vector size and grab top 0-20 pages
//create an html page
//grab the url as a link and the top 20 or so words after the <body> tag
//wrap up page
//create file
File resultsFile = new File ( "searchresults.html");
FileWriter fw;
try {
fw = new FileWriter ( resultsFile );
BufferedWriter bw = new BufferedWriter ( fw );
//send to getDesc
String desc = ((Found)docs.elementAt(q)).getDesc();
//create a link for desc...
String link = "\n\n<a href=\"" + ((Found)docs.elementAt(q)).url +"\">";
bw.write ( "<table border=3 ><tr><td>");
bw.write ( link );
bw.write ( desc );
bw.write ( "<br><br>");
bw.write ( docD );
bw.write ( "</td></tr></table><br><br><br>");
}
//write footer
String footer = new String ( "\n</body></html>" );
bw.write ( footer );
//close file
bw.flush();
bw.close();
//how can user recall or save this page? need to add that in here
//pop up window with info, save button, erase button, close window button
//add user tool to main agent to bring page back up
SearchPanel sp = new SearchPanel();
// need a cleanup routine so news dir doesn't get huge with old stuff
}
if ( lb < ub ){
j = partition ( d, lb, ub );
sort ( d, lb, j-1 );
sort ( d, j+1, ub );
}
return d;
}
static int partition ( Vector d, int lb, int ub )
{
double a = ((Found)d.elementAt(lb)).total;
Found aFound = (Found)d.elementAt(lb);
int up = ub;
int down = lb;
}
}
d.setElementAt( (Found)d.elementAt(up), lb);
d.setElementAt ( aFound, up );
return up;
}
//*********************
//if file not found or other error, remover from url list
//so we don't keep trying to download the same bad file
//*********************************************
int c;
String data = "";
File newsFile = new File( "/news/dummy" );
try {
data = "";
if ( contentlength > 0 ){
InputStream in = urlconnection.getInputStream();
while ( ( c = in.read() ) != -1 ){
data += (char)c;
in.close();
bw.write( data );
bw.flush();
bw.close();
}
}catch (IOException e ){
System.out.println ( "Error getting news: " + e );
}
return newsFile;
}
--Found.java--
//www.timestocome.com
//Winter 2002/2003
import java.io.*;
import java.net.*;
import java.util.Date;
import java.text.*;
class Found
{
//new
Found( URL u, File f){
file = f;
url = u;
docDescription = u.toString();
//score
int score ( String wordList[] )
{
int count = 0;
//int tempArray[] = new int[wordarraysize];
int wordTally[] = new int[ wordList.length ];
if ( st.ttype == st.TT_WORD){
in = st.sval;
if ( wordList[j] != null ){
if ( in.compareToIgnoreCase( wordList[j] ) == 0){
total++;
System.out.println ( "found word: " + wordList[j] + " in file: " + file );
in2 = st1.sval;
total++;
}
}
}
}
}
}
}
}
}
}
return total;
//pullLinks
//add in link location.....
void pullLinks()
{
try{
//read document
FileReader fr = new FileReader ( file );
int c;
String urlName = new String ("http://");
int wordcount = 0;
while ( ( c = fr.read()) != -1 ){
char x = (char) c;
if ( x == ' ' ){
wordcount++;
}
x = (char) fr.read();
if (( x == 'A' ) || ( x == 'a' )){
x = (char) fr.read();
if ( x == ' ' ){
x = (char) fr.read();
if (( x == 'H' ) || ( x == 'h')){
x = (char) fr.read();
if (( x == 'R' ) || ( x == 'r' )){
x = (char) fr.read();
if (( x == 'E' ) || ( x == 'e' )){
x = (char) fr.read();
if (( x =='F' ) || ( x == 'f')){
x = (char) fr.read();
if ( x == '=' ){
x = (char) fr.read();
if ( x == '"' ){
x = (char ) fr.read();
if ( x == 'h' ){
x = (char) fr.read();
if ( x == 't' ){
x = (char) fr.read();
if ( x == 't' ){
x = (char) fr.read();
if ( x == 'p' ){
x = (char) fr.read();//skip //
x = (char) fr.read();
x = (char) fr.read();
urlName += (char) c;
}
urlName = "http://";
}
//c = fr.read(); //skip '>'
while ( ( c = fr.read()) != -1 && (char)c != '<' ){
linkDescription += (char)c;
}
urlDescription[links] = ( linkDescription );
linkDescription = "";
links++;
}
}
}
}
}
}
}
}
}
}
}
}
}
}//end while
}catch ( IOException e ){
System.out.println ( e );
}
//number of words in word list
int numberWords = 0;
for ( int i=0; i<wordList.length; i++){
if ( wordList[i] != null ){
numberWords ++;
}
}
}else{ //add to list
w[r] = wrd;
wrd = "";
r++;
}
}else{
wrd += p[q];
}
linkScore[i]++;
}
}
}
}
}
}
topLinks[i] = urlList[i];
//total += linkScore[i];
}
// }
}
String d = "";
FileReader fr;
try {
fr = new FileReader ( file );
int c;
while ( ( c = fr.read()) != -1 ){
char x = (char)c;
if ( x == '<'){
x = (char) fr.read();
x = (char) fr.read();
for (int q=0; q<100; q++){
if ( x != '<'){
d += x;
x = (char) fr.read();
}
}
}
}
}
}
}
}
}
}//end while
d += "</a>";
return d;
}
--GetIP.java--
//www.timestocome.com
//Winter 2002/2003
import java.io.*;
import java.net.*;
import java.util.Date;
class GetIP
{
GetIP(){}
URL url;
URLConnection urlconnection;
try {
url = new URL ( "http://www.timestocome.com/webtools/getip.shtml");
urlconnection = url.openConnection();
}catch (MalformedURLException e){
return ( "There is a problem with the URL " + e);
}catch (IOException e1){
return ( "The site can not be reached " + e1);
}
while ( (c = in.read() ) != -1 ) {
parsethis[i] = (char) c;
i++;
}
in.close();
}
}catch (IOException e2 ) {
return ( "Unable to get IP number from server " + e2 );
}
if ( ( parsethis[j] == '/' ) && ( parsethis[j+1] == 'b' ) &&
( parsethis[j+2] == 'o') && ( parsethis[j+3] == 'd') )
stop = j-3;
String in = "";
//save ip to file if changed and notify if changed
try {
FileReader fr = new FileReader ( "data/ip.txt" );
BufferedReader br = new BufferedReader ( fr );
in = br.readLine();
fr.close();
if ( address.equals( in ) ){
}else{
try {
FileWriter fw = new FileWriter ( "data/ip.txt" );
BufferedWriter bw = new BufferedWriter ( fw );
bw.write ( address );
bw.flush();
bw.close();
}catch ( IOException e ) {}
return ( " New IP address is: " + address + " old address: " + in );
}
--Joke.java--
//www.timestocome.com
//Winter 2002/2003
JFileChooser fc;
JButton openButton;
JButton closeButton;
int result;
Joke()
{
//read directory and get list length/number of jokes
File dir = new File ( "jokes" );
list = dir.listFiles();
int count = 0;
// read in file, word by word
try{
FileReader fr = new FileReader( list[i] );
StreamTokenizer st = new StreamTokenizer ( fr );
String in;
if ( st.ttype == st.TT_WORD){
in = st.sval;
//proximity score
weight = 1.0;
int x = 0; //sub total
int y = 0; //total
for ( int k=0; k<2048; k++){
if ( tempArray[k] != 0 ){
x++;
}else{
x = 0;
}
y += x;
score[i] = s + y/count;
//reset wordTally
for ( int k=0; k<numberWords; k++){
wordTally[k] = 0;
}
//reset tempArray
for ( int k=0; k<2048; k++){
tempArray[k] = 0;
}
score[i] = small;
list[index] = list[i];
list[i] = temp;
try{
bw.close();
}catch ( IOException e ){}
File tellJoke ()
{
try{
//read in sorted file list
FileReader fr = new FileReader ( "data/jokeSort.txt" );
BufferedReader br = new BufferedReader ( fr );
//and return file handle to agent
return list[number];
void addNew ( )
{
fc.setFileSelectionMode ( fc.DIRECTORIES_ONLY );
File dir = fc.getCurrentDirectory();
if ( reply == JOptionPane.YES_OPTION ){
System.out.println ( "ok");
//move over new files and change the name on the way
//we already have the last file name
for ( int i=0; i<list1.length; i++){
File newfile = new File( newname );
list1[i].renameTo( newfile );
}
}
}
--JokesList.java--
//www.timestocome.com
//Winter 2002/2003
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.*;
JokesList ()
{
//open list files and read the lists into the arrays
try{
FileReader fr = new FileReader ( "data/jokekeys.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;
fr.close();
}catch ( IOException e ){
keys[0] = "Enter";
keys[1] = "the";
keys[2] = "keywords";
keys[3] = "you";
keys[4] = "wish";
keys[5] = "to";
keys[6] = "search";
keys[7] = "for";
keys[8] = "here";
keys[9] = "";
jokeskeys.setEditable(true);
jokeskeys.getEditor().addActionListener ( this );
listPanel.add ( jokeskeys );
add ( listPanel );
jokeskeys.removeItemAt ( place );
jokeskeys.insertItemAt ( newItem, place );
//update the file
try {
FileWriter fw = new FileWriter ( "data/keys.txt");
BufferedWriter bw = new BufferedWriter ( fw );
bw.flush();
fw.close();
--MainPanel.java--
//www.timestocome.com
//Winter 2002/2003
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.text.*;
import javax.swing.event.*;
import java.io.*;
import java.net.*;
public MainPanel()
{
output = new JEditorPane();
try {
output.setPage ( "file:index.html" );
}catch (IOException e){
System.out.println ( e );
}
newsButton.setBackground ( buttonColor );
helpButton.setBackground ( buttonColor );
exitButton.setBackground ( buttonColor );
buttonPanel.add ( helpButton );
buttonPanel.add ( exitButton );
buttonPanel.setBackground ( Color.white );
buttonPanel.setBorder ( BorderFactory.createLineBorder( buttonColor ));
buttonPanel.setLayout ( new BoxLayout ( buttonPanel, BoxLayout.X_AXIS));
buttonPanel.add ( Box.createRigidArea ( new Dimension ( 112, 55 )));
newsButton.addActionListener ( this );
helpButton.addActionListener ( this );
exitButton.addActionListener ( this );
//searches
JPanel listPanel = new JPanel();
listPanel.setBackground ( Color.white );
listPanel.setBorder ( BorderFactory.createLineBorder( buttonColor ));
listPanel.add ( listPanel3 );
listPanel.add ( listPanel4 );
listPanel.add ( newsButton );
add ( buttonPanel );
add ( listPanel );
}
if ( source == newsButton ){
output.setText( "<html><head></head><body><p>" +
"<br>http://www.timestocome.com"+
"<br>Winter 2002-2003"+
"<br>Copyright TimestoCome.com"+
"<br>contact theboss@timestocome.com"+
"<br>for information." +
"<br><br><br>"+
"<br>Be sure to enter your name, email, "+
"zip code and agent name to begin individualizing "+
"your agent" +
"</body></html>" );
}else if ( source == exitButton ){
System.exit(0);
}
setBackground( color );
repaint();
}
}
--NewsList.java--
//www.timestocome.com
//Winter 2002/2003
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.*;
NewsList ()
{
//open list files and read the lists into the arrays
try{
FileReader fr = new FileReader ( "data/newskeys.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;
fr.close();
}catch ( IOException e ){
urlKeys[0] = "Enter";
urlKeys[1] = "the";
urlKeys[2] = "keywords";
urlKeys[3] = "you";
urlKeys[4] = "wish";
urlKeys[5] = "to";
urlKeys[6] = "search";
urlKeys[7] = "for";
urlKeys[8] = "here";
urlKeys[9] = "";
newskeys.setEditable(true);
newskeys.getEditor().addActionListener ( this );
listPanel.setBackground ( Color.white );
newskeys.setBackground ( Color.white );
listPanel.add ( newskeys );
add ( listPanel );
newskeys.removeItemAt ( place );
newskeys.insertItemAt ( newItem, place );
try {
FileWriter fw = new FileWriter ( "data/keys.txt");
BufferedWriter bw = new BufferedWriter ( fw );
bw.flush();
fw.close();
--Pages.java--
import java.net.*;
import java.io.*;
class Pages
{
int score;
File filename;
URL url;
String description;
int getScore()
{
return score;
}
File getFile()
{
return filename;
}
URL getURL()
{
return url;
}
String getDescription()
{
return description;
}
}//end Pages class
--Progress.java--
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
Progress( progress p )
{
addWindowListener ( new WindowAdapter ()
{ public void windowClosed ( WindowEvent e ) {} } );
--Search.java--
import javax.swing.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;
import java.net.*;
import java.util.*;
int numberOfWords = 0;
int numberOfUrls = 0;
//static int maxURLS = 256;
static int maxURLS = 100;
static int maxWords = 10;
URL urlList[] = new URL[maxURLS];
static URL downloadedURLS[] = new URL[maxURLS];
static String wordList[] = new String[maxWords];
static Vector docs = new Vector();
static int totalPages = 0;
static int threadCount = 0;
static progress p; //page count
static progress p1; //score average
static long pt = 0;
static long dt = 0;
static int tpg = 0;
static int tpb = 0;
static int done = 0;
static int totalScore = 0;
static double averageScore = 0;
static int pageCount = 0;
getUserInput();
f.setTitle ( "Page Count ... " );
f.setSize ( 200, 60 );
f.setBackground ( Color.white );
f.setVisible(true);
static void imhome (int s, File f, URL u, String d, URL links[], long dtime, long ptime )
{
threadCount--;
System.out.println ( "thread count: " + threadCount + " url: " + u );
pt += ptime;
dt += dtime;
tpg++;
totalScore += s;
pageCount++;
averageScore = totalScore/pageCount;
docs.addElement(pg);
int duplicateFlag = 0;
if ( tempA.compareTo(tempB) == 0 ){
duplicateFlag = 1;
}
}
}
if ( duplicateFlag == 0 ){
fetchl[i] = new Fetch2 ( links[i], wordList );
fetchl[i].start();
totalPages++;
downloadedURLS[totalPages] = links[i];
//need to send this info to user....
p.setValue ( totalPages );
threadCount++;
}
}
}
}
//download failed
static void imhome ( URL u, long dtime )
{
threadCount--;
tpb++;
if ( threadCount <= 1 ){
System.out.println ( "time to finish " + threadCount);
finish();
}
}
if ( done == 1){
return;
}
//when list is empty sort vector and grab a percent or
//number of the highest scoring pages
done = 1;
}
try {
//create file
File resultsFile = new File ( "results/" + fn);
FileWriter fw;
//send to getDesc
String desc = ((Pages)docs.elementAt(q)).description;
//create a link for desc...
String link = "\n\n<a href=\"" + ((Pages)docs.elementAt(q)).url +"\">";
double scr = ((Pages)docs.elementAt(q)).score;
System.out.println ( "link: " + link );
System.out.println ( "desc: " + desc );
System.out.println ( "score: " + scr );
//write footer
String footer = new String ( "\n</body></html>" );
bw.write ( footer );
//close file
bw.flush();
bw.close();
//how can user recall or save this page? need to add that in here
//pop up window with info, save button, erase button, close window button
//add user tool to main agent to bring page back up
// SearchPanel sp = new SearchPanel();
void getUserInput()
{
try{
FileReader fr = new FileReader ( "data/news.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;
while (( in = br.readLine() ) != null ){
try{
}catch ( IOException ex ) { }
}//end getUserInput
if ( lb < ub ){
j = partition ( d, lb, ub );
sort ( d, lb, j-1 );
sort ( d, j+1, ub );
}
return d;
}
static int partition ( Vector d, int lb, int ub )
{
double a = ((Pages)d.elementAt(lb)).score;
Pages aFound = (Pages)d.elementAt(lb);
int up = ub;
int down = lb;
}
}
d.setElementAt( (Pages)d.elementAt(up), lb);
d.setElementAt ( aFound, up );
return up;
}
}//end Search
--SearchPanel.java--
//www.timestocome.com
//Winter 2002/2003
import javax.swing.*;
import javax.swing.event.*;
import javax.swing.text.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;
import java.net.*;
class SearchPanel
{
SearchPanel()
{
JFrame sf = new SearchFrame();
sf.setBackground ( Color.white );
sf.show();
}
}
SearchFrame ()
{
setDefaultCloseOperation( DISPOSE_ON_CLOSE);
}
}
ShowResults(){
try {
dataout.setPage ( "file:searchresults.html" );
}catch (IOException e){System.out.println ( e );}
b1Panel.add ( savethis );
b1Panel.add ( Box.createRigidArea ( new Dimension ( 77, 30 )));
bPanel.setBackground ( Color.white );
bPanel.add ( getold );
bPanel.add ( clean );
bPanel.add ( home );
add ( bPanel );
add ( b1Panel );
add ( dataoutPanel );
if ( source == savethis ){
//save to search directory
//get name from user and copy-move
//searchresults.html to search/username.html
File old = new File ( "searchresults.html" );
String newfile = "search/" + jtf.getText();
File newf = new File ( newfile );
old.renameTo( newf );
jtf.setText ( "saved file");
}else if ( source == getold ){
//list files in search dir and
//show the one user picks
File f = jfc.getSelectedFile();
try{
dataout.setPage( "File:" + f );
}catch ( IOException ex ){}
try {
dataout.setPage ( "file:searchresults.html" );
}catch (IOException e2){System.out.println ( e2 );}
/*
class LinkFollower implements HyperlinkListener
{
private JEditorPane pane;
this.pane = pane;
}
if ( fileobj.getPath().lastIndexOf('.') >0 )
extension =
fileobj.getPath().substring(
fileobj.getPath().lastIndexOf ('.') + 1).toLowerCase();
if ( !extension.equals ( "" ) )
return extension.equals ( "html" );
else
return fileobj.isDirectory();
}
}
--URLList.java--
//www.timestocome.com
//Winter 2002/2003
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.*;
URLList ()
{
//open list files and read the lists into the arrays
try{
FileReader fr = new FileReader ( "data/news.txt" );
BufferedReader br = new BufferedReader ( fr );
String in;
fr.close();
}catch ( IOException e ){
keys[0] = "http://www.cnn.com";
keys[1] = "http://www.foxnews.com";
keys[2] = "http://www.drudgereport.com";
keys[3] = "http://www.slashdot.org";
keys[4] = "http://www.boston.com";
keys[5] = "http://www.wired.com";
keys[6] = "http://www.projo.com";
keys[7] = "http://news.bbc.co.uk/";
keys[8] = "http://www.nando.com";
keys[9] = "http://www.timestocome.com/blogs/blogs.html";
URLkeys.setEditable(true);
URLkeys.getEditor().addActionListener ( this );
listPanel.setBackground ( Color.white );
URLkeys.setBackground ( Color.white );
listPanel.add ( URLkeys );
add ( listPanel );
URLkeys.removeItemAt ( place );
URLkeys.insertItemAt ( newItem, place );
bw.flush();
fw.close();
--User.java--
//www.timestocome.com
//Winter 2002/2003
import javax.swing.*;
import javax.swing.event.*;
import java.awt.event.*;
import java.awt.*;
import java.io.*;
public User()
{
JFrame f = new userFrame();
f.setBackground( Color.white );
f.show();
JTextArea userName;
JTextArea agentName;
JTextArea zipcode;
JTextArea email;
public userFrame()
{
Container cp = getContentPane();
try {
FileReader fr = new FileReader ( "data/user.txt");
BufferedReader br = new BufferedReader ( fr );
String in;
un = br.readLine();
an = br.readLine();
zc = br.readLine();
em = br.readLine();
fr.close();
}catch ( IOException e ) {}
userNamel.setBorder ( BorderFactory.createEtchedBorder() );
agentNamel.setBorder ( BorderFactory.createEtchedBorder() );
zipcodel.setBorder ( BorderFactory.createEtchedBorder() );
emaill.setBorder ( BorderFactory.createEtchedBorder() );
userName.setBorder ( BorderFactory.createEtchedBorder() );
agentName.setBorder ( BorderFactory.createEtchedBorder() );
zipcode.setBorder ( BorderFactory.createEtchedBorder( ) );
email.setBorder ( BorderFactory.createEtchedBorder() );
userPanel.add ( userNamel );
userPanel.add ( userName );
userPanel.add ( agentNamel );
userPanel.add ( agentName );
userPanel.add ( zipcodel );
userPanel.add ( zipcode );
userPanel.add ( emaill );
userPanel.add ( email );
userPanel.add ( done );
cp.add( userPanel);
try {
FileWriter fw = new FileWriter ( "data/user.txt" );
BufferedWriter bw = new BufferedWriter ( fw );
bw.write ( userName.getText() );
bw.newLine();
bw.write ( agentName.getText() );
bw.newLine();
bw.write ( zipcode.getText() );
bw.newLine();
bw.write ( email.getText() );
bw.newLine();
bw.flush();
bw.close();
}catch ( IOException e ){ }
//close window
setVisible ( false );
--Weather.java
//www.timestocome.com
//Winter 2002/2003
import java.io.*;
import java.net.*;
import java.util.Date;
URL url;
Weather(String zip)
{
String poll ()
{
int c;
String data = "<Html><Head></Head><Body><br><br>";
data += "Brought to you from <a href=\"http://www.noaa.gov\">NOAA</a>";
data += "<br><br><table><tr><td><b>";
try {
InputStream in = urlconnection.getInputStream();
while ( ( c = in.read() ) != -1 ){
if ( (char)c == '7'){
if ( (char)in.read() == '-'){
if ( (char)in.read() == 'D' ){
if ( (char)in.read() == 'a' ){
if ( (char)in.read() == 'y' ){
while ( ( c = in.read() ) != -1 ){
if ( (char)c == '<'){
if ( (char)in.read() == 't' ){
if ( (char)in.read() == 'd' ){
if ( (char)in.read() == '>' ){
if ( (char)in.read() == '<'){
if ( (char)in.read() == 'b'){
if ( (char)in.read() == '>' ){
while ( (c = in.read()) != -1 ){
data += (char)c;
if ( data.endsWith ( "</table>") ){
break;
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
in.close();
data += "</Body></Html>";
return data;
}catch (IOException e ){
return ( "Error getting weather: " + e );
}
}
}
--progress.java--
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
--README--
//www.timestocome.com
//Winter 2002/2003
This agent is far from done. So far it checks your local weather,
tells you a joke, does deep link searches and gets your ip number.
It will also learn to converse with you. The more you converse with
the agent the better at conversation it will become.
To compile
javac Agent.java
To run
java Agent
or
java -Xmx128M Agent
------------------------------------------------
data for the agent is as follows:
for conversations:
store files in a directory named 'data'
the file name should be a sentence
the file data is responses to that sentence followed by a #10 which
is the beginning score. The more a sentence gets used the higher
the score will be
example:
File name 'How are you?'
File data
I'm fine and you?#12
Excellent#10
for jokes
store jokes in a directory named 'jokes'
The file names should be sequential numbers followed by .html
store each joke as a regular HTML webpage, using the background
and fonts of your choice
example:
File name '1.html'
<html>
<head></head>
<body>
<br>Why did the chicken cross the road?
<br>To get to the other side.
</body>
</html>
Chapter 7
Neural Networks
Kohonen and Anderson independently published papers about networks that learned without supervision, SOMs (self organizing maps). Grossberg and Carpenter developed ART (adaptive resonance theory), which learns without supervision, in the late 1960's. The 1970's brought the Neocognitron, for visual pattern recognition. Rumelhart and McClelland published PDP ("Parallel Distributed Processing") in three volumes. These books described neural networks in a way that was easy to understand.
If a neural net is too large it will memorize rather than learn. Neural nets
usually are composed of three layers, input, hidden, and output. More layers
can be added, but usually little is gained from doing so. The connections vary
by the network type. Some nets have connections from each node in one layer
to the next, some have backward connections to the previous layer and some
have connections within the same layer. Neural networks map sets of inputs
to sets of outputs. Learning is what shapes the neural network's surface. Supervised learning algorithms take inputs and match them to outputs, correcting
the network if the output does not match the desired output. Unsupervised
learning algorithms do not correct the output given by the neural net. The net
is provided with inputs, but not with outputs.
Training data for a neural net should be fairly representative of the actual
data that will be used. All possibilities should be covered and the proportion
of data in each area should match the proportion in the real data. There are
several ways of training neural nets: hard coded weights determined by
experience or mathematical formulas can serve in place of a training algorithm;
supervised training uses input and matching output patterns to let the net set
the weights; graded training only uses input patterns, but the neural net
receives feedback on how accurate its answer is; unsupervised training uses
only input patterns, and whatever output the neural net gives is taken as the answer.
Autonomous learning in neural nets is different from other unsupervised
learning systems in that the neural net can learn selectively; it doesn't learn
every pattern input, only those that are 'important'. An autonomous learning
neural net has the following capabilities: it organizes information into categories
without outside input and will reorganize them if it makes sense to do so; it
retrieves information from less than perfect input; it is configured to work in
parallel to keep speed reasonable; the system is always selectively learning;
priorities given to input patterns can change; it can generalize; it has more
memory space than it needs; and it must be able to expand and add to its knowledge
rather than overwriting previously learned knowledge. Of course something this
wonderful should also make your coffee and sort your email for you too.
Simulated annealing is a statistical way to solve optimization problems, like
setting a schedule or wiring a network. Boltzmann networks use this algorithm
to learn. A random solution is chosen and compared to the current best solution
found. The better of the two is kept and then, depending on the problem, some
random changes are made. The amount of randomness in each loop is decreased
over time, allowing the net to slowly settle into a solution. The randomness helps
to keep the net from settling into local minima rather than the global minimum.
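The loop described above can be sketched as follows. The objective function, cooling schedule, and constants here are arbitrary illustrative choices, not anything from the original text: a candidate move is accepted if it is better, or with a probability that shrinks as the "temperature" drops.

```java
import java.util.Random;

public class Annealer {
    // toy objective with both a local and a global minimum (an assumption for the demo)
    static double energy(double x) { return x*x*x*x - 3*x*x + x; }

    static double anneal(long seed) {
        Random rnd = new Random(seed);
        double current = rnd.nextDouble() * 4 - 2;   // start somewhere in [-2, 2]
        double best = current;
        for (double temp = 2.0; temp > 0.01; temp *= 0.99) {
            // random change whose size shrinks with the temperature
            double candidate = current + (rnd.nextDouble() - 0.5) * temp;
            double delta = energy(candidate) - energy(current);
            // always accept improvements; sometimes accept worse moves,
            // with a probability that decreases as the temperature drops
            if (delta < 0 || rnd.nextDouble() < Math.exp(-delta / temp)) {
                current = candidate;
            }
            if (energy(current) < energy(best)) best = current;
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("minimum found near x = " + anneal(42));
    }
}
```

The early, large random moves let the search jump out of shallow local minima; the late, small moves let it settle.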
The Lyapunov function, also known as the energy function, is used to test
for convergence of the neural network. The function decreases as the network
changes and assures stability.
7.3 Perceptron
Rosenblatt added a learning law to the McCulloch-Pitts neurode to make the
Perceptron, the first of the neural net learning models. The perceptron
has one layer of inputs and one layer of outputs, but only one group of weights.
If data points on a plot are linearly separable (we can draw a straight line separating points that belong in different categories), then we can use this learning
method to teach the neural net to properly separate the data points.
The McCulloch-Pitts neurode fires a +1 if the neurode's total input, the
sum of each input * its weight plus some bias, is greater than the set
threshold. If it is less than the set threshold, or if there is any inhibitory input,
a -1 is fired. If the weights are chosen to be 1 for each input and the threshold
is zero, then a bias of -1.5 makes a two-input neurode work
as an AND function, a bias of -0.5 makes it work as an
OR function, and a single input with weight -1 and a bias of 0.5 behaves as a NOT operator. Any
logical function can be created using only AND, OR and NOT gates so a neural
net can be created with McCulloch-Pitts neurodes to solve any logical function.
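The gate constructions can be sketched directly. The bias values used here (-1.5 for AND, -0.5 for OR, and 0.5 with a -1 weight for NOT) are the conventional choices for binary 0/1 inputs and are an assumption, as is the use of 0/1 rather than -1/+1 outputs, to keep the truth tables easy to read.

```java
public class McCullochPitts {

    // fires 1 if the weighted sum plus bias exceeds a threshold of zero, else 0
    static int fire(double sum, double bias) {
        return (sum + bias > 0.0) ? 1 : 0;
    }

    // weights of 1 on both inputs; only the bias differs between gates
    static int and(int a, int b) { return fire(a + b, -1.5); }
    static int or(int a, int b)  { return fire(a + b, -0.5); }
    // single input with weight -1 and bias 0.5
    static int not(int a)        { return fire(-a, 0.5); }

    public static void main(String[] args) {
        System.out.println("1 AND 1 = " + and(1, 1));
        System.out.println("1 OR 0  = " + or(1, 0));
        System.out.println("NOT 1   = " + not(1));
    }
}
```

Since AND, OR and NOT are all expressible, any logic function can be wired up from such neurodes.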
We start with a weight vector that has its tail at the origin and its head at a
randomly picked point. Each data point is input to the neurode and it responds
with either a +1 or a -1; when the response is wrong, the input vector, multiplied
by the correct output, is added to the weight vector. This is repeated until all
data points are input and the neurode gives the correct output for each point.
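That training loop can be sketched as follows; the class name, the bias-as-extra-weight layout, and the AND-style demo data are illustrative assumptions. On each misclassified point the input, scaled by the correct +1/-1 output, is added to the weight vector, and the loop repeats until every point is classified correctly (which the perceptron convergence theorem guarantees for linearly separable data).

```java
public class Perceptron {
    double[] w;   // weight vector; index 0 holds a bias weight

    Perceptron(int dims) { w = new double[dims + 1]; }

    // fires +1 or -1 depending on which side of the line the point falls
    int classify(double[] x) {
        double sum = w[0];                       // bias term
        for (int i = 0; i < x.length; i++) sum += w[i + 1] * x[i];
        return sum > 0 ? 1 : -1;
    }

    // train on linearly separable data until no point is misclassified
    void train(double[][] xs, int[] ys) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int n = 0; n < xs.length; n++) {
                if (classify(xs[n]) != ys[n]) {
                    w[0] += ys[n];               // adjust bias
                    for (int i = 0; i < xs[n].length; i++)
                        w[i + 1] += ys[n] * xs[n][i];
                    changed = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        // AND-like data: +1 only when both inputs are 1 (linearly separable)
        double[][] xs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        int[] ys = {-1, -1, -1, 1};
        Perceptron p = new Perceptron(2);
        p.train(xs, ys);
        for (int n = 0; n < xs.length; n++)
            System.out.println(xs[n][0] + "," + xs[n][1] + " -> " + p.classify(xs[n]));
    }
}
```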
The perceptron fell out of favor since it can only handle linearly separable
functions, which means simple functions like XOR or parity cannot be computed
by it. Minsky and Papert published a book, 'Perceptrons', in 1969, that
proved that one and two layer neural nets could not handle many real world
problems, and research in neural nets fell off for about twenty years.
An additional layer and set of weights can enable the perceptron to handle
functions that are not linearly separable. A separate layer is needed for each vertex needed
to separate the function. A 1950's paper by A. N. Kolmogorov published a proof
that a three layer neural network could perform any mapping exactly between
any two sets of numbers.
Multi layered perceptrons were developed that can handle XOR functions.
Hidden layers are added and they are trained using backpropagation or a similar
training algorithm. With one layer, linearly separable problems can be solved;
with two layers, regions can be sorted; and with three layers, enclosed regions
can be sorted.
than 2 or the network will not stabilize.
Input patterns are used to set the initial weights, during which time the
mentor node is set to +/- 1 depending on the desired output. Following that
a training set, different from the initial set, is tried. If the answer is correct we
do nothing. If the answer is not correct the weights are adjusted using the delta
rule.
The delta rule changes the weights in proportion to the amount they are
incorrect. The error is determined by subtracting the network's actual response
from the expected response; this is multiplied by a training constant and by
the size and direction of the input pattern vector, and the result determines
the change in weight. This is also known as the Least Mean Squared rule:
ChangeInWeight = 2 * LearningRate * InputNode_j * (DesiredOutput - ActualOutput)
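A single Least Mean Squared update can be sketched as follows; the class and method names are illustrative, and the repeated-presentation loop in main is just to show the weights settling. Each weight moves by 2 * rate * input * (desired - actual), exactly the formula above.

```java
public class Adaline {
    // one delta-rule (LMS) step: returns the updated weight vector
    public static double[] lmsStep(double[] w, double[] x,
                                   double desired, double rate) {
        double actual = 0;                        // weighted sum = net's response
        for (int i = 0; i < w.length; i++) actual += w[i] * x[i];
        double[] updated = new double[w.length];
        for (int i = 0; i < w.length; i++)
            updated[i] = w[i] + 2 * rate * x[i] * (desired - actual);
        return updated;
    }

    public static void main(String[] args) {
        double[] w = {0.0, 0.0};
        // teach the neurode that input (1, 1) should output 1.0
        for (int step = 0; step < 50; step++)
            w = lmsStep(w, new double[]{1, 1}, 1.0, 0.1);
        System.out.println("w0=" + w[0] + " w1=" + w[1]);
    }
}
```

Each pass shrinks the remaining error by a constant factor, so the weighted sum converges to the desired output.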
Collections of Adalines in a layer can be taught multiple patterns. Adalines
can have additional inputs that are powers or products of the original inputs; these are
referred to as higher order networks. A higher order network may work better at pattern solving
than a many layered first order network. This may be used in more than two
dimensions: a line separates linear data in a plane, a plane separates linear
data in three dimensions, etc. Adalines and Madalines can be used to clean up
noise from data, provided there is a good copy of the data to learn from during
training.
Associative memory systems can recall information based on garbled input; details
are stored in a distributed fashion, are accessible by content, are very robust,
and most importantly can generalize. The two classes of associative memory,
classified by how they store memories, are auto-associative and hetero-associative.
Auto-associative: each data item is associated with itself. Used for cleaning
up and recognizing handwriting. Training is done by giving the same pattern
to the input and output nodes.
Hetero-associative: different data items are associated with each other. One
pattern is given and another is output; a translation program would fall in this
category. This one is trained by giving one input pattern to the input nodes
and the desired output pattern to the output nodes.
The main architectures for associative memory neural networks are: crossbar
(aka Hopfield); adaptive filter networks; competitive filter networks.
Adaptive filter networks, like Adalines, test each neurode to see if the input is
the pattern specific to that neurode. These are used in signal processing.
Competitive filter networks, like Kohonen networks, have neurodes competing to be
the one that matches the pattern. They self-organize and they perform statistical
modeling without outside aid or input.
7.8 Counterpropagation Network
The counterpropagation network is a hybrid network. It consists of an outstar
network and a competitive filter network. It was developed in 1986 by
Robert Hecht-Nielsen. It is guaranteed to find the correct weights, unlike regular
backpropagation networks that can become trapped in local minima
during training.
The input layer neurodes connect to each neurode in the hidden layer. The
hidden layer is a Kohonen network which categorizes the pattern that was input.
The output layer is an outstar array which reproduces the correct output pattern
for the category.
Training is done in two stages. The hidden layer is first taught to categorize
the patterns and the weights are then fixed for that layer. Then the output
layer is trained. Each pattern that will be input needs a unique node in the
hidden layer, which is often too large to work on real world problems.
close to it using a Mexican Hat function (so called because it looks like a
Mexican hat). The Mexican Hat function is also used in wavelets and image
processing. An example is 1.5x^4 - 4x^2 + 2; try plotting this between -2 and 2.
The neurodes close to the one activated take part in the training, the others do
not. To make it computationally efficient a step function is used instead of a
true Mexican Hat function.
Self organization is a form of unsupervised learning. This sets weights with a
'winner take all' algorithm. Each neurode learns a classification. Input vectors
will be classed into the group to which they are closest.
General algorithm:
The weights between the nodes are initialized to random values between 0.0 and
1.0.
Then the weight vector is normalized.
The learning rate is set between 1.0 and 0.0 and decreased linearly each iteration.
The neighborhood size is set and decreased linearly each iteration.
The input vector is normalized and fed into the network.
The input vector is multiplied by the connection weights and the total is accumulated
by the Kohonen network nodes.
The winning node's output is set to one and all the other nodes are set to zero.
Weights are adjusted: Wnew = Wold + trainingConstant * (input - Wold).
Training continues until a winning node vector meets some minimum error standard.
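The general algorithm can be sketched for a one-dimensional map. To keep the sketch short the normalization steps are omitted, and the node count, seed, and linear decay schedules are arbitrary assumptions; the update line is the Wnew = Wold + trainingConstant * (input - Wold) rule from the steps above, applied to the winner and its neighborhood.

```java
import java.util.Random;

public class Som {
    double[][] w;              // one weight vector per output node
    Random rnd = new Random(7);

    Som(int nodes, int dims) {
        w = new double[nodes][dims];
        for (double[] row : w)                       // random initial weights in [0, 1)
            for (int i = 0; i < row.length; i++) row[i] = rnd.nextDouble();
    }

    // winning node = the one whose weight vector is closest to the input
    int winner(double[] x) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int n = 0; n < w.length; n++) {
            double d = 0;
            for (int i = 0; i < x.length; i++)
                d += (x[i] - w[n][i]) * (x[i] - w[n][i]);
            if (d < bestDist) { bestDist = d; best = n; }
        }
        return best;
    }

    void train(double[][] data, int iterations) {
        for (int t = 0; t < iterations; t++) {
            double rate = 1.0 - (double) t / iterations;   // decreases linearly
            int radius = (int) Math.round((w.length / 2.0) * (1.0 - (double) t / iterations));
            for (double[] x : data) {
                int win = winner(x);
                // move the winner and its neighborhood toward the input
                for (int n = Math.max(0, win - radius);
                         n <= Math.min(w.length - 1, win + radius); n++)
                    for (int i = 0; i < x.length; i++)
                        w[n][i] += rate * (x[i] - w[n][i]);
            }
        }
    }

    public static void main(String[] args) {
        Som som = new Som(4, 2);
        som.train(new double[][]{{0.2, 0.8}, {0.9, 0.1}}, 20);
        System.out.println("node for (0.2, 0.8): " + som.winner(new double[]{0.2, 0.8}));
    }
}
```

A step-function neighborhood (every node within the radius gets the full update) stands in for the Mexican Hat function, as the text suggests.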
7.10.1 C++ Self Organizing Net
//som.cpp
//this is an example of a 'Self Organizing Kohonen Map'
//http://www.timestocome.com
#include "somlayer.cpp"
//read in data
kohonen.getData();
kohonen.readInputFile();
//somlayer.cpp
//www.timestocome.com
//
//
//This program is a C/C++ program demonstrating the
//self organizing network (map) {algorithm by Kohonen}
//This is an unsupervised network
//one neurode in the output layer will
//be activated for each different input pattern
//The activated node will be the one whose weight
//vector is closest to the input vector
//
//It reads in a data file of vectors in the format:
//99.99 88.88 77.77
//66.66 55.55 44.44
//
//algorithm
//weight array is created
//(number of input dimensions) X (number of input dimensions * number of vectors)
//the weights are initialized to a random number between 0 and 1
//weight vectors are normalized
//the learning rate is set to one and linearly decremented
// depending on maximum number of iterations
//the neighborhood size is set to the max allowed by the kohonen output layer size
// and decremented linearly depending on the maximum number of iterations
//the input vector is normalized
//each input is multiplied by a connecting weight and sent to each output node
//the inputs for each output node are summed
//the winning node is set to one
//the other output nodes are set to zero
//the distance between the winning node and the input vector are checked
//if the distance is not inside minimum acceptable the weights are adjusted
// Wnew = Wold + trainingConstant * (input - Wold)
//the neighborhood size and training constant are decreased
//
//and the next loop is begun.
#ifndef _LAYER_CPP
#define _LAYER_CPP
#include <iostream.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <stdio.h>
#include <string>
class network{
private:
void normalizeWeights()
{
}
void normalizeInput()
{
public:
network(){}
~network(){}
void createNetwork()
{
int max = 1;
}
}
normalizeWeights();
}
void getData()
{
cout << "* Enter the name of your input file containing the *"<< endl;
cout << "* vectors. *"<< endl;
cout << "*****************************************************"<< endl;
cin >> fileIn;
// distance tolerance
cout << "*****************************************************"<< endl;
cout << "* Enter the distance tolerance that is acceptable *"<< endl;
cout << "*****************************************************"<< endl;
cin >> distanceTolerance;
//vectorsIn = nodesIn;
weightColumn = nodesIn * vectorsIn;
//user gave us the number of floating numbers per row (dimensions)
//and the number of lines (vectors)
//only a space is used between the numbers, no commas or other markers.
void readInputFile()
{
//read in vectors
for (int i=0; i<vectorsIn; i++){
for (int j=0; j<nodesIn; j++){
fscanf ( fp, "%lf", &inputArray[i][j]);
}
}
fclose (fp);
normalizeInput();
void train ()
{
}
cout << endl << endl;
// inner loop
// see if outside number iterations = break from inner loop
while (count < maxIterations ){
count++;
cout << "\n loop number " << count;
cout << "\tdistance " << distance;
cout << "\twinning node " << winningNode << endl;
// set all other outputNodes to zero
for( int i=0; i<nodesK; i++){
if( i != winningNode ){
kohonen[i] = 0.0;
}else{
kohonen[i] = 1.0;
}
}
if (( count % decreaseNeighborhoodSize == 0)
&& (neighborhoodSize > 1)){
neighborhoodSize--;
//flip flag
firstLoop = 0;
if( weights[i][j] > 1.0){
weights[i][j] = 1.0;
}
// re-normalize weights
normalizeWeights();
void print()
{
//open file
FILE *fp = fopen ( fileOut, "w");
//headings
fprintf (fp, "\n\n\n data from training run \n");
//headings
fprintf ( fp, "\n\n\nnormalized input vectors\t\twinning node\tdistance\n");
//print vectors, winning node for each and distance for each
for ( int i=0; i<vectorsIn; i++){
};
#endif // _LAYER_CPP
7.11 Backpropagation
Forward Feed Back Propagation networks (aka Three Layer Forward Feed Net-
works) have been very successful. Some uses include teaching neural networks to
play games, speak and recognize things. Backpropagation networks can be used
on several network architectures. The networks are all highly interconnected
and use non-linear transfer functions. The network must have at minimum
three layers, but rarely needs more than three layers.
Back-propagation supervised training for forward-feed neural nets uses pairs
of input and output patterns. The weights on all the vectors are set to random
values. Then input is fed to the net and propagates to the output layer, where the
errors are calculated. Then the error correction is propagated back through the
hidden layer and then to the input layer of the network. There is one input neurode
for each number (dimension) in the input vector, and one output neurode
for each dimension in the output vector, so the network maps IN-dimensional
space to OUT-dimensional space. There is no set rule for determining the number
of hidden layers or the number of neurodes in the hidden layer. However,
if too few hidden neurodes are chosen then the network cannot learn. If too
many are chosen, then the network memorizes the patterns rather than learning
to extract relevant information. A rule of thumb for choosing the number of
hidden neurodes is log2(X), where X is the number of patterns. So
if you have 8 distinct patterns to be learned, then log2(8) = 3 and 3 hidden
neurodes are probably needed. This is just a rule of thumb; experiment to see
what works best for your situation.
The delta rule is used for error correction in backpropagation networks. This
is also known as the least mean squared rule:
NewWeight = OldWeight + 2 * LearningConstant * NeurodeOutput * (DesiredOutput - ActualOutput)
The delta rule uses local information for error correction. This rule looks for a minimum.
In an effort to find a minimum it may find a local minimum rather than the
global minimum. Picture trying to find the deepest hole in your yard; if you
measure small sections at a time you may locate a hole but it may not be the
deepest in the yard. The generalized delta rule seeks to correct this by looking
at the gradient for the entire surface, not just local gradients.
The error vector is aimed at zero during training. The vector is calculated
as:
Error = (1/2) * (sum over each output of (desired - actual)^2)
To get the error close to zero, within a tolerance, we use iteration. Each iteration we move
a step downward. We take the gradient, the derivative of a vector, and use
the steepest descent to minimize the error, so:
NewWeight = OldWeight - StepSize * gradient_W(e(W))
The derivative of the sigmoid function T(x) = 1/(1 + e^-x) is just T(x) * (1 - T(x)),
so using the chain rule we arrive at the error correction term for an output
node, actual * (1 - actual) * (desired - actual); for a hidden node the output errors
are passed back through the connecting weights. The weight is then changed
by the amount of the error correction function as it propagates back through the network.
To train the net all weights are randomly set to a value between -1.0 and 1.0.
To do the calculations going forward through the net:
Each InputNode is multiplied by each weight connected to it.
Each HiddenNode sums up these incoming weights and adds a bias to the
total.
This value is used in the sigmoid function as 1/(1 + e^-x).
If this value is greater than the threshold the HiddenNode fires this value,
else it fires zero.
Each HiddenNode is multiplied by each weight connected to it.
Each OutputNode sums up these incoming weights and adds a bias to the
total.
This value is used in the sigmoid function as 1/(1 + e^-x).
This is the value output by the OutputNode.
To calculate the adjustments during training, you figure out the error and
propagate it back like this:
Adjust weights between HiddenNodes and OutputNodes:
ErrorOut = (OutputNode) * (1 - OutputNode) * (DesiredOutput - OutputNode)
ErrorHidden = (HiddenNode) * (1 - HiddenNode) * (sum of ErrorOut * Weight
over each weight connected to this node)
LearningRate = LearningConstant * HiddenNode
(LearningConstant is usually set to something around 0.2)
Adjustment = ErrorOut * LearningRate
Weight = Weight - Adjustment
Adjust weights between HiddenNodes and InputNodes:
Adjustment = (ErrorHidden) * (LearningConstant) * (NodeInput)
Weight = Weight - Adjustment
Adjust Threshold:
On OutputNode, Threshold = Threshold - ErrorOut * LearningRate
On HiddenNode, Threshold = Threshold - ErrorHidden * LearningRate
If you use a neural net that also accounts for imaginary numbers you can
adapt this function so it is not always positive and calculate all of the four
derivatives needed.
Numerous iterations are required for a backpropagation network to learn,
so it is not practical for neural nets that must learn in 'real time'. It
will not always arrive at a correct set of weights; it may get trapped in local
minima rather than the global minimum. This is a problem with the 'steepest
descent' algorithm. A momentum term that allows the calculation to slide over
small bumps is sometimes employed. Backpropagation networks do not scale
well; they are only good for small neural nets.
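One forward pass and one backward weight update can be sketched for a tiny 2-2-1 network. The fixed starting weights, learning rate, and target are arbitrary assumptions, and the thresholds/biases are omitted to keep the sketch short; the error terms are the output and hidden error formulas from the procedure above, with the signs arranged so each step reduces the squared error.

```java
public class BackpropStep {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // small fixed starting weights for reproducibility: 2 inputs, 2 hidden, 1 output
    static double[][] wIn = {{0.3, -0.2}, {0.1, 0.4}};  // [input][hidden]
    static double[] wOut = {0.2, -0.1};                 // [hidden]

    // forward pass: fills in the hidden activations, returns the output
    static double forward(double[] x, double[] hidden) {
        for (int h = 0; h < 2; h++)
            hidden[h] = sigmoid(x[0] * wIn[0][h] + x[1] * wIn[1][h]);
        return sigmoid(hidden[0] * wOut[0] + hidden[1] * wOut[1]);
    }

    // one backpropagation step for a single input/target pair
    static void trainStep(double[] x, double desired, double rate) {
        double[] hidden = new double[2];
        double out = forward(x, hidden);
        // output error term: out * (1 - out) * (desired - out)
        double errorOut = out * (1 - out) * (desired - out);
        for (int h = 0; h < 2; h++) {
            // hidden error: output error passed back through the connecting weight
            double errorHidden = hidden[h] * (1 - hidden[h]) * errorOut * wOut[h];
            wOut[h] += rate * errorOut * hidden[h];      // hidden -> output weight
            for (int i = 0; i < 2; i++)
                wIn[i][h] += rate * errorHidden * x[i];  // input -> hidden weight
        }
    }

    public static void main(String[] args) {
        double[] hidden = new double[2];
        double[] x = {1, 0};
        System.out.println("output before training: " + forward(x, hidden));
        for (int i = 0; i < 100; i++) trainStep(x, 1.0, 0.5);
        System.out.println("output after training:  " + forward(x, hidden));
    }
}
```

Repeating the step drives the output toward the target; a full trainer would loop over many input/output pairs until the total error falls inside a tolerance.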
7.11.1 GUI Java Backpropagation Neural Network Builder
//backpropagation.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001
import javax.swing.*;
import java.io.*;
class backpropagation{
backpropagation
(neuralnet n, double c, double t, File f, JTextArea info, int noV, double err)
throws Exception
{
allowedError = err;
max = n.maxNodes;
outNodes = n.out;
noLayers = n.numberOfLayers;
inNodes = n.in;
trainingConstant = c;
threshold = t;
nnToTrain = new neuralnet();
nnToTrain = n;
nnToTrain.threshold = t;
numberOfVectors = noV;
message = info;
trainingDataFile = f;
FileReader fr = new FileReader(f);
BufferedReader br = new BufferedReader(fr);
String lineIn;
vectorsIn = new double[numberOfVectors][inNodes];
vectorsOut = new double[numberOfVectors][outNodes];
neurodeOutputArray = new double[noLayers][max];
nodesPerLayer = new int[noLayers+1];
nodesPerLayer[0] = inNodes;
nodesPerLayer[noLayers - 1] = outNodes;
while(st.nextToken() != st.TT_EOF){
if(st.ttype == st.TT_NUMBER){
vectorsIn[k][i] = st.nval;
i++;
vectorsOut[k][j] = st.nval;
j++;
if(j == outNodes){
k++;
i = 0;
j = 0;
}
}
}
}
//*********forward we go*******************
//propagate input through nn
int vectorNumber = 0;
while(vectorNumber < numberOfVectors){
long loopNumber = 0;
boolean noConvergence = false;
boolean gotConvergence = false;
while( !noConvergence && !gotConvergence){
double temp = 0;
}
}
}
desired = vectorsOut[vectorNumber][i];
actual = neurodeOutputArray[noLayers-1][i];
errorVectorCurrent[i] = (actual)*(1-actual)*(actual-desired);
//current weight
double cw = nnToTrain.weightTable[layer-1][wgt][node];
//output of node connecting to input end of this weight
pvsOut = neurodeOutputArray[layer-1][wgt];
tempCalc += cw*pvsOut;
}
}
message.append("\nDesired-Actual=error " +
desired+ "-" +actual+ "=" +(desired-actual) );
vectorNumber ++;
}
//DisplayNet.java
//http://www.timestocome.com
//Neural Net Building Program
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.text.*;
jpaneldisplaynet jpdn;
rootPanel.add(jpdn);
rootPanel.add(sby, BorderLayout.EAST);
sby.addAdjustmentListener(new AdjustmentListener()
{
public void adjustmentValueChanged( AdjustmentEvent evt){
JScrollBar sb = (JScrollBar)evt.getSource();
jpdn.setScrolledPosition(evt.getValue());
jpdn.repaint();
}
});
}
//create window
final JFrame f = new DisplayNet(in, out, hidden, weight);
f.setBounds( 100, 50, 400, 600);
f.show();
//destroy window
f.setDefaultCloseOperation(DISPOSE_ON_CLOSE);
f.addWindowListener(new WindowAdapter(){
public void windowClosed(WindowEvent e){
f.setVisible(false);
}
});
}
}
int inNodes;
int outNodes;
int hiddenNodes[];
double weights[][][];
int noLayers;
int scrollx = 0, scrolly = 0;
int layers[];
jpaneldisplaynet(int i, int o, int h[], double w[][][])
{
inNodes = i;
outNodes = o;
hiddenNodes = h;
weights = w;
noLayers = h.length + 2;
layers[noLayers - 1] = outNodes;
g.setColor(backColor);
g.fillRect(0, 0, 1280, 960);
Color nodeColor = new Color( 0, 80, 0);
Color weightColor = new Color(0, 0, 255);
g.setColor(nodeColor);
//Heading...
g.drawString("The node and layer locations are in green, " +
"weights are in blue.", x, y-30);
g.drawString("The leftmost layer is the input, " +
"the rightmost layer is output.", x, y-20);
int max;
if(rows>cols){
max = rows;
}else{
max = cols;
}
r -= (scrolly*40);
for(int j=0; j<layers[i]; j++){
g.setColor(nodeColor);
int printRow = r + (j+1)*20;
g.setColor(weightColor);
for(int k=0; k<layers[i+1]; k++){
if(weights[i][j][k] != 0){
g.drawString( " " + nf.format(weights[i][j][k]) +
" ", (c+(i*100)), printRow+(20*(k+1)));
r = printRow + 20*(k+1);
}
}
}
r = 80; //reset at end of column
}
}
//filefilter.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001
import java.io.File;
import javax.swing.filechooser.*;
if(fileobj.getPath().lastIndexOf('.') > 0)
extension = fileobj.getPath().substring(
fileobj.getPath().lastIndexOf('.')
+ 1).toLowerCase();
if(!extension.equals(""))
return extension.equals("net");
else
return fileobj.isDirectory();
}
//DisplayVectors.java
//http://www.timestocome.com
//Neural Net Building Program
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
jpaneldisplayvectors jpdv;
public DisplayVectors()
{
rootPanel.add(jpdv);
void display(){
//create window
final JFrame f = new DisplayVectors();
f.setBounds( 200, 200, 180, 600);
f.setVisible(true);
//destroy window
f.setDefaultCloseOperation(DISPOSE_ON_CLOSE);
f.addWindowListener(new WindowAdapter(){
public void windowClosed(WindowEvent e){
f.setVisible(false);
}
});
}
}
jpaneldisplayvectors()
{
setBackground(Color.white);
}
super.paintComponent(g);
}
}
//gui.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import java.io.File;
import javax.swing.filechooser.*;
import java.io.*;
//build
static JTextField NumberInputs;
static JTextField NumberOutputs;
static JTextField NumberHidden;
static JTextField NumberPerHidden;
JButton jbuttonBuild;
//train
static JTextField TrainingConstant;
static JTextField Threshold;
static JTextField TrainingVectorFile;
static JTextField NoTrainingVectors;
static JTextField Error;
JTextField fileTrain;
JButton jbuttonTrain;
//use
JTextField filetouse;
static JTextField vectorfiletouse;
static JTextField NoVectors;
JButton jbuttonUse;
//file stuff
static File currentFile;
static neuralnet nn;
public gui()
{
super ("http://www.TimesToCome.com");
JPanel jp1 = new JPanel();
jp1.setBackground(c);
jp1.setLayout(new BoxLayout(jp1, BoxLayout.X_AXIS));
jp1.add(LNumberInputs);
jp1.add(NumberInputs);
jp1.add(Box.createHorizontalStrut(20));
jpanelNew.add(jp1);
//training info
jpanelTrain = new jpanel( "Train a Neural Net");
Threshold = new JTextField(20);
JLabel LThreshold = new JLabel("Threshold: ");
jp7.add(NoTrainingVectors);
jp7.add(Box.createHorizontalStrut(20));
jpanelTrain.add(jp7);
//usage info
jpanelUse = new jpanel( "Use a Neural Net");
jpanelUse.add(Box.createRigidArea(new Dimension(570, 5)));
jbuttonUse = new JButton("Process");
jpanelUse.add(jbuttonUse);
jbuttonUse.addActionListener(jb3);
//information
jpanelInformation = new jpanel( "Information");
JScrollPane scrollpaneText = new JScrollPane();
scrollpaneText.add(output);
scrollpaneText.setViewportView(output);
jpanelInformation.add(scrollpaneText);
//set up interface
rootPane = getContentPane();
rootPane.setBackground(Color.white);
rootPane.setLayout(new FlowLayout());
rootPane.add(jpanelNew);
rootPane.add(jpanelTrain);
rootPane.add(jpanelUse);
rootPane.add(jpanelInformation);
//add in menu
jmenubar();
//create window
JFrame f = new gui();
f.setBounds( 100, 100, 650, 700);
f.setVisible(true);
//destroy window
f.setDefaultCloseOperation(DISPOSE_ON_CLOSE);
f.addWindowListener(new WindowAdapter(){
public void windowClosed(WindowEvent e){
System.exit(0);
}
});
jmenubar.setUI( jmenubar.getUI() );
JMenuItem m7 = new JMenuItem("Exit");
m7.addActionListener(a7);
jmenu1.add(m1);
jmenu1.add(m2);
jmenu1.add(m3);
jmenu1.add(m4);
jmenu1.add(m15);
jmenu1.addSeparator();
jmenu1.add(jmenu5);
jmenu1.addSeparator();
jmenu1.add(m7);
jmenu4.add(m8);
jmenu5.add(m11);
jmenu2.add(m13);
jmenu2.add(m6);
jmenubar.add(jmenu1);
jmenubar.add(jmenu4);
jmenubar.add(jmenu2);
setJMenuBar(jmenubar);
//create new neural net
static ActionListener a1 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m1 = ( JMenuItem )e.getSource();
}
};
if(result == JFileChooser.APPROVE_OPTION){
output.setText("Opening... " + fileobj.getPath());
currentFile = fileobj;
try{
FileInputStream fis = new FileInputStream(currentFile);
ObjectInputStream ois = new ObjectInputStream(fis);
nn = (neuralnet)ois.readObject();
ois.close();
}catch(Exception exception){}
}
};
//save net
static ActionListener a3 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m3 = ( JMenuItem )e.getSource();
if(result == JFileChooser.APPROVE_OPTION){
output.setText("Saving... " + fileobj.getPath());
currentFile = fileobj;
try{
FileOutputStream fos = new FileOutputStream(currentFile);
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(nn);
oos.flush();
oos.close();
}catch(Exception exception){}
}
}
};
//train net
static ActionListener a4 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m4 = ( JMenuItem )e.getSource();
output.setText("\n Use this to train a neural net that you have ");
output.append("\n created and saved.");
output.append("\n Enter the training constant (~0.2), the threshold (~0.2), 0.0-1.0");
output.append("\n and the file where your training vectors are stored.");
output.append("\n The training vector file should be of the format:");
output.append("\n (1.0, 4.3, 5.6) (3.6, 6.7, 5.2, 5.3)");
output.append("\n The first vector per line should be the input vector,");
output.append("\n and the second should be the output vector");
output.append("\n Make sure you open a file from the file menu to train.");
output.append("\n press [Train] when you are ready to begin.");
}
};
//about
static ActionListener a6 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m6 = ( JMenuItem )e.getSource();
output.setText( "\nhttp://www.timestocome.com"+
"\nNeural Net Building Program"+
"\nCopyright (C) 2001 Linda MacPhee-Cobb"+
"\nThis program is free software; you can"+
"\nredistribute it and/or modify"+
"\nit under the terms of the GNU General "+
"\nPublic License as published by"+
"\nthe Free Software Foundation; either "+
"\nversion 2 of the License, or "+
"\nany later version."+
"\n\nThis program is distributed in the hope"+
"\nthat it will be useful,"+
"\nbut WITHOUT ANY WARRANTY; without even "+
"\nthe implied warranty of "+
"\nMERCHANTABILITY or FITNESS FOR A PARTICULAR "+
"\nPURPOSE. See the"+
"\nGNU General Public License for more details."+
"\nYou should have received a copy of the "+
"\nGNU General Public License"+
"\nalong with this program; if not, "+
"\nwrite to the Free Software"+
"\nFoundation, Inc., 59 Temple Place, "+
"\nSuite 330, Boston, MA 02111-1307"+
"\nUSA"+
"\nI may be reached via the website http://www.timestocome.com"+
"\nlinda macphee-cobb"+
"\nwinter 2000-2001");
}
};
//exit
static ActionListener a7 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m7 = ( JMenuItem )e.getSource();
output.setText( "Thank you . . . ");
System.exit(0);
}
};
//display net
static ActionListener a8 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m8 = ( JMenuItem )e.getSource();
if(nn == null ){
output.setText("Please open or create a neural net to display.");
}else{
DisplayNet dn = new DisplayNet(nn.in, nn.out, nn.hiddenLayers, nn.weightTable);
dn.display(nn.in, nn.out, nn.hiddenLayers, nn.weightTable);
}
}
};
//print net
static ActionListener a11 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m11 = ( JMenuItem )e.getSource();
if(nn == null ){
output.setText("Please open or create a neural net to print.");
}else{
//print to a file
PrinterNet pn = new PrinterNet(nn.in, nn.out, nn.hiddenLayers, nn.weightTable);
}
}
};
//help
static ActionListener a13 = new ActionListener()
{
public void actionPerformed( ActionEvent e ) { }
};
//process vectors
static ActionListener a15 = new ActionListener()
{
public void actionPerformed( ActionEvent e )
{
JMenuItem m15 = ( JMenuItem )e.getSource();
choice = 15;
output.setText("From the file menu open the neural net");
output.append("\nYou wish to use, then enter the name");
output.append("\nof the file containing the vectors you");
output.append("\nwish to process. The vector file format");
output.append("\nshould be:");
output.append("\n(3.2, 5.66, 2.0)");
output.append("\nwith one vector per line");
}
};
//build net
static ActionListener jb1 = new ActionListener()
{
public void actionPerformed(ActionEvent e)
{
int i = 0;
int o = 0;
String h = "";
i = (int)(Double.valueOf(NumberInputs.getText() ).doubleValue());
o = (int)(Double.valueOf(NumberOutputs.getText() ).doubleValue());
h = NumberPerHidden.getText();
if( (i == 0) || (o == 0)){
output.setText ("Please enter valid numbers for input and output neurodes");
}else if( h == ""){
output.setText ("Please enter a training file name.");
}else{
output.setText("\nBuilding...");
nn = new neuralnet( i, o, h, output);
nn.setInitWeights(output);
}
}else{
output.setText("Please enter all values needed");
}
}
};
//train net
static ActionListener jb2 = new ActionListener()
{
public void actionPerformed(ActionEvent e)
{
double tconstant = -1.0;
double threshold = -1.0;
String fname = "";
double error = -1.0;
output.setText(" ");
output.setText( "Enter a training constant and threshold > 0.0 please.");
}else{
if(currentFile.isFile()){
if(nn == null){
output.setText
("\nPlease open or create a neural net to train.");
}else{
try{
nn = bp.train();
}catch(Exception exc){}
}
}else{
output.setText("\n Check training file name and path ");
}
}
}else{
output.setText("Please fill in all of the blanks");
}
}
};
//process vectors
static ActionListener jb3 = new ActionListener()
{
public void actionPerformed(ActionEvent e)
{
int nv = -1;
nv = (int)(Double.valueOf(NoVectors.getText() ).doubleValue());
output.setText("\n Processing...");
File processfile;
//JTextField vectorfiletouse;
String fname = vectorfiletouse.getText();
if( fname.compareTo("")==0){
output.setText("Please enter a file name");
}else{
try{
}catch(Exception exc){
output.setText("\n Hellfire and damnation. I believe we ran off the end");
output.append("\n of an array. Double check your numbers.");
}
}
}
};
//Help.java
//http://www.timestocome.com
//Neural Net Building Program
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
jpanelHelp jph;
public Help()
{
super ( "Help");
roothelpPanel.add(jph);
}
void display(){
//create window
final JFrame f1 = new Help();
f1.setBounds( 200, 200, 180, 600);
f1.setVisible(true);
//destroy window
f1.setDefaultCloseOperation(DISPOSE_ON_CLOSE);
f1.addWindowListener(new WindowAdapter(){
public void windowClosed(WindowEvent e){
f1.setVisible(false);
}
});
}
}
String processA = "File->Process";
String processB = " Use this to process a file of input vectors";
String processC = " through the net you've created.";
String processD = " Use the format (a, b, c, d) one vector ";
String processE = " per line for the input file.";
jpanelHelp()
{
setBackground(Color.white);
}
public void paintComponent(Graphics g)
{
super.paintComponent(g);
g.drawString(newA, x, y);
g.drawString(newB, x, y + incY);
g.drawString(newC, x, y + 2*incY);
g.drawString(newD, x, y + 3*incY);
g.drawString(newE, x, y + 4*incY);
g.drawString(newF, x, y + 5*incY);
g.drawString(newG, x, y + 6*incY);
g.drawString(newH, x, y + 7*incY);
g.drawString(newI, x, y + 8*incY);
g.drawString(newJ, x, y + 9*incY);
g.drawString(newK, x, y + 10*incY);
g.drawString(importA, x, y + 12*incY);
g.drawString(importB, x, y + 13*incY);
g.drawString(importC, x, y + 14*incY);
g.drawString(importD, x, y + 15*incY);
g.drawString(saveA, x, y + 17*incY);
g.drawString(saveB, x, y + 18*incY);
g.drawString(saveC, x, y + 19*incY);
g.drawString(trainA, x, y + 21*incY);
g.drawString(trainB, x, y + 22*incY);
g.drawString(trainC, x, y + 23*incY);
g.drawString(trainD, x, y + 24*incY);
g.drawString(trainE, x, y + 25*incY);
g.drawString(trainF, x, y + 26*incY);
g.drawString(trainG, x, y + 28*incY);
g.drawString(trainH, x, y + 29*incY);
g.drawString(trainI, x, y + 30*incY);
g.drawString(trainJ, x, y + 31*incY);
g.drawString(trainK, x, y + 32*incY);
g.drawString(processA, x, y + 33*incY);
g.drawString(processB, x, y + 34*incY);
g.drawString(processC, x, y + 35*incY);
g.drawString(processD, x, y + 36*incY);
g.drawString(processE, x, y + 37*incY);
g.drawString(printWA, x, y + 38*incY);
g.drawString(printWB, x, y + 39*incY);
g.drawString(printWC, x, y + 40*incY);
g.drawString(displayNA, x, y + 41*incY);
g.drawString(displayNB, x, y + 42*incY);
}
}
//jpanel.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001
import javax.swing.*;
import java.awt.*;
class jpanel extends JPanel {
jpanel(String s)
{
Color c = new Color(225, 255, 225);
setBackground(c);
setBorder(BorderFactory.createTitledBorder(
BorderFactory.createEtchedBorder(),s));
setLayout(new BoxLayout(this, BoxLayout.Y_AXIS));
}
}
//neuralnet.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001
import javax.swing.*;
import java.util.*;
import java.io.*;
neuralnet(){}
if(in > out) {
maxNodes = in;
}else{
maxNodes = out;
}
if(temp[j] != ','){
tempS += temp[j];
}else{
tempArray[count] = (int) Double.valueOf(tempS).doubleValue();
count++;
tempS ="";
}
}
//build a 3-d weight table to store our weights in
numberOfLayers = count + 3;
numberOfConnections = maxNodes;
weightTable = new double[numberOfLayers][maxNodes][numberOfConnections];
//initialize the table with random numbers between -1.0 and 1.0
public void setInitWeights(JTextArea information)
{
//set all weights to zero
nodeCount[0] = in;
}
nodeCount[numberOfLayers-1] = out;
weightTable[i][j][k] = number;
}
}
}
//neurode.java
//http://www.timestocome.com
//Neural Net Building Program
class neurode {
void calculateValue()
{
}
}
//PrinterNet.java
//http://www.timestocome.com
//Neural Net Building Program
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.text.*;
int inNodes;
int outNodes;
int hiddenNodes[];
double weights[][][];
int noLayers;
int scrollx = 0, scrolly = 0;
int layers[];
/*
inNodes = i;
outNodes = o;
hiddenNodes = h;
weights = w;
noLayers = h.length + 2;
layers[noLayers - 1] = outNodes;
int x = 50;
int y = 50;
int q = 100;
int c = 0;
int rows = 0;
int cols = noLayers;
int max;
if(rows>cols){
max = rows;
}else{
max = cols;
}
r -= (scrolly*40);
if(weights[l][j][k] != 0){
g.drawString( " " + nf.format(weights[l][j][k]) +
" ", (c+(i*100)), printRow+(20*(k+1)));
r = printRow + 20*(k+1);
}
}
}
r = 80; //reset at end of column
}
*/
g.dispose();
pj.end();
//PrintNet.java
//http://www.timestocome.com
//Neural Net Building Program
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.text.*;
import java.io.*;
int inNodes;
int outNodes;
int hiddenNodes[];
double weights[][][];
int noLayers;
int layers[];
String fileName = "printme.txt";
inNodes = i;
outNodes = o;
hiddenNodes = h;
weights = w;
noLayers = h.length + 2;
layers[noLayers - 1] = outNodes;
}
int x = 50;
int y = 50;
int q = 100;
int c = 0;
int rows = 0;
int cols = noLayers;
int max;
if(rows>cols){
max = rows;
}else{
max = cols;
}
for(int i=0; i<cols; i++){
if(weights[i][j][k] != 0){
r = printRow + 20*(k+1);
}
}
}
r = 80; //reset at end of column
}
bw.close();
//process.java
//http://www.timestocome.com
//Neural Net Building Program
//winter 2000-2001
import javax.swing.*;
import java.io.*;
class process{
max = n.maxNodes;
outNodes = n.out;
noLayers = n.numberOfLayers;
inNodes = n.in;
threshold = n.threshold;
nn = n;
numberOfVectors = noV;
message = info;
answerArray = new double[noV][outNodes];
dataFile = f;
FileReader fr = new FileReader(f);
BufferedReader br = new BufferedReader(fr);
String lineIn;
nodesPerLayer[noLayers - 1] = outNodes;
while(st.nextToken() != st.TT_EOF){
if(st.ttype == st.TT_NUMBER){
vectorsIn[k][i] = st.nval;
i++;
if(i == inNodes){
k++;
i = 0;
}
}
}
}
double temp = 0;
}
}
}
}
vectorNumber ++;
}
}
}
}
//weighttable.java
//winter 2000-2001
class weighttable {
void saveTable()
{
//print to a file
}
void printTable()
{
//print to screen or printer
}
void loadTable()
{
//load a saved table into memory for use
}
void setWeights()
{
//if training set to true
//backpropagation training
//else print error message to user
}
}
//printme.txt
Layer # 1
Nd 1 Weights=> 0.51, 0.48,
Nd 2 Weights=> 0.953, 0.181,
Nd 3 Weights=> -0.374, 0.273,
Nd 4 Weights=> -0.082, 0.227,
Layer # 2
Nd 1 Weights=> 0.923, 0.75, 0.64,
Nd 2 Weights=> 0.319, -0.259, -0.929,
Layer # 3
Nd 1 Weights=> -0.571, -0.081, 0.154, 0.786, -0.390, -0.105,
Nd 2 Weights=> 0.742, 0.825, -0.558, -0.221, -0.851, -0.564,
Nd 3 Weights=> 0.128, -0.869, 0.333, 0.742, -0.767, -0.641,
Layer # 4
Nd 1 Weights=>
Nd 2 Weights=>
Nd 3 Weights=>
Nd 4 Weights=>
Nd 5 Weights=>
Nd 6 Weights=>
//process.txt
(1,3,5,7)
(2,4,6,8)
(1,2,3,4)
(0.1, 0.2, 0.3, 0.4)
//train.txt
(0.4, -0.4) (.9)
//test2.net
(0.1, 0.2, 0.3, 0.4) (0.5, 0.6, 0.7, 0.8)
(0.5, 0.6, 0.7, 0.8) (0.9, 1.0, 1.1, 1.2)
7.11.2 C++ Backpropagation Dog Track Predictor
This is a set of tools to download the data files from a race track,
clean them up, and train a neural net to predict the winner of a race
given only the information available online before the race.
It creates a weight table and an error file so you can see how
well it is working.
10) testnet.cpp can be used to run test data through your net and see
how accurate it is at predicting the winners.
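Before the listings, it may help to see the feedforward step that the trainer and predictor share in one place: weighted sums into a hidden layer, a sigmoid squash, then weighted sums into the output layer. This is a minimal sketch using the same layer sizes as dogs.cpp; the function name `feedForward` and the omission of the bias term are simplifying assumptions, not code from the listings.

```cpp
#include <cassert>
#include <cmath>

#define NODESIN 24
#define NODESHIDDEN 16
#define NODESOUT 8

// Sigmoid squashing function used throughout the listings.
static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

// One feedforward pass: input -> hidden -> output.
// weightsI/weightsO correspond to the two weight tables in dogs.cpp.
void feedForward(const double in[NODESIN],
                 const double weightsI[NODESIN][NODESHIDDEN],
                 const double weightsO[NODESHIDDEN][NODESOUT],
                 double out[NODESOUT])
{
    double hidden[NODESHIDDEN] = {0.0};
    for (int j = 0; j < NODESHIDDEN; j++) {
        for (int i = 0; i < NODESIN; i++)
            hidden[j] += in[i] * weightsI[i][j];
        hidden[j] = sigmoid(hidden[j]);          // squash to (0, 1)
    }
    for (int k = 0; k < NODESOUT; k++) {
        out[k] = 0.0;
        for (int j = 0; j < NODESHIDDEN; j++)
            out[k] += hidden[j] * weightsO[j][k];
        out[k] = sigmoid(out[k]);                // squash to (0, 1)
    }
}
```

Training then amounts to comparing `out[]` against the desired vector and nudging both weight tables by the learning rate, which is what the training routine below does.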
---cleandata.pl---
#!/usr/bin/perl
#results
#G - greyhound
#R for results
#2 letter track id
#two digit day of month
#S/A/E/L student, afternoon, evening, late night
$i = 0;
foreach $item (@filelist){
if ( ( $item =~ /GR[A-Z, 0-9]+.HTM/)
|| ( $item =~ /^\./)
|| ( $item =~ /^\.\./) )
{
$oldFileName = $item;
delete @filelist[$i];
}
$i++;
}
#FILE LOOP
#open the first/next entries file
foreach $item (@entriesfiles){
$oldFileName = $item;
$race = 0;
if ( /Grade/ ){
$race++;
#create a new file for the first race with the correct file name
$newFileName = $oldFileName;
$newFileName =~ s/G/$race/;
$newFileName =~ s/HTM/txt/;
$newFileName = lc ($newFileName);
or die "Couldn't open data/$newFileName";
#for each dog -- write out dogs number, name, odds(divided), weight
#parse line
if ( /^[1-8][\s]/ ){
$parseline = $_;
$parseline =~ s/(^[1-8][\s])([A-Za-z|\.|\-|\s|\']+)([0-9]+[\-][0-9]+[\s])([A-Za-z|\.|\-|\s|\'|\&]+[\s]*)([\(][0-9]+[\)])/$1 $2 $3 $4 $5 $6 $7 $8 $9/;
$dogNumber = $1;
$dogName = $2;
$odds = $3;
$weight = $5;
$oddsTop = $odds;
$oddsBot = $odds;
if ( $oddsBot == 0){
#print " \n !!! $dogNumber, $dogname, $odds, $weight";
}else{
$oddsTop =~ s/\-\d+//;
$oddsBot =~ s/\d+\-//;
$odds = $oddsTop/$oddsBot;
}
$dogNumber =~ s/\s//;
$weight =~ s/\(//;
$weight =~ s/\)//;
if ( /^Track Handicapper:/ ){
$parseline = $_;
$h1 = $2;
$h2 = $4;
$h3 = $6;
$h4 = $8;
if ( ! $h4){
$h4 = 0;
}
print OUT "\nH: $h1, $h2, $h3, $h4";
close OUT;
}
#if not found rm the entries file and grab the next entries file
if ( ! (open (INPUT2, "trainingdata/$resultsFile"))){
if ( "trainingdata/$resultsFile" ){
print "\n trainingdata/$resultsFile";
unlink ("trainingdata/$resultsFile");
}
if ( "trainingdata/$oldFileName"){
print "\n trainingdata/$oldFileName";
unlink ("trainingdata/$oldFileName");
}
if ( "data/$newFileName"){
print "\n data/$newFileName";
unlink ("data/$newFileName");
}
}else{
while (<INPUT2>){
#get the winners from the file for the correct race
#find each race
if ( /Grade:/ ){
$flag++;
$flag1 = 0;
$flag2 = 0;
$flag3 = 0;
$raceNo = $flag;
$done = 0;
}
$third = $parseline;
$third =~ s/([1-8])([A-Za-z|\.|\'|\&|\s]+)/$1 $2 $3 $4 $5 $6 $7 $8 $9/;
$thirdNumber = $1;
$thirdName = $2;
$flag3 = 1;
}
$done = 1;
close (INPUT2);
}
---formatdata.pl---
#!/usr/bin/perl
#read in a file
foreach $file (@filelist){
while (<FILEHANDLE>){
$temp = $_;
$temp =~ s/([H:\s]+)([1-8])([\,])([\s])([1-8])([\,])([\s])([1-8])([\,])([\s])([0-8])/$1 $2 $3 $4 $5 $6 $7 $8 $9 $10 $11/;
$h1 = $2;
$h2 = $5;
$h3 = $8;
$h4 = $11;
elsif ( $_ =~ /(\s[1-8])/){
$temp = $_;
$temp =~ s/([\s])([1-8])([\,\s])([A-Za-z|\.|\-|\s|\']+)([\s\,])([\d+|\.]+)([\,])([\d]+)/$1 $2 $3 $4 $5 $6 $7 $8 $9/;
$pos = $2;
$odds = $6;
$weight = $8;
if ( $d == 1){
$d = 2;
$p1 = 0;
$o1 = $6;
$w1 = $8;
}elsif ( $d == 2){
$d = 3;
$p2 = 0;
$o2 = $6;
$w2 = $8;
}elsif ( $d == 3){
$d = 4;
$p3 = 0;
$o3 = $6;
$w3 = $8;
}elsif ( $d == 4){
$d = 5;
$p4 = 0;
$o4 = $6;
$w4 = $8;
}elsif ( $d == 5){
$d = 6;
$p5 = 0;
$o5 = $6;
$w5 = $8;
}elsif ( $d == 6){
$d = 7;
$p6 = 0;
$o6 = $6;
$w6 = $8;
}elsif ( $d == 7){
$d = 8;
$p7 = 0;
$o7 = $6;
$w7 = $8;
}elsif ( $d == 8){
$d = 1;
$p8 = 0;
$o8 = $6;
$w8 = $8;
$temp = $_;
$temp =~ s/([1-8])([\,\s]) /$1/;
if ($i == 0){
$win = $1;
$i++;
}elsif ($i == 1){
$place = $1;
$i++;
}elsif ( $i == 2){
$show = $1;
$i = 0;
}
if ( $h1 == 1){
$p1 = 1;
}elsif ( $h1 == 2 ){
$p2 = 1;
}elsif ( $h1 == 3 ){
$p3 = 1;
}elsif ( $h1 == 4 ) {
$p4 = 1;
}elsif ( $h1 == 5) {
$p5 = 1;
}elsif ( $h1 == 6 ){
$p6 = 1;
}elsif ( $h1 == 7) {
$p7 = 1;
}elsif ( $h1 == 8 ){
$p8 = 1;
}
if ( $h2 == 1){
$p1 = 2;
}elsif ( $h2 == 2 ){
$p2 = 2;
}elsif ( $h2 == 3 ){
$p3 = 2;
}elsif ( $h2 == 4 ) {
$p4 = 2;
}elsif ( $h2 == 5) {
$p5 = 2;
}elsif ( $h2 == 6 ){
$p6 = 2;
}elsif ( $h2 == 7) {
$p7 = 2;
}elsif ( $h2 == 8 ){
$p8 = 2;
}
if ( $h3 == 1){
$p1 = 3;
}elsif ( $h3 == 2 ){
$p2 = 3;
}elsif ( $h3 == 3 ){
$p3 = 3;
}elsif ( $h3 == 4 ) {
$p4 = 3;
}elsif ( $h3 == 5) {
$p5 = 3;
}elsif ( $h3 == 6 ){
$p6 = 3;
}elsif ( $h3 == 7) {
$p7 = 3;
}elsif ( $h3 == 8 ){
$p8 = 3;
}
if ( $h4 == 1){
$p1 = 4;
}elsif ( $h4 == 2 ){
$p2 = 4;
}elsif ( $h4 == 3 ){
$p3 = 4;
}elsif ( $h4 == 4 ) {
$p4 = 4;
}elsif ( $h4 == 5) {
$p5 = 4;
}elsif ( $h4 == 6 ){
$p6 = 4;
}elsif ( $h4 == 7) {
$p7 = 4;
}elsif ( $h4 == 8 ){
$p8 = 4;
}
if ( $h4 == 0){
$p1 /= 3;
$p2 /= 3;
$p3 /= 3;
$p4 /= 3;
$p5 /= 3;
$p6 /= 3;
$p7 /= 3;
$p8 /= 3;
}else{
$p1 /= 4;
$p2 /= 4;
$p3 /= 4;
$p4 /= 4;
$p5 /= 4;
$p6 /= 4;
$p7 /= 4;
$p8 /= 4;
}
#print "$p1,$o1,$w1,$p2,$o2,$w2,$p3,$o3,$w3,$p4,$o4,$w4,$p5,$o5,$w5,$p6,$o6,$w6,$p7,$o7,$w7,$p8,$o8,$w8,$win,$place,$show\n";
close (FILEHANDLE);
# print "$p1,$o1,$w1,$p2,$o2,$w2,$p3,$o3,$w3,$p4,$o4,$w4,$p5,$o5,$w5,$p6,$o6,$w6,$p7,$o7,$w7,$p8,$o8,$w8,$win,$place,$show\n";
close (OUT);
//---dogs.cpp---
//www.timestocome.com
//neural net to better pick winning dogs
//data is downloaded from the racing tracks
//and parsed using cleandata.pl followed by formatdata.pl
//this program then takes that data and creates a weight table
//using a backpropagation neural net.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <ctime>
#include <iostream>
#include <fstream>
using namespace std;
#define NODESIN 24
#define NODESHIDDEN 16
#define NODESOUT 8
#define VECTORSIN 2000
#define LOOPSMAX 100
#define ERRORMAX 1.0
void randomizeWeights ( double weightsI[NODESIN][NODESHIDDEN],
double weightsO[NODESHIDDEN][NODESOUT]);
//output vector
double outputData[NODESOUT];
for (int i=0; i<NODESOUT; i++){
outputData[i] = 0.0;
}
//run training routine
trainingRoutine ( weightsI, weightsO, inputData, outputData, numberOfVectors);
if (!fout.is_open()){
cerr << "Could not create weights.dat" << endl;
exit(1);
}
fout.close();
int trainingRoutine (double wgtsI[NODESIN][NODESHIDDEN],
double wgtsO[NODESHIDDEN][NODESOUT],
double vctrIn[VECTORSIN][NODESIN+NODESOUT],
double vctrOut[NODESOUT], int vectorCount)
{
double outputNodes[NODESOUT];
double hiddenNodes[NODESHIDDEN];
double errorO[NODESOUT];
double errorI[NODESHIDDEN];
double bias = 0.0;
double threshold = 0.20;
double learningRateI = 0.20;
double learningRateO = 0.20;
double errorAdjustmentO[NODESOUT];
double errorAdjustmentI[NODESHIDDEN];
int loops = 0;
int badloops = 0;
int goodRaces=0, badRaces = 0;
//loop until converge for the vector,
//or maximum error allowed error is reached.
for ( int loops=0; loops< LOOPSMAX; loops++){
}
}
//add bias
//stabilize with sigmoid function
//see if over threshold to fire
for ( int j=0; j<NODESHIDDEN; j++){
hiddenNodes[j] += bias;
hiddenNodes[j] = 1/ (1 + exp(-hiddenNodes[j]));
}
}
outputNodes[j] += bias;
outputNodes[j] = 1/ (1 + exp(-outputNodes[j]));
//determine error
totalError = 0.0;
for (int j=0; j< NODESOUT; j++){
errorO[j] = outputNodes[j] - vctrIn[i][NODESIN + j];
totalError += errorO[j];
}
}
}
//input layer
for( int j=0; j<NODESIN; j++){
for (int k=0; k< NODESHIDDEN; k++){
wgtsI[j][k] += learningRateI * vctrIn[i][j] *
errorAdjustmentO[k] ;
}
}
} else {
badloops++;
}
//debugInfo( i, loops, vctrIn, outputNodes,
totalError, goodRaces, wgtsI, wgtsO );
}//******************end training loop (move to next vector)
return 0;
}
n = rand() % 2;
if ( n == 0){
weightsI[i][j] = ((double) (rand()))/RAND_MAX;
}else {
weightsI[i][j] = (-1.0) * ((double) (rand()))/RAND_MAX;
}
}
}
n = rand() % 2;
if ( n == 0){
weightsO[i][j] = ((double) (rand()))/RAND_MAX;
}else {
weightsO[i][j] = (-1.0) * ((double) (rand()))/RAND_MAX;
}
}
}
}
if ( !fin.is_open()){
printf ( "\nCould NOT open data.dat ");
exit (1);
}
char temp[257];
int endOfLine = 0;
int j=0;
for (int i=0; i<length; i++){
if ( tempString[i] != ',' ){
}else{
//adjust odds
if (( track%3 == 0) &&( track < 24)){
// v[count][track] /= 10.0;
}
//adjust handicap
if (( (track+2)%3 == 0) && (track < 24)) {
v[count][track] /= 10.0;
}
}
}
count++;
}
count -= 1;
//adjust win/place/show
for ( int i=0; i<count; i++){
v[i][24] = 0;
v[i][25] = 0;
v[i][26] = 0;
v[i][w] = .75;
v[i][p] = .50;
v[i][s] = .25;
}
fin.close();
return count;
if (!fptr.is_open()){
cerr << "Could not create error.dat" << endl;
exit(1);
}
//this file can get quite large, I only used it for debugging
//dump some info to file for review
fptr <<"\n*********************************************************\n";
fptr << "\n\n Vector " << i << ", loop Number " << loops << endl;
fptr <<"\n Actual: " << first << ", " << second << ", " << third << "\t\t";
//convert output to easily readable information
int w = 0, p = 0, s = 0;
double win=0.0, place=0.0, show=0.0;
double temp[NODESOUT];
for ( int m=0; m< NODESOUT; m++){
temp[m] = outputNodes[m];
}
for (int m=0; m< NODESOUT; m++){
if( temp[m] > win){
win = temp[m];
w = m+1;
}
}
temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}
temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;
}
}
/*
//weights
fptr << endl;
fptr << wgtsI[m][n] << ",\t ";
}
fptr << endl;
}
fptr <<"\n**********************************************************\n";
fptr <<"good races " << goodRaces << " bad races " << i-goodRaces << endl;
*/
fptr.close();
double temp[NODESOUT];
for ( int m=0; m< NODESOUT; m++){
temp[m] = outputNodes[m];
}
for (int m=0; m< NODESOUT; m++){
if( temp[m] > win){
win = temp[m];
w = m+1;
}
}
temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}
temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;
}
}
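The win/place/show decoding that appears in both the trainer and the test harness is a repeated argmax: find the largest output, zero it in a scratch copy, and scan again. A self-contained sketch of that step (the function name `pickTopThree` is illustrative; the body follows the listings' approach):

```cpp
#include <cassert>

#define NODESOUT 8

// Pick the 1-based indices of the three largest outputs: win, place, show.
// Mirrors the repeated-scan approach in the listings, zeroing each winner
// in a scratch copy before the next scan.
void pickTopThree(const double outputNodes[NODESOUT], int &w, int &p, int &s)
{
    double temp[NODESOUT];
    for (int m = 0; m < NODESOUT; m++) temp[m] = outputNodes[m];

    w = p = s = 0;
    double win = 0.0, place = 0.0, show = 0.0;

    for (int m = 0; m < NODESOUT; m++)          // first scan: win
        if (temp[m] > win) { win = temp[m]; w = m + 1; }
    temp[w - 1] = 0.0;

    for (int m = 0; m < NODESOUT; m++)          // second scan: place
        if (temp[m] > place) { place = temp[m]; p = m + 1; }
    temp[p - 1] = 0.0;

    for (int m = 0; m < NODESOUT; m++)          // third scan: show
        if (temp[m] > show) { show = temp[m]; s = m + 1; }
}
```

`w`, `p`, and `s` come back as 1-based dog numbers, matching the `m+1` convention used in the listings.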
//---predictor.cpp---
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <ctime>
#include <iostream>
#include <fstream>
using namespace std;
#define NODESIN 24
#define NODESHIDDEN 16
#define NODESOUT 8
//set up stuff
double weightsI[NODESIN][NODESHIDDEN];
double weightsO[NODESHIDDEN][NODESOUT];
double outputData[NODESOUT];
for (int i=0; i<NODESOUT; i++){ outputData[i] = 0.0; }
//read in weights from file
readWeights(weightsI, weightsO);
}
}
//add bias
//stabilize with sigmoid function
//see if over threshold to fire
for ( int j=0; j<NODESHIDDEN; j++){
hiddenNodes[j] += bias;
hiddenNodes[j] = 1/ (1 + exp(-hiddenNodes[j]));
}
}
return 0;
int userInput( double dataIn [NODESIN])
{
//initialize vector
for ( int i=0; i<NODESIN; i++){
dataIn[i] = 0.0;
}
cout << "\n hit enter after each number " << endl;
cin >> h1;
cin >> h2;
cin >> h3;
cin >> h4;
h2--; dataIn[h2*3] = .67;
h3--; dataIn[h3*3] = .34;
}else{
//if there are 4 handicaps then set the dog's handicaps to 1, 3/4, 1/2, 1/4
h1--; dataIn[h1*3] = 1.0;
h2--; dataIn[h2*3] = .75;
h3--; dataIn[h3*3] = .50;
h4--; dataIn[h4*3] = .25;
}
return 0;
if ( !fin.is_open()){
printf ( "\nCould NOT open weights.dat ");
exit (1);
}
//weights file has one row for each input node and one weight for
//each hidden node in the row
//then we have one row for each hidden node
//and one weight for each output
int j=0;
for (int i=0; i<length; i++){
if ( tempString[i] != ',' ){
}else{
} else {
output = 0;
hiddenO++;
weightsO[hiddenO][output] = tempNumber;
}
}
fin.close();
return 0;
}
temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}
temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;
}
}
return 0;
//---testnet.cpp---
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <ctime>
#include <iostream>
#include <fstream>
using namespace std;
#define NODESIN 24
#define NODESHIDDEN 16
#define NODESOUT 8
#define VECTORSIN 2000
#define LOOPSMAX 100
#define ERRORMAX 1.0
int readWeights ( double weightsI[NODESIN][NODESHIDDEN],
double weightsO[NODESHIDDEN][NODESOUT]);
double weightsI[NODESIN][NODESHIDDEN];
double weightsO[NODESHIDDEN][NODESOUT];
//output vector
double outputData[NODESOUT];
for (int i=0; i<NODESOUT; i++){
outputData[i] = 0.0;
}
}
if ( !fin.is_open()){
printf ( "\nCould NOT open weights.dat ");
exit (1);
}
//weights file has one row for each input node and one weight for
//each hidden node in the row
//then we have one row for each hidden node
//and one weight for each output
int endOfLine = 0;
int j=0;
for (int i=0; i<length; i++){
if ( tempString[i] != ',' ){
}else{
//jump to next column in array and reset temp array
for( int k=0; k<257; k++){
temp[k] = ' ';
}
j=0;
}
}
fin.close();
return 0;
double outputNodes[NODESOUT];
double hiddenNodes[NODESHIDDEN];
double errorO[NODESOUT];
double errorI[NODESHIDDEN];
double bias = 0.0;
double threshold = 0.20;
double learningRateI = 0.20;
double learningRateO = 0.20;
double errorAdjustmentO[NODESOUT];
double errorAdjustmentI[NODESHIDDEN];
int loops = 0;
int badloops = 0;
int goodRaces=0, badRaces = 0;
int loopCount = 0;
for ( int i=0; i<NODESHIDDEN; i++){
hiddenNodes[i] = 0.0;
}
}
}
//add bias
//stabilize with sigmoid function
//see if over threshold to fire
for ( int j=0; j<NODESHIDDEN; j++){
hiddenNodes[j] += bias;
hiddenNodes[j] = 1/ (1 + exp(-hiddenNodes[j]));
}
}
}
}
//determine error
totalError = 0.0;
for (int j=0; j< NODESOUT; j++){
errorO[j] = outputNodes[j] - vctrIn[i][NODESIN + j];
totalError += errorO[j];
}
return 0;
}
//read in the data file that was created by
//the two perl routines and parse the data into
//an array, one line per vector, 24 inputs, 3 outputs
//no idiot checking since we created this file and
//checked it ourselves, assume proper formatting
//of the data
if ( !fin.is_open()){
printf ( "\nCould NOT open data.dat ");
exit (1);
}
int j=0;
for (int i=0; i<length; i++){
if ( tempString[i] != ',' ){
}else{
//adjust odds
if (( track%3 == 0) &&( track < 24)){
// v[count][track] /= 10.0;
}
//adjust handicap
if (( (track+2)%3 == 0) && (track < 24)) {
v[count][track] /= 10.0;
}
}
count -= 1;
//adjust win/place/show
for ( int i=0; i<count; i++){
v[i][24] = 0;
v[i][25] = 0;
v[i][26] = 0;
v[i][w] = .75;
v[i][p] = .50;
v[i][s] = .25;
}
fin.close();
return count;
if (!fptr.is_open()){
cerr << "Could not create error.dat" << endl;
exit(1);
}
//this file can get quite large, I only used it for debugging
//dump some info to file for review
fptr <<"\n******************************************************\n";
fptr << "\n Vector # " << i << endl;
fptr << "\n Actual:\t" << first << ",\t" << second << ",\t" << third
<< "\t\t";
}
}
temp[w-1] = 0.0;
for (int m=0; m<NODESOUT; m++){
if ( temp[m] > place){
place = temp[m];
p = m+1;
}
}
temp[p-1] = 0.0;
for( int m=0; m<NODESOUT; m++){
if ( temp[m] > show){
show = temp[m];
s = m+1;
}
}
fptr << "\n Predicted: \t" << w << ", \t" << p << ", \t" << s;
fptr << endl;
int right = 0;
if ( (first == w )||(second == w )||(third == w)){
right++;
}
if ( (second == p )||(first == p )||(third == p)){
right++;
}
if ( (third == s)||(first == s)||(second ==s)){
right++;
}
score += right/3.0;
fptr.close();
int testData (int i, double vctrIn[VECTORSIN][NODESIN + NODESOUT],
double outputNodes[NODESOUT] )
{
}
}
---example.error.dat---
***********************************************************************
Input 0,
0.5, 0.52, 1,
0.45, 0.82, 0.666667,
0.35, 0.72, 0.333333,
0.25, 0.68, 0,
0.6, 0.61, 0,
1, 0.84, 0,
1.2, 0.61, 0,
0.8, 0.69, 0.5,
Actual Desired
0.229886 0.5
0.632139 0.75
0.117174 0
0.228958 0.25
0.126706 0
0.166924 0
0.0692887 0
0.22751 0
totalError 0.298586
Actual: 2, 1, 4
Predicted: 2, 1, 4
***********************************************************************
Input 0,
1, 0.55, 0,
0.8, 0.6, 0,
1.2, 0.54, 0,
0.6, 0.55, 0.666667,
0.35, 0.69, 0,
0.5, 0.6, 0.333333,
0.25, 0.58, 1,
0.45, 0.63, 0.25,
Actual Desired
0.288367 0.25
0.219338 0
0.145928 0
0.109193 0
0.113589 0
0.500957 0.75
0.289854 0.5
0.129178 0
totalError 0.296404
Actual: 6, 7, 1
Predicted: 6, 7, 1
***********************************************************************
Input 0,
0.5, 0.74, 0,
1.2, 0.69, 0.333333,
0.25, 0.59, 0,
0.6, 0.57, 0.666667,
0.35, 0.71, 0,
0.8, 0.7, 1,
0.45, 0.6, 0,
1, 0.69, 0,
Actual Desired
0.0956196 0
0.0998712 0
0.0435176 0
0.0440099 0
0.158237 0.25
0.157403 0
0.599425 0.75
0.413257 0.5
totalError 0.111342
Actual: 7, 8, 5
Predicted: 7, 8, 5
***********************************************************************
Input 0,
1, 0.55, 0,
0.6, 0.58, 0,
0.8, 0.55, 0,
1.2, 0.55, 1,
0.45, 0.74, 0.666667,
0.35, 0.67, 0.333333,
0.25, 0.76, 0,
0.5, 0.55, 0,
Actual Desired
0.076147 0
0.0524968 0
0.517485 0.75
0.0393935 0
0.105092 0
0.377277 0.5
0.373286 0.25
0.155083 0
totalError 0.196261
Actual: 3, 6, 7
Predicted: 3, 6, 7
***********************************************************************
Input 0,
0.8, 0.58, 0,
1, 0.6, 0,
0.6, 0.57, 0,
0.5, 0.6, 1,
0.45, 0.63, 0.666667,
0.35, 0.73, 0,
1.2, 0.71, 0.333333,
0.25, 0.72, 0,
Actual Desired
0.0416575 0
0.0540597 0
0.478243 0.5
0.0508952 0
0.152083 0.25
0.126667 0
0.147998 0
0.532813 0.75
totalError 0.0844163
Actual: 8, 3, 5
Predicted: 8, 3, 5
***********************************************************************
---example.test.data.dat---
0.5,4.5,64,0.25,3.5,56,0,2.5,61,0,6,58,0,8,67,0,10,58,1,12,64,0.75,5,64,2,3,5,
0,4.5,74,0,6,58,0.333333333333333,8,79,0.666666666666667,2.5,81,0,3.5,62,0,10,
72,0,12,70,1,5,58,4,8,1,
0.333333333333333,12,63,1,2.5,58,0,4.5,64,0,8,59,0.666666666666667,5,55,0,3.5,
72,0,6,62,0,10,66,2,5,1,
0,3.5,60,0,10,67,0,12,72,1,5,67,0,4.5,74,0,6,68,0.333333333333333,8,59,
0.666666666666667,2.5,58,7,8,3,
1,3.5,62,0,4.5,62,0,5,59,0,8,79,0.333333333333333,6,61,0,2.5,57,0,10,77,
0.666666666666667,12,59,2,3,7,
0.333333333333333,5,71,0,2.5,71,0,12,54,1,8,61,0,4.5,52,0.666666666666667,10,
77,0,3.5,73,0,6,54,4,3,1,
0,3.5,81,0,5,70,1,12,76,0.333333333333333,4.5,71,0,2.5,67,0,10,65,0,8,61,
0.666666666666667,6,62,8,3,7,
0,8,60,0.666666666666667,10,78,0,3.5,55,0,5,76,0,12,75,0.333333333333333,6,57,
1,2.5,77,0,4.5,59,2,3,5,
0,8,75,0,12,75,1,5,62,0.333333333333333,4.5,69,0,2.5,68,0.666666666666667,6,72,
0,3.5,62,0,10,72,4,8,5,
0.666666666666667,4.5,64,0,3.5,59,0,8,60,0.333333333333333,12,57,0,2.5,63,0,6,
62,0,5,59,1,10,60,1,6,5,
0.333333333333333,5,62,0,2.5,79,0,6,77,0,12,72,0.666666666666667,8,75,1,3.5,69,
0,4.5,69,0,10,58,6,7,1,
0,8,58,0,2.5,73,0,12,72,0,15,58,0,5,60,0,6,82,0,3.5,68,0,8,63,6,5,8,
0,5,73,0,12,70,0.25,4.5,75,0,10,70,0.5,8,65,1,2.5,58,0,6,57,0.75,3.5,69,1,5,7,
0,5,74,1,12,58,0.666666666666667,10,68,0.333333333333333,8,60,0,4.5,76,0,3.5,
74,0,2.5,73,0,6,71,4,3,7,
0,8,55,0,12,71,1,10,77,0.333333333333333,6,71,0.666666666666667,5,64,0,4.5,64,
0,2.5,57,0,3.5,68,2,4,7,
0,8,75,1,3.5,61,0,2.5,79,0,10,75,0,4.5,61,0,12,73,0.666666666666667,5,56,
0.333333333333333,6,58,8,6,7,
0,6,72,0,3.5,55,1,5,61,0.333333333333333,10,74,0,12,69,0,4.5,59,
0.666666666666667,2.5,70,0,8,69,4,1,3,
1,8,60,0,10,82,0,3.5,69,0,4.5,63,0.333333333333333,12,64,0,5,72,0,6,58,
0.666666666666667,2.5,72,3,8,5,
0.333333333333333,5,62,0,3.5,62,1,6,64,0,2.5,53,0,8,61,0,4.5,63,
0.666666666666667,10,59,0,12,73,2,1,6,
0,8,61,0.333333333333333,6,64,0,12,72,1,5,75,0.666666666666667,2.5,76,0,10,64,
0,4.5,58,0,3.5,74,3,5,6,
1,5,62,0,10,58,0.666666666666667,2.5,60,0,4.5,73,0,12,79,0,3.5,60,0,6,71,
0.333333333333333,8,64,8,5,1,
0.333333333333333,12,72,0,10,58,0,4.5,73,0,2.5,57,0.666666666666667,8,60,0,6,
60,0,3.5,74,1,5,73,1,2,8,
0.333333333333333,4.5,55,0,8,72,0.666666666666667,6,70,0,2.5,60,0,5,73,1,3.5,
57,0,10,59,0,12,74,7,1,2,
0.666666666666667,8,74,0,12,72,0,15,59,0.333333333333333,3.5,75,1,5,72,0,6,68,
0,2.5,64,0,4.5,67,8,3,4,
0.666666666666667,2.5,67,1,8,77,0,12,68,0,3.5,54,0,4.5,61,0.333333333333333,10,
79,0,6,57,0,5,63,3,8,2,
0.333333333333333,3.5,74,0,10,70,0,12,62,1,2.5,71,0,5,54,0.666666666666667,6,
64,0,4.5,59,0,8,72,2,3,4,
1,12,74,0,10,57,0.333333333333333,3.5,56,0,4.5,72,0,5,59,0,2.5,58,0,6,64,
0.666666666666667,8,70,3,5,4,
0.5,5,69,0.25,6,58,0,8,56,0,3.5,64,0.75,2.5,73,1,12,57,0,10,75,0,4.5,69,1,2,3,
0.333333333333333,4.5,67,0,10,76,0,3.5,59,0,2.5,56,0,8,63,1,6,59,0,12,73,
0.666666666666667,5,75,8,4,5,
0,3.5,62,0.333333333333333,8,79,0,10,55,1,12,64,0,2.5,63,0.666666666666667,5,68,
0,4.5,78,0,6,55,7,5,2,
0,6,60,0,5,70,0.666666666666667,4.5,70,0.333333333333333,8,65,0,12,73,0,3.5,
70,0,2.5,64,1,10,73,6,2,7,
0,2.5,73,1,12,61,0,3.5,65,0,8,76,0,4.5,63,0.333333333333333,5,74,0,10,63,
0.666666666666667,6,58,7,8,2,
0,3.5,57,1,6,66,0,8,72,0,12,58,0.333333333333333,4.5,76,0.666666666666667,5,
59,0,10,86,0,2.5,77,5,8,6,
0.666666666666667,6,60,0,8,58,0.333333333333333,10,74,1,3.5,58,0,12,60,0,2.5,
64,0,4.5,61,0,5,61,7,8,2,
1,3.5,65,0,12,63,0.333333333333333,10,79,0,4.5,61,0,5,65,0.666666666666667,2.5,
64,0,6,69,0,8,69,2,6,5,
0,8,56,0,3.5,61,0.333333333333333,10,73,0,6,59,1,12,75,0,2.5,61,
0.666666666666667,5,58,0,4.5,69,5,3,7,
1,12,71,0,10,74,0,3.5,69,0.333333333333333,4.5,62,0,5,65,0,6,57,0,2.5,55,
0.666666666666667,8,65,3,7,8,
--example.testData.dat
***********************************************************************
Vector # 0
totalError 0.291978
Actual: 5,1,3
Predicted: 8, 5, 7
***********************************************************************
Vector # 1
totalError 0.459928
Actual: 6,1,3
Predicted: 8, 2, 1
***********************************************************************
Vector # 2
totalError 0.310754
Actual: 3,4,5
Predicted: 8, 1, 2
***********************************************************************
Vector # 3
totalError 0.308097
Actual: 3,4,5
Predicted: 8, 2, 5
---example.training.data.data
0,5,52,1,4.5,82,0.666666666666667,3.5,72,0.333333333333333,2.5,68,0,6,61,0,10,84
,0,12,61,0,8,69,2,1,4,
0,10,55,0,8,60,0,12,54,0,6,55,0.666666666666667,3.5,69,0,5,60,0.333333333333333,
2.5,58,1,4.5,63,6,7,1,
0,5,74,0,12,69,0.333333333333333,2.5,59,0,6,57,0.666666666666667,3.5,71,0,8,70,1 ,
4.5,60,0,10,69,7,8,5,
0,10,55,0,6,58,0,8,55,0,12,55,1,4.5,74,0.666666666666667,3.5,67,0.333333333333333,
2.5,76,0,5,55,3,6,7,
0,8,58,0,10,60,0,6,57,0,5,60,1,4.5,63,0.666666666666667,3.5,73,0,12,71,
0.333333333333333,2.5,72,8,3,5,
0.333333333333333,2.5,72,0,10,54,0,12,55,0,8,64,0,5,53,0.666666666666667,3.5,57,
1,9,67,0,6,60,7,3,1,
0.333333333333333,2.5,56,0,10,55,0,12,63,1,4.5,70,0,6,58,0,5,63,0,8,57,
0.666666666666667,3.5,64,1,2,7,
0,10,71,0,6,73,0,5,60,0,12,82,0.666666666666667,3.5,57,0.333333333333333,2.5,58,
1,4.5,71,0,8,56,2,7,4,
0,8,63,0,12,58,0,6,70,0,5,75,0,10,64,1,4.5,76,0.666666666666667,3.5,65,
0.333333333333333,5,67,2,7,8,
0.333333333333333,2.5,65,0,6,60,1,4.5,73,0,10,76,0,5,73,0,12,58,0,8,73,
0.666666666666667,3.5,69,1,5,3,
0.333333333333333,,,0,4.5,68,1,6,68,0,3.5,61,0,8,59,0,15,71,0,8,58,
0.666666666666667,12,60,2,3,7,
0.333333333333333,8,79,0,5,60,0.666666666666667,12,53,0,3.5,68,0,5,60,0,10,73,1,
6,64,0,4.5,72,1,7,4,
0,3.5,58,0.333333333333333,8,74,0,2.5,67,0,10,73,0,6,71,0,12,55,1,5,73,
0.666666666666667,4.5,77,1,2,8,
0,3.5,57,0,12,56,0,5,64,1,8,78,0.333333333333333,4.5,74,0,2.5,57,0,10,75,
0.666666666666667,6,58,8,3,7,
1,2.5,65,0,4.5,74,0,12,69,0.666666666666667,6,57,0,3.5,57,0,8,79,0,5,55,
0.333333333333,10,60,1,2,4,
0.333333333333333,5,74,0,2.5,59,0,10,72,0.666666666666667,6,58,1,3.5,58,0,4.5,72
,0,8,74,0,12,57,1,5,3,
0,3.5,65,0.333333333333333,8,55,0,2.5,75,0,6,72,1,5,72,0,4.5,65,0,12,73,
0.666666666666667,10,58,4,2,7,
1,3.5,73,0,4.5,59,0,10,77,0.333333333333333,5,69,0,2.5,79,0,8,61,0,12,51,
0.666666666666667,6,70,4,2,5,
--example.weights.dat
-0.830313, 0.402075, -1.00981, -0.815833, 0.300742, -1.28449, -1.72554,
-0.289421, 0.121561, -0.379148, 0.359904, 0.410897, 0.326119,
0.581417, 0.588857, 0.205351,
-0.172246, 0.500135, 0.356741, 0.483515, -0.630037, -0.447533,
0.209796, 0.666823, 0.272065,
0.19013, 0.164908, -0.203131, -0.0488369, 0.063752, -0.121208, 0.534113,
0.394686,
7.12 Hopfield Networks
John Hopfield introduced these networks in 1982. They generalize well, are robust, and can be described mathematically. On the downside, they can only store about 15% as many states as they have neurodes, and the patterns stored must have Hamming distances that are about 50% of the number of neurodes.
Hopfield networks, aka crossbar systems, are networks that recall what is fed into them. This makes them useful for restoring degraded images. The net is fully connected: every node is connected to every other node, but the nodes are not connected to themselves.
Calculating the weight matrix for a Hopfield network is easy. Here is an example with 3 input vectors. You can train the network to match any number of vectors provided that they are orthogonal.
Orthogonal vectors are vectors whose dot product is zero.
orthogonal (0, 0, 0, 1) (1, 1, 1, 0) = 0*1 + 0*1 + 0*1 + 1*0 = 0
orthogonal (1, 0, 1, 0) (0, 1, 0, 1) = 1*0 + 0*1 + 1*0 + 0*1 = 0
NOT orthogonal (0, 0, 0, 1) (0, 1, 0, 1) = 0*0 + 0*1 + 0*0 + 1*1 = 1
Orthogonal vectors are perpendicular to each other.
To calculate the weight matrix for the orthogonal vectors (0, 1, 0, 0), (1, 0, 1, 0), (0, 0, 0, 1), first make sure all the vectors are orthogonal:
(0, 1, 0, 0) (1, 0, 1, 0) = 0*1 + 1*0 + 0*1 + 0*0 = 0
(0, 1, 0, 0) (0, 0, 0, 1) = 0*0 + 1*0 + 0*0 + 0*1 = 0
(1, 0, 1, 0) (0, 0, 0, 1) = 1*0 + 0*0 + 1*0 + 0*1 = 0
Change the zeros to negative ones in each vector
(0, 1, 0, 0) === (-1, 1, -1, -1)
(1, 0, 1, 0) === (1, -1, 1, -1)
(0, 0, 0, 1) === (-1, -1, -1, 1)
Next, multiply each vector, written as a column, by itself written as a row (the outer product):

[-1]                     [ 1 -1  1  1]
[ 1] x [-1  1 -1 -1]  =  [-1  1 -1 -1]    (7.1)
[-1]                     [ 1 -1  1  1]
[-1]                     [ 1 -1  1  1]

[ 1]                     [ 1 -1  1 -1]
[-1] x [ 1 -1  1 -1]  =  [-1  1 -1  1]    (7.2)
[ 1]                     [ 1 -1  1 -1]
[-1]                     [-1  1 -1  1]

[-1]                     [ 1  1  1 -1]
[-1] x [-1 -1 -1  1]  =  [ 1  1  1 -1]    (7.3)
[-1]                     [ 1  1  1 -1]
[ 1]                     [-1 -1 -1  1]
The third step is to put zeros on the main diagonal of each matrix and add them together. (Putting zeros on the main diagonal keeps each node from being connected to itself.)

[ 0 -1  1  1]
[-1  0 -1 -1]    (7.4)
[ 1 -1  0  1]
[ 1 -1  1  0]

[ 0 -1  1  1]   [ 0 -1  1 -1]   [ 0  1  1 -1]   [ 0 -1  3 -1]
[-1  0 -1 -1] + [-1  0 -1  1] + [ 1  0  1 -1] = [-1  0 -1 -1]    (7.5)
[ 1 -1  0  1]   [ 1 -1  0 -1]   [ 1  1  0 -1]   [ 3 -1  0 -1]
[ 1 -1  1  0]   [-1  1 -1  0]   [-1 -1 -1  0]   [-1 -1 -1  0]
The Hopfield network is fully connected; each node is joined to every other node by the corresponding weight:
[n1] - [n2] = weight is -1
[n1] - [n3] = weight is 3
[n1] - [n4] = weight is -1
[n2] - [n1] = weight is -1
[n2] - [n3] = weight is -1
[n2] - [n4] = weight is -1
[n3] - [n1] = weight is 3
[n3] - [n2] = weight is -1
[n3] - [n4] = weight is -1
[n4] - [n1] = weight is -1
[n4] - [n2] = weight is -1
[n4] - [n3] = weight is -1
These networks can also be described as having a potential energy surface with conical holes representing the data. Holes of similar depth and diameter represent data with similar properties. The input data seeks the lowest potential energy and settles into the closest hole. The energy surfaces of these networks are mathematically equivalent to those of 'spin glasses'.
Some problems with these neural nets are that they are computationally intensive and use lots of memory. Although I haven't seen it mentioned, I would also guess that race conditions may present a problem, since data is updated continuously at each node, with the output from one node becoming the input for another.
BAM, bidirectional associative memory, is an example of a Hopfield network. It consists of two fully connected layers, one for input and one for output; there are no connections between neurodes in the same layer. The weights are bidirectional, meaning that there is communication in both directions along the weight vector. BAM networks take only -1's and 1's as input and output only -1's and 1's, so if you are working with binary data you must convert the zeros to -1's. The weights are calculated in the same way as in the Hopfield example above. The nodes are either on or off.
7.12.1 C++ Hopfield Network
--hopfield.cpp---
//Linda MacPhee-Cobb
//www.timestocome.com
//(the printed listing was fragmentary; the missing pieces are reconstructed)
#include <stdio.h>
#include <iostream>
using namespace std;

class neurode
{
private:
    int total;
public:
    neurode() { total = 0; }
    //weighted sum of the inputs to this node, thresholded at zero
    int activation(int *inputVector, int *weights, int nodes)
    {
        total = 0;
        for(int i=0; i<nodes; i++){
            total += inputVector[i] * weights[i];
        }
        if(total > 0){ return 1; }
        return 0;
    }
};

int main ()
{
    int const nodes=4;
    int weightArray[nodes][nodes] = {
        { 0, -1, 1, -1},
        {-1, 0, -1, 1},
        { 1, -1, 0, -1},
        {-1, 1, -1, 0}
    };
    int input1[nodes] = {1,0,1,0};
    int input2[nodes] = {0,1,0,1};
    int output[nodes];
    neurode n[nodes];

    cout << "\n Input vector 1: {1,0,1,0} output " << endl;
    for(int i=0; i<nodes; i++){
        output[i] = n[i].activation(input1, weightArray[i], nodes);
    }
    for(int i=0; i<nodes; i++){
        cout << " " << output[i];
    }
    cout << endl;

    cout << "\n Input vector 2: {0,1,0,1} output " << endl;
    for(int i=0; i<nodes; i++){
        output[i] = n[i].activation(input2, weightArray[i], nodes);
    }
    for(int i=0; i<nodes; i++){
        cout << " " << output[i];
    }
    cout << endl;
    return 0;
}
--Network.java
//www.timestocome.com
//Fall 2000
//class network is needed for the hopfield network
//(reconstructed from a fragmentary listing)
import java.util.*;
public class Network {
    Vector v = new Vector();
    //one neuron per node, each built from its weight vector
    public Network(int[] a1, int[] b1, int[] c1, int[] d1) {
        v.add(new Neuron(a1));
        v.add(new Neuron(b1));
        v.add(new Neuron(c1));
        v.add(new Neuron(d1));
    }
    //threshold function for the nodes
    int threshold(int k) {
        if (k>=0){
            return 1;
        }else{
            return 0;
        }
    }
    //feed a pattern to every node and print the recalled pattern
    public void activation(int[] pattern) {
        for (int i=0; i<v.size(); i++)
            System.out.print(" " + threshold(((Neuron)v.get(i)).activation(pattern)));
        System.out.println();
    }
    public static void main(String[] args) {
        int[] weight1 = {0,-1,1,-1};    int[] weight2 = {-1,0,-1,1};
        int[] weight3 = {1,-1,0,-1};    int[] weight4 = {-1,1,-1,0};
        int[] pattern1 = {1,0,1,0};     int[] pattern2 = {0,1,0,1};
        System.out.println("This program demonstrates a hopfield" +
            " network. The network recalls two input patterns" +
            " {1,0,1,0} and {0,1,0,1}.\n\n\n");
        Network hopfield = new Network (weight1, weight2, weight3, weight4);
        hopfield.activation(pattern1);
        System.out.println("\n\n");
        //try the second pattern
        Network hopfield2 = new Network (weight1, weight2, weight3, weight4);
        hopfield2.activation(pattern2);
    }
}
---Neuron.java
//www.timestocome.com
//Fall 2000
//(reconstructed from a fragmentary listing)
public class Neuron {
    int[] weight = new int[4];
    public Neuron(int[] j) {
        for (int i=0; i<weight.length; i++) {
            weight[i] = j[i];
        }
    }
    //weighted sum of the inputs to this node
    public int activation(int[] pattern) {
        int activation = 0;
        for (int i=0; i<weight.length; i++)
            activation += weight[i] * pattern[i];
        return activation;
    }
}
Chapter 8
This chapter contains URLs for examples, tutorials, online books, and courses in various AI/NN math topics. No single book can do or cover everything. I did not wish to discourage those uncomfortable with math from using this book and its programs; those of you who are comfortable with math should pursue the following topics if you are unfamiliar with any of them.
joshua.smcvt.edu/linalg.html Linear Algebra ( online/downloadable textbook )
ocw.mit.edu/18/18.06/f02/index.html Linear Algebra ( MIT Open Courseware )
8.1.1 C OpenGL Gasket
--gasket.cpp--
//open a display window
//and generate the Sierpinski gasket
#include <stdlib.h>
#include <stdio.h>
#include <gl/glut.h>
#include <gl/glu.h>
#include <gl/gl.h>
int j;
long int k;
long random_number(); //random number generator
point2 p = {75.0, 50.0}; //start somewhere
//plot point
glBegin(GL_POINTS);
glVertex2fv(p);
glEnd();
}
glFlush(); //plot quickly.. only benefit if on a network
}
glColor3f(1.0, 0.0, 0.0); //draw color
//set up viewing
glMatrixMode(GL_PROJECTION);
8.1.2 C OpenGL 3D Gasket
--3dgasket.cpp
//open a display window
//and generate the Sierpinski gasket
//in 3d
#include <stdlib.h>
#include <stdio.h>
#include <gl/glut.h>
#include <gl/glu.h>
#include <gl/gl.h>
int j;
long int k;
// long random(); //random number generator
point p = {250.0, 100.0, 250.0}; //start somewhere
//plot point
glBegin(GL_POINTS);
//color depends on location
glColor3f(p[0]/250.0, p[1]/250.0, p[2]/250.0);
glVertex3fv(p);
glEnd();
}
glFlush(); //plot quickly.. only benefit if on a network
}
void myinit (void){
//attributes
glClearColor(1.0, 1.0, 1.0, 0.0); //background
//set up viewing
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
8.1.3 C OpenGL Mandelbrot
--mandelbrot.cpp
//mandelbrot in opengl
#include <stdlib.h>
#include <math.h>
#include <gl/glut.h>
//N X M MATRIX
#define N 500
#define M 500
//prototypes
void calculate(void);
void add(complex a, complex b, complex p);
void mult(complex a, complex b, complex p);
float mag2(complex a);
void form(float a, float b, complex p);
void mouse(int btn, int state, int x, int y);
void display(void);
void myReshape(int w, int h);
void myinit();
//initial position
cx = -0.5;
cy = 0.0;
width = 2.5;
height = 2.5;
system("clear");
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize(N, M);
glutCreateWindow("Mandelbrot");
myinit();
glutReshapeFunc(myReshape);
glutDisplayFunc(display);
glutMouseFunc(mouse);
glutMainLoop();
void calculate(void)
{
int i, j, k;
float x, y, v;
complex c0, c, d;
if(v > 4.0) break;//assume not in set if mag > 4
}
if(v > 1.0) v = 1.0; //if > 1 set to background
image[i][j] = 255*v;
}
printf("\n done w/ new image map");
display();
height /=2.0;
width /=2.0;
calculate();
cy = (double)x/500.0;
if(y > 250) cy *=(-1);
calculate();
void display(void){
glClear(GL_COLOR_BUFFER_BIT);
glutSwapBuffers();
calculate();
glMatrixMode(GL_PROJECTION);
glMatrixMode(GL_MODELVIEW);
void myinit(){
davis.wpi.edu/ matt/courses/fractals/index.htm Using Fractals to Simulate Natural Phenomena
www.math.okstate.edu/mathdept/dynamics/lecnotes/lecnotes.html Dynamical Systems and Fractals Lecture Notes
Fuzzy www-2.cs.cmu.edu/Groups/AI/html/faqs/ai/fuzzy/part1/faq.html Fuzzy Logic FAQ
www.paulbunyan.net/users/gsirvio/nonlinear/fuzzylogic.html Fuzzy Logic
Logic www.trentu.ca/academic/math/sb/pcml/welcome.html A Problem Course in Mathematical Logic ( downloadable/online text book )
Nonlinear systems
Optimization Theory www.economics.utoronto.ca/osborne/MathTutorial/IND.HTM
Tutorial on Optimization Theory and Difference and Differential Equations by Martin Osborne, online book and course outline.
Probability www.dartmouth.edu/ chance/teaching aids/books articles/probability book/book.html
Introduction to Probability ( online/downloadable textbook )
archives.math.utk.edu/topics/statistics.html The Math Archives, proba-
bility
www.netcomuk.co.uk/ vaillant/proba/index.html Probability.net ( tutori-
als )
Statistics www.psych.utoronto.ca/courses/c1/statstoc.htm Statistics Tutorial
archives.math.utk.edu/topics/statistics.html The Math Archives, Statis-
tics
Topology www.geom.umn.edu/ banchoff/Flatland/ Flatland, A Romance of Many Dimensions, Edwin A Abbott ( online/downloadable textbook )
at.yorku.ca/i/a/a/b/23.htm Topology Course ( Lecture Notes )
Vector Math www.cubic.org/ submissive/sourcerer/vector1.htm Simple Primer
on Vector Math
kestrel.nmt.edu/raymond/ph13xbook/node21.html Math Tutorial Vectors
www.ping.be/math/ Math Tutorial
Wavelets www.public.iastate.edu/ rpolikar/WAVELETS/waveletindex.html Wavelet tutorial
davis.wpi.edu/ matt/courses/wavelets/ Wavelets in Multiresolution Analysis
8.2 Specific Topics
Bayes balducci.math.ucalgary.ca/bayes-theorem.html Bayes Theorem
members.tripod.com/ Probability/bayes01.htm Bayes' Theorem
engineering.uow.edu.au/Courses/Stats/File2414.html Bayes' Theorem
Boltzmann function www.cs.berkeley.edu/ murphyk/Bayes/bayes.html Boltzmann Equation
www.ph.ed.ac.uk/ jmb/thesis/node18.html The Boltzmann Equation
uracil.cmc.uab.edu/ harvey/Tutorials/math/Boltzmann.html The Boltz-
mann Distribution
Fokker-Planck Equation tangaroa.oce.ordt.edu/cmg3b/node2.html The Fokker-Planck Equation
www.d.aau.dk/ hoyrup/master/node17.html Solution of the Fokker-Planck Equation
Gradient hyperphysics.phy-astr.gsu.edu/hbase/gradi.html The Gradient
web.mit.edu/wwmath/vectorc/summary.html Vector Calculus Summary
www.ma.iup.edu/projects/CalcDEMma/vecdcalc/vecdicalc.html Vector Differential Calculus
www.mas.ncl.ac.uk/ sbrooks/book/nish.mit.edu/2006/Textbook/Nodes/chap01/node26.html
Vector Calculus
Gibbs Probability research.microsoft.com/ szli/MRF Book/Chapter 1/node13.html Markov
Gibbs Equivalence
iew3.technion.ac.il/Academ/Grad/STdep/crystal.php Gibbs Fields and Phase
Segregation
www.blc.arizona.edu/courses/bioinformatics/book pages/gibbs.html The
Gibbs Sampler
Gibbs Sampler Convergence Theorem www.utdallas.edu/ golden/ANNCOURSESTUFF/lecture notes/lec11.notes Boltzmann Machine, Brain State in a Box
Hessian Matrix This is the Jacobian of the gradient. It is used to verify critical points to find minima and maxima.
thesaurus.maths.org/dictionary/map/word/2148 Hessian matrix
rkb.home.cern.ch/rkb/AN16pp/node118.html Hessian
www-sop.inria.fr/saga/logiciels/AliAS/node7.html General purpose solv-
ing algorithm with Jacobian and Hessian
Invariant Sets An invariant set is the region of the state space such that any trajectory
initiated in the region will remain there for all time. This is used in judging
the stability of neural networks.
cnls.lanl.gov/ nbt/Book/node105.html Invariant Sets
www.amsta.leeds.ac.uk/ carsten/preprints/article/node4.html Invariant Sets
www.cnbc.cmu.edu/ bard/xppfast/lin2d.html The Phase Plane for a Lin-
ear System
Jacobian Matrix This is used to obtain partial derivatives of implicit functions. It can be used to map a correspondence between two planes.
rkb.home.cern.ch/rkb/AN16pp/node135.html#134 Jacobi Determinant
thesaurus.maths.org/dictionary/map/word/946 Jacobian
www-sop.inria.fr/saga/logiciels/AliAS/node7.html General purpose solving algorithm with Jacobian and Hessian
Lagrange Multipliers lagrange.pdf An excellent example from a homework problem from ww2.lafayette.edu/ math/Gary/ Gordon @Lafayette
Lipschitz Condition Shows the possibility of finding a global minimum.
thesaurus.maths.org/dictionary/map/word/10115 Lipschitz Condition
m707.math.arizona.edu/ restrepo/475B/Notes/source/node3.html Some im-
portant theorems on odes
www.gris.uni-tuebingen.de/projects/dynsys/latex/dissp/node7.html Con-
tinuous Dynamical Systems
Lyapunov Function This is used to evaluate the stability of a critical point in a dynamical system. It is also known as the 'characteristic multiplier' or the 'Floquet multiplier'.
The Lyapunov Exponent is also defined as d(t) = d0 * e^(lambda*t), which describes the separation between two trajectories that begin very close to each other.
cepa.newschool.edu/het/essays/math/lyapunov.htm Lyapunov's Method
www.irisa.fr/bibli/publi/pi/1994/845/845.html PI-845 Lyapunov's stabil-
ity of large matrices by projection methods
MAP Risk functions (maximum a posteriori estimate)
www.ccp4.ac.uk/courses/proceedings/1997/g bricogne/main.html Maximum
Entropy Methods and the Bayesian Programme
www.cs.berkeley.edu/ murphyk/Bayes/bayes.html A Brief Introduction to
Graphical Models and Bayesian Networks
Markov Random Fields research.microsoft.com/ szli/MRF Book/Chapter 1/node11.html Markov
Random Fields
omega.math.albany.edu:8008/cdocs/summer99/lecture3/l3.html" An In-
troduction to Markov Chain Monte Carlo
dimacs.rutgers.edu/ dbwilson/exact/ Website for Perfectly Random Sam-
pling with Markov Chains
Method of Newton www.ma.iup.edu/projects/CalcDEMma/newton/newton.html Newton's Method
archives.math.utk.edu/visual.calculus/3/newton.5/ Visual Calculus, New-
ton's Method
www.mapleapps.com/categories/mathematics/calculus/html/NewtonSlides.html
Slide Show about Newton's Method
Multivariable Taylor's Theorem (its first-order case is the Mean Value Theorem) This is used to approximate a function.
www.math.gatech.edu/ carlen/2507/notes/Taylor.html Taylor's Theorem with several variables
thesaurus.maths.org/dictionary/map/word/2933 Taylor's theorem
Probability Mass (Density) functions
www.mathworks.com/access/helpdesk/help/toolbox/stats/tutoria5.shtml
Statistics Toolbox
Sampling error
Sigma Function ce597n.www.ecn.purdue.edu/CE597N/1997F/students/michael.a.kropinski.1/project/tutorial
The Normal Distribution Tutorial
Steepest Descent www.gothamnights.com/Trond/Thesis/node26.html Method of Steepest Descent
cauchy.math.colostate.edu/Resources/SD CG/sd/index.html Steepest Descent Method
www.uoxray.uoregon.edu/dale/papers/CCP4 1994/node8.html Steepest Descent
Stochastic Approximation Theorem Several theorems used to show that an unpredictable, or random, system will converge or become stable.
Wald Test
Zipf's Law linkage.rockefeller.edu/wli/zipf/ Zipf's Law references
www.few.eur.nl/few/people/vanmarrewijk/geography/zipf/ Zipf's Law
More General information thesaurus.maths.org/index.html Maths Thesaurus
rkb.home.cern.ch/rkb/titleA.html The Data Analysis BriefBook
www-sop.inria.fr/saga/logiciels/AliAS/AliAS.html An Algorithms Library of Interval Analysis for Equation Systems
www.cs.utk.edu/ mclennan/Classes/594-MNN/ CS 594 Math for Neural
Nets ( not yet complete )
www.ams.org/online bks/ American Mathematical Society online books
www.math-atlas.org The Mathematical Atlas
www.nr.com Numerical Recipes
ocw.mit.edu/global/department18.html MIT Open Courseware Math Sec-
tion, lectures, notes, quizzes, homework and solutions along with text book
information
Chapter 9
Bibliography
9.1 Bibliography
Books
Artificial Intelligence: A New Synthesis, Nils J. Nilsson, Morgan Kaufmann Publishers, 1998, #1-55860-467-7
Artificial Intelligence, A Modern Approach, Stuart Russell and Peter Norvig, Prentice Hall Series in Artificial Intelligence, 1995, #0-13-103805-2
C++ Neural Networks and Fuzzy Logic, Dr. Valluru B. Rao and Hayagriva V. Rao, MIS Press, 1995, #1-55851-552-6
Constructing Intelligent Agents with Java, Joseph P. Bigus and Jennifer
Bigus, Wiley Computer Publishing, 1998, #0-471-19135-3
Introduction to Artificial Intelligence, Philip C. Jackson Jr., Dover Publications, 1985, #0-486-24864-X
Mathematical Methods for Neural Network Analysis and Design, Richard
M. Golden, Bradford-MIT Press, 1996, #0-262-07174-6
Naturally Intelligent Systems, Maureen Caudill and Charles Butler, A Brad-
ford Book/The MIT Press, 1990, #0-262-03156-6
Neural Network and Fuzzy Logic Applications in C/C++, Stephen T. Wel-
stead, Wiley, 1994, #0-471-30974-5
Programming and Deploying Java Mobile Agents with Aglets, Danny B.
Lange and Mitsuru Oshima, Addison Wesley, 1998, #0-201-32582-9
Programming Intelligent Agents for the Internet, Mark Watson, Computing
McGraw-Hill, 1996, #0-07-912206-X
Signal and Image Processing with Neural Networks, A C++ Sourcebook,
Timothy Masters, John Wiley and Sons, 1994, #0-471-04963-8
Software Agents, Jeffrey M. Bradshaw, AAAI Press/The MIT Press, 1997, #0-262-52234-9
Thinking in Complexity, Klaus Mainzer, Springer, 1997, #3-540-62555-0
Online Sources
An Introduction to Bayesian Networks and their Contemporary Applica-
tions, Moisies, www.cs.ust.uk/ samee/bayesian/intro.html
The Rete Algorithm, yoda.cis.temple.edu:8080/UGAIWWW/lectures/rete.html
Birth of a Learning Law, Stephen Grossberg, cns-web.bu.edu/Profiles/Grossberg/Learning.html
Overview of Support Vector Machines, Chew, Hong Gunn, www.eleceng.adelaide.edu.au/Personal/hgche
WebMate: A Personal Agent for Browsing and Searching, Liren Chen, Katia
Sycara, citeseer.nj.nec.com/cs
Artificial Intelligence Gets Real, Stephen W. Plain, www.zdnet.com/computershopper/edit/cshopper/co
Evolution, Error and Intentionality, Daniel C. Dennett, ase.tufts.edu/costud/papers/evolerr.htm
The Construction of Programs with Common Sense, John McCarthy
Artificial Intelligence, Logic and Formalizing Common Sense, John McCarthy, www-formal.stanford.edu/jmc
Modeling Adaptive Autonomous Agents, Pattie Maes, pattie@media.mit.edu
Hopkins Scientists Shed Light on How the Brain Thinks, Gary Stephenson,
gstephenson@jhmi.edu
Knowledge Discovery in Databases, Tools and Techniques, Peggy Wright,
www.acm.org/crossroads/xrds5-2/kdd.html
Minds, Brains, and Programs, John R. Searle, www.cogsci.ac.uk/bbs/Archive/bbs.searle2.html
www.opencyc.org Open source version of Cyc
www.markwatson.com/opencontent/opencontent.htm Practical Artificial Intelligence Programming in Java, by Mark Watson. A downloadable book with example code.
www.cs.dartmouth.edu/ brd/Teaching/AI/Lectures/Summaries/planning.html#STRIPS
STRIPS
robotics.stanford.edu/ koller/papers/position.html Structured Representations and Intractability
www-cs-students.stanford.edu/ pdoyle/quail/notes/pdoyle/search.html Search
Methods
www.ams.org/new-in-math/cover/turing.html Turing Machines (AMS site)
www.cogs.susx.ac.uk/users/bend/atc/2000/web/nicholn/ A Tutorial Intro-
duction to Turing Machines
www.turing.org.uk/turing/scrapbook/tmjava.html A Turing Machine Ap-
plet
cgi.student.nada.kth.se/cgi-bin/d95-aeh/get/umeng Turing Machines (sev-
eral applets to demonstrate a turing machine)
www.ktiworld.com/GBB/information bibli.html Blackboard Systems
www.cs.cmu.edu/afs/cs/project/tinker-arch/www/html/1998/Lectures/20.Blackboard/base.000.htm
A Slide Show on Blackboard Architectures
Start with this paper! It gives an excellent introduction. IntroToSVM.pdf Introduction to Support Vector Machines, by Dustin Boswell. I downloaded it from www.work.caltech.edu/ boswell/IntroT
Lagrange Multipliers - here is an excellent example that explains how to use Lagrange Multipliers. I got it from ww2.lafayette.edu/ math/Gary/ Math 263 Lagrange Multiplier Solutions 1. Find the extreme values ... and a copy is here if that one disappears: lagrange.pdf
www.support-vector-machine.org Support Vector Machines (mailing list and
links)
www.eleceng.adelaide.edu.au/Personal/hgchew/svmdoc/svmdoc.html Overview
of Support Vector Machines this has a nice description of how the kernel is cal-
culated
citeseer.nj.nec.com/burges98tutorial.html A Tutorial on Support Vector Ma-
chines for Pattern Recognition , this is considered the best introduction and is
quite in-depth.
www.cis.ysu.edu/ john/835/notes/notes6.html Situational Calculus
www.drc.ntu.edu.sg/users/mgeorg/conferences.epl AI Events
www.cs.man.ac.uk/ai/ AI Group
www.calresco.org/tutorial.htm Tutorials on AI
www.enteract.com/ rcripe/aipages/ai-intro.htm What is AI concerned
with?
www.psych.utoronto.ca/ reingold/courses/ai/nn.html AI Neural Nets,
What are they?
www.landfield.com/faqs/ai-faq/neural-nets/part1 AI NN Faq
www.neuroguide.com Neurosciences on the Internet
www.brainsource.com Brain Source, Neuropsychology and Brain Resources
and information
faculty.washington.edu/ wcalvin/bk9/ The Cerebral Code, Thinking a Thought in the Mosaics of the Mind, William H. Calvin, an online book.
www.nimh.nih.gov/neuroinformatics/index.cfm Neuroinformatics , The
Human Brain Project
www.firstmonday.dk/issues/issue5 2/ronfeldt/ Game Theory in Auto Racing
www.economics.utoronto.ca/osborne/ Martin J. Osborne home page Mar-
tin has written several books on game theory and has several chapters of
a coming book 'Introduction to Game Theory' on line that you can down-
load and read. It gives a very clear explanation of the Nash equilibrium.
www.few.eur.nl/few/people/vanmarrewijk/geography/zipf/ Zipf's Law
as it relates to geographical economics, trade, location and growth
www.gametheorysociety.org Game Theory Society not much here yet, but
it does have a good list of books.
plato.stanford.edu/entries/game-theory/ Game Theory, history and in-
troduction a short paper from a philosopher's stand
economics101.org/ch17/micro17/ a powerpoint game theory introduction
linkage.rockefeller.edu/wli/zipf/ Zipf's Law references
news.bbc.co.uk/1/hi/in depth/sci tech/2000/dot life/2225879.stm Computer
Games Start Thinking, BBC Article
www.gameai.com/ The Game AI Page Open Source Software, publications,
and people.
www.research.ibm.com/massive/tdl.html Temporal Difference Learning and TD Gammon
www.botepidemic.com Bot Epidemic at the forefront of game bot develop-
ment
www.ibm.com/news/morechess.html IBM story on 'Deep Thought'
www.ai.sri.com/ wilkins/bib-chess.html Papers on Chess by David Wilkins
www.rome.ro/ John Romero's Home page
www.gamedev.net GameDev.net - all your game development needs
www-cs-students.stanford.edu/ amitp/gameprog.html Amit's Game Pro-
gramming Page
www.twilightminds.com/bbe.html Brainiac Behavior Engine
personalityforge.com Personality Forge
etext.lib.virginia.edu/helpsheets/regex.html regular expressions
www.alicebot.org/ A.L.I.C.E.
www-ai.ijs.si/eliza/eliza.html Eliza
cogsci.ucsd.edu/ asaygin/tt/ttest.html One of the main benchmarks of
AI is the 'Turing Test'
www.loebner.net/Prizef/loebner-prize.html Loebner Prize gives a turing
test each year and awards a prize to the winner
www-2.cs.cmu.edu/ awb/ Alan Black, Carnegie Mellon has several useful
publications online
www.isip.msstate.edu/projects/switchboard/ Download Switchboard-1 data transcriptions. Switchboard-1 is a corpus of telephone conversations collected by Texas Instruments in 1990/1. It contains 2400 two-sided phone conversations.
www.cs.columbia.edu/nlp/ Columbia Natural Language Processing Group
has some really cool projects you might want to check out
perun.si.umich.edu/ radev/u/db/acl/ Association for Computational Lin-
guistics There are searchable references and information on conferences.
www.research.microsoft.com/ui/persona/home.htm Persona Project Mi-
crosoft This is a project to develop a user interface with emotions, that
interacts socially and appears intelligent.
www-cs-students.stanford.edu/ pdoyle/quail/notes/pdoyle/natlang.html
AI Natural Language
www.eas.asu.edu/ cse476/atns.htm Introduction to Natural Language Pro-
cessing
www.bestweb.net/ sowa/misc/mathw.htm Mathematical Background
www.cs.tamu.edu/research/CFL/ Center for Fuzzy Logic, Robotics and
Intelligent Systems
www.seattlerobotics.org/encoder/mar98/fuz/index.html Fuzzy Logic Tutorial
www.csu.edu.au/complex systems/fuzzy.html Fuzzy Systems - A Tutorial
cbl.leeds.ac.uk/ paul/prologbook/node18.html First Order Predicate Cal-
culus
www.earlham.edu/ peters/courses/logsys/low-skol.htm Skolem Theorem
www.alcyone.com/max/links/alife.html Artificial Life Links
jasss.soc.surrey.ac.uk/JASSS.html Journal of Artificial Societies and Social Simulation
www.santafe.edu/s/indexResearch.html Santa Fe Research Institute (there
are several research projects here related to this topic)
www.theatlanticmonthly.com/issues/2002/04/rauch.htm Seeing Around
Corners, The Atlantic Monthly (excellent article)
www.angelfire.com/id/chaplincorp Chaplin Corp has a Java/Neural net program that evolves.
www.aist.go.jp/NIBH/ b0616/Lab/Links.html Applets for neural networks and artificial intelligence
lslwww.epfl.ch/ moshes/introal/introal.html An Introduction to Artificial Life
www.cs.cmu.edu/afs/cs.cmu.edu/project/alv/member/www/projects/ALVINN.html
Autonomous Land Vehicle In a Neural Network (ALVINN)
www-iri.upc.es/people/ros/WebThesis/tutorial.html Spatial Realizabil-
ity of Line Drawings
www-2.cs.cmu.edu/afs/cs/project/cil/ftp/html/v-pubs.html Computer
Vision Online Publications, books and tutorials
www.kurzweilai.net Ramona
www.alicebot.org ALICE
ananova.com Ananova
Different interfaces and information
www.cs.umd.edu/hcil/pubs/treeviz.shtml TreeViz
ccs.mit.edu/papers/CCSWP181 Experiments with Oval
dq.com/homend Dynamic HomeFinder is another example of a graphical
interface that speeds up queries and imparts more information than could
be absorbed in a textual display.
www.research.microsoft.com/ui/persona/home.htm Persona Project Mi-
crosoft This is a project to develop a user interface with emotions, that
interacts socially and appears intelligent.
www.arcbridge.com/ACTidoc.htm ACTidoc is an agent interface that builds documents on the fly for learning.
agents.www.media.mit.edu/groups/agents MIT Media Lab for Software
Agents Group
www.pitt.edu/ circle/Projects/Atlas.html Atlas tutoring system
www.pitt.edu/ vanlehn/andes.html Andes, An intelligent tutoring system
for physics
www.ryerson.ca/ dgrimsha/courses/cps720/agentEnvironment.html Agent Environment Types
robot8.cps.unizar.es/grtr/navegacion/pfnav.htm A New Potential Field Based Navigation Method
vision.ai.uiuc.edu/dugad/ Rakesh Dugad's Homepage, has a good down-
loadable tutorial on HMM
uirvli.ai.uiuc.edu/dugad/hmm tut.html A Tutorial on Hidden Markov Mod-
els
home.ecn.ab.ca/ jsavard/crypto/co040503.htm Hidden Markov Models
www-2.cs.cmu.edu/ javabayes/ Java Bayes
powerlips.ece.utexas.edu/ joonoo/Bayes Net/bayes.html Tools for Bayesian
Belief Networks
omega.albany.edu:8008/JaynesBook.html Probability Theory: The Logic
of Science ( a statistics book with lots of information on Bayesian logic)
www.cs.helsinki.fi/research/cosco/Calendar/BNCourse/ Bayesian Networks, Course notes
www.cyc.com CYC is a current attempt at building a common sense program,
there is an open cyc that you can download and play with on your home
computer.
www.ee.cooper.edu/courses/course pages/past courses/EE459/StrIPS
General Problem Solver
citeseer.nj.nec.com/vila94survey.html A survey on Temporal Reasoning
www-formal.stanford.edu/jmc/frames.html Programs with Common Sense,
John McCarthy and his home page
www.acm.org/crossroads/xrds5-2/kdd.html Knowledge Discovery in Databases:
Tools and Techniques
www.kdnuggets.com KD Nuggets: Data Mining, Web Mining, and Knowl-
edge Discovery Guide
www.opencyc.org OpenCyc This is an open source project of Cyc, one of the
most general and complete knowledge based systems.
cui.unige.ch/db-research/Enseignement/analyseinfo/AboutBNF.html
About BNF notation
www.mv.com/ipusers/noetic/iow.html InOtherWords Lexical Database is
a good example of a semantic net.
www.botspot.com/ Bot Spot
interviews.slashdot.org/article.pl?sid=02/07/26/0332225 Slashdot inter-
view with ALICE bot creator Dr. Wallace
alice.sunlitsurf.com/ A.L.I.C.E. AI Foundation
www.dis.uniroma1.it/ iocchi/pub/webnet97.html Information Accession
the Web
ict.pue.udlap.mx/people/alfredo/ihc-o99/clases/agentes.html A Tax-
onomy of Agents
lieber.www.media.mit.edu/people/lieber/Lieberary/Letizia/AIA/AIA.html
Autonomous Interface Agents
www.isi.edu/isd/LOOM/LOOM-HOME.html Loom Project Home Page
www.ai.mit.edu/projects/iiip/conferences/www95/kr-panel.html Building
Global Knowledge Webs
www.cs.umbc.edu/kqml KQML Web
meta2.stanford.edu/sharing/knowledge.html Knowledge Sharing
ksi.cpsc.ucalgary.ca/KAW/KAW96/bradshaw/KAW.html KAoS: An Open
Agent Architecture Supporting Reuse, Interoperability, and Extensibility
www.cs.umbc.edu/kse/kif/ KIF Knowledge Interchange Format
piano.stanford.edu/concur/language/ Agent Communication Language (ACL)
myspiders.biz.uiowa.edu/ My Spiders
www.microsoft.com/products/msagent/devdownloads.htm MS has a free
agent developer's kit you can download and use
www.bonzi.com Bonzi Buddy, Intelligent Agent (free)
dsp.jpl.nasa.gov/members/payman/swarm/ Swarm Intelligence
www-cia.mty.itesm.mx/~lgarrido/Repositories/IA/index.html Intelligent Agents Repository
agents.media.mit.edu/index.html MIT Media Lab: Software Agents
homepages.feis.herts.ac.uk/~comqkd/aaai-social.html Socially Intelligent Agents
www.insead.fr/CALT/Encyclopedia/ComputingSciences/Groupware/VirtualCommunities/ Aglets Library for Java from IBM; this is open source, free code
agents.umbc.edu/ UMBC Agent Web, News and Information on Agents
www.java-agent.org/ Java Agent Services
alicebot.org/ A.L.I.C.E. AI Foundation
agents.umbc.edu/technology/asl.shtml Agent Programming and Scripting Languages
www.agentbase.com/survey.html Agent-Based Systems
yoda.cis.temple.edu:8080/UGAIWWW/lectures/rete.html The Rete Algorithm
www.cyc.com CYC, a current Internet-based common sense knowledge base; there is an open source version you can download and use at home.
www.cis.temple.edu/~ingargio/cis587/readings/wumpus.shtml Wumpus World
www.cs.cmu.edu/~illah/PAPERS/interleave.txt Time-Saving Tips for Problem Solving with Incomplete Information
davis.wpi.edu/~matt/courses/soms/ Self Organizing Maps, a short course
www.calresco.org/sos/sosfaq.htm Self-Organizing Systems FAQ
www.hh.se/staff/denni/sls_course.html Learning and Self Organizing Systems, lecture notes and problems for a graduate-level computer class
pespmc1.vub.ac.be/Papers/BootstrappingPask.html Bootstrapping knowledge representations
www.c3.lanl.gov/~rocha/ijhms_pask.html Adaptive Recommendation and Open-Ended Semiosis
artsandscience.concordia.ca/edtech/ETEC606/paskboyd.html Reflections on the Conversation Theory of Gordon Pask
www.cs.colostate.edu/~anderson/res/graphics/ Neural Networks in Computer Graphics
www.ticam.utexas.edu/reports/2002/0202.pdf Neural Nets for Mesh Assessment
www.anc.ed.ac.uk/~amos/hopfield.html Why Hopfield Networks?
www.geocities.com/CapeCanaveral/1624/ Neural Networks at your fingertips
www.geocities.com/CapeCanaveral/1624/cpn.html Counter Propagation
Network C source code example to determine the angle of rotation using
computer vision
homepages.goldsmiths.ac.uk/nikolaev/311pnn.htm Probabilistic Neural
Networks
www.cs.wisc.edu/~bolo/shipyard/neural/local.html A Basic Introduction to Neural Networks
www.shef.ac.uk/psychology/gurney/notes/contents.html Neural Nets:
A short online book
www.cse.unsw.edu.au/~cs9417ml/MLP2/BackPropagation.html Backpropagation
rana.usc.edu:8376/~yuri/kohonen/kohonen.html Java applet demonstrating SOM
www.willamette.edu/~gorr/classes/cs449/Unsupervised/SOM.html Kohonen's SOM
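The SOM references above all revolve around one update rule: find the best-matching unit (BMU) for an input, then pull it and its neighbors toward that input. A minimal sketch follows; the map size, learning rate, and neighborhood radius are all hypothetical choices, not values from the linked courses.

```python
import random

random.seed(0)
# Toy 1-D Kohonen map: 5 units, each with a 2-D weight vector.
weights = [[random.random(), random.random()] for _ in range(5)]

def train_step(x, lr=0.5, radius=1):
    """One SOM update: move the BMU and its neighbors toward input x."""
    # Best-matching unit = unit with smallest squared distance to x.
    bmu = min(range(5), key=lambda i: sum((weights[i][d] - x[d]) ** 2
                                          for d in range(2)))
    # Pull every unit within the neighborhood radius toward x.
    for i in range(5):
        if abs(i - bmu) <= radius:
            for d in range(2):
                weights[i][d] += lr * (x[d] - weights[i][d])
    return bmu
```

Repeatedly presenting the same input drags the winning unit's weights onto that input, which is the self-organizing behavior the courses above develop in full (with decaying learning rate and shrinking neighborhood).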
www.quantumpicture.com/index.htm Flo Control, an image-recognition neural net to keep the cat from bringing its victims into the house.
www.neci.nec.com/homepages/flake/nodelib/html NODElib, a programming library for rapidly developing neural network simulations
wol.ra.phy.cam.ac.uk/mackay/itprnn/book.html Textbook: Information Theory, Inference and Learning Algorithms, by David MacKay; a downloadable textbook
www.shef.ac.uk/psychology/gurney/notes/index.html Neural Nets by
Kevin Gurney
www.maths.uwa.edu.au/~rkealley/ann_all/ Artificial Neural Networks, An Introductory Course
nips.djvuzone.org/ Advances in Neural Information Processing Systems, Volumes 0 to 13
www.ee.mu.oz.au/courses/431-469/subjectinfo.html 431-469 Multimedia Signal Processing Course, lecture notes, problems and solutions from the University of Melbourne
www.willamette.edu/~gorr/classes/cs449/intro.html Neural Networks, an online course
www.mindpixel.com/ Mindpixel
www.ai-forum.org/forum.asp?forum_id=1 AI Forums
www.gamedev.net/community/forums/forum.asp?forum_id=9 GameDev.net
www.generation5.org/cgi-local/ubb/Ultimate.cgi?action=intro Generation5 Forums
sodarace.net/forum/forum.jsp?forum=16 Sodarace
www.igda.org/Forums/forumdisplay.php?forumid=30 IGDA Forums