
Jose Mari C. Rey, BSCS 401

Alpha-beta pruning

Alpha-beta pruning is a search algorithm that seeks to decrease the number of nodes evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games (Tic-tac-toe, Chess, Go, etc.). It stops completely evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.

History

Allen Newell and Herbert Simon, who used what John McCarthy calls an "approximation" in 1958, wrote that alpha-beta "appears to have been reinvented a number of times". Arthur Samuel had an early version, and Richards, Hart, Levine and/or Edwards found alpha-beta independently in the United States. McCarthy proposed similar ideas during the Dartmouth Conference in 1956 and suggested it to a group of his students, including Alan Kotok at MIT in 1961. Alexander Brudno independently discovered the alpha-beta algorithm, publishing his results in 1963. Donald Knuth and Ronald W. Moore refined the algorithm in 1975, and it continued to be advanced.

Pseudocode

function alphabeta(node, depth, α, β, Player)
    if depth = 0 or node is a terminal node
        return the heuristic value of node
    if Player = MaxPlayer
        for each child of node
            α := max(α, alphabeta(child, depth-1, α, β, not(Player)))
            if β ≤ α
                break                        (* Beta cut-off *)
        return α
    else
        for each child of node
            β := min(β, alphabeta(child, depth-1, α, β, not(Player)))
            if β ≤ α
                break                        (* Alpha cut-off *)
        return β

(* Initial call *)
alphabeta(origin, depth, -infinity, +infinity, MaxPlayer)
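
The following is a minimal Python sketch of the pseudocode above. The node interface used here (is_terminal(), children(), evaluate()) is an assumption made for illustration and is not defined in the original text.

import math

def alphabeta(node, depth, alpha, beta, maximizing):
    # Return the minimax value of node, pruning branches that cannot
    # influence the final decision.
    if depth == 0 or node.is_terminal():
        return node.evaluate()                 # heuristic value of the node
    if maximizing:
        value = -math.inf
        for child in node.children():
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break                          # beta cut-off
        return value
    else:
        value = math.inf
        for child in node.children():
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break                          # alpha cut-off
        return value

# Initial call: alphabeta(origin, depth, -math.inf, math.inf, True)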

A* search algorithm

A* is a computer algorithm that is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between points, called nodes. Noted for its performance and accuracy, it enjoys widespread use. Peter Hart, Nils Nilsson and Bertram Raphael first described the algorithm in 1968.[1] It is an extension of Edsger Dijkstra's 1959 algorithm. A* achieves better performance (with respect to time) by using heuristics; when the heuristic is monotone, it is equivalent to running Dijkstra's algorithm with reduced edge costs (see below).

Description

A* uses a best-first search and finds the least-cost path from a given initial node to one goal node (out of one or more possible goals). It uses a distance-plus-cost heuristic function (usually denoted f(x)) to determine the order in which the search visits nodes in the tree. The distance-plus-cost heuristic is the sum of two functions:

the path-cost function, which is the cost from the starting node to the current node (usually denoted g(x)), and an admissible "heuristic estimate" of the distance from the current node to the goal (usually denoted h(x)); that is, f(x) = g(x) + h(x).

The h(x) part of the f(x) function must be an admissible heuristic; that is, it must not overestimate the distance to the goal. Thus, for an application like routing, h(x) might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points or nodes.
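
As a small illustration of how these pieces fit together, the Python sketch below computes f(x) = g(x) + h(x) for a routing-style problem, using the straight-line (Euclidean) distance as the admissible h(x). The node names and coordinates are made up for the example and do not come from the original text.

import math

coords = {"S": (0.0, 0.0), "A": (2.0, 1.0), "G": (5.0, 3.0)}   # hypothetical node positions

def h(node, goal="G"):
    # Straight-line distance to the goal: it can never overestimate the
    # true travel distance, so it is an admissible heuristic.
    (x1, y1), (x2, y2) = coords[node], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

g = {"S": 0.0, "A": 2.5}                # cost of the best known path from the start
f = {n: g[n] + h(n) for n in g}         # priority used to order the search
print(f)                                # the node with the lowest f is expanded first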

If the heuristic h satisfies the additional condition h(x) ≤ d(x, y) + h(y) for every edge (x, y) of the graph (where d denotes the length of that edge), then h is called monotone, or consistent. In such a case, A* can be implemented more efficiently: roughly speaking, no node needs to be processed more than once (see closed set below), and A* is equivalent to running Dijkstra's algorithm with the reduced cost d'(x, y) := d(x, y) - h(x) + h(y). Note that A* has been generalized into a bidirectional heuristic search algorithm; see bidirectional search.

History

In 1964 Nils Nilsson invented a heuristic-based approach to increase the speed of Dijkstra's algorithm. This algorithm was called A1. In 1967 Bertram Raphael made dramatic improvements upon this algorithm, but failed to show optimality. He called this algorithm A2. Then in 1968 Peter E. Hart introduced an argument that proved A2 was optimal when using a consistent heuristic, with only minor changes. His proof of the algorithm also included a section that showed that the new A2 algorithm was the best algorithm possible given the conditions. He thus named the new algorithm in Kleene star syntax to be the algorithm that starts with A and includes all possible version numbers, or A*.

Concepts

As A* traverses the graph, it follows a path of the lowest known cost, keeping a sorted priority queue of alternate path segments along the way. If, at any point, a segment of the path being traversed has a higher cost than another encountered path segment, it abandons the higher-cost path segment and traverses the lower-cost path segment instead. This process continues until the goal is reached.

Process

Like all informed search algorithms, it first searches the routes that appear most likely to lead towards the goal. What sets A* apart from a greedy best-first search is that it also takes the distance already traveled into account; the g(x) part of the heuristic is the cost from the start, not simply the local cost from the previously expanded node. Starting with the initial node, it maintains a priority queue of nodes to be traversed, known as the open set (not to be confused with open sets in topology). The lower f(x) is for a given node x, the higher its priority. At each step of the algorithm, the node with the lowest f(x) value is removed from the queue, the f and h values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a goal node has a lower f value than any node in the queue (or until the queue is empty). (Goal nodes may be passed over multiple times if there remain other nodes with lower f values, as they may lead to a shorter path to a goal.) The f value of the goal is then the length of the shortest path, since h at the goal is zero with an admissible heuristic. If the actual shortest path is desired, the algorithm may also update each neighbor with its immediate predecessor in the best path found so far; this information can then be used to reconstruct the path by working backwards from the goal node. Additionally, if the heuristic is monotonic (or consistent, see above), a closed set of nodes already traversed may be used to make the search more efficient.

Pseudocode

The following pseudocode describes the algorithm:

function A*(start, goal)
    closedset := the empty set              // The set of nodes already evaluated.
    openset := {start}                      // The set of tentative nodes to be evaluated, initially containing the start node.
    came_from := the empty map              // The map of navigated nodes.
    g_score[start] := 0                     // Cost from start along best known path.
    h_score[start] := heuristic_cost_estimate(start, goal)
    f_score[start] := h_score[start]        // Estimated total cost from start to goal.
    while openset is not empty
        x := the node in openset having the lowest f_score[] value
        if x = goal
            return reconstruct_path(came_from, goal)
        remove x from openset
        add x to closedset
        for each y in neighbor_nodes(x)
            if y in closedset
                continue
            tentative_g_score := g_score[x] + dist_between(x, y)
            if y not in openset
                add y to openset
                tentative_is_better := true
            else if tentative_g_score < g_score[y]
                tentative_is_better := true
            else
                tentative_is_better := false
            if tentative_is_better = true
                came_from[y] := x
                g_score[y] := tentative_g_score
                h_score[y] := heuristic_cost_estimate(y, goal)
                f_score[y] := g_score[y] + h_score[y]
    return failure

function reconstruct_path(came_from, current_node)
    if came_from[current_node] is set
        p := reconstruct_path(came_from, came_from[current_node])
        return (p + current_node)
    else
        return current_node
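
The following is a minimal Python sketch of the A* pseudocode above, using a binary heap for the open set. The graph is assumed to be a dict mapping each node to a dict of {neighbor: edge cost}, and heuristic is assumed to be an admissible h(x); both representations are illustrative choices, not part of the original text.

import heapq

def a_star(graph, start, goal, heuristic):
    # Return the least-cost path from start to goal, or None if unreachable.
    g_score = {start: 0}                           # cost from start along best known path
    came_from = {}                                 # map of navigated nodes
    open_heap = [(heuristic(start, goal), start)]  # priority queue ordered by f = g + h
    closed_set = set()                             # nodes already evaluated

    while open_heap:
        _, x = heapq.heappop(open_heap)            # node with the lowest f value
        if x == goal:
            return reconstruct_path(came_from, goal)
        if x in closed_set:
            continue                               # stale queue entry; a cheaper copy was expanded
        closed_set.add(x)
        for y, cost in graph[x].items():
            if y in closed_set:
                continue
            tentative_g = g_score[x] + cost
            if y not in g_score or tentative_g < g_score[y]:
                came_from[y] = x
                g_score[y] = tentative_g
                heapq.heappush(open_heap, (tentative_g + heuristic(y, goal), y))
    return None                                    # failure: no path exists

def reconstruct_path(came_from, current):
    # Walk backwards through came_from to rebuild the path from the start.
    path = [current]
    while current in came_from:
        current = came_from[current]
        path.append(current)
    return list(reversed(path))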

Uniform-cost search

Uniform-cost search (UCS) is a tree search algorithm used for traversing or searching a weighted tree, tree structure, or graph. The search begins at the root node and continues by visiting the next node which has the least total cost from the root. Nodes are visited in this manner until a goal state is reached. Typically, the search algorithm involves expanding nodes by adding all unexpanded neighboring nodes that are connected by directed paths to a priority queue. In the queue, each node is associated with its total path cost from the root, and the least-cost paths are given highest priority. The node at the head of the queue is then expanded, adding the next set of connected nodes with their total path cost from the root. Uniform-cost search is complete and optimal if the cost of each step exceeds some positive bound ε.[1] The worst-case time and space complexity is O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution. When all step costs are equal, this becomes O(b^(d + 1)).

Pseudocode

procedure UniformCostSearch(Graph, root, goal)
    node := root, cost := 0
    frontier := empty priority queue containing node
    explored := empty set
    do
        if frontier is empty
            return failure
        node := frontier.pop()
        if node is goal
            return solution
        explored.add(node)
        for each of node's neighbors n
            if n is not in explored
                if n is not in frontier
                    frontier.add(n)
                else if n is in frontier with higher cost
                    replace existing node with n
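
The following is a minimal Python sketch of the uniform-cost-search pseudocode above. The graph is assumed to be a dict of {node: {neighbor: step cost}}; that representation is an illustrative choice, not part of the original text.

import heapq

def uniform_cost_search(graph, root, goal):
    # Return (cost, path) of a least-cost path from root to goal, or None.
    frontier = [(0, root, [root])]      # priority queue of (total cost from root, node, path)
    explored = set()
    best_in_frontier = {root: 0}        # lowest cost currently queued for each node

    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path           # least-cost entries are popped first, so this is optimal
        if node in explored:
            continue                    # a cheaper copy of this node was already expanded
        explored.add(node)
        for n, step in graph[node].items():
            new_cost = cost + step
            if n not in explored and new_cost < best_in_frontier.get(n, float("inf")):
                best_in_frontier[n] = new_cost          # "replace" the costlier frontier entry
                heapq.heappush(frontier, (new_cost, n, path + [n]))
    return None                         # failure: goal not reachable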

Example

Expansion showing the explored set and the frontier (priority queue), with each frontier entry listed as Node(cost):

Start node: A    Goal node: G

Step 1:            Frontier: A(0)                Explored: (empty)
Step 2 (expand A): Frontier: D(3), B(5)          Explored: A
Step 3 (expand D): Frontier: B(5), E(5), F(5)    Explored: A, D
Step 4 (expand B): Frontier: E(5), F(5), C(6)    Explored: A, D, B
Step 5 (expand E): Frontier: F(5), C(6)          Explored: A, D, B, E
                   Note: B was not added to the priority queue because it was already explored.
Step 6 (expand F): Frontier: C(6), G(8)          Explored: A, D, B, E, F
Step 7 (expand C): Frontier: G(8)                Explored: A, D, B, E, F, C
Step 8 (expand G): Found the path: A to D to F to G
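
As a usage sketch, the uniform_cost_search function from the Pseudocode section above can be run on edge weights chosen to match this trace. Only the cumulative costs appear in the original; the individual edge weights below are assumptions consistent with those totals.

# Edge weights are assumptions consistent with the cumulative costs in the trace.
example_graph = {
    "A": {"D": 3, "B": 5},
    "B": {"C": 1, "E": 4},
    "C": {},
    "D": {"E": 2, "F": 2},
    "E": {"B": 4},
    "F": {"G": 3},
    "G": {},
}

# Reuses uniform_cost_search from the sketch in the Pseudocode section.
print(uniform_cost_search(example_graph, "A", "G"))   # expected: (8, ['A', 'D', 'F', 'G'])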
