A technique that improves the minimax algorithm for playing two-person zero-sum games by reducing the number of moves that a program needs to examine. If there are 30 possible moves for you and your opponent at any point in a chess game and you want to look four moves ahead, you'll need to evaluate 30^4 = 810,000 board positions. Alpha-beta pruning can potentially reduce the 30 moves per position to about 8 moves per position, giving a total of 8^4 = 4,096, a savings of 99.5%.
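As a sanity check, here's the cost comparison spelled out (the branching factors 30 and 8 are the hypothetical figures from the text, not exact results):

```python
# Full minimax examines b^d positions for branching factor b and depth d.
full = 30 ** 4      # 810,000 positions without pruning
pruned = 8 ** 4     # 4,096 positions with an effective branching factor of ~8
savings = 1 - pruned / full
print(full, pruned, round(savings * 100, 1))  # 810000 4096 99.5
```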

The technique starts with the observation that when you're playing a game and looking ahead some number of moves, you may realize that certain positions can never be better than the best move you've already found. When you recognize that situation, you look no further down that line of moves. If you represent all of your possible moves and your opponent's possible responses as a tree (often called a look-ahead tree or a game tree), you're effectively pruning off branches that aren't worth exploring.

Here's a very simple example. Suppose we have a game where you and your opponent have two possible moves at any point, and you want to look two moves deep (in other words, how will my opponent reply to each of my moves?). Each position in the game is given a score, where a positive score is good for you and a negative score is good for your opponent. According to the minimax algorithm, you will choose the move with the maximum score and your opponent will choose the move with the minimum score. We start by looking at one of our two possible moves and then at each of our opponent's possible replies, one of which gives a score of 9 and the other a score of 4. We can show this graphically like this:

```
     /\
    /  \
   /    \
  /\    /\
 /  \  /  \
9   4
```

Since your opponent picks moves best for him (i.e., he minimizes), you assume he'll pick the second move. Therefore, if you select the first move, you'll end up in a position with a value of 4. You might be able to do better with the second move, so your expectation at this point is that you're going to end up with a score of 4 or greater (your other move might turn out to be better, but we don't know that yet). Graphically, it looks like this:

```
    >=4
     /\
    /  \
 4 /    \
  /\    /\
 /  \  /  \
9   4
```

Now let's explore the second move. We discover immediately that our opponent's first reply gives a board with a score of 3. Since our opponent always minimizes, we know that our second move will lead to a score no better than 3:

```
    >=4
     /\
    /  \
 4 /    \ <=3
  /\    /\
 /  \  /  \
9   4  3
```

At this point, we know that the first move will always be better for us than the second move, regardless of what our opponent's other move might be. We prune the last branch of the tree since it's irrelevant, and instead of looking at four final positions in the tree, we've only looked at three.
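The walkthrough above can be sketched in a few lines of Python. This is a minimal illustration, not a full implementation: the tree `[[9, 4], [3, 7]]` mirrors the example, where the 7 stands in for the leaf we never examine (its value is arbitrary), and `evaluated` counts which leaves we actually looked at.

```python
evaluated = []

def leaf(score):
    # Record that we actually evaluated this final position.
    evaluated.append(score)
    return score

def best_score(children):
    best = float("-inf")           # best line found so far
    for replies in children:       # our moves (maximizing level)
        value = float("inf")
        for score in replies:      # opponent's replies (minimizing level)
            value = min(value, leaf(score))
            if value <= best:      # this line can't beat our best: prune
                break
        best = max(best, value)
    return best

print(best_score([[9, 4], [3, 7]]), evaluated)  # 4 [9, 4, 3]
```

Only three of the four leaves are evaluated, just as in the diagrams: once the opponent's reply of 3 is seen, the fourth leaf is skipped.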

Alpha-beta pruning is all about keeping track of the best that can be hoped for at any point in the look-ahead tree and comparing it to the best that can be hoped for in the previous position. If the previous position can't be improved upon, we can skip this part of the tree and thereby reduce our effort. By the way, the name comes from the formal definition of the algorithm: alpha represents the best value that the player can hope for, and beta represents the best value that the opponent can hope for.

In addition to Percepied's writeup, here's a quick'n'dirty pseudocode algorithm for alpha-beta pruning:

```
minimax-value max-value(gamestate, alpha, beta)
{
    if(cutoff-test(gamestate)) return eval(gamestate)
    for each s in successors(gamestate)
    {
        alpha = max(alpha, min-value(s, alpha, beta))
        if(alpha >= beta) return beta
    }
    return alpha
}

minimax-value min-value(gamestate, alpha, beta)
{
    if(cutoff-test(gamestate)) return eval(gamestate)
    for each s in successors(gamestate)
    {
        beta = min(beta, max-value(s, alpha, beta))
        if(alpha >= beta) return alpha
    }
    return beta
}
```

Note that the return value of both functions is a minimax value for a state. You should probably have a separate function that goes through each successor state of a particular gamestate and assigns it the return value of min-value(state, alpha, MAX). (The reason for the MAX constant is so that the first value min-value looks at will be assigned to beta. This also means that this external function's first call to min-value will be min-value(state, MIN, MAX); after that, the function will keep track of a best value alpha and pass that instead of MIN.) Also note that in general, the cutoff test is a ply-depth limit a la depth-limited search, so an additional variable ply may need to be passed along and decremented (or incremented, depending on your implementation) at each call.
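Here's a runnable Python rendering of that pseudocode, a sketch under some simplifying assumptions: a "gamestate" is either a number (a leaf score, which doubles as the cutoff test and eval) or a list of successor states, and `best_move` is the external driver function described above.

```python
import math

def max_value(state, alpha, beta):
    if isinstance(state, (int, float)):  # cutoff-test: a leaf score
        return state                     # eval(gamestate)
    for s in state:                      # successors(gamestate)
        alpha = max(alpha, min_value(s, alpha, beta))
        if alpha >= beta:
            return beta                  # prune: opponent avoids this line
    return alpha

def min_value(state, alpha, beta):
    if isinstance(state, (int, float)):
        return state
    for s in state:
        beta = min(beta, max_value(s, alpha, beta))
        if alpha >= beta:
            return alpha                 # prune: we avoid this line
    return beta

def best_move(state):
    # The external function: score each successor with min-value, passing
    # along the best alpha found so far, and return the best move's index.
    alpha, best = -math.inf, None
    for i, s in enumerate(state):
        v = min_value(s, alpha, math.inf)
        if v > alpha:
            alpha, best = v, i
    return best

# The tree from the example above: move 0 is best, with a score of 4.
print(best_move([[9, 4], [3, 7]]))  # 0
```

A depth-limited version would thread a `ply` parameter through both functions and treat `ply == 0` as part of the cutoff test, as noted above.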