This node deals with Monte Carlo simulations in the field of finance, more specifically option pricing. The concept has applications in many other fields as well.

The Monte Carlo simulation is used in finance to estimate the value of a European-style derivative security. In other words, rather than solving the pricing differential equation analytically, the expected payoff is approximated numerically through repeated random trials.

Suppose that the value of the option is dependent on two underliers, a stock index and an exchange rate. Monte Carlo simulation might be used to price such an option as follows:

  • Randomly generate 10,000 scenarios for the value, on the option's expiration date, of the two underliers. Do so in a manner that is consistent with an assumed (risk neutral) joint probability distribution of the two variables.
  • Determine what the option's expiration value would be under each of the 10,000 scenarios.
  • Form a histogram of those results. This represents a discrete approximation for the probability distribution of the option's expiration value. The discounted mean of the histogram is the estimated option price.

    This simulation is numerically efficient when there are multiple variables: the time it takes to execute a Monte Carlo simulation increases linearly with the number of variables, whereas for most other procedures it increases exponentially. One drawback is that the basic method can only be used with European-style options, since early exercise is difficult to handle.
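The three-step pricing recipe above can be sketched in a few lines of Python. All of the numbers here (the rates, volatilities, correlation between the two underliers, and a call payoff on their product) are illustrative assumptions, not values from the writeup:

```python
import math
import random

random.seed(0)
n_scenarios = 10_000
r = 0.05                  # risk-free rate (assumed)
T = 1.0                   # time to expiration, in years
s0, x0 = 100.0, 1.25      # initial stock index level and exchange rate
vol_s, vol_x = 0.20, 0.10 # volatilities of the two underliers (assumed)
rho = 0.3                 # assumed correlation between the underliers
strike = 130.0            # call option on (index * exchange rate)

payoffs = []
for _ in range(n_scenarios):
    # Two correlated standard normals (2x2 Cholesky factorization)
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
    # Risk-neutral lognormal terminal values for each underlier
    sT = s0 * math.exp((r - 0.5 * vol_s**2) * T + vol_s * math.sqrt(T) * z1)
    xT = x0 * math.exp((r - 0.5 * vol_x**2) * T + vol_x * math.sqrt(T) * z2)
    # The option's expiration value under this scenario
    payoffs.append(max(sT * xT - strike, 0.0))

# Discounted mean of the payoff distribution = estimated option price
price = math.exp(-r * T) * sum(payoffs) / n_scenarios
print(round(price, 2))
```

The histogram step from the recipe is implicit here: the `payoffs` list is the discrete approximation of the expiration-value distribution, and its discounted mean is the price estimate.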
    This writeup deals with a Monte Carlo simulation in the field of Solid State Physics, in particular, surface diffusion. However, the more abstract elements should transfer well to other fields.

    At a given temperature and configuration, each atom on the surface of a metal has a known chance of jumping into a neighboring lattice site. It also has some chance of knocking into another atom so that both move at the same time. Complications of this can lead to interesting dynamics, some of which experiments can observe without giving all of the details. For example, one can take stunning visual profiles of the surface with an STM, but not quickly enough to see anything but the final steady-state outcome; or one can bounce ions off the surface and measure the density of the various species of atoms even while the surface is still changing, but not know where in particular they are.

    With a somewhat simplified model, one can write a Monte Carlo simulation of the time development of this system using several algorithms. I have worked with two of them in my thesis work at Haverford:

    The Metropolis Algorithm

    is named after Nicholas Metropolis, who invented it in the 1950s. The people working with him included the (in?)famous Edward Teller. The algorithm is as follows (I use the term 'move' to refer to whatever simple transformations on the system the implementation deals with):

    1. Pick a random move (in our case, this meant picking an atom and a random direction; if that direction leads into another atom, then pick another random direction for that second atom to move).
    2. Figure out how probable this move is, based on the presence of other nearby atoms and the temperature.
    3. Get a random number on (0,1), and if the random number is lower than the probability, then carry out the move.
    4. Repeat, and advance the time by 1/(a constant prefactor).

    The Metropolis Algorithm usually rejects moves. There are ways around this, but unless done carefully they can discount some of the more probable moves, and no matter what they will make the statistics more 'grainy' (not a technical term).
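As a sketch, the four steps above might look like this in Python, on a toy 1-D ring of lattice sites. The energy model (each occupied neighboring site raises the hop barrier by a bond energy J) and all parameter values are illustrative assumptions, not the model from the thesis work:

```python
import math
import random

random.seed(1)
L = 20                    # lattice sites on a 1-D ring (toy model)
kT = 0.5                  # temperature, in units of the bond energy
J = 1.0                   # bond energy (assumed)
lattice = [False] * L
for i in random.sample(range(L), 6):   # place 6 atoms at random
    lattice[i] = True

def bonds(site):
    """Number of occupied neighbors of `site` on the ring."""
    return int(lattice[(site - 1) % L]) + int(lattice[(site + 1) % L])

moves_made = 0
time = 0.0
for _ in range(10_000):
    time += 1.0     # step 4: advance time by 1/(constant prefactor), here 1
    # step 1: pick a random move -- a random atom and a random direction
    site = random.choice([i for i, occ in enumerate(lattice) if occ])
    dest = (site + random.choice([-1, 1])) % L
    if lattice[dest]:
        continue    # destination occupied (concerted moves omitted here)
    # step 2: hop probability from the barrier set by the atom's bonds
    p = math.exp(-J * bonds(site) / kT)
    # step 3: accept only if a uniform random number on (0,1) is below p
    if random.random() < p:
        lattice[site], lattice[dest] = False, True
        moves_made += 1

print(moves_made, sum(lattice))
```

Step 3 is where the rejection happens: at low kT, `p` is small for well-bonded atoms and most trials are simply thrown out.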

    The BKL algorithm

    is named after the three people who invented it in the 1970s: Bortz, Kalos, and Lebowitz. The algorithm is as follows:

    1. Figure out how likely every single possible move is.
    2. Pile all of the probabilities on top of each other. By this I mean assign each possible move an interval of the real line such that the intervals exactly cover (0,Z), where Z is the sum of the probabilities, and each number has exactly one move associated with it.
    3. Pick a random number in between zero and Z.
    4. Carry out the move that this number corresponds to.
    5. Then figure out what new moves are possible, what old moves are no longer possible, and how all the other moves have changed.
    6. Repeat, advancing the time step by 1/(a constant prefactor x Z before the move).
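Here is a matching Python sketch of the BKL steps, on the same illustrative toy model (a 1-D ring with an assumed bond energy J). For brevity, the full move list is recomputed from scratch each iteration instead of being updated incrementally as step 5 describes:

```python
import math
import random

random.seed(2)
L = 20                    # lattice sites on a 1-D ring (toy model)
kT = 0.5                  # temperature, in units of the bond energy
J = 1.0                   # bond energy (assumed)
lattice = [False] * L
for i in random.sample(range(L), 6):   # place 6 atoms at random
    lattice[i] = True

def bonds(site):
    """Number of occupied neighbors of `site` on the ring."""
    return int(lattice[(site - 1) % L]) + int(lattice[(site + 1) % L])

time = 0.0
for _ in range(1_000):
    # step 1: enumerate every possible move and its probability (rate)
    moves, rates = [], []
    for site in range(L):
        if not lattice[site]:
            continue
        for step in (-1, 1):
            dest = (site + step) % L
            if not lattice[dest]:
                moves.append((site, dest))
                rates.append(math.exp(-J * bonds(site) / kT))
    # step 2: the cumulative sums of `rates` tile the interval (0, Z)
    Z = sum(rates)
    # step 3: pick a random number between zero and Z
    r = random.uniform(0.0, Z)
    # step 4: carry out the move whose interval contains that number
    chosen = moves[-1]    # fallback guards against float rounding
    acc = 0.0
    for mv, rate in zip(moves, rates):
        acc += rate
        if r <= acc:
            chosen = mv
            break
    site, dest = chosen
    lattice[site], lattice[dest] = False, True
    # step 6: advance time by 1/(constant prefactor x Z), prefactor = 1 here
    time += 1.0 / Z

print(sum(lattice), round(time, 3))
```

Note that every iteration executes a move; there are no rejections, at the cost of rebuilding (or, done properly, incrementally updating) the whole move list each step.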


    The Metropolis algorithm is much more widely used since each step is much faster, and since the computer programming required to make it work is much less mind-bending. However, if the probability of an arbitrary individual move is very low (if, let's say, the temperature is low, so the atoms do not have enough energy to overcome the barriers very often), then the Metropolis algorithm will spend almost all of its time throwing moves out... enough to slow it down beyond what the BKL algorithm would spend figuring out the consequences of its actions.

    @waterhouse, below: Your definition counts Monte Carlo evaluations as Monte Carlo simulations. The distinction is important. Monte Carlo evaluations are experimental in nature: attempts to determine the probability of the various outcomes by measuring repeatedly. You start out with no information, and assume that the information you get is typical (the more attempts, the more reliable). This is actually the quintessential experimental situation, and 'Monte Carlo' would not enter into the thought process of the experimenter.
    Monte Carlo simulations are theoretical in nature, attempts to determine the behavior of a system based on simple assumptions stated in probabilistic terms. You start out with all of the information, but you stochastically synthesize it into a usable form.
    All (or nearly all) active experimentation is Monte Carlo, while non-Monte Carlo theory is common.

    A Monte Carlo simulation is named such because of its use in gambling. Before gambling became a near-scientific practice designed to take money from suckers, the first casinos had no idea how to set bet limits, what a minimum bet should be, etc. In order to discover the odds that any given person had of winning a given casino game, the people behind the curtain would simply play the game until all possibilities had been exhausted or the necessary degree of precision had been obtained for the statistics of that game. Thus, in modern parlance, a Monte Carlo simulation is any simulation which derives its results from brute-force computation and which returns a value dependent upon the interpretation of the total data set. By that definition, flipping a coin one hundred times to find the ratio of heads to tails is a Monte Carlo simulation, but brute-forcing a password is not, because the only result obtained is the password itself, which has nothing to do, statistically, with the rest of the data set (the hundreds of passwords previously guessed). If the sought data were the ratio of false passwords to correct ones, well, that would fall under my definition of a Monte Carlo simulation.
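The coin-flip illustration above takes only a couple of lines of Python (the fair-coin probability of 0.5 is, of course, an assumption):

```python
import random

random.seed(42)
# Flip a fair coin one hundred times and report the heads-to-tails split.
heads = sum(random.random() < 0.5 for _ in range(100))
tails = 100 - heads
print(heads, tails)
```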
