By Nagesh Singh Chauhan

Introduction to Monte Carlo Simulation in Statistics

Updated: Sep 20, 2021

A super simple concept that still lets us solve really complex problems.

If we summarize what Monte Carlo simulation is in one line, then it would be:

"Fake it a billion times until we kind of know what the reality is."


Introduction


Monte Carlo simulation lets you see all the possible outcomes of your decisions and assess the impact of risk, allowing for better decision-making under uncertainty.


A Monte Carlo simulation can be used to tackle a range of problems in virtually every field such as finance, engineering, supply chain, and science. It is also referred to as a multiple probability simulation.


An early precursor of Monte Carlo simulation is Buffon's needle problem, in which π can be estimated by dropping needles on a floor made of parallel, equidistant strips. The modern version of Monte Carlo simulation was invented by Stanislaw Ulam, who also devised the modern version of the Markov Chain Monte Carlo (MCMC) technique, during his work on nuclear weapons projects in World War II, and John von Neumann programmed the ENIAC, one of the earliest programmable computers, to perform Monte Carlo calculations.


Ulam's proposed approach, which was based on a list of truly random numbers, was extremely slow, but von Neumann developed a method to calculate pseudorandom numbers that was much faster. The name of the approach comes from the Monte Carlo Casino in Monaco, where Ulam's uncle would borrow money from relatives to gamble.

Von Neumann is also known for his famous trick for making an unfair coin fair: toss it twice and keep only the differing HT and TH outcomes, which occur with equal probability, discarding HH and TT.
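As a quick aside, the trick is easy to verify with a short simulation; here is a minimal sketch (the function names are my own):

import random

def biased_coin(p=0.3):
    #Returns 'H' with probability p, 'T' otherwise
    return 'H' if random.random() < p else 'T'

def fair_flip(p=0.3):
    #Toss the biased coin twice; keep only the HT/TH outcomes,
    #which both occur with probability p*(1-p)
    while True:
        first, second = biased_coin(p), biased_coin(p)
        if first != second:
            return first

flips = [fair_flip() for _ in range(100000)]
print(flips.count('H') / len(flips))    #close to 0.5 despite the biased source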


What is Monte Carlo Simulation?


Monte Carlo simulations are a wide class of computational algorithms that rely on repeated random sampling. The fundamental notion of Monte Carlo is to use randomness to solve problems that might be deterministic in principle. Monte Carlo simulation is one of the most popular ways to draw conclusions about a population without knowing its true underlying distribution. This sampling technique is particularly useful when one doesn't have the resources to repeatedly sample from the original population.


The intention behind Monte Carlo is that, as we use more samples, our solution should get more and more precise.

Monte Carlo methods vary, but tend to follow a distinct pattern (a minimal sketch in Python follows this list):

  1. Define a domain of possible inputs

  2. Generate inputs randomly from a probability distribution over the domain

  3. Perform a deterministic computation on the inputs

  4. Aggregate the results
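For example, here is a minimal sketch of the pattern, estimating the probability that two uniform random numbers sum to more than 1 (the exact answer is 1/2):

import random

random.seed(0)
N = 100000    #number of trials

hits = 0
for _ in range(N):
    #Steps 1-2: the domain is the unit square; draw (x, y) uniformly from it
    x, y = random.random(), random.random()
    #Step 3: a deterministic computation on the inputs
    if x + y > 1:
        hits += 1

#Step 4: aggregate the results
print(hits / N)    #close to the exact answer, 0.5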


Why use Simulation?


If I were to highlight one (oversimplified) advantage of Simulation over ML algorithms, it would be this: Exploration. We use Simulation to understand the inner workings of a system at any scale (e.g. the world, a community, a company, a team, a person, a fleet, a car, a wheel, an atom, etc.).


By re-creating a system virtually with simulations, we can calculate and analyze hypothetical results without actually changing the world or waiting for real events to happen. In other words, Simulations allow us to ask bold questions and develop tactics to manage various future outcomes without much risk and investment.


How does Monte Carlo Simulation Work?


Monte Carlo methods are based on the law of large numbers (LLN). According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and it tends to become closer as more trials are performed.


The LLN is important because it guarantees stable long-term results for the averages of some random events.


For example, a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of a single roll is:

(1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5

According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) is likely to be close to 3.5, with the precision increasing as more dice are rolled.
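We can watch this convergence with a few lines of Python:

import random

random.seed(0)
for n in [10, 1000, 100000]:
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(n, sum(rolls) / n)    #the sample mean approaches 3.5 as n grows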


Unlike a normal forecasting model, Monte Carlo Simulation predicts a set of outcomes based on an estimated range of values rather than a set of fixed input values. In other words, a Monte Carlo Simulation builds a model of possible results by leveraging a probability distribution, such as a uniform or normal distribution, for any variable that has inherent uncertainty. It then recalculates the results over and over, each time using a different set of random numbers between the minimum and maximum values. In a typical Monte Carlo experiment, this exercise can be repeated thousands of times to produce a large number of likely outcomes.


When a Monte Carlo Simulation is complete, it yields a range of possible outcomes with the probability of each result occurring.


One simple example of a Monte Carlo Simulation is calculating probabilities for rolling two standard dice. There are 36 combinations of dice rolls, so you can compute the probability of a particular outcome by hand. Using a Monte Carlo Simulation, you can instead simulate rolling the dice 10,000 times (or more), and the estimated probabilities will approach the exact ones.
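For instance, here is a minimal sketch that estimates the probability that the two dice sum to 7, which we know exactly to be 6/36 ≈ 0.1667:

import random

random.seed(0)
N = 10000
sevens = 0
for _ in range(N):
    #Roll two standard dice and check whether they sum to 7
    if random.randint(1, 6) + random.randint(1, 6) == 7:
        sevens += 1
print(sevens / N)    #close to the exact probability 6/36 ≈ 0.1667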


How to use Monte Carlo methods?


Depending on the number of factors involved, simulations can be very complex. But at a basic level, all Monte Carlo simulations have four simple steps:


1. IDENTIFY THE TRANSFER EQUATION

To create a Monte Carlo simulation, you need a quantitative model of the business activity, plan, or process you wish to explore. The mathematical expression of your process is called the “transfer equation.” This may be a known engineering or business formula, or it may be based on a model created from a designed experiment (DOE) or regression analysis. Software like Minitab Engage and Minitab Workspace gives you the ability to create complex equations, even those with multiple responses that may be dependent on each other.


2. DEFINE THE INPUT PARAMETERS

For each factor in your transfer equation, determine how its data are distributed. Some inputs may follow the normal distribution, while others follow a triangular or uniform distribution. You then need to determine distribution parameters for each input. For instance, you would need to specify the mean and standard deviation for inputs that follow a normal distribution. If you are unsure of what distribution your data follow, Engage and Workspace have a tool to help you decide.


3. SET UP SIMULATION

For a valid simulation, you must create a very large, random data set for each input, something on the order of 100,000 instances. These random data points simulate the values that would be seen over a long period for each input. While it sounds like a lot of work, this is where Engage and Workspace shine: once we submit the inputs and the model, everything here is taken care of.


4. ANALYZE PROCESS OUTPUT

With the simulated data in place, you can use your transfer equation to calculate simulated outcomes. Running a large enough quantity of simulated input data through your model will give you a reliable indication of what the process will output over time, given the anticipated variation in the inputs.
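Outside of Engage and Workspace, the same four-step workflow can be sketched in a few lines of Python; the transfer equation and input distributions below are made up purely for illustration:

import numpy as np

rng = np.random.default_rng(42)
N = 100000    #a very large random data set for each input

#Step 2: input distributions (assumed for illustration)
x1 = rng.normal(10.0, 0.5, N)    #input 1 follows Normal(mean 10, sd 0.5)
x2 = rng.uniform(4.0, 6.0, N)    #input 2 follows Uniform(4, 6)

#Steps 1 and 3: run the inputs through the transfer equation (assumed: Y = 3*X1 + 2*X2)
y = 3 * x1 + 2 * x2

#Step 4: analyze the simulated process output
print(y.mean(), y.std(), np.percentile(y, [5, 95]))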


Let us consider some examples to understand Monte Carlo by applying it to solve some problems.


The most common example of Monte Carlo simulation is using it to estimate Pi (π). To do so, first consider a circle with diameter 1 inscribed in a square of side 1.




The idea is to simulate random (x, y) points in a 2-D plane whose domain is a square of side 1 unit. We then calculate the ratio of the number of points that lie inside the circle to the total number of generated points.

We know that the area of the square is 1 square unit, while that of the circle is π*(1/2)^2 = π/4. Now, for a very large number of generated points,

(number of points inside the circle) / (total number of points) ≈ (area of the circle) / (area of the square) = π/4

that is,

π ≈ 4 * (number of points inside the circle) / (total number of points)
The beauty of this algorithm is that we don't need any graphics or simulation display for the generated points. We simply generate random (x, y) pairs and check whether x^2 + y^2 <= 1. If so, we increment the count of points that fall inside the circle. In randomized and simulation algorithms like Monte Carlo, the greater the number of iterations, the more accurate the result.


We can write the Python code for the above approach:

#First I begin by importing the modules we need
import random
from math import sqrt

#Setting the seed
random.seed(1991)    #Guess my birth year

#This function does all the calculations for us
def PiEstimation(n):
    #I define a variable to store the number of points inside the circle
    circle = 0

    #loop to calculate the points we need for our estimation
    for i in range(1,n+1):
        x = random.random()
        y = random.random()
        
        #Checking to see if the produced number falls into the circle
        if sqrt(x**2 + y**2) <= 1:
            circle += 1

    #I calculate Pi for this iteration
    Pi = 4 * (circle/n)

    #returning the result
    return Pi

#n
n=[10,100,1000,10000,100000,10000000,100000000,200000000]

#I define an empty dictionary to store the results of iterations to it
Pi_storage = {}

#Running a loop to estimate Pi with different number of random points
for j in n:
    Pi_storage[j]=PiEstimation(j)

#Printing the results to the console
for k in n:
    print ("The result of pi estimation after " + str(k) + " attempts is equal to: " + str(Pi_storage[k]))

Output:


The result of pi estimation after 10 attempts is equal to: 2.4
The result of pi estimation after 100 attempts is equal to: 3.12
The result of pi estimation after 1000 attempts is equal to: 3.196
The result of pi estimation after 10000 attempts is equal to: 3.1448
The result of pi estimation after 100000 attempts is equal to: 3.14184
The result of pi estimation after 10000000 attempts is equal to: 3.1414672
The result of pi estimation after 100000000 attempts is equal to: 3.14155472
The result of pi estimation after 200000000 attempts is equal to: 3.14174194

As is evident from the results, the larger the number of attempts (n), the closer we get to the value of Pi. However, even after 200 million attempts, only the first four digits of Pi have been estimated correctly; the error of a Monte Carlo estimate shrinks slowly, roughly in proportion to 1/√n.


Let us take another example: say we want to estimate the relationship between a pair of independent and dependent variables, and we have some idea about the possible values of the intercept parameter alpha and the slope parameter beta.


What we can do is randomly sample from a normal distribution to generate the error terms and, from those, the dependent and independent variable values. We can then estimate the slope coefficient, beta_hat, and repeat this process M = 10000 times. By the LLN, the sample mean of these 10000 beta_hats will be an unbiased estimate of the true beta. That is:

beta_hat_MC = (1/M) * (beta_hat_1 + beta_hat_2 + ... + beta_hat_M) ≈ beta


import numpy as np
import statsmodels.api as sm
np.random.seed(2021)
mu = 0 
sigma = 1 
n = 100 
# assumed population parameters
alpha = np.repeat(0.5, n)
beta = 1.5

def MC_estimation_slope(M):
    MC_betas = []
    MC_samples = {}

    for i in range(M):
        # randomly sampling from normal distribution as error terms
        e = np.random.normal(mu, sigma, n)
        # generating independent variable by making sure the variance in X is larger than the variance in error terms
        X = 9 * np.random.normal(mu, sigma, n)
        # population model using the assumed parameter values alpha/beta
        Y = (alpha + beta * X + e)

        # running OLS regression to get the slope parameter (no intercept
        # term; X is centered at zero, so the slope estimate is unbiased on average)
        model = sm.OLS(Y.reshape((-1, 1)), X.reshape((-1, 1)))
        ols_result = model.fit()
        coeff = ols_result.params
        
        MC_samples[i] = Y
        MC_betas.append(coeff)
    MC_beta_hats = np.array(MC_betas).flatten()
    return(MC_samples, MC_beta_hats)
    
MC_samples, MC_beta_hats = MC_estimation_slope(M = 10000)
beta_hat_MC = np.mean(MC_beta_hats)

Output:


MC_beta_hats  array([1.49450864, 1.47494779, 1.50035304, ..., 1.50746915, 1.49457376,        1.49608805])

beta_hat_MC  1.5001615366005294

Let us take another example: approximating integrals.


For well-known functions such as the Normal distribution function, calculating the integral may be simple and not require MC. However, for more complicated functions, computing integrals can be very hard, and in those situations MC may be the only practical way to calculate the integral.


MC for approximating integrals is based on the LLN. The idea behind it is that if we can generate random samples x_i from a given distribution P(x), then we can estimate the expected value of a function under this distribution by summation rather than integration:

E[h(X)] = ∫ h(x) P(x) dx ≈ (1/n) * (h(x_1) + h(x_2) + ... + h(x_n))

Stated differently, we can find the value of an integral by determining the average value of its integrand, h(x). MC is based on this principle, as we saw earlier.



Let's say we want to obtain the probability Pr[X ≥ 3] where X ~ Norm(10, 2). This probability can be expressed as the integral

Pr[X ≥ 3] = ∫ from 3 to ∞ of f(x) dx

where f(x) is the pdf of the Normal(10, 2) distribution. This integral can then be obtained using Monte Carlo simulation by calculating the quantity 10000 times and taking the mean of these values.


import numpy as np
from scipy.stats import norm

def Integral_estimation(M,mu,sigma):
    prob_larger_than3 = []

    for i in range(M):
        # P[Z >= 3] = 1 - P[Z <= 3] = 1 - norm.cdf(3, mu, sigma),
        # which the survival function norm.sf computes directly
        p = norm.sf(3, mu, sigma)
        prob_larger_than3.append(p)
    MC_approximation_prob = np.array(prob_larger_than3).mean()
    return(MC_approximation_prob)

Integral_estimation(M = 10000, mu = 10, sigma = 2)

Output:

0.9997673709209641
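Note that the loop above appends the same closed-form value M times. A more typical Monte Carlo estimate draws samples from the distribution itself; here is a minimal sketch of that sampling-based approach:

import numpy as np

np.random.seed(2021)
#Draw 10000 samples from Norm(10, 2) and take the fraction that are >= 3
samples = np.random.normal(10, 2, 10000)
print(np.mean(samples >= 3))    #close to the exact value 0.99977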


How do casinos earn money?


The trick is straightforward — “The more you play, the more the casinos earn.” Let us take a look at how this works with a simple Monte Carlo Simulation example.


Consider an imaginary game in which a player has to choose a chip from a bag of chips.


Rules:

1. There are chips containing numbers ranging from 1–100 in a bag.

2. Users can bet on even or odd chips.

3. In this game, 10 and 11 are special numbers. If we bet on evens, then 10 will be counted as an odd number, and if we bet on odds, then 11 will be counted as an even number.

4. If we bet on even numbers and we get 10 then we lose.

5. If we bet on odd numbers and we get 11 then we lose.

6. If we bet on odds, the probability that we win is 49/100, and the probability that the house wins is 51/100. Therefore, for an odd bet the house edge is 51/100 - 49/100 = 2/100 = 0.02 = 2%.


If we bet on evens, the probability that we win is likewise 49/100, and the probability that the house wins is 51/100. Hence, for an even bet the house edge is also 51/100 - 49/100 = 2/100 = 0.02 = 2%.


In summary, for every $1 bet, $0.02 goes to the house on average, since the expected return is 0.49 * (+$1) + 0.51 * (-$1) = -$0.02. In comparison, the house edge on roulette with a single 0 is about 2.7%. Consequently, you have a slightly better chance of winning at our imaginary game than at roulette.


#Import required libraries :

import random
import matplotlib.pyplot as plt

"""RULES : 
1) There are chips containing numbers ranging from 1-100 in a bag.
2) Users can bet on even or odd.
3) In this game 10 and 11 are special numbers. 10 will be counted as an odd number and 11 will be counted as an even number.
4) If you bet on even number and if you get 10 then you lose.
5) If you bet on odd number and if you get 11 then you lose.
"""

#Place your bet:

#User can choose even or odd number :
choice = input("Do you want to bet on Even number or odd number \n")

#For even :
if choice=="Even":
    def pickNote():
        #Get random number between 1-100.
        note = random.randint(1,100)
       
        #Check for our game conditions.
        
        #Notice that 10 isn't considered as even number.
        if note%2!=0 or note==10:
            return False
        elif note%2==0:
            return True

#For odd :        
elif choice=="Odd":
    def pickNote():
        #Get random number between 1-100.
        note = random.randint(1,100)
        
        #Check for our game conditions.
        
        #Notice that 11 isn't considered as odd number.
        if note%2==0 or note==11:
            return False
        elif note%2==1:
            return True  
            
#Main function :
def play(total_money, bet_money, total_plays):

    num_of_plays = []
    money = []
    
  
    for play in range(total_plays):
        #Win :
        if pickNote():
            #Add the money to our funds
            total_money = total_money + bet_money
            #Append the play number
            num_of_plays.append(play)
            #Append the new fund amount
            money.append(total_money)
        
        #Lose :
        else:
            #Add the money to our funds
            total_money = total_money - bet_money 
            #Append the play number
            num_of_plays.append(play)
            #Append the new fund amount
            money.append(total_money)
    
    #Plot the data :
    plt.ylabel('Player Money in $')
    plt.xlabel('Number of bets')
    plt.plot(num_of_plays,money)

    #Final value after all the iterations :
    final_funds.append(money[-1])
    return(final_funds)
    
#Create a list for storing the final funds of each iteration
final_funds = []

#Run 10 iterations :
for i in range(10):
    ending_fund = play(10000, 100, 50)

#Display the overlaid plots of all iterations
plt.show()

print(ending_fund)
print(sum(ending_fund))

#Print the average money the player ends with across the iterations
print("The player started with $10,000")
print("The player left with $", str(sum(ending_fund)/len(ending_fund)))

After 10 iterations and 50 bets:


After 1000 iterations and 50 bets:


After 100 bets:


After 1000 bets:


From the above experiment, we can see that the player has a better chance of ending with a profit if they place fewer bets. In some scenarios, we get negative numbers, which means that the player lost all of their money and accumulated debt instead of making a profit.


Monte Carlo in Machine Learning


In machine learning, Monte Carlo methods provide the basis for resampling techniques like the bootstrap method for estimating a quantity, such as the accuracy of a model on a limited dataset.


Random sampling of model hyperparameters when tuning a model is a Monte Carlo method, as are ensemble models used to overcome challenges such as the limited size and noise in a small data sample and the stochastic variance in a learning algorithm.

  • Resampling algorithms.

  • Random hyperparameter tuning.

  • Ensemble learning algorithms.

Monte Carlo methods also provide the basis for randomized or stochastic optimization algorithms, such as the popular Simulated Annealing optimization technique.
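As a small illustration of the first item, here is a minimal sketch of the bootstrap: resample a dataset with replacement many times and look at the spread of the statistic across resamples (the dataset and statistic are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(5.0, 2.0, 200)    #a small observed sample (made up)

#Bootstrap: resample with replacement and recompute the statistic each time
boot_means = [rng.choice(data, size=len(data), replace=True).mean()
              for _ in range(10000)]

#The spread of the resampled means gives a confidence interval for the mean
print(np.mean(boot_means), np.percentile(boot_means, [2.5, 97.5]))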


Conclusion


As with any forecasting model, the simulation will only be as good as the estimates we make. It is important to remember that a Monte Carlo Simulation represents probabilities, not certainties. Nevertheless, it can be a valuable tool when forecasting an unknown future.


Feel free to connect with me on LinkedIn for any queries.

