The Game of Monopoly: A Markov Process


Introduction:

Monopoly is a board game that dates back to 1904. Elizabeth Magie created its predecessor, The Landlord’s Game, to illustrate the effects of land monopolism and the use of a land value tax. Although some people enjoyed her game, it did not gain popularity until Charles Darrow reinvented it with new rules in 1935. He created the game Monopoly that is sold in stores today. Eventually, a company called Parker Brothers bought the game from Darrow, and in 1991 Hasbro acquired it from Parker Brothers. Hasbro still owns the game today.

The objective is to be the only player left in the game who is not bankrupt. It is unclear whether any single strategy guarantees victory. Monopoly can, however, be viewed as a Markov process, specifically a finite Markov chain. A player’s position changes each turn depending on the outcome of rolling two dice, so it is difficult to predict where a player will be after a turn. By studying Monopoly as a Markov process, we can quantify the likelihood of landing on a given square after each turn.

Purpose:

This paper will explain how a Markov process can help a player understand the probabilities of landing on certain spaces after each roll of the dice. We will treat each individual square on the board as a state when creating the Markov chain. For a model such as Monopoly to be Markovian, the probability of moving to a state must depend only on the current state and not on any previous states. Unfortunately, a player who ends a turn on the Jail space may stay in that state for up to three consecutive turns; they can exit the Jail space earlier by rolling doubles or by paying to leave. The probability of being in a future state would then depend on the player’s choice, which is not Markovian. In order to make the game Markovian, we will ignore this particular rule: players will have no choice and will leave Jail on their first turn. Considering the objective of the game, it is beneficial for a strategic player to know which properties are landed on most frequently, in order to increase income and decrease payout. The calculations in this paper will produce the long-term probability that a player ends a turn on a specific state. We will therefore be able to identify the most frequently visited spaces of the game and then form a plan to take ownership of those states and manage a player’s cash flow.

Definitions:

A stochastic process is a sequence of events in which the position at any stage depends on some probability.

The state space is the set of distinct values assumed by a stochastic process.

A Markov process is a stochastic process with the following characteristics:

(a) The number of possible states (outcomes) is finite.

(b) The probability of moving to any state depends only on the current state.

(c) The probabilities are constant over time.

A finite Markov chain is a Markov process whose transition probabilities pij(n), the probabilities of going from state i to state j at time n, do not depend on any states visited before the current one.

A transition matrix is an n × n matrix P, with entries pij, that describes the transition probabilities of a Markov chain.

An eigenvalue of a square matrix P is a number λ such that there exists some nonzero vector v satisfying Pv = λv. An eigenvector is a vector v corresponding to the eigenvalue.

A steady state is the long-term probability that a particular state is active.
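To connect the last two definitions: a steady state can be written as a probability row vector π that the transition matrix leaves unchanged, so it is a left eigenvector of P corresponding to the eigenvalue λ = 1:

$$\pi P = \pi, \qquad \sum_{i=1}^{k} \pi_i = 1$$

Finding a chain’s steady state therefore amounts to finding this particular eigenvector, which is how the long-term probabilities discussed later can be computed.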

Basic Example:

We will look at a basic example to explain the main concept of a finite Markov chain. Consider a four-square board with each square labeled one through four.

At time 0 a person is on square 1 and will flip a fair coin to decide whether he goes to square 2 or square 4. If the coin lands on heads he will go clockwise, and if it lands on tails he will go counterclockwise. Let Xn denote the square the player is on at time n. Hence, (X0, X1, X2, …, Xn) is a random process with state space {1, 2, 3, 4} that can be considered a Markov process, since a future state depends only on the current state.

Considering the player starts on square 1 at time 0, we have P(X0 = 1) = 1. When he flips the coin and moves, we have P(X1 = 2) = 1/2, P(X1 = 4) = 1/2, and P(X1 = 3) = 0, because the probability that the coin lands on heads is 1/2 and the probability that it lands on tails is also 1/2. There is no chance that he lands on square 3 in the first step. Computing the distribution of Xn for n ≥ 2 is more complex, so it will be useful to consider conditional probabilities. Suppose at time n the player is on square 2; then the conditional probabilities are P(Xn+1 = 1 | Xn = 2) = 1/2 and P(Xn+1 = 3 | Xn = 2) = 1/2.

When we calculate the probabilities conditioning on the entire history from time 0 up to time n, we get P(Xn+1 = 1 | X0 = i0, X1 = i1, …, Xn-1 = in-1, Xn = 2) = 1/2 and P(Xn+1 = 3 | X0 = i0, X1 = i1, …, Xn-1 = in-1, Xn = 2) = 1/2.

The coin flip at time n + 1 is independent of all the previous coin flips, which makes this example a random process with the Markov property. This example is also a finite Markov chain with a transition matrix P whose entries are the transition probabilities pij. The transition probabilities are calculated using

pij = P(Xn+1 = sj | Xn = si), where the state space is S = {s1, …, sk}. Knowing that the state space for this example is S = {1, 2, 3, 4}, we create the transition matrix

$$P = \begin{pmatrix} 0 & \frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{2} & 0 & \frac{1}{2} & 0 \\ 0 & \frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{2} & 0 & \frac{1}{2} & 0 \end{pmatrix}$$
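As a quick sanity check, the matrix above can be built and verified in a few lines of code. The following is a minimal sketch (using numpy; the variable names are our own illustration, not anything from the paper):

```python
import numpy as np

# Transition matrix for the four-square board: from every square the
# player moves clockwise or counterclockwise with probability 1/2.
P = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])

# Each row of a transition matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```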

Another important characteristic of a Markov chain, informing us how the chain starts, is the initial distribution. We denote the initial distribution as the row vector μ(0). This vector shows the probabilities that the player is on a given square at time 0. Considering we know that P(X0 = 1) = 1, we have

$$\mu^{(0)} = (1, 0, 0, 0)$$

We denote μ(1), μ(2), …, μ(k) as the distributions of the Markov chain at times 1, 2, …, k.

This gives us

$$\mu^{(n)} = \left( \mu_1^{(n)}, \mu_2^{(n)}, \ldots, \mu_k^{(n)} \right)$$

Knowing the initial distribution μ(0) and the transition matrix P, we can now calculate all the distributions for this chain.

Theorem: For a Markov chain (X0,X1,…,Xn) with state space {s1,…,sk}, initial distribution μ(0), and transition matrix P, we have for any n that the distribution μ(n) at time n satisfies

$$\mu^{(n)} = \mu^{(0)} P^n$$
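This theorem is straightforward to check numerically. Here is a minimal sketch for the four-square board, restating the matrix P and initial distribution from above (numpy again; our own illustration):

```python
import numpy as np

P = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
mu = np.array([1.0, 0.0, 0.0, 0.0])  # mu(0): the player starts on square 1

# mu(n) = mu(0) P^n, accumulated one step at a time.
for n in range(1, 5):
    mu = mu @ P
    print(n, mu)
# n = 1 gives (0, 1/2, 0, 1/2) and n = 2 gives (1/2, 0, 1/2, 0): the
# distribution alternates because this small chain has period 2.
```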

Knowing a Markov chain’s characteristics allows us to predict where a player might land at a given time.

Relation to Monopoly:

When looking at a model such as Monopoly, it is useful to build a finite Markov chain, including its transition matrix, in order to understand the long-term probability of a player landing on each square.

Each square of the board will be classified as its own state.

To begin creating the transition matrix, recall that there are 40 squares on the board. Each square is represented as a state of its own, so the transition matrix is a 40 × 40 matrix, with the first row of the matrix being

$$\left( 0,\ 0,\ \tfrac{1}{36},\ \tfrac{2}{36},\ \tfrac{3}{36},\ \tfrac{4}{36},\ \tfrac{5}{36},\ \tfrac{6}{36},\ \tfrac{5}{36},\ \tfrac{4}{36},\ \tfrac{3}{36},\ \tfrac{2}{36},\ \tfrac{1}{36},\ 0,\ 0,\ \ldots,\ 0 \right)$$

This row shows the probabilities of a player moving from the GO square, the first state, to any of the other squares when rolling two six-sided dice. The second row holds the probabilities of a player moving from the second square to any other square on the board, and the pattern continues up to the 40th state. Hence, there are 1,600 entries in total. The second row of the transition matrix is

0,0,0, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1,0,…,0 36 36 36 36 36 36 36 36 36 36 36

Another key computation for a Markov chain is the probability of going from state i to state j in a given number of time steps. We have already described the transition matrix at time step 1. If we want to know the probability of going from state i to state j at time step 2, there is an equation for this. For example, the probability of leaving state 1 and arriving at state 3 in two time steps is

$$p_{13}^{(2)} = p_{11}p_{13} + p_{12}p_{23} + p_{13}p_{33} + \cdots + p_{1,40}\,p_{40,3}$$

This is the dot product of two vectors: the first row of matrix P and the third column of P. In the general case, if the Markov chain has r states, then

$$p_{ij}^{(2)} = \sum_{k=1}^{r} p_{ik}\, p_{kj}$$
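In matrix terms, this sum is exactly the (i, j) entry of P². A short self-contained check against the simplified dice-only matrix from the previous sketch (again our own illustration):

```python
import numpy as np

# Rebuild the simplified dice-only transition matrix.
N = 40
dice = {s: (6 - abs(s - 7)) / 36 for s in range(2, 13)}
P = np.zeros((N, N))
for i in range(N):
    for s, prob in dice.items():
        P[i, (i + s) % N] = prob

# p_13^(2) is the (1, 3) entry of P squared: the dot product of row 1
# of P with column 3 of P (rows and columns are 0-indexed here).
manual = sum(P[0, k] * P[k, 2] for k in range(N))
assert np.isclose((P @ P)[0, 2], manual)
```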

The reason we look at future time steps is to calculate the long-term probability of ending a turn on each state. Using the matrix, we calculate the steady state of our Monopoly Markov process.
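The paper does not show the computation itself, but a standard way to obtain the steady state is to take a left eigenvector of the full 40 × 40 matrix for eigenvalue 1 and normalize it, as in the sketch below. Note that this assumes a matrix that also encodes the Go to Jail square and the card effects; the dice-only matrix sketched earlier is doubly stochastic, so its steady state would be uniform, and the concentration on Jail reported next comes from those extra rules.

```python
import numpy as np

def steady_state(P):
    """Left eigenvector of P for eigenvalue 1, scaled to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)     # eigen-decomposition of P transpose
    i = np.argmin(np.abs(vals - 1.0))   # pick the eigenvalue closest to 1
    pi = np.real(vecs[:, i])
    return pi / pi.sum()
```

Equivalently, one can raise P to a large power and read off any row, since for an irreducible, aperiodic chain every row of Pⁿ approaches the steady state.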

According to the resulting relative frequencies, a player is most likely to end a turn on the Jail square, which makes sense since there are multiple ways a person can end up in Jail. If we were to ignore Jail, the square a player is next most likely to end a turn on is Illinois.

One may ask, "Why is it important to know these long-term probabilities?" When a player lands on a property owned by someone else, they must pay that owner rent. Strategic players will want to determine which properties to acquire in order to increase their cash flow and eventually win the game. Hence, a player who is first to obtain Illinois is off to a good start.

References:

Isaacson, D. L., Madsen, R. W. (1985). Markov Chains: Theory and Applications. Malabar, FL: R.E. Krieger Pub.
Kemeny, J. G., Snell, J. L. (1981). Finite Markov Chains. New York: Springer.
Ash, R. B., Bishop, R. L. (1972). Monopoly as a Markov Process. Mathematics Magazine, 45(1), 26. doi:10.2307/2688377
Bilisoly, R. (2014). Using Board Games and Mathematica to Teach the Fundamen-
tals of Finite Stationary Markov Chains. https://search-ebscohost-com.proxy- kutztown.klnpa.org/login.aspx?direct=truedb=edsarxAN=edsarx.1410.1107site=eds- livescope=site
Johnson, R. W. (2003). Using Games to Teach Markov Chains. PRIMUS, 13(4),
337–348. https://search-ebscohost-com.proxy-kutztown.klnpa.org/login.aspx?direct=truedb= livescope=site
