rel_state_probs.Rd

Adds a column state.prob to the computed equilibrium data frame, which can be retrieved by calling get_eq.
rel_state_probs(
  g,
  x0 = c("equal", "first", "first.group")[1],
  start.prob = NULL,
  n.burn.in = 100,
  n.averaging = 100,
  tol = 1e-13,
  eq.field = "eq"
)
g: the game object for which an equilibrium has been solved.

x0: the initial state, by default the first state.

start.prob: an optional vector of probabilities that specifies, for each state, the probability that the game starts in that state. Overrides x0 unless kept NULL.

n.burn.in: number of rounds before probabilities are averaged.

n.averaging: number of rounds over which probabilities are averaged.

tol: tolerance such that the computation already stops in the burn-in phase if no transition probability changes by more than tol.
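A minimal usage sketch, assuming g is a game object for which an equilibrium has already been solved with the package's solver functions; the column names selected at the end are assumptions for illustration, not guaranteed output names:

```r
# Assumption: g already holds a solved equilibrium.
g = rel_state_probs(g, x0 = "first", n.burn.in = 100, n.averaging = 100)

# get_eq retrieves the equilibrium data frame, which now
# contains the added state.prob column.
eq = get_eq(g)
eq[, c("x", "state.prob")]  # "x" as the state column is an assumption
```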
If the equilibrium strategy induces a unique stationary distribution over the states, this distribution should typically be found (or at least approximated). Otherwise the result can depend on the parameters.
The initial distribution over states is determined by the parameters x0 or start.prob. We then multiply the current probability vector n.burn.in times with the transition matrix on the equilibrium path. This yields the probability distribution over states assuming the game has been played for n.burn.in periods. We then continue the process for n.averaging rounds and return the mean of the state probability vectors over these rounds.
If between two rounds in the burn-in phase no transition probability changes by more than the parameter tol, we stop immediately and use the resulting probability vector.
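The burn-in, early-stopping, and averaging steps described above can be sketched in base R. Here trans.mat (a row-stochastic transition matrix on the equilibrium path) and p0 (the initial distribution) are hypothetical names, not the package's internals, and the early stop is checked on the change of the state probability vector:

```r
# Sketch of the procedure, under the assumptions stated above.
state_probs_sketch = function(trans.mat, p0, n.burn.in = 100,
                              n.averaging = 100, tol = 1e-13) {
  p = p0
  # Burn-in phase: repeatedly multiply with the transition matrix;
  # stop early if the probability vector changes by no more than tol.
  for (i in seq_len(n.burn.in)) {
    p.new = as.vector(p %*% trans.mat)
    if (max(abs(p.new - p)) <= tol) return(p.new)
    p = p.new
  }
  # Averaging phase: return the mean of the state probability
  # vectors over n.averaging further rounds.
  p.sum = p
  for (i in seq_len(n.averaging - 1)) {
    p = as.vector(p %*% trans.mat)
    p.sum = p.sum + p
  }
  p.sum / n.averaging
}
```

If the chain has a unique stationary distribution, the averaged vector approximates it; for periodic chains the averaging smooths out cycles that pure iteration would not.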
Note that for a T-RNE or capped RNE we always take the transition probabilities of the first period, i.e. we do not increase t in the actual state definition.