rel_capped_rne.Rd
In a capped version of the game we assume that after period T the state can no longer change and always stays the same, i.e. after T periods players play a repeated game. For a given T, a capped game has a unique RNE payoff. See also rel_T_rne.
rel_capped_rne(
  g,
  T,
  delta = g$param$delta,
  rho = g$param$rho,
  adjusted.delta = NULL,
  beta1 = g$param$beta1,
  tie.breaking = c("equal_r", "slack", "random", "first", "last",
                   "max_r1", "max_r2", "unequal_r")[1],
  tol = 1e-12,
  add.iterations = FALSE,
  save.details = FALSE,
  save.history = FALSE,
  use.cpp = TRUE,
  T.rne = FALSE,
  spe = NULL,
  res.field = "eq"
)
| Argument | Description |
|---|---|
| g | The game. |
| T | The number of periods in which new negotiations can take place. |
| delta | The discount factor. |
| rho | The negotiation probability. |
| adjusted.delta | The adjusted discount factor (1-rho)*delta. Can be specified instead of delta. |
| beta1 | The bargaining weight of player 1, by default equal to 0.5. Can also be initially specified with |
| tie.breaking | A tie-breaking rule applied when multiple action profiles could be implemented on the equilibrium path with the same joint payoff U. Possible values are those listed in the usage above: "equal_r", "slack", "random", "first", "last", "max_r1", "max_r2", "unequal_r". |
| tol | Due to numerical inaccuracies, the calculated incentive constraints of some action profiles may be violated even though they would hold under exact computation, yielding unexpected results. We therefore also allow action profiles whose numerical incentive constraint is violated by no more than tol. The default is tol = 1e-12. |
| add.iterations | If TRUE, just add T iterations to the previously computed capped RNE or T-RNE. |
| save.details | If TRUE, details of the equilibrium are saved that can be analysed later by calling |
| save.history | If TRUE, saves the values for intermediate T. |
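A typical call might look like the following sketch. It assumes the usual RelationalContracts workflow of building a game with rel_game, rel_param, rel_states and rel_compile and then reading results with get_eq; the concrete state and payoff values here are purely illustrative, not taken from this help page.

```r
library(RelationalContracts)

# Illustrative one-state game (made-up payoffs, not from this help page)
g = rel_game("Capped game example") %>%
  rel_param(delta = 0.9, rho = 0.4) %>%
  rel_states("x0",
    A1 = list(a1 = c("C", "D")),
    A2 = list(a2 = c("C", "D")),
    pi1 = c(1, -1, 2, 0),
    pi2 = c(1, 2, -1, 0)
  ) %>%
  rel_compile()

# Solve the capped game: after period T = 20 the state is frozen
# and players effectively play a repeated game.
# Note: adjusted.delta would here be (1 - rho) * delta = 0.6 * 0.9 = 0.54.
g = rel_capped_rne(g, T = 20, save.history = TRUE)

eq = get_eq(g)  # equilibrium payoffs and action profiles per state
```

With save.history = TRUE the intermediate results for smaller T are kept as well, which makes it cheap to inspect how the capped RNE payoffs evolve as the cap grows.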