animate_capped_rne_history()
    Use ggplotly to show an animation of the payoff sets of a capped RNE going from t = T to t = 1.
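A hedged usage sketch: per the entries below, rel_capped_rne() solves a capped RNE, and solving with save.history = TRUE (see get_T_rne_history()) keeps the intermediate steps that the animation needs. The game object g and the cap T = 100 are illustrative assumptions, not taken from this index.

```r
# Illustrative sketch: g is assumed to be a compiled relational
# contracting game (see rel_compile()); T = 100 is an arbitrary cap.
g <- rel_capped_rne(g, T = 100, save.history = TRUE)

# Animate the payoff sets from t = T down to t = 1
animate_capped_rne_history(g)
```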
animate_eq_li()
    Use ggplotly to show an animation of the payoff sets of a list of equilibria.
compare_eq()
    Helper function to find differences between two equilibria.
diagnose_transitions()
    Inspect the computed transitions for each state using separate data frames.
eq_combine_xgroup()
    Aggregate equilibrium behavior in games with a random active player.
eq_diagram()
    Draws a diagram of equilibrium state transitions.
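A minimal sketch of drawing the transition diagram for a solved game; it assumes g is a compiled game and that eq_diagram() takes the game object holding the last computed equilibrium.

```r
# Sketch: solve for an SPE, then plot the state-transition
# diagram of the last computed equilibrium.
g <- rel_spe(g)
eq_diagram(g)
```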
eq_diagram_xgroup()
    Draws a diagram of equilibrium state transitions, aggregating states by xgroup.
get_eq()
    Get the last computed equilibrium of game g.
get_repgames_results()
    Get the results of all solved repeated games assuming the state is fixed.
get_rne()
    Get the last computed RNE of game g.
get_rne_details()
    Retrieve more details about the last computed RNE.
get_spe()
    Get the last computed SPE of game g.
get_T_rne_history()
    Get the intermediate steps from t = T to t = 1 for a T-RNE or capped RNE that has been solved with save.history = TRUE.
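A hedged sketch of the save-and-retrieve workflow; the horizon T = 50 and the game object g are illustrative assumptions.

```r
# Sketch: solve a T-RNE while saving the intermediate steps,
# then retrieve them; T = 50 is an arbitrary illustrative horizon.
g <- rel_T_rne(g, T = 50, save.history = TRUE)
hist <- get_T_rne_history(g)
head(hist)
```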
irv()
    Helper function to specify state transitions.
irv_joint_dist()
    Helper function to specify state transitions.
irv_val()
    Helper function to specify state transitions.
plot_eq_payoff_set()
    Show a base R plot of the equilibrium payoff set.
rel_after_cap_actions()
    Fix the action profiles on the equilibrium path (ae) and during punishment (a1.hat and a2.hat) that are assumed to be played from the cap in period T onwards. The punishment profile a1.hat is the profile in which player 1 already plays a best reply (in a1 he might play a non-best reply). From the action profiles specified in all states, we can compute the relevant after-cap payoffs U(x), v1(x) and v2(x), assuming that state transitions would continue.
rel_after_cap_payoffs()
    Specify the SPE payoff set(s) of the truncated game(s) after a cap in period T. While we could specify a complete repeated game that is played after the cap, it suffices to specify just an SPE payoff set of the truncated game for the after-cap state.
rel_capped_rne()
    Solve an RNE for a capped version of a game.
rel_change_param()
    Change the parameters of a relational contracting game.
rel_compile()
    Compiles a relational contracting game.
rel_eq_as_discounted_sums()
    Translate equilibrium payoffs into discounted sums of payoffs.
rel_first_best()
    Compute the first-best outcome.
rel_game()
    Creates a new relational contracting game.
rel_is_eq_rne()
    Checks whether an equilibrium eq with negotiation payoffs is an RNE.
rel_mpe()
    Tries to find an MPE by iteratively computing best replies.
rel_options()
    Set some game options.
rel_param()
    Add parameters to a relational contracting game.
rel_rne()
    Find an RNE for a (weakly) directional game.
rel_scale_eq_payoffs()
    Scale equilibrium payoffs.
rel_solve_repgames()
    Solves, for every specified state, the repeated game assuming the state is fixed.
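A hedged sketch pairing rel_solve_repgames() with get_repgames_results() from the entries above; the game object g is assumed to be compiled.

```r
# Sketch: solve the fixed-state repeated game for every state,
# then retrieve the results.
g <- rel_solve_repgames(g)
res <- get_repgames_results(g)
```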
rel_spe()
    Finds an optimal simple subgame perfect equilibrium of g, from which the whole SPE payoff set can be deduced.
rel_states() rel_state()
    Add one or multiple states. Allows specifying action spaces, payoffs, and state transitions via functions.
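The builder functions in this index compose into a pipeline: create a game, add parameters and states, compile, then solve. A minimal one-state sketch, assuming the magrittr pipe is available; the game name, action names (e1, e2), payoff formulas, and parameter values are all illustrative, not part of this index.

```r
library(RelationalContracts)

g <- rel_game("Mutual Gift Game") %>%
  rel_param(delta = 0.9) %>%               # discount factor (illustrative)
  rel_states("x0",
    A1  = list(e1 = seq(0, 1, by = 0.1)),  # player 1's action space
    A2  = list(e2 = seq(0, 1, by = 0.1)),  # player 2's action space
    pi1 = ~ e2 - 0.5 * e1^2,               # player 1's stage-game payoff
    pi2 = ~ e1 - 0.5 * e2^2                # player 2's stage-game payoff
  ) %>%
  rel_compile() %>%
  rel_spe()

get_eq(g)  # last computed equilibrium of g
```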
rel_state_probs()
    Compute the long-run probability distribution over states if an equilibrium is played for many periods.
rel_transition()
    Add a state transition from one state to one or several states. For more complex games, it may be preferable to use the trans.fun argument of rel_states() instead.
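A hedged sketch; the positional arguments (source state, then destination state) and the prob argument are assumptions based on the description above, not a confirmed signature.

```r
# Assumed usage: from state "x0" move to state "x1"
# with probability 0.1 (arguments are illustrative).
g <- rel_transition(g, "x0", "x1", prob = 0.1)
```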
rel_T_rne()
    Compute a T-RNE.