$\begingroup$

I am trying to create a very simple mathematical "game" that involves:

  • Two players take turns and are competing against each other (Player 1 and Player 2)
  • Each player can either perform "Action A" or "Action B"
  • There is some element of probability

I would like this game to illustrate the following point: in some situations "Action A" is, on average, more likely to benefit the player, and in other situations "Action B" is. Ideally, we should be able to identify the "conditions" (e.g. Player 1 is in one situation and Player 2 is in another) under which "Action A" or "Action B" is more favorable, both through probabilistic reasoning and via simulation (e.g. randomly play the game again and again and, from the results, estimate the conditional probability of winning with "Action A" vs. "Action B" at different points in the game). Ideally, I would like to be able to code the simulation in some programming language such as "R".

I would like to make this game as simple as possible to explain. I tried doing some research to see if such a "game" might already exist - the closest thing that I could find to such a problem was a "game" called the Monty Hall Problem (https://en.wikipedia.org/wiki/Monty_Hall_problem).

In the Monty Hall game, there is a prize behind one of three doors - a player chooses one of the doors, and the host then opens one of the remaining two doors (never the door containing the prize). The player now has the option of "switching his choice of door" (Action A) or "keeping his choice of door" (Action B). (Note: we can call the person choosing the doors "Player 1" and the host of the gameshow "Player 2".)

However, using the Laws of Probability, it can be shown that performing Action A (switching) consistently leads to better odds of winning the prize than performing Action B (keeping). Initially, Player 1 has a 1/3 chance of having picked the prize door, and the host's reveal does not change this: keeping the same door still wins with probability 1/3. Switching wins in exactly those cases where the initial pick was wrong, i.e. with probability 2/3. Since 2/3 > 1/3, it is always better to switch.
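For what it's worth, this "2/3 vs. 1/3" claim is easy to verify by simulation. Here is a minimal R sketch (the variable names are my own):

```r
# Monte Carlo check of the Monty Hall problem.
set.seed(1)
n <- 100000
prize  <- sample(3, n, replace = TRUE)  # door hiding the prize
choice <- sample(3, n, replace = TRUE)  # player's initial pick
# The host always opens a losing, unchosen door, so:
# keeping wins exactly when the initial pick was right,
# switching wins exactly when it was wrong.
win_keep   <- mean(choice == prize)  # approx 1/3
win_switch <- mean(choice != prize)  # approx 2/3
```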

This being said, I would like to either find/create a "game" in which sometimes it is more advantageous to "switch doors", but other times it is more advantageous to "keep the same door".

Can someone please help me create/find a simple game in which some conditions make "Action A" more favorable and other conditions make "Action B" more favorable - where these "conditions" can both be determined theoretically (e.g. with a "probability tree" (https://upload.wikimedia.org/wikipedia/commons/thumb/9/9c/Probability_tree_diagram.svg/1200px-Probability_tree_diagram.svg.png)) and via simulation results (e.g. conditional probabilities and contingency tables)?

Thanks!

Note: Perhaps the "Prisoner's Dilemma Game" (https://en.wikipedia.org/wiki/Prisoner%27s_dilemma) can be adapted for this purpose, with some "conditions" making it more favorable to "cooperate" and other "conditions" making it more favorable "not to cooperate"? With my very limited knowledge and understanding of Game Theory, it seems that in the Prisoner's Dilemma Game one action (defecting) is always and consistently more favorable than the other, so the game has a single Nash Equilibrium in pure strategies. I think the situation I am interested in is better characterized by "Mixed Strategies" (https://en.wikipedia.org/wiki/Strategy_(game_theory)#Pure_and_mixed_strategies), meaning that no single action can be said to be consistently better than the other actions at all times.

$\endgroup$
  • $\begingroup$ Consider Draw Poker, where person B acts first and all cards are dealt face-up. However, A must choose his discards before seeing what cards B has re-drawn. For example, you usually hold on to a pair; but if A has a pat flush, and B has $4$ to a higher flush, with a pair, then he should break the pair. Related - when you have a small pair and a high kicker, should you draw $3$ or keep the kicker? $\endgroup$ Commented Feb 28, 2022 at 23:22
  • $\begingroup$ Re previous comment - a related variant: A does not have to choose his discards until after he sees what cards B has re-drawn. Even more complicated: immediately after B has re-drawn, but before his cards are revealed, a coin is flipped. If Heads, then A must immediately re-draw, without knowing what cards B has re-drawn. If Tails, then A can wait until he sees the results of B's re-draw before he (A) decides how many cards to re-draw. $\endgroup$ Commented Feb 28, 2022 at 23:33
  • $\begingroup$ @user2661923: thank you for your reply! I had also thought of poker/blackjack games, but I was looking for something much simpler that will be easier for me (someone with limited knowledge of math and computers) to write a computer simulation for LOL $\endgroup$ Commented Mar 1, 2022 at 7:01

3 Answers

$\begingroup$

How about this variant of rock-paper-scissors?

  • You select a proportion of the time $r$ that you will play rock, a proportion $p$ that you will play paper, and a proportion $s$ that you will play scissors ($r,p,s\ge 0, r+p+s=1$).

  • Your opponent does the same, without knowing your strategy.

  • You play $100$ games of rock-paper-scissors, where your move is determined by sampling your chosen probability distribution. (Think of it like rolling a weighted three-sided die with probabilities $r,p,s$.)

If your opponent chooses $r=0.6,p=0.3,s=0.1$, for instance, then your best choice is $p=1$ (and you'll win an expected $60$% of games).

It's not too hard to characterise the best strategy if you know your opponent's strategy.
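To make that concrete, here is a small R sketch (the helper function is my own, not part of the game description) that scores each pure response against a known mix:

```r
# Win probability of each pure move against a known mixed strategy.
# 'opp' gives the opponent's probabilities of rock, paper, scissors.
response_win_prob <- function(opp) {
  c(rock     = unname(opp["scissors"]),  # rock beats scissors
    paper    = unname(opp["rock"]),      # paper beats rock
    scissors = unname(opp["paper"]))     # scissors beats paper
}

opp  <- c(rock = 0.6, paper = 0.3, scissors = 0.1)
wins <- response_win_prob(opp)
names(which.max(wins))  # "paper": wins 60% of games against this mix
```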

Perhaps if this does not fit your conditions, there is some version of rock-paper-scissors that would. (For instance, by alternately allowing player 1 or player 2 to change their strategy after every $10$ rounds; or making changes to what information is unknown. Or adapting the game so that there are two choices, not three.)

$\endgroup$
  • $\begingroup$ @A.M.: Thank you for your answer! This is a really simple example and I really like it - however, I wonder if a game exists where an optimal strategy can exist even if you don't know the opponent's strategy? thank you so much! $\endgroup$ Commented Mar 1, 2022 at 7:00
$\begingroup$

I am thinking of a game that doesn't involve probability, but maybe it can inspire you. There is a pile of $k$ stones. Player A and player B take turns removing stones from the pile, starting with A. Each turn, the player can choose to remove either 1 or 2 stones. The player who removes the last stone wins.

The original question was to determine which player has a winning strategy for each $k$. The core of the solution is to realise that if your opponent chooses to remove 1 stone, you can remove 2, and after your turn is finished the pile has a number of stones that has the same remainder modulo 3 that it had right before your opponent started. The same happens if your opponent removes 2 stones and you remove 1. So if $k$ is divisible by 3, player B can always choose in response to what A does in order to leave the pile with a number of stones divisible by 3, and never let A do the same. So B will be the only one who can remove the last stone (leaving the pile with 0 stones). Likewise, if $k$ isn't divisible by 3, player A can choose to remove stones equal to the remainder of $k$ in the division by 3, and leave B in the same situation as before, so in this case A always wins.
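The mod-3 rule is easy to encode and cross-check against a brute-force game-tree search. A small R sketch (the function names are mine):

```r
# Winner of the 1-or-2-stone game under optimal play, by the mod-3 rule.
winner_by_rule <- function(k) if (k %% 3 == 0) "B" else "A"

# Brute-force cross-check: TRUE if the player about to move can force a win.
to_move_wins <- function(k) {
  if (k <= 2) return(TRUE)  # take every remaining stone and win
  # Win if removing 1 or 2 stones leaves the opponent in a losing position.
  !to_move_wins(k - 1) || !to_move_wins(k - 2)
}
winner_by_search <- function(k) if (to_move_wins(k)) "A" else "B"

sapply(1:9, winner_by_rule)  # "A" "A" "B" "A" "A" "B" "A" "A" "B"
```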

As I said, this has no probability, but it has this idea that you have two choices and one will be better than the other depending on the situation. Maybe you could introduce probability in some way, like making the decision of how many stones you remove depend on some random thing. Hope it helps!

$\endgroup$
  • $\begingroup$ @Ivan: Thank you for your answer! I think I have heard of this "stones" game before - it's called NIM (en.wikipedia.org/wiki/Nim)? The only problem with this game is that the outcome is fully determined by who starts and by the initial number of stones: whenever the starting count is not divisible by 3, the player who starts and knows the optimal strategy will always win. I wonder if there is some variant of the "stones" game where the starting player is not guaranteed to always win when playing the optimal strategy? thank you so much! $\endgroup$ Commented Mar 1, 2022 at 6:59
$\begingroup$

I think I thought of an example myself!

Recently, I thought of the following "game" to illustrate "mixed strategies and comparative advantages":

  • There are two Players: Player 1 and Player 2
  • There are two Coins: Coin 1 and Coin 2
  • Coin 1 lands on "Heads" with a probability of 0.5 and "Tails" with a probability of 0.5
  • Coin 2 lands on "Heads" with a probability of 0.7 and "Tails" with a probability of 0.3
  • If Coin 1 is "Heads", a score of -1 is obtained; if Coin 1 is "Tails", a score of +1 is obtained
  • If Coin 2 is "Heads", a score of -3 is obtained; if Coin 2 is "Tails", a score of +4 is obtained

In this game, Player 1 always starts first - Player 1 chooses either Coin 1 or Coin 2, flips the coin that they selected and gets a "score". Then, Player 2 chooses either Coin 1 or Coin 2, flips the coin that they selected and gets a "score". The Player with the higher score wins, the Player with the lower score loses (a "tie" is also possible).

In this game, Coin 1 can be seen as a "low risk and low reward" option, whereas Coin 2 can be seen as a "high risk and high reward" option. Since Player 1 always starts first, Player 2 has an advantage - Player 2 gets to see which coin Player 1 chose and what score Player 1 obtained:

  • If Player 1 chose the "high risk and high reward" option (Coin 2) and got a "bad result" (i.e. a big negative score), Player 2 does not need to choose the "high risk and high reward" option - Player 2 can win by selecting the "low risk and low reward" option (Coin 1).

  • If Player 1 chose the "high risk and high reward" option and got a "good result" (i.e. a big positive score), Player 2 can only win (or tie) by also selecting the "high risk and high reward" option. Player 2 needs to place all his "eggs in one basket" by selecting the "high risk and high reward" option if he wants to stand a chance.

  • Similar logic can be used to rationalize the coin choice for Player 2 given that Player 1 has selected the "low risk and low reward" option.
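These cases can be made exact. Here is a small R sketch (the helpers are my own) that computes Player 2's win probability with each coin, conditional on the score Player 2 observed Player 1 obtain:

```r
# Player 2's exact win probability with each coin, given Player 1's score.
coin1 <- data.frame(score = c(-1, 1), prob = c(0.5, 0.5))
coin2 <- data.frame(score = c(-3, 4), prob = c(0.7, 0.3))

# Probability that a flip of 'coin' strictly beats a target score.
p_beat <- function(coin, target) sum(coin$prob[coin$score > target])

p2_win_probs <- function(p1_score) {
  c(coin1 = p_beat(coin1, p1_score), coin2 = p_beat(coin2, p1_score))
}

p2_win_probs(-3)  # coin1 = 1.0, coin2 = 0.3: the safe coin wins for sure
p2_win_probs(4)   # coin1 = 0.0, coin2 = 0.0: at best a tie, via coin 2's +4
```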

I wanted to create a scenario where Player 1 and Player 2 are playing this game but do not have access to these probabilities upfront - instead, they only have access to 100 rounds (i.e. iterations) of this game. The goal is to "study" these iterations and build an optimal play strategy based on them. Thus, I simulated 100 random iterations of this game in R:

    score_coin_1 = c(-1, 1)
    score_coin_2 = c(-3, 4)

    results <- list()

    for (i in 1:100) {
      iteration = i
      player_1_coin_choice_i = sample(2, 1, replace = TRUE)
      player_2_coin_choice_i = sample(2, 1, replace = TRUE)
      player_1_result_i = ifelse(player_1_coin_choice_i == 1,
                                 sample(score_coin_1, size = 1, prob = c(.5, .5)),
                                 sample(score_coin_2, size = 1, prob = c(.7, .3)))
      player_2_result_i = ifelse(player_2_coin_choice_i == 1,
                                 sample(score_coin_1, size = 1, prob = c(.5, .5)),
                                 sample(score_coin_2, size = 1, prob = c(.7, .3)))
      winner_i = ifelse(player_1_result_i > player_2_result_i, "PLAYER_1",
                        ifelse(player_1_result_i == player_2_result_i, "TIE", "PLAYER_2"))
      my_data_i = data.frame(iteration, player_1_coin_choice_i, player_2_coin_choice_i,
                             player_1_result_i, player_2_result_i, winner_i)
      results[[i]] <- my_data_i
    }

    results_df <- data.frame(do.call(rbind.data.frame, results))

    head(results_df)
      iteration player_1_coin_choice_i player_2_coin_choice_i player_1_result_i player_2_result_i winner_i
    1         1                      1                      1                -1                 1 PLAYER_2
    2         2                      1                      2                -1                -3 PLAYER_1
    3         3                      2                      2                 4                -3 PLAYER_1
    4         4                      1                      2                 1                -3 PLAYER_1
    5         5                      2                      1                 4                 1 PLAYER_1
    6         6                      2                      2                 4                -3 PLAYER_1

    one_one <- results_df[which(results_df$player_1_coin_choice_i == 1 & results_df$player_2_coin_choice_i == 1), ]
    one_two <- results_df[which(results_df$player_1_coin_choice_i == 1 & results_df$player_2_coin_choice_i == 2), ]
    two_one <- results_df[which(results_df$player_1_coin_choice_i == 2 & results_df$player_2_coin_choice_i == 1), ]
    two_two <- results_df[which(results_df$player_1_coin_choice_i == 2 & results_df$player_2_coin_choice_i == 2), ]

Then, I analyzed the results (e.g. "one_two_sum" = player 1 chose coin 1 and player 2 chose coin 2):

    library(dplyr)

    one_one_sum = data.frame(one_one %>% group_by(winner_i) %>% summarise(n = n()))
    one_two_sum = data.frame(one_two %>% group_by(winner_i) %>% summarise(n = n()))
    two_one_sum = data.frame(two_one %>% group_by(winner_i) %>% summarise(n = n()))
    two_two_sum = data.frame(two_two %>% group_by(winner_i) %>% summarise(n = n()))

For instance, suppose Player 1 chose "Coin 1":

    one_one_sum
      winner_i  n
    1 PLAYER_1  9
    2 PLAYER_2 10
    3      TIE  9

    one_two_sum
      winner_i  n
    1 PLAYER_1 23
    2 PLAYER_2  6

Based on these results, it appears that if Player 1 picks "Coin 1", Player 2 should also pick "Coin 1": he then has a 10/28 chance of winning and a 9/28 chance of a "tie" (overall, a 19/28 chance of not losing), compared to only a 6/29 chance of winning with "Coin 2".

Similarly, we can look at the optimal strategy if Player 1 picks "Coin 2":

    two_one_sum
      winner_i  n
    1 PLAYER_1  5
    2 PLAYER_2 14

    two_two_sum
      winner_i  n
    1 PLAYER_1  5
    2 PLAYER_2  1
    3      TIE 18

Based on these results, it appears that Player 2 should almost always pick Coin 1 if Player 1 picks Coin 2 - as Player 2 then has a 14/19 chance of winning.
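As a cross-check on these simulated counts, the exact win/tie/loss probabilities for each coin pairing can be computed directly (a sketch using my own helper, separate from the simulation above):

```r
# Exact P(Player 1 wins), P(tie), P(Player 2 wins) for one coin pairing.
coin1 <- data.frame(score = c(-1, 1), prob = c(0.5, 0.5))
coin2 <- data.frame(score = c(-3, 4), prob = c(0.7, 0.3))

pair_probs <- function(c1, c2) {
  g <- expand.grid(i = seq_len(nrow(c1)), j = seq_len(nrow(c2)))
  p <- c1$prob[g$i] * c2$prob[g$j]          # joint probability of each outcome
  d <- sign(c1$score[g$i] - c2$score[g$j])  # +1: P1 wins, 0: tie, -1: P2 wins
  c(PLAYER_1 = sum(p[d > 0]), TIE = sum(p[d == 0]), PLAYER_2 = sum(p[d < 0]))
}

pair_probs(coin1, coin1)  # 0.25 / 0.50 / 0.25
pair_probs(coin2, coin1)  # 0.30 / 0.00 / 0.70: the safe coin punishes Coin 2
```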

The overall results can be summarized in a table like this (counts taken from the summaries above):

    P1 coin  P2 coin  PLAYER_1  TIE  PLAYER_2
    1        1               9    9        10
    1        2              23    0         6
    2        1               5    0        14
    2        2               5   18         1

I would be curious to see how complicated this game gets when more coins are involved and players have more turns!

Thanks everyone!

$\endgroup$
