The nature of bridge probabilities

I guess we all have an idea what the term "probability" means. In mathematics, one goes to great pains to come up with a formal definition free of any intuitive elements. Here is a short version:

A random experiment is a procedure which has several possible outcomes and which is of a nature that the actual outcome cannot be predicted. Suppose the random experiment is conducted an infinite number of times; we count the trials we conduct, and among them the trials which produce a specific outcome. If the ratio between the number of successful trials and the number of all trials converges to a fixed number, that number is said to be the probability of the outcome in question.

An event is a collection of outcomes of a random experiment. The probability of an event is defined analogously, by counting all trials which have an outcome within the specified collection. It is easy to see that the probability always lies between 0 and 1; the empty collection gives an event of probability 0, and the full collection of all possible outcomes gives an event of probability 1.

There are a lot of basic rules for computations of probabilities which I will not present here. Instead, I want to focus on some abstract aspects of the theory. It is crucial that the random experiment is always repeated under identical conditions. When it comes to applications of probability theory in bridge, this is where the trouble starts.

Suppose you roll a die a single time, and you get a 6. If you want to draw any inferences about the probability of the outcome 6, a single trial is not conclusive, of course. So you roll the die again and again; let us say this is what you get for this outcome:

10 trials: 3 times successful

100 trials: 19 times successful

1000 trials: 161 times successful

10000 trials: 1648 times successful

You are tired by now, but you realize (too late) that you can never conduct the experiment an infinite number of times anyway. The ratio seems to converge towards 1/6 - which is what you would expect, unless the die is loaded - but in truth you have no way of finding out. Probability itself is merely an abstract concept.
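Such an experiment is easy to mimic on a computer. The following sketch simulates the die rolls and prints the relative frequency at a few checkpoints; the exact counts will differ from run to run, which is rather the point:

```python
import random

# Simulate rolling a fair die and track the relative frequency of the
# outcome 6 as the number of trials grows.
random.seed(1)

successes = 0
trials = 0
for checkpoint in (10, 100, 1000, 10000, 100000):
    while trials < checkpoint:
        trials += 1
        if random.randint(1, 6) == 6:
            successes += 1
    print(f"{trials} trials: {successes} successful, "
          f"ratio {successes / trials:.4f}")
```

The printed ratios wander towards 1/6, but even after 100000 trials we only have an estimate, never the probability itself.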

Now let us move to a bridge problem. We are declarer, and after dummy comes down, we see that our contract depends on a simple finesse. Ignoring the inferences from the opening lead (and assuming there are no clues from the bidding), we tend to say that the finesse has a probability of 50% to succeed. What is the actual meaning of this statement?

First of all, the dealing of the cards is a random experiment for all practical purposes. When we shuffle and deal, we cannot know (I hope) the layout that we are eventually producing. What we are interested in is the probability of the event that the finesse is on if we view dummy's cards and our own as fixed.

We can see 26 cards in our combined hands. There are over 5*10^28 possible deals in bridge, and in only 10400600 of them, the cards in our own hand and in dummy are exactly the ones we are seeing right now. This means that, on average, we must play about 5*10^21 hands to arrive at the same holdings in our own hand and in dummy again. Even if we play a lot, the solar system will be history before we face the same layout even once more, not to mention a significant sample size.
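These counts are easy to verify with a quick sketch using Python's exact integer arithmetic:

```python
from math import comb, factorial

# Total number of bridge deals: 52 cards split into four hands of 13.
total_deals = factorial(52) // factorial(13) ** 4
print(total_deals)                    # about 5.36 * 10^28

# With declarer's hand and dummy fixed, the remaining 26 cards can be
# split between the two defenders in C(26, 13) ways.
fixed_layouts = comb(26, 13)
print(fixed_layouts)                  # 10400600

# Average number of deals needed to see this exact pair of hands again.
print(total_deals // fixed_layouts)   # about 5.16 * 10^21
```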

Clearly, we are unlikely to ever come across an identical situation again in our lifetime (unless we forget to shuffle the hands and play the deals from last week's event or - in case computer-generated hands are in use - the dealing software has a glitch). So what is the relevance of probabilities in this context?

Well, we cannot repeat this specific random experiment in practice, but we can do so hypothetically. We are assuming that the shuffling of the hands is fair, which means that each and every one of the 5*10^28 layouts will come up the same fraction of the time if we play indefinitely.

As a consequence, each of the 10400600 layouts which are compatible with what we see will also come up the same fraction of the time. Ten million different cases are a reasonable sample size, so it remains to check them all and see how often the finesse is on in these situations.

If we intend to run a program that prints all 10 million distributions and then check in how many of these one particular opponent holds the critical card, we will still be busy for quite some time. Fortunately, there are elementary combinatorial techniques which tell us that said card will be held by a specific player exactly half the time. We reason that this percentage will remain the same if we play for an infinite time, so we conclude that the probability for a successful finesse is 1/2 (=50%).
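The combinatorial shortcut amounts to a two-line count (here the "critical card" is the missing king of the finesse):

```python
from math import comb

# Split the 26 unseen cards between the two defenders: C(26, 13) layouts.
total = comb(26, 13)        # 10400600

# Layouts where a specific defender holds one specific card: fix that
# card in his hand and give him 12 of the remaining 25 cards.
with_lho = comb(25, 12)     # 5200300

print(with_lho / total)     # exactly 0.5
```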

What does this number mean in practice? The finesse is either on or off in the particular board we are playing. Nobody is swapping cards from one defender to another during play just to annoy us. The probability gives us no certainty in either direction for this specific deal.

Since probability is a quantity that is defined solely via long-term behaviour, all implications are also strictly on a long-term basis. In this example, the probabilistic result tells us that, if we are going to finesse every time the same situation occurs, we will succeed half the time; nothing more, nothing less. For any particular hand, it could go either way.

Suppose the finessing position is AQ6 opposite 432. Instead of finessing, we might decide to play for the drop. If we deduce the probability for the success in an analogous manner, we will obtain a much lower result. Still, on an isolated occasion the stiff king might be offside. Probabilities are no guarantees.

The idea of probabilities is simply that we follow them consistently. A good player who plays with the odds will produce better results in the long run. In a nutshell, that is all there is. Nonetheless, sometimes we will see a strong player go down in a contract that is made at another table by a weaker declarer. There is still an element of chance involved when we play bridge; don't let anyone ever tell you otherwise.

By the way, when we work with probabilities in bridge, we do not start from scratch every time we have to evaluate our chances. It is an intuitive understanding that many details of the hand do not matter for computations of bridge odds. For example, if the holding is AQ6 opposite 532 instead, we adopt the result we have already obtained instead of doing all the combinatorial stuff again.

The same goes for AQ6 opposite 543, AQ7 opposite 543, etc. The odds remain the same if the tenace is in the other hand or if we move from a 3-3 holding to a 4-2 holding. The holdings in the three other suits do not matter - well, perhaps for the contract, but not for the finesse itself. Eventually we realize that it all comes down to the AQ holding and some unknown number of pips in both hands - naturally with at least one card opposite the tenace, or there would be no finesse at all.

Strictly speaking, we are cheating when we just copy the result from one situation to another, since it is a new random experiment with a new underlying counting procedure and hence new probabilities. Never mind; it is an accepted mathematical technique to say that, under the circumstances described above, the computation works completely analogously.

Let us discuss a different scenario: Our contract now depends on a two-way finesse for a queen, say AJ3 opposite K102. As in the discussion on the previous pages, we conclude that the probability of LHO having the queen is 50%, and that the probability of RHO having the queen is also 50%. So whichever way we finesse, our chances of success are 50% (playing for the drop is inferior).

Good players are known to "find out as much as possible about the hands before making their decision". A typical example of this is cashing a side suit - assuming we can afford this - and noticing the break. Suppose we learn that the side suit splits 5-2, with LHO holding the length. What does this mean, and what is its relevance for the probability of the diamond finesse?

The underlying concept is that of conditional probabilities. The basic idea is this: In the original procedure of repeating the random experiment and counting trials, only those are counted - both for the successful trials and for the number of all trials - which satisfy an additional condition.

In the example of casting a die, let us say we are only interested in the probability of the outcome 6 restricted to cases where an even number is rolled. If we are assuming mathematically ideal conditions, we will obtain an even number half the time. Once we are excluding all odd numbers as outcomes, the outcome 6 will occur approximately once in three trials, so the conditional probability is 1/3.
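The same conditional count can be written down directly, as a minimal sketch using exact fractions:

```python
from fractions import Fraction

outcomes = range(1, 7)                       # faces of a fair die
even = [n for n in outcomes if n % 2 == 0]   # the condition: roll is even

# Conditional probability of a 6 given an even roll: count only the
# outcomes satisfying the condition, both for successes and in total.
p = Fraction(sum(1 for n in even if n == 6), len(even))
print(p)   # 1/3
```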

Back to our bridge scenario. The original probability of 50% either way has changed. There is a combinatorial approach, usually referred to as Vacant Spaces, which tells us that the odds now favor finessing against the player with shortage in the side suit we checked before making our decision.

But what does it mean when we say "the odds have changed"? How can an event which had a probability of 50% earlier on now have a probability higher or lower than 50%?

Every new trick that is played and every new insight we gain about the defenders' hands over the course of play gives us new information that must be digested in our attempt to understand the nature of probabilities. Each such piece of information reduces the space of trials we may count when we are interested in the ratio between the number of successful trials and the number of total trials.

As I wrote before, only 10400600 of all possible deals will have exactly the same holding in both declarer's hand and in dummy, so it takes about 5*10^28 deals to arrive at a sample size of 10 million admissible trials; among those, the Q we are interested in will show up in LHO's hand 5 million times and in RHO's hand 5 million times.

But actually we have seen LHO and RHO play some cards before we make our decision. Suppose we win the first trick and want to decide at trick two which direction to finesse. At this point, the space of possible layouts has already decreased from the original 10 million to roughly one fourth of that number. Still, we usually accept that the probability for a finesse remains close to 50%, unless LHO's opening lead and RHO's contribution to trick one give us information about the rest of the hand.

Strictly speaking, we are already dealing with conditional probabilities at this point. And every new card we see - either as a played card directly or one which we can place in a defender's hand by inference - reduces the space of possible layouts further.

Having seen trick one and also a 5-2 break in a side suit later, we are down to about 19000 layouts. Among those, Vacant Spaces tells us that the Q will be located in the hand with the shortage in question in about 59% of the remaining cases. Saying that the odds have changed means recognizing that the space of possible layouts has changed, and the share of successful cases is now different from what it was before.
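The numbers behind this can be sketched with a quick count, assuming trick one showed one card from each defender:

```python
from math import comb

# After trick one (one card seen from each defender) and a side suit
# known to break 5-2 with LHO long, LHO has 13 - 1 - 5 = 7 unknown
# cards and RHO has 13 - 1 - 2 = 10; the unknown pool holds 17 cards.
layouts = comb(17, 7)
print(layouts)                # 19448, the "about 19000" layouts

# Layouts that place the missing queen with RHO (the short hand):
# fix the queen in his hand and give LHO 7 of the other 16 cards.
q_with_rho = comb(16, 7)
print(q_with_rho / layouts)   # 10/17, about 0.588
```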

We should think of this as an evolutionary process: The state of information declarer possesses starts with the knowledge of his hand and dummy only, and it grows with each card that is played and with each inference that is drawn. For every new state of information, we have in fact new probabilities for each event. Sometimes the new information is of a kind that leaves the probabilities unchanged, but quite often they will change.

It is important to acknowledge that the correct probability in this sequence - based on different states of knowledge - is always the most recent one, which takes the most information into account. When we start with a contract depending on a 50% finesse and the new information tells us that the finesse is less likely to succeed, we cannot go back to the original probability just because it is more convenient for us.

It may seem that we are occasionally worsening our position by collecting information that reduces the probability for the success of our contract, so it may appear better not to collect the information. Given a slam with a 50% chance of success, why would we want to play other suits first and thus risk that our chance of success drops to, say, 30%? Why not take our chances immediately?

The argument is wrong, and again it comes down to what probabilities actually stand for. Let us assume for simplicity that we have only one possible line of play. It will work or not work on this occasion; no computation whatsoever is going to change that. Acquiring more information first will just give us more accurate results for how often the contract will make under the same conditions.

The point is, the information is always there for the taking. Refusing to take it into account does not have any influence on whether we will succeed here and now. It will only produce a false image of how many times we are going to succeed in the long run.

Now suppose that we have several possible lines of play at our disposal, and we want to maximize our chances of success. In that case, we are not actually interested in absolute probabilities; what we really want is to evaluate the relative odds, i.e. to determine which of the available lines has the highest probability, no matter what the actual values are.

By gathering additional information, we are not actually changing the specific layout we are about to encounter on this one hand; we are merely eliminating impossible layouts which are distorting the correct numbers. It is a natural observation that we are better off making a decision when we know as much as possible about the hand, i.e. when the layouts we are still considering are as close as possible to the layout that is actually present.

Let us say the line X is a favorite to succeed based on the original odds and the line Y is the favorite once we take new information into account. Simply put, this tells us that line X happened to work better only in a larger collection of cases, some of which we have already been able to eliminate, and that in our specific scenario line Y is more likely to succeed.

A similar problem occurs when we have two different approaches to compute a probability in a bridge hand, and we obtain two different results following these approaches. We are better off following the approach which gives us the higher probability, aren't we?

This is plain wrong. There cannot be two different probability values for the same event under the same circumstances. This is a fundamental property of the notion of convergence: the limit, if it exists, is always unique. The ratio between favorable cases and total cases can never converge towards more than one number. (Better not to spend any time thinking about what it means if we have no convergence at all.)

Therefore, if we compute two different probabilities, the circumstances cannot be the same. If it happens anyway, the most likely explanation - apart from the possibility of a straightforward error in one of our calculations - is that we are simply taking different information into account. In that case, at least one computation is not relevant for us, potentially both.

Let us have a look at an example which is familiar by now. We hold A10964 in dummy and K852 in our hand. Suppose the opening lead is in a black suit; we notice the two cards played and find that we cannot draw any further inferences from them. Next we cash the K and RHO follows with the queen. Before we continue the suit, we test the hearts and find out that they split 5-2, with LHO holding the length. RHO discards another card in a black suit on the third round.

Restricted Choice tells us that the second round finesse in diamonds will work with a probability of 2/3, assuming that RHO will play either honor from QJ tight with 50% probability. Vacant Spaces tells us that RHO is more likely to have the jack than LHO by a ratio of 8:6 (or even 8:5, once we see LHO following small to the second round in diamonds). Which is correct?

The answer is: Neither. When we apply RC as above, we are ignoring the information from Vacant Spaces. When we apply Vacant Spaces as above, we are ignoring the inferences from RHO's choice to play the queen. The correct probability can only be obtained by taking both pieces of information into account. Otherwise we are getting an approximation at best, not an accurate result.
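To show what the combined count looks like, here is a sketch in Python. The bookkeeping of seen cards is my reading of the example (one card per defender at trick one, seven hearts, RHO's discard, and one round of diamonds, leaving LHO 6 unknown cards and RHO 8 - the 8:6 Vacant Spaces ratio), and I assume RHO drops the queen only from the stiff queen (forced) or from QJ tight, with probability p in the latter case:

```python
from math import comb

# Unknown cards at decision time: LHO has 6, RHO has 8, drawn from a
# pool of 14 cards that still contains the diamond jack and one spot.
p = 0.5   # assumed chance that RHO plays the queen from QJ tight

# Stiff queen with RHO: LHO's unknown cards include both the jack and
# the spot, plus 4 of the 12 remaining pool cards.
stiff_q = comb(12, 4)     # 495 layouts; the queen play was forced

# QJ tight with RHO: the jack is with RHO, the spot with LHO, and LHO
# takes 5 of the 12 remaining pool cards.
qj_tight = comb(12, 5)    # 792 layouts; queen played with probability p

# The second-round finesse against LHO wins exactly when the queen was stiff.
print(stiff_q / (stiff_q + p * qj_tight))   # 5/9, about 0.556
```

With p = 1/2 this lands at 5/9, between the pure Restricted Choice figure of 2/3 and the raw Vacant Spaces count, which is exactly the point: only the combined count uses all the information.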

One last thing (something I have already mentioned several times in previous posts). When it comes to randomness in bridge, there are two different phenomena to consider. On the one hand we have probabilities based on purely combinatorial considerations, typically based on nothing more than the assumption that the hand was shuffled and dealt properly. On the other hand we have probabilities based on choices from other players - which is clearly outside our control as well.

I am occasionally using the terms "layout odds" and "strategy odds" in this context. This terminology is actually misleading because it kind of suggests that there may be two different approaches leading to two different probabilities for something, in violation of what I just said on the previous page. What I want to say is that both kinds of randomness must be taken into account in order to arrive at the proper probabilities; it is just that they must be processed in different ways.

The point is, everything that is based solely on the shuffling of the cards can be computed exactly. In contrast, everything that depends on choices of other players eventually rests on assumptions: for example, how often they would make a specific lead with a given holding, how often they would shift to one suit or another, how often they would play either card from several equals, etc.

A difficulty arises if we have insufficient data to make a well-founded assumption. In that case we can follow one of several paths. On the one hand, we can make a guess about the strategies the other players follow when making choices. This allows us to work with the full information we have from the played cards, but with an uncertainty regarding the accuracy of our guess.

On the other hand, we can ignore the strategy odds entirely and go back to the state of knowledge we had before choices became relevant. This will yield precise numbers for the probabilities we are interested in, but we must recognize that they may be incorrect because we are not basing our decision on the most recent information.

The following is based on a comment I recently wrote below another post. Let us go back to the Restricted Choice example, but without the heart break, the opening lead or anything else. We are interested in the odds for a successful second-round finesse in diamonds after RHO dropped the queen on the first round.

What we can do is make an assumption about how often RHO would play the queen from QJ tight. Perhaps we know this opponent well and have gathered some experience about this kind of tendency of his in the past. Maybe he is a beginner who will always follow up the line. Maybe we have a sworn statement by him saying that he will always play the queen.

It does not really matter where our assumption comes from. When we make it, we arrive at precise odds, the correctness of which depends on the correctness of our initial guess.

What we can do instead is ignore which honor RHO has played. If we do this, we will arrive at 2:1 odds in favor of the finesse. But right now we are working with a lesser state of knowledge, hence the space of possible layouts is twice as large.

It turns out that the second way gives the same result as making the assumption about complete randomness on RHO's part. In the abstract language I used on the first pages, we are assuming that the information about which honor RHO has played is irrelevant for the ratio between successful trials and total trials, in other words, that the ratio is the same whether he plays the queen or the jack.
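In this position the equivalence can be checked with a short count. This is a sketch under simplifying assumptions: I treat the 22 unseen cards outside the diamond suit as interchangeable and assume RHO drops an honor only from a stiff honor or from QJ tight:

```python
from math import comb

# Count RHO's possible hands out of the 26 unseen cards
# (the missing Q, J, 7, 3 plus 22 irrelevant cards).
stiff_honor = 2 * comb(22, 12)   # diamonds exactly {Q} or exactly {J}
qj_tight = comb(22, 11)          # diamonds exactly {Q, J}

# Route 1: ignore which honor appeared. The finesse wins when the
# honor was stiff, i.e. the other honor sits with LHO.
route1 = stiff_honor / (stiff_honor + qj_tight)

# Route 2: condition on the queen specifically, assuming RHO picks
# the queen from QJ tight exactly half the time.
route2 = comb(22, 12) / (comb(22, 12) + 0.5 * qj_tight)

print(route1, route2)   # both about 0.647
```

Both routes give about 0.647 rather than exactly 2/3, because a stiff honor and QJ tight are not quite equally likely once the rest of the hands is counted; the 2:1 figure is the familiar rounded version of the same count.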

In fact, this is often the case: If we ignore a piece of information we have gathered, it yields the same result as the assumption that the probability remains the same for every possible alternative. In a manner of speaking, this is just another assumption. We cannot unlearn something we have learned; we can only work under the premise that it makes no difference.