All comments by Nicolas Hammond
David wrote, “The table record is rarely going to be demonstrative.”

A single table record is not. Multiple table records are.

Suppose setting the contract relies on switching to a certain suit in the middle of a hand, and dummy and I hold equivalent cards in both candidate suits, so it is a 50-50 guess. A cheating pair will get this right more than 50% of the time. One hand says nothing; multiple hands do.
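
The arithmetic behind "multiple hands do" is just a binomial tail: an honest pair should get independent 50-50 guesses right about half the time, and a long run of correct guesses quickly becomes implausible. A minimal sketch (function name is mine):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of getting >= k of n independent 50-50 guesses right."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(1, 1))    # 0.5     -- one hand says nothing
print(p_at_least(18, 20))  # ~0.0002 -- 18 of 20 right is hard to explain
```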

Let's say I cheat. Let's say I have double dummy knowledge. There will be situations where, if I take advantage of my unauthorized information, I know I might get caught. This is a risk/reward situation, and my perception of the risk may be different from yours. Therefore there will be some situations where I deliberately play the wrong card. If you look at my playing record, I am not 100% perfect; if I were, you would catch me cheating. Therefore I can cite all the times I was wrong as evidence that I was not cheating.

I have software that does all of this. I can compare any pair's results to those of known cheating pairs. If you are better than the known cheaters in certain aspects of the game, then you are highly suspect. The known cheating pairs make mistakes; all cheating pairs do.
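
The comparison itself can be as simple as a two-proportion z-test: is the suspect pair's success rate on these guesses significantly higher than a known cheating pair's? A minimal sketch of the idea, not the actual software (counts are hypothetical):

```python
from math import sqrt, erfc

def two_prop_z(hits_a, n_a, hits_b, n_b):
    """One-sided two-proportion z-test: is pair A's success rate
    significantly higher than pair B's? Returns (z, p-value)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, erfc(z / sqrt(2)) / 2  # upper-tail p-value

# Hypothetical counts: suspect pair vs. a known cheating pair.
print(two_prop_z(180, 200, 160, 200))
```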

If someone is willing to put the data from the 1965 World Championship into BBO LIN format, this will be a great help.

Instead of a subjective analysis of the data, I can have software do an objective analysis.
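
For anyone entering the data: a LIN record is, at heart, a flat pipe-delimited stream of two-letter tags and values (pn for player names, md for the deal, mb for a bid, pc for a played card). A minimal sketch of reading one; the fragment shown is hypothetical:

```python
def parse_lin(line):
    """Split a BBO LIN record into (tag, value) pairs.
    LIN is a flat pipe-delimited stream: two-letter tag, then its value."""
    tokens = line.split('|')
    return list(zip(tokens[0::2], tokens[1::2]))

# Hypothetical fragment: player names, one bid, one played card.
record = "pn|South,West,North,East|mb|1C|pc|D2|"
for tag, value in parse_lin(record):
    print(tag, value)
```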
July 8, 2019
From http://www.worldbridge.org/wp-content/uploads/2017/03/2017LawsofDuplicateBridge-nohighlights.pdf:

“The first Laws of Duplicate Bridge were published in 1928 and there have been successive revisions in 1933, 1935, 1943, 1949, 1963, 1975, 1987, 1997, and 2007.”

I do not know what rules were in effect in 1963. Someone may have the book.
July 8, 2019
1. Page 6. In 1997, Law 73B2 read:

The gravest possible offence is for a partnership to exchange information through prearranged methods of communication other than those sanctioned by these Laws. A guilty partnership risks expulsion.

The last sentence was removed in the 2007 Edition and remains removed in the 2017 Edition.

It now reads:

The gravest possible offence is for a partnership to exchange information through prearranged methods of communication other than those sanctioned by these Laws.

2. On page 7, someone is missing the five of hearts. N and S both have the three of hearts.

3. I have software that can look at the data and generate statistics on the likelihood that someone is cheating. There is some data from that era in The Vugraph Project, but there is no computer-readable record of the 1965 Bermuda Bowl. If someone wants to enter it, I would be interested in running my software against the data.

My upcoming book has a chapter that covers all the data in the Vugraph Project, so it includes data on Reese/Schapiro and others from that time period. The results were interesting; not necessarily what you would expect, or perhaps exactly what you would expect. The software can tell me who was likely cheating at the time, within some statistical confidence levels.

The book should be back from the printers late next week or early the following week. That is too late for me to add anything about 1965, but there was enough data on Reese/Schapiro to form an opinion.
July 6, 2019
PM me. I'm taking pre-orders. Free shipping with pre-orders. $39.95. After that it will probably be available through Amazon and some bridge booksellers.
July 3, 2019
Avon provided me with the 1958 data. No surprises: the data was consistent with other tournaments from that era. It was too late to include in the charts/graphs in my book, but I did put the data into the chapter on players from that era. Looks like the book will be out this month, hopefully before the Las Vegas NABC.
July 3, 2019
I have data from 1955 onwards from the Vugraph Project (see https://www.sarantakos.com/bridge/vugraph.html) and data from 2003 onwards from BBO.

My data does not include 1964. If someone wants to add it from the World Championship books…

Reese/Schapiro are above average for opening leads, but not exceptional.

In my database I have hundreds of thousands of records from top tournaments from 1955 to 2019.

If I take the top 500 pairs based on amount of data from top tournaments, Reese/Schapiro are #474 by amount of data. If I then sort by accuracy (did not give up a trick on the opening lead), R/S rank #103 out of 500. This includes all contracts. More important is an accurate lead against contracts that are not going to make; there R/S rank #232.
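
For concreteness, "accurate" here means the lead did not give up a trick double dummy. A minimal sketch of that bookkeeping (field names are mine):

```python
def lead_accuracy(boards):
    """Fraction of boards where the opening lead kept the maximum number
    of defensive tricks available double dummy. Each board supplies
    dd_tricks: double-dummy defensive tricks after each candidate lead,
    and led: the card actually chosen."""
    ok = sum(1 for b in boards
             if b["dd_tricks"][b["led"]] == max(b["dd_tricks"].values()))
    return ok / len(boards)

# Hypothetical board: only the heart king lead holds declarer to 9 tricks.
boards = [{"dd_tricks": {"HK": 4, "S2": 3, "D7": 3}, "led": "HK"}]
print(lead_accuracy(boards))  # 1.0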

By comparison, Belladonna/Garozzo rank #165 by amount of data, #61 for all opening leads, and #182 for opening leads against contracts that, according to double dummy before the opening lead, are not supposed to make.

For both R/S and B/G I can provide many examples of “bad leads”.

Statistically I need a large data set to generate numbers on the likelihood that a pair is cheating on opening leads. This is probably valid only for the top 200 pairs that I have the most data on.
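
To give a feel for why: with a normal approximation, distinguishing, say, an 85% leader from the roughly 81% world-class baseline at conventional significance and power takes on the order of a thousand leads. A back-of-the-envelope sketch (my illustrative numbers, not the actual software):

```python
from math import sqrt, ceil

def boards_needed(p0, p1, z_alpha=1.645, z_beta=0.84):
    """Sample size for a one-sided test of rate p1 against baseline p0
    (5% significance, 80% power) via the normal approximation."""
    pbar = (p0 + p1) / 2
    num = (z_alpha * sqrt(2 * pbar * (1 - pbar))
           + z_beta * sqrt(p0 * (1 - p0) + p1 * (1 - p1)))
    return ceil((num / (p1 - p0)) ** 2)

print(boards_needed(0.81, 0.85))  # roughly a thousand opening leads
```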

There are a lot fewer boards for Soloway/Swanson; I only have 62. They rank #793 by amount of data. If I were to insert S/S into the top-500 list (making 501 pairs), they would rank #267 for all opening leads and #108 for leads against contracts that are not supposed to make (per double dummy). In other words, for the latter category, opening leads when the contract is not supposed to make, S/S are better than R/S and B/G.

Hamman/Wolff rank #69 by amount of data, #159 for all opening leads, and #318 for opening leads against non-makeable contracts.

It is easy to twist statistics. I have better tools for detecting cheating on the opening lead than the data I just provided, but the accuracy of opening leads is an important number.

I can provide LOTS of examples of bad leads from known cheating pairs. Actually, I can provide examples of bad leads from all players. No-one is close to 100% accurate on opening lead. World-class players average just under 81% accuracy on the opening lead (80.94%).

More details are in my forthcoming book, which looks like it will be coming out this month. I have a chart of the top 100 pairs, based on the amount of data, and I ask you to pick out Fisher/Schwartz from a scatter graph showing the number of opening leads against the accuracy of the opening lead; see how obvious their cheating is from this statistic alone.

I know the leading style of all the top partnerships (trump leads, xx leads, etc.) and how good each player in each top-level partnership is on opening lead. Over 10% of the book is on opening leads. Analysis of all the top-level players gives insight into how to improve opening leads. More importantly, I now know which experts I won't take any opening-lead advice from.

Sorry John - not enough data on you to make any of the tables, charts or graphs in the book, but I do cover R/S, B/G, H/W and top pairs from the 1955-1991 era, as well as data from all players from 1955-2019. The results are … interesting.
July 3, 2019
As I said earlier, “No-one but the mathematicians would care unless you tied in a National event or a qualifier.”

Only a small number of board-count/IMP-margin combinations changed by 0.01 because of concavity. You would have to be most unfortunate to have hit one of them.
June 24, 2019
This discussion has gone off-topic to the OP.

By “… would care …”, I meant caring about the implementation of the concavity checks, not about the original formula.

Bethe worked on the original formula. It required effort, analysis, and some mathematical modeling, and it was a very nice implementation. The values he used were based on historical records. I was not on the committee; I'm just going from the report I read a few years ago.

The concavity is totally separate. When rounding was applied to the formula, someone pointed out the concavity issues, and the revised WBF formula included a concavity implementation. Way back when (this was in ACBLscore+ days), I generated results with the original and revised WBF formulas and compared the two. The difference was 0.01 on a small number of IMP results for a small number of VP tables.

The concavity implementation gives the results the beauty and elegance needed once rounding is applied; it is in the spirit of mathematics. The previous results were “ugly” because of the concavity issue.

You are welcome to join me, Ed, and Ray in the bar at the next NABC… we will be the nerds sitting in the corner talking about 0.01.
June 23, 2019
This thread has gone way off the original topic!

I wrote,

“I do use the Fisher test. If you defend better than Fisher you are probably cheating. (A little statistics humor).”

@Richard wrote:

“> I do use the Fisher test. If you defend better than
> Fisher you are probably cheating.

Doesn't seem particularly exact”

I know that Richard knows, but let me explain the humor so the wider audience can appreciate Richard's comment.

Ronald Fisher was a 20th-century statistician, probably most famous for Fisher's exact test. See https://en.wikipedia.org/wiki/Fisher%27s_exact_test for details.

In Bridge, Lotan Fisher is a convicted cheating player.

What the BW audience knows about Fisher's cheating is that F/S were cheating on the opening lead via the placement of the tray.

However, F/S were cheating far, far more than just on the opening lead. They were cheating on defense as well. How, I don't know, but statistically I can “prove” it. What this means is that, within some very high confidence level, I can state they were cheating during the play of the hand, and I show this in the book.

Given that F/S, a pair known to have cheated on the opening lead, were also cheating during the play of the hand, any pair that defends better than F/S is cheating within some statistical confidence level.

There are some active players, not convicted of cheating, who historically (up to 2015) defended better than F/S.
June 23, 2019
@Ray: I get stalked by the WBF on BW :-) Looks like Gordon uploaded this two days ago for you.

@Ed: Re: Jim. Yes. It was the most practical thing to do. Otherwise we would have had different VP formulas with the USBF, WBF and ACBL. Jim had the power to make ACBL and USBF consistent with WBF and keep life simple for everyone. Join Ray at the bar and I'll explain the full history some time…
June 23, 2019
@John: I've got a chapter that answers your question. It is a boringly long chapter but unfortunately necessary. All the reviewers so far hate it, but I have to keep the chapter in for mathematical validity. I can't explain it in a few sentences.
June 22, 2019
@Ray: you are looking at an old WBF formula that does not include the concavity checks. You need to find the improved WBF formula that has concavity. The difference is small: 0.01 VPs on a few IMP margins. No-one but the mathematicians would care unless you tied in a National event or a qualifier.

It's a long story (will tell you at a bar sometime).

The formula that the ACBL and USBF boards approved and that they publish is not the one that they use. Jim implemented the WBF formula because that was the right thing to do.
June 22, 2019
@Art: Detecting cheating on the opening lead is complicated.

If you signal something about your hand before the opening lead, then partner may use this information with her opening lead, or may choose to use it later.

Discussing opening leads is difficult because there is a huge difference between the US and Europe on what leads are “best”.
June 22, 2019
I don't use Spearman, so I don't know.

I do use the Fisher test. If you defend better than Fisher you are probably cheating. (A little statistics humor.) There are two pairs who defend better than F/S (by one of the metrics I use).
June 22, 2019
The case you mentioned was used by the F/N defense lawyers in the CAS hearing. See https://en.wikipedia.org/wiki/Sally_Clark. The statistical “expert” in that case was a paediatrician.

The F/N statistical expert claimed that the shuffling/dealing of the cards was not random, and that this might therefore explain the correlation with the orientation of F/N's cards without them cheating. You can ponder the scientific merit of that statement. The EBL used Professor Greg Lawler, who recently won the Wolf Prize.

Both your and John's comments are addressed in the book. I devote an entire chapter to false positives.

I do not claim to be a statistics expert or a paediatrician. I do know more than most about bridge statistics.
June 22, 2019
@Mike: The data is meant to be copied/pasted into a VPU file so you can use it with ACBLscore.

@Ray: The thing of beauty is the WBF/USBF scale. It's the one the ACBL uses, but it is not the formula on the ACBL web site. The web site documents the old formula, not the newer, better, revised formula.
June 22, 2019
@RichardF: I include all data from all the top tournaments - almost 300 major events in total (WBF/EBL/ACBL). It is important statistically not to cherry-pick data. The long-term effect of this work will not be on cheating but on the ability to use the data to improve your own game.

@RichardW: The analysis is in the book. I use “top pairs” against “top pairs”. You will be surprised who the “top pair” is when playing only against the other “top pairs”; it is not who you might think. I use either the top 100 or top 120 pairs based on the amount of data available.

I have an entire chapter on the opening lead, with lots of data. Auctions are separated by whether or not they were contested: if I cheat, I will have a signal during the bidding for whether I want that suit led.

I also address consistency. For example, Meckstroth/Rodwell are both very consistent in their opening-lead style, and one is better than the other. Hamman is very interesting because he has two different styles with his three main top-level partners (Wolff, Zia, Soloway). Idiosyncratic leads are difficult to define well enough to ask the computer to search for them, but I have a chapter on them.

Spearman/Pearson won't tell you much: the data contains both cheating pairs and non-cheating pairs, and you are interested in finding those that cheat, not validating a hypothesis. There are other ways of doing this. There are various tips in the book, drawn from the analysis of the data, on how you can improve your own game for opening leads.

Comparing players' data from pre-2015 and post-2015 is most interesting.

Of the top 250 individual players, the best opening leader was a surprise.

I think all of this has gone off-topic to the OP!
June 22, 2019
The handviewer is very good for creating hand records.

If you have run a Vugraph event using BBO software, then you could create the hand records using the handviewer tool, combine the hand records into a single LIN file, and then run it like a Vugraph.
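
Combining the per-board files is trivial; for example (paths and file names hypothetical):

```python
from pathlib import Path

# Concatenate individual hand-record LIN files into one event file
# that can then be replayed like a Vugraph.
parts = sorted(Path("hands").glob("board_*.lin"))
Path("event.lin").write_text("".join(p.read_text() for p in parts))
```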
June 21, 2019
From Bridgescore+
20 Pt. 30 Board continuous scale:

IMP margin  Winner VPs  Loser VPs
0 10 10
1 10.23 9.77
2 10.45 9.55
3 10.67 9.33
4 10.89 9.11
5 11.1 8.9
6 11.31 8.69
7 11.52 8.48
8 11.72 8.28
9 11.92 8.08
10 12.11 7.89
11 12.3 7.7
12 12.49 7.51
13 12.67 7.33
14 12.85 7.15
15 13.03 6.97
16 13.21 6.79
17 13.38 6.62
18 13.55 6.45
19 13.72 6.28
20 13.88 6.12
21 14.04 5.96
22 14.2 5.8
23 14.35 5.65
24 14.5 5.5
25 14.65 5.35
26 14.8 5.2
27 14.95 5.05
28 15.09 4.91
29 15.23 4.77
30 15.37 4.63
31 15.5 4.5
32 15.63 4.37
33 15.76 4.24
34 15.89 4.11
35 16.02 3.98
36 16.14 3.86
37 16.26 3.74
38 16.38 3.62
39 16.5 3.5
40 16.61 3.39
41 16.72 3.28
42 16.83 3.17
43 16.94 3.06
44 17.05 2.95
45 17.16 2.84
46 17.26 2.74
47 17.36 2.64
48 17.46 2.54
49 17.56 2.44
50 17.66 2.34
51 17.75 2.25
52 17.84 2.16
53 17.93 2.07
54 18.02 1.98
55 18.11 1.89
56 18.2 1.8
57 18.29 1.71
58 18.37 1.63
59 18.45 1.55
60 18.53 1.47
61 18.61 1.39
62 18.69 1.31
63 18.77 1.23
64 18.85 1.15
65 18.92 1.08
66 18.99 1.01
67 19.06 0.94
68 19.13 0.87
69 19.2 0.8
70 19.27 0.73
71 19.34 0.66
72 19.41 0.59
73 19.47 0.53
74 19.53 0.47
75 19.59 0.41
76 19.65 0.35
77 19.71 0.29
78 19.77 0.23
79 19.83 0.17
80 19.89 0.11
81 19.94 0.06
82 19.99 0.01
83 20 0
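
As a sanity check, this table appears consistent with the published WBF continuous formula, VP = 10 + 10 * (1 - tau^(3m/b)) / (1 - tau^3), with tau = (sqrt(5) - 1)/2 and blitz margin b = 15 * sqrt(boards); that formula is my reconstruction, not taken from this thread. The raw rounded values differ from the table at a handful of margins, which is exactly where the 0.01 concavity adjustments land. A sketch that recomputes the raw scale and flags those spots:

```python
import math

def raw_vp(margin, boards):
    # Raw continuous VP for the winner, before any concavity fix-up,
    # assuming VP = 10 + 10*(1 - tau**(3m/b))/(1 - tau**3),
    # tau = (sqrt(5)-1)/2, blitz margin b = 15*sqrt(boards).
    tau = (math.sqrt(5) - 1) / 2
    b = 15 * math.sqrt(boards)
    if margin >= b:
        return 20.0
    return round(10 + 10 * (1 - tau ** (3 * margin / b)) / (1 - tau ** 3), 2)

def concavity_violations(scale):
    # Margins where the rounded scale is not concave: the VP gain from
    # one more IMP grows instead of shrinking.
    steps = [b - a for a, b in zip(scale, scale[1:])]
    return [m for m in range(1, len(steps)) if steps[m] > steps[m - 1] + 1e-9]

scale = [raw_vp(m, 30) for m in range(84)]
print(concavity_violations(scale))  # non-empty: plain rounding breaks concavity
```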
June 21, 2019
What Richard says.

There is a big difference between circumstantial evidence and statistical evidence.

All cheaters are convicted on statistical evidence. For example, F/N and the likelihood that their leads were random given the orientation of the cards. This is statistics.

You may be convinced because you can watch the videos and verify yourself. But all you are doing is verifying statistics.

Circumstantial evidence is “F/S always find the best lead” (they didn't, by the way) because on board 3 they found a killing lead.

Statistically I can show that they found a bad lead on boards 5, 7, and 11, but you will remember board 3, not the others. The computer, however, looks at all boards, not selected boards.

If I find a pair that consistently defends better than F/N over a large number of boards, what is your opinion on whether that pair is cheating? I can generate statistics on all pairs, show you where F/N are, show you another pair, and show you the number of boards. At some point you will be convinced. If you watched the F/N videos, you might not have been convinced until you personally saw the 10th or 20th lead.

But… you may be someone who requires me to verify this by finding the actual code. Post-2015, and assuming smart bridge players, you won't be able to find a code, because it will vary by board number/session.

So far I can show that the software detects the known cheaters, without knowing how they cheat.
June 21, 2019