All comments by Nicolas Hammond
@RichardF: I include all data from all the top tournaments. Almost 300 major events in total (WBF/EBL/ACBL). It is important statistically not to cherry-pick data. The long-term effect of this work will not be on cheating but on the ability to use the data to improve your own game.

@RichardW: The analysis is in the book. I use “top pairs” against “top pairs”. You will be surprised who the “top pair” is when playing only against the other “top pairs” - not who you might think. I use either the top 100 or the top 120 pairs, depending on the amount of data available.

I have an entire chapter on the opening lead, with lots of data. Auctions are separated by whether they were contested or not. If I were cheating, I would have a signal in the bidding showing whether or not I want that suit led. I address consistency. For example, Meckstroth/Rodwell are both very consistent in their opening lead style - one is better than the other. Hamman is very interesting because he has two different styles with his three main top-level partners (Wolff, Zia, Soloway). Idiosyncratic leads are difficult to define well enough to ask the computer to search for them, but I have a chapter on them.

Spearman/Pearson won't tell you much - the data contains both cheating pairs and non-cheating pairs, and you are interested in finding those that cheat, not validating a hypothesis. There are other ways of doing this. The book also has various tips, drawn from the analysis of the data, on how you can improve your own opening leads.

Comparing players' data from pre-2015 and post-2015 is most interesting.

Of the top 250 individual players, the best opening leader was a surprise.

I think all of this has gone off-topic to the OP!
June 22
The handviewer is very good for creating hand records.

If you have run a Vugraph event using BBO software, you can create the hand records using the handviewer tool, combine them into a single LIN file, and then run it like a Vugraph.
June 21
From Bridgescore+
20 Pt. 30 Board continuous scale (columns: IMP margin, winner VPs, loser VPs):

0 10 10
1 10.23 9.77
2 10.45 9.55
3 10.67 9.33
4 10.89 9.11
5 11.1 8.9
6 11.31 8.69
7 11.52 8.48
8 11.72 8.28
9 11.92 8.08
10 12.11 7.89
11 12.3 7.7
12 12.49 7.51
13 12.67 7.33
14 12.85 7.15
15 13.03 6.97
16 13.21 6.79
17 13.38 6.62
18 13.55 6.45
19 13.72 6.28
20 13.88 6.12
21 14.04 5.96
22 14.2 5.8
23 14.35 5.65
24 14.5 5.5
25 14.65 5.35
26 14.8 5.2
27 14.95 5.05
28 15.09 4.91
29 15.23 4.77
30 15.37 4.63
31 15.5 4.5
32 15.63 4.37
33 15.76 4.24
34 15.89 4.11
35 16.02 3.98
36 16.14 3.86
37 16.26 3.74
38 16.38 3.62
39 16.5 3.5
40 16.61 3.39
41 16.72 3.28
42 16.83 3.17
43 16.94 3.06
44 17.05 2.95
45 17.16 2.84
46 17.26 2.74
47 17.36 2.64
48 17.46 2.54
49 17.56 2.44
50 17.66 2.34
51 17.75 2.25
52 17.84 2.16
53 17.93 2.07
54 18.02 1.98
55 18.11 1.89
56 18.2 1.8
57 18.29 1.71
58 18.37 1.63
59 18.45 1.55
60 18.53 1.47
61 18.61 1.39
62 18.69 1.31
63 18.77 1.23
64 18.85 1.15
65 18.92 1.08
66 18.99 1.01
67 19.06 0.94
68 19.13 0.87
69 19.2 0.8
70 19.27 0.73
71 19.34 0.66
72 19.41 0.59
73 19.47 0.53
74 19.53 0.47
75 19.59 0.41
76 19.65 0.35
77 19.71 0.29
78 19.77 0.23
79 19.83 0.17
80 19.89 0.11
81 19.94 0.06
82 19.99 0.01
83 20 0
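The table matches the formula VP = 10 + 10(1 − τ^(3M/B)) / (1 − τ³), with τ = (√5 − 1)/2 and blitz margin B = 15√boards, which I believe is the newer WBF continuous scale. A sketch (the function name and the rounding are mine) that reproduces the 30-board column above:

```python
import math

# Sketch of the newer WBF continuous VP scale (the version that fixed
# the concavity issues of the first one). Assumptions: blitz margin
# B = 15 * sqrt(boards) and tau = (sqrt(5) - 1) / 2.
TAU = (math.sqrt(5) - 1) / 2

def continuous_vp(margin, boards):
    """Winner's VPs on a 20-point scale for an IMP margin over `boards` boards."""
    blitz = 15 * math.sqrt(boards)
    vp = 10 + 10 * (1 - TAU ** (3 * margin / blitz)) / (1 - TAU ** 3)
    return round(min(vp, 20.0), 2)

# Reproduces the 30-board table: the loser gets 20 minus the winner's VPs.
print(continuous_vp(10, 30))   # 12.11
print(continuous_vp(83, 30))   # 20.0
```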
June 21
What Richard says.

There is a big difference between circumstantial evidence and statistical evidence.

All cheaters are convicted on statistical evidence. For example, FN: the likelihood that the orientation of their leads was random. This is statistics.

You may be convinced because you can watch the videos and verify yourself. But all you are doing is verifying statistics.

Circumstantial evidence is saying that FS always found the “best lead” (they didn't, by the way) because on board 3 they found a killing lead.

Statistically, I can show that they found a bad lead on boards 5, 7, and 11, but you will remember board 3, not the others. The computer, however, looks at all boards, not selected boards.

If I find a pair that consistently defends better than FN over a large number of boards - what is your opinion: is that pair cheating? I can generate statistics on all pairs. Show you where FN are. Show you another pair. Show you the number of boards. At some point you will be convinced. If you watched the FN video, you might not be convinced until you personally saw the 10th or 20th lead.

But… you may be one that requires me to verify this by finding the actual code. Post 2015, and assuming smart bridge players, you won't be able to find a code because it will vary by board number/session.

So far I can show that the software detects the known cheaters. Without knowing how they cheat.
June 21
Thanks for doing this. Much appreciated.
June 21
BBO lets you create hands using their handviewer tool. This includes bidding/play of the hand.

Editing LIN files with a text editor will likely lead to many manual errors.
June 21
I would not classify my work as a witch-hunt. It is a scientific approach to detecting cheating.

It is possible to detect cheating, within some agreeable limits, using sophisticated software. The software correctly identifies all the recent cheating pairs - FS, FN, BZ, PS, Doktors etc.

The software is able to identify the area of the game in which they were cheating, which makes looking at video much easier.

I can apply the same methods to the data from historical tournaments, including the Blue Team era. The results are “interesting”. I cover the historical tournaments in one chapter. I just processed the 1958 data - the results were “as expected” - consistent with other data from that era.

The software correctly identified Buratti/Lanzarotti as a cheating pair.

I can generate names. Ignoring the known cheaters (the six pairs listed above), I know the currently active top pairs that have historically - at least up until 2015 - been able to defend better than, say, BZ. Draw your own conclusions as to whether they were cheating.

Strictly speaking, I can generate lists of players, along with statistical data. Technically this is all legally permissible - it's just processing public data. However, the implications of some of the lists are obvious. If a ranked list has FS at #1, some other pair at #2, BZ at #3, BL at #4, FN at #5, and the data relates to defense, then it is fairly obvious that #2 was cheating. I only generate these lists for pairs with a very large number of boards in top-level play, so the chance that their results are due to “luck” is small enough to be ignored. However, we have seen how CAS can understand statistics.

The lawyer has had me remove all names except for those who have been convicted/not allowed to play. However, some of the top pairs have given me permission to list their names and where they rank. So in the book you will see the difference between cheating players and honest players.

I have had to wait about four years so that there was sufficient data from top level post-2015 tournaments to compare to pre-2015 to validate some of the software.

The 2015 scandals impacted top-level bridge well beyond just the known cheating pairs. Far more than anyone would believe.

I can identify the pairs that attended the WBF seminar on “How to Suddenly Get Worse in Bridge” that was held in 2015.

I've put some of the data into charts/scatter graphs to make it more understandable.

There is sufficient data for you to be able to work out how many other pairs were cheating and did not get caught, but there is not enough information in the book for you to work out who they are.

My background is high level computer security. You don't want to disclose what you are doing or how you have been doing it until necessary. There is a lot more happening behind the scenes. It may be a couple of years before I can write about it. The software has already caught some pair(s).

For the most egregious pair(s), I've found video and submitted it to the appropriate bridge authorities. What happens next is up to them.

The data shows a big difference between post-2015 and pre-2015 tournaments. For example, the software identifies the 2015 Bermuda Bowl as the cleanest BB on record by far. Wonder why. The same applies to EBL tournaments pre-2015 and post-2015.

I am sure there will be some skeptics. This is all new. No-one believed that this could be done. The results speak for themselves.

It's the chapter on the ACBL players that is likely to be the most “interesting”.
June 21
Book mostly done. It should come out soon (2-4 months). About 200-220 pages. Currently with the graphic designer. I won't be making any friends when it is published. I suspect there will be some policy changes/amnesties after it comes out.

It uses software and statistics to detect cheating. It works very well. It quickly identifies all the recent known cheating pairs. It does require sufficient data on pairs. To be technically correct: it identifies the pairs that are likely to be cheating within a statistical limit of how likely their results are down to “luck”. Given a large number of boards, everyone's “luck” factor is statistically (within some boundaries) quantifiable.
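As a rough illustration of why “luck” becomes quantifiable with enough boards (my own sketch, not the book's actual method): if a pair finds the winning action on k of n comparable boards, and honest top pairs succeed with probability p, a normal-approximation z-score bounds how likely the excess is to be luck.

```python
import math

# Sketch: the same success rate that is unremarkable over 10 boards
# becomes astronomically unlikely over 1000 boards. The success
# probability p = 0.6 for honest top pairs is an invented number.
def luck_z_score(k, n, p):
    """Standard score of k successes in n trials against baseline rate p."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    return (k - mean) / sd

print(round(luck_z_score(8, 10, 0.6), 2))      # 1.29  (could easily be luck)
print(round(luck_z_score(800, 1000, 0.6), 2))  # 12.91 (cannot be luck)
```

The z-score grows with the square root of the number of boards, which is why the method needs sufficient data on a pair before it says anything.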

It will cover all the big tournaments back to the 1950s where there are table results in a computer readable format. It covers all players where there is sufficient data. My database is over 11,000,000 records.

If someone is willing to enter data from old (1930-1984) tournaments, I still have a little time to process the data. All the data in the book from old tournaments comes from The Vugraph Project.

There are tools available to enter the data if someone is interested and has access to old results/old championship books etc. For example, I don't have the data from the Buenos Aires Bermuda Bowl. If you have some time, and access to the championship books, PM me. I'll offer to host the raw data on a public site if you can do the work. Or I will give it to TVP for them to host.

There's some interesting stuff that is not related to cheating that will (should!!) affect coaching/training of some pairs/players.

It is probably no surprise that there are some active top-level players/pairs that are/were cheating and have not been caught. Yet.

The book will identify the minimum number of pairs that were cheating in Opatija at the European Championships in 2014.

There have been some funny incidents while writing the book. I was able to detect changes in some of the top-level players (think top 200 pairs in the world). Talking with some coaches, it turns out I was able to detect when a player's playing skills/mental abilities had temporarily dropped (divorce, having an affair, undisclosed sickness/family issues etc.). That chapter won't be in the book (no snarky comments on how thick that book would be).
June 18
The formulas adopted by the WBF, ACBL and USBF boards are all different. At one time they each had a different continuous scale.

The newest WBF scale is the most mathematically correct one.

In The Great Reunification, Jim implemented the WBF scale in ACBLscore and ignored what the ACBL board had voted to do.

USBF used the version from ACBLscore and ignored what the USBF board had voted on.

The differences are 0.01 on some fields. Not anything you are likely to notice.

The newer WBF scale fixed some concavity issues from the first version.

Neither the ACBL board nor the USBF (AFAIK) ever formally adopted the (newer) WBF version; however, both use it.

Just a little historical trivia…
June 14
Barbara wrote that someone from ACBL management stated, “another multimillion dollar boondoggle”. This would seem to be another swipe at ACBLscore+.

Once again… ACBLscore+ was fine. ACBL did not own the copyright, so their outside lawyers told them not to use it.

At some point, I'll write more about what really happened, as ACBL keeps trying to re-write history and not take responsibility.

ACBL could be using ACBLscore+ (see Gatlinburg as an example) and be saving money with fewer directors, quicker games etc. etc.
May 28
Cracking the ACBL hand records required three consecutive deals: H1, H2, H3. Finding the seed from H1 to H2 was a brute-force crack; however, there were various flaws in the algorithm/implementation which made the search space a lot smaller. Given the seed for H1->H2, it was trivial, but still necessary, to find the seed from H2->H3. Given two consecutive seeds, it was possible to generate the remaining hands, H4, H5 etc.

It was possible to crack an entire hand record set in about 30-40 seconds on a single machine. This included the time for generating the hand records and doing double dummy analysis on all hands. It usually took longer to enter the cards into the program than it did for the program to crack the hands. The vast majority of the time was in finding the seed from H1 to H2. By the time H3 had been entered, the key for H1->H2 had usually been found.
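The shape of that attack can be sketched with a toy seeded dealer. This is NOT the actual ACBL generator (its algorithm and flaws are not described above); the seed-chaining rule and the 16-bit effective seed space here are invented for illustration:

```python
import random

# Toy illustration of seed cracking: brute-force the seed that maps
# deal H1 to deal H2, then predict every later deal in the set.
SEED_SPACE = 2 ** 16   # assumed small effective seed space due to flaws

def deal(seed):
    """Deterministically shuffle a 52-card deck from a seed."""
    rng = random.Random(seed)
    deck = list(range(52))
    rng.shuffle(deck)
    return tuple(deck)

def next_seed(seed):
    # hypothetical chaining rule from one deal's seed to the next
    return (seed * 69069 + 1) % SEED_SPACE

def crack(h1, h2):
    """Brute-force the seed consistent with the first two observed deals."""
    for s in range(SEED_SPACE):
        if deal(s) == h1 and deal(next_seed(s)) == h2:
            return s
    return None

# Simulate a hand-record set, observe two deals, predict the third.
secret = 12345
h1, h2 = deal(secret), deal(next_seed(secret))
h3 = deal(next_seed(next_seed(secret)))
found = crack(h1, h2)
predicted_h3 = deal(next_seed(next_seed(found)))
```

With a real generator the search space is larger and the chaining rule must be reverse-engineered, which is why entering H1-H3 and 30-40 seconds of compute were needed.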

As an example, the hand records for the Charlotte Regional were each cracked in under a minute on a single machine.
May 14
Use Google Earth to “fly over” the hotel and see the area. There are some covered walkways from the Cosmopolitan to other (cheaper) hotels.
May 11
Law 40.
May 8
Collusive cheating is playing with a pro, having an agreement that the client will not bid notrump, and not disclosing this agreement to your opponents.

Wait to see what happens when the first non-European pair gets charged.
May 7
In ACBL land, Swiss/KO matches are always shuffle-and-deal. The only exceptions are the later rounds of the NABC+ Swiss events (if you are doing well) and the Spingold/Vanderbilt/Reisinger.

This is one of the reasons that playing is much cheaper in ACBL land than elsewhere. Example: the recent Gatlinburg regional was $12 per person per session (24 boards per session).

I have tried multiple times to run Swiss events with Bridgemates + pre-duped boards.

I have offered to run the Gatlinburg Bracketed Swiss as a pre-duped event, but the willingness of the organizers/TDs/players is not there yet.

There are other reasons that some prefer it this way:


I saw it twice in Memphis at the last NABC in the 0-10K Swiss. I warned my partner before we sat down at a certain table what would happen, and we saw it. She later caught it happening at a different table (not one we were at). Two different teams.

Most Bracketed Swiss in my area are now 9 team brackets.

Note that running a club game with pre-dups and running a tournament event are a little different in scope/size.
May 7
I published a declarer skill listing a few years ago.

The defensive “skill” lists (I have several methods) are able to identify the known cheating pairs (and some suspected cheating pairs). I don't publish those. Bridge is a game of mistakes. Even cheating pairs make mistakes. They just make fewer in certain parts of the game. The detection methods are sensitive enough (given a sufficiently large number of boards) to separate the cheating pairs from the top pairs.

If the methodology is known, then cheating pairs would play differently, making it harder to catch them in the future so I don't publish the details.
May 5
There are lots of ways of measuring bridge “skill”. Here's one:

Take all contracts where declarer can make the contract (according to double dummy). Find out the percentage of contracts actually made. The average is around 91-92% for top players in team events.
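That metric can be sketched in a few lines. The record format here is invented for illustration (the actual database schema isn't described above):

```python
from collections import defaultdict

# Hypothetical record format: (declarer, dd_makeable, was_made) per deal,
# where dd_makeable means double dummy says the contract can be made.
def declarer_make_pct(records):
    """Percentage of double-dummy-makeable contracts each declarer made."""
    made = defaultdict(int)
    total = defaultdict(int)
    for declarer, dd_makeable, was_made in records:
        if dd_makeable:            # only count contracts that CAN be made
            total[declarer] += 1
            made[declarer] += was_made
    return {d: 100.0 * made[d] / total[d] for d in total}

records = [
    ("A", True, True), ("A", True, True), ("A", True, False),
    ("A", False, False),   # not makeable double dummy: excluded
    ("B", True, True),
]
print(declarer_make_pct(records))  # {'A': 66.66666666666667, 'B': 100.0}
```

A top declarer in team events scores around 91-92% on this measure over a large sample.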

I take the data from top tournaments from Vugraph, top 100 pairs, leaving 200 individual declarers. Some players appear more than once because of different partners.

On this top 200 list, CN is #9, FN is #25. Buratti is #37, Lanzarotti #64 (92.25%).

There is an advantage if you are playing against the weaker pair in the opposing team. This was probably not the case for F/N. There is also an argument that if you are aggressive bidders you will end up in more marginal contracts than other pairs.

Michael Rosenberg appears on my list with three different partners. Michael: I'll PM you.

Based on the stats, F/N/B/L are all world-class declarers. However, we know from video that some declarers get “help” from their partners.

I have lots of other measurements, including different ways of measuring declarer skill/defensive cheating skill.
May 5
