Ironically, chatted with the lawyer today before I read BW…

I have posted some statistical information in the past. See https://bridgewinners.com/article/view/statistically-the-best-declarer-is/

I generate a _lot_ of statistics on all pairs. Data is from ACBL, WBF, EBL, BBO. My database is over 11,000,000 hands.

I have created several algorithms that can be used to detect cheating. All the known cheating pairs appear high (very high) on these lists. There are other pairs on these lists that appear higher than the known cheating pairs. The implication is obvious. All the data I use is public. Legally, I could publish all the data I have.

At some point, going down these lists, you reach the first honest pair. It is unfair to that pair to be grouped with the cheaters.

Statistically, we expect some variations. In my book I cover what these are, and what we can expect. As an example, from p26, “As an example, early on during this work, there was a reasonably well-known pair that had a statistically fantastic tournament. So amazing that you would be convinced that they had to be cheating. Their next tournament was even worse than their previous one was good. Combined, their statistics were below average. Over their career they are still below average compared to their peer group. This story is told to emphasize the difficulty of drawing conclusions from a small data set. In order for Bridge statistics to work, you must have a large data set. How large a data set depends on the data that you are processing.”
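The point about small samples can be illustrated with a toy simulation (this is my illustration, not anything from the book): even a pair whose true skill is exactly average will occasionally post a single-tournament result that looks fantastic, while their career average stays near zero.

```python
import random

random.seed(1)

# Hypothetical illustration: every board is pure noise (mean 0), so this
# "pair" has no real edge at all. Per-event averages still swing widely.
BOARDS_PER_EVENT = 32   # a short tournament
EVENTS = 200            # a long career

def event_average(n_boards):
    """Mean per-board score for one event; each board is random noise."""
    return sum(random.gauss(0, 5) for _ in range(n_boards)) / n_boards

career = [event_average(BOARDS_PER_EVENT) for _ in range(EVENTS)]

best, worst = max(career), min(career)
overall = sum(career) / EVENTS
print(f"best event: {best:+.2f}, worst event: {worst:+.2f}, career: {overall:+.2f}")
```

The best and worst single events look dramatic; the career average over all 6,400 boards sits close to the true value of zero. That is the "large data set" requirement in miniature.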

Another problem is understanding the “data behind the data”.

Example: you/I play at the local club, average age 85. We do very well. The statistics of this event must be appropriately factored (or ignored). For cheating detection, the first hint is how well you do against all pairs using some of the cheating detection methods I have. If you appear high on this list, then I look at how well you do against the top ‘n’ pairs in the world. Most people suffer a drop in their statistics. Some don't.

One of the more interesting data set splits is pre-2015 vs. post-2015.

I have a formula that can calculate the “quality” of bridge. The higher the number, the better the declarers. The lower the number, the better the defense. We would expect the number to tend to its expected value given a large number of boards. I also have the ability to process the same data without any known cheating pairs.

I take the data from the “top” tournaments from before 2015; my “quality” number is 1.238. I remove the known cheating pairs. The number increases to 1.262. This is expected. Cheating players cheat on defense; therefore the defense is worse without them. This validates this “quality” calculation. I then take the same tournaments but look at the post-2015 data. We would expect the number to be close to 1.262. It is 1.323.
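The actual formula is not disclosed in the book. Purely as a hypothetical stand-in, a ratio with the same behavior (higher when declarers do well, lower when defenders do well) could be built from per-board comparisons of tricks taken against a benchmark such as double-dummy:

```python
# Hypothetical proxy for an undisclosed "quality" formula -- NOT the book's
# actual calculation. Each board is (tricks_taken, benchmark_tricks); boards
# where declarer beats the benchmark credit declarer play, boards where
# declarer falls short credit the defense.

def quality(boards):
    """Return a ratio > 1 when declarers outperform the benchmark overall."""
    declarer_gains = sum(max(taken - dd, 0) for taken, dd in boards)
    defense_gains = sum(max(dd - taken, 0) for taken, dd in boards)
    if defense_gains == 0:
        return float("inf")
    return declarer_gains / defense_gains

# Made-up sample: declarer gains 1 + 1 + 2 = 4, defense gains 1 + 1 = 2.
sample = [(10, 9), (9, 9), (8, 9), (11, 10), (7, 8), (10, 8)]
print(quality(sample))  # -> 2.0
```

Under a metric of this shape, removing pairs who cheat on defense would remove boards that favor the defense, pushing the ratio up, which matches the 1.238 → 1.262 move described above.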

Again, from my book, p 145:

'The difference in values between the “Pre-2015 (-n)” and “Post-2015” can only be explained by:

• Declarers suddenly improved. Perhaps there were required masterclasses in late 2015 on improving declarer play. I suspect not.
• Defenders suddenly became a lot worse. Perhaps all top-level players took my famous “how to become a worse defender” class.
• A statistical anomaly. The volume of data makes this unlikely.
• Undetected cheating pairs stopped cheating.
'

Another example: I have a function that creates statistics on opening leads (p 128 of the book).
Here are the results from before 2015. This is a table ranking the top 100 pairs.

1. <American pair – names withheld> 63.78%
2. Lotan Fisher/Ron Schwartz 63.14%
3. Andrea Buratti/Massimo Lanzarotti 63.10%
4. <European pair – names withheld> 61.90%
6. Alexander Smirnov/Josef Piekarek 60.67%

From p 129, “Post 2015 had an impact on the American pair that is at the top of this list. Let me leave it at that.”

You may draw your own conclusions. What I did not provide in the book was the amount of data on each of the pairs (obviously I have it); this is needed to determine the statistical likelihood of a pair being that accurate on opening leads.

@Shawn. Richard P/I probably use different approaches to analyzing the data. A lot of what I focus on is detecting cheating. Also, a lot depends on who you play against: if you play against weak opponents, you should do well.
Sept. 15, 2019
Shouldn't a better title be “Scribbles in the Face”?
Sept. 15, 2019
I have software that detects cheating.

For the 2011 Transnationals, based on Vugraph data, F/S were almost perfect in defense. No-one is that good. There are not enough boards to statistically prove they were cheating (using the thresholds I use); just that their defense rating for this tournament is better than the long-term record of any major partnership in history. Enough to stick a camera on them, but I don't think this was done in 2011. Everyone has good/bad tournaments. F/S were exceptional when on Vugraph for this tournament. Their data triggered the cheating detection software for this tournament.

For the 2011 data based on the WBF web site, F/S were very good. They were not good as declarers, but were exceptional on defense. If I take the top 180 pairs (chosen so that I have at least 100 boards on defense), then F/S are ranked just over 100 (in terms of amount of data). If I sort by a cheating metric I use, then F/S ranked #17 on defense, #126 as declarers. The discrepancy is enough to trigger the cheating detection software. However… there are three pairs that played a similar or greater number of boards, declared worse than F/S, and defended better than F/S. None, AFAIK, have been convicted of cheating; all six of these players are still playing.
Sept. 13, 2019
“I know this was probably available in bridgescore+ and the ACBL decided to scrap it ;-). <jk>”

The ability to import ACBLscore game files was in ACBLscore+. ACBL has that code; so does my company. We can each use it as we see fit. ACBL did not want to make it public domain.

After the ACBLscore+ contract was over, I renamed the product Bridgescore+ and added the ability to import WBF, EBL, ACBL, and BBO data from their web sites. This is not part of ACBLscore+; ACBL has no rights to this (new) code.

I gave a proposal to WBF about 18 months ago to license all this work, including all the anti-cheating work, but they have yet to decide on it. This would give you the database you want (assuming it was decided to open it up). The problem with opening it up is that it then becomes (relatively?) easy to work out who has been cheating.
Sept. 4, 2019
@David: See coaching advice at https://bridgewinners.com/article/view/re-nic-hammonds-detecting-cheating-at-bridge/?cj=842365#c842365

“Unlike most partnerships you actually do better against stronger players. This means you are probably concentrating less when playing against weaker players. This is a part of the game that you should work on. ”

This is the value of statistics in Bridge.
Sept. 3, 2019
“Good” is usually subjective. I have objective data.

I take the data from the top tournaments on Vugraph. I take the data from the top 200 pairs based on the amount of data.

I have software that detects cheating.

One of the things I look for is the difference between declarer play and defensive play.

I compare these 200 pairs on defense/declarer ability difference.

The German Doktors rank #5 on this list. Piekarek/Smirnov rank #4, BZ rank #6. Buratti/Lanzarotti are #12.

Of the top ten pairs, they are in the bottom two in terms of defensive ability. So even with their cheating, they are still (comparatively) bad defensively. But there is a huge difference between their declarer play and their defensive play. This is usually a strong signal that a pair is cheating.
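A minimal sketch of that signal, with invented names and numbers (this is an illustration of the idea, not the actual metric or data): rate each pair separately as declarer and as defender, then rank by the gap between the two ratings.

```python
# Hypothetical ratings on a 0-100 scale; all names and numbers are invented.
pairs = {
    "Pair A": {"declarer": 72.0, "defense": 70.5},
    "Pair B": {"declarer": 68.0, "defense": 81.0},  # large gap: worth a closer look
    "Pair C": {"declarer": 75.0, "defense": 74.0},
}

def gap(ratings):
    """Defense rating minus declarer rating; a large positive gap is the
    declarer/defense discrepancy described above."""
    return ratings["defense"] - ratings["declarer"]

# Rank pairs by gap, largest first.
flagged = sorted(pairs, key=lambda name: gap(pairs[name]), reverse=True)
print(flagged)  # Pair B tops the list: defense far outstrips declarer play
```

The intuition: card-play skill is one skill, so honest pairs tend to be roughly as good declaring as defending; illicit information helps mostly on defense, so a defense rating far above a pair's own declarer rating stands out.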
Sept. 2, 2019
The filming of the finals of the D'Orsi bowl was public. The videos are online. The cameras were mounted on stands in the room. It was pretty obvious that they were being filmed. You will see some players looking at the camera.
Sept. 2, 2019
Michael Clark did some good videos. See https://www.youtube.com/watch?v=1xVj1EQ_vSI
Sept. 2, 2019
The German players have little choice. A German court has ruled on the matter. See https://www.sueddeutsche.de/panorama/manipulations-vorwuerfe-gericht-beendet-husten-affaere-1.3750997

The WBF had several procedural issues with their hearing in Dallas on the German doctors. (See page 64 of my book for one of the procedural issues). Given the procedural issues, the ruling was likely to be overturned by the courts.
Sept. 2, 2019
He's still on the couch. Law 46 b 5 would apply.
Sept. 2, 2019
Warren and Bill, with Sharon, personally ran the initial stages of the program.

I do not know how much money they gave to different programs.

They then gave the remainder to the SBL. SBL created some nice material but relied on local schools to do the teaching. I do not know if SBL were charged with seeding programs with money or not.

SBL would not have spent $600K.

I think W&B were going after the schools, not clubs. Someone should ask them.
Aug. 19, 2019
The “And I Wanted To … ” series probably deserves its own category on BW.

I suspect it would quickly fill up all available disk space.
Aug. 19, 2019
Warren and Bill did put up money. They personally helped administer it in its early days. See https://bridgewinners.com/article/view/buffets-gates-world-view/ for some of the stories. I know first-hand of programs that received money (not ACBL-affiliated) and also programs that requested funding and received nothing. As the Gates/Buffett program progressed, the money was transferred to the now-defunct School Bridge League.

I don't know when Jeff requested money, but follow the link I provided for one positive story.

To see a successful outcome: http://www.actuarialoutpost.com/actuarial_discussion_forum/showthread.php?t=86708
Aug. 18, 2019
I hope people appreciate the difference between a good tournament and a great tournament. It is stuff like this. Hopefully Kevin got paid for making this number of board sets; it's typically a tournament expense, and almost certainly not part of his ACBL salary.

Kudos to the tournament organizer/chair for understanding that these things improve the player experience.
Aug. 16, 2019
@Kevin: Bridgescore+ automatically creates this movement for 9 team RR brackets.
Aug. 15, 2019
For those that know of Little Britain, some of the “computer says no” skits are at
Aug. 15, 2019
Pure extrapolation. I am searching for a good reason why they were so bad statistically compared to their previous statistics at other tournaments.
Aug. 15, 2019
@OP: “It will be very interesting to see what comes from this.”

There were _at least_ two top pairs identified in the book as cheating pairs (not mentioned by name, apart from in the MD5 hashes) that played in the recent Spingold. I provided details to ACBL on the pair(s) and which parts of the game they are cheating in; this would make video inspection easier. These pairs were “targeted”. I'm struggling for the right word; “targeted” is not quite right, but there were some additional things done for these pair(s). ACBL may have had their own list and their own additional pairs to target; I don't know about any additional pairs. Obviously the idea would be that the pair(s) would be unaware of any additional activity, but it seems that at least one of the pairs was aware. People talk. I don't think any of the other players in the room (and I was one for the round of 64) were aware of anything different, and that's the way it should be. Catching some of the cheaters will take some additional actions. No, I was not part of any covert activity.

Bridgescore+ has processed the Spingold 2019 data.

One of the pairs that was “targeted” had a statistical aberration. They played nowhere near their normal historical level. Was it because they were aware of possible additional activity? Or just a bad tournament? I can look at their results from all previous tournaments and compare. Statistically, they ranked n-1 out of the top n pairs (I'm not giving out ‘n’ as I don't want to give out too much information, but n is greater than 10). This is not their normal performance or reputation.

If the knowledge that you can be detected stops players cheating, this is a worthwhile benefit to the honest players.

There are some more statistics on the Spingold that I will post later.
Aug. 14, 2019
@Richard R: “Nicolas, while you’re at it…Do I have a shot with Elizabeth Hurley? ;-)”

The computer says no
Aug. 14, 2019
Nicolas Hammond edited this comment Aug. 14, 2019