All comments by Nicolas Hammond
The original post stated, “The Anti-Doping Tribunal found that the doping violation had not influenced the performance of the player”.

Who or what is the “Anti-Doping Tribunal”?

I can find no reference to an Anti-Doping Tribunal. The first and only reference to the WBF Anti-Doping Tribunal is in the press release.

Who is on the Anti-Doping Tribunal? What are their qualifications?

Who appoints people to the Anti-Doping Tribunal?

The press release raises far more questions than it answers.
Sept. 17
The original article stated, “The decision has been endorsed by the Court of Arbitration for Sport.”

This was puzzling to me. IANAL, but CAS exists to resolve disputes; it is not an advisory or endorsement board. I cannot find any reference on the CAS site to this endorsement.

Can anyone provide details on who at CAS made this endorsement?

When did the WBF make this request to CAS?

Presumably it must have happened after the “Anti-Doping Tribunal” ruling and before the WBF Executive Committee decision.

Details please….
Sept. 17
Several comments, some of them long, so I will post separately.

http://www.worldbridge.org/rules-regulations/anti-doping-regulations/ has the WBF anti-doping rules. They went into effect on January 1, 2015.

http://www.worldbridge.org/wp-content/uploads/2018/05/Players_guidelines.pdf has the player guidelines.

According to page 2 of this document, which was published before the event in question:
“Whether or not substances are expected to affect performance in bridge is irrelevant. If they are on the WADA prohibited list then they may not be used without a TUE (Therapeutic Exemption Certificate)”

The bolding is in the original document.

(For the conspiracy theorists: when you see a URL containing ‘2018/05’, you naturally assume the file was uploaded in 2018, probably in May (05). This file, however, appears to have been most recently updated on May 20, 2019, after the Orlando competition. When you maintain a “living” file, it is normal to include a version number and date in the body so that it is clear when the file was changed. Those are missing from the Player Guidelines document. My comments assume that this file has not changed in substance since May 2018, though clearly the current version is a later file. If you are a player, this means you are responsible for downloading the current version of the file each time you play; it looks like the WBF may choose to update older documents with newer rules. Nice.)
Sept. 17
Nicolas Hammond edited this comment Sept. 17
I think you meant to write “Bridge player” and not “Sumo wrestler”. Bridge does have a precedent with Disa and a diet drug at the Montreal 2002 event. See https://www.telegraph.co.uk/news/worldnews/northamerica/canada/1406037/Bridge-player-is-stripped-of-medal-for-refusing-drug-test.html.

In this instance she was not awarded the medal, but the team was allowed to keep theirs.

The difference between 2002 and 2018 is that the WBF now has rules in place for what happens if a player on a team fails a drug test.
Sept. 17
Ask Anna what operating system she is using. Ask her what Country/Region is set in iTunes; I suspect it is set to her native country. It is not worth changing this to China for the short duration of the event. Based on her OS, she can check the digital signature of the download; there are instructions online on how to do this for each OS. If she remains concerned, she should not do any updates while traveling.
Sept. 16
@Fred. First: appreciate you buying and reading the book. You are one of the few with the ability to have generated something similar.
Second: probably not the place or time, but thank you to BBO for donating approximately $100K+ of software to the ACBLscore+ project. It saved everyone some time/money and made that product better. You gave it freely, no licensing fees, no royalty fees.

Answering Shawn's questions:
1. I did not know who Bill James was. I had to Google him. I know now.
2. Shawn referred to Richard Pavlicek's work and to Boye's bridgecheaters.com site. I'll let Richard/Boye speak for themselves.
3. I looked at Richard's site; the correct URL is http://www.rpbridge.net/9y82.htm
The data is for 2005-2014 and is sorted by average IMPs to par for partnerships.
Richard, with some comments and explanations, removed the data for the 2014 Rosenblum and all data from Italy v. a very weak team.

I can generate the same data that Richard has. So I did. I get similar, but slightly different, results. My results are probably different because I filter out some boards with incorrect BBO data. For example, if the opening lead is not in the hand of the person who made it, there is a problem somewhere. I also included the Venice Cup and Seniors data; it was unclear whether Richard included these or not.
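
In rough terms, the filter is something like the sketch below. The field names and record layout are invented for illustration; the real BBO data format differs.

```python
# Minimal sketch of a board-level sanity filter, assuming each record
# carries the four hands and the opening lead. Field names are
# placeholders, not the actual record format.

def lead_is_consistent(board):
    """True if the opening lead card was actually held by the leader."""
    leader = board["leader"]                 # e.g. "W"
    lead = board["opening_lead"]             # e.g. "S4" (spade four)
    return lead in board["hands"][leader]    # hands: dict seat -> set of cards

def clean_boards(boards):
    """Drop boards whose records are internally inconsistent."""
    return [b for b in boards if lead_is_consistent(b)]
```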

To give an example: in Richard's list, Piekarek/Smirnov rank #34 on average IMPs to par.

I used the same input data as Richard (however I did not remove the Italy v. weak teams records). I had 48 pairs with 500+ boards. On my average IMPs to Par list, P/S ranked #42.

In my book, I also publish the average IMPs to Par. I take the top 100 pairs and rank them. I didn't list the entire set of 100 pairs for space reasons. I did the data for Open, Seniors, and Women.

Some of the first drafts of my book sent out for review listed all the players, including all the players' names in all of the cheating detection functions (the data had been randomized, i.e. the set of names was correct and the set of values was correct, but in the very early draft copies the correspondence of name to value was random, to avoid any issues). My lawyer had me remove the names. Just to be clear: before the book was published, I did receive notice of a potential lawsuit. The printed version of the book has no randomization; it lists the correct names and data.

There are some tables, IMPs to Par as an example, where I list the results with names and data, similar to Richard's web site. However, I usually remove the names of players who are not known cheating pairs from the top 10 of these lists. The lawyer advised that this was best, but it was a gray area.

I did ask some top pairs if I could include their names in all data. Some said yes. In the end, I only used Kit Woolsey/Fred Stewart. They are listed by name in most tables, including the cheating detection ranked tables. There was at least one pair (from the top 100 pairs with the most data) that did not want their name listed. I honored this request for the tables that are cheating-related. For tables not necessarily related to cheating, I listed their name along with the value for the table. I did not remove any data/names from the scatter plots. The scatter plots, for me, are the clearest indication that there are cheating pairs that have not been caught.

To detect cheating, I use a different approach from Richard's. I don't use IMPs to Par. As an example: I take the same data set that Richard has (see link above) and rank by one of my cheating detection functions. Using this test, the results are:

#3 BZ
#4 PS
#6 FN

And the big question will be, who are #1, #2, #5?

(BTW, Fisher/Schwartz had only 324 boards of data, with 500 being the minimum. If F/S were included, they would be #1 on this list.)

Because this is the result of a formula designed to detect cheating, listing the names of #1, #2, #5 publicly could be construed as claiming that they cheat. The legal argument is that these are statistics derived from public data; the counter-argument is that naming the formula “cheating detection” has implications.

I have not named #1, #2, #5 publicly.

However, I have provided to some of the appropriate NBOs the names of some of the players that the software identifies as cheating, along with information on which part of the game they are cheating in. I have not provided all the data because, for commercial reasons, I am hoping that the NBOs may want to license the software. But… they want to know that the software works, so I have to show something. Proving that they are cheating is the harder part. The NBOs still do not know how/where to put cameras to detect cheating. (Ask a casino security person where they would put cameras to detect cheating; then look at where the NBOs put cameras. The cameras are not designed to detect cheating.)

Who is the first honest pair from this list? Probably #7. They drop below the threshold I use for cheating detection. But if I list the names of the top ten, you may think that #7 is a cheating pair and #8 or #9 is the first honest pair.
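
For the curious, the ranking step itself is conceptually nothing more than the sketch below; the detection function stays a black box, and every name here is a placeholder rather than my real code.

```python
# Sketch of the ranking step: apply the 500-board minimum, score each
# pair with a detection function, and sort. `cheat_score` stands in for
# the (undisclosed) formula; higher means more suspicious.

MIN_BOARDS = 500

def rank_pairs(pairs, cheat_score, min_boards=MIN_BOARDS):
    """pairs: dict mapping pair name -> list of board records.
    Returns names sorted so that index 0 is rank #1."""
    eligible = {name: bs for name, bs in pairs.items() if len(bs) >= min_boards}
    return sorted(eligible, key=lambda name: cheat_score(eligible[name]),
                  reverse=True)
```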

For reference, for this data set, I have over 2,100 boards on Fred Gitelman/Brad Moss. On my cheating formula, you rank #22 out of 48. On IMPs to Par, you rank #19.

Hope I have answered some of your questions.
Sept. 16
For those with too much time on their hands…

Here is the WBF Anti-Doping policy/guidelines:
http://www.worldbridge.org/rules-regulations/anti-doping-regulations/

From there, the WBF rules are at:
http://www.worldbridge.org/rules-regulations/anti-doping-regulations/wbf-antidoping-regulations/

The real details that you need to read are at
http://www.worldbridge.org/wp-content/uploads/2016/12/wbfantidopingregulations.pdf

IANAL but…

The rules for individual results in a team event are at
ARTICLE 9 AUTOMATIC DISQUALIFICATION OF INDIVIDUAL RESULTS (page 31)

The consequences of an individual's failure of a drug test in a team event are at
ARTICLE 11 CONSEQUENCES TO TEAMS (page 44)

This is the Article to read:

“11.2.1 An anti-doping rule violation committed by a member of a team in connection with an In-Competition test, automatically leads to Disqualification of the result obtained by the team in that Competition, with all resulting consequences for the team and its members, including forfeiture of any medals, points and prizes.”

The WADA definition of a TUE and of retroactive TUEs is at
4.3 WADA’s Determination of the Prohibited List (page 12)
I list this because it could be cited as a possible excuse.

The WBF seems to have ignored WADA, leading to possible complications in countries where bridge receives government funding because the WBF is associated with the IOC.

Though this decision may have an initial impact on Zimmermann, the far-reaching consequences will be for the NBOs that receive money from their governments because Bridge is “an Olympic Sport”. Those governments can now show that the WBF does not follow WADA rules.
Sept. 15
Also still waiting on the WBF ruling on Team Monaco's second-place finish in Bali 2013.

See https://bridgewinners.com/article/view/were-mr-fantonimr-nunes-cheating-in-the-bermuda-bowl-in-bali-2013/

(Poland won the Bronze medal with Balicki/Zmudzinski).
Sept. 15
Ironically, I chatted with the lawyer today before I read BW…

I have posted some statistical information in the past. See https://bridgewinners.com/article/view/statistically-the-best-declarer-is/

Somewhat tongue in cheek in places, but the point made then, and repeated now, is “The only reason for this article is to show the dangers of using statistics in Bridge.”

I generate a _lot_ of statistics on all pairs. Data is from ACBL, WBF, EBL, BBO. My database is over 11,000,000 hands.

I have created several algorithms that can be used to detect cheating. All the known cheating pairs appear high (very high) on these lists. There are other pairs on these lists that appear higher than the known cheating pairs. The implication is obvious. All the data I use is public. Legally, I could publish all the data I have.

At some point there will be the first honest pair. It is unfair to that pair to be grouped with the cheaters.

Statistically, we expect some variations. In my book I cover what these are, and what we can expect. As an example, from p26, “As an example, early on during this work, there was a reasonably well-known pair that had a statistically fantastic tournament. So amazing that you would be convinced that they had to be cheating. Their next tournament was even worse than their previous one was good. Combined, their statistics were below average. Over their career they are still below average compared to their peer group. This story is told to emphasize the difficulty of drawing conclusions from a small data set. In order for Bridge statistics to work, you must have a large data set. How large a data set depends on the data that you are processing.”

Another problem is understanding the “data behind the data”.

Example: you/I play at the local club, average age 85. We do very well. The statistics of this event must be appropriately factored (or ignored). For cheating detection, the first hint is how well you do against all pairs using some of the cheating detection methods I have. If you appear high on this list, then I look at how well you do against the top ‘n’ pairs in the world. Most people suffer a drop in their statistics. Some don't.
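
Conceptually, that two-stage check is roughly the sketch below. The `opponents` field, the top-pair list, and `metric` are assumptions about the data layout, not my actual code.

```python
# Sketch of the two-stage check: compute a pair's average metric over
# all boards, then only over boards against top opponents, and look at
# the drop. `metric` is any per-board statistic.

def strength_split(boards, top_pairs, metric):
    """Returns (overall average, average vs. top pairs, drop)."""
    overall = sum(metric(b) for b in boards) / len(boards)
    vs_top = [b for b in boards if b["opponents"] in top_pairs]
    if not vs_top:
        return overall, None, None
    against_top = sum(metric(b) for b in vs_top) / len(vs_top)
    return overall, against_top, overall - against_top
```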

One of the more interesting ways to split the data is pre-2015 v. post-2015.

I have a formula that can calculate the “quality” of bridge. The higher the number, the better the declarers; the lower the number, the better the defense. We would expect the number to tend to its expected value given a large number of boards. I also have the ability to process the same data without any known cheating pairs.

If I take the data from the “top” tournaments from before 2015, my “quality” number is 1.238. If I remove the known cheating pairs, the number increases to 1.262. This is expected: cheating players cheat on defense, therefore the defense is worse without them. This validates the “quality” calculation. I then take the same tournaments but look at post-2015. We would expect the number to be close to 1.262. It is 1.323.
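
Conceptually, the era comparison is roughly the sketch below. The real “quality” formula is not disclosed; purely for illustration, this sketch assumes a ratio of declarer tricks taken to double-dummy expected tricks, so that better declarer play raises the number and better defense lowers it.

```python
# Sketch of the pre/post-2015 comparison. The quality() formula here is
# an invented stand-in, not the one behind the 1.238/1.262/1.323 numbers.

def quality(boards):
    taken = sum(b["tricks_taken"] for b in boards)      # by declarer
    expected = sum(b["dd_tricks"] for b in boards)      # double-dummy par
    return taken / expected

def era_comparison(boards, known_cheaters, cutoff=2015):
    pre = [b for b in boards if b["year"] < cutoff]
    pre_minus = [b for b in pre if b["pair"] not in known_cheaters]
    post = [b for b in boards if b["year"] >= cutoff]
    # If nothing changed, quality(post) should be close to quality(pre_minus).
    return quality(pre), quality(pre_minus), quality(post)
```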

Again, from my book, p 145:

'The difference in values between the “Pre-2015 (-n)” and “Post-2015” can only be explained by:

• Declarers suddenly improved. Perhaps there were required masterclasses in late 2015 on improving declarer play. I suspect not.
• Defenders suddenly became a lot worse. Perhaps all top-level players took my famous “how to become a worse defender” class.
• A statistical anomaly. The volume of data makes this unlikely.
• Undetected cheating pairs stopped cheating.
'

Another example: I have a function that creates statistics on opening leads (p 128 of the book).
Here are the results from before 2015, from a table ranking the top 100 pairs:

1. <American pair – names withheld> 63.78%
2. Lotan Fisher/Ron Schwartz 63.14%
3. Andrea Buratti/Massimo Lanzarotti 63.10%
4. <European pair – names withheld> 61.90%
5. Entscho Wladow/Michael Elinescu 61.13%
6. Alexander Smirnov/Josef Piekarek 60.67%

From p 129, “Post 2015 had an impact on the American pair that is at the top of this list. Let me leave it at that.”

You may draw your own conclusions. What I did not provide in the book was the amount of data on each of the pairs (obviously I have it); this is needed for you to determine the statistical likelihood of a pair being that accurate on opening leads.
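
As a rough illustration of how such a table can be built: the exact definition of the opening-lead statistic is not given in the book, so this sketch simply assumes a lead counts as a hit when it matches a double-dummy-best lead.

```python
# Sketch of an opening-lead accuracy table. `dd_best_leads` (the set of
# double-dummy-optimal leads) and the 100-board minimum are assumptions.

def lead_accuracy(boards):
    """Fraction of boards where the actual lead was double-dummy best."""
    hits = sum(1 for b in boards if b["opening_lead"] in b["dd_best_leads"])
    return hits / len(boards)

def lead_table(pairs, min_boards=100):
    """pairs: dict pair name -> boards on which that pair made the lead."""
    rows = [(name, lead_accuracy(bs)) for name, bs in pairs.items()
            if len(bs) >= min_boards]
    return sorted(rows, key=lambda row: row[1], reverse=True)
```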


@Shawn. Richard P. and I probably use different approaches to analyzing the data. A lot of what I focus on is detecting cheating. Also, a lot depends on who you play against: play against weak opponents and you should do well.
Sept. 15
Wouldn't a better title be “Scribbles in the Face”?
Sept. 15
I have software that detects cheating.

For the 2011 Transnationals, based on Vugraph data, F/S were almost perfect in defense. No-one is that good. There are not enough boards to statistically prove that they were cheating; just that their defense rating for this tournament is better than the long-term record of any major partnership in history. BUT… there are not enough boards to statistically prove they were cheating (using the thresholds I use). Enough to stick a camera on them, but I don't think this was done in 2011. Everyone has good/bad tournaments. F/S were exceptional when on Vugraph for this tournament. Their data triggered the cheating detection software for this tournament.

For the 2011 data based on the WBF web site, F/S were very good. They were not good as declarers, but were exceptional on defense. If I take the top 180 pairs (chosen so that I have at least 100 boards on defense), then F/S are ranked just over 100 in terms of amount of data. If I sort by a cheating metric I use, then F/S rank #17 on defense and #126 as declarers. The discrepancy is enough to trigger the cheating detection software. However… there are three pairs that played a similar or greater number of boards, declared worse than F/S, and defended better than F/S. None AFAIK have been convicted of cheating; all six of these players are still playing.
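
Conceptually, the discrepancy trigger is no more than the sketch below; the gap threshold is an invented placeholder, not the one the software actually uses.

```python
# Sketch of the declarer/defense discrepancy flag. Ranks come from
# sorting pairs by some metric (1 = best); the threshold is illustrative.

def discrepancy_flags(defense_rank, declarer_rank, gap_threshold=75):
    """Flag pairs whose defensive rank is far better than their declarer
    rank, e.g. #17 on defense v. #126 as declarer."""
    flagged = {}
    for pair, d_rank in defense_rank.items():
        gap = declarer_rank[pair] - d_rank
        if gap >= gap_threshold:
            flagged[pair] = gap
    return flagged
```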
Sept. 13
“I know this was probably available in bridgescore+ and the ACBL decided to scrap it ;-). <jk>”

The ability to import ACBLscore game files was in ACBLscore+. ACBL has that code, and so does my company. We can each use it as we see fit. ACBL did not want to make it public domain.

After the ACBLscore+ contract was over, I renamed the product Bridgescore+ and added the ability to import WBF, EBL, ACBL, and BBO data from their web sites. This is not part of ACBLscore+; ACBL has no rights to this (new) code.

I gave a proposal to the WBF about 18 months ago to license all this work, including all the anti-cheating work, but they have yet to decide on it. This would give you the database you want (assuming it was decided to open it up). The problem with opening it up is that it then becomes (relatively?) easy to work out who has been cheating.
Sept. 4
@David: See coaching advice at https://bridgewinners.com/article/view/re-nic-hammonds-detecting-cheating-at-bridge/?cj=842365#c842365

“Unlike most partnerships you actually do better against stronger players. This means you are probably concentrating less when playing against weaker players. This is a part of the game that you should work on. ”

This is the value of statistics in Bridge.
Sept. 3
“Good” is usually subjective. I have objective data.

I take the data from the top tournaments on Vugraph, and from that I take the top 200 pairs based on the amount of data.

I have software that detects cheating.

One of the things I look for is the difference between declarer play and defensive play.

I compare these 200 pairs on defense/declarer ability difference.

The German Doktors rank #5 on this list. Piekarek/Smirnov rank #4, BZ rank #6. Buratti/Lanzarotti are #12.

Of the top ten pairs on this list, they are in the bottom two in terms of defensive ability. So even with their cheating, they are still (comparatively) bad defensively. But there is a huge difference between their declarer play and their defensive play. This is usually a strong signal that a pair is cheating.
Sept. 2
The filming of the finals of the d'Orsi Bowl was public. The videos are online. The cameras were mounted on stands in the room, so it was pretty obvious that the players were being filmed. You will see some players looking at the camera.
Sept. 2
Michael Clark did some good videos. See https://www.youtube.com/watch?v=1xVj1EQ_vSI
Sept. 2
The German players have little choice. A German court has ruled on the matter. See https://www.sueddeutsche.de/panorama/manipulations-vorwuerfe-gericht-beendet-husten-affaere-1.3750997

See https://bridgewinners.com/article/view/elinescu-and-wladow-win-their-case-against-the-wbf/ for Adam Wildavsky's post.

The WBF had several procedural issues with their hearing in Dallas on the German doctors. (See page 64 of my book for one of them.) Given the procedural issues, the ruling was likely to be overturned by the courts.
Sept. 2
He's still on the couch. Law 46 b 5 would apply.
Sept. 2
Warren and Bill, with Sharon, personally ran the initial stages of the program.

I do not know how much money they gave to different programs.

They then gave the remainder to the SBL. The SBL created some nice material but relied on local schools to do the teaching. I do not know whether the SBL was charged with seeding programs with money or not.

SBL would not have spent $600K.

I think W&B were going after the schools, not clubs. Someone should ask them.
Aug. 19