They are a lot of fun. They brought a lot of energy to the events they played in. They entered some of the NABC+ events, including the 2017 Reisinger.

The movie was sponsored by the ACBL Education Foundation. See https://www.acbleducationalfoundation.org/page/news-5/news/the-kids-table-movie-gets-cheery-reception-2.html

We must remember that this is their film of what they saw.
Sept. 27
Sept. 27
After the opening lead, the statistics show the accuracy of each card played against double dummy. Any card that gives up a trick (double dummy) is a “bad” card; anything else is a “good” card. Percentages shown are good / (good + bad).

(1) “effective on defense” See above.
(2) “what counts as a mistake” See above.
(3) “who made mistake” Cannot determine. We know who played the card, but partner may have failed to signal properly.
(4) “difficulty of deal” All deals are treated the same.
(5) “ineffectiveness in the auction”. Not really. This statistic measures defensive ‘mistakes’. Separately I calculate (but haven't published) declarer ‘mistakes’. Declarers make more mistakes (ignoring the opening lead) than defenders.
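The good/bad tallying described above can be sketched as follows. This is a minimal illustration, assuming each defender's card can be scored by comparing the defending side's double-dummy trick count before and after the card; the `(before, after)` pair representation is my own, not the author's actual data format.

```python
# Hedged sketch of the statistic described above: each card played after
# the opening lead is scored against double dummy. A card that gives up
# a trick (double dummy) is "bad"; anything else is "good".
def defensive_accuracy(plays):
    """plays: list of (dd_tricks_before, dd_tricks_after) for the
    defending side, evaluated around each defender's card."""
    good = sum(1 for before, after in plays if after >= before)
    bad = len(plays) - good
    return good / (good + bad)

# Three accurate cards and one that gives up a trick -> 0.75:
sample = [(5, 5), (5, 5), (5, 4), (4, 4)]
accuracy = defensive_accuracy(sample)
```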

The Law of Large Numbers should apply given sufficient number of cards played.
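A quick simulated illustration of the Law-of-Large-Numbers point: with enough cards played, the observed “good card” fraction converges on a player's true accuracy. The 0.97 value matches the top-player range quoted in this comment; the card plays themselves are simulated, not real bridge data.

```python
# Simulated convergence of the observed "good card" rate toward the
# true accuracy as the number of card plays grows. Illustrative only.
import random

random.seed(1)  # deterministic for reproducibility
true_accuracy = 0.97
estimates = {}
for n in (100, 10_000, 1_000_000):
    good = sum(random.random() < true_accuracy for _ in range(n))
    estimates[n] = good / n
# The estimate tightens around 0.97 as n grows.
```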

I fully understand that Bridge is a single dummy game and that these statistics are based on double dummy results.

Top players average in the 97.0%-98.0% range for played cards after the opening lead.

For the opening lead, the average rate of accurate leads is ~81%.

I have the opening lead data (+ lots of other data) but don't show it on the web site. There are too few boards for opening lead comparisons to make meaningful sense.

The data is objective; the decision on which statistics to show is subjective.

If there is some data you would like to see displayed, let me know and I'll see if I have it or can easily create it.
Sept. 27
@David. You can scientifically break down Bridge into four categories: bidding, opening lead, defensive play, declarer play. You can further break down bidding into competitive and non-competitive auctions if needed. Butlers cover all four categories. The statistics I have published are just on the defense after the opening lead.

A problem with Butler is that it reflects who you played against. As an example, Morocco did very badly. If you were sitting out when your country played Morocco, your teammates would see their Butler scores increase.
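The Butler mechanics behind that observation can be sketched as follows. This is my own minimal sketch, not the event's actual scoring code: the datum here is a plain average (real Butler calculations often drop extreme scores first), and the IMP bands are the standard WBF scale.

```python
# Minimal Butler-scoring sketch: each table's raw score is compared to
# a datum and the difference converted to IMPs. A weak opponent inflates
# everyone else's raw scores and hence their Butler IMPs.
import bisect

# Lower bound of each IMP band on the standard WBF scale:
# 0-10 -> 0 IMPs, 20-40 -> 1, 50-80 -> 2, ..., 4000+ -> 24.
IMP_BOUNDS = [20, 50, 90, 130, 170, 220, 270, 320, 370, 430, 500, 600,
              750, 900, 1100, 1300, 1500, 1750, 2000, 2250, 2500,
              3000, 3500, 4000]

def imps(diff):
    sign = 1 if diff >= 0 else -1
    return sign * bisect.bisect_right(IMP_BOUNDS, abs(diff))

def butler(scores):
    """Butler IMPs for each table, against a plain-average datum."""
    datum = sum(scores) / len(scores)
    return [imps(s - datum) for s in scores]
```

For example, `butler([620, 620, 620, 100])` rewards the first three tables for a result the fourth table handed them.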

The statistics I have published are objective data based on the pair's and player's performance. They are not a subjective opinion.

I do have separate stats on both bidding and opening leads but haven't published them.

The stats I have published only reflect one of the four categories in Bridge. But they are more reflective of how well the partnership did, ignoring who their opponents were.
Sept. 26
@Louis. Thanks. It is Bob not Bas. My mistake.
Data normalization is one of the hardest problems with BBO data. The BBO records listed “B Drijver”. I have to manually type in the mappings from BBO names to real names. There were 24 teams of 6 players in each of 4 divisions, over 500 names to correct. I didn't do all of them, and got this one wrong. (A couple of others that were wrong were privately caught earlier in the week.) I had incorrectly entered Bas, not Bob, for “B Drijver”.
Sept. 26
For those interested, I have been updating http://www.detectingcheatinginbridge.com/statistics.html after each day of play.
Sept. 25
I have 138 boards for Chiaradia/D'Alelio on lead. Their error rate (bad lead according to double dummy) was 24.6%. Far higher than their contemporaries and today's players. If they were cheating on opening lead, they were not doing it very well.
Sept. 25
@Richard. I have 39 deals with Roth declaring against the Italians. On 8 of them he made an “overtrick”, i.e. one or more tricks more than the contract requires. But this is not a good usage of the term in this context IMHO. For example, 4+1 or 2+1 is an “overtrick”. Either way, I still rate Roth's claim as false.

Here's an example:
http://www.bridgebase.com/tools/handviewer.html?n=sK5hAQ76542dQ43c4&e=sA106h8dKJ875cAQ53&s=sQJ943hK103d10cK876&w=s872hJ9dA962cJ1092&d=W&nn=Tobias%20Stone&en=Pietro%20Forquet&sn=Al%20Roth&wn=Guglielmo%20Siniscalco&d=w&b=16&v=e&a=PP1D2SPPP&p=DA
At the other table:
Sept. 25
I devote ten pages to opening leads in my book. The range for top pairs (> 400 boards) is 15%-23%. Figures 50 and 51 from my book. The Italians are within the range. The graph follows a normal distribution. There are some outliers. I would not read anything into the statement about 16-21%. If anything, the pair that is at 21% is worse than average.
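To make “within the range” concrete, here is a hedged back-of-envelope check, assuming lead errors are roughly binomial around a 19% field mean (the mean and the 400-board threshold come from these comments; the pair's numbers below are invented for illustration).

```python
import math

# Hedged sketch: how unusual is a pair's observed lead-error rate,
# assuming errors are approximately binomial around the field mean?
def lead_z_score(errors, boards, field_mean=0.19):
    p_hat = errors / boards
    se = math.sqrt(field_mean * (1 - field_mean) / boards)
    return (p_hat - field_mean) / se

# A hypothetical pair with 84 bad leads in 400 boards (21%):
z = lead_z_score(84, 400)  # about one standard error above the mean
```

One standard error is well inside normal variation, which is the sense in which a 21% pair is unremarkable.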
Sept. 25
Top level players give up a trick on average 19% of the time on opening lead.

The Italians he played against are in the 16%-21% range.

I have Roth declaring 39 hands in the 1958 and 1967 Bermuda Bowl when playing against the Italians. In 6 of those boards, he made an “overtrick” (which I will define as one more than Double Dummy allows). Two of them were because of the opening lead; the other four came during the play of the hand.

Roth did not have the advantage of double dummy analysis and computers when he made his statement. These are his overtrick boards:

1958 (no play details)
Segment 6, boards 94, 99, 109
Segment 7, board 121

1967

Top players make an “overtrick” approximately 25% of the time.

Roth was only 15% against the Italians.

I rate Roth's claim as FALSE.
Sept. 24
Sometimes there are assigned scores. 288 is a multiple of 16 (16 × 18). Just guessing as to the reason.
Sept. 22
I've published statistics from the Vugraph data for each pair on how they defended after the opening lead. This is a good test of how well the partnership is working on defense. See http://www.detectingcheatinginbridge.com/statistics.html. Only 33 pairs had enough boards (94) to qualify for the list. Moss/Lall ranked #31 of these 33 pairs.
Sept. 22

“I played against the King of Bridge journalism”, Tom said regally.
Sept. 21
Something like that.

The point was that Helgemo didn't play as well as he normally does.

However, statisticians will point out that there isn't enough data for true analysis.

I'm offering you both sides of the coin.
Sept. 20
@David: Familiarity. Laziness. Don't really like them (results presentation). I generally use Tor if I know I'll be searching for things I want to avoid ads on.
Sept. 18
“I should be playing another game”, said Tom w(h)istfully.
Sept. 18
@Espen: Full details of the drug(s) he was taking are posted at http://bridgewinners.com/article/view/anti-doping-violation-by-geir-helgemo-results-in-team-zimmerman-disqualification-from-orlando-rosenblum-cup/

WADA _requires_ all IFs to publish details on all failed tests. No privacy at all. If you agree to play in the main WBF events, you give up your medical privacy.

Helgemo was taking “Clomifene and synthetic Testosterone”. Clomifene is often sold under the name “Clomid”.

There's a lot of discussion on the link I provide about what these drugs do along with a lot of inaccurate speculation.

The real reason why he was taking those drugs should be a private matter and should remain so; unfortunately it is semi-public knowledge, which makes this case even more tragic. Talk to a medical professional and they will explain what the combination is typically used for.

The WADA issue is that these drugs have been shown to improve certain performance. For example, see https://clinicaltrials.gov/ct2/show/NCT03028532

To oversimplify: “After a man reaches the age of 30 years, testosterone levels gradually decrease, falling an average of one percent each year.” (https://www.medicalnewstoday.com/articles/266749.php). Men start their andropause around age 30. For some men, this can present as a “mental fog”: difficulty remembering things, lack of focus/concentration. Here is a sample site: https://evexiasmedical.com/andropause-may-affect-how-you-feel/.

The drugs listed can improve the “mental fog” and concentration issues of andropause.

Now that I've spent 30 minutes searching for some of the above on Google, I'm starting to get some very weird ads, so I suggest using Tor (the browser, not the player) to visit these URLs.
Sept. 17
Nicolas Hammond edited this comment Sept. 17
@Espen: WADA provide a template for International Federations (IF). This makes logical sense. Each IF can then use those rules with minimal changes and without having to spend large sums and create a policy. It is logical for WBF to use this template. The fact that someone at WBF read the policy and added something specific for Bridge is a positive, not a negative. The rules are clearly defined for an individual or team in the WADA template. Someone added a rule to cover pair events.
Sept. 17
@Greg. It is getting into semantics. Was Helgemo's test “In-Competition” or “during or in connection with an Event”? There are different rules. If the former, the team is automatically disqualified. If the latter, the WBF has discretion.

The WBF rules are at http://www.worldbridge.org/wp-content/uploads/2016/11/wbfantidopingregulations.pdf

11.2.1 An anti-doping rule violation committed by a member of a team in connection with an In-Competition test, automatically leads to Disqualification of the result obtained by the team in that Competition, with all resulting consequences for the team and its members, including forfeiture of any medals, points and prizes.
11.2.2 An anti-doping rule violation committed by a member of a team occurring during or in connection with an Event may lead to Disqualification of all of the results obtained by the team in that Event with all consequences for the team and its members, including forfeiture of all medals, points and prizes, except as provided in Article 11.2.3.
11.2.3 Where a Player who is a member of a team committed an anti-doping rule violation during or in connection with one Competition in an Event, if the other member(s) of the team establish(es) that he/she/they bear(s) No Fault or Negligence for that violation, the results of the team in any other Competition(s) in that Event shall not be Disqualified unless the results of the team in the Competition(s) other than the Competition in which the anti-doping rule violation occurred were likely to have been affected by the Player's anti-doping rule violation.
11.2.4 If an anti-doping rule violation is committed by a member of a Pair this automatically leads to Disqualification of the result obtained by the Pair in that Competition, with all resulting consequences for the pair including forfeiture of any medals, points and prizes.

11.2.1-11.2.3 are boilerplate from WADA recommendations for International Federations. 11.2.4 is a WBF addition.

If reliable sources (Jan Martel) are to be believed, then Helgemo's test was on the evening of the first day of the final.

Is this an “In-Competition” test, so that 11.2.1 must apply, or is it a test “during… [an] Event”, so that 11.2.2 applies?
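The distinction can be expressed as a toy decision function. The rule texts are quoted above; this encoding of them is my own reading and purely illustrative.

```python
# Toy encoding of the 11.2.1 / 11.2.2 distinction discussed above.
# The rule texts are real; this mapping is my own reading of them.
def team_consequence(test_context):
    if test_context == "in-competition":
        # 11.2.1: "automatically leads to Disqualification"
        return "automatic disqualification (11.2.1)"
    if test_context == "during-or-in-connection-with-event":
        # 11.2.2: "may lead to Disqualification" -> discretionary
        return "discretionary disqualification (11.2.2)"
    return "no team consequence under 11.2"
```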
Sept. 17
The original post stated, “The Anti-Doping Tribunal found that the doping violation had not influenced the performance of the player”.

How?

What criteria did they use?

Who did the work?

AFAIK, I am the only person with any software that can measure the performance of a player. I generally use the software to detect cheating, but it can also detect changes in performance (one of the markers for a cheating pair, or when a cheating pair stops cheating). It can also be used for coaching, to find the weaknesses in someone's game for improvement.

So…. I was curious… I ran the results through Bridgescore+. Did Mr. Helgemo perform differently at the Orlando 2018 tournament?

Here are the details…

First, the usual caveat: in determining performance, a small set of boards is usually not enough to perform rigorous statistical analysis. The number of boards for this sample is below the threshold that I typically use for evaluating a player's performance. Some of the tests I use require a large number of boards; some require fewer.

I looked at their lifetime values, pre-2015, post-2015 and from Orlando. There is a significant difference for some top pairs from before 2015 and after 2015. Helgemo/Helness (HH) have similar data before/after 2015.

I have 264 boards for Helgemo/Helness for Orlando. This is data from Vugraph. Lifetime I have almost 6800 boards.

See my book (www.detectingcheatinginbridge.com) for more details on the functions.

Their declarer rating for this tournament (1.43) was less than their lifetime declarer rating (1.55). A higher value is better.

Their defense rating for this tournament (1.21) was higher than their lifetime defensive rating (1.03). A lower value is better.

These numbers are meaningless without context. I took the top 100 pairs based on amount of data from top tournaments. I then rank them by their declarer rating. HH are #12. If I use their values from Orlando, they would rank #44.
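The ranking step can be sketched as below. The pair's two ratings (1.55 lifetime, 1.43 in Orlando) come from above, but the comparison field here is made up, so the resulting ranks are illustrative only, not the real #12/#44.

```python
# Hedged sketch of the ranking comparison: place one pair's rating
# within a field of pair ratings. Higher declarer rating ranks better.
def rank_of(value, field):
    """1-based rank of value within field, higher values ranking better."""
    return 1 + sum(1 for v in field if v > value)

field = [1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2]  # invented field ratings
lifetime_rank = rank_of(1.55, field)  # lifetime value ranks 5th here
orlando_rank = rank_of(1.43, field)   # the lower Orlando value drops a place
```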

I do the same but with the defense rating. Lifetime they are #22. One might expect a top pair to rank higher, but #4 is Fisher/Schwartz, #5 is Fantoni/Nunes, #6 is Buratti/Lanzarotti, #7 is Piekarek/Smirnov, #9 is Balicki/Zmudzinski. If I use their defensive rating from Orlando, they would rank #61. Before you ask, pairs #1, #2, #3 are simply brilliant bridge players who, lifetime, have defended better than those players I just listed.

In other words, HH declared worse and defended worse than normal.

Helgemo's lifetime declarer rating is 1.78 (13). Helness is 1.37 (103). The number in the brackets is the ranking of the top 200 players (from the top 100 pairs).

For Orlando, Helgemo's rating was 1.36; Helness's was 1.50.

I look at another statistic: how many errors (compared to Double Dummy) did they make as declarer? In both cases, HH each made fewer mistakes when declaring than their lifetime average. You may be puzzled why this appears to contradict the values above for Helgemo (not Helness). Each function tests something different. Very simplified: the rating value increases as the defenders make more mistakes (declarer “flair”, so to speak), while the double dummy statistic reflects accuracy of card play. Lifetime, both Helgemo and Helness have very similar double dummy declarer error rates. For Orlando, Helgemo made fewer mistakes than Helness and showed the larger improvement.

Next, I look at their individual performance while defending. Helness was consistent. Same rating that he has had lifetime. Helgemo was worse. Quite a big drop (relatively). Helgemo made more defensive mistakes than usual.

What about opening leads? Helgemo was worse on opening leads than normal. Helness better. BUT…. there are only 74 boards with Helgemo on lead, 58 with Helness on lead. Drawing statistical conclusions from samples this small is unreliable.
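A quick, hedged illustration of that small-sample caveat, using a normal-approximation binomial confidence interval around the 19% field lead-error rate mentioned in an earlier comment (the 4000-board comparison figure is arbitrary):

```python
import math

# Half-width of a 95% normal-approximation confidence interval for a
# binomial proportion p observed over n boards. Illustrative only.
def ci_halfwidth(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

small = ci_halfwidth(0.19, 74)    # ~0.089: roughly 10%-28% at 74 boards
large = ci_halfwidth(0.19, 4000)  # ~0.012 with a big lifetime sample
```

At 74 boards the interval spans most of the plausible range of lead-error rates, so “worse than normal” cannot be distinguished from noise.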

The bidding is more complicated to analyze, so I haven't.

Summary:

There are probably not enough boards to pass rigorous statistical standards.

Helgemo played worse than he usually does, particularly on defense. Helness was consistent compared to his lifetime.

Does this prove anything?

Absolutely not.

Any player's performance in a tournament can be affected by many factors. You cannot necessarily correlate it with any drugs he/she may be taking.

From my book (page 108)

“With another pair, I noticed significant dips in playing ability during certain time periods. When I talked with the coach to try to understand the results, it appears I had detected when one of the players was having an affair and a subsequent divorce. The tools really do detect cheating! Imagine the impact on top level bridge if I started publishing that data…”

(Just to be clear, the player I referenced in the book is not Helgemo or Helness).
Sept. 17