Testing the Field in BBO Robot Tournaments

I think there are pretty big differences in the strength of the human field in various robot tournaments on BBO. For example, if I have what I subjectively feel is a 62% game, it seems to end up in the mid-to-high 50s in one of the daylong tournaments but in the mid-to-high 60s in an ACBL bot tournament (the latter probably depends somewhat on the time of day as well).

I can think of three ways to test this. I believe option (1) below is probably the most objective in the long run, but it is noisier and more disruptive. Method (2) may be somewhat less accurate but is simpler to implement. Method (3) is the simplest of all but could be very biased, especially if there isn't much crossover between the fields of the various tournament types. Can anyone think of other ways to test? Is method (2) or (3) "good enough", or would anyone actually recommend that BBO implement (1) if they want to understand field strength?

(1) Occasionally, and without advance notice or any discernible pattern (so that you get the same contestants who normally sign up for that tournament), BBO could run these tournaments with half the contestants sitting West instead of South.  You would want to say something to contestants at the start of the first board so they would know it was no longer a "best hand" tournament (though you could still make them have at least as good a hand as their partner if you wished).  The scoring for that particular tournament would obviously be more random, partly because the bots would be making a greater share of the decisions and partly because half the comparisons for human/bot pairs would be to bot/bot pairs, so it would not have so much of the flavor of a par contest.  *But now you can look at the average score of the bot/bot pairs, which will no longer be 50%*.  The worse the bot/bot pairs do, the better the human field.

(2) Have an all-bot table play through all the hands of every tournament (this could be done after the fact).  The lower the score of the bot/bot pair that gets compared against the human/bot field (i.e., the NS pair), the better the human field.  This is pretty similar to (1) and less noisy, but possibly less realistic, since the scores come from a table with no human beings.
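To make the comparison in (2) concrete, here is a minimal sketch in Python. The data layout, function name, and sample scores are all made up for illustration (this is not BBO's actual format): matchpoint the all-bot pair's score on each board against every human/bot NS score, then average across boards. The lower the average, the stronger the human field.

```python
# Illustrative matchpointing sketch for method (2).
# All names and data shapes here are assumptions, not BBO's real API.

def matchpoint_pct(my_score, other_scores):
    """Standard matchpointing: 1 matchpoint for each pair you beat,
    0.5 for each tie, expressed as a percentage of the maximum."""
    mp = sum(1.0 if my_score > s else 0.5 if my_score == s else 0.0
             for s in other_scores)
    return 100.0 * mp / len(other_scores)

# One entry per board: (all-bot NS score, scores of the human/bot NS pairs).
boards = [
    (420, [420, 400, -50]),   # bots tie one pair and beat two
    (-100, [620, 620, 170]),  # bots go down where the humans bid game
]

bot_avg = sum(matchpoint_pct(b, rest) for b, rest in boards) / len(boards)
print(f"all-bot pair scored {bot_avg:.1f}%")  # lower => stronger human field
```

On these toy numbers the all-bot pair scores about 41.7%, which under this reading would suggest a better-than-average human field.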

(3) Look at the track records, in each tournament type, of humans who play in more than one.  This only requires access to the results, so BBO could easily turn it into an index and track field strength over time.  The problem is that the crossover players may not be typical of the rest of the field.
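One way method (3) might be computed, sketched in Python with an invented record format and sample numbers: for each crossover player, subtract their own overall average from their average in each tournament type. That removes player skill and leaves a per-type residual, where more negative means a tougher field.

```python
# Illustrative field-strength index for method (3).
# The (player, tournament_type, percentage) record format is an assumption.
from collections import defaultdict

def field_strength_index(results):
    """Average, over crossover players, of each player's per-type mean
    minus their own overall mean.  Negative => stronger (harder) field."""
    by_player = defaultdict(lambda: defaultdict(list))
    for player, ttype, pct in results:
        by_player[player][ttype].append(pct)

    residuals = defaultdict(list)
    for types in by_player.values():
        if len(types) < 2:          # keep only crossover players
            continue
        avgs = {t: sum(v) / len(v) for t, v in types.items()}
        overall = sum(avgs.values()) / len(avgs)
        for t, a in avgs.items():
            residuals[t].append(a - overall)

    return {t: sum(v) / len(v) for t, v in residuals.items()}

results = [
    ("alice", "daylong", 58.0), ("alice", "acbl", 66.0),
    ("bob", "daylong", 52.0), ("bob", "acbl", 60.0),
]
print(field_strength_index(results))
```

With this toy data the daylong residual is -4 and the ACBL residual is +4, matching the intuition in the opening paragraph that the same player scores higher against the ACBL field.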

Does anyone care?  Is this worth looking at?  Would sponsors be embarrassed if they knew they had the worst fields?  Or would it be useful information to help them focus their marketing?  

