jetset said:
We are not talking about government here - we are talking about a private initiative involving major industry companies with good records trying to bring some safety and order to an unregulated industry that spans international boundaries and clearly has problems.
My point was that companies will do anything if you pay them enough money.
jetset said:
eCOGRA has already said that it consulted widely with experts on the formulation of its eGAP, and its advisers felt that the TGTR method for assuring games fairness on an ongoing basis was the way to go.
I believe the point has been previously made that like other major international financial and business services groups, PwC's expertise is not confined to being simply "...an accounting firm"
Who was their expert on the mathematical and statistical aspects of eGAP? As far as I can tell from their biographies, the directors of eCOGRA have no mathematical background, yet they approved flawed criteria for game fairness. If they did not hire anyone with mathematical expertise, I am very concerned about the standards of the secret statistical tests; if they did, they should ask for their money back.
jetset said:
However, you seem from your postings to have some special expertise in this area so perhaps we can move on from the deadlocked transparency issue and explore some of the areas that Caruso has previously left unanswered in the other thread on this issue.
What in your opinion is the main objective here - is it to provide reassurance that PwC's TGTR system is honest and provides an acceptable level of comfort for the player that the games being presented are fair, or do you have some other and higher benchmark in mind? If so, what is it and how can it be achieved and judged?
If outcomes based testing is not in your opinion the answer, what is? Are you recommending the traditional testing laboratories and do these have access to the source codes of the software companies whose products they are testing? How is the confidentiality issue of this critical proprietary company asset protected - by NDAs? Do you consider this to be sufficient protection?
Is the testing carried out on a "one-off and here's your certificate" basis, or is there ongoing monitoring, top-up inspections and supervision of every access to the software? How is that achieved?
Outcome-based testing is fine with me. I think the principles of OCA were good. My biggest concern would be the integrity of the data: I don't know what measures were taken to prevent players from submitting fake data, or from selectively submitting data only from losing or winning sessions.
If I were doing it, I would first test the raw random number output. If there is not enough randomness here, the games cannot be fair.
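As a rough sketch of what a raw-randomness check might look like (this is my own illustration, not any lab's actual procedure; the bin count and critical value are arbitrary choices for the example), a chi-square uniformity test over the generator's output:

```python
import random

def chi_square_uniformity(samples, bins=10):
    """Chi-square goodness-of-fit statistic for uniformity over equal bins.

    `samples` are floats in [0, 1); under the null hypothesis of a fair
    generator the statistic follows a chi-square distribution with
    bins - 1 degrees of freedom.
    """
    counts = [0] * bins
    for x in samples:
        counts[min(int(x * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

# Illustrative run against Python's own Mersenne Twister.
random.seed(1)
data = [random.random() for _ in range(10_000)]
stat = chi_square_uniformity(data)
# For 9 degrees of freedom, the 1% critical value is about 21.67;
# a fair generator should stay below it the vast majority of the time.
print(round(stat, 2))
```

This only checks the marginal distribution; a real test battery would also have to look at serial correlation, runs, and so on.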
I would then do game-specific tests.

In blackjack, I would test the joint distribution of the player's initial cards and the dealer's up card; the distribution of the dealer's hole card; dealer outcomes; and the player's third and perhaps even fourth card, if the size of the data allows. I would also test whether the distributions are independent of the player's actions: for example, is the probability of the dealer having BJ the same whether or not the player takes insurance? If the player has A,2 vs 5, does he get the same distribution of cards whether he hits or doubles?

In roulette, craps and sic bo, I would test the distribution of the numbers and the independence of successive spins or rolls. I would also test whether the outcomes are independent of the bets, so that red and black are equally likely even when there is more money on red. Bets such as pass/don't pass, come/don't come and the hard ways, which depend on the outcomes of several rolls, would need a special test to make sure there are no hidden dependencies.

Video poker and slots pose a problem, because there may not be enough data to verify that royal flushes or jackpots occur with the correct frequency. Some testing for the correct distribution and independence of the initial five cards dealt in VP should be possible; the hardest thing would be to test the distribution of the cards drawn after the discard, and I don't have an obvious solution to that.

All these tests should also be done grouped by bet level or coin size.
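To illustrate one of those independence checks, here is how "is red equally likely when more money is on red?" could be framed as a 2x2 contingency test. The counts are made up for the example, and the function is a generic chi-square statistic, not anything eCOGRA or PwC has published:

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table [[a, b], [c, d]].

    Rows = betting condition (e.g. more money on red: yes/no),
    columns = outcome (red / not red). Under independence the
    statistic has 1 degree of freedom.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row_totals[i] * col_totals[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts from logged spins: rows = "more money on red?",
# columns = (came up red, did not).
table = [[480, 520], [495, 505]]
print(round(chi_square_2x2(table), 2))  # → 0.45, well under 3.84 (5%, 1 d.o.f.)
```

The same framing works for insurance vs. dealer blackjack in BJ, or hit vs. double on A,2 vs 5: condition on the player's action, tabulate the outcomes, and test for independence.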
Testing should be done on an ongoing basis and cumulative data should also be tested, because a larger sample may produce statistically significant results where a smaller one does not. (For example, losing 10 units in 100 hands of BJ is not unusual, losing 100 units in 1000 hands would be very bad luck but still possible in a fair game, whereas losing 1000 units in 10000 hands would be very convincing evidence of cheating.)
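To put rough numbers on that example: assuming flat one-unit bets, a house edge of about 0.5% and a per-hand standard deviation of about 1.15 units (typical published figures for basic-strategy blackjack, not taken from this thread), the three losses sit at very different distances from expectation:

```python
import math

# Illustrative assumptions, not figures from the thread:
HOUSE_EDGE = 0.005   # expected loss per hand, in units
SD_PER_HAND = 1.15   # standard deviation per hand, in units

def z_score(units_lost, hands):
    """Standard deviations between the observed cumulative loss and the
    expected loss, using the normal approximation for many hands."""
    expected_loss = HOUSE_EDGE * hands
    sd_total = SD_PER_HAND * math.sqrt(hands)
    return (units_lost - expected_loss) / sd_total

for lost, hands in [(10, 100), (100, 1_000), (1_000, 10_000)]:
    print(f"{lost} units in {hands} hands: z = {z_score(lost, hands):.1f}")
# 10 units in 100 hands: z = 0.8      (unremarkable)
# 100 units in 1000 hands: z = 2.6    (bad luck, but possible)
# 1000 units in 10000 hands: z = 8.3  (essentially impossible in a fair game)
```

This is exactly why the cumulative data matters: each step multiplies the loss by ten but the standard deviation only by the square root of ten, so a consistent bias that hides inside the noise at 100 hands becomes unmissable at 10,000.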
My list may not be complete, other people may want to test other things, but I would want to be convinced that at least this level of testing is being carried out and that it is done competently.