sagefr0g said:
just me maybe, the speculated equivalence of ATH probability and expected value smacks of valid reality for advantage plays in general. my suspicion is that the idea of a 'complete play' would be a needed hurdle to overcome in order for said equivalence to be managed mathematically in a practical manner.
You really should read Griffin's book to get a better understanding.
There is actually a section, after the coin-toss example, that looks at blackjack in a similar way. It's necessarily more complicated, and unfortunately it doesn't look at quite the same thing: it deals with how to calculate the probability of doubling your bankroll, rather than the probability of being at an ATH.
I've been loath to quote isolated bits here, both in case I introduce any errors through my own [mis-]interpretations, and because I worry about how much of a book it is OK to reproduce before you get into the realms of piracy! (Plus, I've only ever skimmed through some of the more complicated bits up to now.)
But I'll try and extract a few helpful snippets -
In the coin-toss example we repeatedly bet one unit, and we win one unit with probability p > 0.5, and lose one unit with probability q = 1 - p.
Define x as the chance of always being ahead after the first toss, and y as the probability of never falling behind.
That is, x means you win the first toss and your bankroll never falls below this +1 level, and y means your bankroll never falls below its starting level.
You can write two simultaneous equations -
1) x = py
2) y = x + p(1-y)y
#1 says that to always stay ahead you must win the first toss (p) and then never fall behind (y) from this new position.
#2 says you're either always ahead (x) or you win the first toss (p), at some later point fall back to your original starting point (1 - y), and subsequently never fall behind your original starting point (y).
[Griffin phrases #2 differently, but I've tried to make it easier to understand, hopefully without making any mistakes.]
Solving those two simultaneous equations gives -
y = 2 - 1/p
x = 2p - 1
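You can verify the algebra numerically (my own check, not from the book) by plugging y = 2 - 1/p and x = 2p - 1 back into both simultaneous equations for a few values of p:

```python
def check(p):
    """Confirm that y = 2 - 1/p and x = 2p - 1 satisfy
    equations #1 (x = py) and #2 (y = x + p(1-y)y)."""
    y = 2 - 1 / p
    x = 2 * p - 1
    eq1 = abs(x - p * y) < 1e-12
    eq2 = abs(y - (x + p * (1 - y) * y)) < 1e-12
    return eq1 and eq2

print(all(check(p) for p in (0.51, 0.55, 0.6, 0.75)))  # True
```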
So there we have the proof that the probability of always being ahead is 2p - 1, which is also the expected value of the game.
There is then a neat graphical proof to show that this is the same thing as the probability of being at an all time high.
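A quick way to convince yourself of the result (my own simulation, not from the book) is to estimate the always-ahead probability directly and compare it with 2p - 1:

```python
import random

def always_ahead_fraction(p, trials=20_000, tosses=1_000, seed=1):
    """Estimate the chance that, betting 1 unit per toss with win
    probability p, the bankroll stays above its starting level after
    every toss (truncated at a finite number of tosses)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        bankroll = 0
        ahead = True
        for _ in range(tosses):
            bankroll += 1 if rng.random() < p else -1
            if bankroll <= 0:  # fell back to (or below) the start
                ahead = False
                break
        if ahead:
            hits += 1
    return hits / trials

p = 0.55
print(always_ahead_fraction(p))  # close to 2*p - 1 = 0.10
```

Truncating at 1,000 tosses introduces a tiny bias (a walk still ahead at that point could in principle fall behind later), but with p > 0.5 the upward drift makes that negligible.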
When playing blackjack there are multiple payoffs, and the number of units bet is not fixed.
Griffin gives a formula for approximating a value of p for a coin-toss game that is equivalent to a multiple-payoff game like blackjack -
Let EX be the expectation and ASR be the average squared result:
p = 1/2 + EX/(2 * sqrt(ASR))
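As a sketch of using that formula (with made-up numbers of my own, not Griffin's):

```python
import math

def equivalent_p(ex, asr):
    """Equivalent coin-toss win probability, from a game's
    expectation EX and average squared result ASR (per unit bet)."""
    return 0.5 + ex / (2 * math.sqrt(asr))

# Hypothetical inputs, for illustration only:
ex = 0.01    # a 1% player edge
asr = 1.26   # an assumed average squared result per hand
p = equivalent_p(ex, asr)
print(p)          # equivalent coin-toss p, a little above 1/2
print(2 * p - 1)  # the ATH probability under the equivalence
```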
So if we had an advantage off the top and were flat betting, we could definitely just calculate p, as above, and the ATH probability for the game would be 2p - 1.
With a game that starts out with a house edge, and with varying bet size, maybe it is still valid to use the above formula, once EX and ASR have been calculated for an 'average hand'[*]. Or maybe a different approach is needed, using a table of different probabilities, corresponding to the chances of winning/losing every possible number of units that can be won/lost on a single hand.
It is actually from such a table that we get EX and ASR in the first place. Griffin gives a contrived example where he considers a game with possible results of -3, +2, -1, or 0 units won, calculates EX and ASR from the probabilities of these results, and uses them as the input to a formula to approximate the chance of doubling a bankroll.
[*]An average hand being what you get when you consider the true count frequencies, their associated bet sizes, and the frequencies of the various different results (that can involve multiple splits and doubles).
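To make the table idea concrete, here's a sketch of getting EX and ASR from a results table, in the spirit of Griffin's contrived -3/+2/-1/0 example, but with made-up probabilities of my own:

```python
# Hypothetical probabilities for the four possible results (units won):
results = {-3: 0.05, -1: 0.40, 0: 0.10, +2: 0.45}
assert abs(sum(results.values()) - 1.0) < 1e-12  # must be a full distribution

ex = sum(r * pr for r, pr in results.items())       # expectation (EX)
asr = sum(r * r * pr for r, pr in results.items())  # average squared result (ASR)
print(ex, asr)
```

With EX and ASR in hand you can then feed them into the equivalent-p formula above, or into Griffin's bankroll-doubling formula.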