iCountNTrack said:
Calculating pre-deal EVs would make things even slower, because that would require a calculation prior to each round. I did, however, write code that enumerates all the possible deck compositions for a given penetration and calculates the pre-deal EV for each composition. I think you have seen that in one of my posts a while ago.
http://www.blackjackinfo.com/bb/showpost.php?p=175883&postcount=14
Yeah, I did some haphazard experiments of my own a while ago, along similar lines, and ran into the same performance issues.
A couple of thoughts I had at the time -
Rather than run a sim of game-play, perhaps the EVs of a large, random set of compositions of different sizes could be averaged (i.e., do what you did in the example you mention, but for every depth up to the penetration defined for the game, rather than for a single fixed depth).
I managed to confuse myself about what would be a valid way of sampling the shoes. The simplest approach would be to perform a shuffle and then deal cards one at a time, re-computing the EV after each card until the max pen is reached, and then repeat the process. However, that would yield compositions with sizes that could never occur at a real round boundary (e.g., the first round can never consume just a single card).
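Something like this is what I pictured for that simplest approach; a minimal sketch in C++, where preDealEV() is a made-up stand-in for whatever the CA actually exposes:

```cpp
// Minimal sketch of the shuffle-and-deal sampling loop. preDealEV() is just a
// stand-in for the CA; with a real CA behind it, this is exactly where the
// cost blows up, since it is re-run after every single card.
#include <algorithm>
#include <array>
#include <iostream>
#include <random>
#include <vector>

using Counts = std::array<int, 10>;   // index 0 = ace, 1 = deuce, ..., 9 = ten-valued

double preDealEV(const Counts& /*shoe*/) {
    return 0.0;   // placeholder: call the combinatorial analyzer here
}

int main() {
    const int decks = 6, trials = 1000;
    const int maxPen = static_cast<int>(0.75 * 52 * decks);   // cards dealt at max pen.

    // Build the full shoe as a flat list of ranks (16 ten-valued cards per deck).
    std::vector<int> shoe;
    for (int d = 0; d < decks; ++d)
        for (int r = 0; r < 10; ++r)
            shoe.insert(shoe.end(), r == 9 ? 16 : 4, r);

    std::vector<double> evSum(maxPen + 1, 0.0);   // accumulated EV per depth
    std::mt19937 rng(12345);

    for (int t = 0; t < trials; ++t) {
        std::shuffle(shoe.begin(), shoe.end(), rng);

        Counts counts{};                          // composition of the undealt cards
        for (int r = 0; r < 10; ++r) counts[r] = (r == 9 ? 16 : 4) * decks;

        // Note: this visits every depth 0..maxPen, including sizes that no real
        // round boundary could ever produce.
        for (int depth = 0; depth <= maxPen; ++depth) {
            evSum[depth] += preDealEV(counts);
            if (depth < maxPen) --counts[shoe[depth]];   // deal the next card
        }
    }

    for (int depth = 0; depth <= maxPen; ++depth)
        std::cout << depth << " cards dealt: mean EV " << evSum[depth] / trials << '\n';
}
```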
So an alternative might be to consume the cards by playing out each round with either basic strategy or the CA-calculated strategy (though might even this introduce subtle inconsistencies, given that the 'results' being logged relate to every possible way the round could be played out, rather than to the one specific way it actually was?).
Or maybe a random number of cards could simply be consumed to simulate each round, chosen so that the average number of cards per round matches what you would expect.
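For that last variant, the calibration bit would only need something like the following; the shifted-Poisson choice and the roughly 5.5 cards-per-round figure are assumptions on my part:

```cpp
// Just the round-size part of that idea: pick a distribution whose mean matches
// the expected cards per round. The shifted Poisson and the ~5.5-card heads-up
// average are my assumptions, nothing more.
#include <iostream>
#include <random>

int sampleRoundSize(std::mt19937& rng, double meanCardsPerRound = 5.5) {
    // At least 4 cards leave the shoe every round (two to the player, two to the
    // dealer); the Poisson part supplies the variable extras so the mean comes
    // out at meanCardsPerRound.
    std::poisson_distribution<int> extra(meanCardsPerRound - 4.0);
    return 4 + extra(rng);
}

int main() {
    std::mt19937 rng(1);
    double total = 0.0;
    for (int i = 0; i < 100000; ++i) total += sampleRoundSize(rng);
    std::cout << "empirical mean cards per round: " << total / 100000 << '\n';  // ~5.5
}
```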
It also occurred to me that there might be techniques for estimating the pre-deal EV that are much more accurate than the simple, linear approach, yet a lot faster than a full CA.
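By "the simple, linear approach" I mean the usual effects-of-removal estimate: full-deck EV plus the summed EoRs of the removed cards, scaled by 51/n for an n-card remaining deck. A single-deck sketch, with made-up numbers standing in for the real full-deck EV and EoR table:

```cpp
// First-order (linear) estimate of the pre-deal EV of a depleted deck from
// full-deck effects of removal (EoR). The fullDeckEV and eor[] numbers below
// are placeholders, not real figures.
#include <array>
#include <iostream>

int main() {
    // Full single deck: index 0 = ace, ..., 8 = nine, 9 = ten-valued.
    const std::array<int, 10> fullDeck = {4, 4, 4, 4, 4, 4, 4, 4, 4, 16};
    const double fullDeckEV = -0.005;                 // placeholder full-deck EV
    const std::array<double, 10> eor = {              // placeholder EoR values:
        /* A */ -0.006, /* 2 */ 0.004, /* 3 */ 0.004, /* 4 */ 0.006,
        /* 5 */ 0.007,  /* 6 */ 0.004, /* 7 */ 0.003, /* 8 */ 0.000,
        /* 9 */ -0.002, /* T */ -0.005 };             // EV change per card removed

    // Some depleted composition (what is left in the deck).
    const std::array<int, 10> remaining = {2, 3, 1, 2, 2, 3, 4, 4, 4, 9};

    int n = 0;
    double sumEoR = 0.0;
    for (int r = 0; r < 10; ++r) {
        n += remaining[r];
        sumEoR += (fullDeck[r] - remaining[r]) * eor[r];   // removed cards' effects
    }

    // Linear estimate: full-deck EV plus the scaled sum of removal effects.
    double estimate = fullDeckEV + (51.0 / n) * sumEoR;
    std::cout << "linear estimate of pre-deal EV: " << estimate << '\n';
}
```

This is the baseline that the interactive approximation in the chapter below is meant to improve on.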
I did some work on C++ code to implement the methods described in chapter 15 of TOBJ (Interactive Approximations to Facilitate Rapid Blackjack Computations). I think the chapter only appears in the 6th edition. I had some trouble understanding it, in part due to an error that seems to have got into the book. But Eric Farmer very kindly helped me out and clarified things.
As ever I left things about 80% completed, so this is one of the many projects I ought to go back to and finish off.
All that being said, I don't know how well the 'interactive approximation' technique would perform on extreme deck compositions.
iCountNTrack said:
But you do raise an important point: people sometimes mistakenly separate betting and playing, while the two are very much related. You bet according to your pre-deal EVs because that is your advantage. The pre-deal EV is a collection of perfect plays over all possible player-card/dealer-upcard combinations.
Card counting systems make things easier, but they create "fudge" factors called tags :grin:, which work on average but not in all cases.
Let's take a look at a simple example:
a deck with four 7s, four 8s, four 9s, and two 10s (S17, DAS) has a pre-deal EV of 25.83%.
Removing one ten INCREASES the pre-deal EV to 35.92%, and removing the other ten INCREASES it further to 53.40%.
In this case the tag for the ten should have been positive, because the EV increases after its removal.
We also notice that a composition that is supposedly "neutral" (Hi-Lo) or "negative" (Hi-Opt II) has an EV of 53%.
All these discussions are purely theoretical since you need a time machine to play these games :laugh:
That goes back to the question I posed at the end of my previous post. We may not get deep enough penetration to see very extreme cases, but do we know to what extent typical penetrations yield cases where the bet size indicated by the count differs from the one indicated by the CA?
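If I ever get the CA hooks in place, I imagine the experiment would look something like this (the bet ramps, the round-size model, and preDealEV() are all placeholders, so treat it as a sketch of the measurement rather than anything definitive):

```cpp
// Rough sketch: deal through shoes at a typical penetration and, at each round
// boundary, compare the bet a Hi-Lo true count would indicate with the bet the
// CA's pre-deal EV would indicate, then tally how often they disagree.
#include <algorithm>
#include <array>
#include <iostream>
#include <random>
#include <vector>

using Counts = std::array<int, 10>;   // index 0 = ace, ..., 9 = ten-valued

double preDealEV(const Counts& /*shoe*/) { return 0.0; }   // stand-in for the CA

// Placeholder bet ramps, capped at 8 units.
int betFromTrueCount(double tc) { return tc < 1.0 ? 1 : std::min(static_cast<int>(tc), 8); }
int betFromEV(double ev)        { return ev <= 0.0 ? 1 : std::min(1 + static_cast<int>(ev / 0.005), 8); }

int main() {
    const int decks = 6, shoes = 200;
    const int maxPen = static_cast<int>(0.75 * 52 * decks);
    const int hiLoTag[10] = { -1, 1, 1, 1, 1, 1, 0, 0, 0, -1 };   // A, 2-6, 7-9, T

    std::mt19937 rng(7);
    long rounds = 0, disagreements = 0;

    for (int s = 0; s < shoes; ++s) {
        std::vector<int> shoe;
        Counts counts{};
        for (int r = 0; r < 10; ++r) {
            counts[r] = (r == 9 ? 16 : 4) * decks;
            shoe.insert(shoe.end(), counts[r], r);
        }
        std::shuffle(shoe.begin(), shoe.end(), rng);

        std::poisson_distribution<int> extra(1.5);   // ~5.5 cards per round, assumed
        int dealt = 0, running = 0;

        while (dealt < maxPen) {
            double trueCount = running / ((52.0 * decks - dealt) / 52.0);
            if (betFromTrueCount(trueCount) != betFromEV(preDealEV(counts)))
                ++disagreements;
            ++rounds;

            int roundSize = std::min(4 + extra(rng), maxPen - dealt);
            for (int i = 0; i < roundSize; ++i) {       // burn this round's cards
                running += hiLoTag[shoe[dealt + i]];
                --counts[shoe[dealt + i]];
            }
            dealt += roundSize;
        }
    }
    std::cout << disagreements << " / " << rounds << " rounds where the two bets differ\n";
}
```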