You both misstate and misinterpret what Kahneman says. You should have offered a quote rather than describing it from memory -- indeed, doing so might have resolved your difficulty before you even posted the question.
Firstly, the story about ski jumping is about commentators' misunderstanding of regression to the mean (all but the first half-sentence of it is on p. 179 of my copy). An actual explanation of regression to the mean comes immediately before the ski jumping story and is given in terms of golf:
- The golfer who did well on day 1 is likely to be successful on day 2, but less so than on the first day, because the unusual luck he probably enjoyed on day 1 is unlikely to hold.
- The golfer who did poorly on day 1 will probably be below average on day 2, but will improve, because his probable streak of bad luck is not likely to continue.
Then, shortly after, he says:
> My students were always surprised to hear that the best predicted performance on day 2 is more moderate, closer to the average than the evidence on which it is based (the score on day 1). This is why the pattern is called regression to the mean.
By contrast, you say:
> after a very good first jump ... the regression to the mean principles gives us reason to believe the second is going to be worse...
Correct so far.
> ... (below average)
No!
The point is that you expect it to be below the unusually good jump (worse than the jump you just did), not below the skier's own average.
> But this seems to contradict the principle whereby consecutive samples (with replacement) are independent,
Kahneman does not assume independence. If you read the golf discussion carefully, he assumes consecutive scores are positively related (you'll score better than average on the second day, too, just closer to the mean than the good score you just had).
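To make that concrete, here is a small Python sketch (my own toy model, not anything from the book), treating each day's score as a fixed skill level plus independent daily luck, with higher numbers counting as better:

```python
import random

random.seed(0)
N = 100_000  # simulated golfers

# Toy model (my assumption, not Kahneman's data): each day's score is a fixed
# skill level plus independent daily luck; higher numbers count as better.
skill = [random.gauss(0, 1) for _ in range(N)]
day1 = [s + random.gauss(0, 1) for s in skill]
day2 = [s + random.gauss(0, 1) for s in skill]

# Golfers who did unusually well on day 1.
good = [i for i in range(N) if day1[i] > 1.5]

avg_day1 = sum(day1[i] for i in good) / len(good)
avg_day2 = sum(day2[i] for i in good) / len(good)

print(f"day-1 average of the group that did well on day 1: {avg_day1:.2f}")  # well above 0
print(f"day-2 average of that same group:                  {avg_day2:.2f}")  # above 0, but closer to it
```

The group that did well on day 1 is still above average on day 2 (the two days' scores are positively related through skill), but by noticeably less: the best prediction for day 2 is more moderate than the day-1 evidence.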
> For instance, if we roll a dice and it lands on 5, there is (I think) no reason to believe that the second roll should be below average (i.e. 1, 2 or 3) just because the first roll was above-average.
That's not what regression to the mean says.
Your example with the die is slightly problematic because of issues with "closer to" when combined with only a few discrete outcomes (it also throws out the positive dependence Kahneman actually discusses -- and which is typical in real regression-to-the-mean scenarios -- but let's leave that aside to begin with).
First, let's modify it by taking the sum of two dice and assuming the first roll was unusually high -- you get a total of 10. Regression to the mean would suggest that the next roll is more likely to be less extremely high -- typically nearer to 7 -- than even more extreme (e.g. you are more likely to get 7, 8 or 9 than to get 11 or 12).
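If it helps, here is a tiny Python sketch (my own addition) that just counts the 36 equally likely outcomes of an independent re-roll of both dice:

```python
from itertools import product

# All 36 equally likely totals from rolling two fair dice independently.
totals = [a + b for a, b in product(range(1, 7), repeat=2)]

# Moderately high totals (7, 8 or 9) versus totals even more extreme
# than the 10 we just rolled (11 or 12).
moderate = sum(t in (7, 8, 9) for t in totals)  # 15 of the 36 outcomes
extreme = sum(t in (11, 12) for t in totals)    # 3 of the 36 outcomes

print(f"P(7, 8 or 9) = {moderate}/36")  # 15/36
print(f"P(11 or 12)  = {extreme}/36")   # 3/36
```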
Secondly, let's now reintroduce the positive dependence. Imagine we have two dice, one red and one green. We roll a total of 10 again (and for simplicity imagine we rolled two 5s). We re-roll the green die but keep the red one. Now the total is still likely to exceed the long-run average (7), but it will still tend to be closer to 7 than the 10 was, rather than further from it. That is what regression to the mean is about, and that's what Kahneman describes.
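And a quick simulation sketch of that red/green version (again my own illustration), where the red 5 is kept and only the green die is re-rolled:

```python
import random

random.seed(1)
N = 100_000
totals = []

for _ in range(N):
    red = 5                       # kept from the first roll (5 + 5 = 10)
    green = random.randint(1, 6)  # only the green die is re-rolled
    totals.append(red + green)

mean_total = sum(totals) / N
p_above_7 = sum(t > 7 for t in totals) / N
p_closer = sum(abs(t - 7) < abs(10 - 7) for t in totals) / N

print(f"mean new total             = {mean_total:.2f}")  # about 8.5: above 7 but below 10
print(f"P(new total > 7)           = {p_above_7:.3f}")   # about 0.667: still above average
print(f"P(closer to 7 than 10 was) = {p_closer:.3f}")    # about 0.667: less extreme than the 10
```

So consecutive totals are positively related (the kept red die carries information forward), yet the best prediction for the second total is more moderate than the 10 it follows: exactly the pattern in the golf example above.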