The generic statistical problem is optimal stopping: given a sequence $X_1,X_2,\ldots,X_n,\ldots$ of random variables and an associated sequence of reward functions $y_n:\mathbb{R}^n\to \mathbb R,$ create a "stopping rule" $$t_n:\mathbb{R}^n\to\{\text{stop},\text{continue}\}.$$
The inputs to $y_n$ and $t_n$ are the sequence of values $(x_1,x_2,\ldots,x_n)$ observed through step $n$. The first time $t_n(x_1,\ldots,x_n)$ is "stop," you receive reward $y_n(x_1,\ldots,x_n).$ Assuming you are risk-neutral, the objective is to maximize the expectation of this reward.
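To make this setup concrete, here is a minimal Python sketch; the names `play`, `stopping_rule`, and `reward` are placeholders of my own, and forcing a stop at the final draw is an added assumption so that every game ends with a payoff.

```python
import random

def play(stopping_rule, reward, horizon, draw=random.random):
    """Simulate one game: observe draws one at a time and stop when the
    rule says so (a stop is forced on the final draw so the game ends)."""
    history = []
    for n in range(horizon):
        history.append(draw())                     # observe x_n
        if stopping_rule(history) or n == horizon - 1:
            return reward(history)                 # collect y_n(x_1, ..., x_n)

# Example: stop as soon as the current draw exceeds 0.8; the reward is that draw.
payoff = play(lambda h: h[-1] > 0.8, lambda h: h[-1], horizon=10)
```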
The present problem is considerably simpler because the reward functions are especially simple and there is a finite, known number of rounds: the value of $y_n$ is just the last value drawn, $x_n.$ Let's consider the case where at each step the prize is drawn independently and uniformly from the interval $[0,1]$ (in units of $P$ dollars, so we don't have to carry a factor of $P$ through the calculations).
Because the prizes are drawn independently and the reward depends only on the last value drawn, the earlier draws carry no useful information: the optimal stopping procedure need only depend on the current value (and on how many rounds remain). Moreover, because the reward equals the current value while the value of continuing does not depend on it, the optimal procedure $t_i$ must be to stop if and only if $X_i$ exceeds some threshold $p_i.$
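For the uniform case, any threshold policy is easy to evaluate by simulation. Below is a minimal Monte Carlo sketch; `simulate` is a hypothetical helper, the trial count is arbitrary, accepting the final draw is an added assumption, and the thresholds in the example anticipate the optimal values derived below.

```python
import random

def simulate(thresholds, trials=100_000):
    """Estimate the expected reward of a threshold policy by simulation.
    thresholds[k] is the acceptance threshold for the (k+1)-th draw;
    the final draw is always accepted."""
    total = 0.0
    for _ in range(trials):
        for k, p in enumerate(thresholds):
            x = random.random()                    # uniform prize on [0, 1]
            if x >= p or k == len(thresholds) - 1:
                total += x                         # stop and keep this prize
                break
    return total / trials

# A three-round game: be demanding early, accept anything at the end.
print(simulate([0.625, 0.5, 0.0]))  # close to 89/128 = 0.6953..., derived below
```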
It remains to find these thresholds. To this end, note that with $n$ rounds remaining in the game there are just two possibilities: the current prize exceeds the threshold and you stop, or it does not and you continue, in which case you play the game with $n-1$ rounds. This invites a recursive solution indexed by the number of remaining rounds. Let's therefore change the notation slightly: let $p_n$ be the threshold to use when $n$ rounds remain, let $e_n$ be the expected reward of an $n$-round game under the optimal stopping procedure, and let $X$ be the prize drawn in the current round.
The two possible outcomes are
- $X \ge p_n.$ We stop. This happens with probability $1-p_n,$ and because $X$ is uniform the conditional expected reward is $E[X\mid X \ge p_n] = (1+p_n)/2.$
- $X \lt p_n.$ We continue. This happens with probability $p_n,$ and we then face a game with $n-1$ rounds remaining, whose optimal expected reward is $e_{n-1}.$
By the law of total expectation, then, the expected value of an $n$-round game played with threshold $p_n$ is the sum of the rewards associated with the two possible actions, weighted by the chances they occur,
$$e_{n} = E[X \mid X \ge p_n]\Pr(X \ge p_n) + e_{n-1} \Pr(X \lt p_n) = \frac{1+p_n}{2}(1-p_n) + e_{n-1}p_n.$$
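A quick numerical sanity check of this identity, using arbitrary illustrative values of $p_n$ and $e_{n-1}$ (the function names are mine):

```python
import random

def one_step_value(p, e_prev):
    """Right-hand side of the display: (1+p)/2 * (1-p) + e_prev * p."""
    return (1 + p) / 2 * (1 - p) + e_prev * p

def one_step_monte_carlo(p, e_prev, trials=1_000_000):
    """Same quantity by simulation: draw X ~ U[0,1]; keep X if X >= p,
    otherwise collect the continuation value e_prev."""
    total = 0.0
    for _ in range(trials):
        x = random.random()
        total += x if x >= p else e_prev
    return total / trials

print(one_step_value(0.5, 0.5))         # 0.625 exactly
print(one_step_monte_carlo(0.5, 0.5))   # approximately 0.625
```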
As a function of $p_n,$ this expression for $e_n$ equals $\frac{1-p_n^2}{2} + e_{n-1}p_n,$ a downward-opening quadratic whose derivative $e_{n-1} - p_n$ vanishes at $p_n = e_{n-1}:$ that choice maximizes the expected reward. In other words, it is optimal to stop exactly when the current prize beats the expected value of continuing. Plugging $p_n = e_{n-1}$ into the foregoing gives
$$p_{n+1} = e_n = \frac{1+p_n}{2}(1-p_n) + p_n\cdot p_n = \frac{1+p_n^2}{2}.$$
Finally, to get the recursion started, note that with only one round to go you should accept whatever is drawn, since passing on it earns nothing. Thus $e_1 = E[X] = 1/2,$ and the threshold for a two-round game is $p_2 = e_1 = 1/2.$
This recurrence yields the sequence of thresholds
$$(p_2, p_3, p_4, p_5, p_6, \ldots) = \left(\frac{1}{2},\frac{5}{8},\frac{89}{128},\frac{24305}{32768},\frac{1664474849}{2147483648}, \ldots\right),$$
whose entries are also the optimal expected rewards $e_1, e_2, e_3, \ldots$ of games with $1, 2, 3, \ldots$ rounds.
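These fractions can be reproduced exactly with rational arithmetic; here is a short sketch using Python's `fractions` module.

```python
from fractions import Fraction

# Iterate e_n = (1 + e_{n-1}^2) / 2 exactly, starting from e_1 = 1/2.
e = Fraction(1, 2)
values = [e]
for _ in range(4):
    e = (1 + e * e) / 2
    values.append(e)

print(values)
# [Fraction(1, 2), Fraction(5, 8), Fraction(89, 128),
#  Fraction(24305, 32768), Fraction(1664474849, 2147483648)]
```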
For larger $n,$ a good approximation is $p_n \approx 1 - 2/n$: writing $p_n = 1 - \epsilon_n$ turns the recurrence into $\epsilon_{n+1} = \epsilon_n - \epsilon_n^2/2,$ so $1/\epsilon_{n+1} \approx 1/\epsilon_n + 1/2$ and therefore $\epsilon_n \approx 2/n.$ A more careful analysis shows the error of this approximation decays like $(\log n)/n^2.$
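A quick comparison of the exact thresholds with this approximation (the checkpoints below are arbitrary):

```python
# Compare exact thresholds (iterating p_{n+1} = (1 + p_n^2)/2 from p_2 = 1/2)
# with the approximation 1 - 2/n at a few checkpoints.
p, n = 0.5, 2
for target in (10, 100, 1000):
    while n < target:
        p = (1 + p * p) / 2
        n += 1
    print(f"n = {n:5d}   p_n = {p:.6f}   1 - 2/n = {1 - 2/n:.6f}")
```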