I first summarize my understanding of p-values. We have a hypothesis $H_0$ (the null hypothesis) that we want to test.
Now we build a test statistic $T$, a random variable. Depending on whether $H_0$ is true or not, $T$ has a different distribution.
The p-value $P$ is then the statistic $P=F_T(T)$, where $F_T$ is the c.d.f. of $T$ given that $H_0$ is true.
It is well known that $F_X(X)$ is uniformly distributed on $[0,1]$ for every continuous r.v. $X$. In particular we have:
$P \sim U(0,1)$ if $H_0$ is true
This then allows us to say that, if the p-value is small, e.g. $P \in [0,\epsilon]$, it is an improbable value, and hence we can reject the null hypothesis.
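The uniformity claim above is easy to check by simulation. Here is a minimal sketch, assuming (my choice, not stated in the question) that under $H_0$ the test statistic $T$ is standard normal, so $F_T$ is the standard normal c.d.f.:

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(0)
T = rng.standard_normal(100_000)   # draws of T under H_0
P = norm.cdf(T)                    # P = F_T(T)

# If P ~ U(0,1), a Kolmogorov-Smirnov test against the uniform
# distribution should find no discrepancy (small KS statistic).
stat, pval = kstest(P, "uniform")
print(f"KS statistic: {stat:.4f}")

# The fraction of p-values in [0, 0.05] should be about 0.05,
# i.e. the type I error rate of the test.
print(f"P(P <= 0.05) = {np.mean(P <= 0.05):.3f}")
```

The same check works for any continuous null distribution of $T$, as long as the c.d.f. used to compute $P$ matches it.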
So my questions are:
If this is really all there is to it, we could just as well check whether $P \in [a, a+\epsilon]$. Of course we cannot cherry-pick $a$ after computing the p-value, but would it still be reasonable if everyone used $a = 0.5$ instead of $a = 0$?
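This question can also be explored numerically. In the sketch below (same assumed setup as before: $T$ standard normal under $H_0$; the alternative with mean $-2$ is my own illustrative choice), both rejection regions $[0, 0.05]$ and $[0.5, 0.55]$ have the same probability under $H_0$, but they behave very differently under the alternative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000

T_h0 = rng.standard_normal(n)          # T under H_0
T_h1 = rng.standard_normal(n) - 2.0    # T under an (assumed) shifted alternative
P_h0, P_h1 = norm.cdf(T_h0), norm.cdf(T_h1)

for a in (0.0, 0.5):
    # Size: probability of rejecting when H_0 is true (same for both regions).
    size = ((P_h0 >= a) & (P_h0 <= a + 0.05)).mean()
    # Power: probability of rejecting when the alternative is true.
    power = ((P_h1 >= a) & (P_h1 <= a + 0.05)).mean()
    print(f"a={a}: size = {size:.3f}, power = {power:.3f}")
```

Under this alternative the shift pushes $T$, and therefore $P = F_T(T)$, toward small values, so the region $[0, 0.05]$ catches most alternative draws while $[0.5, 0.55]$ catches almost none.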
Each statistic $T$ yields a valid p-value $P$. What are the criteria usually used for choosing the test statistic $T$?
Do the answers to the previous points perhaps involve defining an alternative hypothesis $H_1$ and tuning $T$ so that, if $H_1$ is true, $P$ is close to zero? In that case, could $H_1 = \neg H_0$ be a reasonable alternative hypothesis?