You can use the Neyman-Pearson lemma to determine the most powerful test to apply.
First, we recast the problem as a test of two simple hypotheses:
$$
H_0: \quad \alpha=0\\
H_1: \quad \alpha=\hat{\alpha}
$$
where $\hat{\alpha} \in \{ \alpha : \alpha > 0 \}$. Bear in mind that $\hat{\alpha}$ is a fixed (though as yet unspecified) value, so each hypothesis fully specifies the distribution; both are therefore simple hypotheses.
The derivative of the density with respect to $\alpha$ is $\partial_\alpha f_\alpha = 2x-1$ for every $\alpha$. Hence, for fixed $x$, the likelihood $f_\alpha(x)$ is monotonically decreasing in $\alpha$ when $x<1/2$, and monotonically increasing in $\alpha$ when $x>1/2$.
Since the observed $x=0.8 > 1/2$, the likelihood is increasing in $\alpha$, so it is maximized at the boundary $\alpha=1$. We therefore set the alternative hypothesis as $H_1: \, \alpha=1$, since it corresponds to the MLE.
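As a quick numerical check (a sketch assuming the density $f_\alpha(x) = 1 + \alpha(2x-1)$ on $[0,1]$, which is the form consistent with the derivative $\partial_\alpha f_\alpha = 2x-1$ and the likelihoods used below), the likelihood for $x=0.8$ is indeed maximized at the boundary $\alpha=1$:

```python
import numpy as np

# Density f_alpha(x) = 1 + alpha*(2x - 1) on [0, 1]; this form is inferred
# from the stated derivative d/d_alpha f_alpha = 2x - 1.
def likelihood(alpha, x):
    return 1.0 + alpha * (2.0 * x - 1.0)

x_obs = 0.8
alphas = np.linspace(0.0, 1.0, 1001)   # candidate alpha values in [0, 1]
mle = alphas[np.argmax(likelihood(alphas, x_obs))]
print(mle)  # the maximizer sits at the right boundary, alpha = 1.0
```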
The Neyman-Pearson lemma allows us to define the most powerful test:
$$
\Lambda(x)=\frac{\mathcal{L}(x|H_0)}{\mathcal{L}(x|H_1)}=\frac{1}{2x} \leq k \iff \\
x \geq \frac{1}{2k} = k^*
$$
The inequality $x \geq k^*$ defines the rejection region for $H_0$.
This implies that the critical value for a significance level of $0.05$ satisfies
$$
P(x > k^*)=\int_{k^*}^1 \mathcal{L}(x|H_0) \, dx = \int_{k^*}^1 \, dx = 1 - k^* = 0.05 \implies \\
x > 1 - 0.05 = 0.95
$$
To get the p-value corresponding to the observed $x=0.8$, simply recalculate the same integral:
$$
P(x > 0.8) = \int_{0.8}^1 dx = 0.2
$$
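Both numbers are easy to verify numerically. The sketch below works under $H_0$, where $x \sim \mathrm{Uniform}(0,1)$: it recomputes the critical value and the p-value in closed form, then cross-checks the p-value by Monte Carlo simulation:

```python
import numpy as np

# Under H0 (alpha = 0) the density is f_0(x) = 1 on [0, 1], i.e. x ~ Uniform(0, 1).
sig_level = 0.05
k_star = 1.0 - sig_level     # from P(x > k*) = 1 - k* = 0.05
x_obs = 0.8
p_value = 1.0 - x_obs        # P(x > 0.8 | H0)

print(k_star, p_value)       # 0.95 and 0.2 (up to floating point)

# Monte Carlo cross-check of the p-value under H0
rng = np.random.default_rng(42)
samples = rng.uniform(0.0, 1.0, size=200_000)
mc_p = (samples > x_obs).mean()
print(abs(mc_p - p_value) < 0.01)  # simulation agrees with the exact integral
```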
P.S. Thanks to @whuber for the enlightening comments.