There are two p-values of interest. The critical p-value, also known as the $\alpha$-level or significance level, is decided and fixed before the study or analysis is performed. This critical p-value is the probability of a Type I error, i.e. of rejecting the null hypothesis when it is in fact true. Your examples are talking about the critical p-value, and I agree with those who say they are incorrect, or at least misleading, because in common usage "the p-value" refers to the second type of p-value.
The observed p-value, or calculated p-value, is computed as part of a statistical test. It is the probability, under the null hypothesis, of observing data that lead to a test statistic at least as extreme as the one actually observed in the experiment. The definition of "extreme" depends on your test, e.g. one-sided versus two-sided. The phrase "the p-value", in common usage, almost always refers to the observed p-value.
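As a minimal sketch of that definition (the data, null mean, and use of a two-sided one-sample t-test here are my own hypothetical choices, not from any source you quoted), the observed p-value is just the probability under $H_0$ of a test statistic as or more extreme than the one you got:

```python
import numpy as np
from scipy import stats

data = np.array([5.1, 4.9, 5.6, 5.2, 4.8, 5.4, 5.0, 5.3])  # hypothetical sample
mu0 = 5.0                                                   # null-hypothesis mean

# Test statistic actually observed in the "experiment"
t_obs = (data.mean() - mu0) / (data.std(ddof=1) / np.sqrt(len(data)))
df = len(data) - 1

# "At least as extreme" for a two-sided test means |T| >= |t_obs| under H0
p_value = 2 * stats.t.sf(abs(t_obs), df)

# Cross-check against scipy's built-in one-sample t-test
t_check, p_check = stats.ttest_1samp(data, popmean=mu0)
print(p_value, p_check)  # the two p-values should agree
```

Changing the definition of "extreme" (e.g. to a one-sided alternative) changes only the tail probability you compute, not the test statistic itself.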
We reject the null hypothesis if the observed p-value is less than or equal to the critical p-value. This works because, under the null hypothesis, the observed p-value is uniformly distributed on $[0,1]$ (exactly so for continuous test statistics). Thus the probability of rejecting the null when the null is in fact true equals the critical p-value.
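You can see both claims in a quick simulation. This is a hypothetical sketch of my own (normal data, a one-sample t-test, $\alpha = 0.05$), not anything from the quoted source:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_sims, n = 10_000, 30

p_values = np.empty(n_sims)
for i in range(n_sims):
    # The null hypothesis (mean 0) is true for every simulated sample
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    p_values[i] = stats.ttest_1samp(sample, popmean=0.0).pvalue

# Rejection rate should be close to alpha = 0.05
print("Type I error rate:", np.mean(p_values <= alpha))
# A histogram of p_values would look approximately flat, i.e. uniform on [0, 1]
```

The rejection rate tracks whatever $\alpha$ you choose, which is exactly why the critical p-value *is* the Type I error probability.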
Edit: The full section in [1] is talking about observed p-values, but the one line you quoted only makes sense when talking about the critical p-value. If you forced it to be about observed p-values and still be correct, it would read something like "It is the probability that a new, identical study wrongly rejects the null hypothesis if you set your $\alpha$-level equal to it." Strictly speaking, once you have done your analysis, the probability that you wrongly rejected the null hypothesis is either 0 or 1, depending on whether or not the null hypothesis is true and whether or not you rejected it!