In a comment on Jaap's answer to this question about reporting tiny $p$-values, amoeba says, "Once the $p$-value is below 0.0001 or something ... it's probably more meaningful to look at $z$-value than at $p$-value," citing as an example the 5$\sigma$ convention in particle physics. There was general agreement that this was right. Why is it right? Why are $z$-values a more meaningful way to describe data that give very low $p$-values? And what does "more meaningful" mean here?
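For concreteness, the two scales are linked through the standard normal tail: 5$\sigma$ corresponds to a one-sided $p$-value of about $2.9 \times 10^{-7}$. A quick Python sketch of the conversion (using scipy, purely for illustration):

```python
from scipy.stats import norm

# One-sided tail p-value for a few z-values; past z ~ 5 the digits
# of p stop carrying much intuition on their own.
for z in [2, 3, 5, 7]:
    print(f"z = {z}: one-sided p = {norm.sf(z):.3e}")  # sf(z) = P(Z > z)

# And back: invert the survival function to recover z from p.
print(norm.isf(2.87e-7))  # ~5, the particle-physics convention
```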
Quoting Glen_b's answer about why very low $p$-values aren't that meaningful:
> But statistical meaning will have been lost far earlier. Note that $p$-values depend on assumptions, and the further out into the extreme tail you go the more heavily the true $p$-value (rather than the nominal value we calculate) will be affected by the mistaken assumptions, in some cases even when they're only a little bit wrong. Since the assumptions are simply not going to be all exactly satisfied, middling $p$-values may be reasonably accurate (in terms of relative accuracy, perhaps only out by a modest fraction), but extremely tiny $p$-values may be out by many orders of magnitude.
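To make the "many orders of magnitude" point concrete, here is a small sketch of my own (the choice of distributions is purely an illustrative assumption, not from Glen_b's answer): suppose $p$-values are computed from a standard normal reference distribution, but the test statistic actually follows a $t$-distribution with 30 degrees of freedom, a violation mild enough to go unnoticed in practice.

```python
from scipy.stats import norm, t

df = 30  # illustrative: the true distribution has slightly heavier tails

for z in [1, 2, 3, 5, 7]:
    nominal = norm.sf(z)  # p-value under the assumed normal model
    actual = t.sf(z, df)  # p-value under the "true" t model
    # Middling p-values agree to within a modest factor; far-tail
    # p-values disagree by orders of magnitude.
    print(f"z = {z}: nominal = {nominal:.2e}, actual = {actual:.2e}, "
          f"ratio = {actual / nominal:.2g}")
```

The ratio stays near 1 for moderate $z$ but blows up in the far tail, which seems to be exactly the asymmetry the quote describes.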
Why doesn't the same logic apply to very large $z$-values? Don't they also rest on assumptions that can be violated?
Related questions: If this is the case, why not just always use $z$-values? Are $p$-values "more meaningful" than $z$-values when the $p$-value is high?