This is a tricky question.
First, I think a lot of people will denounce the idea of using a z or t test on data that are not continuous. "Your data are ordinal/nominal, use a different test!" they will cry! And they are not wrong; your data are not normal even in theory, so a different test truly is best.
But are those other tests necessary? I argue no.
Let's set up a little experiment. I'm going to generate some ordinal data 1 through 5 and run a t test on those data. I will treat them as if they came from a normal distribution (they really don't; they come from a multinomial with probabilities symmetric around the median). Let's see a) whether the data fail a normality test such as the Shapiro-Wilk, and b) whether the t test's false positive rate stays near the nominal 5%.
Here is some code.
library(tidyverse)

# Draw 100 ordinal responses in 1:5 whose probabilities come from
# slicing a standard normal at symmetric cut points
gen_data <- function() {
  x <- c(-Inf, -2, -1, 1, 2, Inf)
  p <- pnorm(x[2:6]) - pnorm(x[1:5])
  sample(1:5, replace = TRUE, size = 100, prob = p)
}

data <- map(1:1000, ~ gen_data())  # rerun() is deprecated in recent purrr

# Proportion of samples flagged as non-normal by the Shapiro-Wilk test
shap_wilk_results <- map_lgl(data, ~ shapiro.test(.x)$p.value < 0.05) %>% mean()

# False positive rate of the one-sample t test when the true centre is 3
# (var.equal applies only to two-sample tests, so it is dropped here)
fpr <- map_lgl(data, ~ t.test(.x, mu = 3)$p.value < 0.05) %>% mean()
If you run this code you will find two things: 1) we almost always reject the null of the Shapiro-Wilk test, so the data are detectably non-normal, and 2) the false positive rate for the t test is approximately 5%, right where it should be.
So what is my point? My point is that even when your data are not normal, you can still use the t test and maintain the frequentist properties that really matter (I haven't checked power here, but I imagine it is somewhat lower than it would be with truly normal data).
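If you're curious about power, it's easy to check with the same machinery. Here is a sketch under an assumed alternative: `gen_data_shifted()` is a hypothetical variant of `gen_data()` that shifts the latent normal's cut points by `delta`, so the distribution is no longer centred on 3, and we count how often the t test rejects.

```r
library(tidyverse)

# Hypothetical shifted version of gen_data(): moving the cut points down by
# delta is equivalent to shifting the latent normal mean up by delta
gen_data_shifted <- function(delta = 0.3) {
  x <- c(-Inf, -2, -1, 1, 2, Inf) - delta
  p <- pnorm(x[2:6]) - pnorm(x[1:5])
  sample(1:5, replace = TRUE, size = 100, prob = p)
}

set.seed(1)
# Estimated power: proportion of rejections under the shifted alternative
power <- map_lgl(1:1000, ~ t.test(gen_data_shifted(), mu = 3)$p.value < 0.05) %>%
  mean()
power
```

With a shift of this size you should see the rejection rate climb well above 5%, which is all "power" means here.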
Back to your question. You ask "how can I check normality", but after what we've seen here I think a better question is "does normality even matter for some of these tests?". The answer, in my opinion, is no. The z and t tests seem to do well so long as the data are symmetric, and maybe for your purposes that is all you need to check before proceeding as if the data were normal.
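And checking symmetry is much less demanding than checking normality. A sketch, using the same simulated ordinal data as above (the `skewness` helper is my own quick-and-dirty definition, not from a package): sample skewness near zero, or a mean close to the median, is a reasonable informal sign that symmetry holds.

```r
library(tidyverse)

set.seed(1)
# One symmetric ordinal sample, generated as in gen_data() above
x <- c(-Inf, -2, -1, 1, 2, Inf)
p <- pnorm(x[2:6]) - pnorm(x[1:5])
y <- sample(1:5, replace = TRUE, size = 100, prob = p)

# Crude sample skewness: roughly 0 for symmetric data
skewness <- function(y) mean((y - mean(y))^3) / sd(y)^3
skewness(y)
abs(mean(y) - median(y))  # another rough symmetry check
```

A histogram or barplot of the responses does the same job visually, and is probably what I would actually do in practice.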