When designing a probit model, I worry about whether:
A) I have a sufficiently large number of observations (more than what's needed for linear regression).
B) All relevant variables are included, the latent-variable model is correct in theory, and there is no perfect linear dependence between any of the variables.
C) My explanatory variables are exogenous (perhaps only applicable in econometrics).
The last two are fairly obvious and apply to linear regression as well; the first may require some more explanation, which is given below.
Let $\theta$ be the true parameter values, $\hat \theta$ be the parameter values estimated by the probit model, and $n$ be the number of observations.
A very important yet subtle point: nothing says the probit estimator is unbiased. What we can say is that it is consistent. This means that as the number of observations approaches infinity, the parameter estimates converge in probability to the true parameter values. More compactly,
$$ \mathrm{plim}\:\hat \theta = \theta$$
where $\mathrm{plim}$ denotes the probability limit.
This is very different from unbiasedness, which would require that for any finite $n$, $E[\hat \theta]=\theta$.
For this reason it is important that you have a sufficiently large amount of data to fit a probit model. The minimal amount of data you need depends on the problem; this link discusses the issue for a logit model, and the idea is the same for probit.
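As a rough illustration of the consistency point (this simulation is my own sketch, not part of the original answer; the true parameter values are arbitrary), you can fit a probit by maximum likelihood at increasing sample sizes and watch the estimates close in on the truth:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta = np.array([0.5, -1.0])  # true intercept and slope (illustrative values)

def fit_probit(n):
    # simulate data from the true probit model
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = rng.binomial(1, norm.cdf(X @ theta))
    # negative log-likelihood of the probit model
    def nll(b):
        q = 2 * y - 1  # map {0, 1} to {-1, +1}
        return -np.sum(norm.logcdf(q * (X @ b)))
    return minimize(nll, np.zeros(2), method="BFGS").x

for n in (100, 1000, 10000):
    print(n, fit_probit(n))  # estimates drift toward (0.5, -1.0) as n grows
```

For any single finite sample the estimate can still be off; consistency only says the sampling distribution concentrates around $\theta$ as $n$ grows.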
As for multicollinearity, in theory it only affects the standard errors of the estimates (i.e. the t-tests). However, in applications it may cause problems with estimation as well. Unlike the linear model, the probit has no closed-form solution, so it must be estimated numerically. The numerical estimation can become unreliable if the multicollinearity is strong enough.
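A minimal sketch of why strong multicollinearity strains the numerical optimizer (again my own illustration, with made-up data): when two regressors are nearly identical, the likelihood is almost flat along the direction that trades one coefficient against the other, so the individual coefficients are poorly pinned down even though their sum is:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=1e-3, size=n)   # x2 is almost a copy of x1
X = np.column_stack([np.ones(n), x1, x2])
y = rng.binomial(1, norm.cdf(0.3 + x1))    # true model loads only on x1

def nll(b):
    q = 2 * y - 1                          # map {0, 1} to {-1, +1}
    return -np.sum(norm.logcdf(q * (X @ b)))

res = minimize(nll, np.zeros(3), method="BFGS")
# the near-singular design makes the optimization ill-conditioned
print("condition number of X'X:", np.linalg.cond(X.T @ X))
print("estimates:", res.x)                 # b1, b2 individually unstable
print("b1 + b2:", res.x[1] + res.x[2])     # but their sum is well determined
```

Rerunning with a different seed shuffles the split between the two collinear coefficients, which is exactly the instability the paragraph above describes.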