Since tractable approximate algorithms exist for both sparse and dense PCA, you should choose based on which you think is the better estimator for your situation, or which will better help you solve your applied problem.
Strengths and weaknesses in application
One important strength of sparse PCA is that it can be easier to interpret. There's more on that topic here:
How exactly is sparse PCA better than PCA?
Strengths and weaknesses as estimators
In a typical PCA, the leading eigenvector of the sample covariance is not a consistent estimator of the leading eigenvector of the true covariance unless $\lim_{n \to \infty} \frac{p}{n}=0.$ If you have too many features and not enough samples, dense PCA can be totally misleading. If you have enough samples, this is not a problem. In your situation, it sounds like $\frac{p}{n}$ is small ($\approx 0.02$), so dense PCA seems reasonable. I don't know of any established diagnostics to test this, but here's an attempt to make some up.
- If your features are standardized, you can compute the Marchenko-Pastur upper bound, $(\sqrt{\frac{p}{n}} + 1)^2$. Are your largest eigenvalues bigger than this? If not, they could be pure noise (see the sketch just after this list).
- Compute the PCA separately on two or more disjoint or independently selected subsets of your data. Are the largest components approximately the same on each subset? (There's a sketch of this check at the end of this answer.)
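For the Marchenko-Pastur check, here is a minimal NumPy sketch. The data here are synthetic pure noise standing in for yours, and the shapes are just placeholders matching your rough $\frac{p}{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 20                   # roughly the p/n ~ 0.02 regime
X = rng.standard_normal((n, p))   # pure-noise stand-in for your data

# Standardize each feature, then take eigenvalues of the sample
# correlation matrix, largest first.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
eigvals = np.sort(np.linalg.eigvalsh(np.cov(Z, rowvar=False)))[::-1]

# Marchenko-Pastur upper edge: sample eigenvalues of pure noise should
# not (asymptotically) exceed this.
mp_upper = (1.0 + np.sqrt(p / n)) ** 2
print(f"MP upper edge: {mp_upper:.3f}")
print("eigenvalues above the edge:", eigvals[eigvals > mp_upper])
```

Eigenvalues comfortably above the edge are candidates for real structure; eigenvalues at or below it are consistent with noise.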
Sparsity-based approximations can perform better than dense PCA (they are consistent with fewer samples) if you use a basis in which the true principal components are approximately sparse. You can read more about this here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2898454/
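If you want to try this, one tractable $\ell_1$-penalized formulation is implemented in scikit-learn's SparsePCA. Below is a minimal sketch with a planted sparse component, just to show the mechanics; the `alpha` penalty is a knob you would have to tune, and this formulation is not necessarily the exact estimator analyzed in the paper above:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
n, p = 200, 50
# Plant a sparse leading component: only the first 5 features carry signal.
v = np.zeros(p)
v[:5] = 1 / np.sqrt(5)
scores = 3 * rng.standard_normal((n, 1))
X = scores * v + rng.standard_normal((n, p))

dense = PCA(n_components=1).fit(X)
sparse = SparsePCA(n_components=1, alpha=1.0, random_state=0).fit(X)

# Dense PCA spreads small loadings over all features; sparse PCA
# should zero most of them out.
print("nonzero dense loadings :", int(np.sum(np.abs(dense.components_[0]) > 1e-8)))
print("nonzero sparse loadings:", int(np.sum(np.abs(sparse.components_[0]) > 1e-8)))
```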
I don't know of any diagnostics to see whether your data come close enough to meeting the sparsity assumptions. I would fall back on my earlier diagnostic: compute the sparse PCA separately on two or more disjoint or independently selected subsets of your data, and check whether the largest components are approximately the same on each subset (a sketch follows).
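Here is a minimal sketch of that split-sample check, assuming scikit-learn. It compares leading components by absolute cosine similarity, since the sign of a principal component is arbitrary, and the same helper works for dense and sparse fits:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

def split_stability(X, fit_leading, seed=0):
    """Fit the same estimator on two disjoint halves of the data and
    compare leading components by absolute cosine similarity."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    v1 = fit_leading(X[idx[:half]])
    v2 = fit_leading(X[idx[half:]])
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    # Guard against a component shrunk entirely to zero by the penalty.
    return abs(v1 @ v2) / denom if denom > 0 else 0.0

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 25))   # pure-noise stand-in for your data

print("dense :", split_stability(
    X, lambda Z: PCA(n_components=1).fit(Z).components_[0]))
print("sparse:", split_stability(
    X, lambda Z: SparsePCA(n_components=1, alpha=1.0,
                           random_state=0).fit(Z).components_[0]))
```

Values near 1 across repeated random splits suggest a stable leading component; values near 0 suggest you are fitting noise.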