From what I understand, MLE (maximum likelihood estimation) finds the parameter values under which the observed data are most probable.
Thus, in linear regression we write the likelihood as $L(\theta)=\prod_{i=1}^{m}p(y^{(i)}\mid x^{(i)};\theta)$, i.e., we ask which $\theta$ makes the observed $y^{(i)}$ most probable, with the $x^{(i)}$ already given to us. Logistic regression is similar: there too we maximize $\prod_{i=1}^{m}p(y^{(i)}\mid x^{(i)};\theta)$. For GDA and Naive Bayes, however, we define $L(\theta)=\prod_{i=1}^{m}p(x^{(i)},y^{(i)};\theta)$, because these are generative models that model how both $x$ and $y$ are generated, not just $y$ given $x$.
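To make the two objectives concrete, here is a small numeric sketch of my understanding (the data and the slope value are made up for illustration): the conditional log-likelihood that logistic regression maximizes, next to the joint log-likelihood that a one-dimensional GDA maximizes, the latter via its closed-form MLE (class prior, class means, pooled variance).

```python
import numpy as np

# Tiny made-up dataset: scalar feature x, binary label y.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])

# Discriminative view: L(theta) = prod_i p(y_i | x_i; theta).
# Logistic model with an assumed slope theta = 1.5 and no intercept.
theta = 1.5
p1 = 1.0 / (1.0 + np.exp(-theta * x))  # P(y=1 | x; theta)
cond_ll = np.sum(y * np.log(p1) + (1 - y) * np.log(1 - p1))

# Generative view: L = prod_i p(x_i, y_i) = prod_i p(x_i | y_i) p(y_i).
# For 1-D GDA the MLE has a closed form:
phi = y.mean()                              # MLE of P(y=1)
mu0 = x[y == 0].mean()                      # class-0 mean
mu1 = x[y == 1].mean()                      # class-1 mean
sigma2 = np.mean((x - np.where(y == 1, mu1, mu0)) ** 2)  # pooled variance

def log_gauss(v, mu):
    # log N(v; mu, sigma2)
    return -0.5 * np.log(2 * np.pi * sigma2) - (v - mu) ** 2 / (2 * sigma2)

joint_ll = np.sum(np.where(y == 1,
                           log_gauss(x, mu1) + np.log(phi),
                           log_gauss(x, mu0) + np.log(1 - phi)))

print(cond_ll, joint_ll)
```

The two numbers are not comparable to each other (one scores $y\mid x$, the other scores the pair $(x,y)$); the point is only that each model maximizes a different product over the same data.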
Is my understanding correct? If not, why is the likelihood defined differently for GDA than for linear regression?