I recall from my graduate school days that the Gauss-Markov (GM) theorem states that the Best Linear Unbiased Estimator (BLUE) in a linear regression is $\hat{\beta}=(X^TX)^{-1}X^T\vec{y}$. An amazing aspect of the proof is that you do not need distributional assumptions on the errors, only that they have mean zero, are uncorrelated, and have constant variance.
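To make the estimator concrete, here is a minimal numerical sketch (illustrative data of my own, not part of the question): it computes $(X^TX)^{-1}X^T\vec{y}$ from the normal equations and checks it against numpy's least-squares solver.

```python
import numpy as np

# Illustrative data: design matrix with an intercept column plus noise.
rng = np.random.default_rng(42)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=n)

# BLUE / OLS estimator beta_hat = (X^T X)^{-1} X^T y, solved via the
# normal equations (solve, rather than an explicit inverse, for stability).
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against numpy's least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Note that nowhere in this computation is a distribution specified; Gaussianity only enters later, e.g. for exact finite-sample inference.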
I've been studying (and using) GLMs for several years now, and one can certainly draw various analogies between GLMs and the linear model.
I've been trying to understand how the GM theorem, or a generalization of it, fits into GLM theory. It seems the only place the distributional assumption is used, and even then only sometimes, is in the choice of link. Notably, hypothesis testing does not use the distributional assumption; instead it usually relies on a Gaussian or large-sample approximation.
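To illustrate why I say the distribution is barely used: a GLM can be fit by iteratively reweighted least squares (IRLS) using only the link and the variance function, never the full likelihood. A minimal sketch for a Poisson-type fit with log link (all data and names here are my own illustration, not from any particular library):

```python
import numpy as np

# Illustrative data for a log-link count regression.
rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([0.5, 0.3, -0.2])
y = rng.poisson(np.exp(X @ beta_true))

# IRLS: the loop below only needs the inverse link mu = exp(eta) and the
# variance function Var(y) = mu; the full Poisson likelihood never appears.
beta = np.zeros(p)
for _ in range(50):
    eta = X @ beta
    mu = np.exp(eta)            # inverse log link
    W = mu                      # weights: (dmu/deta)^2 / Var(y) = mu here
    z = eta + (y - mu) / mu     # working response
    beta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new

# At convergence the score equations X^T (y - mu) = 0 hold.
score = X.T @ (y - np.exp(X @ beta))
```

This is exactly the quasi-likelihood point of view: any distribution with the same mean-variance relationship yields the same estimating equations and the same fit.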
My question is whether, for estimation purposes, there is a result analogous to the GM theorem establishing a BLUE-type property. This may require a generalized definition of linearity, but it should still be possible to state and prove. If anyone is aware of such a result, a reference would be appreciated.