I am just looking for a quick check of whether my reasoning is correct when calculating $E(X)$ for $X \sim \Gamma(\alpha, \beta)$. My calculations are as follows:
\begin{align*}
\text{E}(X) &= \int_0^{\infty} x \frac{1}{\Gamma(\alpha) \beta^{\alpha}} x^{\alpha - 1} e^{-\frac{x}{\beta}} \, dx\\
&= \alpha \beta \int_0^{\infty} \frac{1}{\alpha \Gamma(\alpha) \, \beta \cdot \beta^{\alpha}} x^{\alpha} e^{-\frac{x}{\beta}} \, dx\\
&= \alpha \beta \int_0^{\infty} \frac{1}{\Gamma(\alpha + 1) \beta^{\alpha + 1}} x^{(\alpha + 1) - 1} e^{-\frac{x}{\beta}} \, dx \quad \text{(since $\Gamma(\alpha + 1) = \alpha \Gamma(\alpha)$)}\\
&= \alpha \beta,
\end{align*}
since the integrand is the density of a $\Gamma(\alpha + 1, \beta)$-distributed random variable.
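(As a quick numerical sanity check, not part of the derivation, a simulation agrees with $\alpha \beta$. Here I am assuming, as in the density above, that $\beta$ is the scale parameter, which matches NumPy's convention.)

```python
import numpy as np

# Monte Carlo check that E(X) = alpha * beta for X ~ Gamma(alpha, beta),
# with beta as the *scale* parameter (same parameterization as the density above).
alpha, beta = 2.5, 3.0
rng = np.random.default_rng(0)
samples = rng.gamma(shape=alpha, scale=beta, size=1_000_000)

print(samples.mean())   # roughly 7.5
print(alpha * beta)     # exactly 7.5
```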
I know my answer is correct. However, the solution in the answer key of the textbook I have at hand (Introduction to Probability, by Roussas) takes a different route, one very similar to Michael Hardy's answer here (https://math.stackexchange.com/questions/1967601/expected-value-of-the-gamma-distribution).
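For reference, that alternate approach, as I understand it, evaluates the integral directly as a value of the gamma function via the substitution $t = \frac{x}{\beta}$, something like:
\begin{align*}
\text{E}(X) &= \frac{1}{\Gamma(\alpha) \beta^{\alpha}} \int_0^{\infty} x^{\alpha} e^{-\frac{x}{\beta}} \, dx
= \frac{\beta^{\alpha + 1}}{\Gamma(\alpha) \beta^{\alpha}} \int_0^{\infty} t^{\alpha} e^{-t} \, dt
= \frac{\beta^{\alpha + 1} \, \Gamma(\alpha + 1)}{\Gamma(\alpha) \beta^{\alpha}} = \alpha \beta.
\end{align*}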
In essence, it appears that my approach is to turn the integrand into a pdf, whereas the alternate approach turns the integral into a value of the gamma function. My method seems slightly simpler, but the fact that I haven't seen anyone else use it leads me to ask: is there a flaw in my reasoning?
Thanks.