To sum up the long series of comments:
Yes, your working is correct. More generally, if $X$ and $Y$ are independent
normal random variables with means $\mu_X$, $\mu_Y$ respectively
and variances $\sigma_X^2$ and $\sigma_Y^2$ respectively, then
$aX+bY$ is a normal random variable with mean $a\mu_X+b\mu_Y$
and variance $a^2\sigma_X^2 + b^2\sigma_Y^2$.
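As a quick sanity check, here is a minimal simulation sketch in Python/NumPy (the constants are arbitrary choices for illustration, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, -3.0
mu_x, mu_y = 1.0, 4.0
sigma_x, sigma_y = 0.5, 2.0

# NumPy's normal() takes the *standard deviation* as its scale argument
x = rng.normal(mu_x, sigma_x, size=1_000_000)
y = rng.normal(mu_y, sigma_y, size=1_000_000)
z = a * x + b * y

print(z.mean())  # close to a*mu_x + b*mu_y = -10.0
print(z.var())   # close to a^2*sigma_x^2 + b^2*sigma_y^2 = 37.0
```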
The various comments by whuber, cardinal, and myself, and the answer
by Tal Galili, are all occasioned by the fact that there are at least
three different conventions for interpreting $X \sim N(a,b)$ as
a normal random variable. Usually $a$ is the mean $\mu_X$,
but $b$ can have different meanings (contrasted in the sketch after this list):
- $X \sim N(a,b)$ means that the standard deviation of $X$ is $b$. (This is the convention you are using.)
- $X \sim N(a,b)$ means that the variance of $X$ is $b$.
- $X \sim N(a,b)$ means that the variance of $X$ is $\dfrac{1}{b}$; that is, $b$ is the precision of $X$.
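To make the difference concrete, here is a minimal sketch (the helper name `sample_normal` and the convention labels are my own, not from any of the cited posts) that converts $b$ under each convention into the standard deviation that a sampler expects:

```python
import numpy as np

def sample_normal(a, b, convention, size=5, rng=None):
    """Draw from N(a, b), interpreting b according to `convention`."""
    rng = rng if rng is not None else np.random.default_rng()
    if convention == "sd":            # b is the standard deviation
        sd = b
    elif convention == "variance":    # b is the variance
        sd = np.sqrt(b)
    elif convention == "precision":   # b is the precision, so variance = 1/b
        sd = np.sqrt(1.0 / b)
    else:
        raise ValueError(f"unknown convention: {convention}")
    return rng.normal(loc=a, scale=sd, size=size)
```

With $b = 4$, for instance, the three conventions give standard deviations of $4$, $2$, and $0.5$ respectively, so they describe three quite different distributions.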
Fortunately, $X \sim N(0,1)$ (which is what you asked about)
means that $X$ is a standard
normal random variable in all three of the above conventions!
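Indeed, with $a = 0$ and $b = 1$ the helper sketched above computes the same standard deviation, $\sqrt{1} = \sqrt{1/1} = 1$, under every convention, so (seeding identically) all three produce identical draws:

```python
for c in ("sd", "variance", "precision"):
    print(c, sample_normal(0, 1, convention=c, rng=np.random.default_rng(42)))
# all three lines print the same five numbers: with a=0, b=1, sd = 1 either way
```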