The null hypothesis $H_0$ of a one-way ANOVA is that the means of all groups are equal: $$H_0: \mu_1 = \mu_2 = ... = \mu_k.$$ The null hypothesis $H_0$ of a one-way MANOVA is that the [multivariate] means of all groups are equal: $$H_0: \boldsymbol \mu_1 = \boldsymbol \mu_2 = ... = \boldsymbol \mu_k.$$ This is equivalent to saying that the means are equal for each response variable, i.e. your first option is correct.
In both cases the alternative hypothesis $H_1$ is the negation of the null. In both cases the assumptions are (a) Gaussian within-group distributions, and (b) equal variances (for ANOVA) / covariance matrices (for MANOVA) across groups.
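To make the two null hypotheses concrete, here is a minimal sketch of how one might run both tests in Python (a sketch only, assuming numpy, pandas, scipy and statsmodels are available; the simulated data and column names are made up):

```python
# A minimal sketch of running both tests in Python (assumes numpy, pandas,
# scipy, and statsmodels are installed; data and column names are made up).
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
k, n_per_group = 3, 30                 # number of groups, observations per group

# Simulate data under the null: identical multivariate means in every group.
df = pd.DataFrame(rng.normal(size=(k * n_per_group, 2)), columns=["y1", "y2"])
df["group"] = np.repeat([f"g{i}" for i in range(k)], n_per_group)

# One univariate ANOVA per response: H0 is equality of group means for that variable.
for col in ["y1", "y2"]:
    samples = [df.loc[df.group == g, col] for g in df.group.unique()]
    F, pval = stats.f_oneway(*samples)
    print(col, F, pval)

# MANOVA: H0 is equality of the group mean *vectors*.
print(MANOVA.from_formula("y1 + y2 ~ group", data=df).mv_test())
```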
Difference between MANOVA and ANOVAs
This might appear a bit confusing: the null hypothesis of MANOVA is exactly the same as the combination of null hypotheses for a collection of univariate ANOVAs, but at the same time we know that doing MANOVA is not equivalent to doing univariate ANOVAs and then somehow "combining" the results (one could come up with various ways of combining). Why not?
The answer is that running all univariate ANOVAs, even though they test the same null hypothesis, would have less power. See my answer here for an illustration: How can MANOVA report a significant difference when none of the univariate ANOVAs reaches significance? The naive way of "combining" (reject the global null if at least one ANOVA rejects its null) would also lead to a huge inflation of the type I error rate; and even if one chooses some smarter way of "combining" that maintains the correct error rate, one would still lose power.
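To illustrate the power point, here is a rough Monte Carlo sketch (the settings are made up purely for illustration, and the exact layout of statsmodels' MANOVA results may differ between versions): the group difference lies along a direction that the strong correlation between the responses masks in each univariate ANOVA, while MANOVA detects it easily.

```python
# A rough Monte Carlo sketch of the power difference (settings and names are
# made up for illustration; the layout of statsmodels' MANOVA results may
# differ slightly between versions).
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n_per_group, n_sim, alpha = 30, 300, 0.05

# Two strongly correlated responses; the group difference lies along the
# low-noise direction, which each single-variable ANOVA largely misses.
cov = np.array([[1.0, 0.95],
                [0.95, 1.0]])
shift = np.array([0.3, -0.3])           # mean of group "b" minus group "a"

reject_manova = reject_bonferroni = 0
for _ in range(n_sim):
    a = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_group)
    b = rng.multivariate_normal(shift, cov, size=n_per_group)
    df = pd.DataFrame(np.vstack([a, b]), columns=["v1", "v2"])
    df["group"] = ["a"] * n_per_group + ["b"] * n_per_group

    # Univariate ANOVAs combined with a Bonferroni correction (alpha / 2 each).
    p_uni = [stats.f_oneway(df.loc[df.group == "a", c],
                            df.loc[df.group == "b", c]).pvalue
             for c in ["v1", "v2"]]
    reject_bonferroni += min(p_uni) < alpha / 2

    # MANOVA p-value for the 'group' term (using Pillai's trace here).
    res = MANOVA.from_formula("v1 + v2 ~ group", data=df).mv_test()
    p_manova = res.results["group"]["stat"].loc["Pillai's trace", "Pr > F"]
    reject_manova += p_manova < alpha

print("rejection rate, Bonferroni-combined ANOVAs:", reject_bonferroni / n_sim)
print("rejection rate, MANOVA                    :", reject_manova / n_sim)
```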
How the testing works
ANOVA decomposes the total sum-of-squares $T$ into the between-group sum-of-squares $B$ and the within-group sum-of-squares $W$, so that $T=B+W$. It then computes the ratio $B/W$; rescaled by the degrees of freedom, this becomes the $F$ statistic $\frac{B/(k-1)}{W/(n-k)}$, which should be around $1$ under the null hypothesis. One can work out the exact distribution of this statistic under the null (it depends on the total sample size $n$ and on the number of groups $k$). Comparing the observed value with this distribution yields a p-value.
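Here is a short sketch of this decomposition computed by hand on made-up data; the resulting $F$ and p-value can be checked against scipy.stats.f_oneway:

```python
# The ANOVA decomposition T = B + W computed by hand on made-up data,
# checked against scipy.stats.f_oneway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (0.0, 0.2, 0.5)]
y = np.concatenate(groups)
n, k = len(y), len(groups)

grand_mean = y.mean()
T = np.sum((y - grand_mean) ** 2)                                # total SS
B = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)   # between-group SS
W = sum(np.sum((g - g.mean()) ** 2) for g in groups)             # within-group SS
assert np.isclose(T, B + W)

F = (B / (k - 1)) / (W / (n - k))     # ratio of mean squares, around 1 under H0
p = stats.f.sf(F, k - 1, n - k)       # compare with the null F distribution
print(F, p)
print(stats.f_oneway(*groups))        # should reproduce the same F and p
```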
MANOVA decomposes the total scatter matrix $\mathbf T$ into the between-group scatter matrix $\mathbf B$ and the within-group scatter matrix $\mathbf W$, so that $\mathbf T = \mathbf B + \mathbf W$. It then computes the matrix $\mathbf W^{-1} \mathbf B$. Under the null hypothesis, this matrix should be "small"; but how do we quantify how "small" it is? MANOVA looks at the eigenvalues $\lambda_i$ of this matrix (they are all non-negative). Under the null hypothesis, these eigenvalues should all be small (close to zero). But to compute a p-value, we need a single number (a test statistic) that can be compared with its distribution expected under the null. There are several ways to construct one: take the sum of all eigenvalues $\sum_i \lambda_i$ (the Lawley-Hotelling trace); take the maximal eigenvalue $\max_i\{\lambda_i\}$ (Roy's largest root); Wilks' lambda $\prod_i 1/(1+\lambda_i)$ and Pillai's trace $\sum_i \lambda_i/(1+\lambda_i)$ are two further common choices. In each case, this number is compared with its distribution expected under the null, resulting in a p-value.
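And a corresponding sketch for the multivariate case, computing the scatter matrices, the eigenvalues of $\mathbf W^{-1}\mathbf B$, and the statistics built from them (again on made-up data):

```python
# The MANOVA decomposition T = B + W computed by hand on made-up bivariate
# data, together with the eigenvalues of W^{-1} B and the usual statistics.
import numpy as np

rng = np.random.default_rng(3)
means = [np.array([0.0, 0.0]), np.array([0.3, 0.1]), np.array([0.5, -0.2])]
groups = [rng.multivariate_normal(m, np.eye(2), size=20) for m in means]
X = np.vstack(groups)
grand_mean = X.mean(axis=0)

def scatter(Z, center):
    D = Z - center
    return D.T @ D

T = scatter(X, grand_mean)                                    # total scatter matrix
W = sum(scatter(g, g.mean(axis=0)) for g in groups)           # within-group scatter
B = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,        # between-group scatter
                          g.mean(axis=0) - grand_mean) for g in groups)
assert np.allclose(T, B + W)

lam = np.sort(np.linalg.eigvals(np.linalg.solve(W, B)).real)[::-1]
print("eigenvalues of W^{-1}B:", lam)

# The classical test statistics are simple functions of these eigenvalues.
print("Wilks' lambda         :", np.prod(1.0 / (1.0 + lam)))
print("Pillai's trace        :", np.sum(lam / (1.0 + lam)))
print("Lawley-Hotelling trace:", np.sum(lam))
print("Roy's largest root    :", np.max(lam))
```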
Different choices of the test statistic lead to slightly different p-values, but it is important to realize that in each case the same null hypothesis is being tested.