This is six questions, and @StubbornAtom has done a good job outlining answers in one pithy comment. There appear to be two obstacles to going further: one is the initial algebra and the other is conceptual.
Let's deal with the algebra first, because it is straightforward and doing it here won't deprive you of the joy of discovering and learning the statistical concepts.
Begin with the zero-mean Normal density function $$f_{\sigma}(x) = \frac{1}{\sqrt{2\pi\, \sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right) = \exp\left(-\frac{1}{2}\left(\frac{1}{\sigma^2}\right)x^2 + \frac{1}{2}\log\left(\frac{1}{\sigma^2}\right) + \frac{1}{2}\log \left(\frac{1}{2\pi}\right)\right).$$
The repeated appearance of $1/\sigma^2,$ together with the exponential, suggests (a) working with $\phi = 1/\sigma^2$ as the parameter and (b) taking the logarithm, which in these terms is
$$\frac{1}{2}\left(- \phi\, x^2 + \log \phi + C\right)$$
where $C = \log (1/(2\pi))$ will soon disappear.
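If you want to verify this algebra numerically, here is a quick sketch in Python (the value of $\sigma$ and the evaluation points are arbitrary illustrative choices; scipy is assumed to be available):

```python
import numpy as np
from scipy.stats import norm

sigma = 1.7                      # arbitrary illustrative value
phi = 1 / sigma**2               # the reparametrization phi = 1/sigma^2
x = np.linspace(-3, 3, 7)        # arbitrary evaluation points

# The rewritten log density: (1/2) * (-phi*x^2 + log(phi) + C)
C = np.log(1 / (2 * np.pi))
rewritten = 0.5 * (-phi * x**2 + np.log(phi) + C)

# It should agree with the Normal(0, sigma) log density
print(np.allclose(rewritten, norm.logpdf(x, scale=sigma)))  # True
```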
For $n$ iid data $\mathbf{x} = (x_1, \ldots, x_n),$ the log likelihood is therefore the sum of the individual log probability densities,
$$\Lambda(\phi; \mathbf{x}) = \frac{1}{2}\left(- \phi\sum_{i=1}^n x_i^2 + n\log \phi + nC\right).$$
The log likelihood ratio for a pair of parameter values $\phi_1$ and $\phi_0$ is the difference
$$\operatorname{LLR}(\phi_1,\phi_0;\mathbf{x}) = \Lambda(\phi_1;\mathbf{x}) - \Lambda(\phi_0;\mathbf{x}) = \frac{1}{2}\left(n\left(\log \phi_1 - \log \phi_0\right) - \left(\phi_1-\phi_0\right)\sum_{i=1}^n x_i^2\right).$$
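As a sanity check, the closed form above agrees with a direct difference of log likelihoods; here is a sketch with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)           # reproducible illustration
n, sigma = 8, 1.3                        # arbitrary illustrative values
x = rng.normal(0, sigma, n)
C = np.log(1 / (2 * np.pi))

def Lambda(phi, x):
    """Log likelihood (1/2) * (-phi * sum(x^2) + n*log(phi) + n*C)."""
    return 0.5 * (-phi * np.sum(x**2) + len(x) * np.log(phi) + len(x) * C)

phi0, phi1 = 1.0, 0.5                    # hypothetical null and alternative
llr_direct = Lambda(phi1, x) - Lambda(phi0, x)
llr_formula = 0.5 * (n * (np.log(phi1) - np.log(phi0))
                     - (phi1 - phi0) * np.sum(x**2))
print(np.isclose(llr_direct, llr_formula))  # True
```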
Here, now, are some guides to answering the remaining questions:
**1. The test statistic.** The data appear in the log likelihood ratio only through the combination $\sum x_i^2.$ This, therefore, is all you need to know about the data.
**2. The chi-squared distribution.** A Normal$(0,\sigma)$ variable has the same distribution as a standard Normal variable multiplied by $\sigma.$ By definition, the sum of squares of $n$ iid standard Normal variables has a $\chi^2(n)$ distribution. Therefore $\sum x_i^2,$ when divided by $\sigma^2,$ must have a $\chi^2(n)$ distribution.
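You can confirm this by simulation; the following sketch (with arbitrary choices of $n,$ $\sigma,$ and the number of replications) compares the empirical distribution of $\sum x_i^2/\sigma^2$ to $\chi^2(n)$ quantiles:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, sigma = 5, 2.0                                 # arbitrary illustrative values
samples = rng.normal(0, sigma, size=(100_000, n))
stat = np.sum(samples**2, axis=1) / sigma**2      # should be chi^2(n)

for q in (0.5, 0.9, 0.99):
    print(q, np.mean(stat <= chi2.ppf(q, df=n)))  # each close to q
```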
**3. How does the likelihood ratio determine a most powerful test?** The Neyman-Pearson Lemma tells you how to test the simple hypothesis $H_0: \phi = \phi_0$ against the simple alternative $H_1: \phi=\phi_1.$
For a given $\phi_0,$ different alternatives $\phi_1 \lt \phi_0$ (which correspond to $\sigma_1 \gt \sigma_0$) don't change the test: because $\phi_1 - \phi_0 \lt 0,$ the log likelihood ratio is an increasing function of $\sum x_i^2$ for every such alternative, so the rejection region has the same form no matter which $\phi_1$ is contemplated. This is why the N-P lemma applies to your situation and yields a uniformly most powerful test.
The N-P lemma implies the boundary of the critical region is where the test statistic equals some constant, say $c.$ The value of that constant determines the size of the test as well as its power curve.
Since this problem is being posed to you, you must already have learned the N-P lemma, whether or not you know it by that name. Your notes and textbook therefore should be good resources for studying it.
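For concreteness, here is how that constant could be computed from point $(2),$ under the assumption (which the monotonicity in point $(3)$ gives for alternatives $\sigma_1 \gt \sigma_0$) that the critical region has the form $\sum x_i^2 \gt c;$ the values of $n,$ $\sigma_0,$ and the size $\alpha$ are hypothetical:

```python
from scipy.stats import chi2

def critical_value(sigma0, n, alpha):
    """Cutoff c with P(sum(x_i^2) > c) = alpha under H0: sigma = sigma0,
    using the fact that sum(x_i^2)/sigma0^2 ~ chi^2(n) from point (2)."""
    return sigma0**2 * chi2.ppf(1 - alpha, df=n)

c = critical_value(sigma0=1.0, n=10, alpha=0.05)  # hypothetical values
print(c)  # reject H0: sigma = 1 when sum(x_i^2) exceeds this
```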
**4. Computing power.** Any given value of $\phi$ determines $\sigma = 1/\sqrt{\phi}.$ Thus, by $(2),$ it completely determines the distribution of the test statistic. Use this to compute the power of the test associated with the constant $c$ in $(3).$ By definition, the power for the alternative $\sigma_1$ is the chance that the test statistic will lie in the critical region, and by $(2)$ and $(3)$ that chance is given by a chi-squared distribution.
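As a sketch (again with hypothetical values of $\sigma_0,$ $n,$ and $\alpha$), the power computation reduces to an upper chi-squared tail area:

```python
from scipy.stats import chi2

def power(sigma1, sigma0=1.0, n=10, alpha=0.05):
    """P(sum(x_i^2) > c | sigma = sigma1): an upper chi-squared tail area."""
    c = sigma0**2 * chi2.ppf(1 - alpha, df=n)     # same cutoff as above
    return chi2.sf(c / sigma1**2, df=n)

for s1 in (1.0, 1.5, 2.0, 3.0):                   # hypothetical alternatives
    print(s1, round(power(s1), 3))                # equals alpha at s1 = sigma0
```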
**5. Why does this all work?** The underlying concept is worked out in detail, with extensive illustrations, in my answer at https://stats.stackexchange.com/a/130772/919. The only departure from the present question is that this explanation concerns the other alternative $\sigma_1 \lt \sigma_0;$ but that's an inconsequential difference.