Once you have an estimated log-log model, you have an equation
$$\widehat{\log(y)}=\widehat{\beta_0}+\widehat{\beta_1} \log(x).$$
Skip the hats for now to get
$$\log(y)=\beta_0+\beta_1 \log(x).$$
You want to obtain $y$. Obviously, you just have to undo the $\log$. Exponentiate both sides of the equality to get
$$\exp(\log(y))=\exp(\beta_0+\beta_1 \log(x)),$$
which is nothing more than
$$y=\exp(\beta_0+\beta_1 \log(x)).$$
You could stop here and take this as an OK solution. However, there are some subtleties that you may want to consider.
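To make this concrete, here is a minimal sketch in Python (assuming `numpy` and `statsmodels` are available; the simulated data and variable names are purely illustrative):

```python
import numpy as np
import statsmodels.api as sm

# Simulate data obeying log(y) = beta0 + beta1*log(x) + noise
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=200)
log_y = 1.0 + 0.5 * np.log(x) + rng.normal(0.0, 0.3, size=200)
y = np.exp(log_y)

X = sm.add_constant(np.log(x))    # design matrix [1, log(x)]
fit = sm.OLS(np.log(y), X).fit()  # estimates beta0_hat, beta1_hat

log_y_hat = fit.predict(X)        # fitted values of log(y)
y_hat_naive = np.exp(log_y_hat)   # the naive back-transform above
```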
The added difficulty with hats is the following. If you ran OLS estimation, $\widehat{\log(y)}$ is an estimate of the conditional mean of $\log(y)$ given $x$, i.e.
$$\widehat{\log(y)}=\mathbb{E}(\log(y)|x).$$
Once you exponentiate, it does not hold that
$$\exp(\widehat{\log(y)})=\mathbb{E}(\exp(\log(y))|x).$$
That is, it does not hold that
$$\exp(\widehat{\log(y)})=\mathbb{E}(y|x).$$
The culprit is Jensen's inequality: since $\exp$ is convex, $\mathbb{E}(\exp(z)) \geq \exp(\mathbb{E}(z))$, so the naive back-transform systematically underestimates the conditional mean $\mathbb{E}(y|x)$.
For example, if $y \sim N(\mu,\sigma^2)$, then $\mathbb{E}(\exp(y))=\exp(\mu+\frac{1}{2} \sigma^2)$, so naively exponentiating the mean misses the factor $\exp(\frac{1}{2}\sigma^2)$. However, this need not be a big problem in practice. Bårdsen and Lütkepohl, "Forecasting levels of log variables in vector autoregressions" (2011), show examples in which simple exponentiation is desirable. Dave Giles has a good discussion of alternative solutions in his blog post "More on Prediction From Log-Linear Regressions".
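Continuing the sketch above, the lognormal correction adds $\frac{1}{2}\widehat{\sigma}^2$ inside the exponential, and Duan's smearing estimator is a standard distribution-free alternative of the kind discussed in Giles' post; both are hedged sketches, not definitive implementations:

```python
# Residual variance estimate (SSR / residual degrees of freedom)
sigma2_hat = fit.scale

# Lognormal correction: appropriate if the errors are (close to) normal
y_hat_lognormal = np.exp(log_y_hat + 0.5 * sigma2_hat)

# Duan's smearing estimator: rescale by the mean of exp(residuals);
# requires no normality assumption
smearing_factor = np.mean(np.exp(fit.resid))
y_hat_smeared = np.exp(log_y_hat) * smearing_factor
```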