There are several issues here.
One is the absence of an error term: does it enter additively or multiplicatively? Is the spread assumed constant on the original scale, on the log scale, or something else?
That issue is discussed briefly here and here.
Assuming the error term is multiplicative (additive in the logs) and at least roughly constant in variance on the log scale, fitting the model by linear regression on the log scale may be reasonable. This is the approach taken in the page you link to.
The second issue is arriving at predictions.
Under the assumptions above, your transformed model is right.
To fit it, subtract $\log t$ from both sides: let $z=\log(y)-\log(t)$ and fit $E(z)=\alpha+\beta t$ by regression.
Then an estimate of $c_1$ would be $\exp(\hat{\alpha})$ and an estimate of $c_2$ would be $\hat{\beta}$.
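Here's a minimal sketch of that fit in Python (using numpy and statsmodels), assuming the model in question is $y = c_1 t e^{c_2 t}$ with multiplicative error; the arrays `t` and `y` below are simulated stand-ins for your own data:

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the data: y = c1 * t * exp(c2 * t) with
# multiplicative lognormal error (c1 = 2, c2 = -0.3 are made up here).
rng = np.random.default_rng(0)
t = np.linspace(1.0, 10.0, 50)
y = 2.0 * t * np.exp(-0.3 * t) * rng.lognormal(sigma=0.1, size=t.size)

# Subtract log(t) from log(y) and regress on t:  E(z) = alpha + beta * t
z = np.log(y) - np.log(t)
X = sm.add_constant(t)
fit = sm.OLS(z, X).fit()

alpha_hat, beta_hat = fit.params
c1_hat = np.exp(alpha_hat)   # estimate of c1
c2_hat = beta_hat            # estimate of c2
```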
What constitutes the "best" fitted values depends on what you mean by "best". If the errors are near-symmetric on the log scale, exponentiating the log-scale fit (not forgetting to add $\log(t)$ back in first) yields a median forecast on the original scale; under the variance assumption mentioned above, prediction intervals also transform back without an issue.
If you want something nearer to a mean forecast, simply exponentiating the fit on the log scale will be biased, though this may not matter much if the noise on the log scale is small. You can roughly correct for the bias with a correction derived from a Taylor expansion, or, assuming normality on the log scale, by multiplying the median prediction by $\exp(\frac12 \hat{\sigma}^2)$, provided the sample size is large enough that the uncertainty in $\hat{\sigma}^2$ can be ignored.
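Continuing the sketch above, the two back-transformed forecasts might look like this; using the OLS residual mean square (`fit.mse_resid`) as the estimate of $\sigma^2$ is one convenient choice:

```python
# Median-type forecast on the original scale: add log(t) back, then exponentiate.
median_pred = np.exp(fit.predict(X) + np.log(t))

# Rough mean-type forecast under log-normality: multiply by exp(sigma^2 / 2),
# using the residual mean square as the estimate of the log-scale variance.
sigma2_hat = fit.mse_resid
mean_pred = median_pred * np.exp(sigma2_hat / 2.0)
```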
Another approach would be to fit a gamma GLM with a log link to the untransformed $y$ (with the $\log(t)$ term as an offset); the model then gives mean predictions directly, with no need to worry about bias from the non-linear back-transformation.
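A sketch of that alternative, again with statsmodels (the variables `y`, `X`, and `t` carry over from the earlier snippets):

```python
# Gamma GLM with log link on the untransformed y; log(t) enters as an
# offset, so its coefficient is fixed at 1 rather than estimated.
glm_fit = sm.GLM(
    y, X,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    offset=np.log(t),
).fit()

c1_glm = np.exp(glm_fit.params[0])                    # intercept -> c1
c2_glm = glm_fit.params[1]                            # slope -> c2
mean_pred_glm = glm_fit.predict(X, offset=np.log(t))  # mean predictions directly
```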