The AIC is not an estimator of a true parameter; it is a data-dependent measurement of model fit. The fit is what it is: there is no fit that is any "truer" than the one you have, because the one you have is precisely what is being measured. Without a true parameter for which the AIC would be an estimator, one cannot have a confidence interval (CI).
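To make this concrete, recall the usual textbook formula for the AIC of a fitted model with $k$ parameters and maximized likelihood $\hat{L}$ (stated here only to illustrate that the AIC is a function of the data at hand, not of an unknown population quantity):

$$\mathrm{AIC} = 2k - 2\ln \hat{L}(\hat{\theta}; y),$$

where $\hat{\theta}$ is the maximum likelihood estimate computed from the observed data $y$. Every ingredient on the right-hand side is known once the model has been fitted, so there is nothing unknown left for an interval to "cover".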
By the way, I am not disputing Richard Hardy's answer. The AIC, like some other quantities such as $R^2$, can be interpreted as estimating something "true but unobservable", in which case one can argue that a CI makes sense. Personally, I find the interpretation as a measure of fit quality more intuitive and direct, and under that interpretation there is no CI for the reasons above; but I am not saying that such a CI cannot be well defined and of some use.
Edit: In response to the addition in the question, "I don't mean to imply that AIC is the same as parameter estimation. I'm asking why we treat goodness-of-fit estimates (AIC, BIC, etc.) differently from estimates that are reported with a CI." The definition of a CI relies on a parameter being estimated: it says that, given the true parameter value, the CI catches this value with probability $(1-\alpha)$. As long as you are not interested in that true parameter value, a CI is meaningless.
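For explicitness, here is the standard frequentist coverage requirement for a confidence interval $C(Y)$ for a parameter $\theta$ (the usual definition, given only to show that it has no content without a target parameter):

$$P_{\theta}\bigl(\theta \in C(Y)\bigr) \ge 1 - \alpha \quad \text{for all } \theta.$$

The probability statement is about catching $\theta$; if there is no $\theta$ you care about, there is nothing for the interval to catch.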