Without seeing your actual data and especially what tool you are using, it is very hard to give a conclusive answer. Note that calculating confidence intervals for odds ratios is not straightforward and usually involves some large-sample approximations. These, in turn, are likely not warranted with your small dataset, so your tool may be doing something else, perhaps some simulation/resampling approach.
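As an aside, one common small-sample alternative available in base R is Fisher's exact test, which produces an exact (conditional) CI for the OR; note that its point estimate is a conditional maximum likelihood estimate and will differ slightly from the sample OR of the cross-table. A minimal sketch on your data:

    # 2x2 table: rows = smoking yes/no, columns = disease yes/no
    tab <- matrix(c(7, 7, 1, 11), nrow = 2)
    fisher.test(tab)  # reports a conditional ML estimate of the OR plus an exact CI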
If we feed your data into R and use the epiDisplay package,
    # 7 diseased smokers, 1 healthy smoker, 7 diseased nonsmokers, 11 healthy nonsmokers
    dataset <- data.frame(
      disease = c(rep(TRUE, 7), rep(FALSE, 1), rep(TRUE, 7), rep(FALSE, 11)),
      smoking = c(rep(TRUE, 7 + 1), rep(FALSE, 7 + 11)))
    model <- glm(disease ~ smoking, data = dataset, family = "binomial")
    library(epiDisplay)
    logistic.display(model)
we get this:
    Logistic regression predicting disease

                OR(95%CI)          P(Wald's test)   P(LR-test)
    smoking     11 (1.1,109.67)    0.041            0.016
We note that the point estimate of the OR is the same as yours, $11$, which is unsurprising: the point estimate relies only on the parameter estimate, and there is little leeway in estimating that. The uncertainty enters when estimating the confidence interval, and our output above gives a different CI from the one you post in your question.
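For reference, the interval above matches the standard large-sample Wald interval, which exponentiates the log-OR estimate plus or minus $1.96$ standard errors, with the standard error of the log OR given by the square root of the sum of the reciprocal cell counts:

    # Wald 95% CI for the OR, computed from the four cell counts
    # (7 diseased smokers, 1 healthy smoker, 7 diseased nonsmokers, 11 healthy nonsmokers)
    se <- sqrt(1 / 7 + 1 / 1 + 1 / 7 + 1 / 11)   # SE of the log OR
    exp(log(11) + c(-1, 1) * 1.96 * se)          # approximately (1.10, 109.67)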
However, our CI is still rather wide, and you might have asked the exact same question had you seen a CI of $[1.1, 109.67]$ in your tool's output. The problem is the same regardless of the tool: you have very little data indeed.

Remember that the OR measures the multiplicative increase in the odds of having the disease between a smoker and a nonsmoker. It therefore relies on all four entries in your cross-table, and the precision with which we can estimate it depends, as a rule of thumb, on the smallest entry. Note that there is only a single smoking, non-diseased participant. If there had been just one more such person, your estimated OR would have changed drastically, halving to $5.5$ (and the CI would have shrunk dramatically, too)! There is simply huge uncertainty in estimating the underlying relationship from a small dataset, and the CI correctly reflects this by being very wide, no matter what approximation we use.
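The sensitivity to that smallest cell is easy to check directly from the cross-table (a minimal sketch; the helper name is hypothetical):

    # Sample OR from a 2x2 table: (diseased/healthy among smokers) divided by
    # (diseased/healthy among nonsmokers)
    or_2x2 <- function(a, b, c, d) (a / b) / (c / d)
    or_2x2(7, 1, 7, 11)   # 11:  the actual data
    or_2x2(7, 2, 7, 11)   # 5.5: one additional smoking, non-diseased person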
(Incidentally, that is also why we should take your statistically significant result with a large grain of salt.)
The only remedy is to collect more data, ideally lots more. I understand that this may not be feasible for you, but matters being what they are, you simply cannot conclude much from the data you have - and the fact that the CI shows this is definitely a feature, not a bug.