The problem with comparing credible sets and confidence intervals is that it is not an apples-to-apples or even an apples-to-oranges comparison. It is an apples-to-tractors comparison. They are only substitutes for one another in certain circumstances.
The primary use of a confidence interval is in scientific research. Although businesses use them, their value is lessened because it is often difficult to choose an action based on a range. Applied business statistical methods tend to favor point estimates for practical reasons, even if intervals are included in reports. When intervals are included, it is mostly as a warning.
Credible sets tend to be used less in Bayesian work because the entire posterior is reported, along with the marginals. They are reported and descriptively provide a feel for the data if no graph of the posterior is provided, but they do not have the same usefulness as confidence intervals because they mean something different.
There are four cases where you will tend to see a credible set used instead of a confidence interval, but I am not certain that most of them are practical. It happens, but not often.
The first one has already been mentioned. There are times when a confidence procedure appears to produce a pathological interval. I am less happy with this use. It is important to remember that confidence procedures cover the true parameter at least $100(1-\alpha)\%$ of the time upon infinite repetition, but the price of that guarantee may be total nonsense some of the time. I am not sure that is a good reason to discard a Frequentist method.
Rare or widespread events are a typical example. If a high enough percentage of a population is doing or not doing something, then the sample may suggest that everybody or nobody is doing it. Because Frequentist intervals are built around point estimates, a sample with no variance produces an interval with no width. I find it disturbing to abandon a method because it sometimes produces a result that others may not accept. The virtue of a Frequentist method is that all information comes from the data. It just happens that the data did not have enough information in it.
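As a minimal sketch of that degenerate case, using the textbook Wald interval for a proportion (one construction among several, chosen here only because it is the simplest):

```python
# The Wald interval p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n) collapses to a
# single point when the sample shows no variation at all.
import math

def wald_interval(successes, n, z=1.96):
    p_hat = successes / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half_width, p_hat + half_width)

print(wald_interval(0, 50))   # (0.0, 0.0) -- nobody is doing it, zero-width interval
print(wald_interval(50, 50))  # (1.0, 1.0) -- everybody is doing it, same pathology
```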
That is not the sum total of all pathologies, however. Other pathologies may encourage the use of a Bayesian method because an appropriate Frequentist method may exist but cannot be found. For example, the sample mean of points drawn from a donut centered on $(0,0,0)$ should be near $(0,0,0)$, but there is no donut there; that is where the donut hole is. A range built around an unsupported point may encourage a Bayesian alternative if information about the shape cannot be included in the non-Bayesian solution for some reason.
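A quick simulation of the donut case, with radii that are arbitrary choices made only for illustration:

```python
# Sample points from a torus (donut) centered at the origin and show that the
# mean lands in the hole, a point the shape does not support.
import numpy as np

rng = np.random.default_rng(0)
R, r, n = 3.0, 0.5, 100_000           # major radius, tube radius, sample size
theta = rng.uniform(0, 2 * np.pi, n)  # angle around the central axis
phi = rng.uniform(0, 2 * np.pi, n)    # angle around the tube

x = (R + r * np.cos(phi)) * np.cos(theta)
y = (R + r * np.cos(phi)) * np.sin(theta)
z = r * np.sin(phi)

print(np.column_stack([x, y, z]).mean(axis=0))  # approximately (0, 0, 0)
```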
The second reason has a partial Frequentist analog: the case of outside information. In the general case, where there is outside research on a parameter of interest, both a Bayesian prior and a Frequentist meta-analysis produce usable intervals. The difficulty happens when the outside knowledge is not contained in data, per se, but in something like theory, engineering specifications, or expert judgment.
Some knowledge is supported by theory and observations in unrelated studies but should logically hold. For example, consider a well-engineered object whose state should range between 0 and 1. If it reaches 0, then it terminates. The next value is $x_{t+1}=\beta x_t+\epsilon$, with $0<\beta<1$. It can only have a value of 1 at $t=0$. It may be the case that $x_t$ can go up or down, but it can never reach 1 again, and it stops at 0. Furthermore, because it is well-engineered, $\beta=0.9999999\pm 0.00000001$. Of course, we could have deceived ourselves about the true tolerance. That is the rub when using a Bayesian method.
In the case of the well-engineered product, confidence intervals are too conservative and overstate the width of the interval. It can be trivially true that a 95% interval covers the parameter at least 95% of the time because, with the prior information excluded from its construction, the interval is so wide that it covers the parameter nearly 100% of the time.
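A sketch of that contrast, under assumptions that are mine rather than the example's (a known noise level, a conjugate normal model, and the engineering tolerance read as a prior standard deviation):

```python
# Compare a data-only interval for beta with a credible interval that uses the
# tight engineering prior.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_beta, sigma, n = 0.9999999, 0.05, 10
data = rng.normal(true_beta, sigma, size=n)   # noisy measurements of beta

# Data-only interval: mean +/- 1.96 * sigma / sqrt(n)
half = 1.96 * sigma / np.sqrt(n)
print("data-only interval:", (data.mean() - half, data.mean() + half))

# Conjugate normal-normal update with the prior N(0.9999999, 1e-8^2)
prior_mean, prior_sd = 0.9999999, 1e-8
post_prec = 1 / prior_sd**2 + n / sigma**2
post_mean = (prior_mean / prior_sd**2 + data.sum() / sigma**2) / post_prec
post_sd = post_prec**-0.5
print("credible interval:", (post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd))
```

The data-only interval comes out orders of magnitude wider than the stated tolerance, while the credible interval barely moves from the prior.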
The third case happens when something is a one-off event instead of a repeating event. Interestingly, you can create a case where a confidence interval is the valid interval for one party, and a credible set is the valid interval for another party with the same data.
Consider a manufacturing firm that produces some product that fails from time to time. It wants to guarantee that, at least 99% of the time, it can recover from a failure based on an interval. A confidence interval provides that guarantee. However, the party buying a product that failed may want an interval that has a 99% chance of being the correct interval to fix the problem, because for them the event will not repeat and the fix must work this one time. They are concerned about the data they have and the one event they are experiencing. They do not care about the product's efficacy for the other customers of the firm.
The fourth case may have no real-world analogs, but it has to do with the difference in the type of loss being experienced. Most Frequentist procedures are minimax procedures: they minimize the maximum amount of risk that you are exposed to. That is also true for confidence procedures. Most Bayesian interval estimates minimize average loss. If your concern is minimizing your average loss from using an interval built from a non-representative sample, then you should use a credible set. If you are concerned about taking the smallest possible largest risk, then you should use a confidence interval.
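A toy comparison, with risk numbers I made up solely to show the two criteria pulling in different directions:

```python
# Two hypothetical interval procedures evaluated over three possible states of
# the world.  A wins under minimax; B wins under average (Bayes) risk.
risk_a = [4.0, 4.0, 4.0]    # procedure A: flat risk everywhere
risk_b = [1.0, 1.0, 9.0]    # procedure B: usually better, occasionally much worse
prior = [0.45, 0.45, 0.10]  # weights a Bayesian puts on the three states

print("max risk:     A =", max(risk_a), " B =", max(risk_b))  # 4.0 vs 9.0
print("average risk: A =", sum(r * p for r, p in zip(risk_a, prior)),
      " B =", sum(r * p for r, p in zip(risk_b, prior)))      # 4.0 vs 1.8
```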
But getting back to the apples and tractors, these cases do not come up that often. Frequentist procedures overtook the pre-existing Bayesian paradigm because they work in most settings for most problems. Bayesian procedures are clearly superior in some cases, but that does not necessarily make Bayesian intervals superior.
The real-world cases for Bayesian credible sets are things like search and rescue, because the estimates can be quickly and easily updated and can use knowledge that does not come from prior research. Bayesian methods can also be superior when significant amounts of data are missing, because they can treat a missing data point the way they treat a parameter. That can prevent a pathological interval created by information loss, because the impact of the missing data can be marginalized out.
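A sketch of the missing-data idea, under a toy model of my own choosing (normal data with known unit variance and a flat prior on the mean, nothing specific to any real application):

```python
# Treat a missing observation as another unknown: sample it alongside mu in a
# tiny Gibbs sampler, then keep only the draws of mu, which marginalizes the
# missing value out of the interval for mu.
import numpy as np

rng = np.random.default_rng(2)
observed = np.array([9.8, 10.1, 10.4, 9.7])  # x_i ~ N(mu, 1), one value missing
x_miss, mu_draws = 10.0, []

for _ in range(5000):
    full = np.append(observed, x_miss)
    mu = rng.normal(full.mean(), 1 / np.sqrt(len(full)))  # mu | data, x_miss
    x_miss = rng.normal(mu, 1.0)                          # x_miss | mu
    mu_draws.append(mu)

# Credible interval for mu with the missing value marginalized out
print(np.percentile(mu_draws[500:], [2.5, 97.5]))
```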
This is a personal guess, based on the observation that Bayesian methods are, comparatively, not in heavy use, but I am not convinced that an interval holds the same value on the Bayesian side of the coin.
Frequentist methods are built around points. Bayesian methods are built around distributions. Distributions carry more information than a single point. Bayesian methods can split inference and probability from actions taken based on those probabilities.
If an interval would be helpful, a loss function can be applied to the posterior, and boundaries for the interval can be discovered. In that case, it is a formalism to support a proper action given the data.
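For instance, one common choice of loss, the shortest interval achieving a given coverage, can be read straight off posterior draws; the gamma posterior below is just a stand-in:

```python
# Find the shortest interval containing 95% of the posterior draws, i.e. the
# boundaries chosen by the loss function rather than by convention.
import numpy as np

def shortest_interval(draws, coverage=0.95):
    draws = np.sort(draws)
    k = int(np.ceil(coverage * len(draws)))      # how many draws must be inside
    widths = draws[k - 1:] - draws[:len(draws) - k + 1]
    i = np.argmin(widths)                        # left endpoint of the narrowest run
    return draws[i], draws[i + k - 1]

posterior = np.random.default_rng(3).gamma(shape=2.0, scale=1.5, size=20_000)
print(shortest_interval(posterior))
```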
I suspect that specific use does not happen much outside of risk management, where ranges are essential, and I do not know that it happens much even there.
Confidence intervals carry more information than point estimates. Credible sets are an information reduction technique.
A confidence interval of $7\pm{3}$ isn’t giving the same information as a credible set of $[6,7]\cup[7.5,9]$ for the same data.
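To illustrate that difference with made-up numbers (a bimodal posterior roughly in that range, not anyone's real data), compare a symmetric mean-plus-or-minus-two-standard-deviations summary to a crude highest-density region built from posterior draws:

```python
# A symmetric summary hides the bimodal shape; the highest-density region can
# come back as a union of disjoint pieces.
import numpy as np

rng = np.random.default_rng(4)
draws = np.concatenate([rng.normal(6.5, 0.25, 10_000),   # first mode
                        rng.normal(8.3, 0.45, 10_000)])  # second mode

print("mean +/- 2 sd:", (draws.mean() - 2 * draws.std(), draws.mean() + 2 * draws.std()))

# Crude highest-density region: keep the most probable histogram bins until
# they hold 95% of the draws, then report the contiguous runs of kept bins.
counts, edges = np.histogram(draws, bins=80)
keep = np.zeros_like(counts, dtype=bool)
total = 0
for b in np.argsort(counts)[::-1]:
    keep[b] = True
    total += counts[b]
    if total >= 0.95 * len(draws):
        break

in_run, start = False, None
for i, kept in enumerate(keep):
    if kept and not in_run:
        start, in_run = edges[i], True
    elif not kept and in_run:
        print("piece:", (start, edges[i]))
        in_run = False
if in_run:
    print("piece:", (start, edges[-1]))
```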