Bayesian multinomial regression
I would recommend (as stated in the comments) Multinomial Regression: the conjugate prior of the multinomial likelihood is the Dirichlet distribution, so your posterior will also be a Dirichlet distribution.
In this case, there is no difference between running the experiment 10 times with N different candidates each time and thinking of it as one experiment with 10N candidates.
If your prior is $Dirichlet(\alpha_{1}, \alpha_{2}, \ldots, \alpha_{10})$, your posterior will be $Dirichlet(\alpha_{1}+n_{1}, \alpha_{2}+n_{2}, \ldots, \alpha_{10}+n_{10})$, where $n_{1}$ is the number of people who chose candy bar 1 as their favourite, and so forth.
Unless you have information that you did not state, your prior should probably treat all candy bars as equally popular, i.e. $\alpha_{1}=\alpha_{2}=\ldots=\alpha_{10}$. The common value these take governs how restrictive you want to be, i.e. how strongly you wish to penalise as implausible posteriors in which some candy bars are much more popular than others.
If you set all $\alpha_{i}$ to 1, that is equivalent to a uniform prior over the simplex.
In terms of how to prepare your data, for each candy bar, you only need to calculate how many people chose it as their favourite, and then your posterior is analytically known, as above.
Also, if you want to know the posterior for any individual candy bar (i.e. what fraction of the world, call this $q$, is likely to prefer candy bar $k$), that is given by the Beta distribution, $\mathrm{Beta}\!\left(q \,\middle|\, \alpha_{k} + n_{k},\ \sum_{j\neq k} (\alpha_{j} + n_{j})\right)$.
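As a concrete sketch of the conjugate update above (the prior and the counts below are hypothetical, chosen only for illustration):

```python
# Conjugate Dirichlet update for 10 candy bars.
# alpha: prior pseudo-counts (uniform prior: all ones).
# counts: hypothetical numbers of people naming each bar their favourite.
alpha = [1] * 10
counts = [12, 7, 30, 5, 9, 14, 3, 8, 6, 6]

# Posterior is Dirichlet(alpha_1 + n_1, ..., alpha_10 + n_10).
posterior = [a + n for a, n in zip(alpha, counts)]

# Marginal for bar k is Beta(alpha_k + n_k, sum over j != k of (alpha_j + n_j));
# its mean is the posterior expected fraction of the population preferring bar k.
k = 2  # bar 3 (0-indexed)
total = sum(posterior)
beta_a = posterior[k]          # first Beta parameter
beta_b = total - posterior[k]  # second Beta parameter
mean_k = beta_a / total        # mean of Beta(a, b) is a / (a + b)
print(posterior)
print(round(mean_k, 4))
```

The entire "analysis" is one vector addition, which is the practical payoff of conjugacy.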
Bayesian multinomial regression when not sampling all candidates
Edit (as I had originally misread your post/you added more information, see comments):
I would still propose a solution which involves Multinomial Regression with a Dirichlet prior, and which ignores the features of the candy bars (e.g. size, flavour). What underpins all of this is the following: even if you knew the probability of a member of the general population preferring each candy bar when they can choose from the full selection, you would still need an assumption about how their choice changes if some bars are removed from the pool. If you assume that their preferences are redistributed proportionally (see below), rather than, say, that people who prefer the largest bar are likely to prefer the second-largest if the largest is removed, then I suggest ignoring the features completely.
For simplicity, let us assume there are only 3 different bars for now.
We know that a randomly selected member of the population has a probability $q_{i}$ of saying they prefer the $i^{th}$ bar, such that $\sum_{i=1}^{3}q_{i}=1$. We now ask somebody chosen at random to tell us whether they prefer bar 1 or bar 2 (so we don't show them bar 3). In that case, I am going to assume that the probability they will prefer bar 1 is $p_{1}=\frac{q_{1}}{q_{1}+q_{2}}$, and the probability they will prefer bar 2 is $p_{2}=\frac{q_{2}}{q_{1}+q_{2}}$.
Clearly, this is a simplifying assumption: as another response to this question states, the decoy effect makes it potentially a bit shaky, and as I stated above, candy bar 3 might be more similar to bar 2 than it is to bar 1. But the assumption might work well in scenarios in which all bars are sufficiently different, and it allows for a tractable solution. In words, this solution assumes that if we remove bar 3 from the pool, the people who would have preferred it transfer their preferences to bars 1 and 2 in the ratio in which the rest of the population prefers bars 1 and 2.
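The proportional-redistribution assumption is just a renormalisation over the subset of bars actually shown; a minimal sketch (the vector q below is hypothetical):

```python
# Full-population preference probabilities for 3 bars (hypothetical; sum to 1).
q = [0.5, 0.3, 0.2]

# Show only bars 1 and 2 (0-indexed): preferences renormalise over the shown subset,
# so p_i = q_i / sum of q over the shown bars.
shown = [0, 1]
z = sum(q[i] for i in shown)
p = {i: q[i] / z for i in shown}
print(p)
```

Here bar 3's 20% of the population is split between bars 1 and 2 in the ratio 5:3, exactly as the assumption states.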
So given we perform one experiment only, and we want to determine $q_{1},q_{2},q_{3}$ from some data, we want to calculate $P(\underline{q}|D)$, which by Bayes' theorem is $\frac{P(D|\underline{q})P(\underline{q})}{P(D)}$. The only way this differs from more conventional multinomial regression is that $P(D|\underline{q})$ is no longer just a categorical distribution.
In this case, if the first experiment resulted in $n_{11}$ people choosing bar 1 and $n_{12}$ people choosing bar 2 (the first index in the subscript indicating that this was experiment number 1, and the second denoting which bar it was), and $n_{13}=0$ because bar 3 was not an option in experiment 1, then the probability of seeing the data we saw, given some vector $\underline{q}$, is given by (up to a constant) $\left(\frac{q_{1}}{q_{1}+q_{2}}\right)^{n_{11}}\left(\frac{q_{2}}{q_{1}+q_{2}}\right)^{n_{12}}$.
Using a Dirichlet prior with parameter vector $\underline{\alpha}$ yields:
$P(q|D_{1}) \propto \left(\frac{q_{1}}{q_{1}+q_{2}}\right)^{n_{11}}\left(\frac{q_{2}}{q_{1}+q_{2}}\right)^{n_{12}} q_{1}^{\alpha_{1}-1}q_{2}^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}$
(I've added a subscript to the $D$ here to denote that this is the data from experiment 1.) The proportionality constant is hard to obtain analytically and will have to be computed numerically, but since there's more to come, let's just treat it as a number to be determined later.
So now, let's say in experiment 2, we only let them choose between candy bars 2 and 3. Then $P(D_{2}|q)\propto \left(\frac{q_{2}}{q_{2}+q_{3}}\right)^{n_{22}}\left(\frac{q_{3}}{q_{2}+q_{3}}\right)^{n_{23}}$, where $n_{22}$ and $n_{23}$ are the numbers of people who chose bars 2 and 3 in experiment 2 respectively.
So now if you want to know $P(q|D_{2})$ (which technically is $P(q|D_{2}, D_{1})$), you can do the same trick, but now, you use $P(q|D_{1})$ where you would have used your Dirichlet prior $P(q)$ previously.
Explicitly: $P(\underline{q}|D_{1},D_{2})\propto \left(\frac{q_{2}}{q_{2}+q_{3}}\right)^{n_{22}}\left(\frac{q_{3}}{q_{2}+q_{3}}\right)^{n_{23}}\left(\frac{q_{1}}{q_{1}+q_{2}}\right)^{n_{11}}\left(\frac{q_{2}}{q_{1}+q_{2}}\right)^{n_{12}} q_{1}^{\alpha_{1}-1}q_{2}^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}$
I hope you can see a pattern developing, and how this process can be continued over many experiments with any combination of available candy bars.
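The pattern above can be sketched directly as an unnormalised posterior density. The counts and the Dirichlet(1, 1, 1) prior below are hypothetical placeholders:

```python
# Unnormalised posterior over q = (q1, q2, q3) after the two experiments:
# experiment 1 showed bars 1 and 2, experiment 2 showed bars 2 and 3.
# All counts and the Dirichlet prior parameters are hypothetical.
def unnorm_posterior(q, alpha=(1.0, 1.0, 1.0),
                     n11=6, n12=4,   # experiment 1: choices of bars 1 and 2
                     n22=3, n23=7):  # experiment 2: choices of bars 2 and 3
    q1, q2, q3 = q
    # Restricted-choice (renormalised) likelihoods for each experiment.
    lik1 = (q1 / (q1 + q2)) ** n11 * (q2 / (q1 + q2)) ** n12
    lik2 = (q2 / (q2 + q3)) ** n22 * (q3 / (q2 + q3)) ** n23
    # Dirichlet prior density, up to its normalising constant.
    a1, a2, a3 = alpha
    prior = q1 ** (a1 - 1) * q2 ** (a2 - 1) * q3 ** (a3 - 1)
    return lik1 * lik2 * prior

print(unnorm_posterior((0.4, 0.3, 0.3)))
```

Each further experiment simply multiplies in another renormalised factor of the same form, over whichever bars were shown.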
There is one final step, which is to turn the constant of proportionality into an equality. I'll outline it here for this case, where there were only two experiments.
We know that $\int d\underline{q} P(\underline{q}|D_{1},D_{2})=1$, and thus:
$P(\underline{q}|D_{1},D_{2})= \frac{\left(\frac{q_{2}}{q_{2}+q_{3}}\right)^{n_{22}}\left(\frac{q_{3}}{q_{2}+q_{3}}\right)^{n_{23}}\left(\frac{q_{1}}{q_{1}+q_{2}}\right)^{n_{11}}\left(\frac{q_{2}}{q_{1}+q_{2}}\right)^{n_{12}} q_{1}^{\alpha_{1}-1}q_{2}^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}}{\int d\underline{q} \left(\frac{q_{2}}{q_{2}+q_{3}}\right)^{n_{22}}\left(\frac{q_{3}}{q_{2}+q_{3}}\right)^{n_{23}}\left(\frac{q_{1}}{q_{1}+q_{2}}\right)^{n_{11}}\left(\frac{q_{2}}{q_{1}+q_{2}}\right)^{n_{12}} q_{1}^{\alpha_{1}-1}q_{2}^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}}$
Note that this integral is over the 2-simplex, on which $q_{1}+q_{2}+q_{3}=1$ and $0 \leq q_{i} \leq 1,\ \forall i$.
I suspect this integral cannot be done analytically, so you'll need to compute it numerically. I'm no expert on which Monte Carlo sampling method is best here; perhaps somebody else can suggest whether this is best suited to HMC (Hamiltonian Monte Carlo) or another sampler. Whether you need to normalise at all will, however, depend on what you want to do with the posterior.
Similarly, you'll need to do more numerical integrals if you want to calculate $\langle q_{i} \rangle$, i.e. the expected fraction of the general population who will prefer bar i.
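With only three bars, the simplex is two-dimensional, so a brute-force grid is already enough to normalise the posterior and compute the $\langle q_{i} \rangle$; no MCMC is needed at this scale. A sketch, with hypothetical counts and a flat Dirichlet(1, 1, 1) prior:

```python
# Normalise the two-experiment posterior and compute posterior means <q_i>
# by summing over a grid on the 2-simplex {q1 + q2 + q3 = 1, q_i >= 0}.
# Counts are hypothetical; the flat Dirichlet(1,1,1) prior is a constant factor.
def unnorm_posterior(q1, q2, q3, n11=6, n12=4, n22=3, n23=7):
    lik1 = (q1 / (q1 + q2)) ** n11 * (q2 / (q1 + q2)) ** n12
    lik2 = (q2 / (q2 + q3)) ** n22 * (q3 / (q2 + q3)) ** n23
    return lik1 * lik2

steps = 400
h = 1.0 / steps
Z = 0.0                     # running normalising constant (up to a grid factor)
means = [0.0, 0.0, 0.0]     # running sums for <q1>, <q2>, <q3>
for i in range(1, steps):
    q1 = i * h
    for j in range(1, steps - i):
        q2 = j * h
        q3 = 1.0 - q1 - q2
        w = unnorm_posterior(q1, q2, q3)  # equal-area cells: weights cancel in Z
        Z += w
        means[0] += w * q1
        means[1] += w * q2
        means[2] += w * q3

means = [m / Z for m in means]
print([round(m, 3) for m in means])
```

In this hypothetical data, bar 2 lost both head-to-head experiments, so its posterior mean comes out lowest. For many bars the grid becomes infeasible and a sampler (e.g. HMC) is the practical route.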