First of all, 'random effects' can be viewed in different ways, and the approaches to them and the associated definitions may seem conflicting, but they are just different viewpoints on the same thing.
The 'random effect' term in a model can be seen both as a term in the deterministic part of the model and as a term in the random part of the model.
Basically, the difference between a fixed effect and a random effect is whether a parameter is considered fixed within the experiment or not. From that point you get all kinds of different practical applications, and the many varying answers (opinions) to the question "When to use random effects?". It may actually be more a linguistic problem (when something is, or is not, called a random effect) than a modelling problem (where we all understand the mathematics in the same way).
The Bayesian and Frequentist frameworks look at a statistical model in the same way. Say we have observations $Y_{ij}$, where $j$ is the observation number and $i$ indicates a grouping:
$$Y_{ij} = \underbrace{ \alpha + \beta}_{\substack{\text{model} \\ \text{parameters}}}\overbrace{X_{ij}}^{\substack{\text{independent} \\ \text{variables}}} + \overbrace{Z_{i}}^{\substack{\text{random} \\ \text{group}\\ \text{term}}} + \overbrace{\epsilon_{j}}^{\substack{\text{random} \\ \text{individual}\\ \text{term}}}$$
The observations $Y_{ij}$ will depend on some model parameters $\alpha$ and $\beta$, which can be seen as the 'effects' which describe how the $Y_{ij}$ depends on the variable $X_{ij}$.
But the observations are not deterministic functions of $X_{ij}$ alone: there are also random terms, so that the observations conditional on the independent variables, $Y_{ij} \vert X_{ij}$, follow some probability distribution. The terms $Z_{i}$ and $\epsilon_j$ are the nondeterministic part of the model.
This is the same for the Bayesian and Frequentist approach, which in principle do not differ in their way to describe a probability for the observations $Y_{ij}$ conditional on the model parameters $\alpha$ and $\beta$ and independent variables $X_{ij}$, where $Z_i$ and $\epsilon_j$ describe a non-deterministic part.
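As a minimal sketch, the model above can be simulated; all numeric values here (the parameter values, group sizes, and noise scales) are hypothetical choices, used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, chosen only for illustration.
alpha, beta = 1.0, 2.0             # fixed-effect model parameters
sigma_group, sigma_ind = 0.5, 0.3  # scales of the two random terms

n_groups, n_per_group = 100, 20
X = rng.uniform(0, 1, size=(n_groups, n_per_group))           # independent variable X_ij
Z = rng.normal(0, sigma_group, size=n_groups)                 # random group term Z_i
eps = rng.normal(0, sigma_ind, size=(n_groups, n_per_group))  # random individual term

# Deterministic part (alpha + beta * X) plus the two nondeterministic terms:
Y = alpha + beta * X + Z[:, None] + eps
```

Both frameworks would agree on this generative description; they only part ways once we try to learn $\alpha$ and $\beta$ from the data.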
The difference is in the approach to 'inference'.
The Bayesian approach uses reverse probability and describes a probability distribution of the (fixed effect) parameters $\alpha$ and $\beta$. This implies an interpretation of those parameters as random variables. With a Bayesian approach the outcome is a statement about the probability distribution for the fixed effect parameters $\alpha$ and $\beta$.
A Frequentist method does not consider a distribution of the fixed effect parameters $\alpha$ and $\beta$ and avoids making statements that imply such a distribution (though it does not explicitly reject one). The probability/frequency statements in a frequentist approach do not relate to the parameters themselves but to the success rate of the estimation procedure.
So if you like, you could say that a frequentist definition of a fixed effect is: 'a model parameter that describes the deterministic part of a statistical model' (i.e. a parameter that describes how the dependent variables depend on the independent variables).
And more specifically, in most contexts this relates only to the parameters of the deterministic model that describe $E[Y_{ij} \vert X_{ij}]$. For instance, with a frequentist model one can estimate both the mean and the variance, but only the parameters that relate to the mean are considered 'effects'. And even more specifically, 'effects' are most often used in the context of a *linear* model. E.g. for a nonlinear model like $E[y] \sim a e^{-bt}$ the parameters $a$ and $b$ are not really called 'effects'.
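A hedged sketch of this point, with made-up true values rather than any particular dataset: a frequentist least-squares fit estimates both the mean parameters and the residual variance, but only the former would be called 'effects':

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate data with hypothetical true values alpha = 1, beta = 2.
n = 500
X = rng.uniform(0, 1, size=n)
Y = 1.0 + 2.0 * X + rng.normal(0, 0.4, size=n)

# Least-squares estimate of the deterministic part E[Y | X] = alpha + beta * X.
design = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
alpha_hat, beta_hat = coef  # these are the 'effects'

# The residual variance is estimated too, but is not usually called an 'effect'.
sigma2_hat = np.mean((Y - design @ coef) ** 2)
```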
In a Bayesian framework all effects are in a sense random rather than deterministic (so the difference between random effects and fixed effects is less obvious). The model parameters $\alpha$ and $\beta$ are random variables.
I interpret the question's description/definition of the difference between random effects and fixed effects in the Bayesian framework as something pragmatic rather than a matter of principle:
- the fixed effects $\alpha$ and $\beta$ are considered to be like "where we estimate each parameter ... independently" (the $\alpha$ and $\beta$ are randomly drawn from a distribution, but they are the same for all $i$ and $j$ within the analysis, e.g. the mean of a species is a model parameter that is considered the same for each species)
- and the random effects are like "for a random effect the parameters for each level are modeled as being drawn from a distribution" (for each observation category $i$ a different random effect is 'drawn' from the distribution, e.g. the mean of a species is a model parameter that is considered different for each species)
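The two readings above can be contrasted in a small sketch, using the species-mean example with made-up hyperparameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
n_species = 5

# 'Fixed effect' reading: one parameter value, the same for every species
# (the value 3.0 is hypothetical, for illustration only).
mu_fixed = np.full(n_species, 3.0)

# 'Random effect' reading: each species' parameter is drawn from a common
# distribution, here Normal(3.0, 0.5) with made-up hyperparameters.
mu_random = rng.normal(3.0, 0.5, size=n_species)

# mu_fixed is identical across species; mu_random varies per species.
```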
In a frequentist framework the fixed effect model parameters are not considered random parameters, or at least it does not matter for the inference whether they are random or not, and this is left out of the analysis. However, the random effect term is explicitly considered a random variable (that is, a nondeterministic component of the model), and this will influence the analysis (e.g. the structure imposed on the random error term in a mixed effects model).
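That influence on the analysis can be seen in a simulation: the shared term $Z_i$ induces correlation between observations from the same group, which is exactly the covariance structure a mixed effects model imposes (the variance scales here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_group, sigma_ind = 1.0, 1.0  # hypothetical scales
n_groups, n_per_group = 2000, 2

Z = rng.normal(0, sigma_group, size=n_groups)
eps = rng.normal(0, sigma_ind, size=(n_groups, n_per_group))
Y = Z[:, None] + eps  # deterministic part omitted: it does not affect the covariance

# Two observations from the same group share Z_i, hence they are correlated.
within_group_corr = np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]
# Theoretical intraclass correlation: sigma_group^2 / (sigma_group^2 + sigma_ind^2) = 0.5
```

Ignoring this structure (e.g. fitting plain least squares) would give misleading standard errors, which is why the random effect term cannot simply be dropped from a frequentist analysis.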