Your first question is key, so let's focus on it. You are concerned about a bivariate random variable $(X_{n-1},X_n)$ with a probability distribution somehow defined by giving $X_{n-1}$ a distribution and then defining the distribution function of $X_n$ in terms of the random variable $X_{n-1}.$
There are many subtleties involved, so I write the following in the hope that exploring the consequences of the definitions in detail and carrying out the calculations explicitly will reveal what actually is being done when the distribution of one random
variable is defined in terms of another random variable. We will see that this makes sense; the first example in the question (the uniform distributions) will then show that it does not uniquely determine the random variables themselves.
You are working with conditional expectations, so let's begin by expressing probabilities in terms of expectations. Let $x\in\mathbb{R}$ be a value at which we wish to compute the distribution function of $X_n,$
$$F_n(x) = \Pr(X_n \le x) = \phi(x;X_{n-1})$$
where $\phi$ defines the distribution of $X_n$ in terms of $X_{n-1}.$ For instance, assuming $\Pr(X_{n-1}\gt 0) = 1$ for simplicity (so that the division below is well defined) and taking $x\gt 0,$ $\phi$ might be a uniform distribution function
$$\phi(x;X_{n-1}) = \frac{\min(X_{n-1},x)}{X_{n-1}} = \min\left(1, \frac{x}{X_{n-1}}\right) .$$
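To make this concrete, here is a minimal simulation sketch in Python of one way (among many) to realize such a pair: draw $X_{n-1}$ from a hypothetical Uniform$(0,1)$ distribution and then, conditionally, draw $X_n$ uniformly on $(0, X_{n-1}).$ Restricting attention to a narrow bin of $X_{n-1}$ values near $y$ should approximately reproduce $\phi(x;y) = \min(1, x/y).$

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# One way (among many) to realize the pair: X_{n-1} ~ Uniform(0,1),
# then X_n | X_{n-1} ~ Uniform(0, X_{n-1}).
x_prev = rng.uniform(0.0, 1.0, n)
x_next = rng.uniform(0.0, x_prev)            # conditional draw given x_prev

# Empirical check that Pr(X_n <= x | X_{n-1} near y) is about min(1, x/y).
x, y = 0.3, 0.8
in_bin = np.abs(x_prev - y) < 0.01           # condition on X_{n-1} near y
print(np.mean(x_next[in_bin] <= x))          # should be close to ...
print(min(1.0, x / y))                       # ... 0.375
```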
The probability determined by $F_n$ can be expressed in terms of the indicator function $\mathcal{I}(X_n\le x)$ as
$$\Pr(X_n \le x) = E\left[\mathcal{I}(X_n\le x)\right].$$
Because a property of $X_n$ has been expressed in terms of $X_{n-1},$ to be explicit we must consider this to be a conditional expectation,
$$E\left[\mathcal{I}(X_n\le x)\right] = E\left[\mathcal{I}(X_n\le x)\mid X_{n-1}\right].$$
We may assemble all the foregoing into the formula
$$ E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right] = \phi(x;X_{n-1}).\tag{*}$$
Does this make sense? Let's check it against a definition of conditional expectation. $X_n$ is a random variable with respect to some sigma algebra $\mathfrak{F}_n$ defined on a probability space $(\Omega, \mathfrak{F}_n, \mathbb{P}).$ Let $\mathfrak{F}_{n-1}\subset \mathfrak{F}_n$ be the sigma algebra generated by the conditioning variable $X_{n-1}.$ Then the conditional expectation in $(*)$ is an $\mathfrak{F}_{n-1}$-measurable random variable, allowing us to write expressions like
$$E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right](\omega)$$
to refer to its value on an outcome $\omega\in\Omega.$ Notice that the right hand side of $(*)$ is also $\mathfrak{F}_{n-1}$-measurable provided $y \mapsto \phi(x;y)$ is a measurable function for every $x,$ because it's a function of $X_{n-1}.$ So far so good: at least $(*)$ is equating two comparable mathematical objects!
The defining property of a conditional expectation is
$$\int_A E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right](\omega)\,\mathrm{d}\mathbb{P}(\omega) = \int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega)\tag{**}$$
for all $\mathfrak{F}_{n-1}$-measurable sets $A.$
Substituting $(*)$ gives
$$\int_A \phi(x;X_{n-1}(\omega))\,\mathrm{d}\mathbb{P}(\omega) = \int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega)$$
for all events $A\in\mathfrak{F}_{n-1}.$ Thus, specifying $\phi$ amounts to specifying the values of the integral on the right for all events $A\in\mathfrak{F}_{n-1}.$ That's enough to determine $X_n$ up to distribution, because if $Y$ is another variable with this property, then for all $A\in\mathfrak{F}_{n-1},$
$$0 = \int_A ( \mathcal{I}(X_n(\omega)\le x) - \mathcal{I}(Y(\omega)\le x))\, \mathrm{d}\mathbb{P}(\omega).$$
This does not force the two indicator functions to be equal (the counterexample below shows they need not be), but it does equate their conditional expectations given $X_{n-1}$ almost surely. In particular, setting $A=\Omega$ gives
$$0 = \Pr(X_n \le x) - \Pr(Y \le x),$$
showing that $X_n$ and $Y$ are identically distributed.
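As a sanity check, the displayed identity can be verified by simulation for events of the form $A = \{X_{n-1}\le t\},$ which generate $\mathfrak{F}_{n-1}.$ This sketch reuses the hypothetical Uniform$(0,1)$ / Uniform$(0, X_{n-1})$ construction from above; the two estimates should agree up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x_prev = rng.uniform(0.0, 1.0, n)            # X_{n-1} ~ Uniform(0,1)
x_next = rng.uniform(0.0, x_prev)            # X_n | X_{n-1} ~ Uniform(0, X_{n-1})

x = 0.3
for t in (0.2, 0.5, 0.9):
    A = x_prev <= t                                   # generating event A = {X_{n-1} <= t}
    lhs = np.mean(np.minimum(1.0, x / x_prev) * A)    # integral of phi(x; X_{n-1}) over A
    rhs = np.mean((x_next <= x) * A)                  # integral of I(X_n <= x) over A
    print(t, lhs, rhs)                                # each pair should nearly coincide
```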
A counterexample might help reinforce the fact that the random variable $X_n$ is usually not uniquely determined by $\phi$ and $X_{n-1}.$ Let $\Omega = [0,1]\times [0,1]$ with the usual Borel sigma algebra and Lebesgue measure $\mathbb P.$ I will exploit the obvious fact that the map
$$\iota: \Omega\to\Omega;\quad \iota(\omega_1,\omega_2) = (\omega_1,1-\omega_2)$$
(which merely flips the square $\Omega$ upside down) preserves all the essential properties of this probability space.
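This measure-preserving property is easy to confirm empirically; the following sketch checks a couple of arbitrary test rectangles before and after flipping.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
w1 = rng.uniform(0.0, 1.0, n)                # omega_1
w2 = rng.uniform(0.0, 1.0, n)                # omega_2

f1, f2 = w1, 1.0 - w2                        # the flip iota(w1, w2) = (w1, 1 - w2)

# Probabilities of rectangles [0,a] x [0,b] computed from the original and
# flipped points; both should be near a*b.
for a, b in ((0.3, 0.6), (0.8, 0.2)):
    print(np.mean((w1 <= a) & (w2 <= b)), np.mean((f1 <= a) & (f2 <= b)))
```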
The random variable defined by
$$X_{n-1}(\omega_1,\omega_2) = \omega_1$$
has a Uniform$(0,1)$ distribution. The random variables
$$X_n(\omega_1,\omega_2) = \omega_1\omega_2$$
and
$$X_n^{(2)}(\omega_1,\omega_2) = \omega_1(1-\omega_2)$$
are identically distributed because they are related by $\iota.$ To compute their common conditional distribution, note that the sigma algebra $\sigma(X_{n-1}) = \mathfrak{F}_{n-1}$ is generated by the sets of the form
$$\Omega_{t} = \{(\omega_1,\omega_2)\mid \omega_1\le t\}.$$
Visualize these as slicing the square $[0,1]\times[0,1]$ vertically at the location $t;$ each such slice is measurable in the subalgebra, and every $\mathfrak{F}_{n-1}$-measurable set is built out of such slices.
It suffices therefore to let $A = \Omega_{t}$ for arbitrary $t$ and compute
$$\eqalign{\int_A \mathcal{I}(X_n(\omega)\le x)\, \mathrm{d}\mathbb{P}(\omega) &= \int_0^t\int_0^1 \mathcal{I}(\omega_1\omega_2 \le x)\, \mathrm{d}\omega_2\,\mathrm{d}\omega_1\\
&= \int_0^t \min\left(1,\frac{x}{\omega_1}\right)\, \mathrm{d}\omega_1\\
&= \int_0^t \phi(x;\omega_1)\, \mathrm{d}\omega_1\\
&= \int_A \phi(x;X_{n-1}(\omega))\, \mathrm{d}\mathbb{P}(\omega).
}$$
Everything in this chain of equalities merely substitutes a definition or previous equality, except for the first and last lines, which apply Fubini's Theorem, and the move from the first line to the second, which integrates out $\omega_2$: for fixed $\omega_1,$ the set of $\omega_2\in[0,1]$ for which $\omega_1\omega_2 \le x$ is an interval of length $\min(1, x/\omega_1).$
Thus, the integrand at the end must be (a version of) the conditional expectation, because it has been seen to satisfy the defining property $(**):$
$$E\left[ \mathcal{I}(X_n\le x)\mid X_{n-1}\right] = \phi(x;X_{n-1})\quad\text{almost surely}.$$
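As a numerical sanity check on the chain above, one can estimate its first and last members by sampling the square uniformly (hypothetical values $x = 0.4$ and $t = 0.7$) and compare them with the closed form $x + x\log(t/x),$ valid when $0 \lt x \le t \le 1.$

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
w1 = rng.uniform(0.0, 1.0, n)                # omega_1
w2 = rng.uniform(0.0, 1.0, n)                # omega_2

x, t = 0.4, 0.7
A = w1 <= t                                  # the slice Omega_t
lhs = np.mean((w1 * w2 <= x) * A)            # integral of I(X_n <= x) over A
rhs = np.mean(np.minimum(1.0, x / w1) * A)   # integral of phi(x; X_{n-1}) over A
closed_form = x + x * np.log(t / x)          # analytic value when 0 < x <= t
print(lhs, rhs, closed_form)                 # all three approximately 0.6238
```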
Although this equation is true of both $X_{n}$ and $X_{n}^{(2)},$ these are distinct random variables. Indeed,
$$X_{n}(\omega_1,\omega_2) - X_{n}^{(2)}(\omega_1,\omega_2) = \omega_1\omega_2 - \omega_1(1-\omega_2) = \omega_1(2\omega_2 - 1)$$
is almost surely nonzero, because it vanishes only when $\omega_1 = 0$ or $\omega_2 = 1/2,$ an event of probability zero.
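Finally, a short simulation on the same square illustrates both halves of the conclusion: the empirical distribution functions of $X_n$ and $X_n^{(2)}$ match, yet the two variables virtually never agree pointwise.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
w1 = rng.uniform(0.0, 1.0, n)
w2 = rng.uniform(0.0, 1.0, n)

xn = w1 * w2                                 # X_n
xn2 = w1 * (1.0 - w2)                        # X_n^{(2)}

# Identical distributions: the empirical CDFs agree at every threshold tested...
for x in (0.1, 0.3, 0.5, 0.9):
    print(x, np.mean(xn <= x), np.mean(xn2 <= x))

# ...yet the two random variables almost never take the same value.
print(np.mean(np.isclose(xn, xn2)))          # approximately 0
```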