When thinking about and using random variables, it helps to master some basic constructions. The most fundamental is creating a procedure to produce a random variable with any desired distribution.
Motivation: Why This is Important and What It's Good For
Observe that all computer simulation ultimately employs that most basic function, the uniform random number generator: each time it is called, it produces a number between $0$ and $1$; the numbers it produces from one invocation to the next are supposed to be independent, and the chance that any number lies between limits $0\le a\le b \le 1$ is $b-a.$
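To make this concrete, here is a small Python illustration using the standard library's `random.random()`, which plays the role of such a generator; the empirical check at the end is a sketch of the distributional claim, not a proof of it.

```python
import random

random.seed(42)  # fix the seed so the illustration is reproducible

# Each call yields a fresh number in [0, 1), intended to be independent
# of the previous calls.
draws = [random.random() for _ in range(5)]

# The chance that a draw lies between limits a <= b should be b - a;
# estimate it with many draws.
a, b = 0.2, 0.7
many = [random.random() for _ in range(100_000)]
frac = sum(a <= u <= b for u in many) / len(many)
# frac should be close to b - a = 0.5
```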
Theory proceeds in a similar way. Suppose, then, that you wish to produce a random variable $X$ out of three independent random variables $X_1,$ $X_2,$ and $I$ as in this question. In the question you describe an algebraic combination $X = IX_1 + (1-I)X_2.$ The issue concerns constructing the trivariate random variable
$$(X_1, X_2, I)$$
so that (a) all three components are independent and (b) they have specified probability distributions.
This is done in two steps: (1) forming the sample space $\Omega$ and (2) creating the probability distribution.
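Before the formal construction, here is a minimal Python sketch of the end result for the question's combination $X = IX_1 + (1-I)X_2$. The particular marginals (Exponential$(1)$ for $X_1$, Uniform$(0,1)$ for $X_2$, Bernoulli$(p)$ for $I$) are assumptions made purely for illustration; each variable is built from its own independent uniform draw.

```python
import math
import random

random.seed(0)

def sample_mixture(p=0.3):
    # Three independent uniform draws, one per component of (X_1, X_2, I).
    q1, q2, q3 = random.random(), random.random(), random.random()
    x1 = -math.log(1.0 - q1)        # quantile function of Exponential(1) -- assumed marginal
    x2 = q2                         # quantile function of Uniform(0, 1) -- assumed marginal
    i = 1 if q3 > 1.0 - p else 0    # quantile-style construction of Bernoulli(p)
    return i * x1 + (1 - i) * x2    # the combination X = I X_1 + (1 - I) X_2

samples = [sample_mixture() for _ in range(10)]
```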
Intuition
There are many equivalent ways to proceed. Before choosing one, permit me to share some intuition, which I draw from the tickets-in-a-box model. The "tickets" (slips of paper) in a box are the sample space $\Omega;$ what you write on them are values of random variables. To produce independent variables, first create a box representing one variable, writing its values on the tickets and making enough tickets with each value to give the intended probability.
Next, pull each of those tickets systematically out of the box. Use each one to create a new box of tickets in which two numbers are written: the first is the value on the pulled ticket and the second is a possible value of the second random variable. This new box gives values of the second random variable "conditioned on" the first. The proportions of each new ticket give the conditional distribution.
If you make the proportions always the same, so that the conditional distribution does not vary, you will create two independent random variables simply by taking all the new ("conditional") boxes you created and dumping their contents into one big box. The values written on the tickets in the big box are all ordered pairs of numbers: one for the first random variable and another for the second.
What we need, then, in our construction are (1) a mathematical way to create ordered pairs--that's obviously the Cartesian product--and (2) a formal way to specify the proportions. That's given by the distribution functions $F$ of the variables, or rather by their inverses: given any proportion $q$, the inverse value $x=F^{-1}(q)$ is the cutoff for which a fraction $q$ of the tickets must have values of $x$ or less written on them.
Preliminary Definitions and Notation
Let's take as our point of departure these quantile functions for the random variables. To be specific, suppose you are given a distribution function $F$. By definition, $F:\mathbb R \to [0,1].$ Its quantile function $X_F$ (also written $F^{-1}$) is defined on the interval $\mathcal{I}=[0,1]$ and takes values in the "extended real line" $$\mathbb{R}_\omega = \{-\infty\} \cup \mathbb{R} \cup \{\infty\}$$ according to the rule
$$X_F(q) = \sup\,\{x\in \mathbb{R}_\omega\mid F(x) \lt q\}.$$
This has the essential property that, for each extended real number $x$, the set of values $q$ at which the quantile function does not exceed $x$ is precisely the interval from $0$ up to $F(x)$:
$$\{q \in \mathcal{I}\mid X_F(q) \le x\} = [0, F(x)].\tag{*}$$
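A numerical sketch of a quantile function in Python, by bisection. It assumes $F$ is continuous and strictly increasing on the search bracket; under that assumption the result agrees with the supremum definition above.

```python
import math

def quantile(F, q, lo, hi, tol=1e-9):
    """Invert a CDF by bisection.

    A sketch assuming F is continuous and strictly increasing on [lo, hi]
    and that F(lo) < q < F(hi); under these assumptions the result agrees
    with the supremum definition of the quantile function.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < q:
            lo = mid   # the quantile lies to the right of mid
        else:
            hi = mid   # the quantile lies at or to the left of mid
    return 0.5 * (lo + hi)

# Example with the Exponential(1) CDF, F(x) = 1 - exp(-x) for x >= 0:
F = lambda x: 1.0 - math.exp(-x)
median = quantile(F, 0.5, lo=0.0, hi=50.0)
# median is close to log(2), about 0.6931
```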
The Sample Space
Given a tuple of distribution functions $F_1, \ldots, F_n$ define
$$\Omega = [0,1]^n$$
to be the $n$-fold Cartesian product of the unit interval.
The Probability Measure
The probability measure on $\Omega$ is the uniform one: that is, for all $0\le a_i\le b_i\le 1$ with $i=1,2,3,\ldots, n,$
$$\Pr\left(\omega \in [a_1,b_1] \times [a_2,b_2] \times \cdots \times [a_n,b_n]\right) = (b_1-a_1)(b_2-a_2)\cdots(b_n-a_n).\tag{**}$$
The Construction
For $\omega=(q_1,q_2,\ldots,q_n)\in\Omega,$ define
$$X_i(\omega) = X_{F_i}(q_i)$$
to be the quantile function of $F_i$ evaluated at the $i^\text{th}$ component of $\omega.$ The tuple $(X_1,X_2,\ldots,X_n)$ is the desired set of independent random variables on $\Omega$ having the given (marginal) distributions.
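In Python, the construction amounts to drawing $\omega$ uniformly from $[0,1]^n$ and applying each quantile function coordinate-wise. The three marginals below are assumptions chosen purely for illustration.

```python
import math
import random

random.seed(1)

# Illustrative quantile functions X_{F_i} (assumed marginals):
quantiles = [
    lambda q: -math.log(1.0 - q),          # Exponential(1)
    lambda q: q,                           # Uniform(0, 1)
    lambda q: 1.0 if q > 0.5 else 0.0,     # Bernoulli(1/2)
]

def draw():
    # omega = (q_1, ..., q_n) is uniform on the cube [0, 1]^n ...
    omega = [random.random() for _ in quantiles]
    # ... and X_i(omega) = X_{F_i}(q_i) evaluates each quantile function
    # at the corresponding coordinate.
    return [X_F(q) for X_F, q in zip(quantiles, omega)]

sample = draw()  # one realization of (X_1, X_2, X_3)
```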
(The fact that they are random variables includes an assertion that they are measurable functions--but because I wish to focus on the statistical properties of the variables, I will omit all discussion of the measure-theoretic aspects of this construction and leave the corresponding parts of the proofs as a (simple) exercise for those who are interested.)
Proofs
We have to demonstrate two things: that the $X_i$ are independent and that each $X_i$ has $F_i$ for its distribution. Together, these mean that for all extended real numbers $x_1, x_2, \ldots, x_n,$ the distributional values are given by the product of the $F_i$:
$$\Pr(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n) = F_1(x_1) F_2(x_2) \cdots F_n(x_n).$$
It is almost trivial that this is so, because by virtue of $(*)$ above,
$$\{\omega\in\Omega\mid X_1(\omega)\le x_1, \ldots, X_n(\omega)\le x_n\} = [0, F_1(x_1)] \times[0, F_2(x_2)] \times \cdots \times [0, F_n(x_n)]$$
which under the uniform distribution $({**})$ has probability $$(F_1(x_1)-0)(F_2(x_2)-0)\cdots (F_n(x_n)-0) = F_1(x_1)F_2(x_2)\cdots F_n(x_n),$$ QED.
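The product formula also lends itself to a quick Monte Carlo check. The two marginals here (Exponential$(1)$ and Uniform$(0,1)$) are assumptions for illustration only.

```python
import math
import random

random.seed(2)

# Estimate Pr(X_1 <= x_1, X_2 <= x_2) by simulation and compare it with
# the product F_1(x_1) * F_2(x_2).
N = 200_000
x1, x2 = 1.0, 0.4
hits = 0
for _ in range(N):
    u1, u2 = random.random(), random.random()
    X1 = -math.log(1.0 - u1)   # quantile function of Exponential(1)
    X2 = u2                    # quantile function of Uniform(0, 1)
    hits += (X1 <= x1) and (X2 <= x2)

empirical = hits / N
theoretical = (1.0 - math.exp(-1.0)) * 0.4   # F_1(x_1) * F_2(x_2)
# empirical should agree with theoretical (about 0.253) to within
# Monte Carlo error
```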