Since the overarching goal here is to generate the $\text{AR}(1)$ series values, I will show you how you can generate these directly, rather than via the error terms. (The method for generating the error terms is very similar.) The way to do this is to write out the multivariate normal distribution for the process and then use the rules for the conditional distribution of a multivariate normal. I will use the more standard notation:
$$Y_t = \mu + \phi (Y_{t-1} - \mu) + u_t
\quad \quad \quad
u_t \sim \text{IID N}(0, \sigma^2).$$
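For reference, here is a minimal sketch (my own, not part of the original derivation) of how this model can be simulated unconditionally via the recursion, drawing $Y_0$ from its stationary distribution $\text{N}(\mu, \sigma^2/(1-\phi^2))$. The function and parameter names are illustrative, and I am assuming NumPy is available:

```python
# Sketch: unconditional AR(1) simulation by the recursive equation above,
# with a stationary starting value.  Names are illustrative.
import numpy as np

def simulate_ar1(T, mu, phi, sigma, rng=None):
    """Simulate Y_0, ..., Y_T from the stationary AR(1) process."""
    rng = np.random.default_rng(rng)
    y = np.empty(T + 1)
    y[0] = mu + rng.normal(scale=sigma / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, T + 1):
        y[t] = mu + phi * (y[t - 1] - mu) + rng.normal(scale=sigma)
    return y

y = simulate_ar1(T=100, mu=0.0, phi=0.8, sigma=1.0, rng=1)
```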
Under stationarity, any vector of values from this process has a multivariate normal distribution, with variance matrix determined by the auto-covariance function of the process. Thus, when you want conditional distributions, you can use standard results for conditioning in the multivariate normal distribution.
Conditional distribution given the endpoints: The auto-covariance function for the $\text{AR}(1)$ process above is:
$$\gamma(t) = \frac{\sigma^2 \phi^{|t|}}{1-\phi^2}
\quad \quad \quad
\text{for } t \in \mathbb{Z}.$$
Thus, the marginal distribution of the vector $\mathbf{Y} = (Y_0,...,Y_T)$ is:
$$\mathbf{Y} \sim \text{N} \Bigg( \mu \mathbf{1}, \sigma^2 \mathbf{\Sigma}(\phi) \Bigg)
\quad \quad \quad
\mathbf{\Sigma}(\phi) \equiv \frac{1}{1-\phi^2} \begin{bmatrix}
1 & \phi & \phi^2 & \cdots & \phi^T \\
\phi & 1 & \phi & \cdots & \phi^{T-1} \\
\phi^2 & \phi & 1 & \cdots & \phi^{T-2} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\phi^T & \phi^{T-1} & \phi^{T-2} & \cdots & 1 \\
\end{bmatrix}.$$
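Since $\mathbf{\Sigma}(\phi)$ is a Toeplitz matrix, it is easy to construct directly, and the whole vector can be sampled unconditionally from this multivariate normal. Here is a minimal sketch assuming NumPy/SciPy (function and variable names are my own):

```python
# Sketch: build Sigma(phi), with entries phi^|i-j| / (1 - phi^2), and draw
# (Y_0, ..., Y_T) from its unconditional multivariate normal distribution.
import numpy as np
from scipy.linalg import toeplitz

def ar1_variance_matrix(T, phi):
    """Sigma(phi): (T+1) x (T+1) Toeplitz matrix from the auto-covariances."""
    return toeplitz(phi ** np.arange(T + 1)) / (1 - phi**2)

T, mu, phi, sigma = 100, 0.0, 0.8, 1.0
Sigma = ar1_variance_matrix(T, phi)
rng = np.random.default_rng(1)
y = rng.multivariate_normal(mean=np.full(T + 1, mu), cov=sigma**2 * Sigma)
```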
To obtain the conditional distribution of interest, we decompose the random vector $\mathbf{Y}$ (with its elements reordered so that the interior values come first) as:
$$\mathbf{Y} = \begin{bmatrix} \mathbf{Y}_\text{INT} \\ \mathbf{Y}_\text{END} \end{bmatrix},$$
where $\mathbf{Y}_\text{INT} = (Y_1,...,Y_{T-1})$ and $\mathbf{Y}_\text{END} = (Y_0,Y_T)$. We likewise decompose the mean vector and variance matrix (with rows and columns permuted to match this ordering) as:
$$\mu \cdot \mathbf{1} = \begin{bmatrix}
\mu \cdot \mathbf{1} \\
\mu \cdot \mathbf{1}
\end{bmatrix}
\quad \quad \quad
\mathbf{\Sigma}(\phi) = \begin{bmatrix}
\mathbf{\Sigma}_\text{INT}(\phi) & \mathbf{\Sigma}_\text{CROSS}(\phi) \\
\mathbf{\Sigma}_\text{CROSS}(\phi)^\text{T} & \mathbf{\Sigma}_\text{END}(\phi)
\end{bmatrix},$$
where the mean vectors are of appropriate lengths and the variance parts are:
$$\begin{aligned}
\mathbf{\Sigma}_\text{INT}(\phi)
&\equiv \frac{1}{1-\phi^2} \begin{bmatrix}
1 & \phi & \phi^2 & \cdots & \phi^{T-2} \\
\phi & 1 & \phi & \cdots & \phi^{T-3} \\
\phi^2 & \phi & 1 & \cdots & \phi^{T-4} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\phi^{T-2} & \phi^{T-3} & \phi^{T-4} & \cdots & 1 \\
\end{bmatrix}, \\[30pt]
\mathbf{\Sigma}_\text{CROSS}(\phi)
&\equiv \frac{1}{1-\phi^2} \begin{bmatrix}
\phi & \phi^{T-1} \\
\phi^2 & \phi^{T-2} \\
\phi^3 & \phi^{T-3} \\
\vdots & \vdots \\
\phi^{T-1} & \phi \\
\end{bmatrix}, \\[30pt]
\mathbf{\Sigma}_\text{END}(\phi)
&\equiv \frac{1}{1-\phi^2} \begin{bmatrix}
1 & \phi^{T} \\
\phi^{T} & 1 \\
\end{bmatrix}. \\[6pt]
\end{aligned}$$
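In code, these blocks can be obtained simply by indexing the full matrix with the interior and endpoint index sets. A minimal sketch, again assuming NumPy/SciPy with illustrative names:

```python
# Sketch: partition Sigma(phi) into the INT, CROSS and END blocks above by
# indexing with interior indices 1..T-1 and endpoint indices {0, T}.
import numpy as np
from scipy.linalg import toeplitz

T, phi = 100, 0.8
Sigma = toeplitz(phi ** np.arange(T + 1)) / (1 - phi**2)

interior = np.arange(1, T)      # indices of Y_1, ..., Y_{T-1}
ends = np.array([0, T])         # indices of Y_0 and Y_T

Sigma_INT = Sigma[np.ix_(interior, interior)]    # (T-1) x (T-1)
Sigma_CROSS = Sigma[np.ix_(interior, ends)]      # (T-1) x 2
Sigma_END = Sigma[np.ix_(ends, ends)]            # 2 x 2
```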
Now, using the conditional density rule for the multivariate normal distribution, we have:
$$p(\mathbf{y}_\text{INT}| \mathbf{y}_\text{END}) = \text{N}(\mathbf{y}_\text{INT}| \boldsymbol{\mu}_*, \sigma^2 \boldsymbol{\Sigma}_*),$$
where:
$$\begin{aligned}
\boldsymbol{\mu}_* &\equiv \mu \cdot \mathbf{1} + \boldsymbol{\Sigma}_\text{CROSS}(\phi) \boldsymbol{\Sigma}_\text{END}(\phi)^{-1} (\mathbf{y}_\text{END} - \mu \cdot \mathbf{1}), \\[8pt]
\boldsymbol{\Sigma}_* &\equiv \boldsymbol{\Sigma}_\text{INT}(\phi) - \boldsymbol{\Sigma}_\text{CROSS}(\phi) \boldsymbol{\Sigma}_\text{END}(\phi)^{-1} \boldsymbol{\Sigma}_\text{CROSS}(\phi)^\text{T}. \\[6pt]
\end{aligned}$$
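Putting the pieces together, here is a minimal sketch of the conditional simulation, assuming NumPy/SciPy; the function name and parameters (`y0`, `yT`, etc.) are my own and purely illustrative:

```python
# Sketch: compute mu_* and sigma^2 * Sigma_* above, then draw the interior
# values Y_1, ..., Y_{T-1} given fixed endpoints y_0 = y0 and y_T = yT.
import numpy as np
from scipy.linalg import toeplitz

def sample_ar1_given_endpoints(T, mu, phi, sigma, y0, yT, rng=None):
    rng = np.random.default_rng(rng)
    Sigma = sigma**2 * toeplitz(phi ** np.arange(T + 1)) / (1 - phi**2)
    interior, ends = np.arange(1, T), np.array([0, T])
    S_int = Sigma[np.ix_(interior, interior)]
    S_cross = Sigma[np.ix_(interior, ends)]
    S_end = Sigma[np.ix_(ends, ends)]
    w = np.linalg.solve(S_end, np.array([y0, yT]) - mu)   # Sigma_END^{-1} (y_END - mu 1)
    mu_star = mu + S_cross @ w                            # conditional mean
    Sigma_star = S_int - S_cross @ np.linalg.solve(S_end, S_cross.T)  # conditional variance
    y_int = rng.multivariate_normal(mu_star, Sigma_star)
    return np.concatenate(([y0], y_int, [yT]))

y = sample_ar1_given_endpoints(T=100, mu=0.0, phi=0.8, sigma=1.0, y0=0.5, yT=-1.0, rng=1)
```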
The formulas above give you the conditional distribution of the observable values in the process. If you particularly want the error terms, you can recover them from the generated values by inverting the recursive equation for the process, as in the sketch below.
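A minimal sketch of that extraction, again assuming NumPy and using illustrative names: since $u_t = (Y_t - \mu) - \phi (Y_{t-1} - \mu)$, the errors follow directly from consecutive values.

```python
# Sketch: recover the error terms u_1, ..., u_T from a vector of process
# values y = (y_0, ..., y_T) by inverting the AR(1) recursion.
import numpy as np

def ar1_errors(y, mu, phi):
    y = np.asarray(y)
    return (y[1:] - mu) - phi * (y[:-1] - mu)

# e.g. u = ar1_errors(y, mu=0.0, phi=0.8), applied to a series y generated as above
```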