This question was asked a long time ago, but I'm posting a response in case anyone discovers it in the future. In short, the answer is yes: you can do this in many settings, and you are justified in correcting for the change in sample size by the factor $\sqrt{\frac{M}{N}}$. This approach is usually called the $M$ out of $N$ bootstrap, and it works in most settings that the "traditional" bootstrap does, as well as some settings in which it doesn't.
The reason is that many bootstrap consistency arguments concern statistics of the form $\sqrt{N} (T_N - \mu)$, where $T_N$ is computed from random variables $X_1, \ldots, X_N$ and $\mu$ is some parameter of the underlying distribution. For example, for the sample mean, $T_N = \frac{1}{N} \sum_{i=1}^N X_i$ and $\mu = \mathbb{E}(X_1)$.
Many bootstrap consistency proofs argue that, as $N \to \infty$, given some finite sample $\{x_1, \ldots, x_N\}$ and associated point estimate $\hat{\mu}_N = T_N(x_1, \ldots, x_N)$,
$$
\sqrt{N}(T_N(X_1^*, \ldots, X_N^*) - \hat{\mu}_N) \overset{D}{\to} \sqrt{N}(T_N(X_1, \ldots, X_N) - \mu)
\tag{1} \label{convergence}
$$
where the $X_i$ are drawn from the true underlying distribution and the $X_i^*$ are drawn with replacement from $\{x_1, \ldots, x_N\}$.
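To make the notation concrete, here is a minimal sketch of the standard $N$ out of $N$ bootstrap for the sample mean, approximating the left-hand side of (\ref{convergence}). The data-generating choices (an exponential sample, $N = 500$, $B = 2000$ replicates) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)  # pretend observed sample, N = 500
N = len(x)
mu_hat = x.mean()                         # point estimate T_N(x_1, ..., x_N)

B = 2000                                  # number of bootstrap replicates
reps = np.array([
    np.sqrt(N) * (rng.choice(x, size=N, replace=True).mean() - mu_hat)
    for _ in range(B)
])

# `reps` approximates the distribution of sqrt(N)(T_N(X*) - mu_hat) in (1).
print("bootstrap sd:", reps.std(ddof=1))
```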
However, we could also use shorter samples of length $M < N$ and consider the estimator
$$
\sqrt{M}(T_M(X_1^*, \ldots, X_M^*) - \hat{\mu}_N).
\tag{2} \label{m_out_of_n}
$$
It turns out that, as $M, N \to \infty$, the estimator (\ref{m_out_of_n}) has the same limiting distribution as the right-hand side of (\ref{convergence}) in most settings where (\ref{convergence}) holds, and in some where it does not. Since (\ref{convergence}) and (\ref{m_out_of_n}) share a limiting distribution, this motivates the correction factor $\sqrt{\frac{M}{N}}$ in, e.g., the sample standard deviation.
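Here is a sketch of the same computation with resamples of size $M < N$, applying the $\sqrt{\frac{M}{N}}$ correction to turn the size-$M$ bootstrap spread into a standard error for $T_N$; again the simulated sample and the choice $M = 100$ are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)   # pretend observed sample, N = 500
N, M, B = len(x), 100, 2000                # resample size M < N
mu_hat = x.mean()

# Bootstrap distribution of T_M* from resamples of size M.
t_m_star = np.array([
    rng.choice(x, size=M, replace=True).mean() for _ in range(B)
])

# sqrt(M)(T_M* - mu_hat) has approximately the same limiting distribution as
# sqrt(N)(T_N - mu), so sd(T_N) is estimated by sqrt(M/N) * sd(T_M*).
se_hat = np.sqrt(M / N) * t_m_star.std(ddof=1)
print("M-out-of-N standard error:", se_hat)
print("plug-in check, s/sqrt(N):  ", x.std(ddof=1) / np.sqrt(N))
```

This is exactly the $\sqrt{\frac{M}{N}}$ rescaling from the first paragraph, applied to the bootstrap standard deviation.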
These arguments are all asymptotic and hold only in the limit $M, N \to \infty$. For this to work, it's important not to pick $M$ too small. There's some theory (e.g. Bickel & Sakov below) as to how to pick the optimal $M$ as a function of $N$ to get the best theoretical results, but in your case computational resources may be the deciding factor.
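If you want something automatic, here is a rough sketch of an adaptive rule in the spirit of Bickel & Sakov: try candidate sizes on a geometric grid $m_j = \lceil q^j N \rceil$ and pick the one at which consecutive bootstrap distributions agree most closely. The function name, the grid parameter $q = 0.75$, the lower cutoff, and the use of the two-sample KS statistic as the distance are my assumptions for illustration, not a faithful implementation of their procedure.

```python
import numpy as np
from scipy.stats import ks_2samp

def choose_m(x, q=0.75, B=1000, m_min=10, seed=0):
    """Pick M on the grid m_j = ceil(q^j * N), minimizing the KS distance
    between bootstrap distributions at consecutive grid points."""
    rng = np.random.default_rng(seed)
    N, mu_hat = len(x), x.mean()

    # Geometric grid of candidate subsample sizes (deduplicated).
    ms, j = [], 0
    while int(np.ceil(q**j * N)) >= m_min:
        m = int(np.ceil(q**j * N))
        if not ms or m != ms[-1]:
            ms.append(m)
        j += 1

    # Bootstrap distribution of sqrt(m)(T_m* - mu_hat) at each candidate m.
    dists = [
        np.array([
            np.sqrt(m) * (rng.choice(x, size=m, replace=True).mean() - mu_hat)
            for _ in range(B)
        ])
        for m in ms
    ]

    # Return the m whose distribution changes least when m shrinks one step.
    ks = [ks_2samp(dists[i], dists[i + 1]).statistic for i in range(len(ms) - 1)]
    return ms[int(np.argmin(ks))]

rng = np.random.default_rng(2)
print(choose_m(rng.exponential(scale=2.0, size=500)))
```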
For some intuition: in many cases, we have $\hat{\mu}_N \overset{D}{\to} \mu$ as $N \to \infty$, so that
$$
\sqrt{N}(T_N(X_1, \ldots, X_N) - \mu)
\tag{3} \label{m_out_of_n_intuition}
$$
can be thought of as a bit like an $m$ out of $n$ bootstrap with $m = N$ and $n = \infty$ (I'm using lower case to avoid notational confusion). In this light, emulating the distribution of (\ref{m_out_of_n_intuition}) using an $M$ out of $N$ bootstrap with $M < N$ is a more "right" thing to do than the traditional ($N$ out of $N$) kind. An added bonus in your case is that it's less computationally expensive to evaluate.
As you mention, Politis and Romano is the main reference. I also find Bickel et al. (1997), listed below, a nice overview of the $M$ out of $N$ bootstrap.
Sources:
PJ Bickel, F Goetze, WR van Zwet. 1997. Resampling fewer than $n$ observations: gains, losses and remedies for losses. Statistica Sinica.
PJ Bickel, A Sakov. 2008. On the choice of $m$ in the $m$ out of $n$ bootstrap and confidence bounds for extrema. Statistica Sinica.