This was asked a long time ago, but I wrote an answer for a very similar question (maybe these should be linked?) and will post it here as well in case anyone discovers this question in future.
For your first question, @Bill is right -- you should "block bootstrap" the individuals to ensure the dependence structures within each individual's data are respected.
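In case a concrete picture helps, here's a minimal sketch of the block bootstrap in Python. The data layout (a dict mapping individual IDs to arrays of repeated measurements) and the statistic (the pooled mean) are just made-up stand-ins for whatever you actually have:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each individual has an array of repeated measurements,
# which may be dependent within that individual.
data = {i: rng.normal(loc=rng.normal(), size=20) for i in range(50)}

def block_bootstrap(data, statistic, n_boot=2000, rng=rng):
    """Resample whole individuals with replacement, keeping each
    individual's measurements together so that within-individual
    dependence is preserved in every bootstrap sample."""
    ids = list(data.keys())
    out = np.empty(n_boot)
    for b in range(n_boot):
        sampled = rng.choice(ids, size=len(ids), replace=True)
        out[b] = statistic(np.concatenate([data[i] for i in sampled]))
    return out

boot = block_bootstrap(data, statistic=np.mean)
print("bootstrap SE of the pooled mean:", boot.std(ddof=1))
```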
For your second question, in short, the answer is yes: you can do this in many settings, but you should correct for the sample size, since the estimator you are computing is actually a different one (e.g., for the sample mean, $\frac{1}{N}\sum_{i=1}^N X_i$ is a different estimator than $\frac{1}{M}\sum_{i=1}^M X_i$ if $M \ne N$). This approach is usually called the $M$ out of $N$ bootstrap, and it works (in the sense of being consistent) in most settings where the "traditional" bootstrap does, as well as in some settings where it doesn't.
The reason is that many bootstrap consistency arguments concern statistics of the form $\sqrt{N} (T_N - \mu)$, where $X_1, \ldots, X_N$ are random variables and $\mu$ is some parameter of the underlying distribution. For example, for the sample mean, $T_N = \frac{1}{N} \sum_{i=1}^N X_i$ and $\mu = \mathbb{E}(X_1)$.
Many bootstrap consistency proofs argue that, given an observed sample $\{x_1, \ldots, x_N\}$ and the associated point estimate $\hat{\mu}_N = T_N(x_1, \ldots, x_N)$, as $N \to \infty$,
$$
\sqrt{N}(T_N(X_1^*, \ldots, X_N^*) - \hat{\mu}_N) \overset{D}{\to} \sqrt{N}(T_N(X_1, \ldots, X_N) - \mu)
\tag{1} \label{convergence}
$$
where the $X_i$ are drawn from the true underlying distribution and the $X_i^*$ are drawn with replacement from $\{x_1, \ldots, x_N\}$.
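As a sanity check on (\ref{convergence}), here's a toy simulation for the sample mean (the Exponential(1) population, $N = 200$, and the replication counts are all arbitrary choices on my part). The bootstrap distribution on the left-hand side should roughly match the Monte Carlo approximation of the right-hand side:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_boot, n_mc = 200, 5000, 5000
mu = 1.0  # true mean of the Exponential(1) toy population

# One observed sample and its point estimate
x = rng.exponential(scale=1.0, size=N)
mu_hat = x.mean()

# LHS of (1): bootstrap distribution of sqrt(N) * (T_N* - mu_hat)
lhs = np.array([np.sqrt(N) * (rng.choice(x, size=N, replace=True).mean() - mu_hat)
                for _ in range(n_boot)])

# RHS of (1): Monte Carlo sampling distribution of sqrt(N) * (T_N - mu)
rhs = np.array([np.sqrt(N) * (rng.exponential(scale=1.0, size=N).mean() - mu)
                for _ in range(n_mc)])

# Both should be close to N(0, 1) here, since Var(X_1) = 1
print("bootstrap mean/sd:", lhs.mean(), lhs.std(ddof=1))
print("sampling  mean/sd:", rhs.mean(), rhs.std(ddof=1))
```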
However, we could also draw shorter resamples of length $M < N$ and consider the statistic
$$
\sqrt{M}(T_M(X_1^*, \ldots, X_M^*) - \hat{\mu}_N).
\tag{2} \label{m_out_of_n}
$$
It turns out that, as $M, N \to \infty$, the statistic (\ref{m_out_of_n}) has the same limiting distribution as (\ref{convergence}) in most settings where (\ref{convergence}) holds, and in some where it does not. Since (\ref{m_out_of_n}) and (\ref{convergence}) share a limiting distribution, the spread of the size-$M$ bootstrap replicates (which is on the $1/\sqrt{M}$ scale) can be rescaled by $\sqrt{\frac{M}{N}}$ to the $1/\sqrt{N}$ scale of the full-sample estimator; this is the correction factor for, e.g., the bootstrap standard error of the sample mean.
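Here is what that correction looks like for the sample mean (again just a sketch; the sizes and the Gaussian toy data are arbitrary). The replicates $T_M^*$ have spread roughly $\sigma/\sqrt{M}$, so multiplying by $\sqrt{M/N}$ brings that back to $\sigma/\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(2)

def m_out_of_n_se(x, M, n_boot=5000, rng=rng):
    """M out of N bootstrap estimate of the standard error of the
    full-sample mean. The replicates T_M* have spread ~ sigma / sqrt(M);
    the sqrt(M / N) factor rescales that to ~ sigma / sqrt(N)."""
    N = len(x)
    boot = np.array([rng.choice(x, size=M, replace=True).mean()
                     for _ in range(n_boot)])
    return np.sqrt(M / N) * boot.std(ddof=1)

x = rng.normal(size=1000)
print("M out of N corrected SE:", m_out_of_n_se(x, M=200))
print("plug-in SE, sigma_hat/sqrt(N):", x.std(ddof=1) / np.sqrt(len(x)))
```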
These arguments are all asymptotic and hold only in the limit $M, N \to \infty$. For this to work, it's important not to pick $M$ too small. There's some theory (e.g. Bickel & Sakov below) as to how to pick the optimal $M$ as a function of $N$ to get the best theoretical results, but in your case computational resources may be the deciding factor.
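For what it's worth, here's my loose reading of the Bickel & Sakov adaptive rule as code: try $m_j = \lceil q^j N \rceil$ for some $q \in (0, 1)$ and pick the $m$ at which the bootstrap distribution stabilizes. Treat this as a sketch rather than a faithful implementation of the paper; the choices $q = 0.75$, the cutoff `m_min`, and the Kolmogorov-Smirnov distance are assumptions on my part:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

def boot_dist(x, m, n_boot=2000, rng=rng):
    """Bootstrap distribution of sqrt(m) * (T_m* - T_N) for the sample mean."""
    mu_hat = x.mean()
    return np.array([np.sqrt(m) * (rng.choice(x, size=m, replace=True).mean() - mu_hat)
                     for _ in range(n_boot)])

def choose_m(x, q=0.75, m_min=10, rng=rng):
    """Sketch of the Bickel & Sakov (2008) idea: over the grid
    m_j = ceil(q**j * N), pick the m whose bootstrap distribution is
    closest (here in Kolmogorov-Smirnov distance) to the next one in
    the grid, i.e. where the distribution has stopped changing."""
    N = len(x)
    ms, j = [], 0
    while True:
        m = int(np.ceil(q**j * N))
        if m < m_min:
            break
        if not ms or m != ms[-1]:
            ms.append(m)
        j += 1
    dists = [boot_dist(x, m, rng=rng) for m in ms]
    gaps = [ks_2samp(dists[i], dists[i + 1]).statistic for i in range(len(ms) - 1)]
    return ms[int(np.argmin(gaps))]

x = rng.exponential(size=500)
print("chosen M:", choose_m(x))
```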
For some intuition: in many cases, we have $\hat{\mu}_N \overset{P}{\to} \mu$ as $N \to \infty$, so that
$$
\sqrt{N}(T_N(X_1, \ldots, X_N) - \mu),
\tag{3} \label{m_out_of_n_intuition}
$$
can be thought of as a bit like an $m$ out of $n$ bootstrap with $m = N$ and $n = \infty$ (I'm using lower case to avoid notational confusion). In this light, emulating the distribution of (\ref{m_out_of_n_intuition}) using an $M$ out of $N$ bootstrap with $M < N$ is a more "right" thing to do than the traditional ($N$ out of $N$) kind. An added bonus in your case is that it's less computationally expensive to evaluate.
I know of two good sources in case anyone wants more details on using bootstrap samples shorter than the original sample:
PJ Bickel, F Goetze, WR van Zwet. 1997. Resampling fewer than $n$ observations: gains, losses and remedies for losses. Statistica Sinica.
PJ Bickel, A Sakov. 2008. On the choice of $m$ in the $m$ out of $n$ bootstrap and confidence bounds for extrema. Statistica Sinica.