To put my question into context: I am a physicist, but my exposure to statistics is limited and what I did learn about it was over 30 years ago.
I am trying to learn about block bootstrapping, as that technique might be suitable for an issue I am working on. I can find lots of papers/books/info on the mathematics of block bootstrapping, but I would first like to find a generic description of the process before 'venturing' into issues such as moving block bootstrapping, circular block bootstrapping, stationary block bootstrapping, block lengths, sample size, etc.
I have oversampled correlated data, 5 variables (columns) by 10000 observations (rows), which I want to reduce to about 100 rows. The data is a time series, but not continuous, and there may be data from different locations in it too, which means there can be different data at the same time (if the latter is an issue for block bootstrapping, I could remove data 'duplicated' in time). Block bootstrapping would allow me to preserve the correlation structure of the data.
The ultimate aim is to reduce the dataset to ~100 rows such that both the pdf and the cdf of the full dataset and of the reduced dataset are the same (within a still-to-be-defined minimum error range) for all 5 variables.
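To make the 'same pdf and cdf' criterion a little more concrete, this is how I would tentatively compare the two datasets; the per-variable Kolmogorov-Smirnov statistic is only my own guess at a suitable error measure, not something I have seen recommended:

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_distributions(full, reduced):
    """Two-sample KS statistic for each of the 5 variables.

    full    : array of shape (10000, 5), the original data
    reduced : array of shape (~100, 5), the reduced data
    Small statistics mean the empirical cdfs (and hence the pdfs) of the
    two datasets are close for that variable.
    """
    return [ks_2samp(full[:, j], reduced[:, j]).statistic
            for j in range(full.shape[1])]
```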
Questions: 1) Will block bootstrapping be able to do this? 2) What is the step-by-step process by which this is done? I don't expect anyone to write out the full process in detail here, but maybe someone has put a YouTube video or a 'block bootstrapping for dummies' guide out there that I could start with.
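For what it is worth, here is my current, possibly wrong, understanding of the basic (moving-block) procedure, written as a Python sketch; the block length of 10 is an arbitrary placeholder, and choosing it properly is one of the things I still need to learn:

```python
import numpy as np

def moving_block_resample(data, n_out=100, block_len=10, rng=None):
    """Build one resample of ~n_out rows by concatenating randomly
    chosen contiguous blocks of rows (moving block bootstrap).

    data      : array of shape (n_rows, n_vars), rows ordered in time
    n_out     : desired number of rows in the resample
    block_len : number of consecutive rows per block (placeholder value)
    """
    rng = np.random.default_rng() if rng is None else rng
    n_rows = data.shape[0]
    blocks = []
    while sum(len(b) for b in blocks) < n_out:
        start = rng.integers(0, n_rows - block_len + 1)  # random block start
        blocks.append(data[start:start + block_len])     # keep the rows contiguous
    return np.vstack(blocks)[:n_out]                     # trim to exactly n_out rows
```

My plan would then be to check such a resample against the full dataset with a comparison like the one above, but I do not know whether this is how block bootstrapping is actually meant to be used for reducing a dataset, hence the questions.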
I have looked at similar questions on block bootstrapping here, and there is one on "Resources to learn about block bootstrap in time series analysis", but the references in the answers assume a statistical literacy I still have to master.