Your description is not entirely clear, presumably because the procedure itself is not yet clear to you, hence your question. I'll explain the terms you used and how these procedures and algorithms are typically applied. Please correct me if I've misunderstood what you meant.
So, let's say I have a dataset of 100 observations. That dataset is
then split into a 75/25 split of train and test sets.
You are describing hold-out validation. This means that you split the data into a train set and a test set. With your 75/25 proportions, 75% of the data goes into the train set and the remainder into the test set; 75% of 100 is 75 observations.
In some cases, you would split the data into three sets: train, validation, and test. The validation set would then be used for hyperparameter tuning.
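As a minimal sketch of such a hold-out split in plain NumPy (the variable names and seed here are mine, chosen just for illustration; in practice you might use something like scikit-learn's `train_test_split` instead):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                           # 100 observations, as in your example

indices = rng.permutation(n)      # shuffle the observation indices
train_idx = indices[:75]          # 75% of the data for training
test_idx = indices[75:]           # the remaining 25% held out for testing

print(len(train_idx), len(test_idx))  # 75 25
```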
Then the bootstrapping gets done and I "extract" 1000 different
bootstrap samples of that same split (now I have 1000 different 75/25
splits in theory).
This paragraph is not clear to me. Random forest uses the bootstrap to resample the data it is trained on. In your example, this would mean applying bootstrap resampling to the 75 observations in the train set: in each bootstrap iteration, you draw, with replacement, 75 out of the 75 observations. Repeating the procedure 1000 times gives you 1000 bootstrap samples of 75 observations each, where each sample consists of a different (random) combination of the 75 training observations. In this step, you do not touch the test set observations at all.
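The resampling step above can be sketched as follows (the seed and the stand-in training array are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
train = np.arange(75)   # stand-in for the 75 training observations

n_boot = 1000
# Each bootstrap sample draws 75 observations *with replacement* from the
# train set, so some observations repeat and others are left out
# (the left-out ones are called "out-of-bag").
boot_samples = [rng.choice(train, size=75, replace=True) for _ in range(n_boot)]

print(len(boot_samples), len(boot_samples[0]))  # 1000 75
```

Note that the test set never appears here; only the 75 training observations are resampled.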
This is where I'm not quite sure if I understood the algorithm
correctly; Essentially we now have 1000 different forests, and they
all get aggregated to make a final decision?
Not 1000 random forests, but 1000 decision trees. You would make a prediction with each of those trees and then aggregate the individual predictions into the final prediction. To aggregate them, you would use something like a majority vote (for classification) or an average (for regression), but this detail doesn't matter that much.
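To make the aggregation concrete, here is a small sketch using made-up per-tree predictions (the numbers are simulated, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictions from 1000 trees for one observation.
tree_preds_class = rng.integers(0, 2, size=1000)  # classification: labels 0 or 1
tree_preds_reg = rng.normal(5.0, 1.0, size=1000)  # regression: real-valued outputs

# Majority vote for classification: the most frequent label wins.
final_class = np.bincount(tree_preds_class).argmax()

# Average for regression: the mean of the individual tree predictions.
final_reg = tree_preds_reg.mean()

print(final_class, round(final_reg, 2))
```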