There are plenty of kinds of models for computing a quantitative output.
There is KNN (K-Nearest Neighbors), which can be used for regression even though it is better known for classification. You just have to use the right object from the library you are working with: in scikit-learn, for instance, there are KNeighborsClassifier and KNeighborsRegressor. The main problem with this algorithm is that it is slow at prediction time, since it must search for neighbours among all the stored training points. That becomes a real issue with high-dimensional datasets (more than 100 features), which is not your case.
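Here is a minimal sketch of the regressor in use; the synthetic data and n_neighbors=5 are my own assumptions, just for illustration:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 3))                      # 500 samples, 3 features
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsRegressor(n_neighbors=5)   # prediction = mean target of the 5 nearest points
knn.fit(X_train, y_train)                  # fitting is cheap; the cost comes at prediction time
print(knn.score(X_test, y_test))           # R^2 on held-out data
```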
There are linear regressions, with many variants available: OLS, Ridge (OLS with an added penalty), Lasso, and so on. These are generally among the fastest to train, and they are certainly the easiest to explain too.
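A short sketch of how these variants compare in scikit-learn (the synthetic data and alpha values are assumptions for illustration); the coef_ attribute is what makes them so easy to explain, one weight per feature:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.5, 200)  # only 2 informative features

for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(X, y)
    # Lasso tends to drive the weights of uninformative features to exactly zero
    print(type(model).__name__, np.round(model.coef_, 2))
```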
There are decision trees (or rather random forests, which aggregate many decision trees into one model and are more used than a single tree). A tree fits the training data very well, so it is good for explaining a phenomenon, less easy to interpret than a linear regression but possible. But as far as I know, a single tree generalises poorly: it overfits the training data. This does not concern random forests, since the forest builds many trees whose errors correct each other. So a single tree alone is not good for prediction in a regression case. A big pro is that you do not need to preprocess the data beforehand (standardisation and so on); the con is that it is slower than linear models.
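To illustrate the single-tree-versus-forest point, here is a sketch on synthetic data (the data and hyperparameters such as n_estimators=100 are my assumptions, not anything from your problem):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(1000, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.3, 1000)

# note: no standardisation needed for tree-based models
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

print("single tree R^2:", tree.score(X_test, y_test))    # tends to overfit
print("forest R^2:     ", forest.score(X_test, y_test))  # averaging reduces variance
```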
As for literature suggestions, I do not know your level, so it is hard to recommend anything specific. To read the available literature from PhD researchers who have published their work, you must be strong in maths. That is not the case for every ML user, and frankly they do not always need a very high level in maths. Besides, you have a time constraint because you talked about streaming data, and here the language and the library you use can make a huge difference in execution time for the same method; I do not think academic papers will guide you on that point. And even with streaming data, we do not know whether you want an explanation of the phenomenon or a pure prediction of it, regardless of the why. That can totally change the model you will use.