Say I have two datasets with the same features but different samples. If I fit a separate linear model on each dataset and then take a weighted average of their coefficients (weighting each model by the number of samples it was fit on), what can I say about the resulting meta-model?
Model 1, built over the first $m$ samples: $$ Y_0 = \sum_{i=0}^{k} \beta_{0,i}\, x_i $$ Model 2, built over the remaining $n$ samples: $$ Y_1 = \sum_{i=0}^{k} \beta_{1,i}\, x_i $$ The merged model that I am interested in: $$ Y_{1+0} = \sum_{i=0}^{k} \frac{m\,\beta_{0,i} + n\,\beta_{1,i}}{m+n}\, x_i $$
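To make the setup concrete, here is a minimal NumPy sketch of what I mean (synthetic data, no intercept term, and ordinary least squares via `lstsq`; the variable names and data-generating process are just for illustration). It fits the two models separately, forms the sample-count-weighted merge of their coefficients, and also fits a model on the pooled data for comparison:

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
k = 3
m, n = 60, 40  # sample counts of the two datasets

# Two disjoint sample sets drawn over the same k features
X0 = rng.normal(size=(m, k))
X1 = rng.normal(size=(n, k))
true_beta = np.array([1.5, -2.0, 0.5])
y0 = X0 @ true_beta + rng.normal(scale=0.1, size=m)
y1 = X1 @ true_beta + rng.normal(scale=0.1, size=n)

# Fit each linear model separately by ordinary least squares
beta0 = lstsq(X0, y0, rcond=None)[0]
beta1 = lstsq(X1, y1, rcond=None)[0]

# Sample-count-weighted average of the coefficients (the merged model)
beta_merged = (m * beta0 + n * beta1) / (m + n)

# For comparison: the single model fit on the pooled data
beta_pooled = lstsq(np.vstack([X0, X1]),
                    np.concatenate([y0, y1]), rcond=None)[0]

print("merged:", beta_merged)
print("pooled:", beta_pooled)
```

In this toy run the merged and pooled coefficients come out close but not identical, which is part of what makes me wonder how the merged model should be characterized.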
I have looked into ensembling, and as far as I can tell it is common practice to combine the outputs of multiple models (averaging predictions for regression, majority voting for classification), but I have yet to find any discussion of the properties of a model obtained by merging the coefficients themselves.