For time-dependent regressors, it is pretty straightforward. Many classes of time series models can handle them, including models from the ARIMA family (e.g. ARIMAX and regression with ARIMA errors), BSTS, Facebook Prophet, and others.
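To make that concrete, here is a minimal sketch of regression with ARIMA errors using statsmodels' SARIMAX class. The series and regressor names (price, mortgage_rate) are synthetic illustrations I made up, not from your question:

```python
# A minimal sketch of regression with ARIMA errors via statsmodels'
# SARIMAX. The data is synthetic, purely for illustration.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(42)
n = 120  # ten years of monthly observations

# Time-dependent regressor: a slowly drifting mortgage rate
mortgage_rate = 5 + np.cumsum(rng.normal(0, 0.05, n))

# Target: trend + linear effect of the regressor + AR(1) errors
errors = np.zeros(n)
for t in range(1, n):
    errors[t] = 0.7 * errors[t - 1] + rng.normal(0, 1)
price = 100 + 0.5 * np.arange(n) - 3 * mortgage_rate + errors

# The exog term enters linearly; the ARIMA(1,0,0) part models
# the autocorrelated residuals
model = SARIMAX(price, exog=mortgage_rate.reshape(-1, 1),
                order=(1, 0, 0), trend="ct")
results = model.fit(disp=False)
print(results.params)

# Forecasting requires future values of the exogenous regressor,
# which is one practical cost of this approach
future_rate = np.full((12, 1), mortgage_rate[-1])
print(results.forecast(steps=12, exog=future_rate))
```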
The tricky part is time-independent regressors: most people don't realize that time-independent regressors are of no use whatsoever unless you are modeling multiple time series at the same time.
Consider your housing example: if you are trying to model the price over time of a single house, which has a fixed size, then there is no way such a model can capture the effect of the size of the house on the price, since it has never "seen" the effect of any other sizes on the price, and therefore has no way to determine what the right coefficients/weights should be.
This means you will have to use an approach that allows you to model multiple time series together, while including the effect of the time-independent variables. Here are a couple of ways you might be able to pull that off (there might be more, but I am not familiar with them):
If your time-independent variables are categorical or discrete valued, you can use a hierarchical/grouped time series forecasting approach, where you group your time series along the different possible values of your time-independent variable. The advantage of this approach is that you can use most of the textbook time series methods (ARIMA, exponential smoothing, etc.) in concert with a hierarchical forecasting scheme. The downside is that it doesn't allow for continuous time-independent variables, and you have to make some assumptions beforehand about the importance and the effect of your time-independent variables (e.g. should I aggregate my time series based on zip code first, then based on number of rooms, or the other way around?).
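Here is a minimal bottom-up sketch of that grouped idea, assuming the series have already been binned by a hypothetical discrete variable (zip-code groups here) and using synthetic data:

```python
# Bottom-up grouped forecasting sketch: forecast each group's
# aggregate series separately, then sum for the total. Group
# labels and prices are synthetic assumptions for illustration.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
dates = pd.date_range("2015-01-01", periods=60, freq="MS")

# Synthetic monthly mean prices for three zip-code groups
prices = pd.DataFrame(
    {z: 200 + 10 * i + np.cumsum(rng.normal(0.5, 2, len(dates)))
     for i, z in enumerate(["90210", "10001", "60601"])},
    index=dates,
)

# Forecast each group independently (bottom level of the hierarchy)
horizon = 12
group_forecasts = {
    z: ExponentialSmoothing(prices[z], trend="add").fit().forecast(horizon)
    for z in prices.columns
}

# Bottom-up reconciliation: the total forecast is the sum of the groups
total_forecast = sum(group_forecasts.values())
print(total_forecast.head())
```

Bottom-up is just the simplest reconciliation scheme to illustrate; top-down and optimal (MinT-style) reconciliation are also options in the hierarchical forecasting literature.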
If your time-independent variables are continuous, you can use a general machine learning approach like XGBoost or deep learning models. The advantage of this approach is that it can handle any type of additional variable, because these models are usually very flexible. The downside is that ML models are much harder to implement (coding, hyperparameter optimization, etc.) than regular time series models, and it is usually difficult to interpret their output, since they tend to be highly non-linear and "black box" in nature.
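A hedged sketch of the ML route: pool many houses into one training set where each row combines lagged prices (time-dependent) with static attributes like size (time-independent). The column names and data-generating process below are my own illustrative assumptions:

```python
# XGBoost on pooled series: static + dynamic features in one table.
# Synthetic data; "size" is the time-independent regressor.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
rows = []
for house_id in range(200):
    size = rng.uniform(1500, 4500)          # time-independent regressor
    price = 100 * size / 1000 + rng.normal(0, 20)
    for t in range(24):                     # two years of monthly prices
        price += rng.normal(1, 5)           # random-walk-ish drift
        rows.append({"house_id": house_id, "t": t,
                     "size": size, "price": price})
df = pd.DataFrame(rows)

# Lag features: previous two months' prices for the same house
df["lag1"] = df.groupby("house_id")["price"].shift(1)
df["lag2"] = df.groupby("house_id")["price"].shift(2)
df = df.dropna()

X = df[["lag1", "lag2", "size"]]   # static + dynamic features together
y = df["price"]

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)
print(dict(zip(X.columns, model.feature_importances_)))
```

Note that pooling many houses into one training set is precisely what lets the model learn the effect of size, which a single-series model never could.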
In response to the comment about how to include size in a hierarchical model:
Simply transforming the sizes to discrete values instead of continuous ones won't help much, because you would still end up with a very large number of nodes in your hierarchy, and each one would only contain a very small number of time series, thus defeating the purpose of hierarchical forecasting in the first place.
Instead, I suggest one of the following two ways of dealing with the size variable:
Plot the distribution/histogram of your sizes, and see if it has any distinct modes or clusters. Hopefully there will be only a small number of them. You can then assign each size to a bin corresponding to the cluster it falls in, and use that as your aggregation criterion (see the clustering sketch after the example below).
Example: your sizes are $[1899, 2023, 2200, 2300, 3500, 3570, 3995, 4012]$; you can see that there are two clusters. Assign $[1899, 2023, 2200, 2300]$ to group 1, and assign $[3500, 3570, 3995, 4012]$ to group 2. Then use those two groups to aggregate your data.
Note that this will only work if there are definite clusters in the data. It will not work if the size distribution is more uniform, like $[1899, 2023, 2850, 3010, 3500, 3995, 4012, 4300]$.
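Here is a small sketch of that clustering idea using one-dimensional k-means on the example sizes above (k = 2 is an assumption based on eyeballing them; in practice inspect the histogram first to pick k, or skip clustering if the sizes look uniform):

```python
# Bin sizes into aggregation groups via 1-D k-means.
import numpy as np
from sklearn.cluster import KMeans

sizes = np.array([1899, 2023, 2200, 2300, 3500, 3570, 3995, 4012])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(sizes.reshape(-1, 1))
print(groups)  # e.g. [0 0 0 0 1 1 1 1]: use these labels as the
               # aggregation bins in the hierarchical scheme
```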
A second approach (this is not really stats, just an example of using domain knowledge) is to simply ignore the size, because size will correlate strongly with other, more manageable discrete variables like the number of rooms and the number of stories in the house. You can use those as proxies for the size of the house.