> One way to go about it is to build an "ARIMA with exogenous variables" model for each customer, in which case I would be looking at managing 200K models.
I always find it strange when practitioners suggest fitting a large number of individual models to individual objects, in this case 200K of them! A good statistical model should describe the overall dataset while allowing for variation between individuals. If you have covariate information on the customers that lets you differentiate them and group them by characteristics, then presumably you could formulate a single model for all of them that uses that covariate information. With time-series data on multiple individuals it is also common to see serial correlation within each individual's series, and this can be accommodated with hierarchical models, with "random effect" terms for each individual, or with other similar methods.
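Purely as a rough illustration of what a single pooled model with individual random effects might look like, here is a sketch using `MixedLM` from statsmodels. The data layout and column names (`customer_id`, `month`, `spend`, `segment`, `tenure`) are assumptions for the example, not details from your question.

```python
# Minimal sketch: one pooled model for all customers, with a random
# intercept per customer to induce correlation within each customer's series.
# Assumes a long-format DataFrame with one row per customer-month and a
# numeric 'month' time index; all column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("customer_panel.csv")

model = smf.mixedlm(
    "spend ~ C(segment) + tenure + month",   # fixed effects: covariates + trend
    data=df,
    groups=df["customer_id"],                 # random intercept per customer
)
result = model.fit()
print(result.summary())
```

A random slope on the time trend could be added via the `re_formula` argument if customers are expected to trend differently, but the random intercept is the natural first step.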
Without more information it is not really possible to say what the best model will be, but I am always sceptical when I see practitioners segment data into tiny parts and then apply ad hoc models to those parts. That approach risks losing information (since each model excludes the data from all other individuals) and risks over-fitting (since the models are tailored to small parts of the data). As a first pass with this kind of data, I would suggest fitting a time-series model that includes the covariates for the individuals and also includes some kind of random-effects terms to induce correlation within each individual's observations over time. This should give you some idea of the predictive effect of the covariates, and it will highlight whether there is any residual variation between individuals that cannot be adequately described by simple random effects. Whatever model you end up with, my view is that it is best to approach the problem by seeking to accommodate the entire dataset in a single model.
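As a hedged sketch of the kind of follow-up check I have in mind, reusing the hypothetical `result` fitted above: compare the between-customer variation to the residual noise, and look for leftover serial correlation that a simple random intercept does not capture.

```python
# Rough diagnostics on the pooled fit (assumes no rows were dropped for
# missing values, so residuals align with the original DataFrame).
import numpy as np

re_sd = np.sqrt(result.cov_re.iloc[0, 0])   # between-customer s.d. (random intercept)
resid_sd = np.sqrt(result.scale)            # residual s.d.
print(f"between-customer s.d. {re_sd:.2f}, residual s.d. {resid_sd:.2f}")

# Lag-1 autocorrelation of residuals, computed customer by customer;
# large values suggest adding time-series structure beyond the random intercept.
resid = df.assign(resid=result.resid)
lag1 = resid.groupby("customer_id")["resid"].apply(lambda s: s.autocorr(lag=1))
print(lag1.describe())
```

If the residual autocorrelation is substantial, that is the point at which richer within-individual dynamics (lagged terms, ARMA-type errors) are worth adding to the single pooled model, rather than reverting to 200K separate fits.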