I have the following scenario:
I'm working with Deep Learning to solve seismic processing tasks. The seismic data can be understood as a collection of images that represent a given sub-surface region, kind of like an X-ray, but for seeing the underground. To simplify, it is a discrete cube $C$, and each sample $s \in C$ is a real number.
A seismic cube can be seen as a 3D matrix of real values:
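$$C \in \mathbb{R}^{N_1 \times N_2 \times N_3}, \qquad s = C_{ijk} \in \mathbb{R},$$

where $N_1, N_2, N_3$ are just placeholder names for the cube's dimensions (e.g., inline, crossline, and time/depth).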
My question is about best practices for normalizing this kind of dataset for training and inference.
Usually, the neural network (NN) receives batches of sections (images) of these volumes. However, it is infeasible to load the whole volume beforehand to compute its min/max values and apply some kind of normalization or standardization. And we know that, ideally, the NN should receive samples (images) whose values lie in the range $[0,1]$.
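To make the setup concrete, here is roughly the kind of on-the-fly, per-section min-max scaling I could do without ever knowing the global statistics. This is only a sketch; the file format, loader, and names below are hypothetical:

```python
import numpy as np

def minmax_scale(section: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Scale one 2D section to [0, 1] using its own min/max (not the cube's)."""
    lo, hi = section.min(), section.max()
    return (section - lo) / (hi - lo + eps)

def batches_of_sections(paths, batch_size=16):
    """Yield batches of scaled 2D sections sliced from cubes stored as .npy files,
    so the full 3D volume is never loaded into memory at once (leftover partial
    batches are simply dropped in this sketch)."""
    batch = []
    for path in paths:
        cube = np.load(path, mmap_mode="r")       # memory-mapped 3D cube
        for i in range(cube.shape[0]):            # iterate over sections along axis 0
            batch.append(minmax_scale(np.asarray(cube[i])))
            if len(batch) == batch_size:
                yield np.stack(batch)[..., None]  # shape (B, H, W, 1)
                batch = []
```

The drawback I see is that each section gets its own scaling factors, so amplitudes are no longer comparable across sections or across cubes, which is exactly why I'm asking about the recommended practice.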
What is the recommended practice for training the NN in this situation?
And once the network is trained, how should new images be treated for inference?