I can think of a scenario where you have an "IoT"/Edge/Fog device that gets updated with trained models to detect features. Initially, while the model is untrained, you may want to transmit a lot of live data. Once you have trained models (say, for anomaly detection), the need for large amounts of data may drop.
You may still want to collect data over time, though: a real-life system deteriorates and changes its behaviour over time, drifting away from the data the model was trained on. But you would gradually move to a "sanity" check. One way would be switching from a full live data feed to periodic bursts; another would be temporarily increasing the sample rate. Which fits would depend on the application.
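Very roughly, the loop might look like the sketch below. Everything in it is illustrative, not a real drift detector: the z-score check, the thresholds, and the read_sensor/transmit stubs are all placeholders for whatever your hardware and anomaly model actually provide.

    import random
    import statistics
    import time

    FULL_RATE_S = 10        # full-rate sampling interval (the "every 10 secs" case)
    SANITY_PERIOD_S = 3600  # idle time between sanity-check bursts (illustrative)
    BURST_LEN = 30          # samples per burst
    Z_LIMIT = 3.0           # burst mean more than 3 sigma from baseline => drift

    # Baseline learned during the initial full-rate phase (hardcoded for the sketch)
    BASELINE_MEAN, BASELINE_STD = 21.0, 0.5   # degrees C, air-conditioned room

    def read_sensor():
        # Stand-in for a real sensor read; replace with your hardware call.
        return random.gauss(BASELINE_MEAN, BASELINE_STD)

    def transmit(payload):
        # Stand-in for the uplink; replace with MQTT/HTTP/whatever you use.
        print("tx:", payload)

    def sanity_loop():
        while True:
            burst = []
            for _ in range(BURST_LEN):        # brief full-rate burst
                burst.append(read_sensor())
                time.sleep(FULL_RATE_S)
            z = abs(statistics.mean(burst) - BASELINE_MEAN) / BASELINE_STD
            if z > Z_LIMIT:
                # Drift suspected: fall back to shipping the raw data
                transmit({"drift": True, "raw": burst})
            else:
                # Model still looks sane: a cheap summary is enough
                transmit({"drift": False, "mean": statistics.mean(burst)})
            time.sleep(SANITY_PERIOD_S)

If drift keeps showing up you would stay in the raw-data mode and eventually retrain, which is the "gradually move" part above.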
So, instead of measuring the temperature in an air-conditioned room every 10 seconds forever, once you have acquired that data for a year (for the sake of an example), and if your model was good enough after being trained on one full seasonal cycle (half a year would be sufficient most of the time), you could decrease the sample rate, saving transmission cost or freeing up bandwidth.
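To put rough numbers on that example: one reading every 10 seconds is 8,640 readings a day, while backing off to one every five minutes is 288 a day. That is about a 30x reduction in traffic for that one sensor, before you even look at payload size.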