Machine learning research papers often treat learning and inference as two separate tasks, but it is not quite clear to me what the distinction is. In this book, for example, Bayesian statistics is used for both kinds of tasks, yet no motivation for the distinction is given. I have several vague ideas about what it could mean, but I would like to see a solid definition and perhaps also rebuttals of or extensions to my ideas:
- The difference between inferring the values of latent variables for a given data point, and learning a suitable model for the data (see the sketch after this list).
- The difference between extracting the variances (inference) and learning the invariances that make such extraction possible (by learning the dynamics of the input space/process/world).
- The neuroscientific analogy might be short-term potentiation/depression (memory traces) vs long-term potentiation/depression.
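
To make the first idea concrete, here is a minimal sketch of what I have in mind, using a Gaussian mixture model fitted with EM (my own toy example, not taken from the book; all names and numbers are illustrative). The E-step would be "inference": computing the posterior over the latent cluster assignment of each individual data point given fixed parameters. The M-step would be "learning": re-estimating the shared model parameters from the whole dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two 1-D Gaussian clusters.
x = np.concatenate([rng.normal(-2.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])

# Model parameters (what "learning" estimates): means, variances, mixing weights.
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

def e_step(x, mu, var, pi):
    """Inference: posterior p(z | x, theta) over the latent cluster
    assignment of each data point, with the parameters held fixed."""
    lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return lik / lik.sum(axis=1, keepdims=True)

def m_step(x, resp):
    """Learning: re-estimate the global parameters theta from all the data,
    weighted by the inferred responsibilities."""
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(x)
    return mu, var, pi

for _ in range(50):
    resp = e_step(x, mu, var, pi)   # inference over per-point latent variables
    mu, var, pi = m_step(x, resp)   # learning of parameters shared across the dataset

print("learned means:", mu)
```

Under this reading, inference would be per-data-point and conditioned on a fixed model, while learning would update quantities shared across the whole dataset; but I am not sure whether this is the distinction the literature actually intends.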