In this paper by Gabrielsson, Nelson, et al., the authors "present a differentiable topology layer that can, among other things, construct a loss on the output of a deep generative network to incorporate topological priors".
I have only a basic understanding of topology, which is causing me some confusion. To summarize the context for my question, the authors state this in the introduction (emphasis is my own):
In many deep learning settings there is a natural topological perspective. This is true both for images and for 3D data such as point clouds or voxel spaces. In fact, many of the failure cases of generative models are topological in nature [32, 18]. We show how topological priors can be used to improve such models.
Later on, in section 3.1, the authors describe an example on MNIST:
We show how one can encourage the formation of lines, clusters, or holes in a set of points using geometric filtrations
As far as I can tell, the "geometric filtrations" are applied as part of the loss function, and they express the kind of topological prior described in the first quote.
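To make my reading concrete, I'd imagine the training setup looks something like the sketch below, written in PyTorch against the paper's companion `topologylayer` package. The `AlphaLayer` and `BarcodePolyFeature` names and signatures are my assumption from skimming that repo's README, so please treat the details as illustrative rather than authoritative; the point is just where the prior enters.

```python
import torch
from topologylayer.nn import AlphaLayer, BarcodePolyFeature  # paper's companion package (assumed API)

# Point cloud to optimize (stand-in for the output of a generative model).
x = torch.rand(100, 2, requires_grad=True)

# Human-specified pieces: the geometric filtration (here an alpha complex)
# and a barcode feature that turns persistence information into a scalar.
layer = AlphaLayer(maxdim=1)                 # differentiable persistence up to H1
hole_feature = BarcodePolyFeature(1, 2, 0)   # polynomial feature on dim-1 (hole) barcodes (assumed signature)

optimizer = torch.optim.Adam([x], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    dgminfo = layer(x)              # persistence diagrams of the filtered point cloud
    loss = -hole_feature(dgminfo)   # the sign and choice of feature encode the prior:
                                    # maximizing hole persistence encourages a loop
    loss.backward()                 # gradients flow back to the point positions
    optimizer.step()
```

If that reading is right, nothing about "lines, clusters, or holes" is discovered during training: swapping the feature (e.g. penalizing rather than rewarding dim-1 persistence) swaps the prior, and gradient descent only adjusts the points. That's what I want to confirm.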
So my question: is the topological prior learned by the topological layer, or is the prior imposed by the human who's training the network?
To put it in terms of the example, does the topological layer learn to "encourage the formation of lines, clusters, or holes," or is that prior information supplied by the human by properly specifying the regularizing loss function term?