It's quite intuitive that most neural network topologies/architectures are not identifiable. But what are some well-known results in the field? Are there simple conditions which allow/prevent identifiability? For example,
- all networks with nonlinear activation functions and more than one hidden layer are not identifiable
- all networks with more than two hidden units are not identifiable
Or things along those lines. NOTE: I'm not saying that these conditions *do* prevent identifiability (though they seem like pretty good candidates to me); they are just examples of what I mean by "simple conditions".
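To make concrete the kind of non-identifiability I have in mind, here is a minimal sketch (my own illustration, not a published result): for a one-hidden-layer tanh network, permuting the hidden units, or flipping the signs of a unit's incoming and outgoing weights (tanh is odd), gives a different parameter vector that computes exactly the same function.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # hidden-layer weights (3 units, 2 inputs)
b = rng.normal(size=3)        # hidden-layer biases
v = rng.normal(size=3)        # output weights

def f(x, W, b, v):
    # One-hidden-layer network: f(x) = v . tanh(W x + b)
    return v @ np.tanh(W @ x + b)

# Symmetry 1: permute the hidden units -- the sum over units is unchanged.
perm = [2, 0, 1]
W_p, b_p, v_p = W[perm], b[perm], v[perm]

# Symmetry 2: flip the sign of unit 0's parameters.
# Since tanh(-z) = -tanh(z), the two sign flips cancel in the output.
W_s, b_s, v_s = W.copy(), b.copy(), v.copy()
W_s[0], b_s[0], v_s[0] = -W_s[0], -b_s[0], -v_s[0]

x = rng.normal(size=2)
assert np.allclose(f(x, W, b, v), f(x, W_p, b_p, v_p))
assert np.allclose(f(x, W, b, v), f(x, W_s, b_s, v_s))
```

So at minimum the parameters are only identifiable up to these discrete symmetries; the question is what sharper results exist beyond that.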
If it helps to narrow down the question, feel free to consider only feed-forward and recurrent architectures. If that's still not enough, I'd be satisfied with an answer that covers at least one architecture among MLP, CNN and RNN. I had a quick look around the Web, but the only discussion I could find was on Reddit. Come on, people, we can do better than Reddit ;-)