6

I have read in several places that convolutional neural networks were biologically inspired. In what ways do CNNs mirror biology, and in what ways don't they? Is there a more biologically plausible computer vision architecture that succeeds in learning translation invariance?

Ari Herman
  • It seems like this would be something like a biology question (about characteristics of actual brains). I don't immediately see how it can be conceived as falling within the [range of topics](http://stats.stackexchange.com/help/on-topic) for our site. If you can clarify how it is a question in statistics/machine learning, please do so. – Glen_b Mar 21 '17 at 02:08
  • 3
    I believe that an answer to this question requires some knowledge of artificial neural networks as well as some neurobiology. Therefore, it seems equally sensible to post it here as on the biology stack exchange. Further, I believe that understanding which aspects of NNs mirror biology (and which do not) should be of interest to at least some NN researchers. – Ari Herman Mar 21 '17 at 02:11
  • Neural networks are to some extent a proof of concept for how a particular problem could be solved, using units that are no more complex than neurons in the brain. But even if you can show that this *can* be done using units that are similar to neurons, you don't know that that's how the brain actually solves the same problem – Marius Mar 21 '17 at 02:13
  • Since this question has been put on hold as off-topic, can someone suggest which stack exchange I should re-post it to? I do not think that biology stack exchange will be helpful, since I am guessing that most biologists will not be familiar with concepts like weight sharing in CNNs. – Ari Herman Mar 21 '17 at 02:22
  • 1
    Just so you have a little something to go off of...neural nets used in machine learning are only a loose analogy to the brain. Convnets are vaguely similar to the visual system in the sense of having local connectivity, and the visual features they learn bear some resemblance. Shared weights aren't biologically plausible. Synaptic weights are stored in the local physical configuration of each synapse/cell (e.g. the density of various receptors and ion channels). Backprop isn't biologically plausible either. – user20160 Mar 21 '17 at 02:48
  • I believe this question is on topic enough, as mimicking the human visual system is behind many ideas in computer vision, including CNN. – Ophir Yoktan Mar 21 '17 at 06:35
  • I've seen similar questions on the [AI SE](https://ai.stackexchange.com). – Hong Ooi Mar 21 '17 at 08:25
  • Convolution is basic to biological modeling. Here is an [example that give some motivation as well](http://ejnmmiphys.springeropen.com/track/pdf/10.1186/s40658-016-0166-z?site...com). – Carl Mar 21 '17 at 08:40
  • @Ari Rather than reposting the question somewhere else, flag to migrate. But before you do so, note that your post is only one vote short of reopening here. [On the other hand, I haven't seen *anyone* address the specific criteria for what's on topic here and base an argument for reopening on those criteria -- the absence of such an argument in spite of a very long comment thread here implies that there's isn't one, which would be a strong argument to leave it closed. Nonetheless, if the vote to reopen carries and it looks like it will quite soon, I will not stop that happening] – Glen_b Mar 21 '17 at 11:07
  • @Glen_b In its current form, this Q looks perfectly on-topic to me. And interesting too (+1). – amoeba Mar 21 '17 at 11:24
  • I appreciate this being reopened. @Carl, I was specifically asking about convolutional neural networks in the context of vision, not the general concept of convolution in biological modelling. – Ari Herman Mar 21 '17 at 15:58
  • Convolution neural networks are common. Convolution is common to many processes in biology, and, it would not surprise me to see convolution happening in the visual neurons in the retina. What is your point? – Carl Mar 23 '17 at 01:39

2 Answers

2

The paper https://arxiv.org/pdf/1807.04587.pdf (July 2018) reports on some efforts to find artificial neural network learning algorithms that are biologically plausible. They focus mainly on backpropagation, but also discuss weight sharing. They review a lot of work by major researchers in the field and others. They conclude that algorithms that work well are not plausible, and algorithms that are plausible don't work well. Their references look like a good starting point for further reading, and it looks like the whole question is heating up again a little bit.

I think there is some confusion about what is meant by convolutional. ConvNets, in ANN research, use weight sharing (aka weight tying). There is a tutorial at https://www.quora.com/What-exactly-is-meant-by-shared-weights-in-convolutional-neural-network.

Weight sharing, not convolution per se, is the point here. It is what gives ConvNets their translation invariance (strictly speaking, the convolutional layers are translation-*equivariant*), which is one of their most important selling points, and it drastically reduces the number of parameters; without it they wouldn't be able to learn anything in reasonable time. So folks in ANN research tend to assume that "convolutional" implies weight sharing.
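A tiny NumPy sketch (my own illustration, not from any paper cited here) of why shared weights buy translation equivariance: because the *same* kernel is applied at every position, shifting the input just shifts the output.

```python
import numpy as np

def conv1d_valid(x, w):
    """1-D 'valid' convolution (cross-correlation) with one shared kernel w."""
    n, k = len(x), len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(n - k + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(10)   # hypothetical input signal
w = rng.standard_normal(3)    # a single kernel, reused at every position

y = conv1d_valid(x, w)

# Shift the input right by one position (zero-padded on the left):
x_shifted = np.concatenate(([0.0], x[:-1]))
y_shifted = conv1d_valid(x_shifted, w)

# The output shifts along with the input (up to the boundary):
print(np.allclose(y[:-1], y_shifted[1:]))  # True
```

With untied ("locally connected") weights, each position would apply a different kernel, and this property would be lost; a pattern learned at one location would have to be relearned at every other.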

In other disciplines, I think there is no such notion as weight sharing. Convolutional structures are familiar in the brain, as @Carl says, but there seems to be nothing known in the brain that is like weight sharing in form or function.

So to answer the OP's original question: convolution is highly plausible, but weight sharing is not. Therefore there is no biologically plausible model for ConvNets, in vision or any other domain, nor for other kinds of ANN that also use weight sharing. (One could say the same thing about all ANNs that use backprop, which includes most supervised learning, whether convolutional or not.)
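A back-of-the-envelope sketch (illustrative numbers of my own choosing) of why the untied, biologically closer alternative is so much harder to learn: without weight sharing, every output position carries its own copy of the receptive-field weights.

```python
# Single layer on a hypothetical 32x32 input, 3x3 receptive fields,
# one output map.
H = W = 32
k = 3
positions = (H - k + 1) * (W - k + 1)   # 30 * 30 = 900 output positions

shared = k * k               # ConvNet: one kernel reused everywhere
untied = positions * k * k   # locally connected: separate weights per position

print(shared, untied)  # 9 vs 8100
```

The gap grows with input size and channel count, which is why untied layers, while closer to what a brain could implement, need far more data and time to learn the same translation-tolerant features.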

Caveat: I only glanced at the paper @Carl referenced. Too much chemistry for me, so I just assumed that it has nothing about convolution with weight sharing.

JWG
0

Related to the paper linked by @JWG, here is a lecture by Hinton on the same topic. Also be sure to take a look at his recently explored ideas on capsule networks:

https://www.youtube.com/watch?v=rTawFwUvnLE

And in more general terms, Hinton is one of my first bets whenever trying to bridge the gap between the brain and modeling via ANNs.