
I'm working on a problem where I need to concatenate features from a ResNet model with a few extracted features for an end-to-end deep learning model.

Model summary:

 IMAGE -> ResNet-50 -> 2048 features --\
                                         -- 2053 features (concatenated) -> Dense -> Softmax
Extracted features (5) -> 5 features --/
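
In code, the current model looks roughly like this (a minimal TensorFlow/Keras sketch; the image input shape, ImageNet weights, and number of classes below are placeholders rather than my real values):

    # Minimal sketch of the architecture in the diagram above; the image input
    # shape and NUM_CLASSES are placeholders, not the real values.
    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import ResNet50

    NUM_CLASSES = 10          # placeholder
    NUM_EXTRA_FEATURES = 5    # the 5 extracted features

    # Image branch: ResNet-50 backbone with global average pooling -> 2048 features
    image_in = layers.Input(shape=(224, 224, 3), name="image")
    backbone = ResNet50(include_top=False, weights="imagenet", pooling="avg")
    image_feat = backbone(image_in)                          # (batch, 2048)

    # Hand-crafted feature branch: the 5 extracted features
    extra_in = layers.Input(shape=(NUM_EXTRA_FEATURES,), name="extra_features")

    # Concatenate 2048 + 5 = 2053 features, then Dense -> Softmax
    merged = layers.Concatenate()([image_feat, extra_in])    # (batch, 2053)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

    model = Model(inputs=[image_in, extra_in], outputs=out)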

The feature vector from the images is extremely high-dimensional, while the other vector is very low-dimensional. The performance is not good yet. What operations or architectural changes could I apply so that the discrepancy between these feature vectors is resolved?

Pointers to papers that address this problem would also be much appreciated.

  • Have you tried adding more dense layers? – Sycorax Sep 14 '20 at 18:56
  • Yes, you mean after the ResNet-50, right? I tried that (Dense layers with 64, 128 units, etc.), but it increases the number of parameters too much, overfits, and performs poorly on the validation data. Is there any other approach that won't add too many parameters to the model? – Zabir Al Nazi Sep 14 '20 at 19:06
  • You could add more dense layers and use regularization. See: https://stats.stackexchange.com/questions/365778/what-should-i-do-when-my-neural-network-doesnt-generalize-well – Sycorax Sep 14 '20 at 19:10
  • Make sure the ResNet outputs and your features have a similar scale; maybe the ResNet outputs are really large and your features are essentially lost. If you haven't tried it yet, you might want to normalize all your input features. – goker Sep 24 '20 at 22:20
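
Following the suggestions in the comments, here is a sketch of what normalizing both branches and adding a small regularized Dense layer might look like; the layer sizes, dropout rate, and L2 strength are guesses on my part, not values from the thread:

    # Batch-normalize both branches so their scales match, then use a small
    # hidden layer with weight decay and dropout to limit overfitting.
    from tensorflow.keras import layers, regularizers, Model
    from tensorflow.keras.applications import ResNet50

    NUM_CLASSES = 10          # placeholder
    NUM_EXTRA_FEATURES = 5

    image_in = layers.Input(shape=(224, 224, 3), name="image")
    backbone = ResNet50(include_top=False, weights="imagenet", pooling="avg")
    # BatchNormalization keeps each branch roughly zero-mean / unit-variance,
    # so the 5 extra features are not drowned out by the 2048 ResNet features.
    image_feat = layers.BatchNormalization()(backbone(image_in))   # (batch, 2048)

    extra_in = layers.Input(shape=(NUM_EXTRA_FEATURES,), name="extra_features")
    extra_feat = layers.BatchNormalization()(extra_in)             # (batch, 5)

    merged = layers.Concatenate()([image_feat, extra_feat])        # (batch, 2053)

    # Small hidden layer with L2 weight decay and dropout to limit overfitting
    hidden = layers.Dense(64, activation="relu",
                          kernel_regularizer=regularizers.l2(1e-4))(merged)
    hidden = layers.Dropout(0.5)(hidden)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(hidden)

    model = Model(inputs=[image_in, extra_in], outputs=out)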
