
There is a method for imposing physical constraints on neural networks in which a physics-based loss term is added to the loss function. This term is usually a function of the network's output.

As a simplistic example, assume the network outputs a number that should not fall below $3$. We add a (penalty) regularization term of the form $\max(3 - \text{output}, 0)$, which penalizes the network when the output is below $3$ and vanishes when the output is at or above $3$.
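In code, this amounts to something like the following minimal sketch (assuming PyTorch; `model`, `x`, `y`, the MSE task loss, and the fixed weight `lam` are placeholders of my own, not from any particular paper):

```python
import torch
import torch.nn.functional as F

def loss_with_penalty(model, x, y, lam=1.0):
    """Task loss plus a penalty that is active only when the output drops below 3."""
    out = model(x)
    task_loss = F.mse_loss(out, y)                     # placeholder task loss
    penalty = torch.clamp(3.0 - out, min=0.0).mean()   # max(3 - output, 0)
    return task_loss + lam * penalty                   # lam is fixed by hand here
```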

Since this is essentially a constrained optimization problem (minimizing the NN loss function subject to the physical constraint above) that has been turned into a regularized unconstrained one (in the form of a Lagrangian function), we need to find a Lagrange multiplier for the regularization term.
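Spelling this out (notation mine: $f_\theta(x)$ is the network output and $\mathcal{L}(\theta)$ the original training loss), the constrained problem and its penalized surrogate read

$$\min_\theta \; \mathcal{L}(\theta)\;\; \text{s.t.}\;\; 3 - f_\theta(x) \le 0 \quad\longrightarrow\quad \min_\theta \; \mathcal{L}(\theta) + \lambda\,\max\bigl(3 - f_\theta(x),\, 0\bigr), \qquad \lambda \ge 0,$$

where $\lambda$ plays the role of the Lagrange multiplier I am asking about.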

Question:

I was wondering: in the case of a regularized loss function for a neural network, is it possible to learn the Lagrange multiplier for the regularization term, based on some criterion on the output value, while training the network? I have looked at many research papers and it seems everyone is "tuning" the Lagrange multiplier rather than learning it, which does not guarantee that the KKT conditions are satisfied.
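To make the question concrete, the kind of scheme I have in mind is only a sketch (assuming PyTorch; the toy model, the synthetic data, the squared-error task loss, and the step sizes are all placeholders I made up): update $\lambda$ by projected gradient ascent on the constraint violation while the weights are trained.

```python
import torch
import torch.nn.functional as F

# Toy setup (placeholders): a tiny regression net and synthetic data.
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x, y = torch.randn(256, 4), torch.full((256, 1), 5.0)

lam = torch.tensor(0.0)        # Lagrange multiplier, kept non-negative
eta_dual = 1e-2                # dual step size (arbitrary choice)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    out = model(x)
    violation = torch.clamp(3.0 - out, min=0.0).mean()   # max(3 - output, 0)
    loss = F.mse_loss(out, y) + lam * violation           # penalized loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    # "Learning" the multiplier: gradient ascent on lam while the constraint
    # is still violated, projected back onto lam >= 0.
    with torch.no_grad():
        lam = torch.clamp(lam + eta_dual * violation, min=0.0)
```

The open question is whether an update of this kind can be made principled, i.e., whether it drives $\lambda$ toward a value at which the KKT conditions hold, rather than being just another tuning heuristic.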


Comments:

• I believe the discussion on this non-duplicate thread https://stats.stackexchange.com/questions/463751/loss-function-in-machine-learning-how-to-constrain/463762#463762 addresses this question, somewhat indirectly. In particular, I think the link to http://proceedings.mlr.press/v98/cotter19a.html contains one strategy to solve this problem. – Sycorax Jun 16 '20 at 15:03
• What about this one? https://www.jstor.org/stable/41582932?seq=1 I only read the abstract, so I'm not sure if it's what you are asking, but they do say they are learning the Lagrange multiplier and they do mention KKT in the abstract. – Joe Mar 04 '21 at 06:04
