
I have used a deep learning model built with Keras (8 layers, 40 neurons) to predict the lift and drag of an airfoil of a particular shape, using different airfoil shapes as input data and their lift and drag as output data. The result is very good, with about a 1% error.
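
For reference, the model is roughly of the following form. This is only a simplified sketch: the number of shape parameters, the activations and the optimizer shown here are placeholders, not exactly what I used.

```python
from tensorflow import keras

n_shape_params = 10  # placeholder: number of parameters describing the airfoil shape

# 8 hidden layers of 40 neurons each, mapping shape parameters -> [lift, drag]
model = keras.Sequential(
    [keras.Input(shape=(n_shape_params,))]
    + [keras.layers.Dense(40, activation="relu") for _ in range(8)]
    + [keras.layers.Dense(2)]  # outputs: lift and drag
)
model.compile(optimizer="adam", loss="mse")
```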

So if I need to get the maximum lift or drag subject to my given constraints, how can I use my deep learning model to do that? Are there any libraries or tools I can explore?

A Google search kept giving me results about optimizing hyperparameters, which is not what I'm looking for.

Thanks.

asked by quarkz
  • What do you mean by `from my given constraint`? What are you constraining on? – Bananin Apr 16 '20 at 15:00
  • Sorry, guess I wasn't too clear. My constraints are variables like max camber, max airfoil thickness. So maybe there are 2 to 4 of these constraints which I have to satisfy. Within these constraints, I hope to obtain e.g. maximum lift. thanks – quarkz Apr 16 '20 at 23:44
  • I think this is the same question, but it's hard to say. https://stats.stackexchange.com/questions/350869/maximization-of-output-based-on-input/351959#351959 – Sycorax May 28 '20 at 03:07

1 Answer


If by your constraint you mean a fixed shape, I cannot think of any way to optimize the output. To change the output you would have to change either the weights, in which case your model will no longer have that 1% error, or the input (see the next paragraph).

One way to solve problems similar to yours is to optimize some objective over the inputs instead of over the weights of the model. Let $x$ be the lift and drag, $w$ the weights of your model and $y$ the shape. Right now you have something like $x=f_w(y)$, where $w$ was found as $w=\arg\min_w L(f_w(y),x)$ for some loss $L$. In your case, changing the input directly would of course violate your constraint.
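
To make "optimizing over the inputs" concrete, here is a minimal sketch of gradient ascent on the input of a trained Keras model, ignoring the constraint for the moment. The names `f` and `y0` and the hyperparameters are placeholders, not something from the question:

```python
import tensorflow as tf

# f: trained Keras model mapping shape parameters y -> [lift, drag]
# y0: some initial shape, as a tensor of shape (1, n_shape_params)
y = tf.Variable(y0)
opt = tf.keras.optimizers.Adam(learning_rate=0.01)

for _ in range(1000):
    with tf.GradientTape() as tape:
        lift = f(y)[0, 0]   # first output assumed to be lift
        loss = -lift        # gradient ascent on lift = gradient descent on -lift
    grads = tape.gradient(loss, [y])
    opt.apply_gradients(zip(grads, [y]))
```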

Say you have an inverse model giving $y=g_w(x)$, where again $w=\arg\min_w L'(g_w(x),y)$. You can then maximize some function of lift and drag (say their product $x[0]\cdot x[1]$), regularized by how much the constraint is violated. An example is:

$x^\star=\arg\max_x \; x[0] \cdot x[1] - \lambda (g_w(x) - y)^2$

where as $\lambda \to \infty$ the constraint (here measured with the MSE) is exactly satisfied. Note that the weights don't change, and you can control how much the shape changes.
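
A minimal sketch of this penalty approach, assuming TensorFlow/Keras and placeholder names (`g`, `y_target`, `x0`, `lam` and the learning rate are illustrative, not from the question):

```python
import tensorflow as tf

# g: trained inverse Keras model mapping [lift, drag] -> shape parameters
# y_target: the shape parameters the solution should stay close to
# x0: initial guess for [lift, drag], a tensor of shape (1, 2)
lam = 10.0                  # penalty weight; larger values enforce the constraint more strictly
x = tf.Variable(x0)
opt = tf.keras.optimizers.Adam(learning_rate=0.01)

for _ in range(1000):
    with tf.GradientTape() as tape:
        objective = x[0, 0] * x[0, 1]                          # e.g. lift * drag
        penalty = tf.reduce_sum(tf.square(g(x) - y_target))    # constraint violation (MSE)
        loss = -(objective - lam * penalty)                    # maximize objective - lam * penalty
    grads = tape.gradient(loss, [x])
    opt.apply_gradients(zip(grads, [x]))
```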

answered by Oriol B
  • In addition to what @OriolB said, there is always the possibility of solving the problem with the constraints 'mathematically exactly': a neural network is an explicit expression whose gradient we can compute. Hence, we could also use Lagrange multipliers for this: https://en.wikipedia.org/wiki/Lagrange_multiplier – Fabian Werner Apr 17 '20 at 06:13
  • @Oriol B thanks for the response. I understand your points up to the inverse model; after that, I'm a bit lost. Can you explain? Suppose I want to get max lift and some of my inputs are fixed (e.g. angle of attack), while others can be changed within a certain constraint (e.g. max thickness between 0.3 and 0.5) - how do I solve this problem? Or is there a paper explaining your approach? Thanks! – quarkz Apr 18 '20 at 14:06
  • @quarkz a similar method is the one used [in this paper](https://arxiv.org/pdf/1508.06576.pdf). They want to find an image that satisfies certain properties expressed by a neural network. To do so, they start with noise and optimize over the input as I suggested (note the last optimization is over $x$, not $w$ - I don't know if this was your confusion). If you have the model lift -> features (one of them your angle of attack), you could find max lift as max_lift = argmax_{lift} lift - $\lambda$(features - target_features)^2. And of course features=g(lift). – Oriol B Apr 18 '20 at 17:49
  • I see that you don't have a fixed thickness but an interval, so you can choose an appropriate loss function for each feature. It could be piecewise, with value 0 in your allowed interval and some value outside (see the sketch after these comments). If the value outside is $\infty$, you will satisfy the constraints exactly, but if you use e.g. the MSE (relaxing the constraints) it's much easier to find a solution. If this is not OK for you, you should give @FabianWerner's approach a try. – Oriol B Apr 18 '20 at 17:57
  • @OriolB thanks. I'm reading the paper, still trying to understand the method it uses. ;-) – quarkz Apr 19 '20 at 23:16
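
For the interval constraint discussed in the comments, one possible shape for such a piecewise penalty is sketched below (assuming TensorFlow; `g`, `objective`, `lam` and `THICKNESS_INDEX` are hypothetical names for illustration only):

```python
import tensorflow as tf

def interval_penalty(value, low, high):
    """Zero inside [low, high], quadratic outside: a relaxed box constraint."""
    below = tf.nn.relu(low - value)   # how far below the lower bound
    above = tf.nn.relu(value - high)  # how far above the upper bound
    return tf.square(below) + tf.square(above)

# Example usage inside the optimization loop above:
# thickness = g(x)[0, THICKNESS_INDEX]   # hypothetical index of the thickness feature
# loss = -(objective - lam * interval_penalty(thickness, 0.3, 0.5))
```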