If by your constraint you mean that the shape is completely fixed, I cannot think of any way of optimizing the output. To change the output you must change either the weights, in which case your model will no longer have this 1% error, or the input (see the next paragraph).
One way to solve problems similar to yours is to optimize some objective over the inputs instead of over the weights of the model. Let $x$ be the lift and drag, $w$ the weights of your model and $y$ the shape. Right now you have something like $x=f_w(y)$, where $w$ was found as $w=\arg\min_w L(f_w(y),x)$ for some loss $L$. In your case, freely changing the input would of course violate your constraint, which is what the penalty below addresses.
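For concreteness, here is a minimal PyTorch sketch of that setup; the architecture, the 10-dimensional shape parameterization, and the random data are all placeholders, with MSE standing in for the loss $L$:

```python
import torch

# Hypothetical forward model f_w: shape parameters -> (lift, drag).
f = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 2))

# Placeholder data: y are shapes, x are the corresponding (lift, drag) pairs.
y = torch.randn(256, 10)
x = torch.randn(256, 2)

# Standard training: w = argmin_w L(f_w(y), x) with L = MSE.
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(f(y), x)
    loss.backward()
    opt.step()
```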
Say you instead have an inverse model giving $y=g_w(x)$, where again $w=\arg\min_w L'(g_w(x),y)$. You can then maximize some function of lift and drag (say their product $x[0]\cdot x[1]$) regularized by how much the constraint is violated. An example is:
$x=\arg\max_x \, x[0] \cdot x[1] - \lambda \, \lVert g_w(x) - y \rVert^2$
where, as $\lambda \to \infty$, the constraint (here measured by the squared error $\lVert g_w(x)-y\rVert^2$) is satisfied exactly. Note that the weights don't change, and $\lambda$ lets you control how much the shape is allowed to change.
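A minimal sketch of this input-space optimization, again with placeholder dimensions and a stand-in network for the inverse model $g_w$ (any trained inverse model with frozen weights would do):

```python
import torch

# Stand-in for the trained inverse model g_w: (lift, drag) -> shape.
# Architecture and 10-dim shape parameterization are hypothetical.
g = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 10))
for p in g.parameters():
    p.requires_grad_(False)   # weights stay fixed; only the input x moves

y = torch.randn(10)           # the original shape you want to stay close to
lam = 100.0                   # penalty weight lambda
x = torch.tensor([1.0, 1.0], requires_grad=True)  # initial (lift, drag)

opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(1000):
    opt.zero_grad()
    # maximize x[0]*x[1] - lambda * ||g_w(x) - y||^2
    # (equivalently, minimize its negative)
    loss = -x[0] * x[1] + lam * ((g(x) - y) ** 2).sum()
    loss.backward()
    opt.step()

with torch.no_grad():
    print(x, ((g(x) - y) ** 2).sum())  # optimized x and constraint violation
```

In practice you would sweep $\lambda$ upward, starting small and increasing it until the reconstructed shape $g_w(x)$ is acceptably close to $y$.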