
I am trying to develop a surrogate model for a thermoacoustic engine. The engine is modelled in a program called DeltaEC. I have developed automation code so that I can change parameters, run the model and read the output. Each run takes roughly 1 second with my automation code. I want to automate the investigation so that I can gather as much data as possible, quickly.

I am now in the process of designing an investigation/sampling plan. I want to investigate the effects of 5 parameters, each within a small range.

My greatest problem so far is that the simulation software (DeltaEC) cannot cope with a large change in any of the parameters, which leads to non-convergence. DeltaEC solves the problem with a guess-target shooting method: after each iteration during a simulation run, the guesses are updated and the results are compared against the targets. If a target parameter is changed significantly from one run to the next, the guesses carried over from the previous run are not accurate enough for the new run to converge. In this case DeltaEC keeps iterating, often moving the guesses entirely in the wrong direction.
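To make the problem concrete, here is a rough Python sketch of the kind of slow ramping between two treatments that keeps runs converging. The `run_deltaec(params)` wrapper is hypothetical (standing in for my automation code); it is assumed to run the model with the previous converged guesses still in place and return True on convergence.

```python
import numpy as np

def ramp_parameter(run_deltaec, params, name, target, max_step, tol=1e-9):
    """Walk one parameter toward `target` in increments of at most `max_step`,
    so each DeltaEC run starts from the guesses left by the previous run.

    `run_deltaec(params)` is a hypothetical wrapper that runs the model
    in place and returns True if the run converged.
    """
    while abs(params[name] - target) > tol:
        step = float(np.clip(target - params[name], -max_step, max_step))
        params[name] += step
        if not run_deltaec(params):
            # Non-convergence: return to the last converged value and halve the step.
            params[name] -= step
            max_step /= 2
            if max_step < tol:
                raise RuntimeError(f"Could not ramp {name} to {target}")
    return params
```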

Furthermore, there will be some cases where DeltaEC will never converge as a result of the combination of input parameters.

Therefore, I need to find a 'path' through the input space such that moving along the path changes the parameters only very slowly. Is there a sampling approach that allows an experiment to be designed with only slowly changing parameters (aside from one-factor-at-a-time)?

Alternatively, I could develop a route-finding algorithm to advance my model slowly between treatments.

I would be very grateful for any comments or suggestions.

Francis
    Some posts which might help: https://stats.stackexchange.com/questions/490698/change-parameters-block-wise-in-experiment-design, https://stats.stackexchange.com/questions/338740/best-doe-method-to-fit-gaussian-process-regressor, https://stats.stackexchange.com/questions/495315/how-to-compare-and-evaluate-different-business-concepts Please tell us what kind of engine you are simulating, how many parameters to change, if you have some kind of simplified math model to aid, ... – kjetil b halvorsen Feb 09 '21 at 13:02
  • Thank you! A paper (Engineering Design via Surrogate Modelling) referenced in one of the answers has significantly broadened my understanding. I have updated my question. – Francis Feb 09 '21 at 15:46
  • Say more about how DeltaEC "can't cope" with a large change of parameters. I can't help but think about some biopharmaceutical studies where we find the highest safe dose by slowly titrating "up" a dose level until a dose limiting toxicity occurs. – AdamO Feb 09 '21 at 16:56
  • Thanks. I have updated the question. I will look into biopharmaceutical studies – Francis Feb 09 '21 at 17:33

1 Answer


In statistical design, the process of slowly moving toward an optimum is called response surface methodology (here and here). At each design stage you create a mini-design with only small movements in the parameters, use that design to estimate the gradient, and then move in that direction to the next design region. It is the designed-experiment version of simple optimization algorithms.
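As a rough, non-DeltaEC-specific illustration, one steepest-ascent stage might look like the sketch below. The `evaluate(x)` function is hypothetical; it stands for running the simulation at a point and returning the response of interest.

```python
import itertools
import numpy as np

def rsm_step(evaluate, centre, half_width, step_size):
    """One stage of a steepest-ascent response-surface search.

    `evaluate(x)` is a hypothetical wrapper that runs the simulation at the
    point `x` and returns the scalar response. `centre` is the current
    operating point, `half_width` the small perturbation applied to each
    parameter, and `step_size` scales the move along the gradient.
    """
    k = len(centre)
    # Two-level full factorial around the centre (2^5 = 32 runs for 5 factors);
    # a fractional factorial would cut this down if runs were expensive.
    levels = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    X = centre + levels * half_width
    y = np.array([evaluate(x) for x in X])
    # First-order model y ~ b0 + b.x in coded units; the slopes estimate the gradient.
    A = np.hstack([np.ones((len(levels), 1)), levels])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    gradient = coef[1:]
    # Move the centre a short distance along the estimated gradient direction.
    direction = gradient / np.linalg.norm(gradient)
    return centre + step_size * direction * half_width
```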

I agree with your misgivings about Latin hypercube designs given the problem constraints. A Latin hypercube would help you explore the space more fully which would help in the situation where you might get caught in a local minimum / maximum. Theoretically, you could lay down the Latin hypercube design, and then order your runs in a way that gets you through all of them with the smallest changes. You might need to run intermediate design points to get from one LHC design point to the next.
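A sketch of that ordering idea, assuming scipy's `qmc` module for the Latin hypercube, a simple greedy nearest-neighbour ordering, and linear interpolation to generate the intermediate runs:

```python
import numpy as np
from scipy.stats import qmc

def ordered_lhs(n_points, n_dims, max_step, seed=0):
    """Latin hypercube sample ordered greedily by nearest neighbour, with
    linearly interpolated intermediate points inserted wherever consecutive
    runs would still differ by more than `max_step` (in the unit hypercube).

    The intermediate points are only there to keep the solver converging;
    the surrogate would normally be fitted to the LHS points alone.
    """
    sample = qmc.LatinHypercube(d=n_dims, seed=seed).random(n_points)

    # Greedy nearest-neighbour ordering, starting from the first point.
    remaining = list(range(1, n_points))
    order = [0]
    while remaining:
        last = sample[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(sample[i] - last))
        order.append(nxt)
        remaining.remove(nxt)

    # Insert intermediate points along legs of the path that are too long.
    path = [sample[order[0]]]
    for i in order[1:]:
        prev, nxt = path[-1], sample[i]
        n_steps = max(1, int(np.ceil(np.max(np.abs(nxt - prev)) / max_step)))
        for s in range(1, n_steps + 1):
            path.append(prev + (nxt - prev) * s / n_steps)
    return sample[order], np.array(path)
```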

R Carnell
  • Thanks for your response. So it seems optimisation is an achievable approach but would not give me information about interaction/main factor effects. I will explore your suggestion of ordering Latin hypercube designs and planning intermediate runs between differing treatments. – Francis Feb 10 '21 at 14:48
  • Unfortunately, I don't have the reputation to publicly credit your answer but thank you! :) – Francis Feb 10 '21 at 14:49
  • You are welcome! Even if you can't upvote, you should be able to accept the answer since it is your question. – R Carnell Feb 10 '21 at 15:06
  • Don't bother with interactions until you seem to have found an optimum; then you can do a larger local experiment permitting estimation of interactions (and to see if you really have a local maximum, and if not, a better gradient to try along) ... also look into *evolutionary operation*, which is about response surface modelling in a sequential way, doing only small changes from run to run. It was linked in some of my earlier links! – kjetil b halvorsen Feb 10 '21 at 15:23
  • Thanks kjetil! Why do you optimize first, when performing the larger search will allow optimisation and information gathering in parallel? – Francis Feb 10 '21 at 15:59