
I am solving a large set of nonlinear optimisation problems using different algorithms. I have compared their performance using performance profiles (see Dolan and Moré, 2002). These profiles are figures that indicate the global performance of an algorithm: they show what fraction of problems each algorithm solves within a factor $\tau$ of the best algorithm. An example can be seen below.

[Figure: performance profile of three algorithms.]
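For concreteness, here is how such a profile is computed from per-problem timings: the ratio $r_{p,s} = t_{p,s} / \min_s t_{p,s}$ is formed for each problem $p$ and solver $s$, and $\rho_s(\tau)$ is the fraction of problems with $r_{p,s} \le \tau$. A minimal NumPy sketch (all numbers are made up for illustration):

```python
import numpy as np

def performance_profile(times, taus):
    """Dolan-More performance profile.

    times: (n_problems, n_solvers) array of e.g. run times
    taus:  sequence of performance-ratio thresholds (tau >= 1)
    Returns an (n_taus, n_solvers) array rho, where rho[i, s] is the
    fraction of problems solver s solves within factor taus[i] of the best.
    """
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)   # best solver per problem
    ratios = times / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= t).mean(axis=0) for t in taus])

# Toy data: 4 problems, 2 solvers (hypothetical timings)
times = np.array([[1.0, 1.2],
                  [2.0, 1.0],
                  [3.0, 4.5],
                  [0.5, 0.6]])
rho = performance_profile(times, taus=[1.0, 1.5, 2.0])
```

Plotting `rho` against `taus` for each solver gives the curves in the figure.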

The difference between these two algorithms is clearly not very large: both solve 80% of all problems within a factor of 1.5 of the best.

My question is: can I quantify the difference between these two algorithms using statistics?

I have looked into Student's t-test and the Wilcoxon test, but if I understand correctly these assume some kind of stochastic nature in the sampling method. The algorithms I use are deterministic: when solving the same problem, they always produce the same result.
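To show what I mean: if one were willing to treat the *problem set* itself as the random sample (rather than the algorithms), a paired Wilcoxon signed-rank test on per-problem metrics could in principle be applied. A SciPy sketch with hypothetical timings (log-scale differences are a common choice for run times):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-problem solve times for two deterministic solvers.
# Any randomness has to come from viewing the problems as a sample,
# not from the algorithms themselves.
times_a = np.array([1.0, 2.0, 3.0, 0.5, 1.8, 2.2])
times_b = np.array([1.2, 1.0, 4.5, 0.6, 1.7, 2.0])

# Paired test on per-problem log-time differences.
stat, p = wilcoxon(np.log(times_a) - np.log(times_b))
```

But it is unclear to me whether this interpretation is justified when the problem set is fixed rather than sampled.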

Using summary statistics such as the average is possible, but that would not yield more information than the performance profile above.

UPDATE

One of the authors of the performance-profile paper writes me that he wouldn't know "how to quantify the difference between the solvers in any significant way" when the profiles cross or when one profile is not on top of the other. It would thus be quite spectacular if someone could come up with an idea.

H. Vabri
  • What inputs are these performance measures using? Clearly the numerical inputs to the same problem can vary a lot, and they can serve as your random variable for statistical purposes. – Zahava Kor May 03 '17 at 17:31
  • A single problem is defined by multiple variables. But once these variables are changed, the difficulty of the problem itself also changes, so I don't think you can then compare the two. – H. Vabri May 04 '17 at 08:37
