
Simulated annealing targets a sequence of distributions $$\pi_T(x)\propto\exp\{T\,H(x)\}$$ in order to find the maximum of the function $H$ and its argument $$\arg\max_{x\in \mathfrak X} H(x),$$ provided the temperature $T$ increases slowly enough. When $\mathfrak X$ is high-dimensional, it is common to run a Gibbs sampler so that simulation from $\exp\{T\,H(x)\}$ is broken into lower-dimensional conditional targets.
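As a concrete illustration of the Gibbs decomposition, here is a minimal sketch on a hypothetical quadratic toy objective (my own choice, not part of the question), for which the full conditionals of $\exp\{T\,H(x)\}$ are exact Gaussians:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy objective, maximised at (x1, x2) = (0, 2):
# H(x) = -((x1 - 1)**2 + (x2 - 2)**2 + x1 * x2)
x1, x2 = 5.0, -5.0
for t in range(1, 2001):
    T = 0.01 * t                      # slowly increasing temperature
    s = np.sqrt(1.0 / (2.0 * T))     # conditional std dev shrinks as T grows
    # full conditionals of exp{T H(x)} are Gaussian for this quadratic H:
    x1 = rng.normal(1.0 - x2 / 2.0, s)   # X1 | x2 ~ N(1 - x2/2, 1/(2T))
    x2 = rng.normal(2.0 - x1 / 2.0, s)   # X2 | x1 ~ N(2 - x1/2, 1/(2T))
```

Each Gibbs scan only requires one-dimensional simulations, and as $T$ grows the conditionals concentrate so the chain settles near the maximiser $(0,2)$.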

Consider now solving the saddle-point problem $$\arg\max_{x_1\in \mathfrak X_1}\,\min_{x_2\in \mathfrak X_2} H(x_1,x_2).$$ One can attempt a Gibbs sampling strategy with an increasing sequence of temperatures $T$ by simulating successively $$X_2^{(t)}|X_1^{(t)}=x_1^{(t)}\propto \exp\{-T\,H(x_1^{(t)},x_2)\}$$ and $$X_1^{(t+1)}|X_2^{(t)}=x_2^{(t)}\propto \exp\{+T\,H(x_1,x_2^{(t)})\},$$ but I wonder about the justification (and convergence) of this strategy, given that the two Gibbs conditionals are not compatible with any joint distribution but are instead antagonistic.
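To make the proposal concrete, here is a minimal sketch on a hypothetical concave-convex toy objective (my own choice, not from the question) whose max-min saddle point is $(0,0)$; for this quadratic $H$, both antagonistic conditionals are exact Gaussians:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy objective, concave in x1, convex in x2,
# with saddle point at (0, 0):
# H(x1, x2) = -x1**2 + x2**2 + x1 * x2
x1, x2 = 4.0, -4.0
for t in range(1, 2001):
    T = 0.01 * t                      # slowly increasing temperature
    s = np.sqrt(1.0 / (2.0 * T))
    # X2 | x1 ∝ exp{-T H(x1, .)} : N(-x1/2, 1/(2T))  (descent in x2)
    x2 = rng.normal(-x1 / 2.0, s)
    # X1 | x2 ∝ exp{+T H(., x2)} : N(+x2/2, 1/(2T))  (ascent in x1)
    x1 = rng.normal(x2 / 2.0, s)
```

On this toy example the conditional means contract toward the saddle point, so the iterates do settle near $(0,0)$ empirically; this does not, of course, resolve the general question of justifying the scheme when the conditionals are incompatible.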

Xi'an
  • https://scicomp.stackexchange.com/questions/3372/simulated-annealing-proof-of-convergence – Mark L. Stone Apr 02 '19 at 01:32
  • @MarkL.Stone: thank you for the link. Downhill simulated annealing is essentially the same as uphill simulated annealing, while saddlepoint simulated annealing remains a mystery (to me at least). – Xi'an Apr 02 '19 at 06:18
  • Not a solution, but https://arxiv.org/abs/2002.06277 is some recent work which studies certain algorithms for solving saddle-point problems, consisting of greedy ascent/descent steps with noise injected. Their Theorem 1 shows that such a procedure can converge to an invariant measure. This might be useful. – πr8 May 23 '20 at 16:47

0 Answers