I've read through Hazan's paper on online convex optimization. I don't quite understand why the regularization term must be strongly convex, rather than satisfy a more relaxed condition such as strict convexity.
https://ie.technion.ac.il/~ehazan/papers/shalom.pdf
For instance, suppose we want to compute $$x^* = \operatorname*{argmin}_{x \in X} \left\{ f^T x + R(x) \right\}.$$
Let $f \in \mathbb{R}^n$ and $x \in \mathbb{R}^n$, so $f^T x$ is a linear function, and let $R(x)$ be a strongly convex regularizer, such as $\|x\|^2$. Then the minimizer $x^*$ is unique, because the objective is strongly convex.
But what if $R(x)$ is merely strictly convex? That is a much weaker condition, yet the objective would still be strictly convex, so the minimizer would again be unique. What is the issue with using strictly convex regularizers?
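To make the uniqueness claim concrete, here is a minimal numerical sketch (assuming NumPy and SciPy; the set $X = [-1,1]^2$ and the vector $f$ are my own illustrative choices, not from the paper). It minimizes $f^T x + R(x)$ over the box with both a strongly convex regularizer, $\|x\|^2$, and a strictly-but-not-strongly convex one, $\sum_i x_i^4$, starting from several initial points. In both cases every run should converge to the same point, consistent with the minimizer being unique either way:

```python
import numpy as np
from scipy.optimize import minimize

# A fixed linear term f^T x over the box X = [-1, 1]^2 (illustrative choices).
f = np.array([1.0, -0.5])
bounds = [(-1.0, 1.0)] * 2

def R_strong(x):
    # ||x||^2: strongly convex everywhere
    return x @ x

def R_strict(x):
    # sum(x_i^4): strictly convex, but not strongly convex
    # (its second derivative vanishes at the origin)
    return np.sum(x ** 4)

for name, R in [("strongly convex", R_strong), ("strictly convex", R_strict)]:
    obj = lambda x, R=R: f @ x + R(x)
    # Run from several starting points; since the objective is strictly
    # convex in both cases, every run should reach the same minimizer.
    starts = [np.zeros(2), np.array([0.9, -0.9]), np.array([-0.5, 0.5])]
    solutions = [minimize(obj, x0, bounds=bounds).x for x0 in starts]
    print(name, np.round(solutions, 4))
```

So uniqueness by itself does not appear to distinguish the two conditions, which is exactly why I'm asking what strong convexity buys here.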