In general (of course there may be exceptions), convex functions are easier to optimize than nonconvex functions. But this question is a bit like asking whether it's better to multiply two 3-digit numbers together or two 6-digit numbers together when using an abacus. One is easier than the other, but 3 digits are more limiting than 6, and sometimes you may need all 6. People choose to solve harder optimization problems because they expect a payoff in some other respect -- for the Huber loss, that payoff is robustness to outliers.
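As a quick illustration (a minimal sketch -- the residual values and the threshold `delta=1.0` are arbitrary choices for this example), here's how the two losses penalize the same residuals. The squared loss lets a single outlier dominate the objective; the Huber loss grows only linearly past the threshold:

```python
import numpy as np

def squared_loss(r):
    """Squared error: penalty grows quadratically with the residual."""
    return 0.5 * r ** 2

def huber_loss(r, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond it,
    so one huge residual can't dominate the objective."""
    quad = 0.5 * r ** 2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)

residuals = np.array([0.1, 0.5, 1.0, 10.0])  # 10.0 plays the outlier
print(squared_loss(residuals))  # [ 0.005  0.125  0.5   50.   ]
print(huber_loss(residuals))    # [ 0.005  0.125  0.5    9.5  ]
```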
Also, maybe you're aware of this already, but even if the loss is convex with respect to your predictions, that doesn't necessarily mean it's convex with respect to the parameters you're optimizing. For example, a neural network with a hidden layer and MSE as an objective has a loss that is nonconvex as a function of the weights.
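To see this concretely, here's a minimal sketch (made-up data; the network and names are just for illustration). Swapping a one-hidden-layer net's two hidden units leaves the loss unchanged, but the midpoint between a parameter vector and its swapped copy can have strictly higher loss, which a convex function can't do:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

def forward(params, X, hidden=2):
    """One-hidden-layer net: prediction = v . tanh(W x)."""
    d = X.shape[1]
    W = params[:hidden * d].reshape(hidden, d)
    v = params[hidden * d:]
    return np.tanh(X @ W.T) @ v

def mse(params, X, y):
    return np.mean((forward(params, X) - y) ** 2)

w, a = rng.normal(size=3), 1.5
theta = np.concatenate([w, -w, [a, -a]])  # two "mirrored" hidden units
y = forward(theta, X)                     # targets this net fits exactly

# Swapping the two hidden units leaves the function (and loss) unchanged,
# and for this theta the swap is exactly -theta, so both endpoints fit y...
perm = -theta
mid = 0.5 * (theta + perm)                # ...but their midpoint is the
print(mse(theta, X, y), mse(perm, X, y))  # all-zero net: 0.0 0.0
print(mse(mid, X, y))                     # > 0, since it predicts 0 everywhere
# f(mid) > 0.5*f(theta) + 0.5*f(perm), so the loss is not convex in the weights.
```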
Momentum in optimizers can be beneficial in both convex and nonconvex cases -- on a convex but ill-conditioned problem it damps oscillation across steep directions and speeds progress along flat ones, and in nonconvex settings it can help carry the iterate through plateaus.
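For reference, a minimal sketch of the classical (Polyak / heavy-ball) momentum update on a toy ill-conditioned quadratic -- the learning rate, `beta`, and step count are arbitrary choices for this example:

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Classical momentum: the velocity accumulates past gradients,
    damping oscillation in steep directions and accelerating flat ones."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # momentum update
        x = x + v
    return x

# Ill-conditioned convex quadratic f(x) = 0.5 * x' A x, minimized at [0, 0]
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
print(heavy_ball(grad, [1.0, 1.0], lr=0.01, beta=0.9))  # close to [0, 0]
```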