I'm a student studying ML.
After searching for the differences between the loss function, the cost function, and the objective function, I have some questions.
Objective function, cost function, loss function: are they the same thing?
According to the answers there, the objective function is a more generic term than the cost function, and an objective function can take a different form, such as the maximum likelihood function, which is neither a loss function nor a cost function.
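To make concrete what I mean by an objective that isn't a cost, here is a toy example of my own (not from the answers I read): a maximum-likelihood objective for a coin's bias, which we maximize, alongside the negative log-likelihood, which looks like an ordinary cost we minimize.

```python
import math

# Toy maximum-likelihood objective: estimate the bias p of a coin
# from observed flips. The objective L(p) is MAXIMIZED, unlike a
# cost function, which is minimized.
flips = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = heads (7 heads, 3 tails)

def likelihood(p):
    out = 1.0
    for x in flips:
        out *= p if x == 1 else (1 - p)
    return out

# Maximizing L(p) is equivalent to minimizing the negative
# log-likelihood, which then reads like a cost function.
def nll(p):
    return -sum(math.log(p if x == 1 else 1 - p) for x in flips)

# Grid search over p: both views pick the same estimate (the
# sample mean of the flips, 0.7).
grid = [i / 100 for i in range(1, 100)]
best_ml = max(grid, key=likelihood)
best_nll = min(grid, key=nll)
print(best_ml, best_nll)  # both 0.7
```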
But as I understand ML (especially gradient-based learning), we train our models by optimizing a cost function until it converges to a maximum/minimum, and to do so we compute the gradient with respect to each weight parameter and follow it toward the optimum.
In short, when we train our models, we use the cost function to compute gradients.
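What I mean by this is a loop like the following minimal sketch (my own illustration, assuming an MSE cost for a linear model `y = w*x + b`): the cost function is what supplies the gradients for each parameter.

```python
# Minimal gradient-descent sketch: fit w, b for y = w*x + b by
# computing the gradient of the MSE cost with respect to each
# weight parameter and stepping downhill.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
n = len(xs)
for _ in range(2000):
    # Cost: MSE(w, b) = mean((w*x + b - y)^2)
    # One gradient per parameter, derived from the cost:
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges near w=2, b=1
```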
If the cost function and the objective function are the same, that isn't a problem. But if they differ, don't we need to prove that our objective function can be optimized by optimizing our cost function? In other words, do we need to explicitly prove that our objective function reduces to our cost function?
If not, why don't we need to prove it?