Assuming you have a function $f$, the Hessian matrix $H$ of $f$ at a particular point $\theta$ is essentially the matrix of second derivatives of $f$ at $\theta$. More importantly, if the function you minimize happens to be a negative log-likelihood, the Hessian matrix $H$ evaluated at the maximum likelihood estimate is (asymptotically) equal to the inverse of the covariance matrix $K$ of the parameter estimates. This is why the inverse of the covariance matrix is often called the precision matrix. Now, if you want the standard errors of each variable in $\theta$, you can get them directly by taking the square roots of the diagonal elements of $K$ (where the diagonal elements $K_{i,i}$ are practically the variances of each $\theta_i$).
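In other words, once you have the Hessian $H$ of the negative log-likelihood at the optimum $\hat{\theta}$, the recipe is simply $K = H^{-1}$ and $\mathrm{SE}(\hat{\theta}_i) = \sqrt{K_{i,i}}$.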
OK, so let's make this a bit more concrete using code. I base this example on linear regression for illustration purposes.
First we generate some data:
% Generate some data
N = 100;
rng(1234);
X = rand(N,2);
beta = [10,20];
epsilon = randn(N,1) * 2;
y = epsilon + X*beta';
Then we use MATLAB's in-built function to fit our model, so we have some idea whether we are right or just completely off (always a good thing). We also quickly check our coefficient covariance:
goodLM = fitlm(X,y,'Intercept',false);
goodLM.CoefficientCovariance
% ans =
% 0.3164 -0.2338
% -0.2338 0.2910
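For reference, that covariance is nothing exotic: for this no-intercept model it is just the closed-form $\hat{\sigma}^2 (X^\top X)^{-1}$. Here is a quick sketch using fitlm's residual variance estimate (its MSE property), in case you want to see it spelled out:
% Closed-form OLS covariance: sigma^2 * inv(X'X)
sigma2 = goodLM.MSE;                 % residual variance estimate from fitlm
sigma2 * inv(X'*X)                   % should reproduce CoefficientCovariance
sqrt(diag(sigma2 * inv(X'*X)))       % ...and these are the standard errors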
Then we define the cost function we optimize against. Here, as we have a simple linear model, we optimize against the normal log-likelihood; luckily for us MATLAB has an in-built function that returns its negative (normlike), which is exactly what a minimizer wants. At this point we stop and remember that our linear model means that $y \sim N(X\beta, \sigma^2)$ and that, equivalently, we assume that $\epsilon \sim N(0,\sigma^2)$. So we can go ahead and optimize against the likelihood of our residuals coming from a normal with mean 0 and standard deviation $\sigma$.
myNlogLik = @(beta) normlike([0, std(y-X*beta')], y-X*beta');
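If you prefer to see what normlike evaluates spelled out, here is a hand-rolled handle that plugs in the same residual-based $\sigma$; this is just the standard normal negative log-likelihood written out, a sketch for intuition rather than something the rest of the example depends on:
% Equivalent hand-written negative log-likelihood (same plug-in sigma as above)
myNlogLik2 = @(beta) sum( 0.5*log(2*pi) + log(std(y-X*beta')) + ...
    (y-X*beta').^2 ./ (2*std(y-X*beta')^2) );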
We move ahead and optimize using a solver that actually gives us information about the Hessian (e.g. fminunc). I set some starting values and start my optimizer:
[x,fval,exitflag,output,grad,hessian] = fminunc(myNlogLik, [19,25]);
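Before touching the Hessian, a quick sanity check on the point estimates themselves; they should land close to the true [10, 20] and to fitlm's estimates (left column ours, right column MATLAB's):
% Point estimates side by side (ours vs fitlm's)
[x' goodLM.Coefficients.Estimate]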
So what is the inverse of our Hessian?
inv(hessian)
% ans =
% 0.3103 -0.2289
% -0.2289 0.2854
Something quite close to our original matrix! :) If you compute the square roots of the diagonal of this matrix you will see that they are very close to the standard errors MATLAB gave originally (or, in general, the standard errors one would get using closed-form matrix equations).
[ sqrt(diag(inv(hessian))) goodLM.Coefficients.SE]
% ans =
% 0.5570 0.5625 % first column "our fits", second "MATLAB's"
% 0.5342 0.5395
Clearly you are not going to get exactly the same results, for a couple of reasons. For starters, the two procedures might not have converged to the same minimum. Second, and more importantly, the Hessian is approximated numerically in some way; a stencil approach, some form of spline fitting, Richardson extrapolation? Pick your numerical poison...
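For the curious, here is roughly what the stencil flavour looks like: a crude central-difference Hessian of our negative log-likelihood at the optimum. The step size h and the scheme are my own choices for illustration, not what fminunc actually does internally:
% Crude central-difference approximation of the Hessian at the optimum x
h = 1e-3;
Happrox = zeros(2);
for i = 1:2
    for j = 1:2
        ei = zeros(1,2); ei(i) = h;
        ej = zeros(1,2); ej(j) = h;
        Happrox(i,j) = ( myNlogLik(x+ei+ej) - myNlogLik(x+ei-ej) ...
                       - myNlogLik(x-ei+ej) + myNlogLik(x-ei-ej) ) / (4*h^2);
    end
end
sqrt(diag(inv(Happrox)))   % again close to the standard errors above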