I have checked several offline and online resources, and they contradict one another.
Some define the sample variance as
$$s_n^2 = \frac{\sum_{i=1}^{n}(x_i-\overline{x})^2}{n}$$
That is just the second central moment of the sample. Others claim this is a biased estimator and propose replacing the above expression with
$$s_{n-1}^2 = \frac{\sum_{i=1}^{n}(x_i-\overline{x})^2}{n-1}$$
The notation in the literature is also ambiguous because $s^2$ is often used to denote both concepts.
Authors who define the sample variance by the $s_n^2$ expression refer to the $s_{n-1}^2$ expression as an estimate of the population variance.
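To make the two conventions concrete, here is a minimal numpy sketch (the data are arbitrary; `ddof` is numpy's "delta degrees of freedom" parameter, which switches the denominator between $n$ and $n-1$):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # arbitrary illustrative data, n = 8

s_n_sq = np.var(x, ddof=0)    # denominator n:   second central moment of the sample -> 4.0
s_nm1_sq = np.var(x, ddof=1)  # denominator n-1: Bessel-corrected estimator          -> ~4.571

print(s_n_sq, s_nm1_sq)
```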
The same happens with the standard deviation of a sample. Some define it as
$$\sigma_s = \sqrt {\frac{\sum_{i=1}^{n}(x_i-\overline{x})^2}{n-1}}$$
others claim that it is, instead, given by
$$\sigma_s = \sqrt {\frac{\sum_{i=1}^{n}(x_i-\overline{x})^2}{n}}$$
and that the expression with $n-1$ in the denominator is an estimate of the standard deviation of the population
$$\sigma_p \approx \sqrt {\frac{\sum_{i=1}^{n}(x_i-\overline{x})^2}{n-1}}$$
Here the subscripts "s" and "p" stand for sample and population, respectively.
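The same split shows up in code for the standard deviation; a minimal sketch, again with numpy and the same arbitrary data, keeping the two denominators labelled neutrally:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # same arbitrary data as above

std_n = np.std(x, ddof=0)          # square root of the n-denominator variance     -> 2.0
std_n_minus_1 = np.std(x, ddof=1)  # square root of the (n-1)-denominator variance -> ~2.138

print(std_n, std_n_minus_1)
```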
To add further confusion, the logic behind Bessel's correction for the variance does not carry over to the standard deviation: the square root of $s_{n-1}^2$ is not an unbiased estimator of the population standard deviation.
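A quick simulation illustrates this last point (the normal distribution, sample size, and seed below are arbitrary choices of mine, just to show the effect):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0       # true population standard deviation
n = 5             # a small sample size makes the bias easy to see
reps = 200_000

# Draw many samples of size n from N(0, sigma^2) and average sqrt(s_{n-1}^2) over them.
samples = rng.normal(0.0, sigma, size=(reps, n))
avg_corrected_std = np.std(samples, axis=1, ddof=1).mean()

print(avg_corrected_std)  # comes out around 0.94, systematically below sigma = 1
```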
So, what are the correct expressions for the variance and the standard deviation of a sample?