This formula doesn't "solve" $k$-means; it is just a high-level mathematical statement of the problem you are trying to solve. $k$-means refers to the whole algorithm, which alternates between assigning points to their nearest means and recomputing the means (an EM-like procedure, often called Lloyd's algorithm) to minimize the given loss function. In practice, improved variants like $k$-means++ are commonly used instead, because their smarter initialization gives better results and comes with provable approximation guarantees.
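To make the distinction between the formula and the algorithm concrete, here is a minimal sketch of that alternating procedure (a naive Lloyd's-style loop, not a reference implementation; the function name, random initialization, and numpy usage are my own illustration):

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Minimal Lloyd's algorithm: alternate assignments and mean updates."""
    rng = np.random.default_rng(seed)
    # Naive initialization: pick k random points as the starting means
    # (k-means++ would pick them more carefully).
    means = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # "E-step": assign each point to its nearest mean.
        sq_dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = sq_dists.argmin(axis=1)
        # "M-step": move each mean to the centroid of its assigned points
        # (keep the old mean if a cluster ends up empty).
        new_means = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else means[j]
            for j in range(k)
        ])
        if np.allclose(new_means, means):
            break  # nothing moved, so the loss can no longer decrease
        means = new_means
    return means, labels
```

Each iteration can only decrease (or keep) the loss below, which is why the loop stops once the means stop moving.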
The loss function used by $k$-means can be written in different ways. For example, Wikipedia uses
$$
\underset{S}{\operatorname{arg\,min}} \sum_{j=1}^k \sum_{\mathbf{x} \in S_j} \| \mathbf{x} - m_j\|^2
$$
For each of the $j = 1, 2, \dots, k$ clusters $S_j$, the loss is the sum of squared errors between the points in the cluster, $\mathbf{x} \in S_j$, and the cluster's mean $m_j$, i.e. the part
$$
\sum_{\mathbf{x} \in S_j} \| \mathbf{x} - m_j \|^2
$$
we sum these within-cluster squared errors over all $k$ clusters to get the total loss. Squared error is used because, for a fixed assignment of points, the center that minimizes the sum of squared distances is exactly the mean of those points. The formula you refer to looks a little different, but says the same thing
$$
\underset{m_1,m_2,\dots,m_k}{\operatorname{arg\,min}} \sum_{i=1}^n \underset{j=1,2,\dots,k}{\min} \| \mathbf{x}_i - m_j \|^2
$$
Notice that the above formula uses a single sum, but over all $n$ samples. It says that we are looking for the means $m_1,m_2,\dots,m_k$ for which the total loss is minimized. There is also this part that differs from the first version
$$
\underset{j=1,2,\dots,k}{\min} \| \mathbf{x}_i - m_j \|^2
$$
Notice that if you take points $\mathbf{x}_i \in S_j$, those points are closer to the $j$-th cluster mean $m_j$ than to any other cluster's mean (otherwise they would be assigned to a different cluster). This is the same as saying that we take the smallest of the squared distances between $\mathbf{x}_i$ and the means of all the clusters. So both versions of the formula say the same thing.
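A quick numerical check makes the equivalence tangible. The sketch below (assuming numpy and made-up data) computes the loss both ways, once as a sum over clusters and once as a sum of per-point minimum distances, and gets the same number, provided the assignments are the nearest-mean assignments:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))      # 20 sample points x_i
means = rng.normal(size=(3, 2))   # 3 cluster means m_1, m_2, m_3

# Squared distances from every point to every mean, shape (20, 3).
sq_dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
labels = sq_dists.argmin(axis=1)  # nearest-mean assignment defining S_j

# First form: sum over clusters of the within-cluster squared error.
loss_by_cluster = sum(
    ((X[labels == j] - means[j]) ** 2).sum() for j in range(3)
)

# Second form: single sum over points of the minimum squared distance.
loss_by_point = sq_dists.min(axis=1).sum()

assert np.isclose(loss_by_cluster, loss_by_point)
```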
As said above, the formulas tell you nothing about how to efficiently find the minimum, only that this minimum is what we are aiming to find.
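In practice you would not minimize this by hand. For example, scikit-learn's `KMeans` (which uses $k$-means++ initialization by default) runs the alternating procedure for you, and its `inertia_` attribute is exactly the loss above, evaluated at the solution it found (the data here is made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(0).normal(size=(20, 2))
# init="k-means++" is the default; n_init restarts keep the best run.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.inertia_)  # the minimized within-cluster sum of squares
```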