I'll take a different approach towards developing the intuition that underlies the formula $\text{Var}\,\hat{\beta}=\sigma^2 (X'X)^{-1}$. When developing intuition for the multiple regression model, it's helpful to consider the bivariate linear regression model, viz., $$y_i=\alpha+\beta x_i + \varepsilon_i, \quad i=1,\ldots,n.$$ $\alpha+\beta x_i$ is frequently called the deterministic contribution to $y_i$, and $\varepsilon_i$ is called the stochastic contribution. Expressed in terms of deviations from the sample means $(\bar{x},\bar{y})$ (i.e., after averaging the model over $i$ and subtracting the averaged equation from the original one), this model may also be written as $$(y_i-\bar{y}) = \beta(x_i-\bar{x})+(\varepsilon_i-\bar{\varepsilon}), \quad i=1,\ldots,n.$$
To help develop the intuition, we will assume that the simplest Gauss-Markov assumptions are satisfied: $x_i$ is nonstochastic, $\sum_{i=1}^n(x_i-\bar{x})^2>0$ for all $n$, and $\varepsilon_i \sim \text{iid}(0,\sigma^2)$ for all $i=1,\ldots,n$. As you already know very well, these conditions guarantee that $$\text{Var}\,\hat{\beta}=\tfrac{1}{n}\sigma^2(\text{Var}\,x)^{-1}\text{,}$$ where $\text{Var}\,x = \tfrac{1}{n}\sum_{i=1}^n(x_i-\bar{x})^2$ is the sample variance of $x$ (with divisor $n$). In words, this formula makes three claims: the variance of $\hat{\beta}$ is inversely proportional to the sample size $n$, directly proportional to the variance of $\varepsilon$, and inversely proportional to the variance of $x$.
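If you'd like to see the formula in action rather than take it on faith, here is a minimal Monte Carlo sketch (using NumPy, with arbitrarily chosen values of $n$, $\alpha$, $\beta$, and $\sigma^2$, and a fixed design held constant across replications). It is only an illustration of the formula above, not part of the derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values, chosen arbitrarily for this sketch
n, alpha, beta, sigma2 = 50, 1.0, 2.0, 4.0
n_reps = 100_000

# A fixed (nonstochastic) design, held constant across replications
x = np.linspace(0.0, 10.0, n)
xc = x - x.mean()

# Draw fresh errors for each replication and compute the OLS slope
eps = rng.normal(0.0, np.sqrt(sigma2), size=(n_reps, n))
y = alpha + beta * x + eps
beta_hats = (y @ xc) / (xc @ xc)          # OLS slope, one per replication

print("Monte Carlo Var(beta_hat):  ", beta_hats.var())
print("Formula sigma^2/(n * Var x):", sigma2 / (n * x.var()))
```

The two printed numbers should agree to within Monte Carlo error.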
Why should doubling the sample size, ceteris paribus, cause the variance of $\hat{\beta}$ to be cut in half? This result is intimately linked to the iid assumption applied to $\varepsilon$: Since the individual errors are assumed to be iid, each observation should be treated ex ante as being equally informative. And, doubling the number of observations doubles the amount of information about the parameters that describe the (assumed linear) relationship between $x$ and $y$. Having twice as much information cuts the uncertainty about the parameters in half. Similarly, it should be straightforward to develop one's intuition as to why doubling $\sigma^2$ also doubles the variance of $\hat{\beta}$.
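For a concrete check of the sample-size claim, imagine (purely for illustration) doubling the sample by repeating each $x_i$ exactly once. Then $\bar{x}$ and $\text{Var}\,x$ are unchanged while $n$ becomes $2n$, so $$\text{Var}\,\hat{\beta}_{\text{new}}=\tfrac{1}{2n}\sigma^2(\text{Var}\,x)^{-1}=\tfrac{1}{2}\,\text{Var}\,\hat{\beta}_{\text{old}},$$ and, holding $n$ and $\text{Var}\,x$ fixed, replacing $\sigma^2$ with $2\sigma^2$ doubles $\text{Var}\,\hat{\beta}$.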
Let's turn, then, to your main question, which is about developing intuition for the claim that the variance of $\hat{\beta}$ is inversely proportional to the variance of $x$. To formalize these notions, let us consider two separate bivariate linear regression models, called Model $(1)$ and Model $(2)$ from now on. We will assume that both models satisfy the assumptions of the simplest form of the Gauss-Markov theorem and that the models share the exact same values of $\alpha$, $\beta$, $n$, and $\sigma^2$. Under these assumptions, it is easy to show that $\text{E}\,\hat{\beta}{}^{(1)}=\text{E}\,\hat{\beta}{}^{(2)}=\beta$; in words, both estimators are unbiased. Crucially, we will also assume that whereas $\bar{x}^{(1)}=\bar{x}^{(2)}=\bar{x}$, $\text{Var}\,x^{(1)}\ne \text{Var}\,x^{(2)}$. Without loss of generality, let us assume that $\text{Var}\,x^{(1)}>\text{Var}\,x^{(2)}$. Which of the two estimators of $\beta$ will have the smaller variance? Put differently, will $\hat{\beta}{}^{(1)}$ or $\hat{\beta}{}^{(2)}$ be closer, on average, to $\beta$?
From the earlier discussion, we have $\text{Var}\,\hat{\beta}{}^{(k)} =\tfrac{1}{n}\sigma^2(\text{Var}\,x^{(k)})^{-1}$ for $k=1,2$. Because $\text{Var}\,x^{(1)}>\text{Var}\,x^{(2)}$ by assumption, it follows that $\text{Var}\,\hat{\beta}{}^{(1)} <\text{Var}\,\hat{\beta}{}^{(2)}$. What, then, is the intuition behind this result?
Because by assumption $\text{Var}\,x^{(1)}>\text{Var}\,x^{(2)}$, on average each $x_i^{(1)}$ will be farther away from $\bar{x}$ than is the case, on average, for $x_i^{(2)}$. Let us denote the average absolute deviation of $x_i$ from $\bar{x}$ by $d_x$. The assumption that $\text{Var}\,x^{(1)}>\text{Var}\,x^{(2)}$ implies that $d_x^{(1)} >d_x^{(2)}$. In the deviations-from-means form of the model, the typical magnitude of the deterministic contribution to $y_i-\bar{y}$ is $|\beta|\,d_x$, i.e., $|\beta|\,d_x^{(1)}$ for Model $(1)$ and $|\beta|\,d_x^{(2)}$ for Model $(2)$. If $\beta\ne0$, this means that the deterministic component of Model $(1)$ has a greater influence on $y_i-\bar{y}$ than does the deterministic component of Model $(2)$. Recall that both models are assumed to satisfy the Gauss-Markov assumptions, that the error variances are the same in both models, and that $\beta^{(1)}=\beta^{(2)}=\beta$. Since Model $(1)$ imparts more information about the contribution of the deterministic component of $y$ than does Model $(2)$, it follows that the precision with which the deterministic contribution can be estimated is greater for Model $(1)$ than is the case for Model $(2)$. Greater precision is, of course, just another way of saying a smaller variance of the point estimate of $\beta$.
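To put some (made-up) numbers on this signal-versus-noise reading, the following sketch compares two fixed designs that share the same mean but differ in spread, in the spirit of Models $(1)$ and $(2)$; the variance of $\hat{\beta}$ is computed from the closed-form formula above.

```python
import numpy as np

# Shared (illustrative) parameters for both models
n, beta, sigma2 = 50, 2.0, 4.0

# Two fixed designs with the same mean (5.0) but different spreads
x1 = np.linspace(0.0, 10.0, n)   # Model (1): larger Var x
x2 = np.linspace(4.0, 6.0, n)    # Model (2): smaller Var x

for label, x in [("Model (1)", x1), ("Model (2)", x2)]:
    signal = beta * (x - x.mean())           # deterministic contribution to y_i - ybar
    d_x = np.abs(x - x.mean()).mean()        # average absolute deviation of x
    var_beta_hat = sigma2 / (n * x.var())    # closed-form Var(beta_hat)
    print(f"{label}: Var x = {x.var():6.2f}, d_x = {d_x:5.2f}, "
          f"Var(signal)/sigma^2 = {signal.var() / sigma2:5.2f}, "
          f"Var(beta_hat) = {var_beta_hat:.4f}")
```

The design with the larger spread carries the stronger deterministic "signal" relative to the same error variance and, correspondingly, the smaller $\text{Var}\,\hat{\beta}$.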
It is reasonably straightforward to generalize the intuition obtained from studying the simple regression model to the general multiple linear regression model. The main complication is that instead of comparing scalar variances, it is necessary to compare the "size" of variance-covariance matrices. Having a good working knowledge of determinants, traces and eigenvalues of real symmetric matrices comes in very handy at this point :-)
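By way of a hedged illustration (with two made-up design matrices, not anything prescribed by the discussion above), here is how one might compare the "size" of $\sigma^2(X'X)^{-1}$ across designs using exactly those scalar summaries:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two illustrative designs with the same n and sigma^2; design A spreads
# its (non-constant) regressors more widely than design B does.
n, k, sigma2 = 100, 3, 1.0
Z = rng.normal(size=(n, k - 1))                  # common underlying regressors
X_a = np.column_stack([np.ones(n), 2.0 * Z])     # design A: wider spread
X_b = np.column_stack([np.ones(n), 0.5 * Z])     # design B: narrower spread

for label, X in [("design A", X_a), ("design B", X_b)]:
    V = sigma2 * np.linalg.inv(X.T @ X)          # Var(beta_hat) = sigma^2 (X'X)^{-1}
    eig = np.linalg.eigvalsh(V)                  # V is symmetric positive definite
    print(f"{label}: det = {np.linalg.det(V):.2e}, trace = {np.trace(V):.2e}, "
          f"largest eigenvalue = {eig.max():.2e}")
```

For these particular designs, the more widely spread one yields the "smaller" covariance matrix under all three summaries, mirroring the scalar comparison above.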