First, realise that there is a mathematical theorem behind this, and theorems have assumptions. The statement isn't true in general. A standard assumption is affine equivariance, which roughly means that it only holds for estimators that "move" with the data in a certain sense. For example, if you compute the sample mean and then add 5 to all observations, the mean moves by 5 as well.
In particular, the estimator needs to be able to move off to infinity as the data are changed more and more. Technically, estimating the mean as 0 regardless of the data defines an "estimator" as well (a very crappy one!), and this estimator has a breakdown point of 100% - no matter what the data are and how much you change them, it will always stay the same!
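A minimal numerical sketch of these two points (the estimators and data below are hypothetical, just to illustrate the definitions): the sample mean is translation equivariant, while the constant "estimator" ignores the data entirely and so can never break down - but also can never move.

```python
# Translation equivariance of the sample mean vs. the constant "estimator".
def mean(xs):
    return sum(xs) / len(xs)

def constant_estimator(xs):
    return 0.0  # ignores the data entirely, so it can never break down

data = [1.0, 2.0, 3.0, 4.0]
shifted = [x + 5 for x in data]          # add 5 to all observations

print(mean(shifted) - mean(data))        # 5.0: the mean moves by exactly 5
print(constant_estimator(shifted) - constant_estimator(data))  # 0.0: never moves
```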
Now imagine you have a (reasonably flexible, see above) estimator $T$ with a breakdown point of 60%, and a data set, say $x=(x_1,\ldots,x_{100})$. A breakdown point of 60% means that you can keep the observations $x_1,\ldots,x_{41}$, replace the other 59 observations with anything else, and the estimator will still stay within a bounded neighborhood of the original value $T(x)$.
Now imagine a sequence of other data sets $y^k=(y_1^k,\ldots,y^k_{100})$, $k\to\infty$, with $T(y^k)\to\infty$ (which is possible because of the affine equivariance assumption, see above), i.e., $T(y^k)$ can be arbitrarily far away from $T(x)$. If the estimator has a breakdown point of 60%, you can replace 59 observations of $y^k$ and the resulting estimate will still stay near $T(y^k)$, hence arbitrarily far away from $T(x)$.
But this is impossible: when replacing 59 observations of $y^k$, you may well introduce $x_1,\ldots,x_{41}$ into the data set. The resulting data set then differs from $x$ in at most 59 observations as well, so the estimator also needs to stay close to $T(x)$, as explained in the previous paragraph. So there is one portion of more than 40% of the data that requires the estimator to be in one place, and another portion of more than 40% that requires it to be in a totally different place. Both cannot hold at the same time.
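The construction can be checked mechanically (the concrete values below are stand-ins I made up; only the counts matter): one data set $z$ of size 100 that differs from $x$ in at most 59 positions and from $y^k$ in at most 59 positions, so a 60% breakdown point would force $T(z)$ to be near $T(x)$ and near $T(y^k)$ at once.

```python
n = 100
x = [float(i) for i in range(n)]         # stand-in for the original sample x
y = [1e9 + i for i in range(n)]          # stand-in for the far-away sample y^k

# z keeps x_1..x_41, keeps 41 observations of y, and fills the rest freely
z = x[:41] + y[41:82] + [-1.0] * 18

diff_from_x = sum(a != b for a, b in zip(z, x))
diff_from_y = sum(a != b for a, b in zip(z, y))
print(diff_from_x, diff_from_y)          # both at most 59
```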
This contradiction can only be avoided by a breakdown point below 50%.
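As a hedged numerical illustration of where the 50% bound bites (my own example, not part of the argument above): the sample median attains this maximal breakdown point. Replacing 49 of 100 observations with huge values leaves it bounded; replacing 50 does not.

```python
import statistics

x = list(range(100))                     # original sample, values 0..99

def contaminate(xs, m, value=1e12):
    # replace the last m observations with an extreme value
    return xs[:len(xs) - m] + [value] * m

print(statistics.median(x))                   # 49.5
print(statistics.median(contaminate(x, 49)))  # 49.5: still bounded
print(statistics.median(contaminate(x, 50)))  # explodes towards the outlier
```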