I think there is a terminological difficulty here: the standard deviation of the sampling distribution (i.e., of the distribution of sample means) is exactly what the standard error of the mean is.
But your second paragraph suggests that you are really asking, "Why not estimate the accuracy of our point estimate using the within-sample standard deviation alone, instead of the standard error?" Without taking the sample size into account, the standard deviation (SD) within a sample can hardly tell us the precision of our estimate of the mean. For instance, with IQ scores the SD is typically 15. If the mean IQ in some group of 500 people were 98 (a hair below average), it would hardly make sense to say that the 95% confidence interval for that mean extends all the way to ±1.96 × 15 ≈ 30 points, i.e., the range [68, 128]. To accept that would mean accepting that the population represented by those 500 subjects could well average an IQ score in the range associated with developmental disability. In fact, [68, 128] is roughly the interval we would use to guess the IQ score of a single person with 95% confidence. With a group of 500, we naturally have more information and should expect greater precision (a narrower interval).
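A small simulation makes this concrete. Here is a minimal sketch in Python, assuming (purely for illustration) that IQ scores are roughly Normal with mean 100 and SD 15: it shows that the means of repeated samples of 500 vary far less than individual scores do, and that their spread matches SD/√n rather than SD.

```python
# Sketch: how much do *sample means* of n = 500 vary, compared to individual scores?
# Assumes IQ ~ Normal(100, 15) purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, sd, n_sims = 500, 15, 10_000

# Draw 10,000 samples of size 500 and take the mean of each one.
sample_means = rng.normal(loc=100, scale=sd, size=(n_sims, n)).mean(axis=1)

print(np.std(sample_means))   # ~0.67: the observed spread of the sample means
print(sd / np.sqrt(n))        # 0.6708...: the standard error, 15 / sqrt(500)
```

The simulated spread of the sample means is about 0.67 IQ points, not 15, which is why the SD alone badly overstates the uncertainty about a group mean.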
So it is necessary, and unquestionably beneficial, to adjust for the sample size, which is what we do by dividing the SD by √n. The larger the sample, under random sampling, the more confident we should be in our estimate. The SD, being part of the formula for the standard error, does come into play, but with large samples it is overshadowed by n.
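For the example above, a short calculation (illustrative numbers only) shows how different the two intervals are:

```python
# Compare an interval built from the raw SD with one built from the standard error.
import math

mean, sd, n = 98, 15, 500   # illustrative figures from the IQ example
z = 1.96                     # approximate 95% multiplier

se = sd / math.sqrt(n)                       # ~0.67
iv_sd = (mean - z * sd, mean + z * sd)       # ~(68.6, 127.4): plausible range for ONE person
iv_se = (mean - z * se, mean + z * se)       # ~(96.7, 99.3): 95% CI for the group MEAN

print(iv_sd, iv_se)
```

The SD-based interval spans about 60 points, while the SE-based confidence interval for the mean of 500 people is only about 2.6 points wide.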