[The example in your question doesn't make sense; a discrete uniform distribution isn't approximately normal in any reasonable sense of the word.]
There are goodness-of-fit tests which can (sometimes) tell you when a distributional model fails to describe data in some way -- quite literally hundreds of them, and dozens just for the case of normality.
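As a minimal sketch of what such a test looks like, here is a one-sample Kolmogorov-Smirnov test of normality written out with numpy (the function names are my own; in practice you'd reach for a library routine such as `scipy.stats.kstest` rather than rolling your own):

```python
import math
import numpy as np

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov distance: sup |F_n(x) - F(x)|."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    f = np.array([cdf(v) for v in x])
    # The empirical CDF jumps from (i-1)/n to i/n at each order statistic,
    # so the supremum is attained at one of those jump points.
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
d = ks_statistic(sample, normal_cdf)
# Asymptotic 5% critical value for the KS statistic: about 1.36 / sqrt(n).
critical = 1.36 / math.sqrt(len(sample))
print(f"D = {d:.3f}, 5% critical value = {critical:.3f}")
```

Note that even this only answers "is the discrepancy from normality larger than sampling noise alone would produce?", not "is the normal model good enough for my purpose?" -- which is the point made next.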
However, it's generally not useful to test for something we know in advance to be false (we can often know, before we even see the data, that they are not exactly normal, even if the population distribution may be fairly close). In any case, it answers the wrong question: a good question is something like "is this distributional model sufficiently good to be useful for our purposes?", which is simply not addressed by a goodness-of-fit test.
As the famous saying goes, "All models are wrong; some are useful" (there are actually a couple of different versions -- a different form of the Box quote is on my profile -- but that one will do for now).
> i.e. how do I say this data is a ______ distribution without eyeballing it
In general you can't say it is something, but you can use knowledge of the variable, of the subject area and of the characteristics of distributions to pick models that are likely to be reasonable.
Nothing will tell you the population distribution the data were drawn from is $F_1$ rather than the very similar $F_2$, $F_3$, $F_4$, $...\,$. Even worse, with enough data I may be able to reject every simple distributional model I can think of.
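To illustrate that last point, a hypothetical simulation: take a population that is a $t_5$ distribution rescaled to unit variance -- visually very close to normal -- and note that a modest sample doesn't usually reject normality, while a large enough sample from the same population does (names and the KS-based check are my own choices for the sketch):

```python
import math
import numpy as np

def ks_statistic(sample):
    """KS distance between the empirical CDF and the standard normal CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    f = 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))
    return max(np.max(np.arange(1, n + 1) / n - f),
               np.max(f - np.arange(0, n) / n))

rng = np.random.default_rng(1)
# t with 5 d.f., rescaled to unit variance: close enough to normal
# that a histogram or Q-Q plot of a small sample looks unremarkable.
pop = rng.standard_t(df=5, size=200_000) * math.sqrt(3 / 5)

for n in (100, 200_000):
    d = ks_statistic(pop[:n])
    critical = 1.36 / math.sqrt(n)  # asymptotic 5% critical value
    print(f"n={n}: D={d:.4f}, critical={critical:.4f}, reject={d > critical}")
```

The model didn't get worse between the two sample sizes; the test simply gained the power to detect a discrepancy that was always there.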
In the absence of rock-solid theory (which still may fail to capture aspects of the entire data generating process by the time the data ends up on your computer) the best you can usually hope to do is identify distributional models that capture the most salient features in a fruitful way.
As gung suggests, you usually choose models by thinking about what you're dealing with (how the characteristics of the situation / data-generating-process / kind of variable you'll observe match up with the characteristics of a distributional model).
If I am dealing with counts or times, say, my first thought won't be the normal distribution but something more directly related to those types of variables (there are several models commonly used for count data, and it's often clear that some of those can be ruled out before you even see the data).
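As a hypothetical illustration of ruling a count model out: the Poisson forces the variance to equal the mean, so a quick dispersion check on some made-up overdispersed counts shows it can't be adequate, while something like a negative binomial stays in play (the data here are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# Overdispersed counts: negative binomial with mean 4 and variance 12.
# (numpy parameterisation: n successes, success probability p; mean = n(1-p)/p.)
counts = rng.negative_binomial(n=2, p=1/3, size=5_000)

mean, var = counts.mean(), counts.var(ddof=1)
dispersion = var / mean
print(f"mean={mean:.2f}  var={var:.2f}  dispersion={dispersion:.2f}")
# A Poisson model implies dispersion close to 1; a ratio this far above 1
# rules it out, pointing toward negative binomial or similar instead.
```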
If I know something will be close to normal (though for me this sort of knowledge is fairly rare), then often I'll just use it; it may work about as well as anything else. However, if I can see that the way it isn't quite normal may be an issue for what I want to do, then I might do something a bit different.

For example, if I were just doing a t-test or a regression, being more or less close to normal may present little problem. But if I were (say) interested in computing a conditional tail expectation for a very high quantile, just saying "well, that's more or less close to normal" may not be nearly adequate; the behaviour of the extreme tail will matter.
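To make that tail point concrete, a hypothetical comparison: both the standard normal and a $t_5$ rescaled to unit variance look "more or less normal", yet their conditional tail expectations beyond the 99.9% point differ substantially (the normal figure uses the closed-form Mills-ratio result $E[X \mid X > q] = \phi(q)/(1 - \Phi(q))$; the $t$ figure is simulated):

```python
import math
from statistics import NormalDist
import numpy as np

nd = NormalDist()
q = nd.inv_cdf(0.999)                        # 99.9% quantile of N(0,1)
# Closed form for the standard normal: E[X | X > q] = phi(q) / (1 - Phi(q)).
cte_normal = nd.pdf(q) / (1 - nd.cdf(q))

rng = np.random.default_rng(3)
# t with 5 d.f. scaled to unit variance -- close to normal in the middle,
# but with a distinctly heavier tail.
x = rng.standard_t(df=5, size=2_000_000) * math.sqrt(3 / 5)
q_t = np.quantile(x, 0.999)
cte_t = x[x > q_t].mean()

print(f"normal CTE beyond 99.9%:   {cte_normal:.2f}")
print(f"scaled-t CTE beyond 99.9%: {cte_t:.2f}")
# The heavier tail pushes the t-based figure well above the normal one,
# even though the two densities are hard to tell apart near the centre.
```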
Sometimes you simply have no theory and not enough practical subject-area knowledge to choose a model. Then you might begin by (say) looking at some fraction of your data; or, in more complicated cases, by choosing a convenient model for that portion and examining diagnostics to help you find a better one.
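One simple numeric diagnostic of that sort is the correlation of a normal Q-Q plot (the probability-plot correlation). A hypothetical sketch, using my own function names: values very near 1 say the normal model tracks the ordered data well; clearly lower values say to look elsewhere:

```python
from statistics import NormalDist
import numpy as np

def qq_correlation(sample):
    """Correlation between ordered data and normal plotting positions."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # Plotting positions (i - 0.5) / n mapped through the normal quantile function.
    theo = np.array([NormalDist().inv_cdf((i - 0.5) / n)
                     for i in range(1, n + 1)])
    return np.corrcoef(x, theo)[0, 1]

rng = np.random.default_rng(4)
c_norm = qq_correlation(rng.normal(size=300))       # near 1: plausible fit
c_exp = qq_correlation(rng.exponential(size=300))   # lower: skew shows up
print(f"normal sample: {c_norm:.3f}, exponential sample: {c_exp:.3f}")
```

The same idea works against any candidate distribution by swapping in its quantile function, which makes it a handy screen when comparing a few rough candidates on a held-out portion of the data.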