This is a question concerning the width of the bars in a histogram. Let's say we have a frequency distribution like this:
As far as I've learned, when you set the bar half-width to, say, 0.5, then for each integer value you cluster all the points from the interval [integer - 0.5, integer + 0.5). So, for example, if we draw the bar around the integer 50, it represents all the items between 49.5 and 50, plus all the items at 50, plus all the items between 50 and 50.5 — in aggregate 20 + 4 + 3 = 27 items in that subset. The bar doesn't necessarily have to be centered on an integer value, but let's keep it simple for this purpose.
Question: Taking a random observation from this distribution, what is the probability that it falls in the range >= 50?
The way it's usually pictured is to sum the elements contained in all the bars of the histogram from 50 onward. If that is the case, and what I described above is true, then shouldn't we exclude the subset from 49.5 to 50 to get the right probability?
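To make the discrepancy concrete, here is a small sketch with made-up data (the original frequency table is not shown, so the values and frequencies below are hypothetical). It compares the exact probability P(X >= 50) against the estimate obtained by summing whole bars of width 1 centered on integers:

```python
from math import floor

# Hypothetical data: value -> frequency (stand-in for the missing table).
freq = {48.2: 10, 49.6: 20, 50.0: 4, 50.3: 3, 51.1: 5}

total = sum(freq.values())  # 42 observations in total

# Exact probability: count only observations with value >= 50.
exact = sum(n for v, n in freq.items() if v >= 50) / total  # 12/42

def bin_center(v):
    # Nearest integer: the interval [k - 0.5, k + 0.5) maps to k.
    return floor(v + 0.5)

# Histogram estimate: sum every bar whose center is >= 50.
# The bar at 50 covers [49.5, 50.5), so it absorbs the 20 items at 49.6.
hist_estimate = sum(n for v, n in freq.items() if bin_center(v) >= 50) / total  # 32/42
```

With these numbers the bar-summing estimate (32/42) overshoots the exact probability (12/42) precisely because of the sub-50 items swept into the bar centered at 50 — the subset the question is asking about.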
Maybe the error from including that small subset is negligible when the population is large, but what if the population is small?
Or is it the other way around: to make a histogram, the bar width must be chosen so that the frequency is roughly constant within each bar (so that the extra items you sweep in don't matter)?