My interpretation of the question:
I do not believe this question should be taken simply as a matter of computational-geometry complexity. It is better read as: we perceive an ability to find the answer in constant time, when we can. What explains this perception, and, up to this explanation and to human limitations, can a computer do as well?
Thus the question should probably be seen first as a question for a psychologist. The issue is probably related to your perception of time and effort. Can your brain really perceive a difference between $O(1)$ time and $O(\log n)$ time?
Specific counter-examples do not really matter, because in matters of perception we tend to think instinctively of the average cost (complexity probably being too precise a concept, psychologically). More accurately, we are more interested in the common case than in the special cases where we feel we cannot readily answer the question.
This may be reinforced by the Weber-Fechner law, which states that our perception is to be measured on a logarithmic scale of the actual physical magnitude. In other words, we perceive relative variations rather than absolute variations. This is, for example, why sound intensity is measured in decibels.
If you apply this to the perception of the time we use to find the closest point, this is no longer $O(\log n)$ but $O_\psi(\log \log n)$, where $O_\psi$ is my just-invented Landau notation for "psychological complexity".
Actually I am cheating, because the psychological perception of the scatterplot's size also obeys the logarithmic law, which should compensate in this simple complexity relation. Still, it means that a scatterplot will always seem much simpler to us than it really is, especially a very large one. But whatever the size we perceive, if we have a built-in logarithmic algorithm to find a closest point (such as neuronal quadtrees), then the perceived processing time is to be measured by $O_\psi(\log \log n)$, which for all practical purposes is probably perceptually indistinguishable from a constant, and there is necessarily some constant time added to it to start the recognition process and acknowledge the result.
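Purely to make the analogy concrete on the computer side (and not as a claim about how neurons actually work), here is a minimal point-quadtree sketch with a best-first nearest-neighbour query; all names are made up for the illustration, and details such as duplicate points are ignored. Building it costs $O(n \log n)$ once, and a query then typically touches only $O(\log n)$ nodes.

```python
import heapq
import random


class QuadNode:
    def __init__(self, x0, y0, x1, y1, points, leaf_size=8):
        self.box = (x0, y0, x1, y1)
        self.children = []
        if len(points) <= leaf_size:
            self.points = points              # leaf: keep the points here
        else:
            self.points = []
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            quads = {(False, False): [], (False, True): [],
                     (True, False): [], (True, True): []}
            for p in points:
                quads[(p[0] > mx, p[1] > my)].append(p)
            bounds = {(False, False): (x0, y0, mx, my),
                      (False, True): (x0, my, mx, y1),
                      (True, False): (mx, y0, x1, my),
                      (True, True): (mx, my, x1, y1)}
            for key, pts in quads.items():
                if pts:
                    self.children.append(QuadNode(*bounds[key], pts, leaf_size))

    def box_dist2(self, q):
        # squared distance from the query point to this node's bounding box
        x0, y0, x1, y1 = self.box
        dx = max(x0 - q[0], 0.0, q[0] - x1)
        dy = max(y0 - q[1], 0.0, q[1] - y1)
        return dx * dx + dy * dy


def nearest(root, q):
    """Best-first search: visit boxes in order of their distance to q and stop
    as soon as every unvisited box is farther than the best point found."""
    best, best_d2 = None, float("inf")
    heap = [(root.box_dist2(q), id(root), root)]
    while heap:
        d2, _, node = heapq.heappop(heap)
        if d2 >= best_d2:
            break
        for p in node.points:
            pd2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
            if pd2 < best_d2:
                best, best_d2 = p, pd2
        for child in node.children:
            heapq.heappush(heap, (child.box_dist2(q), id(child), child))
    return best


if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(100_000)]
    tree = QuadNode(0.0, 0.0, 1.0, 1.0, pts)    # built once, amortized
    print(nearest(tree, (0.5, 0.5)))
```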
Taking into account the physiological limitations
The above conclusion is further sustained when considering the image
acquisition steps.
The OP was careful to set aside the construction of a proper data structure, "such as a quadtree", whose cost is amortized over several queries.
This does not apply to most people, who do not memorize the image. I think the image is scanned anew for each query, but that does not imply scanning all points: neither the first time, nor for later queries.
The eye scans a raster image in constant time $T_{scan}$, independently of the size of the scene taken in, and with a fixed resolution defined by the structure of the retina (see below). Thus it gets a constant amount of information, possibly not distinguishing all points. It can then focus on the relevant part of the image, to distinguish the relevant points in another time $T_{scan}$, plus possibly the time to change the orientation and focus of the eye. Theoretically, this operation could have to be repeated, leading to logarithmic focussing, but I believe that in perceptual practice there is at most one additional focussing step.
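A toy model of this "fixed-resolution scan, then focus" behaviour might look like the sketch below (the resolution W and all names are invented for the illustration). Note that on a computer the scan step still costs O(n): that is exactly the work the physics of the sensor performs for free for the eye, a point taken up again further down.

```python
from collections import defaultdict
import math
import random

W = 64                                     # fixed "retina" resolution (made up)


def scan(points):
    """One T_scan: bin every point into a W*W raster.  The summary has constant
    size whatever n is, but building it costs O(n) on a computer."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x * W), int(y * W))].append((x, y))
    return cells


def focus(cells, q):
    """Spiral outward from the query's cell; stop once every unvisited cell is
    provably farther away than the best candidate found so far."""
    s = 1.0 / W                            # cell size
    cq = (int(q[0] * W), int(q[1] * W))
    best, best_d = None, float("inf")
    r = 0
    while r <= W and (r - 1) * s < best_d:
        ring = [(cq[0] + dx, cq[1] + dy)
                for dx in range(-r, r + 1) for dy in range(-r, r + 1)
                if max(abs(dx), abs(dy)) == r]
        for c in ring:
            for p in cells.get(c, []):
                d = math.dist(p, q)
                if d < best_d:
                    best, best_d = p, d
        r += 1
    return best


if __name__ == "__main__":
    pts = [(random.random(), random.random()) for _ in range(1_000_000)]
    image = scan(pts)                      # the "sensor" step
    print(focus(image, (0.3, 0.7)))
```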
Scanning probably results in a structure in the brain that can be analyzed to find the answer. It may still contain a very large number of points. Though I do not know how the brain proceeds, it is not unreasonable to think that it uses some kind of focussing process that takes at worst logarithmic time, possibly even less. This process is applied to the perceived image, which has a bounded size. This implies of course a bounded number of points, though that number can be quite large. Thus there is a fixed upper bound $m$ on the information to be processed. Assuming logarithmic processing, and reusing the above analysis, the perceived processing time is $O_\psi(\log \log m)$.
The resolution of the human eye is fixed by the number of rods, which is about 125 million, i.e. about $2^{27}$. Using base-2 logarithms, that gives for the processing an upper bound of about $\log_2(27)$, i.e. about 5 steps of whatever cost a step has. Using instead the estimated resolution of the eye, which is on the order of 500 megapixels, does not change the final result.
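For concreteness: $\log_2(\log_2(1.25 \times 10^8)) \approx \log_2(26.9) \approx 4.75$ and $\log_2(\log_2(5 \times 10^8)) \approx \log_2(28.9) \approx 4.85$, so both resolution estimates land at roughly 5 steps.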
Without knowing the actual units to be used, this simply shows that the variation in processing is at worst of the same order as other constant-time operations. Hence it is quite natural that the perceived time for finding the closest point feels constant... whether we determine the closest point or only a set of the closer points.
About counter-examples and a possible solution
It is of course easy to build counter-examples that make visual determination of the closest point very difficult among a small collection of the closer points. This is why the OP is actually asking for an algorithm that quickly eliminates most of the points, except for the closest ones. The possible difficulty of choosing among several close points is addressed in many answers, with the paradigmatic example of the closest points lying almost on a circle around the reference point. Typically, the Weber-Fechner law precludes distinguishing small distance variations over long enough distances. This effect may actually be increased by the presence of other points which, though eliminated, may distort the perception of distances. So trying to identify the closest point will then be a harder task, and may well require specific examination steps, such as using instruments, that will completely destroy the feeling of constant time. But this seems clearly outside the range of experiments considered by the OP, hence not very relevant.
The question to answer, which is the question actually asked by the
OP, is whether there is a way to eliminate most of the points, except
possibly for the remaining few which seem to have very similar
distances to the reference point.
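On the computer side, and accepting the preprocessing the OP allowed ("such as a quadtree"), that elimination is a standard nearest-neighbour query followed by a ball query. Here is a sketch using SciPy's k-d tree; the 10% tolerance is an arbitrary stand-in for distance differences the eye could not tell apart.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1_000_000, 2))        # the scatterplot
tree = cKDTree(points)                     # built once, amortized over queries

q = np.array([0.5, 0.5])                   # the reference point
d, i = tree.query(q)                       # exact nearest neighbour, roughly O(log n)
near_ties = tree.query_ball_point(q, d * 1.1)   # keep only the near-ties
print(points[i], len(near_ties))
```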
Following our analysis of what may be hidden behind a perceived constant time, a computer solution that does it in $O(\log n)$ time could be considered satisfactory. On the other hand, relying on amortized cost should not really be acceptable, since the brain does not do it that way, afaik.
Rejecting amortized cost does not leave room for a better computer solution, since all point coordinates then have to be read at least once. This underscores a major difference between the computing power of the brain and human perception on one side and digital computation on the other: perception can use analog computation with properties that are quite different from those of digital computation. This is typically the case when billions of points are not distinguishable by the eye, which does not have the resolution to see anything but a big cloud with various shades of grey. The eye can then focus on a relevant smaller part and see a bounded number of points, containing the relevant ones. It does not have to know all the points individually. For a computer to do the same, you would have to give it a similar sensor, rather than the precise numerical coordinates of each point. It is a very different problem.
"Mere visual inspection" is in some respects a lot more powerful than
digital computation. And it is due also to the physics of sensors, not
just to a possibly greater computing power of the brain.
Comments:
You are aware of all the points already and roughly where they are; the "software drivers" for your eyes have already done the hard work for you of interpreting the image. In your analogy you are considering this work "free" when in actual fact it isn't. If you already had a data structure that broke down the point positions into some sort of octree representation you could do much better than O(n). A lot of pre-processing happens in the subconscious part of your brain before the information even reaches the conscious part; never forget that in these sorts of analogies. – Richard Tingle – 2014-03-17T11:36:48.740
I think at least one of your assumptions does not hold in general. Assume all points arranged on a circle with 'small' perturbations and 1 additional point P being the center of the circle. If you want to find the closest point to P, you cannot dismiss any of the other points in the graph. – collapsar – 2014-03-17T12:29:22.507
Because our brain is really amazing! Sounds like a cheap answer but it's true. We really don't know a whole lot about how our (apparently massively parallel) image processing works. – Carl Witthoft – 2014-03-17T12:29:49.030
Well, basically, your brain uses space partitioning without you noticing. The fact that this appears really fast does not mean it's a constant time - you're working with some finite resolution, and your image processing software is designed for that (and might even be handling all that in parallel). The fact that you're using a hundred million little CPUs to do the preprocessing doesn't put you in ${\cal O}(1)$ - it just does the complicated operation on a lot of little processors. And don't forget the plotting to the 2D paper - that on its own has to be at least ${\cal O}(n)$. – Luaan – 2014-03-17T13:01:53.833
To add to all the previous people have said -- finding the closest point is not an O(n) problem with some preprocessing. You might want to look into `kd-trees`, `quad-trees` and `Voronoi tessellation`. – TC1 – 2014-03-17T13:37:38.487
Not sure if it has already been mentioned, but the human brain works very differently from a SISD von Neumann type computing system. Particularly relevant here is that, as I understand it, the human brain is inherently parallel and especially so when it comes to processing sensory stimuli: you can hear, see, and feel multiple things at the same time and be (roughly, anyway) aware of all of them simultaneously. I'm concentrating on writing a comment but see my desk, a can of soda, my jacket hanging on the door, the pen on my desk, etc. Your brain can check many points simultaneously. – Patrick87 – 2014-03-17T15:52:23.290
@Patrick87 As long as you have a fixed number of processors dedicated to the problem, parallelism does not affect complexity classes. – collapsar – 2014-03-17T16:34:30.390
@collapsar Although if the number of processors is very large, it can be reasonably treated as infinite (much like how software running in very large finite memory is still called "Turing-complete"). – Brilliand – 2014-03-17T18:33:36.513
@Brilliand The viability of your approach depends on the expected size of problem instances. Moreover, afaik, with regard to any task beyond detection of perceptual primitives (e.g. high intensity gradients in vision, aka 'edges'), it is not clear how many neurons make up a 'processor' in the brain (nor whether the parallel processor model of the brain is really appropriate in the first place), so the claim of a 'reasonably infinite' processor count might be way too strong - however, that deeply invades the field of neurobiology. – collapsar – 2014-03-17T19:08:56.020
Look into quadtrees. Very similar to how one might approach the problem from a visual direction. – Adam Davis – 2014-03-17T19:25:25.453
"Mere visual inspection"... it turns out visual inspection is in fact a quite complex algorithm in itself ;) try implementing that in software and you have an algorithm that solves the problem just like you do. – Niklas B. – 2014-03-17T23:19:06.010
For large graphs your brain cannot do this; for small graphs your brain has more "CPUs" than there are points in the graph. – Ian Ringrose – 2014-03-18T11:08:47.913
Because a) your visual cortex is massively parallel, and b) the number of parallel computations in your visual cortex equals or exceeds the number of objects that your eyes can distinguish. Thus `O(n/c) = O(1)` so long as `c >= n`. – RBarryYoung – 2014-03-20T20:22:11.093
As an illustration of the importance of the computational model/data structures used (and kind of "just for fun" too), take a look at "spaghetti sort": http://en.wikipedia.org/wiki/Spaghetti_sort – Kache – 2014-05-20T11:14:46.760
I think the heart of the question is the encoding of the data structure (among other things). Although it is true that in some models of (human) computation O(n) time is the limit, depending on the encoding of the input data it can also be in O(1). In fact many NP-complete problems are only NP-complete using certain encodings of the data and not with others (e.g. the subset-sum problem). – Nikos M. – 2014-06-08T11:17:56.027