ANSWER:
You may want to preferentially select observations at points where your model currently underperforms (taking a lesson from boosted models).
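As a minimal sketch of that idea (everything here is hypothetical: the candidate pool, the sample, and the stand-in generating process), you could score a pool of candidate x values by the absolute residual of the nearest labelled point and query where the current fit is worst:

    set.seed(1)

    # Hypothetical setup: a small labelled sample and a pool of candidate x values
    x <- runif(10, -10, 10)
    y <- x^2 / 10 + rnorm(10)    # stand-in for an unknown process
    fit <- lm(y ~ x)

    candidates <- seq(-10, 10, by = 0.5)

    # Score each candidate by the absolute residual of the nearest labelled
    # point -- a crude proxy for "the model underperforms around here"
    res   <- abs(resid(fit))
    score <- sapply(candidates, function(cx) res[which.min(abs(x - cx))])

    # Query next where the current fit is worst, much as boosting upweights
    # the points it currently gets wrong
    next_x <- candidates[which.max(score)]
    next_x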
OVERLY LITERAL ANSWER:
If by 'reduce uncertainty' you mean 'increase confidence' or 'explain a higher proportion of the variance in the sample', then in the case of linear regression, choosing an arbitrarily extreme, high-leverage value will increase the amount of variance explained by your model and inflate any significance tests it produces. Doing so, however, makes your model highly dependent on the extreme point(s), violates the assumption of IID sampling, and means the model may not really be trusted.
For example, say you have 5 random observations from a larger population:
       x      y
1 -7.390 -9.380
2 -4.580 -4.610
3 -0.723 -2.400
4  0.827  0.463
5  3.470  6.710
A linear model using least squares with the above 5 points explains ~92% of the variance and has a t-value for x of ~6.9.
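If you want to reproduce those figures yourself (the data frame name is just for illustration), the fit is one line; the ~92% corresponds to the adjusted R-squared in the summary:

    dat <- data.frame(
      x = c(-7.390, -4.580, -0.723, 0.827, 3.470),
      y = c(-9.380, -4.610, -2.400, 0.463, 6.710)
    )

    fit <- lm(y ~ x, data = dat)
    summary(fit)    # adjusted R-squared ~0.92, t-value for x ~6.9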
Let's say you have the power to substitute one of your observations with an arbitrarily large x value from your population, say |x| > 45. Your new observation might come out to be:
      x     y
6 45.30 41.80
If you now build a model with this point and any four of the previous points, the variance explained will be >98% and the t-value for x will be >15.0. Though this seems to have substantially improved things, for the reasons mentioned previously this is not really a satisfying answer...
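To make the point concrete (again a sketch, reusing the hypothetical dat from above and dropping the last of the original five points), here is the substitution and refit; the hat values show how completely the extreme point dominates the fit, which is exactly the problem:

    dat2 <- rbind(dat[1:4, ], data.frame(x = 45.30, y = 41.80))

    fit2 <- lm(y ~ x, data = dat2)
    summary(fit2)     # variance explained now >98%, t-value for x >15

    # The new point dominates the fit: its leverage dwarfs the others'
    hatvalues(fit2)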