Introduction
I have been interested in this topic for about 7 years now; it resulted in my PhD thesis, Time series: aggregation, disaggregation and long memory, which focused on the specific question of the cross-sectional disaggregation problem for the AR(1) scheme.
Data
When working with different approaches to aggregation, the first question you need to clarify is what type of data you are dealing with (my guess is spatial, the most thrilling one). In practice you may consider temporal aggregation (see Silvestrini, A. and Veredas, D. (2008)), cross-sectional aggregation (I loved the article by Granger, C. W. J. (1990)), or both time and space (spatial aggregation is nicely surveyed in Giacomini, R. and Granger, C. W. J. (2004)).
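To make the distinction concrete, here is a minimal numpy sketch of the two basic operations: temporal aggregation (summing monthly flows into quarters) and cross-sectional aggregation (summing the same variable across micro units). The gamma-distributed "flows" and the sample sizes are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly "flow" data (say, monthly output), 10 years long.
monthly = rng.gamma(shape=2.0, scale=50.0, size=120)

# Temporal aggregation: sum each block of 3 months into a quarterly flow
# (for a stock variable you would take the last month or the average instead).
quarterly = monthly.reshape(-1, 3).sum(axis=1)

# Cross-sectional aggregation: sum the same variable across N micro units.
n_units = 500
micro = rng.gamma(shape=2.0, scale=50.0, size=(n_units, 120))
aggregate = micro.sum(axis=0)         # one "macro" series, still monthly

print(monthly.shape, quarterly.shape, aggregate.shape)   # (120,) (40,) (120,)
```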
Answers (lengthy)
Now, answering your questions, let me put some rough intuition first. The problems I meet in practice are often based on inexact data. Andy's assumption that
you can measure a time series of observations at any level of precision in time
seems too strong for macro-econometrics, though it is fine for financial and micro-econometrics, or for any experimental field where you do control the precision quite well. So I have to bear in mind that my monthly time series are less precise than my yearly data. Besides, more frequent time series, at least in macroeconomics, tend to have seasonal patterns that may lead to spurious results (it is the seasonal components that correlate, not the series themselves), so you need to seasonally adjust your data, which is another source of reduced precision for higher-frequency data. Working with cross-sectional data has shown me that a high level of disaggregation brings problems of its own, most likely lots of zeroes to deal with. For instance, a particular household in a panel may purchase a car once every 5-10 years, but aggregated demand for new (or used) cars is much smoother, even for a small town or region.
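The car-purchase example can be mimicked with a tiny simulation; the purchase probability and panel dimensions below are made-up illustrative numbers, not estimates from real data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical panel: 10,000 households observed monthly for 10 years.
# Each household buys a car with probability ~1/84 per month (roughly once
# every seven years), so the micro series are almost entirely zeros.
n_households, n_months = 10_000, 120
purchases = rng.binomial(n=1, p=1 / 84, size=(n_households, n_months))

aggregate_demand = purchases.sum(axis=0)      # total purchases per month

print("share of zero observations, micro level:", np.mean(purchases == 0))
print("share of zero months, aggregate demand :", np.mean(aggregate_demand == 0))

# Relative variability drops sharply after aggregation: the aggregate is smooth.
cv = lambda z: z.std() / z.mean()             # coefficient of variation
print("CV, micro observations:", round(cv(purchases.astype(float)), 2))
print("CV, aggregate demand  :", round(cv(aggregate_demand.astype(float)), 2))
```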
The weakest point is that aggregation always results in a loss of information: you may have the GDP produced by a cross-section of EU countries over a whole decade (say 2001-2010), but you will lose all the dynamic features that would be visible in a detailed panel data set. Large-scale cross-sectional aggregation may turn out to be even more interesting: roughly speaking, you take simple things (short-memory AR(1) processes), average them over a large population, and get a "representative" long-memory agent that resembles none of the micro units (one more stone thrown at the representative agent concept; a small simulation below illustrates this). So aggregation ~ loss of information ~ different properties of the objects, and you would like to take control over the level of this loss and/or the new properties. In my opinion, it is better to have precise micro-level data at as high a frequency as possible, but... there is the usual measurement trade-off: you can't be perfect and precise everywhere :)
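Here is a rough simulation sketch of that aggregation mechanism, in the spirit of Granger (1980): many independent short-memory AR(1) series whose coefficients are drawn from a Beta-type distribution with substantial mass near unity (the particular parameters are illustrative, not the specification from my thesis). The autocorrelations of the cross-sectional average decay much more slowly than those of an AR(1) with the average coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)

n_units, n_obs, burn = 2000, 1000, 500

# Heterogeneous AR(1) coefficients: alpha_i^2 ~ Beta(1, 0.5), which puts a
# lot of mass near 1 (an illustrative choice in the spirit of Granger, 1980).
alphas = np.sqrt(rng.beta(1.0, 0.5, size=n_units))

# Simulate each micro unit: x_{i,t} = alpha_i * x_{i,t-1} + eps_{i,t}.
x = np.zeros((n_units, n_obs + burn))
eps = rng.standard_normal((n_units, n_obs + burn))
for t in range(1, n_obs + burn):
    x[:, t] = alphas * x[:, t - 1] + eps[:, t]
x = x[:, burn:]                       # drop the burn-in period

aggregate = x.mean(axis=0)            # the "representative" aggregated series

def acf(series, lag):
    """Sample autocorrelation at a given lag."""
    s = series - series.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

mean_alpha = alphas.mean()
for lag in (1, 5, 20, 50, 100):
    print(f"lag {lag:3d}:  aggregate acf = {acf(aggregate, lag):5.2f}   "
          f"AR(1) with mean coefficient = {mean_alpha ** lag:5.2f}")
```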
Technically, when producing any regression analysis you do need more room (degrees of freedom) to be more or less confident that your results are, at least statistically, not junk, though they may still be a-theoretical and junk :) So I put equal weight on questions 1 and 2 (and usually choose quarterly data for macro-analysis). Answering the 3rd sub-question: in practical applications you decide what is more important to you, more precise data or more degrees of freedom. If you take the assumption quoted above into account, more detailed (or higher-frequency) data is preferable.
Probably the answer will be edited later, after some sort of discussion, if any.