Does an average of the 100 calls now count as 1 measurement?
Basically, you can measure anything and define the measurements to be the values of an (unknown) random variable. In your case this variable is X := "average duration across 100 function calls given parameter n", so 100 calls count as one measurement.
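As a minimal sketch of what one such measurement could look like in Python (the helper `timed_average` and the stand-in workload are hypothetical, not from the question):

```python
import time

def timed_average(func, n, calls=100):
    """One 'measurement': the average duration of `calls` invocations of func(n)."""
    start = time.perf_counter()
    for _ in range(calls):
        func(n)
    return (time.perf_counter() - start) / calls  # one value of X

# Example with a stand-in workload:
x = timed_average(lambda n: sum(range(n)), n=10_000)
print(f"{x:.6e} s per call")
```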
Does it have a standard error?
Yes, given you have enough data (at least 2 points), you can compute the standard deviation / standard error of any variable. The question is whether that value is meaningful, which is only the case when the distribution is (approximately) symmetric. If you are not sure about that, you are better off considering the percentiles of the data or a robust scale measure (see On univariate outlier tests (or: Dixon Q versus Grubbs)).
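For concreteness, here is one way to compute both the classical and the robust summaries in Python (the helper name `summarize` is mine; `ddof=1` gives the sample standard deviation):

```python
import numpy as np

def summarize(x):
    """Classical and robust summaries of a 1-D sample of measurements."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sd = x.std(ddof=1)                     # sample standard deviation
    q25, q50, q75 = np.percentile(x, [25, 50, 75])
    return {
        "n": n,
        "mean": x.mean(),
        "sd": sd,
        "se": sd / np.sqrt(n),             # standard error of the mean
        "median": q50,
        "IQR": q75 - q25,                  # robust scale measure
        "MAD": np.median(np.abs(x - q50)), # median absolute deviation
    }
```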
Is this approach completely orthogonal to the earlier use of the standard error?
No. Let's assume a normal distribution $N(\mu,\sigma^2)$. When one repeatedly draws a sample of 100 values from this distribution and calculates the average across these values, this is the same as estimating the mean of $N(\mu,\sigma^2)$ based on samples of size 100. The sampling distribution of this estimator is normal with mean $\mu$ and variance $\left(\frac{\sigma}{\sqrt{n}}\right)^2 = \frac{\sigma^2}{n}$ (with $n=100$ here); when $\sigma$ has to be estimated from the data, the standardized mean is t-distributed, which is approximately normal for large $n$.
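A quick simulation illustrates this equivalence (the chosen values of $\mu$, $\sigma$ and the number of repetitions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 100, 10_000

# Repeatedly draw samples of size n from N(mu, sigma^2) and average each one.
means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(means.mean())       # close to mu = 5.0
print(means.std(ddof=1))  # close to sigma / sqrt(n) = 0.2
```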
In summary: I'd check that the average duration of the function calls is approximately normally (or at least symmetrically) distributed and just report the standard error. If this is not the case, I'd report the percentiles instead. In the latter case (or in general), a boxplot provides a good visualization.
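To close the loop, a short sketch of that check plus the boxplot, with simulated durations standing in for the real measurements:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Simulated stand-in for the measured per-call averages.
rng = np.random.default_rng(1)
durations = rng.normal(0.010, 0.002, size=50)

stat, p = stats.shapiro(durations)  # normality check
print(f"Shapiro-Wilk p = {p:.3f}")  # large p: no evidence against normality

plt.boxplot(durations)
plt.ylabel("average duration per call [s]")
plt.show()
```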