I am currently evaluating thermal-impedance measurements of different semiconductor devices. In order to evaluate these measurements properly, I have to determine the derivative of the thermal-impedance function $Z_{th}(a)$, where $a = \ln(t)$. $Z_{th}(a)$ is a continuous function, but due to the way the measurement works, I only have a set of data points $\{(a_1, Z_1), \dots, (a_{100}, Z_{100})\}$. Because of this, the derivative is estimated using linear regressions over small subsets of the data points (e.g. 9 of the 100 data points at a time): I would calculate the first slope using a linear regression over the points $(a_1, Z_1)$ to $(a_9, Z_9)$, the next slope using the points $(a_2, Z_2)$ to $(a_{10}, Z_{10})$, and so on.
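
For reference, this is a minimal sketch of the sliding-window slope estimation I described, assuming the data are stored in NumPy arrays `a` and `Z` (the array names and the use of `np.polyfit` are my own choices for illustration, not part of the actual measurement code):

```python
import numpy as np

def window_slopes(a, Z, window=9):
    """Estimate dZ/da by fitting a straight line to each sliding window of points."""
    slopes = []
    for i in range(len(a) - window + 1):
        a_win = a[i:i + window]
        Z_win = Z[i:i + window]
        # degree-1 least-squares fit; np.polyfit returns [slope, intercept]
        slope, _intercept = np.polyfit(a_win, Z_win, 1)
        slopes.append(slope)
    return np.asarray(slopes)
```
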
Now I am faced with the following question: when I perform a linear regression over a small set of points $\{(x_1, y_1), \dots, (x_9, y_9)\}$ that belong to a nonlinear function, I get one resulting slope. How do I find the point $x$ where this resulting slope is closest to the slope of the nonlinear function? Should I just assume it is closest in the middle of my regression interval? Would "middle" mean $x = \tfrac{1}{2}(x_1 + x_9)$, or would it be the average of all my $x$ values (since the points $x_i$ might not be equally spaced)?
My intuition would be the latter, but I am wondering if there is a better way to do this.
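
To make the comparison concrete, here is the kind of small numerical check I have in mind, using a synthetic nonlinear function as a stand-in for my data ($f(x) = e^x$ is just an assumption I picked because its derivative is easy to invert; it is not my actual $Z_{th}$ curve):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 9))  # non-equidistant sample points
y = np.exp(x)                          # f(x) = exp(x), so f'(x) = exp(x)

slope, _intercept = np.polyfit(x, y, 1)  # OLS slope over the window

x_mid = 0.5 * (x[0] + x[-1])   # midpoint of the regression interval
x_mean = np.mean(x)            # average of the x values
x_star = np.log(slope)         # point where f'(x) exactly equals the fitted slope

print(f"interval midpoint:   {x_mid:.4f}")
print(f"mean of x values:    {x_mean:.4f}")
print(f"exact match point:   {x_star:.4f}")
```

Comparing `x_mid` and `x_mean` against `x_star` for a known test function is how I would judge which choice is "closest", but I suspect there may be a more principled argument than such a numerical check.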