In the first one, the quantity involves an unknown parameter or parameters, and its distribution DOES depend on the distribution of the data. What the distribution doesn't depend on is that parameter or parameters.
So for $X\sim N(\theta,1)$, if we take $Q(\underline{x};\theta) = \bar{x}-\theta$, the distribution of $Q$ is $N(0,1/n)$. This is useful, because you can immediately write down an interval for $Q$ and hence back out an interval for $\theta$.
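To make that concrete, here's a quick simulation sketch (standard library only; the particular $\theta$, $n$, seed and replication count are arbitrary choices of mine): since $Q=\bar{x}-\theta \sim N(0,1/n)$ no matter what $\theta$ is, the interval $\bar{x} \pm z_{\alpha/2}/\sqrt{n}$ backed out from the pivot should cover the true $\theta$ about $95\%$ of the time.

```python
import math
import random

random.seed(0)
theta, n, z = 2.5, 50, 1.96   # true mean, sample size, 95% normal quantile
reps = 2000
cover = 0
for _ in range(reps):
    xs = [random.gauss(theta, 1) for _ in range(n)]
    xbar = sum(xs) / n
    # Pivot: Q = xbar - theta ~ N(0, 1/n), so
    # P(-z/sqrt(n) < xbar - theta < z/sqrt(n)) = 0.95 for ANY theta.
    # Inverting the inequality gives an interval for theta:
    lo, hi = xbar - z / math.sqrt(n), xbar + z / math.sqrt(n)
    cover += (lo < theta < hi)

print(cover / reps)  # empirical coverage, close to 0.95
```

The point of the pivot is in the "inverting" step: the probability statement about $Q$ holds for every $\theta$, which is exactly what lets you turn it into an interval for $\theta$.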
Note that if $X$ had a different distribution, $Q$ would no longer be $N(0,1/n)$. It's NOT distribution-free; its distribution is free of $\theta$ (but note that $Q$ itself is still a function of $\theta$).
In the second one, the distribution of the statistic $T(\underline{x})$ (a statistic that doesn't itself involve any unknown parameters), evaluated at some specific given value of the population quantities/parameters (so that you're under a null, not an alternative*), doesn't depend on the distribution of the data.
So, for example, under the null, the distribution of the statistic in a sign test doesn't depend on the distribution the data were drawn from (it's always binomial, as long as the data are continuous, independent, etc). But said distribution sure as heck changes if you change the median difference from zero (i.e. move away from the null).
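Here's a small simulation sketch of that claim (my own illustration; sample size, seed and distributions are arbitrary): under the null (median $0$), the sign statistic — the count of observations above $0$ — is $\text{Binomial}(n, 1/2)$ whether the data are normal or Cauchy, so its mean should be $n/2$ in both cases.

```python
import math
import random

random.seed(1)

def sign_stat(sample):
    # Sign-test statistic: number of observations above the
    # hypothesized median, here 0.
    return sum(x > 0 for x in sample)

n, reps = 20, 5000
# Two very different continuous distributions, each with median 0 (the null):
draw_normal = lambda: random.gauss(0, 1)
draw_cauchy = lambda: math.tan(math.pi * (random.random() - 0.5))

means = []
for draw in (draw_normal, draw_cauchy):
    stats = [sign_stat([draw() for _ in range(n)]) for _ in range(reps)]
    means.append(sum(stats) / reps)

# Under the null the statistic is Binomial(20, 1/2) for BOTH families,
# so both empirical means should sit near n/2 = 10.
print(means)
```

Shift either distribution's median away from $0$, though, and the success probability of each sign moves away from $1/2$ — the binomial null distribution no longer applies, which is the "changes under the alternative" point above.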
* If its distribution didn't depend on whatever population quantity or effect the test was trying to pick up under the alternative, it would be useless as a test: the power would always equal the significance level.
There may well be circumstances where $Q = T(\underline{x}-\theta)$ is pivotal across, say, a large class of location families, and where $T(\underline{x})$ is distribution-free when $\theta=0$. I think it should work in a variety of circumstances; the aforementioned sign test would be an obvious place to start.
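For the sign-test case that last paragraph suggests, a quick simulation sketch (my own construction; the shift $\theta$, families, seed and sizes are arbitrary): take $Q(\underline{x};\theta)$ to be the number of $x_i$ exceeding $\theta$. Across location families with median $\theta$ — here a shifted normal and a shifted exponential (recentred by $\ln 2$ so its median is $\theta$) — $Q$ should be $\text{Binomial}(n, 1/2)$ regardless of both $\theta$ and the family, i.e. pivotal over that whole class.

```python
import math
import random

random.seed(2)

n, reps, theta = 15, 4000, 3.0

# Two location families, each shifted so the median equals theta:
families = [
    lambda: theta + random.gauss(0, 1),                      # normal, median theta
    lambda: theta + random.expovariate(1.0) - math.log(2),   # exponential recentred to median theta
]

q_means = []
for draw in families:
    # Q = T(x - theta): count of observations above theta.
    qs = [sum(x > theta for x in (draw() for _ in range(n))) for _ in range(reps)]
    q_means.append(sum(qs) / reps)

# Binomial(15, 1/2) in both cases: empirical means near n/2 = 7.5.
print(q_means)
```

So here $Q$ is free of both the parameter and the family (within the class of continuous location families), while $T(\underline{x})$ alone is distribution-free only under the null $\theta=0$.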