Say that one has data over time, t, on an outcome, y. There is an event that happens at t==0. One is interested in testing for evidence that the event is related to (I am being cautious about a causal interpretation) a change in the outcome. Importantly, there are many observations for each t (as opposed to a traditional time series, where there is only one observation per t).

Suppose also that this is not a situation where the event happened in only some of the units (e.g., certain states), but rather one where there is only a single unit. This rules out an analysis such as difference-in-differences, as there is no control group.

In a situation like this, should I use an interrupted time series analysis or a regression discontinuity design? If both are fine, what are the differences/advantages/disadvantages? (I am also much more familiar with RDD than with interrupted time series; where is a good place to learn about the latter?)
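To make the setup concrete, here is a minimal sketch (on hypothetical simulated data, not my actual data) of the interrupted-time-series-style segmented regression I have in mind, where the coefficient on the post-event indicator is the level shift at t == 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: many observations per time period t,
# with an event at t == 0 that shifts the level of y by 2 units.
t = np.repeat(np.arange(-10, 11), 30).astype(float)
D = (t >= 0).astype(float)                  # post-event indicator
y = 1.0 + 0.2 * t + 2.0 * D + rng.normal(0, 0.5, t.size)

# Segmented (interrupted time series) regression:
# y = b0 + b1*t + b2*D + b3*(D*t), where b2 is the level shift at t == 0
# and b3 is the change in slope after the event.
X = np.column_stack([np.ones_like(t), t, D, D * t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated jump at the event: {beta[2]:.2f}")  # close to the true 2.0
```

An RDD-in-time analysis would, as I understand it, instead fit local regressions within a narrow bandwidth around t == 0; part of my question is whether that restriction is worth it here.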

bill999
    There is a nice discussion of this issue here: https://www.annualreviews.org/doi/pdf/10.1146/annurev-resource-121517-033306. I think the usual concerns with using time as the forcing variable in RDD are that (1) you might not have a lot of observations near the cutoff, (2) the treatment might be something that evolves over time, so looking in a narrow window is not meaningful, (3) serial correlation in outcomes and errors, and (4) endogenous timing. You can test some of these and model others. You probably need to ask whether looking at the effect in a narrow band is meaningful in your application. – gfgm Sep 20 '20 at 22:03
  • Thank you, @gfgm, both for the article and for listing the concerns. Very helpful! – bill999 Sep 21 '20 at 14:40

0 Answers