
I am using a generalized difference-in-differences (DiD) model with staggered treatment. The law (i.e., the treatment) is implemented at different times across countries.

From this discussion by @Thomas Bilach here, it seems that you can stop the sample around 1, 2, or 3 years after the laws are implemented in each country. Is there any literature supporting this? I am wondering if this understanding is correct? And does it mean that, if we stop the sample two years after the event date in each country, we need to drop all of that country's observations beyond those two years?

Louise
  • Welcome. How do you want to report your coefficients? Your question suggests to me that you want to do some sort of event study. Is this correct? – Thomas Bilach Aug 31 '21 at 05:09
  • Hi @ThomasBilach. Yes, something like the impact of laws on firms' asset growth globally, where each country passed the law in a different year. And I suspect the impact of the law will fade after 2, 3, or 4 years. Could I stop the sample at the 4th year for each country? – Louise Aug 31 '21 at 07:15

1 Answer


From this discussion of @Thomas Bilach..., it seems that you can stop the sample around 1, 2, or 3 years after the laws are implemented in each country.

I do not recommend "stopping" or "truncating" the event time. Rather, I simply advise reporting (plotting) the appropriate number of leads and lags within a specific time interval; effects should concentrate around the law change. The width of the event window, more appropriately referred to as the effect window, is going to vary across studies.

I am wondering if this understanding is correct?

I'm unaware of any literature suggesting the "optimal" lead/lag structure. It is entirely context-dependent.

And does it mean that, if we stop the sample two years after the event date in each country, we need to drop all of that country's observations beyond those two years?

You do not need to discard all observations two years after the event date for a treated cohort. I imagine you want to achieve some correspondence between calendar time and event time. In staggered adoption designs, where some countries adopt a law early while others adopt late, this cannot be achieved. In any one slice of event time you have a mixture of treated cohorts. If you censor (discard) all observations two periods after the event year to achieve balance in event time, then your panel invariably becomes unbalanced in calendar time. By definition, the "time to event" in a staggered adoption design is never in harmony with calendar time.
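To make this mismatch concrete, here is a minimal sketch using made-up adoption years (the countries, years, and adoption dates are purely illustrative, not from your data). The same slice of event time pools different calendar years, and censoring beyond the second lag unbalances the panel in calendar time:

```python
import pandas as pd

# Hypothetical staggered panel: two cohorts adopt the law in different years
df = pd.DataFrame({
    "country": ["A"] * 6 + ["B"] * 6,
    "year": list(range(2010, 2016)) * 2,
})
adoption = {"A": 2011, "B": 2014}  # A adopts early, B adopts late
df["event_time"] = df["year"] - df["country"].map(adoption)

# Event time 0 corresponds to different calendar years across cohorts
print(df.loc[df["event_time"] == 0, ["country", "year"]])

# Censoring beyond the second lag drops A's 2014-2015 rows but none of B's,
# leaving the panel unbalanced in calendar time
censored = df[df["event_time"] <= 2]
print(censored.groupby("country")["year"].max())
```

Balance in event time and balance in calendar time pull in opposite directions here; you can have one or the other, not both.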

Truncating the effect window also limits the number of country-year observations among the early-adopter cohorts, while the late-adopter cohorts may not even require any censoring beyond the second lag. Suppose a late-adopter cohort enacts the new legislation in the last year of your panel. By design, you can only estimate an instantaneous treatment effect for that cohort. Note also that the late-adopter cohort contributes more observations simply because it is treated much later! You'll also be working with different subsets of your data as you alter the lead/lag structure. In short, I would not recommend dropping observations beyond the limits of your effect window.

Here are a few ways to proceed.

First, you could report (plot) all lead and lag coefficients. This is arguably the most robust and transparent approach when reporting event study coefficients: by estimating a full set of leads and lags, you trace out the full dynamic response to treatment.

Second, you could "bin" the endpoint(s). For example, when we "bin" the second lag, we set the variable equal to 1 in the second year after the adoption year and in all periods going forward, and 0 otherwise. Put differently, a "binned" second lag 'turns on' (i.e., switches from 0 to 1) two years after the initial year of change and 'stays on' until the end of your panel. Note this implicitly assumes a constant treatment effect beyond that point, which is appropriate if you're only interested in dynamic treatment effects in the short term.

Third, you could estimate all lead and lag coefficients as before, but only report (plot) those within the effect window. For example, evaluators may estimate 20 leads and 20 lags, but only report two years before the law change and three years after. Whether or not you include binned endpoints is entirely up to you. In my survey of the literature on difference-in-differences, it is quite common for evaluators to report only the leads and lags within a predetermined window. The window where effects concentrate should be defined by you.
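The distinction between a standard lag dummy and a "binned" endpoint can be sketched in a few lines. The adoption year and panel range below are made up for illustration:

```python
import pandas as pd

# One treated country adopting in 2013, observed 2010-2018 (hypothetical)
df = pd.DataFrame({"year": range(2010, 2019)})
df["event_time"] = df["year"] - 2013

# Standard lag dummies: one indicator per post-treatment year
for k in range(0, 3):
    df[f"lag{k}"] = (df["event_time"] == k).astype(int)

# "Binned" second lag: switches on two years after adoption and stays on
# through the end of the panel, absorbing all later event times
df["lag2_binned"] = (df["event_time"] >= 2).astype(int)

print(df[["year", "lag2", "lag2_binned"]])
```

With the binned version, a single coefficient summarizes the effect from the second lag onward, which is what imposes the constant-effect assumption beyond that point.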

Thomas Bilach