The short answer is that $D_{st+j}$ is the product of the time and treatment dummy indicators; its coefficient tests the difference-in-differences effect. However, how you create the indicator variables affects the interpretation of the coefficients.
Let's start with the simple case of two groups (treatment and control) and two time periods (before and after). If we create the indicator variables as 0/1 dummy codes with treatment = 1/control = 0 and before = 0/after = 1, then their product equals 1 for treatment observations in the after period and 0 for all others. Assuming a balanced design (equal sample sizes in all conditions) and no other variables in the model, a basic regression model would produce the following:

1. an intercept that equals the mean for the control group at time 0 (before);
2. a treatment coefficient that equals the treatment-control difference at time 0 (before);
3. a time coefficient that equals the time effect for the control group; and
4. an interaction coefficient between treatment and time that is the difference-in-differences, i.e., the difference in the change over time for the treatment versus control conditions.

Only the last coefficient is of any real interpretive value in this model. Using effect coding instead produces meaningful coefficients for the intercept, treatment, and time terms, with the intercept reflecting the grand mean, the treatment coefficient reflecting 1/2 of the marginal treatment effect, and the time coefficient reflecting 1/2 of the marginal time effect (you could also use .5/-.5). (Note that effect coding with 3 or more conditions is 1/0/-1, with the -1 used for the referent category.)
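As a sketch of the 2×2 case, here is a small numpy simulation (all cell means and sample sizes are made up for illustration) showing that, on noiseless balanced data, ordinary least squares recovers exactly the four quantities listed above under dummy coding, and the grand-mean/half-effect pattern under effect coding:

```python
import numpy as np

# Hypothetical, noiseless cell means for a balanced 2x2 design (made-up numbers).
means = {(0, 0): 10.0,   # control, before
         (0, 1): 12.0,   # control, after
         (1, 0): 11.0,   # treatment, before
         (1, 1): 16.0}   # treatment, after -> DiD = (16-11) - (12-10) = 3

n = 5  # observations per cell (balanced design)
cells = [(t, p) for t in (0, 1) for p in (0, 1) for _ in range(n)]
treat = np.array([t for t, _ in cells], dtype=float)
post = np.array([p for _, p in cells], dtype=float)
y = np.array([means[c] for c in cells])

# Dummy coding: intercept, treatment dummy, time dummy, and their product.
X = np.column_stack([np.ones_like(y), treat, post, treat * post])
b = np.linalg.lstsq(X, y, rcond=None)[0]
# b = [10, 1, 2, 3]: control mean at time 0, treatment-control gap at time 0,
# control-group time effect, and the difference-in-differences.

# Effect coding (1/-1): the intercept becomes the grand mean, and the
# treatment and time coefficients become half the marginal effects.
te, pe = 2 * treat - 1, 2 * post - 1
Xe = np.column_stack([np.ones_like(y), te, pe, te * pe])
be = np.linalg.lstsq(Xe, y, rcond=None)[0]
# be = [12.25, 1.25, 1.75, 0.75]; note the interaction is now DiD / 4
# (with .5/-.5 codes it would equal the DiD directly).
```

Because the data are noiseless, the least-squares fit reproduces the cell means exactly, so the coefficients equal the population quantities rather than estimates of them.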
The model you present above extends this out to more time periods. If time 0 (before) is the referent category, then the typical 0/1 dummy interaction terms reflect the difference-in-differences effect for each time t relative to time 0. With different indicator coding, you can test different hypotheses (e.g., time 0 versus times 1+2+3 combined).
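The multi-period case can be sketched the same way (again with made-up, noiseless cell means, and time 0 as the referent): each treatment × time-dummy product recovers the difference-in-differences for that period relative to time 0.

```python
import numpy as np

# Hypothetical noiseless cell means; rows = group (0 = control, 1 = treated),
# columns = times 0..3, with time 0 as the referent ("before") period.
means = np.array([[10.0, 12.0, 13.0, 14.0],   # control
                  [11.0, 16.0, 18.0, 15.0]])  # treated

n = 4  # observations per group-by-time cell (balanced design)
g, t = np.meshgrid(np.arange(2), np.arange(4), indexing="ij")
g = np.repeat(g.ravel(), n)
t = np.repeat(t.ravel(), n)
y = means[g, t]

# Design: intercept, treatment dummy, time dummies for t = 1..3,
# and the treatment x time-dummy products.
d = np.column_stack([(t == k).astype(float) for k in (1, 2, 3)])
gf = g.astype(float)
X = np.column_stack([np.ones_like(y), gf, d, gf[:, None] * d])
b = np.linalg.lstsq(X, y, rcond=None)[0]
# b[5:] are the difference-in-differences at times 1, 2, 3 relative to time 0:
# (16-11)-(12-10)=3, (18-11)-(13-10)=4, (15-11)-(14-10)=0.
```

Testing a pooled contrast such as time 0 versus times 1+2+3 amounts to replacing the three separate time dummies with a single recoded indicator, which changes which hypothesis the interaction coefficient tests.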