The mixed model is fine for the repeated measures. There's no need to subsequently split the data into two subsets and fit models for each timepoint separately. That will just lose power. Be aware that when you have an interaction, the 'main effects' are simple effects for the situation where the other interacting variable is exactly $0$. Since your timepoints seem to be $0$ and $1$ (during and after), the test of the main effect of event_valence
is its simple effect during the event (timepoint $= 0$). What you still need is the simple effect for after. This can be calculated from your betas and their variance-covariance matrix, but many people find that complicated and tedious. A simple hack is to relevel timepoint
so that after
is the reference level and refit the model. The main effect of event_valence
is then its simple effect for after, and its test is the test of that simple effect.
Here's a simple example, coded in R (and adapted from a largely unrelated answer of mine here):
set.seed(1) # this makes the example exactly reproducible
obs = data.frame(v1=sample(c("A","B"), size=100, replace=TRUE),
                 v2=sample(c("Y","Z"), size=100, replace=TRUE),
                 dv=rnorm(100))
B = ifelse(obs$v1=="B", 1, 0)
Z = ifelse(obs$v2=="Z", 1, 0)
obs$dv = with(obs, .6*B + .7*Z + .5*B*Z + dv)
modelA = lm(dv~v1*v2, data=obs)
summary(modelA)$coefficients
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.28470179 0.1988959 1.4314109 0.155560012
# v1B 0.07341859 0.2812813 0.2610148 0.794640188
# v2Z 0.33010778 0.2663352 1.2394449 0.218202270
# v1B:v2Z 1.01996406 0.3832608 2.6612795 0.009125761
obs$v1 = relevel(factor(obs$v1), ref="B")  # factor() is needed in R >= 4.0, where
                                           # strings are no longer auto-converted
modelB = lm(dv~v1*v2, data=obs)
summary(modelB)$coefficients
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.35812037 0.1988959 1.8005416 7.491550e-02
# v1A -0.07341859 0.2812813 -0.2610148 7.946402e-01
# v2Z 1.35007184 0.2755983 4.8986950 3.906155e-06
# v1A:v2Z -1.01996406 0.3832608 -2.6612795 9.125761e-03
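For completeness, here is the "from the betas" route mentioned above: the simple effect of v2 where v1 is "B" is the sum of the v2Z and v1B:v2Z coefficients, and its variance is the sum of their variances plus twice their covariance. This sketch recreates the example data so it runs on its own (exact numbers may differ slightly across R versions, since sample()'s algorithm changed in R 3.6):

```r
set.seed(1)  # recreate the example data & model from above
obs = data.frame(v1=sample(c("A","B"), size=100, replace=TRUE),
                 v2=sample(c("Y","Z"), size=100, replace=TRUE),
                 dv=rnorm(100))
obs$dv = with(obs, .6*(v1=="B") + .7*(v2=="Z") + .5*(v1=="B")*(v2=="Z") + dv)
modelA = lm(dv~v1*v2, data=obs)

b = coef(modelA)  # the betas
V = vcov(modelA)  # their variance-covariance matrix
est   = b["v2Z"] + b["v1B:v2Z"]  # simple effect of v2 where v1 == "B"
se    = sqrt(V["v2Z","v2Z"] + V["v1B:v2Z","v1B:v2Z"] + 2*V["v2Z","v1B:v2Z"])
t.val = est/se  # identical to the t value in the v2Z row of modelB
```

With the output printed above, the estimate is $0.33010778 + 1.01996406 = 1.35007184$, which is exactly the v2Z row of modelB; that agreement is why the releveling hack works.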
The models are identical, but modelA
has "A"
as the reference level of v1
, whereas modelB
has "B"
as the reference level instead. Thus, the main effect of v2
in each model is the simple effect of v2
where v1
is at its reference level, and its test assesses whether that simple effect differs from $0$.
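If you want to convince yourself that the two parameterizations really are the same model, you can compare the fits directly. This sketch recreates both models from the example and checks that they make identical predictions and have identical likelihoods:

```r
set.seed(1)  # recreate the example data & both models from above
obs = data.frame(v1=sample(c("A","B"), size=100, replace=TRUE),
                 v2=sample(c("Y","Z"), size=100, replace=TRUE),
                 dv=rnorm(100))
obs$dv = with(obs, .6*(v1=="B") + .7*(v2=="Z") + .5*(v1=="B")*(v2=="Z") + dv)
modelA = lm(dv~v1*v2, data=obs)
obs$v1 = relevel(factor(obs$v1), ref="B")
modelB = lm(dv~v1*v2, data=obs)

all.equal(fitted(modelA), fitted(modelB))                        # TRUE: same predictions
all.equal(as.numeric(logLik(modelA)), as.numeric(logLik(modelB)))  # TRUE: same likelihood
```

Only the meaning of the individual coefficients changes; the fitted model, and therefore anything you compute from it, does not.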
One other point: Be cautious about interpreting a non-significant result for event_valence
as "not affected by the objective event valence". That's accepting the null hypothesis; hypothesis testing doesn't work that way. It may help you to read my answer to: Why do statisticians say a non-significant result means “you can't reject the null” as opposed to accepting the null hypothesis?