
I measured participants' ratings of how they felt about an individually recalled event, on a scale from -10 (very negative) to +10 (very positive), at two time points: t1, directly when the event happened, and t2, after further thinking about the event.

Additionally, I assessed the objective valence of this event on a scale from -10 (very negative) to +10 (very positive). My hypothesis is that at t1, the feeling ratings are uniformly negative and not affected by the objective event valence. At t2, however, the ratings should differ, and the objective event valence should significantly predict them.

I am struggling to figure out which analysis is best to test these assumptions. I was thinking of a mixed model (rating ~ timepoint*event_valence + (1|subject)) to obtain a significant interaction between time point and event valence, and then splitting the regression by time point to test whether the slope is zero at t1 and different from zero at t2.
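
In case the syntax helps, this is roughly how I would fit it in R with lme4/lmerTest (the data frame d and its column names below are just placeholders for my actual data):

library(lmerTest)  # loads lme4 and adds p-values for the fixed effects
# d: one row per subject x time point, with columns
#    rating, timepoint (t1/t2), event_valence (-10..+10), subject
m <- lmer(rating ~ timepoint*event_valence + (1|subject), data = d)
summary(m)         # look at the timepoint:event_valence interaction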

Is this an appropriate approach, or do you have any other suggestions?

Lafayote

1 Answer


The mixed model is fine for the repeated measures. There's no need to subsequently split the data into two subsets and fit models for each timepoint separately. That will just lose power.

Be aware that when you have an interaction, the 'main effects' are simple effects for the situation where the other interacting variable is exactly $0$. Since your timepoints seem to be $0$ and $1$ (during and after), the test of the main effect of event_valence is the simple effect for when the event happened. That leaves you with just the simple effect for after still to test. This can be calculated from your betas and their variance-covariance matrix, but it's a little complicated and tedious for many people. A simple hack is to relevel timepoint so that after is the reference level and refit the model. The main effect of event_valence is then its simple effect for after, and its test is the test of that simple effect.
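
If you did want the coefficient / variance-covariance route, it is only a few lines. This is just a sketch: it assumes an lme4 fit called fit with timepoint coded numerically as $0$/$1$, so that the coefficient names match the question's formula (with timepoint as a factor, the names would differ):

b <- fixef(fit)                # fixed-effect estimates
V <- as.matrix(vcov(fit))      # their variance-covariance matrix
# simple slope of event_valence when timepoint = 1 (after):
est <- unname(b["event_valence"] + b["timepoint:event_valence"])
se  <- sqrt(V["event_valence","event_valence"] +
            V["timepoint:event_valence","timepoint:event_valence"] +
            2*V["event_valence","timepoint:event_valence"])
c(estimate=est, SE=se, z=est/se)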


Here's a simple example, coded in R (and adapted from a largely unrelated answer of mine here):

set.seed(1)  # this makes the example exactly reproducible
obs    = data.frame(v1=sample(c("A","B"), size=100, replace=TRUE), 
                    v2=sample(c("Y","Z"), size=100, replace=TRUE), 
                    dv=rnorm(100), 
                    stringsAsFactors=TRUE)  # keep v1 & v2 as factors (needed for relevel() in R >= 4.0)
B      = ifelse(obs$v1=="B", 1, 0)  # dummy codes for the two factors
Z      = ifelse(obs$v2=="Z", 1, 0)
obs$dv = with(obs, .6*B + .7*Z + .5*B*Z + dv)  # add main effects & an interaction to the DV

modelA = lm(dv~v1*v2, data=obs)  # reference levels: v1="A", v2="Y"
summary(modelA)$coefficients
#               Estimate Std. Error   t value    Pr(>|t|)
# (Intercept) 0.28470179  0.1988959 1.4314109 0.155560012
# v1B         0.07341859  0.2812813 0.2610148 0.794640188
# v2Z         0.33010778  0.2663352 1.2394449 0.218202270
# v1B:v2Z     1.01996406  0.3832608 2.6612795 0.009125761

obs$v1 = relevel(obs$v1, "B")    # make "B" the reference level of v1
modelB = lm(dv~v1*v2, data=obs)  # same model, different parameterization
summary(modelB)$coefficients
#                Estimate Std. Error    t value     Pr(>|t|)
# (Intercept)  0.35812037  0.1988959  1.8005416 7.491550e-02
# v1A         -0.07341859  0.2812813 -0.2610148 7.946402e-01
# v2Z          1.35007184  0.2755983  4.8986950 3.906155e-06
# v1A:v2Z     -1.01996406  0.3832608 -2.6612795 9.125761e-03

The fitted models are identical, but modelA has "A" as the reference level of v1, whereas modelB has "B" as the reference level instead. Thus, in each model the main effect of v2 is the simple slope of v2 where v1 is at its reference level, and its test assesses whether that slope is consistent with $0$.
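
For your data, the same trick would look something like this (again a sketch: it assumes timepoint is a factor with levels "t1" and "t2", and uses the variable names from your post):

library(lme4)  # or lmerTest, if you want p-values
d$timepoint <- relevel(d$timepoint, ref="t2")  # make t2 (after) the reference level
m2 <- lmer(rating ~ timepoint*event_valence + (1|subject), data=d)
summary(m2)    # the event_valence row is now its simple slope at t2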


One other point: Be cautious about interpreting a non-significant result for event_valence as "not affected by the objective event valence". That's accepting the null hypothesis; hypothesis testing doesn't work that way. It may help you to read my answer to: Why do statisticians say a non-significant result means “you can't reject the null” as opposed to accepting the null hypothesis?

gung - Reinstate Monica