I would interpret these results as:

- every 1-unit increase in `parenting_style` is associated with a 0.06 increase in `aggression`
- every 1-unit increase in `sibling_aggression` is associated with a 0.09 increase in `aggression`

The p-values indicate that, if these associations were actually absent, the probability of observing these data (or data even more extreme) would be very low.
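To make the interpretation concrete, here is a minimal sketch of how such a model could be fitted with `statsmodels` in Python. The data are simulated (the variable names come from your question, but the data-generating values are assumptions chosen to mimic your estimates):

```python
# Minimal sketch: fitting aggression ~ parenting_style + sibling_aggression
# on hypothetical simulated data (NOT your real data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "parenting_style": rng.normal(size=n),
    "sibling_aggression": rng.normal(size=n),
})
# Simulate an outcome whose true slopes match the estimates in the question
df["aggression"] = (0.06 * df["parenting_style"]
                    + 0.09 * df["sibling_aggression"]
                    + rng.normal(scale=0.1, size=n))

model = smf.ols("aggression ~ parenting_style + sibling_aggression",
                data=df).fit()
print(model.params)   # slopes should come out close to 0.06 and 0.09
print(model.pvalues)  # small p-values: data this extreme are unlikely
                      # if the true slopes were zero
```

Each coefficient in `model.params` is the expected change in `aggression` per 1-unit increase in that predictor, holding the other predictor fixed.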
I would also like to make a few points about this model which will hopefully be useful.

First, I assume you have thought about the causal relations between these variables. When dealing with causal inference it is important to decide which variable is the main exposure, for which you want to estimate the total causal effect. Is it possible that `sibling_aggression` has a causal effect on `parenting_style`? If so, then you need to remove `parenting_style` from the model because it is a mediator. If not, perhaps it is a confounder? If so, then you should retain it.

Second, do you expect the associations between these variables to be linear? Linearity is often plausible over a small range, but over a bigger range you may need to allow for non-linear associations or include interactions. For instance, perhaps the "effect" of parenting style differs depending on the level of `sibling_aggression`. If so, then you would want to consider the interaction between these variables. Or perhaps the direct associations are quadratic, logarithmic, or some other non-linear relationship. If so, then you can transform the variables or introduce non-linear terms into the model.

Third, what scale are the variables on? In my experience these kinds of data are often ordinal rather than continuous. If so, then this should also be taken into account.
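As an illustration of the second point, a `statsmodels` formula can add an interaction and a quadratic term directly. This is again a sketch on simulated data (I simulate a true interaction of 0.05 purely for demonstration; whether such terms belong in *your* model depends on the substantive considerations above):

```python
# Sketch: interaction and quadratic terms via the formula interface
# (hypothetical simulated data, NOT your real data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "parenting_style": rng.normal(size=n),
    "sibling_aggression": rng.normal(size=n),
})
# Simulate an outcome that genuinely contains an interaction of 0.05
df["aggression"] = (0.06 * df["parenting_style"]
                    + 0.09 * df["sibling_aggression"]
                    + 0.05 * df["parenting_style"] * df["sibling_aggression"]
                    + rng.normal(scale=0.1, size=n))

# "*" expands to both main effects plus their interaction;
# I(...) protects the quadratic term so ** is interpreted arithmetically.
m = smf.ols("aggression ~ parenting_style * sibling_aggression"
            " + I(parenting_style ** 2)", data=df).fit()
print(m.params)
```

The coefficient on `parenting_style:sibling_aggression` tells you how the slope for one predictor changes with the level of the other; here it should recover roughly 0.05, while the quadratic term (truly zero in this simulation) should be near zero.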
Simple models are good. As Einstein once said, "Everything should be made as simple as possible, but not simpler." However, it is important to consider that a model may be too simplistic to be useful.
If you would like to know more about how to estimate causal effects while minimising bias, please refer to this question and answer:
How do DAGs help to reduce bias in causal inference?
I hope that at least some of this helps!