This happens all the time in statistics. Just because estimate A is significantly different from zero, and estimate B is NOT significantly different from zero, doesn’t mean A and B are significantly different from each other.
This is a bit of an oversimplification (which I’ll explain in a second), but it might help to think about this in terms of confidence intervals.
Let’s say you estimate that, between time 1 and time 2, the treatment group improved by 4 points, with a 95% confidence interval ranging from 2.5 to 5.5. That interval doesn’t include zero, so the increase is statistically significant at the 95% level.
Then you estimate that the control group improved by 2 points, with a 95% CI ranging from -1 to 4. That interval does include zero, so the change is not significant at the 95% level.
So you have one change that is significantly different from zero, and another that is not. But notice that the confidence intervals around these two estimates overlap with each other: the treatment group might have improved by only 2.5 points, a value that sits well within the control group’s CI. So the difference between the two changes (the “difference-in-differences”, which is the kind of analysis you are running) is not itself statistically significant, as the quick calculation below shows.
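To make that concrete, here’s a minimal sketch in Python (scipy is my choice here, not something from your setup). It takes the made-up numbers above, backs out each standard error from its CI half-width, and tests the difference directly, assuming the two change estimates are independent and approximately normal:

```python
from math import sqrt
from scipy.stats import norm

z95 = norm.ppf(0.975)  # ~1.96 for a two-sided 95% interval

# Back out each standard error from its 95% CI half-width.
se_treat = (5.5 - 2.5) / (2 * z95)       # ~0.77
se_control = (4.0 - (-1.0)) / (2 * z95)  # ~1.28

# Difference-in-differences and its standard error
# (assuming the two change estimates are independent).
diff = 4.0 - 2.0
se_diff = sqrt(se_treat**2 + se_control**2)  # ~1.49

z = diff / se_diff       # ~1.34
p = 2 * norm.sf(abs(z))  # ~0.18, two-sided
lo, hi = diff - z95 * se_diff, diff + z95 * se_diff  # ~[-0.92, 4.92]
print(f"diff = {diff:.1f}, 95% CI = [{lo:.2f}, {hi:.2f}], p = {p:.2f}")
```

The difference of 2 points gets a 95% CI of roughly [-0.9, 4.9] and p ≈ 0.18: not significant, even though the treatment group’s own change clearly was.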
The oversimplification is that you can’t actually tell whether the difference between two estimates is significant at the 95% level just by checking whether their 95% confidence intervals overlap. If the two CIs don’t overlap at all, you can be sure the difference is significant at that level. But if they overlap slightly, the difference might still be significant, because the standard error of a difference between independent estimates is sqrt(SE_A² + SE_B²), which is smaller than the sum of the two standard errors that the overlap check implicitly compares against. This is one of the reasons t tests, which account for this, exist in the first place: test the difference itself rather than eyeballing the two intervals!
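In practice, the cleanest way to run that test for a difference-in-differences is a regression with a treatment × time interaction: the coefficient on the interaction is the diff-in-diff estimate, and its t test is exactly the test of whether the two changes differ. Here’s a minimal sketch with simulated data (statsmodels is my choice, and every variable name and effect size is made up for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100  # observations per group per period

# Two groups (treated = 0/1) observed in two periods (post = 0/1).
df = pd.DataFrame({
    "treated": np.repeat([0, 1], 2 * n),
    "post": np.tile(np.repeat([0, 1], n), 2),
})
df["y"] = (
    10.0
    + 2.0 * df["post"]                  # common time trend
    + 1.0 * df["treated"]               # baseline difference between groups
    + 2.0 * df["treated"] * df["post"]  # the true diff-in-diff effect
    + rng.normal(0, 3, size=len(df))
)

# "treated * post" expands to treated + post + treated:post;
# the t test on the treated:post coefficient is the test you want.
fit = smf.ols("y ~ treated * post", data=df).fit()
print(fit.params["treated:post"], fit.pvalues["treated:post"])
```

That way the significance of the group difference is tested directly, with the right standard error, instead of being inferred from two separate CIs.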