Tests that have an alternative hypothesis of a lack of difference fall under tests for "statistical equivalence" or "statistical non-inferiority." The key difference between tests for statistical difference and tests for statistical equivalence lies in how the null hypothesis is formulated. In the former, the null hypothesis takes a form akin to: "There is no difference in the means of the two groups." In the latter, the null takes a form such as: "There is a meaningful difference in the means of the two groups." This is a nice description of the method.
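In symbols, writing $\mu_1$ and $\mu_2$ for the two group means and $\delta > 0$ for an equivalence margin you must choose yourself, the two setups are:

$$\text{Difference test:}\quad H_0: \mu_1 - \mu_2 = 0 \quad \text{vs.} \quad H_1: \mu_1 - \mu_2 \neq 0$$
$$\text{Equivalence test:}\quad H_0: |\mu_1 - \mu_2| \ge \delta \quad \text{vs.} \quad H_1: |\mu_1 - \mu_2| < \delta$$

Note that the roles of the hypotheses are swapped: rejecting the null now supports similarity rather than difference.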
To be more specific about your problem, the main question you have to answer is: what reduction in posted projects would be meaningful enough that your organization would find it unacceptable? Would a reduction of 50 projects be too much? 100? That's a question only you can answer, and it's the most important one. Once you determine this threshold, you can conduct a one-sided test for non-inferiority with the null hypothesis set at that margin.
As an illustrative example, let's say you define "worse" as 50 fewer projects, your data are normal, and you want the usual 95% confidence level. Then you'd test $H_0: \mu_{\text{old}} - \mu_{\text{new}} \ge 50$ using a $t$-test with one-sided $\alpha = 0.05$. If this test is significant, you can say with 95% confidence that your new method is no worse than the old method (i.e., that it is non-inferior).
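Here's a minimal sketch in Python of how that test might look. The data are simulated and the names (`old`, `new`) and margin of 50 are illustrative assumptions, not part of your setup; shifting the old group's sample down by the margin turns the non-inferiority test into an ordinary one-sided two-sample $t$-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated monthly counts of posted projects (illustrative only)
old = rng.normal(loc=1000, scale=80, size=30)  # old method
new = rng.normal(loc=990, scale=80, size=30)   # new method

margin = 50  # the largest drop you'd still consider "no worse"

# H0: mean(new) <= mean(old) - margin  (new method is unacceptably worse)
# H1: mean(new) >  mean(old) - margin  (new method is non-inferior)
# Shifting `old` down by the margin reduces this to a standard
# one-sided Welch t-test against a zero difference.
t_stat, p_value = stats.ttest_ind(new, old - margin,
                                  equal_var=False, alternative="greater")

print(f"t = {t_stat:.3f}, one-sided p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the new method is non-inferior at the 50-project margin.")
else:
    print("Fail to reject H0: cannot rule out a meaningful drop.")
```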
To take it one step further and declare equivalence, you can pick a margin in the other direction as well and conduct a similar one-sided test. Together, the two tests form the TOST (two one-sided tests) procedure. Graphically, as in the link above, this is the same as checking whether a $(1 - 2\alpha)$ confidence interval falls entirely within the zone of scientific indifference defined by the two margins you chose.
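For the full TOST procedure, statsmodels provides a ready-made routine. The sketch below reuses the simulated `old` and `new` samples from above and assumes symmetric margins of $\pm 50$, which is a choice you'd make on substantive grounds rather than anything dictated by the method.

```python
from statsmodels.stats.weightstats import ttost_ind

# H0: |mean(new) - mean(old)| >= 50  (a meaningful difference exists)
# H1: -50 < mean(new) - mean(old) < 50  (the methods are equivalent)
p_value, lower_test, upper_test = ttost_ind(new, old, low=-50, upp=50,
                                            usevar="unequal")

print(f"TOST p-value: {p_value:.4f}")  # the max of the two one-sided p-values
if p_value < 0.05:
    print("Reject H0: the methods are statistically equivalent within +/-50.")
```

Equivalently, you could compute the $(1 - 2\alpha)$ confidence interval for the mean difference (a 90% interval at $\alpha = 0.05$) and check whether it lies entirely inside $(-50, 50)$; the two presentations lead to the same conclusion.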