Jittering your data before testing is problematic. Jittering is better suited to plots, where it keeps points from landing on top of each other while still letting the viewer see that there were ties. If you jitter before testing, you are comparing datasets that are mixtures of the original data and the jittering distribution, and the added noise can artificially increase or decrease the p-value.
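As a toy illustration in R (the samples here are made up purely to show the effect), the p-value that ks.test reports on tied data and the p-value after jittering can differ noticeably, and the direction of the change depends on the amount of jitter:

    set.seed(1)
    x <- sample(1:10, 50, replace = TRUE)   # two samples with many ties
    y <- sample(1:10, 50, replace = TRUE)
    suppressWarnings(ks.test(x, y)$p.value) # p-value computed despite ties
    ks.test(jitter(x, amount = 3),
            jitter(y, amount = 3))$p.value  # mixture of data and noise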
The problem with ties in the KS test is that the p-values are computed under the assumption that ties are impossible, so the workaround is to not use the computed p-value and to find your own instead. One possibility is a permutation test: run the KS test, ignore its p-value, but record the test statistic; then combine the two sets of data and randomly draw from the combined data a new set the same size as one of the originals (the rest represents the other group); run the KS test on these new sets and again record just the test statistic. Repeat this process many times (2,000 or so) and the recorded test statistics will represent the distribution of the statistic under the null hypothesis; the proportion of them that are at least as extreme as your original statistic is the p-value.
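Here is a minimal sketch of that procedure in R. The function name perm.ks.test is just illustrative; the code assumes only base R's ks.test, whose warnings about ties are suppressed because we only want the statistic:

    perm.ks.test <- function(x, y, B = 2000) {
      # observed KS statistic; the reported p-value is ignored
      D.obs <- suppressWarnings(ks.test(x, y)$statistic)
      pooled <- c(x, y)
      n <- length(x)
      # statistics for data reallocated at random under the null
      D.null <- replicate(B, {
        idx <- sample(length(pooled), n)
        suppressWarnings(ks.test(pooled[idx], pooled[-idx])$statistic)
      })
      # proportion of null statistics at least as extreme as the observed one
      mean(D.null >= D.obs)
    }

With B = 2000 the smallest p-value this can report is 0, so some people prefer (sum(D.null >= D.obs) + 1) / (B + 1) to avoid reporting exactly zero.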
You mentioned that you thought the null should be rejected based on a plot of the data; perhaps the methods in this paper would appeal to you instead:
Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D. F., and Wickham, H. (2009). Statistical inference for exploratory data analysis and model diagnostics. Phil. Trans. R. Soc. A, 367, 4361-4383. doi: 10.1098/rsta.2009.0120
The simplified version is that they generate several plots of the data under the assumption that the null is true (permuting, similar to above) and show those plots alongside the same plot of the original data. If you cannot figure out which plot shows the original data, that supports the null hypothesis; if the plot of the original stands out, that argues against the null. The vis.test function in the TeachingDemos package for R helps implement this test.
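If you want to see the idea without the package, here is a hand-rolled sketch of the lineup protocol in R (the exponential samples are made up for illustration; vis.test itself handles the bookkeeping and keeps you blinded to which panel is real):

    set.seed(7)
    x <- rexp(40)                  # illustrative data only
    y <- rexp(40, rate = 1.5)
    pooled <- c(x, y)
    n <- length(x)
    pos <- sample(9, 1)            # random panel for the real data
    op <- par(mfrow = c(3, 3))
    for (i in 1:9) {
      if (i == pos) {              # the real comparison
        plot(ecdf(x), main = "", col = "blue")
        lines(ecdf(y), col = "red")
      } else {                     # a comparison generated under the null
        idx <- sample(length(pooled), n)
        plot(ecdf(pooled[idx]), main = "", col = "blue")
        lines(ecdf(pooled[-idx]), col = "red")
      }
    }
    par(op)
    pos                            # reveal which panel was real

If the real panel does not stand out from the permuted ones, that supports the null.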
If you have already decided that there is a difference and just want a p-value less than 0.05, then you can use SnowsPenultimateNormalityTest, also in the TeachingDemos package (but be warned: the documentation for that function is considered more useful than the function itself).