If you are not using regularised logistic regression, the difference in average weight magnitude is probably caused by exactly that. The SVM has a term in its cost function that penalises the magnitude of the weights (which is where the maximum-margin separation idea comes from), so its weights are shrunk towards zero relative to those of an unregularised model.
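The effect is easy to see empirically. Below is a minimal sketch, assuming scikit-learn, that fits an effectively unregularised logistic regression (very large `C`, since `C` is the inverse of the regularisation strength) and a linear SVM with its default L2 penalty to the same data; the synthetic dataset and hyperparameter values are illustrative assumptions, not anything from your setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# C = 1e6 makes the L2 penalty negligible, approximating plain
# maximum-likelihood (unregularised) logistic regression.
logreg = LogisticRegression(C=1e6, max_iter=10_000).fit(X, y)

# Linear SVM with the default amount of L2 regularisation; the
# penalty on ||w|| is what shrinks the weights.
svm = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)

print("mean |w|, logistic regression:", np.abs(logreg.coef_).mean())
print("mean |w|, linear SVM:         ", np.abs(svm.coef_).mean())
```

You should typically see a noticeably smaller average magnitude for the SVM; adding regularisation to the logistic regression (smaller `C`) closes the gap.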
Regularised logistic regression will also give somewhat different parameter estimates, as it is trying to give accurate estimates of the probability of class membership everywhere, whereas the SVM is trying only to locate the decision boundary for this problem (in the default case, the contour where the probability of class membership is 0.5).

This is both a good thing and a bad thing. If accuracy genuinely is the quantity of interest, then the SVM may give better results, as it is less likely to be affected by trying to minimise errors at high or low probabilities at the expense of errors near 0.5 (see my answer to a vaguely related question). However, it also means that if the misclassification costs or class frequencies change, you need to retrain the SVM, whereas with a probabilistic classifier such as logistic regression you can simply move the decision threshold on the already-fitted model, as in the sketch below.
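A minimal sketch of that last point, again assuming scikit-learn; the 4:1 cost ratio and the `class_weight` values are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Fit the probabilistic model once and keep its probability estimates.
logreg = LogisticRegression(max_iter=10_000).fit(X, y)
p = logreg.predict_proba(X)[:, 1]

# Suppose false negatives become 4x as costly as false positives
# (an assumed figure): the cost-minimising threshold moves from 0.5
# to c_fp / (c_fp + c_fn) = 1 / (1 + 4) = 0.2 -- no retraining needed.
pred_equal_costs = (p >= 0.5).astype(int)
pred_costly_fn = (p >= 0.2).astype(int)

# The SVM only learned the 0.5-equivalent boundary, so reflecting the
# new costs means refitting it, e.g. with asymmetric class weights.
svm_costly_fn = LinearSVC(class_weight={0: 1.0, 1: 4.0},
                          max_iter=10_000).fit(X, y)
```

The logistic regression pays for this flexibility by spending effort getting the probabilities right everywhere, which is exactly the trade-off described above.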
There is a good discussion of the difference at the end of Mike Tipping's paper on the Relevance Vector Machine (Appendix D.2); the RVM is a sort of Bayesian regularised logistic regression model.