Interesting question.
Duplicates in the training data have a slightly different effect from duplicates in the test data.
If an element is duplicated in the training data, it is effectively the same as having its 'weight' doubled. That element becomes twice as important when the classifier is fitting your data, and the classifier becomes biased towards correctly classifying that particular scenario over others.
It's up to you whether that's a good or bad thing. If the duplicates are genuine (that is, they are generated by a process you want the model to account for), I'd probably advise against removing them, especially if you're doing logistic regression; the sketch below shows the duplicate-equals-doubled-weight equivalence directly. There are other questions on this SE about dealing with oversampled and undersampled datasets. When it comes to neural networks and the like, others may be better placed to say whether this is something to worry about.
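To make the weight argument concrete, here's a minimal sketch (assuming scikit-learn and NumPy, with made-up data) comparing a logistic regression fit on data where one row has been duplicated against a fit that instead gives that row a sample weight of 2. The two fits minimise the same objective, so the coefficients should agree up to solver tolerance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fit 1: duplicate the first row outright.
X_dup = np.vstack([X, X[:1]])
y_dup = np.append(y, y[:1])
clf_dup = LogisticRegression().fit(X_dup, y_dup)

# Fit 2: keep the data as-is but give the first row twice the weight.
w = np.ones(len(y))
w[0] = 2.0
clf_w = LogisticRegression().fit(X, y, sample_weight=w)

print(clf_dup.coef_)  # the two coefficient vectors should match closely
print(clf_w.coef_)
```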
If your dataset is, for example, tweets, and you are trying to train a natural language model, I would remove duplicate sentences (mainly retweets), since seeing the same sentence over and over doesn't really help the model learn general language use.
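As a rough illustration (assuming pandas; the `text` column and the `RT @user:` prefix handling are placeholders for whatever your data actually looks like), that kind of de-duplication might be as simple as:

```python
import pandas as pd

tweets = pd.DataFrame({"text": [
    "great match today!",
    "RT @fan: great match today!",
    "great match today!",
    "the weather is awful",
]})

# Strip a leading "RT @user:" prefix so retweets collapse onto the original tweet.
tweets["clean"] = tweets["text"].str.replace(r"^RT @\w+:\s*", "", regex=True)
deduped = tweets.drop_duplicates(subset="clean")
print(deduped["clean"].tolist())  # only the two unique sentences remain
```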
Duplicated elements in the test data serve no real purpose: you've already tested the model on that particular case once, and testing it again just gives you the exact same answer. Worse, if a high proportion of the duplicated entries in the test set also appear in the training set, you'll get an inflated sense of how well the model performs overall, because the rarer scenarios are under-represented and the classifier's poor performance on them contributes less to the overall test score.
If you are going to remove duplicates, I'd recommend doing it before splitting the dataset into train and test sets, so the same record can't end up on both sides of the split.
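For example, with pandas and scikit-learn (the file name and split fraction below are just placeholders):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")   # hypothetical dataset
df = df.drop_duplicates()      # drop exact duplicate rows before splitting
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)
```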