I have never seen this done, and I doubt other people have either. One usually gets informed answers on this site within a couple of hours of posting something. It's been a day, and no joy.
My thinking is this: if you want to tell the model that some values are more trustworthy than others, use weights. If you downweight values where you doubt the accuracy of the data, the model will basically accept a worse fit at that point -- which is what you want.
Example: suppose you have a very "married" set of covariates for someone coded "unmarried" in the dodgy data set. Without weights, the fitting algorithm could distort the parameter estimates in order to get some kind of fit. With weights, the algorithm need not try so hard. In effect, it lets you have bigger residuals when you don't trust the data.
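To make the weighting idea concrete, here is a minimal sketch in Python using scikit-learn's `sample_weight` argument. The toy data, the `trust` flag marking which records came from the dodgy source, and the particular downweighting factor are all my own assumptions for illustration, not something prescribed by the approach itself:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy data: X are covariates, y is the 0/1 "married" indicator.
    X = rng.normal(size=(200, 3))
    y = rng.integers(0, 2, size=200)

    # Hypothetical flag: True where we believe the coding, False where we doubt it.
    trust = rng.random(200) > 0.2

    # Downweight the suspect records so the fit tolerates larger residuals there.
    # The 0.3 is an arbitrary illustrative choice, not a recommendation.
    weights = np.where(trust, 1.0, 0.3)

    model = LogisticRegression().fit(X, y, sample_weight=weights)

The only thing that matters here is the relative size of the weights: a record with weight 0.3 contributes less to the likelihood, so the fit is allowed to miss it by more.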
If you want to go with your first idea of substituting probabilities for the data, I would iterate: estimate the probability that each person is married, fit the model using my best guesses, then go back and adjust the estimates. This is an EM-type approach. So I would not replace 0's and 1's with 0.8 and 0.2 in the fit; I would use a 1 where the estimated probability was above 0.5 and a 0 where it was below, and then go back and adjust the probabilities on the basis of the lack of fit at those points.
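Here is a rough sketch of that iteration, again in Python. How you identify the suspect records and how you update their probabilities from the lack of fit are judgment calls; the rule below (replacing a suspect record's probability with the model's fitted probability) is just one assumed choice:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def em_style_relabel(X, y_reported, suspect, n_iter=10):
        """X: covariates; y_reported: possibly unreliable 0/1 labels;
        suspect: boolean mask of records whose coding we doubt (assumed known)."""
        # Start from the reported labels as the initial probability estimates.
        p_married = y_reported.astype(float).copy()
        model = None
        for _ in range(n_iter):
            # Hard-assign 0/1 labels from the current probabilities (threshold 0.5)
            # and fit on those, since the logistic fit expects 0's and 1's.
            y_work = (p_married > 0.5).astype(int)
            model = LogisticRegression().fit(X, y_work)
            # For the suspect records only, adjust the probability estimates
            # toward the model's fitted probabilities (a crude lack-of-fit update).
            fitted = model.predict_proba(X)[:, 1]
            p_married[suspect] = fitted[suspect]
        return model, p_married

In a suspect record whose covariates look very "married", the fitted probability will pull the estimate toward 1, and on the next pass that record gets refit as a 1, which is exactly the kind of self-correction you are after.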
If you look at what happens inside a logistic regression model, the likelihood is built from Bernoulli outcomes, so the math really expects the responses to be 0's or 1's. I think you want to stick with that. My advice boils down to this: either use weights, or estimate marital status from the rest of the data.