I am trying to fit some binomially distributed data to a model. There is a non-negligible systematic error associated with the measurements. This uncertainty comes from imperfect theoretical knowledge of one of the parameters assumed by the model. I can calculate this number by 3 or 4 different methods and model the uncertainty. The model is quite complex and is itself a function of 10 or so parameters of the experiment, as well as of the calculation method. What I don't know is how to incorporate this into the binomial maximum likelihood estimation expression. So my model without systematics is
$$ L(\theta \mid x, k) = \prod_{i=1}^{N} p(\theta, x_i)^{k_i}\,(1-p(\theta, x_i))^{n_i-k_i}$$
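For concreteness, this is roughly how I currently fit it, minimising the negative log-likelihood numerically (the logistic form of $p(\theta, x)$ here is just a stand-in for my real model, and the data are simulated):

```python
# Systematics-free binomial MLE: minimise the negative log-likelihood over theta.
# p(theta, x) is a placeholder logistic model, not the real experimental model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 20)            # settings x_i of each measurement
n = np.full(x.shape, 500, dtype=int)  # trials n_i per setting
theta_true = 1.5
k = rng.binomial(n, expit(theta_true * x))  # observed successes k_i

def p(theta, x):
    # stand-in for the real model p(theta, x)
    return expit(theta * x)

def nll(theta):
    pr = np.clip(p(theta[0], x), 1e-12, 1 - 1e-12)  # guard the logs
    return -np.sum(k * np.log(pr) + (n - k) * np.log(1 - pr))

fit = minimize(nll, x0=[0.5])
print(fit.x)  # MLE of theta
```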
I assume the systematics enter as some kind of multiplicative factor, so I end up with
$$ L(\theta \mid x, k) = \prod_{i=1}^{N} p(\theta, x_i)^{k_i}\,(1-p(\theta, x_i))^{n_i-k_i}\times L(\mathrm{syst})$$
Is there a general way of building a systematic uncertainty model into an MLE minimisation/maximisation? More specifically, is there a way of weighting the different contributions to the systematic error model?
Whatever the answer to my question above, I suspect I will end up with a binomial maximum likelihood expression multiplied by a Bayesian systematic error model. Would such a hybrid approach be invalid in any way?