I have been doing some research on constrained models and have recently read the paper:
Gunn and Dunson (2005), "A Transformation Approach for Incorporating Monotone or Unimodal Constraints", Biostatistics, 6, 434-449.
In this paper they advocate fitting an unconstrained hierarchical model and then applying the constraint to the posterior distribution. They note that the usual Gibbs sampling routines for constrained parameters (Gelfand et al., JASA, 1992, 523-532) are difficult to apply when fitting a hierarchical model.
My question is whether JAGS requires this approach, or whether it can implement the constraints directly in the prior (where I would like them implemented). Suppose I have the following data:
X1 <- c(327, 125, 7, 6, 107, 277, 54)
X2 <- c(637, 40, 197, 36, 54, 53, 97, 63, 216, 118)
N1 <- 7
N2 <- 10
and I want to fit an isotonic regression for each set of X values with the hierarchical model:
X1[i] ~ exponential(theta1[i]), i = 1,...,N1
X2[i] ~ exponential(theta2[i]), i = 1,...,N2
theta1[i] ~ exponential(delta1)
theta2[i] ~ exponential(delta2)
delta1 ~ exponential(lambda)
delta2 ~ exponential(lambda)
where lambda is a specified constant, and we add the following ordering constraints:
theta1[1] > theta1[2] > ... > theta1[N1]
theta2[1] > theta2[2] > ... > theta2[N2]
I specified the JAGS model as follows:
model {
  ## likelihood, with unconstrained latent rates theta10/theta20
  for (i in 1:N1) {
    X1[i] ~ dexp(theta1[i])
    theta10[i] ~ dexp(d1)
  }
  for (i in 1:N2) {
    X2[i] ~ dexp(theta2[i])
    theta20[i] ~ dexp(d2)
  }
  d1 ~ dexp(d0)
  d2 ~ dexp(d0)
  d0 <- 0.01

  ## sort() is ascending, so reverse the index to impose
  ## theta[1] > theta[2] > ... > theta[N]
  theta11[1:N1] <- sort(theta10)
  theta21[1:N2] <- sort(theta20)
  for (i in 1:N1) {
    theta1[i] <- theta11[N1-i+1]
  }
  for (i in 1:N2) {
    theta2[i] <- theta21[N2-i+1]
  }
}
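For completeness, something like the following rjags driver should run this (a minimal sketch: the file name "isotonic.bug", the chain/iteration settings, and the final monotonicity check on the posterior means are only illustrative):

library(rjags)

X1 <- c(327, 125, 7, 6, 107, 277, 54)
X2 <- c(637, 40, 197, 36, 54, 53, 97, 63, 216, 118)
dat <- list(X1 = X1, X2 = X2, N1 = length(X1), N2 = length(X2))

## "isotonic.bug" is a placeholder file containing the model above
m <- jags.model("isotonic.bug", data = dat, n.chains = 2, n.adapt = 1000)
update(m, 2000)                      # burn-in
samp <- coda.samples(m, variable.names = c("theta1", "theta2"),
                     n.iter = 10000)

## quick check: posterior means of theta1 should be non-increasing in i
post <- colMeans(as.matrix(samp))
theta1.post <- post[paste0("theta1[", 1:7, "]")]
all(diff(theta1.post) <= 0)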
JAGS compiles the model and seems to run fine, and the results seem okay. But is it really fitting the model that I think it is fitting?