I have $N$ discrete random variables $X_i$, with distributions $p_i(X_i)$. $X_i$ can take integer values in a finite range $[a_i,b_i]$, where $a_i,b_i$ are integers. The distribution $p_i(X_i)$ is stored as an array of the values $p_i(a_i)$, $p_i(a_i+1)$, ..., $p_i(b_i)$. Another random variable $Z$ has a distribution $p(z)$ defined as:
$$p(z) \propto \sum_{x_1 = a_1}^{b_1} \dots \sum_{x_N=a_N}^{b_N} \delta \left( cz - \sum_{i=1}^N c_i x_i \right) p_1(x_1) \dots p_N(x_N)$$
where $\delta(x)$ is the Kronecker delta, with $\delta(0) = 1$ and $\delta(x) = 0$ for all $x\ne 0$, and $c,c_i$ are known integers. The proportionality sign allows for a normalization constant.
If $c=c_i=1$ for all $i$, then $Z = \sum_i X_i$ and $p(Z)$ is just the convolution $p_1 * \dots * p_N$, so there is a very efficient way to compute the array of values $p(Z)$ using the Fast Fourier Transform and the convolution theorem.
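For reference, here is a minimal sketch of that baseline, assuming NumPy (the function name `sum_pmf` and the power-of-two padding are my own choices, not part of the question):

```python
import numpy as np

def sum_pmf(pmfs):
    """PMF of sum_i X_i via FFT convolution (the c = c_i = 1 case).

    pmfs[i] is the array p_i(a_i), ..., p_i(b_i); the result is indexed
    from sum_i a_i to sum_i b_i.
    """
    out_len = sum(len(p) for p in pmfs) - len(pmfs) + 1  # full convolution length
    n = 1 << max(out_len - 1, 1).bit_length()            # pad so circular == linear
    acc = np.ones(n // 2 + 1)                            # identity in Fourier domain
    for p in pmfs:
        acc = acc * np.fft.rfft(p, n)                    # convolution theorem
    out = np.fft.irfft(acc, n)[:out_len]
    out = np.clip(out, 0.0, None)                        # remove tiny negative round-off
    return out / out.sum()
```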
But I am interested in the general case, where $c,c_i$ are arbitrary integers (in my particular scenario they are small integers). Is there a way to adapt the FFT and the convolution theorem for this case too?
I tried the change of variables $W=c Z$, $Y_i = c_i X_i$, but it gets very messy: since $z$ and $x_i$ must be integers, $W$ and $Y_i$ skip many values ($Y_i$ only takes multiples of $c_i$), and I didn't get far down this path. Is this the right way to do it? Is there a better way?
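To make concrete what I mean by "skipping values", here is a sketch of the stretching step of that attempt (`stretch` is just an illustrative helper name): the PMF array of $Y_i = c_i X_i$ is the array of $X_i$ with $|c_i|-1$ zeros inserted between entries, reversed if $c_i < 0$.

```python
import numpy as np

def stretch(p, c):
    """PMF array of Y = c * X, given the PMF array of X (c a nonzero integer).

    Y only hits multiples of c, so c - 1 zeros appear between consecutive
    entries; a negative c also reverses the support.
    """
    q = np.zeros(abs(c) * (len(p) - 1) + 1)
    q[::abs(c)] = p[::-1] if c < 0 else p
    return q
```

In principle the stretched arrays could then be convolved as above, and the entries of the $W$ array at indices divisible by $c$ would give $p(Z)$ up to normalization, but the bookkeeping over the offsets and signs is exactly where it got messy for me.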