It looks like you have mostly solved this problem, but you are stuck on a quantile function that you are not familiar with. You also appear to have missed a simplification: when you construct the uniformly most-powerful (UMP) test, you will ultimately implement it by deriving the p-value function for that test. This method bypasses the need to compute the cut-off level for any particular rejection region, so in practice we never really bother computing these. Let me show you how to derive the UMP test and implement it in statistical programming.
Deriving the UMP: In this problem you have data $X_1,...,X_n \sim \text{IID Laplace}(0, \theta)$ from the centered Laplace distribution. Since this distribution is symmetric around zero, for all $x>0$ you have:
$$f_{|X|}(x) = 2 \times \text{Laplace}(x|0,\theta) = \frac{1}{\theta} \cdot \exp \Big( - \frac{x}{\theta} \Big) = \text{Exp}(x|\theta).$$
This means that you have $|X_1|,...,|X_n| \sim \text{IID Exp}(\theta)$, and the maximum-likelihood estimator of the scale parameter is:
$$\hat{\theta}_n \equiv \frac{1}{n} \sum_{i=1}^n |X_i| \sim \frac{1}{n} \cdot \text{Gamma}(n,\theta).$$
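As a quick sanity check on this sampling distribution, we can simulate it in R (a sketch with arbitrary illustrative values of $n$ and $\theta$, not taken from your problem; since $|X_i| \sim \text{Exp}(\theta)$ we can generate the absolute values directly with rexp):

```r
#Simulate the sampling distribution of the scale estimator and compare
#its moments with those of the Gamma(n, theta)/n distribution
set.seed(1);
n     <- 20;       #Arbitrary sample size
theta <- 5;        #Arbitrary true scale
sims  <- 100000;

#Each row is a simulated sample |X_1|, ..., |X_n| ~ IID Exp(theta)
draws     <- matrix(rexp(n*sims, rate = 1/theta), nrow = sims);
theta.hat <- rowMeans(draws);

#Gamma(n, theta)/n has mean theta and variance theta^2/n
c(mean(theta.hat), theta);        #Simulated mean vs theoretical mean
c(var(theta.hat), theta^2/n);     #Simulated variance vs theoretical variance
```

The simulated mean and variance of $\hat{\theta}_n$ should closely match the theoretical values $\theta$ and $\theta^2/n$.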
In your hypothesis test you are considering the case where $\theta_1 < \theta_0$, so your alternative hypothesis is that the observations have a smaller scale, and therefore tend to be closer to zero than under the null hypothesis. Thus, your most-powerful test should use the rejection region $\sum|X_i| \leqslant k$, where the cut-off value $k$ is determined by the size requirement $\alpha = F_\text{Ga}(k|n,\theta_0)$. Letting $Q_\text{Ga}$ denote the quantile function for the $\text{Ga}(n,\theta_0)$ distribution, you therefore have $k = Q_\text{Ga}(\alpha)$. That is, the cut-off value $k$ is the $\alpha$ quantile of the gamma distribution with shape $n$ and scale $\theta_0$. There is no closed-form expression for this quantile, but it is programmed into standard statistical software; in R you can obtain this value with the function qgamma.
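As a sketch (with illustrative values $n = 20$, $\theta_0 = 5$ and $\alpha = 0.05$ that are not taken from your problem), the cut-off value is obtained as follows:

```r
#Cut-off value k for the rejection region sum|X_i| <= k, computed
#under the null scale theta0 (illustrative values only)
n      <- 20;
theta0 <- 5;
alpha  <- 0.05;

k <- qgamma(alpha, shape = n, scale = theta0);

#By construction the rejection probability under the null is alpha
pgamma(k, shape = n, scale = theta0);
```

Note that the scale parameter must be passed by name, since the third positional argument of qgamma is the rate.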
While this method can find the cut-off value for the test, in practice, there is no need to do this. Instead of computing the rejection region for a stipulated significance level, it is more useful to generate the p-value function. In the case where $\theta_1 < \theta_0$ a lower estimated scale constitutes evidence in favour of the alternative hypothesis, and so the p-value function for this test is:
$$p(\mathbf{x}) = \mathbb{P} \Bigg( \sum_{i=1}^n |X_i| \leqslant \sum_{i=1}^n |x_i| \Bigg| \theta = \theta_0 \Bigg) = F_\text{Ga} \Bigg( \sum_{i=1}^n |x_i| \Bigg| n, \theta_0 \Bigg).$$
(Note that in the contrary case where $\theta_1 > \theta_0$ a higher estimated scale constitutes evidence in favour of the alternative hypothesis, so the p-value would be one minus this amount.) This gives you the p-value function for the test, which is sufficient to implement the test with any set of observed data. An important point about the p-value function is that it does not depend on $\theta_1$, except to the extent of checking whether this value is larger or smaller than $\theta_0$. We therefore see that it is not necessary to stipulate the value $\theta_1$ in the test --- we merely need to specify which direction of the one-sided test we are performing.
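Both one-sided p-values then reduce to a single pgamma call; a minimal sketch, using an arbitrary made-up sample x rather than data from your problem:

```r
#p-values for the one-sided tests, computed from the gamma CDF
x      <- c(-1.2, 0.4, -3.1, 2.2, -0.7);   #Arbitrary illustrative data
theta0 <- 2;                               #Illustrative null scale
n      <- length(x);

p.lower <- pgamma(sum(abs(x)), shape = n, scale = theta0);    #For theta1 < theta0
p.upper <- pgamma(sum(abs(x)), shape = n, scale = theta0,
                  lower.tail = FALSE);                        #For theta1 > theta0
```

As noted above, the two p-values are complementary, so only the direction of the one-sided test needs to be specified.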
Programming the test in R: It is relatively simple to program this test in R using the standard method for programming a hypothesis test. Here we will program a slightly more general version of the test that allows you to stipulate a mean parameter $\mu$ for the Laplace distribution, or leave this parameter to be estimated from the data. (Note that we use an approximation for the p-value in the case where this parameter is estimated from the data: the mean is estimated by the sample median and one degree of freedom is deducted.) Our test will be a one-sided test that allows you to test for either a larger or a smaller scale parameter:
Laplace.scale.test <- function(X, mu = NULL, theta0, alternative = "greater") {

  #Check validity of inputs
  if (!is.numeric(X))        { stop("Error: Data should be numeric"); }
  if (length(X) == 0)        { stop("Error: You require at least one observation"); }
  if (is.null(mu)) {
    if (length(X) == 1)      { stop("Error: You require at least two observations to estimate mu"); }
  } else {
    if (!is.numeric(mu))     { stop("Error: Parameter mu should be numeric or NULL"); }
    if (length(mu) != 1)     { stop("Error: Parameter mu should be a scalar or NULL"); } }
  if (!is.numeric(theta0))   { stop("Error: Parameter theta0 should be numeric"); }
  if (length(theta0) != 1)   { stop("Error: Parameter theta0 should be a scalar"); }
  if (theta0 <= 0)           { stop("Error: Parameter theta0 should be positive"); }
  if (!(alternative %in% c("greater", "less")))
                             { stop("Error: Alternative must be 'greater' or 'less'"); }

  #Set description of test and data
  if (is.null(mu)) {
    method <- "Laplace scale test";
  } else {
    method <- paste0("Laplace scale test (with assumed mean of ", mu, ")"); }
  data.name <- paste0(deparse(substitute(X)));

  #Set null hypothesis value
  null.value <- theta0;
  attr(null.value, "names") <- "scale parameter";

  #Calculate test statistic
  #(If mu is not given we estimate it by the sample median and deduct one
  #degree of freedom, which gives an approximate p-value in that case)
  n <- length(X);
  if (is.null(mu)) {
    df       <- n - 1;
    mu.hat   <- stats::median(X);
    estimate <- sum(abs(X - mu.hat))/df;
  } else {
    df       <- n;
    estimate <- sum(abs(X - mu))/df; }
  attr(estimate, "names")  <- "estimated scale";
  statistic <- estimate;
  attr(statistic, "names") <- "theta.hat";

  #Calculate p-value
  if (alternative == "less") {
    p.value <- pgamma(df*statistic, shape = df, scale = null.value,
                      lower.tail = TRUE, log.p = FALSE); }
  if (alternative == "greater") {
    p.value <- pgamma(df*statistic, shape = df, scale = null.value,
                      lower.tail = FALSE, log.p = FALSE); }
  attr(p.value, "names") <- NULL;

  #Create htest object
  TEST <- list(method = method, data.name = data.name,
               null.value = null.value, alternative = alternative,
               estimate = estimate, statistic = statistic, p.value = p.value);
  class(TEST) <- "htest";
  TEST; }
This gives us a general testing function that we can use to perform a one-sided test on the scale parameter of a Laplace distribution. We can either stipulate the mean parameter or estimate it from the data.
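One informal way to check a test like this is to confirm that its p-values are approximately uniform when the null hypothesis holds. The sketch below does this without calling the function itself: with a known mean of zero, the p-value the function computes reduces to an upper-tail pgamma probability, and Laplace variates can be simulated as the difference of two independent exponentials (the values of n, theta0 and sims are arbitrary choices for the check):

```r
#Under H0 the p-value should be uniform on (0,1); check the empirical
#rejection rate of the test at the 5% level
set.seed(1);
theta0 <- 10;
n      <- 30;
sims   <- 2000;

p.vals <- replicate(sims, {
  X <- rexp(n, rate = 1/theta0) - rexp(n, rate = 1/theta0);   #Laplace(0, theta0)
  pgamma(sum(abs(X)), shape = n, scale = theta0,
         lower.tail = FALSE); });

mean(p.vals <= 0.05);   #Empirical size; should be close to 0.05
```

If the empirical rejection rate were far from the nominal 5% level, that would flag an error in the derivation or the code.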
Implementation of the test: We can implement this test on a set of mock data, set out in the code below. We will input the vector DATA and test whether the scale parameter for these data is larger than the null value $\theta_0 = 10$ (we will assume a zero mean for the Laplace distribution).
#Input the data for the test
DATA <- c( -3.48,  12.15,  -4.93, -28.91,  -8.62,  -8.91,
           -9.07, -23.96,  32.04,  -0.58, -25.93, -17.66,
            6.36, -16.18, -17.82,   5.12, -20.74,   7.96)
#Generate the scale test
TEST <- Laplace.scale.test(DATA, mu = 0, theta0 = 10);
#Print the test
TEST;
Laplace scale test (with assumed mean of 0)
data: DATA
theta.hat = 13.912, p-value = 0.05953
alternative hypothesis: true scale parameter is greater than 10
sample estimates:
estimated scale
       13.91222
Using the above data, we can see that the estimated scale parameter is $\hat{\theta}_n = 13.91222$. This is larger than the null value, but the p-value of about 0.06 exceeds the conventional 5% significance level, so the evidence in favour of a larger scale is weak. We would not reject the hypothesis that $\theta_0 = 10$ is the scale parameter.