It might help to first read ''What follows if we fail to reject the null hypothesis?'' before the explanation below.
Desirable properties: power
In hypothesis testing, the goal is to find 'statistical evidence' for $H_1$. In doing so we can make type I errors, i.e. we reject $H_0$ (and decide that there is evidence in favour of $H_1$) while $H_0$ is in fact true (i.e. $H_1$ is false). So a type I error is 'finding false evidence' for $H_1$.
A type II error is made when $H_0$ cannot be rejected while it is false in reality, i.e. we ''accept $H_0$'' and we 'miss' the evidence for $H_1$.
The probability of a type I error is denoted by $\alpha$, the chosen significance level. The probability of a type II error is denoted by $\beta$, and $1-\beta$ is called the power of the test: it is the probability of finding evidence in favour of $H_1$ when $H_1$ is true.
In statistical hypothesis testing the scientist fixes an upper bound $\alpha$ on the probability of a type I error and, under that constraint, tries to find a test with maximum power.
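As a numeric sketch of these definitions (the parameter values below are made up for illustration): for a one-sided z-test of $H_0: \mu = 0$ versus $H_1: \mu = 1$ with known $\sigma = 1$, sample size $n = 25$ and $\alpha = 0.05$, the critical value fixes the type I error probability at $\alpha$, after which $\beta$ and the power $1-\beta$ follow from the distribution under $H_1$.

```python
# Sketch: alpha, beta and power for a one-sided z-test (illustrative numbers).
# H0: mu = 0 vs H1: mu = 1, known sigma = 1, sample size n = 25, alpha = 0.05.
from math import sqrt
from statistics import NormalDist

std_normal = NormalDist()
alpha, mu1, sigma, n = 0.05, 1.0, 1.0, 25

# Critical value: reject H0 when the z-statistic exceeds this threshold,
# so P(type I error) = alpha by construction.
z_crit = std_normal.inv_cdf(1 - alpha)

# Under H1 the z-statistic is shifted by mu1 * sqrt(n) / sigma, so the
# power 1 - beta is the probability of still crossing z_crit under H1.
shift = mu1 * sqrt(n) / sigma
power = 1 - std_normal.cdf(z_crit - shift)
beta = 1 - power
print(f"critical value: {z_crit:.3f}, beta: {beta:.4f}, power: {power:.4f}")
```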
The desirable properties of likelihood ratio tests have to do with power.
In a hypothesis test $H_0: \theta=\theta_0$ versus $H_1: \theta = \theta_1$, both the null hypothesis and the alternative hypothesis are called ''simple'': the parameter is fixed to a single value under $H_0$ as well as under $H_1$ (more precisely, the distributions are fully determined).
The Neyman-Pearson Lemma states that, for hypothesis tests with simple hypotheses and a given type I error probability, a likelihood ratio test has the highest power. Obviously, high power given $\alpha$ is a desirable property: power is a measure of 'how easy it is to find evidence for $H_1$'.
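A small sketch of such a likelihood ratio test for two simple hypotheses (the setup below is hypothetical: a single observation $x$ from $N(\theta, 1)$ with $H_0: \theta = 0$ versus $H_1: \theta = 2$). The test rejects when $L_1/L_0$ exceeds a threshold $k$ chosen so that the type I error probability equals $\alpha$:

```python
# Sketch of a Neyman-Pearson likelihood ratio test for two simple hypotheses
# (hypothetical setup): one observation x from N(theta, 1),
# H0: theta = 0 versus H1: theta = 2. All numbers are illustrative.
from statistics import NormalDist

theta0, theta1, alpha = 0.0, 2.0, 0.05
f0, f1 = NormalDist(theta0, 1), NormalDist(theta1, 1)

def likelihood_ratio(x: float) -> float:
    """L1 / L0 evaluated at the observation x."""
    return f1.pdf(x) / f0.pdf(x)

# For this normal location family the ratio is increasing in x, so
# "ratio > k" is equivalent to "x > c"; choose c so that
# P(reject | H0) = alpha, i.e. c is the (1 - alpha)-quantile under H0.
c = f0.inv_cdf(1 - alpha)
k = likelihood_ratio(c)  # the matching likelihood-ratio threshold

x_obs = 2.1  # an illustrative observation
reject = likelihood_ratio(x_obs) > k
print(f"c = {c:.3f}, LR(x_obs) = {likelihood_ratio(x_obs):.2f}, reject H0: {reject}")
```

The reduction of "large likelihood ratio" to "large $x$" is exactly what makes the test easy to carry out in practice.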
When a hypothesis is composite, as in e.g. $H_0: \theta = \theta_0$ versus $H_1: \theta > \theta_0$, the Neyman-Pearson lemma cannot be applied because there are 'multiple values under $H_1$'. If one can find a test that is most powerful for every value 'under $H_1$', then that test is said to be 'uniformly most powerful' (UMP).
There is a theorem by Karlin and Rubin that gives sufficient conditions for a likelihood ratio test to be uniformly most powerful. These conditions are fulfilled for many one-sided (univariate) tests.
So the desirable property of the likelihood ratio test lies in the fact that in several cases it has the highest power (although not in all cases).
In most cases the existence of a UMP test cannot be shown, and in many cases (especially multivariate ones) it can be shown that a UMP test does not exist. Nevertheless, in some of these cases likelihood ratio tests are applied because of their desirable properties (in the above context), because they are relatively easy to apply, and sometimes because no other tests can be defined.
As an example, the one-sided z-test (based on the standard normal distribution) is UMP.
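For concreteness, a sketch of that one-sided z-test on a sample (the data and parameter values below are made up for illustration; the test assumes a normal population with known $\sigma$):

```python
# Sketch of the one-sided z-test mentioned above (known sigma), which is
# UMP for H0: mu = mu0 versus H1: mu > mu0.
# The sample and parameter values are hypothetical.
from math import sqrt
from statistics import NormalDist

mu0, sigma, alpha = 0.0, 1.0, 0.05
sample = [0.8, 1.1, -0.2, 0.9, 1.4, 0.3, 0.7, 1.0]  # hypothetical data
n = len(sample)

# z-statistic: standardized distance of the sample mean from mu0.
z = (sum(sample) / n - mu0) * sqrt(n) / sigma
z_crit = NormalDist().inv_cdf(1 - alpha)
reject = z > z_crit
print(f"z = {z:.3f}, critical value = {z_crit:.3f}, reject H0: {reject}")
```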
Intuition behind the likelihood ratio test:
If we want to test $H_0: \theta=\theta_0$ versus $H_1: \theta = \theta_1$, then we need an observation $o$ derived from a sample. Note that this is one single value.
We know that either $H_0$ or $H_1$ is true, so one can compute the likelihood of $o$ when $H_0$ is true (let's call it $L_0$) and also the likelihood of observing $o$ when $H_1$ is true (call it $L_1$).
If $L_1 > L_0$ then we are inclined to believe that ''probably $H_1$ is true''. So if the ratio $\frac{L_1}{L_0} > 1$ we have reasons to believe that $H_1$ is more realistic than $H_0$.
If $\frac{L_1}{L_0}$ were something like $1.001$, then we might conclude that this could be due to chance; so to decide we need a test, and thus the distribution of $\frac{L_1}{L_0}$, which is ... a ratio of two likelihoods.
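The intuition can be put into numbers (the two hypothesized distributions and the observation below are made up for illustration):

```python
# Numeric sketch of the intuition: evaluate L0 and L1 at one observation o
# for two simple normal hypotheses (all values here are illustrative).
from statistics import NormalDist

o = 1.2  # a single observed value
L0 = NormalDist(0, 1).pdf(o)  # likelihood of o under H0: theta = 0
L1 = NormalDist(2, 1).pdf(o)  # likelihood of o under H1: theta = 2
ratio = L1 / L0
print(f"L0 = {L0:.4f}, L1 = {L1:.4f}, L1/L0 = {ratio:.2f}")
```

Here the ratio is only modestly above $1$, which is exactly the situation where we need the distribution of $\frac{L_1}{L_0}$ to decide whether the apparent preference for $H_1$ could be due to chance.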