
$$f_T(t; B, C) = \frac{e^{-t/C} - e^{-t/B}}{C - B},$$
where the mean is $C + B$ and $t > 0$.
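
For an i.i.d. sample $t_1, \ldots, t_n$ this gives the log-likelihood
$$\ell(B, C) = \sum_{i=1}^{n} \log\!\left(e^{-t_i/C} - e^{-t_i/B}\right) - n \log(C - B).$$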

So far I have differentiated the log-likelihood, which gives:

$$\frac{\partial \ell}{\partial B} = \sum_{i=1}^{n} \frac{t_i \, e^{t_i/C}}{B^2\left(e^{t_i/C} - e^{t_i/B}\right)} + \frac{n}{C - B} = 0$$

I have also found a similar expression for $\partial \ell / \partial C$.

I have now been asked to comment on what can be found in the way of sufficient statistics for estimating these parameters, and on why there is no simple way of using maximum likelihood for estimation in this problem. I am simply unsure what to comment on. Any help would be appreciated. Thanks, Rachel

onestop
R.M
    This site supports LaTeX, please reformat your question. – mpiktas Jan 25 '11 at 13:41

2 Answers


OK, your question isn't perfectly clear but maybe I can help a little.

A statistic $T(X)$ is sufficient for a parameter $\theta$ if

$P(X|T(X), \theta) = P(X|T(X))$

In terms of likelihood functions you can verify that this implies

$f(x;\theta) = h(x)g(T(x); \theta)$

for some $h$ and $g$; this result is known by a few different monikers (the factorization theorem/lemma/criterion, sometimes with a name or two attached). This is where @probabilityislogic's comment comes from, although like I said it's just a property of the likelihood function.
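
For instance (a standard illustration, not tied to your particular density): if $x_1, \ldots, x_n$ are i.i.d. exponential with rate $\lambda$, the joint density factors as
$$f(x; \lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \underbrace{1}_{h(x)} \cdot \underbrace{\lambda^n e^{-\lambda T(x)}}_{g(T(x);\, \lambda)}, \qquad T(x) = \sum_{i=1}^{n} x_i,$$
so the single number $\sum_i x_i$ is sufficient for $\lambda$.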

There are often a lot of different sufficient statistics (in particular, take $h=1$ and $g=f$, where $T(X)=X$ is just the entire dataset). Since the goal is to find a particular way to reduce the data without losing information, this leads into questions of minimal/complete sufficient statistics, etc. It's not clear what you need for your question, so I'll leave off there.

In terms of the MLE, your notation is a little confusing to me, so I'll make a couple of general comments. What problems can arise in finding the MLE? It might not have a closed form, which is less a problem than a complication. It can fail to be unique, occur at the edge of the parameter space, be infinite, etc. You need to at least define the parameter space, which you haven't done in your problem statement so far as I can tell.
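
To make the closed-form point concrete, here is a minimal numerical sketch (my addition, not part of the original answer) that maximises the log-likelihood of the density in the question with `scipy.optimize.minimize`. The true values $B = 1$, $C = 3$, the sample size, and the starting point are illustrative assumptions; since the density is that of a sum of two independent exponentials with means $B$ and $C$, simulation is straightforward.

```python
# Sketch only: numerically maximise the log-likelihood of
# f(t; B, C) = (exp(-t/C) - exp(-t/B)) / (C - B),  0 < B < C, t > 0,
# since the score equations have no closed-form solution.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t):
    B, C = params
    if not (0 < B < C):          # enforce the parameter space
        return np.inf
    dens = (np.exp(-t / C) - np.exp(-t / B)) / (C - B)
    if np.any(dens <= 0):        # guard against numerical underflow
        return np.inf
    return -np.sum(np.log(dens))

rng = np.random.default_rng(0)
# Simulated data: sum of two independent exponentials with means 1 and 3
# (illustrative values), so the true (B, C) is (1, 3) and the mean is B + C = 4.
t = rng.exponential(scale=1.0, size=500) + rng.exponential(scale=3.0, size=500)

fit = minimize(neg_log_lik, x0=[0.5, 2.0], args=(t,), method="Nelder-Mead")
print(fit.x)  # numerical estimates of (B, C); no closed form is available
```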

Michael Lew
JMS

The "failsafe" way to find a sufficient statistic in just about any problem: Calculate the Bayesian posterior distribution (up to prop constant) using a uniform prior. If a sufficient statistic exists, this method will tell you what it is. So basically, strip all multiplicative factors (those factors which do not involve your parameters, but may involve functions of the data) from your likelihood function. I would suggest that the "sum function" stuff is your sufficient statistic (although the notation was not clear at the time I answered this question).

NOTE: the use of a uniform prior may make the posterior improper, but it will still show you which functions of the data are sufficient for your problem.
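
As an illustration (a sketch applying this recipe to the density in the question; it was not part of the original answer), the likelihood here is
$$L(B, C) = (C - B)^{-n} \prod_{i=1}^{n} \left(e^{-t_i/C} - e^{-t_i/B}\right),$$
and there are no parameter-free multiplicative factors to strip. The product does not collapse into a function of a small, fixed number of summaries such as $\sum_i t_i$, which suggests that nothing smaller than the full sample (equivalently, its order statistics) is sufficient, and is also why the score equations cannot be solved in closed form.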

probabilityislogic
  • 1
    This hasn't got anything to do with Bayes; it's just the factorization lemma and is a direct consequence of the definition of sufficiency. Bringing Bayes into the mix just complicates things. – JMS Feb 25 '11 at 01:53
  • @JMS - Yes I do understand your "factorisation" theorem argument. What I am saying is that using Bayes Theorem will 1) tell you if a sufficient statistic of reduced dimensionality exists, and 2) what the sufficient statistic is for your problem. The use of "sufficient statistics" is basically a way to bring "frequentist" statistics closer to "Bayesian" statistics without admitting that one is doing so. They also reduce the calculations one has to perform in an analysis. Sufficiency is also closely related to the maximum entropy method (aka "ultimate inference"). – probabilityislogic Feb 25 '11 at 08:37
  • "What I am saying is that using Bayes Theorem will 1) tell you if a sufficient statistic of reduced dimensionality exists, and 2) what the sufficient statistic is for your problem." That has nothing to do with Bayes theorem, which is my point. Sufficiency is a property of the likelihood function and exists independently of whatever mode of inference you prefer. – JMS Feb 26 '11 at 03:39
  • Incidentally, the "factorisation theorem argument" isn't mine but a result which goes back to Fisher & Neyman and is in any intro stat inference text. – JMS Feb 26 '11 at 03:55
  • @JMS - I'm not disputing that you can find sufficient statistics in ways other than Bayes theorem. Fisher was, I believe, the one who actually coined the term "sufficiency". But what I'm saying is that Bayes theorem is one of the easiest ways to *find* a sufficient statistic. I would have a small bet that most "intro stat texts" don't make much mention of Bayesian analysis, which is an essential tool for any good statistician. – probabilityislogic Feb 26 '11 at 04:07
  • 2
    You're missing my point entirely. Whatever mode of inference you prefer, sufficiency is simply a property of the likelihood function. Bayes theorem doesn't add anything at all. You *aren't* using Bayes theorem when you factor a likelihood! Nor does it add to the results that follow about minimal/complete/ancillary statistics. Deriving sufficient statistics doesn't require you to accept the principles of Bayesian inference; in fact it's intimately related to the ideas behind MLE. – JMS Feb 27 '11 at 05:02
  • 1
    Incidentally, having read many of those textbooks on statistical inference I'd take your bet. Casella & Berger, Bickel & Doksum, Lehman & Casella, etc all at least discuss Bayesian parameter estimation if not some decision theory. Certainly it doesn't get the treatment it deserves but it's unreasonable to expect an introductory text to go too deep - and you can't become a statistician just on the back of them anyway. I'm an avowed Bayesian myself, but a solid understanding of classical statistics is important. – JMS Feb 27 '11 at 05:12