Questions tagged [parameter-estimation]

Questions about parameter estimation. Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured/empirical data that has a random component. (Def: http://en.m.wikipedia.org/wiki/Estimation_theory)

The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
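A minimal sketch of this setup (all values illustrative): an unknown parameter shifts the distribution of the measurements, and an estimator recovers it from the data alone. Here the parameter is the mean of a Gaussian and the estimator is the sample mean.

```python
import random

random.seed(0)
mu_true = 3.0  # the unknown parameter: it shifts the distribution of the data
data = [random.gauss(mu_true, 1.0) for _ in range(10_000)]

# The sample mean is an estimator of mu: a function of the measurements only.
mu_hat = sum(data) / len(data)
print(mu_hat)  # close to 3.0 for this many samples
```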

1888 questions
74
votes
3 answers

Intuitive explanation of a definition of the Fisher information

I'm studying statistics. When I read the textbook about Fisher Information, I couldn't understand why the Fisher Information is defined like this: $$I(\theta)=E_\theta\left[-\frac{\partial^2 }{\partial \theta^2}\ln P(\theta;X)\right].$$ Could anyone…
maple
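A quick numerical check of the definition in this excerpt, for a Bernoulli$(\theta)$ model where the closed form $I(\theta)=1/(\theta(1-\theta))$ is known; the expectation of the negative second derivative of the log-likelihood reproduces it exactly.

```python
theta = 0.3

# Second derivative of ln P(theta; x) = x*ln(theta) + (1-x)*ln(1-theta)
# for a single Bernoulli observation x in {0, 1}.
def d2_loglik(x, t):
    return -x / t**2 - (1 - x) / (1 - t)**2

# Fisher information: expectation over X ~ Bernoulli(theta) of -d2_loglik.
fisher = -(theta * d2_loglik(1, theta) + (1 - theta) * d2_loglik(0, theta))
closed_form = 1 / (theta * (1 - theta))
print(fisher, closed_form)  # the two agree
```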
32
votes
4 answers

Maximum Likelihood Estimator of parameters of multinomial distribution

Suppose that 50 measuring scales made by a machine are selected at random from the production of the machine and their lengths and widths are measured. It was found that 45 had both measurements within the tolerance limits, 2 had satisfactory length…
31
votes
2 answers

Difference between logarithm of an expectation value and expectation value of a logarithm

Assume I have an always-positive random variable $X$, $X \in \mathbb{R}$, $X > 0$. I am interested in the difference between the following two expectation values: $E \left[ \ln X \right]$ and $\ln E \left[ X \right]$. Is one maybe always a…
Matthias
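A small simulation of the comparison asked about in this excerpt: since $\ln$ is concave, Jensen's inequality gives $E[\ln X] \le \ln E[X]$ for any positive $X$ (the sample data here are illustrative).

```python
import math
import random

random.seed(1)
# Any positive random variable works; shifted exponential draws are one choice.
xs = [random.expovariate(1.0) + 0.1 for _ in range(100_000)]

e_ln = sum(math.log(x) for x in xs) / len(xs)  # E[ln X] (sample version)
ln_e = math.log(sum(xs) / len(xs))             # ln E[X] (sample version)
print(e_ln <= ln_e)  # True: ln is concave, so Jensen's inequality applies
```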
20
votes
4 answers

Maximum likelihood estimation of $a,b$ for a uniform distribution on $[a,b]$

I'm supposed to calculate the MLEs for $a$ and $b$ from a random sample $(X_1,...,X_n)$ drawn from a uniform distribution on $[a,b]$. But the likelihood function $\mathcal{L}(a,b)=\frac{1}{(b-a)^n}$ is constant, so how do I find a maximum? Would…
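The resolution of the puzzle in this excerpt, sketched numerically (illustrative values): $(b-a)^{-n}$ is only the likelihood where it is nonzero, i.e. when $a \le \min_i X_i$ and $b \ge \max_i X_i$, and it increases as $b-a$ shrinks, so the MLE pins $a$ and $b$ to the sample extremes.

```python
import random

random.seed(2)
a_true, b_true = 2.0, 5.0
xs = [random.uniform(a_true, b_true) for _ in range(1000)]

# The likelihood is zero unless [a, b] covers every observation, and grows
# as the interval shrinks, so the maximizers are the sample min and max.
a_hat, b_hat = min(xs), max(xs)
print(a_hat, b_hat)  # slightly inside (2.0, 5.0)
```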
17
votes
1 answer

Is a probability density function necessarily a $L^2$ function?

If a nonnegative continuous real-valued function $f$ is integrable over $\mathbb{R}$ with $$\int_\mathbb{R} f\,\mathrm{d}x = 1,$$ does it hold that $$\int_\mathbb{R} f^2 \,\mathrm{d}x<\infty?$$ Motivation: I am wondering if the mean squared error…
newbie
15
votes
1 answer

MLE for Uniform $(0,\theta)$

I am a bit confused about the derivation of the MLE for Uniform$(0,\theta)$. I understand that $L(\theta)={\theta}^{-n}$ is a decreasing function and that to find the MLE we want to maximize the likelihood function. What is confusing me is that if a function…
12
votes
2 answers

simple example of recursive least squares (RLS)

I'm vaguely familiar with recursive least squares algorithms; all the information about them I can find is in the general form with vector parameters and measurements. Can someone point me towards a very simple example with numerical data, e.g. $y =…
Jason S
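A sketch of the kind of numerical example this excerpt asks for: scalar recursive least squares fitting $y = a x + \text{noise}$ one sample at a time (all values here are made up for illustration, not taken from the question).

```python
import random

random.seed(3)
a_true = 2.0  # parameter to recover

a_hat, P = 0.0, 1000.0  # initial estimate and large initial "covariance"
for _ in range(200):
    x = random.uniform(-1, 1)
    y = a_true * x + random.gauss(0, 0.1)  # noisy measurement
    K = P * x / (1.0 + x * x * P)   # RLS gain (forgetting factor = 1)
    a_hat += K * (y - a_hat * x)    # correct the estimate with the innovation
    P -= K * x * P                  # shrink the uncertainty
print(a_hat)  # converges close to 2.0
```

Each iteration needs only the previous estimate and $P$, which is the point of the recursive form: no growing matrix of past measurements is kept.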
11
votes
2 answers

You see a route 14 bus on the moon. What is the most likely number of bus routes on the moon?

This question was asked on a forum and while many argued that the answer is 14 (since the probability of you seeing bus 14 is maximum in this case), I argued against it that they were working backwards. My claim is that this question is invalid as…
10
votes
1 answer

Estimating Parameter - What is the qualitative difference between MLE fitting and Least Squares CDF fitting?

Given a parametric pdf $f(x;\lambda)$ and a set of data $\{ x_k \}_{k=1}^n$, here are two ways of formulating a problem of selecting an optimal parameter vector $\lambda^*$ to fit to the data. The first is maximum likelihood estimation (MLE):…
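A side-by-side sketch of the two formulations in this excerpt, on exponential data (a hypothetical example; the grid search and the Hazen plotting position $(k+0.5)/n$ for the empirical CDF are my own choices, not from the question).

```python
import math
import random

random.seed(6)
lam_true = 1.5
xs = sorted(random.expovariate(lam_true) for _ in range(2000))
n = len(xs)

# MLE for the exponential pdf f(x; lam) = lam * exp(-lam * x): lam = 1/mean.
lam_mle = n / sum(xs)

# Least-squares CDF fit: minimize squared gaps between the model CDF
# 1 - exp(-lam*x) and the empirical CDF, here via a crude grid search.
def sse(lam):
    return sum(((1 - math.exp(-lam * x)) - (k + 0.5) / n) ** 2
               for k, x in enumerate(xs))

lam_ls = min((0.5 + 0.01 * i for i in range(300)), key=sse)
print(lam_mle, lam_ls)  # both near 1.5 on this sample
```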
9
votes
2 answers

Convergence Rate of Sample Average Estimator

Let $X_1, X_2,\cdots$ be i.i.d. random variables with $E(X_1) = \mu$, $\mathrm{Var}(X_1) = \sigma^2 > 0$, and let $\bar{X}_n = {X_1 + X_2 + \cdots + X_n \over n}$ be the sample average estimator. Is there a way to calculate how many samples are needed to obtain a…
9
votes
3 answers

Why should Gaussian noise have fractal dimension of 1.5?

In a paper I'm trying to understand, the following time series is generated as "simulated data": $$Y(i)=\sum_{j=1}^{1000+i}Z(j) \:\:\: ; \:\:\: (i=1,2,\ldots,N)$$ where $Z(j)$ is a Gaussian noise with mean $0$ and standard deviation $1$. The paper…
8
votes
1 answer

Minimum variance unbiased estimator for scale parameter of a certain gamma distribution

Let $X_1, X_2, ..., X_n$ be a random sample from a distribution with p.d.f. $$f(x;\theta)=\theta^2 x e^{-x\theta};\quad 0<x<\infty,\ \theta>0.$$ Obtain the minimum variance unbiased estimator of $\theta$ and examine whether it is attained. MY WORK: Using MLE i…
8
votes
1 answer

MLE (Maximum Likelihood Estimator) of Beta Distribution

Let $X_1,\ldots,X_n$ be i.i.d. random variables with a common density function given by: $f(x\mid\theta)=\theta x^{\theta-1}$ for $x\in[0,1]$ and $\theta>0$. Clearly this is a $\operatorname{BETA}(\theta,1)$ distribution. Calculate the maximum…
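This excerpt's MLE has a closed form worth checking numerically: $\ln L(\theta) = n\ln\theta + (\theta-1)\sum_i \ln x_i$, and setting the derivative $n/\theta + \sum_i \ln x_i$ to zero gives $\hat{\theta} = -n/\sum_i \ln x_i$. A simulation sketch (illustrative parameter value):

```python
import math
import random

random.seed(5)
theta_true = 2.5
# Sample from Beta(theta, 1): its CDF on [0, 1] is x^theta, so inverse-CDF
# sampling gives X = U^(1/theta) for U ~ Uniform(0, 1).
xs = [random.random() ** (1 / theta_true) for _ in range(50_000)]

# Closed-form MLE from the first-order condition n/theta + sum(ln x_i) = 0.
theta_hat = -len(xs) / sum(math.log(x) for x in xs)
print(theta_hat)  # close to 2.5
```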
8
votes
1 answer

Finding UMVUE of $\theta$ when the underlying distribution is exponential distribution

Hi, I'm solving some exercise problems in my text, "A Course in Mathematical Statistics". I'm in the chapter "Point estimation" now, and I want to find a UMVUE of $\theta$, where $X_1 ,...,X_n$ are i.i.d. random variables with the p.d.f. $f(x;…
7
votes
1 answer

Improper Uniform Prior Distribution

In the Bayesian method, choosing the prior distribution is an important step. When choosing a prior, we use our prior knowledge to decide which distribution is best for the problem. Holding to Laplace…