Questions tagged [nadaraya-watson]
18 questions
7
votes
1 answer
Nadaraya-Watson Optimal Bandwidth
I am currently working on a statistical project where I need to estimate a conditional expectation $E[Y|X=x_i]$ using the Nadaraya-Watson estimator. To do that, I have the sample $(x_1,y_1),\ldots,(x_n,y_n)$, where $n=14$, and I have chosen the…

JJFM
- 81
- 3
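Not part of the thread, but the estimator under discussion can be sketched in a few lines. The Gaussian kernel and the bandwidth `h=0.2` are arbitrary illustration choices, and the toy data stands in for the asker's $n=14$ sample:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of E[Y | X = x0] with a Gaussian kernel.

    x0 : point(s) at which to evaluate the estimator
    x, y : sample arrays of equal length
    h : bandwidth (assumed > 0)
    """
    x0 = np.atleast_1d(x0)
    # Kernel weights K((x_i - x0)/h); the Gaussian normalizing constant
    # cancels in the ratio below, so it is omitted.
    w = np.exp(-0.5 * ((x[None, :] - x0[:, None]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 14)          # n = 14, as in the question
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(14)
m_hat = nadaraya_watson(0.5, x, y, h=0.2)
```

With so few observations the choice of `h` dominates the fit, which is exactly why the bandwidth question above is hard.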
6
votes
2 answers
How to choose appropriate bandwidth for kernel regression?
I'm trying to understand how to choose an appropriate bandwidth for kernel regression. Note that this is NOT about kernel density estimation (unless someone can convince me that the same techniques can be used).
Here's my thinking on this: The…

makansij
- 1,919
- 5
- 27
- 38
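One standard selector for kernel *regression* (as opposed to density estimation) is leave-one-out cross-validation on the prediction error. A minimal sketch, with an ad hoc candidate grid and toy data:

```python
import numpy as np

def loocv_score(h, x, y):
    """Leave-one-out CV error of the Nadaraya-Watson fit for bandwidth h."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)          # leave each point out of its own fit
    y_hat = (w @ y) / w.sum(axis=1)
    return np.mean((y - y_hat) ** 2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(100)

grid = np.linspace(0.01, 0.5, 50)     # candidate bandwidths (ad hoc range)
scores = [loocv_score(h, x, y) for h in grid]
h_star = grid[int(np.argmin(scores))]
```

The same code answers the "NOT about density estimation" concern in the excerpt: the criterion being minimized is out-of-sample squared prediction error, not a density fit.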
3
votes
0 answers
Nonparametric Quantile Regression for AR(1)-ARCH(1) process
I would like to estimate the conditional scale function $(\sigma_\tau(X_t))$ in a QAR-QARCH model represented by:
\begin{equation}
Y_t = \mu_\tau(X_t) + \sigma_\tau(X_t)\epsilon_t,\, t = 1,2,\ldots
\end{equation}
where $\epsilon$ is zero (0)…
2
votes
1 answer
Nadaraya-Watson regression: increasing bandwidth
I'm working with the Nadaraya-Watson estimator and read the book by Wand and Jones (Kernel Smoothing, 1995) as an introduction. On page 117 (for those who have the book) it says that with a larger bandwidth $h$ the estimate tends towards the…

To Mate
- 87
- 3
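The limiting behaviour Wand and Jones describe is easy to check numerically: as $h \to \infty$ the kernel weights equalize and the fit flattens to the sample mean $\bar{y}$. A toy verification (data and bandwidth chosen only for illustration):

```python
import numpy as np

def nw(x0, x, y, h):
    # Local-constant (Nadaraya-Watson) fit with a Gaussian kernel.
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return w @ y / w.sum()

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 50)
y = x ** 2 + 0.1 * rng.standard_normal(50)

# With h huge relative to the range of x, every observation gets nearly
# the same weight, so the estimate collapses to the sample mean of y.
est_large_h = nw(0.3, x, y, h=1e6)
gap = abs(est_large_h - y.mean())
```

Here `gap` is negligible, matching the book's claim that the estimate tends to $\bar{y}$ as $h$ grows.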
2
votes
1 answer
What is Nadaraya-Watson Kernel Regression Estimator for Multivariate Response?
Given a regression setting with covariates $X_{n \times m}$ and response $Y_{n \times p}$ where $p>1$, i.e. the responses are vector-valued or multivariate, is there a Nadaraya-Watson estimator for kernel regression in this setting?
This boils down…

hearse
- 2,355
- 1
- 17
- 30
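One natural construction (a sketch, not necessarily what the excerpt's "This boils down…" continues with): the kernel weights depend only on $X$, so the same scalar weights can smooth each coordinate of the response vector:

```python
import numpy as np

def nw_multivariate(x0, X, Y, h):
    """NW estimate of E[Y | X = x0] for vector-valued Y (n x p).

    The weights are scalars built from distances in covariate space,
    so every response coordinate is smoothed with the same weights.
    """
    d = np.linalg.norm(X - x0, axis=1)   # distances ||X_i - x0||
    w = np.exp(-0.5 * (d / h) ** 2)      # Gaussian product kernel, up to constants
    return (w @ Y) / w.sum()             # shape (p,)

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))        # n = 200, m = 2 covariates
Y = np.column_stack([X[:, 0], X[:, 1] ** 2, X.sum(axis=1)])  # p = 3 responses
est = nw_multivariate(np.zeros(2), X, Y, h=0.5)
```

This is just the scalar estimator applied componentwise, which is why the multivariate-response case adds no new weighting machinery.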
1
vote
0 answers
In kernel regression, what are the common theoretical motivations for using a kernel that is Lipschitz continuous?
I read a few papers on Nadaraya-Watson kernel regression in which I saw assumptions requiring the kernel function to be Lipschitz continuous, without explanation (and without citation of such an assumption in the proof), so I'm wondering what are…

T34driver
- 1,608
- 5
- 11
1
vote
1 answer
Suppose $\widehat{m}'(x)$ is the derivative of Nadaraya-Watson estimator, can I get its uniform rate from the rate for its numerator and denominator?
Suppose $E(Y|x)=m(x)$ is the regression function that is twice differentiable, $f(x)$ is the density of $X$ that is also twice differentiable. Suppose $Y_i=m(X_i)+e_i$.
$m'(x)$ is the derivative of regression function.
We have i.i.d. data…

T34driver
- 1,608
- 5
- 11
1
vote
1 answer
Why does taking an average make convergence to zero faster?
Let $f(x,y)$ be some density, and let the leave-one-out Nadaraya-Watson estimator $\widehat{f}_{-i}(x,y)$ be defined as follows:
$\widehat{f}_{-i}(x,y)=\frac{1}{(n-1)h^2}\sum_{j=1,j\neq i}^nK(\frac{(X_j,Y_j)-(x,y)}{h})$, where $K(\cdot,\cdot)$ is…

T34driver
- 1,608
- 5
- 11
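A toy illustration of the rate phenomenon behind the question (simplified to i.i.d. zero-mean terms, which is an assumption, not the exact leave-one-out setting above): a single term stays $O_p(1)$, while the average of $n$ independent such terms shrinks at the faster rate $O_p(n^{-1/2})$ because independent fluctuations cancel.

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 10_000, 200
Z = rng.standard_normal((reps, n))       # reps Monte Carlo draws of n terms

single = np.abs(Z[:, 0]).mean()          # typical size of one term: O_p(1)
averaged = np.abs(Z.mean(axis=1)).mean() # typical size of the average: O_p(n^{-1/2})
```

With `n = 10_000`, the average is roughly a hundred times smaller than a single term, matching the $n^{-1/2}$ rate.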
1
vote
0 answers
Can we apply the Nadaraya–Watson kernel regression estimator when $X$ is discrete?
Suppose $Y_i=g(X_i)+e_i$ with $E(e_i|X_i)=0$, $g(\cdot)$ being an unknown function and $X_i\in S=\{1,2,3,4\}$ with equal probability of taking each value. We want to estimate $g(x)$ using data $\{Y_i,X_i\}_{i=1}^{n}$. Can we estimate $g(x)$ with the…

T34driver
- 1,608
- 5
- 11
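A quick numerical check (not from the thread): when the bandwidth is much smaller than the gap between the support points of a discrete $X$, the NW estimator collapses to the cell mean of $Y$ at $x$, which is the natural estimator in this setting anyway.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.integers(1, 5, size=500).astype(float)   # X in {1, 2, 3, 4}
Y = 2.0 * X + rng.standard_normal(500)

def nw(x0, X, Y, h):
    # Local-constant fit with a Gaussian kernel.
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return w @ Y / w.sum()

# With h far below the gap between support points (here 1), the weights
# concentrate entirely on observations with X_i == x0.
est = nw(2.0, X, Y, h=0.05)
cell_mean = Y[X == 2.0].mean()
```

So the estimator is applicable, but for small $h$ it adds nothing over the sample mean within each cell.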
1
vote
0 answers
Derive an expression for the decision rule of a binary classifier
I want to derive the decision rule for the local constant logistic regression:
Consider the log-likelihood for the GLM (generalized linear model)
\begin{equation}
l( \beta_{0}, \beta_{1})=…

Stochastic
- 799
- 1
- 6
- 28
0
votes
0 answers
Nadaraya-Watson regression alternative for binary outcome
I am looking for pointers as to what would be the non-parametric equivalent of Nadaraya-Watson regression when modelling a binary outcome. I have been googling and ended up with Generalized Additive Models, but I think these impose functional forms?

Papayapap
- 211
- 2
- 15
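One relevant observation (a sketch under an assumed logistic data-generating process, chosen only for illustration): the plain NW fit applied to a 0/1 outcome is a weighted average of zeros and ones, so it is automatically a valid estimate of $P(Y=1 \mid X=x)$ in $[0,1]$, with no functional form imposed.

```python
import numpy as np

def nw(x0, X, Y, h):
    # Local-constant fit; with Y in {0, 1} this estimates P(Y = 1 | X = x0).
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return w @ Y / w.sum()

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, 400)
p = 1.0 / (1.0 + np.exp(-2.0 * X))           # true P(Y=1|X), logistic for illustration
Y = (rng.uniform(size=400) < p).astype(float)

# A weighted average of 0/1 outcomes always lies in [0, 1].
p_hat = np.array([nw(x0, X, Y, 0.3) for x0 in (-1.0, 0.0, 1.0)])
```

Local-likelihood (local logistic) fits are the other common route; the sketch above is the simplest local-constant version.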
0
votes
0 answers
Why is the Nadaraya-Watson estimator unbiased?
Say I have the model $Y_{i} = m(X_{i}) + \epsilon_{i}$, where $\epsilon_{i}$ and $X_{i}$ are two mutually independent i.i.d. sequences.
Then, how can I show that the Nadaraya Watson estimator is unbiased for this model, regardless of the bandwidth? And what…

user34031
- 1
- 1
0
votes
1 answer
The nonparametric estimation in generalized regression model
Let $Y_t \in \mathbb{R}$ be a response variable and $X_t$ a $d$-dimensional explanatory variable. Assume we observe the process $(X_1, Y_1), \ldots, (X_n, Y_n)$.
\begin{equation}
Y_{t} = \mu(X_{t})+\sigma(X_{t})\varepsilon_{t}, \quad…

香结丁
- 11
- 3
0
votes
1 answer
About the validity of two statements
Let $f(x)$ be some smooth univariate density, and let the leave-one-out Nadaraya-Watson estimator $\widehat{f}_{-i}(x)$ be defined as follows:
$\widehat{f}_{-i}(x)=\frac{1}{(n-1)h}\sum_{j=1,j\neq i}^nK(\frac{X_j-x}{h})$, where $K(\cdot)$ is the…

T34driver
- 1,608
- 5
- 11
0
votes
0 answers
Optimal grid for Nadaraya-Watson CV bandwidth selection
Is there any rule of thumb or optimization technique for choosing the number of grid points in kernel regression?
I am doing Nadaraya-Watson on 10 years of data (2500 daily observations) on a swap rate. While performing cross-validation for the optimal bandwidth…

Alex
- 1
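One common heuristic (an assumption, not an established rule, and `refine` is a hypothetical helper): search a modest log-spaced coarse grid spanning roughly the spacing-to-range scale of the covariate, then zoom in around the coarse minimizer for a finer second pass. This keeps the number of CV evaluations small even with 2500 observations.

```python
import numpy as np

x = np.linspace(0, 10, 2500)                      # stand-in for 2500 daily observations
span = x.max() - x.min()

# Coarse pass: ~25 log-spaced candidate bandwidths between the typical
# point spacing (span / n) and the full range of the data.
coarse = np.geomspace(span / len(x), span, 25)

def refine(grid, i, points=9):
    """Zoom in around the i-th coarse grid point for a finer second pass."""
    lo = grid[max(i - 1, 0)]
    hi = grid[min(i + 1, len(grid) - 1)]
    return np.geomspace(lo, hi, points)

# In practice i would be the argmin of the coarse CV scores; 10 is a placeholder.
fine = refine(coarse, 10)
```

Because the CV criterion is usually smooth in $h$, a coarse-then-fine search on a log scale typically loses little compared with a dense grid while cutting the cost substantially.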