aws.gaussian.Rd
The function implements a semiparametric adaptive weights smoothing algorithm designed for regression with additive heteroskedastic Gaussian noise. The noise variance is assumed to depend on the value of the regression function; this dependence is modeled by a global parametric (polynomial) model.
aws.gaussian(y, hmax = NULL, hpre = NULL, aws = TRUE, memory = FALSE,
varmodel = "Constant", lkern = "Triangle",
aggkern = "Uniform", scorr = 0, mask=NULL, ladjust = 1,
wghts = NULL, u = NULL, varprop = 0.1, graph = FALSE, demo = FALSE)
y | array of observed values (regression function plus noise) on a 1D, 2D or 3D grid |
---|---|
hmax | maximal bandwidth |
hpre | bandwidth of an initial nonadaptive estimate; residuals from this estimate are used to obtain a first estimate of the variance model parameters |
aws | logical: if TRUE structural adaptation (AWS) is used. |
memory | logical: if TRUE stagewise aggregation is used as an additional adaptation scheme. |
varmodel | Implemented are "Constant", "Linear" and "Quadratic", referring to a polynomial model of degree 0 to 2. |
lkern | character: location kernel, either "Triangle", "Plateau", "Quadratic", "Cubic" or "Gaussian". The default "Triangle" is equivalent to using an Epanechnikov kernel, "Quadratic" and "Cubic" refer to a Bi-weight and Tri-weight kernel, see Fan and Gijbels (1996). "Gaussian" is a truncated (compact support) Gaussian kernel. This is included for comparisons only and should be avoided due to its large computational costs. |
aggkern | character: kernel used in stagewise aggregation, either "Triangle" or "Uniform" |
scorr | vector specifying a first order correlation of the noise for each coordinate direction; defaults to 0 (no correlation) |
mask | Restrict smoothing to points where mask==TRUE; the default NULL corresponds to smoothing at all grid points. |
ladjust | factor to increase the default value of lambda |
wghts | diagonal elements of a weight matrix used to account for different distances between grid points in different coordinate directions, i.e. to define a more appropriate metric in the design space |
u | a "true" value of the regression function; may be provided to report risks at each iteration and can be used to test the propagation condition with u=0 |
varprop | Small variance estimates are replaced by varprop times the mean variance. |
graph | If graph=TRUE intermediate results are illustrated after each iteration step; this slows down the calculations. |
demo | If demo=TRUE the function pauses after each iteration step. |
The function implements the propagation-separation approach to nonparametric smoothing (formerly introduced as adaptive weights smoothing) for varying coefficient likelihood models on a 1D, 2D or 3D grid. In contrast to function aws, observations are assumed to follow a Gaussian distribution with variance depending on the mean according to a specified global variance model. Setting aws=FALSE provides the stagewise aggregation procedure from Belomestny and Spokoiny (2004). Setting memory=FALSE provides adaptive weights smoothing without control by stagewise aggregation.
The essential parameter in the procedure is a critical value lambda. This parameter has an interpretation as a significance level of a test for equivalence of two local parameter estimates. Values set internally are chosen to fulfil a propagation condition, i.e. in case of a constant (global) parameter value and large hmax the procedure provides, with high probability, the global (parametric) estimate. More formally, we require the parameter lambda to be specified such that \(\mathbf{E}\,|\hat{\theta}^k - \theta| \le (1+\alpha)\,\mathbf{E}\,|\tilde{\theta}^k - \theta|\), where \(\hat{\theta}^k\) is the AWS estimate in step k and \(\tilde{\theta}^k\) is the corresponding nonadaptive estimate using the same bandwidth (lambda=Inf).
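A hedged sketch of how this condition might be checked numerically for a homogeneous (constant) signal by supplying the truth via u; the simulated data, the use of a very large ladjust as a stand-in for the nonadaptive estimate (lambda=Inf is not directly exposed), and the slot name mae (see the Value section below) are assumptions made for illustration.

library(aws)
n <- 1000
theta0 <- 2
y <- rnorm(n, mean = theta0, sd = 1)            # homogeneous signal plus noise
fit.adapt <- aws.gaussian(y, hmax = 100, u = rep(theta0, n))
fit.nonad <- aws.gaussian(y, hmax = 100, u = rep(theta0, n),
                          ladjust = 100)        # nearly nonadaptive reference (assumption)
fit.adapt@mae                                   # risks per iteration, adaptive
fit.nonad@mae                                   # risks per iteration, (almost) nonadaptive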
The value of lambda can be adjusted by specifying the factor ladjust. Values ladjust>1 lead to a less effective adaptation, while ladjust<<1 may lead to random segmentation of regions that are homogeneous with respect to a constant model. An illustrative sketch follows.
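A brief sketch of the qualitative effect of ladjust on a piecewise constant 2D image; the image, noise level and bandwidth are assumptions chosen only for illustration.

library(aws)
f <- matrix(0, 64, 64)
f[17:48, 17:48] <- 1                                    # hypothetical piecewise constant image
y <- f + rnorm(length(f), sd = 0.5)
fit.default <- aws.gaussian(y, hmax = 8)                # default adaptation
fit.weak    <- aws.gaussian(y, hmax = 8, ladjust = 5)   # less effective adaptation
fit.strong  <- aws.gaussian(y, hmax = 8, ladjust = 0.2) # may segment homogeneous regions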
The numerical complexity of the procedure is mainly determined by hmax. The number of iterations is approximately Const*d*log(hmax)/log(1.25), with d being the dimension of y and the constant depending on the kernel lkern. Complexity in each iteration step is Const*hakt*n, with hakt being the actual bandwidth in the iteration step and n the number of design points. hmax determines the maximal possible variance reduction.
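A back-of-the-envelope illustration of the iteration count formula above, assuming the kernel-dependent constant equals 1 (an assumption made only to obtain a rough number).

d <- 2                               # 2D image
hmax <- 20
ceiling(d * log(hmax) / log(1.25))   # approximately 27 iterations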
returns an object of class aws with slots
y | y |
---|---|
dy | dim(y) |
x | numeric(0) |
ni | integer(0) |
mask | logical(0) |
theta | estimates of the regression function, length: length(y) |
mae | mean absolute error for each iteration step if u was specified, numeric(0) else |
var | approximate variance of the estimates of the regression function; note that this does not reflect variability due to randomness of weights |
xmin | numeric(0) |
xmax | numeric(0) |
wghts | numeric(0), ratio of distances wghts[-1]/wghts[1] |
degree | 0 |
hmax | effective hmax |
sigma2 | provided or estimated error variance |
scorr | scorr |
family | "Gaussian" |
shape | NULL |
lkern | integer code for lkern, 1="Plateau", 2="Triangle", 3="Quadratic", 4="Cubic", 5="Gaussian" |
lambda | effective value of lambda |
ladjust | effective value of ladjust |
aws | aws |
memory | memory |
homogen | homogen |
earlystop | FALSE |
varmodel | varmodel |
vcoef | estimated parameters of the variance model |
call | the arguments of the call to aws.gaussian |
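Since the result is an S4 object, individual slots can be read with the @ operator. The sketch below assumes the slot names as listed above and a fitted object from one of the earlier sketches.

fit <- aws.gaussian(y, hmax = 8)      # y as simulated in the sketches above
str(fit@theta)                        # smoothed estimates, same dimension as y
fit@vcoef                             # estimated coefficients of the variance model
fit@sigma2                            # provided or estimated error variance
fit@hmax                              # effective maximal bandwidth used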
Joerg Polzehl, Vladimir Spokoiny, Adaptive Weights Smoothing with applications to image restoration, J. R. Stat. Soc. Ser. B Stat. Methodol. 62 (2000), pp. 335--354.
Joerg Polzehl, Vladimir Spokoiny, Propagation-separation approach for local likelihood estimation, Probab. Theory Related Fields 135(3) (2006), pp. 335--362.
Joerg Polzehl, Vladimir Spokoiny, Structural adaptive smoothing by propagation-separation methods, in: Chen, C.; Haerdle, W.; Unwin, A. (eds.), Handbook of Data Visualization, Springer-Verlag, 2008, pp. 471--492.
Joerg Polzehl, polzehl@wias-berlin.de, http://www.wias-berlin.de/people/polzehl/