
Commit 571cf82

export of randomized and man page

1 parent e90a8f1

3 files changed: +162 -2 lines changed

selectiveInference/NAMESPACE

Lines changed: 2 additions & 1 deletion
@@ -14,7 +14,8 @@ export(lar,fs,
        TG.pvalue,
        TG.limits,
        TG.interval,
-       debiasingMatrix
+       debiasingMatrix,
+       randomizedLassoInf
 )

 S3method("coef", "lar")

selectiveInference/man/fixedLassoInf.Rd

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ is not allowed.

 Note that the coefficients and standard errors reported are unregularized.
 Eg for the Gaussian, they are the usual least squares estimates and standard errors
-for the model fit to the actice set from the lasso.
+for the model fit to the active set from the lasso.
 }
 \value{
 \item{type}{Type of coefficients tested (partial or full)}
selectiveInference/man/randomizedLassoInf.Rd

Lines changed: 159 additions & 0 deletions

@@ -0,0 +1,159 @@
\name{randomizedLassoInf}
\alias{randomizedLassoInf}

\title{
Inference for the randomized lasso, with a fixed lambda
}
\description{
Compute p-values and confidence intervals based on selecting
an active set with the randomized lasso, at a
fixed value of the tuning parameter lambda and using Gaussian
randomization.
}
\usage{
randomizedLassoInf(X,
                   y,
                   lam,
                   sigma=NULL,
                   noise_scale=NULL,
                   ridge_term=NULL,
                   condition_subgrad=TRUE,
                   level=0.9,
                   nsample=10000,
                   burnin=2000,
                   max_iter=100,
                   kkt_tol=1.e-4,
                   parameter_tol=1.e-8,
                   objective_tol=1.e-8,
                   objective_stop=FALSE,
                   kkt_stop=TRUE,
                   param_stop=TRUE)
}
\arguments{
\item{X}{
Matrix of predictors (n by p).
}
\item{y}{
Vector of outcomes (length n).
}
\item{lam}{
Value of lambda used to compute beta. Be careful! This function uses the
"standard" lasso objective
\deqn{
1/2 \|y - X \beta\|_2^2 + \lambda \|\beta\|_1.
}
In contrast, glmnet multiplies the first term by a factor of 1/n.
So after running glmnet, to extract the beta corresponding to a value lambda,
you need to use \code{beta = coef(obj, s=lambda/n)[-1]},
where obj is the object returned by glmnet (and [-1] removes the intercept,
which glmnet always puts in the first component); a commented conversion
sketch is included in the examples below.
}
\item{sigma}{
Estimate of the error standard deviation. If NULL (default), this is estimated
using the mean squared residual of the full least squares fit based on the
selected active set.
}
\item{noise_scale}{
Scale of the Gaussian noise added to the objective. Default is
0.5 * sd(y) times the sqrt of the mean of the trace of X^TX.
}
\item{ridge_term}{
A small "elastic net" or ridge penalty is added to ensure
the randomized problem has a solution. Default is
0.5 * sd(y) times the sqrt of the mean of the trace of X^TX, divided by
sqrt(n) (see the sketch after this file listing).
}
\item{condition_subgrad}{
In forming selective confidence intervals and p-values, should we condition
on the inactive coordinates of the subgradient as well?
Default is TRUE.
}
\item{level}{
Level for confidence intervals.
}
\item{nsample}{
Number of samples of the optimization variables to draw.
}
\item{burnin}{
How many initial samples of the optimization variables to discard (should be
less than nsample).
}
\item{max_iter}{
Maximum number of rounds of coordinate-descent updates used in solving the
randomized lasso.
}
\item{kkt_tol}{
Tolerance for checking convergence based on the KKT conditions.
}
\item{parameter_tol}{
Tolerance for checking convergence based on convergence
of the parameters.
}
\item{objective_tol}{
Tolerance for checking convergence based on convergence
of the objective value.
}
\item{kkt_stop}{
Should we use the KKT check to determine when to stop?
}
\item{param_stop}{
Should we use convergence of the parameters to determine when to stop?
}
\item{objective_stop}{
Should we use convergence of the objective value to determine when to stop?
}
}
\details{
This function computes selective p-values and confidence intervals for a
randomized version of the lasso, given a fixed value of the tuning
parameter lambda.
}
\value{
\item{type}{Type of coefficients tested (partial or full)}
\item{lambda}{Value of tuning parameter lambda used}
\item{pv}{One-sided p-values for the active variables, using the fact that we have conditioned on the signs}
\item{ci}{Confidence intervals}
\item{tailarea}{Realized tail areas (lower and upper) for each confidence interval}
\item{vlo}{Lower truncation limits for statistics}
\item{vup}{Upper truncation limits for statistics}
\item{vmat}{Linear contrasts that define the observed statistics}
\item{y}{Vector of outcomes}
\item{vars}{Variables in active set}
\item{sign}{Signs of active coefficients}
\item{alpha}{Desired coverage (alpha/2 in each tail)}
\item{sigma}{Value of error standard deviation (sigma) used}
\item{call}{The call to randomizedLassoInf}
}
\references{
Xiaoying Tian and Jonathan Taylor (2015).
Selective inference with a randomized response. arXiv:1507.06739.

Xiaoying Tian, Snigdha Panigrahi, Jelena Markovic, Nan Bi and Jonathan Taylor (2016).
Selective inference after solving a convex problem. arXiv:1609.05609.
}

\author{Jelena Markovic, Jonathan Taylor}
\examples{
set.seed(43)
n = 50
p = 10
sigma = 1
lam = 10   # fixed value of the tuning parameter; illustrative choice

x = matrix(rnorm(n*p), n, p)
x = scale(x, TRUE, TRUE)

beta = c(3, 2, rep(0, p-2))
y = x \%*\% beta + sigma * rnorm(n)
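# Conversion sketch for the glmnet scaling issue noted under the lam
# argument above (assumes the glmnet package; shown commented, not run):
# obj = glmnet::glmnet(x, y)
# beta_glmnet = coef(obj, s = lam/n)[-1]   # [-1] drops the intercept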
result = randomizedLassoInf(x, y, lam)
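# Inspect the selective p-values and confidence intervals for the
# active variables (component names as documented in the value section):
result$pv
result$ci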
}
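A note on the noise_scale and ridge_term defaults documented above: a minimal
R sketch of those formulas, assuming "the mean of the trace of X^TX" means the
average diagonal entry of t(X) %*% X (the package's internal computation may
differ):

    # Hypothetical helper mirroring the documented defaults
    default_scales = function(X, y) {
      mean_diag = mean(colSums(X^2))        # average diagonal entry of t(X) %*% X
      s = 0.5 * sd(y) * sqrt(mean_diag)
      list(noise_scale = s,                 # default noise_scale
           ridge_term  = s / sqrt(nrow(X))) # default ridge_term
    }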
