
Commit a27fa3c: Vasilis/docs (#370)

* restructured modules into folders; added a deprecation warning for the old naming conventions
* restructured the public module in the docs
* removed automl from the docs for now, until its import is fixed
* fixed wording in hyperparameter tuning
* fixed a reference to honest forest in the docs
* added verbosity to bootstrap
* moved bootstrap to private

1 parent 69fadc3

File tree: 70 files changed, +6513 −6130 lines


README.md

Lines changed: 11 additions & 11 deletions
````diff
@@ -162,7 +162,7 @@ To install from source, see [For Developers](#for-developers) section below.
 <summary>Orthogonal Random Forests (click to expand)</summary>
 
 ```Python
-from econml.ortho_forest import DMLOrthoForest, DROrthoForest
+from econml.orf import DMLOrthoForest, DROrthoForest
 from econml.sklearn_extensions.linear_model import WeightedLasso, WeightedLassoCV
 # Use defaults
 est = DMLOrthoForest()
````
````diff
@@ -233,7 +233,7 @@ To install from source, see [For Developers](#for-developers) section below.
 * Linear final stage
 
 ```Python
-from econml.drlearner import LinearDRLearner
+from econml.dr import LinearDRLearner
 from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
 
 est = LinearDRLearner(model_propensity=GradientBoostingClassifier(),
````
````diff
@@ -246,7 +246,7 @@ lb, ub = est.effect_interval(X_test, alpha=0.05)
 * Sparse linear final stage
 
 ```Python
-from econml.drlearner import SparseLinearDRLearner
+from econml.dr import SparseLinearDRLearner
 from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
 
 est = SparseLinearDRLearner(model_propensity=GradientBoostingClassifier(),
````
````diff
@@ -259,7 +259,7 @@ lb, ub = est.effect_interval(X_test, alpha=0.05)
 * Nonparametric final stage
 
 ```Python
-from econml.drlearner import ForestDRLearner
+from econml.dr import ForestDRLearner
 from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
 
 est = ForestDRLearner(model_propensity=GradientBoostingClassifier(),
````
````diff
@@ -276,7 +276,7 @@ lb, ub = est.effect_interval(X_test, alpha=0.05)
 * Intent to Treat Doubly Robust Learner (discrete instrument, discrete treatment)
 
 ```Python
-from econml.ortho_iv import LinearIntentToTreatDRIV
+from econml.iv.dr import LinearIntentToTreatDRIV
 from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
 from sklearn.linear_model import LinearRegression
 
````
````diff
@@ -295,7 +295,7 @@ lb, ub = est.effect_interval(X_test, alpha=0.05) # OLS confidence intervals
 
 ```Python
 import keras
-from econml.deepiv import DeepIVEstimator
+from econml.iv.nnet import DeepIV
 
 treatment_model = keras.Sequential([keras.layers.Dense(128, activation='relu', input_shape=(2,)),
                                     keras.layers.Dropout(0.17),
@@ -310,11 +310,11 @@ response_model = keras.Sequential([keras.layers.Dense(128, activation='relu', in
                                    keras.layers.Dense(32, activation='relu'),
                                    keras.layers.Dropout(0.17),
                                    keras.layers.Dense(1)])
-est = DeepIVEstimator(n_components=10, # Number of gaussians in the mixture density networks)
-                      m=lambda z, x: treatment_model(keras.layers.concatenate([z, x])), # Treatment model
-                      h=lambda t, x: response_model(keras.layers.concatenate([t, x])), # Response model
-                      n_samples=1 # Number of samples used to estimate the response
-                      )
+est = DeepIV(n_components=10, # Number of gaussians in the mixture density networks)
+             m=lambda z, x: treatment_model(keras.layers.concatenate([z, x])), # Treatment model
+             h=lambda t, x: response_model(keras.layers.concatenate([t, x])), # Response model
+             n_samples=1 # Number of samples used to estimate the response
+             )
 est.fit(Y, T, X=X, Z=Z) # Z -> instrumental variables
 treatment_effects = est.effect(X_test)
 ```
````
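Collected in one place, the module renames applied across the README hunks above (the mapping itself is illustrative reference material, not part of the commit):

```python
# Old import path -> new import path, as changed in this commit's README diff.
ECONML_RENAMES = {
    "econml.ortho_forest": "econml.orf",      # Orthogonal Random Forests
    "econml.drlearner":    "econml.dr",       # Doubly Robust learners
    "econml.ortho_iv":     "econml.iv.dr",    # IV doubly robust learners
    "econml.deepiv":       "econml.iv.nnet",  # DeepIV (class DeepIVEstimator -> DeepIV)
}
```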

doc/conf.py

Lines changed: 2 additions & 1 deletion

```diff
@@ -212,7 +212,8 @@
 intersphinx_mapping = {'python': ('https://docs.python.org/3', None),
                        'numpy': ('https://docs.scipy.org/doc/numpy/', None),
                        'sklearn': ('https://scikit-learn.org/stable/', None),
-                       'matplotlib': ('https://matplotlib.org/', None)}
+                       'matplotlib': ('https://matplotlib.org/', None),
+                       'shap': ('https://shap.readthedocs.io/en/stable/', None)}
 
 # -- Options for todo extension ----------------------------------------------
```
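The conf.py change appends a shap entry to Sphinx's `intersphinx_mapping`, so cross-references into shap's hosted docs can resolve. For reference, intersphinx expects `name -> (base_url, inventory)` pairs, where `None` means "fetch the default `objects.inv` from the base URL"; the mapping after this commit reads:

```python
# Sphinx intersphinx configuration after this commit: each value is
# (base_url, inventory_override); None tells Sphinx to fetch the default
# objects.inv inventory from base_url.
intersphinx_mapping = {
    'python':     ('https://docs.python.org/3', None),
    'numpy':      ('https://docs.scipy.org/doc/numpy/', None),
    'sklearn':    ('https://scikit-learn.org/stable/', None),
    'matplotlib': ('https://matplotlib.org/', None),
    'shap':       ('https://shap.readthedocs.io/en/stable/', None),
}
```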

doc/map.svg

Lines changed: 17 additions & 17 deletions
(SVG diff not rendered)
