
Commit 90cf2cc

Update README.md
1 parent 6f69950 commit 90cf2cc

File tree

1 file changed: +44 −20 lines


README.md

Lines changed: 44 additions & 20 deletions
@@ -171,32 +171,56 @@ The ADAPT library proposes numerous transfer algorithms and it can be hard to know
 
 ## Quick Start
 
+Here is a simple usage example of the ADAPT library: a simulation of a 1D sample bias problem with a binary classification task. The source input data are drawn from a Gaussian distribution centered at -1 with a standard deviation of 2. The target data are drawn from a Gaussian distribution centered at 1 with a standard deviation of 2. The output labels are equal to 1 in the interval [-1, 1] and 0 elsewhere.
+
 ```python
+# Import standard libraries
 import numpy as np
-from adapt.feature_based import DANN
+from sklearn.linear_model import LogisticRegression
+
+# Import the KMM method from the adapt.instance_based module
+from adapt.instance_based import KMM
+
 np.random.seed(0)
 
-# Xs and Xt are shifted along the second feature.
-Xs = np.concatenate((np.random.random((100, 1)),
-                     np.zeros((100, 1))), 1)
-Xt = np.concatenate((np.random.random((100, 1)),
-                     np.ones((100, 1))), 1)
-ys = 0.2 * Xs[:, 0]
-yt = 0.2 * Xt[:, 0]
-
-# With lambda set to zero, no adaptation is performed.
-model = DANN(lambda_=0., random_state=0)
-model.fit(Xs, ys, Xt=Xt, epochs=100, verbose=0)
-print(model.evaluate(Xt, yt))  # This gives the target score at the last training epoch.
->>> 0.0231
-
-# With lambda set to 0.1, the shift is corrected and the target score improves.
-model = DANN(lambda_=0.1, random_state=0)
-model.fit(Xs, ys, Xt=Xt, epochs=100, verbose=0)
-model.evaluate(Xt, yt)
->>> 0.0011
+# Create the source dataset (Xs ~ N(-1, 2))
+# ys = 1 if Xs is in [-1, 1], else ys = 0
+Xs = np.random.randn(1000, 1)*2 - 1
+ys = (Xs[:, 0] > -1.) & (Xs[:, 0] < 1.)
+
+# Create the target dataset (Xt ~ N(1, 2)), yt ~ ys
+Xt = np.random.randn(1000, 1)*2 + 1
+yt = (Xt[:, 0] > -1.) & (Xt[:, 0] < 1.)
+
+# Instantiate and fit a source-only model for comparison
+src_only = LogisticRegression(penalty="none")
+src_only.fit(Xs, ys)
+
+# Instantiate a KMM model: the estimator and the target input
+# data Xt are given as parameters, along with the kernel parameters
+adapt_model = KMM(
+    estimator=LogisticRegression(penalty="none"),
+    Xt=Xt,
+    kernel="rbf",  # Gaussian kernel
+    gamma=1.,      # Bandwidth of the kernel
+    verbose=0,
+    random_state=0
+)
+
+# Fit the model
+adapt_model.fit(Xs, ys)
+
+# Get the score on the target data
+adapt_model.score(Xt, yt)
+```
+```python
+>>> 0.574
 ```
 
+| <img src="src_docs/_static/images/results_qs.png"> |
+|:--:|
+| **Quick-Start Plotting Results**. *The dotted and dashed lines are, respectively, the class separations of the "source only" and KMM models. Note that the predicted positive class is on the right of the dotted line for the "source only" model but on the left of the dashed line for KMM.* |
+
 
 ## Contents
 
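The new example fits `src_only` "for comparison" but never evaluates it. Below is a minimal sketch of that comparison, reusing the variables from the quick-start snippet; the `predict_weights()` call is an assumption based on ADAPT's documented instance-based API, and note that on scikit-learn >= 1.2 the string `penalty="none"` used above is replaced by `penalty=None`.

```python
# Compare the source-only baseline with the KMM-adapted model
# on the same target data (Xt, yt) from the quick-start snippet.
acc_src = src_only.score(Xt, yt)     # accuracy without adaptation
acc_kmm = adapt_model.score(Xt, yt)  # accuracy after KMM reweighting
print(f"Source-only accuracy on target: {acc_src:.3f}")
print(f"KMM accuracy on target:         {acc_kmm:.3f}")

# Inspect the importance weights KMM assigns to the source samples
# (predict_weights() is assumed from ADAPT's instance-based API).
weights = adapt_model.predict_weights()
print(weights.min(), weights.max())
```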

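And a sketch of how the quick-start figure might be reproduced: it assumes matplotlib is available and that the fitted ADAPT model exposes its internal estimator as `estimator_` (the library's convention for fitted attributes), and it derives each 1D decision boundary from the logistic-regression coefficients.

```python
import matplotlib.pyplot as plt

def boundary(clf):
    # For a 1D logistic regression, the decision boundary is the
    # point x where coef * x + intercept = 0.
    return -clf.intercept_[0] / clf.coef_[0, 0]

# Overlay the two input distributions and the two class separations
plt.hist(Xs[:, 0], bins=30, alpha=0.5, label="Source")
plt.hist(Xt[:, 0], bins=30, alpha=0.5, label="Target")
plt.axvline(boundary(src_only), ls=":", color="k", label='"Source only" boundary')
plt.axvline(boundary(adapt_model.estimator_), ls="--", color="k", label="KMM boundary")
plt.xlabel("x")
plt.ylabel("count")
plt.legend()
plt.show()
```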