## Quick Start

Here is a simple usage example of the ADAPT library: a simulation of a 1D sample bias problem with a binary classification task. The source input data are drawn from a Gaussian distribution centered at -1 with standard deviation 2. The target data are drawn from a Gaussian distribution centered at 1 with standard deviation 2. The output labels are equal to 1 in the interval [-1, 1] and 0 elsewhere.

```python
# Import standard libraries
import numpy as np
from sklearn.linear_model import LogisticRegression

# Import the KMM method from the adapt.instance_based module
from adapt.instance_based import KMM

np.random.seed(0)

# Create the source dataset (Xs ~ N(-1, 2))
# ys = 1 if xs in [-1, 1], else ys = 0
Xs = np.random.randn(1000, 1)*2 - 1
ys = (Xs[:, 0] > -1.) & (Xs[:, 0] < 1.)

# Create the target dataset (Xt ~ N(1, 2)), yt defined as ys
Xt = np.random.randn(1000, 1)*2 + 1
yt = (Xt[:, 0] > -1.) & (Xt[:, 0] < 1.)

# Instantiate and fit a "source only" model for comparison
src_only = LogisticRegression(penalty="none")
src_only.fit(Xs, ys)

# Instantiate a KMM model: the estimator and the target input
# data Xt are given as parameters along with the kernel parameters
adapt_model = KMM(
    estimator=LogisticRegression(penalty="none"),
    Xt=Xt,
    kernel="rbf",    # Gaussian kernel
    gamma=1.,        # Bandwidth of the kernel
    verbose=0,
    random_state=0
)

# Fit the model on the source data
adapt_model.fit(Xs, ys)
```
**Quick-Start Plotting Results.** *The dotted and dashed lines are respectively the class separation of the "source only" and KMM models. Note that the predicted positive class is on the right of the dotted line for the "source only" model but on the left of the dashed line for KMM.*