Sometimes a user may want to compute weights for a very large number of respondents (e.g., 10 million), but the computation is too slow.
In such cases, it might be better to fit the model on a smaller sample (say, 100K) and then apply it to all respondents (e.g., once this issue is done: #81 )
It would be great to have this capability in some nice flow, e.g., as an extra argument to `.adjust()` (or something similar)
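As a rough illustration of the fit-on-sample / apply-to-all idea, here is a minimal sketch using plain post-stratification as a stand-in for the actual adjustment model; the column names, targets, and sizes are all made up for the example:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical "large" respondent frame (scaled down here for the example).
n_total = 200_000
full = pd.DataFrame(
    {"age_group": rng.choice(["18-34", "35-54", "55+"], size=n_total, p=[0.5, 0.3, 0.2])}
)

# Hypothetical population target shares to weight toward.
target = pd.Series({"18-34": 0.30, "35-54": 0.35, "55+": 0.35})

# Step 1: fit the weighting model on a small subsample only.
sample = full.sample(n=50_000, random_state=0)
sample_shares = sample["age_group"].value_counts(normalize=True)
cell_weights = target / sample_shares  # per-cell post-stratification weights

# Step 2: apply the fitted per-cell weights to every respondent,
# which is a cheap lookup rather than a full model fit.
full["weight"] = full["age_group"].map(cell_weights)
```

Since the subsample's cell shares closely track the full frame's, the resulting weighted shares land near the targets, while the expensive fitting step only ever sees the subsample.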