docs/source/user_guide.rst
@@ -84,8 +84,8 @@ classes. However, there are two additional requirements for datasets in *mmlearn

The :class:`~mmlearn.datasets.core.example.Example` class represents a single example in the dataset and all the attributes
associated with it. The class is an extension of the :class:`~collections.OrderedDict` class that provides attribute-style access
to the dictionary values and handles the creation of the ``'example_ids'`` tuple, combining the ``'example_index'`` and
``'dataset_index'`` values. The ``'example_index'`` key is created by the dataset object for each example returned by the
dataset. On the other hand, the ``'dataset_index'`` key is created by the :class:`~mmlearn.datasets.core.combined_dataset.CombinedDataset` class for
each :class:`~mmlearn.datasets.core.example.Example` object returned by the dataset.
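To make the division of labor concrete, here is a toy sketch of the idea: each source dataset assigns ``'example_index'``, and a combining wrapper tags every returned example with the index of the dataset it came from. ``TinyCombinedDataset`` is an illustrative stand-in, not the real :class:`CombinedDataset` implementation.

```python
class TinyCombinedDataset:
    """Toy concatenation of datasets that tags each example with its source.

    Illustrative only -- not mmlearn's CombinedDataset.
    """

    def __init__(self, datasets):
        self.datasets = datasets

    def __len__(self):
        return sum(len(d) for d in self.datasets)

    def __getitem__(self, idx):
        # Walk the datasets in order until idx falls inside one of them,
        # then attach the position of that dataset as 'dataset_index'.
        for dataset_index, dataset in enumerate(self.datasets):
            if idx < len(dataset):
                example = dict(dataset[idx])
                example["dataset_index"] = dataset_index
                return example
            idx -= len(dataset)
        raise IndexError(idx)


# Each source dataset only knows about 'example_index'.
ds_a = [{"example_index": i, "text": f"a{i}"} for i in range(2)]
ds_b = [{"example_index": i, "text": f"b{i}"} for i in range(3)]
combined = TinyCombinedDataset([ds_a, ds_b])
print(combined[3])  # {'example_index': 1, 'text': 'b1', 'dataset_index': 1}
```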
@@ -94,8 +94,8 @@ each :class:`~mmlearn.datasets.core.example.Example` object returned by the data
which is a subclass of :class:`torch.utils.data.Dataset`. As such, the user almost never has to add/define the ``'dataset_index'``
key explicitly.

Since batching typically combines data from the same modality into one tensor, both the ``'example_index'`` and ``'dataset_index'``
keys are essential for uniquely identifying paired examples across different modalities from the same dataset. The
:func:`~mmlearn.datasets.core.example.find_matching_indices` function does exactly this by finding the indices of the
examples in a batch that have the same ``'example_ids'`` tuple.
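The pairing logic can be sketched as follows. Both ``Example`` and ``match_indices`` below are minimal illustrative stand-ins for the concepts described above, not the actual mmlearn classes or function signatures.

```python
from collections import OrderedDict


class Example(OrderedDict):
    """Minimal attribute-access dict, sketching the idea behind
    mmlearn's Example class (not the actual implementation)."""

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as exc:
            raise AttributeError(key) from exc

    def __setattr__(self, key, value):
        self[key] = value


def match_indices(ids_a, ids_b):
    """Return index pairs (i, j) where the 'example_ids' tuples agree.

    Conceptually what find_matching_indices does: pair examples from two
    modality batches that share the same (dataset_index, example_index).
    """
    positions = {ids: i for i, ids in enumerate(ids_a)}
    return [(positions[ids], j) for j, ids in enumerate(ids_b) if ids in positions]


# Two modality batches from dataset 0; only examples 1 and 2 are paired.
texts = [Example(example_ids=(0, i)) for i in (0, 1, 2)]
images = [Example(example_ids=(0, i)) for i in (1, 2, 3)]

pairs = match_indices(
    [ex.example_ids for ex in texts],
    [ex.example_ids for ex in images],
)
print(pairs)  # [(1, 0), (2, 1)]
```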
@@ -119,9 +119,9 @@ Modules are building blocks for models and tasks in *mmlearn*. They can be anyth
learning rate schedulers, metrics, etc. Modules in *mmlearn* are generally defined by extending PyTorch's :class:`nn.Module <torch.nn.Module>`
class.

Users have the flexibility to design new modules according to their requirements, with the exception of encoder modules
and modules associated with specific pre-defined tasks (e.g., loss functions for the :class:`~mmlearn.tasks.contrastive_pretraining.ContrastivePretraining` task).
The forward method of encoder modules must accept a dictionary as input, where the keys are the names of the modalities
and the values are the corresponding (batched) tensors/data. This format makes it easier to reuse the encoder with different
modalities and different tasks. In addition, the forward method must return a list-like object where the first element is
the last layer's output. The following code snippet shows how to define a new text encoder module:
@@ -148,7 +148,7 @@ the last layer's output. The following code snippet shows how to define a new te
        )
        return (out,)

For modules associated with pre-defined tasks, the new modules must adhere to the same function signature as the existing
modules for that task. For instance, the forward method of a new loss function for the :class:`~mmlearn.tasks.contrastive_pretraining.ContrastivePretraining`
task must have the following signature to be compatible with the existing loss functions for the task:
@@ -175,11 +175,11 @@ involving only evaluation should extend the :class:`~mmlearn.tasks.hooks.Evaluat

Training Tasks
~~~~~~~~~~~~~~
The :class:`~mmlearn.tasks.base.TrainingTask` class is an extension of the :class:`~lightning.pytorch.core.LightningModule`
class, which itself is an extension of the :class:`~torch.nn.Module` class. The class provides a common interface for training
tasks in *mmlearn*. It allows users to define the training loop, validation loop, test loop, and the setup for the model,
optimizer, learning rate scheduler and loss function, all in one place (a functionality inherited from PyTorch Lightning).
The class also provides hooks for customizing the training loop, validation loop, and test loop, as well as a suite of
other functionalities like logging, checkpointing and handling distributed training.

.. seealso::
@@ -214,8 +214,8 @@ a `training_step` method. The following code snippet shows the minimum requireme

        # Since this class also inherits from torch.nn.Module, we can define the
        # model and its components directly in the constructor and also define
        # a forward method for the model as an instance method of this class.
        # Alternatively, we can pass the model as an argument to the constructor
        # and assign it to an instance variable.
        self.model = ...
@@ -229,13 +229,13 @@ a `training_step` method. The following code snippet shows the minimum requireme

Evaluation Tasks
~~~~~~~~~~~~~~~~
The :class:`~mmlearn.tasks.hooks.EvaluationHooks` class is intended to be used for evaluation tasks that don't require training,
e.g. zero-shot evaluation tasks (as opposed to evaluation tasks like linear probing, which require training). The class provides
an interface for defining and customizing the evaluation loop.

Classes that inherit from :class:`~mmlearn.tasks.hooks.EvaluationHooks` cannot be run/used on their own. They must be used
in conjunction with a training task, which will call the hooks defined in the evaluation task during the evaluation phase.
This way, multiple evaluation tasks can be defined and used with the same training task. The model to be evaluated is
provided by the training task to the evaluation task.
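This ownership pattern can be sketched in plain Python. Everything below is hypothetical (``ZeroShotEval``, ``MyTrainingTask``, and their method names are made up for illustration and are not the mmlearn API); it only shows the shape of the relationship: the evaluation task cannot run alone, and the training task hands it the model during the evaluation phase.

```python
class ZeroShotEval:
    """Stand-in for an EvaluationHooks-style class; it cannot run on its own."""

    def evaluate(self, model):
        # The model is supplied by the training task, not owned by this class.
        return {"zero_shot_accuracy": model(3)}


class MyTrainingTask:
    def __init__(self, evaluation_tasks=None):
        # Evaluation task instances are passed in, e.g. as constructor arguments.
        self.evaluation_tasks = list(evaluation_tasks or [])
        self.model = lambda x: x * 2  # placeholder "model"

    def run_evaluation(self):
        # During the evaluation phase, the training task calls each
        # evaluation task's hooks, handing over the model it trains.
        results = {}
        for eval_task in self.evaluation_tasks:
            results.update(eval_task.evaluate(self.model))
        return results


task = MyTrainingTask(evaluation_tasks=[ZeroShotEval()])
print(task.run_evaluation())  # {'zero_shot_accuracy': 6}
```

Because the training task merely iterates over the hooks it was given, any number of evaluation tasks can be attached to the same training task without changes to either side.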

Training tasks that wish to use one or more evaluation tasks must accept an instance of the evaluation task(s) as an argument
@@ -483,7 +483,7 @@ Configuring an Experiment
~~~~~~~~~~~~~~~~~~~~~~~~~
To configure an experiment, create a new ``.yaml`` file in the ``configs/experiment/`` directory of the project. The configuration
file should define the experiment-specific configuration options and override the base configuration options as needed.
Configurable components from the config store can be referenced by name in the configuration file under the
`defaults list <https://hydra.cc/docs/advanced/defaults_list/>`_. The following code snippet shows an example configuration