
Commit 7e6bbd0

Fix formatting issues and typo in user guide documentation
1 parent e9ff577

File tree: 1 file changed (+19, -19 lines)


docs/source/user_guide.rst

Lines changed: 19 additions & 19 deletions
@@ -84,8 +84,8 @@ classes. However, there are two additional requirements for datasets in *mmlearn*
 
 The :class:`~mmlearn.datasets.core.example.Example` class represents a single example in the dataset and all the attributes
 associated with it. The class is an extension of the :class:`~collections.OrderedDict` class that provides attribute-style access
-to the dictionary values and handles the creation of the ``'example_ids'`` tuple, combining the ``'example_index'`` and
-``'dataset_index'`` values. The ``'example_index'`` key is created by the dataset object for each example returned by the
+to the dictionary values and handles the creation of the ``'example_ids'`` tuple, combining the ``'example_index'`` and
+``'dataset_index'`` values. The ``'example_index'`` key is created by the dataset object for each example returned by the
 dataset. On the other hand, the ``'dataset_index'`` key is created by the :class:`~mmlearn.datasets.core.combined_dataset.CombinedDataset`
 each :class:`~mmlearn.datasets.core.example.Example` object returned by the dataset.
 
@@ -94,8 +94,8 @@ each :class:`~mmlearn.datasets.core.example.Example` object returned by the dataset.
 which is a subclass of :class:`torch.utils.data.Dataset`. As such, the user almost never has to add/define the ``'dataset_index'``
 key explicitly.
 
-Since batching typically combines data from the same modality into one tensor, both the ``'example_index'`` and ``'dataset_index'``
-keys are essential for uniquely identifying paired examples across different modalities from the same dataset. The
+Since batching typically combines data from the same modality into one tensor, both the ``'example_index'`` and ``'dataset_index'``
+keys are essential for uniquely identifying paired examples across different modalities from the same dataset. The
 :func:`~mmlearn.datasets.core.example.find_matching_indices` function does exactly this by finding the indices of the
 examples in a batch that have the same ``'example_ids'`` tuple.
 
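The behavior described in the hunks above can be sketched in plain Python. The following is an illustrative stand-in, not mmlearn's actual ``Example`` implementation; ``create_ids`` is a hypothetical helper name used here only to make the ``'example_ids'`` construction explicit.

```python
from collections import OrderedDict


class Example(OrderedDict):
    # Illustrative stand-in for mmlearn's Example class: an OrderedDict
    # that exposes its values through attribute-style access.
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        self[key] = value

    def create_ids(self):
        # Combine 'dataset_index' and 'example_index' into 'example_ids'
        # (hypothetical helper; the real class handles this internally).
        self["example_ids"] = (self["dataset_index"], self["example_index"])


ex = Example()
ex.example_index = 3   # set by the dataset for each returned example
ex.dataset_index = 0   # set by CombinedDataset for each wrapped dataset
ex.create_ids()
print(ex.example_ids)  # (0, 3)
```

Matching paired examples across modalities then reduces to comparing these ``'example_ids'`` tuples, which is what ``find_matching_indices`` does over a batch.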
@@ -119,9 +119,9 @@ Modules are building blocks for models and tasks in *mmlearn*. They can be anything
 learning rate schedulers, metrics, etc. Modules in *mmlearn* are generally defined by extending PyTorch's :class:`nn.Module <torch.nn.Module>`
 class.
 
-Users have the flexibility to design new modules according to their requirements, with the exception of encoder modules
+Users have the flexibility to design new modules according to their requirements, with the exception of encoder modules
 and modules associated with specific pre-defined tasks (e.g., loss functions for the :class:`~mmlearn.tasks.contrastive_pretraining.ContrastivePretraining` task).
-The forward method of encoder modules must accept a dictionary as input, where the keys are the names of the modalities
+The forward method of encoder modules must accept a dictionary as input, where the keys are the names of the modalities
 and the values are the corresponding (batched) tensors/data. This format makes it easier to reuse the encoder with different
 modalities and different tasks. In addition, the forward method must return a list-like object where the first element is
 the last layer's output. The following code snippet shows how to define a new text encoder module:
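The snippet referenced above lies outside this diff's hunks. Purely as an illustration of the stated contract (dict input keyed by modality, list-like output whose first element is the last layer's output), here is a hedged sketch; ``ToyTextEncoder`` is a hypothetical name, and plain Python stands in for ``torch.nn.Module`` so the sketch is self-contained.

```python
class ToyTextEncoder:
    # Hypothetical stand-in for an encoder module; real encoders in
    # mmlearn extend torch.nn.Module.
    modality = "text"

    def forward(self, inputs):
        # The input is a dictionary keyed by modality name, with the
        # batched data as values.
        batch = inputs[self.modality]
        # Stand-in "last layer output": one value per sequence.
        out = [len(seq) for seq in batch]
        # Return a list-like object whose first element is the last
        # layer's output, as the guide requires.
        return (out,)


encoder = ToyTextEncoder()
(features,) = encoder.forward({"text": [[1, 2, 3], [4, 5]]})
print(features)  # [3, 2]
```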
@@ -148,7 +148,7 @@ the last layer's output. The following code snippet shows how to define a new text encoder module:
         )
         return (out,)
 
-For modules associated with pre-defined tasks, the new modules must adhere to the same function signature as the existing
+For modules associated with pre-defined tasks, the new modules must adhere to the same function signature as the existing
 modules for that task. For instance, the forward method of a new loss function for the :class:`~mmlearn.tasks.contrastive_pretraining.ContrastivePretraining`
 task must have the following signature to be compatible with the existing loss functions for the task:
 
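The required signature itself falls outside this hunk and is not reproduced here. For illustration only, the following shows the core computation of a generic contrastive (InfoNCE-style) loss; this is not mmlearn's loss API, and ``info_nce_row`` is a hypothetical name.

```python
import math


def info_nce_row(similarities, positive_idx):
    # Cross-entropy over one row of a similarity matrix: the core of a
    # generic contrastive (InfoNCE-style) loss. Illustrative only; the
    # actual mmlearn loss signature is defined by the task.
    m = max(similarities)  # subtract the max for numerical stability
    log_denominator = m + math.log(sum(math.exp(s - m) for s in similarities))
    return log_denominator - similarities[positive_idx]


# A well-matched pair (large similarity at the positive index) yields a
# lower loss than a mismatched one.
matched = info_nce_row([10.0, 0.0, 0.0], positive_idx=0)
mismatched = info_nce_row([0.0, 10.0, 0.0], positive_idx=0)
print(matched < mismatched)  # True
```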
@@ -175,11 +175,11 @@ involving only evaluation should extend the :class:`~mmlearn.tasks.hooks.EvaluationHooks`
 
 Training Tasks
 ~~~~~~~~~~~~~~
-The :class:`~mmlearn.tasks.base.TrainingTask` class is an extension of the :class:`~lightning.pytorch.core.LightningModule`
-class, which itself is an extension of the :class:`~torch.nn.Module` class. The class provides a common interface for training
-tasks in *mmlearn*. It allows users to define the training loop, validation loop, test loop, and the setup for the model,
+The :class:`~mmlearn.tasks.base.TrainingTask` class is an extension of the :class:`~lightning.pytorch.core.LightningModule`
+class, which itself is an extension of the :class:`~torch.nn.Module` class. The class provides a common interface for training
+tasks in *mmlearn*. It allows users to define the training loop, validation loop, test loop, and the setup for the model,
 optimizer, learning rate scheduler and loss function, all in one place (a functionality inherited from PyTorch Lightning).
-The class also provides hooks for customizing the training loop, validation loop, and test loop, as well as a suite of
+The class also provides hooks for customizing the training loop, validation loop, and test loop, as well as a suite of
 other functionalities like logging, checkpointing and handling distributed training.
 
 .. seealso::
@@ -214,8 +214,8 @@ a `training_step` method. The following code snippet shows the minimum requirements
 
         # Since this class also inherits from torch.nn.Module, we can define the
         # model and its components directly in the constructor and also define
-        # a forward method for the model as an instance method of this class.
-        # Alternatively, we can pass the model as an argument to the constructor
+        # a forward method for the model as an instance method of this class.
+        # Alternatively, we can pass the model as an argument to the constructor
         # and assign it to an instance variable.
         self.model = ...
 
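Only a fragment of that snippet is visible in this hunk. As a hedged illustration of its shape, the sketch below uses plain Python in place of the ``TrainingTask``/``LightningModule`` base classes; ``ToyTrainingTask`` and its loss are hypothetical, not mmlearn's code.

```python
class ToyTrainingTask:
    # Hypothetical stand-in for a class extending mmlearn's TrainingTask
    # (and hence LightningModule); plain Python keeps it self-contained.
    def __init__(self, model):
        # As the diffed comments note, the model could instead be built
        # directly here, with its forward defined as an instance method.
        self.model = model

    def training_step(self, batch, batch_idx):
        # Minimum requirement: compute and return the loss for one batch.
        predictions = [self.model(x) for x in batch]
        targets = [1.0 for _ in batch]
        return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(batch)


task = ToyTrainingTask(model=lambda x: 2 * x)
loss = task.training_step([0.5, 1.0], batch_idx=0)
print(loss)  # 0.5
```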
@@ -229,13 +229,13 @@ a `training_step` method. The following code snippet shows the minimum requirements
 
 Evaluation Tasks
 ~~~~~~~~~~~~~~~~
-The :class:`~mmlearn.tasks.hooks.EvaluationHooks` class is intented to be used for evaluation tasks that don't require training,
+The :class:`~mmlearn.tasks.hooks.EvaluationHooks` class is intended to be used for evaluation tasks that don't require training,
 e.g. zero-shot evaluation tasks (as opposed to evaluation tasks like linear probing, which require training). The class provides
-an interface for defining and customizing the evaluation loop.
+an interface for defining and customizing the evaluation loop.
 
-Classes that inherit from :class:`~mmlearn.tasks.hooks.EvaluationHooks` cannot be run/used on their own. They must be used
-in conjunction with a training task, which will call the hooks defined in the evaluation task during the evaluation phase.
-This way, multiple evaluation tasks can be defined and used with the same training task. The model to be evaluated is
+Classes that inherit from :class:`~mmlearn.tasks.hooks.EvaluationHooks` cannot be run/used on their own. They must be used
+in conjunction with a training task, which will call the hooks defined in the evaluation task during the evaluation phase.
+This way, multiple evaluation tasks can be defined and used with the same training task. The model to be evaluated is
 provided by the training task to the evaluation task.
 
 Training tasks that wish to use one or more evaluation tasks must accept an instance of the evaluation task(s) as an argument
@@ -483,7 +483,7 @@ Configuring an Experiment
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 To configure an experiment, create a new `.yaml` file in the ``configs/experiment/`` directory of the project. The configuration
 file should define the experiment-specific configuration options and override the base configuration options as needed.
-Configurable components from the config store can be referenced by name in the configuration file under the
+Configurable components from the config store can be referenced by name in the configuration file under the
 `defaults list <https://hydra.cc/docs/advanced/defaults_list/>`_. The following code snippet shows an example configuration
 file for an experiment:
 
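The example configuration file itself lies outside this hunk. Purely to illustrate the Hydra defaults-list mechanism the paragraph describes, a minimal experiment config might look like the following; every file and component name here is hypothetical, not taken from the guide.

```yaml
# configs/experiment/my_experiment.yaml  (hypothetical file name)
# @package _global_

defaults:
  # Reference components registered in the config store by name
  # (hypothetical names):
  - /datasets@datasets.train: my_train_dataset
  - /modules/encoders@task.encoders.text: my_text_encoder
  - _self_

seed: 42

task:
  optimizer:
    lr: 1.0e-4
```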