|
| 1 | +Help on class Circle in module keras.src.losses.losses: |
| 2 | + |
| 3 | +class Circle(LossFunctionWrapper) |
| 4 | + | Circle(gamma=80.0, margin=0.4, remove_diagonal=True, reduction='sum_over_batch_size', name='circle', dtype=None) |
| 5 | + | |
| 6 | + | Computes Circle Loss between integer labels and L2-normalized embeddings. |
| 7 | + | |
| 8 | + | This is a metric learning loss designed to minimize within-class distance |
| 9 | + | and maximize between-class distance in a flexible manner by dynamically |
| 10 | + | adjusting the penalty strength based on optimization status of each |
| 11 | + | similarity score. |
| 12 | + | |
| 13 | + | To use Circle Loss effectively, the model should output embeddings without |
| 14 | + | an activation function (such as a `Dense` layer with `activation=None`) |
| 15 | + | followed by a `UnitNormalization` layer to ensure unit-norm embeddings. |
| 16 | + | |
| 17 | + | Args: |
| 18 | + | gamma: Scaling factor that determines the largest scale of each |
| 19 | + | similarity score. Defaults to `80`. |
| 20 | + | margin: The relaxation factor. Below this value, negatives are |
| 21 | + | up-weighted and positives are down-weighted; similarly, above this |
| 22 | + | value negatives are down-weighted and positives are up-weighted. |
| 23 | + | Defaults to `0.4`. |
| 24 | + | remove_diagonal: Boolean, whether to remove self-similarities from the |
| 25 | + | positive mask. Defaults to `True`. |
| 26 | + | reduction: Type of reduction to apply to the loss. In almost all cases |
| 27 | + | this should be `"sum_over_batch_size"`. Supported options are |
| 28 | + | `"sum"`, `"sum_over_batch_size"`, `"mean"`, |
| 29 | + | `"mean_with_sample_weight"`, `"none"` or `None`. `"sum"` sums the loss, |
| 30 | + | `"sum_over_batch_size"` and `"mean"` sum the loss and divide by the |
| 31 | + | sample size, and `"mean_with_sample_weight"` sums the loss and |
| 32 | + | divides by the sum of the sample weights. `"none"` and `None` |
| 33 | + | perform no aggregation. Defaults to `"sum_over_batch_size"`. |
| 34 | + | name: Optional name for the loss instance. |
| 35 | + | dtype: The dtype of the loss's computations. Defaults to `None`, which |
| 36 | + | means using `keras.backend.floatx()`. `keras.backend.floatx()` is |
| 37 | + | `"float32"` unless set to a different value |
| 38 | + | (via `keras.backend.set_floatx()`). If a `keras.DTypePolicy` is |
| 39 | + | provided, then the `compute_dtype` will be utilized. |
| 40 | + | |
| 41 | + | Examples: |
| 42 | + | |
| 43 | + | Usage with the `compile()` API: |
| 44 | + | |
| 45 | + | ```python |
| 46 | + | model = keras.Sequential([ |
| 47 | + | keras.layers.Input(shape=(224, 224, 3)), |
| 48 | + | keras.layers.Conv2D(16, (3, 3), activation='relu'), |
| 49 | + | keras.layers.Flatten(), |
| 50 | + | keras.layers.Dense(64, activation=None), # No activation |
| 51 | + | keras.layers.UnitNormalization() # L2 normalization |
| 52 | + | ]) |
| 53 | + | |
| 54 | + | model.compile(optimizer="adam", loss=keras.losses.Circle()) |
| 55 | + | ``` |
| 56 | + | |
| 57 | + | Reference: |
| 58 | + | - [Yifan Sun et al., 2020](https://arxiv.org/abs/2002.10857) |
| 59 | + | |
| 60 | + | Method resolution order: |
| 61 | + | Circle |
| 62 | + | LossFunctionWrapper |
| 63 | + | keras.src.losses.loss.Loss |
| 64 | + | keras.src.saving.keras_saveable.KerasSaveable |
| 65 | + | builtins.object |
| 66 | + | |
| 67 | + | Methods defined here: |
| 68 | + | |
| 69 | + | __init__( |
| 70 | + | self, |
| 71 | + | gamma=80.0, |
| 72 | + | margin=0.4, |
| 73 | + | remove_diagonal=True, |
| 74 | + | reduction='sum_over_batch_size', |
| 75 | + | name='circle', |
| 76 | + | dtype=None |
| 77 | + | ) |
| 78 | + | Initialize self. See help(type(self)) for accurate signature. |
| 79 | + | |
| 80 | + | get_config(self) |
| 81 | + | |
| 82 | + |
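
Beyond the `compile()` example shown in the help text, the snippet below is a minimal standalone sketch of calling the loss directly on integer labels and L2-normalized embeddings. It assumes the standard Keras `Loss.__call__(y_true, y_pred)` convention and the usual `Loss.from_config()` classmethod for the `get_config()` round-trip; none of it is taken verbatim from the diff above.

```python
import numpy as np
import keras

# Integer class labels and unit-norm embeddings, as the docstring expects.
y_true = np.array([0, 1, 0, 1])                                  # shape (batch,)
embeddings = np.random.normal(size=(4, 8)).astype("float32")
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # L2-normalize

# Direct invocation: Loss instances are callable as loss_fn(y_true, y_pred).
circle = keras.losses.Circle(gamma=80.0, margin=0.4)
loss = circle(y_true, embeddings)                                # scalar tensor

# Config round-trip via get_config() (assumes the standard Loss.from_config()).
restored = keras.losses.Circle.from_config(circle.get_config())
```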