Commit f7f22e9

checkin new tethers
1 parent a8fbafe commit f7f22e9

20 files changed: +1349 −0 lines changed
Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@
__signature__
keras.activations.sparse_plus(x)
__doc__
SparsePlus activation function.

SparsePlus is defined as:

`sparse_plus(x) = 0` for `x <= -1`.
`sparse_plus(x) = (1/4) * (x + 1)^2` for `-1 < x < 1`.
`sparse_plus(x) = x` for `x >= 1`.

Args:
    x: Input tensor.
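
As a quick sanity check of the piecewise definition above, here is a minimal NumPy sketch (editorial illustration, not part of the committed tether file; `np_sparse_plus` is a made-up helper name):

```python
import numpy as np

def np_sparse_plus(x):
    # Piecewise form from the docstring:
    #   0                   for x <= -1
    #   (1/4) * (x + 1)^2   for -1 < x < 1
    #   x                   for x >= 1
    x = np.asarray(x, dtype="float32")
    return np.where(x <= -1, 0.0, np.where(x >= 1, x, 0.25 * (x + 1) ** 2))

print(np_sparse_plus([-2.0, -1.0, 0.0, 1.0, 3.0]))
# -> [0.   0.   0.25 1.   3.  ]
```
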
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
__signature__
keras.activations.sparsemax(x, axis=-1)
__doc__
Sparsemax activation function.

For each batch `i` and class `j`, the sparsemax
activation function is defined as:

`sparsemax(x)[i, j] = max(x[i, j] - τ(x[i, :]), 0).`

Args:
    x: Input tensor.
    axis: `int`, axis along which the sparsemax operation is applied.

Returns:
    A tensor, the output of the sparsemax transformation. Has the same
    type and shape as `x`.

Reference:

- [Martins et al., 2016](https://arxiv.org/abs/1602.02068)
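
In the definition above, `τ(x[i, :])` is the threshold that makes the surviving (nonzero) outputs sum to 1. As a hedged illustration of that projection (not the Keras implementation; `np_sparsemax` is a made-up helper restricted to the last axis), the standard algorithm from Martins et al. sorts the logits, finds the support size, and derives τ from the cumulative sum:

```python
import numpy as np

def np_sparsemax(x):
    # sparsemax(x) = max(x - tau, 0), with tau chosen so the nonzero
    # outputs sum to 1 (Martins et al., 2016); last axis only.
    x = np.asarray(x, dtype="float64")
    z = np.sort(x, axis=-1)[..., ::-1]      # logits sorted in descending order
    k = np.arange(1, x.shape[-1] + 1)
    cumsum = np.cumsum(z, axis=-1)
    support = 1 + k * z > cumsum            # entries that stay nonzero
    k_max = support.sum(axis=-1, keepdims=True)
    tau = (np.take_along_axis(cumsum, k_max - 1, axis=-1) - 1) / k_max
    return np.maximum(x - tau, 0.0)

print(np_sparsemax(np.array([[1.0, 1.2, 0.2]])))
# -> [[0.4 0.6 0. ]]  (sparse: the smallest logit is zeroed out)
```
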
Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
__signature__
keras.activations.threshold(
    x,
    threshold,
    default_value
)
__doc__
Threshold activation function.

It is defined as:

`threshold(x) = x` if `x > threshold`,
`threshold(x) = default_value` otherwise.

Args:
    x: Input tensor.
    threshold: The value that decides when `x` is retained or replaced.
    default_value: Value to assign when `x <= threshold`.
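
A minimal sketch of the definition above in plain NumPy (editorial illustration; `np_threshold` is a made-up name). Per the signature above, `keras.activations.threshold(x, 1.0, 0.0)` should compute the same thing on a tensor:

```python
import numpy as np

def np_threshold(x, threshold, default_value):
    # threshold(x) = x if x > threshold, else default_value
    x = np.asarray(x, dtype="float32")
    return np.where(x > threshold, x, default_value)

print(np_threshold([-1.0, 0.5, 2.0], threshold=1.0, default_value=0.0))
# -> [0. 0. 2.]
```
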

.tether/man/layer_equalization.txt

Lines changed: 127 additions & 0 deletions
@@ -0,0 +1,127 @@
Help on class Equalization in module keras.src.layers.preprocessing.image_preprocessing.equalization:

class Equalization(keras.src.layers.preprocessing.image_preprocessing.base_image_preprocessing_layer.BaseImagePreprocessingLayer)
 |  Equalization(value_range=(0, 255), bins=256, data_format=None, **kwargs)
 |
 |  Preprocessing layer for histogram equalization on image channels.
 |
 |  Histogram equalization is a technique to adjust image intensities to
 |  enhance contrast by effectively spreading out the most frequent
 |  intensity values. This layer applies equalization on a channel-wise
 |  basis, which can improve the visibility of details in images.
 |
 |  This layer works with both grayscale and color images, performing
 |  equalization independently on each color channel. At inference time,
 |  the equalization is consistently applied.
 |
 |  **Note:** This layer is safe to use inside a `tf.data` pipeline
 |  (independently of which backend you're using).
 |
 |  Args:
 |      value_range: Optional list/tuple of 2 floats specifying the lower
 |          and upper limits of the input data values. Defaults to `[0, 255]`.
 |          If the input image has been scaled, use the appropriate range
 |          (e.g., `[0.0, 1.0]`). The equalization will be scaled to this
 |          range, and output values will be clipped accordingly.
 |      bins: Integer specifying the number of histogram bins to use for
 |          equalization. Defaults to 256, which is suitable for 8-bit images.
 |          Larger values can provide more granular intensity redistribution.
 |
 |  Input shape:
 |      3D (unbatched) or 4D (batched) tensor with shape:
 |      `(..., height, width, channels)`, in `"channels_last"` format,
 |      or `(..., channels, height, width)`, in `"channels_first"` format.
 |
 |  Output shape:
 |      3D (unbatched) or 4D (batched) tensor with shape:
 |      `(..., target_height, target_width, channels)`,
 |      or `(..., channels, target_height, target_width)`,
 |      in `"channels_first"` format.
 |
 |  Example:
 |
 |  ```python
 |  # Create an equalization layer for standard 8-bit images
 |  equalizer = keras.layers.Equalization()
 |
 |  # An image with uneven intensity distribution
 |  image = [...]  # your input image
 |
 |  # Apply histogram equalization
 |  equalized_image = equalizer(image)
 |
 |  # For images with custom value range
 |  custom_equalizer = keras.layers.Equalization(
 |      value_range=[0.0, 1.0],  # for normalized images
 |      bins=128  # fewer bins for more subtle equalization
 |  )
 |  custom_equalized = custom_equalizer(normalized_image)
 |  ```
 |
 |  Method resolution order:
 |      Equalization
 |      keras.src.layers.preprocessing.image_preprocessing.base_image_preprocessing_layer.BaseImagePreprocessingLayer
 |      keras.src.layers.preprocessing.tf_data_layer.TFDataLayer
 |      keras.src.layers.layer.Layer
 |      keras.src.backend.tensorflow.layer.TFLayer
 |      keras.src.backend.tensorflow.trackable.KerasAutoTrackable
 |      tensorflow.python.trackable.autotrackable.AutoTrackable
 |      tensorflow.python.trackable.base.Trackable
 |      keras.src.ops.operation.Operation
 |      keras.src.saving.keras_saveable.KerasSaveable
 |      builtins.object
 |
 |  Methods defined here:
 |
 |  __init__(
 |      self,
 |      value_range=(0, 255),
 |      bins=256,
 |      data_format=None,
 |      **kwargs
 |  )
 |      Initialize self. See help(type(self)) for accurate signature.
 |
 |  compute_output_shape(self, input_shape)
 |
 |  compute_output_spec(
 |      self,
 |      inputs,
 |      **kwargs
 |  )
 |
 |  get_config(self)
 |      Returns the config of the object.
 |
 |      An object config is a Python dictionary (serializable)
 |      containing the information needed to re-instantiate it.
 |
 |  transform_bounding_boxes(
 |      self,
 |      bounding_boxes,
 |      transformation,
 |      training=True
 |  )
 |
 |  transform_images(
 |      self,
 |      images,
 |      transformation,
 |      training=True
 |  )
 |
 |  transform_labels(
 |      self,
 |      labels,
 |      transformation,
 |      training=True
 |  )
 |
 |  transform_segmentation_masks(
 |      self,
 |      segmentation_masks,
 |      transformation,
 |      training=True
 |  )
 |
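
The docstring notes that the layer is safe inside a `tf.data` pipeline; as a hedged sketch of that usage (the image batch is random data made up for illustration, and this assumes `tensorflow` and `keras` are installed):

```python
import numpy as np
import tensorflow as tf
import keras

# Dummy batch of 8-bit RGB images in channels_last format.
images = np.random.randint(0, 256, size=(8, 64, 64, 3)).astype("float32")

equalizer = keras.layers.Equalization(value_range=(0, 255), bins=256)

ds = (
    tf.data.Dataset.from_tensor_slices(images)
    .batch(4)
    .map(equalizer)  # channel-wise histogram equalization per batch
)
for batch in ds.take(1):
    print(batch.shape)  # (4, 64, 64, 3)
```
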

.tether/man/layer_mix_up.txt

Lines changed: 95 additions & 0 deletions
@@ -0,0 +1,95 @@
Help on class MixUp in module keras.src.layers.preprocessing.image_preprocessing.mix_up:

class MixUp(keras.src.layers.preprocessing.image_preprocessing.base_image_preprocessing_layer.BaseImagePreprocessingLayer)
 |  MixUp(alpha=0.2, data_format=None, seed=None, **kwargs)
 |
 |  MixUp implements the MixUp data augmentation technique.
 |
 |  Args:
 |      alpha: Float between 0 and 1. Controls the blending strength.
 |          Smaller values mean less mixing, while larger values allow
 |          for more blending between images. Defaults to 0.2,
 |          recommended for ImageNet1k classification.
 |      seed: Integer. Used to create a random seed.
 |
 |  References:
 |      - [MixUp paper](https://arxiv.org/abs/1710.09412).
 |      - [MixUp for Object Detection paper](https://arxiv.org/pdf/1902.04103).
 |
 |  Example:
 |  ```python
 |  (images, labels), _ = keras.datasets.cifar10.load_data()
 |  images, labels = images[:8], labels[:8]
 |  labels = keras.ops.cast(keras.ops.one_hot(labels.flatten(), 10), "float32")
 |  mix_up = keras.layers.MixUp(alpha=0.2)
 |  output = mix_up({"images": images, "labels": labels})
 |  ```
 |
 |  Method resolution order:
 |      MixUp
 |      keras.src.layers.preprocessing.image_preprocessing.base_image_preprocessing_layer.BaseImagePreprocessingLayer
 |      keras.src.layers.preprocessing.tf_data_layer.TFDataLayer
 |      keras.src.layers.layer.Layer
 |      keras.src.backend.tensorflow.layer.TFLayer
 |      keras.src.backend.tensorflow.trackable.KerasAutoTrackable
 |      tensorflow.python.trackable.autotrackable.AutoTrackable
 |      tensorflow.python.trackable.base.Trackable
 |      keras.src.ops.operation.Operation
 |      keras.src.saving.keras_saveable.KerasSaveable
 |      builtins.object
 |
 |  Methods defined here:
 |
 |  __init__(
 |      self,
 |      alpha=0.2,
 |      data_format=None,
 |      seed=None,
 |      **kwargs
 |  )
 |      Initialize self. See help(type(self)) for accurate signature.
 |
 |  compute_output_shape(self, input_shape)
 |
 |  get_config(self)
 |      Returns the config of the object.
 |
 |      An object config is a Python dictionary (serializable)
 |      containing the information needed to re-instantiate it.
 |
 |  get_random_transformation(
 |      self,
 |      data,
 |      training=True,
 |      seed=None
 |  )
 |
 |  transform_bounding_boxes(
 |      self,
 |      bounding_boxes,
 |      transformation,
 |      training=True
 |  )
 |
 |  transform_images(
 |      self,
 |      images,
 |      transformation=None,
 |      training=True
 |  )
 |
 |  transform_labels(
 |      self,
 |      labels,
 |      transformation,
 |      training=True
 |  )
 |
 |  transform_segmentation_masks(
 |      self,
 |      segmentation_masks,
 |      transformation,
 |      training=True
 |  )
 |
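
For context, MixUp blends each sample with a randomly chosen partner using a weight drawn from a `Beta(alpha, alpha)` distribution, applying the same weight to the images and to their one-hot labels. A minimal NumPy sketch of that blending (editorial illustration of the technique, not the layer's internal implementation; `mixup_batch` is a made-up helper):

```python
import numpy as np

def mixup_batch(images, labels, alpha=0.2, seed=None):
    # Blend each sample with a randomly permuted partner using a
    # per-sample Beta(alpha, alpha) mixing weight.
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha, size=images.shape[0])
    perm = rng.permutation(images.shape[0])
    lam_img = lam.reshape(-1, 1, 1, 1)
    mixed_images = lam_img * images + (1 - lam_img) * images[perm]
    mixed_labels = lam[:, None] * labels + (1 - lam[:, None]) * labels[perm]
    return mixed_images, mixed_labels

images = np.random.rand(8, 32, 32, 3).astype("float32")
labels = np.eye(10, dtype="float32")[np.random.randint(0, 10, size=8)]
mixed_images, mixed_labels = mixup_batch(images, labels, alpha=0.2, seed=0)
print(mixed_images.shape, mixed_labels.shape)  # (8, 32, 32, 3) (8, 10)
```
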

.tether/man/layer_rand_augment.txt

Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
Help on class RandAugment in module keras.src.layers.preprocessing.image_preprocessing.rand_augment:

class RandAugment(keras.src.layers.preprocessing.image_preprocessing.base_image_preprocessing_layer.BaseImagePreprocessingLayer)
 |  RandAugment(value_range=(0, 255), num_ops=2, factor=0.5, interpolation='bilinear', seed=None, data_format=None, **kwargs)
 |
 |  RandAugment performs the Rand Augment operation on input images.
 |
 |  This layer can be thought of as an all-in-one image augmentation layer. The
 |  policy implemented by this layer has been benchmarked extensively and is
 |  effective on a wide variety of datasets.
 |
 |  References:
 |      - [RandAugment](https://arxiv.org/abs/1909.13719)
 |
 |  Args:
 |      value_range: The range of values the input image can take.
 |          Default is `(0, 255)`. Typically, this would be `(0, 1)`
 |          for normalized images or `(0, 255)` for raw images.
 |      num_ops: The number of augmentation operations to apply sequentially
 |          to each image. Default is 2.
 |      factor: The strength of the augmentation as a normalized value
 |          between 0 and 1. Default is 0.5.
 |      interpolation: The interpolation method to use for resizing operations.
 |          Options include `nearest`, `bilinear`. Default is `bilinear`.
 |      seed: Integer. Used to create a random seed.
 |
 |  Method resolution order:
 |      RandAugment
 |      keras.src.layers.preprocessing.image_preprocessing.base_image_preprocessing_layer.BaseImagePreprocessingLayer
 |      keras.src.layers.preprocessing.tf_data_layer.TFDataLayer
 |      keras.src.layers.layer.Layer
 |      keras.src.backend.tensorflow.layer.TFLayer
 |      keras.src.backend.tensorflow.trackable.KerasAutoTrackable
 |      tensorflow.python.trackable.autotrackable.AutoTrackable
 |      tensorflow.python.trackable.base.Trackable
 |      keras.src.ops.operation.Operation
 |      keras.src.saving.keras_saveable.KerasSaveable
 |      builtins.object
 |
 |  Methods defined here:
 |
 |  __init__(
 |      self,
 |      value_range=(0, 255),
 |      num_ops=2,
 |      factor=0.5,
 |      interpolation='bilinear',
 |      seed=None,
 |      data_format=None,
 |      **kwargs
 |  )
 |      Initialize self. See help(type(self)) for accurate signature.
 |
 |  build(self, input_shape)
 |
 |  compute_output_shape(self, input_shape)
 |
 |  get_config(self)
 |      Returns the config of the object.
 |
 |      An object config is a Python dictionary (serializable)
 |      containing the information needed to re-instantiate it.
 |
 |  get_random_transformation(
 |      self,
 |      data,
 |      training=True,
 |      seed=None
 |  )
 |
 |  transform_bounding_boxes(
 |      self,
 |      bounding_boxes,
 |      transformation,
 |      training=True
 |  )
 |
 |  transform_images(
 |      self,
 |      images,
 |      transformation,
 |      training=True
 |  )
 |
 |  transform_labels(
 |      self,
 |      labels,
 |      transformation,
 |      training=True
 |  )
 |
 |  transform_segmentation_masks(
 |      self,
 |      segmentation_masks,
 |      transformation,
 |      training=True
 |  )
 |
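
A minimal usage sketch following the signature documented above (the input batch is random data made up for illustration):

```python
import numpy as np
import keras

# Dummy batch of raw 8-bit images in channels_last format.
images = np.random.randint(0, 256, size=(4, 224, 224, 3)).astype("float32")

rand_augment = keras.layers.RandAugment(
    value_range=(0, 255),  # raw, unnormalized images
    num_ops=2,             # two random ops applied sequentially per image
    factor=0.5,            # medium augmentation strength
    seed=42,
)
augmented = rand_augment(images)
print(augmented.shape)  # (4, 224, 224, 3)
```
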
