Commit cecbeb2 (parent: 0b0b9d5)

Small docstring fixes for the upcoming release (#1253)

keras-io renders `Call arguments:` as a special keyword, but not `Call args:`.

File tree

5 files changed: +6 additions, −6 deletions

keras_nlp/layers/modeling/position_embedding.py
Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ class PositionEmbedding(keras.layers.Layer):
             to `"glorot_uniform"`.
         seq_axis: The axis of the input tensor where we add the embeddings.
 
-    Call args:
+    Call arguments:
         inputs: The tensor inputs to compute an embedding for, with shape
             `(batch_size, sequence_length, hidden_dim)`. Only the input shape
             will be used, as the position embedding does not depend on the
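
The `Call arguments:` block above documents how the layer is invoked. As a rough usage sketch (the shapes, vocabulary size, and surrounding token embedding are illustrative, not part of this diff):

```python
import numpy as np
from tensorflow import keras
import keras_nlp

batch_size, seq_length, vocab_size, hidden_dim = 2, 10, 100, 64
token_ids = np.random.randint(vocab_size, size=(batch_size, seq_length))

# Embed tokens, then add a learned position embedding of the same shape.
token_embeddings = keras.layers.Embedding(vocab_size, hidden_dim)(token_ids)
position_embeddings = keras_nlp.layers.PositionEmbedding(
    sequence_length=seq_length
)(token_embeddings)
outputs = token_embeddings + position_embeddings  # (2, 10, 64)
```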

keras_nlp/layers/modeling/reversible_embedding.py
Lines changed: 2 additions & 2 deletions

@@ -53,7 +53,7 @@ class ReversibleEmbedding(keras.layers.Embedding):
     For stability, it is usually best to use full precision even when
     working with half or mixed precision training.
 
-    Call args:
+    Call arguments:
         inputs: The tensor inputs to the layer.
         reverse: Boolean. If `True` the layer will perform a linear projection
             from `output_dim` to `input_dim`, instead of a normal embedding
@@ -69,7 +69,7 @@ class ReversibleEmbedding(keras.layers.Embedding):
     # Generate random inputs.
     token_ids = np.random.randint(vocab_size, size=(batch_size, seq_length))
 
-    embedding = keras.layers.Embedding(vocab_size, hidden_dim)
+    embedding = keras_nlp.layers.ReversibleEmbedding(vocab_size, hidden_dim)
     # Embed tokens to shape `(batch_size, seq_length, hidden_dim)`.
     hidden_states = embedding(token_ids)
     # Project hidden states to shape `(batch_size, seq_length, vocab_size)`.
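
The second hunk above touches the layer's own docstring example. Filled in end to end, that example looks roughly like this (the concrete sizes are illustrative):

```python
import numpy as np
import keras_nlp

batch_size, seq_length, vocab_size, hidden_dim = 4, 10, 100, 32

# Generate random inputs.
token_ids = np.random.randint(vocab_size, size=(batch_size, seq_length))

embedding = keras_nlp.layers.ReversibleEmbedding(vocab_size, hidden_dim)
# Embed tokens to shape `(batch_size, seq_length, hidden_dim)`.
hidden_states = embedding(token_ids)
# Project hidden states to shape `(batch_size, seq_length, vocab_size)` by
# reusing the same embedding weights, as documented for `reverse=True`.
logits = embedding(hidden_states, reverse=True)
```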

keras_nlp/layers/modeling/rotary_embedding.py
Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ class RotaryEmbedding(keras.layers.Layer):
         sequence_axis: int. Sequence axis in the input tensor.
         feature_axis: int. Feature axis in the input tensor.
 
-    Call args:
+    Call arguments:
         inputs: The tensor inputs to apply the embedding to. This can have
             any shape, but must contain both a sequence and feature axis. The
             rotary embedding will be applied to `inputs` and returned.
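
A sketch of the call documented above, applied to an attention query tensor (the shapes are illustrative; the layer's defaults assume the sequence axis is 1 and the feature axis is last):

```python
import numpy as np
import keras_nlp

# Query tensor: (batch, sequence, num_heads, head_dim). The last axis is the
# feature axis; an even size is typical since rotary rotates feature pairs.
query = np.random.uniform(size=(2, 16, 8, 64)).astype("float32")

rotary = keras_nlp.layers.RotaryEmbedding(max_wavelength=10000)
# The rotary embedding is applied to `inputs` and returned with the same shape.
rotated_query = rotary(query)  # (2, 16, 8, 64)
```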

keras_nlp/layers/modeling/sine_position_encoding.py
Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ class SinePositionEncoding(keras.layers.Layer):
             curves, as described in Attention is All You Need. Defaults to
             `10000`.
 
-    Call args:
+    Call arguments:
         inputs: The tensor inputs to compute an embedding for, with shape
             `(batch_size, sequence_length, hidden_dim)`.
         start_index: An integer or integer tensor. The starting position to
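
A minimal sketch of the call documented above, adding the fixed sinusoidal encoding to token embeddings (names and sizes here are illustrative, not from the diff):

```python
import numpy as np
from tensorflow import keras
import keras_nlp

batch_size, seq_length, vocab_size, hidden_dim = 2, 10, 100, 64
token_ids = np.random.randint(vocab_size, size=(batch_size, seq_length))

token_embeddings = keras.layers.Embedding(vocab_size, hidden_dim)(token_ids)
# Returns a sinusoidal encoding with the same shape as its input.
position_encoding = keras_nlp.layers.SinePositionEncoding()(token_embeddings)
outputs = token_embeddings + position_encoding  # (2, 10, 64)
```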

keras_nlp/models/xlnet/xlnet_backbone.py
Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ class XLNetBackbone(Backbone):
            defaults to "zeros". The bias initializer for
            the dense and multiheaded relative attention layers.
 
-    Call Args:
+    Call arguments:
        token_ids: Indices of input sequence tokens in the vocabulary of shape
            `[batch_size, sequence_length]`.
        segment_ids: Segment token indices to indicate first and second portions
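
For reference, the call arguments named above are passed to the backbone together as a dict. A minimal sketch, assuming the backbone's usual constructor arguments (the configuration sizes below are illustrative, not taken from this diff):

```python
import numpy as np
import keras_nlp

input_data = {
    "token_ids": np.ones((1, 12), dtype="int32"),
    "segment_ids": np.zeros((1, 12), dtype="int32"),
    "padding_mask": np.ones((1, 12), dtype="int32"),
}
# Randomly initialized XLNet backbone with a small, illustrative config.
model = keras_nlp.models.XLNetBackbone(
    vocabulary_size=1000,
    num_layers=2,
    num_heads=2,
    hidden_dim=32,
    intermediate_dim=64,
)
sequence_output = model(input_data)  # `(batch_size, sequence_length, hidden_dim)`
```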
