
Commit b25ab02

update urls

1 parent 9611031 · commit b25ab02

12 files changed: 21 additions, 21 deletions

R/applications.R

Lines changed: 2 additions & 2 deletions
@@ -2642,7 +2642,7 @@ function (input_shape = NULL, alpha = 1, include_top = TRUE,
 #'
 #' # Reference
 #' - [Searching for MobileNetV3](
-#' https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)
+#' https://arxiv.org/pdf/1905.02244) (ICCV 2019)
 #'
 #' The following table describes the performance of MobileNets v3:
 #' ------------------------------------------------------------------------
@@ -2788,7 +2788,7 @@ function (input_shape = NULL, alpha = 1, minimalistic = FALSE,
 #'
 #' # Reference
 #' - [Searching for MobileNetV3](
-#' https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)
+#' https://arxiv.org/pdf/1905.02244) (ICCV 2019)
 #'
 #' The following table describes the performance of MobileNets v3:
 #' ------------------------------------------------------------------------
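For context, the roxygen blocks touched above document application_mobilenet_v3_large() and application_mobilenet_v3_small() (their man/ files appear further down in this commit). A minimal usage sketch in R; only input_shape, alpha, minimalistic, and include_top are visible in the diff, so the remaining arguments shown here (weights, the headless feature-extractor use) are assumptions based on the usual Keras application interface:

library(keras3)

# Large variant with the ImageNet classification head (assumed defaults)
model <- application_mobilenet_v3_large(
  input_shape = c(224L, 224L, 3L),
  alpha = 1,
  include_top = TRUE,
  weights = "imagenet"
)

# Small variant as a headless feature extractor (assumed arguments)
base <- application_mobilenet_v3_small(
  input_shape = c(224L, 224L, 3L),
  minimalistic = FALSE,
  include_top = FALSE,
  weights = "imagenet"
)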

R/losses.R

Lines changed: 5 additions & 5 deletions
@@ -131,7 +131,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
 #' Computes focal cross-entropy loss between true labels and predictions.
 #'
 #' @description
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
 #' helps to apply a focal factor to down-weight easy examples and focus more on
 #' hard examples. By default, the focal tensor is computed as follows:
 #'
@@ -157,7 +157,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
 #' when `from_logits=TRUE`) or a probability (i.e, value in `[0., 1.]` when
 #' `from_logits=FALSE`).
 #'
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
 #' helps to apply a "focal factor" to down-weight easy examples and focus more
 #' on hard examples. By default, the focal tensor is computed as follows:
 #'
@@ -274,13 +274,13 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
 #' @param alpha
 #' A weight balancing factor for class 1, default is `0.25` as
 #' mentioned in reference [Lin et al., 2018](
-#' https://arxiv.org/pdf/1708.02002.pdf). The weight for class 0 is
+#' https://arxiv.org/pdf/1708.02002). The weight for class 0 is
 #' `1.0 - alpha`.
 #'
 #' @param gamma
 #' A focusing parameter used to compute the focal factor, default is
 #' `2.0` as mentioned in the reference
-#' [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf).
+#' [Lin et al., 2018](https://arxiv.org/pdf/1708.02002).
 #'
 #' @param from_logits
 #' Whether to interpret `y_pred` as a tensor of
@@ -450,7 +450,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
 #' `class_weights`. We expect labels to be provided in a `one_hot`
 #' representation.
 #'
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
 #' helps to apply a focal factor to down-weight easy examples and focus more on
 #' hard examples. The general formula for the focal loss (FL)
 #' is as follows:
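The hunks above all touch the focal-loss documentation (loss_binary_focal_crossentropy() and loss_categorical_focal_crossentropy(), per the man/ files listed below). The idea from Lin et al., 2018 is to scale the cross-entropy by a focal factor (1 - p_t)^gamma, optionally balanced by alpha for class 1, so that easy examples contribute less. A standalone sketch in R; the argument lists beyond alpha, gamma, and from_logits are assumptions:

library(keras3)

# Binary case: y_pred holds probabilities because from_logits = FALSE;
# alpha = 0.25 and gamma = 2 are the documented defaults
y_true <- op_array(rbind(c(0, 1), c(0, 0)))
y_pred <- op_array(rbind(c(0.6, 0.4), c(0.4, 0.6)))
loss_binary_focal_crossentropy(y_true, y_pred,
                               alpha = 0.25, gamma = 2, from_logits = FALSE)

# Categorical case: labels must be one-hot, as the last hunk notes
y_true <- op_array(rbind(c(0, 1, 0), c(0, 0, 1)))
y_pred <- op_array(rbind(c(0.05, 0.95, 0.00), c(0.10, 0.80, 0.10)))
loss_categorical_focal_crossentropy(y_true, y_pred, gamma = 2)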

R/metrics.R

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 #' Computes the binary focal crossentropy loss.
 #'
 #' @description
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
 #' helps to apply a focal factor to down-weight easy examples and focus more on
 #' hard examples. By default, the focal tensor is computed as follows:
 #'
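The same focal-crossentropy description is reused for the metric counterpart in R/metrics.R, presumably metric_binary_focal_crossentropy(); a short sketch, with the function-style call assumed to mirror the loss above:

library(keras3)

y_true <- op_array(rbind(c(0, 1), c(0, 0)))
y_pred <- op_array(rbind(c(0.6, 0.4), c(0.4, 0.6)))
metric_binary_focal_crossentropy(y_true, y_pred, gamma = 2)  # per-sample values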

R/optimizers-schedules.R

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
 #' SGDR: Stochastic Gradient Descent with Warm Restarts.
 #'
 #' For the idea of a linear warmup of our learning rate,
-#' see [Goyal et al.](https://arxiv.org/pdf/1706.02677.pdf).
+#' see [Goyal et al.](https://arxiv.org/pdf/1706.02677).
 #'
 #' When we begin training a model, we often want an initial increase in our
 #' learning rate followed by a decay. If `warmup_target` is an int, this
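The schedule documented above is learning_rate_schedule_cosine_decay() (see its man/ entry below), which prepends a linear warmup in the style of Goyal et al. when warmup_target is set. A hedged sketch; argument names other than warmup_target are assumptions based on the Keras CosineDecay schedule:

library(keras3)

# Warm up linearly from initial_learning_rate to warmup_target over
# warmup_steps, then apply cosine decay over decay_steps
schedule <- learning_rate_schedule_cosine_decay(
  initial_learning_rate = 0,
  decay_steps = 10000,
  warmup_target = 1e-3,
  warmup_steps = 1000
)

optimizer <- optimizer_adam(learning_rate = schedule)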

Generated documentation files (diffs not rendered by GitHub):

man/application_mobilenet_v3_large.Rd: 1 addition & 1 deletion
man/application_mobilenet_v3_small.Rd: 1 addition & 1 deletion
man/layer_tfsm.Rd: 2 additions & 2 deletions
man/learning_rate_schedule_cosine_decay.Rd: 1 addition & 1 deletion
man/loss_binary_focal_crossentropy.Rd: 4 additions & 4 deletions
man/loss_categorical_focal_crossentropy.Rd: 1 addition & 1 deletion
