Commit 36e3a1f

Update references in docs to 2025 (#1685)
Fixes #1651
1 parent 5993e79 commit 36e3a1f

File tree

5 files changed: +6 additions, -14 deletions


docs/README.MD

Lines changed: 1 addition & 1 deletion
@@ -6,4 +6,4 @@ PhotonVision is a free open-source vision processing software for FRC teams.
 
 This repository is the source code for our ReadTheDocs documentation, which can be found [here](https://docs.photonvision.org).
 
-[Contribution and formatting guidelines for this project](https://docs.photonvision.org/en/latest/docs/contributing/photonvision-docs/index.html)
+[Contribution and formatting guidelines for this project](https://docs.photonvision.org/en/latest/docs/contributing/index.html)

docs/source/docs/apriltag-pipelines/about-apriltags.md

Lines changed: 1 addition & 1 deletion
@@ -10,5 +10,5 @@ AprilTags are a common type of visual fiducial marker. Visual fiducial markers a
 A more technical explanation can be found in the [WPILib documentation](https://docs.wpilib.org/en/latest/docs/software/vision-processing/apriltag/apriltag-intro.html).
 
 :::{note}
-You can get FIRST's [official PDF of the targets used in 2024 here](https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf).
+You can get FIRST's [official PDF of the targets used in 2025 here](https://firstfrc.blob.core.windows.net/frc2025/FieldAssets/Apriltag_Images_and_User_Guide.pdf).
 :::

docs/source/docs/apriltag-pipelines/multitag.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ The returned field to camera transform is a transform from the fixed field origi
 
 ## Updating the Field Layout
 
-PhotonVision ships by default with the [2024 field layout JSON](https://github.com/wpilibsuite/allwpilib/blob/main/apriltag/src/main/native/resources/edu/wpi/first/apriltag/2024-crescendo.json). The layout can be inspected by navigating to the settings tab and scrolling down to the "AprilTag Field Layout" card, as shown below.
+PhotonVision ships by default with the [2025 field layout JSON](https://github.com/wpilibsuite/allwpilib/blob/main/apriltag/src/main/native/resources/edu/wpi/first/apriltag/2025-reefscape.json). The layout can be inspected by navigating to the settings tab and scrolling down to the "AprilTag Field Layout" card, as shown below.
 
 ```{image} images/field-layout.png
 :alt: The currently saved field layout in the Photon UI
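For context on what this commit swaps out: the field layout file referenced above follows WPILib's AprilTagFieldLayout JSON schema — a list of tag IDs, each with a field-relative 3D pose (translation in meters plus a rotation quaternion), and the field dimensions. A minimal Python sketch of reading that structure; the pose values here are illustrative placeholders, not the real 2025 tag poses:

```python
import json

# Illustrative excerpt of the WPILib field-layout JSON format.
# The numbers are made up for this sketch; real values live in
# 2025-reefscape.json in the allwpilib repository.
layout_json = """
{
  "tags": [
    {"ID": 7,
     "pose": {"translation": {"x": 15.08, "y": 0.25, "z": 1.36},
              "rotation": {"quaternion": {"W": 0.5, "X": 0.0, "Y": 0.0, "Z": 0.87}}}}
  ],
  "field": {"length": 16.54, "width": 8.21}
}
"""

layout = json.loads(layout_json)

# Index tag poses by ID so a pose estimator can look them up quickly.
poses_by_id = {tag["ID"]: tag["pose"] for tag in layout["tags"]}

print(poses_by_id[7]["translation"]["x"])  # field-relative X of tag 7, in meters
```

This is the same structure the "AprilTag Field Layout" card in the settings tab renders, so inspecting the JSON directly is an easy sanity check that a custom layout uploaded to PhotonVision contains the tags you expect.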

docs/source/docs/examples/aimingatatarget.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ The following example is from the PhotonLib example repository ([Java](https://g
 - A Robot
 - A camera mounted rigidly to the robot's frame, centered and pointed forward.
 - A coprocessor running PhotonVision with an AprilTag or Aruco 2D Pipeline.
-- [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface.
+- [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2025/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface.
 
 ## Code
docs/source/docs/objectDetection/about-object-detection.md

Lines changed: 2 additions & 10 deletions
@@ -4,13 +4,7 @@
 
 PhotonVision supports object detection using neural network accelerator hardware built into Orange Pi 5/5+ coprocessors. The Neural Processing Unit, or NPU, is [used by PhotonVision](https://github.com/PhotonVision/rknn_jni/tree/main) to massively accelerate certain math operations like those needed for running ML-based object detection.
 
-For the 2024 season, PhotonVision shipped with a **pre-trained NOTE detector** (shown above), as well as a mechanism for swapping in custom models. Future development will focus on enabling lower friction management of multiple custom models.
-
-```{image} images/notes-ui.png
-
-```
-
-For the 2025 season, we intend to release a new trained model once gamepiece data is released.
+For the 2025 season, PhotonVision does not currently ship with a pre-trained detector. If teams are interested in using object detection, they can follow the custom process outlined {ref}`below <docs/objectDetection/about-object-detection:Uploading Custom Models>`.
 
 ## Tracking Objects
 
@@ -49,6 +43,4 @@ Coming soon!
 PhotonVision currently ONLY supports YOLOv5 models trained and converted to `.rknn` format for RK3588 CPUs! Other models require different post-processing code and will NOT work. The model conversion process is also highly particular. Proceed with care.
 :::
 
-Our [pre-trained NOTE model](https://github.com/PhotonVision/photonvision/blob/main/photon-server/src/main/resources/models/note-640-640-yolov5s.rknn) is automatically extracted from the JAR when PhotonVision starts, only if a file named "note-640-640-yolov5s.rknn" and "labels.txt" does not exist in the folder `photonvision_config/models/`. This technically allows power users to replace the model and label files with new ones without rebuilding Photon from source and uploading a new JAR.
-
-Use a program like WinSCP or FileZilla to access your coprocessor's filesystem, and copy the new `.rknn` model file into /home/pi. Next, SSH into the coprocessor and `sudo mv /path/to/new/model.rknn /opt/photonvision/photonvision_config/models/note-640-640-yolov5s.rknn`. Repeat this process with the labels file, which should contain one line per label the model outputs with no trailing newline. Next, restart PhotonVision via the web UI.
+Use a program like WinSCP or FileZilla to access your coprocessor's filesystem, and copy the new `.rknn` model file into /home/pi. Next, SSH into the coprocessor and `sudo mv /path/to/new/model.rknn /opt/photonvision/photonvision_config/models/NEW-MODEL-NAME.rknn`. Repeat this process with the labels file, which should contain one line per label the model outputs with no trailing newline. Next, restart PhotonVision via the web UI.
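The labels-file requirement in the new text above (one class name per line, no trailing newline) is easy to get wrong when generating the file programmatically. A minimal Python sketch of producing a correctly formatted labels file; the class names are hypothetical, stand-ins for whatever your custom model outputs:

```python
# Hypothetical class names, listed in the order the model outputs them.
labels = ["gamepiece_a", "gamepiece_b"]

# Joining with "\n" (instead of writing each label followed by a newline)
# guarantees there is no trailing newline after the last label.
content = "\n".join(labels)

with open("labels.txt", "w") as f:
    f.write(content)
```

The resulting `labels.txt` can then be copied to the coprocessor alongside the `.rknn` model file using the WinSCP/FileZilla-and-SSH procedure described above.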
