docs/agents.md

Let’s dig deeper into this class. It has to implement the following methods:
Details on the output format can be found below.
**The future trajectory has to be returned as an object of type `navsim.common.dataclasses.Trajectory`. For examples, see the constant velocity agent or the human agent.**
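For intuition, a constant-velocity agent boils down to extrapolating the current ego state into an array of future poses. The sketch below (NumPy, illustrative only: the `[T, 3]` pose layout of (x, y, heading) is the kind of data a `Trajectory` object wraps, together with its sampling metadata) shows that computation:

```python
import numpy as np

def constant_velocity_poses(velocity_mps: float, num_poses: int, dt: float) -> np.ndarray:
    """Extrapolate the current ego state at constant speed along x.

    Returns an array of shape [num_poses, 3] with one (x, y, heading)
    pose per future timestep -- the kind of pose array a Trajectory
    object is built from.
    """
    timestamps = dt * np.arange(1, num_poses + 1)
    poses = np.zeros((num_poses, 3))
    poses[:, 0] = velocity_mps * timestamps  # x advances at constant speed
    # y and heading stay 0: straight-line motion in the ego frame
    return poses

poses = constant_velocity_poses(velocity_mps=5.0, num_poses=8, dt=0.5)
```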
# Learning-based Agents

Most likely, your agent will involve learning-based components. NAVSIM provides a lightweight and easy-to-use interface for training. To use it, your agent has to implement some further functionality. In addition to the methods mentioned above, you have to implement the methods below. Have a look at `navsim.agents.ego_status_mlp_agent.EgoStatusMLPAgent` for an example.
- `get_feature_builders()`

Has to return a list of feature builders (of type `navsim.planning.training.abstract_feature_target_builder.AbstractFeatureBuilder`).

Feature builders take the `AgentInput` object and compute the feature tensors used for agent training and inference. One feature builder can compute multiple feature tensors. They have to be returned in a dictionary, which is then provided to the model in the forward pass.
Currently, we provide the following feature builders:

- `EgoStateFeatureBuilder` (returns a tensor containing the current velocity, acceleration, and driving command)
- _the list will be extended in future devkit versions_
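As a sketch of the builder contract described above (the class and method names here follow our reading of the abstract interface and should be treated as assumptions; NumPy arrays stand in for the torch tensors real builders return):

```python
import numpy as np

class VelocityFeatureBuilderSketch:
    """Illustrative feature builder: maps an agent-input-like object
    to a dictionary of named feature arrays."""

    def get_unique_name(self) -> str:
        # Builders expose a name so their features can be identified.
        return "velocity_feature_sketch"

    def compute_features(self, agent_input: dict) -> dict:
        # One builder may emit several tensors; all go into one dictionary,
        # which is later merged with the outputs of the other builders and
        # handed to the model's forward pass.
        return {
            "velocity": np.asarray(agent_input["velocity"], dtype=np.float32),
            "driving_command": np.asarray(agent_input["driving_command"], dtype=np.float32),
        }

features = VelocityFeatureBuilderSketch().compute_features(
    {"velocity": [5.0, 0.1], "driving_command": [0.0, 1.0, 0.0, 0.0]}
)
```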
- `get_target_builders()`

Similar to `get_feature_builders()`, this returns the target builders (of type `navsim.planning.training.abstract_feature_target_builder.AbstractTargetBuilder`) used in training. In contrast to feature builders, they have access to the `Scene` object, which contains ground-truth information (instead of just the `AgentInput`).
- `forward()`

The forward pass through the model. Features are provided as a dictionary containing all the features generated by the feature builders. All tensors are already batched and on the same device as the model. The forward pass has to output a dictionary in which one entry has the key "trajectory" and contains a tensor representing the future trajectory, i.e., of shape `[B, T, 3]`, where B is the batch size, T is the number of future timesteps, and 3 refers to (x, y, heading).
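A toy forward pass, showing only the dict-in / dict-out contract (NumPy stands in for torch tensors; the feature key and horizon length are made up for illustration):

```python
import numpy as np

def forward_sketch(features: dict) -> dict:
    """Toy forward pass: ignores the feature values and predicts standing still.

    Mirrors only the required contract: a dictionary of batched features in,
    a dictionary with a "trajectory" entry of shape [B, T, 3] out.
    """
    batch_size = features["velocity"].shape[0]
    num_timesteps = 8  # illustrative horizon
    trajectory = np.zeros((batch_size, num_timesteps, 3), dtype=np.float32)
    return {"trajectory": trajectory}

out = forward_sketch({"velocity": np.zeros((4, 2), dtype=np.float32)})
```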
- `compute_loss()`

Given the features, the targets, and the model predictions, this function computes the loss used for training. The loss has to be returned as a single tensor.
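For example, a plain mean-squared-error imitation loss over the predicted poses could look like this (a minimal sketch with NumPy; real agents return a torch scalar and may combine several loss terms):

```python
import numpy as np

def compute_loss_sketch(targets: dict, predictions: dict) -> float:
    """Mean squared error between predicted and ground-truth trajectories.

    Collapses the [B, T, 3] difference to one scalar, matching the
    "single tensor" requirement.
    """
    diff = predictions["trajectory"] - targets["trajectory"]
    return float(np.mean(diff ** 2))

loss = compute_loss_sketch(
    {"trajectory": np.ones((2, 8, 3))},
    {"trajectory": np.zeros((2, 8, 3))},
)
```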
- `get_optimizers()`

Use this function to define the optimizers used for training. Depending on whether you want to use a learning-rate scheduler, this function needs to return either just an optimizer (of type `torch.optim.Optimizer`) or a dictionary that contains the optimizer (key: "optimizer") and the learning-rate scheduler of type `torch.optim.lr_scheduler.LRScheduler` (key: "lr_scheduler").
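The dictionary variant might be sketched as follows (the model, learning rate, and scheduler choice are arbitrary placeholders; only the return shape with the "optimizer" and "lr_scheduler" keys is the point):

```python
import torch

class TinyModel(torch.nn.Module):
    """Placeholder model so the optimizer has parameters to manage."""
    def __init__(self):
        super().__init__()
        self.head = torch.nn.Linear(4, 8 * 3)

def get_optimizers(model: torch.nn.Module) -> dict:
    # With a scheduler: return both objects under the expected keys.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
    return {"optimizer": optimizer, "lr_scheduler": scheduler}

opts = get_optimizers(TinyModel())
```

Without a scheduler, returning just the `torch.optim.Optimizer` instance would suffice.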
- `compute_trajectory()`

In contrast to non-learning-based agents, you don't have to implement this function. During inference, the trajectory is automatically computed using the feature builders and the forward method.
## Inputs
`get_sensor_config()` can be overridden to determine which sensors are accessible to the agent.
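The override pattern might look like this (the `SensorConfigSketch` dataclass and its fields are invented stand-ins for illustration, not the devkit's actual `SensorConfig`):

```python
from dataclasses import dataclass

@dataclass
class SensorConfigSketch:
    # Hypothetical flags: each field enables loading of one sensor stream.
    front_camera: bool = False
    lidar: bool = False

class CameraOnlyAgentSketch:
    def get_sensor_config(self) -> SensorConfigSketch:
        # Request only the sensors the agent actually consumes;
        # unrequested sensors then need not be loaded for this agent.
        return SensorConfigSketch(front_camera=True, lidar=False)

config = CameraOnlyAgentSketch().get_sensor_config()
```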
docs/submission.md

NAVSIM comes with official leaderboards on HuggingFace.

To submit to a leaderboard you need to create a pickle file that contains a trajectory for each test scenario. NAVSIM provides a script to create such a pickle file.
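Conceptually, such a file is a serialized mapping from scenario to predicted trajectory. The round-trip below is illustrative only: the authoritative file layout is whatever the provided script produces, and the token names and pose lists here are made up:

```python
import pickle

# Hypothetical payload: one predicted pose sequence per scenario token.
submission = {
    "scenario_0001": [(0.5, 0.0, 0.0), (1.0, 0.0, 0.0)],
    "scenario_0002": [(0.4, 0.1, 0.01), (0.8, 0.2, 0.02)],
}

# Serialize to disk ...
with open("submission_sketch.pkl", "wb") as f:
    pickle.dump(submission, f)

# ... and read it back, as the evaluation server would.
with open("submission_sketch.pkl", "rb") as f:
    restored = pickle.load(f)
```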
Have a look at `run_create_submission_pickle.sh`: this file creates the pickle file for the ConstantVelocity agent. You can run it for your own agent by replacing the `agent` override.

Follow the [submission instructions on HuggingFace](https://huggingface.co/spaces/AGC2024-P/e2e-driving-2024) to upload your submission.

**Note that you have to set the variables `TEAM_NAME`, `AUTHORS`, `EMAIL`, `INSTITUTION`, and `COUNTRY` in `run_create_submission_pickle.sh` to generate a valid submission file.**
### Warm-up track

The warm-up track evaluates your submission on a [warm-up leaderboard](https://huggingface.co/spaces/AGC2024-P/e2e-driving-warmup) based on the `mini` split. This allows you to test your method and get familiar with the devkit and the submission procedure, with a less restrictive submission budget (up to 5 submissions daily). Instructions on making a submission on HuggingFace are available in the HuggingFace space. Performance on the warm-up leaderboard is not taken into consideration for determining your team's ranking for the 2024 Autonomous Grand Challenge.

Use the script `run_create_submission_pickle_warmup.sh`, which already contains the overrides `scene_filter=warmup_test` and `split=mini`, to generate the submission file for the warm-up track.
You should be able to obtain the same evaluation results as on the server by running the evaluation locally. To do so, use the override `scene_filter=warmup_test` when executing the script that runs the PDM scoring (e.g., `run_cv_pdm_score_evaluation.sh` for the constant-velocity agent).
### Formal track

This is the [official challenge leaderboard](https://huggingface.co/spaces/AGC2024-P/e2e-driving-2024), based on secret held-out test frames (see the `submission_test` split on the install page). Use the script `run_create_submission_pickle.sh`. It will by default run with `scene_filter=competition_test` and `split=competition_test`. You only need to set your own agent with the `agent` override.