Description
Hello, thank you very much for your work. I have two questions that I would like to kindly ask for clarification:
- Regarding the `scores_all_data.pkl` file
May I ask whether this file is taken directly from the original Mind2Web release? As far as I remember, this file was produced by Mind2Web using a fine-tuned DeBERTa-v3-base model for candidate generation.
If so, what impact might directly using the outputs of this model have on the results reported under the "add scores" setting?
In particular, `memory.py` contains the following operation:

```python
pos_candidates = [
    c for c in s["pos_candidates"] if c["rank"] < args.top_k_elements
]
```
If the predicted ranks from this model are inaccurate, this filtering step could potentially result in an empty `pos_candidates` list.
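To make the concern concrete, here is a rough, illustrative check of how often that filter would remove every positive candidate. It assumes `scores_all_data.pkl` unpickles into an iterable of step dictionaries shaped like the `s` above (each with a `"pos_candidates"` list whose items carry a `"rank"` field); the actual layout may differ, and `TOP_K` is just a stand-in for `args.top_k_elements`:

```python
# Illustrative sketch only, not code from the repo.
import pickle

TOP_K = 5  # stand-in for args.top_k_elements

with open("scores_all_data.pkl", "rb") as f:
    samples = pickle.load(f)  # assumed: iterable of step dicts like `s` above

originally_empty = 0   # no positive candidate in the raw data
emptied_by_filter = 0  # positives exist, but all have rank >= TOP_K
for s in samples:
    pos = s.get("pos_candidates", [])
    if not pos:
        originally_empty += 1
    elif all(c["rank"] >= TOP_K for c in pos):
        emptied_by_filter += 1

print("empty in the raw data:", originally_empty)
print("emptied by the rank filter:", emptied_by_filter)
```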
- Regarding the evaluation setup
In `memory.py`, the following code is used:

```python
if len(pos_candidates) == 0:
    element_acc.append(0)
    action_f1.append(0)
    step_success.append(0)
    prev_obs.append("Observation: `" + target_obs + "`")
    prev_actions.append("Action: `" + target_act + "` (" + act_repr + ")")
    conversation.append("The ground truth element is not in cleaned html")
    continue
```
Here, `pos_candidates` can be empty either because it is already empty in the original dataset, or because all candidates are filtered out due to `c["rank"] >= args.top_k_elements`.
Since neither of these cases is caused by the model or method being evaluated, would counting such steps directly as failures potentially lead to an underestimation of the actual performance?
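For illustration, this is the kind of comparison I have in mind (my own hypothetical helper, not code from `memory.py`): reporting step success both over all steps and with the method-independent empty-candidate steps excluded, so the size of the gap becomes visible:

```python
# Hypothetical helper, not part of the repo.
from typing import List, Tuple

def summarize(step_success: List[int], empty_mask: List[bool]) -> Tuple[float, float]:
    """step_success: 0/1 per step, as accumulated in memory.py;
    empty_mask: True where pos_candidates was already empty (raw data
    or rank filter) before the model produced any prediction."""
    overall = sum(step_success) / len(step_success)
    kept = [s for s, e in zip(step_success, empty_mask) if not e]
    excluding_empty = sum(kept) / len(kept) if kept else float("nan")
    return overall, excluding_empty
```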
Thank you very much for your time and help. I would greatly appreciate any clarification on these points.