Must the artifact provide the data and tools to replicate ALL experiments in a paper, or is it acceptable to scope an artifact to cover only part of the claims?
Who decides (authors, reviewers, or chairs) which claims in a given paper should be supported by the artifact?
What should we consider "too much data" or "too long an experiment" to be submitted in full for artifact evaluation? For instance, one researcher might consider a 2GB dataset too large to submit in full, while another might submit a 2TB dataset.
Whatever the criterion for "too big" is, what process should authors follow to submit a subset of their artifact for evaluation when the full artifact is too large?