How do we know we're getting adoption?
- If we measure growth or adoption via papers, we need a benchmark.
- As we incentivise open data, we can score open data highly. We can give partial marks for proprietary but non-NSO data (as this can possibly be requested).
- For the code/research compendium, we can score on a scale as well, since just having some code doesn't mean it's easy to reproduce. However, some code is better than no code.
Possible scoring approach:
- data: [0, 0.5, 1]
  - 0 - the data is not available at all, or an internal NSO dataset is used and it's not clear how to obtain it. In this case it is nearly impossible to reproduce the process.
  - 0.5 - a proprietary dataset is used that could in theory be obtained by another researcher (perhaps by paying a fee).
  - 1.0 - an open dataset is used.
- code repo/compendium: [0, 0.5, 1]
  - 0 - no code shared in any way; impossible to understand how the data was transformed to produce the results.
  - 0.5 - code available in some way, such as in an annex of the paper or in a repository, but it doesn't follow good conventions (i.e. it's hard for another user to replicate, as many implicit assumptions are baked in).
  - 1.0 - some structure followed, some environment management used, etc.; it's possible to reuse the code and process without too much effort.

The average of the two scores is the total score for a paper (see the sketch below).
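A minimal sketch of how the rubric could be computed, assuming each paper is given a data score and a code score on the 0 / 0.5 / 1.0 scale above (function and variable names are hypothetical, not an agreed API):

```python
# Minimal sketch of the proposed rubric, assuming each paper gets a
# data-availability score and a code-availability score of 0, 0.5 or 1.0.
# Names here are illustrative only.

ALLOWED_LEVELS = {0.0, 0.5, 1.0}

def paper_score(data_level: float, code_level: float) -> float:
    """Total score: the average of the data and code scores."""
    if data_level not in ALLOWED_LEVELS or code_level not in ALLOWED_LEVELS:
        raise ValueError("each score must be 0, 0.5 or 1.0")
    return (data_level + code_level) / 2

# Example: proprietary-but-obtainable data (0.5) and a well-structured,
# reusable code repository (1.0) average to a total score of 0.75.
print(paper_score(0.5, 1.0))  # 0.75
```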
Key question for measuring this KR:
- What is the denominator? All papers and presentations, or, for example, only 'empirical research'?
  - If the former, the rate of increase will probably be small, as many non-empirical research papers will be in the denominator.
  - If the latter, we can set a more ambitious rate; however, we need to clearly agree on what is in scope and what is not (see the illustrative comparison below).
- Should we aim for a rate of increase or a count of papers?
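Purely to illustrate the denominator question, here is a toy comparison with made-up counts (the numbers are placeholders, not real figures):

```python
# Toy comparison of the two denominator choices; all counts are made up.

papers_meeting_threshold = 12   # e.g. papers scoring >= 0.5 on the rubric
all_outputs = 120               # all papers and presentations
empirical_only = 40             # only papers classed as empirical research

rate_over_all = papers_meeting_threshold / all_outputs           # 10%
rate_over_empirical = papers_meeting_threshold / empirical_only  # 30%

print(f"rate over all outputs:        {rate_over_all:.0%}")
print(f"rate over empirical research: {rate_over_empirical:.0%}")
```

The same numerator looks much larger against the narrower denominator, which is why the scope decision needs to be settled before choosing a target rate or count.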