The second DMV-RSE meetup, broadly centered on RSE Career Development, started strong -- pizza, drinks, and some casual introductory chatter and professional updates. After that, a small but dedicated crowd heard from Dr. Angeline Burrell, a research physicist at the Space Science Division, Naval Research Laboratory.

<div align="center">

</div>

**Promoting (RSE) careers: writing effective recommendation letters.** Not everyone will be called upon to write a recommendation letter, but almost everyone will need one at some point. Recommendation letters are an important leverage point in a research scientist's career. But there's nuance to it: how do you promote the candidate as strongly as possible while retaining professional integrity and providing a fair assessment? And how do you make sure the language used doesn't bias the reader against the candidate's prospects? (Hint: run the letter through a gender bias calculator such as https://slowe.github.io/genderbias/.)

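Calculators like the one linked above generally work by matching the letter's text against word lists that research has found to be applied more often to women (communal terms) or to men (agentic terms). As a rough, illustrative sketch only -- the tiny word lists below are invented samples, not the tool's actual vocabulary -- such a check might look like this:

```python
import re
from collections import Counter

# Toy word lists for illustration only; real calculators use
# curated, research-backed vocabularies.
COMMUNAL = {"warm", "caring", "helpful", "diligent", "compassionate"}
AGENTIC = {"brilliant", "outstanding", "independent", "confident", "ambitious"}


def bias_report(letter_text: str) -> dict:
    """Count communal vs. agentic descriptors in a letter draft."""
    counts = Counter(re.findall(r"[a-z']+", letter_text.lower()))
    return {
        "communal": {w: counts[w] for w in sorted(COMMUNAL) if counts[w]},
        "agentic": {w: counts[w] for w in sorted(AGENTIC) if counts[w]},
    }


print(bias_report("She is a warm, diligent student and a brilliant researcher."))
# {'communal': {'diligent': 1, 'warm': 1}, 'agentic': {'brilliant': 1}}
```
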
**Baby steps, linters, and collaboration: incentivizing good coding practices in research labs.** The second part of the meetup centered on publishing research code. How do we incentivize domain scientists to adopt good coding practices (think code versioning, unit testing, and packaging...)? Most research labs don't reward code quality, so do you spend your time writing the paper or documenting the codebase? Do you spend your time polishing publication-ready figures or writing unit tests?

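Unit testing is a good example of how small those baby steps can be. Here is a minimal sketch using pytest, where the module `analysis.py` and its function `compute_mean_density` are hypothetical stand-ins for whatever your own code computes:

```python
# test_analysis.py -- a first, minimal test file; run with `pytest`.
# The module and function (analysis.compute_mean_density) are
# hypothetical stand-ins for your own code.
import pytest

from analysis import compute_mean_density


def test_mean_density_of_known_input():
    # A hand-checked case: the mean of [1.0, 2.0, 3.0] is 2.0.
    assert compute_mean_density([1.0, 2.0, 3.0]) == pytest.approx(2.0)


def test_empty_input_raises():
    # Pin down edge-case behavior explicitly so it can't regress silently.
    with pytest.raises(ValueError):
        compute_mean_density([])
```
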
<div align="center">

</div>

In practice, Dr. Burrell suggested that the main force pushing toward adoption of these practices is _collaboration_. If researchers know that other researchers or labs will depend on their codebase, they will most likely spend the extra time making sure the code runs, is understandable, and so on. But even when the incentive is there, how do you adopt the practices day to day? Start small and embrace baby steps. Does your code produce a figure? Write down all the steps it performs so that somebody else can understand it. That's the first version -- call it the alpha version. Did a reviewer require changes to figures and/or analyses? Update the code. Update the documentation. Update the code version. Rinse and repeat.

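One way to take that first baby step is to make the figure script its own documentation. A minimal sketch, assuming a hypothetical script whose file paths and column names are invented for illustration:

```python
"""make_figure.py -- regenerate Figure 1 for the manuscript.

Steps this script performs (the "write down all the steps" part):
1. Load the raw observations from data/raw_observations.csv.
2. Drop rows flagged as bad by the instrument.
3. Compute daily mean density.
4. Plot the daily means and save figure1.png.

Bump __version__ whenever a reviewer-driven change alters the
analysis or the figure:
  0.1.0a1 -- first runnable version (the "alpha").
"""
__version__ = "0.1.0a1"

import pandas as pd
import matplotlib.pyplot as plt


def main() -> None:
    data = pd.read_csv("data/raw_observations.csv")  # step 1
    data = data[data["quality_flag"] == 0]           # step 2
    daily = data.groupby("date")["density"].mean()   # step 3
    daily.plot()                                     # step 4
    plt.savefig("figure1.png")


if __name__ == "__main__":
    main()
```
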
What's the single most effective practice to start with when moving toward robust code? In Dr. Burrell's experience the answer is clear: _code linters_. Start by getting the code well formatted. That makes it easier for others to review, and once that bottleneck is cleared, other changes (refactoring, documenting) are easier to take on.

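To make that concrete, here is the kind of cleanup a linter drives. The function below is an invented example; flake8 and black are common Python tools for this, and the error codes in the comments are flake8's style checks:

```python
# Before: runs fine, but the inconsistent spacing and the cryptic name
# make it slow to review. flake8 flags the spacing (E231 missing
# whitespace after ',', E225 missing whitespace around operator);
# black fixes the layout automatically.
def f(x,y,z):
    return ( x+y )*z


# After: same behavior, formatted the way a linter/formatter enforces,
# with a descriptive name and type hints added by hand.
def scaled_sum(x: float, y: float, scale: float) -> float:
    return (x + y) * scale
```
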
<div align="center">

</div>

**Incentivizing via funding: evaluation needs to show some teeth.** Finally, an audience question moved us to more systemic leverage points: how do we incentivize sharing code and data through funding schemes and applications? The first step is to make artifact (e.g., code and data) management plans a requirement. However, this measure alone can and will fail if the evaluation stage shows no teeth and does not directly penalize poorly-thought-through applications. Attendees experienced in evaluating funding proposals shared stories of what such watered-down evaluation can look like in practice.