_posts/2025-01-30-update-dmv-rse.md

category: update
tags: [update, dmv-rse]
---

The [DMV (Delaware-Maryland-Virginia) RSE affinity group](https://us-rse.org/ag/dmv-rse/) held its second meetup in January to discuss RSE career development broadly. The meetup started strong with pizza, drinks, and some casual introductory chatter and professional updates. After that, a small but dedicated crowd heard from Dr. Angeline Burrell, a research physicist at the Space Science Division, Naval Research Laboratory, who gave a talk on writing effective recommendation letters and publishing research code.

**Promoting (RSE) careers: writing effective recommendation letters.** Not everyone will be called upon to write a recommendation letter, but almost everyone will need one at some point. Recommendation letters are an important leverage point in a research scientist's career. But there's nuance to it: How can you best promote the candidate while still retaining professional integrity and providing a fair assessment? How can you make sure the language used (e.g. an unenthusiastic or unassertive tone) does not bias readers against the candidate and hurt their chances of getting an offer? (Hint: as one option, use a [gender bias calculator](https://slowe.github.io/genderbias/).)

**Baby steps, linters, and collaboration: Incentivizing good coding practices in research labs.** The second part of the meetup was centered on publishing research code. How do we incentivize domain scientists to adopt good coding practices (think code versioning, unit testing, and packaging…)? Most research labs don't incentivize code quality, so do you spend time writing the paper or documenting the codebase? Do you spend time polishing publication-ready figures or writing unit tests?

What's the one practice that's the most efficient in order to start maintaining…

**Incentivizing via funding: evaluation needs to show some teeth.** Finally, an audience question moved us to a discussion of more systemic leverage points: how do we incentivize sharing code and data in funding schemes and applications? The first step is to require artifact (e.g. code and data) management plans. However, this measure alone can and will fail if the evaluation stage shows no teeth and does not directly penalize poorly thought-through applications. Attendees experienced in evaluating funding proposals shared stories of what such watered-down evaluation can look like in practice.

All in all, a half-hour presentation led to about an hour of rich discussion and sharing of RSE experiences among RSEs from a diverse set of research environments (National Institutes of Health, Naval Research Laboratory, National Labs, National Institute of Standards and Technology, NASA, the private sector...). A few new participants joined the US-RSE Slack, and some of us connected afterwards. Just what this meetup is supposed to do.