Meeting: 3 Sept 2025, 13:08
Attendees (partial): Samraat, Dylan Jones, Georgios Kalogiannis, Calum Pennington, Stanislav Modrak, Francis Windram, Yaxin Liu, Zhilu Zhang.
Purpose
Plan a major refresh of MQB teaching materials and assessments so students learn computational thinking effectively in an AI-enabled world (Copilot, ChatGPT/Claude, etc.).
Key points & emerging direction
Keep fundamentals, raise the bar conceptually. Early “loops/if” exercises still useful, but shift assessment toward algorithmic thinking, evaluation, optimisation, and explaining why solutions are correct/best (e.g., complexity, trade-offs).
Assess process, not just product. Require Git commit history that evidences iterative development for the major practicals; consider logs of prompts/iterations where practical. Avoid free-text “vibes” marking; prefer structured evidence (commits).
AI usage: allow but structure it. Provide embedded “ask the AI this now” style tips; teach good prompting and debugging workflows. Don’t entirely block AI; reserve human GTA time for higher-level guidance.
On “blocking” AI: Mixed views. Some suggested limited/no-AI sessions or pen-and-paper checks for basics; consensus leans to teaching proper AI use rather than banning it. Pseudocode assessments are a feasible middle ground.
Progressive scaffolding. Students asked for smoother ramps: simple → medium → harder, with guided steps (especially for Bash and early Python/R).
**Documentation & code annotation.** Make richer, purposeful inline comments part of the specification and grading; AI still struggles to produce excellent, context-aware annotations.
Internal/less-common tools to reduce “AI solves it instantly.” Build projects around documented but less-LLM-indexed packages (e.g., RTPC, recent {parallel} updates) so students must read docs and experiment.
Keep strong, conceptually valuable classics. E.g., the T-autocorrelation/randomisation practical teaches frequentist intuition; keep and possibly spotlight it (a sketch of the shuffling-based null follows this list).
Data realism is a feature. Don’t over-clean datasets; “make data messy again” to force robust pipelines. Even consider API-sourced data that vary slightly per download.
Staffing/roles. Dylan to be primary GTA across Unix → Python → R → maths; separate end-Sept GTA readiness meeting to follow.
Longer-term: Term-2 Maths in Ecology & Evolution overhaul later; explore symbolic/transformer methods for theory problems; broader “AI for EEC students” track (statistical learning/ML with sensor, image, and sound data).
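For concreteness, a minimal Python sketch of the shuffling-based null that the autocorrelation practical builds on; the data below are simulated stand-ins, not the course dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

def lag1_autocorr(x):
    """Lag-1 autocorrelation: correlation of the series with itself shifted by one step."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Simulated stand-in for a temperature time series (NOT the real practical data).
temps = 15.0 + np.cumsum(rng.normal(0, 0.5, 100))

obs = lag1_autocorr(temps)

# Null distribution: shuffling destroys temporal order, so permuted series
# show what lag-1 autocorrelation looks like when there is none to find.
n_perm = 10_000
null = np.array([lag1_autocorr(rng.permutation(temps)) for _ in range(n_perm)])

# One-tailed p-value (students should justify the choice of tail).
p = (np.sum(null >= obs) + 1) / (n_perm + 1)
print(f"observed r1 = {obs:.3f}, permutation p = {p:.4f}")
```

The "+1" correction avoids reporting p = 0 and is itself a reasoning checkpoint worth asking students about.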
Decisions (for now)
Use Git commit history as a required artefact for the 6–7 major practicals to evidence iterative problem-solving.
Integrate AI-usage guidance directly into notebooks via embedded tips/checkpoints (model-agnostic).
Retain and refresh cornerstone practicals (e.g., randomisation/ACF test), emphasising reasoning and robustness.
Open questions/needs
How (or whether) to require prompt logs without creating subjective, hard-to-mark free text? Could we constrain to a short, structured template per task (one possible shape is sketched after this list)?
What exact rubric for annotation/commit-quality (e.g., minimum commit granularity, message standards, docstrings)?
Which practicals need scaffolded, step-wise redesign first (Bash, early Python/R, alignment, etc.)?
What “internal/less-common tool” set do we back this year (e.g., RTPC + recent parallelism features), and who maintains the docs/examples?
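One hypothetical answer to the prompt-log question: a fixed-field record per AI interaction, which stays markable because every field is short and comparable. Field names are illustrative, not an agreed spec:

```python
from dataclasses import dataclass

@dataclass
class PromptLogEntry:
    """One record per AI interaction; all fields are illustrative placeholders."""
    task: str            # which practical / notebook section
    goal: str            # what the student was trying to do (one sentence)
    prompt_summary: str  # short paraphrase, not the full transcript
    model: str           # e.g. "Copilot", "ChatGPT" (kept model-agnostic)
    outcome: str         # "accepted" / "adapted" / "rejected"
    what_changed: str    # how the student verified or modified the output

entry = PromptLogEntry(
    task="Week 2, Python I: control flow",
    goal="Work out why my while loop never terminates",
    prompt_summary="Pasted the loop, asked where the bug is",
    model="ChatGPT",
    outcome="adapted",
    what_changed="Fixed the counter update myself after the hint; added a test",
)
```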
Action list (owner → actions)
Samraat
1. Create a GitHub issues board for MQB refresh; seed with priority tasks below.
2. Draft a one-pager policy on “Using AI well in MQB” (allowed uses, required artefacts, academic integrity, examples of good prompts). Embed short tip boxes into notebooks.
3. Identify the 6–7 major practicals that will mandate commit-history evidence; sketch expected development stages for each.
4. Keep & polish the T-autocorrelation/randomisation practical; add reasoning checkpoints (why, tail tests, null via shuffling).
5. Schedule the late-September GTA prep meeting (Dylan + others) to align on teaching flow and AI-usage norms.
6. (Logistics) Be at Silwood Friday morning to meet departing students (photos/farewells).
Dylan (with Samraat)
7. Propose a commit-quality mini-rubric: e.g., minimum commit frequency, meaningful messages, checkpoints matched to notebook sections (a rough automated check is sketched after item 8).
8. Identify spots in Unix/Python/R weeks where embedded “AI tip” callouts help unblock syntax while keeping thinking high-level.
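As a starting point for item 7, a rough sketch of how parts of the mini-rubric could be checked automatically. Thresholds are placeholders, and it assumes GitPython is available in the marking environment:

```python
from git import Repo  # GitPython: pip install GitPython

MIN_COMMITS = 5         # placeholder thresholds, not the agreed rubric
MIN_SUMMARY_WORDS = 3

def audit_commits(repo_path="."):
    """Flag repos with too few commits or throwaway commit messages."""
    commits = list(Repo(repo_path).iter_commits("HEAD"))
    issues = []
    if len(commits) < MIN_COMMITS:
        issues.append(f"only {len(commits)} commits (expected >= {MIN_COMMITS})")
    for c in commits:
        # First line of the message is the summary; guard against empty messages.
        summary = (c.message.strip().splitlines() or [""])[0]
        if len(summary.split()) < MIN_SUMMARY_WORDS:
            issues.append(f"low-content message {summary!r} ({c.hexsha[:7]})")
    return issues

if __name__ == "__main__":
    for issue in audit_commits():
        print(issue)
```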
Georgios + Stanislav
9. Draft pseudocode checkpoints for early weeks (pen-and-paper or typed) that verify fundamentals without AI; 15–20 min in-class tasks.
10. Suggest scaffolded sequences for Bash → simple scripts → multi-step script (with hints).
Calum
11. Write a code-annotation spec (what good looks like: command purpose, arguments, semantic block intent, context-specific purpose) and a brief grading checklist.
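To make “what good looks like” concrete, a hypothetical annotated function touching all four levels the spec names (purpose, arguments, semantic block intent, context-specific purpose); the column names and file layout are invented for illustration:

```python
import csv

def mean_body_mass(path, species):
    """Mean body mass (g) for one species from a trait CSV.

    path    : CSV with a header row containing 'Species' and 'Mass_g' columns.
    species : exact species name to filter on (case-sensitive).
    """
    masses = []
    with open(path, newline="") as f:
        # Semantic block: stream rows instead of loading the whole file,
        # so the function scales to large trait databases.
        for row in csv.DictReader(f):
            if row["Species"] == species:
                masses.append(float(row["Mass_g"]))
    # Context-specific intent: an absent species is an error here, not a 0 --
    # silently returning 0 would bias any downstream comparative analysis.
    if not masses:
        raise ValueError(f"no records for {species!r} in {path}")
    return sum(masses) / len(masses)
```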
Francis
12. Propose a mini-project using RTPC/less-indexed tools (with documentation links) to reduce “LLM-answers-instantly”; include API-sourced, slightly variable data to stress pipeline robustness (see the validation sketch after item 13).
13. Note availability for image-analysis inputs when that strand launches (likely next academic year).
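On the pipeline-robustness point in item 12, a minimal sketch of defensive ingestion for data that differ slightly per download; the URL and schema here are hypothetical:

```python
import requests

REQUIRED_FIELDS = {"date", "site", "value"}  # hypothetical schema

def fetch_records(url):
    """Download and validate records before analysis, assuming each
    download may differ slightly (new rows, reordered or missing fields)."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    clean = []
    for rec in resp.json():
        if not REQUIRED_FIELDS <= rec.keys():
            continue  # skip malformed records rather than crash mid-pipeline
        try:
            rec["value"] = float(rec["value"])
        except (TypeError, ValueError):
            continue  # drop non-numeric values instead of coercing silently
        clean.append(rec)
    if not clean:
        raise RuntimeError("no valid records: the upstream schema may have changed")
    return clean
```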
Yaxin + Zhilu (student perspective)
14. List 3–5 pain-point exercises where difficulty jumped too fast; propose interim steps/hints that would have helped.
Near-term priorities
Ship the AI-usage policy + embed tip boxes in at least Unix + early Python notebooks.
Add commit-history requirement and rubric to course README/assessment brief; update submission templates.
Refactor one flagship practical (e.g., Florida warming / randomisation test) to include: staged prompts, reasoning checkpoints, and required annotations.