Conversation

@kentcdodds
Member

Add learning-review.md to document the end-to-end workshop experience and feedback.

This file contains a step-by-step review of the EpicShop workshop, evaluating each exercise on learning outcomes, instructional clarity, cognitive load and pacing, example alignment, and mechanical correctness, per the review prompt below. It highlights issues such as ambiguous UI, missing setup prerequisites, redundant dependency installs, and reliance on GUI-only workflows.
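
To make the structure concrete, an entry in learning-review.md might look something like this. This is a sketch only: the exercise name and the specific finding are hypothetical, and the dimension headings follow the prompt quoted later in this thread.

```md
## Exercise 2, Step 1 (hypothetical step name)

- Learning outcomes: no notes.
- Instructional clarity: a required setup step (installing the step's
  dependencies before starting) is implied rather than stated.
- Cognitive load and pacing: no notes.
- Examples and exercise alignment: no notes.
- Mechanical correctness: no notes.
```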



Document end-to-end exercise completion and note only material issues affecting learning or correctness.

Co-authored-by: me <[email protected]>

cursor bot commented Jan 14, 2026

Cursor Agent can help with this pull request. Just @cursor in comments and I'll start working on changes in this branch.
Learn more about Cursor Agents

@kentcdodds
Member Author

@kettanaito I gave Cursor (GPT 5.2) the following prompt:

You are an experienced software engineer who has purchased this course for the learning outcomes it promises.

Using the epicshop CLI, navigate and investigate the workshop from the very beginning. Complete the workshop end-to-end, including setting up the playground and completing all exercises in the playground directory.

As you progress, document your experience in learning-review.md. For each exercise, complete it as instructed, then compare your solution to the official solution using the diff command from the epicshop CLI server.

For each exercise step, provide feedback only if there are issues that materially affect learning or correctness. Otherwise, write “no notes.”

Evaluate each exercise on the following dimensions:

  1. Learning outcomes

    • Are the goals of the exercise clear before starting?
    • After completing it, is it clear what skill or concept was learned?
    • Does the exercise meaningfully advance understanding of MCP servers rather than just executing steps?
  2. Instructional clarity

    • Are the instructions explicit, unambiguous, and complete?
    • Are any required steps implied rather than stated?
    • Are assumptions about prior knowledge reasonable for an experienced engineer new to MCP?
  3. Cognitive load and pacing

    • Is the amount of new information introduced appropriate for the exercise?
    • Are there points where missing context forces guesswork?
    • Is the exercise well-scoped, or does it feel rushed or bloated?
  4. Examples and exercise alignment

    • Do examples directly support the task being performed?
    • Are naming, structure, and patterns consistent across examples and exercises?
    • Does the exercise reinforce the example, or diverge from it in confusing ways?
  5. Mechanical correctness

    • Are commands, code snippets, and expected outputs correct?
    • Do links, references, and tooling instructions work as written?
    • Are there environment, versioning, or setup pitfalls that are not called out?

Reporting guidelines

  • Do not nitpick stylistic preferences or minor wording issues.

  • Only report issues that:

    • Block progress
    • Cause incorrect mental models
    • Create unnecessary confusion or backtracking
  • If the exercise delivers good learning outcomes with no notable friction, write “no notes.”


I ran that on a few workshop repos; the per-exercise loop it drives is sketched below. If you find it helpful, I can run it on the rest of them.
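
For reference, the workflow the prompt asks the agent to carry out reduces to a loop like the following per exercise step. This is a sketch only: the exact epicshop CLI commands and the shape of its diff feature are assumptions here, not verified syntax.

```sh
# Hypothetical outline of the review loop -- command names are assumptions,
# not verified epicshop CLI syntax.
npm run setup          # assumed: install dependencies and prepare the workshop app
npx epicshop start     # assumed: launch the local workshop app / CLI server

# Then, for each exercise step:
#   1. Set the playground to the step's problem code.
#   2. Complete the exercise in the playground directory as instructed.
#   3. Diff the playground against the official solution via the CLI server.
#   4. Append findings (or "no notes") to learning-review.md.
```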
