feat: improve skill score for platform-changelog-formatter #1421

Open
yogesh-tessl wants to merge 1 commit into seqeralabs:master from yogesh-tessl:improve/skill-review-optimization

Conversation

@yogesh-tessl

Hey @llewellyn-sl 👋

I ran your skills through `tessl skill review` at work and found some targeted improvements. Here's the full before/after:

| Skill | Before | After | Change |
|-------|--------|-------|--------|
| platform-changelog-formatter | 10% | 90% | +80% |
| openapi-overlay-generator | 84% | — | — |
| docs-state-assessment | 90% | — | — |
| os-changelog-formatter | 90% | — | — |
| feature-docs | 90% | — | — |
| release-impact-assessment | 77% | — | — |
| editorial-review | 71% | — | — |

The `platform-changelog-formatter` skill had the most headroom — it was scoring 10% because it was missing YAML frontmatter entirely, which meant the skill evaluator couldn't parse the description or name fields. The content itself was already solid.

<details>
<summary>Changes summary</summary>

**platform-changelog-formatter (10% → 90%, +80%)**:
- Added proper YAML frontmatter with `name` and `description` fields (the critical fix — this alone unlocked the description score from 0% to 100%)
- Crafted a description with specific trigger terms (`Platform changelog`, `changelog style`, `Cloud or Enterprise releases`, file paths), a "Use when" clause, and explicit disambiguation from `os-changelog-formatter`
- Removed redundant sections: standalone "Description" heading (now in frontmatter), "Deployment" section (no value), duplicated scope declarations that appeared in description, step 2, and "Important notes"
- Streamlined heading hierarchy from nested `###` subsections to flat `##` numbered steps
- Condensed output section and "Important notes" into tighter form
- Preserved all domain expertise: style rules, component categories with ordering, Cloud vs Enterprise specifics, before/after example, and quality checklist

</details>
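For readers unfamiliar with the fix: a skill file's YAML frontmatter sits at the top of the markdown file between `---` fences, and the evaluator parses the `name` and `description` fields from it. The sketch below illustrates the shape of what was added; the exact `description` wording here is illustrative, and the actual values are in the PR diff.

```yaml
---
name: platform-changelog-formatter
description: >-
  Format Seqera Platform changelog entries in the Platform changelog style
  for Cloud or Enterprise releases. Use when drafting or reviewing Platform
  changelog files. Not for open-source component changelogs; use
  os-changelog-formatter for those.
---
```

A trigger-rich `description` like this is what lets an agent pick the right skill: it names the specific terms users say ("Platform changelog", "Cloud or Enterprise releases"), states when to use it, and disambiguates from the sibling `os-changelog-formatter` skill.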

I also stress-tested your `platform-changelog-formatter` skill against a few real-world task evals, and it held up well: it correctly grouped mixed Studios, Compute environments, and Pipeline entries by component while handling the Cloud vs Enterprise context switching. Kudos for that.

Honest disclosure — I work at @tesslio where we build tooling around skills like these. Not a pitch — just saw room for improvement and wanted to contribute.

Want to self-improve your skills? Just point your agent (Claude Code, Codex, etc.) at [this Tessl guide](https://docs.tessl.io/evaluate/optimize-a-skill-using-best-practices) and ask it to optimize your skill. Ping me — [@yogesh-tessl](https://github.com/yogesh-tessl) — if you hit any snags.

Thanks in advance 🙏
@netlify

netlify Bot commented May 13, 2026

‼️ Deploy request for seqera-docs rejected.

| Name | Link |
|------|------|
| 🔨 Latest commit | 424b966 |

@justinegeffen justinegeffen self-requested a review May 13, 2026 15:20
@justinegeffen
Contributor

Hi, @yogesh-tessl! Thanks for this PR, it's super-useful. I'll table it for team review and get back to you ASAP. :)

@justinegeffen justinegeffen added do not merge Do not merge until this label is removed automation-improvement Automation enhancement for existing automation that's working but could be better. labels May 14, 2026