
Add Performance Engineering #3952

Open
be-next wants to merge 1 commit into sindresorhus:main from be-next:add-performance-engineering

Conversation


be-next commented Feb 18, 2026

What is this list about?

Observability and performance testing — the two disciplines at the heart of performance engineering for modern distributed systems. The list covers production-grade tools (metrics, tracing, profiling, load testing, chaos engineering, CI/CD integration) alongside practical guidance on when and why to use them.

I created this list because I've spent years working on performance for large-scale systems and kept noticing the same gap: observability tools and performance testing tools are almost always discussed separately, but in practice they're deeply interconnected. You can't interpret load test results without good observability, and you can't validate observability setups without realistic load. This list bridges that gap.


By submitting this pull request I confirm I've read and complied with the below requirements.

Failure to properly do so will just result in the pull request being closed and everyone's time wasted. Please read it twice. Most people miss many things.

  • I have read and understood the instructions for creating a list.
  • This pull request has a title in the format Add Name of List.
  • The entry in the list has an appropriate description with a capital letter and ends with a period.
  • The entry is added at the bottom of the appropriate category.
  • The list I'm submitting complies with these requirements:
    • Has been around for at least 30 days (first commit: January 13, 2026).
    • It's the result of hard work and the best I could possibly produce.
    • It is a non-generated Markdown file in a GitHub repo.
    • The repo has awesome-list & awesome as GitHub topics.
    • Not a duplicate.
    • Only includes awesome stuff (curated, not exhaustive).
    • Includes a project logo/image.
    • Entries have descriptions.
    • Has the Awesome badge.
    • Has a Table of Contents section named Contents.
    • Has an appropriate license (CC0).
    • Has contribution guidelines.
    • Consistent formatting and no hard-wrapping.
    • No CI badge in the readme.
    • Default branch is main.
    • Passes awesome-lint.


be-next commented Feb 18, 2026

unicorn


erkcet commented Feb 22, 2026

I reviewed the list at be-next/awesome-performance-engineering. Here's my feedback:

Structure & Organization
The split into a main README index with separate awesome-observability-tools.md and awesome-performance-testing-tools.md files is a nice approach, but I'm not sure it aligns with the typical awesome-list format where everything lives in the README. Most awesome lists keep all entries in a single file. Worth checking if the awesome-lint pass actually validated the sub-files.

Entries with contradictory descriptions
A few tools are listed with active/maintained markers but the descriptions themselves note they're aging or unmaintained:

  • Graphite is described as having "historical significance but limited compared to modern alternatives" yet still listed alongside modern tools without distinction.
  • Redash is noted as having "minimal maintenance since Databricks acquisition."
  • Nagios is described as "showing its age."

Per awesome-list guidelines, unmaintained or deprecated items shouldn't be in the main list. Consider moving these to a separate "Legacy" section or removing them.

Formatting issues in sub-files

  • Flood (Tricentis) appears with strikethrough formatting as a decommissioned tool — this should be removed entirely rather than kept with strikethrough.
  • Toxiproxy appears in two different sections (Service Virtualization and Network Simulation). Duplicate entries may fail awesome-lint if it checks for duplicates across files, and either way they inflate the list.
  • The LLM-assisted test scripting item in the AI-Augmented section reads as prose commentary rather than a `- [Name](url) - Description.` entry.
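Duplicates like this are easy to catch mechanically. The sketch below is a hypothetical helper (not part of the repo's tooling or of awesome-lint itself) that collects entry URLs across the given Markdown files and reports any URL appearing more than once:

```python
import re
import sys
from collections import defaultdict
from pathlib import Path

# Matches awesome-list entries of the form: - [Name](url) - Description.
ENTRY_RE = re.compile(r"^\s*[-*]\s*\[[^\]]+\]\(([^)\s]+)\)")

def find_duplicates(paths):
    """Map each entry URL to the (file, line) locations where it appears,
    keeping only URLs seen more than once."""
    seen = defaultdict(list)
    for path in paths:
        lines = path.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, 1):
            match = ENTRY_RE.match(line)
            if match:
                seen[match.group(1)].append((path.name, lineno))
    return {url: locs for url, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    files = [Path(p) for p in sys.argv[1:]]
    for url, locs in find_duplicates(files).items():
        print(url, "->", ", ".join(f"{f}:{n}" for f, n in locs))
```

Running it over the README plus both sub-files would surface the Toxiproxy duplicate regardless of whether awesome-lint checks across files.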

AI section length
The "Performance Engineering in the Age of AI" section in the main README is quite long and reads more like a blog post/opinion piece than a curated list. Consider condensing it significantly or linking out to a separate document.

Positives

  • Great topic that bridges two usually-separate domains
  • CC0 license, topics, and contribution guidelines all present
  • The emoji legend for tool categorization is thoughtful

Overall a solid list with good curation. The main things to address are the contradictory maintenance status entries and the formatting issues in the sub-files.

erkcet mentioned this pull request Feb 22, 2026
@realadeel

I reviewed this PR against the awesome list requirements. Here's what I found:

Potential issues:

  1. Contributing/Footnotes sections in TOC: The guidelines state "Must not feature Contributing or Footnotes sections" in the Table of Contents. The repo's TOC includes both Contributing and Footnotes entries, and they link to sections in the readme.

  2. Emojis in section headings: The section headings use emojis (🎯, 📋, 🔭, 🚀, 🧭, 🤖, 🔗). While not explicitly prohibited, this is atypical for awesome lists and may be seen as inconsistent formatting by the maintainer.

  3. Low star count: The repo has 18 stars. Not a hard requirement, but the guidelines emphasize "If you have not put in considerable effort into your list, your pull request will be immediately closed."

What looks good:

  • Entry format is correct: `- [Performance Engineering](…#readme) - Observability and performance testing for reliable distributed systems.` — proper casing, ends with a period, and doesn't describe the list itself.
  • Category placement (Testing) makes sense.
  • CC0-1.0 license, awesome and awesome-list topics, default branch is main.
  • Has CONTRIBUTING.md and the awesome badge.
  • Repo is 42 days old (created Jan 12, 2026), meeting the 30-day requirement.

realadeel mentioned this pull request Feb 23, 2026

levz0r commented Feb 24, 2026

I reviewed the awesome-performance-engineering repository in detail, including both the main README and the two sub-list files (awesome-observability-tools.md and awesome-performance-testing-tools.md). Here is my assessment, focusing on areas the existing reviews have not fully covered.


Content Quality & Curation Depth

The list demonstrates genuine domain expertise. The descriptions are original and opinionated rather than copied from project READMEs — for example, calling out that wrk suffers from coordinated omission, noting Elasticsearch's license change implications, and flagging that Neosync was acquired and is no longer actively maintained. This is the kind of practitioner knowledge that separates a curated list from a link dump.

The "Observability by Intent" and "Performance Testing by Use Case" cross-reference tables at the top of each sub-file are an excellent navigation aid. They address a real problem: practitioners think in terms of what they need to accomplish, not which tool category to browse.

The callout box on coordinated omission in the HTTP Benchmarking section is genuinely useful technical content that most performance testing resources omit.
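For readers who haven't met the term: coordinated omission is the bias a closed-loop load generator introduces when it waits for each response before sending the next request, so the requests that should have been sent during a stall are never measured. The toy simulation below (hypothetical numbers, not taken from the list) contrasts recording raw service time with recording latency from each request's intended send time:

```python
# Illustrates coordinated omission: a closed-loop client that issues one
# request at a time under-reports latency during a server stall, because
# requests that *should* have been sent during the stall are never timed.
INTERVAL = 10                                  # intended ms between requests
SERVICE_TIMES = [1] * 50 + [1000] + [1] * 50   # one 1-second server stall

def measure(service_times, interval):
    naive, corrected = [], []
    clock = 0      # time the client actually sends the next request
    intended = 0   # time the request was *scheduled* to be sent
    for st in service_times:
        start = max(clock, intended)   # closed loop: wait for the previous reply
        finish = start + st
        naive.append(finish - start)         # what service-time-only tools record
        corrected.append(finish - intended)  # latency from the intended send time
        clock = finish
        intended += interval
    return naive, corrected

naive, corrected = measure(SERVICE_TIMES, INTERVAL)
print(sum(t > 100 for t in naive), sum(t > 100 for t in corrected))  # prints: 1 51
```

With these numbers the naive view reports exactly one slow sample, while the intended-time view shows the stall affecting fifty-one requests — the queueing delay that coordinated-omission-prone tools silently drop.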

Notable Missing Entries

A few widely-used tools seem absent and would strengthen the list:

Observability:

  • Cilium — While Hubble is listed, Cilium itself (the eBPF-based networking/observability/security platform) is a CNCF graduated project and arguably deserves its own entry given its growing role in Kubernetes observability.
  • Elastic APM — The Elastic Stack is covered but the dedicated APM agent/server component is not called out separately, despite being a common open-source APM choice.
  • OpenObserve — Rust-based, open-source observability platform positioned as a lower-cost alternative to Elasticsearch/Datadog. Growing rapidly in the space.

Performance Testing:

  • Playwright Test with performance tracing — Playwright is listed for browser automation, but its built-in tracing and HAR recording capabilities for performance regression testing in CI deserve a mention.
  • Grafana Beyla — Listed in observability but worth cross-referencing in the performance testing context, since eBPF-based auto-instrumentation during load tests is a powerful combination.
  • Bench (bench.sh) or Geekbench — For the System & Infrastructure Benchmarking section, these are commonly used quick-assessment tools.

Structural Observations

  1. The split-file architecture is unusual but defensible. The main README acts as an index/manifesto, while the two sub-files contain the actual tool entries. This works well for the scope of the content — a single README would be overwhelming. However, it means awesome-lint only validates the README and may not catch issues in the sub-files. The author should confirm that lint was run against all three files or acknowledge this gap.

  2. Some entries appear in multiple sections within the same file. Toxiproxy appears in Service Virtualization, Chaos Engineering, and Network Simulation within the performance testing file. OpenTelemetry Collector and Vector appear in both "Metrics Collection" and "Observability Pipelines" in the observability file. While cross-referencing is valid, having identical full entries (not just "see also" links) inflates the list and could confuse readers. Consider keeping the full entry in the primary section and using a brief cross-reference (e.g., "(See also: Toxiproxy)") in secondary sections.

  3. The "AI in Performance Engineering" section in the main README reads as a standalone essay (~800 words). While well-written, it is speculative in places ("What AI Will Transform Next" with predictions about autonomous remediation and conversational interfaces). Awesome lists typically curate existing resources rather than provide forward-looking analysis. Consider trimming this to the "What AI Is Already Changing" portion and linking to a blog post or separate document for the futurism.

  4. Decommissioned tools should be removed. The strikethrough entry for Flood (Tricentis) in the Commercial section is historical reference, but awesome lists should only contain active, usable resources. A "Historical Note" at most could mention it in a sentence, but a full entry — even struck through — is noise.

Minor Issues

  • The code-of-conduct.md file exists but is not linked from the README or CONTRIBUTING.md. Consider adding a reference.
  • The .lycheeignore file and .markdownlint-cli2.jsonc suggest good CI hygiene — link checking and markdown linting are set up, which is a positive signal for long-term maintenance.
  • Plausible and Matomo in the RUM section are web analytics tools, not really RUM/frontend observability tools. Their inclusion feels like scope creep — they measure page views and traffic, not Core Web Vitals or frontend performance.

PR Entry Format

The PR entry itself is clean and correct:

- [Performance Engineering](https://github.com/be-next/awesome-performance-engineering#readme) - Observability and performance testing for reliable distributed systems.

Proper capitalization, ends with a period, links to #readme, and the description is concise and accurate. Placed correctly at the bottom of the Testing category.
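The format rule being checked here is mechanical enough to approximate with a regular expression. This is a simplified sketch for illustration only; awesome-lint's actual rules are stricter and live in its own rule set:

```python
import re

# Hypothetical approximation of the awesome-list entry rule:
# "- [Name](url) - Description." with a capitalized description
# that ends in a period.
ENTRY = re.compile(r"^- \[[^\]]+\]\(https?://[^)\s]+\) - [A-Z].*\.$")

def is_valid_entry(line: str) -> bool:
    """Return True if the line matches the sketched entry format."""
    return bool(ENTRY.match(line))

print(is_valid_entry(
    "- [Performance Engineering](https://github.com/be-next/"
    "awesome-performance-engineering#readme) - Observability and "
    "performance testing for reliable distributed systems."
))  # prints: True
```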

Summary

This is a high-quality, well-curated list that fills a genuine gap — bridging observability and performance testing into a unified "performance engineering" perspective. The author clearly has deep domain experience. The main areas for improvement are: (1) deduplicating cross-listed tool entries, (2) trimming the speculative AI section, (3) removing decommissioned tools, and (4) tightening scope in a few sections (RUM analytics tools). With those adjustments, this would be a strong addition to the awesome ecosystem.

levz0r mentioned this pull request Feb 24, 2026