Commit 54fbec6 (2 parents: 1d7a0aa + ca61cda)
Merge pull request #1326 from BalaSubramaniam12007/discussion
RFC: API Explorer Template Library Architecture

### Initial Idea Submission

**Full Name:** BALASUBRAMANIAM L

**University name:** SAVEETHA ENGINEERING COLLEGE

**Program you are enrolled in (Degree & Major/Minor):** B.Tech, AIML

**Year:** 2nd Year

**Expected graduation date:** 2028

**Project Title:** API Explorer

**Relevant Issues:** [https://github.com/foss42/apidash/issues/619](https://github.com/foss42/apidash/issues/619)

---

## Idea Description

### Problem

This project is designed to enhance the API Dash user experience by integrating a curated library of popular, publicly available APIs. The feature lets users discover, browse, search, and directly import API endpoints into their workspace for seamless testing and exploration. Developers get access to pre-configured API request templates, complete with authentication details, sample payloads, and expected responses. This eliminates the need to manually set up API requests, reducing onboarding time and improving efficiency. APIs spanning various domains, such as AI, finance, weather, and social media, are organized into categories, making it easy for users to find relevant services. The entire backend is to be developed as an automation pipeline that parses OpenAPI/HTML files, auto-tags each API into a relevant category, enriches the data, and creates templates. Features such as user ratings, reviews, and community contributions (via GitHub) can also be added to keep resources accurate and up to date.

### What I'm Building
API Dash currently has no way for users to discover and import pre-built API requests. I want to add a curated template library — a collection of ready-to-use request templates for popular APIs like OpenAI, Stripe, GitHub, and weather services — that users can browse and import directly into their workspace.

I'm planning to create a `registry.yaml` file that lists every source the pipeline should pull from. Each entry is one of four types:

- **`github_repo`** — a GitHub repo (e.g. `openai/openai-openapi`) that publishes an official OpenAPI spec
- **`raw_url`** — a direct link to a hosted `swagger.json` or spec file
- **`aggregator_feed`** — an index feed; APIs.guru publishes a machine-readable index of ~2,000 public APIs, so one fetch gives a lot of coverage
- **`community_pr`** — a contributor-submitted spec file; the contributor opens a PR, which triggers the pipeline

```yaml
# registry.yaml
sources:
  - id: openai
    type: github_repo      # fetched via GitHub API
    url: openai/openai-openapi

  - id: stripe
    type: raw_url          # fetched directly over HTTP
    url: https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json

  - id: apis_guru
    type: aggregator_feed  # one request, hundreds of specs
    url: https://api.apis.guru/v2/list.json

  - id: weatherapi
    type: community_pr     # contributor-submitted file
    path: contrib/weatherapi.yaml
```

I'm writing a fixed set of **adapters** in Python. Each adapter is a small module with one job: fetch the raw spec content and hand it to the pipeline. The pipeline itself never knows whether content came from GitHub or a community PR; it just receives raw content and starts processing.

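As a rough sketch of what that adapter layer could look like, here is a minimal dispatch table in Python. All names (`Source`, `FETCHERS`, the fetch functions) are hypothetical, and the fetch bodies are stubs standing in for real HTTP/file access:

```python
# Sketch of the adapter layer: one small fetch function per registry source
# type; the pipeline only ever sees raw text. Names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Source:
    id: str
    type: str      # github_repo | raw_url | aggregator_feed | community_pr
    location: str  # url or path taken from registry.yaml


def fetch_raw_url(src: Source) -> str:
    # In the real pipeline this would be an HTTP GET of src.location.
    return f"<spec fetched from {src.location}>"


def fetch_community_pr(src: Source) -> str:
    # In the real pipeline this would read the contributed file from disk.
    return f"<spec read from {src.location}>"


# One adapter per registry type; adding a source type means adding one entry.
FETCHERS: Dict[str, Callable[[Source], str]] = {
    "raw_url": fetch_raw_url,
    "community_pr": fetch_community_pr,
}


def fetch(src: Source) -> str:
    """The pipeline calls this and receives raw content, source-agnostic."""
    return FETCHERS[src.type](src)
```

The point of the dispatch table is that the processing stages downstream stay identical no matter where a spec originated.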
---

### Pipeline

![API Explorer Workflow](images/apiexplorer-workflow.png)

Once content is fetched, it goes through three main stages — **Parse**, **Validate**, and **Enrich** — and comes out the other end as a clean JSON template. I won't go deep into each stage here, since I opened the issue to discuss the high-level workflow. The output of the pipeline is a file like `stripe.json` that contains everything API Dash needs to pre-fill a request: the method, URL, headers, sample body, and auth type.

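The Parse → Validate → Enrich flow can be sketched as three composed functions. This is a toy version under heavy assumptions (real parsing would handle full OpenAPI documents, not pre-flattened JSON); the function names and required fields are placeholders:

```python
# Minimal sketch of the three pipeline stages; the real parse/validate/enrich
# logic is intentionally left open for discussion.
import json


def parse(raw: str) -> dict:
    # Turn raw spec content into a normalized dict (sketched as JSON here).
    return json.loads(raw)


def validate(spec: dict) -> dict:
    # Reject specs that lack the fields a template needs.
    for field in ("method", "url"):
        if field not in spec:
            raise ValueError(f"missing required field: {field}")
    return spec


def enrich(spec: dict) -> dict:
    # Fill in defaults the template format expects, e.g. headers and auth.
    spec.setdefault("headers", {})
    spec.setdefault("auth", "none")
    return spec


def run_pipeline(raw: str) -> str:
    """Raw content in, clean JSON template (e.g. stripe.json) out."""
    return json.dumps(enrich(validate(parse(raw))), indent=2)
```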
This pipeline runs inside **GitHub Actions**, which is free and requires no server.

---

### Repository Structure

I'm planning to keep this as **two separate repositories** rather than one monolith or embedding it inside the main API Dash repo.

- **`apidash-templates-core`** — holds the pipeline code: the adapters, registry, and processing logic. This is the engine, written in Python. It runs in CI and is never shipped to the user.
- **`apidash-templates`** — holds only the output: the generated JSON template files and the `index.json` manifest. This repo is what gets deployed and served. Contributors who just want to add a new API template never need to touch the pipeline repo.

The reason I prefer this split is that it keeps concerns clean and makes hosting straightforward — the output repo can be served directly via **GitHub Pages** as an optional human-browsable UI, and via **jsDelivr CDN** as the programmatic fetch endpoint for API Dash. If both lived in one repo, jsDelivr would be mirroring pipeline code alongside template data, which is messy.

---

### `index.json`

I'm proposing a small `index.json` file that lives at the root of the output repo. API Dash fetches this single file on launch; it's lightweight (maybe a few kilobytes) and is used to render the entire browse UI. It doesn't contain the templates themselves, just metadata pointing to them:

```json
{
  "id": "stripe-create-charge",
  "name": "Stripe — Create Charge",
  "category": "Finance",
  "version": "1.2.0",
  "url": "cdn.jsdelivr.net/gh/apidash/templates@v1.2/finance/stripe-create-charge.json"
}
```

When the user clicks **Import**, API Dash uses the `url` field to fetch just that one template. Everything is lazy: nothing heavy is downloaded until the user actually asks for it.

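The import flow amounts to one fetch keyed by the entry's `url`. A sketch, written in Python for illustration (the actual client is API Dash, in Dart); the `import_template` helper and the simulated CDN are hypothetical:

```python
# Lazy import: fetch exactly one template, identified by its index entry.
import json
from typing import Callable


def import_template(index_entry: dict, fetch_json: Callable[[str], dict]) -> dict:
    """Download only the template the user asked for, via its `url` field."""
    return fetch_json(index_entry["url"])


# Simulated CDN: only the requested file is ever "downloaded".
CDN = {
    "cdn.example/finance/stripe-create-charge.json": json.dumps(
        {"method": "POST", "url": "https://api.stripe.com/v1/charges"}
    )
}

entry = {"id": "stripe-create-charge", "url": "cdn.example/finance/stripe-create-charge.json"}
template = import_template(entry, lambda url: json.loads(CDN[url]))
```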
---

### Versioning

Every successful pipeline run produces a **semantic version tag** (e.g. `v1.3.0`) on the output repo. GitHub Releases capture an immutable snapshot at each tag. jsDelivr's CDN URL includes the version, so `@v1.2` always points to exactly that build — stable and predictable.
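As a concrete sketch of the pinned-URL idea, a tag plus a file path deterministically yields one immutable CDN address (the repo path here is assumed to match the earlier `index.json` example):

```python
# Build a jsDelivr URL pinned to one immutable release tag.
def cdn_url(tag: str, path: str, repo: str = "apidash/templates") -> str:
    """@tag in the path means this URL never changes meaning after release."""
    return f"cdn.jsdelivr.net/gh/{repo}@{tag}/{path}"
```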

---

### Sync — Still an Open Area

I had proposed a hash-comparison approach for sync, but I agree it gets messy fast: cross-checking hundreds or thousands of templates on every run introduces fragility. I think this is worth discussing before committing to an approach. The core question is: **how does the pipeline know what to regenerate?** I want to think through this more carefully rather than propose something that works at 10 templates but breaks at 1,000.

On the client side, API Dash checks the latest tag against its cached version on launch. If they match, nothing is downloaded. If there's a new version, only the updated `index.json` is re-fetched, and individual templates are pulled fresh on next import.
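That launch check is essentially a tag comparison. A minimal sketch, assuming a hypothetical cache shape (`{"tag": ..., "index": ...}`) and a caller-supplied fetch function:

```python
# Client-side launch check: re-fetch index.json only when the tag has moved.
from typing import Callable


def sync_on_launch(cache: dict, latest_tag: str,
                   fetch_index: Callable[[str], list]) -> dict:
    """If tags match, download nothing; otherwise refresh only index.json.
    Individual templates are pulled fresh on next import, not here."""
    if cache.get("tag") == latest_tag:
        return cache
    return {"tag": latest_tag, "index": fetch_index(latest_tag)}
```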

---

### Open Questions

1. **Repo structure** is the first real question. Separate repos keep things clean and make hosting straightforward, but a folder inside the main repo is simpler to manage early on. I lean toward separate, but I want to align on this before structuring anything.
2. **Python in the pipeline** feels like the right call given the available tooling, but if there's a strong preference to keep the entire project in Dart or avoid Python as a dependency, I'd like to know that now, before the pipeline is built around it.
3. **Sync strategy** is genuinely unsolved. I have a rough idea of version tagging and scheduled runs, but how the client decides when to pull fresh data and how the pipeline decides what to rebuild needs more thought — and is probably the most important design decision left open.
4. **Scope of the registry**, i.e. how many sources and categories to target first, is something I'd like to define together so the initial build has a clear, testable boundary rather than expanding indefinitely. I'd also welcome thoughts on how to add to and update the catalog over time.