
Commit db94a43

Merge branch 'foss42:main' into main
2 parents: 5194d04 + 6dce182

19 files changed: +868 −110 lines

README.md

Lines changed: 15 additions & 2 deletions
@@ -1,6 +1,19 @@
# API Dash ⚡️

-[![Discord Server Invite](https://img.shields.io/badge/DISCORD-JOIN%20SERVER-5663F7?style=for-the-badge&logo=discord&logoColor=white)](https://bit.ly/heyfoss)
+[![Discord Server Invite](https://img.shields.io/badge/DISCORD-JOIN%20SERVER-5663F7?style=for-the-badge&logo=discord&logoColor=white)](https://discord.com/invite/bBeSdtJ6Ue)

### 🚨🚨 API Dash is participating in GSoC 2025! Check out the details below:

<img src="https://github.com/foss42/apidash/assets/615622/493ce57f-06c3-4789-b7ae-9fa63bca8183" alt="GSoC" width="500">

| | Link |
|--|--|
| Learn about GSoC | [Link](https://summerofcode.withgoogle.com) |
| API Dash GSoC Page | [Link](https://summerofcode.withgoogle.com/programs/2025/organizations/api-dash) |
| Project Ideas List | [Link](https://github.com/foss42/apidash/discussions/565) |
| Application Guide | [Link](https://github.com/foss42/apidash/discussions/564) |
| Discord Channel | [Link](https://discord.com/invite/bBeSdtJ6Ue) |

### Please support this initiative by giving this project a Star ⭐️

@@ -277,4 +290,4 @@ You can contribute to API Dash in any or all of the following ways:
## Need Any Help?

-In case you need any help with API Dash or are encountering any issue while running the tool, please feel free to drop by our [Discord server](https://bit.ly/heyfoss) and we can have a chat in the **#foss-apidash** channel.
+In case you need any help with API Dash or are encountering any issue while running the tool, please feel free to drop by our [Discord server](https://discord.com/invite/bBeSdtJ6Ue) and we can have a chat in the **#foss-apidash** channel.

doc/dev_guide/packaging.md

Lines changed: 78 additions & 1 deletion
@@ -78,7 +78,84 @@ git push

## FlatHub (Flatpak)

-TODO Instructions

Steps to generate a .flatpak package of API Dash:

1. Clone and build API Dash:

   Follow the [How to run API Dash locally](setup_run.md) guide.

   Stay in the root folder of the project directory.
2. Install Required Packages (Debian/Ubuntu):

```bash
# Install Flatpak itself
sudo apt install flatpak

# Add the Flathub remote (per-user), then install the Flatpak builder tool from it
flatpak remote-add --if-not-exists --user flathub https://dl.flathub.org/repo/flathub.flatpakrepo
flatpak install -y flathub org.flatpak.Builder
```

*If you are using another Linux distro, install Flatpak with your distro's package manager and follow the rest of the steps.
3. Build the API Dash project:

```bash
flutter build linux --release
```

4. Create the Flatpak manifest file:

```bash
touch apidash-flatpak.yaml
```

In this file, add:
```yaml
app-id: io.github.foss42.apidash
runtime: org.freedesktop.Platform
runtime-version: "23.08"
sdk: org.freedesktop.Sdk

# Entry point: the Flutter bundle copied into the sandbox below
command: /app/bundle/apidash
finish-args:
  # Display, GPU, and audio access
  - --share=ipc
  - --socket=fallback-x11
  - --socket=wayland
  - --device=dri
  - --socket=pulseaudio
  # Network access for making API requests and home access for saving files
  - --share=network
  - --filesystem=home
modules:
  - name: apidash
    buildsystem: simple
    build-commands:
      # Copy the prebuilt Flutter release bundle into the Flatpak
      - cp -a build/linux/x64/release/bundle /app/bundle
    sources:
      - type: dir
        path: .
```
5. Create the .flatpak file:

```bash
# Build in a sandbox, install for the current user, and export to a local repo
flatpak run org.flatpak.Builder --force-clean --sandbox --user --install --install-deps-from=flathub --ccache --mirror-screenshots-url=https://dl.flathub.org/media/ --repo=repo builddir apidash-flatpak.yaml

# Bundle the exported repo into a single .flatpak file
flatpak build-bundle repo apidash.flatpak io.github.foss42.apidash
```

The apidash.flatpak file should now be in the project's root folder.

To test it:

```bash
flatpak install --user apidash.flatpak

flatpak run io.github.foss42.apidash
```

To uninstall it:

```bash
flatpak uninstall io.github.foss42.apidash
```

## Homebrew

Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
# AI-Powered API Testing and Tool Integration

## Personal Information

- **Full Name:** Debasmi Basu
- **Email:** [[email protected]](mailto:[email protected])
- **Phone:** +91 7439640610
- **Discord Handle:** debasmibasu
- **Home Page:** [Portfolio](https://debasmi.github.io/portfolio/portfolio.html)
- **GitHub Profile:** [Debasmi](https://github.com/debasmi)
- **Socials:**
  - [LinkedIn](https://www.linkedin.com/in/debasmi-basu-513726288/)
- **Time Zone:** Indian Standard Time
- **Resume:** [Google Drive Link](https://drive.google.com/file/d/1o5JxOwneK-jv2GxnKTrzk__n7UbSKTPt/view?usp=sharing)

## University Info

- **University Name:** Cluster Innovation Centre, University of Delhi
- **Program:** B.Tech. in Information Technology and Mathematical Innovations
- **Year:** 2023 - Present
- **Expected Graduation Date:** 2027

## Motivation & Past Experience

### Project of Pride: Image Encryption using Quantum Computing Algorithms

This project represents my most significant achievement in the field of quantum computing and cybersecurity. I developed a **quantum image encryption algorithm** using **Qiskit**, leveraging quantum superposition and entanglement to enhance security. By implementing the **NEQR model**, I ensured **100% accuracy in encryption**, preventing any data loss. Additionally, I designed **advanced quantum circuit techniques** to reduce potential decryption vulnerabilities, pushing the boundaries of modern encryption methods.

This project is my pride because it merges **cutting-edge quantum computing** with **practical data security applications**, demonstrating the **real-world potential of quantum algorithms in cryptography**. It reflects my deep technical expertise in **Qiskit, Python, and quantum circuits**, as well as my passion for exploring **future-proof encryption solutions**.

### Challenges that Motivate Me

I am driven by challenges that push the boundaries of **emerging technologies, security, and web development**. The intersection of **AI, cybersecurity, web applications, and quantum computing** excites me because of its potential to redefine **secure digital interactions**. My passion lies in building **robust, AI-powered automation systems** that enhance **security, efficiency, and accessibility** in real-world applications. Additionally, I enjoy working on **scalable web solutions**, ensuring that modern applications remain secure and user-friendly.

### Availability for GSoC

- **Will work full-time on GSoC.**
- I will also dedicate time to exploring **LLM-based security frameworks**, improving **web API integration**, and enhancing my expertise in **AI-driven automation**.

### Regular Sync-Ups

- **Yes.** I am committed to maintaining **regular sync-ups** with mentors to ensure steady project progress and discuss improvements in API security and automation.

### Interest in API Dash

- The potential to integrate **AI-powered automation** for API testing aligns perfectly with my expertise in **web development, backend integration, and security automation**.
- I see a great opportunity in **enhancing API security validation** using AI-driven techniques, ensuring robust **schema validation and intelligent error detection**.

### Areas for Improvement

- API Dash can expand **real-time collaborative testing features**, allowing teams to test and debug APIs more efficiently.
- Enhancing **security automation** by integrating **AI-powered API monitoring** would significantly improve API Dash’s effectiveness.

---

## Project Proposal

### **Title**

AI-Powered API Testing and Tool Integration

### **Abstract**

API testing often requires **manual test case creation and validation**, making it inefficient. Additionally, **converting APIs into structured definitions for AI integration** is a complex task. This project aims to **automate test generation, response validation, and structured API conversion** using **LLMs and AI agents.** The system will provide **automated debugging insights** and integrate seamlessly with **pydantic-ai** and **langgraph.** A **benchmarking dataset** will also be created to evaluate various LLMs for API testing tasks.
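
To make the structured-conversion idea in this abstract concrete, here is a minimal, hypothetical Python sketch that maps a single OpenAPI operation to a typed tool definition using plain `pydantic`. The spec snippet, the `ToolDefinition` class, and the type mapping are illustrative assumptions rather than existing API Dash or pydantic-ai code; the actual project would plug into pydantic-ai's and langgraph's own tool interfaces.

```python
# Illustrative sketch only: one possible shape of the "structured API conversion" step.
from typing import Optional

from pydantic import BaseModel, create_model

# A made-up, simplified OpenAPI operation used purely for this example.
OPENAPI_OPERATION = {
    "operationId": "getUserById",
    "method": "get",
    "path": "/users/{id}",
    "parameters": [
        {"name": "id", "schema": {"type": "integer"}, "required": True},
        {"name": "verbose", "schema": {"type": "boolean"}, "required": False},
    ],
}

# Assumed mapping from OpenAPI primitive types to Python types.
PY_TYPES = {"integer": int, "number": float, "string": str, "boolean": bool}


class ToolDefinition(BaseModel):
    """A structured, typed description of one API operation for an AI agent."""
    name: str
    method: str
    path: str
    arguments_model: type[BaseModel]


def operation_to_tool(op: dict) -> ToolDefinition:
    """Turn a single OpenAPI operation into a tool definition with a typed argument model."""
    fields = {}
    for param in op.get("parameters", []):
        py_type = PY_TYPES.get(param["schema"]["type"], str)
        if param.get("required"):
            fields[param["name"]] = (py_type, ...)          # required field
        else:
            fields[param["name"]] = (Optional[py_type], None)  # optional field
    args_model = create_model(f"{op['operationId']}Args", **fields)
    return ToolDefinition(
        name=op["operationId"],
        method=op["method"],
        path=op["path"],
        arguments_model=args_model,
    )


tool = operation_to_tool(OPENAPI_OPERATION)
print(tool.name, tool.arguments_model.model_json_schema())
```

An agent framework could then treat `tool.arguments_model` as the validated input schema for calling the underlying endpoint, which is the piece that pydantic-ai and langgraph would consume.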

### **Weekly Timeline**

| Week | Focus | Key Deliverables & Achievements |
|---------------|--------------------------------|------------------------------------------------------------------------|
| **Week 1-2** | Research & Architecture | Study existing API testing tools, research AI automation methods, explore web-based API testing interfaces, and define the project architecture. Expected Outcome: Clear technical roadmap for implementation. |
| **Week 3-4** | API Specification Parsing | Develop a parser to extract API endpoints, request methods, authentication requirements, and response formats from OpenAPI specs, Postman collections, and raw API logs. Expected Outcome: Functional API parser capable of structured data extraction and visualization. |
| **Week 5-6** | AI-Based Test Case Generation | Implement an AI model that analyzes API specifications and generates valid test cases, including edge cases and error scenarios. Expected Outcome: Automated test case generation covering standard, edge, and security cases, integrated into a web-based UI. |
| **Week 7-8** | Response Validation & Debugging | Develop an AI-powered validation mechanism that checks API responses against expected schemas and detects inconsistencies. Implement logging and debugging tools within a web dashboard to provide insights into API failures. Expected Outcome: AI-driven validation tool with intelligent debugging support (see the sketch after this table). |
| **Week 9-10** | Structured API Conversion | Design a system that converts APIs into structured tool definitions compatible with pydantic-ai and langgraph, ensuring seamless AI agent integration. Expected Outcome: Automated conversion of API specs into structured tool definitions, with visual representation in a web-based interface. |
| **Week 11-12**| Benchmarking & Evaluation | Create a dataset and evaluation framework to benchmark different LLMs for API testing performance. Conduct performance testing on generated test cases and validation mechanisms. Expected Outcome: A benchmarking dataset and comparative analysis of LLMs in API testing tasks, integrated into a web-based reporting system. |
| **Final Week**| Testing & Documentation | Perform comprehensive end-to-end testing, finalize documentation, create usage guides, and submit the final project report. Expected Outcome: Fully tested, documented, and ready-to-use AI-powered API testing framework, with a web-based dashboard for interaction and reporting. |
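
The Week 7-8 response-validation milestone could start from something as simple as checking a live response body against an expected JSON Schema, with the AI layer then explaining the mismatches. Below is a minimal sketch under assumed inputs; the endpoint URL and schema are made-up placeholders (using the `requests` and `jsonschema` packages), not project code.

```python
# Illustrative sketch only: validate a response body against an expected JSON Schema.
import requests
from jsonschema import Draft202012Validator

# Hypothetical expected schema for a user resource.
EXPECTED_SCHEMA = {
    "type": "object",
    "required": ["id", "name", "email"],
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
}


def validate_response(url: str) -> list:
    """Return human-readable schema violations for the JSON body returned by `url`."""
    body = requests.get(url, timeout=10).json()
    validator = Draft202012Validator(EXPECTED_SCHEMA)
    return [
        f"{'/'.join(str(p) for p in error.path) or '<root>'}: {error.message}"
        for error in validator.iter_errors(body)
    ]


if __name__ == "__main__":
    # Hypothetical endpoint used only for illustration.
    for issue in validate_response("https://api.example.com/users/1"):
        print("schema violation:", issue)
```

An LLM could then be prompted with these violation messages to suggest likely causes, which is where the automated debugging insights described in the abstract would come in.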

---

## Conclusion

This project will significantly **enhance API testing automation** by leveraging **AI-driven test generation, web-based API analysis, and structured tool conversion**. The benchmarking dataset will provide **a standard evaluation framework** for API testing LLMs, ensuring **optimal model selection for API validation**. The resulting **AI-powered API testing framework** will improve **efficiency, accuracy, security, and scalability**, making API Dash a more powerful tool for developers.
Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
# AI API Eval Framework For Multimodal Generative AI

## Personal Information
- **Full Name:** Nideesh Bharath Kumar
- **University Name:** Rutgers University–New Brunswick
- **Program Enrolled In:** B.S. Computer Science, Artificial Intelligence Track
- **Year:** Junior Year (Third Year)
- **Expected Graduation Date:** May 2026

## About Me
I’m **Nideesh Bharath Kumar**, a junior (third year) at Rutgers University–New Brunswick pursuing a **B.S. in Computer Science on the Artificial Intelligence Track**. I have a strong foundation in full-stack development and AI engineering, with project and internship experience in **Dart/Flutter, LangChain, RAG, Vector Databases, AWS, Docker, Kubernetes, PostgreSQL, FastAPI, OAuth,** and other technologies for building scalable, AI-powered systems. I have interned at **Manomay Tech, IDEA, and Newark Science and Sustainability,** developing scalable systems and managing AI systems, and completed fellowships with **Google** and **Codepath** that strengthened my technical skills. I’ve also won awards in hackathons, achieving **Overall Best Project in the CS Base Climate Hackathon for a Flutter-based project** and **Best Use of Terraform in the HackRU Hackathon for a Computer Vision Smart Shopping Cart**. I’m passionate about building distributed, scalable systems and AI technologies, and API Dash is an amazing tool that facilitates building these solutions through easy visualization and testing of APIs; I believe my skills in **AI development** and experience with **Dart/Flutter** and **APIs** put me in a position to contribute effectively to this project.
## Project Details
**Project Title:** AI API Eval Framework For Multimodal Generative AI

**Description:**
This project will develop a **Dart-centered evaluation framework** designed to simplify the testing of generative AI models across **multiple modalities (text, image, code)**. It will integrate existing evaluation toolkits: **llm-harness** for text, **torch-fidelity** and **CLIP** for images, and **HumanEval/MBPP** with **CodeBLEU** for code. The framework will provide a unified config layer that supports standard and custom benchmark datasets and evaluation metrics, exposed through a **user-friendly interface in API Dash** that lets the user select the model type, manage datasets (local or downloadable), and choose evaluation metrics (standard toolkit or custom script). On top of this, **real-time visual analytics** will show the progress of the metrics, and evaluations will run with **parallelized batch processing**.

**Related Issue:** [#618](https://github.com/foss42/apidash/issues/618)
**Key Features:**

1) Unified Evaluation Configuration:
- A config file in YAML will serve as the abstraction layer, generated from the user's selection of model type, dataset, and evaluation metrics. It will direct the job to either llm-harness, torch-fidelity and CLIP, or HumanEval and MBPP with CodeBLEU. Additionally, custom evaluation scripts and datasets can be attached to this config file and interpreted by the system (a minimal example of such a config is sketched after this feature list).
- This abstraction layer ensures that even when these specifications differ between eval jobs, everything is routed to the correct resources while still providing a centralized layer for creating the job. Furthermore, these config files can be stored in history for running the same jobs later.

2) Intuitive User Interface
- When starting an evaluation, users can select the model type (text, image, or code) through a drop-down menu. The system will provide a list of standard datasets and use cases. The user can select these datasets or attach a custom one. If the user does not have the dataset locally in the workspace, they can attach it using the file explorer or download it from the web. Furthermore, the user can select standard evaluation metrics from a list or attach a custom script.

3) Standard Evaluation Pipelines
- The standard evaluation pipelines cover text, image, and code generation.
- For text generation, llm-harness will be used with custom datasets and tasks to measure Precision, Recall, F1 Score, BLEU, ROUGE, and Perplexity. Custom datasets and evaluation scores can be integrated through llm-harness's custom task config file.
- For image generation, torch-fidelity can be used to calculate Fréchet Inception Distance and Inception Score by comparing against a reference image database. For text-to-image generation, CLIP scores can be used to measure the alignment between the prompt and the generated image. Custom datasets and evaluation scores can be integrated through a custom interface created in Dart.
- For code generation, tests like HumanEval and MBPP can be used for functional correctness and CodeBLEU can be used for code quality checking. Custom integration will be done the same way as for image generation, with a custom Dart interface for functional test databases and evaluation metrics.

4) Batch Evaluations
- Parallel processing will be supported through async runs of the tests, with a progress bar in API Dash monitoring the number of processed rows.

5) Visualizations of Results
- Visualizations will be provided while the tests are running, giving live feedback on model performance, along with a summary of visualizations after all evals have been run.
- Bar Graphs: These will be displayed on a range of 0 to 100% accuracy for a quick performance comparison across all tested models.
- Line Charts: These will show performance trends of models over time, comparing model performance across different batches as well as between models.
- Tables: These will provide detailed summary statistics about scores for each model across different benchmarks and datasets.
- Box Plots: These will show the distribution of scores per batch, highlighting outliers and variance, while also allowing side-by-side comparisons of different models.

6) Offline and Online Support
- Offline: Models that run offline will be supported by pointing to the script the model uses to run, and to datasets that are stored locally.
- Online: These models can be connected for eval through an API endpoint, and datasets can be downloaded with access to the link.
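
To make the config abstraction in feature 1 concrete, the sketch below shows one possible shape of an eval-job file, written out as YAML from Python for brevity; in the project itself the Dart-side Configuration Manager would emit this file from the user's UI selections. Every field name and value here is an assumption for illustration, not a finalized schema.

```python
# Hypothetical eval-job config, dumped to YAML with PyYAML.
import yaml

eval_job = {
    "model_type": "text",                  # "text" | "image" | "code"
    "model": {
        "mode": "online",                  # "online" = API endpoint, "offline" = local script
        "endpoint": "https://api.example.com/v1/generate",
    },
    "dataset": {
        "source": "standard",              # "standard" | "local" | "download"
        "name": "example-benchmark",
        "path": None,                      # set when a local dataset is attached
    },
    "metrics": ["bleu", "rouge", "perplexity"],
    "custom_eval_script": None,            # optional user-provided script
    "batch": {"size": 32, "parallel": True},
}

with open("eval_job.yaml", "w") as f:
    yaml.safe_dump(eval_job, f, sort_keys=False)
```

Because a job is just a file, it can be stored in history and re-run later, as described above.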

**Architecture:**
1) UI Interface: Built with Dart/Flutter
2) Configuration Manager: Built with Dart, uses YAML for the config file
3) Dataset Manager: Built with Dart, REST APIs for accessing endpoints
4) Evaluation Manager: Built with a Dart-Python layer to manage connections between evaluators and API Dash (see the sketch after this list)
5) Batch Processing: Built with Dart async requests
6) Visualization and Results: Built with Dart/Flutter, using packages like fl_chart and syncfusion_flutter_charts
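
A rough sketch of the Python half of the Evaluation Manager (item 4 above) is shown below: it loads the YAML job file and dispatches to the text, image, or code pipeline. The config fields and helper functions are assumptions for illustration; the torch-fidelity call follows that toolkit's documented `calculate_metrics` entry point, while the other two pipelines are left as stubs.

```python
# Hypothetical dispatch layer between the Dart UI and the Python evaluation toolkits.
import yaml


def run_text_eval(cfg: dict) -> dict:
    # Stub: invoke llm-harness with the configured tasks/datasets here.
    raise NotImplementedError("hook up llm-harness for text models")


def run_image_eval(cfg: dict) -> dict:
    # FID / Inception Score over folders of generated vs. reference images.
    from torch_fidelity import calculate_metrics
    return calculate_metrics(
        input1=cfg["dataset"]["generated_dir"],   # assumed config fields
        input2=cfg["dataset"]["reference_dir"],
        fid=True,
        isc=True,
    )


def run_code_eval(cfg: dict) -> dict:
    # Stub: run HumanEval/MBPP functional tests and CodeBLEU here.
    raise NotImplementedError("hook up HumanEval/MBPP and CodeBLEU for code models")


PIPELINES = {"text": run_text_eval, "image": run_image_eval, "code": run_code_eval}


def run_job(config_path: str) -> dict:
    """Load an eval-job config and route it to the matching pipeline."""
    with open(config_path) as f:
        cfg = yaml.safe_load(f)
    return PIPELINES[cfg["model_type"]](cfg)
```

The metrics dictionary returned by `run_job` would then be streamed back to the Dart/Flutter side for the fl_chart and syncfusion_flutter_charts visualizations described above.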
