This is a Personal AI Assistant that leverages the flexibility of open-source Ollama models.
- AI assistant powered by Ollama models
- Streaming support
- Frontend integration with easy-to-use HTML templates (index.html and load-tester.html)
- Supports all available Ollama models
Demo video: 2025-04-19_00-07-22.mp4
Ensure the following are installed on your machine:
- Ollama (used for AI model serving)
- Node.js & Bun (for running the backend)
Clone the Repository
```bash
git clone https://github.com/mdjamilkashemporosh/omni.git
```

Before running the project, both the backend and frontend need to be configured.
Navigate to the backend folder and run:
```bash
bun install
```

Create a `.env` file under the backend directory and add the following properties:

```env
BASE_URL= # Ollama URL (e.g., http://localhost:11434)
MODEL= # Model name (e.g., phi4)
PORT= # Port number (e.g., 8000)
```

To run the backend in development mode:
```bash
npm run dev
```

To build the backend:

```bash
npm run build
```

To run the backend in production mode (be sure to build first):

```bash
npm run start
```

Navigate to the frontend folder and run:
```bash
npm install -g http-server
```

In your frontend code, update the API URL to match your backend configuration. Navigate to the frontend folder, open the `config/config.js` file, and update the following line accordingly:

```js
export const API_URL = ''; // (e.g., http://localhost:8000/chat)
```

Then start the server:

```bash
http-server
```

Once the server starts, the available URLs are displayed in your terminal. Open one of them to start using the application. 🚀
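Since the backend streams its responses, the frontend has to assemble tokens as they arrive. The sketch below shows one common way to do this, assuming the backend streams newline-delimited JSON chunks of the shape `{"response": "..."}` (this wire format, and the `parseChunks` helper, are illustrative assumptions, not the project's actual code):

```typescript
// Minimal sketch: accumulate a streamed chat response.
// ASSUMPTION: the backend streams newline-delimited JSON chunks
// like {"response":"..."} -- the real wire format may differ.

interface ChatChunk {
  response: string;
  done?: boolean;
}

// Parse one buffer of streamed text into chunks, returning the
// decoded tokens plus any trailing partial line to keep for the
// next read.
function parseChunks(buffer: string): { tokens: string[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // last element may be incomplete
  const tokens: string[] = [];
  for (const line of lines) {
    if (!line.trim()) continue;
    const chunk = JSON.parse(line) as ChatChunk;
    tokens.push(chunk.response);
  }
  return { tokens, rest };
}

// Example with mocked stream data:
const streamed =
  '{"response":"Hel"}\n{"response":"lo"}\n{"response":"!","done":true}\n';
const { tokens } = parseChunks(streamed);
console.log(tokens.join("")); // "Hello!"
```

In a real client you would call `parseChunks` on each chunk read from the response body, carrying `rest` over between reads so a JSON object split across two chunks is still parsed correctly.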
The application comes with a load tester for concurrent requests. Navigate to `/load-tester.html` to use it.
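The core idea behind such a load tester, firing N requests at once and timing each, can be sketched as follows. The `sendRequest` parameter stands in for the real fetch call to the backend's chat endpoint; it is an assumption for illustration, not the project's actual load-tester code:

```typescript
// Minimal sketch of a concurrent load test: start N requests
// simultaneously and report per-request latency in milliseconds.
// `sendRequest` is a stand-in for the real call to the backend.

async function runLoadTest(
  sendRequest: () => Promise<void>,
  concurrency: number
): Promise<number[]> {
  const runs = Array.from({ length: concurrency }, async () => {
    const start = Date.now();
    await sendRequest();
    return Date.now() - start; // latency of this request
  });
  // Promise.all keeps every request in flight at the same time.
  return Promise.all(runs);
}

// Example with a mocked request that resolves after a short delay:
const mock = () => new Promise<void>((resolve) => setTimeout(resolve, 10));
runLoadTest(mock, 5).then((latencies) => {
  console.log(`completed ${latencies.length} requests`);
});
```

Because all promises are created before any is awaited, the requests overlap rather than run sequentially, which is what distinguishes a load test from a simple loop.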
```
├── README.md
├── backend
│   ├── README.md
│   ├── bun.lock
│   ├── config
│   │   └── config.ts
│   ├── index.ts
│   ├── package.json
│   ├── tsconfig.json
│   ├── types
│   │   ├── config.d.ts
│   │   ├── index.d.ts
│   │   └── requireEnv.d.ts
│   ├── utils
│   │   └── requireEnv.ts
│   └── .env
└── frontend
    ├── config
    │   └── config.js
    ├── index.html
    ├── js
    │   ├── index.js
    │   └── load-tester.js
    └── load-tester.html
```

We welcome contributions to improve this project! Here are some ways you can contribute:
- Bug Fixes: If you find a bug, please submit an issue on GitHub and, if possible, provide a fix in a pull request.
- Feature Requests: Have an idea for a new feature? Open an issue with a description of the feature, and we can discuss it.
- Code Improvements: Feel free to suggest or submit code improvements for better performance, cleaner code, etc.
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes.
- Run tests (if applicable) and ensure everything works as expected.
- Create a pull request with a detailed description of your changes.
This project is licensed under the MIT License.
If you encounter any issues or have questions, please feel free to open an issue on GitHub. Make sure to include relevant information such as error messages, system environment, and steps to reproduce the issue.

