Ai-Initializer-Project is a local, private, secure, hardware-accelerated, and customizable universal AI large-language-model (LLM) client for any Linux/UNIX distribution.
-
Clone the repository:
git clone https://github.com/aaronms1/Ai-Initializer-Project
-
cd into the project directory:
cd Ai-Initializer-Project
-
Run the ./installers/install.sh script:
./installers/install.sh
-
Download the .deb package from the releases page (in progress).
-
Install the deb package:
sudo dpkg -i Ai-Initializer-Project.deb
-
Pull the Docker image from Docker Hub (in progress):
docker pull aaronms1/ai-initializer-project
-
Run the docker image:
docker run -p 8080:8080 aaronms1/ai-initializer-project
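For repeatable runs, the same command can be expressed as a Compose file. This is an illustrative sketch, not a file shipped with the project: the service name and the volume mount (mirroring the runtime layout described below) are assumptions.

```yaml
# docker-compose.yml (illustrative)
services:
  ai-initializer:
    image: aaronms1/ai-initializer-project
    ports:
      - "8080:8080"
    volumes:
      # Persist downloaded models outside the container (path is an assumption).
      - ./models:/var/run/project-ai-initializer/models
```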
Once the application is running, you can access the Vaadin front end in your browser or use your current desktop environment (DE) to interact with the LLM.
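With the default port mapping, the front end should answer at http://localhost:8080. A minimal readiness-check sketch, assuming the default host/port and that the root path responds once the app is up (the helper names are hypothetical):

```shell
#!/bin/sh
# Build the front-end URL from overridable host/port (defaults are assumptions).
APP_HOST="${APP_HOST:-localhost}"
APP_PORT="${APP_PORT:-8080}"

app_url() {
  printf 'http://%s:%s/' "$APP_HOST" "$APP_PORT"
}

# Poll the URL until it responds, or give up after N attempts (default 30).
wait_for_app() {
  tries="${1:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$(app_url)" >/dev/null 2>&1; then
      echo "application is up at $(app_url)"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "application did not respond at $(app_url)" >&2
  return 1
}
```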
The application provides the following features:
- Customization: The application allows for extensive customization of the LLM, leveraging the BTRFS file system.
- Hardware Acceleration: The application supports hardware acceleration for improved performance.
- Systemd Service: The application includes a systemd service for easy management and monitoring.
- Stand-alone option: The application can be run as a stand-alone service or integrated into an existing system.
- Security: The application includes robust security features, such as data encryption, user authentication and authorization, and secure communication protocols.
- PAM Authentication: The application uses Pluggable Authentication Modules (PAM) for user authentication.
- AppArmor Profiles: The application uses AppArmor profiles (a Linux security module) to ensure the privacy and security of user data.
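The systemd integration could look like the unit sketched below. The unit name, ExecStart path, and restart policy are assumptions based on the layout described in this README, not the shipped unit file.

```ini
# project-ai-initializer.service (illustrative)
[Unit]
Description=Ai-Initializer-Project LLM service
After=network.target

[Service]
Type=simple
# ExecStart path is an assumption; check the installed unit for the real command.
ExecStart=/opt/project-ai-initializer/bin/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```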
- /etc/project-ai-initializer/
- *.conf
- /etc/apparmor/project-ai-initializer/
- *.profile
- /systemd/project-ai-initializer/
- *.service
- /usr/share/applications/
- project-ai-initializer.desktop
- /home/${USER}/.project-ai-initializer/
- models
- checksums
- cache
- /home/${USER}/
- TornadoVM
- /opt/
- project-ai-initializer/
- /var/run/project-ai-initializer/
- models/
- data/
- /var/log/project-ai-initializer/
- logs/
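A quick way to sanity-check an installation against this layout is a small POSIX shell function. The directory list simply mirrors the paths above; the optional prefix argument is only there so the check can be exercised against a staged root, and the function name is hypothetical:

```shell
#!/bin/sh
# Return 0 if all expected directories exist under the given root prefix
# (an empty prefix means the real filesystem).
check_layout() {
  root="${1:-}"
  missing=0
  for d in \
    /etc/project-ai-initializer \
    /opt/project-ai-initializer \
    /var/run/project-ai-initializer/models \
    /var/run/project-ai-initializer/data \
    /var/log/project-ai-initializer
  do
    if [ ! -d "${root}${d}" ]; then
      echo "missing: ${d}" >&2
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}
```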
TornadoVM is not required to contribute, but it is recommended if you plan to help with any GPU-related issues. Run the following command from your home directory to install TornadoVM:
installers/tornado_installer.sh
-
Clone the repository (if you have not already):
git clone https://github.com/aaronms1/Ai-Initializer-Project
-
Add an .env file with your Hugging Face API token at "/home/pai/.gnupg/pai-token.env":
HUGGINGFACE_API_TOKEN=your_token_here
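One way to make that token available to a shell session is to source the file. The default path below matches the location mentioned above, and the helper name is hypothetical:

```shell
#!/bin/sh
# Source a KEY=value token file and export the token for child processes.
load_hf_token() {
  token_file="${1:-$HOME/.gnupg/pai-token.env}"
  if [ ! -f "$token_file" ]; then
    echo "token file not found: $token_file" >&2
    return 1
  fi
  # shellcheck disable=SC1090
  . "$token_file"
  export HUGGINGFACE_API_TOKEN
}
```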
-
Modify IntelliJ Settings: Ensure your IntelliJ settings are configured as shown in the example screenshot.
-
Configure IntelliJ to use the JDK provided by TornadoVM:
File -> Project Structure -> Project -> Project SDK -> Add SDK -> select the TornadoVM SDK at:
/home/your-user-name/TornadoVM/etc/dependencies/TornadoVM-graal-jdk-21/graalvm-community-openjdk-21.0.1+12.1
-
Initialize the VM environment by running the following from the root of the project:
source ~/TornadoVM/setvars.sh
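After sourcing setvars.sh you can confirm the environment took effect. This sketch assumes setvars.sh exports TORNADO_SDK (true for recent TornadoVM releases, but verify against your checkout); the function name is hypothetical:

```shell
#!/bin/sh
# Report whether the TornadoVM environment variable is present.
check_tornado_env() {
  if [ -z "${TORNADO_SDK:-}" ]; then
    echo "TornadoVM environment not set; run: source ~/TornadoVM/setvars.sh" >&2
    return 1
  fi
  echo "TORNADO_SDK=${TORNADO_SDK}"
}
```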
-
Select a Task: Choose a 'TODO' or 'FIXME' task from the codebase.
-
Create a Branch: Create a new branch from the default branch.
-
Refactor: Make your changes and refactor the code as needed.
-
Submit a Pull Request: Once your changes are complete, submit a pull request for review.
We appreciate your contributions!