
Commit b0c32b8

replace overview with quickstart now that install instructions is on AI Server home page
1 parent ce3960e commit b0c32b8

4 files changed: +194 -194 lines changed

MyApp/_pages/ai-server/index.md

Lines changed: 62 additions & 125 deletions
@@ -1,128 +1,65 @@
---
title: Quick Start
description: Get AI Server up and running quickly
title: Overview
description: Introduction to AI Server and its key features
---

Install AI Server by running [install.sh](https://github.com/ServiceStack/ai-server/blob/main/install.sh):

### 1. Clone the Repository

Clone the AI Server repository from GitHub:

:::sh
git clone https://github.com/ServiceStack/ai-server
:::

### 2. Run the Installer

:::sh
cd ai-server && cat install.sh | bash
:::

The installer will detect common environment variables for its supported AI Providers including OpenAI, Anthropic,
Mistral AI, Google, etc. and prompt if you would like to include any others in your AI Server configuration.

<ascii-cinema src="/pages/ai-server/ai-server-install.cast"
    loop="true" poster="npt:00:21" theme="dracula" rows="12" />

## Accessing AI Server

Once the AI Server is running, you can access the Admin Portal at [http://localhost:5005/admin](http://localhost:5005/admin) to configure your AI providers and generate API keys.
If you first ran the AI Server with configured API Keys in your `.env` file, your providers will be automatically configured for the related services.

::: info
The default password to access the Admin Portal is `p@55wOrd`. You can change this in your `.env` file by setting `AUTH_SECRET`, or by providing it during the installation process.
:::

You will then be able to make requests to the AI Server API endpoints, and access the Admin Portal user interface like the [Chat interface](http://localhost:5005/admin/Chat) to use your AI Provider models.

#### Re-install

If needed, you can reset the install by deleting your local `App_Data` directory and re-running `docker compose up` or `install.sh`.

### Optional - Install ComfyUI Agent

If your server also has a GPU you can ask the installer to also install the [ComfyUI Agent](/ai-server/comfy-extension) locally:

<ascii-cinema src="/pages/ai-server/agent-comfy-install.cast"
    loop="true" poster="npt:00:09" theme="dracula" rows="13" />

The ComfyUI Agent is a separate Docker agent for running [ComfyUI](https://www.comfy.org),
[Whisper](https://github.com/openai/whisper) and [FFmpeg](https://www.ffmpeg.org) on servers with GPUs to handle
AI Server's [Image](/ai-server/transform/image) and
[Video transformations](/ai-server/transform/video) and Media Requests, including:

- [Text to Image](/ai-server/text-to-image)
- [Image to Text](/ai-server/image-to-text)
- [Image to Image](/ai-server/image-to-image)
- [Image with Mask](/ai-server/image-with-mask)
- [Image Upscale](/ai-server/image-upscale)
- [Speech to Text](/ai-server/speech-to-text)
- [Text to Speech](/ai-server/text-to-speech)

#### ComfyUI Agent Installer

To install the ComfyUI Agent on a separate server (with a GPU), you can clone and run the ComfyUI Agent installer from there instead:

**Clone the Comfy Agent Repo:**

:::sh
git clone https://github.com/ServiceStack/agent-comfy.git
:::

**Run the Comfy Agent Installer:**

:::sh
cd agent-comfy && cat install.sh | bash
:::

Providing your AI Server URL and Auth Secret when prompted will automatically register the ComfyUI Agent with your AI Server to handle related requests.

::: info
You will be prompted to provide the AI Server URL and ComfyUI Agent URL during the installation.
These should be the accessible URLs for your AI Server and ComfyUI Agent. When running locally, the ComfyUI Agent URL will be populated with a Docker-accessible address, as `localhost` won't be reachable from the AI Server container.
If you want to reset the ComfyUI Agent settings, remember to remove the provider from the AI Server Admin Portal.
:::

### Supported OS

The AI Server installer is supported on Linux, macOS, and Windows with WSL2, and all require Docker and Docker Compose to be installed at a minimum.

## Prerequisites

### Linux

Linux requires the following software installed:

- Docker Engine (with Docker Compose)
- Git

#### ComfyUI Agent

To run the ComfyUI Agent locally, you will also need:

- Nvidia GPU with CUDA support
- [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) for Docker

### macOS

macOS also requires:

- Docker Engine (with Docker Compose)

#### ComfyUI Agent

The ComfyUI Agent requires PyTorch running in Docker, which isn't available on macOS.

### Windows with WSL2

Windows with WSL2 requires the following prerequisites:

- Docker Engine accessible from WSL2 like [Docker Desktop](https://www.docker.com/products/docker-desktop)
- WSL2 with Ubuntu 20.04 LTS or later

#### ComfyUI Agent

To run the ComfyUI Agent locally, you will also need:

- Nvidia GPU with [WSL2 CUDA support](https://docs.nvidia.com/cuda/wsl-user-guide/index.html)

AI Server allows you to orchestrate your systems' AI requests through a single self-hosted application, controlling which AI Providers your Apps use without impacting their client integrations. It serves as a private gateway to process LLM, AI, and image transformation requests, dynamically delegating tasks across multiple providers including Ollama, OpenAI, Anthropic, Mistral AI, Google Cloud, OpenRouter, GroqCloud, Replicate and ComfyUI, utilizing models like Whisper, SDXL and Flux, and tools like FFmpeg.

```mermaid{.not-prose}
flowchart TB
    A[AI Server]
    A --> D{LLM APIs}
    A --> C{Ollama}
    A --> E{Media APIs}
    A --> F{Comfy UI + FFmpeg}
    D --> D1[OpenAI, Anthropic, Mistral, Google, OpenRouter, Groq]
    E --> E1[Replicate, dall-e-3, Text to speech]
    F --> F1[Diffusion, Whisper, TTS]
```
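
The flowchart above shows AI Server delegating each request type to a configured provider. A toy sketch of that routing idea, illustrative only: the routing table and `route` helper below are hypothetical, not AI Server's actual implementation.

```python
# Toy sketch of a gateway delegating request types to providers.
# Hypothetical routing table, not AI Server's actual implementation.
ROUTES = {
    "chat":           ["ollama", "openai", "anthropic", "mistral", "google", "openrouter", "groq"],
    "text-to-image":  ["comfy-agent", "replicate", "dall-e-3"],
    "speech-to-text": ["comfy-agent"],  # Whisper runs on the ComfyUI Agent
}

def route(task: str, offline: set[str] = frozenset()) -> str:
    """Pick the first configured provider for a task, skipping offline ones."""
    for provider in ROUTES.get(task, []):
        if provider not in offline:
            return provider
    raise LookupError(f"No provider available for task: {task}")

print(route("chat"))                      # first configured chat provider
print(route("chat", offline={"ollama"}))  # falls back to the next provider
```

Because the routing decision lives in the gateway, swapping or reordering providers never touches client code, which is the "Flexibility" benefit described below.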

## Why Use AI Server?

AI Server simplifies the integration and management of AI capabilities in your applications:

- **Centralized Management**: Manage your LLM, AI and Media Providers, API Keys and usage from a single App
- **Flexibility**: Easily switch 3rd party providers without impacting your client integrations
- **Scalability**: Distribute workloads across multiple providers to handle high volumes of requests efficiently
- **Security**: A self-hosted private gateway keeps AI operations behind firewalls and limits access with API Keys
- **Developer-Friendly**: A simple development experience using a single client and endpoint with type-safe APIs
- **Manage Costs**: Monitor and control usage across your organization with detailed request history

## Key Features

- **Unified AI Gateway**: Centralize all your AI requests & API Key management through a single self-hosted service
- **Multi-Provider Support**: Seamlessly integrate with leading LLMs, Ollama, ComfyUI, FFmpeg, and more
- **Type-Safe Integrations**: Native end-to-end typed integrations for 11 popular programming languages
- **Secure Access**: Use simple API key authentication to control which AI resources Apps can use
- **Managed File Storage**: Built-in cached asset storage for AI-generated assets, isolated per API Key
- **Background Job Processing**: Efficient handling of long-running AI tasks, capable of distributing workloads
- **Monitoring and Analytics**: Real-time monitoring of performance and statistics for executing AI Requests
- **Recorded History**: Automatic archival of completed AI Requests into monthly rolling databases
- **Custom Deployment**: Run as a single Docker container, with optional GPU-equipped agents for advanced tasks

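
Background job processing for long-running AI tasks typically follows a submit-then-poll pattern: the client submits a request, receives a job id, and polls until the job completes. A toy illustration of that pattern, not AI Server's actual API:

```python
import itertools

# Toy illustration of the submit-then-poll pattern used for
# long-running background jobs; not AI Server's actual API.
class FakeJobQueue:
    def __init__(self):
        self._ids = itertools.count(1)
        self._jobs = {}

    def submit(self, task: str) -> int:
        job_id = next(self._ids)
        # Pretend every job finishes after two status checks.
        self._jobs[job_id] = {"task": task, "checks_left": 2}
        return job_id

    def status(self, job_id: int) -> str:
        job = self._jobs[job_id]
        if job["checks_left"] > 0:
            job["checks_left"] -= 1
            return "queued"
        return "completed"

queue = FakeJobQueue()
job = queue.submit("text-to-image: a lighthouse at dusk")
while queue.status(job) != "completed":
    pass  # real clients would back off between polls
print(f"job {job} completed")
```
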

## Supported AI Capabilities

- **Large Language Models**: Integrates with Ollama, OpenAI, Anthropic, Mistral, Google, OpenRouter and Groq
- **Image Generation**: Leverage self-hosted ComfyUI Agents and SaaS providers like Replicate and DALL-E 3
- **Image Transformations**: Dynamically transform and cache Image Variations for stored assets
- **Audio Processing**: Text-to-speech and speech-to-text with Whisper integration
- **Video Processing**: Format conversions, scaling, cropping, and more via FFmpeg

## Getting Started for Developers

1. **Setup**: Follow the [Quick Start guide](/ai-server/install) to deploy AI Server.
2. **Configuration**: Use the Admin Portal to add your AI providers and generate API keys.
3. **Integration**: Choose your preferred language and use AI Server's type-safe APIs.
4. **Development**: Start making API calls to AI Server from your application, leveraging the full suite of AI capabilities.

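
As a sketch of steps 3 and 4, a chat request to AI Server could be assembled like this. Assumptions are flagged inline: an OpenAI-compatible chat completions endpoint at `/v1/chat/completions`, an API key generated in the Admin Portal, and an illustrative model name; check your deployment's actual endpoints and models.

```python
import json
from urllib import request

AI_SERVER_URL = "http://localhost:5005"  # your AI Server deployment
API_KEY = "ai-xxxxxxxx"                  # placeholder; generate one in the Admin Portal

def build_chat_request(prompt: str, model: str = "llama3:8b") -> request.Request:
    """Build an authenticated chat completion request for AI Server.
    The endpoint path, payload shape and model name assume an
    OpenAI-compatible API and are illustrative, not confirmed."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        url=f"{AI_SERVER_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",  # simple API key auth
        },
        method="POST",
    )

req = build_chat_request("What is 6 * 7?")
# response = request.urlopen(req)  # uncomment against a running AI Server
```

The same authenticated request shape works from any language, which is the point of routing all Apps through the one gateway endpoint.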
## Learn More

- Hosted Example: [openai.servicestack.net](https://openai.servicestack.net)
- Source Code: [github.com/ServiceStack/ai-server](https://github.com/ServiceStack/ai-server)

AI Server is actively developed and continuously expanding its capabilities.

MyApp/_pages/ai-server/overview.md

Lines changed: 0 additions & 65 deletions
This file was deleted.
Lines changed: 128 additions & 0 deletions
@@ -0,0 +1,128 @@
---
title: Quick Start
description: Get AI Server up and running quickly
---

Install AI Server by running [install.sh](https://github.com/ServiceStack/ai-server/blob/main/install.sh):

### 1. Clone the Repository

Clone the AI Server repository from GitHub:

:::sh
git clone https://github.com/ServiceStack/ai-server
:::

### 2. Run the Installer

:::sh
cd ai-server && cat install.sh | bash
:::

The installer will detect common environment variables for its supported AI Providers including OpenAI, Anthropic,
Mistral AI, Google, etc. and prompt if you would like to include any others in your AI Server configuration.

<ascii-cinema src="/pages/ai-server/ai-server-install.cast"
    loop="true" poster="npt:00:21" theme="dracula" rows="12" />
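
For reference, a minimal `.env` the installer could pick up might look like the following. This is a sketch only: `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are those providers' conventional variable names and the values are placeholders; verify the names against the `.env` that `install.sh` generates.

```ini
# .env - illustrative example; verify variable names against
# the .env generated by install.sh
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# Admin Portal password / Auth Secret
AUTH_SECRET=p@55wOrd
```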

## Accessing AI Server

Once the AI Server is running, you can access the Admin Portal at [http://localhost:5005/admin](http://localhost:5005/admin) to configure your AI providers and generate API keys.
If you first ran the AI Server with configured API Keys in your `.env` file, your providers will be automatically configured for the related services.

::: info
The default password to access the Admin Portal is `p@55wOrd`. You can change this in your `.env` file by setting `AUTH_SECRET`, or by providing it during the installation process.
:::

You will then be able to make requests to the AI Server API endpoints, and access the Admin Portal user interface like the [Chat interface](http://localhost:5005/admin/Chat) to use your AI Provider models.

#### Re-install

If needed, you can reset the install by deleting your local `App_Data` directory and re-running `docker compose up` or `install.sh`.

### Optional - Install ComfyUI Agent

If your server also has a GPU you can ask the installer to also install the [ComfyUI Agent](/ai-server/comfy-extension) locally:

<ascii-cinema src="/pages/ai-server/agent-comfy-install.cast"
    loop="true" poster="npt:00:09" theme="dracula" rows="13" />

The ComfyUI Agent is a separate Docker agent for running [ComfyUI](https://www.comfy.org),
[Whisper](https://github.com/openai/whisper) and [FFmpeg](https://www.ffmpeg.org) on servers with GPUs to handle
AI Server's [Image](/ai-server/transform/image) and
[Video transformations](/ai-server/transform/video) and Media Requests, including:

- [Text to Image](/ai-server/text-to-image)
- [Image to Text](/ai-server/image-to-text)
- [Image to Image](/ai-server/image-to-image)
- [Image with Mask](/ai-server/image-with-mask)
- [Image Upscale](/ai-server/image-upscale)
- [Speech to Text](/ai-server/speech-to-text)
- [Text to Speech](/ai-server/text-to-speech)

#### ComfyUI Agent Installer

To install the ComfyUI Agent on a separate server (with a GPU), you can clone and run the ComfyUI Agent installer from there instead:

**Clone the Comfy Agent Repo:**

:::sh
git clone https://github.com/ServiceStack/agent-comfy.git
:::

**Run the Comfy Agent Installer:**

:::sh
cd agent-comfy && cat install.sh | bash
:::

Providing your AI Server URL and Auth Secret when prompted will automatically register the ComfyUI Agent with your AI Server to handle related requests.

::: info
You will be prompted to provide the AI Server URL and ComfyUI Agent URL during the installation.
These should be the accessible URLs for your AI Server and ComfyUI Agent. When running locally, the ComfyUI Agent URL will be populated with a Docker-accessible address, as `localhost` won't be reachable from the AI Server container.
If you want to reset the ComfyUI Agent settings, remember to remove the provider from the AI Server Admin Portal.
:::

### Supported OS

The AI Server installer is supported on Linux, macOS, and Windows with WSL2, and all require Docker and Docker Compose to be installed at a minimum.

## Prerequisites

### Linux

Linux requires the following software installed:

- Docker Engine (with Docker Compose)
- Git

#### ComfyUI Agent

To run the ComfyUI Agent locally, you will also need:

- Nvidia GPU with CUDA support
- [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) for Docker

### macOS

macOS also requires:

- Docker Engine (with Docker Compose)

#### ComfyUI Agent

The ComfyUI Agent requires PyTorch running in Docker, which isn't available on macOS.

### Windows with WSL2

Windows with WSL2 requires the following prerequisites:

- Docker Engine accessible from WSL2 like [Docker Desktop](https://www.docker.com/products/docker-desktop)
- WSL2 with Ubuntu 20.04 LTS or later

#### ComfyUI Agent

To run the ComfyUI Agent locally, you will also need:

- Nvidia GPU with [WSL2 CUDA support](https://docs.nvidia.com/cuda/wsl-user-guide/index.html)
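
As a quick sanity check before running either installer, you can confirm the baseline tools are on your `PATH`. A minimal sketch: it only checks that the commands exist, not their versions, Docker Compose availability, or GPU support.

```python
import shutil

# Check that the installer's baseline tools are on PATH.
# Minimal sketch: doesn't verify versions, Docker Compose v2, or CUDA support.
def check_prerequisites(commands=("git", "docker")) -> dict[str, bool]:
    return {cmd: shutil.which(cmd) is not None for cmd in commands}

for cmd, found in check_prerequisites().items():
    print(f"{'OK     ' if found else 'MISSING'}  {cmd}")
```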
