Universal LLM Provider Connector for Java
When I first started exploring LLMs and Neural Networks in Python, experimenting was easy. But when I switched back to Java—the language I trust for its scalability and performance—I hit a roadblock. There weren’t any simple tools to help me work seamlessly with multiple LLM providers.
This had to be fixed.
The result? Hosp-AI
Hosp is short for the Danish word holdspiller, which means "team player" in English. The idea is to create a library that's effective and fits in easily.
A library designed for quick prototyping with LLMs, and fully compatible with production-ready frameworks like Spring Boot.
Thanks to Adalflow , the inspiration behind building this library.
- Fork the repo
- Create a branch named after the type of change: issue-fix, documentation, or feature
- Open a PR
- Once reviewed, the PR will be merged by an admin
- Currently supported LLM providers: OpenAI, Anthropic, Groq, Ollama
- PromptBuilder for composing complex prompts
- Flexibility to add customized client implementations
- Tools (function calls)
- Images in prompts
- Streaming responses
- Structured output
- Add the JitPack repository to your pom.xml:

```xml
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>
```
- Add the hosp-ai dependency (check for the latest version):

```xml
<dependency>
    <groupId>com.github.r7b7</groupId>
    <artifactId>hosp-ai</artifactId>
    <version>v1.0.0-alpha.2</version>
</dependency>
```
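If you use Gradle instead of Maven, the equivalent setup follows JitPack's standard coordinate scheme (a sketch based on JitPack conventions, not taken verbatim from the project docs):

```groovy
repositories {
    // JitPack resolves GitHub releases as Maven artifacts
    maven { url 'https://jitpack.io' }
}

dependencies {
    implementation 'com.github.r7b7:hosp-ai:v1.0.0-alpha.2'
}
```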
Use the builder-based facade API to configure provider, model, timeouts, and inject shared dependencies like HttpClient and ObjectMapper.
```java
import java.net.http.HttpClient;
import java.time.Duration;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.r7b7.entity.Provider;
import com.r7b7.llm.DefaultLlmClient;
import com.r7b7.llm.LlmClient;
import com.r7b7.model.BaseLLMRequest;

// Build your prompt messages however you prefer
var request = new BaseLLMRequest(promptMessages, Map.of("temperature", 0.2), null, null);

LlmClient client = DefaultLlmClient.builder()
        .provider(Provider.OPENAI)
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .model("gpt-4o-mini")
        .requestTimeout(Duration.ofSeconds(60))
        .httpClient(HttpClient.newHttpClient())
        .objectMapper(new ObjectMapper())
        .build();

var response = client.chat(request);
```

This API throws runtime exceptions under com.r7b7.llm.exception when a request fails, which tends to fit enterprise error-handling patterns better than propagating errors via response objects.
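Since failures surface as runtime exceptions rather than as error fields on the response, a typical call site wraps the request in a try/catch. The exact exception class names under com.r7b7.llm.exception are not listed here, so this sketch catches RuntimeException as a stand-in:

```java
try {
    var response = client.chat(request);
    System.out.println(response);
} catch (RuntimeException e) {
    // Assumption: narrow this to the specific types from
    // com.r7b7.llm.exception once you know which apply —
    // auth failures and timeouts may warrant different handling.
    System.err.println("LLM request failed: " + e.getMessage());
    throw e;
}
```

Catching at the call site keeps provider errors from leaking into business logic; in a web application you would more likely translate these into an @ControllerAdvice-style handler.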
For working examples and tutorials, visit the Wiki.
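Because the client is configured once and then reused, a natural fit in a Spring Boot application is to expose it as a bean. A minimal sketch, using only the builder API shown above; the property name openai.api-key is illustrative, not part of the library:

```java
import java.net.http.HttpClient;
import java.time.Duration;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.r7b7.entity.Provider;
import com.r7b7.llm.DefaultLlmClient;
import com.r7b7.llm.LlmClient;

@Configuration
public class LlmConfig {

    // Reuses Spring Boot's auto-configured ObjectMapper and reads the
    // API key from application properties (hypothetical property name).
    @Bean
    public LlmClient llmClient(@Value("${openai.api-key}") String apiKey,
                               ObjectMapper objectMapper) {
        return DefaultLlmClient.builder()
                .provider(Provider.OPENAI)
                .apiKey(apiKey)
                .model("gpt-4o-mini")
                .requestTimeout(Duration.ofSeconds(60))
                .httpClient(HttpClient.newHttpClient())
                .objectMapper(objectMapper)
                .build();
    }
}
```

Injecting the shared ObjectMapper and HttpClient this way keeps serialization settings consistent with the rest of the application.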
