Rest API support #49
Conversation
Pull Request Overview
This PR adds REST API support to the existing TornadoVM LLM application, enabling it to function as a web service alongside the existing command-line interface. The implementation provides OpenAI-compatible endpoints for text completion with both regular and streaming responses.
Key changes:
- Added Spring Boot web framework integration with REST API endpoints
- Refactored sampler creation logic from LlamaApp to Sampler interface for reusability
- Created service layer architecture for model initialization and text generation
Reviewed Changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| ModelLoader.java | Added static loadModel method with AOT support for service initialization |
| Sampler.java | Added static factory methods for sampler creation previously in LlamaApp |
| TokenizerService.java | New service for text encoding/decoding operations |
| ModelInitializationService.java | New service for initializing model, options, and sampler on startup |
| LLMService.java | New service handling text completion generation (regular and streaming) |
| CompletionResponse.java | Response model class for OpenAI-compatible completion API |
| CompletionRequest.java | Request model class for OpenAI-compatible completion API |
| CompletionController.java | REST controller providing /v1/completions endpoints |
| ModelConfiguration.java | Spring configuration for exposing model and options as beans |
| LLMApiApplication.java | Spring Boot application entry point for API service mode |
| LlamaApp.java | Removed sampler and model loading methods (moved to respective classes) |
| pom.xml | Added Spring Boot and Jackson dependencies |
| llama-tornado | Added --service flag and updated argument handling for API mode |
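The table above mentions OpenAI-compatible `/v1/completions` endpoints. As a rough sketch of how a client might construct a request to such an endpoint (the host, port, and exact JSON field names here are assumptions based on the OpenAI completions convention, not details confirmed by this PR):

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.Locale;

public class CompletionRequestSketch {
    // Builds a POST request against a locally running service.
    // The URL and body fields are illustrative assumptions.
    public static HttpRequest build(String prompt, double temperature, int maxTokens) {
        String body = String.format(Locale.ROOT,
                "{\"prompt\": \"%s\", \"temperature\": %.1f, \"max_tokens\": %d}",
                prompt, temperature, maxTokens);
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/v1/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }
}
```

Sending the request (e.g. via `java.net.http.HttpClient`) would only succeed against a running instance of the service, so this sketch stops at request construction.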
```java
// * Loads the language model based on the given options.
// * <p>
// * If Ahead-of-Time (AOT) mode is enabled, attempts to use a pre-loaded compiled model. Otherwise, loads the model from the specified path using the model loader.
// * </p>
// *
// * @param options
// *            the parsed CLI options containing model path and max token limit
// * @return the loaded {@link Model} instance
// * @throws IOException
// *             if the model fails to load
// * @throws IllegalStateException
// *             if AOT loading is enabled but the preloaded model is unavailable
// */
```
The comment block uses incorrect comment syntax with '//' instead of '*' for Javadoc. This should be standard Javadoc format to be properly processed by documentation tools.
Suggested change:

```java
 * Loads the language model based on the given options.
 * <p>
 * If Ahead-of-Time (AOT) mode is enabled, attempts to use a pre-loaded compiled model. Otherwise, loads the model from the specified path using the model loader.
 * </p>
 *
 * @param options
 *            the parsed CLI options containing model path and max token limit
 * @return the loaded {@link Model} instance
 * @throws IOException
 *             if the model fails to load
 * @throws IllegalStateException
 *             if AOT loading is enabled but the preloaded model is unavailable
 */
```
```java
// public String decode(int token) {
//     Model model = initService.getModel();
//     // Convenience method for single token decoding
//     return model.tokenizer().decode(token);
// }
```
Remove commented-out code. If this functionality might be needed later, consider implementing it or documenting why it's disabled.
Suggested change: delete the commented-out method.
```java
}

// Create custom sampler with request-specific parameters
//Sampler sampler = initService.createCustomSampler(temperature, topP, System.currentTimeMillis());
```
Remove commented-out code that references a non-existent method. This creates confusion about the intended functionality.
Suggested change: delete the commented-out line.
```java
    }
}

//Sampler sampler = initService.createCustomSampler(temperature, topP, System.currentTimeMillis());
```
Remove commented-out code that references a non-existent method. This creates confusion about the intended functionality.
Suggested change: delete the commented-out line.
Pull Request Overview
Copilot reviewed 12 out of 12 changed files in this pull request and generated 2 comments.
```java
    return ModelLoader.loadModel(options.modelPath(), options.maxTokens(), true);
}

public static Model loadModel(Path ggufPath, int contextLength, boolean loadWeights) throws IOException {
```
This line calls ModelLoader.loadModel() from within the ModelLoader class itself, but the method name is ambiguous. Consider renaming this overloaded method to avoid confusion, such as 'loadModelFromPath' or similar.
Suggested change:

```java
    return ModelLoader.loadModelFromPath(options.modelPath(), options.maxTokens(), true);
}

public static Model loadModelFromPath(Path ggufPath, int contextLength, boolean loadWeights) throws IOException {
```
```java
// Step 2: Load model weights
System.out.println("\nStep 2: Loading model...");
System.out.println("Loading model from: " + options.modelPath());
model = ModelLoader.loadModel(options.modelPath(), options.maxTokens(), true);
```
The service is calling the static loadModel method directly instead of using the new loadModel(Options) method that was added to ModelLoader. This bypasses the AOT functionality. Consider using ModelLoader.loadModel(options) instead.
Suggested change:

```java
model = ModelLoader.loadModel(options);
```
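The reviewer's point, that the Options-based overload should be the entry point so the AOT path is not bypassed, can be illustrated with a self-contained sketch of the intended dispatch. The `Model` record and field names here are hypothetical stand-ins for the PR's actual types, and the behavior follows the Javadoc quoted earlier (prefer the preloaded AOT model, otherwise load from the GGUF path):

```java
import java.io.IOException;
import java.nio.file.Path;

public class ModelLoaderSketch {
    // Hypothetical stand-in for the PR's Model type.
    record Model(String source) {}

    // Set when an AOT-compiled model has been preloaded.
    static Model preloadedModel;

    // Mirrors the documented contract: use the preloaded model in AOT mode,
    // otherwise fall back to loading from the given path.
    static Model loadModel(boolean useAot, Path ggufPath, int contextLength) throws IOException {
        if (useAot) {
            if (preloadedModel == null) {
                throw new IllegalStateException("AOT enabled but preloaded model is unavailable");
            }
            return preloadedModel;
        }
        return loadModelFromPath(ggufPath, contextLength);
    }

    static Model loadModelFromPath(Path ggufPath, int contextLength) throws IOException {
        return new Model(ggufPath.toString()); // stub: real code parses the GGUF file
    }
}
```

Routing every caller through the Options-aware entry point keeps the AOT decision in one place instead of scattering it across services.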
Force-pushed 2b2ed4d to ba97fe9.
Force-pushed ba97fe9 to 6b76b08.
Pull Request Overview
Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.
```java
if (!generatedTokens.isEmpty() && stopTokens.contains(generatedTokens.getLast())) {
    generatedTokens.removeLast();
```
The `getLast()` and `removeLast()` methods are Java 21+ features. This code will fail to compile on Java versions prior to 21, potentially causing compatibility issues if the project needs to support older Java versions.
Suggested change:

```java
if (!generatedTokens.isEmpty() && stopTokens.contains(generatedTokens.get(generatedTokens.size() - 1))) {
    generatedTokens.remove(generatedTokens.size() - 1);
```
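The index-based replacement can be verified in isolation with a small helper; this sketch is self-contained and the names are illustrative, not the PR's:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class StopTokenTrim {
    // Drops a single trailing stop token using List APIs available
    // before Java 21 (get/remove by index instead of getLast/removeLast).
    public static List<Integer> trim(List<Integer> tokens, Set<Integer> stopTokens) {
        List<Integer> result = new ArrayList<>(tokens);
        if (!result.isEmpty() && stopTokens.contains(result.get(result.size() - 1))) {
            result.remove(result.size() - 1); // remove(int) targets the index, not the boxed value
        }
        return result;
    }
}
```

Note that `remove(result.size() - 1)` resolves to the `remove(int index)` overload, which is exactly why the index form is safe here even for a `List<Integer>`.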
```java
this.id = "cmpl-" + System.currentTimeMillis();
this.created = System.currentTimeMillis() / 1000;
```
Using `System.currentTimeMillis()` twice in quick succession may result in different timestamps for `id` and `created`. Consider calling it once and storing the result in a variable to ensure consistency.
Suggested change:

```java
long nowMillis = System.currentTimeMillis();
this.id = "cmpl-" + nowMillis;
this.created = nowMillis / 1000;
```
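The fix can be seen working in a minimal stand-alone class (the class and field names here are illustrative, modeled on the snippet above rather than copied from the PR):

```java
public class CompletionIds {
    final String id;
    final long created;

    CompletionIds() {
        // Capture the clock exactly once so id and created always agree,
        // even if construction straddles a millisecond boundary.
        long nowMillis = System.currentTimeMillis();
        this.id = "cmpl-" + nowMillis;
        this.created = nowMillis / 1000;
    }
}
```

With a single capture, `created` is always the second-resolution truncation of the millisecond value embedded in `id`, which the two-call version cannot guarantee.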
```java
if (model.tokenizer().shouldDisplayToken(token)) {
    String tokenText = model.tokenizer().decode(List.of(token));
    emitter.send(SseEmitter.event().data(tokenText));
    //emitter.send(SseEmitter.event().comment("flush"));
```
This commented-out code should either be removed if no longer needed or properly implemented if it serves a purpose for flushing the SSE stream.
Suggested change: delete the commented-out line.
No description provided.