
Conversation

mikepapadim (Member)

No description provided.

@mikepapadim mikepapadim marked this pull request as ready for review September 16, 2025 15:06
@mikepapadim mikepapadim self-assigned this Sep 16, 2025
@Copilot Copilot AI (Contributor) left a comment

Pull Request Overview

This PR adds REST API support to the TornadoVM LLM application, enabling it to run as a web service alongside the existing command-line interface. The implementation provides OpenAI-compatible endpoints for text completion with both regular and streaming responses.

Key changes:

  • Added Spring Boot web framework integration with REST API endpoints
  • Refactored sampler creation logic from LlamaApp to Sampler interface for reusability
  • Created service layer architecture for model initialization and text generation
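The request/response shapes behind the OpenAI-compatible endpoint described above can be sketched as plain Java records. This is a hypothetical sketch: the actual field names and types in CompletionRequest.java and CompletionResponse.java may differ.

```java
// Hypothetical sketch of the OpenAI-compatible completion models in this PR.
// Field names (prompt, maxTokens, id, created, choices) are assumptions
// modeled on OpenAI's completion objects, not confirmed from the diff.
import java.util.List;

public class CompletionShapes {

    // Request body for POST /v1/completions (field names assumed).
    public record CompletionRequest(String prompt, int maxTokens, double temperature, boolean stream) {}

    // Response body; id and created mirror OpenAI's completion objects.
    public record CompletionResponse(String id, long created, String model, List<String> choices) {}

    public static void main(String[] args) {
        CompletionRequest req = new CompletionRequest("Once upon a time", 16, 0.7, false);
        long nowMillis = System.currentTimeMillis();
        CompletionResponse resp = new CompletionResponse(
                "cmpl-" + nowMillis, nowMillis / 1000, "llama", List.of("Once upon a time there was"));
        System.out.println(resp.id() + " -> " + resp.choices().get(0));
    }
}
```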

Reviewed Changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.

Summary per file:

  • ModelLoader.java: Added a static loadModel method with AOT support for service initialization
  • Sampler.java: Added static factory methods for sampler creation, previously in LlamaApp
  • TokenizerService.java: New service for text encoding/decoding operations
  • ModelInitializationService.java: New service that initializes the model, options, and sampler on startup
  • LLMService.java: New service handling text completion generation (regular and streaming)
  • CompletionResponse.java: Response model class for the OpenAI-compatible completion API
  • CompletionRequest.java: Request model class for the OpenAI-compatible completion API
  • CompletionController.java: REST controller providing the /v1/completions endpoints
  • ModelConfiguration.java: Spring configuration exposing the model and options as beans
  • LLMApiApplication.java: Spring Boot application entry point for API service mode
  • LlamaApp.java: Removed sampler and model-loading methods (moved to their respective classes)
  • pom.xml: Added Spring Boot and Jackson dependencies
  • llama-tornado: Added a --service flag and updated argument handling for API mode


Comment on lines 81 to 93
// * Loads the language model based on the given options.
// * <p>
// * If Ahead-of-Time (AOT) mode is enabled, attempts to use a pre-loaded compiled model. Otherwise, loads the model from the specified path using the model loader.
// * </p>
// *
// * @param options
// * the parsed CLI options containing model path and max token limit
// * @return the loaded {@link Model} instance
// * @throws IOException
// * if the model fails to load
// * @throws IllegalStateException
// * if AOT loading is enabled but the preloaded model is unavailable
// */
Copilot AI Sep 16, 2025

The comment block uses incorrect comment syntax with '//' instead of '*' for Javadoc. This should be standard Javadoc format to be properly processed by documentation tools.

Suggested change
// * Loads the language model based on the given options.
// * <p>
// * If Ahead-of-Time (AOT) mode is enabled, attempts to use a pre-loaded compiled model. Otherwise, loads the model from the specified path using the model loader.
// * </p>
// *
// * @param options
// * the parsed CLI options containing model path and max token limit
// * @return the loaded {@link Model} instance
// * @throws IOException
// * if the model fails to load
// * @throws IllegalStateException
// * if AOT loading is enabled but the preloaded model is unavailable
// */
* Loads the language model based on the given options.
* <p>
* If Ahead-of-Time (AOT) mode is enabled, attempts to use a pre-loaded compiled model. Otherwise, loads the model from the specified path using the model loader.
* </p>
*
* @param options
* the parsed CLI options containing model path and max token limit
* @return the loaded {@link Model} instance
* @throws IOException
* if the model fails to load
* @throws IllegalStateException
* if AOT loading is enabled but the preloaded model is unavailable
*/


Comment on lines 27 to 31
// public String decode(int token) {
// Model model = initService.getModel();
// // Convenience method for single token decoding
// return model.tokenizer().decode(token);
// }
Copilot AI Sep 16, 2025

Remove commented-out code. If this functionality might be needed later, consider implementing it or documenting why it's disabled.

Suggested change
// public String decode(int token) {
// Model model = initService.getModel();
// // Convenience method for single token decoding
// return model.tokenizer().decode(token);
// }


}

// Create custom sampler with request-specific parameters
//Sampler sampler = initService.createCustomSampler(temperature, topP, System.currentTimeMillis());
Copilot AI Sep 16, 2025

Remove commented-out code that references a non-existent method. This creates confusion about the intended functionality.

Suggested change
//Sampler sampler = initService.createCustomSampler(temperature, topP, System.currentTimeMillis());


}
}

//Sampler sampler = initService.createCustomSampler(temperature, topP, System.currentTimeMillis());
Copilot AI Sep 16, 2025

Remove commented-out code that references a non-existent method. This creates confusion about the intended functionality.

Suggested change
//Sampler sampler = initService.createCustomSampler(temperature, topP, System.currentTimeMillis());


@mikepapadim mikepapadim requested a review from Copilot September 18, 2025 09:20
@Copilot Copilot AI (Contributor) left a comment

Pull Request Overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated 2 comments.



Comment on lines 102 to 105
return ModelLoader.loadModel(options.modelPath(), options.maxTokens(), true);
}

public static Model loadModel(Path ggufPath, int contextLength, boolean loadWeights) throws IOException {
Copilot AI Sep 18, 2025

This line calls ModelLoader.loadModel() from within the ModelLoader class itself, but the method name is ambiguous. Consider renaming this overloaded method to avoid confusion, such as 'loadModelFromPath' or similar.

Suggested change
return ModelLoader.loadModel(options.modelPath(), options.maxTokens(), true);
}
public static Model loadModel(Path ggufPath, int contextLength, boolean loadWeights) throws IOException {
return ModelLoader.loadModelFromPath(options.modelPath(), options.maxTokens(), true);
}
public static Model loadModelFromPath(Path ggufPath, int contextLength, boolean loadWeights) throws IOException {


// Step 2: Load model weights
System.out.println("\nStep 2: Loading model...");
System.out.println("Loading model from: " + options.modelPath());
model = ModelLoader.loadModel(options.modelPath(), options.maxTokens(), true);
Copilot AI Sep 18, 2025

The service is calling the static loadModel method directly instead of using the new loadModel(Options) method that was added to ModelLoader. This bypasses the AOT functionality. Consider using ModelLoader.loadModel(options) instead.

Suggested change
model = ModelLoader.loadModel(options.modelPath(), options.maxTokens(), true);
model = ModelLoader.loadModel(options);


@Copilot Copilot AI (Contributor) left a comment

Pull Request Overview

Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.



Comment on lines +117 to +118
if (!generatedTokens.isEmpty() && stopTokens.contains(generatedTokens.getLast())) {
generatedTokens.removeLast();
Copilot AI Sep 19, 2025

The getLast() and removeLast() methods are Java 21+ features. This code will fail to compile on Java versions prior to 21, potentially causing compatibility issues if the project needs to support older Java versions.

Suggested change
if (!generatedTokens.isEmpty() && stopTokens.contains(generatedTokens.getLast())) {
generatedTokens.removeLast();
if (!generatedTokens.isEmpty() && stopTokens.contains(generatedTokens.get(generatedTokens.size() - 1))) {
generatedTokens.remove(generatedTokens.size() - 1);

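The compatibility point above can be made concrete with a small, self-contained sketch of the suggested pre-Java-21 form. The stop-token values here are illustrative, not taken from the PR.

```java
// Pre-Java-21-compatible trimming of a trailing stop token, following the
// suggestion above. List.getLast()/removeLast() require Java 21+, whereas
// get(size - 1)/remove(size - 1) work on any List.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class TrimStopToken {

    public static void trimTrailingStop(List<Integer> tokens, Set<Integer> stopTokens) {
        // Drop the final token only if it is a stop token.
        if (!tokens.isEmpty() && stopTokens.contains(tokens.get(tokens.size() - 1))) {
            tokens.remove(tokens.size() - 1);
        }
    }

    public static void main(String[] args) {
        List<Integer> generated = new ArrayList<>(List.of(10, 20, 2));
        trimTrailingStop(generated, Set.of(2)); // 2 plays the role of an EOS id
        System.out.println(generated); // [10, 20]
    }
}
```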

Comment on lines +71 to +72
this.id = "cmpl-" + System.currentTimeMillis();
this.created = System.currentTimeMillis() / 1000;
Copilot AI Sep 19, 2025

Using System.currentTimeMillis() twice in quick succession may result in different timestamps for id and created. Consider calling it once and storing the result in a variable to ensure consistency.

Suggested change
this.id = "cmpl-" + System.currentTimeMillis();
this.created = System.currentTimeMillis() / 1000;
long nowMillis = System.currentTimeMillis();
this.id = "cmpl-" + nowMillis;
this.created = nowMillis / 1000;

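A minimal sketch of the single-read pattern suggested above, reading the clock once so the "cmpl-" id and the created field can never disagree. The Stamp record here is a stand-in for the actual CompletionResponse fields.

```java
// Consistent id/created generation per the suggestion above: read
// System.currentTimeMillis() once and derive both values from it.
public class TimestampConsistency {

    // Stand-in for the id/created pair on CompletionResponse.
    public record Stamp(String id, long created) {}

    public static Stamp newStamp() {
        long nowMillis = System.currentTimeMillis(); // single clock read
        return new Stamp("cmpl-" + nowMillis, nowMillis / 1000);
    }

    public static void main(String[] args) {
        Stamp s = newStamp();
        // The id's millisecond suffix and created (seconds) must agree.
        long millis = Long.parseLong(s.id().substring("cmpl-".length()));
        System.out.println(millis / 1000 == s.created()); // true
    }
}
```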

if (model.tokenizer().shouldDisplayToken(token)) {
String tokenText = model.tokenizer().decode(List.of(token));
emitter.send(SseEmitter.event().data(tokenText));
//emitter.send(SseEmitter.event().comment("flush"));
Copilot AI Sep 19, 2025

This commented-out code should either be removed if no longer needed or properly implemented if it serves a purpose for flushing the SSE stream.

Suggested change
//emitter.send(SseEmitter.event().comment("flush"));

