1 change: 1 addition & 0 deletions docs/developer-lightspeed-guide/master.adoc
@@ -4,6 +4,7 @@ include::topics/templates/document-attributes.adoc[]
[id="mta-developer-lightspeed"]
= Configuring and Using Red Hat Developer Lightspeed for MTA


:toc:
:toclevels: 4
:numbered:
@@ -17,7 +17,7 @@ endif::[]
[role="_abstract"]
{mta-dl-plugin} provides the large language model (LLM) with the contextual prompt, migration hints, and solved examples to generate suggestions for resolving issues identified in the current code.

{mta-dl-plugin} is designed to be model agnostic. It works with LLMs that run in different environments (in local containers, as local AI, or as a shared service) to support analyzing Java applications in a wide range of scenarios. You can choose an LLM from well-known providers, local models that you run from Ollama or Podman Desktop, or OpenAI API-compatible models.

The quality of the code fix suggestions produced to resolve issues detected through an analysis depends on the LLM's capabilities.
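Because the extension accepts OpenAI API-compatible models, any provider that understands the standard chat-completion request shape can serve suggestions. The following minimal sketch builds such a request body; the model name and prompts are hypothetical illustrations, not values taken from the product:

```python
import json

def build_chat_request(model: str, system_prompt: str, user_prompt: str) -> str:
    """Build the JSON body for a POST to an OpenAI-compatible
    /v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0,  # deterministic output suits code-fix suggestions
    }
    return json.dumps(payload)

# Hypothetical usage: a locally served model fixing a namespace issue.
body = build_chat_request(
    "granite-code",  # placeholder model name, e.g. one served by Ollama
    "You migrate Java EE code to Jakarta EE.",
    "Replace javax.* imports in this file.",
)
```

Any endpoint that accepts this shape, whether a hosted provider or a local container, can plug into a model-agnostic workflow.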

@@ -14,13 +14,13 @@ endif::[]
= Solution Server configurations
:context: solution-server-configurations

[role="_abstract"]
Solution Server is a component that allows {mta-dl-plugin} to build a collective memory of source code changes from all analyses performed in an organization. When you request a code fix for issues in Visual Studio Code, the Solution Server draws on previous patterns of how source code changed to resolve issues (also called solved examples) that were similar to those in the current file, and suggests a resolution with a higher confidence level derived from previous solutions. After you accept a suggested code fix, the Solution Server works with the large language model (LLM) to improve the hints about the issue that become part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases.

The Solution Server delivers two primary benefits to users:

* *Contextual Hints*: It surfaces examples of past migration solutions, including successful user modifications and accepted fixes, offering actionable hints for difficult or previously unsolved migration problems.
* *Migration Success Metrics*: It exposes detailed success metrics for each migration rule, derived from real-world usage data. IDEs or automation tools can use these metrics to present users with a “confidence level” or likelihood of {mta-dl-plugin} successfully migrating a given code segment.
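One way an IDE or automation tool could derive the confidence level described above is as a per-rule acceptance ratio. This is a minimal sketch with hypothetical rule IDs and counts, not the Solution Server's actual metric:

```python
def confidence_level(accepted: int, attempts: int) -> float:
    """Fraction of fix attempts for a rule that users accepted.

    Returns 0.0 when the rule has no recorded attempts yet."""
    return accepted / attempts if attempts else 0.0

# Hypothetical per-rule usage data: (accepted fixes, total attempts).
rule_stats = {
    "javax-to-jakarta-001": (18, 20),
    "ejb-remote-007": (3, 12),
}
levels = {rule: confidence_level(a, t) for rule, (a, t) in rule_stats.items()}
```

A rule with many accepted fixes would surface as high confidence, signaling that an automated fix for that code segment is likely to succeed.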

Solution Server is an optional component in {mta-dl-plugin}. You must complete the following configurations before you can place a code resolution request.

@@ -54,4 +54,4 @@ include::proc_tackle-enable-dev-lightspeed.adoc[leveloffset=+1]
ifdef::parent-context-of-solution-server-configurations[:context: {parent-context-of-solution-server-configurations}]
ifndef::parent-context-of-solution-server-configurations[:!context:]

:!solution-server-configurations:
@@ -9,13 +9,13 @@
[role="_abstract"]
{mta-dl-full} generates logs to debug issues specific to the extension host and the {ProductShortName} analysis and RPC server. You can also configure the log level for the {mta-dl-plugin} in the extension settings. The default log level is *debug*.

The extension stores logs as `extension.log` with automatic rotation. The log file is limited to 10 MB and the system retains three files. The analyzer stores RPC logs as `analyzer.log` without rotation.
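The rotation policy above (a 10 MB cap with three retained files) maps directly onto a standard size-based rotating handler. The following is a sketch of that policy in Python's standard library, not the extension's actual implementation:

```python
import logging
from logging.handlers import RotatingFileHandler

# Sketch of the documented policy: rotate at 10 MB and keep three files
# in total (extension.log, extension.log.1, extension.log.2).
handler = RotatingFileHandler(
    "extension.log",
    maxBytes=10 * 1024 * 1024,  # rotate once the active file reaches 10 MB
    backupCount=2,              # two rotated files plus the active one
)
logger = logging.getLogger("extension")
logger.setLevel(logging.DEBUG)  # mirrors the extension's default log level
logger.addHandler(handler)
logger.debug("analysis started")
```

With `backupCount=2`, the oldest file is discarded on each rollover, which keeps total disk usage bounded at roughly 30 MB.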

[id="dev-lightspeed-archive-logs_{context}"]

== Archiving the logs

To archive the logs as a zip file, type `{ProductShortName}: Generate Debug Archive` in the Visual Studio Code Command Palette and select the information type that must be archived as a log file.

The archive command captures all relevant log files in a zip archive at the specified location in your project. By default, you can access the archived logs in the `.vscode` directory of your project.
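Conceptually, the archive step collects the log files described earlier into one zip file inside the project. This sketch mirrors that behavior with hypothetical file names (the actual archive name produced by the command is not specified here):

```python
import tempfile
import zipfile
from pathlib import Path

def archive_logs(log_dir: Path, out_dir: Path) -> Path:
    """Collect every *.log file under log_dir into a single zip archive,
    written to out_dir (for example, a project's .vscode directory)."""
    out_dir.mkdir(parents=True, exist_ok=True)
    archive = out_dir / "debug-archive.zip"  # hypothetical archive name
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for log in sorted(log_dir.rglob("*.log")):
            zf.write(log, arcname=log.name)
    return archive

# Usage against a throwaway directory standing in for a project.
logs = Path(tempfile.mkdtemp())
(logs / "extension.log").write_text("extension log")
(logs / "analyzer.log").write_text("rpc log")
archive = archive_logs(logs, logs / ".vscode")
```

Bundling both the extension and analyzer RPC logs into one archive is what makes the output convenient to attach to a support case.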

@@ -35,4 +35,4 @@ You can access the logs in the following ways:

* *Output panel*: Select `{mta-dl-plugin}` from the drop-down menu.

* *Webview logs*: You can also inspect webview content by using the webview logs. To access the webview logs, type `Open Webview Developer Tools` in the Visual Studio Code Command Palette.
4 changes: 2 additions & 2 deletions docs/topics/developer-lightspeed/con_installation.adoc
@@ -7,6 +7,6 @@
= Installation

[role="_abstract"]
You can install the {ProductFullName} 8.0.0 Visual Studio Code plugin from the link:https://marketplace.visualstudio.com/search?term=migration%20toolkit&target=VSCode&category=All%20categories&sortBy=Relevance[Visual Studio Code marketplace].

You can use the {ProductShortName} Visual Studio Code plugin to perform analysis and optionally enable {mta-dl-full} to use generative AI capabilities. You can fix code issues before migrating the application to target technologies by using the generative AI capabilities.
@@ -7,16 +7,16 @@
= Introduction to the {mta-dl-plugin}

[role="_abstract"]
Starting from 8.0.0, {ProductFullName} integrates with large language models (LLM) through the {mta-dl-full} component in the Visual Studio Code extension. You can use {mta-dl-plugin} to apply LLM-driven code changes to resolve issues found through static code analysis of Java applications.

[id="use-case-ai-code-fix_{context}"]
== Use case for AI-driven code fixes

{ProductFirstRef} performs the static code analysis for a specified target technology to which you want to migrate your applications. Red Hat provides 2400+ analysis rules in {ProductShortName} for many Java technologies, and you can extend the ruleset for custom frameworks or new technologies by creating custom rules.

The static code analysis describes the issues in your code that you must resolve. As you perform analysis for a large portfolio of applications, the issue description and the rule definition that might contain extra information form a large corpus of data that contains repetitive patterns of problem definitions and solutions.

Migrators do duplicate work by resolving issues that repeat across applications in different migration waves.

[id="how-developerlightspped-works_{context}"]
== How {mta-dl-plugin} works
@@ -29,41 +29,41 @@ The context is a combination of the source code, the issue description, and solv

* Description of issues detected by {ProductShortName} when you run a static code analysis for a given set of target technologies.

* (Optional) The default and custom rules might contain extra information that you include, which can help {mta-dl-plugin} define the context.
+
* Solved examples constitute code changes from other migrations and a pattern of resolution for an issue that can be used in the future. The system creates a solved example when a Migrator accepts a resolution that updates the code in a previous analysis, or when a Migrator manually fixes an unfamiliar issue in a legacy application. The Solution Server stores solved examples.
+
More solved examples for an issue enhance the context and improve the success metrics of the rules that trigger the issue. A higher success metric for an issue indicates a higher confidence level associated with the accepted resolutions for that issue in previous analyses.

* (Optional) If you enable the Solution Server, it extracts a pattern of resolution, called the migration hint, that can be used by the LLM to generate a more accurate fix suggestion in a future analysis.
+
The improvement in the quality of migration hints results in more accurate code resolutions. Accurate code resolutions from the LLM result in the user accepting an update to the code. The Solution Server stores the updated code to generate better migration hints in the future.
+
This cyclical improvement of resolution patterns from the Solution Server and improved migration hints leads to more reliable code changes as you migrate applications in different migration waves.
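The context components listed above can be pictured as a simple concatenation of the required and optional pieces. This is an illustrative sketch, not the product's actual prompt format; the sample issue and hint text are hypothetical:

```python
def build_context(source: str, issues: list[str], hints: list[str],
                  solved_examples: list[str]) -> str:
    """Assemble source code, issue descriptions, and optional hints and
    solved examples into one context string for the LLM.

    Optional pieces are simply omitted when they are empty."""
    parts = [f"Source under migration:\n{source}"]
    if issues:
        parts.append("Issues:\n" + "\n".join(f"- {i}" for i in issues))
    if hints:
        parts.append("Migration hints:\n" + "\n".join(f"- {h}" for h in hints))
    if solved_examples:
        parts.append("Solved examples:\n" + "\n\n".join(solved_examples))
    return "\n\n".join(parts)

# Hypothetical usage: one issue and one hint, no solved examples yet.
ctx = build_context(
    "import javax.ejb.Stateless;",
    ["javax.ejb must be replaced with jakarta.ejb"],
    ["Switch the javax namespace to jakarta"],
    [],
)
```

Keeping the context limited to the issue at hand is what lets a large codebase be migrated piecewise within an LLM's bounded context window.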

[id="modes-developer-lightspeed_{context}"]
== Requesting code fixes in {mta-dl-plugin}

You can request AI-assisted code resolutions that get context from multiple potential sources, such as analysis issues, IDE diagnostic information, and past migration data via the Solution Server.

The Solution Server acts as an institutional memory that stores changes to source code after analyzing applications in your organization. This helps you use the recurring patterns of solutions for issues that repeat in many applications.

When you use the Solution Server, {mta-dl-plugin} suggests a code resolution based on solved examples or code changes in past analysis. You can view a diff of the updated portions of the code and the original source code to do a manual review.

It also enables you to control the analysis through manual reviews of the suggested AI resolutions: you can accept, reject, or edit the suggested code changes while reducing the time and effort required to prepare your application for migration.

In the agentic AI mode, {mta-dl-plugin} streams an automated analysis of the code in a loop until it resolves all issues and changes the code with the updates. In the initial run, the AI agent:

* Plans the context to define the issues.
* Chooses a suitable sub agent for the analysis task.
* Works with the LLM to generate fix suggestions. The tool displays the reasoning transcript and the files to be changed to the user.
* Applies the changes to the code once the user approves the updates.

If you accept that the agentic AI must continue to make changes, it compiles the code and runs a partial analysis. In this iteration, the agentic AI attempts to fix diagnostic issues (if any) generated by tools that you installed in the Visual Studio Code IDE. You can review the changes and accept the agentic AI's suggestion to address these diagnostic issues.

After each iteration of applying changes to the code, the agentic AI asks if you want the agent to continue fixing more issues. When you accept, it runs another iteration of automated analysis until it has resolved all issues or made a maximum of two attempts to fix an issue.

Agentic AI generates a new preview in each iteration when it updates the code with the suggested resolutions. The time taken by the agentic AI to complete all iterations depends on the number of new diagnostic issues that the tool detects in the code.
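The iteration policy described above, keep fixing until every issue is resolved or an issue has consumed two attempts, can be sketched as a small retry loop. The fixer callback and issue names are hypothetical stand-ins for the agent's real fix-and-review step:

```python
def agentic_loop(issues: list[str], try_fix, max_attempts: int = 2) -> dict:
    """Drive fix attempts until every issue is resolved or an issue has
    used up max_attempts tries.

    try_fix(issue) stands in for one generate-review cycle and returns
    True when the user accepts the suggested fix."""
    attempts: dict[str, int] = {}
    unresolved = list(issues)
    while unresolved:
        issue = unresolved.pop(0)
        attempts[issue] = attempts.get(issue, 0) + 1
        if try_fix(issue):
            continue                  # resolved; move on to the next issue
        if attempts[issue] < max_attempts:
            unresolved.append(issue)  # queue one more try for later
        # otherwise give up on this issue after max_attempts tries
    return attempts

# A stub fixer that succeeds immediately, except for "issue-b",
# which only succeeds on its second attempt.
seen: dict[str, int] = {}
def flaky_fix(issue: str) -> bool:
    seen[issue] = seen.get(issue, 0) + 1
    return issue != "issue-b" or seen[issue] == 2

counts = agentic_loop(["issue-a", "issue-b"], flaky_fix)
```

Capping attempts per issue is the design choice that keeps the loop from spinning indefinitely on an issue the LLM cannot fix.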

//You can consider using the demo mode for running {mta-dl-plugin} when you need to perform analysis but have a limited network connection for {mta-dl-plugin} to sync with the LLM. The demo mode stores the input data as a hash and past LLM calls in a cache. The cache is stored in a chosen location in your file system for later use. The hash of the inputs is used to determine which LLM call must be used in the demo mode. After you enable the demo mode and configure the path to your cached LLM calls in the {mta-dl-plugin} settings, you can rerun an analysis for the same set of files using the responses to a previous LLM call.

@@ -85,5 +85,5 @@ For more information about the support scope of Red Hat Technology Preview featu
* *Iterative refinement* - {mta-dl-plugin} can include an agent that iterates through the source code to run a series of automated analyses that resolve both code base issues and diagnostic issues.
* *Contextual code generation* - By leveraging AI for static code analysis, {mta-dl-plugin} breaks down complex problems into more manageable ones, providing the LLM with focused context to generate meaningful results. This helps overcome the limited context size of LLMs when dealing with large codebases.
* *No fine-tuning* - You do not need to fine-tune your model with a suitable data set for analysis, which leaves you free to use and switch LLM models to respond to your requirements.
* *Learning and Improvement* - As you migrate more parts of a codebase with {mta-dl-plugin}, it can use RAG to learn from the available data and provide better recommendations in an application analysis.
