
Conversation

Pkylas007
Collaborator

@Pkylas007 Pkylas007 commented Jul 31, 2025

JIRA

Version

  • 8.0.0

Preview

@Pkylas007 Pkylas007 force-pushed the mta-5378-developer-lightspeed-intro branch 2 times, most recently from 3fc75f6 to 82f78d7 on July 31, 2025 06:15
@Pkylas007 Pkylas007 force-pushed the mta-5378-developer-lightspeed-intro branch from 555df2a to de83b5f on August 7, 2025 10:01
@Pkylas007
Collaborator Author

@anarnold97 Thank you for your comments! I have updated the content based on your review feedback.

@Pkylas007
Collaborator Author

@jwmatthews @sshveta Could you provide me your feedback, please?

Member

@jwmatthews jwmatthews left a comment


@Pkylas007

Please look over the specific way we refer to the product name and help so we are conforming to what marketing/branding has planned.

I spoke to @JonathanR19 (Product Marketing Manager) to confirm how we should refer to "Red Hat Developer Lightspeed for migration toolkit for applications"

Jonathan shared:

These are the short names that we got approval to use:

  • Developer Lightspeed for migration toolkit for applications
  • Developer Lightspeed for MTA

We should probably use the full name for at least the first mention of the tool, which would be:
Red Hat Developer Lightspeed for migration toolkit for applications

Also note that I am using the correct capitalization in each variation.

[id="intro-to-the-developer-lightspeed_{context}"]
= Introduction to the {ProductShortName} Developer Lightspeed

Starting from 8.0.0, you can use {ProductFullName} Developer Lightspeed for application modernization in your organization by running Artificial Intelligence-driven static code analysis for Java applications. Developer Lightspeed gains context for an analysis from historical changes to source code through previous analysis (called solved examples) and the description of issues available in both default and custom rule sets. Thus, when you deploy Developer Lightspeed for analyzing your entire application portfolio, it enables you to be consistent with the common fixes you need to make in the source code of any Java application. It also enables you to control the analysis through manual reviews of the suggested AI fixes by accepting or rejecting the changes while reducing the overall time and effort required to prepare your application for migration.
Member

@jwmatthews jwmatthews Aug 29, 2025


It may be worth rewording this a little. I'll share more info so you can decide what the best path is to take. Please don't feel compelled to use everything I share below; I'm just trying to give a bit more info so you can ultimately decide what you think is best to include.

Developer Lightspeed gains context for an analysis from historical changes to source code through previous analysis (called solved examples) and the description of issues available in both default and custom rule sets.

It may help if I provide a bit more info on what the "solved example" is.

The "solved examples", are not so much previous analysis info, that is related, but there is more.

The solved example contains how a previous analysis issue was "solved" it includes:

  • Before Code
  • Updated Code (what fixed the issue and was ultimately accepted by the Migrator)
  • Hint ... this hint is the big piece of what is so helpful. It is calculated by working with an LLM: we give it 1) the analysis problem, 2) the before code, and 3) the modified code, then ask the LLM to extract the pattern/instructions for how to fix this in the future. We call that the "hint".
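
To make the shape of a solved example concrete, here is a minimal Python sketch; the names (`SolvedExample`, `build_hint_prompt`) are illustrative stand-ins, not the product's actual API:

```python
from dataclasses import dataclass

@dataclass
class SolvedExample:
    """One accepted fix recorded for a single analysis issue (names are illustrative)."""
    rule_id: str        # the analysis rule that flagged the issue
    before_code: str    # snippet as it looked when the issue was reported
    updated_code: str   # snippet the Migrator ultimately accepted
    hint: str           # pattern/instructions distilled by an LLM for future fixes

def build_hint_prompt(problem: str, before_code: str, updated_code: str) -> str:
    """Assemble a prompt asking an LLM to distill a reusable hint from one solved issue."""
    return (
        "An analysis flagged the following problem:\n"
        f"{problem}\n\n"
        "Original code:\n"
        f"{before_code}\n\n"
        "Accepted fix:\n"
        f"{updated_code}\n\n"
        "Describe, as concise instructions, the pattern used to fix this problem "
        "so the same class of issue can be fixed the same way in the future."
    )
```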

The key pieces I would emphasize are:

  • We provide 2400+ analysis rules with the product for various Java technologies AND this is extensible for custom frameworks or new technologies via custom rules

  • Each analysis rule allows capturing additional contextual information, such as guidance to the LLM on how to resolve the specific problem; we call this a "hint". This is an extensible, easy mechanism for Migrators to complement the migration information by quickly adding guidance to the LLM on how to address any analysis issue they identify. (This is a RAG pattern implemented with Analysis issues as the similarity search, as opposed to the more popular vector search.)

  • The Solution Server augments the above "hint", which a rules author can associate with an Analysis Rule. As Migrators use MTA for migrations and solve difficult problems (the value comes in when the Migrator uses MTA for a fix, the fix is not ideal, the Migrator manually modifies the fix to correct it, and then accepts it), this info is recorded and processed. MTA works with an LLM to produce a Hint based on what the Migrator corrected, which then improves the contextual information.
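
As a rough illustration of the "RAG via analysis issues" idea in the second bullet, the retrieval can be keyed on the analysis issue itself rather than on a vector index (hypothetical names and rule ID, not real product code):

```python
# Hypothetical illustration: retrieval keyed on the analysis issue instead of a vector index.
HINTS_BY_RULE: dict[str, list[str]] = {
    "javax-to-jakarta-00001": [
        "Replace javax.persistence imports with their jakarta.persistence equivalents.",
    ],
}

def retrieve_hints_for_issue(rule_id: str) -> list[str]:
    """Return any hints previously associated with this analysis rule.

    The analysis issue (rule_id) acts as the similarity key, so no
    embedding or vector search is needed to find relevant guidance.
    """
    return HINTS_BY_RULE.get(rule_id, [])
```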


The main components of Developer Lightspeed are the large language model (LLM), a Visual Studio Code (VS Code) extension, and the solution server.

When you initiate an analysis, Developer Lightspeed creates a context to generate a hint or a prompt that is shared with your LLM. The context is drawn from profile configuration that contains the target technology or the source technology that you configure for migration. Based on the source or target configuration, Developer Lightspeed checks the associated rule set containing rules that describe what needs to be fixed as the first input for the LLM prompt. As the second input, Developer Lightspeed uses the solved examples that are stored in the solution server from previous analyses in which the source code changed after applying the fixes.
Member


When you initiate an analysis, Developer Lightspeed creates a context to generate a hint or a prompt that is shared with your LLM.

May want to rephrase this. We do create context and a prompt and work with an LLM, but we do that only when a code suggestion is requested, not at the time of analysis. Analysis info is an input to this context and is passed into the prompt.

Will share some more info to help clarify the technical pieces.

Analysis is purely static code analysis; there is no LLM interaction for Analysis.

Think of the analysis as the first step: it scans the code to "understand" what the potential concerns are, then produces a list of those concerns. We call each of those concerns a "violation".

The user will view those concerns and may elect to "fix" one. If they decide to ask MTA to fix it, then we move to the next step: collecting "context" related to the concern and working with an LLM to request a code suggestion.

The context we pass includes several things...

  • The target technology the analysis ran against serves as an "aim" for what the LLM will try to move our code to
  • Description of the Analysis Issue seen
  • [Optional] Extra information included in the Rule associated with the Analysis Issue; this often serves as extra guidance, or a hint, to the LLM on how to fix the problem. This is info the Rules Author may or may not include for each Rule
  • [Optional] Hint produced from the solution server; potentially, if this is a problem that has been seen and solved in the past, the solution server may have been able to work with a solved version of the code and extract a pattern (or a hint) to help future examples be solved in a similar way by the LLM
  • [Optional] Additional data from "tools" integrated into the IDE, i.e. "vscode diagnostic" information. The agentic workflows have the capability to use a few additional tools to detect problems and attempt to fix them.
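
Putting the list above together, the context handed to the LLM can be thought of as something like the following sketch (hypothetical function and field names, not the extension's real API):

```python
from typing import Optional

def build_suggestion_context(
    target_technology: str,
    issue_description: str,
    rule_hint: Optional[str] = None,
    solution_server_hint: Optional[str] = None,
    ide_diagnostics: Optional[list[str]] = None,
) -> dict:
    """Collect the pieces listed above into one context object for the code-suggestion request.

    Only the target technology and the issue description are always present;
    the remaining fields are included when available.
    """
    context = {
        "target": target_technology,
        "issue": issue_description,
    }
    if rule_hint:
        context["rule_hint"] = rule_hint
    if solution_server_hint:
        context["solution_server_hint"] = solution_server_hint
    if ide_diagnostics:
        context["diagnostics"] = ide_diagnostics
    return context
```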


The hint or the prompt generated by Developer Lightspeed is the well-defined context for identifying issues that allows the LLM to "reason" and generate the fix suggestions. This mechanism helps to overcome the limited context size in LLMs that prevents them from analyzing the entire source code of an application. You can review the suggested change and accept or reject the update to the code per issue or for all the issues.

Developer Lightspeed supports different goals of analysis through the three modes: the Agentic AI, the Retrieval Augmented Geeneration (RAG) solution delivered by the solution server, and the demo mode.
Member

@jwmatthews jwmatthews Aug 29, 2025


nit: "Geeneration" near end in Retrieval Augmented Geeneration

I wouldn't think of this as different goals of analysis through three modes; I don't think that is accurate.

I would not document anything about the "demo mode"; that is not something an end user would be expected to use.

I would consider that MTA DLS is built on the fundamental concepts of Static Analysis and RAG. The RAG pattern is implemented by relying on static analysis to identify similar problems and provide relevant additional info per identified issue. After we obtain an initial code suggestion from the LLM, there is an optional behavior where we can detect "follow on" problems that occur after the suggestion is applied; this uses the agentic workflows and additional tools to attempt to "understand" and "fix" potential problems.

In terms of goals, I would think of the goal as tied to the end user's desire to move to a "new target technology", for example Java EE to Quarkus, or Spring Boot 2 to Spring Boot 3... this is the "big need" the end user has and why they use MTA: they want to move a legacy application to a newer piece of technology.

To accomplish what they want to do, they can use MTA to perform an analysis to uncover potential areas of concern a Migrator should consider. This allows an engineer who knows, say, Spring Boot 2 and 3 to quickly get an understanding of an application they haven't worked with before and see the big areas they need to look at to accomplish their goal (for example, upgrading the app from Spring Boot 2 to 3).

* Works with the LLM to generate fix suggestions. The reasoning transcript and files to be changed are displayed to the user.
* Applies the changes to the code once the user approves the updates.

If you accept that the agentic AI must continue to make changes, it compiles the code and runs a partial analysis. In this phase, the agentic AI can detect diagnostic issues (if any) generated by tools that you installed in the VS Code IDE. You can accept the agentic AI's suggestion to address these diagnostic issues too. After every phase of applying changes to the code, the agentic AI runs another round of automated analysis depending on your acceptance, until it has run through all the files in your project and resolved the issues in the code. Agentic AI generates a new file in each round when it applies the suggestions in the code. The time taken by the agentic AI to complete several rounds of analysis depends on the size of the application, the number of issues, and the complexity of the code.
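
For illustration only, the iterative behavior described above roughly amounts to a loop like this sketch; the `project` and `llm` objects and their methods are hypothetical stand-ins, not Developer Lightspeed APIs:

```python
def agentic_fix_loop(project, llm, max_rounds: int = 10) -> None:
    """Illustrative loop: apply accepted suggestions, re-analyze, repeat.

    `project` and `llm` are hypothetical stand-ins for the workspace and
    the configured model; they are not real Developer Lightspeed APIs.
    """
    for _ in range(max_rounds):
        # Compile and run a partial analysis, including IDE diagnostic checks.
        issues = project.compile_and_partial_analyze()
        if not issues:
            break  # nothing left to resolve
        for issue in issues:
            suggestion = llm.suggest_fix(issue)
            if project.user_accepts(suggestion):   # the user stays in control of every change
                project.apply(suggestion)          # writes the updated file for this round
```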
Member


+1, read through what you have for enabling the agent and looks good


If you accept that the agentic AI must continue to make changes, it compiles the code and runs a partial analysis. In this phase, the agentic AI can detect diagnostic issues (if any) generated by tools that you installed in the VS Code IDE. You can accept the agentic AI's suggestion to address these diagnostic issues too. After every phase of applying changes to the code, the agentic AI runs another round of automated analysis depending on your acceptance, until it has run through all the files in your project and resolved the issues in the code. Agentic AI generates a new file in each round when it applies the suggestions in the code. The time taken by the agentic AI to complete several rounds of analysis depends on the size of the application, the number of issues, and the complexity of the code.

The RAG solution, delivered by the Solution Server, is based on solved examples or past analysis to fresolve new issues or similar issues that are found while analyzing the source code. This type of analysis is not iterative. The Solution Server analysis generates a diff of the updated portions of the code and the original source code for a manual review. In such an analysis, the user has more control over the changes that must be applied to the code.
Member


nit: "fresolve"

Note that we use RAG for more than just solution server.
The basic concept of RAG is fundamental to all we are doing with MTA DLS.

The Solution Server uses past analysis AND accepted/modified migration problems from prior completed migrations. Just analysis would be incomplete.

I think this paragraph would benefit from rewording, it's not quite capturing what the solution server is.

The solution server is not really performing "analysis"; it's more that the solution server is forming a big "memory pool" for the entire organization. As Migrators use MTA to migrate legacy applications, they will be faced with analysis issues that MTA struggles to solve automatically. A Migrator will need to manually modify the suggested code changes to "correct" them; this interaction is recorded and saved in the Solution Server... it's updating the "memory" of how a given issue was fixed. Later, there is a post-processing stage where the Solution Server looks at the data it has collected and works with an LLM, asking it: for this given problem we struggled on, how would I improve the Hint so that the next time we see it we have better contextual info and MTA is more likely to be able to fix the problem?
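
A very rough sketch of that record-then-post-process cycle, with hypothetical names (`record_interaction`, `improve_hints`, `llm.ask`) that are not real product APIs, might look like this:

```python
def record_interaction(store: list, issue: str, suggested_code: str, corrected_code: str) -> None:
    """Save what MTA suggested and what the Migrator actually accepted (the 'memory' update)."""
    store.append({
        "issue": issue,
        "suggested": suggested_code,
        "accepted": corrected_code,
    })

def improve_hints(store: list, llm) -> None:
    """Post-processing pass: ask an LLM for better guidance on each recorded struggle."""
    for entry in store:
        entry["hint"] = llm.ask(
            "We struggled on this issue:\n"
            f"{entry['issue']}\n"
            f"Our suggestion:\n{entry['suggested']}\n"
            f"The accepted fix:\n{entry['accepted']}\n"
            "Write improved guidance (a hint) so this issue can be fixed correctly next time."
        )
```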


The RAG solution, delivered by the Solution Server, is based on solved examples or past analysis to fresolve new issues or similar issues that are found while analyzing the source code. This type of analysis is not iterative. The Solution Server analysis generates a diff of the updated portions of the code and the original source code for a manual review. In such an analysis, the user has more control over the changes that must be applied to the code.

You can consider using the demo mode for running Developer Lightspeed when you need to perform analysis but have a limited network connection for Developer Lightspeed to sync with the LLM. The demo mode stores the input data as a hash and past LLM calls in a cache. The cache is stored in a chosen location in your file system for later use. The hash of the inputs is used to determine which LLM call must be used in the demo mode. After you enable the demo mode and configure the path to your cached LLM calls in the Developer Lightspeed settings, you can rerun an analysis for the same set of files using the responses to a previous LLM call.
Member


This should be removed. Demo mode is only intended to aid folks running a previously recorded demo, to limit LLM communication issues in a conference setting or a similar setting with limited network connectivity.

I wouldn't document this setting for downstream.

* *Iterative refinement* - Developer Lightspeed can include an agent that iterates through the source code to run a series of automated analyses that resolve both code base and diagnostic issues.
* *Contextual code generation* - By leveraging AI for static code analysis, Developer Lightspeed breaks down complex problems into more manageable ones, providing the LLM with focused context to generate meaningful results. This helps overcome the limited context size of LLMs when dealing with large codebases.
* *No fine-tuning* - You also do not need to fine-tune your model with a suitable data set for analysis, which leaves you free to use and switch LLMs to respond to your requirements.
* *Learning and Improvement* - As more parts of a codebase are migrated with Developer Lightspeed, it can use RAG to learn from the available data and provide better recommendations in subsequent application analysis.
Member


+1, read through this section and the benefits look good.

@Pkylas007
Collaborator Author

@Pkylas007

Please look over the specific way we refer to the product name and help so we are conforming to what marketing/branding has planned.

I spoke to @JonathanR19 (Product Marketing Manager) to confirm how we should refer to "Red Hat Developer Lightspeed for migration toolkit for applications"

Jonathan shared:

These are the short names that we got approval to use:

  • Developer Lightspeed for migration toolkit for applications
  • Developer Lightspeed for MTA

We should probably use the full name for at least the first mention of the tool, which would be: Red Hat Developer Lightspeed for migration toolkit for applications

Also note that I am using the correct capitalization in each variation.

Hi @jwmatthews ,

Thank you for confirming the brand name! I will use Red Hat Developer Lightspeed for migration toolkit for applications as the complete expanded form and Developer Lightspeed for MTA as the shorter version if that works for you.
