:_newdoc-version: 2.18.3
:_template-generated: 2025-02-26
:_mod-docs-content-type: PROCEDURE

[id="configuring-developer-lightspeed-ide-settings_{context}"]
= Configuring the {mta-dl-plugin} IDE settings

After you install the {ProductShortName} extension in Visual Studio (VS) Code, you must provide your large language model (LLM) credentials to activate the {mta-dl-plugin} settings in VS Code.

The {mta-dl-plugin} settings apply to all AI-assisted analysis that you perform by using the {ProductShortName} extension. The extension settings can be broadly categorized into debugging and logging, {mta-dl-plugin} settings, analysis-related settings, and Solution Server settings.

.Prerequisites

* You installed the {ProductFullName} extension version 8.0.0 in VS Code.
//need to check how the user provides LLM credentials and write a new proc if needed
* You provided LLM credentials to enable generative AI for the {ProductShortName} extension in the `settings.json` file.
* You installed the {ProductShortName} distribution version 8.0.0 on your system.
* You installed the latest version of the Language Support for Java(TM) by Red Hat extension in VS Code.
* You installed Java 17 or later and Maven 3.9.9 or later on your system.

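How the credentials are stored depends on your LLM provider. As an illustrative sketch only (the setting keys below are hypothetical placeholders, not the extension's confirmed schema), a `settings.json` entry that points the extension at a provider and an API-key environment variable might look like this:

```json
{
  // Hypothetical keys for illustration only; check the extension
  // documentation for the actual setting names.
  // VS Code settings.json accepts JSONC-style comments.
  "mta.genai.provider": "openai",
  "mta.genai.apiKeyEnvVar": "OPENAI_API_KEY"
}
```

Storing an environment-variable name rather than the key itself keeps the secret out of `settings.json`, which is often committed to version control.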
.Procedure

. Go to the {mta-dl-plugin} settings in one of the following ways:
.. Click `Extensions > MTA CLI Extension for VSCode > Settings`.
.. Press `Ctrl+Shift+P` to open the Command Palette and enter `Preferences: Open Settings (UI)`. Then go to `Extensions > MTA` to open the settings page.
. Configure the settings described in the following table:

.{mta-dl-plugin} settings
[cols="40%,60%a",options="header",]
|====
|Setting |Description
|Log level|Set the log level for the {ProductShortName} binary. The default log level is `debug`. The log level controls the verbosity of the logs.
|RPC server path|Displays the path to the solution server binary. If you do not modify the path, {mta-dl-plugin} uses the bundled binary.
|Analyzer path|Specify a custom path to the {ProductShortName} binary. If you do not provide a path, {mta-dl-plugin} uses the default path to the binary.
|Solution Server: URL|Configure the URL of the Solution Server endpoint. This field is populated with the default URL.
|Solution Server: Enabled|Enable the Solution Server client (the {ProductShortName} extension) to connect to the Solution Server to perform analysis.
|Analyze on save|Enable {mta-dl-plugin} to run an analysis on a file that is saved after code modification. This setting is enabled automatically when you enable agentic AI mode.
|Agent mode|Enable the experimental agentic AI flow for analysis. {mta-dl-plugin} runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, {mta-dl-plugin} makes the changes in the code and re-analyzes the file.
|Super agent mode|
|Diff editor type|Select the diff or the merge view to review the suggested solutions after running an analysis. The diff view shows the old code and a copy of the code with the changes side by side. The merge view overlays the changes on the code in a single view.
|Excluded diagnostic sources|Add diagnostic sources in the `settings.json` file. Issues generated by these diagnostic sources are excluded from the automated agentic AI analysis.
|Get solution max effort|Select the effort level for generating solutions. You can adjust the effort level depending on the type of incidents. Higher values increase the processing time.
|Get solution max LLM queries|Specify the maximum number of LLM queries that are made per solution request.
|Get solution max priority|Specify the maximum priority level of issues to be considered in a solution request.
//need more info
|Cache directory|Specify the path to a directory in your file system to store cached responses from the LLM.
|Demo mode|Enable to run {mta-dl-plugin} in demo mode, which uses the LLM responses saved in the `cache` directory for analysis.
|Trace enabled|Enable to trace {ProductShortName} communication with the LLM. Traces are stored in the `/.vscode/konveyor-logs/traces` path in your IDE project.
|Debug: Webview|Enable debug-level logging for Webview message handling in VS Code.
|Analyze dependencies|Enable {mta-dl-plugin} to analyze dependency-related errors that the LLM detects in your project.
|Analyze known libraries|Enable {mta-dl-plugin} to analyze well-known open-source libraries in your code.
|Code snip limit|Set the maximum number of lines of code that are included in incident reports.
|Context lines|Configure the number of context lines included in incident reports. A higher number improves LLM accuracy.
|Incident limit|Specify the maximum number of incidents to be reported. A higher value increases the coverage of incidents in your report.
|====
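Most of the settings above can also be edited directly in `settings.json` instead of the Settings UI. The fragment below is a sketch only; the key names are hypothetical placeholders, and you should copy the real keys from the Settings UI (the gear icon next to a setting offers *Copy Setting ID*):

```json
{
  // Hypothetical setting IDs for illustration only; copy the
  // actual IDs from the Settings UI before using them.
  "mta.logLevel": "debug",
  "mta.agentMode": true,
  "mta.diffEditorType": "merge",
  "mta.excludedDiagnosticSources": ["spellchecker"]
}
```

Editing `settings.json` is convenient for list-valued settings such as the excluded diagnostic sources, which are awkward to enter one item at a time in the UI.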