[WIP] MTA-5378 - LLM Configurations for Developer Lightspeed #177

Open · wants to merge 2 commits into main
27 changes: 27 additions & 0 deletions assemblies/developer-lightspeed-guide/assembly_configuring-openshift-ai.adoc
@@ -0,0 +1,27 @@
:_newdoc-version: 2.18.3
:_template-generated: 2025-04-08

ifdef::context[:parent-context-of-configuring-openshift-ai: {context}]

:_mod-docs-content-type: ASSEMBLY

ifndef::context[]
[id="configuring-openshift-ai"]
endif::[]
ifdef::context[]
[id="configuring-openshift-ai_{context}"]
endif::[]
= Configuring {ocp-short} AI
:context: configuring-openshift-ai

This part describes how to configure {ocp-short} AI for serving the large language model: you create a data science cluster, configure the LLM serving runtime, and create an accelerator profile.
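
For orientation, the following is a minimal sketch of the kind of `DataScienceCluster` resource that this part produces. The resource name and the set of enabled components are assumptions for illustration, not a recommended configuration.

[source,yaml]
----
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc              # placeholder name
spec:
  components:
    dashboard:
      managementState: Managed   # OpenShift AI dashboard
    workbenches:
      managementState: Managed
    kserve:
      managementState: Managed   # single-model serving for the LLM
----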

include::topics/developer-lightspeed/proc_creating-datascience-cluster.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_configuring-llm-serving-runtime.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_creating-accelerator-profile.adoc[leveloffset=+1]


ifdef::parent-context-of-configuring-openshift-ai[:context: {parent-context-of-configuring-openshift-ai}]
ifndef::parent-context-of-configuring-openshift-ai[:!context:]
43 changes: 43 additions & 0 deletions assemblies/developer-lightspeed-guide/assembly_configuring_llm.adoc
@@ -0,0 +1,43 @@
:_newdoc-version: 2.18.3
:_template-generated: 2025-04-08

ifdef::context[:parent-context-of-configuring-llm: {context}]

:_mod-docs-content-type: ASSEMBLY

ifndef::context[]
[id="configuring-llm"]
endif::[]
ifdef::context[]
[id="configuring-llm_{context}"]
endif::[]
= Configuring large language models for analysis
:context: configuring-llm

{mta-dl-plugin} works with large language models (LLMs) that run in different environments to support analyzing Java applications in a wide range of scenarios. You can choose an LLM from well-known providers, local models that you run from Ollama or Podman Desktop, and OpenAI API-compatible models that are available as model-as-a-service (MaaS) deployments.

The result of an analysis performed by {mta-dl-plugin} depends on the parameter configuration of the LLM that you choose. To use {mta-dl-plugin} for analysis, you must deploy your LLM and then configure the mandatory settings (for example, the API key and secret) and other parameters for your LLM, as shown in the sketch after the following list.

You can run an LLM from the following providers:

* OpenAI
* Azure OpenAI
* Google Gemini
* Amazon Bedrock
* DeepSeek
* {ocp-short} AI
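
For illustration, the following sketch shows one way a provider entry might be configured. The file layout, the `ChatOpenAI` provider value, and the parameter names are assumptions for this example; use the settings that your provider and your {mta-dl-plugin} version require.

[source,yaml]
----
# Hypothetical provider settings sketch; the structure and key names are
# assumptions for illustration, not an authoritative schema.
models:
  openai:
    provider: "ChatOpenAI"               # assumed provider identifier
    environment:
      OPENAI_API_KEY: "<your-api-key>"   # mandatory secret for this provider
    args:
      model: "gpt-4o"                    # model offered by the provider
----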

include::topics/developer-lightspeed/con_model-as-a-service.adoc[leveloffset=+1]

include::assembly_maas-oc-install-config.adoc[leveloffset=+1]

include::assembly_configuring-openshift-ai.adoc[leveloffset=+1]

include::assembly_connecting-openshift-ai-llm.adoc[leveloffset=+1]

include::assembly_preparing-llm-analysis.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_configuring-llm-podman-desktop.adoc[leveloffset=+1]

ifdef::parent-context-of-configuring-llm[:context: {parent-context-of-configuring-llm}]
ifndef::parent-context-of-configuring-llm[:!context:]
27 changes: 27 additions & 0 deletions assemblies/developer-lightspeed-guide/assembly_connecting-openshift-ai-llm.adoc
@@ -0,0 +1,27 @@
:_newdoc-version: 2.18.3
:_template-generated: 2025-04-08

ifdef::context[:parent-context-of-connecting-openshift-ai-llm: {context}]

:_mod-docs-content-type: ASSEMBLY

ifndef::context[]
[id="connecting-openshift-ai-llm"]
endif::[]
ifdef::context[]
[id="connecting-openshift-ai-llm_{context}"]
endif::[]
= Connecting {ocp-short} AI with the large language model
:context: connecting-openshift-ai-llm

After you upload the large language model to your Amazon S3 bucket, you connect {ocp-short} AI to the model: you add a data connection, deploy the model, and then export the token and SSL certificate for the model.
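
In {ocp-short} AI, a data connection is typically backed by a `Secret` in the data science project namespace. The following is a minimal sketch, assuming an S3-compatible endpoint; the names, namespace, labels, and values are placeholders.

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: model-s3-connection              # placeholder data connection name
  namespace: my-data-science-project     # placeholder project namespace
  labels:
    opendatahub.io/dashboard: "true"     # assumed label that surfaces the connection in the dashboard
  annotations:
    opendatahub.io/connection-type: s3   # assumed S3 connection-type annotation
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <access-key>
  AWS_SECRET_ACCESS_KEY: <secret-key>
  AWS_S3_ENDPOINT: https://s3.us-east-1.amazonaws.com
  AWS_DEFAULT_REGION: us-east-1
  AWS_S3_BUCKET: my-model-bucket
----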

include::topics/developer-lightspeed/proc_adding-data-connection.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_deploying-the-model.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_export-token-certificate.adoc[leveloffset=+1]


ifdef::parent-context-of-connecting-openshift-ai-llm[:context: {parent-context-of-connecting-openshift-ai-llm}]
ifndef::parent-context-of-connecting-openshift-ai-llm[:!context:]
30 changes: 30 additions & 0 deletions assemblies/developer-lightspeed-guide/assembly_maas-oc-install-config.adoc
@@ -0,0 +1,30 @@
:_newdoc-version: 2.18.3
:_template-generated: 2025-04-08

ifdef::context[:parent-context-of-maas-oc-install-config: {context}]

:_mod-docs-content-type: ASSEMBLY

ifndef::context[]
[id="maas-oc-install-config"]
endif::[]
ifdef::context[]
[id="maas-oc-install-config_{context}"]
endif::[]
= Installing and configuring the {ocp-short} cluster
:context: maas-oc-install-config

This part describes how to install and configure the {ocp-short} cluster that hosts the model service: you install the cluster, create an identity provider, configure the required Operators, create a GPU machine set, and configure node auto scaling.
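
As an example of the kind of resource this part produces, the following sketch shows a hypothetical `MachineAutoscaler` that scales a GPU machine set; the machine set name and the replica bounds are placeholders.

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: gpu-machineset-autoscaler    # placeholder name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 2                     # cap GPU nodes to control cost
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: my-gpu-machineset          # placeholder GPU machine set name
----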

include::topics/developer-lightspeed/proc_install-oc-cluster.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_creating-identity-provider.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_configuring-operators.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_creating-gpu-machine-set.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_configuring-node-auto-scaling.adoc[leveloffset=+1]

ifdef::parent-context-of-maas-oc-install-config[:context: {parent-context-of-maas-oc-install-config}]
ifndef::parent-context-of-maas-oc-install-config[:!context:]
24 changes: 24 additions & 0 deletions assemblies/developer-lightspeed-guide/assembly_preparing-llm-analysis.adoc
@@ -0,0 +1,24 @@
:_newdoc-version: 2.18.3
:_template-generated: 2025-04-08

ifdef::context[:parent-context-of-preparing-llm-analysis: {context}]

:_mod-docs-content-type: ASSEMBLY

ifndef::context[]
[id="preparing-llm-analysis"]
endif::[]
ifdef::context[]
[id="preparing-llm-analysis_{context}"]
endif::[]
= Preparing the large language model for analysis
:context: preparing-llm-analysis

This part describes how to prepare the large language model for analysis: you download the CA certificates and configure the OpenAI API key.

include::topics/developer-lightspeed/proc_downloading-certificate.adoc[leveloffset=+1]

include::topics/developer-lightspeed/proc_configuring-openai-api-key.adoc[leveloffset=+1]

ifdef::parent-context-of-preparing-llm-analysis[:context: {parent-context-of-preparing-llm-analysis}]
ifndef::parent-context-of-preparing-llm-analysis[:!context:]
1 change: 1 addition & 0 deletions assemblies/developer-lightspeed-guide/topics
1 change: 1 addition & 0 deletions docs/developer-lightspeed-guide/assemblies
11 changes: 11 additions & 0 deletions docs/developer-lightspeed-guide/master-docinfo.xml
@@ -0,0 +1,11 @@
<title>Developer Lightspeed Guide</title>
<productname>{DocInfoProductName}</productname>
<productnumber>{DocInfoProductNumber}</productnumber>
<subtitle>Using {ProductName} Developer Lightspeed to modernize your applications</subtitle>
<abstract>
<para>You can use {ProductFullName} Developer Lightspeed for application modernization in your organization by running artificial intelligence (AI)-driven static code analysis for Java applications.</para>
</abstract>
<authorgroup>
<orgname>Red Hat Customer Content Services</orgname>
</authorgroup>
<xi:include href="Common_Content/Legal_Notice.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
28 changes: 28 additions & 0 deletions docs/developer-lightspeed-guide/master.adoc
@@ -0,0 +1,28 @@
:mta:
include::topics/templates/document-attributes.adoc[]
:_mod-docs-content-type: ASSEMBLY
[id="mta-developer-lightspeed"]
= MTA Developer Lightspeed Guide

:toc:
:toclevels: 4
:numbered:
:imagesdir: topics/images
:context: mta-developer-lightspeed
:mta-developer-lightspeed:

//Inclusive language statement
include::topics/making-open-source-more-inclusive.adoc[]

include::assemblies/developer-lightspeed-guide/assembly_configuring_llm.adoc[leveloffset=+1]

:!mta-developer-lightspeed:
1 change: 1 addition & 0 deletions docs/developer-lightspeed-guide/topics
19 changes: 19 additions & 0 deletions docs/topics/developer-lightspeed/con_model-as-a-service.adoc
@@ -0,0 +1,19 @@
:_newdoc-version: 2.15.0
:_template-generated: 2024-2-21

:_mod-docs-content-type: CONCEPT

[id="model-as-a-service_{context}"]
= Deploying an LLM as a scalable service

[role="_abstract"]
{mta-dl-plugin} also supports large language models (LLMs) that are deployed as a scalable service on {ocp-full} clusters. These deployments, called model-as-a-service (MaaS), give you greater control to optimize resources, such as compute, cluster nodes, and auto-scaling Graphics Processing Units (GPUs), while enabling you to use artificial intelligence to perform operations at a large scale.


The workflow for configuring an LLM on {ocp-short} AI can be broadly divided into the following parts, illustrated by the sketch after this list:

* Installing and configuring resources: from creating an {ocp} cluster to configuring node auto scaling
* Configuring {ocp-short} AI: from creating a data science project to creating an accelerator profile
* Connecting {ocp-short} AI with the LLM: from uploading a model to exporting the token and SSL certificate for the LLM
* Preparing the LLM for analysis: from downloading the CA certificates to updating the `provider.settings` file
//* Configuring monitoring and alerting for the storage resource: creating a ConfigMap for monitoring storage and an alerting configuration file.
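
To make the workflow concrete, the following sketch shows the shape of a hypothetical KServe `InferenceService` that such a deployment might create; the model format, runtime, and storage URI are placeholders for whatever your serving runtime requires.

[source,yaml]
----
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-llm                                 # placeholder model name
  namespace: my-data-science-project           # placeholder project namespace
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM                             # assumed model format
      runtime: vllm-serving-runtime            # placeholder ServingRuntime name
      storageUri: s3://my-model-bucket/my-llm/ # placeholder model location
----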