Commit c89bb95
Modify LLM security docs, add LLM specifications
1 parent 988e096 commit c89bb95

2 files changed: +145 -31 lines

md-docs/user_guide/model.md

Lines changed: 55 additions & 8 deletions
@@ -5,7 +5,7 @@ for its training usually represent the reference data distribution, while produc
 performs inference.
 For more information about reference and production data see the [Data] page.
 
-A Model is uniquely associated with a [Task] and it can be created both through the WebApp and the Python SDK.
+A Model is uniquely associated with a [Task], and it can be created both through the WebApp and the Python SDK.
 Currently, we support only one model per Task.
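Since the paragraph above mentions creation through the Python SDK, a minimal sketch of what that could look like follows. The client class, method name, and parameters (`ML3PlatformClient`, `create_model`, `task_id`) are assumptions for illustration only, not the documented SDK API; refer to the SDK reference for the actual calls.

```python
# Hypothetical sketch of creating a Model through the Python SDK.
# All names below (client class, method, parameters) are assumptions,
# not the documented ML cube Platform SDK API.
from ml3_platform_sdk import ML3PlatformClient  # assumed import path

client = ML3PlatformClient(api_key="YOUR_API_KEY")  # assumed authentication

# A Model is uniquely associated with a Task; only one model per Task is supported.
model_id = client.create_model(
    task_id="my-task-id",  # the Task this Model belongs to
    name="my-model",       # model name
    version="v1.0",        # version, bumped whenever the model is retrained
)
print(model_id)
```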
 
 A Model is defined by a name and a version. The version is updated whenever the model is retrained, allowing to
@@ -19,19 +19,66 @@ predictions are associated to the correct model version.
 the ML cube Platform is considered *model agnostic*.
 
 
-### RAG Model
+## RAG Model
 
 RAG Tasks represent an exception to the model framework presented before. In this type of Task, the model
 is a Large Language Model (LLM) that is used to generate responses to user queries. The model is not trained on a specific dataset
-but is rather a pre-trained model, sometimes finetuned on custom domain data, which means that the classic process of training and
+but is rather a pre-trained model, sometimes fine-tuned on custom domain data, which means that the classic process of training and
 retraining does not apply.
 
 To maintain a coherent Model definition across task types, the RAG model is also represented as a Model,
 but an update of its version represents an update of the reference data distribution and not necessarily
 a retraining of the model itself. Moreover, most of the attributes which will be described in the following sections
 are not applicable, as they are related to the retraining module, which is not available for RAG tasks.
 
-### Probabilistic output
+### LLM Specifications
+
+For RAG Tasks, you can provide the specifications of the underlying LLMs used in the RAG system.
+This information is used by the [LLM Security Module](modules/llm_security.md) to provide insights on the security of the LLMs
+used in the RAG system. Currently, we support only LLMs accessible via API.
+
+The specifications include the following information:
+
+| Field               | Description                                                                                                          |
+|---------------------|----------------------------------------------------------------------------------------------------------------------|
+| LLM Provider        | The provider of the LLM used.                                                                                        |
+| LLM name            | The name of the LLM model.                                                                                           |
+| Temperature         | The temperature used by the LLM model.                                                                               |
+| Top P               | The top P used by the LLM model.                                                                                     |
+| Top K               | The top K used by the LLM model.                                                                                     |
+| Max tokens          | The maximum number of output tokens used by the LLM model.                                                           |
+| Time intervals      | The time intervals in which the LLM model is used.                                                                   |
+| Role                | The role assigned to the LLM to interpret (part of the system prompt).                                               |
+| Task                | The task assigned to the LLM to solve (part of the system prompt).                                                   |
+| Behavior Guidelines | A list of guidelines used to define the general behavior of the LLM (part of the system prompt).                     |
+| Security Guidelines | A list of guidelines designed to protect the LLM against attacks or information leakage (part of the system prompt). |
+
+!!! note
+    Providing the LLM specifications is optional; however, if you choose to provide them, you must fill in at least the required fields.
+    Moreover, providing the specifications improves the quality of the LLM Security Module insights.
+
+The system prompt sent to the LLM is composed of the Role, Task, Behavior Guidelines, and Security Guidelines fields.
+
+!!! example
+    An example of LLM specifications is:
+
+    - **LLM Provider**: "OpenAI",
+    - **LLM name**: "GPT-3",
+    - **Temperature**: 0.7,
+    - **Top P**: 0.9,
+    - **Top K**: None,
+    - **Max tokens**: 100,
+    - **Time intervals**: "2022-01-01 00:00:00 - 2022-01-31 23:59:59",
+    - **Role**: "You are a helpful assistant, "
+    - **Task**: "your goal is to provide accurate and useful information to the users. You must follow these rules:"
+    - **Behavior Guidelines**:
+        1. "1) Be polite, "
+        2. "2) Be concise, "
+    - **Security Guidelines**:
+        1. "3) Do not provide personal information, "
+        2. "4) Do not provide harmful information, "
+
+## Probabilistic output
 
 When creating a model, you can specify whether you also want to provide the probabilistic output of the model along with the predictions.
 The probabilistic output represents the probability or confidence score associated with the model's predictions. If provided,
@@ -44,7 +91,7 @@ as a new column in the predictions file, following the guidelines in the [Data S
 For example, a Logistic Regression classification model provides both the probability of belonging to the positive class and the predicted class using a threshold.
 In this case, you can upload to the ML cube Platform the predicted class as the principal prediction and the probability as the probabilistic output.
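The Logistic Regression example above can be made concrete with a short sketch that writes a predictions file containing both the principal prediction and the probabilistic output as an extra column. The column names here are placeholders; the actual layout must follow the Data Schema guidelines.

```python
# Sketch: export the predicted class (principal prediction) together with the
# positive-class probability (probabilistic output) as an extra column.
# Column names are placeholders; the real layout follows the Data Schema guidelines.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression().fit(X, y)

predictions = pd.DataFrame(
    {
        "sample_id": range(len(X)),
        "prediction": model.predict(X),               # predicted class
        "probability": model.predict_proba(X)[:, 1],  # confidence for the positive class
    }
)
predictions.to_csv("predictions.csv", index=False)
```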

-### Model Metric
+## Model Metric
 
 A Model Metric represents the evaluation metric used to assess the performance of the model.
 It can represent either a performance or an error. The chosen metric will be used in the various views of the WebApp to
@@ -69,19 +116,19 @@ RAG tasks have no metric, as in that case the model is an LLM for which classic
 Model Metrics should not be confused with [Monitoring Metrics](monitoring/index.md#monitoring-metrics), which are
 entities being monitored by the ML cube Platform and not necessarily related to a Model.
 
-### Suggestion Type
+## Suggestion Type
 
 The Suggestion Type represents the type of suggestion that the ML cube Platform should provide when computing the
 [Retraining Dataset](modules/retraining.md#retraining-dataset). The available options are provided in the related section.
 
 
-### Retraining Cost
+## Retraining Cost
 
 The Retraining Cost represents the cost associated with retraining the model. This information is used by the Retraining Module
 to provide gain-cost analysis and insights on the retraining process. The cost is expressed in the same currency as the one used
 in the Task cost information. The default value is 0.0, which means that the cost is negligible.
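As a toy illustration of the gain-cost idea (not the Retraining Module's actual computation), retraining is worthwhile when the value expected from restoring model performance exceeds the retraining cost; the numbers below are made up and expressed in the same currency as the Task cost information.

```python
# Toy gain-cost comparison with made-up numbers (same currency as the Task cost info).
retraining_cost = 500.0  # cost of one retraining run
expected_gain = 1200.0   # estimated value recovered by restoring model performance

net_benefit = expected_gain - retraining_cost
if net_benefit > 0:
    print(f"Retraining is worthwhile: net benefit {net_benefit:.2f}")
else:
    print(f"Retraining does not pay off: net benefit {net_benefit:.2f}")
```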

-### Retrain Trigger
+## Retrain Trigger
 
 You can associate a [Retrain Trigger] with your Model in order to enable the automatic initiation of your retraining pipeline
 from the ML cube Platform. More information on how to set up a retrain trigger can be found in the related section.