docs/source/guide/admin_settings.md (+1 −1)

@@ -16,6 +16,6 @@ There are several places where you can configure org-wide settings:
  | ------------- | ------------ | ------------ |
  |[**Usage & License**](admin_usage)| Owner<br />Admin | This page has a mix of settings that can be set by Administrators (enabling email notifications) and Owners (enabling AI, enabling storage proxy, enabling early adopter features). |
  |[**Access Token**](access_tokens)| Owner<br />Admin | Control which types of access tokens are available in the organization. |
- |[**Model Providers**](model_providers)| Owner<br />Admin | Set up model providers that can be used with [Prompts](prompts_overview) and [Chat](/tags/chat). |
+ |[**Model Providers**](model_providers)| Owner<br />Admin | Set up model providers that can be used with [Prompts](prompts_overview) and [Chat](/tags/chat.html). |
  |[**Permissions**](admin_permissions)| Owner | Customize certain permissions for roles. |
  |[**Support reports**](support_reports)| Owner<br />Admin | Generate anonymized operational reports that help HumanSignal support understand your deployment, diagnose issues, and recommend workflow and performance improvements. |
docs/source/guide/model_providers.md (+1 −1)

@@ -15,7 +15,7 @@ date: 2025-02-18 12:03:59
  To use certain AI features across your organization, you must first set up a model provider. You can set up model providers from **Organization > Settings**.
- For example, if you want to interact with an LLM when using the [`<Chat>` tag](/tags/chat), you will first need to configure access to the model.
+ For example, if you want to interact with an LLM when using the [`<Chat>` tag](/tags/chat.html), you will first need to configure access to the model.
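To make the model-provider prerequisite concrete, a hypothetical minimal labeling config is sketched below. The tag name, data variable, and especially the `llm` value are assumptions, not taken from this change; check the Chat tag reference for the exact attribute names and the provider/model identifier format your organization has configured.

```xml
<!-- Hypothetical minimal config: a Chat interface backed by an
     organization-level model provider. The "llm" value is a placeholder;
     substitute a provider/model set up under Organization > Settings. -->
<View>
  <Chat name="dialogue" value="$chat" llm="openai/gpt-4o-mini" />
</View>
```

Without a matching provider configured at the organization level, a config like this would have no model to send annotator messages to.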
docs/source/guide/project_settings_lse.md (+8 −3)

@@ -559,7 +559,7 @@ By default, each task only needs to be annotated by one annotator. If you want m
  The number of distinct annotations you want to allow per task.
- Note that in certain situations, this may be exceeded. For example, if there are long-standing drafts within a project or you have a very low [task reservation](#task-lock) time.
+ Note that in certain situations, this may be exceeded. For example, if there are long-standing drafts within a project or you have a very low [task reservation](#lock-tasks) time.
  Also note that only annotations created by distinct users count towards the overlap. For example, if the overlap is `2` and a user creates and submits two annotations on a single task (which can be done in Quick View), the overlap threshold will not be reached until another user submits an annotation.

@@ -642,7 +642,12 @@ For more information about pausing annotators, including how to manually pause s
  <dd>
- Evaluate annotators against [ground truths](ground_truths) within a project. A “ground truth” annotation is a verified, high-quality annotation that serves as the correct answer for a specific task.
+ !!! note
+     Annotator Evaluation settings are only available when the project is configured to [automatically assign tasks](#distribute-tasks). If you are using Manual distribution, this section will not appear in your project settings.
+
+     If you switch a project from Automatic to Manual distribution, annotator evaluation is automatically disabled.
+
+ Evaluate annotators against [ground truths](ground_truths) within a project. A "ground truth" annotation is a verified, high-quality annotation that serves as the correct answer for a specific task.
  When enabled, this setting looks at the agreement score for the annotator when compared solely against ground truth annotations. You can decide to automatically pause an annotator within the project if their ground truth agreement score falls below a certain threshold.

@@ -681,7 +686,7 @@ When annotators enter the labeling stream, they are first presented with tasks t
  Use the counter to determine how many ground truth tasks should be presented first before the annotator progresses through the remaining project tasks.
- **Note:** This option is only active when the project is configured to [automatically assign tasks](#distribute-tasks). If you are using Manual distribution, annotators will see tasks ordered by ID number. If you would like them to see ground truth tasks first, you should add ground truth annotations in the same order.
+ Set this counter to zero if you want to skip onboarding and only use continuous evaluation.
docs/source/guide/troubleshooting.md (+1 −1)

@@ -38,7 +38,7 @@ To resolve this issue, update the host specified as an environment variable or w
  * If you want to upload a large volume of data (thousands of items), consider doing that at a time when people are not labeling or use a different database backend such as PostgreSQL or Redis. You can run Docker Compose from the root directory of Label Studio to use PostgreSQL: `docker-compose up -d`, or see [Sync data from cloud or database storage](storage).
- * If you are using a labeling schema that has many thousands of labels, consider using an [external taxonomy](/tags/taxonomy) instead.
+ * If you are using a labeling schema that has many thousands of labels, consider using an [external taxonomy](/tags/taxonomy.html) instead.

  ### Image/audio/resource loading error while labeling
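The external-taxonomy suggestion in the hunk above could look roughly like the following sketch. The endpoint URL is a placeholder and the `apiUrl` attribute is an assumption about the Taxonomy tag's interface; consult the Taxonomy tag reference for the supported attributes and the expected response format.

```xml
<!-- Hypothetical config: load a large label set from an external endpoint
     instead of embedding thousands of <Choice> elements inline.
     The URL below is a placeholder, not a real endpoint. -->
<View>
  <Text name="text" value="$text" />
  <Taxonomy name="topics" toName="text"
            apiUrl="https://example.com/taxonomy.json" />
</View>
```

Serving the label tree from an endpoint keeps the labeling config itself small, which is the point of the performance advice above.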
docs/source/templates/chat_eval.md (+3 −3)

@@ -97,7 +97,7 @@ The `Chat` tag provides an interface where the annotator can type and send messa
  You can customize this parameter with `user`, `assistant`, `system`, `tool`, and `developer`.
- For more information and additional parameters, see the [Chat tag](/tags/chat).
+ For more information and additional parameters, see the [Chat tag](/tags/chat.html).

  ## Input data

@@ -146,11 +146,11 @@ You can also import demo chat messages as follows:
  !!! attention
      The chat messages that you import are not selectable. This means that you cannot edit them or apply annotations (ratings, choices, etc) to them.
-     You can only select and annotate messages that are added to the chat by an annotator or that are imported as [predictions](/tags/chat#Prediction-format).
+     You can only select and annotate messages that are added to the chat by an annotator or that are imported as [predictions](/tags/chat.html#Prediction-format).
docs/source/templates/chat_llm_eval.md (+5 −5)

@@ -16,7 +16,7 @@ While this template focuses on conversation-level evaluation, you can modify it
  !!! error Enterprise
      This template requires Label Studio Enterprise.
-     Starter Cloud users can use the `Chat` tag, but have limited access to LLM integration. Instead, you can conduct a manual chat or import messages as predictions. See the [Chat tag documentation](/tags/chat#Prediction-format).
+     Starter Cloud users can use the `Chat` tag, but have limited access to LLM integration. Instead, you can conduct a manual chat or import messages as predictions. See the [Chat tag documentation](/tags/chat.html#Prediction-format).

      For Community users, see our [Conversation AI templates](gallery_conversational_ai) or the [Multi-Turn Chat Evaluation template](multi_turn_chat).

@@ -163,15 +163,15 @@ The `Chat` tag provides an interface where the annotator can type and send messa
  * `value`: This is required, and should use a variable referencing your [input data](#Input-data). In this example, we use `$chat` because the input JSON uses `"chat"`.
- * `llm`: Messages from the annotator will be sent to an LLM and the response returned within the chat area of the labeling configuration. For more information, see [Chat tag - Use with an LLM](/tags/chat#Use-with-an-LLM).
+ * `llm`: Messages from the annotator will be sent to an LLM and the response returned within the chat area of the labeling configuration. For more information, see [Chat tag - Use with an LLM](/tags/chat.html#Use-with-an-LLM).
  * `minMessages`: The minimum number of messages users must submit to complete the task. You can also set a maximum.
    Both minimum and maximum can also be set in the task data, allowing you to have different limits for each task. For an example, see [Chatbot Evaluation](chatbot#Chat).
  * `editable`: Messages from the annotator and from the LLM are editable. To modify this so that only messages from certain roles are editable, you can specify them (for example, `editable="user,assistant"`).
- For more information and additional parameters, see the [Chat tag](/tags/chat).
+ For more information and additional parameters, see the [Chat tag](/tags/chat.html).

  ### Choices and TextArea

@@ -224,11 +224,11 @@ You can also import demo chat messages as follows:
  !!! attention
      The chat messages that you import are not selectable. This means that you cannot edit them or apply annotations (ratings, choices, etc) to them.
-     You can only select and annotate messages that are added to the chat by an annotator or that are imported as [predictions](/tags/chat#Prediction-format).
+     You can only select and annotate messages that are added to the chat by an annotator or that are imported as [predictions](/tags/chat.html#Prediction-format).
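The `value`, `llm`, `minMessages`, and `editable` parameters described in the hunks above could be combined roughly as follows. This is a sketch, not the template's exact config: the model identifier is a placeholder, and the numeric limit is chosen arbitrarily for illustration.

```xml
<!-- Hypothetical Chat config combining the parameters discussed above.
     "llm" is a placeholder for a provider/model configured at the org level. -->
<View>
  <Chat name="chat" value="$chat"
        llm="openai/gpt-4o-mini"
        minMessages="3"
        editable="user,assistant" />
</View>
```

Here `editable="user,assistant"` illustrates the role-restricted editing mentioned above, and `minMessages="3"` would require at least three submitted messages before the task can be completed.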
docs/source/templates/chat_red_team.md (+5 −5)

@@ -14,7 +14,7 @@ Stress‑test your GenAI agent with structured red‑teaming. Use this template
  !!! error Enterprise
      This template requires Label Studio Enterprise.
-     Starter Cloud users can use the `Chat` tag, but have limited access to LLM integration. Instead, you can conduct a manual chat or import messages as predictions. See the [Chat tag documentation](/tags/chat#Prediction-format).
+     Starter Cloud users can use the `Chat` tag, but have limited access to LLM integration. Instead, you can conduct a manual chat or import messages as predictions. See the [Chat tag documentation](/tags/chat.html#Prediction-format).

      For Community users, see our [Conversation AI templates](gallery_conversational_ai) or the [Multi-Turn Chat Evaluation template](multi_turn_chat).

@@ -175,15 +175,15 @@ The `Chat` tag provides an interface where the annotator can type and send messa
  * `value`: This is required, and should use a variable referencing your [input data](#Input-data). In this example, we use `$chat` because the input JSON uses `"chat"`.
- * `llm`: Messages from the annotator will be sent to an LLM and the response returned within the chat area of the labeling configuration. For more information, see [Chat tag - Use with an LLM](/tags/chat#Use-with-an-LLM).
+ * `llm`: Messages from the annotator will be sent to an LLM and the response returned within the chat area of the labeling configuration. For more information, see [Chat tag - Use with an LLM](/tags/chat.html#Use-with-an-LLM).
  * `minMessages`: The minimum number of messages users must submit to complete the task. You can also set a maximum.
    Both minimum and maximum can also be set in the task data, allowing you to have different limits for each task. For an example, see [Chatbot Evaluation](chatbot#Chat).
  * `editable`: In this example, you are not allowing the annotator to edit messages. You can set this to `true` or modify it so that only messages from certain roles are editable (for example, `editable="user,assistant"`).
- For more information and additional parameters, see the [Chat tag](/tags/chat).
+ For more information and additional parameters, see the [Chat tag](/tags/chat.html).

  ### Per-message evaluation

@@ -263,11 +263,11 @@ You can also import demo chat messages as follows:
  !!! attention
      The chat messages that you import are not selectable. This means that you cannot edit them or apply annotations (ratings, choices, etc) to them.
-     You can only select and annotate messages that are added to the chat by an annotator or that are imported as [predictions](/tags/chat#Prediction-format).
+     You can only select and annotate messages that are added to the chat by an annotator or that are imported as [predictions](/tags/chat.html#Prediction-format).
docs/source/templates/chat_rlhf.md (+4 −4)

@@ -195,10 +195,10 @@ The `Chat` tag provides an interface where the annotator can type and send messa
  * `editable`: In this example, you are not allowing the annotator to edit messages.
- For more information and additional parameters, see the [Chat tag](/tags/chat).
+ For more information and additional parameters, see the [Chat tag](/tags/chat.html).

  !!! note
-     This template is designed to be used to evaluate an imported conversation, so you will likely want to import messages from an external source as [predictions](/tags/chat#Prediction-format).
+     This template is designed to be used to evaluate an imported conversation, so you will likely want to import messages from an external source as [predictions](/tags/chat.html#Prediction-format).

  ### Conversation evaluation

@@ -296,7 +296,7 @@ You can also import demo chat messages as follows:
  ### Predictions
- If you want to be able to select messages and evaluate them, then you can use [predictions](/tags/chat#Prediction-format). For example:
+ If you want to be able to select messages and evaluate them, then you can use [predictions](/tags/chat.html#Prediction-format). For example:

@@ -346,7 +346,7 @@ If you want to be able to select messages and evaluate them, then you can use [p
docs/source/templates/chatbot.md (+4 −4)

@@ -195,10 +195,10 @@ The `Chat` tag provides an interface where the annotator can type and send messa
  * `editable`: In this example, you are not allowing the annotator to edit messages.
- For more information and additional parameters, see the [Chat tag](/tags/chat).
+ For more information and additional parameters, see the [Chat tag](/tags/chat.html).

  !!! note
-     This template is designed to be used to evaluate an imported conversation, so you will likely want to import messages from an external source as [predictions](/tags/chat#Prediction-format).
+     This template is designed to be used to evaluate an imported conversation, so you will likely want to import messages from an external source as [predictions](/tags/chat.html#Prediction-format).

  ### Per-message evaluation

@@ -275,7 +275,7 @@ You can also import demo chat messages as follows:
  ### Predictions
- If you want to be able to select messages and evaluate them, then you can use [predictions](/tags/chat#Prediction-format). For example:
+ If you want to be able to select messages and evaluate them, then you can use [predictions](/tags/chat.html#Prediction-format). For example:

@@ -326,7 +326,7 @@ If you want to be able to select messages and evaluate them, then you can use [p