Commit c1d1499: Fixing links

1 parent a769999

File tree

  • articles/cognitive-services/language-service

3 files changed: +6 -6 lines changed
articles/cognitive-services/language-service/custom-classification/faq.md

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ The training process can take some time. As a rough estimate, the expected train
 
 ## How do I build my custom model programmatically?
 
-You can use the [REST APIs](https://aka.ms/ct-authoring-swagger) to build your custom models. Follow this [quickstart](includes/quickstarts/rest-api.md) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
+You can use the [REST APIs](https://aka.ms/ct-authoring-swagger) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
 
 
 ## What is the recommended CI/CD process?
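Both FAQ link fixes in this commit point the reader to the Authoring REST API. Purely as a hedged illustration of what "creating a project through APIs" can look like, here is a minimal Python sketch; the route, api-version, project kind, and body fields are assumptions to check against the linked Swagger (https://aka.ms/ct-authoring-swagger), not values taken from this commit.

```python
# Minimal sketch, assuming a Language resource endpoint and key. The path,
# api-version, and body shape below are placeholders -- verify them against
# the Authoring Swagger (https://aka.ms/ct-authoring-swagger).
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # assumption
KEY = "<your-resource-key>"                                       # assumption
PROJECT = "my-classification-project"                             # hypothetical name

resp = requests.patch(
    f"{ENDPOINT}/language/authoring/analyze-text/projects/{PROJECT}",
    params={"api-version": "2022-05-01"},  # assumed version; check the Swagger
    headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"},
    json={
        "projectName": PROJECT,
        "language": "en",
        "projectKind": "CustomSingleLabelClassification",  # assumed kind name
        "storageInputContainerName": "my-container",       # hypothetical container
    },
)
resp.raise_for_status()
print(resp.status_code, resp.json())
```

Training, deployment, and prediction would follow the same request pattern against the other routes documented in that Swagger.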

articles/cognitive-services/language-service/custom-classification/overview.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ Custom text classification supports two types of projects:
 This documentation contains the following article types:
 
 * [Quickstarts](quickstart.md) are getting-started instructions to guide you through making requests to the service.
-* [Concepts](concepts/evaluation-metrics.md) provide explanations of the service functionality and features.
+* [Concepts](concepts/evaluation.md) provide explanations of the service functionality and features.
 * [How-to guides](how-to/tag-data.md) contain instructions for using the service in more specific or customized ways.
 
 ## Example usage scenarios

articles/cognitive-services/language-service/custom-named-entity-recognition/faq.md

Lines changed: 4 additions & 4 deletions
@@ -36,11 +36,11 @@ The training process can take a long time. As a rough estimate, the expected tra
 
 ## How do I build my custom model programmatically?
 
-You can use the [REST APIs](https://aka.ms/ct-authoring-swagger) to build your custom models. Follow this [quickstart](includes/quickstarts/rest-api.md) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
+You can use the [REST APIs](https://aka.ms/ct-authoring-swagger) to build your custom models. Follow this [quickstart](quickstart.md?pivots=rest-api) to get started with creating a project and creating a model through APIs for examples of how to call the Authoring API.
 
 ## What is the recommended CI/CD process?
 
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md)to learn about maximum number of trained models with the same project. When you train a new model your dataset is [split](how-to/train-model.md#data-splits) randomly into training and testing sets, so there is no guarantee that the reflected model evaluation is about the same test set, and the results are not comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
+You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md)to learn about maximum number of trained models with the same project. When you train a new model your dataset is [split](how-to/train-model.md#data-split) randomly into training and testing sets, so there is no guarantee that the reflected model evaluation is about the same test set, and the results are not comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
 
 ## Does a low or high model score guarantee bad or good performance in production?
 
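The CI/CD answer above ends by recommending that you develop your own test set so that two models can be measured on identical documents. As a sketch of one way to do that locally (my assumption, not a procedure from the docs), pin the split with a fixed seed:

```python
# Sketch: carve out a fixed, reusable test set so successive models are
# evaluated on the same documents. The data below is hypothetical.
from sklearn.model_selection import train_test_split

docs = [
    ("Please renew the service contract", "Legal"),
    ("Invoice #1234 is 30 days overdue", "Finance"),
    ("Termination clause review needed", "Legal"),
    ("Q3 budget forecast attached", "Finance"),
    ("NDA signature required", "Legal"),
    ("Expense report reimbursement", "Finance"),
] * 5  # repeated only so the split has enough rows per class

texts, labels = zip(*docs)

# A fixed random_state pins the shuffle: every retraining round is scored
# against the same held-out documents, so results stay comparable.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)
```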

@@ -55,15 +55,15 @@ See the [data selection and schema design](how-to/design-schema.md) article for
 
 * View the model [confusion matrix](how-to/view-model-evaluation.md). If you notice that a certain entity type is frequently not predicted correctly, consider adding more tagged instances for this class. If you notice that two entity types are frequently predicted as each other, this means the schema is ambiguous and you should consider merging them both into one entity type for better performance.
 
-* [Examine the data distribution](how-to/improve-model.md#examine-data-distribution-from-language-studio). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
+* [Examine the data distribution](how-to/improve-model.md#examine-data-distribution). If one of the entity types has a lot more tagged instances than the others, your model may be biased towards this type. Add more data to the other entity types or remove examples from the dominating type.
 
 * Learn more about [data selection and schema design](how-to/design-schema.md).
 
 * [Review your test set](how-to/improve-model.md) to see predicted and tagged entities side-by-side so you can get a better idea of your model performance, and decide if any changes in the schema or the tags are necessary.
 
 ## Why do I get different results when I retrain my model?
 
-* When you train a new model your dataset is [split](how-to/train-model.md#data-splits) randomly into train and test sets so there is no guarantee that the reflected model evaluation is on the same test set, so results are not comparable.
+* When you train a new model your dataset is [split](how-to/train-model.md#data-split) randomly into train and test sets so there is no guarantee that the reflected model evaluation is on the same test set, so results are not comparable.
 
 * If you're retraining the same model, your test set will be the same but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough and this is a factor of how representative and distinct your data is and the quality of your tagged data.
 
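The "examine the data distribution" bullet above describes the Language Studio view; as a quick local alternative (an illustrative assumption, not tooling referenced by this commit), the same skew check takes a few lines over your exported tags:

```python
# Sketch: count tagged instances per entity type to spot class imbalance.
# The tag list below is hypothetical.
from collections import Counter

tagged_entities = ["Person", "Person", "Location", "Person", "Date",
                   "Person", "Location", "Person", "Person", "Date"]

counts = Counter(tagged_entities)
total = sum(counts.values())
for entity_type, n in counts.most_common():
    print(f"{entity_type:10s} {n:4d}  ({n / total:.0%})")
# A heavily skewed type (here Person at 60%) suggests adding examples for
# the rare types or trimming the dominant one, as the FAQ advises.
```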
