|
732 | 732 | "metadata": {}, |
733 | 733 | "source": [ |
734 | 734 | "## Storing results in MLflow\n", |
735 | | - "Storing evaluation results in CSVs is fine but not enough if you want to compare and track multiple evaluation runs. MLflow is a handy tool when it comes to tracking experiments. So we decided to use it to track all of `Pipeline.eval()` with reproducability of your experiments in mind." |
| 735 | + "Storing evaluation results in CSVs is fine but not enough if you want to compare and track multiple evaluation runs. MLflow is a handy tool when it comes to tracking experiments. So we decided to use it to track all of `Pipeline.eval()` with reproducibility of your experiments in mind." |
736 | 736 | ] |
737 | 737 | }, |
738 | 738 | { |
739 | 739 | "attachments": {}, |
740 | 740 | "cell_type": "markdown", |
741 | 741 | "metadata": {}, |
742 | 742 | "source": [ |
743 | | - "### Host your own MLflow or use deepset's public MLflow" |
744 | | - ] |
745 | | - }, |
746 | | - { |
747 | | - "attachments": {}, |
748 | | - "cell_type": "markdown", |
749 | | - "metadata": {}, |
750 | | - "source": [ |
751 | | - "If you don't want to use deepset's public MLflow instance under https://public-mlflow.deepset.ai, you can easily host it yourself." |
| 743 | + "### MLflow setup\n", |
| 744 | + "\n", |
| 745 | + "Uncomment the following cell to install and run MLflow locally (does not work in Colab). For other options, refer to the [MLflow documentation](https://www.mlflow.org/docs/latest/index.html)." |
752 | 746 | ] |
753 | 747 | }, |
754 | 748 | { |
|
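The setup cell that the new "MLflow setup" markdown refers to is not part of this hunk, but a minimal local setup usually looks like the sketch below. The commands, port, and resulting URI are illustrative assumptions, not the tutorial's exact cell.

```python
# Sketch of a local MLflow setup (assumptions: non-Colab environment, default port 5000).
# In a terminal, or in a notebook cell prefixed with "!":
#
#   pip install mlflow
#   mlflow ui --port 5000      # serves the tracking UI/server at http://localhost:5000
#
# This is the value you would later pass as experiment_tracking_uri.
LOCAL_MLFLOW_TRACKING_URI = "http://localhost:5000"
```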
907 | 901 | " evaluation_set_meta={\"name\": \"nq_dev_subset_v2.json\"},\n", |
908 | 902 | " pipeline_meta={\"name\": \"sparse-pipeline\"},\n", |
909 | 903 | " add_isolated_node_eval=True,\n", |
910 | | - " experiment_tracking_tool=\"mlflow\",\n", |
911 | | - " experiment_tracking_uri=\"https://public-mlflow.deepset.ai\",\n", |
| 904 | + " # experiment_tracking_tool=\"mlflow\", # UNCOMMENT TO USE MLFLOW\n", |
| 905 | + " # experiment_tracking_uri=\"YOUR-MLFLOW-TRACKING-URI\", # UNCOMMENT TO USE MLFLOW\n", |
912 | 906 | " reuse_index=True,\n", |
913 | 907 | ")" |
914 | 908 | ] |
|
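If you do bring up your own tracking server, re-enabling MLflow for the eval run above only requires the two commented-out arguments. A hedged sketch of those extra keyword arguments, with a placeholder local URI (an assumption, not something the tutorial prescribes):

```python
# Extra keyword arguments to re-enable MLflow tracking for the eval run above.
# The argument names come from the diff; the URI is a placeholder for your own instance.
tracking_kwargs = {
    "experiment_tracking_tool": "mlflow",
    "experiment_tracking_uri": "http://localhost:5000",
}
# Merge them into the call above, e.g. by passing **tracking_kwargs alongside the other arguments.
```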
948 | 942 | " evaluation_set_meta={\"name\": \"nq_dev_subset_v2.json\"},\n", |
949 | 943 | " pipeline_meta={\"name\": \"embedding-pipeline\"},\n", |
950 | 944 | " add_isolated_node_eval=True,\n", |
951 | | - " experiment_tracking_tool=\"mlflow\",\n", |
952 | | - " experiment_tracking_uri=\"https://public-mlflow.deepset.ai\",\n", |
| 945 | + " # experiment_tracking_tool=\"mlflow\", # UNCOMMENT TO USE MLFLOW\n", |
| 946 | + " # experiment_tracking_uri=\"YOUR-MLFLOW-TRACKING-URI\", # UNCOMMENT TO USE MLFLOW\n", |
953 | 947 | " reuse_index=True,\n", |
954 | 948 | " answer_scope=\"context\",\n", |
955 | 949 | ")" |
|
960 | 954 | "cell_type": "markdown", |
961 | 955 | "metadata": {}, |
962 | 956 | "source": [ |
963 | | - "You can now open MLflow (e.g. https://public-mlflow.deepset.ai/ if you used the public one hosted by deepset) and look for the haystack-eval-experiment experiment. Try out mlflow's compare function and have fun...\n", |
964 | | - "\n", |
965 | | - "Note that on our public mlflow instance we are not able to log artifacts like the evaluation results or the piplines.yaml file." |
| 957 | + "You can now open MLflow and look for the haystack-eval-experiment experiment. Try out mlflow's compare function and have fun..." |
966 | 958 | ] |
967 | 959 | }, |
968 | 960 | { |
|
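Besides the compare view in the UI, you can also pull the logged runs into a DataFrame and compare them programmatically. A minimal sketch, assuming a recent MLflow client and that your runs were logged to the experiment name used in the tutorial; the tracking URI and the metric-column filter are placeholders:

```python
import mlflow

# Point the client at whichever tracking server the eval runs were logged to
# (the URI here is a placeholder, not a prescribed value).
mlflow.set_tracking_uri("http://localhost:5000")

# Fetch all runs of the tutorial's experiment as a pandas DataFrame.
runs = mlflow.search_runs(experiment_names=["haystack-eval-experiment"])

# Show a few identifying columns plus any Reader/Retriever metrics that were logged
# (the exact metric column names depend on the pipeline that was evaluated).
print(runs.filter(regex="run_id|Reader|Retriever").head())
```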