Last updated: 2025-05-03

> Ensure that your data science workflows in Microsoft Fabric are built for rapid experimentation, efficient model management, and seamless deployment. Each element should be managed with clear versioning, detailed documentation, and reproducible environments, enabling a smooth transition from experimentation to production.
## ML Model Management
> Use model registries integrated within Fabric to store and version your models. Include a descriptive README, link relevant experiment IDs, and attach performance metrics such as accuracy, AUC, and confusion matrices. For example, link your production-ready model (v#.#) from a registered repository along with its associated validation metrics and deployment instructions.
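
Fabric's experiment and model items are built on MLflow, so a training run can log its validation metrics and register the resulting model in one pass. A minimal sketch, assuming a toy dataset and illustrative names (the `churn-classifier` experiment/model name is not from the text above):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Toy data standing in for your training set.
X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-classifier")  # illustrative experiment name
with mlflow.start_run(run_name="candidate-v1"):
    model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
    preds = model.predict(X_test)
    probs = model.predict_proba(X_test)[:, 1]

    # Attach the validation metrics the registry entry should carry.
    mlflow.log_metric("accuracy", accuracy_score(y_test, preds))
    mlflow.log_metric("auc", roc_auc_score(y_test, probs))
    mlflow.log_text(str(confusion_matrix(y_test, preds)), "confusion_matrix.txt")

    # Registering under a model name creates (or increments) a version in the registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-classifier")
```

Each `log_model` call with the same `registered_model_name` produces a new version, which is the "v#.#" you would link from the registry alongside its metrics and deployment instructions.
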
## Experiment Tracking & Management
> Set up an experiment dashboard that automatically logs training runs. For instance, record runs with various hyperparameter combinations, tag them with unique identifiers, and visualize comparative metrics across iterations. Such a dashboard can help you decide whether a model trained with early stopping or one trained for more epochs best meets your performance goals.
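
With the same MLflow tracking, a sweep can log one run per hyperparameter combination and then pull every run back as a DataFrame for side-by-side comparison. A sketch with illustrative names (the `hp-sweep` experiment and the `sweep_id` tag value are assumptions):

```python
from itertools import product

import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Toy data standing in for your training set.
X, y = make_classification(n_samples=1_000, random_state=0)

mlflow.set_experiment("hp-sweep")  # illustrative experiment name
for lr, n_trees in product([0.05, 0.1], [100, 300]):
    with mlflow.start_run(run_name=f"gb-lr{lr}-n{n_trees}"):
        mlflow.set_tag("sweep_id", "baseline-sweep-01")  # unique identifier for this sweep
        mlflow.log_params({"learning_rate": lr, "n_estimators": n_trees})
        model = GradientBoostingClassifier(learning_rate=lr, n_estimators=n_trees)
        cv_auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        mlflow.log_metric("cv_auc", cv_auc)

# Compare all runs from this sweep in one table.
runs = mlflow.search_runs(filter_string="tags.sweep_id = 'baseline-sweep-01'")
print(runs[["params.learning_rate", "params.n_estimators", "metrics.cv_auc"]])
```
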
> Create an environment file (e.g., Conda `environment.yml`) that lists all required Python packages and their versions. For example, specify TensorFlow 2.9, scikit-learn 1.0, and other dependencies so that every data scientist and deployment pipeline uses the exact setup. Use Microsoft Fabric workspaces to segregate development and production environments, ensuring that models are trained and evaluated in a consistent setting.
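
A minimal `environment.yml` along those lines; only the TensorFlow and scikit-learn pins come from the text above, while the environment name, channel, and Python version are illustrative:

```yaml
name: fabric-ds          # illustrative environment name
channels:
  - conda-forge
dependencies:
  - python=3.10          # assumed interpreter version
  - tensorflow=2.9       # pinned versions named in the guidance above
  - scikit-learn=1.0
  - mlflow               # plus whatever else your pipeline needs
```

Recreating it with `conda env create -f environment.yml` gives every data scientist and deployment pipeline the same resolved setup.
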
> Integrate the Data Agent into your pipeline to automatically validate incoming datasets for completeness and consistency. For instance, set up rules that flag missing data or out-of-range values and trigger notifications when anomalies are detected. Track and document these incidents to help refine the agent’s calibration, ensuring that data passing to your experiments meets quality standards.
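
The Data Agent's own rule-configuration surface is not shown in this document, so the sketch below only illustrates the kind of completeness and range checks such rules encode, written as plain pandas; the column names and bounds are hypothetical:

```python
import pandas as pd

REQUIRED_COLUMNS = ["customer_id", "signup_date", "monthly_spend"]  # hypothetical schema

def validate(df: pd.DataFrame) -> list[str]:
    """Return human-readable descriptions of anomalies (empty list = clean batch)."""
    issues = []
    missing_cols = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing_cols:
        issues.append(f"missing columns: {missing_cols}")
    # Completeness: flag null values in whichever required columns are present.
    null_counts = df[df.columns.intersection(REQUIRED_COLUMNS)].isna().sum()
    for col, n in null_counts[null_counts > 0].items():
        issues.append(f"{col}: {n} missing values")
    # Range check: spend must fall inside hypothetical business bounds.
    if "monthly_spend" in df.columns:
        bad = df[(df["monthly_spend"] < 0) | (df["monthly_spend"] > 100_000)]
        if len(bad):
            issues.append(f"monthly_spend: {len(bad)} out-of-range rows")
    return issues

batch = pd.DataFrame({"customer_id": [1, 2],
                      "signup_date": [None, "2025-01-02"],
                      "monthly_spend": [49.0, -5.0]})
for issue in validate(batch):
    print("FLAG:", issue)  # in a pipeline, persist these incidents and notify owners
```
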