diff --git a/Workloads-Specific/DataScience/How_AutoML/Train_MLmodel_AutoML.ipynb b/Workloads-Specific/DataScience/How_AutoML/Train_MLmodel_AutoML.ipynb
new file mode 100644
index 0000000..6d80469
--- /dev/null
+++ b/Workloads-Specific/DataScience/How_AutoML/Train_MLmodel_AutoML.ipynb
@@ -0,0 +1,732 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "d8d36bfe-0884-4c73-a24f-175233d98bdf",
+ "metadata": {},
+ "source": [
+ "# Demonstration: Train a ML model with AutoML\n",
+ "\n",
+ "## Introduction\n",
+ "\n",
+ "This notebook is automatically generated by the Fabric low-code AutoML wizard based on your selections. Whether you're building a regression model, a classifier, or another machine-learning solution, this tool simplifies the process by transforming your goals into executable code. You can easily modify any settings or code snippets to better align with your requirements.\n",
+ "\n",
+ "### What is FLAML?\n",
+ "\n",
+ "[FLAML (Fast and Lightweight Automated Machine Learning)](https://aka.ms/fabric-automl) is an open-source AutoML library designed to quickly and efficiently find the best machine learning models and hyperparameters. FLAML optimizes for speed, accuracy, and cost, making it an excellent choice for a wide range of machine-learning tasks.\n",
+ "\n",
+ "### Steps in this notebook\n",
+ "\n",
+ "1. **Load the data**: Import your dataset.\n",
+ "2. **Generate features**: Automatically transform and preprocess your data to improve model performance.\n",
+ "3. **Use AutoML to find your best model**: Use FLAML to automatically select the most suitable model and optimize its parameters.\n",
+ "4. **Save the final machine learning model**: Store the trained model for future use.\n",
+ "5. **Generate predictions**: Use the saved model to predict outcomes on new data.\n",
+ "\n",
+ "> [!IMPORTANT]\n",
+ "> **Automated ML is currently supported on Fabric Runtimes 1.2+ or any Fabric environment with Spark 3.4+.**\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "592531fe-7a06-4837-a5eb-2650113cbf13",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "%pip install scikit-learn==1.5.1\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "14223c8d-f82a-44ef-a466-e03ebcc6b430",
+ "metadata": {},
+ "source": [
+ "### Default notebook optimization\n",
+ "\n",
+ "This cell configures the logging and warning settings to reduce unnecessary output and focus on critical information. It suppresses specific warnings and logs from the underlying libraries, ensuring a cleaner and more readable notebook experience."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9878de39-d1c1-485b-9058-e429715b5cd8",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "import logging\n",
+ "import warnings\n",
+ " \n",
+ "logging.getLogger('synapse.ml').setLevel(logging.CRITICAL)\n",
+ "logging.getLogger('mlflow.utils').setLevel(logging.CRITICAL)\n",
+ "warnings.simplefilter('ignore', category=FutureWarning)\n",
+ "warnings.simplefilter('ignore', category=UserWarning)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "67153540-7117-4adb-9766-b701ff7fc616",
+ "metadata": {},
+ "source": [
+ "## Step 1: Load the Data\n",
+ "\n",
+ "This cell is responsible for importing the raw data from the specified source into the notebook environment. The data could come from various sources, such as a file or table in your lakehouse.\n",
+ "\n",
+ "Once loaded, this data will serve as the input for subsequent steps, such as data transformation, model training, and evaluation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "63113dbc-16ab-4932-97c2-b0f54cfe9b3f",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "import re\n",
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "\n",
+ "df = spark.read.format(\"delta\").load(\n",
+ " \"Tables/2020orders\"\n",
+ ").cache()\n",
+ "# Transform to pandas according to the selected models\n",
+ "X = df.limit(100000).toPandas() # Use df.toPandas() to use all the data\n",
+ "X = X.rename(columns = lambda c:re.sub('[^A-Za-z0-9_]+', '_', c)) # Replace not supported characters in column name with underscore to avoid invalid character for model training and saving\n",
+ "\n",
+ "target_col = re.sub('[^A-Za-z0-9_]+', '_', \"price\")\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ae621756-f044-4553-8509-d64973d5d903",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "display(X)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "761a4b6e-6698-4bd3-948c-3e5274efbaad",
+ "metadata": {},
+ "source": [
+ "## Step 2: Generate features\n",
+ "\n",
+ "Featurization is the process of transforming raw data into a format optimized for training a machine learning model. It ensures the model can access the most relevant information, significantly impacting its accuracy and performance.\n",
+ "\n",
+ "This step applies various techniques to refine the data, enhance its quality, and make it compatible with the selected algorithms, helping the model learn patterns more effectively."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d7e7a55b-434d-42c3-b457-88b89dd57461",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Handle class imbalance\n",
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "\n",
+ "distribution = X[target_col].value_counts(normalize=True)\n",
+ "dominant_class_proportion = distribution.max()\n",
+ "\n",
+ "distribution.plot(kind='bar')\n",
+ "plt.title(\"Class Distribution\")\n",
+ "plt.xlabel(\"Class\")\n",
+ "plt.ylabel(\"Proportion\")\n",
+ "plt.show()\n",
+ "\n",
+ "if dominant_class_proportion > 0.8:\n",
+ " print(f\"The dataset is imbalanced. The dominant class has {dominant_class_proportion * 100:.2f}% of the samples.\")\n",
+ " print(\"You may need to handle class imbalance before training the model.\")\n",
+ " print(\"You can use techniques such as oversampling, undersampling, or using class weights to handle class imbalance.\")\n",
+ " print(\"For more information, see https://aka.ms/smote-example\")\n",
+ "else:\n",
+ " print(\"The dataset is balanced.\")\n"
+ ]
+ },
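+ {
+ "cell_type": "markdown",
+ "id": "smote-sketch-markdown",
+ "metadata": {},
+ "source": [
+ "> The next cell is an illustrative, self-contained sketch of oversampling with SMOTE on synthetic data; it is not part of the wizard-generated workflow and assumes the `imbalanced-learn` package is installed. To rebalance your own data, apply `fit_resample` to the numeric training features produced in Step 2.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "smote-sketch-code",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Hypothetical sketch: rebalance classes with SMOTE on a tiny synthetic dataset.\n",
+ "from collections import Counter\n",
+ "\n",
+ "import numpy as np\n",
+ "\n",
+ "try:\n",
+ "    from imblearn.over_sampling import SMOTE\n",
+ "\n",
+ "    rng = np.random.default_rng(41)\n",
+ "    X_demo = rng.normal(size=(100, 3))  # 100 samples, 3 numeric features\n",
+ "    y_demo = np.array([0] * 90 + [1] * 10)  # 90/10 class imbalance\n",
+ "\n",
+ "    X_res, y_res = SMOTE(random_state=41).fit_resample(X_demo, y_demo)\n",
+ "    print(\"Before:\", Counter(y_demo), \"After:\", Counter(y_res))\n",
+ "except ImportError:\n",
+ "    print(\"imbalanced-learn is not installed; run %pip install imbalanced-learn first.\")\n"
+ ]
+ },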
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e37b9c43-7220-4b0c-9fa3-6ad9226dc85e",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Set Functions if needed for Featurization\n",
+ "def create_fillna_processor(\n",
+ " df, mean_features=None, median_features=None, mode_features=None\n",
+ "):\n",
+ " \"\"\"\n",
+ " Create a ColumnTransformer that fills missing values in a DataFrame using different strategies\n",
+ " based on the skewness of the numerical features and the specified feature lists.\n",
+ "\n",
+ " Parameters:\n",
+ " df (pd.DataFrame): The input DataFrame.\n",
+ " mean_features (list, optional): List of features to impute using the mean strategy. Defaults to None.\n",
+ " median_features (list, optional): List of features to impute using the median strategy. Defaults to None.\n",
+ " mode_features (list, optional): List of features to impute using the mode strategy. Defaults to None.\n",
+ "\n",
+ " Returns:\n",
+ " ColumnTransformer: A fitted ColumnTransformer that can be used to transform the DataFrame.\n",
+ " list: List of all features supported by SimpleImputer in the DataFrame.\n",
+ " list: List of datetime features in the DataFrame.\n",
+ " \"\"\"\n",
+ " if mean_features is None:\n",
+ " mean_features = []\n",
+ " if median_features is None:\n",
+ " median_features = []\n",
+ " if mode_features is None:\n",
+ " mode_features = []\n",
+ " all_features = mean_features + median_features + mode_features\n",
+ " # Group features by their imputation needs\n",
+ " mean_features = [\n",
+ " col\n",
+ " for col in df.select_dtypes(include=[\"number\"]).columns\n",
+ " if df[col].skew(skipna=True) <= 1 and col not in all_features\n",
+ " ] + mean_features\n",
+ " median_features = [\n",
+ " col\n",
+ " for col in df.select_dtypes(include=[\"number\"]).columns\n",
+ " if df[col].skew(skipna=True) > 1 and col not in all_features\n",
+ " ] + median_features\n",
+ " all_features = mean_features + median_features\n",
+ " datetime_features = df.select_dtypes(include=[\"datetime\"]).columns.tolist()\n",
+ " mode_features = [col for col in df.columns.tolist() if col not in all_features + datetime_features]\n",
+ "\n",
+ " transformers = []\n",
+ "\n",
+ " if mean_features:\n",
+ " transformers.append(\n",
+ " (\"mean_imputer\", SimpleImputer(strategy=\"mean\"), mean_features)\n",
+ " )\n",
+ " if median_features:\n",
+ " transformers.append(\n",
+ " (\"median_imputer\", SimpleImputer(strategy=\"median\"), median_features)\n",
+ " )\n",
+ " if mode_features:\n",
+ " transformers.append(\n",
+ " (\"mode_imputer\", SimpleImputer(strategy=\"most_frequent\"), mode_features)\n",
+ " )\n",
+ "\n",
+ " column_transformer = ColumnTransformer(transformers=transformers)\n",
+ " all_features = mean_features + median_features + mode_features\n",
+ "\n",
+ " return column_transformer.fit(df), all_features, datetime_features\n",
+ "\n",
+ "\n",
+ "def fillna(df, processor, all_features, datetime_features):\n",
+ " \"\"\"\n",
+ " Fill missing values in a DataFrame using a specified processor and mode imputation.\n",
+ "\n",
+ " Parameters:\n",
+ " df (pd.DataFrame): The input DataFrame with missing values.\n",
+ " processor (object): An object with a `transform` method that processes the DataFrame.\n",
+ " all_features (list): List of all features supported by SimpleImputer in the DataFrame.\n",
+ " datetime_features (list): List of datetime features in the DataFrame.\n",
+ "\n",
+ " Returns:\n",
+ " pd.DataFrame: A DataFrame with missing values filled.\n",
+ " \"\"\"\n",
+ " filled_array = processor.transform(df)\n",
+ " filled_df = pd.DataFrame(filled_array, columns=all_features)\n",
+ " if datetime_features:\n",
+ " datetime_data = df[datetime_features]\n",
+ " datetime_data.ffill()\n",
+ " filled_df = pd.concat([datetime_data, filled_df], axis=1)\n",
+ " for col in df.columns:\n",
+ " filled_df[col].fillna(filled_df[col].mode()[0], inplace=True)\n",
+ "\n",
+ " return filled_df\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c9c6728f-7385-4c76-8284-6708d67bc5c7",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from sklearn.pipeline import Pipeline\n",
+ "from sklearn.impute import SimpleImputer\n",
+ "from sklearn.compose import ColumnTransformer\n",
+ "\n",
+ "\n",
+ "# convert object type to nearest dtype\n",
+ "X = X.convert_dtypes()\n",
+ "X = X.dropna(axis=1, how='all')\n",
+ "\n",
+ "# select columns for model training\n",
+ "X = X.select_dtypes(include=['number', 'datetime', 'category'])\n",
+ "\n",
+ "from sklearn.model_selection import train_test_split\n",
+ "\n",
+ "# You may need to update the test_size based on your scenario\n",
+ "X_train, X_test = train_test_split(X, test_size=0.2, random_state=41)\n",
+ "\n",
+ "mean_features, median_features, mode_features = [], [], []\n",
+ " \n",
+ "preprocessor, all_features, datetime_features = create_fillna_processor(X_train, mean_features, median_features, mode_features)\n",
+ "X_train = fillna(X_train, preprocessor, all_features, datetime_features)\n",
+ "X_test = fillna(X_test, preprocessor, all_features, datetime_features)\n",
+ " \n",
+ "y_train = X_train.pop(target_col)\n",
+ "y_test = X_test.pop(target_col)\n",
+ "\n",
+ "display(X_train[:10])\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3b4c43b4-9416-43d9-9ed8-a8d32858250d",
+ "metadata": {},
+ "source": [
+ "## Step 3: Use AutoML to find your best model\n",
+ "\n",
+ "We will now use FLAML's AutoML to automatically find the best machine learning model for our data. AutoML (Automated Machine Learning) simplifies the model selection process by automatically testing and tuning various algorithms and configurations, helping us quickly identify the most effective model with minimal manual effort."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f287fb60-1e24-45f9-9493-1c563c797702",
+ "metadata": {},
+ "source": [
+ "### Tracking results with experiments in Fabric\n",
+ "\n",
+ "Experiments in Fabric let you track the results of your AutoML process, providing a comprehensive view of all the metrics and parameters from your trials."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ff2e3568-ce88-4a63-8bf8-c768a6cfdc3c",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# MLFlow Logging Related\n",
+ "\n",
+ "import mlflow\n",
+ "\n",
+ "mlflow.autolog(exclusive=False)\n",
+ "mlflow.set_experiment(\"exp-test\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4f02f65d-bc49-4090-b00b-2bb28d59e754",
+ "metadata": {},
+ "source": [
+ "#### Configure the AutoML trial and settings\n",
+ "\n",
+ "These configurations are driven by the AutoML mode and task selected in the wizard. For example, if you select \"quick prototype\", you'll see a setting for time budget."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d05dcde3-bf5f-43c5-a6fa-01e0a07affab",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Import the AutoML class from the FLAML package\n",
+ "import flaml\n",
+ "from flaml import AutoML\n",
+ "\n",
+ "# Define AutoML settings\n",
+ "settings = {\n",
+ " \"time_budget\": 120, # Total running time in seconds\n",
+ " \"task\": \"binary\", \n",
+ " \"log_file_name\": \"flaml_experiment.log\", # FLAML log file\n",
+ " \"eval_method\": \"cv\",\n",
+ " \"n_splits\": 3,\n",
+ " \"max_iter\": 10, \n",
+ " \"force_cancel\": True, \n",
+ " \"seed\": 41 , # Random seed \n",
+ " \"mlflow_exp_name\": \"exp-test\", # MLflow experiment name\n",
+ " \"use_spark\": True, # whether to use Spark for distributed training\n",
+ " \"n_concurrent_trials\": 3, # the maximum number of concurrent trials \n",
+ " \"verbose\": 1, \n",
+ " \"featurization\": \"auto\", \n",
+ "}\n",
+ "\n",
+ "if flaml.__version__ > \"2.3.3\":\n",
+ " settings[\"entrypoint\"] = \"low-code\"\n",
+ "\n",
+ "# Create an AutoML instance\n",
+ "automl = AutoML(**settings)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "fc13e255-3bfb-4b54-9337-7f0fd070dbbc",
+ "metadata": {},
+ "source": [
+ "#### Run the AutoML trial\n",
+ "\n",
+ "Run the AutoML trial, with all trials being tracked as experiment runs. The trial is performed on the processed dataset, using the `Exited` variable as the target, and applying the defined configurations for optimal model selection."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6c995371-878a-40be-a6ca-106181976ace",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "with mlflow.start_run(nested=True, run_name=\"exp-test-AutoMLModel\"):\n",
+ " automl.fit(\n",
+ " X_train=X_train, \n",
+ " y_train=y_train, # target column of the training data \n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0d052eef-0756-411e-8ab2-7fabd7a6076a",
+ "metadata": {},
+ "source": [
+ "## Step 4: Save the final machine learning model\n",
+ "\n",
+ "Upon completing the AutoML trial, you can now save the final, tuned model as an ML model in Fabric."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2ce45e61-6094-4faa-9c9a-e6350bc4de6b",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "model_path = f\"runs:/{automl.best_run_id}/model\"\n",
+ "\n",
+ "# Register the model to the MLflow registry\n",
+ "registered_model = mlflow.register_model(model_uri=model_path, name=\"exp-test-AutoMLModel\")\n",
+ "\n",
+ "# Print the registered model's name and version\n",
+ "print(f\"Model '{registered_model.name}' version {registered_model.version} registered successfully.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b628aab7-22c6-47e6-8b79-a7767b519830",
+ "metadata": {},
+ "source": [
+ "## Step 5: Generate predictions"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "993e8880-f55e-438c-8d2d-fb7215e63c63",
+ "metadata": {},
+ "source": [
+ "Microsoft Fabric lets you operationalize machine learning models with a scalable function called `PREDICT`, which supports batch scoring (or batch inferencing) in any compute engine. You can generate batch predictions directly from the Microsoft Fabric notebook or from a given ML model's item page. For more information on how to use `PREDICT`, see [Model scoring with PREDICT in Microsoft Fabric](https://aka.ms/fabric-predict)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "aa12ec97-d582-4a43-88c3-ddde42b7b44b",
+ "metadata": {},
+ "source": [
+ "1. Generate predictions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3c6f2b3a-ad30-4cf3-9740-9da5b90a859e",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "model_name = \"exp-test-AutoMLModel\"\n",
+ "from synapse.ml.predict import MLFlowTransformer\n",
+ "\n",
+ "feature_cols = X_train.columns.to_list()\n",
+ "model = MLFlowTransformer(\n",
+ " inputCols=feature_cols,\n",
+ " outputCol=target_col,\n",
+ " modelName=model_name,\n",
+ " modelVersion=registered_model.version,\n",
+ ")\n",
+ "\n",
+ "df_test = spark.createDataFrame(X_test)\n",
+ "batch_predictions = model.transform(df_test)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1af8b16c-cdb4-4add-8df5-5c179fffdb95",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "display(batch_predictions)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2642ffad-253b-4ea9-ac34-9ad0c3690f34",
+ "metadata": {},
+ "source": [
+ "2. Save the predictions to a table."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "fb16d367-0570-427c-a04a-2980b6e5d014",
+ "metadata": {
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "saved_name = \"2020orders_predictions\".replace(\".\", \"_\")\n",
+ "batch_predictions.write.mode(\"overwrite\").format(\"delta\").option(\"overwriteSchema\", \"true\").save(f\"Tables/{saved_name}\")"
+ ]
+ }
+ ],
+ "metadata": {
+ "automl_config": {
+ "finalDetails": {
+ "experimentName": "exp-test",
+ "model": {
+ "modelInput": "exp-test-AutoMLModel",
+ "modelSelection": "",
+ "modelType": "CreateNew"
+ },
+ "modelName": "exp-test-AutoMLModel",
+ "notebookName": "AutoML Sample Test - Demo ",
+ "parallelizationMethod": "trainMultiple"
+ },
+ "lakehouseInfo": {
+ "errMsg": "",
+ "lakehouseId": "3b406a22-8d06-40ef-9f97-8c2ab976f7a4",
+ "lakehouseName": "lake_samples",
+ "state": "ready",
+ "workspaceId": "98ea70b8-712f-49ac-9250-d737780bb594"
+ },
+ "mlModel": {
+ "duration": "-1",
+ "endEarly": false,
+ "metric": "",
+ "mode": "QuickProto",
+ "task": "Binary Classification"
+ },
+ "step": 5,
+ "tableInfo": {
+ "columns": [
+ {
+ "name": "ID",
+ "nullable": true,
+ "type": "string"
+ },
+ {
+ "name": "Count",
+ "nullable": true,
+ "type": "integer"
+ },
+ {
+ "name": "Date",
+ "nullable": true,
+ "type": "string"
+ },
+ {
+ "name": "Name",
+ "nullable": true,
+ "type": "string"
+ },
+ {
+ "name": "Style",
+ "nullable": true,
+ "type": "string"
+ },
+ {
+ "name": "price",
+ "nullable": true,
+ "type": "double"
+ },
+ {
+ "name": "tax",
+ "nullable": true,
+ "type": "double"
+ }
+ ],
+ "tableInfo": {
+ "format": "",
+ "fullAbfsPath": "abfss://98ea70b8-712f-49ac-9250-d737780bb594@onelake.dfs.fabric.microsoft.com/3b406a22-8d06-40ef-9f97-8c2ab976f7a4/Tables/2020orders",
+ "isDeltaTable": true,
+ "name": "2020orders",
+ "relativePath": "Tables/2020orders",
+ "type": "MANAGED"
+ },
+ "type": "table"
+ },
+ "trainData": {
+ "enableFeaturization": true,
+ "mappingColumns": [
+ {
+ "imputationMethod": "Auto",
+ "name": "ID",
+ "nullable": true,
+ "type": "string",
+ "valueType": "Auto"
+ },
+ {
+ "imputationMethod": "Auto",
+ "name": "Count",
+ "nullable": true,
+ "type": "integer",
+ "valueType": "Auto"
+ },
+ {
+ "imputationMethod": "Auto",
+ "name": "Date",
+ "nullable": true,
+ "type": "string",
+ "valueType": "Auto"
+ },
+ {
+ "imputationMethod": "Auto",
+ "name": "Name",
+ "nullable": true,
+ "type": "string",
+ "valueType": "Auto"
+ },
+ {
+ "imputationMethod": "Auto",
+ "name": "Style",
+ "nullable": true,
+ "type": "string",
+ "valueType": "Auto"
+ },
+ {
+ "imputationMethod": "Auto",
+ "name": "price",
+ "nullable": true,
+ "type": "double",
+ "valueType": "Auto"
+ },
+ {
+ "imputationMethod": "Auto",
+ "name": "tax",
+ "nullable": true,
+ "type": "double",
+ "valueType": "Auto"
+ }
+ ],
+ "predictColumn": "price"
+ }
+ },
+ "dependencies": {
+ "lakehouse": {
+ "default_lakehouse": "3b406a22-8d06-40ef-9f97-8c2ab976f7a4",
+ "default_lakehouse_name": "lake_samples",
+ "default_lakehouse_workspace_id": "98ea70b8-712f-49ac-9250-d737780bb594",
+ "known_lakehouses": [
+ {
+ "id": "3b406a22-8d06-40ef-9f97-8c2ab976f7a4"
+ }
+ ]
+ }
+ },
+ "kernel_info": {
+ "name": "synapse_pyspark"
+ },
+ "kernelspec": {
+ "display_name": "Synapse PySpark",
+ "language": "Python",
+ "name": "synapse_pyspark"
+ },
+ "language_info": {
+ "name": "python"
+ },
+ "microsoft": {
+ "language": "python",
+ "language_group": "synapse_pyspark",
+ "ms_spell_check": {
+ "ms_spell_check_language": "en"
+ }
+ },
+ "nteract": {
+ "version": "nteract-front-end@1.0.0"
+ },
+ "spark_compute": {
+ "compute_id": "/trident/default",
+ "session_options": {
+ "conf": {
+ "spark.synapse.nbs.session.timeout": "1200000"
+ }
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/Workloads-Specific/DataWarehouse/Medallion_Architecture/README.md b/Workloads-Specific/DataWarehouse/Medallion_Architecture/README.md
index 0feec6b..669a05d 100644
--- a/Workloads-Specific/DataWarehouse/Medallion_Architecture/README.md
+++ b/Workloads-Specific/DataWarehouse/Medallion_Architecture/README.md
@@ -1,4 +1,4 @@
-# Demostration: Medallion Architecture Overview
+# Demonstration: Medallion Architecture Overview
Costa Rica
diff --git a/Workloads-Specific/OneLake/BestPractices.md b/Workloads-Specific/OneLake/BestPractices.md
deleted file mode 100644
index 7ccee2f..0000000
--- a/Workloads-Specific/OneLake/BestPractices.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# OneLake - Best Practices Overview
-
-Costa Rica
-
-[](https://github.com)
-[](https://github.com/)
-[brown9804](https://github.com/brown9804)
-
-Last updated: 2025-05-03
-
-----------
-
-
-List of References (Click to expand)
-
-
-
-
-
-Total Visitors
-
diff --git a/Workloads-Specific/RealTimeIntelligence/BestPractices.md b/Workloads-Specific/RealTimeIntelligence/BestPractices.md
index 6369ab1..99a3fc8 100644
--- a/Workloads-Specific/RealTimeIntelligence/BestPractices.md
+++ b/Workloads-Specific/RealTimeIntelligence/BestPractices.md
@@ -13,8 +13,48 @@ Last updated: 2025-05-03
List of References (Click to expand)
+- [Real-Time Intelligence documentation in Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/real-time-intelligence/)
+- [What is Real-Time Intelligence?](https://learn.microsoft.com/en-us/fabric/real-time-intelligence/overview)
+- [Implement medallion architecture in Real-Time Intelligence](https://learn.microsoft.com/en-us/fabric/real-time-intelligence/architecture-medallion)
+
+
+
+
+Table of Contents (Click to expand)
+
+- [Structured Eventhouse Implementation](#structured-eventhouse-implementation)
+- [Interactive Real-Time Dashboard Creation](#interactive-real-time-dashboard-creation)
+- [Efficient Eventstream Management](#efficient-eventstream-management)
+- [Dynamic Activator Configuration](#dynamic-activator-configuration)
+
+> Ensure that your real-time intelligence system in Microsoft Fabric is designed for both rapid ingestion and instantaneous analysis. By structuring your Eventhouse, leveraging powerful KQL query sets, building dynamic dashboards, managing high-throughput event streams, and configuring rule-based Activator triggers, you can achieve actionable insights and automated responses as events occur.
+
+
+

+
+
+## Structured Eventhouse Implementation
+
+> Design your Eventhouse to serve as the backbone of your real-time data ingestion. Organize event data using defined schemas, partitioning strategies, and indexing to optimize for both immediate query performance and historical analysis. This approach enhances data governance and ensures that critical event details are captured for quick retrieval and auditing. For example, create dedicated partitions in Eventhouse based on time windows or event type, and set up policies to automatically archive older events while retaining a hot partition for current data. This enables rapid detection of anomalies and supports retrospective analysis when patterns or trends need to be reviewed.
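+
+A minimal sketch of this idea, assuming the `azure-kusto-data` Python client and placeholder cluster, database, and table names (`EventsDB`, `Events`), with illustrative policy values to adapt to your own retention needs:
+
+```python
+# Hypothetical sketch: tune hot cache and retention policies on an Eventhouse table.
+from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
+
+cluster = "https://<your-eventhouse-query-uri>"  # placeholder query URI
+client = KustoClient(KustoConnectionStringBuilder.with_az_cli_authentication(cluster))
+
+# Keep the last 7 days hot for fast interactive queries...
+client.execute_mgmt("EventsDB", ".alter table Events policy caching hot = 7d")
+
+# ...while soft-retaining a full year for retrospective analysis.
+client.execute_mgmt("EventsDB", ".alter-merge table Events policy retention softdelete = 365d")
+```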
+
+## Interactive Real-Time Dashboard Creation
+
+> Build dashboards that dynamically update as new data flows in. Utilize real-time visualizations, clear metric hierarchies, and fast refresh cycles to ensure stakeholders receive immediate feedback on key performance indicators (KPIs) and system health. This empowers decision-makers to respond quickly to emerging issues. For example, implement drill-down capabilities so that clicking on an alert leads to detailed logs derived from the Eventhouse via KQL queries.
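+
+As an illustrative sketch, the query below is the kind of KQL a dashboard tile or drill-down might run; the `Events` table and its `Timestamp`, `Level`, and `EventType` columns are hypothetical, and the client setup mirrors the sketch above:
+
+```python
+# Hypothetical sketch: aggregate recent error events into 5-minute bins for a dashboard tile.
+from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
+
+client = KustoClient(
+    KustoConnectionStringBuilder.with_az_cli_authentication("https://<your-eventhouse-query-uri>")
+)
+
+query = """
+Events
+| where Timestamp > ago(1h) and Level == "Error"
+| summarize ErrorCount = count() by bin(Timestamp, 5m), EventType
+| order by Timestamp desc
+"""
+
+for row in client.execute("EventsDB", query).primary_results[0]:
+    print(row["Timestamp"], row["EventType"], row["ErrorCount"])
+```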
+
+## Efficient Eventstream Management
+
+> Configure Eventstream with dynamic scaling and load balancing. For example, integrate pre-processing steps that filter out noise and enrich events before they enter the Eventhouse, and monitor key metrics (such as latency and event volume) to automatically adjust resource allocation based on current demand.
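+
+A minimal sketch of such pre-processing, assuming a Spark notebook (the ambient `spark` session) reading from an Eventstream custom endpoint over the Kafka protocol; the endpoint, topic, schema, and paths are placeholders:
+
+```python
+# Hypothetical sketch: filter out noise and enrich events before they land in storage.
+from pyspark.sql import functions as F
+from pyspark.sql.types import StringType, StructField, StructType, TimestampType
+
+schema = StructType([
+    StructField("event_type", StringType()),
+    StructField("payload", StringType()),
+    StructField("ts", TimestampType()),
+])
+
+raw = (
+    spark.readStream.format("kafka")
+    .option("kafka.bootstrap.servers", "<eventstream-endpoint>:9093")  # placeholder endpoint
+    .option("subscribe", "<topic>")  # placeholder topic
+    .load()
+)
+
+clean = (
+    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
+    .select("e.*")
+    .where(F.col("event_type") != "heartbeat")        # drop noisy keep-alive events
+    .withColumn("ingest_region", F.lit("<region>"))   # enrich with static metadata
+)
+
+clean.writeStream.format("delta").option(
+    "checkpointLocation", "Files/checkpoints/events"  # placeholder checkpoint path
+).start("Tables/clean_events")
+```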
+
+## Dynamic Activator Configuration
+
+> Implement Activator to respond to events with rule-based triggers that can automatically initiate workflows, send notifications, or activate remediation processes. Ensure that your activation rules are flexible and customizable so that actions can be fine-tuned to the specific nuances of your environment. For example, set up Activator rules that trigger alerts or automated remedial actions when certain thresholds are reached, such as a sudden spike in error events or a dip in transaction volumes; configure the system to send an SMS or email alert when abnormal patterns are detected, and automatically adjust system parameters via an integrated ITSM tool.
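+
+Activator rules themselves are configured in the Fabric UI rather than in code, but the sketch below illustrates the shape of such a rule, a threshold check over a metric plus an alert action, with purely hypothetical names and values:
+
+```python
+# Hypothetical sketch of the rule shape an Activator trigger encodes:
+# alert when a monitored metric crosses a threshold.
+from dataclasses import dataclass
+
+
+@dataclass
+class Rule:
+    metric: str
+    threshold: float
+    action: str  # in Activator this maps to a built-in action (email, Teams, pipeline run)
+
+
+def evaluate(rule: Rule, current_value: float) -> None:
+    if current_value > rule.threshold:
+        print(f"ALERT [{rule.action}]: {rule.metric}={current_value} exceeds {rule.threshold}")
+
+
+# Fire when error events spike above 50 per minute.
+error_spike = Rule(metric="errors_per_minute", threshold=50.0, action="email")
+evaluate(error_spike, current_value=72.0)
+```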
+
+Click to read [Demonstration: Automating Pipeline Execution with Activator](./FabricActivatorRulePipeline), which shows how to set up Microsoft Fabric Activator to automate workflows by detecting file creation events in a storage system and triggering another pipeline to run.
+
+
+
Total Visitors

diff --git a/Monitoring-Observability/FabricActivatorRulePipeline/GeneratesRandomData.ipynb b/Workloads-Specific/RealTimeIntelligence/FabricActivatorRulePipeline/GeneratesRandomData.ipynb
similarity index 100%
rename from Monitoring-Observability/FabricActivatorRulePipeline/GeneratesRandomData.ipynb
rename to Workloads-Specific/RealTimeIntelligence/FabricActivatorRulePipeline/GeneratesRandomData.ipynb
diff --git a/Monitoring-Observability/FabricActivatorRulePipeline/README.md b/Workloads-Specific/RealTimeIntelligence/FabricActivatorRulePipeline/README.md
similarity index 98%
rename from Monitoring-Observability/FabricActivatorRulePipeline/README.md
rename to Workloads-Specific/RealTimeIntelligence/FabricActivatorRulePipeline/README.md
index 99d14b7..256c6e7 100644
--- a/Monitoring-Observability/FabricActivatorRulePipeline/README.md
+++ b/Workloads-Specific/RealTimeIntelligence/FabricActivatorRulePipeline/README.md
@@ -1,4 +1,4 @@
-# Microsoft Fabric: Automating Pipeline Execution with Activator
+# Demonstration: Automating Pipeline Execution with Activator
Costa Rica