diff --git a/.github/workflows/update-md-date.yml b/.github/workflows/update-md-date.yml
index 96dc9d7..ac7a96d 100644
--- a/.github/workflows/update-md-date.yml
+++ b/.github/workflows/update-md-date.yml
@@ -7,6 +7,7 @@ on:
 
 permissions:
   contents: write
+  pull-requests: write
 
 jobs:
   update-date:
@@ -35,7 +36,12 @@ jobs:
         run: python .github/workflows/update_date.py
 
       - name: Commit changes
+        env:
+          TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: |
+          git fetch origin ${{ github.event.pull_request.head.ref }}
+          git pull --rebase origin ${{ github.event.pull_request.head.ref }} || echo "No rebase needed"
           git add -A
           git commit -m "Update last modified date in Markdown files" || echo "No changes to commit"
+          git remote set-url origin https://x-access-token:${TOKEN}@github.com/${{ github.repository }}
           git push origin HEAD:${{ github.event.pull_request.head.ref }}
diff --git a/.github/workflows/validate_and_fix_markdown.yml b/.github/workflows/validate_and_fix_markdown.yml
index 820d991..4cef7ef 100644
--- a/.github/workflows/validate_and_fix_markdown.yml
+++ b/.github/workflows/validate_and_fix_markdown.yml
@@ -7,6 +7,7 @@ on:
 
 permissions:
   contents: write
+  pull-requests: write
 
 jobs:
   validate-and-fix-markdown:
@@ -27,18 +28,18 @@ jobs:
         run: npm install -g markdownlint-cli
 
       - name: Lint and Fix Markdown files
-        run: markdownlint '**/*.md' --fix --config .github/.markdownlint.json
+        run: markdownlint '**/*.md' --fix --config .github/.markdownlint.json
 
       - name: Configure Git
         run: |
           git config --global user.email "github-actions[bot]@users.noreply.github.com"
           git config --global user.name "github-actions[bot]"
 
-      - name: Commit changes
+      - name: Commit and rebase changes
+        env:
+          PR_BRANCH: ${{ github.head_ref || github.ref_name }}
         run: |
-          git fetch origin
-          git checkout -b ${{ github.event.pull_request.head.ref }} origin/${{ github.event.pull_request.head.ref }}
           git add -A
           git commit -m "Fix Markdown syntax issues" || echo "No changes to commit"
-          git pull --rebase origin ${{ github.event.pull_request.head.ref }} || echo "No rebase needed"
-          git push origin HEAD:${{ github.event.pull_request.head.ref }}
+          git pull --rebase origin "$PR_BRANCH" || echo "No rebase needed"
+          git push origin HEAD:"$PR_BRANCH"
diff --git a/Workloads-Specific/DataScience/AI_integration/README.md b/Workloads-Specific/DataScience/AI_integration/README.md
index e9e2431..4c332f4 100644
--- a/Workloads-Specific/DataScience/AI_integration/README.md
+++ b/Workloads-Specific/DataScience/AI_integration/README.md
@@ -5,7 +5,7 @@ Costa Rica
 
 [![GitHub](https://img.shields.io/badge/--181717?logo=github&logoColor=ffffff)](https://github.com/) [brown9804](https://github.com/brown9804)
 
-Last updated: 2025-04-21
+Last updated: 2025-07-17
 
 ------------------------------------------
 
@@ -136,7 +136,7 @@ Tools in practice:
 
 ### Configure Azure OpenAI Service
 
 > [!NOTE]
-> Click [here](./src/fabric-llms-overview_sample.ipynb) to see all notebook
+> Click [here to see the full notebook](./src/fabric-llms-overview_sample.ipynb)
 
 1. **Set Up API Keys**: Ensure you have the API key and endpoint URL for your deployed model. Set these as environment variables
diff --git a/Workloads-Specific/DataWarehouse/BestPractices.md b/Workloads-Specific/DataWarehouse/BestPractices.md
index 131e2db..009067d 100644
--- a/Workloads-Specific/DataWarehouse/BestPractices.md
+++ b/Workloads-Specific/DataWarehouse/BestPractices.md
@@ -6,7 +6,7 @@ Costa Rica
 
 [![GitHub](https://img.shields.io/badge/--181717?logo=github&logoColor=ffffff)](https://github.com/) [brown9804](https://github.com/brown9804)
 
-Last updated: 2025-05-03
+Last updated: 2025-07-17
 
 ----------
 
@@ -68,7 +68,7 @@ Create notebooks that are segmented into distinct sections:
 
 ## Using Mirroring to Your Benefit
 
-> Mirroring offers a modern, efficient way to continuously and seamlessly access and ingest data from operational databases or data warehouses. It works by replicating a snapshot of the source database into OneLake, and then keeping that replica in near real-time sync with the original. This ensures that your data is always up to date and readily available for analytics or downstream processing. `As part of the value offering, each Fabric compute SKU includes a built-in allowance of free Mirroring storage, proportional to the compute capacity you provision. For example, provisioning an F64 SKU grants you 64 terabytes of free Mirroring storage. You only begin incurring OneLake storage charges if your mirrored data exceeds this free limit or if the compute capacity is paused.` Click [here](https://azure.microsoft.com/en-us/pricing/details/microsoft-fabric/?msockid=38ec3806873362243e122ce086486339) to read more about it.
+> Mirroring offers a modern, efficient way to continuously and seamlessly access and ingest data from operational databases or data warehouses. It works by replicating a snapshot of the source database into OneLake, and then keeping that replica in near real-time sync with the original. This ensures that your data is always up to date and readily available for analytics or downstream processing. `As part of the value offering, each Fabric compute SKU includes a built-in allowance of free Mirroring storage, proportional to the compute capacity you provision. For example, provisioning an F64 SKU grants you 64 terabytes of free Mirroring storage. You only begin incurring OneLake storage charges if your mirrored data exceeds this free limit or if the compute capacity is paused.` Click [here to read more about it](https://azure.microsoft.com/en-us/pricing/details/microsoft-fabric/?msockid=38ec3806873362243e122ce086486339).
 
 Centered Image
diff --git a/Workloads-Specific/DataWarehouse/Medallion_Architecture/README.md b/Workloads-Specific/DataWarehouse/Medallion_Architecture/README.md
index 669a05d..ca1f672 100644
--- a/Workloads-Specific/DataWarehouse/Medallion_Architecture/README.md
+++ b/Workloads-Specific/DataWarehouse/Medallion_Architecture/README.md
@@ -5,7 +5,7 @@ Costa Rica
 
 [![GitHub](https://img.shields.io/badge/--181717?logo=github&logoColor=ffffff)](https://github.com/) [brown9804](https://github.com/brown9804)
 
-Last updated: 2025-05-03
+Last updated: 2025-07-17
 
 ------------------------------------------
 
@@ -46,7 +46,7 @@ Last updated: 2025-05-03
 > This demo will be created step by step. Please note that Microsoft Fabric already assists by setting up the medallion flow for you.
 
 > [!IMPORTANT]
-> If you are not able to see the `auto-create report` option neither `copilot` be aware you need to enable AI features in your tenant, click [here](https://github.com/brown9804/MicrosoftCloudEssentialsHub/blob/main/0_Azure/2_AzureAnalytics/0_Fabric/demos/6_PBiCopilot.md#tenant-configuration) to see how.
+> If you cannot see the `auto-create report` option or `copilot`, be aware that you need to enable AI features in your tenant; click [here to see how](https://github.com/brown9804/MicrosoftCloudEssentialsHub/blob/main/0_Azure/2_AzureAnalytics/0_Fabric/demos/6_PBiCopilot.md#tenant-configuration).
 
 image
 
@@ -210,7 +210,7 @@ VALUES
 
 image
 
-   > If you want see more, click [here](./src/0_notebook_bronze_to_silver.ipynb) to see a sample of the notebook.
+   > If you want to see more, click [here to see a sample of the notebook](./src/0_notebook_bronze_to_silver.ipynb).
 
 image
 
@@ -228,7 +228,7 @@ VALUES
 
 image
 
-   > Applying some transformations: If you want see more, click [here](./src/1_notebook_silver_to_gold.ipynb) to see a sample of the notebook.
+   > Applying some transformations: if you want to see more, click [here to see a sample of the notebook](./src/1_notebook_silver_to_gold.ipynb).
 
 > **PySpark Code to Move Data from Silver to Gold**: