
Commit 3d779d6

Merge branch 'main' into copilot/adjust-ci-to-use-netlify-cli

2 parents b5b588e + 354da1f

File tree

2 files changed: +49 −5 lines


.github/workflows/collab.yml

Lines changed: 47 additions & 3 deletions
```diff
@@ -1,15 +1,19 @@
 name: Build Project on Google Collab (Execution)
-on: [pull_request]
+on:
+  schedule:
+    # Execute weekly on Monday at 4am UTC (offset from cache.yml)
+    - cron: '0 4 * * 1'
+  workflow_dispatch:
 jobs:
   execution-checks:
     runs-on: "runs-on=${{ github.run_id }}/family=g4dn.2xlarge/image=ubuntu24-gpu-x64/disk=large"
+    permissions:
+      issues: write # required for creating issues on execution failure
     container:
       image: docker://us-docker.pkg.dev/colab-images/public/runtime:latest
       options: --gpus all
     steps:
       - uses: actions/checkout@v5
-        with:
-          ref: ${{ github.event.pull_request.head.sha }}
       # Install build software
       - name: Install Build Software & LaTeX
         shell: bash -l {0}
@@ -46,3 +50,43 @@ jobs:
         with:
           name: execution-reports
           path: _build/html/reports
+      - name: Create execution failure report
+        if: failure()
+        run: |
+          cat > execution-failure-report.md << 'EOF'
+          # Colab Execution Failure Report
+
+          The weekly Google Colab execution check has failed. This indicates that one or more notebooks failed to execute properly in the Colab environment.
+
+          ## Details
+
+          **Workflow Run:** [${{ github.run_id }}](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})
+          **Date:** ${{ github.event.head_commit.timestamp || github.event.schedule }}
+          **Branch:** ${{ github.ref_name }}
+          **Commit:** ${{ github.sha }}
+
+          ## Execution Reports
+
+          Detailed execution reports have been uploaded as artifacts to this workflow run. Please check the following:
+
+          1. Download the `execution-reports` artifact from the workflow run
+          2. Review the failed notebook execution logs
+          3. Fix any execution issues in the notebooks
+          4. Test locally or in Colab before merging
+
+          ## Next Steps
+
+          1. Investigate the failure by reviewing the execution reports
+          2. Fix the identified issues
+          3. Test the fixes
+          4. Close this issue once resolved
+
+          This is an automated issue created by the weekly Colab execution check.
+          EOF
+      - name: Create Issue on Execution Failure
+        if: failure()
+        uses: peter-evans/create-issue-from-file@v5
+        with:
+          title: "Weekly Colab Execution Check Failed - ${{ github.run_id }}"
+          content-filepath: execution-failure-report.md
+          labels: execution-failure, automated-issue, colab
```
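The failure-report step writes a Markdown file through a quoted heredoc, relying on GitHub substituting the `${{ ... }}` expressions before the shell runs. As a rough local preview (the placeholder values below are hypothetical, not part of the commit), the same rendering can be sketched with Python's `string.Template`:

```python
from string import Template

# Hypothetical stand-ins for the ${{ github.* }} expressions, which the
# Actions runner substitutes into the heredoc before the shell executes it.
report = Template(
    "# Colab Execution Failure Report\n"
    "\n"
    "**Workflow Run:** [$run_id](https://github.com/$repo/actions/runs/$run_id)\n"
    "**Commit:** $sha\n"
)
preview = report.substitute(run_id="12345", repo="owner/repo", sha="abc123")
print(preview)
```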

lectures/back_prop.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -201,7 +201,7 @@ $$ (eq:sgd)
 
 where $\frac{d {\mathcal L}}{dx_{N+1}}=-\left(x_{N+1}-y\right)$ and $\alpha > 0 $ is a step size.
 
-(See [this](https://en.wikipedia.org/wiki/Gradient_descent#Description) and [this](https://en.wikipedia.org/wiki/Newton%27s_method) to gather insights about how stochastic gradient descent
+(See [this](https://en.wikipedia.org/wiki/Gradient_descent#Description) and [this](https://en.wikipedia.org/wiki/Newton's_method) to gather insights about how stochastic gradient descent
 relates to Newton's method.)
 
 To implement one step of this parameter update rule, we want the vector of derivatives $\frac{dx_{N+1}}{dp_k}$.
```
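The passage being edited relates gradient descent to Newton's method. As a minimal numeric sketch (not from the lecture, using a made-up 1-D quadratic loss), gradient descent with step size $\alpha$ approaches the minimizer geometrically, while Newton's method, which divides by the second derivative, lands on it in one step for a quadratic:

```python
# Hypothetical 1-D example: L(p) = 0.5 * (p - 3.0) ** 2, minimized at p = 3.
def grad(p):
    return p - 3.0   # L'(p)

def hess(p):
    return 1.0       # L''(p), constant for a quadratic

p_gd, alpha = 0.0, 0.5
for _ in range(20):
    p_gd -= alpha * grad(p_gd)               # gradient descent step

p_newton = 0.0
p_newton -= grad(p_newton) / hess(p_newton)  # single Newton step

print(round(p_gd, 4), p_newton)              # both close to 3.0
```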
```diff
@@ -540,7 +540,7 @@ Image(fig.to_image(format="png"))
 It is fun to think about how deepening the neural net for the above example affects the quality of approximation
 
 
-* If the network is too deep, you'll run into the [vanishing gradient problem](https://neuralnetworksanddeeplearning.com/chap5.html)
+* If the network is too deep, you'll run into the [vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem)
 * Other parameters such as the step size and the number of epochs can be as important or more important than the number of layers in the situation considered in this lecture.
 * Indeed, since $f$ is a linear function of $x$, a one-layer network with the identity map as an activation would probably work best.
```
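The vanishing gradient problem mentioned in the corrected bullet can be illustrated with a toy sketch (hypothetical, not from the lecture): in a chain of sigmoid layers, backpropagation multiplies one sigmoid derivative (at most 0.25) per layer, so the gradient reaching early layers shrinks geometrically with depth:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_wrt_input(depth, x=0.5, w=1.0):
    """Chain-rule product of per-layer derivatives through `depth` sigmoid layers."""
    g = 1.0
    for _ in range(depth):
        x_next = sigmoid(w * x)
        g *= w * x_next * (1.0 - x_next)   # sigmoid'(wx) <= 0.25, so g shrinks
        x = x_next
    return g

print(grad_wrt_input(2), grad_wrt_input(20))   # the deeper chain's gradient is far smaller
```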
