Commit c8d7a44

Github action: auto-update.
1 parent 89e7df3 commit c8d7a44

150 files changed

+5605
-2344
lines changed

Binary file not shown.

dev/_downloads/07795e165bda9881ff980becb31905ab/plot_diffusion_advection_solver.ipynb

Lines changed: 5 additions & 5 deletions
```diff
@@ -4,14 +4,14 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "\n\n# A simple finite-difference solver\nAn intro to our loss module's finite difference utility demonstrating\nits use to create a simple numerical solver for the diffusion-advection equation.\n"
+   "\n\n# A simple finite-difference solver for the diffusion-advection equation\nAn intro to our loss module's finite difference utility demonstrating\nits use to create a simple numerical solver for the diffusion-advection equation.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Import the library\nWe first import our `neuralop` library and required dependencies.\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Import the library\nWe first import our `neuralop` library and required dependencies.\n\n"
   ]
  },
  {
@@ -29,7 +29,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Defining our problem\nWe aim to solve the 2D diffusion advection equation:\n\n$u_t + cx \\cdot u_x + cy \\cdot u_y = \\nu (u_xx + u_yy) + f(x,y,t)$,\n\nWhere $f(x,y,t)$ is a source term and $cx$ and $cy$ are advection speeds in x and y.\nWe set simulation parameters below:\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Defining our problem\nWe aim to solve the 2D diffusion advection equation:\n\n\\begin{align}\\frac{\\partial u}{\\partial t} + c_x \\frac{\\partial u}{\\partial x} + c_y \\frac{\\partial u}{\\partial y} = \\nu \\left(\\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2}\\right) + f(x,y,t)\\end{align}\n\nWhere $f(x,y,t)$ is a source term and $c_x$ and $c_y$ are advection speeds in x and y.\nWe set simulation parameters below:\n\n"
   ]
  },
  {
@@ -47,7 +47,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Simulate evolution using numerical solver\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Simulate evolution using numerical solver\n\n"
   ]
  },
  {
@@ -65,7 +65,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Animate our solution\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Animate our solution\n\n"
   ]
  },
  {
```
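The retitled notebook above solves the diffusion-advection equation it now states in full. As an aside, the kind of explicit scheme such a solver uses can be sketched in a few lines; this is an illustration only (forward Euler in time, central differences in space, periodic boundaries, made-up parameter values), not the `neuralop` finite-difference utility the notebook actually demonstrates:

```python
import numpy as np

def step(u, f, dx, dt, cx=0.5, cy=0.5, nu=0.02):
    """One forward-Euler step of u_t + cx*u_x + cy*u_y = nu*(u_xx + u_yy) + f.

    Central differences in space; periodic boundaries via np.roll.
    """
    ux = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    uy = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    lap = (np.roll(u, -1, axis=0) + np.roll(u, 1, axis=0)
           + np.roll(u, -1, axis=1) + np.roll(u, 1, axis=1) - 4 * u) / dx**2
    return u + dt * (nu * lap - cx * ux - cy * uy + f)

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.sin(Y)    # smooth periodic initial condition
f = np.zeros_like(u)         # no source term in this sketch
for _ in range(100):
    u = step(u, f, dx=x[1] - x[0], dt=1e-3)
```

With these (illustrative) parameters the explicit step is well inside the diffusive and advective stability limits, so the field simply drifts and decays slightly.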
Binary file not shown.

dev/_downloads/09a1631d14c6eb51e552aa53b9f6d3e8/checkpoint_FNO_darcy.ipynb

Lines changed: 20 additions & 6 deletions
```diff
@@ -7,6 +7,13 @@
   "\n# Checkpointing and loading training states\n\nDemonstrating the ``Trainer``'s saving and loading functionality, \nwhich makes it easy to checkpoint and resume training states.\n"
   ]
  },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Import dependencies\n\n"
+  ]
+ },
  {
   "cell_type": "code",
   "execution_count": null,
@@ -22,7 +29,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Loading the Navier-Stokes dataset in 128x128 resolution\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Loading the Darcy-Flow dataset\n\n"
   ]
  },
  {
@@ -40,7 +47,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "We create an FNO model\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Creating the FNO model\n\n"
   ]
  },
  {
@@ -58,7 +65,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Create the optimizer\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Creating the optimizer and scheduler\n\n"
   ]
  },
  {
@@ -76,7 +83,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Creating the losses\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Creating the losses\n\n"
   ]
  },
  {
@@ -90,6 +97,13 @@
   "l2loss = LpLoss(d=2, p=2)\nh1loss = H1Loss(d=2)\n\ntrain_loss = h1loss\neval_losses={'h1': h1loss, 'l2': l2loss}"
   ]
  },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Displaying configuration\n\n"
+  ]
+ },
  {
   "cell_type": "code",
   "execution_count": null,
@@ -105,7 +119,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Create the trainer\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Creating the trainer\n\n"
   ]
  },
  {
@@ -123,7 +137,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "Actually train the model on our small Darcy-Flow dataset\n\n"
+   ".. raw:: html\n\n    <div style=\"margin-top: 3em;\"></div>\n\n## Training the model\nWe train and save checkpoints\n\n"
   ]
  },
  {
```
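The notebook above is about checkpointing and resuming training states via the ``Trainer``. The underlying save/resume pattern can be sketched generically with the standard library; the `save_checkpoint`/`load_checkpoint` helpers and the dictionary layout here are hypothetical, not the ``Trainer``'s actual API:

```python
import pickle
import tempfile
from pathlib import Path

def save_checkpoint(path, state):
    """Serialize the full training state to disk (hypothetical helper)."""
    Path(path).write_bytes(pickle.dumps(state))

def load_checkpoint(path):
    """Restore a previously saved training state (hypothetical helper)."""
    return pickle.loads(Path(path).read_bytes())

# Save mid-training state: model weights, optimizer state, and epoch counter.
ckpt = Path(tempfile.mkdtemp()) / "model_checkpoint.pkl"
state = {
    "epoch": 10,
    "model_state": {"w": [0.1, 0.2]},
    "optimizer_state": {"lr": 8e-3},
}
save_checkpoint(ckpt, state)

# Resume: reload everything and continue from the next epoch.
resumed = load_checkpoint(ckpt)
start_epoch = resumed["epoch"] + 1
```

The essential design point is the same as in the notebook: checkpoint the optimizer and scheduler state alongside the model, so training resumes exactly where it stopped.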

dev/_downloads/0c15380306ab01fee2a5ab27c1519b72/plot_darcy_flow.py

Lines changed: 41 additions & 13 deletions
```diff
@@ -3,10 +3,19 @@
 
 A simple Darcy-Flow dataset
 ===========================
-An intro to the small Darcy-Flow example dataset we ship with the package.
+An introduction to the small Darcy-Flow example dataset we ship with the package.
+
+The Darcy-Flow problem is a fundamental partial differential equation (PDE) in fluid mechanics
+that describes the flow of a fluid through a porous medium. In this tutorial, we explore the
+dataset structure and visualize how the data is processed for neural operator training.
+
 """
 
 # %%
+# .. raw:: html
+#
+#    <div style="margin-top: 3em;"></div>
+#
 # Import the library
 # ------------------
 # We first import our `neuralop` library and required dependencies.
@@ -16,6 +25,10 @@
 from neuralop.layers.embeddings import GridEmbedding2D
 
 # %%
+# .. raw:: html
+#
+#    <div style="margin-top: 3em;"></div>
+#
 # Load the dataset
 # ----------------
 # Training samples are 16x16 and we load testing samples at both
@@ -29,17 +42,22 @@
 train_dataset = train_loader.dataset
 
 # %%
+# .. raw:: html
+#
+#    <div style="margin-top: 3em;"></div>
+#
 # Visualizing the data
 # --------------------
+# Let's examine the shape and structure of our dataset at different resolutions.
 
 for res, test_loader in test_loaders.items():
-    print(res)
+    print(f"Resolution: {res}")
     # Get first batch
     batch = next(iter(test_loader))
-    x = batch['x']
-    y = batch['y']
+    x = batch['x']  # Input
+    y = batch['y']  # Output
 
-    print(f'Testing samples for res {res} have shape {x.shape[1:]}')
+    print(f'Testing samples for resolution {res} have shape {x.shape[1:]}')
 
 
 data = train_dataset[0]
@@ -48,7 +66,6 @@
 
 print(f'Training samples have shape {x.shape[1:]}')
 
-
 # Which sample to view
 index = 0
 
@@ -59,23 +76,34 @@
 # positional embedding. We will add it manually here to
 # visualize the channels appended by this embedding.
 positional_embedding = GridEmbedding2D(in_channels=1)
-# at train time, data will be collated with a batch dim.
-# we create a batch dim to pass into the embedding, then re-squeeze
+# At train time, data will be collated with a batch dimension.
+# We create a batch dimension to pass into the embedding, then re-squeeze
 x = positional_embedding(data['x'].unsqueeze(0)).squeeze(0)
 y = data['y']
+
+# %%
+# .. raw:: html
+#
+#    <div style="margin-top: 3em;"></div>
+#
+# Visualizing the processed data
+# ------------------------------
+# We can see how the positional embedding adds coordinate information to our input data.
+# This helps the neural operator understand spatial relationships in the data.
+
 fig = plt.figure(figsize=(7, 7))
 ax = fig.add_subplot(2, 2, 1)
 ax.imshow(x[0], cmap='gray')
-ax.set_title('input x')
+ax.set_title('Input x')
 ax = fig.add_subplot(2, 2, 2)
 ax.imshow(y.squeeze())
-ax.set_title('input y')
+ax.set_title('Output y')
 ax = fig.add_subplot(2, 2, 3)
 ax.imshow(x[1])
-ax.set_title('x: 1st pos embedding')
+ax.set_title('Positional embedding: x-coordinates')
 ax = fig.add_subplot(2, 2, 4)
 ax.imshow(x[2])
-ax.set_title('x: 2nd pos embedding')
-fig.suptitle('Visualizing one input sample', y=0.98)
+ax.set_title('Positional embedding: y-coordinates')
+fig.suptitle('Visualizing one input sample with positional embeddings', y=0.98)
 plt.tight_layout()
 fig.show()
```
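The tutorial above visualizes the two coordinate channels a grid positional embedding appends to each sample. Conceptually, that operation can be sketched as follows; this is an illustration with assumed normalized [0, 1] coordinates and channel-first layout, and `grid_embedding_2d` is a hypothetical stand-in, not `GridEmbedding2D` itself:

```python
import numpy as np

def grid_embedding_2d(x):
    """Append normalized x/y coordinate channels to a (C, H, W) array.

    A sketch of what a 2D grid positional embedding does conceptually;
    neuralop's GridEmbedding2D may differ in coordinate ranges and batching.
    """
    _, h, w = x.shape
    xs = np.linspace(0, 1, h).reshape(h, 1).repeat(w, axis=1)  # varies down rows
    ys = np.linspace(0, 1, w).reshape(1, w).repeat(h, axis=0)  # varies across columns
    return np.concatenate([x, xs[None], ys[None]], axis=0)

sample = np.random.rand(1, 16, 16)    # one-channel 16x16 input, like a Darcy sample
embedded = grid_embedding_2d(sample)  # (3, 16, 16): input + two coordinate channels
```

Because the coordinate channels depend only on grid shape, the same embedding works at any resolution, which is what lets the plots above compare 16x16 and 32x32 samples.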

dev/_downloads/1590df702802104e18640e94f94adf30/plot_resample.py

Lines changed: 16 additions & 0 deletions
```diff
@@ -29,6 +29,10 @@
 from neuralop.layers.resample import resample
 
 # %%
+# .. raw:: html
+#
+#    <div style="margin-top: 3em;"></div>
+#
 # First, let's generate a data input. We create a high-resolution Gaussian Random Field (GRF), which
 # is a smooth, continuous signal, making it ideal for visualizing the effects of resampling.
 device = 'cpu'
@@ -80,6 +84,10 @@ def generate_grf(shape, alpha=2.5, device='cpu'):
 low_res = 32
 
 # %%
+# .. raw:: html
+#
+#    <div style="margin-top: 3em;"></div>
+#
 # Now, let's use the ``resample`` function to simulate downsampling and upsampling operations.
 # This could for instance be used in the encoder and decoder of a U-Net architecture.
 # The function takes an input tensor, a `scale_factor`, and a list of
@@ -95,6 +103,10 @@ def generate_grf(shape, alpha=2.5, device='cpu'):
 
 
 # %%
+# .. raw:: html
+#
+#    <div style="margin-top: 3em;"></div>
+#
 # Finally, let's visualize the results to see the effect of the ``resample`` function.
 
 fig, axs = plt.subplots(1, 3, figsize=(14, 6))
@@ -128,6 +140,10 @@ def generate_grf(shape, alpha=2.5, device='cpu'):
 plt.show()
 
 # %%
+# .. raw:: html
+#
+#    <div style="margin-top: 3em;"></div>
+#
 # The ``resample`` function effectively changes the resolution of the data.
 # Notice that the upsampled image on the right is a faithful, if slightly blurrier,
 # reconstruction of the original. This is because the downsampling step is lossy;
```
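The resampling idea this tutorial demonstrates — changing resolution by keeping or discarding Fourier modes — can be sketched in 1D; this is an assumed spectral-truncation scheme for illustration, and `fourier_resample_1d` is a hypothetical helper, not `neuralop`'s `resample` signature:

```python
import numpy as np

def fourier_resample_1d(u, out_size):
    """Resample a periodic 1D signal by truncating or zero-padding its FFT.

    Modes above the output Nyquist limit are dropped (downsampling is lossy);
    a signal that was already band-limited survives a round trip exactly.
    """
    n = len(u)
    U = np.fft.rfft(u)
    k = out_size // 2 + 1           # number of retained rFFT coefficients
    V = np.zeros(k, dtype=complex)
    m = min(k, len(U))
    V[:m] = U[:m]                   # truncate (down) or zero-pad (up)
    return np.fft.irfft(V, n=out_size) * (out_size / n)  # rescale amplitudes

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(3 * x)                     # band-limited test signal: only mode 3
down = fourier_resample_1d(u, 32)     # downsample 128 -> 32
up = fourier_resample_1d(down, 128)   # upsample back to 128
```

Since mode 3 is well below the 32-point Nyquist limit, this particular round trip is lossless; the blurring noted above appears only when downsampling discards modes the signal actually contains.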
Binary file not shown.
