1,513 changes: 1,513 additions & 0 deletions tests/modal_cm.json

Large diffs are not rendered by default.

23 changes: 23 additions & 0 deletions tests/tidy3d_modal_cm_fdtd_test.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,23 @@
import tidy3d as td

Check failure on line 1 in tests/tidy3d_modal_cm_fdtd_test.py (GitHub Actions / verify-linting)

Ruff (I002): tests/tidy3d_modal_cm_fdtd_test.py:1:1: Missing required import: `from __future__ import annotations`
from tidy3d import web
from tidy3d.config import Env

Env.prod.active()
web.configure("<PROD_API_KEY>")  # redacted

# Env.dev.active()
# web.configure("<DEV_API_KEY>")

# Env.uat.active()
# web.configure("<UAT_API_KEY>")
Hardcoded API keys committed in test file

High Severity

Production API keys are hardcoded in tests/tidy3d_modal_cm_fdtd_test.py. Three separate API keys (for prod, dev, and uat environments) are exposed in plaintext via web.configure(...) calls. These credentials will be visible in version control history even if later removed and could allow unauthorized access to the service.

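One way to address the finding is to read the credential from the environment at runtime instead of committing it. A minimal sketch — the `SIMCLOUD_API_KEY` variable name and the error behavior are assumptions for illustration, not part of this PR:

```python
import os

def get_api_key(var: str = "SIMCLOUD_API_KEY") -> str:
    """Fetch the API key from the environment rather than from source code."""
    key = os.environ.get(var)
    if not key:
        # Fail loudly so a missing secret is caught in CI, not hidden by a fallback.
        raise RuntimeError(
            f"{var} is not set; export it in your shell or CI secrets, "
            "never hardcode it in a tracked file."
        )
    return key

# Hypothetical usage in the test above:
# web.configure(get_api_key())
```

Keys already pushed should be treated as compromised and rotated, since they remain visible in the git history.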



modeler = td.Tidy3dBaseModel.from_file("modal_cm.json")
task_id = web.upload(modeler, task_name="directional coupler")
from tidy3d.web.core.http_util import http

Check failure on line 17 in tests/tidy3d_modal_cm_fdtd_test.py (GitHub Actions / verify-linting)

Ruff (F401): tests/tidy3d_modal_cm_fdtd_test.py:17:39: `tidy3d.web.core.http_util.http` imported but unused
import json

Check failure on line 18 in tests/tidy3d_modal_cm_fdtd_test.py (GitHub Actions / verify-linting)

Ruff (F401): tests/tidy3d_modal_cm_fdtd_test.py:18:8: `json` imported but unused
Ruff (I001): tests/tidy3d_modal_cm_fdtd_test.py:17:1: Import block is un-sorted or un-formatted
# resp = http.get(
# f"rf/task/{task_id}/statistics",
# )
# print(json.dumps(resp, indent=4))
web.run(modeler, folder_name="modal_cm")

Check failure on line 23 in tests/tidy3d_modal_cm_fdtd_test.py (GitHub Actions / verify-linting)

Ruff (W292): tests/tidy3d_modal_cm_fdtd_test.py:23:41: No newline at end of file
3 changes: 3 additions & 0 deletions tidy3d/plugins/smatrix/run.py
Original file line number Diff line number Diff line change
Expand Up @@ -255,5 +255,8 @@ def _run_local(
vgpu_allocation = kwargs.get("vgpu_allocation")
if vgpu_allocation is not None:
run_kwargs["vgpu_allocation"] = vgpu_allocation
ignore_memory_limit = kwargs.get("ignore_memory_limit")
if ignore_memory_limit is not None:
run_kwargs["ignore_memory_limit"] = ignore_memory_limit
batch_data = batch.run(**run_kwargs)
return compose_modeler_data_from_batch_data(modeler=modeler, batch_data=batch_data)
13 changes: 12 additions & 1 deletion tidy3d/web/api/autograd/autograd.py
Original file line number Diff line number Diff line change
Expand Up @@ -304,6 +304,7 @@ def run_custom(
] = None,
custom_vjp: Optional[Union[CustomVJPConfig, tuple[CustomVJPConfig, ...]]] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> WorkflowDataType:
"""
Submits a :class:`.Simulation` to server, starts running, monitors progress, downloads,
Expand Down Expand Up @@ -363,6 +364,8 @@ def run_custom(
Number of virtual GPUs to allocate for the simulation (1, 2, 4, or 8).
Only applies to vGPU license users. If not specified, the system
automatically determines the optimal GPU count.
ignore_memory_limit: Optional[bool] = None
Whether to ignore memory usage limits. Defaults to ``None``.

Returns
-------
Expand Down Expand Up @@ -472,6 +475,7 @@ def run_custom(
max_num_adjoint_per_fwd=max_num_adjoint_per_fwd,
numerical_structures=numerical_structures,
custom_vjp=custom_vjp,
ignore_memory_limit=ignore_memory_limit,

ignore_memory_limit dropped in two run_custom code paths

High Severity

The run_custom function accepts ignore_memory_limit and passes it correctly for the component modeler path (line 478), but the autograd _run() call (lines 499–520) and the non-autograd webapi.run() call (lines 531–549) both omit it. For regular Simulation objects, the parameter is silently ignored regardless of the code path taken.


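The failure mode this comment describes is a dispatcher that forwards a flag on one branch but silently drops it on the others. A hedged sketch of the fix — the function and branch names are illustrative stand-ins, not the actual tidy3d internals:

```python
from typing import Any, Optional

def _run_autograd(sim: Any, ignore_memory_limit: Optional[bool] = None) -> dict:
    # Downstream callee: records whether the flag actually arrived.
    return {"path": "autograd", "ignore_memory_limit": ignore_memory_limit}

def _run_webapi(sim: Any, ignore_memory_limit: Optional[bool] = None) -> dict:
    return {"path": "webapi", "ignore_memory_limit": ignore_memory_limit}

def run_custom(
    sim: Any, use_autograd: bool, ignore_memory_limit: Optional[bool] = None
) -> dict:
    # The fix: forward the flag on *every* dispatch branch, not just one.
    if use_autograd:
        return _run_autograd(sim, ignore_memory_limit=ignore_memory_limit)
    return _run_webapi(sim, ignore_memory_limit=ignore_memory_limit)
```

A parameter accepted but not forwarded fails silently, which is why a test asserting the flag survives each code path is worth adding alongside the fix.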

)

should_use_autograd = False
Expand Down Expand Up @@ -572,6 +576,7 @@ def run_async_custom(
] = None,
custom_vjp: Optional[CustomVJPSpec] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> BatchData:
"""Submits a set of Union[:class:`.Simulation`, :class:`.HeatSimulation`, :class:`.EMESimulation`] objects to server,
starts running, monitors progress, downloads, and loads results as a :class:`.BatchData` object.
Expand Down Expand Up @@ -633,7 +638,8 @@ def run_async_custom(
Number of virtual GPUs to allocate for the simulation (1, 2, 4, or 8).
Only applies to vGPU license users. If not specified, the system
automatically determines the optimal GPU count.

ignore_memory_limit: Optional[bool] = None
Whether to ignore memory usage limits. Defaults to ``None``.

Returns
-------
:class:`BatchData`
Expand Down Expand Up @@ -805,6 +811,7 @@ def _expand_spec(
priority=priority,
vgpu_allocation=vgpu_allocation,
lazy=lazy,
ignore_memory_limit=ignore_memory_limit,
)

# insert numerical_structures even if not traced
Expand Down Expand Up @@ -864,6 +871,7 @@ def run(
priority: Optional[int] = None,
lazy: Optional[bool] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> WorkflowDataType:
"""Wrapper for run_custom for usage without numerical_structures or custom_vjp for public facing API."""
return run_custom(
Expand All @@ -888,6 +896,7 @@ def run(
lazy=lazy,
numerical_structures=None,
custom_vjp=None,
ignore_memory_limit=ignore_memory_limit,
)


Expand All @@ -908,6 +917,7 @@ def run_async(
priority: Optional[int] = None,
lazy: Optional[bool] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> BatchData:
"""Wrapper for run_async_custom for usage without numerical_structures or custom_vjp for public facing API."""
return run_async_custom(
Expand All @@ -929,6 +939,7 @@ def run_async(
lazy=lazy,
numerical_structures=None,
custom_vjp=None,
ignore_memory_limit=ignore_memory_limit,
)


Expand Down
14 changes: 10 additions & 4 deletions tidy3d/web/api/container.py
Original file line number Diff line number Diff line change
Expand Up @@ -412,6 +412,7 @@ def start(
self,
priority: Optional[int] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> None:
"""Start running a :class:`Job`.

Expand Down Expand Up @@ -439,6 +440,7 @@ def start(
pay_type=self.pay_type,
priority=priority,
vgpu_allocation=vgpu_allocation,
ignore_memory_limit=ignore_memory_limit,
)

def get_run_info(self) -> RunInfo:
Expand Down Expand Up @@ -893,6 +895,7 @@ def run(
priority: Optional[int] = None,
replace_existing: bool = False,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> BatchData:
"""Upload and run each simulation in :class:`Batch`.

Expand All @@ -910,7 +913,8 @@ def run(
Number of virtual GPUs to allocate for the simulation (1, 2, 4, or 8).
Only applies to vGPU license users. If not specified, the system
automatically determines the optimal GPU count.

ignore_memory_limit : Optional[bool] = None
Whether to ignore memory usage limits.

Returns
-------
:class:`BatchData`
Expand Down Expand Up @@ -939,7 +943,7 @@ def run(
if not all(loaded):
self.upload()
self.to_file(self._batch_path(path_dir=path_dir))
self.start(priority=priority, vgpu_allocation=vgpu_allocation)
self.start(priority=priority, vgpu_allocation=vgpu_allocation, ignore_memory_limit=ignore_memory_limit)
self.monitor(
path_dir=path_dir,
download_on_success=True,
Expand Down Expand Up @@ -1082,6 +1086,7 @@ def start(
self,
priority: Optional[int] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> None:
"""Start running all tasks in the :class:`Batch`.

Expand All @@ -1095,7 +1100,8 @@ def start(
Number of virtual GPUs to allocate for the simulation (1, 2, 4, or 8).
Only applies to vGPU license users. If not specified, the system
automatically determines the optimal GPU count.

ignore_memory_limit: Optional[bool] = None
Whether to ignore memory usage limits.

Note
----
To monitor the running simulations, can call :meth:`Batch.monitor`.
Expand All @@ -1106,7 +1112,7 @@ def start(

with ThreadPoolExecutor(max_workers=self.num_workers) as executor:
for _, job in self.jobs.items():
executor.submit(job.start, priority=priority, vgpu_allocation=vgpu_allocation)
executor.submit(job.start, priority=priority, vgpu_allocation=vgpu_allocation, ignore_memory_limit=ignore_memory_limit)

def get_run_info(self) -> dict[TaskName, RunInfo]:
"""get information about a each of the tasks in the :class:`Batch`.
Expand Down
5 changes: 5 additions & 0 deletions tidy3d/web/api/run.py
Original file line number Diff line number Diff line change
Expand Up @@ -104,6 +104,7 @@ def run(
max_workers: typing.Optional[int] = None,
lazy: typing.Optional[bool] = None,
vgpu_allocation: typing.Optional[int] = None,
ignore_memory_limit: typing.Optional[bool] = None,
) -> RunOutput:
"""
Submit one or many simulations and return results in the same container shape.
Expand Down Expand Up @@ -175,6 +176,8 @@ def run(
Number of virtual GPUs to allocate for the simulation (1, 2, 4, or 8).
Only applies to vGPU license users. If not specified, the system
automatically determines the optimal GPU count.
ignore_memory_limit : Optional[bool] = None
Whether to ignore memory usage limitations.

Returns
-------
Expand Down Expand Up @@ -270,6 +273,7 @@ def run(
pay_type=pay_type,
priority=priority,
vgpu_allocation=vgpu_allocation,
ignore_memory_limit=ignore_memory_limit,
lazy=lazy if lazy is not None else False,
)
}
Expand All @@ -293,6 +297,7 @@ def run(
pay_type=pay_type,
priority=priority,
vgpu_allocation=vgpu_allocation,
ignore_memory_limit=ignore_memory_limit,
lazy=lazy if lazy is not None else True,
)

Expand Down
5 changes: 4 additions & 1 deletion tidy3d/web/api/webapi.py
Original file line number Diff line number Diff line change
Expand Up @@ -682,6 +682,7 @@ def start(
pay_type: Union[PayType, str] = PayType.AUTO,
priority: Optional[int] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> None:
"""Start running the simulation associated with task.

Expand All @@ -705,7 +706,8 @@ def start(
Number of virtual GPUs to allocate for the simulation (1, 2, 4, or 8).
Only applies to vGPU license users. If not specified, the system
automatically determines the optimal GPU count.

ignore_memory_limit : Optional[bool] = None
Whether to ignore memory usage limits.

Note
----
To monitor progress, can call :meth:`monitor` after starting simulation.
Expand All @@ -729,6 +731,7 @@ def start(
pay_type=pay_type,
priority=priority,
vgpu_allocation=vgpu_allocation,
ignore_memory_limit=ignore_memory_limit,
)


Expand Down
10 changes: 10 additions & 0 deletions tidy3d/web/core/task_core.py
Original file line number Diff line number Diff line change
Expand Up @@ -594,6 +594,7 @@ def submit(
pay_type: Union[PayType, str] = PayType.AUTO,
priority: Optional[int] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> None:
"""Kick off this task.

Expand All @@ -616,6 +617,9 @@ def submit(
Number of virtual GPUs to allocate for the simulation (1, 2, 4, or 8).
Only applies to vGPU license users. If not specified, the system
automatically determines the optimal GPU count.
ignore_memory_limit: Optional[bool] = None
Whether to ignore memory usage limits.

"""
pay_type = PayType(pay_type) if not isinstance(pay_type, PayType) else pay_type

Expand All @@ -634,6 +638,7 @@ def submit(
"payType": pay_type.value,
"priority": priority,
"vgpuAllocation": vgpu_allocation,
"ignoreMemoryLimit": ignore_memory_limit,
},
)

Expand Down Expand Up @@ -921,6 +926,7 @@ def submit(
pay_type: Union[PayType, str] = PayType.AUTO,
priority: Optional[int] = None,
vgpu_allocation: Optional[int] = None,
ignore_memory_limit: Optional[bool] = None,
) -> requests.Response:
"""Submits the batch for execution on the server.

Expand All @@ -934,6 +940,8 @@ def submit(
Optional identifier for a specific worker group to run on.
vgpu_allocation : Optional[int], default=None
Number of virtual GPUs to allocate for the simulation (1, 2, 4, or 8).
ignore_memory_limit : Optional[bool], default=None
Whether to ignore memory usage limits.

Returns
-------
Expand Down Expand Up @@ -963,6 +971,8 @@ def submit(
"solverVersion": solver_version,
"protocolVersion": protocol_version,
"workerGroup": worker_group,
"vgpu_allocation": vgpu_allocation,
"ignore_memory_limit": ignore_memory_limit,
Snake_case API keys instead of camelCase in submit

High Severity

In the BatchTaskCore.submit method, the HTTP POST body uses snake_case keys "vgpu_allocation" and "ignore_memory_limit" instead of the camelCase "vgpuAllocation" and "ignoreMemoryLimit" used by the other submit method (for Tidy3dTask) and by the other keys in the same dictionary ("solverVersion", "protocolVersion", "workerGroup"). The server likely won't recognize these misnamed keys, so the parameters will be silently ignored.

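One way to rule out this class of drift is to derive every JSON key from the Python parameter name with a single helper, so the two `submit` methods cannot disagree on casing. A sketch under the assumption (supported by the neighboring keys) that the server expects camelCase:

```python
def to_camel(snake: str) -> str:
    """Convert a snake_case identifier to the camelCase form the API expects."""
    head, *rest = snake.split("_")
    return head + "".join(part.capitalize() for part in rest)

def build_submit_body(**params: object) -> dict:
    """Build the POST body with consistently camelCased keys."""
    return {to_camel(name): value for name, value in params.items()}

body = build_submit_body(
    solver_version="2.7.0",
    worker_group=None,
    vgpu_allocation=4,
    ignore_memory_limit=True,
)
# body keys: solverVersion, workerGroup, vgpuAllocation, ignoreMemoryLimit
```

Centralizing the conversion also makes the mismatch testable: a unit test over `to_camel` catches a mis-cased key before it is silently dropped by the server.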

},
)

Expand Down