Commit ffaf6cb

formatting updates (#1480)
1 parent e059035 commit ffaf6cb

File tree: 1 file changed (+25, −22)


13_sandboxes/test_case_generator.py

Lines changed: 25 additions & 22 deletions
@@ -4,17 +4,17 @@
 # ---
 
 # # LLM-Generated Unit Test Development
-#
+
 # Unit tests can become tedious to generate and maintain. While LLMs are a useful tool
 # for generating test cases, hallucinations are always possible, making the code
 # potentially unsafe to run locally.
 #
 # In this example, we'll show you how to generate unit tests in a [sample repository](https://github.com/modal-labs/password-analyzer) using an open-source LLM,
 # and then run them in Modal Sandboxes - our sandboxed environment. We'll then open ports
 # on each of our Sandboxes, showing the code diff and new test case coverage.
-#
+
 # # Model Setup
-#
+
 # First, let's pick an LLM to do the heavy lifting. We went with [deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct)
 # which offers a good trade-off between size and quality. At just 7B parameters, it ranks 9th on Hugging Face's
 # [Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard), beaten out primarily by larger models.
@@ -31,6 +31,7 @@
 files_volume = modal.Volume.from_name("files-volume", create_if_missing=True)
 
 # ## Model Dependencies
+
 # We'll package our dependencies into a [Modal Image](https://modal.com/docs/reference/modal.Image). Starting from a
 # container image provided [by the SGLang team via Dockerhub](https://hub.docker.com/r/lmsysorg/sglang/tags),
 # we'll also add a few packages and flags from Hugging Face to facilitate fast model download.
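The two environment variables collapsed onto one line in this hunk steer Hugging Face downloads. A minimal plain-Python sketch of what they mean, outside Modal's `.env()` helper (which applies them at image build time rather than at runtime):

```python
import os

# Equivalent of .env({"HF_HUB_ENABLE_HF_TRANSFER": "1", "HF_HOME": "/cache"}):
# enable the accelerated hf_transfer download path and point the Hugging Face
# cache at a directory that a persistent volume can back.
hf_env = {"HF_HUB_ENABLE_HF_TRANSFER": "1", "HF_HOME": "/cache"}
os.environ.update(hf_env)

print(os.environ["HF_HOME"])  # → /cache
```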
@@ -41,32 +42,25 @@
         "accelerate==1.8.1",
         "hf_transfer==0.1.9",
     )
-    .env(
-        {
-            "HF_HUB_ENABLE_HF_TRANSFER": "1",
-            "HF_HOME": "/cache",
-        }
-    )
+    .env({"HF_HUB_ENABLE_HF_TRANSFER": "1", "HF_HOME": "/cache"})
     .entrypoint([])  # silence noisy logs
 )
 
-app = modal.App(
-    name="sandbox-test-case-generator",
-)
+app = modal.App(name="sandbox-test-case-generator")
 
 
 # ## Model Server
+
 # Let's put it all together to set up our inference server! Using a [modal.Cls](https://modal.com/docs/reference/modal.Cls), we can
 # easily attach an L40S GPU by setting the `gpu` parameter. The `@modal.enter()` decorator creates
 # a `download_model` lifecycle function, which is run once when the container starts. These are executed sequentially,
 # so our SGLang server starts once the weights are downloaded. Finally, the `@modal.web_endpoint()` converts this
 # into a web endpoint that can be invoked for model inference.
+
+
 @app.cls(
     image=server_image,
-    volumes={
-        "/cache": model_volume,
-        "/data": files_volume,
-    },
+    volumes={"/cache": model_volume, "/data": files_volume},
     gpu="L40S",
     timeout=600,
 )
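The lifecycle ordering this hunk's comments describe can be sketched without Modal installed: a stand-in class where marked methods run once, in definition order, before the serving method. The class, hook names, and `enter` decorator here are hypothetical stand-ins, not Modal's actual API.

```python
# Sketch of @modal.enter()-style lifecycle hooks: decorated methods run once,
# in definition order, before the container serves traffic.
events = []

def enter(fn):
    fn._is_enter_hook = True  # mimic @modal.enter() marking the method
    return fn

class FakeServer:
    @enter
    def download_model(self):
        events.append("download_model")

    @enter
    def start_sglang(self):
        events.append("start_sglang")

    def serve(self):
        events.append("serve")

    def start(self):
        # Run every marked hook in definition order, then serve.
        for name, attr in type(self).__dict__.items():
            if getattr(attr, "_is_enter_hook", False):
                getattr(self, name)()
        self.serve()

FakeServer().start()
print(events)  # → ['download_model', 'start_sglang', 'serve']
```

This mirrors why the SGLang server only starts once the weights are downloaded: the hooks are sequential, not concurrent.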
@@ -108,6 +102,7 @@ def serve(self):
 
 
 # ## Model Client
+
 # Now that our server is set up, let's create a client. We [parametrize](https://modal.com/docs/guide/parametrized-functions#using-parametrized-functions-with-lifecycle-functions) our function with
 # the server `url`, which we'll pass in later. We'll add a [`@modal.method()`](https://modal.com/docs/reference/modal.method#modalmethod) decorator
 # to register our `generate()` function as a Modal Function. Finally, we can invoke our OpenAI-compatible server with our prompt
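The parametrization described above (`url: str = modal.parameter()`) behaves roughly like a dataclass field: one class definition, many differently-configured instances. A stdlib analogue, with a made-up URL and helper method for illustration only:

```python
from dataclasses import dataclass

# Rough analogue of `url: str = modal.parameter()`: the parametrized field
# becomes a constructor argument, so each TestCaseClient instance can point
# at a different server.
@dataclass
class TestCaseClient:
    url: str

    def chat_endpoint(self) -> str:
        # OpenAI-compatible servers expose chat completions under /v1.
        return f"{self.url}/v1/chat/completions"

client = TestCaseClient(url="https://example--sglang.modal.run")
print(client.chat_endpoint())
# → https://example--sglang.modal.run/v1/chat/completions
```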
@@ -118,9 +113,7 @@ def serve(self):
     image=modal.Image.debian_slim(python_version="3.12").uv_pip_install(
         "openai==1.97.1"
     ),
-    volumes={
-        "/data": files_volume,
-    },
+    volumes={"/data": files_volume},
 )
 class TestCaseClient:
     url: str = modal.parameter()
@@ -244,11 +237,15 @@ def download_files_to_volume(
 
 
 # # Sandbox Setup
+
 # ## Image
+
 # Now, let's create a secure environment for our generated test cases to run in.
 # We'll define another Modal Image for our [Modal Sandbox](https://modal.com/docs/guide/sandboxes), installing
 # the [Allure Framework](https://github.com/allure-framework) to generate a report, as well as
 # [git](https://git-scm.com/) to clone our [sample repo](https://github.com/modal-labs/password-analyzer).
+
+
 def get_sandbox_image(gh_owner: str, gh_repo_name: str):
     ALLURE_VERSION = "2.34.1"
     MODULE_URL = f"https://github.com/{gh_owner}/{gh_repo_name}"
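Only the first lines of `get_sandbox_image` appear in this hunk; the visible part builds the clone URL from owner and repo. A trivial standalone sketch of that f-string (the helper name is ours, not the example's):

```python
# Same f-string as MODULE_URL in get_sandbox_image: the GitHub URL that git
# clones into the Sandbox image.
def repo_url(gh_owner: str, gh_repo_name: str) -> str:
    return f"https://github.com/{gh_owner}/{gh_repo_name}"

print(repo_url("modal-labs", "password-analyzer"))
# → https://github.com/modal-labs/password-analyzer
```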
@@ -270,6 +267,7 @@ def get_sandbox_image(gh_owner: str, gh_repo_name: str):
 
 
 # ## Sandbox Command
+
 # Next, we define our Sandbox command, chaining together a series of commands. We'll show the difference
 # between the two unit test files, re-run unit tests with the LLM-generated files to check test case
 # coverage, and show the results of both on ports 8000 and 8001.
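Chaining a series of commands, as the Sandbox command above does, typically relies on the shell's `&&` operator: each step runs only if the previous one exited 0. A local sketch with `subprocess`; the echoed steps are stand-ins for the example's actual diff/test/report commands, not the real ones.

```python
import subprocess

# Chain commands with `&&` the way the Sandbox command does: later steps only
# run if earlier ones succeed.
chained = " && ".join([
    "echo diffing test files",   # stand-in for diffing old vs. generated tests
    "echo running pytest",       # stand-in for re-running the generated tests
    "echo serving report",       # stand-in for serving results on 8000/8001
])
result = subprocess.run(["sh", "-c", chained], capture_output=True, text=True)
print(result.stdout)
```

If the middle command failed (non-zero exit), `serving report` would never run, which is exactly the short-circuit behavior you want before exposing a report.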
@@ -308,11 +306,14 @@ def run_sandbox(image: modal.Image, file_name: str):
 
 
 # # Put it all together!
+
 # Finally, we can define a [local_entrypoint](https://modal.com/docs/reference/modal.App#local_entrypoint) and chain
 # everything together. We'll create our test case server and download the GitHub files from our repo to a Modal Volume.
 # We'll use [`map.aio`](https://modal.com/docs/reference/modal.Function#map) to invoke our server asynchronously,
 # generating our unit test files in parallel. Finally, we'll create our Sandboxes to validate our new test cases,
 # similarly using [`sb.wait.aio`](https://modal.com/docs/reference/modal.Sandbox#wait).
+
+
 @app.local_entrypoint()
 async def main(
     gh_owner: str,
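The `map.aio` consumption pattern above (fan inputs out, then `async for` over results) can be sketched with plain `asyncio`. This is a local stand-in, not Modal's implementation: `generate` fakes the LLM call, and we assume in-order results as in the example's loop.

```python
import asyncio

# Stand-in for generator.generate.map.aio(input_files): launch one task per
# input concurrently, then yield results as an async generator.
async def generate(file_name: str) -> str:
    await asyncio.sleep(0)  # pretend to call the inference server
    return f"test_{file_name}"

async def map_aio(inputs):
    tasks = [asyncio.create_task(generate(i)) for i in inputs]
    for task in tasks:  # yield results in input order
        yield await task

async def main():
    output_files = []
    async for f in map_aio(["analyzer.py", "utils.py"]):
        output_files.append(f)
    return output_files

print(asyncio.run(main()))  # → ['test_analyzer.py', 'test_utils.py']
```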
@@ -324,7 +325,7 @@ async def main(
     import asyncio
 
     # Start server
-    sg_lang_server = TestCaseServer()
+    sglang_server = TestCaseServer()
 
     # Download files to volume
     input_files = download_files_to_volume.remote(
@@ -335,7 +336,7 @@ async def main(
     )
 
     # Initialize client and generate test files
-    generator = TestCaseClient(url=sg_lang_server.serve.get_web_url())  # type: ignore
+    generator = TestCaseClient(url=sglang_server.serve.get_web_url())  # type: ignore
     output_generator = generator.generate.map.aio(input_files)
     output_files = []
     async for f in output_generator:
@@ -352,7 +353,9 @@ async def main(
 
 
 # # Addenda
-# The below functions are utility functions.
+
+# The functions below are utilities used above.
+
 import subprocess
 import time
