
Commit d6a54fc

Fix typos: Successfully facilitate getting pipeline overridden (#30)
1 parent ec9fe68 commit d6a54fc


4 files changed: +8 -8 lines changed


README.md

Lines changed: 2 additions & 2 deletions
@@ -240,7 +240,7 @@ Differences may be less or more pronounced for different inputs. Please see the
 </details>
 
 <details>
-<summary> <b> <a name="low-mem-conversion"></a> Q3: </b> My Mac has 8GB RAM and I am converting models to Core ML using the example command. The process is geting killed because of memory issues. How do I fix this issue? </summary>
+<summary> <b> <a name="low-mem-conversion"></a> Q3: </b> My Mac has 8GB RAM and I am converting models to Core ML using the example command. The process is getting killed because of memory issues. How do I fix this issue? </summary>
 
 <b> A3: </b> In order to minimize the memory impact of the model conversion process, please execute the following command instead:
 
@@ -313,7 +313,7 @@ On iOS, depending on the iPhone model, Stable Diffusion model versions, selected
 
 <b> 4. Weights and Activations Data Type </b>
 
-When quantizing models from float32 to lower-precision data types such as float16, the generated images are [known to vary slightly](https://lambdalabs.com/blog/inference-benchmark-stable-diffusion) in semantics even when using the same PyTorch model. Core ML models generated by coremltools have float16 weights and activations by default [unless explicitly overriden](https://github.com/apple/coremltools/blob/main/coremltools/converters/_converters_entry.py#L256). This is not expected to be a major source of difference.
+When quantizing models from float32 to lower-precision data types such as float16, the generated images are [known to vary slightly](https://lambdalabs.com/blog/inference-benchmark-stable-diffusion) in semantics even when using the same PyTorch model. Core ML models generated by coremltools have float16 weights and activations by default [unless explicitly overridden](https://github.com/apple/coremltools/blob/main/coremltools/converters/_converters_entry.py#L256). This is not expected to be a major source of difference.
 
 </details>
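
Aside on the second hunk above: the float16 default that the corrected sentence links to is controlled by coremltools' `compute_precision` argument. As a minimal, illustrative sketch of what "explicitly overridden" means (the toy module and tensor names below are placeholders, not part of this repository):

```python
# Illustrative only: coremltools' float16 default for ML Programs, and an
# explicit float32 override via compute_precision. TinyNet is a placeholder.
import coremltools as ct
import torch

class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

traced = torch.jit.trace(TinyNet().eval(), torch.zeros(1, 3, 64, 64))

# Default: float16 weights and activations.
model_fp16 = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=(1, 3, 64, 64))],
    convert_to="mlprogram",
)

# The "unless explicitly overridden" case: keep float32 precision.
model_fp32 = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=(1, 3, 64, 64))],
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT32,
)
```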

python_coreml_stable_diffusion/torch2coreml.py

Lines changed: 1 addition & 1 deletion
@@ -576,7 +576,7 @@ def convert_unet(pipe, args):
     # Set the output descriptions
     coreml_unet.output_description["noise_pred"] = \
         "Same shape and dtype as the `sample` input. " \
-        "The predicted noise to faciliate the reverse diffusion (denoising) process"
+        "The predicted noise to facilitate the reverse diffusion (denoising) process"
 
     _save_mlpackage(coreml_unet, out_path)
     logger.info(f"Saved unet into {out_path}")
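
For readers unfamiliar with the attribute being edited above: `output_description` is coremltools' per-output metadata, surfaced in Xcode's model preview. A minimal sketch of the same pattern on a toy model (the module, tensor names, and file path below are placeholders, not taken from this repository):

```python
# Illustrative only: attach human-readable descriptions to a converted
# Core ML model, mirroring the output_description assignment above.
import coremltools as ct
import torch

class TinyNet(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

traced = torch.jit.trace(TinyNet().eval(), torch.zeros(1, 4))
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="sample", shape=(1, 4))],
    convert_to="mlprogram",
)

# The converter auto-names the output; look it up from the model spec.
out_name = mlmodel.get_spec().description.output[0].name
mlmodel.input_description["sample"] = "Example input tensor"
mlmodel.output_description[out_name] = \
    "Same shape and dtype as the `sample` input"

mlmodel.save("TinyNet.mlpackage")
```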

swift/StableDiffusion/pipeline/StableDiffusionPipeline+Resources.swift

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ public extension StableDiffusionPipeline {
             safetyChecker = SafetyChecker(modelAt: urls.safetyCheckerURL, configuration: config)
         }
 
-        // Construct pipelien
+        // Construct pipeline
         self.init(textEncoder: textEncoder,
                   unet: unet,
                   decoder: decoder,

tests/test_stable_diffusion.py

Lines changed: 4 additions & 4 deletions
@@ -74,23 +74,23 @@ def test_torch_to_coreml_conversion(self):
         with self.subTest(model="vae_decoder"):
             logger.info("Converting vae_decoder")
             torch2coreml.convert_vae_decoder(self.pytorch_pipe, self.cli_args)
-            logger.info("Successfuly converted vae_decoder")
+            logger.info("Successfully converted vae_decoder")
 
         with self.subTest(model="unet"):
             logger.info("Converting unet")
             torch2coreml.convert_unet(self.pytorch_pipe, self.cli_args)
-            logger.info("Successfuly converted unet")
+            logger.info("Successfully converted unet")
 
         with self.subTest(model="text_encoder"):
             logger.info("Converting text_encoder")
             torch2coreml.convert_text_encoder(self.pytorch_pipe, self.cli_args)
-            logger.info("Successfuly converted text_encoder")
+            logger.info("Successfully converted text_encoder")
 
         with self.subTest(model="safety_checker"):
             logger.info("Converting safety_checker")
             torch2coreml.convert_safety_checker(self.pytorch_pipe,
                                                 self.cli_args)
-            logger.info("Successfuly converted safety_checker")
+            logger.info("Successfully converted safety_checker")
 
     def test_end_to_end_image_generation_speed(self):
         """ Tests:
