1 parent 3148510 commit f0f7589
.github/workflows/spell-check.yml
@@ -0,0 +1,26 @@
+name: Spell Check
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+
+jobs:
+  spell-check:
+    runs-on: ubuntu-latest
+
+    steps:
+      # Checkout the repository
+      - name: Checkout code
+        uses: actions/checkout@v3
+
+      # Install codespell
+      - name: Install codespell
+        run: |
+          pip install codespell
+
+      # Run codespell
+      - name: Run codespell
+        run: |
+          codespell --skip="*.png,*.jpg,*.jpeg,*.gif,*.svg,*.ico,*.pdf,*.js" --ignore-words-list="nd,te,OT" --check-filenames
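Contributors can reproduce the CI check before pushing by running the same command locally. A minimal sketch, assuming codespell is installed (pip install codespell); the flags mirror the workflow above:

# Run the same codespell check locally (mirrors the workflow flags).
import subprocess

cmd = [
    "codespell",
    "--skip=*.png,*.jpg,*.jpeg,*.gif,*.svg,*.ico,*.pdf,*.js",
    "--ignore-words-list=nd,te,OT",
    "--check-filenames",
]
# codespell exits with a non-zero code when misspellings are found
result = subprocess.run(cmd)
print("clean" if result.returncode == 0 else "misspellings found")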
README.rst
@@ -99,4 +99,4 @@ Snapshot of usefuls tools

**max_diff**

-Returns the maximum discrancies accross nested containers containing tensors.
+Returns the maximum discrancies across nested containers containing tensors.
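The helper walks nested containers (lists, tuples, dicts) of tensors and reports the largest discrepancy. A minimal usage sketch; the import path onnx_diagnostic.helpers.max_diff is an assumption based on the package layout:

# Compare two nested containers of tensors with max_diff.
# The import path is an assumption; adjust to the package layout.
import torch
from onnx_diagnostic.helpers import max_diff

expected = {"logits": torch.ones(2, 3), "past": [torch.zeros(4)]}
got = {"logits": torch.ones(2, 3) + 1e-5, "past": [torch.zeros(4)]}
print(max_diff(expected, got))  # largest differences found while walking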
_doc/examples/plot_export_tiny_llm.py
@@ -4,7 +4,7 @@
Steel method forward to guess the dynamic shapes
================================================
-Inputs are always dynamic with LLMs that is why dyanmic shapes
+Inputs are always dynamic with LLMs that is why dynamic shapes
needs to be specified when a LLM is exported with:func:`torch.export.export`.
Most of the examples on :epkg:`HuggingFace` use method
:meth:`transformers.GenerationMixin.generate` but we only want to
@@ -15,7 +15,7 @@
We focus on the model
`Tiny-LLM <https://huggingface.co/arnir0/Tiny-LLM>`_.
-To avoid downloading any weigths, we write a function creating a
+To avoid downloading any weights, we write a function creating a
random model based on the same architecture.
Steel the forward method
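Since LLM inputs are dynamic, every exported dimension has to be declared. A minimal sketch of the mechanism on a toy module (not the Tiny-LLM architecture itself):

# Declare dynamic batch and sequence dimensions for torch.export.
import torch

class Toy(torch.nn.Module):
    def forward(self, input_ids):
        return input_ids.float().mean(dim=-1)

batch = torch.export.Dim("batch")
seq = torch.export.Dim("seq")
ep = torch.export.export(
    Toy(),
    (torch.zeros(4, 8, dtype=torch.int64),),
    dynamic_shapes=({0: batch, 1: seq},),
)
print(ep)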
_doc/examples/plot_export_with_dynamic_cache.py
@@ -54,9 +54,9 @@ def forward(self, x, y):
pprint.pprint(ds)

# %%
-# The function returns a tuple with two objets.
+# The function returns a tuple with two objects.
# The first one for the positional arguments, the other one
-# for the named arguments. There is no named argements. We
+# for the named arguments. There is no named arguments. We
# we used the first result to export.

ep = torch.export.export(model, (x, y), dynamic_shapes=ds[0])
@@ -66,7 +66,7 @@ def forward(self, x, y):
# kwargs
# ++++++
#
-# We do the same with named argments.
+# We do the same with named arguments.


class Model(torch.nn.Module):
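To illustrate the two elements of that tuple: torch.export.export accepts dynamic shapes either as a tuple aligned with the positional arguments or as a dict keyed by argument name. A minimal sketch on a toy model:

# Dynamic shapes for positional arguments (tuple) vs. named
# arguments (dict keyed by parameter name).
import torch

class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

dim = torch.export.Dim("batch")
x, y = torch.randn(3, 4), torch.randn(3, 4)

# positional: a tuple aligned with (x, y)
ep_args = torch.export.export(Add(), (x, y), dynamic_shapes=({0: dim}, {0: dim}))
# named: a dict keyed by parameter name
ep_kwargs = torch.export.export(
    Add(), (), kwargs={"x": x, "y": y},
    dynamic_shapes={"x": {0: dim}, "y": {0: dim}},
)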
_doc/examples/plot_export_with_dynamic_shapes_auto.py
Use DYNAMIC or AUTO when exporting if dynamic shapes has constraints
====================================================================
-Settings the dynamic shapes is not always easy.
+Setting the dynamic shapes is not always easy.
Here are a few tricks to make it work.
dx + dy not allowed?
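One of those tricks: when explicit Dim objects raise constraint errors, for instance because an output dimension equals dx + dy, torch.export.Dim.AUTO lets the exporter infer the relation instead. A minimal sketch, assuming a torch version that exposes Dim.AUTO:

# Let the exporter infer dimension constraints with Dim.AUTO.
# Assumes a recent torch exposing torch.export.Dim.AUTO.
import torch

class Cat(torch.nn.Module):
    def forward(self, x, y):
        return torch.cat([x, y], dim=0)  # output dim 0 is dx + dy

AUTO = torch.export.Dim.AUTO
ep = torch.export.export(
    Cat(),
    (torch.randn(3, 4), torch.randn(5, 4)),
    dynamic_shapes=({0: AUTO, 1: AUTO}, {0: AUTO, 1: AUTO}),
)
print(ep)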
_doc/examples/plot_failing_model_extract.py
@@ -39,7 +39,7 @@
oh.make_node("Cast", ["C"], ["X999"], to=999, name="failing"),
oh.make_node("CastLike", ["X999", "Y"], ["Z"], name="n4"),
],
-"nd",
+"-nd-",
[
oh.make_tensor_value_info("X", TFLOAT, ["a", "b", "c"]),
oh.make_tensor_value_info("Y", TFLOAT, ["a", "b", "c"]),
_doc/examples/plot_failing_onnxruntime_evaluator.py
@@ -8,7 +8,7 @@
how to run a python runtime on a model but it may very slow sometimes
and it could show some discrepancies if the only provider is not CPU.
Let's use :class:`OnnxruntimeEvaluator <onnx_diagnostic.reference.OnnxruntimeEvaluator>`.
-It splits the model into node and runs them independantly until it succeeds
+It splits the model into node and runs them independently until it succeeds
or fails. This class converts every node into model based on the types
discovered during the execution. It relies on :class:`InferenceSessionForTorch
<onnx_diagnostic.ort_session.InferenceSessionForTorch>` or
@@ -43,7 +43,7 @@
oh.make_tensor_value_info("X", TBFLOAT16, ["a", "b", "c"]),
oh.make_tensor_value_info("Y", TBFLOAT16, ["a", "b", "c"]),
@@ -100,7 +100,7 @@
# We can see it run until it reaches `Cast` and stops.
# The error message is not always obvious to interpret.
-# It gets improved everytime from time to time.
+# It gets improved every time from time to time.
# This runtime is useful when it fails for a numerical reason.
# It is possible to insert prints in the python code to print
# more information or debug if needed.
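A minimal usage sketch of this node-by-node runtime; it assumes the class follows the constructor/run convention of onnx.reference.ReferenceEvaluator, which the examples in this repository suggest:

# Run a model node by node with OnnxruntimeEvaluator; verbose
# output shows intermediate results until a node fails.
# Assumes the class mirrors ReferenceEvaluator's run API.
import numpy as np
import onnx.helper as oh
from onnx import TensorProto
from onnx_diagnostic.reference import OnnxruntimeEvaluator

TFLOAT = TensorProto.FLOAT
model = oh.make_model(
    oh.make_graph(
        [oh.make_node("Add", ["X", "Y"], ["Z"], name="n1")],
        "g",
        [
            oh.make_tensor_value_info("X", TFLOAT, ["a"]),
            oh.make_tensor_value_info("Y", TFLOAT, ["a"]),
        ],
        [oh.make_tensor_value_info("Z", TFLOAT, ["a"])],
    )
)
ref = OnnxruntimeEvaluator(model, verbose=10)
feeds = {
    "X": np.arange(4).astype(np.float32),
    "Y": np.ones(4, dtype=np.float32),
}
print(ref.run(None, feeds))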
_doc/examples/plot_failing_reference_evaluator.py
@@ -33,7 +33,7 @@
@@ -75,7 +75,7 @@
_doc/index.rst
@@ -18,7 +18,7 @@ onnx-diagnostic: investigate onnx models
.. image:: https://codecov.io/gh/sdpython/onnx-diagnostic/branch/main/graph/badge.svg?token=Wb9ZGDta8J
:target: https://codecov.io/gh/sdpython/onnx-diagnostic
-**onnx-diagnostic** helps investgating onnx models, exporting models into onnx.
+**onnx-diagnostic** helps investigating onnx models, exporting models into onnx.
It implements tools used to understand issues.
Source are `sdpython/onnx-diagnostic
_unittests/ut_xrun_doc/test_helpers.py
@@ -87,7 +87,7 @@ def test_pretty_onnx(self):
oh.make_node("Mul", ["Y", "sy"], ["ysy"]),
oh.make_node("Mul", ["X", "ysy"], ["final"]),
oh.make_tensor_value_info("X", TFLOAT, [1, "b", "c"]),
@@ -111,7 +111,7 @@ def test_print_pretty_onnx(self):
@@ -136,7 +136,7 @@ def test_get_onnx_signature(self):