Dates are in YYYY-MM-DD format.
- Fixed a bug where `numpy.dtype` would not be exported correctly when specified as a node attribute.
- `Graph.tensors()` will now display a warning when duplicate tensors are detected in the graph, even if `check_duplicates=False`. As before, when `check_duplicates=True`, it will throw an exception in such cases.
- Added support for `Cast` elision in `fold_constants()`.
- Updated `fold_constants()` so that it no longer fails if a shape folding pass fails when `error_ok` is `True`.
- Fixed a bug where `fold_constants()` would fail if a model contained a `Slice` node without a `starts` or `ends` input.
- Added support for folding `Shape -> Slice` patterns even when the entire shape may not be known.
- `fold_constants()` will no longer store values for foldable tensors whose outputs are all foldable. For example, while folding a constant subgraph like `A (constant) -> B -> C`, previously, `B` values would be computed in addition to `C`. With these changes, only `C` values are computed and stored. This can reduce memory usage significantly.
- Fixed a bug where `copy()` would not work with subgraphs that included tensors with the same names as outer-graph tensors unless a `tensor_map` was provided.
- `fold_constants()` can now fold `Shape -> Gather` patterns even when the entire shape may not be known.
- Added an `error_ok` parameter in `fold_constants()` which can be set to `False` to re-raise errors encountered during inference.
- Fixed a bug where `copy()` would not correctly copy tensors in nested graphs.
- Fixed a bug where `fold_constants()` would attempt to fold nodes including graph attributes even if nodes within the nested graph could not be folded.
- `fold_constants()` no longer loads constant values into numpy arrays. This can save a significant amount of memory.
- `cleanup()` will no longer remove unused graph inputs by default, as this was causing invalid ONNX models to be generated in cases with `Loop` nodes. Set `remove_unused_graph_inputs` to `True` to revert to the old behavior.
- `cleanup()` will no longer reorder node inputs in cases where they are also graph outputs.
- Added support for models with externally stored data. See the README for details on how to import and export such models.
- Operator domains are now preserved when exporting graphs to ONNX.
- `fold_constants` will no longer attempt to run inference if there are no constants to compute.
- Fixed a bug in `fold_constants` where it would fail if ONNX-Runtime could not run a node with constant inputs. In such cases, the graph is now partitioned to exclude the node before running another pass of constant folding.
- Fixed a bug where graph output tensors would still point to consumer nodes that had been removed from the graph.
- Constant folding is now significantly faster in models with large weights.
- Added support for folding `Shape` nodes in `fold_constants`. This requires that shape inference has been run on the graph, and that the input to the `Shape` node has a static shape. This behavior can be disabled by setting `fold_shapes=False`.
- `cleanup`, `toposort`, and `fold_constants` are now recursively applied to subgraphs by default. This behavior can be disabled by setting `recurse_subgraphs=False`.
- Fixed a bug where `do_type_check` would not propagate to subgraphs.
- Fixed a bug where `cleanup()` would incorrectly remove outer-level nodes if they were used only by inner nodes of subgraphs.
- Removed `__deepcopy__` from `Graph` as it wasn't deep-copying weights or attributes. The method is now called `copy` and makes a shallow copy of everything except `Node` and `Tensor` instances.
- Fixed a bug where shapes including empty strings for `dim_param` would be treated as empty tensors. They are now correctly imported as tensors with dynamic shapes.
- Fixed a bug where variable tensors with unknown shapes would be imported as scalars.
- The `values` property of `Constant` tensors is now lazily loaded. This can greatly improve model loading times.
- Fixed a bug where graph inputs and outputs could be assigned `SynchronizedList` instances, and would therefore be modified if nodes in the graph were.
- Changed the default value of `remove_unused_node_outputs` in `cleanup()` to `False`, as a value of `True` can lead to unintuitive behavior, especially with looping constructs like `Scan` and `Loop`.
- Fixed a bug where calling `graph.tensors()` would cause the inputs or outputs of some tensors to be modified.
- `SynchronizedList.__add__()` no longer modifies the left operand.
- Fixed a bug where nodes including subgraphs whose inputs/outputs had the same names as the node's inputs/outputs would not be imported correctly.
- `fold_constants()` will no longer fail if there is nothing to fold in the graph.
- `cleanup()` will now properly remove the producer nodes of graph inputs.
- Fixed a bug where graph input/output tensors not attached to nodes would not be correctly exported.
- `Graph.register()` now accepts an `opsets` argument so that functions can be registered for specific opsets.
- `has_metadata` has been removed from `Tensor`, since the function is no longer used.
- ONNX GraphSurgeon now enforces the constraint that graph inputs/outputs must include type information.
- Fixed a bug where `opset` was not being considered when running inference for constant folding.
- Added `layer()` function to `Graph` to make it easier to generate models from scratch.
- Added `i()` and `o()` convenience functions to `Tensor`, which are similar to the functions for `Node`, but return `Tensor`s instead of `Node`s.
- Added an `examples` directory.
- Added `has_metadata()` to `Tensor` classes to determine if dtype/shape are known.
- Added a `check_duplicates` parameter to `Graph.tensors()` to make it easy to check for duplicate tensors in the graph.
- Various improvements to the logger
- Updated `OnnxImporter` so that it can correctly import shapes and types from an ONNX graph after shape inference.
- Made `Tensor` an abstract class - all tensors in a graph are now either `Variable` or `Constant`.
- Renamed `generate_tensor_map()` to `tensors()` in `Graph`.
- Removed `Tensor` suffix from Tensor classes.
- The `import_onnx` and `export_onnx` functions will now preserve opset information and `dim_param` values in shapes.
- Added `i()` and `o()` convenience functions to `Node` for retrieving input/output nodes.
- Added `fold_constants()` to `Graph` to allow for folding constants in the graph.
- Added `__deepcopy__()` to `Graph`.
- Added `to_constant()` and `to_variable()` functions to `Variable` and `Constant` respectively to transmute them in-place.
- Removed some type annotations to allow compatibility with Python 3.5.
- Added `Node`, `Tensor` and `Graph` classes.
- Added `BaseImporter` and `OnnxImporter` classes.
- Added support for importing initializers in the `OnnxImporter`.
- Added `Variable` and `Constant` classes.
- Consolidated inputs/outputs of Nodes/Tensors. Now, inputs/outputs should generally only be added to `Node`s.
- Added `OnnxExporter` to export `Graph` to `onnx.GraphProto`.
- Added `OnnxExporter` and `OnnxImporter` to public imports.
- Added `toposort` function to `Graph`, which will topologically sort it.
- Added `cleanup` function to `Graph`, which will remove unused nodes and tensors.
- Added high-level API for importing/exporting `Graph`s from/to ONNX models.
- `Graph`s are now generated with a default name of `onnx_graphsurgeon`.