Signed-off-by: Naren Dasan <[email protected]>


# v0.1.0 (2020-10-23)


### Bug Fixes

* added some fixes, trt/jit output still mismatches ([723ac1d](https://github.com/NVIDIA/TRTorch/commit/723ac1d))
* added test cases to explicitly check hidden/cell state outputs ([d7c3164](https://github.com/NVIDIA/TRTorch/commit/d7c3164))
* cleaned up logic, added case where bias doesn't exist for LSTM cell converter ([a3e1093](https://github.com/NVIDIA/TRTorch/commit/a3e1093))
* **//core/conversion/evaluator:** Custom to IValue that handles int[] ([68c934a](https://github.com/NVIDIA/TRTorch/commit/68c934a))
* **//docker:** Workaround only shared libraries being available in ([50c7eda](https://github.com/NVIDIA/TRTorch/commit/50c7eda))
* **//py:** Fix long description section of setup.py ([efd2099](https://github.com/NVIDIA/TRTorch/commit/efd2099))
* **//tests:** Add stride to complete tensors ([af5d28e](https://github.com/NVIDIA/TRTorch/commit/af5d28e))
* **//tests/accuracy:** Fix int8 accuracy test for new PTQ api ([a53bea7](https://github.com/NVIDIA/TRTorch/commit/a53bea7))
* **//tests/core/converters/activations:** Complete tensors in prelu test ([0e90f78](https://github.com/NVIDIA/TRTorch/commit/0e90f78))
* **docsrc:** Update docsrc container for bazel 3.4.1 ([4eb53b5](https://github.com/NVIDIA/TRTorch/commit/4eb53b5))


* fix(Windows)!: Fix dependency resolution for local builds ([858d8c3](https://github.com/NVIDIA/TRTorch/commit/858d8c3))
* chore!: Update dependencies to PyTorch 1.6.0 ([8eda27d](https://github.com/NVIDIA/TRTorch/commit/8eda27d))
* chore!: Bumping version numbers to 0.1.0 ([b84c90b](https://github.com/NVIDIA/TRTorch/commit/b84c90b))
* refactor(//core)!: Introducing a binding convention that will address ([5a105c6](https://github.com/NVIDIA/TRTorch/commit/5a105c6))
* refactor!: Renaming extra info to compile spec to be more consistent ([b8fa228](https://github.com/NVIDIA/TRTorch/commit/b8fa228))

### Features

* **//core/conversion/converters:** LSTMCell converter ([8c61248](https://github.com/NVIDIA/TRTorch/commit/8c61248)) (see the example after this list)
* **//core/conversion/var:** created ITensorOrFreeze() method, to replace functionality of Var::ITensor() ([2ccf8d0](https://github.com/NVIDIA/TRTorch/commit/2ccf8d0))
* **//core/converters:** Add power layer conversion support and minor README edits ([a801506](https://github.com/NVIDIA/TRTorch/commit/a801506))
* **//core/lowering:** Add functionalization pass to replace inplace ([90a9ed6](https://github.com/NVIDIA/TRTorch/commit/90a9ed6)), closes [#30](https://github.com/NVIDIA/TRTorch/issues/30)
* **//docker:** Adding CUDA11 based container for Ampere support ([970d775](https://github.com/NVIDIA/TRTorch/commit/970d775))
* started working on lstm_cell converter ([546d790](https://github.com/NVIDIA/TRTorch/commit/546d790))
* **//py:** Initial compliant implementation of the to_backend api for ([59113cf](https://github.com/NVIDIA/TRTorch/commit/59113cf))
* **//third_party/tensorrt:** Add back TensorRT static lib in a cross ([d3c2e7e](https://github.com/NVIDIA/TRTorch/commit/d3c2e7e))
* **aten::prelu:** Basic prelu support ([8bc4369](https://github.com/NVIDIA/TRTorch/commit/8bc4369))
* **aten::prelu:** Implement the multi-channel version of prelu and ([c066581](https://github.com/NVIDIA/TRTorch/commit/c066581))
* finished logic for LSTM cell, now to test ([a88cfaf](https://github.com/NVIDIA/TRTorch/commit/a88cfaf))

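As a rough illustration of the new `aten::prelu` and `aten::lstm_cell` converters listed above, here is a minimal sketch of compiling a tiny TorchScript module through the 0.1.0 Python API. The `TinyRNN` module is hypothetical, and the `input_shapes`/`op_precision` compile-spec keys (as well as multi-input, multi-output support) are assumptions about the 0.1.0 API surface rather than something stated in this changelog.

```python
import torch
import torch.nn as nn
import trtorch  # requires TensorRT and a CUDA-capable GPU


class TinyRNN(nn.Module):
    """Hypothetical module exercising the new prelu and lstm_cell converters."""

    def __init__(self):
        super().__init__()
        self.act = nn.PReLU()            # lowered via aten::prelu
        self.cell = nn.LSTMCell(64, 32)  # lowered via aten::lstm_cell

    def forward(self, x, hx, cx):
        h, c = self.cell(self.act(x), (hx, cx))
        return h, c


scripted = torch.jit.script(TinyRNN().eval().cuda())

# One shape entry per forward() argument, in order (assumed spec schema).
compile_spec = {
    "input_shapes": [[8, 64], [8, 32], [8, 32]],
    "op_precision": torch.float32,
}
trt_module = trtorch.compile(scripted, compile_spec)

h, c = trt_module(torch.randn(8, 64).cuda(),
                  torch.zeros(8, 32).cuda(),
                  torch.zeros(8, 32).cuda())
```
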
### BREAKING CHANGES

* Users on Windows trying to use cuDNN 8 must manually
configure third_party/cudnn/local/BUILD to use cuDNN 8.

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* Support for Python 3.5 is being dropped with this
update

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* Version is being bumped to 0.1.0a0 to target
PyTorch 1.6.0

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* This changes the "ABI" of compiled TRTorch programs and
the runtime, and breaks backwards compatibility between the runtime in
0.1.0+ and programs compiled pre-0.1.0

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
* This changes the top level api for setting the
specification for compilation; a simple find and replace should allow
users to port forward (see the sketch after this list)

Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>

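For the "extra info" to "compile spec" rename above, porting forward is essentially a rename in user code. The sketch below is a hedged illustration, not taken from the release docs: the model is a stand-in, the exact dict keys (`input_shapes`, `op_precision`) are assumed, and the corresponding C++ rename (`trtorch::ExtraInfo` becoming `trtorch::CompileSpec`) is stated as an assumption about the 0.1.0 API.

```python
import torch
import torch.nn as nn
import trtorch  # requires TensorRT and a CUDA-capable GPU

# A stand-in model; substitute whatever module you were already compiling.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).eval().cuda()
script_module = torch.jit.script(model)

# Pre-0.1.0 this settings dict was referred to as the "extra info";
# from 0.1.0 on it is the "compile spec". Only the name changes, so a
# find-and-replace over your code is the whole migration.
compile_spec = {
    "input_shapes": [[1, 3, 224, 224]],  # assumed 0.1.0 spec keys
    "op_precision": torch.half,
}
trt_module = trtorch.compile(script_module, compile_spec)
```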