v0.3.0
What's Changed
Features
The shortest distance between two points is a straight line. - Archimedes
As Archimedes said, knowing the straight-line distance between the camera and a point is as important as knowing the planar depth. It is therefore convenient to have methods that convert between the two.
What's new?
- Handle negative points in `encode_absolute`: for wide-angle cameras (FoV > 180°), some points can have a planar depth smaller than 0 (points behind the camera). To keep these points instead of clipping them at 0, pass `keep_negative=True` as an argument.
- Depth to distance with `as_distance()`: converts depth to distance. Only pinhole and linear equidistant cameras are supported at this time.
- Distance to depth with `as_depth()`: converts distance to depth. Only pinhole and linear equidistant cameras are supported at this time.
- A tensor of distances can now be created by passing `is_distance=True` at initialization.
- Support functions in `depth_utils`.
Update
- Changed the terminology to avoid confusion: "euclidean depth" for distance and "planar depth" for the usual depth.
- `as_distance()` becomes `as_euclidean()`
- `as_depth()` becomes `as_planar()` (both shown in the sketch below)
Archimedes's quote now becomes: The shortest "euclidean depth" between two points is a straight line.
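A minimal sketch of the new conversions, assuming `aloscene.Depth` accepts an ndarray and that camera intrinsics are attached or defaulted; the method names come from the notes above, everything else is placeholder:

```python
import numpy as np
import aloscene

# Planar depth map: 1 channel, 4x6 pixels (values are placeholders).
depth = aloscene.Depth(np.ones((1, 4, 6), dtype=np.float32))

# Planar depth -> euclidean depth (straight-line distance to the camera),
# and back. Only pinhole and linear equidistant cameras are supported.
euclidean = depth.as_euclidean()
planar = euclidean.as_planar()

# Keep points behind the camera (planar depth < 0, possible when FoV > 180°)
# instead of clipping them at 0.
absolute = depth.encode_absolute(keep_negative=True)
```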
New feature
- Add `projection` and `distortion` as new properties of `SpatialAugmentedTensor`, so other tensor types can inherit them. Two projection models are supported: `pinhole` and `equidistant`. The defaults are `pinhole` and `1.0` for distortion, so initialization is unchanged when working with "pinhole" images. Only `aloscene.Depth` supports distortion and the equidistant projection at this time.
- `Depth.as_points3d` now supports the equidistant model with distortion. If no projection model and distortion are given as arguments, `as_points3d` uses the `projection` and `distortion` properties.
- `Depth.as_planar` and `Depth.as_euclidean` now use the `projection` and `distortion` properties if no projection model and distortion are given as arguments.
- `Depth.__get_view__` now has a color legend if `legend` is set to `True`.
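A short sketch of the projection/distortion fallback described above; passing these as init keywords and the intrinsic handling are assumptions based on the notes:

```python
import numpy as np
import aloscene

# Depth declared with an equidistant projection model and a distortion
# coefficient, stored as properties of the tensor (init keywords assumed).
depth = aloscene.Depth(
    np.ones((1, 4, 6), dtype=np.float32),
    projection="equidistant",
    distortion=1.0,
)

# With no projection/distortion arguments, as_points3d falls back to the
# tensor's own `projection` and `distortion` properties (camera intrinsics
# are assumed to be attached to the tensor).
points3d = depth.as_points3d()
```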
- add tensorrt quantization by @Data-Iab in #172
New 💥:
- TensorRT engines can now be built with int8 precision using Post Training Quantization.
- 4 calibrators are available for quantization: `MinMaxCalibrator`, `LegacyCalibrator`, `EntropyCalibrator` and `EntropyCalibrator2` (see the sketch below).
- Added a `QuantizedModel` interface to convert a model to a quantized model for Quantization Aware Training.
Fixed 🔧:
- The adapt graph option is removed; we now adapt the graph once it is exported from torch to ONNX.
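A hedged sketch of how the int8 flow might look; only the class names come from these notes, while the module paths, constructor keywords, calibrator signature, and builder methods are assumptions:

```python
# Hypothetical sketch: module paths and signatures below are assumptions.
from alonet.torch2trt import TRTEngineBuilder
from alonet.torch2trt.calibrators import MinMaxCalibrator

# A small set of representative batches used to collect activation ranges
# during Post Training Quantization (placeholder data).
calibration_batches = [...]  # e.g. a list of numpy arrays

calibrator = MinMaxCalibrator(calibration_batches, cache_file="calib.cache")
builder = TRTEngineBuilder("model.onnx", INT8_allowed=True, calibrator=calibrator)
builder.export_engine("model_int8.engine")
```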
- add profiling verbosity by @Data-Iab in #176
New ⭐:
- A `profiling_verbosity` option is added to the `TRTEngineBuilder` to better inspect the details of each node when calling the `tensorrt.EngineInspector`.
- Some quantization-related arguments are added to the `BaseTRTExporter`.
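For example (the option name comes from these notes; the module path and the string value, which mirrors TensorRT's `ProfilingVerbosity` enum, are assumptions):

```python
from alonet.torch2trt import TRTEngineBuilder  # assumed module path

# Build with detailed per-layer information so tensorrt.EngineInspector
# can report node-level details ("detailed" is an assumed value).
builder = TRTEngineBuilder("model.onnx", profiling_verbosity="detailed")
```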
- `RandomDownScale`: transform to randomly downscale between the original and a minimum frame size.
- `RandomDownScaleCrop`: a compose transform to randomly downscale then crop (see the sketch below).
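A usage sketch for the two transforms, assuming they follow alodataset's callable-transform convention; the module path and constructor arguments are assumptions:

```python
import numpy as np
import aloscene
from alodataset import transforms as T  # assumed module path

frame = aloscene.Frame(np.zeros((3, 600, 800), dtype=np.uint8))

# Randomly downscale between the original and a minimum frame size
# (argument name assumed) ...
downscaled = T.RandomDownScale(min_size=(300, 400))(frame)

# ... or downscale then crop to a fixed size in one compose transform.
out = T.RandomDownScaleCrop(size=(256, 256))(frame)
```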
- add cuda shared memory for recurrent engines by @Data-Iab in #186
New 🌱
- Engine inputs/outputs can share the same space on the GPU for faster execution. Hosts with shared memory can be retrieved with the `outputs_to_cpu` argument and updated using the `inputs_to_gpu` argument.
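A rough sketch of the shared-memory flow; only the two argument names come from these notes, while the executor class, its call signature, and the input shape are hypothetical:

```python
import numpy as np
from alonet.torch2trt import TRTExecutor  # assumed module path

engine = TRTExecutor("recurrent.engine")
x = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input

# Keep the recurrent outputs on the GPU instead of copying them to host ...
outputs = engine(x, outputs_to_cpu=False)

# ... and tell the next call its recurrent inputs already live on the GPU,
# skipping the host-to-device transfer.
outputs = engine(outputs, inputs_to_gpu=False)
```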
Dynamic Cropping
- Possibility to crop an image to a smaller fixed-size image at any desired position. The crop position can be passed via the `center` argument, which can be `float` or `int`.
- If the crop falls outside the image border, an error is raised (see the sketch below).
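A sketch of the dynamic crop, assuming it is exposed as a transform; only the `center` argument and its float/int behavior come from these notes, the transform name and `size` keyword are assumptions:

```python
import numpy as np
import aloscene
from alodataset import transforms as T  # assumed module path

frame = aloscene.Frame(np.zeros((3, 600, 800), dtype=np.uint8))

# Fixed 256x256 crop centered at the relative position (0.5, 0.5); floats
# are relative coordinates, ints are pixel coordinates (assumed semantics).
# A crop outside the image border raises an error.
out = T.Crop(size=(256, 256), center=(0.5, 0.5))(frame)  # transform name assumed
```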
- new: depth metrics by @Data-Iab in #195
New ⭐:
- Depth evaluation metrics are added to alonet metrics.
- Lvis Dataset + Coco Update + minor fix by @thibo73800 in #196
- `CocoDetectionDataset` can now use a given `ann_file` when loaded.
- `CocoPanopticDataset` can now use `ignore_classes` to ignore some classes when loading the panoptic annotations.
- In `DetrCriterion`, interpolation is an option that can be changed with `upscale_interpolate`.
- Lvis dataset based on `CocoDetectionDataset` with a different ann file (see the usage sketch below).
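A short usage sketch for the dataset changes; the keyword arguments come from the bullets above, while the exports and the remaining constructor arguments (dataset paths, splits) are assumptions:

```python
from alodataset import CocoDetectionDataset, CocoPanopticDataset  # assumed exports

# Load detection annotations from a specific file (other required
# arguments, e.g. dataset paths, are omitted here).
coco = CocoDetectionDataset(ann_file="instances_val2017.json")

# Skip some classes when loading the panoptic annotations.
panoptic = CocoPanopticDataset(ignore_classes=["crowd"])

# The Lvis dataset reuses the CocoDetectionDataset machinery with a
# different ann file.
```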
```python
# Create three gray frames, display them on two rows (2 on the first row, 1 on the 2nd row)
import numpy as np
import aloscene

arrays = [np.full((3, 600, 650), 100), np.full((3, 500, 500), 50), np.full((3, 500, 800), 200)]
frames = [aloscene.Frame(arr) for arr in arrays]
views = [[frames[0].get_view(), frames[1].get_view()], [frames[2].get_view()]]
aloscene.render(views, renderer="matplotlib")
```

Create scene flow by calling the class with a file path, a tensor, or an ndarray.
If you have the optical flow, the depth at times T and T + 1, and the camera intrinsics, you can create scene flow with the class method `from_optical_flow`. It handles the creation of the occlusion mask if any of the inputs has one.
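A hedged sketch of `from_optical_flow`; the class name `SceneFlow`, the argument order, and the intrinsic handling are assumptions, only the method name and the required inputs come from the notes above:

```python
import numpy as np
import aloscene

H, W = 480, 640
flow = aloscene.Flow(np.zeros((2, H, W), dtype=np.float32))
depth_t = aloscene.Depth(np.ones((1, H, W), dtype=np.float32))
depth_t1 = aloscene.Depth(np.ones((1, H, W), dtype=np.float32))

# Hypothetical call: build scene flow from the optical flow plus the depth
# at T and T + 1. Camera intrinsics are assumed to be attached to the depth
# tensors; the occlusion mask is created automatically if an input has one.
scene_flow = aloscene.SceneFlow.from_optical_flow(flow, depth_t, depth_t1)
```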
- A GitHub Action that automatically launches the unit tests on every commit or pull request to the master branch.
Fix
- fix depth absolute/inverse assertion by @Data-Iab in #167
- Fixed some issues by @Dee61298 in #171
- better colorbar position by @anhtu293 in #178
- Check if depth is planar before projecting to 3d points by @anhtu293 in #177
- Merge dataset weights by @jsalotti in #175
- update arg name by @anhtu293 in #179
- Fix package prod dependencies by @thibo73800 in #181
- remove tracing assertion by @Data-Iab in #182
- clip low values of depth before conversion to disp by @jsalotti in #180
- Pass arguments to RandomScale and RandomCrop in ResizeCropTransform by @anhtu293 in #189
- add execution context failed creation exception by @Data-Iab in #190
- fix: AugmentedTensor clone method by @jsalotti in #191
- bugfix: close plt figure by @jsalotti in #192
- fix masking dimension mismatch by @Data-Iab in #194
- ignore same_on_sequence when no time dimension by @jsalotti in #200
- RealisticNoise default values by @jsalotti in #199
- allow for non integer principal point coordinates by @jsalotti in #202
- check disp_format and clamp if necessary by @jsalotti in #203
- `GLOBAL_COLOR_SET_CLASS` will automatically adjust its size to give a random color to each object class.
New Contributors
Full Changelog: v0.2.1...v0.3.0