* now uses imageio for image reading/writing
* no longer needs to import the whole main script, just the util module
* more options, allowing different values to be output
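The switch to imageio can be sketched as follows. This is a minimal illustration, not code from the repository; the file name is hypothetical, and `imageio.imread`/`imageio.imwrite` operate on numpy arrays:

```python
import numpy as np
import imageio

# Round-trip a small image with imageio (file name is illustrative).
img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
imageio.imwrite("example_pair.png", img)
loaded = np.asarray(imageio.imread("example_pair.png"))
```

Since PNG is lossless, `loaded` equals `img` exactly after the round trip.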
README.md: 19 additions & 3 deletions
```diff
@@ -8,7 +8,7 @@ It has not been tested for multiple GPU, but it should work just as in original
 
 The code provides a training example, using [the flying chair dataset](http://lmb.informatik.uni-freiburg.de/resources/datasets/FlyingChairs.en.html) , with data augmentation. An implementation for [Scene Flow Datasets](http://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html) may be added in the future.
 
-Two neural network models are currently provided :
+Two neural network models are currently provided, along with their batch norm variation (experimental) :
 
 - **FlowNetS**
 - **FlowNetSBN**
```
````diff
@@ -22,12 +22,12 @@ Thanks to [Kaixhin](https://github.com/Kaixhin) you can download a pretrained ve
 Directly feed the downloaded Network to the script, you don't need to uncompress it even if your desktop environment tells you so.
 
 ### Note on networks from caffe
-These networks expect a BGR input in range `[-0.5,0.5]`(compared to RGB in pytorch). However, BGR order is not very important.
+These networks expect a BGR input (compared to RGB in pytorch). However, BGR order is not very important.
 
 ## Prerequisite
 
 ```
-pytorch >= 0.4.1
+pytorch >= 1.0.1
 tensorboard-pytorch
 tensorboardX >= 1.4
 spatial-correlation-sampler>=0.0.8
````
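The BGR note above amounts to reversing the channel axis before feeding an image to a caffe-trained network. A minimal numpy sketch (array names are illustrative, not from the repository):

```python
import numpy as np

# Hypothetical RGB image, shape (H, W, 3), channels last.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255  # pure red in RGB order

# Reversing the channel axis yields BGR, the order caffe-trained networks expect.
bgr = rgb[..., ::-1]
```

The same flip applied twice recovers the original RGB order, which is why the conversion is cheap in both directions.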
```diff
@@ -88,6 +88,22 @@ Exact code for Optical Flow -> Color map can be found [here](main.py#L321)
 If you need to run the network on your images, you can download a pretrained network [here](https://drive.google.com/open?id=0B5EC7HMbyk3CbjFPb0RuODI3NmM) and launch the inference script on your folder of image pairs.
+
+Your folder needs to have all the images pairs in the same location, with the name pattern
+As for the `main.py` script, a help menu is available for additional options.
 
 ## Note on transform functions
 
 In order to have coherent transformations between inputs and target, we must define new transformations that take both input and target, as a new random variable is defined each time a random transformation is called.
```
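The hunk header above references the Optical Flow -> Color map code in `main.py`. A common convention for such a mapping encodes flow direction as hue and flow magnitude as saturation via HSV; the sketch below follows that convention and is not necessarily the exact code in `main.py`:

```python
import numpy as np

def flow_to_color(flow):
    """Map a (H, W, 2) optical-flow field to an RGB uint8 image.

    Direction -> hue, magnitude -> saturation, full brightness.
    Sketch of a common convention, not the repository's exact code.
    """
    u, v = flow[..., 0], flow[..., 1]
    mag = np.sqrt(u ** 2 + v ** 2)
    hue = (np.arctan2(v, u) + np.pi) / (2 * np.pi)  # direction in [0, 1]
    s = mag / (mag.max() + 1e-8)                    # normalized magnitude
    val = np.ones_like(s)                           # full brightness
    # Vectorized HSV -> RGB over the six hue sectors.
    h6 = hue * 6.0
    i = np.floor(h6).astype(int) % 6
    f = h6 - np.floor(h6)
    p, q, t = val * (1 - s), val * (1 - f * s), val * (1 - (1 - f) * s)
    sectors = [(val, t, p), (q, val, p), (p, val, t),
               (p, q, val), (t, p, val), (val, p, q)]
    rgb = np.zeros(flow.shape[:2] + (3,))
    for k, comp in enumerate(sectors):
        m = i == k
        for c in range(3):
            rgb[m, c] = comp[c][m]
    return (rgb * 255).astype(np.uint8)
```

With this convention, zero flow has zero saturation everywhere, so a still scene renders as pure white.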