Commit f2f8841

Add papers

1 parent 5cff841 commit f2f8841

40 files changed: +685 -26 lines changed

.gitignore

Lines changed: 2 additions & 1 deletion
@@ -9,4 +9,5 @@ _site
 Gemfile.lock
 vendor
 assets/code/Torch/*
-_progress/
+_progress/
+temp/
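The hunk above adds `temp/` next to `_progress/`, both trailing-slash patterns that exclude whole directory trees. As a rough illustration of that behavior, here is a minimal sketch; it only models the "ignored directory name" case, not full gitignore semantics (globs, anchoring, negation):

```python
from pathlib import PurePosixPath

# Simplified model of directory-style ignore rules such as "_progress/" and
# "temp/": a path is treated as ignored when any of its segments matches an
# ignored directory name. Real gitignore matching is richer than this.
IGNORED_DIRS = {"_progress", "temp"}

def is_ignored(path: str) -> bool:
    return any(part in IGNORED_DIRS for part in PurePosixPath(path).parts)

print(is_ignored("temp/scratch.md"))         # True
print(is_ignored("_projects/5_project.md"))  # False
```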

_pages/projects.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ permalink: /projects/
 description: Relevant academic deep learning projects done during my master.
 nav: true
 nav_order: 1
-display_categories: [work, fun]
+display_categories: [paper, master]
 horizontal: false
 ---

_projects/1_project.md

Lines changed: 6 additions & 6 deletions
@@ -1,10 +1,10 @@
 ---
 layout: page
-title: 3D object detection
-description: Reproduce the results of "Deep Hough Voting" on ScanNet and SUNRGB-D.
-img: assets/img/Npm3d/project_deephoughvoting.png
+title: Deep Hough Voting
+description: 3D object detection
+img: assets/img/project/Npm3d/project_deephoughvoting.png
 importance: 1
-category: work
+category: master
 ---
 
 **Reference**: <a href="https://github.com/facebookresearch/votenet">Deep Hough Voting, C.R. Qi, K. He, L.J. Guibas, 2019.</a>
@@ -15,7 +15,7 @@ category: work
 
 <div class="row">
 <div class="col-sm mt-3 mt-md-0">
-{% include figure.html path="assets/img/Npm3d/project_deephoughvoting_results.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
+{% include figure.html path="assets/img/project/Npm3d/project_deephoughvoting_results.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
 </div>
 </div>
 <div class="caption">
@@ -24,7 +24,7 @@ category: work
 
 <div class="row">
 <div class="col-sm mt-4 mt-md-0">
-{% include figure.html path="assets/img/Npm3d/project_deephoughvoting_results_3.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
+{% include figure.html path="assets/img/project/Npm3d/project_deephoughvoting_results_3.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
 </div>
 </div>
 <div class="caption">

_projects/2_project.md

Lines changed: 8 additions & 8 deletions
@@ -1,10 +1,10 @@
 ---
 layout: page
-title: Correlate objects and their effects
-description: Reproduce the results of "Omnimatte" on the DAVIS dataset and on custom Youtube videos.
-img: assets/img/DeepL/project_omnimatte.png
+title: Omnimatte
+description: Video inpainting
+img: assets/img/project/DeepL/project_omnimatte.png
 importance: 1
-category: work
+category: master
 ---
 
 **Reference:** <a href="https://github.com/erikalu/omnimatte">Omnimatte: Associating Objects and Their Effects in Video, E. Lu, F. Cole, et al., 2021.</a>
@@ -14,13 +14,13 @@ category: work
 **Results:** You can find below my personal results. Details on the training and on the implementation can be found in the following <a href="/assets/pdf/Report_Omnimatte"> pdf </a>. The hardest parts were to pre-process videos by calculating homographies, optical flow, binary masks, etc. and to do a notebook in order to run the code on Colab.
 <div class="row">
 <div class="col-sm mt-3 mt-md-0">
-{% include figure.html path="assets/img/DeepL/project_omnimatte_results1.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
+{% include figure.html path="assets/img/project/DeepL/project_omnimatte_results1.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
 </div>
 </div>
 
 <div class="row">
 <div class="col-sm mt-3 mt-md-0">
-{% include figure.html path="assets/img/DeepL/project_omnimatte_results1.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
+{% include figure.html path="assets/img/project/DeepL/project_omnimatte_results1.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
 </div>
 </div>
 <div class="caption">
@@ -29,13 +29,13 @@ category: work
 
 <div class="row">
 <div class="col-sm mt-3 mt-md-0">
-{% include figure.html path="assets/img/DeepL/project_omnimatte_results3.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
+{% include figure.html path="assets/img/project/DeepL/project_omnimatte_results3.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
 </div>
 </div>
 
 <div class="row">
 <div class="col-sm mt-3 mt-md-0">
-{% include figure.html path="assets/img/DeepL/project_omnimatte_results4.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
+{% include figure.html path="assets/img/project/DeepL/project_omnimatte_results4.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
 </div>
 </div>
 <div class="caption">

_projects/3_project.md

Lines changed: 5 additions & 5 deletions
@@ -1,10 +1,10 @@
 ---
 layout: page
-title: 3D reconstruction
-description: Reproduce the results of various papers to reconstruct 3D meshes.
-img: assets/img/RecVis/project_recvis.png
+title: Occupancy Networks
+description: 3D reconstruction
+img: assets/img/project/RecVis/project_recvis.png
 importance: 1
-category: work
+category: master
 ---
 
 **References:** <a href="https://github.com/facebookresearch/DeepSDF">Deep SDF, J.J. Park, et al., 2019.</a> <a href="https://github.com/autonomousvision/occupancy_networks">Occupancy Networks, L. Mescheder, 2018.</a> <a href="https://github.com/autonomousvision/shape_as_points">Shape as Points, S. Peng et al., 2021.</a>
@@ -18,7 +18,7 @@ Several modern methods to reconstruct a 3D mesh can be grouped into two categories
 <div class="col-sm">
 </div>
 <div class="col-auto-4">
-{% include figure.html path="assets/img/RecVis/project_recvis_results.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
+{% include figure.html path="assets/img/project/RecVis/project_recvis_results.jpg" title="example image" class="img-fluid rounded z-depth-1" %}
 </div>
 <div class="col-sm">
 </div>

_projects/4_project.md

Lines changed: 5 additions & 5 deletions
@@ -1,10 +1,10 @@
 ---
 layout: page
-title: 3D semantic segmentation
-description: Adapt "KP-conv" to industrial facility point clouds.
-img: assets/img/IC/profile.png
+title: KP-conv
+description: 3D semantic segmentation
+img: assets/img/project/IC/profile.png
 importance: 1
-category: work
+category: master
 ---
 
 **References:** <a href="https://github.com/HuguesTHOMAS/KPConv">KP-Conv, H. Thomas, et al., 2020.</a>
@@ -18,7 +18,7 @@ category: work
 
 <div class="row">
 <div class="col-sm mt-3 mt-md-0">
-{% include figure.html path="assets/img/IC/results_1.png"
+{% include figure.html path="assets/img/project/IC/results_1.png"
 title="example image"
 class="img-fluid rounded z-depth-1" %}
 </div>
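Every project page in this commit applies the same mechanical rename, moving images from `assets/img/<course>/` to `assets/img/project/<course>/`. A sketch of that rewrite as a one-line regex substitution (illustrative only; nothing suggests the commit was produced by a script, and the lookahead keeps already-moved `project/` and `paper/` paths untouched):

```python
import re

# Rewrite legacy image paths under assets/img/ to the new assets/img/project/
# prefix, skipping paths that already live under project/ or paper/.
OLD = re.compile(r"assets/img/(?!project/|paper/)")

def rewrite(line: str) -> str:
    return OLD.sub("assets/img/project/", line)

before = '{% include figure.html path="assets/img/IC/results_1.png"'
print(rewrite(before))
```

The negative lookahead makes the rewrite idempotent: running it twice over the same tree changes nothing the second time.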

_projects/5_project.md

Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
+---
+layout: page
+title: PointBeV - A Sparse Approach to BeV Predictions
+description: 2D BeV Segmentation
+img: assets/img/paper/2024_pointbev/pointbev.PNG
+importance: 1
+category: paper
+year: 2024
+---
+
+<h1 align="center"> {{page.title}} </h1>
+<h3 align="center"> Loïck Chambon &nbsp;&nbsp; <a href="https://eloiz.github.io">Éloi Zablocki</a> &nbsp;&nbsp; <a href="https://scholar.google.com/citations?user=QnRpMJAAAAAJ">Mickaël Chen</a> &nbsp;&nbsp; <a href="https://f-barto.github.io/">Florent Bartoccioni</a> &nbsp;&nbsp; <a href="https://ptrckprz.github.io/">Patrick Pérez</a> &nbsp;&nbsp; <a href="https://cord.isir.upmc.fr/">Matthieu Cord</a></h3>
+
+
+<h3 align="center"> {{page.venue}} {{page.year}} </h3>
+
+<div align="center">
+<p>
+{% if page.paper_url %}
+<a href="{{ page.paper_url }}"><i class="far fa-file-pdf"></i> Paper</a>&nbsp;&nbsp;
+{% endif %}
+{% if page.code_url %}
+<a href="{{ page.code_url }}"><i class="fab fa-github"></i> Code</a>&nbsp;&nbsp;
+{% endif %}
+{% if page.blog_url %}
+<a href="{{ page.blog_url }}"><i class="fab fa-blogger"></i> Blog</a>&nbsp;&nbsp;
+{% endif %}
+{% if page.slides_url %}
+<a href="{{ page.slides_url }}"><i class="far fa-file-pdf"></i> Slides</a>&nbsp;&nbsp;
+{% endif %}
+{% if page.bib_url %}
+<a href="{{ page.bib_url }}"><i class="far fa-file-alt"></i> BibTeX</a>&nbsp;&nbsp;
+{% endif %}
+</p>
+</div>
+
+
+<div class="publication-teaser">
+<img src="../../{{ page.img }}" alt="project teaser"/>
+</div>
+
+
+<hr>
+
+<h2 align="center"> Abstract</h2>
+
+<p align="justify">Bird's-eye View (BeV) representations have emerged as the de-facto shared space in driving applications, offering a unified space for sensor data fusion and supporting various downstream tasks. However, conventional models use grids with fixed resolution and range and face computational inefficiencies due to the uniform allocation of resources across all cells. To address this, we propose PointBeV, a novel sparse BeV segmentation model operating on sparse BeV cells instead of dense grids. This approach offers precise control over memory usage, enabling the use of long temporal contexts and accommodating memory-constrained platforms. PointBeV employs an efficient two-pass strategy for training, enabling focused computation on regions of interest. At inference time, it can be used with various memory/performance trade-offs and flexibly adjusts to new specific use cases. PointBeV achieves state-of-the-art results on the nuScenes dataset for vehicle, pedestrian, and lane segmentation, showcasing superior performance in static and temporal settings despite being trained solely with sparse signals. We will release our code along with two new efficient modules used in the architecture: Sparse Feature Pulling, designed for the effective extraction of features from images to BeV, and Submanifold Attention, which enables efficient temporal modeling.</p>
+
+<hr>
+<hr>
+
+<h2 align="center">BibTeX</h2>
+<left>
+<pre class="bibtex-box">
+@inproceedings{chambon2024pointbev,
+title={PointBeV: A Sparse Approach to BeV Predictions},
+author={Loick Chambon and Eloi Zablocki and Mickael Chen and Florent Bartoccioni and Patrick Perez and Matthieu Cord},
+year={2024},
+booktitle={CVPR}
+}
+</pre>
+</left>
+
+<br>
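The Liquid `{% if page.*_url %}` blocks in the new page emit each link only when the matching field exists in the front matter; since 5_project.md defines none of them yet, no links render until the fields are added. The same pattern in Python (the dict is a stand-in for Jekyll's `page` object, and the URL below is a placeholder, not a real link from the commit):

```python
# Optional-link rendering, mirroring the Liquid template above: one
# (field, label) pair per possible link, emitted only when the field is set.
LINKS = [("paper_url", "Paper"), ("code_url", "Code"), ("blog_url", "Blog"),
         ("slides_url", "Slides"), ("bib_url", "BibTeX")]

def render_links(page: dict) -> str:
    return " ".join(f'<a href="{page[k]}">{label}</a>'
                    for k, label in LINKS if page.get(k))

print(render_links({"code_url": "https://example.com/code"}))
# <a href="https://example.com/code">Code</a>
print(render_links({}))  # empty string: no fields set, no links rendered
```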
