# Welcome to Stroma's Machine Learning Engineering Challenge!
> The objective of this challenge is for you to showcase your ability to build neural network pipelines.
Your solution should detect and track the nuts and bolts falling through the frame in the provided video snippets, counting them with high accuracy.
You are provided with 4 minutes of video for training, 2 minutes for validation, and another 2 minutes for testing. The [video files](https://github.com/Stroma-Vision/machine-learning-challenge/releases/download/v0.1/challenge.zip) are synthetically generated 640x640 frames at 30 FPS; each frame is accurately labeled in the [COCO](https://opencv.org/introduction-to-the-coco-dataset/) format with an additional field named `track_id`.
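At 30 FPS, those durations translate into a fixed number of labeled frames per split. A quick sanity-check sketch (split durations taken from the description above):

```python
FPS = 30  # frame rate of the provided videos

# Duration of each split in seconds, as stated above
split_seconds = {"train": 4 * 60, "val": 2 * 60, "test": 2 * 60}

# Expected number of labeled frames per split
split_frames = {name: secs * FPS for name, secs in split_seconds.items()}

print(split_frames)  # {'train': 7200, 'val': 3600, 'test': 3600}
```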
> Please review the [Challenge Instructions](https://stromavision.notion.site/Stroma-Machine-Learning-Engineer-Technical-Interview-19f4573982b64791b14121faddb2f176) once again before proceeding.
The image below shows the expected output of your model.
![Expected Output](./sample.gif)
## Data
**Folder Structure**

```bash
challenge
├── annotations
│   ├── instances_test.json
│   ├── instances_train.json
│   └── instances_val.json
└── images
    ├── test
    │   └── test.mp4
    ├── train
    │   └── train.mp4
    └── val
        └── val.mp4

6 directories, 6 files
```
Each annotation in the COCO format contains an additional `track_id` field, with the following schema:
**JSON Schema**

```json
"annotations": [
    {
        "id": int,
        "image_id": int,            (#frame)
        "category_id": int,
        "segmentation": RLE,
        "area": float,
        "bbox": [x, y, width, height],
        "iscrowd": 0,
        "track_id": int
    },
    ...
]
```
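A minimal sketch of reading these annotations and grouping them by frame (the file path in the usage comment follows the folder structure above):

```python
import json
from collections import defaultdict

def annotations_by_frame(coco: dict) -> dict:
    """Group COCO annotations by their image_id (frame number)."""
    frames = defaultdict(list)
    for ann in coco["annotations"]:
        frames[ann["image_id"]].append(ann)
    return dict(frames)

# Usage:
# with open("challenge/annotations/instances_train.json") as f:
#     coco = json.load(f)
# frames = annotations_by_frame(coco)
```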
You may use any model of your preference; if it requires a different annotation format, take care when converting the dataset to that format.
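For instance, if you pick a YOLO-style detector, COCO's top-left `[x, y, width, height]` boxes must be converted to normalized center coordinates. A sketch, assuming the 640x640 frames described above:

```python
def coco_to_yolo_bbox(bbox, img_w=640, img_h=640):
    """Convert a COCO [x, y, width, height] box (top-left origin)
    to a YOLO-style [cx, cy, w, h] box, normalized to [0, 1]."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]

# A box covering the whole 640x640 frame maps to the image center:
coco_to_yolo_bbox([0, 0, 640, 640])  # → [0.5, 0.5, 1.0, 1.0]
```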
## Results
You are free to present your work in any format; it will be evaluated on its overall presentation. Visualizations are encouraged. Keep in mind, however, that your audience will be technical and familiar with the field, so a clear and concise explanation of your work is highly recommended.
⚠️ Remember that the performance of your model will be evaluated using a separate validation dataset.
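Since each object keeps its `track_id` across frames, the ground-truth count per class is simply the number of unique `track_id`s for that class. A sketch of how counts can be derived from the annotations (not an official evaluation script):

```python
from collections import defaultdict

def count_objects(annotations):
    """Count distinct tracked objects per category_id
    as the number of unique track_ids."""
    tracks = defaultdict(set)
    for ann in annotations:
        tracks[ann["category_id"]].add(ann["track_id"])
    return {cat: len(ids) for cat, ids in tracks.items()}
```

Comparing your tracker's predicted counts against this ground truth is one simple way to visualize accuracy over a clip.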
`Note: You may submit a GitHub repo with scripts or a Google Colab notebook with your work.`
## Suggestions
64+
65+
- Training a model from scratch may take a lot of time; you may use a `pretrained` model and fine-tune it to reach your goal.
- Optimize the dataset for the available hardware resources by either utilizing a `subset` to iterate faster or applying `augmentation techniques` to improve your model's accuracy, as appropriate.
- Make sure to document your work; you may provide an explanatory `README.md` file or use `Jupyter Notebook`'s markdown cells to explain your findings.
- Please `document` any difficulties you encounter and how you resolve them during this challenge, as they are of utmost relevance to us.
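One way to build such a `subset` for faster iteration is to keep only every nth frame and its annotations. A minimal sketch against the COCO-style dict described above:

```python
def subset_every_nth(coco: dict, n: int) -> dict:
    """Keep every n-th frame (by image id) and its annotations,
    producing a smaller dataset for quicker training iterations."""
    keep = {img["id"] for img in coco["images"] if img["id"] % n == 0}
    return {
        **coco,
        "images": [img for img in coco["images"] if img["id"] in keep],
        "annotations": [a for a in coco["annotations"] if a["image_id"] in keep],
    }
```

Note that aggressive subsampling shortens each object's visible track, which can hurt tracking more than detection.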