Punch Detection/Classification Data Labeling Instructions #2
francislabountyjr announced in Tutorials
-
Hi Francis,

Great outline, really clear and practical for building a first-pass dataset. We've worked on similar video/action labeling projects, and one thing that helps a lot is having the right tooling to keep everything consistent and efficient from the start. If you're open to it, we'd be glad to explore a quick pilot with you to streamline the punch annotation workflow and help get you to a solid baseline dataset faster.

Best,
EvoLearns Team
-
v1 Dataset — Boxing Punch Annotations 🥊
Here’s how we want the first‑pass dataset to look so we can start training some baseline models.
1. What we’re after
The very first models we'll train are punch detection and punch classification. Whether that ends up as one multi‑task net or separate heads doesn't change how we label; the data requirements are the same. For every punch thrown we need:
- the start frame of the punch,
- the end frame, and
- its attributes (at minimum the punch type).
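To make that concrete, here's a minimal Python sketch of what one exported punch record could look like. The field names and the example labels are placeholders, not a fixed schema; map them onto whatever the annotation tool actually exports.

```python
from dataclasses import dataclass


@dataclass
class PunchAnnotation:
    """Minimum info per labeled punch for detection + classification.

    Field names are placeholders; adapt them to the tool's export format.
    """
    video_id: str     # which clip the punch comes from
    start_frame: int  # first frame of the punch motion
    end_frame: int    # last frame of the punch motion
    punch_type: str   # classification label, e.g. "jab" or "hook"


# Example record (all values invented for illustration)
punch = PunchAnnotation(
    video_id="fight_001",
    start_frame=1520,
    end_frame=1534,
    punch_type="jab",
)
```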
2. The rectangles (a.k.a. tracks)
Even though the bare minimum is "start frame + end frame + attributes," actually tracking the glove across those few frames makes QC and later experiments easier. So: keep a rectangle on the punching glove for every frame from the start frame to the end frame (quick workflow below).
How to track a punch (quick version)
1. Scrub to the frame where the glove leaves the guard and draw a tight box around the glove. That's the start frame.
2. Add keyframes every few frames as the glove extends and retracts, letting the tool interpolate the box between keyframes.
3. Close the track on the frame where the glove is back at guard. That's the end frame.
4. Set the track's attributes (punch type, etc.), then sanity-check by scrubbing through with the box overlay on. The sketch below shows what the exported track amounts to.
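To picture what a finished glove track boils down to, here's a small Python sketch: the track stored as sparse keyframes, densified by linear interpolation into one box per frame. Most annotation tools do this interpolation for you, and all the numbers here are invented for illustration.

```python
def interpolate_box(kf_a, kf_b, frame):
    """Linearly interpolate an (x, y, w, h) box between two keyframes."""
    (fa, box_a), (fb, box_b) = kf_a, kf_b
    t = (frame - fa) / (fb - fa)
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))


# Hypothetical glove track for one jab: sparse keyframes at start,
# full extension, and return to guard.
keyframes = [
    (1520, (410.0, 300.0, 40.0, 40.0)),  # start frame: glove at guard
    (1527, (520.0, 280.0, 40.0, 40.0)),  # full extension
    (1534, (415.0, 302.0, 40.0, 40.0)),  # end frame: back at guard
]

# Densify to one box per frame, which is what we'd overlay for QC.
dense = {}
for (f0, b0), (f1, b1) in zip(keyframes, keyframes[1:]):
    for f in range(f0, f1):
        dense[f] = interpolate_box((f0, b0), (f1, b1), f)
dense[keyframes[-1][0]] = keyframes[-1][1]  # include the last keyframe

print(dense[1523])  # box roughly 3/7 of the way to full extension
```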
3. Optional: tracking the fighters themselves
If you feel like going the extra mile, you can throw tracks on the two boxers too. Totally not required right now—there are plenty of off‑the‑shelf person‑tracking models we can bolt on later—but having clean boxer IDs might save us headaches down the road. Feel free to skip for v1 and circle back only if it turns out we need it.
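For a feel of the payoff, here's one crude, hypothetical heuristic (not anything we've settled on) for attributing a punch to a boxer once both glove tracks and boxer tracks exist: match the glove box at the punch's start frame to the boxer box it overlaps most.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0


def assign_fighter(glove_box, fighter_boxes):
    """Attribute a punch to whichever boxer's box overlaps its glove most
    at the punch's start frame. fighter_boxes: {fighter_id: (x, y, w, h)}."""
    return max(fighter_boxes, key=lambda fid: iou(glove_box, fighter_boxes[fid]))


# Made-up boxer boxes at a punch's start frame:
fighters = {
    "red_corner": (380.0, 150.0, 160.0, 420.0),
    "blue_corner": (620.0, 160.0, 150.0, 410.0),
}
print(assign_fighter((410.0, 300.0, 40.0, 40.0), fighters))  # -> red_corner
```

A real version would have to cope with clinches and occlusion, which is exactly why clean manually-verified boxer IDs could beat a heuristic like this later on.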
That’s it! Keep it simple, stay consistent, and shout if anything’s unclear. Let’s build a killer dataset. 🚀