<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>GRAB - Chase Lolley</title>
<style>
/* Reuse the same CSS variables for consistency */
:root {
--bg-color: #121212;
--text-color: #e0e0e0;
--accent-color: #64ffda;
--card-bg: #1e1e1e;
--font-main: 'Segoe UI', Roboto, Helvetica, Arial, sans-serif;
}
body { background-color: var(--bg-color); color: var(--text-color); font-family: var(--font-main); margin: 0; padding: 0; line-height: 1.6; }
a { color: var(--accent-color); text-decoration: none; }
a:hover { text-decoration: underline; }
/* Layout for the detail page */
.container { max-width: 800px; margin: 0 auto; padding: 40px 20px; }
/* Back Button Style */
.back-nav { margin-bottom: 40px; }
.back-nav a { font-size: 0.9rem; color: #888; }
.back-nav a:hover { color: var(--accent-color); }
/* Typography */
h1 { font-size: 2rem; color: #fff; margin-bottom: 10px; }
h3 { color: #fff; margin-top: 30px; border-bottom: 1px solid #333; padding-bottom: 5px; }
ul { margin-bottom: 20px; }
li { margin-bottom: 10px; }
img { border-radius: 4px; border: 1px solid #333; }
.meta-grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 15px; background: var(--card-bg); padding: 20px; border-radius: 8px; margin-bottom: 40px; font-size: 0.9rem; }
</style>
</head>
<body>
<div class="container">
<div class="back-nav">
<a href="index.html">← Back to Portfolio</a>
</div>
<h1>Generalizable Recognition of Activity in Breeders (GRAB)</h1>
<div class="meta-grid">
<div><strong>Affiliation:</strong> GTRI - Intelligent Sustainable Tech</div>
<div><strong>Timeframe:</strong> August 2025 - Present</div>
<div><strong>Status:</strong> Ongoing</div>
<div><strong>Links:</strong> <a href="#" target="_blank">Tracklet Output (Video)</a></div>
</div>
<h3>Project Overview</h3>
<p>
An internal research project focused on building a robust, end-to-end computer vision pipeline that recognizes poultry behaviors from barn video. The system is designed to generalize across barns, lighting conditions, and camera viewpoints, in support of long-term goals in precision agriculture and animal welfare monitoring.
</p>
<p><em>Note: Descriptions and visuals have been intentionally abstracted to respect confidentiality.</em></p>
<h3>System and Technical Context</h3>
<img src="images/grab_pipeline.png" alt="GRAB pipeline" style="width: 100%; margin-bottom: 5px;">
<p style="font-size: 0.85rem; color: #888; text-align: center;"><em>Figure: End-to-end perception pipeline.</em></p>
<p>The system operates in real agricultural environments characterized by dense scenes, heavy occlusion, and variable lighting. Raw video is processed through a multi-stage perception pipeline that isolates individual animals and classifies behaviors over extended time horizons.</p>
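<p>Conceptually, the multi-stage structure described above can be sketched as a chain of stages, where each stage's output feeds the next. This is a purely illustrative sketch: the stage names, data shapes, and the hard-coded behavior label are placeholders, not the project's actual (confidential) implementation.</p>

```python
# Illustrative sketch of a multi-stage perception pipeline:
# segmentation -> tracking -> temporal behavior classification.
# All names and return values are placeholders for demonstration.
from functools import reduce

def segment_animals(frames):
    """Stage 1: isolate individual animals in each frame (stubbed)."""
    return [{"frame": i, "masks": [f]} for i, f in enumerate(frames)]

def track_animals(segmented):
    """Stage 2: link per-frame masks into identities over time (stubbed)."""
    return {"track_0": [s["masks"][0] for s in segmented]}

def classify_behavior(tracklets):
    """Stage 3: assign a behavior label per tracklet (stubbed)."""
    return {tid: "feeding" for tid in tracklets}

def run_pipeline(frames,
                 stages=(segment_animals, track_animals, classify_behavior)):
    """Feed the output of each stage into the next."""
    return reduce(lambda data, stage: stage(data), stages, frames)

labels = run_pipeline(["frame_0", "frame_1"])
```

<p>Composing the stages as plain callables keeps each one independently testable and swappable, which matters when individual models are iterated on separately.</p>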
<h3>Responsibilities and Contributions</h3>
<p><strong>Perception Pipeline Design</strong></p>
<ul>
<li>Designed an end-to-end poultry activity recognition pipeline spanning segmentation, tracking, and temporal behavior classification.</li>
</ul>
<p><strong>Segmentation and Data Quality</strong></p>
<ul>
<li>Implemented a segmentation-based preprocessing stage using vision foundation models to isolate individual animals.</li>
<li>Developed a custom binary classifier to filter segmentation outputs.</li>
</ul>
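<p>The mask-filtering idea can be sketched as follows. This is a minimal toy example, not project code: <code>MaskQualityClassifier</code>, its architecture, and the 0.5 threshold are all illustrative assumptions.</p>

```python
# Illustrative sketch: score candidate segmentation masks with a small
# binary classifier and keep only those above a confidence threshold.
import torch
import torch.nn as nn

class MaskQualityClassifier(nn.Module):
    """Toy CNN that scores a single-channel mask crop in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def filter_masks(masks, model, threshold=0.5):
    """Keep only the mask crops the classifier scores >= threshold."""
    with torch.no_grad():
        scores = model(masks).squeeze(1)
    return [m for m, s in zip(masks, scores) if s.item() >= threshold]

model = MaskQualityClassifier().eval()
candidates = torch.rand(4, 1, 64, 64)  # four dummy mask crops
kept = filter_masks(candidates, model)
```

<p>In practice such a filter sits between the foundation-model segmenter and the tracker, so that spurious masks never reach downstream stages.</p>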
<img src="images/grab_conf_mat.png" alt="Confusion Matrix" style="width: 100%; max-width: 500px; display:block; margin: 20px auto;">
<p><strong>Temporal Modeling</strong></p>
<ul>
<li>Built a tracklet generation pipeline that aggregates frame-level segmentations into temporally consistent clips.</li>
<li>Evaluated transformer-based architectures for long-range temporal reasoning.</li>
</ul>
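<p>The tracklet-aggregation step can be sketched like this. Again, a simplified illustration: the data layout (tuples of frame index, track ID, crop) and the minimum-length cutoff are assumptions for demonstration, not the project's actual format.</p>

```python
# Illustrative sketch: group frame-level detections by track ID into
# chronologically ordered tracklets, dropping tracks that are too
# short to support temporal behavior classification.
from collections import defaultdict

def build_tracklets(detections, min_len=3):
    """detections: iterable of (frame_idx, track_id, crop) tuples."""
    tracks = defaultdict(list)
    for frame_idx, track_id, crop in detections:
        tracks[track_id].append((frame_idx, crop))
    tracklets = {}
    for tid, items in tracks.items():
        items.sort(key=lambda t: t[0])      # chronological order
        if len(items) >= min_len:           # enough temporal context
            tracklets[tid] = [crop for _, crop in items]
    return tracklets

dets = [(0, "a", "crop0"), (1, "a", "crop1"), (2, "a", "crop2"),
        (0, "b", "crop3")]                  # track "b" is too short
clips = build_tracklets(dets)
```

<p>The resulting per-animal clips are what a transformer-based temporal model would consume for long-range reasoning.</p>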
<h3>Results and Evidence</h3>
<ul>
<li>Demonstrated reliable end-to-end activity recognition from raw video under real-world barn conditions.</li>
<li>Validated model robustness across multiple environments and acquisition conditions.</li>
</ul>
<h3>Tools Used</h3>
<p>Python, PyTorch, ROS, OpenCV, Transformer Architectures.</p>
<div style="margin-top: 50px; border-top: 1px solid #333; padding-top: 20px; text-align: center;">
<a href="index.html">Back to Home</a>
</div>
</div>
</body>
</html>