<table align="center" width="800px">
<tr>
<td>
<p align="justify" width="20%">
We tackle the problem of developing humanoid loco-manipulation skills with deep imitation learning. Collecting human demonstrations on humanoids, combined with the difficulty of training policies for a robot with many degrees of freedom, poses substantial challenges.
We introduce <b>TRILL</b>, a data-efficient framework for learning humanoid loco-manipulation policies from human demonstrations.
In this framework, we collect human demonstration data through an intuitive Virtual Reality (VR) interface.
We employ the whole-body control formulation to transform task-space commands from human operators into the robot's joint-torque actuation while stabilizing its dynamics.
By employing high-level action abstractions tailored for humanoid robots, our method can efficiently learn complex loco-manipulation skills.
We demonstrate the effectiveness of TRILL in simulation and on a real-world robot across various types of tasks.
</p>
</td>
</tr>
<tr>
<td>
<p align="justify" width="20%">
TRILL addresses the challenge of learning humanoid loco-manipulation.
We introduce a learning framework that facilitates teleoperated demonstrations with task-space commands provided by a human demonstrator.
The trained policies leverage the complexity and adaptability of human decision-making to generate these commands.
The robot control interface then executes these target commands through joint-torque actuation while complying with the robot's dynamics.
This synergistic combination of imitation learning and whole-body control enables our method to succeed in both simulated and real-world environments.
</p>
</td>
</tr>
<tr>
<td>
<p align="justify" width="20%">
The trained policies generate target task-space commands at 20 Hz from onboard stereo camera observations and the robot's proprioceptive feedback.
The robot control interface realizes these task-space commands, computes the desired joint torques at 100 Hz, and sends them to the humanoid robot for actuation; a minimal sketch of this two-rate loop is shown below the table.
More implementation details can be found on <a href="https://github.com/UT-Austin-RPL/TRILL/blob/main/docs/Implementation-Details.md">this page</a>.
</p>
</td>
</tr>
</table>
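<p align="justify" width="20%">
To make the division of labor between the learned policy and the robot control interface concrete, the snippet below gives a minimal, illustrative sketch of such a two-rate loop. It is not the actual TRILL implementation (see the implementation details page linked above); the <code>policy</code>, <code>wbc</code>, and <code>robot</code> objects and their methods are hypothetical placeholders.
</p>

<pre>
# Illustrative sketch only: a 20 Hz policy loop nested inside a 100 Hz
# whole-body control loop. All objects and method names are hypothetical.

POLICY_HZ, CONTROL_HZ = 20, 100
STEPS_PER_COMMAND = CONTROL_HZ // POLICY_HZ  # 5 control steps per policy step

def run_episode(policy, wbc, robot, horizon=2000):
    command = None
    for step in range(horizon):
        if step % STEPS_PER_COMMAND == 0:
            # 20 Hz: the visuomotor policy maps stereo images and
            # proprioception to a task-space command.
            observation = {
                "stereo_rgb": robot.get_stereo_images(),
                "proprio": robot.get_proprioception(),
            }
            command = policy.get_action(observation)

        # 100 Hz: whole-body control converts the task-space command into
        # joint torques while stabilizing the robot's dynamics.
        torques = wbc.compute_torques(command, robot.get_state())
        robot.apply_torques(torques)
</pre>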
<hr>
<h1align="center">Real-Robot Teleoperation</h1>
<table align="center" width="800px">
<tr>
<td>
<p align="justify" width="20%">
We design an intuitive VR teleoperation system that reduces the cognitive and physical burden on human operators when providing task demonstrations.
As a result, our teleoperation approach can produce high-quality demonstration data while maintaining safe robot operation. A simplified sketch of how such VR input can be mapped to task-space commands is shown below the table.
Music: <a href="https://soundcloud.com/bergscloud/happy">Happy</a> by <a href="https://soundcloud.com/bergscloud">Luke Bergs</a>
</p>
</td>
</tr>
</table>
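<p align="justify" width="20%">
As a rough illustration of how a VR interface can drive the same task-space command format that the learned policies later produce, the sketch below maps controller state to end-effector targets and a locomotion mode. It is a hypothetical example, not the actual TRILL teleoperation code; the field names and button mapping are assumptions.
</p>

<pre>
# Illustrative sketch only: converting VR controller state into a task-space
# command of the kind consumed by the whole-body controller. All names here
# are hypothetical placeholders.

def vr_to_task_space_command(vr_state, prev_command):
    command = dict(prev_command)

    for side in ("left", "right"):
        controller = vr_state[side]
        if controller["grip_pressed"]:
            # Track the 6-DoF controller pose with the corresponding hand
            # while the grip button is held.
            command[side + "_ee_pos"] = controller["position"]
            command[side + "_ee_quat"] = controller["orientation"]
        # Trigger value in [0, 1] opens or closes the gripper.
        command[side + "_gripper"] = controller["trigger"]

    # A joystick or button press selects a discrete locomotion mode.
    command["locomotion_mode"] = (
        "walk" if vr_state["right"]["joystick_forward"] else "stand"
    )
    return command
</pre>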
<hr>
<h1align="center">Real-Robot Deployment</h1>
<table align="center" width="800px">
<tr>
<td>
<p align="justify" width="20%">
We demonstrate the application of TRILL on the real robot, deploying visuomotor policies trained for dexterous manipulation tasks.
During evaluation, the robot performed each task 10 times in a row without rebooting, succeeding in 8 out of 10 trials of the <i>Tool pick-and-place</i> task and 9 out of 10 trials of the <i>Removing the spray cap</i> task.
</p>
</td>
</tr>
<tr>
<td>
<p align="justify" width="20%">
We design two realistic simulation environments and evaluate the robot's ability to perform subtasks involving free-space locomotion, manipulation, and loco-manipulation.
TRILL, a framework tailored to training humanoid robots, achieves success rates of 96% on free-space locomotion tasks, 80% on manipulation tasks, and 92% on loco-manipulation tasks.
</p>
</td>
</tr>
</table>