
Commit 6db0dec

update
1 parent 2463ac3 commit 6db0dec

File tree

1 file changed: +41 -19 lines changed


index.html

Lines changed: 41 additions & 19 deletions
@@ -195,7 +195,7 @@ <h2 class="title is-3">Best-of-N Results</h2>
 </p>
 <div class="box m-5">
 <div class="content has-text-centered">
-<img src="static/images/ac_table2.png" alt="main best-of-N results" class="center" width="100%"/>
+<img src="static/images/ac_table2.png" alt="main best-of-N results" class="center" width="80%"/>
 </div>
 </div>
 </div>
@@ -212,38 +212,60 @@ <h2 class="title is-3">RL Results</h2>
 </p>
 <div class="box m-5">
 <div class="content has-text-centered">
-<img src="static/images/ac_table3.png" alt="RL results" class="center" width="100%"/>
+<img src="static/images/ac_table3.png" alt="RL results" class="center" width="80%"/>
 </div>
 </div>
 </div>
 </div>
 </div>
 
-<div class="columns is-centered m-6">
-<div class="column is-full has-text-centered content">
-<h2 class="title is-3">Ablation Study</h2>
-<div class="carousel results-carousel">
+<div class="columns is-centered has-text-centered">
+<div class="column is-four-fifths">
+<h2 class="title is-3">Comparison with existing RM</h2>
+<div class="content has-text-justified">
+<p>
+Existing top-ranked reward models on RewardBench can perform quite poorly for best-of-N sampling in the coding scenario, and can sometimes even underperform the greedy decoding results. However, our AceCodeRM-7B consistently outperforms them, with an average improvement of <b>6.9</b>.
+</p>
 <div class="box m-5">
-<div class="content has-text-centered">
-<img src="static/images/ac_table4.png" alt="Comparion with other RM" width="95%"/>
-<p> Existing top-ranked reward models on Reward Bench can perform pretty bad for best-of-N sampling in the coding scenarion, and sometime can underperform the greedy results. However, our AceCodeRM-7B consistently outperform them with an average of <b>6.9</b> improvement </p>
+<div class="content has-text-centered">
+<img src="static/images/ac_table4.png" alt="Comparison with other RM" class="center" width="80%"/>
+</div>
 </div>
-</div>
+</div>
+</div>
+</div>
+
+<div class="columns is-centered has-text-centered">
+<div class="column is-four-fifths">
+<h2 class="title is-3">Test case filtering matters</h2>
+<div class="content has-text-justified">
+<p>
+We also conduct experiments to investigate how filtering the test cases with a proxy model affects the results. As shown in the table, training the RM on data after filtering improves performance significantly, especially on harder coding benchmarks such as MBPP-Plus and BigCodeBench-Hard (C/I). We believe this is because test case filtering ensures the remaining test cases are consistent with one another and thus point to the same implicit program, which improves the quality of the rewards.
+</p>
 <div class="box m-5">
-<div class="content has-text-centered">
-<img src="static/images/ac_table5.png" alt="Test case filtering matters" width="95%"/>
-<p>We also conduct experiments to investigate how filtering the test cases with a proxy model can affect the results. As shown in table, training RM on data after the filtering improve the performance significantly, especially for those hard code questions like MBPP-Plus and BigCodeBench-Hard (C/I). We believe this is because the test case filtering can ensure the remaining ones are consistent with each other and thus point to the same implicit program, which improves the quality of the rewards.</p>
+<div class="content has-text-centered">
+<img src="static/images/ac_table5.png" alt="Test case filtering matters" class="center" width="80%"/>
+</div>
 </div>
-</div>
+</div>
+</div>
+</div>
+
+<!-- <div class="columns is-centered has-text-centered">
+<div class="column is-four-fifths">
+<h2 class="title is-3">RM backbone matters</h2>
+<div class="content has-text-justified">
+<p>
+We show that Qwen2.5-Coder is a better backbone for the reward model than Llama-3.1-8B. This is because the Qwen2.5-Coder models have been pre-trained on far more code-related data than the Llama-3.1 models, and are thus more knowledgeable when tuned into a reward model.
+</p>
 <div class="box m-5">
-<div class="content has-text-centered">
-<img src="static/images/ac_table6.png" alt="RM Backbone Matters" width="95%"/>
-<p> We show that Qwen2.5-Coder is a better backbone for the reward model compared to Llama-3.1-8B. This is because the Qwen2.5-Coder models have been pre-trained on way more code-related data compared to the Llama-3.1 models, and thus more knowledgeable when tuning it into a reward model.</p>
+<div class="content has-text-centered">
+<img src="static/images/ac_table6.png" alt="RM Backbone Matters" class="center" width="80%"/>
+</div>
 </div>
-</div>
 </div>
 </div>
-</div>
+</div> -->
 
 </div>
 </section>
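
Note on the "Comparison with existing RM" text added above: it assumes familiarity with best-of-N sampling scored by a reward model. A minimal sketch of that selection loop is shown below; generate_candidates and score_with_rm are hypothetical placeholders for whatever sampling and RM-scoring functions a codebase provides, not this repository's actual API.

# Minimal sketch of best-of-N sampling with a reward model (RM).
# `generate_candidates` and `score_with_rm` are hypothetical stand-ins.
from typing import Callable, List


def best_of_n(
    prompt: str,
    generate_candidates: Callable[[str, int], List[str]],  # samples n candidate programs
    score_with_rm: Callable[[str, str], float],            # RM score for (prompt, program)
    n: int = 16,
) -> str:
    """Sample n candidates and return the one the reward model ranks highest."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda program: score_with_rm(prompt, program))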
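
Note on the "Test case filtering matters" text added above: one plausible reading of the filtering step is to keep only the generated test cases that a proxy model's own solution passes, so the surviving tests agree with one another and describe the same implicit program. The sketch below illustrates that idea under this assumption; proxy_solution and test_cases are illustrative inputs, not the project's actual pipeline.

# Minimal sketch of test-case filtering with a proxy model: keep only the
# generated assertions that the proxy solution satisfies, so the survivors
# are mutually consistent. The exact rule used in the paper may differ.
from typing import List


def filter_test_cases(proxy_solution: str, test_cases: List[str]) -> List[str]:
    """Return the subset of test cases that the proxy solution passes."""
    kept = []
    for test in test_cases:
        namespace: dict = {}
        try:
            exec(proxy_solution, namespace)  # define the candidate function(s)
            exec(test, namespace)            # e.g. "assert add(1, 2) == 3"
            kept.append(test)
        except Exception:
            pass  # inconsistent or broken test case: drop it
    return kept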
