
Commit ac92a73

kimishpatel authored and pytorchbot committed
[Executorch][llm] Make runner return error if execution was not successful (pytorch#12141)
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: pytorch#12129 by @kimishpatel ^ Please use this as the source of truth for the PR details, comments, and reviews.
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/kimishpatel/194/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/kimishpatel/194/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/kimishpatel/194/orig
@diff-train-skip-merge
Co-authored-by: Kimish Patel <[email protected]>
1 parent 3bff95b commit ac92a73

File tree

1 file changed (+10, -2 lines)

examples/models/llama/main.cpp

Lines changed: 10 additions & 2 deletions
@@ -100,12 +100,20 @@ int32_t main(int32_t argc, char** argv) {
   }
 
   if (warmup) {
-    runner->warmup(prompt, /*max_new_tokens=*/seq_len);
+    auto error = runner->warmup(prompt, /*max_new_tokens=*/seq_len);
+    if (error != executorch::runtime::Error::Ok) {
+      ET_LOG(Error, "Failed to warmup llama runner");
+      return 1;
+    }
   }
   // generate
   executorch::extension::llm::GenerationConfig config{
       .seq_len = seq_len, .temperature = temperature};
-  runner->generate(prompt, config);
+  auto error = runner->generate(prompt, config);
+  if (error != executorch::runtime::Error::Ok) {
+    ET_LOG(Error, "Failed to generate with llama runner");
+    return 1;
+  }
 
   return 0;
 }
