
Commit 6ca4ec8

Update dependency com.azure:azure-ai-openai to v1.0.0-beta.16 (#498)
This PR contains the following updates:

| Package | Change | Age | Adoption | Passing | Confidence |
|---|---|---|---|---|---|
| [com.azure:azure-ai-openai](https://redirect.github.com/Azure/azure-sdk-for-java) | `1.0.0-beta.10` -> `1.0.0-beta.16` | [![age](https://developer.mend.io/api/mc/badges/age/maven/com.azure:azure-ai-openai/1.0.0-beta.16?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![adoption](https://developer.mend.io/api/mc/badges/adoption/maven/com.azure:azure-ai-openai/1.0.0-beta.16?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![passing](https://developer.mend.io/api/mc/badges/compatibility/maven/com.azure:azure-ai-openai/1.0.0-beta.10/1.0.0-beta.16?slim=true)](https://docs.renovatebot.com/merge-confidence/) | [![confidence](https://developer.mend.io/api/mc/badges/confidence/maven/com.azure:azure-ai-openai/1.0.0-beta.10/1.0.0-beta.16?slim=true)](https://docs.renovatebot.com/merge-confidence/) |

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/pixee/codemodder-java).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Arshan Dabirsiaghi <[email protected]>
1 parent 0f02929 commit 6ca4ec8

3 files changed: +5 −3 lines

framework/codemodder-base/build.gradle.kts (1 addition, 1 deletion)

```diff
@@ -29,7 +29,7 @@ dependencies {
   api(libs.javaparser.symbolsolver.model)
   api(libs.javadiff)
   api(libs.jtokkit)
-  api("com.azure:azure-ai-openai:1.0.0-beta.10")
+  api("com.azure:azure-ai-openai:1.0.0-beta.16")
   api("io.github.classgraph:classgraph:4.8.160")

   implementation(libs.tuples)
```

plugins/codemodder-plugin-llm/src/main/java/io/codemodder/plugins/llm/SarifToLLMForBinaryVerificationAndFixingCodemod.java (2 additions, 1 deletion)

```diff
@@ -151,7 +151,8 @@ private BinaryThreatAnalysis analyzeThreat(
     // If the estimated token count, which doesn't include the function (~100 tokens) or the reply
     // (~200 tokens), is close to the max, then assume the code is safe (for now).
     int tokenCount =
-        model.tokens(List.of(systemMessage.getContent(), userMessage.getContent().toString()));
+        model.tokens(
+            List.of(systemMessage.getStringContent(), userMessage.getContent().toString()));
     if (tokenCount > model.contextWindow() - 300) {
       return new BinaryThreatAnalysis(
           "Ignoring file: estimated prompt token count (" + tokenCount + ") is too high.",
```

plugins/codemodder-plugin-llm/src/main/java/io/codemodder/plugins/llm/SarifToLLMForMultiOutcomeCodemod.java (2 additions, 1 deletion)

```diff
@@ -201,7 +201,8 @@ private boolean estimatedToExceedContextWindow(final CodemodInvocationContext co
     int tokenCount =
         model.tokens(
             List.of(
-                getSystemMessage().getContent(), estimatedUserMessage.getContent().toString()));
+                getSystemMessage().getStringContent(),
+                estimatedUserMessage.getContent().toString()));
     // estimated token count doesn't include the function (~100 tokens) or the reply
     // (~200 tokens) so add those estimates before checking against window size
     tokenCount += 300;
```
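Both Java hunks feed an estimated prompt token count into the same guard: if the prompt plus a ~300-token reserve for the function definition and the reply would exceed the model's context window, the file is skipped. Below is a minimal, self-contained sketch of that pattern; `estimateTokens` (a crude characters-per-token heuristic) and the hard-coded `CONTEXT_WINDOW` are hypothetical stand-ins for the SDK-backed `model.tokens(...)` and `model.contextWindow()` used in the real codemods.

```java
import java.util.List;

public class ContextWindowGuard {

  // Hypothetical window size; the real value comes from model.contextWindow().
  static final int CONTEXT_WINDOW = 4096;

  // Reserve for the function definition (~100 tokens) and the reply (~200 tokens),
  // which are not part of the estimated prompt token count.
  static final int RESERVED_TOKENS = 300;

  /** Crude stand-in for the model tokenizer: roughly 4 characters per token. */
  static int estimateTokens(List<String> messages) {
    int chars = messages.stream().mapToInt(String::length).sum();
    return (chars + 3) / 4;
  }

  /** Mirrors the guard in the diffs: true when the prompt is estimated not to fit. */
  static boolean estimatedToExceedContextWindow(String systemMessage, String userMessage) {
    int tokenCount = estimateTokens(List.of(systemMessage, userMessage));
    return tokenCount + RESERVED_TOKENS > CONTEXT_WINDOW;
  }

  public static void main(String[] args) {
    // A short prompt fits comfortably; a very long one trips the guard.
    System.out.println(
        estimatedToExceedContextWindow("You are a security analyst.", "short snippet"));
    System.out.println(
        estimatedToExceedContextWindow("x".repeat(20_000), "y".repeat(20_000)));
  }
}
```

The design point the codemods make is that the check happens *before* the API call, so an oversized file is skipped cheaply instead of failing at request time.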

0 commit comments
