Conversation
- Add ToolCallingBridge in C++ for parsing tool_call tags from LLM output
- Handle edge cases: missing closing tags, unquoted JSON keys
- Add tool registration and prompt formatting in C++ bridge
- TypeScript orchestration layer calls C++ for parsing, handles execution
- Add Llama 3.2 3B model to example app (suitable for tool calling)
- Update ChatScreen with tool calling demo (weather API example)

Architecture:
- C++ handles: parsing, validation, prompt formatting
- TypeScript handles: tool registration (stores executors), execution (needs JS APIs)

Co-Authored-By: jm
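The registration/execution split described above can be sketched from the TypeScript side. This is a minimal, illustrative sketch: `nativeParseToolCall` is a hypothetical stand-in for the C++ bridge binding (the real parser lives in ToolCallingBridge.cpp), so the flow is runnable on its own.

```typescript
// TypeScript owns the tool registry and execution (executors need JS APIs);
// parsing is delegated to native code. `nativeParseToolCall` below is a
// hypothetical stand-in for the C++ bridge so this sketch is self-contained.
type ToolExecutor = (args: Record<string, unknown>) => Promise<unknown>;

const registry = new Map<string, ToolExecutor>();

function registerTool(name: string, executor: ToolExecutor): void {
  registry.set(name, executor);
}

// Stand-in for the C++ bridge: extracts a <tool_call>{...}</tool_call> payload.
function nativeParseToolCall(output: string): {
  hasToolCall: boolean;
  toolName?: string;
  arguments?: Record<string, unknown>;
  cleanText: string;
} {
  const m = output.match(/<tool_call>([\s\S]*?)<\/tool_call>/);
  if (!m) return { hasToolCall: false, cleanText: output };
  const payload = JSON.parse(m[1] ?? "{}");
  return {
    hasToolCall: true,
    toolName: payload.name,
    arguments: payload.arguments ?? {},
    cleanText: output.replace(m[0], "").trim(),
  };
}

// Orchestration: parse the LLM output, then run the matching executor (if any).
async function executeIfToolCall(llmOutput: string): Promise<unknown | null> {
  const parsed = nativeParseToolCall(llmOutput);
  if (!parsed.hasToolCall || !parsed.toolName) return null;
  const executor = registry.get(parsed.toolName);
  if (!executor) return { error: `Unknown tool: ${parsed.toolName}` };
  return executor(parsed.arguments ?? {});
}
```

The design point is that only the executor closures stay on the JS side; everything string-shaped (prompt formatting, tag parsing) can cross the bridge.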
- Replace console.log with SDKLogger for consistency
- Use ?? instead of || for maxToolCalls to respect explicit 0
- Parse argumentsJson if it's a string from C++
- Update comments to accurately reflect architecture (C++ parses, TS handles registry)
- Remove toolsUsed field from analytics (not in type)
- Fix doc comment for parseToolCallFromOutput return format
Signed-off-by: Hyunoh-Yeo <hyunoh.yeo@gmail.com>
feat: Add tool calling support with C++
updating git ignore
fix: ios swift app is not deleting models once downloaded + Metadata display bug
📝 Walkthrough

Introduces comprehensive tool-calling capabilities across all SDKs (React Native, Android, iOS, Flutter). Adds the C++ commons implementation as a single source of truth for parsing and prompt formatting, platform-specific bridges, new demo tools (weather, time, calculator), and UI/settings integrations in all sample applications.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant UI as Chat UI
    participant SDK as RunAnywhere SDK
    participant CppBridge as C++ Bridge<br/>(Commons)
    participant LLM as LLM Engine
    participant ToolReg as Tool Registry
    participant Executor as Tool Executor
    User->>UI: Enter prompt + registered tools enabled
    UI->>SDK: generateWithTools(prompt, options)
    SDK->>ToolReg: getRegisteredTools()
    ToolReg-->>SDK: [tool definitions]
    SDK->>CppBridge: formatToolsForPrompt(tools, format)
    CppBridge-->>SDK: formatted system prompt
    SDK->>CppBridge: buildInitialPrompt(userPrompt, tools, options)
    CppBridge-->>SDK: combined initial prompt
    SDK->>LLM: generate(combined prompt)
    LLM-->>SDK: llm_output (may contain tool_call tags)
    SDK->>CppBridge: parseToolCallFromOutput(llm_output)
    CppBridge-->>SDK: {hasToolCall, toolName, arguments, cleanText}
    alt Tool Call Detected & AutoExecute
        SDK->>ToolReg: getTool(toolName)
        ToolReg->>Executor: executor(arguments)
        Executor-->>ToolReg: tool_result
        ToolReg-->>SDK: ToolResult
        SDK->>CppBridge: buildFollowupPrompt(originalPrompt, toolsPrompt, toolName, result)
        CppBridge-->>SDK: followup_prompt
        Note over SDK: Loop continues if maxToolCalls not reached
        SDK->>LLM: generate(followup_prompt)
        LLM-->>SDK: next_output
    else No Tool Call or ManualExecute
        SDK->>UI: ToolCallingResult {text, toolCalls, toolResults}
        UI->>User: Display response + tool indicators
    end
```
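The auto-execute loop in the diagram above can be sketched as follows. This is a minimal sketch, not the SDK's actual API: `generateWithToolsSketch` and its callback parameters (`generate`, `parseToolCall`, `execute`) are illustrative stand-ins for the SDK, LLM engine, and bridge.

```typescript
// Minimal sketch of the diagram's auto-execute loop: generate, parse for a
// tool call, execute it, fold the result into a follow-up prompt, repeat
// until no tool call remains or maxToolCalls is exhausted.
type ParseResult = { hasToolCall: boolean; toolName?: string; args?: any; cleanText: string };

async function generateWithToolsSketch(
  prompt: string,
  generate: (p: string) => Promise<string>,     // stand-in for the LLM engine
  parseToolCall: (out: string) => ParseResult,  // stand-in for the C++ bridge
  execute: (name: string, args: any) => Promise<unknown>, // tool registry lookup + run
  maxToolCalls = 3
): Promise<{ text: string; toolCalls: string[] }> {
  const toolCalls: string[] = [];
  let current = prompt;
  for (let i = 0; i <= maxToolCalls; i++) {
    const output = await generate(current);
    const parsed = parseToolCall(output);
    // Stop when the model answers directly, or when the call budget is spent.
    if (!parsed.hasToolCall || !parsed.toolName || i === maxToolCalls) {
      return { text: parsed.cleanText, toolCalls };
    }
    toolCalls.push(parsed.toolName);
    const result = await execute(parsed.toolName, parsed.args);
    // Follow-up prompt folds the tool result back into the conversation
    // (the real SDK delegates this formatting to buildFollowupPrompt in C++).
    current = `${prompt}\n<tool_result>${JSON.stringify(result)}</tool_result>`;
  }
  return { text: "", toolCalls };
}
```

The cap check runs before each execution, which matches the "Loop continues if maxToolCalls not reached" note in the diagram.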
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes
🚥 Pre-merge checks: ❌ 3 failed (2 warnings, 1 inconclusive)
```ts
  country: area?.country?.[0]?.value || '',
  temperature_f: parseInt(current.temp_F, 10),
  temperature_c: parseInt(current.temp_C, 10),
  condition: current.weatherDesc[0].value,
```
Missing optional chaining - could crash if weatherDesc array is empty or null
```diff
-          condition: current.weatherDesc[0].value,
+          condition: current.weatherDesc?.[0]?.value || 'Unknown',
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: examples/react-native/RunAnywhereAI/src/screens/ChatScreen.tsx
Line: 106:106
Comment:
Missing optional chaining - could crash if `weatherDesc` array is empty or null
```suggestion
condition: current.weatherDesc?.[0]?.value || 'Unknown',
```
How can I resolve this? If you propose a fix, please make it concise.
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In `@examples/react-native/RunAnywhereAI/src/screens/ChatScreen.tsx`:
- Around line 356-372: The code in ChatScreen.tsx currently logs raw user
prompts and tool results (prompt, result.toolCalls, result.toolResults) when
calling RunAnywhere.generateWithTools; remove or guard these logs to avoid
exposing sensitive data by either deleting the console.log lines that print the
raw prompt and toolResults or wrapping them behind a strict debug flag (e.g.,
process.env.DEBUG_LOGS) and always mask sensitive content (log only tool names,
counts, or sanitized summaries). Update the logging around
RunAnywhere.generateWithTools so you only log non-sensitive info like
result.toolCalls.map(t => t.toolName) or result.toolCalls.length unless explicit
debug mode is enabled and ensure any debug path documents it is off in
production.
- Around line 84-138: The code logs user-provided data in the get_weather tool
(console.log('[Tool] get_weather called for:', location)) and in
get_current_time (console.log('[Tool] get_current_time called')), which can
expose PII in production; update these to only log when in development or to
redact/sanitize inputs: wrap or replace the console.log calls in the get_weather
handler and the get_current_time registration with a __DEV__ check (or an
environment-based feature flag), or log a non-identifying placeholder (e.g.,
'[Tool] get_weather called' without the location) and avoid returning unredacted
logs to console; ensure the identifiers referenced are the get_weather async
handler, the location variable, and the get_current_time handler so you change
the right statements.
In `@examples/react-native/RunAnywhereAI/src/screens/ToolsScreen.tsx`:
- Around line 88-138: The code logs user-provided or sensitive values (location
in the get_weather tool and timestamps in get_current_time) which can be PII;
update the RunAnywhere.registerTool handlers (the tool with name 'get_weather'
and the tool with name 'get_current_time') to avoid emitting raw user data to
console in production by either gating logs with __DEV__ (e.g., only call
console.log/console.error when __DEV__ is true) or redact the values before
logging, and ensure error logs do not include the full location string but
instead include limited context or the error message only.
- Around line 69-86: In registerDemoTools, remove or change the PII-leaking
console log inside the get_weather tool executor: locate the console.log('[Tool]
get_weather called for:', location) and either delete it or replace it with a
non-sensitive message such as console.log('[Tool] get_weather called') (or mask
the input) so the user-provided location is not printed; ensure this change is
applied in the get_weather executor function registered via
RunAnywhere.registerTool.
In `@sdk/runanywhere-react-native/.gitignore`:
- Line 14: The .gitignore contains the rule '**/ios/xcframeworks/' which
conflicts with the note that xcframeworks are bundled for npm; remove that
blanket ignore or restrict it to build-only artifacts (e.g., a subpath like
'**/ios/xcframeworks/build/' or other temporary build dirs) so the actual
xcframework deliverables are tracked and shipped; update or delete the
'**/ios/xcframeworks/' entry and ensure the note lines about bundling remain
consistent with the ignore rules.
In `@sdk/runanywhere-react-native/packages/core/android/CMakeLists.txt`:
- Around line 8-20: The CMake script uses FetchContent_MakeAvailable which
requires CMake ≥3.14; update the project CMake minimum or vendor the dependency:
change the cmake_minimum_required declaration to at least 3.14.0 (so
FetchContent_MakeAvailable is supported) or alternatively remove
FetchContent_MakeAvailable/FetchContent_Declare and vendor the nlohmann/json
headers manually; look for the cmake_minimum_required line and the
FetchContent_Declare/FetchContent_MakeAvailable blocks to apply the change.
In
`@sdk/runanywhere-react-native/packages/core/cpp/bridges/ToolCallingBridge.cpp`:
- Around line 52-65: The quote-escape check in the loop using jsonStr[i-1] !=
'\\' fails for sequences with multiple backslashes (e.g., \\\") — change the
logic where the code toggles inString (around the for loop over jsonStr and the
if (c == '"' ...) block) to count the number of consecutive backslashes
immediately preceding the quote in jsonStr and treat the quote as escaped only
if that count is odd; update the branch that currently uses jsonStr[i-1] to use
this parity test so inString is toggled correctly for double-escaped
backslashes.
In
`@sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+ToolCalling.ts`:
- Around line 362-365: The continuation call to generateWithTools decrements
options?.maxToolCalls and can reach 0 or negative, causing no generation; update
the call (in RunAnywhere+ToolCalling.ts where generateWithTools is invoked) to
compute maxToolCalls as Math.max(1, (options?.maxToolCalls ?? 5) - 1) (or
equivalent) so the continued invocation always uses at least 1 tool call; keep
the rest of options spread as-is to preserve other settings.
🧹 Nitpick comments (9)
examples/react-native/RunAnywhereAI/ios/RunAnywhereAI.xcodeproj/project.pbxproj (1)
479-479: Consider externalizing `DEVELOPMENT_TEAM` to avoid breaking local builds. Hardcoding a team ID in the project file makes builds fail for contributors or CI under different Apple accounts. Prefer setting this in an `.xcconfig` or per-user build settings.

Also applies to: 508-508
sdk/runanywhere-swift/Sources/RunAnywhere/Foundation/Bridge/Extensions/CppBridge+Storage.swift (1)
236-247: Reuse FileOperationsUtilities.fileSize to avoid duplication
Optional: leverage the existing helper for file-size retrieval to keep consistency and reduce duplicate logic.

♻️ Proposed refactor

```diff
-        if let attrs = try? fm.attributesOfItem(atPath: url.path),
-           let fileSize = attrs[.size] as? Int64 {
-            return fileSize
-        } else {
-            return 0
-        }
+        return FileOperationsUtilities.fileSize(at: url) ?? 0
```

examples/react-native/RunAnywhereAI/src/types/index.ts (1)
25-33: Update the top-level tab list comment to include Tools. The new tab is documented below, but the header still lists only the original tabs. Consider syncing it for clarity.

✏️ Suggested doc tweak

```diff
- * Tabs: Chat, STT, TTS, Voice (VoiceAssistant), Settings
+ * Tabs: Chat, STT, TTS, Voice (VoiceAssistant), Tools, Settings
```

sdk/runanywhere-react-native/packages/core/cpp/bridges/ToolCallingBridge.cpp (1)
248-256: `callId` is hardcoded to 0 — document the limitation or implement proper ID generation. If the system ever needs to support multiple tool calls in a single response or correlate tool calls with results, a static `callId = 0` won't work. Consider either:
- Documenting this as a known limitation (single tool call per response)
- Generating unique IDs (e.g., incrementing counter or hash)
sdk/runanywhere-react-native/packages/core/cpp/bridges/ToolCallingBridge.hpp (1)
35-41: `ToolCallParseResult` struct is defined but unused — consider removing or using it. The `ToolCallParseResult` struct is declared but `parseToolCall()` returns a JSON string instead of this struct. This creates confusion about the API design. Either:
- Remove the unused struct
- Use the struct internally and serialize to JSON at the boundary
sdk/runanywhere-react-native/packages/core/src/types/ToolCallingTypes.ts (1)
33-37: Widen `ToolParameter.enum` type to support numeric and boolean enums. The `ParameterType` definition allows `'number'` and `'boolean'`, but `enum` is restricted to `string[]`, creating a type mismatch. When a parameter type is `number` or `boolean`, developers cannot define enum-constrained values for it.

♻️ Suggested type change

```diff
-  enum?: string[];
+  enum?: Array<string | number | boolean>;
```

sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+ToolCalling.ts (3)
47-53: Consider logging or warning on tool overwrite. When registering a tool with a name that already exists, the current implementation silently overwrites it. This might be intentional, but a debug log indicating the overwrite could help with troubleshooting.
💡 Optional: Add overwrite detection
```diff
 export function registerTool(
   definition: ToolDefinition,
   executor: ToolExecutor
 ): void {
+  if (registeredTools.has(definition.name)) {
+    logger.debug(`Overwriting existing tool: ${definition.name}`);
+  }
   logger.debug(`Registering tool: ${definition.name}`);
   registeredTools.set(definition.name, { definition, executor });
 }
```
110-114: Minor: `Date.now()` may produce duplicate callIds. If multiple tool calls are parsed within the same millisecond, `Date.now()` could generate identical callIds. Consider using a counter or adding randomness for uniqueness.

💡 Optional: Improve callId uniqueness

```diff
+let callIdCounter = 0;
+
 // Inside parseToolCallViaCpp:
 const toolCall: ToolCall = {
   toolName: result.toolName,
   arguments: args,
-  callId: `call_${result.callId || Date.now()}`,
+  callId: `call_${result.callId || `${Date.now()}_${++callIdCounter}`}`,
 };
```
241-246: `conversationHistory` is populated but never used. The `conversationHistory` array is built throughout the generation loop (lines 246, 278, 307-308) but is never actually utilized in prompt construction. The `fullPrompt` is manually constructed each iteration without referencing this history. This appears to be dead code or an incomplete feature.

♻️ Suggestion: Either remove or integrate conversationHistory
If this is intended for future use or debugging, consider:
- Adding a comment explaining its purpose
- Including it in the `ToolCallingResult` for debugging
- Using it to build prompts for better context preservation
If unneeded, remove lines 242, 246, 278, and 307-308.
```ts
  async (args) => {
    // Handle both 'location' and 'city' parameter names (models vary)
    const location = (args.location || args.city) as string;
    console.log('[Tool] get_weather called for:', location);

    try {
      const url = `https://wttr.in/${encodeURIComponent(location)}?format=j1`;
      const response = await fetch(url);

      if (!response.ok) {
        return { error: `Weather API error: ${response.status}` };
      }

      const data = await response.json();
      const current = data.current_condition[0];
      const area = data.nearest_area?.[0];

      return {
        location: area?.areaName?.[0]?.value || location,
        country: area?.country?.[0]?.value || '',
        temperature_f: parseInt(current.temp_F, 10),
        temperature_c: parseInt(current.temp_C, 10),
        condition: current.weatherDesc[0].value,
        humidity: `${current.humidity}%`,
        wind_mph: `${current.windspeedMiles} mph`,
        feels_like_f: parseInt(current.FeelsLikeF, 10),
      };
    } catch (error) {
      const msg = error instanceof Error ? error.message : String(error);
      console.error('[Tool] Weather fetch failed:', msg);
      return { error: msg };
    }
  }
);

// Current time tool
RunAnywhere.registerTool(
  {
    name: 'get_current_time',
    description: 'Gets the current date and time',
    parameters: [],
  },
  async () => {
    console.log('[Tool] get_current_time called');
    const now = new Date();
    return {
      date: now.toLocaleDateString(),
      time: now.toLocaleTimeString(),
      timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    };
  }
);

console.log('[ChatScreen] Tools registered: get_weather, get_current_time');
```
Avoid logging user-provided locations/time in production. These can be PII; gate behind __DEV__ or redact.
🔧 Suggested fix (guard logs)
```diff
-      console.log('[Tool] get_weather called for:', location);
+      if (__DEV__) {
+        console.log('[Tool] get_weather called for:', location);
+      }
...
-    console.log('[Tool] get_current_time called');
+    if (__DEV__) {
+      console.log('[Tool] get_current_time called');
+    }
...
-  console.log('[ChatScreen] Tools registered: get_weather, get_current_time');
+  if (__DEV__) {
+    console.log('[ChatScreen] Tools registered: get_weather, get_current_time');
+  }
```

🤖 Prompt for AI Agents
In `@examples/react-native/RunAnywhereAI/src/screens/ChatScreen.tsx` around lines
84 - 138, The code logs user-provided data in the get_weather tool
(console.log('[Tool] get_weather called for:', location)) and in
get_current_time (console.log('[Tool] get_current_time called')), which can
expose PII in production; update these to only log when in development or to
redact/sanitize inputs: wrap or replace the console.log calls in the get_weather
handler and the get_current_time registration with a __DEV__ check (or an
environment-based feature flag), or log a non-identifying placeholder (e.g.,
'[Tool] get_weather called' without the location) and avoid returning unredacted
logs to console; ensure the identifiers referenced are the get_weather async
handler, the location variable, and the get_current_time handler so you change
the right statements.
```diff
   try {
-    console.log('[ChatScreen] Starting streaming generation for:', prompt);
+    console.log('[ChatScreen] Starting generation with tools for:', prompt);

-    // Use streaming generation (matches Swift SDK: RunAnywhere.generateStream)
-    const streamingResult = await RunAnywhere.generateStream(prompt, {
+    // Use tool-enabled generation
+    // If the LLM needs to call a tool (like weather API), it happens automatically
+    const result = await RunAnywhere.generateWithTools(prompt, {
+      autoExecute: true,
+      maxToolCalls: 3,
       maxTokens: 1000,
       temperature: 0.7,
     });

-    let fullResponse = '';
-
-    // Stream tokens in real-time (matches Swift's for await loop)
-    for await (const token of streamingResult.stream) {
-      fullResponse += token;
-
-      // Update assistant message content as tokens arrive
-      updateMessage(
-        {
-          ...assistantMessage,
-          content: fullResponse,
-        },
-        currentConversation.id
-      );
-
-      // Scroll to keep up with new content
-      flatListRef.current?.scrollToEnd({ animated: false });
+    // Log tool usage for debugging
+    if (result.toolCalls.length > 0) {
+      console.log('[ChatScreen] Tools used:', result.toolCalls.map(t => t.toolName));
+      console.log('[ChatScreen] Tool results:', result.toolResults);
     }
```
Do not log raw user prompts/tool results by default. Prompts can contain sensitive data; guard or remove these logs.
🔧 Suggested fix (guard logs)
```diff
-      console.log('[ChatScreen] Starting generation with tools for:', prompt);
+      if (__DEV__) {
+        console.log('[ChatScreen] Starting generation with tools for:', prompt);
+      }
...
-      console.log('[ChatScreen] Tools used:', result.toolCalls.map(t => t.toolName));
-      console.log('[ChatScreen] Tool results:', result.toolResults);
+      if (__DEV__) {
+        console.log('[ChatScreen] Tools used:', result.toolCalls.map(t => t.toolName));
+        console.log('[ChatScreen] Tool results:', result.toolResults);
+      }
```

🤖 Prompt for AI Agents
In `@examples/react-native/RunAnywhereAI/src/screens/ChatScreen.tsx` around lines
356 - 372, The code in ChatScreen.tsx currently logs raw user prompts and tool
results (prompt, result.toolCalls, result.toolResults) when calling
RunAnywhere.generateWithTools; remove or guard these logs to avoid exposing
sensitive data by either deleting the console.log lines that print the raw
prompt and toolResults or wrapping them behind a strict debug flag (e.g.,
process.env.DEBUG_LOGS) and always mask sensitive content (log only tool names,
counts, or sanitized summaries). Update the logging around
RunAnywhere.generateWithTools so you only log non-sensitive info like
result.toolCalls.map(t => t.toolName) or result.toolCalls.length unless explicit
debug mode is enabled and ensure any debug path documents it is off in
production.
examples/react-native/RunAnywhereAI/src/screens/ToolsScreen.tsx (outdated review comments, resolved)
```gitignore
# iOS
ios/build/
**/ios/Binaries/
**/ios/xcframeworks/
```
Conflict with “do not gitignore xcframeworks” note.
Line 14 ignores `**/ios/xcframeworks/`, but lines 17–22 state xcframeworks are bundled for npm publish and should not be ignored. This will likely exclude required artifacts from source control/npm packages. Please remove this rule or narrow it to build-only output paths that are not shipped. Based on learnings, xcframeworks are required deliverables.
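One way to narrow the rule as suggested (a sketch only: the `build/` subpath is an assumption about where transient output lands in this repo, not something the PR specifies):

```gitignore
# Before (too broad: excludes shipped xcframework deliverables):
# **/ios/xcframeworks/

# After: ignore only transient build output inside xcframework directories,
# so the .xcframework bundles themselves stay tracked and ship with the npm package.
**/ios/xcframeworks/build/
```

With a rule like this, `git check-ignore` reports the build subdirectory as ignored while the xcframework bundles remain tracked.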
🤖 Prompt for AI Agents
In `@sdk/runanywhere-react-native/.gitignore` at line 14, The .gitignore contains
the rule '**/ios/xcframeworks/' which conflicts with the note that xcframeworks
are bundled for npm; remove that blanket ignore or restrict it to build-only
artifacts (e.g., a subpath like '**/ios/xcframeworks/build/' or other temporary
build dirs) so the actual xcframework deliverables are tracked and shipped;
update or delete the '**/ios/xcframeworks/' entry and ensure the note lines
about bundling remain consistent with the ignore rules.
```cmake
# =============================================================================
# nlohmann/json - Header-only JSON library for robust JSON parsing
# Used by ToolCallingBridge for parsing tool call JSON from LLM output
# =============================================================================
include(FetchContent)
FetchContent_Declare(
  nlohmann_json
  GIT_REPOSITORY https://github.com/nlohmann/json.git
  GIT_TAG v3.11.3
  GIT_SHALLOW TRUE
)
FetchContent_MakeAvailable(nlohmann_json)
```
🧩 Analysis chain
🏁 Script executed:
```shell
find . -name "CMakeLists.txt" -type f | head -20
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 558
🌐 Web query:
FetchContent CMake minimum version requirement
💡 Result:
- The FetchContent module itself is available starting in CMake 3.11 (“New in version 3.11”). [1][2]
- If you use the common modern helper `FetchContent_MakeAvailable()`, that command is available starting in CMake 3.14 (“Added in version 3.14”). [3]
So, in practice:
- Minimum = 3.11 if you only rely on `include(FetchContent)` + `FetchContent_Declare()` + `FetchContent_Populate()`. [1][2]
- Minimum = 3.14 if you use `FetchContent_MakeAvailable()`. [3]
Note: In CMake 4.0+, cmake_minimum_required(VERSION ...) must specify at least 3.5 (older compatibility is removed). [4]
Sources:
[1] CMake FetchContent docs (v3.20): (cmake.org)
[2] CMake FetchContent docs (v3.23): (cmake.org)
[3] CMake FetchContent docs (git-stage/latest): (cmake.org)
[4] CMake cmake_minimum_required() docs (latest): (cmake.org)
Citations:
- 1: https://cmake.org/cmake/help/v3.20/module/FetchContent.html?utm_source=openai
- 2: https://cmake.org/cmake/help/v3.23/module/FetchContent.html?utm_source=openai
- 3: https://cmake.org/cmake/help/git-stage/module/FetchContent.html?utm_source=openai
- 4: https://cmake.org/cmake/help/latest/command/cmake_minimum_required.html?utm_source=openai
🏁 Script executed:
```shell
cat -n ./sdk/runanywhere-react-native/packages/core/android/CMakeLists.txt | head -30
```

Repository: RunanywhereAI/runanywhere-sdks
Length of output: 1513
FetchContent_MakeAvailable requires CMake 3.14, not 3.11.
The code uses FetchContent_MakeAvailable() (line 19), which requires CMake 3.14. Current minimum is 3.9, causing build failures. Bump cmake_minimum_required to 3.14.0 or vendor the dependency.
✅ Correct fix
```diff
-cmake_minimum_required(VERSION 3.9.0)
+cmake_minimum_required(VERSION 3.14.0)
```

🤖 Prompt for AI Agents
In `@sdk/runanywhere-react-native/packages/core/android/CMakeLists.txt` around
lines 8 - 20, The CMake script uses FetchContent_MakeAvailable which requires
CMake ≥3.14; update the project CMake minimum or vendor the dependency: change
the cmake_minimum_required declaration to at least 3.14.0 (so
FetchContent_MakeAvailable is supported) or alternatively remove
FetchContent_MakeAvailable/FetchContent_Declare and vendor the nlohmann/json
headers manually; look for the cmake_minimum_required line and the
FetchContent_Declare/FetchContent_MakeAvailable blocks to apply the change.
```cpp
for (size_t i = 0; i < jsonStr.size(); i++) {
    char c = jsonStr[i];

    // Track if we're inside a string
    if (c == '"' && (i == 0 || jsonStr[i-1] != '\\')) {
        inString = !inString;
        result += c;
        continue;
    }

    if (inString) {
        result += c;
        continue;
    }
```
Edge case: double-escaped backslashes may cause incorrect string boundary tracking.
The escape detection `jsonStr[i-1] != '\\'` doesn't handle double-escaped backslashes like `\\"`. For example, in `{"key": "value\\"}`, the final `"` after `\\` is a real quote (not escaped), but this code would treat it as escaped.
This is a minor edge case since LLM outputs rarely contain such patterns, but worth noting for robustness.
🔧 Suggested fix for proper escape handling
```diff
 // Track if we're inside a string
-if (c == '"' && (i == 0 || jsonStr[i-1] != '\\')) {
+if (c == '"') {
+    // Count preceding backslashes to determine if quote is escaped
+    size_t backslashCount = 0;
+    size_t k = i;
+    while (k > 0 && jsonStr[k-1] == '\\') {
+        backslashCount++;
+        k--;
+    }
+    // Quote is escaped only if preceded by odd number of backslashes
+    if (backslashCount % 2 == 0) {
+        inString = !inString;
+    }
+    result += c;
+    continue;
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```cpp
for (size_t i = 0; i < jsonStr.size(); i++) {
    char c = jsonStr[i];

    // Track if we're inside a string
    if (c == '"') {
        // Count preceding backslashes to determine if quote is escaped
        size_t backslashCount = 0;
        size_t k = i;
        while (k > 0 && jsonStr[k-1] == '\\') {
            backslashCount++;
            k--;
        }
        // Quote is escaped only if preceded by odd number of backslashes
        if (backslashCount % 2 == 0) {
            inString = !inString;
        }
        result += c;
        continue;
    }

    if (inString) {
        result += c;
        continue;
    }
```
🤖 Prompt for AI Agents
In `@sdk/runanywhere-react-native/packages/core/cpp/bridges/ToolCallingBridge.cpp`
around lines 52 - 65, The quote-escape check in the loop using jsonStr[i-1] !=
'\\' fails for sequences with multiple backslashes (e.g., \\\") — change the
logic where the code toggles inString (around the for loop over jsonStr and the
if (c == '"' ...) block) to count the number of consecutive backslashes
immediately preceding the quote in jsonStr and treat the quote as escaped only
if that count is odd; update the branch that currently uses jsonStr[i-1] to use
this parity test so inString is toggled correctly for double-escaped
backslashes.
```ts
return generateWithTools(continuedPrompt, {
  ...options,
  maxToolCalls: (options?.maxToolCalls ?? 5) - 1,
});
```
Edge case: maxToolCalls could become zero or negative.
When options?.maxToolCalls is 1, the computed value becomes 0, causing generateWithTools to skip the generation loop entirely. This could produce unexpected results where the continuation returns without generating any response.
🔧 Suggested fix: Ensure minimum of 1 tool call for continuation
```diff
 return generateWithTools(continuedPrompt, {
   ...options,
-  maxToolCalls: (options?.maxToolCalls ?? 5) - 1,
+  maxToolCalls: Math.max(1, (options?.maxToolCalls ?? 5) - 1),
 });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
return generateWithTools(continuedPrompt, {
  ...options,
  maxToolCalls: Math.max(1, (options?.maxToolCalls ?? 5) - 1),
});
```
🤖 Prompt for AI Agents
In
`@sdk/runanywhere-react-native/packages/core/src/Public/Extensions/RunAnywhere+ToolCalling.ts`
around lines 362 - 365, The continuation call to generateWithTools decrements
options?.maxToolCalls and can reach 0 or negative, causing no generation; update
the call (in RunAnywhere+ToolCalling.ts where generateWithTools is invoked) to
compute maxToolCalls as Math.max(1, (options?.maxToolCalls ?? 5) - 1) (or
equivalent) so the continued invocation always uses at least 1 tool call; keep
the rest of options spread as-is to preserve other settings.
SDK Changes:
- Add ToolValue enum for type-safe JSON representation (string, number, bool, array, object, null)
- Add ToolCallingTypes.swift with type definitions using ToolValue
- Add ToolCallParser.swift with pure Swift parser for <tool_call> tags
- Add RunAnywhere+ToolCalling.swift with public API (registerTool, executeTool, generateWithTools)

Example App Changes:
- Add ToolCallInfo to Message model using ToolValue for serialization
- Add ToolCallViews.swift with minimal UI (indicator + detail sheet)
- Add ToolSettingsView.swift with tool registration settings and 3 demo tools
- Update ChatMessageComponents to show tool call indicator on messages
- Update ChatInterfaceView to show "Tools enabled" badge
- Update LLMViewModel to integrate tool calling into generation flow

Bug Fixes:
- Fix LLMViewModel+Analytics.swift nil coalescing warnings
- Simplify ModelSelectionSheet VLM handling
…ved argument handling - Updated ToolCallViews to use ToolValue for arguments and results. - Refactored ToolSettingsView to implement real API calls for weather and time tools using Open-Meteo API. - Improved ToolCallParser to extract tool names and arguments with multiple fallback strategies. - Added WeatherService for fetching real-time weather data and handling geocoding. - Enhanced error handling in tool execution and argument parsing.
- Add SDKTestApp: minimal iOS test app (Status, Chat, TTS tabs) - List/download/load LLM and TTS models with progress - Chat: send messages and get LLM responses - TTS: speak text with Piper voices - Add swift-spm-example: SPM consumer example - Point swift-spm-example and validation/swift-spm-consumer at RunanywhereAI/runanywhere-sdks - SDK: clearer Foundation Models error messages; RunAnywhereAI README chat/LLM notes
…ge.swift v0.17.5 - swift-auto-tag.yml: auto-tag swift-v* on push to main - swift-sdk-build-release.yml: Phase 2 – build on swift-v* tag, create release + semver tag - swift-sdk-release.yml: tag trigger removed in favor of Phase 2 - Package.swift: remote binaries for v0.17.5 for consumers
- CppBridge+Device/ModelAssignment: write to temp pointers in Task, copy to outResponse after wait - CppBridge+Platform: TTS create use handlePtr; generate use responsePtr for strdup - AlamofireDownloadService: capture requiresExtraction in local let for Task - LiveTranscriptionSession: use Sendable Ref for onTermination to avoid capturing self
…divergent history
- Save tag build commit (git rev-parse HEAD) before checkout
- Fetch origin main (git fetch)
- Compare tag commit vs origin/main (git rev-parse origin/main)
- Fail fast if they differ to avoid creating divergent history
- Use git pull --ff-only origin main for safe fast-forward only
…ult setup note
- Switch c-cpp from autobuild to manual (no build system at repo root)
- Add Build C/C++ step: cmake in sdk/runanywhere-commons with minimal opts
- Comment: disable Default setup in repo settings when using this workflow
Just opened to sync the dev branch and see the changes - not needed as of now - will open later if needed. Thanks!
Description
Brief description of the changes made.
Type of Change
Testing
Platform-Specific Testing (check all that apply)
Swift SDK / iOS Sample:
Kotlin SDK / Android Sample:
Flutter SDK / Flutter Sample:
React Native SDK / React Native Sample:
Labels
Please add the appropriate label(s):
SDKs:
- Swift SDK - Changes to Swift SDK (sdk/runanywhere-swift)
- Kotlin SDK - Changes to Kotlin SDK (sdk/runanywhere-kotlin)
- Flutter SDK - Changes to Flutter SDK (sdk/runanywhere-flutter)
- React Native SDK - Changes to React Native SDK (sdk/runanywhere-react-native)
- Commons - Changes to shared native code (sdk/runanywhere-commons)

Sample Apps:
- iOS Sample - Changes to iOS example app (examples/ios)
- Android Sample - Changes to Android example app (examples/android)
- Flutter Sample - Changes to Flutter example app (examples/flutter)
- React Native Sample - Changes to React Native example app (examples/react-native)

Checklist
Screenshots
Attach relevant UI screenshots for changes (if applicable):
Greptile Overview
Greptile Summary
Added tool calling support to React Native SDK, enabling LLMs to execute external functions (API calls, device operations). Also includes bug fixes for Swift SDK storage handling.
Major Changes:
- `ToolCallingBridge` for parsing `<tool_call>` tags from LLM output with robust JSON handling via nlohmann/json library
- TypeScript orchestration layer (`RunAnywhere+ToolCalling.ts`) with automatic execution loop and conversation management
- `ToolsScreen` with step-by-step execution visualization and real weather API integration
- Updated `ChatScreen` to use `generateWithTools()` API for transparent tool execution
- Fixed `calculateDirectorySize` to properly handle files vs directories

Architecture:
- `maxToolCalls` to prevent infinite loops

Issues Found:
- `ChatScreen.tsx` weather API handler (line 106) - could crash if API response structure varies

Confidence Score: 4/5
- One bug in `ChatScreen.tsx` line 106 that could crash if the weather API returns unexpected data.
- `examples/react-native/RunAnywhereAI/src/screens/ChatScreen.tsx` - fix the optional chaining bug on line 106 before merging

Important Files Changed
- Parses `<tool_call>` tags from LLM output with robust JSON handling using nlohmann/json library
- Defines `ToolCallParseResult` struct and parsing interface
- Defines `ToolDefinition`, `ToolCall`, `ToolResult`, and related interfaces
- `generateWithTools()` API - enables LLM to call weather and time tools transparently
- Fixes `calculateDirectorySize` to properly handle files vs directories by checking file type first
- Fixes `deleteStoredModel` to mark model as not downloaded in registry after deletion

Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant ChatScreen as ChatScreen.tsx
    participant ToolCalling as RunAnywhere+ToolCalling.ts
    participant LLM as LLM Engine
    participant Bridge as ToolCallingBridge.cpp
    participant Tool as Tool Executor (Weather API)
    User->>ChatScreen: "What's the weather in Tokyo?"
    ChatScreen->>ToolCalling: generateWithTools(prompt, options)
    Note over ToolCalling: Build system prompt with tool definitions
    ToolCalling->>ToolCalling: formatToolsForPrompt(tools)
    Note over ToolCalling: Iteration 1: Initial generation
    ToolCalling->>LLM: generateStream(prompt + tools)
    LLM-->>ToolCalling: "<tool_call>{"tool":"get_weather","arguments":{"location":"Tokyo"}}</tool_call>"
    ToolCalling->>Bridge: parseToolCallFromOutput(llmOutput)
    Note over Bridge: Parse using nlohmann/json<br/>Extract tool name & arguments<br/>Return clean text
    Bridge-->>ToolCalling: {hasToolCall: true, toolName: "get_weather", arguments: {...}}
    alt autoExecute = true
        ToolCalling->>Tool: executeTool(toolCall)
        Tool->>Tool: fetch("https://wttr.in/Tokyo?format=j1")
        Tool-->>ToolCalling: {temperature_c: 15, condition: "Sunny", ...}
        Note over ToolCalling: Iteration 2: Generate final response
        ToolCalling->>ToolCalling: Build prompt with tool result
        ToolCalling->>LLM: generateStream(prompt + tool_result)
        LLM-->>ToolCalling: "The weather in Tokyo is 15°C and sunny..."
        ToolCalling->>Bridge: parseToolCallFromOutput(response)
        Bridge-->>ToolCalling: {hasToolCall: false, cleanText: "The weather..."}
    end
    ToolCalling-->>ChatScreen: {text: "The weather...", toolCalls: [...], toolResults: [...]}
    ChatScreen-->>User: Display final response
```
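The flow in the diagram condenses to a parse-then-loop shape. `parseToolCallFromOutput` and `maxToolCalls` are names from the PR, but the TypeScript below is an illustrative reconstruction (including the missing-closing-tag fallback mentioned in the commits), not the SDK's actual code:

```typescript
interface ToolCallParseResult {
  hasToolCall: boolean;
  toolName?: string;
  arguments?: Record<string, unknown>;
  cleanText: string;
}

// Extract the first <tool_call> block; tolerate a missing closing tag
// by reading to the end of the output.
function parseToolCallFromOutput(output: string): ToolCallParseResult {
  const open = output.indexOf("<tool_call>");
  if (open === -1) return { hasToolCall: false, cleanText: output.trim() };
  const close = output.indexOf("</tool_call>", open);
  const body = output.slice(
    open + "<tool_call>".length,
    close === -1 ? output.length : close
  );
  try {
    const json = JSON.parse(body.trim());
    return {
      hasToolCall: true,
      toolName: json.tool,
      arguments: json.arguments ?? {},
      cleanText: output.slice(0, open).trim(),
    };
  } catch {
    // Malformed JSON: treat the whole output as plain text.
    return { hasToolCall: false, cleanText: output.trim() };
  }
}

type Executor = (args: Record<string, unknown>) => Promise<unknown>;

// Auto-execution loop: generate, parse, run the tool, feed the result
// back, and stop once there is no tool call or maxToolCalls is reached.
async function generateWithToolsLoop(
  generate: (prompt: string) => Promise<string>,
  tools: Map<string, Executor>,
  prompt: string,
  maxToolCalls = 3 // pass with ?? upstream so an explicit 0 disables tools
): Promise<string> {
  let current = prompt;
  for (let i = 0; i <= maxToolCalls; i++) {
    const parsed = parseToolCallFromOutput(await generate(current));
    if (!parsed.hasToolCall || i === maxToolCalls) return parsed.cleanText;
    const exec = tools.get(parsed.toolName!);
    if (!exec) return parsed.cleanText; // unknown tool: stop gracefully
    const result = await exec(parsed.arguments ?? {});
    current = `${current}\n<tool_result>${JSON.stringify(result)}</tool_result>`;
  }
  return "";
}
```

The `i === maxToolCalls` check is the guard against infinite loops called out in the review summary: even if the model keeps emitting tool calls, the loop returns after a bounded number of iterations.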