Read more about **Eloquent Chat History** in the Neuron AI documentation: **https://docs.neuron-ai.dev/the-basics/chat-history-and-memory#eloquentchathisotry**
<a name="monitoring"></a>

## Monitoring & Debugging
When you integrate AI Agents into your application you are no longer working only with functions and deterministic code;
you also program your agent by influencing probability distributions. Same input ≠ same output.
That means reproducibility, versioning, and debugging become real problems.

Many of the Agents you build with Neuron will involve multiple steps with multiple LLM invocations,
tool usage, access to external memories, etc. As these applications get more and more complex, it becomes crucial
to be able to inspect what exactly your agent is doing and why.

Why is the model making certain decisions? What data is the model reacting to? Prompting is not programming
in the common sense: there are no static types, small changes can break the output, long prompts increase latency,
and no two models behave exactly the same with the same prompt.

The best way to take control of your AI application is with [Inspector](https://inspector.dev). After you sign up,
make sure to set the `INSPECTOR_INGESTION_KEY` variable in your application's environment file to start monitoring your agents:
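For example, in a typical `.env` file (the value below is a placeholder for the ingestion key you get from your Inspector dashboard):

```dotenv
# Placeholder: replace with the ingestion key from your Inspector dashboard
INSPECTOR_INGESTION_KEY=your-ingestion-key-here
```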