updating api route to use claude #4601
eriwarr wants to merge 17 commits into SatcherInstitute:main
Conversation
Summary of Changes
Hello @eriwarr, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request transitions the application's AI insight generation from OpenAI's API to Anthropic's API. The change updates the API endpoint, authentication mechanism, request payload, and response parsing to use Anthropic's Claude model for AI-driven content.
Highlights
Changelog
Activity
Code Review
This pull request migrates the AI insight feature from OpenAI's GPT to Anthropic's Claude model. The changes correctly update the API endpoint, authentication headers, request body, and response parsing to align with the Anthropic API. My feedback includes a couple of suggestions to improve maintainability and code style.
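For context on the inline comments that follow, the shape of the migrated call can be sketched end to end. This is an illustrative sketch only, not the actual server.js code: the function names (`buildAnthropicRequest`, `extractInsightText`, `generateInsight`) are invented here, while the endpoint, headers, and response shape follow Anthropic's Messages API.

```javascript
// Build the request options for Anthropic's Messages API.
// Note the auth header change: Anthropic uses 'x-api-key' instead of
// OpenAI's 'Authorization: Bearer <key>'.
function buildAnthropicRequest(prompt, apiKey) {
  return {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01', // required Anthropic header
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-5-20250929',
      max_tokens: 500, // required by the Messages API
      messages: [{ role: 'user', content: prompt }],
    }),
  };
}

// Anthropic responds with { content: [{ type: 'text', text: '...' }] }
// rather than OpenAI's { choices: [{ message: { content: '...' } }] }.
function extractInsightText(responseJson) {
  return responseJson.content[0].text;
}

// Illustrative wrapper tying the two together (requires Node 18+ for fetch).
async function generateInsight(prompt, apiKey) {
  const res = await fetch(
    'https://api.anthropic.com/v1/messages',
    buildAnthropicRequest(prompt, apiKey)
  );
  return extractInsightText(await res.json());
}
```

The request-building and response-parsing steps are split out above only to make the OpenAI-to-Anthropic differences easy to see side by side.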
frontend_server/server.js (Outdated)

-        model: 'gpt-4',
-        messages: [{ role: 'user', content: prompt }],
-        max_tokens: 500,
+        model: 'claude-sonnet-4-5-20250929',
To improve flexibility and avoid hardcoding configuration, it's a good practice to source the model name from an environment variable. This allows you to switch models without changing the code. You can provide a default value to ensure the application works even if the environment variable is not set.
Suggested change:

-        model: 'claude-sonnet-4-5-20250929',
+        model: process.env.ANTHROPIC_MODEL || 'claude-sonnet-4-5-20250929',
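The fallback pattern in that suggestion can be isolated and tested on its own. A minimal sketch, assuming the `ANTHROPIC_MODEL` variable name from the suggestion; `resolveModel` is an invented helper, not part of the PR:

```javascript
// Default model used when no override is configured.
const DEFAULT_MODEL = 'claude-sonnet-4-5-20250929';

// Resolve the model name from an environment object, falling back to the
// default. Accepting the env as a parameter (defaulting to process.env)
// keeps the fallback logic easy to unit-test.
function resolveModel(env = process.env) {
  return env.ANTHROPIC_MODEL || DEFAULT_MODEL;
}
```

With this in place, deployments can switch models by setting `ANTHROPIC_MODEL` (e.g. in Netlify environment settings) without a code change.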
frontend_server/server.js (Outdated)

        messages: [
          {
            role: 'user',
            content: prompt
          }
        ],
The formatting of the messages array and its object is a bit unusual and inconsistent with the surrounding code. For better readability, consider formatting it more conventionally. This is also more consistent with how it was formatted for the previous OpenAI call.
messages: [{ role: 'user', content: prompt }],
✅ Deploy Preview for health-equity-tracker ready! Built without sensitive environment variables.
To edit notification comments on pull requests, go to your Netlify project configuration.
Description and Motivation
Has this been tested? How?
Screenshots (if appropriate)
Types of changes
(check all that apply)
New frontend preview link is below in the Netlify comment 😎