
updating api route to use claude #4601

Open

eriwarr wants to merge 17 commits into SatcherInstitute:main from eriwarr:charlie-claude-implementation

Conversation

@eriwarr
Collaborator

@eriwarr eriwarr commented Feb 4, 2026

Description and Motivation

Has this been tested? How?

Screenshots (if appropriate)

Types of changes

(leave all that apply)

  • Bug fix
  • New content or feature
  • Refactor / chore

New frontend preview link is below in the Netlify comment 😎

@gemini-code-assist

Summary of Changes

Hello @eriwarr, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request transitions the application's AI insight generation from OpenAI's API to Anthropic's API. The change updates the API endpoint, authentication mechanism, request payload, and response parsing to use Anthropic's Claude model for generating AI-driven insights.

Highlights

  • AI Provider Switch: The backend API integration for fetching AI insights has been migrated from OpenAI to Anthropic.
  • API Key Update: The required environment variable for the API key has been updated from 'OPENAI_API_KEY' to 'ANTHROPIC_API_KEY'.
  • API Endpoint and Request Format: The API endpoint, request headers (including 'x-api-key' and 'anthropic-version'), and request body structure (model, max_tokens, messages format) have been adjusted to be compatible with Anthropic's API.
  • Model and Token Limit: The AI model used has been changed from 'gpt-4' to 'claude-sonnet-4-5-20250929', and the 'max_tokens' limit for responses has been increased from 500 to 1024.
  • Response Parsing: The logic for parsing the AI response has been updated to correctly extract content from Anthropic's API response format.
Changelog
  • frontend_server/server.js
    • Replaced 'OPENAI_API_KEY' with 'ANTHROPIC_API_KEY' for authentication.
    • Updated the AI service endpoint from OpenAI to Anthropic.
    • Modified request headers to include 'x-api-key' and 'anthropic-version'.
    • Changed the AI model from 'gpt-4' to 'claude-sonnet-4-5-20250929'.
    • Increased 'max_tokens' in the request body from 500 to 1024.
    • Adjusted the response parsing logic to 'json.content?.[0]?.text' to correctly handle Anthropic's response structure.
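The changelog above describes a single request/response round trip. The sketch below illustrates the migrated call as summarized, not the PR's actual code; the helper names (`buildAnthropicRequest`, `extractInsight`, `fetchInsight`) and the `anthropic-version` date are illustrative assumptions.

```javascript
// Minimal sketch of an Anthropic /v1/messages call matching the changelog
// (helper names are illustrative, not taken from the PR).
const ANTHROPIC_MODEL = 'claude-sonnet-4-5-20250929'

// Build the request body in Anthropic's Messages API format.
function buildAnthropicRequest(prompt) {
  return {
    model: ANTHROPIC_MODEL,
    max_tokens: 1024,
    messages: [{ role: 'user', content: prompt }],
  }
}

// Anthropic returns `content` as an array of blocks; pull the first text
// block, mirroring the PR's `json.content?.[0]?.text` parsing.
function extractInsight(json) {
  return json.content?.[0]?.text ?? ''
}

// Usage (requires Node 18+ for global fetch and ANTHROPIC_API_KEY to be set):
async function fetchInsight(prompt) {
  const res = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': process.env.ANTHROPIC_API_KEY,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: JSON.stringify(buildAnthropicRequest(prompt)),
  })
  return extractInsight(await res.json())
}
```

The optional-chaining fallback in `extractInsight` returns an empty string when the response is malformed, so callers do not have to null-check.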
Activity
  • The pull request introduces a significant change by switching the AI provider.
  • The author has left the template placeholders for testing details, screenshots, and types of changes unfilled, so those details may still be added during review.
  • A Netlify preview link is expected to be provided in a separate comment.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request migrates the AI insight feature from OpenAI's GPT to Anthropic's Claude model. The changes correctly update the API endpoint, authentication headers, request body, and response parsing to align with the Anthropic API. My feedback includes a couple of suggestions to improve maintainability and code style.

- model: 'gpt-4',
- messages: [{ role: 'user', content: prompt }],
- max_tokens: 500,
+ model: 'claude-sonnet-4-5-20250929',


medium

To improve flexibility and avoid hardcoding configuration, it's a good practice to source the model name from an environment variable. This allows you to switch models without changing the code. You can provide a default value to ensure the application works even if the environment variable is not set.

Suggested change
model: 'claude-sonnet-4-5-20250929',
model: process.env.ANTHROPIC_MODEL || 'claude-sonnet-4-5-20250929',
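The suggested fallback pattern can be factored into a small helper, sketched below as an illustration (the `resolveModel` name is hypothetical; the default string is the model hardcoded in the PR):

```javascript
// Resolve the model name from the environment, falling back to the
// hardcoded default when ANTHROPIC_MODEL is unset. Note that `||` also
// falls back on an empty string, unlike `??`.
function resolveModel(env = process.env) {
  return env.ANTHROPIC_MODEL || 'claude-sonnet-4-5-20250929'
}
```

Passing the environment object as a parameter keeps the helper easy to unit-test without mutating `process.env`.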

Comment on lines +147 to +152
messages: [
{
role: 'user',
content: prompt
}
],


medium

The formatting of the messages array and its object is a bit unusual and inconsistent with the surrounding code. For better readability, consider formatting it more conventionally. This is also more consistent with how it was formatted for the previous OpenAI call.

          messages: [{ role: 'user', content: prompt }],

@netlify

netlify bot commented Feb 4, 2026

Deploy Preview for health-equity-tracker ready!

Built without sensitive environment variables

Name | Link
🔨 Latest commit | c1eb05f
🔍 Latest deploy log | https://app.netlify.com/projects/health-equity-tracker/deploys/69aef9fd3dfa7c0008c568d9
😎 Deploy Preview | https://deploy-preview-4601--health-equity-tracker.netlify.app

To edit notification comments on pull requests, go to your Netlify project configuration.

