
(feat): update winston ai model version#359

Open
Philrobots wants to merge 2 commits into edenai:master from gowinston-ai:feat/update-winstonai-model-versions

Conversation

@Philrobots
Contributor

@Philrobots commented Jun 18, 2025

What has been done in this Pull Request

  • Change the URL in the configurations.
  • Use the latest version of the model for image detection.
  • Use the latest version of the model for text detection.
  • Use automatic language detection for text detection.
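Taken together, the new text-detection defaults listed above could be sketched as follows. This is a hypothetical illustration based on the PR summary; the function name and the `provider_params` shape are assumptions, not code from the PR.

```python
import json
from typing import Any, Dict, Optional


def build_text_detection_payload(
    text: str, provider_params: Optional[Dict[str, Any]] = None
) -> str:
    """Sketch of the updated payload: auto language, latest model version."""
    provider_params = provider_params or {}
    return json.dumps(
        {
            "text": text,
            "language": provider_params.get("language", "auto"),  # automatic language detection
            "version": provider_params.get("version", "latest"),  # latest model version
        }
    )
```

Callers can still override either default by passing `provider_params`, e.g. `build_text_detection_payload(text, {"language": "en"})`.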

Summary by CodeRabbit

  • Bug Fixes

    • Updated Winston AI integration to use revised endpoints and defaults (text language auto-detection, default model version set to latest) for more accurate detection and compatibility.
    • Enforced a minimum text length (300 characters) for AI content detection to improve reliability.
  • Chores

    • Updated base API URL and removed unused imports for cleaner configuration.


@coderabbitai

coderabbitai bot commented Jun 18, 2025

Walkthrough

Updated Winston AI integration: base API URL changed, endpoints and payloads adjusted for image/text detection (including version/language defaults and a minimum text length), several async helper methods removed, and a minor import cleanup.

Changes

Config constant: edenai_apis/apis/winstonai/config.py
  Updated WINSTON_AI_API_URL from "https://api.gowinston.ai/functions/v1" to "https://api.gowinston.ai".

API implementation: edenai_apis/apis/winstonai/winstonai_api.py
  Changed endpoints and payload shapes for image and text detection (image detection moved from /image-detection to /v2/image-detection, text detection from /predict to /v2/ai-content-detection, and plagiarism detection to /v1/plagiarism); added "version": "latest" where required; changed text defaults (language "auto", version "latest"); added a minimum 300-character validation for text detection; removed several async helper methods; and cleaned up imports.
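For orientation, the endpoint changes above compose with the new base URL roughly as follows. The `ENDPOINTS` mapping is an illustration only and is not part of the PR; the URL fragments come from the change summary.

```python
# New base URL introduced in edenai_apis/apis/winstonai/config.py
WINSTON_AI_API_URL = "https://api.gowinston.ai"

# Illustrative mapping of the updated endpoints (not code from the PR)
ENDPOINTS = {
    "image_ai_detection": f"{WINSTON_AI_API_URL}/v2/image-detection",
    "text_ai_detection": f"{WINSTON_AI_API_URL}/v2/ai-content-detection",
    "plagia_detection": f"{WINSTON_AI_API_URL}/v1/plagiarism",
}
```

Note that the versioned path segments now live in the per-endpoint paths rather than in the base URL, which previously carried the `/functions/v1` prefix.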

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Review areas to focus on:
    • text__ai_detection validation and how callers handle the 300-char requirement.
    • Updated endpoints/payloads vs. integration tests or external expectations.
    • Removal of async helper methods and any callers/tests relying on them.
    • Updated base URL usage across code paths that construct full endpoints.
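Callers that need to handle the new 300-character minimum could pre-validate input along these lines. This is a sketch: `MIN_TEXT_LENGTH` and the helper name are assumptions, and the real code raises `ProviderException` rather than `ValueError`.

```python
MIN_TEXT_LENGTH = 300  # minimum enforced by this PR for text AI detection


def ensure_min_text_length(text: str) -> str:
    """Fail early, before the provider call, if the text is too short."""
    if len(text) < MIN_TEXT_LENGTH:
        raise ValueError(
            f"Text must be at least {MIN_TEXT_LENGTH} characters, got {len(text)}"
        )
    return text
```

Validating on the caller side avoids a round trip to the provider for input that is guaranteed to be rejected.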

Possibly related PRs

  • Image ai detection winstonai #390 — Modifies Winston AI endpoints and async image/text helper methods; likely overlaps or conflicts with these API method removals/changes.

Suggested reviewers

  • Daggx

Poem

"I hopped through URLs, trimmed a path or two,
I nudged the versions so responses stay new.
Async friends waved, then hopped out of sight,
Text must be longer to pass my keen sight.
🐰✨ — Love, the little review rabbit"

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'update winston ai model version' accurately reflects the main changes, which upgrade the image and text detection endpoints to the latest model versions and update the configuration.




@Philrobots
Contributor Author

@Daggx


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
edenai_apis/apis/winstonai/winstonai_api.py (2)

48-48: Consider parameterizing the version for consistency.

The version is hardcoded as "latest" here, while text__ai_detection allows it to be configured via provider_params. For consistency and flexibility, consider accepting a provider_params parameter.

Apply this diff if you want to support version configuration:

-def image__ai_detection(
-    self, file: Optional[str] = None, file_url: Optional[str] = None, **kwargs
-) -> ResponseType[ImageAiDetectionDataclass]:
+def image__ai_detection(
+    self, file: Optional[str] = None, file_url: Optional[str] = None, provider_params: Optional[Dict[str, Any]] = None, **kwargs
+) -> ResponseType[ImageAiDetectionDataclass]:
+    if provider_params is None:
+        provider_params = {}
     if not file_url and not file:
         raise ProviderException("file or file_url required")

-    payload = json.dumps({"url": file_url or upload_file_to_s3(file, file), "version": "latest"})
+    payload = json.dumps({"url": file_url or upload_file_to_s3(file, file), "version": provider_params.get("version", "latest")})

83-85: Consider extracting the error message as a constant.

The error message is quite long and would be better defined as a module-level constant for maintainability.

Apply this diff to extract the error message:

+# Error messages
+MIN_TEXT_LENGTH_ERROR = "Make sure the text is at least 300 characters long before sending a detection request"
+
 class WinstonaiApi(ProviderInterface, TextInterface, ImageInterface):
     # ...
     def text__ai_detection(
         self, text: str, provider_params: Optional[Dict[str, Any]] = None, **kwargs
     ) -> ResponseType[AiDetectionDataClass]:
         if provider_params is None:
             provider_params = {}

         # The Winston AI API requires a minimum of 300 characters
         if len(text) < 300:
-            raise ProviderException("Make sure the text is at least 300 characters long before sending a detection request")
+            raise ProviderException(MIN_TEXT_LENGTH_ERROR)

Based on static analysis hints (Ruff TRY003).

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 63826e1 and 66ab4bf.

📒 Files selected for processing (1)
  • edenai_apis/apis/winstonai/winstonai_api.py (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
edenai_apis/apis/winstonai/winstonai_api.py (2)
edenai_apis/utils/upload_s3.py (1)
  • upload_file_to_s3 (76-82)
edenai_apis/utils/exception.py (1)
  • ProviderException (14-26)
🪛 Ruff (0.14.6)
edenai_apis/apis/winstonai/winstonai_api.py

50-50: Probable use of requests call without timeout

(S113)


85-85: Avoid specifying long messages outside the exception class

(TRY003)


96-96: Probable use of requests call without timeout

(S113)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (3)
edenai_apis/apis/winstonai/winstonai_api.py (3)

3-3: LGTM: Clean import optimization.

The import cleanup appropriately removes unused uuid4 and narrows the upload_s3 import to only the required function.

Also applies to: 24-24


87-94: LGTM: Automatic language detection and version configuration.

The changes appropriately enable automatic language detection by defaulting to "auto" and allow version configuration with "latest" as the default, aligning with the PR objectives.


52-52: Endpoint verification attempted.

The bot began verifying the Winston AI endpoint changes against the repository and external documentation; this comment captures its in-progress inspection commands rather than a conclusion. The commands run were:

  cat edenai_apis/apis/winstonai/winstonai_api.py | head -60
  wc -l edenai_apis/apis/winstonai/winstonai_api.py
  cat -n edenai_apis/apis/winstonai/winstonai_api.py | sed -n '45,160p'
  cat -n edenai_apis/apis/winstonai/winstonai_api.py | sed -n '1,200p'
  grep -nE "version|language|latest|auto" edenai_apis/apis/winstonai/winstonai_api.py | head -20
  find . -type f \( -name "*.md" -o -name "*.txt" -o -name "*.json" \) | xargs grep -l -i "winston" 2>/dev/null | head -10

followed by a web search for "Winston AI API v2 image-detection endpoint documentation".

Comment on lines 50 to 55
 response = requests.request(
     "POST",
-    f"{self.api_url}/image-detection",
+    f"{self.api_url}/v2/image-detection",
     headers=self.headers,
     data=payload,
 )

⚠️ Potential issue | 🟠 Major

Add timeout to prevent indefinite hangs.

The HTTP request lacks a timeout parameter, which can cause the application to hang indefinitely if the Winston AI server doesn't respond. This is especially problematic for image processing which may take longer.

Apply this diff to add a timeout:

 response = requests.request(
     "POST",
     f"{self.api_url}/v2/image-detection",
     headers=self.headers,
     data=payload,
+    timeout=30,
 )

Based on static analysis hints (Ruff S113).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-response = requests.request(
-    "POST",
-    f"{self.api_url}/v2/image-detection",
-    headers=self.headers,
-    data=payload,
-)
+response = requests.request(
+    "POST",
+    f"{self.api_url}/v2/image-detection",
+    headers=self.headers,
+    data=payload,
+    timeout=30,
+)
🧰 Tools
🪛 Ruff (0.14.6)

50-50: Probable use of requests call without timeout

(S113)

🤖 Prompt for AI Agents
In edenai_apis/apis/winstonai/winstonai_api.py around lines 50 to 55, the
requests.request call has no timeout which can cause indefinite hangs; add a
timeout parameter (e.g., timeout=30) to the request call or, better, use a
configurable attribute like self.timeout with a sensible default and pass
timeout=self.timeout to requests.request so the HTTP call fails fast when the
Winston AI server is unresponsive.
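A configurable timeout along the lines the prompt suggests could look like this sketch. The helper name `post_with_timeout` and the 30-second default are illustrative, not code from the PR; `session` is anything exposing a requests-style `.request()` method, which keeps the sketch testable without network access.

```python
DEFAULT_TIMEOUT = 30  # seconds; illustrative default, not from the PR


def post_with_timeout(session, url, headers, payload, timeout=DEFAULT_TIMEOUT):
    """Always pass an explicit timeout so the HTTP call fails fast."""
    return session.request("POST", url, headers=headers, data=payload, timeout=timeout)
```

With the real library, the call would be `post_with_timeout(requests, full_url, headers, payload)`, and callers should be prepared to catch `requests.exceptions.Timeout`.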

Comment on lines 96 to 98
 response = requests.request(
-    "POST", f"{self.api_url}/predict", headers=self.headers, data=payload
+    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload
 )

⚠️ Potential issue | 🟠 Major

Add timeout to prevent indefinite hangs.

The HTTP request lacks a timeout parameter, which can cause the application to hang indefinitely if the Winston AI server doesn't respond.

Apply this diff to add a timeout:

 response = requests.request(
-    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload
+    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload, timeout=30
 )

Based on static analysis hints (Ruff S113).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-response = requests.request(
-    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload
-)
+response = requests.request(
+    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload, timeout=30
+)
🧰 Tools
🪛 Ruff (0.14.6)

96-96: Probable use of requests call without timeout

(S113)

🤖 Prompt for AI Agents
In edenai_apis/apis/winstonai/winstonai_api.py around lines 96 to 98, the
requests.request call is missing a timeout which can cause indefinite hangs;
update the call to include a timeout argument (e.g., timeout=30) or use a
configurable attribute like self.request_timeout, and ensure the timeout value
is used in the request invocation so the HTTP call fails fast instead of
blocking indefinitely.

Comment on lines 147 to 149
 response = requests.request(
-    "POST", f"{self.api_url}/plagiarism", headers=self.headers, data=payload
+    "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload
 )

 if response.status_code != 200:
     raise ProviderException(response.json(), code=response.status_code)

 original_response = response.json()
 results = original_response.get("results")

 if results is None:
     raise ProviderException(response.json())

 standardized_response = PlagiaDetectionDataClass(
     plagia_score=original_response["score"],
     items=[
         PlagiaDetectionItem(
             text=result["title"],
             candidates=[
                 PlagiaDetectionCandidate(
                     url=result["url"],
                     plagia_score=1,
                     prediction="plagiarized",
                     plagiarized_text=excerpt,
                 )
                 for excerpt in result["excerpts"]
             ],
         )
         for result in results
     ],
 )

 return ResponseType[PlagiaDetectionDataClass](
     original_response=original_response,
     standardized_response=standardized_response,
 )

 async def text__aplagia_detection(
     self,
     text: str,
     title: str = "",
     provider_params: Optional[Dict[str, Any]] = None,
     **kwargs,
 ) -> ResponseType[PlagiaDetectionDataClass]:
     if provider_params is None:
         provider_params = {}
     payload = json.dumps(
         {
             "text": text,
             "language": provider_params.get("language", "en"),
             "version": provider_params.get("version", "2.0"),
         }
     )
     response = requests.request(
         "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload
     )

⚠️ Potential issue | 🟠 Major

Add timeout to prevent indefinite hangs.

The HTTP request lacks a timeout parameter, which can cause the application to hang indefinitely if the Winston AI server doesn't respond.

Apply this diff to add a timeout:

 response = requests.request(
-    "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload
+    "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload, timeout=30
 )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-response = requests.request(
-    "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload
-)
+response = requests.request(
+    "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload, timeout=30
+)
🧰 Tools
🪛 Ruff (0.14.6)

147-147: Probable use of requests call without timeout

(S113)

🤖 Prompt for AI Agents
In edenai_apis/apis/winstonai/winstonai_api.py around lines 147 to 149, the POST
request to the Winston AI /v1/plagiarism endpoint is missing a timeout, which
can cause indefinite hangs; update the requests.request call to include a
reasonable timeout parameter (e.g., timeout=30) so the call fails fast on
unresponsive servers, and ensure any surrounding error handling remains
compatible with requests.exceptions.Timeout.

@Philrobots
Contributor Author

Is it possible to take a look at this PR? It updates our systems to use our latest models.

Thanks! 👍

@jqueguiner @beijbom @raztud @rreece

