(feat): update winston ai model version #359
Philrobots wants to merge 2 commits into edenai:master from
Conversation
Walkthrough
Updated Winston AI integration: the base API URL changed, endpoints and payloads were adjusted for image/text detection (including version/language defaults and a minimum text length), several async helper methods were removed, and a minor import cleanup was applied.
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Tip 📝 Customizable high-level summaries are now available in beta!
You can now customize how CodeRabbit generates the high-level summary in your pull requests — including its content, structure, tone, and formatting.
Example instruction:
Note: This feature is currently in beta for Pro-tier users, and pricing will be announced later. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
🧹 Nitpick comments (2)
edenai_apis/apis/winstonai/winstonai_api.py (2)
48-48: Consider parameterizing the version for consistency.

The version is hardcoded as "latest" here, while text__ai_detection allows it to be configured via provider_params. For consistency and flexibility, consider accepting a provider_params parameter.

Apply this diff if you want to support version configuration:

```diff
-def image__ai_detection(
-    self, file: Optional[str] = None, file_url: Optional[str] = None, **kwargs
-) -> ResponseType[ImageAiDetectionDataclass]:
+def image__ai_detection(
+    self, file: Optional[str] = None, file_url: Optional[str] = None, provider_params: Optional[Dict[str, Any]] = None, **kwargs
+) -> ResponseType[ImageAiDetectionDataclass]:
+    if provider_params is None:
+        provider_params = {}
     if not file_url and not file:
         raise ProviderException("file or file_url required")
-    payload = json.dumps({"url": file_url or upload_file_to_s3(file, file), "version": "latest"})
+    payload = json.dumps({"url": file_url or upload_file_to_s3(file, file), "version": provider_params.get("version", "latest")})
```
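As a self-contained sketch of the provider_params pattern this diff suggests — the function name `build_image_payload` is illustrative and not part of the codebase:

```python
import json
from typing import Any, Dict, Optional


def build_image_payload(
    url: str, provider_params: Optional[Dict[str, Any]] = None
) -> str:
    """Build the image-detection payload, letting callers override the model version."""
    provider_params = provider_params or {}
    return json.dumps(
        {"url": url, "version": provider_params.get("version", "latest")}
    )
```

Callers that pass no provider_params keep the current behavior ("latest"), while e.g. `{"version": "3.0"}` would pin a specific model version.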
83-85: Consider extracting the error message as a constant.

The error message is quite long and would be better defined as a module-level constant for maintainability.

Apply this diff to extract the error message:

```diff
+# Error messages
+MIN_TEXT_LENGTH_ERROR = "Make sure the text is at least 300 characters long before sending a detection request"
+
 class WinstonaiApi(ProviderInterface, TextInterface, ImageInterface):
     # ...
     def text__ai_detection(
         self, text: str, provider_params: Optional[Dict[str, Any]] = None, **kwargs
     ) -> ResponseType[AiDetectionDataClass]:
         if provider_params is None:
             provider_params = {}
         # WinstonAI has a minimum characters limit of 300 characters for his API
         if len(text) < 300:
-            raise ProviderException("Make sure the text is at least 300 characters long before sending a detection request")
+            raise ProviderException(MIN_TEXT_LENGTH_ERROR)
```

Based on static analysis hints (Ruff TRY003).
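A minimal runnable sketch of the constant-extraction pattern; the `ProviderException` stub below stands in for `edenai_apis.utils.exception.ProviderException` and is only here to keep the example self-contained:

```python
MIN_TEXT_LENGTH = 300
MIN_TEXT_LENGTH_ERROR = (
    "Make sure the text is at least 300 characters long "
    "before sending a detection request"
)


class ProviderException(Exception):
    """Stand-in for the real edenai_apis ProviderException."""


def validate_text_length(text: str) -> None:
    # Winston AI requires at least 300 characters per detection request.
    if len(text) < MIN_TEXT_LENGTH:
        raise ProviderException(MIN_TEXT_LENGTH_ERROR)
```

Keeping both the threshold and the message at module level means tests and future endpoints reference one definition instead of duplicating the string.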
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
edenai_apis/apis/winstonai/winstonai_api.py (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
edenai_apis/apis/winstonai/winstonai_api.py (2)
edenai_apis/utils/upload_s3.py (1)
- upload_file_to_s3 (76-82)
edenai_apis/utils/exception.py (1)
- ProviderException (14-26)
🪛 Ruff (0.14.6)
edenai_apis/apis/winstonai/winstonai_api.py
50-50: Probable use of requests call without timeout
(S113)
85-85: Avoid specifying long messages outside the exception class
(TRY003)
96-96: Probable use of requests call without timeout
(S113)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: test
🔇 Additional comments (3)
edenai_apis/apis/winstonai/winstonai_api.py (3)
3-3: LGTM: Clean import optimization.

The import cleanup appropriately removes unused uuid4 and narrows the upload_s3 import to only the required function.

Also applies to: 24-24
87-94: LGTM: Automatic language detection and version configuration.

The changes appropriately enable automatic language detection by defaulting to "auto" and allow version configuration with "latest" as the default, aligning with the PR objectives.
52-52: Verify the new v2 endpoint paths against Winston AI's API documentation.

The verification below inspects the changed file for the new endpoint paths and parameter defaults, and searches the repository for Winston AI references:

```shell
wc -l edenai_apis/apis/winstonai/winstonai_api.py
cat -n edenai_apis/apis/winstonai/winstonai_api.py | sed -n '45,160p'
grep -nE "version|language|latest|auto" edenai_apis/apis/winstonai/winstonai_api.py | head -20
find . -type f \( -name "*.md" -o -name "*.txt" -o -name "*.json" \) | xargs grep -l -i "winston" 2>/dev/null | head -10
```
```diff
 response = requests.request(
     "POST",
-    f"{self.api_url}/image-detection",
+    f"{self.api_url}/v2/image-detection",
     headers=self.headers,
     data=payload,
 )
```
Add timeout to prevent indefinite hangs.
The HTTP request lacks a timeout parameter, which can cause the application to hang indefinitely if the Winston AI server doesn't respond. This is especially problematic for image processing which may take longer.
Apply this diff to add a timeout:
```diff
 response = requests.request(
     "POST",
     f"{self.api_url}/v2/image-detection",
     headers=self.headers,
     data=payload,
+    timeout=30,
 )
```

Based on static analysis hints (Ruff S113).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
response = requests.request(
    "POST",
    f"{self.api_url}/v2/image-detection",
    headers=self.headers,
    data=payload,
    timeout=30,
)
```
🧰 Tools
🪛 Ruff (0.14.6)
50-50: Probable use of requests call without timeout
(S113)
🤖 Prompt for AI Agents
In edenai_apis/apis/winstonai/winstonai_api.py around lines 50 to 55, the
requests.request call has no timeout which can cause indefinite hangs; add a
timeout parameter (e.g., timeout=30) to the request call or, better, use a
configurable attribute like self.timeout with a sensible default and pass
timeout=self.timeout to requests.request so the HTTP call fails fast when the
Winston AI server is unresponsive.
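The configurable-attribute variant mentioned in the prompt above can be sketched as follows; the class and method names are hypothetical, and the dict it builds would be splatted into requests.request:

```python
from typing import Any, Dict


class TimeoutConfig:
    """Hypothetical helper holding a per-instance HTTP timeout."""

    def __init__(self, timeout: float = 30.0):
        self.timeout = timeout  # fail fast instead of hanging indefinitely

    def http_kwargs(self, **extra: Any) -> Dict[str, Any]:
        # Merge the instance timeout into the kwargs passed to requests.request;
        # an explicit per-call override in `extra` wins over the default.
        return {"timeout": self.timeout, **extra}
```

A call site would then read `requests.request("POST", url, **cfg.http_kwargs(headers=headers, data=payload))`, so every outgoing request carries a timeout without repeating the literal at each call.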
```diff
 response = requests.request(
-    "POST", f"{self.api_url}/predict", headers=self.headers, data=payload
+    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload
 )
```
Add timeout to prevent indefinite hangs.
The HTTP request lacks a timeout parameter, which can cause the application to hang indefinitely if the Winston AI server doesn't respond.
Apply this diff to add a timeout:

```diff
 response = requests.request(
-    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload
+    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload, timeout=30
 )
```

Based on static analysis hints (Ruff S113).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
response = requests.request(
    "POST", f"{self.api_url}/v2/ai-content-detection", headers=self.headers, data=payload, timeout=30
)
```
🧰 Tools
🪛 Ruff (0.14.6)
96-96: Probable use of requests call without timeout
(S113)
🤖 Prompt for AI Agents
In edenai_apis/apis/winstonai/winstonai_api.py around lines 96 to 98, the
requests.request call is missing a timeout which can cause indefinite hangs;
update the call to include a timeout argument (e.g., timeout=30) or use a
configurable attribute like self.request_timeout, and ensure the timeout value
is used in the request invocation so the HTTP call fails fast instead of
blocking indefinitely.
```diff
 response = requests.request(
-    "POST", f"{self.api_url}/plagiarism", headers=self.headers, data=payload
+    "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload
 )

 if response.status_code != 200:
     raise ProviderException(response.json(), code=response.status_code)

 original_response = response.json()
 results = original_response.get("results")

 if results is None:
     raise ProviderException(response.json())

 standardized_response = PlagiaDetectionDataClass(
     plagia_score=original_response["score"],
     items=[
         PlagiaDetectionItem(
             text=result["title"],
             candidates=[
                 PlagiaDetectionCandidate(
                     url=result["url"],
                     plagia_score=1,
                     prediction="plagiarized",
                     plagiarized_text=excerpt,
                 )
                 for excerpt in result["excerpts"]
             ],
         )
         for result in results
     ],
 )

 return ResponseType[PlagiaDetectionDataClass](
     original_response=original_response,
     standardized_response=standardized_response,
 )

-async def text__aplagia_detection(
-    self,
-    text: str,
-    title: str = "",
-    provider_params: Optional[Dict[str, Any]] = None,
-    **kwargs,
-) -> ResponseType[PlagiaDetectionDataClass]:
-    if provider_params is None:
-        provider_params = {}
-    payload = json.dumps(
-        {
-            "text": text,
-            "language": provider_params.get("language", "en"),
-            "version": provider_params.get("version", "2.0"),
-        }
-    )
```
Add timeout to prevent indefinite hangs.
The HTTP request lacks a timeout parameter, which can cause the application to hang indefinitely if the Winston AI server doesn't respond.
Apply this diff to add a timeout:
response = requests.request(
- "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload
+ "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload, timeout=30
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
response = requests.request(
    "POST", f"{self.api_url}/v1/plagiarism", headers=self.headers, data=payload, timeout=30
)

if response.status_code != 200:
    raise ProviderException(response.json(), code=response.status_code)

original_response = response.json()
results = original_response.get("results")

if results is None:
    raise ProviderException(response.json())

standardized_response = PlagiaDetectionDataClass(
    plagia_score=original_response["score"],
    items=[
        PlagiaDetectionItem(
            text=result["title"],
            candidates=[
                PlagiaDetectionCandidate(
                    url=result["url"],
                    plagia_score=1,
                    prediction="plagiarized",
                    plagiarized_text=excerpt,
                )
                for excerpt in result["excerpts"]
            ],
        )
        for result in results
    ],
)

return ResponseType[PlagiaDetectionDataClass](
    original_response=original_response,
    standardized_response=standardized_response,
)
```
🧰 Tools
🪛 Ruff (0.14.6)
147-147: Probable use of requests call without timeout
(S113)
🤖 Prompt for AI Agents
In edenai_apis/apis/winstonai/winstonai_api.py around lines 147 to 149, the POST
request to the Winston AI /v1/plagiarism endpoint is missing a timeout, which
can cause indefinite hangs; update the requests.request call to include a
reasonable timeout parameter (e.g., timeout=30) so the call fails fast on
unresponsive servers, and ensure any surrounding error handling remains
compatible with requests.exceptions.Timeout.
Is it possible to take a look at this PR? It updates our systems to use our latest models. Thanks! 👍
What has been done in this Pull Request
Summary by CodeRabbit
Bug Fixes
Chores
✏️ Tip: You can customize this high-level summary in your review settings.