14 changes: 12 additions & 2 deletions Makefile
@@ -1,3 +1,6 @@
run:
python -m src.guidellm.main

install:
pip install -r requirements.txt

@@ -13,7 +16,7 @@ quality:
style:
ruff format src tests
isort src tests
flake8 src tests --max-line-length 88
flake8 src tests

# test:
# pytest tests
@@ -31,4 +34,11 @@ clean:
rm -rf .mypy_cache
rm -rf .pytest_cache

.PHONY: install install-dev quality style test test-unit test-integration test-e2e test-smoke test-sanity test-regression build clean

fix:
markurtz (Collaborator):

@parfeniukink this is what style is supposed to be for. Can you migrate these pieces under style so we can unify that?

Also, black and ruff were disagreeing on how to format some edge cases around indexing lists, so let's remove the call to black completely.

parfeniukink (Contributor, author):

@markurtz
My TL;DR: consider using isort + black instead of ruff. Just my thoughts, but I will remove it if that is what you prefer.

In detail:

Sure, but are you sure about ruff? I know it is a great tool for linting and formatting, and it is even supposed to work as an LSP (a bit worse than pyright these days, but still).

Maybe we should consider using ruff only for linting (since it is faster than flake8, pylint, etc.) and black for formatting. Black is known as an uncompromising formatter that specializes only in formatting, while ruff was designed as a linter; formatting is a secondary concern for it, which might mean it does not enforce formatting as strictly or comprehensively as black.

Another reason is that ruff also sorts imports, but isort does it a bit better.

ruff's latest release is 0.4.10 as of today, and I just want to note that they ported most of their rules from other linters like flake8, so the core functionality is a bit "stolen" 😄

As for the authors of these tools:

  • Python Code Quality Authority for isort
  • Python Software Foundation for black

markurtz (Collaborator):

@parfeniukink that is definitely a good point to bring up. According to their docs, Ruff is supposed to have full feature parity with flake8, black, and isort.

The big push behind ruff from my side is to standardize on a single tool across our full stack and ML applications, one that is backed by the top libraries we use or will integrate with, specifically Hugging Face Transformers and FastAPI.

If we can get black working with Ruff, I'm in favor of keeping it, but I do want to ensure the base authority is Ruff due to its performance, its increasing adoption, and the downstream effects of that choice.

python -m black src tests
python -m isort src tests
python -m ruff format src tests


.PHONY: run install install-dev quality style test test-unit test-integration test-e2e test-smoke test-sanity test-regression build clean fix
30 changes: 29 additions & 1 deletion pyproject.toml
@@ -8,9 +8,32 @@ target-version = ['py38']

[tool.isort]
profile = "black"
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
line_length = 88
src_paths = ["."]

[tool.mypy]
files = "src/guidellm"
python_version = '3.8'
files = ['*.py']
exclude = ["venv", "docs"]

show_error_codes = true
namespace_packages = false
check_untyped_defs = true

warn_redundant_casts = true
warn_unused_ignores = true

# Silence "type import errors" as our 3rd-party libs do not have types
# Check: https://mypy.readthedocs.io/en/latest/config_file.html#import-discovery
follow_imports = 'silent'

[[tool.mypy.overrides]]
module = []
ignore_missing_imports=true

[tool.ruff]
exclude = ["build", "dist", "env", ".venv"]
@@ -20,6 +43,11 @@ lint.select = ["E", "F", "W"]
max-line-length = 88

[tool.pytest.ini_options]
addopts = '-s -vvv --cache-clear'
asyncio_mode = 'auto'
cache_dir = '/tmp'
python_files = 'tests.py test_*.py'
python_functions = 'test_* *_test'
markers = [
"smoke: quick tests to check basic functionality",
"sanity: detailed tests to ensure major functions work correctly",
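For reference, the smoke/sanity markers declared above are applied to tests with pytest's marker decorator and selected at run time via -m. A minimal sketch, assuming only the marker names from this config (the test names below are illustrative, not from this PR):

    import pytest

    @pytest.mark.smoke
    def test_placeholder_smoke():
        # Quick check of basic functionality; selected with: pytest -m smoke
        assert True

    @pytest.mark.sanity
    def test_placeholder_sanity():
        # More detailed check; selected with: pytest -m sanity
        assert 1 + 1 == 2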
4 changes: 2 additions & 2 deletions src/guidellm/backend/__init__.py
@@ -1,9 +1,9 @@
from .base import Backend, BackendTypes, GenerativeResponse
from .base import Backend, BackendType, GenerativeResponse
from .openai import OpenAIBackend

__all__ = [
"Backend",
"BackendTypes",
"BackendType",
"GenerativeResponse",
"OpenAIBackend",
]
18 changes: 10 additions & 8 deletions src/guidellm/backend/base.py
@@ -2,17 +2,17 @@
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum
from typing import Iterator, List, Optional, Type, Union
from typing import Iterator, List, Optional, Type

from loguru import logger

from guidellm.core.request import TextGenerationRequest
from guidellm.core.result import TextGenerationResult

__all__ = ["Backend", "BackendTypes", "GenerativeResponse"]
__all__ = ["Backend", "BackendType", "GenerativeResponse"]


class BackendTypes(Enum):
class BackendType(str, Enum):
TEST = "test"
OPENAI_SERVER = "openai_server"

@@ -39,12 +39,12 @@ class Backend(ABC):
_registry = {}

@staticmethod
def register_backend(backend_type: BackendTypes):
def register_backend(backend_type: BackendType):
"""
A decorator to register a backend class in the backend registry.

:param backend_type: The type of backend to register.
:type backend_type: BackendTypes
:type backend_type: BackendType
"""

def inner_wrapper(wrapped_class: Type["Backend"]):
@@ -54,21 +54,23 @@ def inner_wrapper(wrapped_class: Type["Backend"]):
return inner_wrapper

@staticmethod
def create_backend(backend_type: Union[str, BackendTypes], **kwargs) -> "Backend":
def create_backend(backend_type: BackendType, **kwargs) -> "Backend":
"""
Factory method to create a backend based on the backend type.

:param backend_type: The type of backend to create.
:type backend_type: BackendTypes
:type backend_type: BackendType
:param kwargs: Additional arguments for backend initialization.
:type kwargs: dict
:return: An instance of a subclass of Backend.
:rtype: Backend
"""
logger.info(f"Creating backend of type {backend_type}")
if backend_type not in Backend._registry:

if backend_type not in Backend._registry.keys():
logger.error(f"Unsupported backend type: {backend_type}")
raise ValueError(f"Unsupported backend type: {backend_type}")

return Backend._registry[backend_type](**kwargs)

def submit(self, request: TextGenerationRequest) -> TextGenerationResult:
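The switch from BackendTypes(Enum) to BackendType(str, Enum) lets enum members compare equal to their plain string values, which is likely why the Union[str, BackendTypes] annotation on create_backend could be dropped. A minimal standalone sketch of that behavior, mirroring only the enum values shown in the diff:

    from enum import Enum

    class BackendType(str, Enum):
        TEST = "test"
        OPENAI_SERVER = "openai_server"

    # Because BackendType also subclasses str, members compare equal to their
    # string values, so plain strings work for registry lookups and serialization.
    assert BackendType.OPENAI_SERVER == "openai_server"
    assert BackendType("test") is BackendType.TEST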
4 changes: 2 additions & 2 deletions src/guidellm/backend/openai.py
@@ -4,13 +4,13 @@
from loguru import logger
from transformers import AutoTokenizer

from guidellm.backend import Backend, BackendTypes, GenerativeResponse
from guidellm.backend import Backend, BackendType, GenerativeResponse
from guidellm.core.request import TextGenerationRequest

__all__ = ["OpenAIBackend"]


@Backend.register_backend(BackendTypes.OPENAI_SERVER)
@Backend.register_backend(BackendType.OPENAI_SERVER)
class OpenAIBackend(Backend):
"""
An OpenAI backend implementation for the generative AI result.
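For readers following the register_backend / create_backend changes above, the underlying decorator-registry pattern can be sketched in isolation. This is a simplified, hypothetical stand-in (FakeOpenAIBackend and the module-level functions are illustrative, not guidellm's actual classes):

    from typing import Any, Callable, Dict

    # Simplified stand-in for the Backend._registry / register_backend pattern.
    _registry: Dict[str, type] = {}

    def register_backend(backend_type: str) -> Callable[[type], type]:
        # Decorator that records the class under the given backend type.
        def inner_wrapper(wrapped_class: type) -> type:
            _registry[backend_type] = wrapped_class
            return wrapped_class
        return inner_wrapper

    def create_backend(backend_type: str, **kwargs: Any) -> Any:
        # Factory lookup: reject unknown types, otherwise instantiate.
        if backend_type not in _registry:
            raise ValueError(f"Unsupported backend type: {backend_type}")
        return _registry[backend_type](**kwargs)

    @register_backend("openai_server")
    class FakeOpenAIBackend:
        def __init__(self, **kwargs: Any) -> None:
            self.kwargs = kwargs

    backend = create_backend("openai_server", api_key="dummy")
    assert isinstance(backend, FakeOpenAIBackend)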