Observation/Feedback #2

@AntonVishal

Description

Nothing to complain about; great work @dprevoznik, and with so few LOC. As I'm prototyping in this space, browser-use seems to be more cost-efficient.

| Setup | Model | Cost per run | Time | Input tokens |
|---|---|---|---|---|
| Kernel with AI SDK | gpt-oss-120b | $0.094 | 30–40 s | 350k |
| Browser-use OSS | gemini-flash-latest | $0.023 | 70–100 s | 90k |

For a simple use case the difference is negligible, but at scale it becomes a real concern.
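To make the scale concern concrete, here is a rough extrapolation of the per-run costs quoted above. The figures come from the comment; assuming a constant cost per run is a simplification (real pricing varies with task length and model pricing changes):

```python
# Per-run costs from the comparison above (assumed constant per run).
kernel_cost = 0.094        # $/run, Kernel with AI SDK (gpt-oss-120b)
browser_use_cost = 0.023   # $/run, Browser-use OSS (gemini-flash-latest)

for runs in (1_000, 100_000):
    print(f"{runs:>7,} runs: Kernel ${kernel_cost * runs:,.0f} "
          f"vs Browser-use ${browser_use_cost * runs:,.0f}")
# At 100,000 runs the gap is roughly $9,400 vs $2,300.
```

The ~4x cost gap tracks the ~4x difference in input tokens (350k vs 90k), which is why token usage is the lever that matters here.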

That said, I really like how quick Kernel is. If you can bring that token usage down, there's a real edge. Ping me for more info.
