Commit f205706

Add Ruby example (#6)
1 parent 32a270c commit f205706

5 files changed: 52 additions, 1 deletion

5 files changed

+52
-1
lines changed

README.md

Lines changed: 2 additions & 1 deletion
@@ -10,6 +10,7 @@ The proxy server supports specifying cache TTL on a per-request basis, so you co

 - [openai/openai-node](https://github.com/openai/openai-node): full compatibility, takes just a few lines of config to use
 - [openai/openai-python](https://github.com/openai/openai-python): partial compatibility, supports caching but no TTL options so you'll need a cache eviction policy
+- [alexrudall/ruby-openai](https://github.com/alexrudall/ruby-openai): partial compatibility, supports caching but no TTL options so you'll need a cache eviction policy

 It only caches `POST` requests that have a JSON request body, as these tend to be the slowest and are the only ones that cost money (for now).
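The caching rule in the hunk above (only `POST` requests with a JSON body are cached) can be sketched as a small predicate. This is an illustration of the stated rule only, not the proxy's actual implementation; the function name and arguments are mine:

```ruby
# Rough sketch of the caching rule stated above: only POST requests
# with a JSON body are cacheable. Illustration only -- the method and
# content-type checks are assumptions, not the proxy's actual code.
def cacheable_request?(method, content_type)
  method == "POST" && content_type.to_s.start_with?("application/json")
end

puts cacheable_request?("POST", "application/json")  # prints "true"
puts cacheable_request?("GET", "application/json")   # prints "false"
puts cacheable_request?("POST", "text/plain")        # prints "false"
```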

@@ -92,4 +93,4 @@ const configuration = new Configuration({

 See `/examples/` directory for a full example of how to call this proxy with your openai client.

-This includes both Node.js and Python client usage examples.
+This includes Node.js, Python, and Ruby client usage examples.
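Since the README's first hunk mentions specifying cache TTL per request, here is a hedged sketch of what such a request could look like from Ruby using only the standard library. The header name `X-Proxy-TTL` and the endpoint path are assumptions for illustration; check the proxy's README for the header and routes it actually expects:

```ruby
require "net/http"
require "json"
require "uri"

# Hedged sketch: build one completion request aimed at the proxy with a
# per-request cache TTL. "X-Proxy-TTL" and the "/proxy/completions" path
# are assumptions, not confirmed by this commit.
uri = URI("http://localhost:8787/proxy/completions")
req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
req["Authorization"] = "Bearer #{ENV.fetch('OPENAI_API_KEY', 'sk-placeholder')}"
req["X-Proxy-TTL"] = "3600"  # assumed header: cache this response for one hour
req.body = JSON.generate(model: "text-davinci-001", prompt: "Once upon a time", max_tokens: 5)

# Uncomment to actually send the request to a running proxy:
# res = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
```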

examples/ruby/.env.sample

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+# Create a file called ".env" and add this env var:
+OPENAI_API_KEY=...insertme...
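The example loads this file via the dotenv gem, which copies `KEY=value` lines from `.env` into `ENV` at require time. A minimal sketch of that behavior under stated assumptions (this is not the gem's actual implementation, and it ignores quoting and interpolation):

```ruby
# Minimal sketch of dotenv-style parsing: skip blanks and comments,
# split each remaining line into KEY and value. Illustration only.
def parse_dotenv(text)
  text.each_line.with_object({}) do |line, env|
    stripped = line.strip
    next if stripped.empty? || stripped.start_with?("#")
    key, value = stripped.split("=", 2)
    env[key] = value
  end
end

sample = <<~ENVFILE
  # Create a file called ".env" and add this env var:
  OPENAI_API_KEY=...insertme...
ENVFILE
env = parse_dotenv(sample)
puts env["OPENAI_API_KEY"]  # prints "...insertme..."
```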

examples/ruby/Gemfile

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+# frozen_string_literal: true
+
+source "https://rubygems.org"
+
+# gem "rails"
+
+gem "ruby-openai", "~> 3.7"
+
+gem "dotenv", "~> 2.8"
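The `~>` operator in the Gemfile above is a pessimistic version constraint: `"~> 3.7"` allows any version `>= 3.7` and `< 4.0`. `Gem::Requirement`, which ships with RubyGems, evaluates these constraints directly:

```ruby
require "rubygems"  # Gem::Requirement and Gem::Version are part of RubyGems

# "~> 3.7" permits 3.7.x and later 3.x releases, but not 4.0.
req = Gem::Requirement.new("~> 3.7")
puts req.satisfied_by?(Gem::Version.new("3.7.0"))  # prints "true"
puts req.satisfied_by?(Gem::Version.new("3.9.1"))  # prints "true"
puts req.satisfied_by?(Gem::Version.new("4.0.0"))  # prints "false"
```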

examples/ruby/Gemfile.lock

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
+GEM
+  remote: https://rubygems.org/
+  specs:
+    dotenv (2.8.1)
+    httparty (0.21.0)
+      mini_mime (>= 1.0.0)
+      multi_xml (>= 0.5.2)
+    mini_mime (1.1.2)
+    multi_xml (0.6.0)
+    ruby-openai (3.7.0)
+      httparty (>= 0.18.1)
+
+PLATFORMS
+  arm64-darwin-22
+
+DEPENDENCIES
+  dotenv (~> 2.8)
+  ruby-openai (~> 3.7)
+
+BUNDLED WITH
+   2.4.8

examples/ruby/example.rb

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+require 'dotenv/load'
+require 'openai'
+
+# As the API version is hard-coded in the proxy, we need to set it to an empty string here:
+OpenAI.configuration.api_version = ""
+# Set this to your local instance or Cloudflare deployment:
+OpenAI.configuration.uri_base = "http://localhost:8787/proxy"
+OpenAI.configuration.access_token = ENV.fetch('OPENAI_API_KEY')
+
+client = OpenAI::Client.new
+
+response = client.completions(
+  parameters: {
+    model: "text-davinci-001",
+    prompt: "Once upon a time",
+    max_tokens: 5
+  })
+puts response["choices"].map { |c| c["text"] }
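The last line of `example.rb` extracts the generated text from the response's `"choices"` array. With a stubbed hash in roughly the shape the completions endpoint returns (the structure here is assumed for illustration; real responses carry additional fields such as `"id"` and `"usage"`), the extraction works like this:

```ruby
# Stubbed response shaped like a completions reply -- structure assumed
# for illustration only; the text values are made up.
response = {
  "choices" => [
    { "text" => " there was a proxy" },
    { "text" => " in a land far away" }
  ]
}

texts = response["choices"].map { |c| c["text"] }
puts texts  # prints each extracted text on its own line
```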
