Replies: 2 comments 1 reply
@kchusam I can confirm that with
Hope that works for you!
@kchusam Here's an MRE (minimal reproducible example) adapted from the "Example of using PandasAI with a Pandas DataFrame" docstring:

```python
import pandas as pd
from pandasai import PandasAI
from pandasai.llm import OpenAI

# dummy example dataframe
dataframe = {
    "country": [
        "United States",
        "United Kingdom",
        "France",
        "Germany",
        "Italy",
        "Spain",
        "Canada",
        "Australia",
        "Japan",
        "China",
    ],
    "gdp": [
        19294482071552,
        2891615567872,
        2411255037952,
        3435817336832,
        1745433788416,
        1181205135360,
        1607402389504,
        1490967855104,
        4380756541440,
        14631844184064,
    ],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12],
}
df = pd.DataFrame(dataframe)

api_key = "sk-..."  # <-- your OpenAI API key here
llm = OpenAI(api_token=api_key)
pandas_ai = PandasAI(llm, enable_cache=False, enforce_privacy=True)

response = pandas_ai(df, "Calculate the sum of the gdp of north american countries")
print(pandas_ai.last_prompt)
```

Which displays:
The reason you see the conversational prompt instead is the
I would like to understand what is actually sent to the LLM server and what the dataframe schema looks like in the prompt. How can I do that?
Especially when enforce_privacy = True is set, I want to be sure that only the header is sent.
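For what it's worth, here is a minimal sketch of the idea I'm asking about (plain pandas only, not PandasAI's actual prompt builder; `build_prompt` and its wording are hypothetical): with privacy enforced, only the column names and dtypes would go into the prompt, never cell values. One could then compare this against whatever `pandas_ai.last_prompt` shows.

```python
import pandas as pd

def build_prompt(df: pd.DataFrame, question: str, enforce_privacy: bool = True) -> str:
    """Hypothetical illustration of a privacy-aware prompt builder."""
    if enforce_privacy:
        # Only the schema (column names and dtypes) is included, no data.
        schema = ", ".join(f"{col} ({df[col].dtype})" for col in df.columns)
        context = f"The dataframe has {len(df)} rows and these columns: {schema}."
    else:
        # Without privacy, sample rows (actual values) leak into the prompt.
        context = "Here are the first rows:\n" + df.head().to_string(index=False)
    return context + "\nQuestion: " + question

df = pd.DataFrame({"country": ["France", "Spain"],
                   "gdp": [2411255037952, 1181205135360]})
print(build_prompt(df, "Calculate the sum of the gdp", enforce_privacy=True))
```

With `enforce_privacy=True` the prompt contains "country" and "gdp" but no country names or numbers; with `enforce_privacy=False` the sampled rows appear verbatim.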