Compage gpt #157
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open
sbiswasai wants to merge 2 commits into intelops:pre-main from sbiswasai:compage_gpt
Changes from 1 commit
@@ -0,0 +1,40 @@
## Requirements

To successfully run the program, make sure you have all the necessary dependencies installed. These dependencies are listed in the `requirements.txt` file. Before executing the program, follow these steps:

## Setting Up the Environment

1. Create a Python virtual environment using either pip or conda. You should use Python version 3.11.4.

2. Activate the newly created virtual environment. This step ensures that the required packages are isolated from your system-wide Python installation (see the example below).
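A minimal sketch, assuming a Unix-like shell, the built-in `venv` module, and Python 3.11 available as `python3.11` (conda users would create and activate the environment with conda commands instead):

<pre>
<code># create and activate a virtual environment with Python 3.11
python3.11 -m venv venv
source venv/bin/activate</code>
</pre>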
## Installing Dependencies

3. Install the required dependencies by running the following command in your terminal:

<pre>
<code>pip install -r requirements.txt</code>
</pre>

This command will read the `requirements.txt` file and install all the necessary packages into your virtual environment.
## Running the Code

4. Once the dependencies are installed, you can run the program using the following command:

<pre>
<code>uvicorn generate_code:app --reload</code>
</pre>

This command starts the Uvicorn server and launches the application. The `--reload` flag enables auto-reloading, which is useful during development.
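Once the server is running, you can exercise the API from another terminal. A hedged example, assuming uvicorn's default host and port (127.0.0.1:8000) and a placeholder key; the request fields match the `Item` model and the `apikey` header read by the endpoint in `generate_code.py`:

<pre>
<code># health check
curl http://127.0.0.1:8000/ping

# generate code (replace YOUR_OPENAI_API_KEY with a real key)
curl -X POST http://127.0.0.1:8000/generate_code/ \
  -H "Content-Type: application/json" \
  -H "apikey: YOUR_OPENAI_API_KEY" \
  -d '{"language": "python", "topic": "binary search"}'</code>
</pre>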
generate_code.py
@@ -0,0 +1,142 @@
# Imports
import os

from dotenv import load_dotenv
from fastapi import FastAPI, Header
from fastapi.middleware.cors import CORSMiddleware
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import SequentialChain, LLMChain
from langchain.memory import ConversationBufferMemory
from pydantic import BaseModel

# Load environment variables from a local .env file, if present
load_dotenv()
class Item(BaseModel):
    language: str
    topic: str


app = FastAPI()
origins = ["*"]
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
@app.get("/ping")
def ping():
    return {"message": "Hello World"}
@app.post("/generate_code/")
async def generate_code(item: Item, apikey: str = Header(None)):
    # The OpenAI API key arrives in the "apikey" request header
    if not apikey:
        return {"error": "Missing API key: pass it in the 'apikey' request header."}

    # Make the key available to the LangChain OpenAI client
    os.environ["OPENAI_API_KEY"] = apikey

    code_language = item.language
    code_topic = item.topic
    # Prompt template for the code generation
    code_template = PromptTemplate(
        input_variables=['lang', 'top'],
        template='Write the code in {lang} language for {top} '
                 'with proper inline comments and maintaining markdown format of {lang}'
    )

    code_explain_template = PromptTemplate(
        input_variables=['top'],
        template='Explain in detail the working of the generated code and algorithm '
                 'for {top} in proper markdown format'
    )

    code_flow_template = PromptTemplate(
        input_variables=['top'],
        template='Generate the diagram flow for {top} in proper markdown format'
    )

    code_testcase_template = PromptTemplate(
        input_variables=['lang', 'top'],
        template='Generate the unit test cases and codes '
                 'and integration test cases with codes '
                 'in {lang} for {top} in proper markdown formats'
    )
    # Separate conversation memory for each chain
    code_memory = ConversationBufferMemory(
        input_key='top', memory_key='chat_history')
    explain_memory = ConversationBufferMemory(
        input_key='top', memory_key='chat_history')
    flow_memory = ConversationBufferMemory(
        input_key='top', memory_key='chat_history')
    testcase_memory = ConversationBufferMemory(
        input_key='top', memory_key='chat_history')

    # Create the OpenAI LLM
    open_ai_llm = OpenAI(temperature=0.7, max_tokens=1000)
    # Chain to generate the code
    code_chain = LLMChain(llm=open_ai_llm, prompt=code_template,
                          output_key='code', memory=code_memory, verbose=True)

    # Chain to explain the generated code
    code_explain_chain = LLMChain(llm=open_ai_llm, prompt=code_explain_template,
                                  output_key='code_explain', memory=explain_memory, verbose=True)

    # Chain to generate the code flow diagram
    code_flow_chain = LLMChain(llm=open_ai_llm, prompt=code_flow_template,
                               output_key='code_flow', memory=flow_memory, verbose=True)

    # Chain to generate unit and integration test cases
    code_testcase_chain = LLMChain(llm=open_ai_llm, prompt=code_testcase_template,
                                   output_key='code_unittest', memory=testcase_memory, verbose=True)

    # Combine all four chains into a single sequential chain
    sequential_chain = SequentialChain(
        chains=[code_chain, code_explain_chain, code_flow_chain, code_testcase_chain],
        input_variables=['lang', 'top'],
        output_variables=['code', 'code_explain', 'code_flow', 'code_unittest'])
    # Run the pipeline and return each output field
    response = sequential_chain({'lang': code_language, 'top': code_topic})

    return {'code': response['code'], 'code_explain': response['code_explain'],
            'code_flow': response['code_flow'], 'code_unittest': response['code_unittest']}
requirements.txt
@@ -0,0 +1,125 @@
aiohttp==3.8.4
aiosignal==1.3.1
altair==4.2.2
anyio==3.7.1
async-timeout==4.0.2
attrs==23.1.0
backoff==2.2.1
beautifulsoup4==4.12.2
blinker==1.6.2
cachetools==5.3.1
certifi==2023.5.7
charset-normalizer==3.2.0
chromadb==0.3.23
click==8.1.5
clickhouse-connect==0.6.6
cmake==3.26.4
dataclasses-json==0.5.9
decorator==5.1.1
duckdb==0.8.1
entrypoints==0.4
fastapi==0.100.0
filelock==3.12.2
frozenlist==1.4.0
fsspec==2023.6.0
gitdb==4.0.10
GitPython==3.1.32
greenlet==2.0.2
h11==0.14.0
hnswlib==0.7.0
httptools==0.6.0
huggingface-hub==0.16.4
idna==3.4
importlib-metadata==6.8.0
Jinja2==3.1.2
joblib==1.3.1
jsonschema==4.18.3
jsonschema-specifications==2023.6.1
langchain==0.0.174
lit==16.0.6
lz4==4.3.2
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
mdurl==0.1.2
monotonic==1.6
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
networkx==3.1
nltk==3.8.1
numexpr==2.8.4
numpy==1.25.1
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
openai==0.27.2
openapi-schema-pydantic==1.2.4
packaging==23.1
pandas==2.0.3
Pillow==10.0.0
posthog==3.0.1
protobuf==3.20.3
pyarrow==12.0.1
pydantic==1.10.11
pydeck==0.8.1b0
Pygments==2.15.1
Pympler==1.0.1
python-dateutil==2.8.2
python-dotenv==1.0.0
pytz==2023.3
PyYAML==6.0
referencing==0.29.1
regex==2023.6.3
requests==2.31.0
rich==13.4.2
rpds-py==0.8.10
safetensors==0.3.1
scikit-learn==1.3.0
scipy==1.11.1
sentence-transformers==2.2.2
sentencepiece==0.1.99
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.4.1
SQLAlchemy==2.0.18
starlette==0.27.0
streamlit==1.22.0
sympy==1.12
tenacity==8.2.2
threadpoolctl==3.2.0
tiktoken==0.3.3
tokenizers==0.13.3
toml==0.10.2
toolz==0.12.0
torch==2.0.1
torchvision==0.15.2
tornado==6.3.2
tqdm==4.65.0
transformers==4.30.2
triton==2.0.0
typing-inspect==0.9.0
typing_extensions==4.7.1
tzdata==2023.3
tzlocal==5.0.1
urllib3==2.0.3
uvicorn==0.22.0
uvloop==0.17.0
validators==0.20.0
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0.3
wikipedia==1.4.0
yarl==1.9.2
zipp==3.16.1
zstandard==0.21.0