
Commit a1bba66

added token counter

1 parent 77e2657 commit a1bba66

File tree: 7 files changed, +93 −0 lines changed
Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@

```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.9-slim

WORKDIR /app

# Install requirements
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

# Copy the rest of the application code to the working directory
COPY . /app/

EXPOSE 8000

# Set the entrypoint for the container
CMD ["hypercorn", "--bind", "0.0.0.0:8000", "api:app"]
```
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@

## Grievance classification

### Purpose

Model to classify grievances into four buckets:

- Label 0: 'Agri scheme'
- Label 1: 'Other agri content'
- Label 2: 'pest flow'
- Label 3: 'seed flow'

### Testing the model deployment

To test just the Hugging Face deployment for grievance recognition, follow these steps:

- Git clone the repo
- Go to the current folder location, i.e. `cd /src/text_classification/flow_classification/local`
- Build the Docker image and test the API:

```
docker build -t testmodel .
docker run -p 8000:8000 testmodel
curl -X POST -H "Content-Type: application/json" -d '{"text": "Where is my money? "}' http://localhost:8000/
```
Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@

```python
from .request import ModelRequest
from .model import Model  # Model is defined in model.py, not request.py
```

token_counter/openai/local/api.py

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@

```python
from model import Model
from request import ModelRequest
from quart import Quart, request
import aiohttp

app = Quart(__name__)

model = None

@app.before_serving
async def startup():
    # Create the shared HTTP session and the singleton Model once at startup
    app.client = aiohttp.ClientSession()
    global model
    model = Model(app)

@app.route('/', methods=['POST'])
async def embed():
    global model
    data = await request.get_json()
    req = ModelRequest(**data)
    return await model.inference(req)

if __name__ == "__main__":
    app.run()
```
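The endpoint above accepts a JSON body with a `text` field and returns the token count as a string. A minimal client sketch using only the standard library; the `localhost:8000` address and the `count_tokens` helper name are assumptions, matching the Docker setup described in the README:

```python
import json
import urllib.request

def count_tokens(text, url="http://localhost:8000/"):
    """POST {"text": ...} to the token-counter service and return its reply.

    Assumes the api.py server (or the Docker container) is listening at `url`.
    """
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # Requires the server to be running locally.
    print(count_tokens("Where is my money?"))
```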
Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@

```python
import tiktoken
from request import ModelRequest


class Model():
    def __new__(cls, context):
        # Singleton: load the tokenizer once, then reuse the same instance
        cls.context = context
        if not hasattr(cls, 'instance'):
            cls.instance = super(Model, cls).__new__(cls)
            model_name = "gpt-3.5-turbo"
            cls.encoding = tiktoken.encoding_for_model(model_name)
        return cls.instance

    async def inference(self, request: ModelRequest):
        num_tokens = len(self.encoding.encode(request.text))
        return str(num_tokens)
```
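The `__new__` override above is a singleton: the expensive `tiktoken` load happens only on the first construction, and every later `Model(...)` call returns the same instance. A minimal sketch of the same pattern, with the tiktoken encoder replaced by a naive whitespace splitter so it runs with no dependencies (the `Counter` class is hypothetical, for illustration only):

```python
class Counter:
    """Singleton: expensive setup runs once; later constructions reuse it."""

    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super(Counter, cls).__new__(cls)
            # Stand-in for tiktoken.encoding_for_model("gpt-3.5-turbo"):
            # a whitespace "tokenizer" so the sketch has no dependencies.
            cls.encode = staticmethod(lambda text: text.split())
        return cls.instance

    def count(self, text):
        return len(self.encode(text))

a = Counter()
b = Counter()
print(a is b)                  # True: both names point at one instance
print(a.count("hello world"))  # 2
```

One caveat of the original: because `cls.context = context` runs on every call, later constructions silently overwrite the context stored by the first one.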
Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@

```python
import json


class ModelRequest():
    def __init__(self, text):
        self.text = text

    def to_json(self):
        return json.dumps(self, default=lambda o: o.__dict__,
                          sort_keys=True, indent=4)
```
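`to_json` serializes the request via each object's `__dict__`, so any attribute set in `__init__` becomes a JSON key, and the result parses back into the keyword arguments that `api.py` unpacks with `ModelRequest(**data)`. A self-contained round-trip sketch (the class is repeated here so the snippet runs on its own):

```python
import json


class ModelRequest():
    def __init__(self, text):
        self.text = text

    def to_json(self):
        return json.dumps(self, default=lambda o: o.__dict__,
                          sort_keys=True, indent=4)

req = ModelRequest("Where is my money?")
payload = req.to_json()
print(payload)  # pretty-printed JSON with a single "text" key

# Mirrors what the API does on the way in: JSON body -> ModelRequest(**data)
data = json.loads(payload)
roundtrip = ModelRequest(**data)
print(roundtrip.text == req.text)  # True
```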
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@

```
tiktoken
quart
aiohttp
```

0 commit comments