Commit e6e194a

supporting notebook for PR 1270
elastic/search-labs-elastic-co#1270
1 parent 4141c04 commit e6e194a

File tree

1 file changed (+189 −0)

@@ -0,0 +1,189 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# PDF Parsing - Table Extraction\n",
        "<a target=\"_blank\" href=\"https://colab.research.google.com/github/elastic/elasticsearch-labs/blob/main/supporting-blog-content/alternative-approach-for-parsing-pdfs-in-rag/alternative-approach-for-parsing-pdfs-in-rag.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n"
      ],
      "metadata": {
        "id": "e9-GuDRKCz_1"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Objective\n",
        "This notebook extracts text and tables from a PDF file, converts the tables into human-readable text using Azure OpenAI, and writes the processed content to a text file. It uses pdfplumber to extract text and table data from each page of the PDF. For each table, a cleaned version (with missing or None cells replaced by empty strings) is sent to Azure OpenAI, which generates a natural-language summary of the table. The extracted non-table text and the summarized tables are then saved together in a single text file for easy search and readability."
      ],
      "metadata": {
        "id": "MBdflc9G0ICc"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install pdfplumber"
      ],
      "metadata": {
        "id": "QBwz0_VNL1p6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "This code imports the libraries needed for PDF extraction, data processing, and calling Azure OpenAI over HTTP. It prompts for the Azure OpenAI completions endpoint and API key with getpass, then builds the request headers used for every call to the Azure OpenAI service."
      ],
      "metadata": {
        "id": "QC37eVM90few"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "X3vuHZTjK6l7"
      },
      "outputs": [],
      "source": [
        "import pdfplumber\n",
        "import pandas as pd\n",
        "import requests\n",
        "import base64\n",
        "import json\n",
        "from getpass import getpass\n",
        "import io  # To create an in-memory file-like object\n",
        "\n",
        "ENDPOINT = getpass(\"Azure OpenAI Completions Endpoint: \")\n",
        "API_KEY = getpass(\"Azure OpenAI API Key: \")\n",
        "\n",
        "headers = {\n",
        "    \"Content-Type\": \"application/json\",\n",
        "    \"api-key\": API_KEY,\n",
        "}\n"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "This code defines two functions: extract_table_text_from_openai and parse_pdf_from_url. extract_table_text_from_openai builds a request payload that sends a table's plain text to Azure OpenAI and returns the model's human-readable description of the table. parse_pdf_from_url downloads a PDF from a URL and processes it page by page with pdfplumber, extracting both text and tables; each extracted table is sent to Azure OpenAI for summarization, and all the content (including the summarized tables) is saved to a text file."
      ],
      "metadata": {
        "id": "79VOKKam0leA"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os  # Used to create the output directory if it does not exist\n",
        "\n",
        "def extract_table_text_from_openai(table_text):\n",
        "    # Payload for the Azure OpenAI request\n",
        "    payload = {\n",
        "        \"messages\": [\n",
        "            {\n",
        "                \"role\": \"system\",\n",
        "                \"content\": [\n",
        "                    {\n",
        "                        \"type\": \"text\",\n",
        "                        \"text\": \"You are an AI assistant that helps convert tables into a human-readable text.\"\n",
        "                    }\n",
        "                ]\n",
        "            },\n",
        "            {\n",
        "                \"role\": \"user\",\n",
        "                \"content\": f\"Convert this table to a readable text format:\\n{table_text}\"\n",
        "            }\n",
        "        ],\n",
        "        \"temperature\": 0.7,\n",
        "        \"top_p\": 0.95,\n",
        "        \"max_tokens\": 4096\n",
        "    }\n",
        "\n",
        "    # Send the request to Azure OpenAI\n",
        "    try:\n",
        "        response = requests.post(ENDPOINT, headers=headers, json=payload)\n",
        "        response.raise_for_status()  # Raise an error if the request fails\n",
        "    except requests.RequestException as e:\n",
        "        raise SystemExit(f\"Failed to make the request. Error: {e}\")\n",
        "\n",
        "    # Process the response\n",
        "    return response.json().get('choices', [{}])[0].get('message', {}).get('content', '').strip()\n",
        "\n",
        "def parse_pdf_from_url(file_url):\n",
        "    # Download the PDF file from the URL\n",
        "    response = requests.get(file_url)\n",
        "    response.raise_for_status()  # Ensure the request was successful\n",
        "\n",
        "    # Open the PDF content with pdfplumber using io.BytesIO\n",
        "    pdf_content = io.BytesIO(response.content)\n",
        "\n",
        "    # Make sure the output directory exists before opening the output file\n",
        "    os.makedirs(\"/content/parsed_output\", exist_ok=True)\n",
        "\n",
        "    with pdfplumber.open(pdf_content) as pdf, open(\"/content/parsed_output/parsed_output.txt\", \"w\") as output_file:\n",
        "        for page_num, page in enumerate(pdf.pages, 1):\n",
        "            print(f\"Processing page {page_num}\")\n",
        "\n",
        "            # Extract text content\n",
        "            text = page.extract_text()\n",
        "            if text:\n",
        "                output_file.write(f\"Page {page_num} Text:\\n\")\n",
        "                output_file.write(text + \"\\n\\n\")\n",
        "                print(\"Text extracted:\", text)\n",
        "\n",
        "            # Extract tables\n",
        "            tables = page.extract_tables()\n",
        "            for idx, table in enumerate(tables):\n",
        "                print(f\"Table {idx + 1} found on page {page_num}\")\n",
        "\n",
        "                # Convert the table (skipping the header row) into plain text,\n",
        "                # replacing None cells with empty strings\n",
        "                table_text = \"\\n\".join([\"\\t\".join([str(cell) if cell is not None else \"\" for cell in row]) for row in table[1:]])\n",
        "\n",
        "                # Call Azure OpenAI to convert the table into a text representation\n",
        "                table_description = extract_table_text_from_openai(table_text)\n",
        "\n",
        "                # Write the text representation to the file\n",
        "                output_file.write(f\"Table {idx + 1} (Page {page_num}) Text Representation:\\n\")\n",
        "                output_file.write(table_description + \"\\n\\n\")\n",
        "                print(\"Text representation of the table:\", table_description)\n"
      ],
      "metadata": {
        "id": "CdMm1AKJLKbA"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# URL of the PDF file\n",
        "file_url = \"https://sunmanapp.blob.core.windows.net/publicstuff/pdfs/quarterly_report.pdf\"\n",
        "# The output is stored in /content/parsed_output/parsed_output.txt\n",
        "\n",
        "# Call the function to parse the PDF from the URL\n",
        "parse_pdf_from_url(file_url)"
      ],
      "metadata": {
        "id": "7ig9NSSnLMGt"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}
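As a quick illustration of the table-flattening step inside parse_pdf_from_url, here is a minimal sketch run on a hypothetical `sample_table` (an assumption for illustration, shaped the way pdfplumber's `page.extract_tables()` returns tables: a list of rows, each a list of cell values, with `None` for empty cells):

```python
# Hypothetical table, shaped like pdfplumber's page.extract_tables() output:
# a list of rows, each a list of cell values (None for empty cells).
sample_table = [
    ["Quarter", "Revenue", "Costs"],  # header row, skipped by table[1:]
    ["Q1", "100", None],
    ["Q2", "120", "80"],
]

# Same transformation as in parse_pdf_from_url: None cells become empty
# strings, cells are tab-joined, and rows are newline-joined before the
# result is sent to Azure OpenAI for summarization.
table_text = "\n".join(
    "\t".join(str(cell) if cell is not None else "" for cell in row)
    for row in sample_table[1:]
)
print(table_text)
# → Q1	100	
#   Q2	120	80
```

In the notebook this string becomes the `table_text` argument to extract_table_text_from_openai; note that `table[1:]` drops the header row, so only the data rows reach the model.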
