An intelligent invoice data extractor built with **OCI Generative AI**, **LangChain**, and **Streamlit**. Upload an invoice PDF and the app extracts structured data such as REF. NO., POLICY NO., and dates using multimodal LLMs.
| 🧠 OCI Generative AI | Vision + Text LLMs for extraction |
| 🧱 LangChain | Prompt orchestration and LLM chaining |
| 📦 Streamlit | Interactive UI and file handling |
| 🖼️ pdf2image | Convert PDFs into JPEGs |
| 🧾 Pandas | CSV creation & table rendering |
| 🔐 Base64 | Encodes image bytes for prompt injection |
---
## 🧠 How It Works
1. **User Uploads Invoice PDF**
   The file is uploaded and converted into an image using `pdf2image` (upload single-page documents only).

2. **Initial Header Detection (LLaMA-3.2 Vision)**
   The first page is passed to the multimodal LLM, which returns a list of fields likely to be useful (e.g., "Policy No.", "Amount", "Underwriter").

3. **User Selects Fields and Types**
   The UI lets the user pick 3 fields from the detected list and specify their data types (Text, Number, etc.).

4. **Prompt Generation (Cohere Command R+)**
   The second LLM generates a custom system prompt to extract those fields as JSON.

5. **Full Invoice Extraction (LLaMA-3.2 Vision)**
   Each page image is passed to the multimodal LLM with the custom prompt, returning JSON values for the requested fields (see the sketch after this list).

6. **Data Saving & Display**
   All extracted data is shown in a `st.dataframe()` and saved to CSV.
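A minimal sketch of this flow, assuming the LangChain OCI integration (`ChatOCIGenAI`) and placeholder model ID, endpoint, field names, and file paths — the actual wiring in `app.py` may differ:

```python
import base64
import json

from pdf2image import convert_from_path
from langchain_community.chat_models import ChatOCIGenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Step 1: convert the uploaded single-page PDF into a JPEG image.
page = convert_from_path("invoice.pdf", dpi=200)[0]
page.save("invoice_page1.jpg", "JPEG")

# Encode the image bytes as base64 so they can be embedded in the prompt.
with open("invoice_page1.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Multimodal LLM client (model ID, endpoint, and compartment are placeholders).
vision_llm = ChatOCIGenAI(
    model_id="meta.llama-3.2-90b-vision-instruct",
    service_endpoint="https://inference.generativeai.<region>.oci.oraclecloud.com",
    compartment_id="<YOUR_COMPARTMENT_OCID_HERE>",
)

# Steps 2 and 5: send the page image plus a system prompt and parse the JSON reply.
system_prompt = (
    "Extract the fields REF. NO., POLICY NO. and DATE from the invoice "
    "and respond only in valid JSON format (no explanation)."
)
messages = [
    SystemMessage(content=system_prompt),
    HumanMessage(
        content=[
            {"type": "text", "text": "Extract the requested fields from this invoice page."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ]
    ),
]
response = vision_llm.invoke(messages)
fields = json.loads(response.content)  # assumes the model returned clean JSON
print(fields)
```

In the app itself, the hard-coded field list above is replaced by the system prompt generated in step 4, and the per-page results are appended to a Pandas DataFrame before being written to CSV.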
---
## 📁 File Structure
```bash
.
├── app.py              # Main Streamlit app
├── requirements.txt    # Python dependencies
└── README.md           # This file
```
---
## 🔧 Setup
1. **Clone the repository**

```bash
git clone <repository-url>
cd <repository-folder>
```
2. **Install dependencies**

```bash
pip install -r requirements.txt
```
3. **Run the app**

```bash
streamlit run app.py
```
> ⚠️ **Important Configuration:**
>
> - Replace all instances of `<YOUR_COMPARTMENT_OCID_HERE>` with your actual **OCI Compartment OCID**
> - Ensure you have access to **OCI Generative AI Services** with correct permissions
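For reference, access to the Generative AI service is typically granted with an IAM policy along these lines (the group and compartment names below are placeholders, not taken from this project):

```
allow group invoice-extractor-users to use generative-ai-family in compartment <your-compartment-name>
```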
The prompt-generation logic in `app.py` looks like this (excerpt):

```python
# Generate appropriate prompt based on selected or input fields
if elements:
    # Fields were picked from the detected list: build the extraction prompt directly
    system_message_cohere = SystemMessage(
        content=f"""
Based on the following set of elements {elements}, with their respective types, extract their values and respond only in valid JSON format (no explanation):
{', '.join([f'- {e[0]}' for e in elements])}
For example:
{{
{elements[0][0]}: "296969",
{elements[1][0]}: "296969",
{elements[2][0]}: "296969"
}}
"""
    )
    ai_response_cohere = system_message_cohere
else:
    # Fields were described free-form: ask the second LLM to generate the system prompt
    system_message_cohere = SystemMessage(
        content=f"""
Generate a system prompt to extract fields based on user-defined elements: {user_prompt}.
```
By running models locally, you maintain full data ownership and avoid the potential security risks associated with cloud storage. Offline AI tools like Ollama also help reduce latency and reliance on external facilities, making them faster and more reliable.
This article is intended to demonstrate and provide directions to install and create an Ollama LLM processing facility. Although Ollama can run on personal servers and laptops, this installation targets the Oracle Compute Cloud@Customer (C3) to take advantage of more readily available resources and increase performance and processing efficiency, especially when large models are used.
Considerations:
* A firm grasp of C3 and OCI concepts and administration is assumed.
* The creation and integration of a development environment is outside of the scope of this document.
* Oracle Linux 8, macOS Sonoma 14.7.1, and macOS Sequoia 15.3.2 clients were used for testing; Windows is also widely supported.
[Back to top](#toc)<br>
<br>
|----------|----------|
| Operating system | Oracle Linux 8 or later<br>Ubuntu 22.04 or later<br>Windows<br> |
| RAM | 16 GB for running models up to 7B. The rule of thumb is to have at least 2x the LLM size in memory, also allowing for any LLMs that will be loaded simultaneously. |
| Disk space | 12 GB for installing Ollama and basic models. Additional space is required for storing model data, depending on the models used. LLM sizes can be obtained from the "trained models" link in the References section. For example, the Llama 3.1 LLM with 405B parameters occupies 229 GB of disk space |
| Processor | A modern CPU with at least 4 cores is recommended. For running models of approximately 15B parameters, 8 cores (OCPUs) are recommended. Allocate accordingly |
| Graphics Processing Unit<br>(optional) | A GPU is not required for running Ollama, but can improve performance, especially when working with large models. If you have a GPU, you can use it to accelerate inferencing, training, fine-tuning and RAG (Retrieval Augmented Generation). |
>[!NOTE]
>The C3 now has an NVIDIA L40S GPU expansion option available. With a 4-GPU VM, performance acceleration is expected to improve dramatically for LLMs of up to approximately 70B parameters.
### Create a Virtual Machine Instance
[Creating an Instance](https://docs.oracle.com/en-us/iaas/compute-cloud-at-customer/topics/compute/creating-an-instance.htm#creating-an-instance)<br>
Create a VM in a public subnet following these guidelines:
### Create a Block Storage Device for LLMs
[Creating and Attaching Block Volumes](https://docs.oracle.com/en-us/iaas/compute-cloud-at-customer/topics/block/creating-and-attaching-block-volumes.htm)<br>
1. Create and attach a block volume to the VM
2. Volume name `llm-repo`
>[!TIP]
>The `no_proxy` environment variable can be expanded to include your internal domains. It is not required to list IP addresses in internal subnets of the C3.
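For example (the proxy host and internal domain below are placeholders, not values from this article):

```
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80
export no_proxy=localhost,127.0.0.1,.mycompany.internal
```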
Edit the `/etc/yum.conf` file to include the following line:
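A typical proxy entry (hypothetical host and port) looks like this; adjust for your environment:

```
proxy=http://proxy.example.com:80
```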
<sup><sub>2</sub></sup> See [Ollama documentation](https://github.com/ollama/ollama/tree/main/docs)
>[!IMPORTANT]
>When GPUs are available for use on the C3, the NVIDIA and CUDA drivers should be installed. This configuration will also be tested on the Roving Edge Device GPU model.