By running models locally, you maintain full data ownership and avoid the potential security risks associated with cloud storage. Offline AI tools like Ollama also help reduce latency and reliance on external services, making them faster and more reliable.
This article demonstrates how to install and create an Ollama LLM processing facility. Although Ollama can run on personal servers and laptops, this installation targets the Oracle Compute Cloud@Customer (C3) to capitalize on its more readily available resources, improving performance and processing efficiency, especially when large models are used.
Considerations:
* A firm grasp of C3 and OCI concepts and administration is assumed.
* The creation and integration of a development environment is outside the scope of this document.
* Oracle Linux 8, macOS Sonoma 14.7.1, and macOS Sequoia 15.3.2 clients were used for testing; Windows is also widely supported.
[Back to top](#toc)<br>
<br>
| Component | Requirement |
|----------|----------|
| Operating system | Oracle Linux 8 or later<br>Ubuntu 22.04 or later<br>Windows<br> |
| RAM | 16 GB for running models up to 7B. The rule of thumb is to have at least 2x the memory of the LLM size, also allowing for any LLMs that will be loaded in memory simultaneously. |
| Disk space | 12 GB for installing Ollama and basic models. Additional space is required for storing model data, depending on the models used. LLM sizes can be obtained from the "trained models" link in the References section; for example, the Llama 3.1 LLM with 405B parameters occupies 229 GB of disk space. |
| Processor | A modern CPU with at least 4 cores is recommended. For running models of approximately 15B, 8 cores (OCPUs) are recommended. Allocate accordingly. |
| Graphics Processing Unit<br>(optional) | A GPU is not required for running Ollama, but can improve performance, especially when working with large models. If you have a GPU, you can use it to accelerate inferencing, training, fine-tuning and RAG (Retrieval Augmented Generation). |
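As a quick sanity check against the table above, standard Linux commands report the available memory, core count, and disk space. The mount point shown is an assumption based on the block volume created later in this guide:

```
free -h               # total and available RAM
nproc                 # number of CPU cores visible to the OS
df -h /mnt/llm-repo   # free space on the model repository (path is an assumption)
```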
>[!NOTE]
>The C3 now has an NVIDIA L40S GPU expansion option available. With a 4-GPU VM, performance acceleration is expected to improve dramatically for LLMs of up to approximately 70B parameters.
### Create a Virtual Machine Instance
[Creating an Instance](https://docs.oracle.com/en-us/iaas/compute-cloud-at-customer/topics/compute/creating-an-instance.htm#creating-an-instance)<br>
Create a VM in a public subnet following these guidelines:
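The linked documentation covers instance creation in the Compute Web UI. As a hypothetical alternative, the same step can be scripted with the OCI CLI; in this sketch the availability domain, shape, and all OCIDs are placeholders to be replaced with values from your own tenancy:

```
oci compute instance launch \
  --availability-domain <AD-name> \
  --compartment-id ocid1.compartment.oc1..example \
  --shape VM.PCAStandard1.Flex \
  --shape-config '{"ocpus": 8, "memoryInGBs": 64}' \
  --subnet-id ocid1.subnet.oc1..example \
  --image-id ocid1.image.oc1..example \
  --display-name ollama-vm \
  --assign-public-ip true
```

The 8 OCPUs and 64 GB of memory follow the sizing guidance in the requirements table; adjust them for the models you intend to run.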
### Create a Block Storage Device for LLMs
[Creating and Attaching Block Volumes](https://docs.oracle.com/en-us/iaas/compute-cloud-at-customer/topics/block/creating-and-attaching-block-volumes.htm)<br>
1. Create and attach a block volume to the VM
2. Volume name `llm-repo`
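Once the volume is attached, a typical way to bring it into service is sketched below (a paravirtualized attachment is assumed; iSCSI attachments need an additional login step). The device name is an assumption and should be confirmed with `lsblk` before formatting:

```
lsblk                           # identify the newly attached device
sudo mkfs.xfs /dev/sdb          # assumption: the new volume appears as /dev/sdb
sudo mkdir -p /mnt/llm-repo     # mount point for the model repository
sudo mount /dev/sdb /mnt/llm-repo
```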
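If the VM reaches the internet through a corporate proxy, set the proxy environment variables before installing packages. A minimal sketch, assuming a hypothetical proxy host and port:

```
http_proxy=http://<proxy-host>:<port>
https_proxy=http://<proxy-host>:<port>
no_proxy=localhost,127.0.0.1
export http_proxy
export https_proxy
export no_proxy
```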
>[!TIP]
>The `no_proxy` environment variable can be expanded to include your internal domains. It is not required to list IP addresses in internal subnets of the C3.
Edit the `/etc/yum.conf` file to include the following line (substitute your proxy host and port):
```
proxy=http://<proxy-host>:<port>
```
<sup><sub>2</sub></sup> See [Ollama documentation](https://github.com/ollama/ollama/tree/main/docs)
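As an illustration of everyday use once the installation is complete, models are fetched and exercised with the standard Ollama CLI (the model name here is only an example):

```
ollama pull llama3.1    # download the model to the local repository
ollama run llama3.1     # start an interactive session with the model
ollama list             # show models stored locally
```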
>[!IMPORTANT]
>When GPUs are available for use on the C3, the NVIDIA and CUDA drivers should be installed. This configuration will also be tested on the Roving Edge Device GPU model.
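Once the NVIDIA and CUDA drivers are installed on a GPU shape, a quick check that the devices are visible uses standard NVIDIA tooling:

```
nvidia-smi    # lists detected GPUs, driver version, and CUDA version
```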