TransformersPHP is designed to be functionally equivalent to the Python library, while maintaining the same level of performance and ease of use. It is built on top of Hugging Face's Transformers library, which provides thousands of pre-trained models in 100+ languages, and offers PHP developers a simple, easy-to-use API that mirrors the Python one. These models can be used for a variety of tasks, including text generation, summarization, translation, and more.
TransformersPHP uses [ONNX Runtime](https://onnxruntime.ai/) to run the models, which is a high-performance scoring engine for Open Neural Network Exchange (ONNX) models. You can easily convert any PyTorch or TensorFlow model to ONNX and use it with TransformersPHP using [🤗 Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).
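As a rough sketch, converting a Hub model to ONNX with 🤗 Optimum's CLI looks like this (the model id here is only an example):

```shell
# Install Optimum with ONNX export support
pip install "optimum[exporters]"

# Export a PyTorch model from the Hugging Face Hub to ONNX
optimum-cli export onnx --model distilbert-base-uncased distilbert_onnx/
```

The exported `model.onnx` (plus tokenizer and config files) in the output directory is what TransformersPHP loads at inference time.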
To learn more about the library and how it works, head over to our [extensive documentation](https://codewithkyrian.github.io/transformers-php/introduction).
## Quick tour
Because TransformersPHP is designed to be functionally equivalent to the Python library, it's super easy to learn from existing Python or JavaScript code. We provide the `pipeline` API, a high-level, easy-to-use API that groups together a model with its necessary preprocessing and postprocessing steps.
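A minimal sketch of the `pipeline` API (assuming the helper function is exposed as `Codewithkyrian\Transformers\Pipelines\pipeline`; see the documentation for the exact namespace):

```php
<?php

use function Codewithkyrian\Transformers\Pipelines\pipeline;

// Allocate a pipeline for sentiment analysis; the default model for
// the task is fetched from the Hugging Face hub on first use.
$classifier = pipeline('sentiment-analysis');

// Run inference on a single input string.
$result = $classifier('TransformersPHP is amazing!');

print_r($result);
```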
> The ONNX library is platform-specific, so it's important to run the `composer require` command on the target platform where the code will be executed. In most cases, this will be your development machine or a server where you deploy your application, but if you're using a Docker container, run the `composer require` command inside that container.
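For Docker-based deployments, this is a sketch of a Dockerfile that runs the install inside the image (the base image, FFI setup steps, and paths are illustrative assumptions; adapt them to your own setup):

```dockerfile
# Illustrative base image; pick one matching your PHP version
FROM php:8.2-cli

# TransformersPHP needs the FFI extension; it may need to be
# built and enabled in the official PHP images.
RUN apt-get update && apt-get install -y libffi-dev \
    && docker-php-ext-install ffi

# Composer must run inside the image so the platform-specific
# ONNX shared libraries are resolved for the container's OS/arch.
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

WORKDIR /app
COPY composer.json composer.lock ./

# Run composer inside the container, not on the host.
RUN composer install --no-dev --optimize-autoloader

COPY . .
```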
## PHP FFI Extension
TransformersPHP uses the PHP FFI extension to interact with the ONNX Runtime. The FFI extension is included by default in PHP 7.4 and later, but it may not be enabled. If it isn't, you can enable it by uncommenting (removing the `;` from the beginning of) the following line in your `php.ini` file:
```ini
extension=ffi
```

After making these changes, restart your web server or PHP-FPM service.
## Documentation
For more detailed information on how to use the library, check out the documentation: [https://codewithkyrian.github.io/transformers-php](https://codewithkyrian.github.io/transformers-php)
## Usage
By default, TransformersPHP uses hosted pretrained ONNX models. For supported tasks, models that have been converted to work with [Xenova's Transformers.js](https://huggingface.co/models?library=transformers.js) on HuggingFace should work out of the box with TransformersPHP.
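For instance, a pipeline with an explicitly chosen model might look like this (a sketch; the second `pipeline` argument for the model name and the model id are assumptions based on Transformers.js conventions):

```php
<?php

use function Codewithkyrian\Transformers\Pipelines\pipeline;

// Use a specific ONNX model from the Hub that was converted
// for Transformers.js (example model id).
$translator = pipeline('translation_en_to_fr', 'Xenova/t5-small');

$result = $translator('How are you today?');

print_r($result);
```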
## Configuration
```php
use Codewithkyrian\Transformers\Transformers;

Transformers::setup()
    // ... your chained set* configuration calls ...
    ->apply(); // Apply the configuration
```
You can call the `set` methods in any order, or leave any out entirely, in which case the default values are used. For more information on the configuration options and what they mean, check out the [documentation](https://codewithkyrian.github.io/transformers-php/configuration).
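As an illustration, overriding just the cache directory might look like this (a sketch; `setCacheDir` is an assumed method name here — check the configuration docs for the exact option setters):

```php
<?php

use Codewithkyrian\Transformers\Transformers;

// Override only the cache directory; every other
// option keeps its default value.
Transformers::setup()
    ->setCacheDir(__DIR__.'/.transformers-cache')
    ->apply();
```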
161
144
162
145
## Convert your models to ONNX
TransformersPHP only works with ONNX models; therefore, you must convert your PyTorch, TensorFlow, or JAX models to ONNX. We recommend using the [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) from Transformers.js, which uses [🤗 Optimum](https://huggingface.co/docs/optimum) behind the scenes to perform the conversion and quantization of your model.
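A sketch of running that script (flags and layout per the Transformers.js repository; the model id is only an example):

```shell
# Clone Transformers.js and install the conversion dependencies
git clone https://github.com/huggingface/transformers.js.git
cd transformers.js
pip install -r scripts/requirements.txt

# Convert (and quantize) an example model to ONNX
python -m scripts.convert --quantize --model_id bert-base-uncased
```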
By default, TransformersPHP automatically retrieves model weights (in ONNX format) from the Hugging Face model hub the first time you use a pipeline or pretrained model. This can lead to a slight delay during initial use. To improve the user experience, it's recommended to pre-download the models you intend to use before running them in your PHP application, especially for larger models. One way to do that is to run the request once manually, but TransformersPHP also comes with a command-line tool to help you do just that:
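For example (a sketch; the binary name and `download` sub-command are assumptions based on the package's Composer binary — see the docs for the exact invocation):

```shell
# Pre-download a model so the first request doesn't pay the download cost
./vendor/bin/transformers download Xenova/distilbert-base-uncased-finetuned-sst-2-english
```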
You can install them manually using the following command:

```bash
./vendor/bin/transformers install
```
> [!CAUTION]
> The shared libraries are platform-specific, so it's important to run the `composer require` or `transformers install` command on the target platform where the code will be executed. In most cases, this will be your development machine or a server where you deploy your application, but if you're using a Docker container, run the command inside that container.
That's it! You're now ready to use TransformersPHP in your PHP application.
Since TransformersPHP operates exclusively with ONNX models, you'll need to convert any models you've developed or plan to use from PyTorch, TensorFlow, or JAX into the ONNX format.
For this conversion process, we recommend using
the [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) provided by the Transformers.js project. This script converts models from PyTorch, TensorFlow, and JAX to ONNX format and, most importantly, outputs them in a folder structure that is compatible with TransformersPHP. Behind the scenes, the script uses [🤗 Optimum](https://huggingface.co/docs/optimum) from Hugging Face to convert and quantize the models.
But let's be real, not all PHP developers are fans of Python, or even have a Python environment set up. And that's okay.