
Commit 8ce4c9c

chore: Update README and getting-started documentation for clarity and consistency
- Enhanced explanations for model conversion and usage to better guide users.
1 parent e01a1c3 commit 8ce4c9c

2 files changed: +17 −42 lines

README.md

Lines changed: 15 additions & 34 deletions
@@ -13,24 +13,15 @@
 <a href="https://github.com/codewithkyrian/transformers-php"><img src="https://img.shields.io/github/repo-size/codewithkyrian/transformers-php" alt="Documentation"></a>
 </p>
 
-TransformersPHP is designed to be functionally equivalent to the Python library, while still maintaining the same level
-of performance and ease of use. This library is built on top of the Hugging Face's Transformers library, which provides
-thousands of pre-trained models in 100+ languages. It is designed to be a simple and easy-to-use library for PHP
-developers using a similar API to the Python library. These models can be used for a variety of tasks, including text
-generation, summarization, translation, and more.
+TransformersPHP is designed to be functionally equivalent to the Python library, while still maintaining the same level of performance and ease of use. This library is built on top of Hugging Face's Transformers library, which provides thousands of pre-trained models in 100+ languages. It is designed to be a simple, easy-to-use library for PHP developers, with an API similar to the Python library's. These models can be used for a variety of tasks, including text generation, summarization, translation, and more.
 
-TransformersPHP uses [ONNX Runtime](https://onnxruntime.ai/) to run the models, which is a high-performance scoring
-engine for Open Neural Network Exchange (ONNX) models. You can easily convert any PyTorch or TensorFlow model to ONNX
-and use it with TransformersPHP using [🤗 Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).
+TransformersPHP uses [ONNX Runtime](https://onnxruntime.ai/), a high-performance scoring engine for Open Neural Network Exchange (ONNX) models, to run the models. You can easily convert any PyTorch or TensorFlow model to ONNX and use it with TransformersPHP via [🤗 Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).
 
-TO learn more about the library and how it works, head over to
-our [extensive documentation](https://codewithkyrian.github.io/transformers-php/introduction).
+To learn more about the library and how it works, head over to our [extensive documentation](https://codewithkyrian.github.io/transformers-php/introduction).
 
 ## Quick tour
 
-Because TransformersPHP is designed to be functionally equivalent to the Python library, it's super easy to learn from
-existing Python or Javascript code. We provide the `pipeline` API, which is a high-level, easy-to-use API that groups
-together a model with its necessary preprocessing and postprocessing steps.
+Because TransformersPHP is designed to be functionally equivalent to the Python library, it's super easy to learn from existing Python or JavaScript code. We provide the `pipeline` API: a high-level, easy-to-use API that groups a model together with its necessary preprocessing and postprocessing steps.
 
 <table>
 <tr>
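The Quick tour paragraph in the hunk above introduces the `pipeline` API. As an illustrative sketch of what that looks like in PHP (the namespace and task name follow the TransformersPHP documentation, but treat the exact signatures as assumptions; the model is fetched from the Hugging Face hub on first use, so this requires network access):

```php
<?php
// Hedged sketch: assumes codewithkyrian/transformers is installed via Composer.
require 'vendor/autoload.php';

use function Codewithkyrian\Transformers\Pipelines\pipeline;

// pipeline() bundles a model with its preprocessing and postprocessing steps,
// mirroring the Python library's high-level API.
$classifier = pipeline('sentiment-analysis');

$result = $classifier('TransformersPHP makes ML in PHP a breeze!');
// $result is expected to carry a label and a confidence score.
```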
@@ -104,15 +95,11 @@ composer require codewithkyrian/transformers
 ```
 
 > [!CAUTION]
-> The ONNX library is platform-specific, so it's important to run the composer require command on the target platform
-> where the code will be executed. In most cases, this will be your development machine or a server where you deploy
-> your application, but if you're using a Docker container, run the `composer require` command inside that container.
+> The ONNX library is platform-specific, so it's important to run the `composer require` command on the target platform where the code will be executed. In most cases, this will be your development machine or a server where you deploy your application, but if you're using a Docker container, run the `composer require` command inside that container.
 
 ## PHP FFI Extension
 
-TransformersPHP uses the PHP FFI extension to interact with the ONNX runtime. The FFI extension is included by default
-in PHP 7.4 and later, but it may not be enabled by default. If the FFI extension is not enabled, you can enable it by
-uncommenting(remove the `;` from the beginning of the line) the
+TransformersPHP uses the PHP FFI extension to interact with the ONNX runtime. The FFI extension ships with PHP 7.4 and later, but it may not be enabled by default. If it is not enabled, you can enable it by uncommenting (removing the `;` from the beginning of the line) the
 following line in your `php.ini` file:
 
 ```ini
@@ -129,14 +116,11 @@ After making these changes, restart your web server or PHP-FPM service, and you
 
 ## Documentation
 
-For more detailed information on how to use the library, check out the
-documentation : [https://codewithkyrian.github.io/transformers-php](https://codewithkyrian.github.io/transformers-php)
+For more detailed information on how to use the library, check out the documentation: [https://codewithkyrian.github.io/transformers-php](https://codewithkyrian.github.io/transformers-php)
 
 ## Usage
 
-By default, TransformersPHP uses hosted pretrained ONNX models. For supported tasks, models that have been converted to
-work with [Xenova's Transformers.js](https://huggingface.co/models?library=transformers.js) on HuggingFace should work
-out of the box with TransformersPHP.
+By default, TransformersPHP uses hosted pretrained ONNX models. For supported tasks, models that have been converted to work with [Xenova's Transformers.js](https://huggingface.co/models?library=transformers.js) on Hugging Face should work out of the box with TransformersPHP.
 
 ## Configuration
 
@@ -155,23 +139,20 @@ Transformers::setup()
     ->apply(); // Apply the configuration
 ```
 
-You can call the `set` methods in any order, or leave any out entirely, in which case, it uses the default values. For
-more information on the configuration options and what they mean, checkout
+You can call the `set` methods in any order, or leave any out entirely, in which case the default values are used. For more information on the configuration options and what they mean, check out
 the [documentation](https://codewithkyrian.github.io/transformers-php/configuration).
 
 ## Convert your models to ONNX
 
-TransformersPHP only works with ONNX models, therefore, you must convert your PyTorch, TensorFlow or JAX models to
-ONNX. It is recommended to use [🤗 Optimum](https://huggingface.co/docs/optimum) to perform the conversion and
-quantization of your model.
+TransformersPHP only works with ONNX models, so you must convert your PyTorch, TensorFlow, or JAX models to ONNX. We recommend using the [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) from Transformers.js, which uses [🤗 Optimum](https://huggingface.co/docs/optimum) behind the scenes to perform the conversion and quantization of your model.
+
+```
+python -m convert --quantize --model_id <model_name_or_path>
+```
 
 ## Pre-Download Models
 
-By default, TransformersPHP automatically retrieves model weights (ONNX format) from the Hugging Face model hub when
-you first use a pipeline or pretrained model. This can lead to a slight delay during the initial use. To improve the
-user experience, it's recommended to pre-download the models you intend to use before running them in your PHP
-application, especially for larger models. One way to do that is run the request once manually, but TransformersPHP
-also comes with a command line tool to help you do just that:
+By default, TransformersPHP automatically retrieves model weights (ONNX format) from the Hugging Face model hub when you first use a pipeline or pretrained model. This can lead to a slight delay during initial use. To improve the user experience, it's recommended to pre-download the models you intend to use before running them in your PHP application, especially for larger models. One way to do that is to run the request once manually, but TransformersPHP also comes with a command-line tool to help you do just that:
 
 ```bash
 ./vendor/bin/transformers download <model_identifier> [<task>] [options]
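The caution earlier about the ONNX binaries being platform-specific is easiest to respect in Docker by installing dependencies during the image build. A minimal, hypothetical Dockerfile sketch (base image, file names, and entry point are assumptions, not taken from the project docs):

```dockerfile
# Hypothetical example: run composer inside the container so the
# platform-specific ONNX Runtime binaries match the image's OS and CPU arch.
FROM php:8.2-cli
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /app
# Install the library in the build, never by copying vendor/ from the host.
RUN composer require codewithkyrian/transformers
COPY . .
CMD ["php", "app.php"]
```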

docs/getting-started.md

Lines changed: 2 additions & 8 deletions
@@ -30,10 +30,7 @@ can install them manually using the following command:
 ```
 
 > [!CAUTION]
-> The shared libraries is platform-specific, so it's important to run the `composer require`, or `transformers install`
-> command on the target platform where the code will be executed. In most cases, this will be your development machine
-> or a server where you deploy your application, but if you're using a Docker container, run the `composer require`
-> command inside that container.
+> The shared libraries are platform-specific, so it's important to run the `composer require` or `transformers install` command on the target platform where the code will be executed. In most cases, this will be your development machine or a server where you deploy your application, but if you're using a Docker container, run the `composer require` command inside that container.
 
 That's it! You're now ready to use TransformersPHP in your PHP application.
 
@@ -89,10 +86,7 @@ Since TransformersPHP operates exclusively with ONNX models, you'll need to conv
 developed or plan to use from PyTorch, TensorFlow, or JAX into the ONNX format.
 
 For this conversion process, we recommend using
-the [conversion script](https://github.com/xenova/transformers.js/blob/main/scripts/convert.py)
-provided by the Transformers.js project. This script is designed to convert models from PyTorch, TensorFlow, and JAX to
-ONNX format, and most importantly, outputs it in a folder structure that is compatible with TransformersPHP. Behind the
-scenes, the script uses [🤗 Optimum](https://huggingface.co/docs/optimum) from Hugging Face to convert and quantize the
+the [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) provided by the Transformers.js project. This script converts models from PyTorch, TensorFlow, and JAX to ONNX format and, most importantly, outputs them in a folder structure compatible with TransformersPHP. Behind the scenes, the script uses [🤗 Optimum](https://huggingface.co/docs/optimum) from Hugging Face to convert and quantize the
 models.
 
 But let's be real, not all PHP developers are fans of Python, or even have a Python environment set up. And that's okay.
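The conversion advice above can be sketched as a shell workflow (the repository layout and requirements file are assumptions based on the Transformers.js repository; the model id is just an example, and the run requires Python and network access):

```shell
# Hypothetical workflow: convert and quantize a model with the
# Transformers.js conversion script, then use the output with TransformersPHP.
git clone https://github.com/huggingface/transformers.js
cd transformers.js/scripts
pip install -r requirements.txt
python -m convert --quantize --model_id bert-base-uncased
```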
