
Commit 07c7662

docs: s390x build indent

Signed-off-by: Aaron Teo <[email protected]>

1 parent: d3903f5

File tree: 1 file changed (+31 -31 lines)

docs/build-s390x.md (31 additions & 31 deletions)
````diff
@@ -66,48 +66,48 @@ cmake --build build --config Release -j $(nproc)
 
 ## Getting GGUF Models
 
-In order to run GGUF models, the model needs to be converted to Big-Endian. You can achieve this in three cases:
+All models need to be converted to Big-Endian. You can achieve this in three ways:
 
 1. Use pre-converted models verified for use on IBM Z & LinuxONE (easiest)
 
-You can find popular models pre-converted and verified at [s390x Ready Models](hf.co/collections/taronaeo/s390x-ready-models-672765393af438d0ccb72a08).
+    You can find popular models pre-converted and verified at [s390x Ready Models](hf.co/collections/taronaeo/s390x-ready-models-672765393af438d0ccb72a08).
 
-These models and their respective tokenizers are verified to run correctly on IBM Z & LinuxONE.
+    These models and their respective tokenizers are verified to run correctly on IBM Z & LinuxONE.
 
````
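For context on why the conversion is needed: s390x is a big-endian architecture, while GGUF files are normally written little-endian, so every multi-byte value in the file has its bytes reversed. A minimal illustration using only coreutils (the value and file names are made up for the demo):

```shell
# Write the 32-bit value 0x01020304 in both byte orders and dump the raw bytes.
printf '\004\003\002\001' > le.bin   # little-endian: least significant byte first
printf '\001\002\003\004' > be.bin   # big-endian (s390x native): most significant byte first
od -An -tx1 le.bin   # 04 03 02 01
od -An -tx1 be.bin   # 01 02 03 04
```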

````diff
 
 2. Convert safetensors model to GGUF Big-Endian directly (recommended)
 
-```bash
-python3 convert_hf_to_gguf.py \
-  --outfile model-name-be.f16.gguf \
-  --outtype f16 \
-  --bigendian \
-  model-directory/
-```
-
-For example,
+    ```bash
+    python3 convert_hf_to_gguf.py \
+      --outfile model-name-be.f16.gguf \
+      --outtype f16 \
+      --bigendian \
+      model-directory/
+    ```
 
-```bash
-python3 convert_hf_to_gguf.py \
-  --outfile granite-3.3-2b-instruct-be.f16.gguf \
-  --outtype f16 \
-  --bigendian \
-  granite-3.3-2b-instruct/
-```
+    For example,
+
+    ```bash
+    python3 convert_hf_to_gguf.py \
+      --outfile granite-3.3-2b-instruct-be.f16.gguf \
+      --outtype f16 \
+      --bigendian \
+      granite-3.3-2b-instruct/
+    ```
````
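If you script the conversion above for several models, the `--outfile` name can be derived from the model directory so it follows the `<model>-be.<outtype>.gguf` pattern used in the example. A small sketch in plain shell (the directory name is reused from the example; the naming pattern is a convention of these docs, not a requirement of the converter):

```shell
# Derive the Big-Endian output filename from the model directory name.
model_dir=granite-3.3-2b-instruct/   # trailing slash as in the example
outtype=f16
outfile="$(basename "$model_dir")-be.${outtype}.gguf"
echo "$outfile"   # granite-3.3-2b-instruct-be.f16.gguf
```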
````diff
 
 3. Convert existing GGUF Little-Endian model to Big-Endian
 
-```bash
-python3 gguf-py/gguf/scripts/gguf_convert_endian.py model-name.f16.gguf BIG
-```
-
-For example,
-```bash
-python3 gguf-py/gguf/scripts/gguf_convert_endian.py granite-3.3-2b-instruct-le.f16.gguf BIG
-mv granite-3.3-2b-instruct-le.f16.gguf granite-3.3-2b-instruct-be.f16.gguf
-```
-
-**Notes:**
-- The GGUF endian conversion script may not support all data types at the moment and may fail for some models/quantizations. When that happens, please try manually converting the safetensors model to GGUF Big-Endian via Step 2.
+    ```bash
+    python3 gguf-py/gguf/scripts/gguf_convert_endian.py model-name.f16.gguf BIG
+    ```
+
+    For example,
+    ```bash
+    python3 gguf-py/gguf/scripts/gguf_convert_endian.py granite-3.3-2b-instruct-le.f16.gguf BIG
+    mv granite-3.3-2b-instruct-le.f16.gguf granite-3.3-2b-instruct-be.f16.gguf
+    ```
+
+    **Notes:**
+    - The GGUF endian conversion script may not support all data types at the moment and may fail for some models/quantizations. When that happens, please try manually converting the safetensors model to GGUF Big-Endian via Step 2.
 
 
 
````
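Note that the conversion in step 3 happens in place and the script does not rename the file, hence the trailing `mv` in the example. The new name can be derived with POSIX parameter expansion; a sketch (the file name is reused from the example, and the empty stand-in file is only for illustration):

```shell
f=granite-3.3-2b-instruct-le.f16.gguf
: > "$f"                        # empty stand-in for a freshly converted model
new="${f%-le.*}-be.${f#*-le.}"  # swap the -le marker for -be, keep the rest
mv "$f" "$new"
echo "$new"   # granite-3.3-2b-instruct-be.f16.gguf
```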
