
Commit 924e1eb

yoshihyoda authored and rick-github committed
docs: fix typos and remove trailing whitespaces (ollama#11554)
1 parent a904ce6 commit 924e1eb

File tree: 4 files changed (+8 -8 lines)

- docs/api.md
- docs/development.md
- docs/openai.md
- docs/troubleshooting.md

docs/api.md

Lines changed: 2 additions & 2 deletions
@@ -500,11 +500,11 @@ The `message` object has the following fields:
 - `thinking`: (for thinking models) the model's thinking process
 - `images` (optional): a list of images to include in the message (for multimodal models such as `llava`)
 - `tool_calls` (optional): a list of tools in JSON that the model wants to use
-- `tool_name` (optional): add the name of the tool that was executed to inform the model of the result 
+- `tool_name` (optional): add the name of the tool that was executed to inform the model of the result

 Advanced parameters (optional):

-- `format`: the format to return a response in. Format can be `json` or a JSON schema. 
+- `format`: the format to return a response in. Format can be `json` or a JSON schema.
 - `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
 - `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
 - `keep_alive`: controls how long the model will stay loaded into memory following the request (default: `5m`)
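
The fields touched by this hunk are part of Ollama's `/api/chat` request shape. A minimal Python sketch of a non-streaming request that uses `tool_name` together with the advanced parameters above (the model name and the tool result are illustrative placeholders, not taken from the docs):

```python
import json
import requests

payload = {
    "model": "llama3.2",  # illustrative model name
    "messages": [
        {"role": "user", "content": "What is the weather in Toronto?"},
        # Report the result of an already-executed tool back to the model;
        # `tool_name` tells the model which tool produced this content.
        {"role": "tool", "tool_name": "get_weather", "content": "11 degrees and sunny"},
    ],
    "format": "json",               # or a JSON schema
    "options": {"temperature": 0},  # Modelfile-style parameters
    "stream": False,                # return a single response object
    "keep_alive": "5m",             # keep the model loaded for 5 minutes
}

resp = requests.post("http://localhost:11434/api/chat", json=payload)
resp.raise_for_status()
print(json.dumps(resp.json()["message"], indent=2))
```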

docs/development.md

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ To run tests, use `go test`:
 go test ./...
 ```

-> NOTE: In rare cirumstances, you may need to change a package using the new 
+> NOTE: In rare circumstances, you may need to change a package using the new
 > "synctest" package in go1.24.
 >
 > If you do not have the "synctest" package enabled, you will not see build or

docs/openai.md

Lines changed: 1 addition & 1 deletion
@@ -72,7 +72,7 @@ client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
 # Define the schema for the response
 class FriendInfo(BaseModel):
     name: str
-    age: int 
+    age: int
     is_available: bool

 class FriendList(BaseModel):
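
The two Pydantic models in this hunk come from the structured-outputs example in openai.md. A minimal sketch of how such models are typically passed to the OpenAI-compatible endpoint (the `friends` field, model name, and prompt are assumptions; the body of `FriendList` is cut off by the hunk):

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

class FriendInfo(BaseModel):
    name: str
    age: int
    is_available: bool

class FriendList(BaseModel):
    friends: list[FriendInfo]  # assumed field; not shown in this hunk

# `parse` requests JSON matching the schema and validates it client-side.
completion = client.beta.chat.completions.parse(
    model="llama3.1",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": "I have two friends: Ollama (22, available) and Alonso (23, busy).",
        }
    ],
    response_format=FriendList,
)
print(completion.choices[0].message.parsed)
```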

docs/troubleshooting.md

Lines changed: 4 additions & 4 deletions
@@ -9,7 +9,7 @@ cat ~/.ollama/logs/server.log
 On **Linux** systems with systemd, the logs can be found with this command:

 ```shell
-journalctl -u ollama --no-pager --follow --pager-end 
+journalctl -u ollama --no-pager --follow --pager-end
 ```

 When you run Ollama in a **container**, the logs go to stdout/stderr in the container:
@@ -23,7 +23,7 @@ docker logs <container-name>
 If manually running `ollama serve` in a terminal, the logs will be on that terminal.

 When you run Ollama on **Windows**, there are a few different locations. You can view them in the explorer window by hitting `<cmd>+R` and type in:
-- `explorer %LOCALAPPDATA%\Ollama` to view logs. The most recent server logs will be in `server.log` and older logs will be in `server-#.log` 
+- `explorer %LOCALAPPDATA%\Ollama` to view logs. The most recent server logs will be in `server.log` and older logs will be in `server-#.log`
 - `explorer %LOCALAPPDATA%\Programs\Ollama` to browse the binaries (The installer adds this to your user PATH)
 - `explorer %HOMEPATH%\.ollama` to browse where models and configuration is stored

@@ -38,7 +38,7 @@ Join the [Discord](https://discord.gg/ollama) for help interpreting the logs.

 ## LLM libraries

-Ollama includes multiple LLM libraries compiled for different GPUs and CPU vector features. Ollama tries to pick the best one based on the capabilities of your system. If this autodetection has problems, or you run into other problems (e.g. crashes in your GPU) you can workaround this by forcing a specific LLM library. `cpu_avx2` will perform the best, followed by `cpu_avx` an the slowest but most compatible is `cpu`. Rosetta emulation under MacOS will work with the `cpu` library.
+Ollama includes multiple LLM libraries compiled for different GPUs and CPU vector features. Ollama tries to pick the best one based on the capabilities of your system. If this autodetection has problems, or you run into other problems (e.g. crashes in your GPU) you can workaround this by forcing a specific LLM library. `cpu_avx2` will perform the best, followed by `cpu_avx` and the slowest but most compatible is `cpu`. Rosetta emulation under MacOS will work with the `cpu` library.

 In the server log, you will see a message that looks something like this (varies from release to release):

@@ -97,7 +97,7 @@ If none of those resolve the problem, gather additional information and file an

 On linux, AMD GPU access typically requires `video` and/or `render` group membership to access the `/dev/kfd` device. If permissions are not set up correctly, Ollama will detect this and report an error in the server log.

-When running in a container, in some Linux distributions and container runtimes, the ollama process may be unable to access the GPU. Use `ls -lnd /dev/kfd /dev/dri /dev/dri/*` on the host system to determine the **numeric** group IDs on your system, and pass additional `--group-add ...` arguments to the container so it can access the required devices. For example, in the following output `crw-rw---- 1 0 44 226, 0 Sep 16 16:55 /dev/dri/card0` the group ID column is `44` 
+When running in a container, in some Linux distributions and container runtimes, the ollama process may be unable to access the GPU. Use `ls -lnd /dev/kfd /dev/dri /dev/dri/*` on the host system to determine the **numeric** group IDs on your system, and pass additional `--group-add ...` arguments to the container so it can access the required devices. For example, in the following output `crw-rw---- 1 0 44 226, 0 Sep 16 16:55 /dev/dri/card0` the group ID column is `44`

 If you are experiencing problems getting Ollama to correctly discover or use your GPU for inference, the following may help isolate the failure.
 - `AMD_LOG_LEVEL=3` Enable info log levels in the AMD HIP/ROCm libraries. This can help show more detailed error codes that can help troubleshoot problems
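
The group-ID step in the last hunk can also be scripted instead of read off the `ls -lnd` output by hand. A small sketch, assuming the device paths are as shown in the docs (the output format is illustrative):

```python
import glob
import os

# Gather the numeric group IDs owning the GPU device nodes, so they can be
# passed to the container runtime as --group-add arguments.
devices = ["/dev/kfd"] + glob.glob("/dev/dri/*")
gids = sorted({os.stat(p).st_gid for p in devices if os.path.exists(p)})

# Prints something like: --group-add 44 (one flag per distinct group ID)
print(" ".join(f"--group-add {gid}" for gid in gids))
```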

0 commit comments