Commit dc27e59

Merge branch 'master' into release/1.7.x
2 parents 1d1f3ee + 0ba86cf commit dc27e59

102 files changed (+5389 −2206 lines changed)

docs-gb/SUMMARY.md

Lines changed: 18 additions & 11 deletions

```diff
@@ -1,6 +1,8 @@
+# Table of contents
+
 * [MLServer](README.md)
 * [Getting Started](getting-started.md)
-* [User Guide](user-guide/index.md)
+* [User Guide](user-guide/README.md)
 * [Content Types (and Codecs)](user-guide/content-type.md)
 * [OpenAPI Support](user-guide/openapi.md)
 * [Parallel Inference](user-guide/parallel-inference.md)
@@ -22,15 +24,19 @@
 * [Alibi-Explain](runtimes/alibi-explain.md)
 * [HuggingFace](runtimes/huggingface.md)
 * [Custom](runtimes/custom.md)
-* [Reference](reference/README.md)
-* [MLServer Settings](reference/settings.md)
-* [Model Settings](reference/model-settings.md)
-* [MLServer CLI](reference/cli.md)
-* [Python API](reference/python-api/README.md)
-* [MLModel](reference/api/model.md)
-* [Types](reference/api/types.md)
-* [Codecs](reference/api/codecs.md)
-* [Metrics](reference/api/metrics.md)
+
+* [API Reference](api/api-reference.md)
+* [MLServer Settings](api/Settings.md)
+* [Model Settings](api/ModelSettings.md)
+* [Model Parameters](api/ModelParameters.md)
+* [MLServer CLI](api/CLI.md)
+<!-- * [MLServer CLI](api-reference/mlserver_cli.md) -->
+* [Python API](api/PythonAPI.md)
+* [MLModel](api/MLModel.md)
+* [Types](api/Types.md)
+* [Codecs](api/Codecs.md)
+* [Metrics](api/Metrics.md)
+
 * [Examples](examples/README.md)
 * [Serving Scikit-Learn models](examples/sklearn/README.md)
 * [Serving XGBoost models](examples/xgboost/README.md)
@@ -47,4 +53,5 @@
 * [Serving models through Kafka](examples/kafka/README.md)
 * [Streaming](examples/streaming/README.md)
 * [Deploying a Custom Tensorflow Model with MLServer and Seldon Core](examples/cassava/README.md)
-* [Changelog](changelog.md)
+* [Changelog](changelog.md)
+* [Release Notes](https://github.com/SeldonIO/MLServer/releases)
```

docs-gb/api/CLI.md (new file)

Lines changed: 143 additions & 0 deletions

# MLServer CLI

The MLServer package includes a `mlserver` CLI designed to help with common tasks in a model's lifecycle. You can see a high-level outline at any time via:

```bash
mlserver --help
```
## mlserver

Command-line interface to manage MLServer models.

```bash
mlserver [OPTIONS] COMMAND [ARGS]...
```

### Options

- `--version` (Default: `False`)
  Show the version and exit.
## build

Build a Docker image for a custom MLServer runtime.

```bash
mlserver build [OPTIONS] FOLDER
```

### Options

- `-t`, `--tag` `<text>`
- `--no-cache` (Default: `False`)

### Arguments

- `FOLDER`
  Required argument
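Putting the options above together, a hypothetical `mlserver build` invocation might look as follows; the folder name and image tag are illustrative, not taken from the MLServer docs:

```shell
# Illustrative values: build an image from the custom runtime code in
# ./my-runtime, tag it, and bypass the Docker layer cache.
mlserver build ./my-runtime --tag my-registry/my-runtime:0.1.0 --no-cache
```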
## dockerfile

Generate a Dockerfile.

```bash
mlserver dockerfile [OPTIONS] FOLDER
```

### Options

- `-i`, `--include-dockerignore` (Default: `False`)

### Arguments

- `FOLDER`
  Required argument
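As a sketch, a hypothetical invocation (the folder name is illustrative) that also emits a `.dockerignore`:

```shell
# Illustrative value: generate a Dockerfile (plus a .dockerignore,
# enabled by --include-dockerignore) for the runtime in ./my-runtime.
mlserver dockerfile ./my-runtime --include-dockerignore
```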
## infer

Execute batch inference requests against a V2 inference server.

> Deprecated: This experimental feature will be removed in future work.

```bash
mlserver infer [OPTIONS]
```

### Options

- `--url`, `-u` `<text>` (Default: `localhost:8080`; Env: `MLSERVER_INFER_URL`)
  URL of the MLServer to send inference requests to. Should not contain `http` or `https`.
- `--model-name`, `-m` `<text>` (Required; Env: `MLSERVER_INFER_MODEL_NAME`)
  Name of the model to send inference requests to.
- `--input-data-path`, `-i` `<path>` (Required; Env: `MLSERVER_INFER_INPUT_DATA_PATH`)
  Local path to the input file containing inference requests to be processed.
- `--output-data-path`, `-o` `<path>` (Required; Env: `MLSERVER_INFER_OUTPUT_DATA_PATH`)
  Local path to the output file the inference responses will be written to.
- `--workers`, `-w` `<integer>` (Default: `10`; Env: `MLSERVER_INFER_WORKERS`)
- `--retries`, `-r` `<integer>` (Default: `3`; Env: `MLSERVER_INFER_RETRIES`)
- `--batch-size`, `-s` `<integer>` (Default: `1`; Env: `MLSERVER_INFER_BATCH_SIZE`)
  Send inference requests grouped together as micro-batches.
- `--binary-data`, `-b` (Default: `False`; Env: `MLSERVER_INFER_BINARY_DATA`)
  Send inference requests as binary data (not fully supported).
- `--verbose`, `-v` (Default: `False`; Env: `MLSERVER_INFER_VERBOSE`)
  Verbose mode.
- `--extra-verbose`, `-vv` (Default: `False`; Env: `MLSERVER_INFER_EXTRA_VERBOSE`)
  Extra verbose mode (shows detailed requests and responses).
- `--transport`, `-t` `<choice>` (Options: `rest` | `grpc`; Default: `rest`; Env: `MLSERVER_INFER_TRANSPORT`)
  Transport type used to send inference requests. Can be `rest` or `grpc` (not yet supported).
- `--request-headers`, `-H` `<text>` (Env: `MLSERVER_INFER_REQUEST_HEADERS`)
  Headers to be set on each inference request sent to the server. Multiple options are allowed, e.g. `-H 'Header1: Val1' -H 'Header2: Val2'`. When set through the environment variable, provide them as `'Header1:Val1 Header2:Val2'`.
- `--timeout` `<integer>` (Default: `60`; Env: `MLSERVER_INFER_CONNECTION_TIMEOUT`)
  Connection timeout to be passed to tritonclient.
- `--batch-interval` `<float>` (Default: `0`; Env: `MLSERVER_INFER_BATCH_INTERVAL`)
  Minimum time interval (in seconds) between requests made by each worker.
- `--batch-jitter` `<float>` (Default: `0`; Env: `MLSERVER_INFER_BATCH_JITTER`)
  Maximum random jitter (in seconds) added to the batch interval between requests.
- `--use-ssl` (Default: `False`; Env: `MLSERVER_INFER_USE_SSL`)
  Use SSL in communications with the inference server.
- `--insecure` (Default: `False`; Env: `MLSERVER_INFER_INSECURE`)
  Disable SSL verification in communications. Use with caution.
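Combining the flags above, a hypothetical batch run could look like the following; the model name and file paths are illustrative:

```shell
# Illustrative values: send the V2 inference requests in ./requests.txt
# to the "my-model" model on localhost:8080, writing responses to
# ./responses.txt, in micro-batches of 4 over REST.
mlserver infer \
  --url localhost:8080 \
  --model-name my-model \
  --input-data-path ./requests.txt \
  --output-data-path ./responses.txt \
  --batch-size 4 \
  --transport rest
```

Each option can also be supplied through its `MLSERVER_INFER_*` environment variable (e.g. `MLSERVER_INFER_MODEL_NAME` instead of `--model-name`).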
## init

Generate a base project template.

```bash
mlserver init [OPTIONS]
```

### Options

- `-t`, `--template` `<text>` (Default: `https://github.com/EthicalML/sml-security/`)
## start

Start serving a machine learning model with MLServer.

```bash
mlserver start [OPTIONS] FOLDER
```

### Arguments

- `FOLDER`
  Required argument
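A minimal hypothetical invocation; the folder name is illustrative, and the folder is assumed to contain the model's settings files:

```shell
# Illustrative value: serve the model configured under ./my-model
# (e.g. via a model-settings.json inside that folder).
mlserver start ./my-model
```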
