One of the easiest ways to get started using TensorFlow Serving is with
[Docker](http://www.docker.com/).

```shell
# Download the TensorFlow Serving Docker image and repo
docker pull tensorflow/serving
git clone https://github.com/tensorflow/serving
# Location of demo models
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"

# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
  -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
  -e MODEL_NAME=half_plus_two \
  tensorflow/serving &

# Query the model using the predict API
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/half_plus_two:predict
# Returns => { "predictions": [2.5, 3.0, 4.5] }
```
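As the name suggests, the `half_plus_two` demo model computes `y = x/2 + 2` for each input instance, which is where the predicted values in the response come from. As a quick local sanity check of the expected output (a sketch run on your own machine, not part of the serving workflow):

```shell
# half_plus_two applies y = x/2 + 2 to each input instance
for x in 1.0 2.0 5.0; do
  awk -v x="$x" 'BEGIN { printf "%.1f\n", x / 2 + 2 }'
done
# => 2.5
#    3.0
#    4.5
```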

For additional serving endpoints, see the [Client REST API](api_rest.md).

## Install Docker
