Commit d3399f5

Fix admonitions
1 parent 3af20e0 commit d3399f5

File tree

6 files changed: 50 additions, 37 deletions

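Every hunk in this commit applies the same mechanical rewrite: a plain `NOTE:`/`TIP:` paragraph becomes a MkDocs-style `!!!` admonition whose body is indented four spaces. A small script along these lines could perform the conversion (a hedged sketch only; the function name and regex are illustrative, and the actual commit may well have been edited by hand):

```python
import re

# Matches a paragraph-opening "Note:", "NOTE:", "Tip:", or "TIP:" marker.
ADMONITION_RE = re.compile(r"^(Note|NOTE|Tip|TIP): ?(.*)$")

def convert_admonitions(text):
    """Rewrite 'Note:'/'TIP:'-style paragraphs as MkDocs '!!!' admonitions.

    The marker line starts the admonition; subsequent non-blank lines of
    the same paragraph are indented four spaces so MkDocs keeps them
    inside the block. A blank line ends the admonition body.
    """
    out = []
    in_block = False
    for line in text.splitlines():
        m = ADMONITION_RE.match(line)
        if m:
            out.append(f"!!! {m.group(1)}")
            out.append(f"    {m.group(2)}")
            in_block = True
        elif in_block and line.strip():
            # Continuation line of the admonition paragraph.
            out.append(f"    {line}")
        else:
            in_block = False
            out.append(line)
    return "\n".join(out)
```

Note that this naive version also indents fenced code blocks that immediately follow a marker line, which matches what the diff below does for the `shell` snippets inside `TIP:` paragraphs.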

tensorflow_serving/g3doc/guide/custom_op.md

Lines changed: 4 additions & 3 deletions
@@ -11,9 +11,10 @@ explicitly:
   [this guide](https://github.com/tensorflow/custom-op))
 * You are using an already implemented op that is not shipped with TensorFlow
 
-Note: Starting in version 2.0, TensorFlow no longer distributes the contrib
-module; if you are serving a TensorFlow program using contrib ops, use this
-guide to link these ops into ModelServer explicitly.
+!!! Note
+    Starting in version 2.0, TensorFlow no longer distributes the contrib
+    module; if you are serving a TensorFlow program using contrib ops, use this
+    guide to link these ops into ModelServer explicitly.
 
 Regardless of whether you implemented the op or not, in order to serve a model
 with custom ops, you need access to the source of the op. This guide walks you

tensorflow_serving/g3doc/guide/docker.md

Lines changed: 19 additions & 15 deletions
@@ -200,8 +200,9 @@ curl -d '{"instances": [1.0, 2.0, 5.0]}' \
   -X POST http://localhost:8501/v1/models/half_plus_two:predict
 ```
 
-NOTE: Older versions of Windows and other systems without curl can download it
-[here](https://curl.haxx.se/download.html).
+!!! NOTE
+    Older versions of Windows and other systems without curl can download it
+    [here](https://curl.haxx.se/download.html).
 
 This should return a set of values:
 
@@ -271,13 +272,14 @@ desired model from our host to where models are expected in the container. We
 also pass the name of the model as an environment variable, which will be
 important when we query the model.
 
-TIP: Before querying the model, be sure to wait till you see a message like the
-following, indicating that the server is ready to receive requests:
+!!! TIP
+    Before querying the model, be sure to wait till you see a message like the
+    following, indicating that the server is ready to receive requests:
 
-```shell
-2018-07-27 00:07:20.773693: I tensorflow_serving/model_servers/main.cc:333]
-Exporting HTTP/REST API at:localhost:8501 ...
-```
+    ```shell
+    2018-07-27 00:07:20.773693: I tensorflow_serving/model_servers/main.cc:333]
+    Exporting HTTP/REST API at:localhost:8501 ...
+    ```
 
 To query the model using the predict API, you can run
 
@@ -286,21 +288,23 @@ curl -d '{"instances": [1.0, 2.0, 5.0]}' \
   -X POST http://localhost:8501/v1/models/half_plus_two:predict
 ```
 
-NOTE: Older versions of Windows and other systems without curl can download it
-[here](https://curl.haxx.se/download.html).
+!!! NOTE
+    Older versions of Windows and other systems without curl can download it
+    [here](https://curl.haxx.se/download.html).
 
 This should return a set of values:
 
 ```json
 { "predictions": [2.5, 3.0, 4.5] }
 ```
 
-TIP: Trying to run the GPU model on a machine without a GPU or without a working
-GPU build of TensorFlow Model Server will result in an error that looks like:
+!!! TIP
+    Trying to run the GPU model on a machine without a GPU or without a working
+    GPU build of TensorFlow Model Server will result in an error that looks like:
 
-```shell
-Cannot assign a device for operation 'a': Operation was explicitly assigned to /device:GPU:0
-```
+    ```shell
+    Cannot assign a device for operation 'a': Operation was explicitly assigned to /device:GPU:0
+    ```
 
 More information on using the RESTful API can be found [here](../api/api_rest.md).
 
tensorflow_serving/g3doc/guide/serving_basic.md

Lines changed: 3 additions & 2 deletions
@@ -140,8 +140,9 @@ store tensor logical name to real name mapping ('images' ->
 allows the user to refer to these tensors with their logical names when
 running inference.
 
-Note: In addition to the description above, documentation related to signature
-def structure and how to set up them up can be found [here](signature_defs.md).
+!!! Note
+    In addition to the description above, documentation related to signature
+    def structure and how to set them up can be found [here](signature_defs.md).
 
 Let's run it!
 
tensorflow_serving/g3doc/guide/setup.md

Lines changed: 13 additions & 9 deletions
@@ -8,8 +8,9 @@ The easiest and most straight-forward way of using TensorFlow Serving is with
 [Docker images](docker.md). We highly recommend this route unless you have
 specific needs that are not addressed by running in a container.
 
-TIP: This is also the easiest way to get TensorFlow Serving working with [GPU
-support](docker.md#serving-with-docker-using-your-gpu).
+!!! TIP
+    This is also the easiest way to get TensorFlow Serving working with [GPU
+    support](docker.md#serving-with-docker-using-your-gpu).
 
 ### Installing using APT
 
@@ -68,9 +69,10 @@ apt-get upgrade tensorflow-model-server
 
 <!-- common_typos_enable -->
 
-Note: In the above commands, replace tensorflow-model-server with
-tensorflow-model-server-universal if your processor does not support AVX
-instructions.
+!!! Note
+    In the above commands, replace tensorflow-model-server with
+    tensorflow-model-server-universal if your processor does not support AVX
+    instructions.
 
 ## Building from source
 
@@ -83,7 +85,8 @@ Development Dockerfiles
 [[CPU](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/tools/docker/Dockerfile.devel),
 [GPU](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/tools/docker/Dockerfile.devel-gpu)].
 
-Note: Currently we only support building binaries that run on Linux.
+!!! Note
+    Currently we only support building binaries that run on Linux.
 
 #### Installing Docker
 
@@ -184,9 +187,10 @@ For example:
 tools/run_in_docker.sh bazel build --copt=-mavx2 tensorflow_serving/...
 ```
 
-Note: These instruction sets are not available on all machines, especially with
-older processors. Use the default `--config=nativeopt` to build an optimized
-version of TensorFlow Serving for your processor if you are in doubt.
+!!! Note
+    These instruction sets are not available on all machines, especially with
+    older processors. Use the default `--config=nativeopt` to build an optimized
+    version of TensorFlow Serving for your processor if you are in doubt.
 
 
 ##### Building with GPU Support

tensorflow_serving/g3doc/tutorials/building_with_docker.md

Lines changed: 7 additions & 5 deletions
@@ -90,9 +90,10 @@ docker build --pull -t $USER/tensorflow-serving-devel -f Dockerfile.devel .
 docker build --pull -t $USER/tensorflow-serving-devel-gpu -f Dockerfile.devel-gpu .
 ```
 
-TIP: Before attempting to build an image, check the Docker Hub
-[tensorflow/serving repo](http://hub.docker.com/r/tensorflow/serving/tags/) to
-make sure an image that meets your needs doesn't already exist.
+!!! TIP
+    Before attempting to build an image, check the Docker Hub
+    [tensorflow/serving repo](http://hub.docker.com/r/tensorflow/serving/tags/) to
+    make sure an image that meets your needs doesn't already exist.
 
 Building from sources consumes a lot of RAM. If RAM is an issue on your system,
 you may limit RAM usage by specifying `--local_ram_resources=2048` while
@@ -117,8 +118,9 @@ To run the container opening the gRPC port (8500):
 docker run -it -p 8500:8500 $USER/tensorflow-serving-devel
 ```
 
-TIP: If you're running a GPU image, be sure to run using the NVIDIA runtime
-[`--runtime=nvidia`](https://github.com/NVIDIA/nvidia-docker#quick-start).
+!!! TIP
+    If you're running a GPU image, be sure to run using the NVIDIA runtime
+    [`--runtime=nvidia`](https://github.com/NVIDIA/nvidia-docker#quick-start).
 
 From here, you can follow the instructions for
 [testing a development environment](#testing-the-development-environment).

tensorflow_serving/g3doc/tutorials/performance.md

Lines changed: 4 additions & 3 deletions
@@ -12,9 +12,10 @@ Please use the [Profile Inference Requests with TensorBoard](tensorboard.md)
 guide to understand the underlying behavior of your model's computation on
 inference requests, and use this guide to iteratively improve its performance.
 
-Note: If the following quick tips do not solve your problem, please read the
-longer discussion to develop a deep understanding of what affects TensorFlow
-Serving's performance.
+!!! Note
+    If the following quick tips do not solve your problem, please read the
+    longer discussion to develop a deep understanding of what affects TensorFlow
+    Serving's performance.
 
 ## Quick Tips
 