@@ -234,15 +234,48 @@ Only the following methods are required to implement decompression within kernel
* `MicroContext::AllocateDecompressionScratchBuffer` ([micro_context.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_context.h)):
Allocates a scratch memory buffer within the `MicroInterpreter` to hold the
- decompressed tensor data.
+ decompressed tensor data. The returned scratch memory handle must be retained
+ (typically through kernel `OpData`) for use during the kernel inference operation.
* `MicroContext::GetTensorCompressionData` ([micro_context.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_context.h)):
Retrieves compressed tensor information (see [compression.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/compression.h)).
* `tflite::micro::GetTensorData` ([kernel_util.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/kernel_util.h)):
- The four argument version of this method will automatically decompress the
- tensor data into the supplied scratch memory buffer.
+ The four-parameter version of this method will automatically decompress the
+ tensor data into the supplied scratch memory buffer. The lifetime of a scratch
+ buffer is the same as the lifetime of the current kernel operator being processed.
+ Each call to the four-parameter version of this method always performs a
+ decompression operation if the supplied tensor is compressed.
Please see the [TRANSPOSE_CONV](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/transpose_conv.cc)
- reference kernel code for an example of how tensor decompression is implemented.
+ reference kernel code for an example of how to implement tensor decompression
+ within a kernel.
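+
+ As a rough illustration, the sketch below shows how a kernel might combine these
+ calls, modeled loosely on the TRANSPOSE_CONV reference kernel. The `OpData`
+ member, the `kFilterTensor` index, and the exact signatures are assumptions to
+ verify against the headers linked above.
+ ```
+ #ifdef USE_TFLM_COMPRESSION
+ // In Prepare(): reserve a decompression scratch buffer for the (possibly
+ // compressed) filter tensor and retain the handle in the kernel OpData.
+ data->filter_scratch_index =
+     micro_context->AllocateDecompressionScratchBuffer(node, kFilterTensor);
+
+ // In Eval(): fetch the compression metadata, then let the four-parameter
+ // GetTensorData() decompress into the scratch buffer when the tensor is
+ // compressed (uncompressed tensors are returned as-is).
+ const CompressionTensorData* filter_comp_td =
+     micro_context->GetTensorCompressionData(node, kFilterTensor);
+ const int8_t* filter_data = tflite::micro::GetTensorData<int8_t>(
+     micro_context, filter, filter_comp_td, data->filter_scratch_index);
+ #endif  // USE_TFLM_COMPRESSION
+ ```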
+
+ ### Alternate Decompression Memory
+
+ Alternate decompression memory regions allow specialized memory available to
+ the processor to be used as the target of a tensor decompression
+ operation. Such memory is typically mapped by the application through a linker
+ script. The application would then use a C++ attribute of the form:
+ ```
+ __attribute__ ((section(".your-specialized-memory")))
+ ```
+ to link one or more application symbols to the specialized memory region.
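+
+ For example, a statically allocated buffer intended to serve as decompression
+ memory could be placed in that region as follows (the section name matches the
+ linker script; the buffer name and size here are only illustrative):
+ ```
+ // Placed in the linker-script-defined region ".your-specialized-memory".
+ __attribute__((section(".your-specialized-memory")))
+ static uint8_t g_decompression_memory[8 * 1024];
+ ```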
+
+ Only a single API is required to use alternate decompression memory regions in
+ an application:
+ * `MicroInterpreter::SetDecompressionMemory` ([micro_interpreter.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_interpreter.h)):
+ Specify the address and size of each alternate decompression
+ memory region. This method must be called before the application calls
+ `MicroInterpreter::AllocateTensors`. The lifetime of the method parameter must
+ equal the lifetime of the `MicroInterpreter` instance. The memory regions
+ specified by the method parameter must not overlap, and each region is considered
+ to be non-contiguous with all other regions.
+
+ Specifying alternate decompression memory will cause `MicroContext::AllocateDecompressionScratchBuffer`
+ and `tflite::micro::GetTensorData` (the four-parameter version)
+ to automatically attempt to allocate the decompression destination buffer
+ from one of the alternate memory regions. If no
+ alternate memory region of sufficient size is available, a scratch buffer will
+ be allocated within the `MicroInterpreter` arena.
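+
+ A minimal usage sketch follows, assuming a region descriptor holding an address
+ and a size, a buffer placed in the specialized memory section as shown earlier,
+ and a region list that outlives the interpreter; please verify the exact
+ parameter type against [micro_interpreter.h](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/micro_interpreter.h).
+ ```
+ #include "tensorflow/lite/micro/micro_interpreter.h"
+
+ // Decompression destination buffer in the specialized region (illustrative size).
+ __attribute__((section(".your-specialized-memory")))
+ static uint8_t g_decompression_memory[8 * 1024];
+
+ // Regions must not overlap and must outlive the interpreter, so give the list
+ // static storage duration. The {address, size} descriptor shape is an
+ // assumption; check micro_context.h and micro_interpreter.h for the exact type.
+ static const std::initializer_list<tflite::MicroContext::AlternateMemoryRegion>
+     kDecompressionRegions = {
+         {g_decompression_memory, sizeof(g_decompression_memory)},
+ };
+
+ TfLiteStatus SetupInterpreter(tflite::MicroInterpreter& interpreter) {
+   // Register the alternate regions before tensor allocation.
+   interpreter.SetDecompressionMemory(kDecompressionRegions);
+   return interpreter.AllocateTensors();
+ }
+ ```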
# How to Compress a Model
@@ -253,61 +286,54 @@ a tensor to just four values among the tensor elements, a fixed-width of two bit
can be used for each element. This would result in nearly a four-fold decrease
in the size of an INT8 tensor.
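+
+ For example, a 10,000-element INT8 tensor occupies 10,000 bytes uncompressed;
+ with 2-bit indices it would need 2,500 bytes of indices plus a small 4-entry
+ value table, slightly more than one quarter of the original size.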
- Tensors to compress are specified with the `--tensors="#, #, ...#"` flag.
257
- Per-channel quantized tensors using an alternate quantization axis (such as the
258
- filter tensor supplied to DEPTHWISE_CONV) must use the `--alt_axis_tensors=` flag.
259
-
260
- First, align your binned model:
289
+ Tensors to compress are specified with a `YAML` file. For example, if tensors
290
+ 5, 10, 11, 22 of subgraph 0 of the model are to be compressed, the contents of
291
+ the file would be as follows:
261
292
```
- bazel run --cache_test_results=no --test_output=all -s tensorflow/lite/micro/tools:tflite_flatbuffer_align -- binned_model.tflite binned_and_aligned.tflite
+ tensors:
+
+   - subgraph: 0
+     tensor: 5
+     compression:
+       - lut:
+           index_bitwidth: 4
+
+   - subgraph: 0
+     tensor: 10
+     compression:
+       - lut:
+           index_bitwidth: 4
+
+   - subgraph: 0
+     tensor: 11
+     compression:
+       - lut:
+           index_bitwidth: 2
+
+   - subgraph: 0
+     tensor: 22
+     compression:
+       - lut:
+           index_bitwidth: 2
```
+ Note that each tensor can have a different bit width (1 through 7 bits).

- Next, compress the model, supplying as arguments the target tensors:
+ Once the `YAML` specification is ready, compress the model using the following:
```
- bazel run --cache_test_results=no --test_output=all -s tensorflow/lite/micro/compression:compress -- binned_and_aligned.tflite compressed.tflite --tensors="1, 2, 7, 10, 3, 5"
+ bazel run -s tensorflow/lite/micro/compression:compress -- --input=binned.tflite --output=compressed.tflite --spec=spec.yaml
```
Then align the model:
```
- bazel run --cache_test_results=no --test_output=all -s tensorflow/lite/micro/tools:tflite_flatbuffer_align -- compressed.tflite compressed_and_aligned.tflite
+ bazel run -s tensorflow/lite/micro/tools:tflite_flatbuffer_align -- compressed.tflite compressed_and_aligned.tflite
```
# The Generic Benchmark Application
The Generic Benchmark Application can be used to see the size of the model, the
amount of arena memory used, and the size of the interpreter data structures
- including those involved with tensor conpression.
-
- The benchmark also reports total inference time, as well as time taken for
- tensor decompression. Timing data may be either wall-clock time or processor
- cycle time. The type of timing data is dependent on the underlying platform
- and/or simulator used. In some cases, no timing data is available.
-
- The benchmark output includes a CRC32 of the output tensor(s) for comparison
- within the same platform on which the benchmark is run.
+ including those involved with tensor compression. The benchmark also reports
+ total inference time, as well as time taken for tensor decompression.
For additional information on the Generic Benchmark Application, please refer to
this [document](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/tools/benchmarking/README.md).
-
- ## How to Run the Generic Benchmark Application
-
- The Generic Benchmark Application can only be built using `make`.
-
- ### Without Compression
-
- HIFI3 example:
- ```
- make -f ${TENSORFLOW_ROOT}tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=binned_and_aligned.tflite TARGET=xtensa TARGET_ARCH=hifi3 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=HIFI_190304_swupgrade
- ```
-
- The model path can be an abolute path, or relative to your local TFLM repository.
-
- ### With Compression
-
- HIFI5 example:
- ```
- make -f ${TENSORFLOW_ROOT}tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=default run_tflm_benchmark -j$(nproc) GENERIC_BENCHMARK_MODEL_PATH=compressed_and_aligned.tflite TARGET=xtensa TARGET_ARCH=hifi5 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_CORE=PRD_H5_RDO_07_01_2022 USE_TFLM_COMPRESSION=1
- ```
-
- The model path can be an abolute path, or relative to your local TFLM repository.
-