
Conversation

@dependabot-preview
Contributor

Bumps tensorflow from 1.15.4 to 2.4.0. This update includes security fixes.

Vulnerabilities fixed

Sourced from The GitHub Security Advisory Database.

Uninitialized memory access in TensorFlow

Impact

Under certain cases, a saved model can trigger use of uninitialized values during code execution. This is caused by having tensor buffers be filled with the default value of the type but forgetting to default initialize the quantized floating point types in Eigen:

struct QUInt8 {
  QUInt8() {}
  // ...
  uint8_t value;
};
struct QInt16 {
  QInt16() {}
  // ...
  int16_t value;
};
struct QUInt16 {
  QUInt16() {}
  // ...
  uint16_t value;
... (truncated)
Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

Lack of validation in data format attributes in TensorFlow

Impact

The tf.raw_ops.DataFormatVecPermute API does not validate the src_format and dst_format attributes. The code assumes that these two arguments define a permutation of NHWC.

However, these assumptions are not checked and this can result in uninitialized memory accesses, read outside of bounds and even crashes.

>>> import tensorflow as tf
>>> tf.raw_ops.DataFormatVecPermute(x=[1,4], src_format='1234', dst_format='1234')
...
>>> tf.raw_ops.DataFormatVecPermute(x=[1,4], src_format='HHHH', dst_format='WWWW')
...
>>> tf.raw_ops.DataFormatVecPermute(x=[1,4], src_format='H', dst_format='W')
>>> tf.raw_ops.DataFormatVecPermute(x=[1,2,3,4],
                                    src_format='1234', dst_format='1253')
...
>>> tf.raw_ops.DataFormatVecPermute(x=[1,2,3,4],
... (truncated)
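As a rough illustration only (not part of the advisory), a caller-side guard along these lines could enforce the NHWC-permutation assumption before invoking the raw op on an unpatched version; the helper name and the exact checks are assumptions, not TensorFlow API:

import tensorflow as tf

def checked_data_format_vec_permute(x, src_format, dst_format):
    # Hypothetical pre-check: unpatched versions do not validate these
    # attributes, so require both formats to be permutations of 'NHWC'.
    if sorted(src_format) != sorted("NHWC") or sorted(dst_format) != sorted("NHWC"):
        raise ValueError("src_format and dst_format must be permutations of 'NHWC'")
    return tf.raw_ops.DataFormatVecPermute(
        x=x, src_format=src_format, dst_format=dst_format)

print(checked_data_format_vec_permute([1, 2, 3, 4], "NHWC", "NCHW"))  # values [1 4 2 3]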
Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

Write to immutable memory region in TensorFlow

Impact

The tf.raw_ops.ImmutableConst operation returns a constant tensor created from a memory mapped file which is assumed immutable. However, if the type of the tensor is not an integral type, the operation crashes the Python interpreter as it tries to write to the memory area:

>>> import tensorflow as tf
>>> with open('/tmp/test.txt','w') as f: f.write('a'*128)
>>> tf.raw_ops.ImmutableConst(dtype=tf.string,shape=2,
                              memory_region_name='/tmp/test.txt')

If the file is too small, TensorFlow properly returns an error as the memory area has fewer bytes than what is needed for the tensor it creates. However, as soon as there are enough bytes, the above snippet causes a segmentation fault.

This is because the allocator used to return the buffer data is not marked as returning an opaque handle since the needed virtual method is not overridden.
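As an illustration only (not from the advisory), a caller that must use this op on an unpatched version could restrict it to integral dtypes, which the advisory describes as the safe case; the wrapper below is an assumption, not TensorFlow API:

import tensorflow as tf

def checked_immutable_const(dtype, shape, memory_region_name):
    # Hypothetical guard: the advisory says non-integral element types can
    # crash the interpreter, so only allow integer dtypes here.
    if not dtype.is_integer:
        raise TypeError(f"{dtype!r} is not an integral type; refusing ImmutableConst")
    return tf.raw_ops.ImmutableConst(
        dtype=dtype, shape=shape, memory_region_name=memory_region_name)

# checked_immutable_const(tf.string, 2, '/tmp/test.txt')  # raises TypeError instead of segfaulting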

Patches

We have patched the issue in GitHub commit c1e1fc899ad5f8c725dcbb6470069890b5060bc7 and will release TensorFlow 2.4.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

Since this issue also impacts TF versions before 2.4, we will patch all releases between 1.15 and 2.3 inclusive.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

CHECK-fail in LSTM with zero-length input in TensorFlow

Impact

Running an LSTM/GRU model where the LSTM/GRU layer receives an input with zero-length results in a CHECK failure when using the CUDA backend.

This can result in a query-of-death vulnerability (denial of service) if users can control the input to the layer.
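As an illustration only (not part of the advisory), a serving path that accepts user-controlled sequences could reject zero-length inputs before they reach the layer; the guard below is an assumption about the calling code, not a TensorFlow fix:

import tensorflow as tf

def safe_recurrent_call(layer, inputs):
    # Hypothetical guard: refuse sequences with zero time steps so a
    # user-controlled request cannot reach the CHECK in the CUDA kernel.
    if int(tf.shape(inputs)[1]) == 0:
        raise ValueError("recurrent layer received an input with zero time steps")
    return layer(inputs)

lstm = tf.keras.layers.LSTM(4)
ok = safe_recurrent_call(lstm, tf.zeros([2, 5, 3]))   # works as usual
# safe_recurrent_call(lstm, tf.zeros([2, 0, 3]))      # raises instead of aborting the process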

Patches

We have patched the issue in GitHub commit 14755416e364f17fb1870882fa778c7fec7f16e3 and will release TensorFlow 2.4.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

Since this issue also impacts TF versions before 2.4, we will patch all releases between 1.15 and 2.3 inclusive.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

Heap out of bounds access in MakeEdge in TensorFlow

Impact

Under certain cases, loading a saved model can result in accessing uninitialized memory while building the computation graph. The MakeEdge function creates an edge between one output tensor of the src node (given by output_index) and the input slot of the dst node (given by input_index). This is only possible if the types of the tensors on both sides coincide, so the function begins by obtaining the corresponding DataType values and comparing these for equality:

  DataType src_out = src->output_type(output_index);
  DataType dst_in = dst->input_type(input_index);
  //...

However, there is no check that the indices point to inside of the arrays they index into. Thus, this can result in accessing data out of bounds of the corresponding heap allocated arrays.

In most scenarios, this can manifest as uninitialized data access, but if the index points far away from the boundaries of the arrays this can be used to leak addresses from the library.

Patches

We have patched the issue in GitHub commit 0cc38aaa4064fd9e79101994ce9872c6d91f816b and will release TensorFlow 2.4.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

Since this issue also impacts TF versions before 2.4, we will patch all releases between 1.15 and 2.3 inclusive.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Affected versions: < 1.15.5

Sourced from The GitHub Security Advisory Database.

Float cast overflow undefined behavior

Impact

When the boxes argument of tf.image.crop_and_resize has a very large value, the CPU kernel implementation receives it as a C++ nan floating point value. Attempting to operate on this is undefined behavior which later produces a segmentation fault.
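As an illustration only (not from the advisory), callers on unpatched versions could validate the boxes argument before the kernel sees it; the helper and the bound used here are assumptions:

import tensorflow as tf

def checked_crop_and_resize(image, boxes, box_indices, crop_size, limit=1e6):
    # Hypothetical validation: reject non-finite or absurdly large box
    # coordinates before they reach the CPU kernel's float-to-int casts.
    boxes = tf.convert_to_tensor(boxes, dtype=tf.float32)
    tf.debugging.assert_all_finite(boxes, "boxes must be finite")
    if float(tf.reduce_max(tf.abs(boxes))) > limit:
        raise ValueError("box coordinates exceed the allowed range")
    return tf.image.crop_and_resize(image, boxes, box_indices, crop_size)

image = tf.zeros([1, 8, 8, 3])
crops = checked_crop_and_resize(image, [[0.0, 0.0, 1.0, 1.0]], [0], [4, 4])
print(crops.shape)  # (1, 4, 4, 3)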

Patches

We have patched the issue in c0319231333f0f16e1cc75ec83660b01fedd4182 and will release TensorFlow 2.4.0 containing the patch. TensorFlow nightly packages after this commit will also have the issue resolved.

For more information

Please consult our security guide for more information regarding the security model and how to contact us with issues and questions.

Attribution

This vulnerability has been reported in #42129

Affected versions: < 2.4.0

Sourced from The GitHub Security Advisory Database.

Segfault in tf.quantization.quantize_and_dequantize

Impact

An attacker can pass an invalid axis value to tf.quantization.quantize_and_dequantize:

tf.quantization.quantize_and_dequantize(
    input=[2.5, 2.5], input_min=[0,0], input_max=[1,1], axis=10)

This results in accessing a dimension outside the rank of the input tensor in the C++ kernel implementation:

const int depth = (axis_ == -1) ? 1 : input.dim_size(axis_);

However, dim_size only does a DCHECK to validate the argument and then uses it to access the corresponding element of an array:

int64 TensorShapeBase::dim_size(int d) const {
  DCHECK_GE(d, 0);
  DCHECK_LT(d, dims());
  DoStuffWith(dims_[d]);
}
... (truncated)
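As an illustration only (not part of the advisory), a caller-side check can reject an axis outside the input's rank before the kernel indexes the shape array with it; the wrapper below is an assumption, not TensorFlow API:

import tensorflow as tf

def checked_quantize_and_dequantize(inputs, input_min, input_max, axis=None):
    # Hypothetical pre-check: dim_size only DCHECKs the axis, so validate it
    # against the actual rank here before calling the op.
    inputs = tf.convert_to_tensor(inputs)
    if axis is not None and not 0 <= axis < inputs.shape.rank:
        raise ValueError(f"axis {axis} is out of range for rank {inputs.shape.rank}")
    return tf.quantization.quantize_and_dequantize(
        input=inputs, input_min=input_min, input_max=input_max, axis=axis)

# checked_quantize_and_dequantize([2.5, 2.5], [0, 0], [1, 1], axis=10)  # raises ValueError instead of segfaulting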

Affected versions: < 2.4.0

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.4.0

Release 2.4.0

Major Features and Improvements

  • tf.distribute introduces experimental support for asynchronous training of models via the tf.distribute.experimental.ParameterServerStrategy API. Please see the tutorial to learn more.

  • MultiWorkerMirroredStrategy is now a stable API and is no longer considered experimental. Some of the major improvements involve handling peer failure and many bug fixes. Please check out the detailed tutorial on Multi-worker training with Keras.

  • Introduces experimental support for a new module named tf.experimental.numpy which is a NumPy-compatible API for writing TF programs. See the detailed guide to learn more, and the short sketch after this list. Additional details below.

  • Adds Support for TensorFloat-32 on Ampere based GPUs. TensorFloat-32, or TF32 for short, is a math mode for NVIDIA Ampere based GPUs and is enabled by default.

  • A major refactoring of the internals of the Keras Functional API has been completed. This should improve the reliability, stability, and performance of constructing Functional models.

  • Keras mixed precision API tf.keras.mixed_precision is no longer experimental and allows the use of 16-bit floating point formats during training, improving performance by up to 3x on GPUs and 60% on TPUs. Please see below for additional details.

  • TensorFlow Profiler now supports profiling MultiWorkerMirroredStrategy and tracing multiple workers using the sampling mode API.

  • TFLite Profiler for Android is available. See the detailed guide to learn more.

  • TensorFlow pip packages are now built with CUDA11 and cuDNN 8.0.2.
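As a brief sketch of the tf.experimental.numpy module mentioned above (illustrative only, not from the release notes; the exact set of supported functions may vary by version):

import tensorflow.experimental.numpy as tnp

# Illustrative use of the NumPy-compatible API added in TF 2.4.
x = tnp.reshape(tnp.arange(6), (2, 3))
y = tnp.ones((2, 3))
print(tnp.sum(x + y, axis=1))  # per-row sums as an ND array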

Breaking Changes

  • TF Core:

    • Certain float32 ops run in lower precision on Ampere based GPUs, including matmuls and convolutions, due to the use of TensorFloat-32. Specifically, inputs to such ops are rounded from 23 bits of precision to 10 bits of precision. This is unlikely to cause issues in practice for deep learning models. In some cases, TensorFloat-32 is also used for complex64 ops. TensorFloat-32 can be disabled by running tf.config.experimental.enable_tensor_float_32_execution(False).
    • The byte layout for string tensors across the C-API has been updated to match TF Core/C++; i.e., a contiguous array of tensorflow::tstring/TF_TStrings.
    • C-API functions TF_StringDecode, TF_StringEncode, and TF_StringEncodedSize are no longer relevant and have been removed; see core/platform/ctstring.h for string access/modification in C.
    • tensorflow.python, tensorflow.core and tensorflow.compiler modules are now hidden. These modules are not part of TensorFlow public API.
    • tf.raw_ops.Max and tf.raw_ops.Min no longer accept inputs of type tf.complex64 or tf.complex128, because the behavior of these ops is not well defined for complex types.
    • XLA:CPU and XLA:GPU devices are no longer registered by default. Use TF_XLA_FLAGS=--tf_xla_enable_xla_devices if you really need them, but this flag will eventually be removed in subsequent releases.
  • tf.keras:

    • The steps_per_execution argument in model.compile() is no longer experimental; if you were passing experimental_steps_per_execution, rename it to steps_per_execution in your code. This argument controls the number of batches to run during each tf.function call when calling model.fit(). Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. A minimal usage sketch appears after this list.
    • A major refactoring of the internals of the Keras Functional API may affect code that is relying on certain internal details:
      • Code that uses isinstance(x, tf.Tensor) instead of tf.is_tensor when checking Keras symbolic inputs/outputs should switch to using tf.is_tensor.
      • Code that is overly dependent on the exact names attached to symbolic tensors (e.g. assumes there will be ":0" at the end of the inputs, treats names as unique identifiers instead of using tensor.ref(), etc.) may break.
      • Code that uses full path for get_concrete_function to trace Keras symbolic inputs directly should switch to building matching tf.TensorSpecs directly and tracing the TensorSpec objects.
      • Code that relies on the exact number and names of the op layers that TensorFlow operations were converted into may have changed.
      • Code that uses tf.map_fn/tf.cond/tf.while_loop/control flow as op layers and happens to work before TF 2.4. These will explicitly be unsupported now. Converting these ops to Functional API op layers was unreliable before TF 2.4, and prone to erroring incomprehensibly or being silently buggy.
      • Code that directly asserts on a Keras symbolic value in cases where ops like tf.rank used to return a static or symbolic value depending on if the input had a fully static shape or not. Now these ops always return symbolic values.
      • Code already susceptible to leaking tensors outside of graphs becomes slightly more likely to do so now.
      • Code that tries directly getting gradients with respect to symbolic Keras inputs/outputs. Use GradientTape on the actual Tensors passed to the already-constructed model instead.
      • Code that requires very tricky shape manipulation via converted op layers in order to work, where the Keras symbolic shape inference proves insufficient.
      • Code that tries manually walking a tf.keras.Model layer by layer and assumes layers only ever have one positional argument. This assumption doesn't hold true before TF 2.4 either, but is more likely to cause issues now.
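A minimal usage sketch for the renamed steps_per_execution argument (the model and data here are placeholders, not from the release notes):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(
    optimizer="sgd",
    loss="mse",
    steps_per_execution=8,  # run 8 batches per tf.function call
)

x = tf.random.normal([256, 4])
y = tf.random.normal([256, 1])
model.fit(x, y, batch_size=32, epochs=1, verbose=0)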
Changelog

Sourced from tensorflow's changelog.

Release 2.4.0

Major Features and Improvements

Breaking Changes

  • TF Core:
    • Certain float32 ops run in lower precision on Ampere based GPUs, including
Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot dashboard:

  • Update frequency (including time of day and day of week)
  • Pull request limits (per update run and/or open at any time)
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)

@dependabot-preview bot added the dependencies (Pull requests that update a dependency file) and security (Pull requests that address a security vulnerability) labels on Dec 15, 2020