
Commit dd97c01

Merge branch 'master' into patch-3
2 parents ac03e8b + 4db8e80 commit dd97c01

File tree

2 files changed: +48 −27 lines changed

site/en/install/source.md

Lines changed: 23 additions & 2 deletions
@@ -56,6 +56,24 @@ sure to install the correct Bazel version from TensorFlow's
 [.bazelversion](https://github.com/tensorflow/tensorflow/blob/master/.bazelversion)
 file.
 
+### Install Clang (recommended, Linux only)
+
+Clang is a C/C++/Objective-C compiler that is compiled in C++ based on LLVM. It
+is the default compiler to build TensorFlow starting with TensorFlow 2.13. The
+current supported version is LLVM/Clang 16.
+
+[LLVM Debian/Ubuntu nightly packages](https://apt.llvm.org) provide an automatic
+installation script and packages for manual installation on Linux. Make sure you
+run the following command if you manually add llvm apt repository to your
+package sources:
+
+<pre class="prettyprint lang-bsh">
+<code class="devsite-terminal">sudo apt-get update && sudo apt-get install -y llvm-16 clang-16</code>
+</pre>
+
+Alternatively, you can download and unpack the pre-built
+[Clang+LLVM-16 binaries](https://github.com/llvm/llvm-project/releases/tag/llvmorg-16.0.0).
+
 ### Install GPU support (optional, Linux only)
 
 There is *no* GPU support for macOS.

@@ -221,8 +239,8 @@ for
 Building TensorFlow from source can use a lot of RAM. If your system is
 memory-constrained, limit Bazel's RAM usage with: `--local_ram_resources=2048`.
 
-The [official TensorFlow packages](./pip.md) are built with a GCC toolchain that
-complies with the manylinux2014 package standard.
+The [official TensorFlow packages](./pip.md) are built with a Clang toolchain
+that complies with the manylinux2014 package standard.
 
 ### Build the package
 

@@ -388,6 +406,7 @@ Success: TensorFlow is now installed.
 
 <table>
 <tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th></tr>
+<tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang 16.0.0</td><td>Bazel 5.3.0</td></tr>
 <tr><td>tensorflow-2.12.0</td><td>3.8-3.11</td><td>GCC 9.3.1</td><td>Bazel 5.3.0</td></tr>
 <tr><td>tensorflow-2.11.0</td><td>3.7-3.10</td><td>GCC 9.3.1</td><td>Bazel 5.3.0</td></tr>
 <tr><td>tensorflow-2.10.0</td><td>3.7-3.10</td><td>GCC 9.3.1</td><td>Bazel 5.1.1</td></tr>

@@ -423,6 +442,7 @@ Success: TensorFlow is now installed.
 
 <table>
 <tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th><th>cuDNN</th><th>CUDA</th></tr>
+<tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang 16.0.0</td><td>Bazel 5.3.0</td><td>8.6</td><td>11.8</td></tr>
 <tr><td>tensorflow-2.12.0</td><td>3.8-3.11</td><td>GCC 9.3.1</td><td>Bazel 5.3.0</td><td>8.6</td><td>11.8</td></tr>
 <tr><td>tensorflow-2.11.0</td><td>3.7-3.10</td><td>GCC 9.3.1</td><td>Bazel 5.3.0</td><td>8.1</td><td>11.2</td></tr>
 <tr><td>tensorflow-2.10.0</td><td>3.7-3.10</td><td>GCC 9.3.1</td><td>Bazel 5.1.1</td><td>8.1</td><td>11.2</td></tr>

@@ -460,6 +480,7 @@ Success: TensorFlow is now installed.
 
 <table>
 <tr><th>Version</th><th>Python version</th><th>Compiler</th><th>Build tools</th></tr>
+<tr><td>tensorflow-2.13.0</td><td>3.8-3.11</td><td>Clang from xcode 10.15</td><td>Bazel 5.3.0</td></tr>
 <tr><td>tensorflow-2.12.0</td><td>3.8-3.11</td><td>Clang from xcode 10.15</td><td>Bazel 5.3.0</td></tr>
 <tr><td>tensorflow-2.11.0</td><td>3.7-3.10</td><td>Clang from xcode 10.14</td><td>Bazel 5.3.0</td></tr>
 <tr><td>tensorflow-2.10.0</td><td>3.7-3.10</td><td>Clang from xcode 10.14</td><td>Bazel 5.1.1</td></tr>

site/en/tutorials/video/video_classification.ipynb

Lines changed: 25 additions & 25 deletions
@@ -97,7 +97,7 @@
 },
 "outputs": [],
 "source": [
-"!pip install remotezip tqdm opencv-python einops\n",
+"!pip install remotezip tqdm opencv-python einops \n",
 "# Install TensorFlow 2.10\n",
 "!pip install tensorflow==2.10.0"
 ]

@@ -156,7 +156,7 @@
 " List the files in each class of the dataset given the zip URL.\n",
 "\n",
 " Args:\n",
-" zip_url: URL from which the files can be unzipped.\n",
+" zip_url: URL from which the files can be unzipped. \n",
 "\n",
 " Return:\n",
 " files: List of files in each of the classes.\n",

@@ -181,7 +181,7 @@
 "\n",
 "def get_files_per_class(files):\n",
 " \"\"\"\n",
-" Retrieve the files that belong to each class.\n",
+" Retrieve the files that belong to each class. \n",
 "\n",
 " Args:\n",
 " files: List of files in the dataset.\n",

@@ -242,7 +242,7 @@
 " Args:\n",
 " zip_url: Zip URL containing data.\n",
 " num_classes: Number of labels.\n",
-" splits: Dictionary specifying the training, validation, test, etc. (key) division of data\n",
+" splits: Dictionary specifying the training, validation, test, etc. (key) division of data \n",
 " (value is number of files per split).\n",
 " download_dir: Directory to download data to.\n",
 "\n",

@@ -282,7 +282,7 @@
 " Pad and resize an image from a video.\n",
 " \n",
 " Args:\n",
-" frame: Image that needs to resized and padded.\n",
+" frame: Image that needs to resized and padded. \n",
 " output_size: Pixel size of the output frame image.\n",
 "\n",
 " Return:\n",

@@ -306,7 +306,7 @@
 " \"\"\"\n",
 " # Read each video frame by frame\n",
 " result = []\n",
-" src = cv2.VideoCapture(str(video_path))\n",
+" src = cv2.VideoCapture(str(video_path)) \n",
 "\n",
 " video_length = src.get(cv2.CAP_PROP_FRAME_COUNT)\n",
 "\n",

@@ -338,11 +338,11 @@
 "\n",
 "class FrameGenerator:\n",
 " def __init__(self, path, n_frames, training = False):\n",
-" \"\"\" Returns a set of frames with their associated label.\n",
+" \"\"\" Returns a set of frames with their associated label. \n",
 "\n",
 " Args:\n",
 " path: Video file paths.\n",
-" n_frames: Number of frames.\n",
+" n_frames: Number of frames. \n",
 " training: Boolean to determine if training dataset is being created.\n",
 " \"\"\"\n",
 " self.path = path\n",

@@ -365,7 +365,7 @@
 " random.shuffle(pairs)\n",
 "\n",
 " for path, name in pairs:\n",
-" video_frames = frames_from_video_file(path, self.n_frames)\n",
+" video_frames = frames_from_video_file(path, self.n_frames) \n",
 " label = self.class_ids_for_name[name] # Encode labels\n",
 " yield video_frames, label"
 ]
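
Note on the `FrameGenerator` hunks above: the generator yields `(video_frames, label)` pairs that the notebook later wraps in a `tf.data` pipeline. A minimal sketch of that wiring, assuming the `subset_paths` dictionary produced by `download_ufc_101_subset` earlier in the notebook; the frame count, batch size, and exact `TensorSpec` shapes here are illustrative assumptions, not taken from this diff:

```python
import tensorflow as tf

# Assumed element spec: n_frames RGB frames of unknown height/width, plus a scalar label.
output_signature = (
    tf.TensorSpec(shape=(None, None, None, 3), dtype=tf.float32),
    tf.TensorSpec(shape=(), dtype=tf.int16),
)

# FrameGenerator is the class touched in the hunks above; subset_paths comes from
# download_ufc_101_subset() earlier in the notebook.
train_ds = tf.data.Dataset.from_generator(
    FrameGenerator(subset_paths['train'], 10, training=True),
    output_signature=output_signature,
)
train_ds = train_ds.batch(8)  # illustrative batch size

frames, labels = next(iter(train_ds))
print(frames.shape, labels.shape)
```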
@@ -380,8 +380,8 @@
 "source": [
 "URL = 'https://storage.googleapis.com/thumos14_files/UCF101_videos.zip'\n",
 "download_dir = pathlib.Path('./UCF101_subset/')\n",
-"subset_paths = download_ufc_101_subset(URL,\n",
-" num_classes = 10,\n",
+"subset_paths = download_ufc_101_subset(URL, \n",
+" num_classes = 10, \n",
 " splits = {\"train\": 30, \"val\": 10, \"test\": 10},\n",
 " download_dir = download_dir)"
 ]

@@ -447,7 +447,7 @@
 "\n",
 "![(2+1)D convolutions](https://www.tensorflow.org/images/tutorials/video/2plus1CNN.png)\n",
 "\n",
-"The main advantage of this approach is that it reduces the number of parameters. In the (2 + 1)D convolution the spatial convolution takes in data of the shape `(1, width, height)`, while the temporal convolution takes in data of the shape `(time, 1, 1)`. For example, a (2 + 1)D convolution with kernel size `(3 x 3 x 3)` would need weight matrices of size `(9 * channels**2) + (3 * channels**2)`, less than half as many as the full 3D convolution. This tutorial implements (2 + 1)D ResNet18, where each convolution in the ResNet is replaced by a (2+1)D convolution."
+"The main advantage of this approach is that it reduces the number of parameters. In the (2 + 1)D convolution the spatial convolution takes in data of the shape `(1, width, height)`, while the temporal convolution takes in data of the shape `(time, 1, 1)`. For example, a (2 + 1)D convolution with kernel size `(3 x 3 x 3)` would need weight matrices of size `(9 * channels**2) + (3 * channels**2)`, less than half as many as the full 3D convolution. This tutorial implements (2 + 1)D ResNet18, where each convolution in the resnet is replaced by a (2+1)D convolution."
 ]
 },
 {
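
The parameter-count claim in the (2 + 1)D paragraph above is easy to sanity-check. A small back-of-the-envelope sketch, assuming equal input and output channel counts and ignoring bias terms (as the text does):

```python
# Compare a full (3 x 3 x 3) 3D convolution kernel with its (2+1)D factorization.
channels = 64  # illustrative value

full_3d = 3 * 3 * 3 * channels * channels      # 27 * channels**2
spatial = 1 * 3 * 3 * channels * channels      # (1, 3, 3) spatial convolution
temporal = 3 * 1 * 1 * channels * channels     # (3, 1, 1) temporal convolution
two_plus_one_d = spatial + temporal            # (9 + 3) * channels**2

print(two_plus_one_d / full_3d)  # 12/27 ≈ 0.44, i.e. less than half the weights
```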
@@ -530,7 +530,7 @@
 " padding='same'),\n",
 " layers.LayerNormalization(),\n",
 " layers.ReLU(),\n",
-" Conv2Plus1D(filters=filters,\n",
+" Conv2Plus1D(filters=filters, \n",
 " kernel_size=kernel_size,\n",
 " padding='same'),\n",
 " layers.LayerNormalization()\n",

@@ -559,8 +559,8 @@
 "source": [
 "class Project(keras.layers.Layer):\n",
 " \"\"\"\n",
-" Project certain dimensions of the tensor as the data is passed through different\n",
-" sized filters and downsampled.\n",
+" Project certain dimensions of the tensor as the data is passed through different \n",
+" sized filters and downsampled. \n",
 " \"\"\"\n",
 " def __init__(self, units):\n",
 " super().__init__()\n",

@@ -595,9 +595,9 @@
 " Add residual blocks to the model. If the last dimensions of the input data\n",
 " and filter size does not match, project it such that last dimension matches.\n",
 " \"\"\"\n",
-" out = ResidualMain(filters,\n",
+" out = ResidualMain(filters, \n",
 " kernel_size)(input)\n",
-"\n",
+" \n",
 " res = input\n",
 " # Using the Keras functional APIs, project the last dimension of the tensor to\n",
 " # match the new filter size\n",

@@ -633,7 +633,7 @@
 "\n",
 " def call(self, video):\n",
 " \"\"\"\n",
-" Use the einops library to resize the tensor.\n",
+" Use the einops library to resize the tensor. \n",
 " \n",
 " Args:\n",
 " video: Tensor representation of the video, in the form of a set of frames.\n",

@@ -743,8 +743,8 @@
 },
 "outputs": [],
 "source": [
-"model.compile(loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n",
-" optimizer = keras.optimizers.Adam(learning_rate = 0.0001),\n",
+"model.compile(loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True), \n",
+" optimizer = keras.optimizers.Adam(learning_rate = 0.0001), \n",
 " metrics = ['accuracy'])"
 ]
 },
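
The compile hunk above sets the loss, optimizer, and metrics; in the notebook it is followed by a training call. A hedged sketch of that step, where `train_ds` and `val_ds` are the datasets built from `FrameGenerator` and the epoch count is only an illustrative value:

```python
# Train the compiled (2+1)D model; epochs is an example value, not taken from this diff.
history = model.fit(
    x=train_ds,
    epochs=50,
    validation_data=val_ds,
)
```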
@@ -813,7 +813,7 @@
 "\n",
 " ax1.set_ylim([0, np.ceil(max_loss)])\n",
 " ax1.set_xlabel('Epoch')\n",
-" ax1.legend(['Train', 'Validation'])\n",
+" ax1.legend(['Train', 'Validation']) \n",
 "\n",
 " # Plot accuracy\n",
 " ax2.set_title('Accuracy')\n",

@@ -837,7 +837,7 @@
 "source": [
 "## Evaluate the model\n",
 "\n",
-"Use Keras `Model.evaluate` to get the loss and accuracy on the test dataset.\n",
+"Use Keras `Model.evaluate` to get the loss and accuracy on the test dataset. \n",
 "\n",
 "Note: The example model in this tutorial uses a subset of the UCF101 dataset to keep training time reasonable. The accuracy and loss can be improved with further hyperparameter tuning or more training data. "
 ]
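
For the evaluation step mentioned in the hunk above, a minimal sketch (assuming `test_ds` is the test split built the same way as the training dataset):

```python
# Returns [loss, accuracy] because the model was compiled with metrics=['accuracy'].
loss, accuracy = model.evaluate(test_ds)
print(f"test loss: {loss:.3f}, test accuracy: {accuracy:.3f}")
```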
@@ -870,7 +870,7 @@
 },
 "outputs": [],
 "source": [
-"def get_actual_predicted_labels(dataset):\n",
+"def get_actual_predicted_labels(dataset): \n",
 " \"\"\"\n",
 " Create a list of actual ground truth values and the predictions from the model.\n",
 "\n",

@@ -968,7 +968,7 @@
 "def calculate_classification_metrics(y_actual, y_pred, labels):\n",
 " \"\"\"\n",
 " Calculate the precision and recall of a classification model using the ground truth and\n",
-" predicted values.\n",
+" predicted values. \n",
 "\n",
 " Args:\n",
 " y_actual: Ground truth labels.\n",

@@ -989,7 +989,7 @@
 " row = cm[i, :]\n",
 " fn = np.sum(row) - tp[i] # Sum of row minus true positive, is false negative\n",
 " \n",
-" precision[labels[i]] = tp[i] / (tp[i] + fp) # Precision\n",
+" precision[labels[i]] = tp[i] / (tp[i] + fp) # Precision \n",
 " \n",
 " recall[labels[i]] = tp[i] / (tp[i] + fn) # Recall\n",
 " \n",

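The last two hunks touch the per-class metric helpers. A self-contained NumPy sketch of the same precision/recall computation, using a made-up 3-class confusion matrix with rows as actual labels and columns as predictions (the convention the notebook uses):

```python
import numpy as np

# Toy confusion matrix for illustration only; rows = actual class, columns = predicted class.
cm = np.array([[8, 1, 1],
               [2, 7, 1],
               [0, 2, 8]])
labels = ['a', 'b', 'c']

tp = np.diag(cm)
precision, recall = {}, {}
for i in range(len(labels)):
    col = cm[:, i]
    fp = np.sum(col) - tp[i]   # column sum minus true positives -> false positives
    row = cm[i, :]
    fn = np.sum(row) - tp[i]   # row sum minus true positives -> false negatives
    precision[labels[i]] = tp[i] / (tp[i] + fp)
    recall[labels[i]] = tp[i] / (tp[i] + fn)

print(precision)  # {'a': 0.8, 'b': 0.7, 'c': 0.8}
print(recall)     # {'a': 0.8, 'b': 0.7, 'c': 0.8}
```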