```
The start_processing() function handles the main processing loop:
```

- Opens input files using the `mtl_st20p` plugin and sets the necessary options.
- Finds stream information.
- Sets up decoders.
- Sets up the filter graph.
- Sets up the output format context using the `mtl_st20p` plugin and sets the necessary options.
- Creates the output stream and sets up the encoder.
- Opens the output file.
- Writes the output file header.
- Allocates frames and packets.
- Reads, decodes, filters, encodes, and writes frames.
- Writes the output file trailer.

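The steps above follow the standard FFmpeg transcoding flow. A minimal Python sketch of the order of operations (the labels are illustrative stand-ins for the corresponding FFmpeg C API calls, not the actual implementation):

```python
# Hypothetical sketch of the start_processing() flow. Each label stands in
# for the corresponding FFmpeg C API step (avformat_open_input,
# avformat_find_stream_info, avcodec_open2, and so on).

def start_processing(packets):
    log = []

    log.append("open_input")         # open inputs via the mtl_st20p plugin
    log.append("find_stream_info")   # probe stream information
    log.append("setup_decoders")     # one decoder per input stream
    log.append("setup_filter_graph") # build the -filter_complex graph
    log.append("setup_output")       # output context, stream, encoder
    log.append("write_header")

    # Per-frame loop: read, decode, filter, encode, write.
    for pkt in packets:
        log.append(f"decode:{pkt}")
        log.append(f"filter:{pkt}")
        log.append(f"encode:{pkt}")
        log.append(f"write:{pkt}")

    log.append("write_trailer")
    return log
```

The point is the ordering: all setup happens once, the per-frame pipeline runs in a loop, and the trailer is written last.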
# -filter_complex:
This option specifies a complex filter graph. It allows you to define a series of filters and how they are connected, including multiple inputs and outputs.

```
The setup_filter_graph() function sets up the filter graph, including buffer source filters, hwupload filters, scale_qsv filters, the xstack_qsv filter, and format filters.
```

Command from [multiviewer_process.sh](https://github.com/OpenVisualCloud/Intel-Tiber-Broadcast-Suite/blob/main/pipelines/multiviewer_process.sh):

```
-filter_complex "[0:v]hwupload,scale_qsv=iw/4:ih/2[out0]; \
                 [1:v]hwupload,scale_qsv=iw/4:ih/2[out1]; \
                 [2:v]hwupload,scale_qsv=iw/4:ih/2[out2]; \
                 [3:v]hwupload,scale_qsv=iw/4:ih/2[out3]; \
                 [4:v]hwupload,scale_qsv=iw/4:ih/2[out4]; \
                 [5:v]hwupload,scale_qsv=iw/4:ih/2[out5]; \
                 [6:v]hwupload,scale_qsv=iw/4:ih/2[out6]; \
                 [7:v]hwupload,scale_qsv=iw/4:ih/2[out7]; \
                 [out0][out1][out2][out3] \
                 [out4][out5][out6][out7] \
                 xstack_qsv=inputs=8:\
                 layout=0_0|w0_0|0_h0|w0_h0|w0+w1_0|w0+w1+w2_0|w0+w1_h0|w0+w1+w2_h0, \
                 format=y210le,format=yuv422p10le" \
```

**Input Streams:**

- `[0:v]`, `[1:v]`, `[2:v]`, `[3:v]`, `[4:v]`, `[5:v]`, `[6:v]`, `[7:v]`:
  These are the video streams from the input files. The numbers (0, 1, 2, and so on) refer to the input file indices, and `v` indicates a video stream.

**Filters:**

**hwupload:**

 - This filter uploads the video frames to the GPU for hardware acceleration. It is used to prepare the frames for further processing by hardware-accelerated filters.

**scale_qsv=iw/4:ih/2:**

 - This filter scales the video frames using Intel's Quick Sync Video (QSV) hardware acceleration. `iw/4` and `ih/2` set the output width and height to one-fourth of the original width and one-half of the original height, respectively.
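
As a quick sanity check of those expressions (plain arithmetic, no FFmpeg involved): a 1920x1080 input becomes a 480x540 tile.

```python
# iw/4 and ih/2 as used by scale_qsv: quarter width, half height.
def scaled_size(iw, ih):
    return iw // 4, ih // 2

print(scaled_size(1920, 1080))  # -> (480, 540)
print(scaled_size(3840, 2160))  # -> (960, 1080)
```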

**Output Labels:**

 - `[out0]`, `[out1]`, `[out2]`, `[out3]`, `[out4]`, `[out5]`, `[out6]`, `[out7]`:
   These labels name the outputs of the scale_qsv filters. They are used as inputs to the next filter in the chain.

**Stacking Filter:**

 - `xstack_qsv=inputs=8:layout=0_0|w0_0|0_h0|w0_h0|w0+w1_0|w0+w1+w2_0|w0+w1_h0|w0+w1+w2_h0`:
   This filter stacks multiple video frames together using Intel's QSV hardware acceleration. `inputs=8` specifies that there are 8 input streams, and the `layout` parameter positions the frames in a grid pattern:
   - `0_0`: The first frame is placed at the top-left corner.
   - `w0_0`: The second frame is placed to the right of the first frame.
   - `0_h0`: The third frame is placed below the first frame.
   - `w0_h0`: The fourth frame is placed to the right of the third frame.
   - `w0+w1_0`: The fifth frame is placed to the right of the second frame.
   - `w0+w1+w2_0`: The sixth frame is placed to the right of the fifth frame.
   - `w0+w1_h0`: The seventh frame is placed below the fifth frame.
   - `w0+w1+w2_h0`: The eighth frame is placed to the right of the seventh frame.

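Because every input is scaled by the same factors, all tiles are the same size, and the symbolic layout resolves to a 4x2 grid. A small sketch that expands the layout expressions for equal-sized tiles (plain arithmetic, not FFmpeg code):

```python
# Expand the xstack_qsv layout expressions for 8 equal-sized tiles.
# With identical tiles, w0, w1, w2 all collapse to one tile width and
# h0 to one tile height.
def layout_positions(tile_w, tile_h):
    w0 = w1 = w2 = tile_w
    h0 = tile_h
    # Same order as layout=0_0|w0_0|0_h0|w0_h0|w0+w1_0|w0+w1+w2_0|w0+w1_h0|w0+w1+w2_h0
    return [
        (0, 0),             # 0_0
        (w0, 0),            # w0_0
        (0, h0),            # 0_h0
        (w0, h0),           # w0_h0
        (w0 + w1, 0),       # w0+w1_0
        (w0 + w1 + w2, 0),  # w0+w1+w2_0
        (w0 + w1, h0),      # w0+w1_h0
        (w0 + w1 + w2, h0), # w0+w1+w2_h0
    ]

print(layout_positions(480, 540))
```

With 480x540 tiles this yields x offsets 0, 480, 960, 1440 and y offsets 0, 540 — four columns by two rows.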
**Format Conversion:**
 - `format=y210le`:
   This filter converts the pixel format of the video frames to y210le, a packed 10-bit YUV 4:2:2 format with little-endian byte order.

 - `format=yuv422p10le`:
   This filter then converts the frames to yuv422p10le, a planar 10-bit YUV 4:2:2 format with little-endian byte order.