# DigitalFilm

DigitalFilm: Use a neural network to simulate film style.

---

<!-- PROJECT LOGO -->
<br />

<p align="center">
<a href="./readme.md">
</a>

<h3 align="center">DigitalFilm</h3>
<p align="center">
Use a neural network to simulate film style.
<br />
<a href="https://github.com/shaojintian/Best_README_template"><strong>Explore the documentation of this project »</strong></a>
<br />
<br />
<a href="./app/digitalFilm.py">View the demo</a>
·
<a href="https://github.com/SongZihui-sudo/digitalFilm/issues">Report a bug</a>
·
<a href="https://github.com/SongZihui-sudo/digitalFilm/issues">Propose a new feature</a>
</p>

</p>

This README.md is for developers and users.
[English](./english.md)

## Table of Contents

- [DigitalFilm](#digitalfilm)
  - [Table of Contents](#table-of-contents)
  - [Sample](#sample)
  - [Run Demo](#run-demo)
  - [Training model](#training-model)
  - [Installation steps](#installation-steps)
  - [Overall architecture](#overall-architecture)
  - [Dataset](#dataset)
  - [File directory description](#file-directory-description)
  - [Version Control](#version-control)
  - [Author](#author)
  - [Copyright](#copyright)

### Sample

<center style="font-size:14px;color:#C0C0C0;text-decoration:underline">Figure 1 Sample kodak gold 200</center>

<center style="font-size:14px;color:#C0C0C0;text-decoration:underline">Figure 2 Sample fuji color 200</center>

### Run Demo

> The length and width of the input photo need to be divisible by **32**.

```bash
python digitalFilm.py [-v/-h/-g] -i <input> -o <output> -m <model>
```
- `-v` print version information
- `-h` help information
- `-g` graphical image selection
- `-i` input image directory
- `-o` output image directory
- `-m` model directory

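Since the model only accepts sides divisible by 32, a small helper like the following (illustrative only, not part of `digitalFilm.py`) can compute the largest valid size to crop or resize a photo to before running the demo:

```python
def fit_to_multiple(width, height, multiple=32):
    """Return the largest (w, h) <= (width, height) with both sides divisible by `multiple`."""
    w = (width // multiple) * multiple
    h = (height // multiple) * multiple
    if w == 0 or h == 0:
        raise ValueError("image is smaller than one multiple in some dimension")
    return w, h

# A 1013x758 photo would first be cropped or resized to 992x736.
print(fit_to_multiple(1013, 758))  # (992, 736)
```

Any image library can then perform the actual crop or resize with the returned dimensions.
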
### Training model

To train a model, use cyclegan.ipynb directly.
You need to download the pre-trained resnet18 model in advance, and prepare the digital photos and the film photos in two separate folders.
The Kodak Gold 200 and Fuji C200 models are included in the `app` folder.
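
The two-folder setup can be sketched as one plain directory of images per domain; the folder names below are illustrative, not the notebook's actual paths:

```python
import os

# Two unpaired domains: ordinary digital photos and scanned film photos.
domains = {"digital": "dataset/digital", "film": "dataset/film"}
for path in domains.values():
    os.makedirs(path, exist_ok=True)

def list_images(folder):
    """Collect the image files of one domain; no pairing between domains is needed."""
    exts = (".jpg", ".jpeg", ".png")
    return sorted(
        os.path.join(folder, name)
        for name in os.listdir(folder)
        if name.lower().endswith(exts)
    )

digital_photos = list_images(domains["digital"])
film_photos = list_images(domains["film"])
```

The two lists feed the two sides of the CycleGAN training loop; the images are never matched pairwise.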

### Installation steps

```sh
git clone https://github.com/SongZihui-sudo/digitalFilm.git
```

It is best to first create a conda environment and then install the dependencies:

```sh
pip install -r requirement.txt
```

### Overall architecture

Converting digital photos to film style can be regarded as an image style transfer task, so the overall architecture adopts the CycleGAN network from [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix).
In addition, large collections of paired digital and film photos are difficult to obtain, so an unsupervised approach is adopted: the network is trained on unpaired data.
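
The key training signal CycleGAN adds for unpaired data is the cycle-consistency loss: mapping a digital photo to the film domain and back should reproduce the original. A toy numeric sketch of that term (the real generators are convolutional networks, not these stand-in functions):

```python
def G(x):
    """Stand-in for the digital -> film generator."""
    return [v * 1.1 + 0.05 for v in x]

def F(y):
    """Stand-in for the film -> digital generator (here the exact inverse of G)."""
    return [(v - 0.05) / 1.1 for v in y]

def l1(a, b):
    """Mean absolute error, the usual choice for the cycle-consistency term."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

x = [0.2, 0.5, 0.8]          # a "digital" pixel vector
cycle_loss = l1(F(G(x)), x)  # near zero, because F inverts G exactly here
```

During training this loss is minimized in both directions alongside the adversarial losses, which is what lets the network learn from unpaired photos.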
176 | 94 |
|
177 | 95 | ### Dataset |
178 | 96 |
|
The dataset consists of dual-source image data: the main part is high-quality digital photos taken with a Xiaomi 13 Ultra phone, and the rest is selected from a professional HDR image dataset.
The film samples are collected from the Internet.

### File directory description

- DigitalFilm.ipynb is used to train the model
- app is a demo
  - digitalFilm.py
  - mynet.py
  - mynet2.py
  - kodark_gold_200.pth
  - fuji_color_200.pth

### Version Control

This project uses Git for version management. You can view the currently available versions in the repository.

### Author

151122876@qq.com SongZihui-sudo

Zhihu: Dr.who &ensp; qq: 1751122876

*You can also view all the developers involved in the project in the list of contributors.*

### Copyright

This project is licensed under GPLv3. For details, see [LICENSE.txt](./LICENSE.txt).