We've trained such models on paired sets of our fluorescent images and the corresponding brightfield images. Our architecture and approach are described [in more detail in this pre-print](https://www.biorxiv.org/content/early/2018/03/27/289504), but let's take a look at how well structure prediction works and how close we are to being able to acquire fluorescence-like images without fluorescent markers.
## Predicting Fluorescence Images from Bright-field Images
### Getting Support Packages, the Model, and Input Images
This notebook runs well on [Google's Colaboratory Environment](https://drive.google.com/file/d/1aXtGzmqXxKTrraVZu0eg7JBf9Zt4iFFE/view?usp=sharing), which gives you access to nice GPUs from your laptop. We've also created a version to run this model across larger sets of images that allows you to [upload your own datasets](https://drive.google.com/file/d/1gIBqWhMCRxfX_NZgy_rAywEMh8QrpHQZ/view?usp=sharing). Before we begin in this environment, however, we will need to install some packages and get local copies of the trained model and some input images to test it on.
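As a minimal sketch of that setup step (the package list and download URLs below are placeholders, not the notebook's actual dependencies and files):

```python
# Setup sketch for a Colab-style environment. The packages and URLs here are
# placeholders standing in for the notebook's actual dependencies and files.
!pip install numpy scikit-image matplotlib
!wget -O trained_dna_model.p https://example.org/trained_dna_model.p
!wget -O test_brightfield.tif https://example.org/test_brightfield.tif
```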
### Visualizing the Difference
We'd like a better sense of how our prediction differs from the observed image than we can get from the above two images at a glance. Let's compare them in a couple of other ways. The simplest comparison is to toggle back and forth between the two images. This lets your eye pick out differences that aren't apparent when you have to move from one image to the next.
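One way to build that toggle, as a minimal sketch: it assumes the `small_dna` and `predicted_dna` z-stacks used above and leans on matplotlib's animation support, rendering via `to_jshtml` in a notebook.

```python
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML

# Max-project the two z-stacks so we can flip between 2D views
frames = [small_dna.max(0), predicted_dna.max(0)]

fig, ax = plt.subplots()
im = ax.imshow(frames[0], cmap='gray')
ax.set_axis_off()

def flip(i):
    # Alternate between the observed and predicted projections
    im.set_data(frames[i % 2])
    return (im,)

anim = animation.FuncAnimation(fig, flip, frames=8, interval=750, blit=True)
HTML(anim.to_jshtml())
```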
This may be helpful in finding areas where our prediction is noticeably off.
To my eye this shows that our predictions are largely accurate in location, with some underprediction of the extent of the DNA dye uptake by the mitotic cell. Also of interest is that our predicted image is a bit smoother or less grainy than the observed image. We can put this down to stochastic uptake of dye or dye clumping that the model is unable to predict.
But how else can we visualize the location differences? Let's take a look:
```python
import matplotlib.pyplot as plt

# Absolute difference
dna_diff = abs(small_dna - predicted_dna)

# Display the observed DNA, the predicted DNA, and their absolute difference
images = {'Observed': small_dna,
          'Difference': dna_diff,
          'Predicted': predicted_dna}

# Normalize the greyscale intensities by finding the max/min intensity of all
max_gs = max([i.max(0).max(0).max(0) for l, i in images.items()])
min_gs = min([i.min(0).min(0).min(0) for l, i in images.items()])

# Display the images with appropriate normalization
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, (label, img) in zip(axes, images.items()):
    # Max-project each z-stack and share one intensity scale across panels
    ax.imshow(img.max(0), cmap='gray', vmin=min_gs, vmax=max_gs)
    ax.set_title(label)
    ax.set_axis_off()
plt.show()
```
**Observed - Predicted** provides us information on where we underpredict, and you can instantly see that we underpredict the mitotic cell the most of all the cells present in the image.
**Predicted - Observed** gives context as to where we overpredict. Noticeable here is that we appear to generally add noise to the image, even though in the displayed predicted image it looked as though we didn't. This is due to how the model was trained: because the model cannot predict the random noise generated by fluorescence microscopy, it instead averages over the spots where noise would be present.
**Absolute Difference** combines these into a unified image of all the spots where our prediction was incorrect when compared to the observed image. The largest difference is of course at the mitotic cell, and the remaining differences look like they are mainly differences in greyscale intensity between the prediction and the observation rather than differences in cell location.
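A minimal sketch of how these three difference images can be produced, assuming the same `small_dna` and `predicted_dna` stacks as above; each signed difference is clipped at zero so it shows only under- or only over-prediction.

```python
import numpy as np
import matplotlib.pyplot as plt

# Cast to float in case the stacks are stored as unsigned integers
obs = small_dna.astype(float)
pred = predicted_dna.astype(float)

diffs = {'Observed - Predicted': np.clip(obs - pred, 0, None),  # underprediction
         'Predicted - Observed': np.clip(pred - obs, 0, None),  # overprediction
         'Absolute Difference': np.abs(obs - pred)}             # both combined

fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, (label, img) in zip(axes, diffs.items()):
    ax.imshow(img.max(0), cmap='gray')  # max projection of each z-stack
    ax.set_title(label)
    ax.set_axis_off()
plt.show()
```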
## Uses of Our Predictions
In the small examples above we showed the relative location accuracy of our predictions; if you would like to learn more about their overall accuracy, we encourage you to read our paper. But what value can we gain from these predictions? We will first look at segmentation, followed by predicting structures that weren't tagged.
### Segmentation and Thresholding
Segmentation is a large part of our pipeline, so let's see how the observed fluorescence image compares to the predicted one under a threshold.
```python
from skimage.filters import threshold_otsu

# Otsu-threshold the observed and predicted stacks into binary masks
thresh_obs = threshold_otsu(small_dna)
binary_obs = small_dna > thresh_obs
thresh_pred = threshold_otsu(predicted_dna)
binary_pred = predicted_dna > thresh_pred

imgs = {'Observed': display_obs,
        'Predicted': display_pred,
        'Binary Observed': binary_obs,
        'Binary Predicted': binary_pred}

# Show the display images above their thresholded counterparts
fig, axes = plt.subplots(2, 2, figsize=(10, 10))
for ax, (label, img) in zip(axes.flat, imgs.items()):
    # Max-project the 3D binary stacks; display_obs / display_pred
    # are assumed to be 2D projections already
    ax.imshow(img if img.ndim == 2 else img.max(0), cmap='gray')
    ax.set_title(label)
    ax.set_axis_off()
plt.show()
```
It was pretty intuitive that our predictions were generally 'smoother' and had less noise than the observed fluorescence, but the thresholding shows just how much noise was present in the original that gets filtered out in the thresholded prediction.
While we could continue using the predicted image as a segmentation input itself, we can also use the binary predicted image as a mask on the original to view a cleaned-up, de-noised version of it. In short, let's use the binary prediction as a clipping mask for the original observed fluorescence image.
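A minimal sketch of that masking step, assuming the `binary_pred` mask and `small_dna` stack from above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Keep observed intensities where the prediction is foreground, zero elsewhere
masked_dna = np.where(binary_pred, small_dna, 0)

pair = {'Observed': small_dna, 'Masked Observed': masked_dna}
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
for ax, (label, img) in zip(axes, pair.items()):
    ax.imshow(img.max(0), cmap='gray')  # max projection of each z-stack
    ax.set_title(label)
    ax.set_axis_off()
plt.show()
```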
That looks much better! Being able to filter out some of the noise while still using the original observed fluorescence will be valuable in generating better segmentations and performing feature analysis.
### Predicting Structures Not Tagged
These error plots give us some confidence in our ability to predict where DNA fluorescence will appear. We can similarly estimate error for our other tag models, but this is left as an exercise for the reader. Hard values for how well each of these models works are found in [the accompanying paper](https://www.biorxiv.org/content/early/2018/03/28/289504).
What a nice way to get information from a brightfield image using patterns of structures we've seen in other images! We can see fluorescent structures in cells where we haven't even labeled them. In cases where we can do a little initial dyeing or labeling to train models of this type, this can drastically improve the interpretability of large sets of brightfield images.