added training for DAPI
tractatus committed Sep 19, 2023
1 parent f1ecd29 commit bdc98ef
Showing 38 changed files with 400 additions and 0 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -23,6 +23,9 @@
# RStudio files
.Rproj.user/

/training_data/dapi/images/*.tif
/training_data/dapi/masks/*.tif

# produced vignettes
vignettes/*.html
vignettes/*.pdf
52 changes: 52 additions & 0 deletions augment.py
@@ -0,0 +1,52 @@
import os
import cv2

# Set the paths for the input folders
mask_org_folder = "./training_data/masks_nuclei"
images_org_folder = "./training_data/images_nuclei"

# Set the paths for the output folders
mask_folder = "./training_data/dapi/masks"
images_folder = "./training_data/dapi/images"

# Create the output folders if they don't exist
os.makedirs(mask_folder, exist_ok=True)
os.makedirs(images_folder, exist_ok=True)

# Set the ROI size and step
roi_size = (256, 256)
roi_step = 128

# Get the list of file names in the mask_org folder
file_names = [file for file in os.listdir(mask_org_folder) if file.lower().endswith(('.tif', '.tiff'))]

# Iterate over the file names
for file_name in file_names:
    # Read the mask image
    mask_path = os.path.join(mask_org_folder, file_name)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)

    # Read the corresponding input image
    image_path = os.path.join(images_org_folder, file_name)
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE | cv2.IMREAD_ANYDEPTH)

    # Get the dimensions of the images
    mask_height, mask_width = mask.shape[:2]
    image_height, image_width = image.shape[:2]

    # Iterate over the ROI positions
    for y in range(0, mask_height - roi_size[0] + 1, roi_step):
        for x in range(0, mask_width - roi_size[1] + 1, roi_step):
            # Extract the ROI from the mask
            roi_mask = mask[y:y+roi_size[0], x:x+roi_size[1]]

            # Extract the ROI from the image
            roi_image = image[y:y+roi_size[0], x:x+roi_size[1]]

            # Save the ROI as a new image in the mask folder
            new_mask_path = os.path.join(mask_folder, f"{file_name}_{y}_{x}.tif")
            cv2.imwrite(new_mask_path, roi_mask)

            # Save the ROI as a new image in the images folder
            new_image_path = os.path.join(images_folder, f"{file_name}_{y}_{x}.tif")
            cv2.imwrite(new_image_path, roi_image)
Binary file added example_image.tif
Binary file added example_labels.tif
Binary file added example_rois.zip
112 changes: 112 additions & 0 deletions figure01.md
@@ -2,6 +2,11 @@
Daniel Fürth <br><br>Table of Contents:

- [Emission plot Fig. 1c](#emission-plot-fig.-1c)
- [DAPI nuclei cell segmentation in
2D.](#dapi-nuclei-cell-segmentation-in-2d.)
- [Installation](#installation)
- [Segmentation training](#segmentation-training)
- [Prediction](#prediction)

## Emission plot Fig. 1c

@@ -152,3 +157,110 @@ round(AZdye594, 2)
```

[1] 2.88

## DAPI nuclei cell segmentation in 2D.

### Installation

Make sure you have `conda` installed. Create a new conda environment:

```
conda create --name cellseg2D-env
conda activate cellseg2D-env
conda install -c conda-forge napari
conda install opencv
```

Install TensorFlow for macOS M1/M2:

```
pip install tensorflow-macos
pip install tensorflow-metal
```

Install StarDist for cell nuclei segmentation:

```
pip install gputools
pip install stardist
pip install csbdeep
```
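
Optionally (not part of the original instructions), you can confirm the
environment works by checking that the packages import and that
TensorFlow sees the Metal GPU:

```
# Quick sanity check of the environment (optional).
import tensorflow as tf
import cv2
import stardist

print("TensorFlow", tf.__version__)
print("GPU devices:", tf.config.list_physical_devices("GPU"))  # Metal device on M1/M2
print("StarDist", stardist.__version__, "| OpenCV", cv2.__version__)
```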

### Segmentation training

#### Augment training data set

```
python augment.py
```

This tiles the full-size images in `training_data/images_nuclei` and
`training_data/masks_nuclei` into overlapping 256×256 crops written to
`training_data/dapi/images` (input) and `training_data/dapi/masks`
(ground truth). Input and ground truth are matched by file name. Format
is 8-bit monochrome TIFF for both.

If more than 255 cells need to be segmented within a single image,
simply change the mask format to 16-bit.
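
As an illustration only (the array below is synthetic and the output
path is hypothetical), a mask with more than 255 labels can be written
and read back as 16-bit with OpenCV:

```
import numpy as np
import cv2

# Sketch: store a label mask with more than 255 objects as 16-bit TIFF.
# The mask here is synthetic; in practice it comes from your annotation tool.
mask = np.arange(256 * 256, dtype=np.uint32).reshape(256, 256) % 1000  # ~1000 labels
cv2.imwrite("mask_16bit.tif", mask.astype(np.uint16))  # OpenCV keeps uint16 for TIFF

# Read it back without down-conversion to 8-bit.
check = cv2.imread("mask_16bit.tif", cv2.IMREAD_UNCHANGED)
assert check.dtype == np.uint16
```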

#### Perform training

```
python train_nuclei.py
```
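
The contents of `train_nuclei.py` are not shown in this commit. As a
rough orientation only, a minimal StarDist 2D training script consistent
with the shipped `models/dapi/config.json` (32 rays, 2×2 grid, 256×256
patches, 400 epochs) might look like the sketch below; the data loading,
split, and variable names are assumptions, not the actual script:

```
from glob import glob

import numpy as np
from tifffile import imread
from csbdeep.utils import normalize
from stardist import fill_label_holes
from stardist.models import Config2D, StarDist2D

# Assumed layout: the augmented crops produced by augment.py.
X = [normalize(imread(f), 1, 99.8) for f in sorted(glob("training_data/dapi/images/*.tif"))]
Y = [fill_label_holes(imread(f)) for f in sorted(glob("training_data/dapi/masks/*.tif"))]

# Simple random train/validation split (10% validation).
rng = np.random.default_rng(42)
idx = rng.permutation(len(X))
n_val = max(1, len(X) // 10)
X_val, Y_val = [X[i] for i in idx[:n_val]], [Y[i] for i in idx[:n_val]]
X_trn, Y_trn = [X[i] for i in idx[n_val:]], [Y[i] for i in idx[n_val:]]

# Configuration mirroring models/dapi/config.json.
conf = Config2D(n_rays=32, grid=(2, 2), n_channel_in=1,
                train_patch_size=(256, 256), train_batch_size=4,
                train_epochs=400, train_steps_per_epoch=100)

model = StarDist2D(conf, name="dapi", basedir="models")
model.train(X_trn, Y_trn, validation_data=(X_val, Y_val))
model.optimize_thresholds(X_val, Y_val)  # writes thresholds.json (prob/nms)
```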

Open TensorBoard to follow the training progress:

```
tensorboard --logdir=.
```

![TensorBoard lets you monitor the training of the neural
network](./repo_img/tensorboard.png)

Click the **Images** tab in TensorBoard to inspect the visual output of
the training. Early in training it looks like this: ![inspect the
output](./repo_img/tensorboard_img.png)

Here:

- `net_input` shows the input images. Notice that a cell is only really
  present in the first of the three.
- `net_output0` is the current output from the network.
- `net_target0` is the ground truth (what the network ideally should
have generated).

### Prediction

We have a script we can apply to any image for prediction.

```
python predict_nuclei.py
```

<div id="fig-dapi">

<table>
<colgroup>
<col style="width: 50%" />
<col style="width: 50%" />
</colgroup>
<tbody>
<tr class="odd">
<td style="text-align: center;"><div width="50.0%"
data-layout-align="center">
<p><img src="./repo_img/example_image.jpg" id="fig-input"
data-ref-parent="fig-dapi" data-fig.extended="false"
alt="(a) input" /></p>
</div></td>
<td style="text-align: center;"><div width="50.0%"
data-layout-align="center">
<p><img src="./repo_img/example_labels.jpg" id="fig-output"
data-ref-parent="fig-dapi" data-fig.extended="false"
alt="(b) output" /></p>
</div></td>
</tr>
</tbody>
</table>

Figure 1: Segmentation results.

</div>
81 changes: 81 additions & 0 deletions figure01.qmd
@@ -118,3 +118,84 @@ AZdye594<-quench.ratio$`max(emission)`[3]/quench.ratio$`max(emission)`[4]
round(AZdye488, 2)
round(AZdye594, 2)
```

## DAPI nuclei cell segmentation in 2D.

### Installation

Make sure you have `conda` installed.
Create a new conda environment:
```{eval=FALSE}
conda create --name cellseg2D-env
conda activate cellseg2D-env
conda install -c conda-forge napari
conda install opencv
```

Install TensorFlow for macOS M1/M2:
```{eval=FALSE}
pip install tensorflow-macos
pip install tensorflow-metal
```

Install StarDist for cell nuclei segmentation:
```{eval=FALSE}
pip install gputools
pip install stardist
pip install csbdeep
```

### Segmentation training

#### Augment training data set

```{eval=FALSE}
python augment.py
```

This tiles the full-size images in `training_data/images_nuclei` and `training_data/masks_nuclei` into overlapping 256×256 crops written to `training_data/dapi/images` (input) and `training_data/dapi/masks` (ground truth). Input and ground truth are matched by file name. Format is 8-bit monochrome TIFF for both.

If more than 255 cells need to be segmented within a single image, simply change the mask format to 16-bit.

#### Perform training

```{eval=FALSE}
python train_nuclei.py
```

Open TensorBoard to follow the training progress:
```{eval=FALSE}
tensorboard --logdir=.
```



![TensorBoard lets you monitor the training of the neural network](./repo_img/tensorboard.png)


Click the **Images** tab in TensorBoard to inspect the visual output of the training.
Early in training it looks like this:
![inspect the output](./repo_img/tensorboard_img.png)

Here:

- `net_input` shows the input images. Notice that a cell is only really present in the first of the three.
- `net_output0` is the current output from the network.
- `net_target0` is the ground truth (what the network ideally should have generated).

### Prediction

We have a script we can apply to any image for prediction.

```{eval=FALSE}
python predict_nuclei.py
```

::: {#fig-dapi layout-ncol=2}

![input](./repo_img/example_image.jpg){#fig-input}

![output](./repo_img/example_labels.jpg){#fig-output}

Segmentation results.
:::
1 change: 1 addition & 0 deletions models/dapi/config.json
@@ -0,0 +1 @@
{"n_dim": 2, "axes": "YXC", "n_channel_in": 1, "n_channel_out": 33, "train_checkpoint": "weights_best.h5", "train_checkpoint_last": "weights_last.h5", "train_checkpoint_epoch": "weights_now.h5", "n_rays": 32, "grid": [2, 2], "backbone": "unet", "n_classes": null, "unet_n_depth": 3, "unet_kernel_size": [3, 3], "unet_n_filter_base": 32, "unet_n_conv_per_depth": 2, "unet_pool": [2, 2], "unet_activation": "relu", "unet_last_activation": "relu", "unet_batch_norm": false, "unet_dropout": 0.0, "unet_prefix": "", "net_conv_after_unet": 128, "net_input_shape": [null, null, 1], "net_mask_shape": [null, null, 1], "train_shape_completion": false, "train_completion_crop": 32, "train_patch_size": [256, 256], "train_background_reg": 0.0001, "train_foreground_only": 0.9, "train_sample_cache": true, "train_dist_loss": "mae", "train_loss_weights": [1, 0.2], "train_class_weights": [1, 1], "train_epochs": 400, "train_steps_per_epoch": 100, "train_learning_rate": 0.0003, "train_batch_size": 4, "train_n_val_patches": null, "train_tensorboard": true, "train_reduce_lr": {"factor": 0.5, "patience": 40, "min_delta": 0}, "use_gpu": false}
1 change: 1 addition & 0 deletions models/dapi/thresholds.json
@@ -0,0 +1 @@
{"prob": 0.46905075238803473, "nms": 0.3}
Binary file added models/dapi/weights_best.h5
Binary file added models/dapi/weights_last.h5
35 changes: 35 additions & 0 deletions predict_nuclei.py
@@ -0,0 +1,35 @@
from __future__ import print_function, unicode_literals, absolute_import, division
import sys
import numpy as np
import matplotlib
matplotlib.rcParams["image.interpolation"] = 'none'
import matplotlib.pyplot as plt

from glob import glob
from tifffile import imread
from csbdeep.utils import Path, normalize
from csbdeep.io import save_tiff_imagej_compatible

from stardist import random_label_cmap, _draw_polygons, export_imagej_rois
from stardist.models import StarDist2D

np.random.seed(6)
lbl_cmap = random_label_cmap()

X = sorted(glob('./training_data/images_nuclei/*.tif'))
X = list(map(imread,X))

n_channel = 1 if X[0].ndim == 2 else X[0].shape[-1]
axis_norm = (0,1) # normalize channels independently
# axis_norm = (0,1,2) # normalize channels jointly
if n_channel > 1:
    print("Normalizing image channels %s." % ('jointly' if axis_norm is None or 2 in axis_norm else 'independently'))

model = StarDist2D(None, name='dapi', basedir='models')

img = normalize(X[0], 1,99.8, axis=axis_norm)
labels, details = model.predict_instances(img)

save_tiff_imagej_compatible('example_image.tif', img, axes='YX')
save_tiff_imagej_compatible('example_labels.tif', labels, axes='YX')
export_imagej_rois('example_rois.zip', details['coord'])
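
napari is installed during environment setup but not used by
`predict_nuclei.py`. If you want to inspect the saved prediction
interactively, a minimal sketch (reloading the TIFFs written above)
could be:

```
# Optional: view the saved image and label mask interactively in napari.
import napari
from tifffile import imread

img = imread("example_image.tif")      # written by predict_nuclei.py
labels = imread("example_labels.tif")  # written by predict_nuclei.py

viewer = napari.Viewer()
viewer.add_image(img, name="DAPI")
viewer.add_labels(labels.astype(int), name="nuclei")
napari.run()  # blocks until the viewer window is closed
```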
Binary file added repo_img/example_image.jpg
Binary file added repo_img/example_labels.jpg
Binary file added repo_img/tensorboard.png
Binary file added repo_img/tensorboard_img.png
Binary file added repo_img/tensorboard_later.png
Empty file removed train.py