Slide 8
Slide 8 text
… or variation strength, called HED-light and HED-strong. During training, we selected the value of the augmentation hyper-parameters randomly within certain ranges, resulting in variation. We tuned all ranges manually via visual examination. In particular, we used a scaling factor between [0.8, 1.2], elastic deformation parameters α and σ ∈ [9.0, 11.0], additive Gaussian noise with σ ∈ [0, 0.1], Gaussian blurring with σ ∈ [0, 0.1], brightness intensity ratio between [0.65, 1.35], and contrast intensity ratio between [0.5, 1.5]. For HSV-light and HSV-strong, we used hue and saturation intensity ratios between [-0.1, 0.1] and [-1, 1], respectively. For HED-light and HED-strong, we used intensity ratios between [-0.05, 0.05] and [-0.2, 0.2], respectively, for all HED channels.
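To make the channel-ratio augmentations above concrete, here is a minimal Python sketch assuming scikit-image color conversions; only the numeric ranges come from the text, while the function names, the uniform sampling, and the exact way the ratios are applied are assumptions, not the authors' implementation.

import numpy as np
from skimage.color import rgb2hed, hed2rgb, rgb2hsv, hsv2rgb

def hed_augment(rgb, ratio_range=(-0.05, 0.05), rng=np.random):
    # Perturb every HED channel by a random ratio:
    # (-0.05, 0.05) for HED-light, (-0.2, 0.2) for HED-strong.
    # rgb is a float image in [0, 1], shape (H, W, 3).
    hed = rgb2hed(rgb)
    for c in range(3):
        hed[..., c] *= 1.0 + rng.uniform(*ratio_range)  # assumed multiplicative ratio
    return np.clip(hed2rgb(hed), 0.0, 1.0)

def hsv_augment(rgb, hue_range=(-0.1, 0.1), sat_range=(-1.0, 1.0), rng=np.random):
    # HSV-light ranges shown; the additive hue shift vs. multiplicative
    # saturation scaling is one possible reading of "intensity ratio".
    hsv = rgb2hsv(rgb)
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(*hue_range)) % 1.0
    hsv[..., 1] = np.clip(hsv[..., 1] * (1.0 + rng.uniform(*sat_range)), 0.0, 1.0)
    return hsv2rgb(hsv)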
Stain color normalization
Figure 4: Network-based stain color normalization. From left to right: patches from the training set are transformed with heavy color augmentation and fed to a neural network. This network is trained to reconstruct the original appearance of the input images by removing color augmentation, effectively learning how to perform stain color normalization.
… generalize to unseen stains in order to perform well. We evaluated several methods that implement g (see Fig. 3), and propose a novel technique based on neural networks.
Identity. We performed no transformation on the input patches, serving as a baseline method for the rest of the techniques.
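For illustration only (the names and dispatch are assumptions, not the paper's code), each normalization method can be seen as a function g applied to every patch before classification, with identity simply passing the patch through:

import numpy as np
from skimage.color import rgb2gray, gray2rgb

def g_identity(patch):
    return patch                      # baseline: no transformation at all

def g_grayscale(patch):
    return gray2rgb(rgb2gray(patch))  # drop color information, keep 3 channels

NORMALIZERS = {"identity": g_identity, "grayscale": g_grayscale}
# "lut" and "network" would plug in here analogously.

def preprocess(patch, method="identity"):
    return NORMALIZERS[method](patch)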
Stain color normalization
1. Identity: do nothing
2. Grayscale: RGB to grayscale - discards color information
(augmentation: basic, morphology, and BC only)
3. LUT-based: detect nuclei and standardize the colors
build a look-up table (LUT) from a template WSI
4. Network-based (figure on the right; see the sketch after this list)
Downward path: 5 layers, BN, LRA
Upward path: 5 layers, nearest-neighbor upsampling + 1 conv
BN, LRA + tanh, 64-sample mini-batch
trained on 500K patches collected from WSIs of 4 organs
HSV augmentation (color transformation only)
HSV value channel ratios between [-1, 1]
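A minimal training sketch of the network-based normalizer, assuming PyTorch: the 5-layer downward/upward structure, BN, nearest-neighbor upsampling, tanh output and 64-sample mini-batches follow the slide (taking LRA to mean leaky ReLU activation), while the channel widths, kernel sizes, optimizer, loss and the stand-in heavy_hsv_augment() are assumptions.

import torch
import torch.nn as nn

def down_block(cin, cout):
    # one downward layer: stride-2 conv, BN, leaky ReLU
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

def up_block(cin, cout):
    # one upward layer: nearest-neighbor upsampling + 1 conv, BN, leaky ReLU
    return nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                         nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

widths = [3, 32, 64, 128, 256, 512]  # assumed channel widths
encoder = nn.Sequential(*[down_block(widths[i], widths[i + 1]) for i in range(5)])
decoder = nn.Sequential(*[up_block(widths[5 - i], widths[4 - i]) for i in range(4)],
                        nn.Upsample(scale_factor=2, mode="nearest"),
                        nn.Conv2d(widths[1], 3, 3, padding=1),
                        nn.Tanh())  # 5th upward layer ends in tanh
net = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)  # assumed optimizer and learning rate

def heavy_hsv_augment(batch):
    # Stand-in for the heavy HSV color augmentation (value-channel ratios in
    # [-1, 1] per the slide); crudely approximated here by per-channel scaling.
    scale = 1.0 + (torch.rand(batch.size(0), 3, 1, 1) * 2.0 - 1.0)
    return torch.clamp(batch * scale, -1.0, 1.0)

def train_step(original):
    # original: (64, 3, H, W) patches scaled to [-1, 1], H and W multiples of 32
    recon = net(heavy_hsv_augment(original))
    loss = nn.functional.mse_loss(recon, original)  # reconstruct un-augmented appearance
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()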