Biological Image Analysis¶
Count cells: use the segmentation notebook and/or other approaches.
Denoise with noise2void.
Try to replace the U-net with a W-net.
Try self-supervised cell segmentation with modified U-nets.
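As a sanity check for any learned segmentation, a classical counting baseline is useful. The sketch below (function name, threshold, and 4-connectivity are our own assumptions, not from the notebook) counts cells as connected foreground blobs after a simple intensity threshold:

```python
import numpy as np
from collections import deque

def count_cells(img, thresh):
    """Count connected foreground blobs (4-connectivity) in a 2D image.
    A classical baseline to sanity-check learned segmentation counts;
    'thresh' is an assumed intensity cutoff separating cells from background."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1  # found a new blob; flood-fill to mark it visited
                seen[i, j] = True
                q = deque([(i, j)])
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and mask[na, nb] and not seen[na, nb]:
                            seen[na, nb] = True
                            q.append((na, nb))
    return count
```

On real micrographs this naive baseline overcounts touching cells (which is exactly where watershed or a U-net helps), but it gives a quick reference number to compare against.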
Deep Leakage from Gradients¶
Background. Privacy is essential in deep learning, particularly in federated settings where multiple clients want to train a model together without sharing their private data. A common approach to protecting clients' personal information during collaborative training is to share only gradient updates. Gradients are widely used in federated/collaborative learning; however, it has been shown that an attacker can recover the exact input data from the shared gradients alone (Zhu et al.). In this project, your task is to reimplement a gradient attack method from this paper and show that one can retrieve pixel-wise correct input images that were used during model training.
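To build intuition before reimplementing the paper's optimization-based attack, the following minimal sketch (our own simplified illustration, not the paper's method) shows why gradients leak inputs for a single linear layer with bias under softmax cross-entropy: since dL/dW = (dL/db) xᵀ, each row of the weight gradient is a scaled copy of the input, and the true label is the only class with a negative bias-gradient entry:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3                       # input dim, number of classes (toy sizes)
W = rng.normal(size=(k, d))
b = rng.normal(size=k)
x = rng.normal(size=d)            # the client's private input
y = 1                             # the client's private label

# --- client side: compute and "share" the gradients of the CE loss ---
logits = W @ x + b
p = np.exp(logits - logits.max())
p /= p.sum()                      # softmax probabilities
r = p.copy()
r[y] -= 1.0                       # dL/dlogits for cross-entropy
gW = np.outer(r, x)               # shared gradient w.r.t. W
gb = r                            # shared gradient w.r.t. b

# --- attacker side: reconstruct (x, y) from (gW, gb) alone ---
y_rec = int(np.argmin(gb))        # only the true class has gb < 0
i = int(np.argmax(np.abs(gb)))    # pick a well-conditioned row
x_rec = gW[i] / gb[i]             # row i of gW equals gb[i] * x
```

The paper's DLG attack generalizes this to deep networks by instead running gradient descent on a dummy input/label pair so that its gradients match the shared ones; this closed-form variant only covers the fully-connected-with-bias case.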
Compare contrastive self-supervision methods on image classification problems, e.g. vanilla supervised training vs. training with contrastive self-supervised pretraining.
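As a starting point for the contrastive branch of the comparison, here is a sketch of the NT-Xent loss used by SimCLR-style methods (the function name, temperature default, and even/odd view pairing are our own conventions): each embedding is pulled toward its augmented partner and pushed away from all other batch entries.

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """NT-Xent (SimCLR-style) contrastive loss for a batch of 2N embeddings,
    where z[2i] and z[2i+1] are assumed to be two augmented views of sample i."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / tau                               # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z)
    pos = np.arange(n) ^ 1                            # index of the paired view
    # cross-entropy of each row against its positive partner
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(n), pos].mean()
```

Lower loss when the two views of each sample map to nearby embeddings; for the supervised baseline you would instead train the same encoder with plain cross-entropy and compare downstream accuracy.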