napari ConvPaint
A plugin for segmentation by pixel classification using pre-trained neural networks for feature extraction
This napari plugin can be used to segment objects or structures in images based on a few brush strokes that provide examples of the classes. Built on the same idea as tools like ilastik, its main strength is that it can use features from pretrained neural networks such as VGG16 or DINOv2, enabling the segmentation of more complex images.
Find more information and tutorials in the docs or read the preprint.
Installation
You can install napari-convpaint via pip:
pip install napari-convpaint
To install the latest development version:
pip install git+https://github.com/guiwitz/napari-convpaint.git
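To verify that the installation worked (a quick sanity check, not part of the official instructions), you can try importing the package:
python -c "import napari_convpaint"
If the command exits without an error, the package is available in your environment.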
Example use case: Tracking shark body parts in a movie
These are the scribble annotations provided for training:
And this is the resulting Convpaint segmentation:
Check out the documentation or the paper for more use cases!
API
The API can be used in much the same way as the napari plugin. The ConvpaintModel class combines a feature extractor and a classifier model, and holds all the parameters defining the model. Initialize a ConvpaintModel object, train its classifier, and use it to segment an image:
from napari_convpaint import ConvpaintModel  # import path assumed; check the documentation if it differs

cp_model = ConvpaintModel("dino") # alternatively use vgg, cellpose or gaussian
cp_model.train(image, annotations)
segmentation = cp_model.segment(image)
There are many other options, such as predicting the probabilities of all classes (see below), and we will update the documentation and notebook examples soon. In the meantime, feel free to test it yourself.
probas = cp_model.predict_probas(image)
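If you want to try the API without your own data, here is a minimal, self-contained sketch using a synthetic image and scribble-like annotations. The import path, the acceptance of a 2D single-channel image, and the convention that 0 marks unlabeled pixels are assumptions here; the ConvpaintModel calls themselves mirror the snippet above.

import numpy as np
from napari_convpaint import ConvpaintModel  # import path assumed; see the package docs

# Synthetic test image: noise with a brighter square in the middle
image = np.random.rand(256, 256).astype(np.float32)
image[96:160, 96:160] += 1.0

# Scribble-style annotations: 0 = unlabeled, 1 = background, 2 = object (assumed convention)
annotations = np.zeros((256, 256), dtype=np.uint8)
annotations[10:20, 10:120] = 1
annotations[120:130, 100:150] = 2

cp_model = ConvpaintModel("gaussian")    # lightweight feature extractor for a quick test
cp_model.train(image, annotations)
segmentation = cp_model.segment(image)   # label image with the same spatial shape as the input
probas = cp_model.predict_probas(image)  # per-class probability maps

The gaussian extractor is used here only because it runs without downloading network weights; swap in "dino" or "vgg" to use the neural-network features described above.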
License
Distributed under the terms of the BSD-3 license, "napari-convpaint" is free and open source software.
Contributing
Contributions are very welcome. Tests can be run with tox; please ensure that the coverage at least stays the same before you submit a pull request.
Issues
If you encounter any problems, please file an issue along with a detailed description.
Authors
The idea behind this napari plugin was first developed by Lucien Hinderling in the group of Olivier Pertz at the Institute of Cell Biology, University of Bern. The Pertz lab obtained a CZI napari plugin development grant entitled "Democratizing Image Analysis with an Easy-to-Train Classifier", which supported the adaptation of the initial concept into a napari plugin called napari-convpaint. The plugin has been developed by Guillaume Witz (1), Roman Schwob (1, 2) and Lucien Hinderling (2), with much-appreciated assistance from Benjamin Grädel (2), Maciej Dobrzyński (2), Mykhailo Vladymyrov (1) and Ana Stojiljković (1).
(1) Data Science Lab, University of Bern
(2) Pertz Lab, Institute of Cell Biology, University of Bern
Cite Convpaint
If you find Convpaint useful in your research, please consider citing our work. Please also cite any feature extractor you have used in Convpaint, such as ilastik, Cellpose or DINOv2.
Convpaint:
@article {Hinderling2024,
author = {Hinderling, Lucien and Witz, Guillaume and Schwob, Roman and Stojiljković, Ana and Dobrzyński, Maciej and Vladymyrov, Mykhailo and Frei, Joël and Grädel, Benjamin and Frismantiene, Agne and Pertz, Olivier},
title = {Convpaint - Interactive pixel classification using pretrained neural networks},
elocation-id = {2024.09.12.610926},
doi = {10.1101/2024.09.12.610926},
journal = {bioRxiv},
publisher = {Cold Spring Harbor Laboratory},
year = {2024},
}
Suggested citations for feature extractors:
@article {Berg2019,
author = {Berg, Stuart and Kutra, Dominik and Kroeger, Thorben and Straehle, Christoph N. and Kausler, Bernhard X. and Haubold, Carsten and Schiegg, Martin and Ales, Janez and Beier, Thorsten and Rudy, Markus and Eren, Kemal and Cervantes, Jaime I. and Xu, Buote and Beuttenmueller, Fynn and Wolny, Adrian and Zhang, Chong and Koethe, Ullrich and Hamprecht, Fred A. and Kreshuk, Anna},
title = {ilastik: interactive machine learning for (bio)image analysis.},
issn = {1548-7105},
url = {https://doi.org/10.1038/s41592-019-0582-9},
doi = {10.1038/s41592-019-0582-9},
journal = {Nature Methods},
publisher = {Springer Nature},
year = {2019},
}
@article {Stringer2021,
author = {Stringer, Carsen and Wang, Tim and Michaelos, Michalis and Pachitariu, Marius},
title = {Cellpose: a generalist algorithm for cellular segmentation.},
elocation-id = {s41592-020-01018-x},
doi = {10.1038/s41592-020-01018-x},
journal = {Nature Methods},
publisher = {Springer Nature},
year = {2021},
}
@misc {oquab2024dinov2learningrobustvisual,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2024},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2304.07193}
}