
cell-AAP

Utilities for the semi-automated generation of instance segmentation annotations to be used for neural network training. These utilities are built on top of UMAP, HDBSCAN, and a fine-tuned encoder version of FAIR's Segment Anything Model, developed by Computational Cell Analytics for the micro-sam project. In addition to providing utilities for annotation building, we train networks using FAIR's detectron2 to

  1. Demonstrate the efficacy of our utilities.
  2. Be used for microscopy annotation of supported cell lines.

Cell-line specific models currently include:

  1. HeLa
  2. U2OS

Models have demonstrated performance efficacy on:

  1. HT1080 (HeLa model)
  2. RPE1 (U2OS model)

We've also developed a napari application for using these pre-trained networks.

We highly recommend installing cell-AAP in a clean conda environment. To do so, you must have Miniconda or Anaconda installed.

If a conda distribution has been installed:

  1. Create and activate a clean environment

     conda create -n cell-aap-env python=3.11.0
     conda activate cell-aap-env
  2. Within this environment install pip

     conda install pip
     Then install cell-AAP from PyPI

     pip install cell-AAP --upgrade
  4. Finally, detectron2 must be built from source, on top of cell-AAP

     # For macOS
     CC=clang CXX=clang++ ARCHFLAGS="-arch arm64" python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'

     # For other operating systems
     python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
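
After the build, a quick sanity check can confirm that the core dependencies import correctly. This is a minimal sketch, assuming you are still working inside the activated cell-aap-env environment:

     # Run inside the cell-aap-env environment to confirm the install.
     import detectron2
     import napari
     import torch
     print(detectron2.__version__, torch.__version__, napari.__version__)
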
  1. To open napari, simply type "napari" into the command line; ensure that you are working in the correct environment
  2. To instantiate the plugin navigate to the "Plugins" menu and select "cell-AAP"
  3. You should now see the Plugin, where you can select an image, display it, and run inference on it.
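
The plugin can also be opened from a Python session rather than the napari GUI menu. This is a minimal sketch, assuming the plugin registers itself under the name "cell-AAP" (the exact name may differ from what is shown in the Plugins menu):

     import napari

     viewer = napari.Viewer()
     # Dock the cell-AAP widget; the plugin name string is an assumption and
     # may need to match the name shown in napari's Plugins menu.
     viewer.window.add_plugin_dock_widget("cell-AAP")
     napari.run()
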

If running inference on large volumes of data, i.e. time-series data >= 300 MB in size, we recommend proceeding in the following manner.

  1. Assemble a small (< 100 MB) substack of your data using Python or a program like ImageJ (see the sketch after this list)
  2. Use this substack to find the optimal parameters for your data (number of cells, network confidence threshold)
  3. Run inference over the full volume using the discovered optimal parameters
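
A minimal sketch of step 1 using tifffile (already a cell-AAP dependency); the file names, axis order, and number of frames kept are placeholder assumptions:

     import tifffile

     # Load the full time series (assumed TYX axis order) and keep the first
     # 20 frames as a small substack for parameter tuning.
     stack = tifffile.imread("timeseries.tif")
     substack = stack[:20]
     tifffile.imwrite("substack.tif", substack)
     print(stack.shape, "->", substack.shape)
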

Once inference is complete, the following colors indicate the class prediction:

  • Red: Non-mitotic
  • Blue: Mitotic

For analysis purposes, masks in the semantic and instance segmentations have the following value mapping:

Semantic

  • 1: Non-mitotic
  • 100: Mitotic

Instance

  • $2x$: Non-mitotic
  • $2x-1$: Mitotic

(i.e. for a given cell instance $x$, even label values are non-mitotic and odd label values are mitotic)
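
As an illustration of these mappings, the following minimal sketch counts pixels and cells per class with numpy and tifffile; the file names are placeholders for the label images exported after inference:

     import numpy as np
     import tifffile

     # Placeholder file names for the exported label images.
     semantic = tifffile.imread("semantic_segmentation.tif")   # 1 = non-mitotic, 100 = mitotic
     instance = tifffile.imread("instance_segmentation.tif")   # cell x -> 2x or 2x - 1

     # Pixel counts per class from the semantic masks.
     non_mitotic_px = np.count_nonzero(semantic == 1)
     mitotic_px = np.count_nonzero(semantic == 100)

     # Cell counts per class from the instance labels (even = non-mitotic, odd = mitotic).
     labels = np.unique(instance)
     labels = labels[labels > 0]  # drop the background label
     mitotic_cells = np.count_nonzero(labels % 2 == 1)
     non_mitotic_cells = np.count_nonzero(labels % 2 == 0)
     print(non_mitotic_px, mitotic_px, non_mitotic_cells, mitotic_cells)
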

Version:

  • 0.0.9

Last updated:

  • 20 February 2025

First released:

  • 28 May 2024


Requirements:

  • napari[all]>=0.4.19
  • numpy==1.26.4
  • opencv-python>=4.9.0
  • tifffile>=2024.2.12
  • torch>=2.3.1
  • torchvision>=0.18.1
  • scikit-image>=0.22.0
  • qtpy>=2.4.1
  • pillow>=10.3.0
  • scipy>=1.3.0
  • timm>=1.0.7
  • pandas>=2.2.2
  • superqt>=0.6.3
  • btrack>=0.6.5
  • seaborn>=0.13.2
  • openpyxl>=3.1.4
  • joblib>=1.0
  • scikit-learn>=0.22
  • cython<3,>=0.27