
1294 An automated machine learning framework for rapid quantification and analysis of multiplexed ion beam images (MIBI)
  1. Raghav Padmanabhan,
  2. Mate Nagy,
  3. Stanislaw Nowak,
  4. Peng Si,
  5. Sweta Bajaj and
  6. Monirath Hav
  1. Ionpath Inc., Menlo Park, CA, USA


Background Multiplexed Ion Beam Imaging (MIBI) offers high-parameter tissue imaging that is well suited for describing complex immuno-spatial features in tissues, including the enumeration of various cell phenotypes, expression of immune checkpoint proteins, and quantitative description of spatial distributions between different populations.1 The high imaging resolution combined with the rich mass spectral information in each image allows for the quantification of up to 40 biomarkers on a single slide and enables immediate processing without additional imaging rounds. Leveraging this, we outline an automated machine learning framework that enables rapid, deep phenotypic and spatial profiling of tissues at the subcellular level.

Methods Our framework consists of five steps, linked together using Apache Airflow running on Kubernetes. The five steps are: 1) Background correction, 2) Cell and region segmentation, 3) Cell classification, 4) Expression quantification, and 5) Spatial analysis. Background correction cleans each channel in the staining panel, ensuring that no image artifacts interfere with subsequent steps. Segmentation is based on deep learning models that leverage multiple biomarker channels to delineate cells, including challenging ones that lack dsDNA signal in the plane of imaging. Cell classification uses deep learning models that combine staining patterns with known phenotypic hierarchies to first define major cell lineages and then subdivide them into specific sub-phenotypes. Expression quantification calculates the expression level of each checkpoint or inducible marker of interest and computes the counts and densities of each cell phenotype. Lastly, spatial analysis quantifies immune infiltration and various cell-to-cell and cell-to-region proximity features.
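The five steps above can be sketched as a chain of plain Python callables standing in for the Airflow tasks. All function names, the toy data, and the toy classification rule below are illustrative assumptions, not the authors' actual implementation:

```python
def background_correction(images):
    # Channel-level artifact removal would go here (placeholder: pass through).
    return images

def segmentation(images):
    # Deep-learning segmentation would go here; return toy cell records.
    return [{"cell_id": i, "pixels": 100 + 10 * i} for i in range(3)]

def classification(cells):
    # Assign a major lineage / sub-phenotype (toy round-robin rule).
    lineages = ["immune", "tumor", "stroma"]
    for cell in cells:
        cell["phenotype"] = lineages[cell["cell_id"] % len(lineages)]
    return cells

def quantification(cells, area_mm2=1.0):
    # Counts and densities (cells per mm^2) for each phenotype.
    counts = {}
    for cell in cells:
        counts[cell["phenotype"]] = counts.get(cell["phenotype"], 0) + 1
    return {p: n / area_mm2 for p, n in counts.items()}

def spatial_analysis(cells):
    # Proximity / infiltration features would be computed here (placeholder).
    return {"n_cells": len(cells)}

# The steps run strictly in order, mirroring a linear Airflow pipeline:
images = ["ch1", "ch2"]
cells = classification(segmentation(background_correction(images)))
densities = quantification(cells)
features = spatial_analysis(cells)
```

In the described framework, each of these stages would be an Airflow task backed by a trained model rather than a placeholder, with Kubernetes providing the execution environment.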

Results We demonstrate end-to-end analysis of multiple tissue types with diverse morphologies and tissue architectures. The algorithms adapt robustly to a wide range of tissue backgrounds and noise levels and achieve a high degree of accuracy in segmentation and classification, obviating the need for multiple iterations of parameter tuning to optimize algorithm performance. The high-quality results generated by the framework have been used to discover associations and spatial patterns of immune cells in the tumor microenvironment.

Conclusions We introduce an automated, machine learning based framework for deep tissue profiling from MIBI images. The combination of pre-trained deep learning models, connected through Airflow's directed acyclic graphs on a Kubernetes cluster, yields a rapid and scalable bioinformatics solution for MIBI images that can be used to uncover novel biology.
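A directed acyclic graph fixes the execution order of the five tasks while leaving room for parallel branches. The dependency table and topological sort below are an illustrative sketch (not Airflow itself); the task names follow the Methods section:

```python
# Each task maps to the list of tasks that must complete before it runs.
DEPENDENCIES = {
    "background_correction": [],
    "segmentation": ["background_correction"],
    "classification": ["segmentation"],
    "quantification": ["classification"],
    "spatial_analysis": ["quantification"],
}

def topological_order(deps):
    """Return tasks ordered so each follows all of its dependencies."""
    order, done = [], set()

    def visit(task):
        if task in done:
            return
        for upstream in deps[task]:
            visit(upstream)
        done.add(task)
        order.append(task)

    for task in deps:
        visit(task)
    return order

print(topological_order(DEPENDENCIES))
```

For this linear chain the order is simply the five steps in sequence; Airflow performs the same dependency resolution for arbitrary DAGs, which is what makes the pipeline scalable when steps can fan out across a Kubernetes cluster.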

Acknowledgements The authors would like to acknowledge Ablimit Keskin and Murat Aksoy for their contributions to the development of the framework.


  1. Keren L, Bosse M, Thompson S, Risom T, Vijayaragavan K, McCaffrey E, et al. MIBI-TOF: a multiplexed imaging platform relates cellular phenotypes and tissue structure. Science Advances. 2019;5(10).
