
1283 A novel, scalable deep learning-based approach to automated quality control of multiplex immunofluorescence images
Annika F Fink¹, Roman Schulte-Sasse², Martin Bauw², Deepti Agrawal², Beatriz Perez², Mariam Sadeq², Hans Juergen Grote¹, Yae Ohata¹, Lukas Ruff², Maximilian Alber²,³, Sharon Ruane² and Thomas Mrowiec¹
  1. Merck KGaA, Darmstadt, Hessen, Germany
  2. Aignostics GmbH, Berlin, Germany
  3. Institute of Pathology, Charité, Berlin, Germany
  • Journal for ImmunoTherapy of Cancer (JITC) preprint. The copyright holder for this preprint is the authors/funders, who have granted JITC permission to display the preprint. All rights reserved. No reuse allowed without permission.


Background Multiplex immunofluorescence (mIF) imaging allows identification of multiple protein markers on the same tissue section at cell-level resolution. AI-powered analysis of these images facilitates quantification of cell phenotypes in their spatial context. Artifacts arising from sample preparation, handling and image acquisition impact how reliably these analyses can be performed, necessitating a quality control (QC) process that passes only reliably assessable regions to downstream analysis steps. As artifacts can present differently across mIF channels, each stain must be assessed independently, making manual quality control time-consuming and not scalable. While automated artifact detection approaches exist for brightfield images, corresponding algorithms for thorough QC of mIF images are currently lacking. Here, we demonstrate the performance of deep learning-based artifact detection to automate QC of mIF images for improved efficiency and scalability.

Methods A total of 140 resection samples were used in the study, spanning four indications: non-small cell lung cancer (10), urothelial cancer (30), colorectal cancer (50) and head and neck cancer (50). Samples were stained with a 6-plex Akoya mIF panel and DAPI. A deep learning-based segmentation model was developed for mIF artifact detection. 120 slides were annotated across all available mIF channels and used as a training set; 20 slides were independently annotated as a hold-out evaluation set. For all samples, the resulting single AI model was run independently on each mIF channel, and the pixel-wise predictions were combined into a final 'mIF QC' mask.
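The per-channel combination step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes binary per-channel artifact masks and a logical-OR combination rule (a pixel is 'Unusable' if flagged in any channel), neither of which is specified in the abstract.

```python
import numpy as np

def combine_channel_masks(channel_masks):
    """Combine per-channel artifact masks into a final 'mIF QC' usability mask.

    channel_masks: list of 2D boolean arrays, one per mIF channel,
    where True marks pixels predicted as artifact in that channel.

    Assumed rule (not stated in the abstract): a pixel is 'Unusable'
    if it is flagged as artifact in ANY channel.
    """
    stacked = np.stack(channel_masks, axis=0)   # shape (n_channels, H, W)
    unusable = stacked.any(axis=0)              # artifact in any channel
    return ~unusable                            # True = 'Usable' tissue

# Toy example: three 4x4 channel masks with artifacts in two channels
masks = [np.zeros((4, 4), dtype=bool) for _ in range(3)]
masks[0][0, 0] = True   # e.g. a tissue fold seen in channel 0
masks[2][1, 1] = True   # e.g. mIF blur seen in channel 2
qc = combine_channel_masks(masks)  # 14 of 16 pixels remain 'Usable'
```

A logical OR is the conservative choice here: any artifact in any stain disqualifies the pixel, which matches the abstract's goal of excluding regions that cannot be reliably assessed in every channel.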

Results The mIF-based artifact segmentation model demonstrated an accuracy above 90% for the task of identifying ‘Usable’ vs. ‘Unusable’ tissue on the hold-out test set across tumor indications. The model performed similarly well across all mIF channels and successfully identified artifacts arising from sample processing (e.g. tissue folds, physical contamination, air bubbles) and artifacts specifically associated with mIF image acquisition (e.g. illumination artifacts, mIF blur).
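The reported 'Usable' vs. 'Unusable' accuracy is a standard binary classification accuracy. As a hypothetical sketch (the abstract does not state whether evaluation was pixel-wise or region-wise), a pixel-wise version could be computed as:

```python
import numpy as np

def usable_accuracy(pred_usable, gt_usable):
    """Fraction of pixels where the predicted 'Usable'/'Unusable' label
    matches the hold-out annotation. Both inputs are 2D boolean arrays
    with True = 'Usable' tissue."""
    pred = np.asarray(pred_usable, dtype=bool)
    gt = np.asarray(gt_usable, dtype=bool)
    return float((pred == gt).mean())

# Toy example: 10 pixels, 1 disagreement -> accuracy 0.9
gt = np.array([[True] * 5, [False] * 5])
pred = gt.copy()
pred[0, 0] = False
acc = usable_accuracy(pred, gt)
```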

Conclusions Incorporation of an mIF-based image QC step is essential in the analysis of mIF samples to ensure exclusion of regions unsuitable for downstream cell-level analysis. We demonstrate a rapid, scalable approach to automated mIF image QC, removing the requirement for the time-consuming and potentially error-prone manual annotation of artifact regions.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial.
